Windows Copilot Newsletter #34 - LLaMA roars; Mistral soars; Who owns Copilot?
Windows Copilot Newsletter #34
Meta launches LLaMA 3.1 models with a Zuckerberg manifesto; Mistral releases their second foundation model; No one at Microsoft seems to have responsibility for Copilot
G'day and welcome to the 34th edition of the Windows Copilot Newsletter, where we collect and curate some of the most important happenings across the field of AI chatbots. This has been a massive week of announcements, so let's dive in...
Top News
Meta LLaMA 3.1 opens GPT-4 quality models: This week, Meta released version 3.1 of its LLaMA family of large language models; the largest of these, at 405 billion parameters, looks like a near-equal to OpenAI's GPT-4 family of models. Mark Zuckerberg penned a manifesto about the importance of open source models that's well worth a read.
Mistral gets 'Large Enough': Not wanting to be forgotten, French AI startup Mistral released their latest 'large' model, just a third the size of Meta's biggest, but with similar performance. Sometimes size doesn't matter.
Copilot for Word gets a big upgrade: Microsoft's Copilot integrations into Word have been hamstrung by a paltry 'context window', making it difficult to analyse or summarise large documents. This week Microsoft announced a quadrupling of the context window, to 80,000 characters. Perhaps that'll help convince people to use Copilot for... anything.
Who takes responsibility for Copilot? The answer, according to this analysis in The Verge, may be no one at all. Which explains much.
Top Tips
The Verge 'Cheat Sheet' for AI: Confused by all these new terms, such as RAG and context windows? Staff at The Verge created a great cheat sheet to bring you up to speed.
How to use Gemini in Gmail: Now that Google's putting Gemini into all the things, here's your list of tips and tricks for making the most of their chatbot in their mail client.
Safely and Wisely
Is Gemini reading your private documents? The answer is complicated, explains The Register - and may depend on your settings.
Don't trust chatbots for real-time facts: The Washington Post asked public AI chatbots about recent political events in the US (of which there have been many) and got poor responses. A reminder that chatbots are not real-time!
Longreads
Self-regulating AI companies? A year ago, all the big players in AI chatbots promised self-regulation. MIT's Technology Review looks at how that's been going.
AI data disappearing: The New York Times reports that the data used to train AI chatbots is disappearing behind paywalls and license agreements.
‘De-Risking AI’ white paper - now out
AI offers organisations powerful new capabilities to automate workflows, amplify productivity, and redefine business practices. These same tools open the door to risks that few organisations have encountered before.
Wisely AI’s latest white paper, ‘De-Risking AI’, lays a foundation for understanding and mitigating those risks. It's part of our core mission to "help organisations use AI safely and wisely". Read it here.
If you learned something useful in this newsletter, please consider forwarding it along to a colleague.
We'll be back in a fortnight with more of the most interesting stories about AI chatbots!
Mark Pesce
mark@safelyandwisely.ai // www.safelyandwisely.ai