Windows Copilot Newsletter #29 - An apocalyptic outage; Recall refused; TIME ❤️ Anthropic
Windows Copilot Newsletter #29
An unexplained simultaneous outage takes down some of the major chatbot providers; Everyone wants Microsoft to recall Recall; TIME goes weak in the knees for Anthropic…
G’day and welcome to the twenty-ninth edition of the Windows Copilot Newsletter, a curation of what we reckon to be the most significant of this week’s reports on the expanding field of AI chatbots. Lots to cover, so let’s dive right in…
Top News
Massive Chatbot Outage: On 4 June a middle-of-the-night outage took down ChatGPT. And Perplexity AI. And Anthropic Claude. Were they related? Was it an attack? No one is saying. Read about what happened here.
No one likes Recall: From across the globe, privacy and security experts have been weighing in with one message - Microsoft Recall is not safe, and shouldn’t be rolled out to Copilot+ PCs. Unsurprisingly, Microsoft’s Chief Scientist disagrees.
Your Future You is Chatting: A team from MIT’s Media Lab unveiled their ‘Future You’ chatbot, a GPT designed to help folks make better decisions today for their lives tomorrow. Read about it here.

Self-Dealing Sam: The Wall Street Journal published a piece of investigative journalism revealing how OpenAI CEO Sam Altman - who famously owns no shares in the rockstar firm of the AI revolution - has been enriching himself via investments in startups that use OpenAI’s tools. Conflict of interest much?
Top Tips
3 ways educators can use free GPTs: Free, powerful - and very useful for teaching. Read that here.
Hard prompts and how to work with them: What is a ‘hard prompt’? How do you know when you should make your prompt easier for a chatbot to digest? Read all of that here.
Safely and Wisely
Legal Hallucinations: Testing at Stanford University shows that legal chatbots have a 1-in-6 hallucination rate. Pretty scary to consider when you’re facing a judge. Read that here.

Google’s ‘AI Overviews’ adjusted: After Google told folks rocks are an essential part of a balanced diet they…adjusted their ‘AI Overviews’. Why? Read the story here.
Longreads
TIME profiles Anthropic: A long, rather loving profile of Anthropic - the firm that puts safety first - and co-founder and CEO Dario Amodei.
Maggie Appleton has the cure: The brilliant and considered technologist-humanist reckons we need to use AI tools to create a powerfully decentralised development ecosystem. Read her thoughts (with slides) here. (h/t John Allsopp)
‘De-Risking AI’ white paper - now out
AI offers organisations powerful new capabilities to automate workflows, amplify productivity, and redefine business practices. These same tools open the door to risks that few organisations have encountered before.
Wisely AI’s latest white paper, ‘De-Risking AI’, lays a foundation for understanding and mitigating those risks. It's part of our core mission to "help organisations use AI safely and wisely". Read it here.
A big week ahead, with Apple’s Worldwide Developer Conference starting on Monday 10 June.
If you learned something useful in this newsletter, please consider forwarding it along to a colleague.
See you next week!
Mark Pesce
mark@safelyandwisely.ai // www.safelyandwisely.ai