Windows Copilot Newsletter #28 - OpenAI cuts deals; Google adds a plus to Chromebooks, tells users to eat rocks...
Windows Copilot Newsletter #28
OpenAI cuts deals with big media orgs worldwide; Google turns on Gemini features for a re-christened line of ‘Chromebook Plus’ devices, while its ‘AI Overviews’ search results tell users to eat glue…
G’day and welcome to the twenty-eighth edition of the Windows Copilot Newsletter, where we curate and present all of the most interesting stories from the rapidly-changing field of AI chatbots. It’s been a weird week, so let’s dive right in…
Top News
OpenAI partners up with potential litigants: Rather than face lawsuits from the likes of News Corp, Vox Media and The Atlantic, OpenAI has struck deals with each of these publishers - offering them access to its AI technologies in return for their content. Will this benefit the publishers? One writer for The Atlantic outright called it a ‘deal with the devil’. Read it all here.
Google search as light comedy: The addition of ‘AI Overviews’ to Google’s search results produced an instant sensation - but not the good kind. The search engine suggested eating rocks, putting glue on pizza - and leaving your dog locked in a car on a hot day. None of these are good ideas, and Google quickly scaled the feature back - but the damage to its core brand was done. Read all about that.
Copilot in Telegram: Meta has put Meta AI in front of billions of users via integrations in Messenger, Instagram and WhatsApp - so Microsoft has responded by integrating Copilot into Telegram. Is a chatbot in a private, “secure” messenger app a good idea? Read about that here.
The ‘Plus’ means AI: Google re-christened its Chromebook line as ‘Chromebook Plus’, integrating its Gemini AI chatbot throughout the device and its cloud-based apps. Coming so soon after the announcement of Microsoft’s “Copilot+” AI PCs, this feels… like a copy? Read about it here.
Top Tips
10 ChatGPT use cases: We love a listicle, and this one features the new uses of ChatGPT powered by GPT-4o. Read it here.

Safely and Wisely
ChatGPT inaccurate when answering programming questions: Researchers have tested ChatGPT on a range of computer programming questions - and found it wanting. Something to keep in mind when using it as a ‘pair programmer’. Read about that here.
AI antitrust: FTC Chair Lina Khan has suggested that AI models in use today may violate US antitrust laws. Read all about that.
Longreads
AI firms mustn’t govern themselves: Ousted OpenAI board members Tasha McCauley and Helen Toner write at length in The Economist about the governance failures at OpenAI - from the inside. It’s quite a story.
Lessons learned from a Year Building with LLMs: It’s long, it’s detailed - and every word is solid gold. This is a great collection of what you need to know if you’re working with AI chatbots. Read it here.
‘De-Risking AI’ white paper - now out
AI offers organisations powerful new capabilities to automate workflows, amplify productivity, and redefine business practices. These same tools open the door to risks that few organisations have encountered before.
Wisely AI’s latest white paper, ‘De-Risking AI’, lays a foundation for understanding and mitigating those risks. It's part of our core mission to "help organisations use AI safely and wisely". Read it here.
More next week - we’re halfway through the gap between Microsoft’s Build event and Apple’s big event on 10 June.
If you learned something useful in this newsletter, please consider forwarding it along to a colleague.
See you next week!
Mark Pesce
mark@safelyandwisely.ai // www.safelyandwisely.ai