Windows Copilot Newsletter #31 - ChatGPT leaks; Gemini for kids; Disability bias in LLMs
Windows Copilot Newsletter #31
OpenAI patches a macOS ChatGPT security flaw; Google offers Gemini to students; ChatGPT shows bias against resumes that mention a disability…
G'day and welcome back to the 31st edition of the Windows Copilot Newsletter, where we curate the most significant stories from the ever-expanding universe of AI chatbots. As we've been away for a few weeks, there's a lot to cover, so let's dive right in...
Top News
OpenAI patches macOS ChatGPT app: An investigation by The Verge revealed OpenAI's ChatGPT desktop app for macOS stored chat conversations in plaintext - readable by anyone with access to that machine. OpenAI rushed out an update - but it's disturbing that security wasn't designed in from the start. Read about it here.
Would you like some carbon with that AI? Last week, Ars Technica intimated that AI isn't so bad for climate change. This week, The Verge reports that Google's carbon emissions have jumped nearly 50% since 2019 - largely because of AI. Which is it?
Gemini for the children: Although all public AI chatbots have age restrictions, Google has decided that kids need to be prepared for a world where generative AI is everywhere (possibly because Google force-fed it to them), announcing that Gemini will be bundled into its cloud-based educational suite. Read that here.
Copilot on Win 10, finally: Microsoft's drive to Copilot 'all the things' hit a roadblock on systems with multiple monitors - and who doesn't have a few monitors these days? That delayed the full Windows 11 rollout of Copilot by several months, and slowed the rollout to Windows 10 systems - but a fix just hit the 'Canary' channel for testing.
Top Tips
Block the bots with Cloudflare: It appears that many AI companies are scraping websites, whether or not they're permitted to do so. (Yes, we're looking at you, Perplexity.) Cloudflare, the CDN provider, has developed a service - available even on its free tier - that promises to block those scraping bots. Try it out!
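Cloudflare's one-click block is the easy route, but the underlying idea is simple enough to sketch yourself. Below is a minimal, illustrative Python (WSGI) middleware that refuses requests from self-identified AI crawlers by checking the User-Agent header - the bot names listed are examples, not an exhaustive list, and badly-behaved scrapers can spoof this header, which is exactly why Cloudflare's managed blocking goes deeper:

    # Illustrative sketch only: refuse requests whose User-Agent matches a
    # known AI crawler. Real scrapers can spoof this header, so treat this
    # as a demonstration of the idea, not a complete defence.
    AI_BOT_AGENTS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")  # example list

    def block_ai_bots(app):
        """WSGI middleware returning 403 Forbidden to known AI crawlers."""
        def middleware(environ, start_response):
            user_agent = environ.get("HTTP_USER_AGENT", "")
            if any(bot in user_agent for bot in AI_BOT_AGENTS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"AI crawlers are not welcome here.\n"]
            return app(environ, start_response)
        return middleware

Any WSGI app can be wrapped this way - in Flask, for instance, app.wsgi_app = block_ai_bots(app.wsgi_app).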
Safely and Wisely
ChatGPT's disability bias: University of Washington researchers found that ChatGPT consistently ranked resumes implying a disability below otherwise-identical resumes. They also found it was possible to correct for some of this bias - but surely that reflects a training data set biased against people with disabilities? Read about it here.
Your lying chatbot: WIRED profiles Bland AI, whose customer service chatbot will tell customers that it's not a chatbot. So trustworthy. So believable. Read that here.
Longreads
How generative AI could reinvent what it means to play: MIT Technology Review explores the intersection of games, simulations, and LLMs - and likes what it sees.
Neo-Nazis are all in on AI: A disturbing investigation by WIRED shows how far-right groups harness AI to spread disinformation and hate. Chilling.
‘De-Risking AI’ white paper - now out
AI offers organisations powerful new capabilities to automate workflows, amplify productivity, and redefine business practices. But these same tools open the door to risks that few organisations have encountered before.
Wisely AI’s latest white paper, ‘De-Risking AI’, lays a foundation for understanding and mitigating those risks. It's part of our core mission to "help organisations use AI safely and wisely". Read it here.
If you learned something useful in this newsletter, please consider forwarding it along to a colleague.
We'll be back next week with more of the most interesting stories about AI chatbots!
Mark Pesce