Windows Copilot Newsletter #20 - US hires CAIOs, Microsoft blocks hallucinations, and a teacher hacks cheaters...
Windows Copilot Newsletter #20
Biden orders US government to get match-fit for AI; Microsoft releases Azure AI tools to counter hallucinations; and a teacher uses a bit of prompt injection to detect homework cheats...
G'day and welcome to the twentieth edition of the Windows Copilot Newsletter, where we curate all of the most important stories in the rapidly-evolving field of AI chatbots. A curious week of news, so let's dive right in...
Top News
AI into US government C-suites: US President Joe Biden signed an executive order requiring all Federal departments to appoint a 'Chief AI Officer' (CAIO) within the next sixty days. Several have already done so. Read that here.
Azure AI Studio catches attacks & confabulations: Microsoft announced upgrades to Azure AI Studio - which offers API-level access to its growing suite of chatbots - that protect against 'prompt injection' attacks and catch hallucinations before they make it out of the chatbot and into the world, making AI tools built on Azure AI more secure and reliable. Read about it here.
Updates in Copilot for Microsoft 365: Microsoft announced upgrades to Copilot in its Microsoft 365 suite of office apps that bring "priority access to the GPT-4 Turbo model to work with both web and work data. We will also be removing limits on the number and length of conversations while increasing file uploads." Plus an improved Designer.
US Congress bans Copilot: "The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," says the Chief Administrative Officer for the US House of Representatives. Copilot is not in the House!
Top Tips
How to use Copilot to generate summaries: One of the most useful features of any AI chatbot, explained clearly here.

Gemini drives Google Maps: You can 'auto-submit' voice commands to Gemini, which means it can now pop you into Google Maps while you're driving, all hands-free. Read how here.
Safely and Wisely
Hallucinated packages download malware: Coding companions like GitHub Copilot can hallucinate dependencies - packages that get pulled into other programs - that don't actually exist. One researcher found a commonly hallucinated package name, then registered it for real, demonstrating a huge, AI-generated security hole. Read how they pulled it off here.
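To make the mechanics concrete, here's a minimal sketch (ours, not the researcher's actual tooling) of the gap being exploited: a coding assistant suggests a dependency, and the only question is whether that name really exists on PyPI - because if it doesn't, an attacker can register it first and wait for installs. The package name below is purely hypothetical.

```python
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has a project registered under this name."""
    url = f"https://pypi.org/pypi/{name}/json"   # PyPI's public JSON metadata endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)                      # parse metadata to confirm a real project
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:                      # no such project: free for anyone to claim
            return False
        raise

# Hypothetical name a coding assistant might suggest - not a real recommendation.
suggested = "totally-plausible-helper"
if not package_exists_on_pypi(suggested):
    print(f"'{suggested}' isn't on PyPI - treat the suggestion as a possible hallucination.")
else:
    print(f"'{suggested}' exists - but check who published it before you install.")
```

Checking assistant-suggested names against the registry (and looking at who published them) before installing closes off the easiest version of this attack.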
Bad Advice from NYC Chatbot: MyCity, an official New York City AI chatbot, was found by investigative reporters to be delivering incorrect answers to basic questions about local laws and regulations. Read that story here.
Teacher hacks chatbot to expose cheats: An educator based in Toronto mastered the art of 'prompt injection' - hiding instructions for a chatbot inside a homework assignment - making it easy to spot which students had ChatGPT do their homework for them. Prompt injection attacks are normally the domain of cybercriminals and could be illegal, so please don't try this at home.
Longreads
Persuasive chatbots: Research published this week found that AI chatbots are 81% more effective than humans at persuading people in debate - which means we might soon see chatbots trying to argue us into (or out of) everything. Read the full research paper (PDF) here.
Understanding the Microsoft Copilot Pro Value Proposition
In February, Drew Smith and I released our first Wisely AI white paper - designed to help organisations evaluate whether the features in Copilot Pro justify handing Microsoft a hefty AUD $45 per month per person subscription fee. Read or download the white paper here.
Our next white paper, 'De-Risking AI', will be released on the 18th of April. To receive a copy upon release, sign up here.
More next week - we’ll be back with the latest AI chatbot news!
If you learned something useful in this newsletter, please consider forwarding it to someone else who might benefit.
Mark Pesce
mark@markpesce.com // Wisely AI