AI Week Sep. 14th: Accessibility, a game-changing new standard, & AI psychosis
Hi! Welcome to this week's AI week. I'm doing something new this week. I wanted to make it easier for you to share the stories in this newsletter individually with whoever you wanted, however you wanted. So I'm trying out some share links. They look like this:
[Share this link] [Copy link] [Email this link]
Click the share links to copy, email, or share the link using AddToAny. Let me know what you think about this experiment by leaving a comment!
In this week's newsletter:
- AI for Accessibility
- Something fun: AI Darwin Awards
- Really Simple Licensing
- Resources: AI psychosis is on the rise. Do you know someone whose AI use is beginning to concern you?
AI is for Accessibility, Increased
This week's marquee application of AI/ML is accessibility. Last week, I mentioned a UK government trial of Copilot that found no net gain in productivity. However, some users benefitted from the trial more than others.

Why accessibility might be AI’s biggest breakthrough - Ars Technica
UK study findings may challenge assumptions about who benefits most from AI tools.
[Share this link] [Copy link] [Email this link]
Copilot was useful in levelling the playing field for neurodiverse users and people with hearing difficulties -- albeit with a caveat or two around reliability and dependence.
When participants report difficulty readjusting to work without AI even as overall productivity gains remain marginal, accessibility emerges as potentially the first AI application with irreplaceable value.
Have you used AI for accessibility purposes yourself, at work or elsewhere? Share your experience in the comments.
Something fun: The AI Darwin Awards

2025 AI Darwin Award Nominees - Worst AI Failures of the Year
Meet the 2025 AI Darwin Award nominees - from database-deleting AI agents to fake legal citations. See this year's most spectacular AI failures.
[Share this link] [Copy link] [Email this link]
Really Simple Licensing could be a Really Big Deal
Hands-down one of the most interesting stories this past week was the RSL collective's new Really Simple Licensing standard. It has the potential to change the parasitic relationship between AI companies and the websites they scrape for content.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions. - Ars Technica
“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.
[Share this link] [Copy link] [Email this link]
Most websites already have a "robots.txt", a small file that tells web-crawling robots what they're welcome to crawl. RSL, which has the backing of big publishers like Reddit, Yahoo, and People Inc., proposes beefing up robots.txt to support specific licenses for AI training and summarizing, including disallowing these uses outright or requiring payment.
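To give a concrete sense of the idea, here's a rough sketch of what an RSL-enabled setup might look like: the existing robots.txt gains a pointer to a machine-readable license file, and that file spells out the AI-use terms. The directive and element names below are my reading of the standard, not copied from it -- treat them as illustrative and check the official RSL spec at rslstandard.org before using anything in production.

```
# robots.txt -- existing crawler rules stay exactly as they are
User-agent: *
Allow: /

# RSL addition (name assumed): point crawlers at a machine-readable license
License: https://example.com/license.xml
```

The license file itself is a small XML document; in this hypothetical version, it permits AI training only if the crawler pays under the publisher's terms:

```
<!-- license.xml -- illustrative only; element names are assumptions -->
<rsl xmlns="https://rslstandard.org/rsl">
  <content url="https://example.com/">
    <license>
      <!-- which AI uses this license covers -->
      <permits type="usage">train-ai</permits>
      <!-- compensation required for that use -->
      <payment type="purchase">
        <standard>https://example.com/terms</standard>
      </payment>
    </license>
  </content>
</rsl>
```

The appeal of piggybacking on robots.txt is that every serious crawler already fetches it, so publishers can declare terms without changing anything else about their site.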
If you have a website, check out their Getting Started guide, or check in with your webhost about how they're going to support this.
Resources for AI-induced psychosis
Earlier this month, I talked about an emerging AI safety risk: AI-induced delusions and emotional entanglement.
Do you know anyone whose AI use is beginning to concern you? If so, here are a couple of resources.
First, the latest CBC White Coat Black Art podcast featured a first-person account of ChatGPT-induced psychosis. Allan Brooks, of Cobourg, Ontario, spent three weeks down a ChatGPT rabbit hole convinced he'd invented new math that let him literally do the impossible.
Listen here:
CBC Radio: The Human Face of AI Psychosis
[Share this link] [Copy link] [Email this link]
You could share this podcast with anyone whose chatbot use is concerning you, or with their circle of support, as a way of starting the conversation.
Some takeaways:
- Allan broke out of his delusional spiral by pitting two chatbots against each other: Gemini didn't agree with ChatGPT.
- Giving your chatbot a name makes it harder to remember it's a chatbot, not a person.
- AI-induced psychosis is on the rise.
- People with AI-induced psychosis can be otherwise mentally well, apart from the AI-induced delusions.
Second, here's a website where people can share stories of emotional harm by AI tools: the Human Line Project.
The Human Line Project has also started a support group called the Spiral Support Group for people in Allan's situation.
Finally, CBC podcast The Dose recently ran an episode on how to use ChatGPT for health advice safely and sanely:
CBC Radio: What should I know about asking ChatGPT for health advice?
[Share this link] [Copy link] [Email this link]
That's it for this week's AI week! Thanks for joining me. Please share your thoughts in the comments.