SAIL: Labor
August 31, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on AI and higher education.
Anthropic released a report recently that evaluated higher education’s use of AI. The results aren’t surprising - content/curriculum creation, help with writing, research, assessment, using AI as a thought partner, creating games, and visualizations. AI is rapidly making its way into society and it’s changing how people do things. Some of the comments from educators get closer to what the future will hold for universities: “AI is forcing me to totally change how I teach.” This touches on my most frequent point of whining. I spent last week visiting several university systems, meeting with senior leadership and faculty. The recognition of impending change is palpable. What’s lacking is vision, direction, a place to move toward. We’ll be announcing our fifth online conference shortly, and as part of that we have a small workshop (5-10 people in leadership roles) happening prior that will focus on envisioning the shape, structure, and role of universities in the future. If you’re interested or wish to nominate a dean or provost, please reply to this email!
AI and Education
Not related to AI, but one of the best videos I’ve seen all year: Listers. It’s somewhat of a documentary. More importantly, it shows the enormous learning benefits of passion and challenge (Thanks Zeke!).
We were just awarded a Gates Foundation grant to study maladaptive use of AI in learning. We’re partnering with colleagues from Georgetown, U Minnesota, and University of South Australia. We’ll share more soon, but for now, this comes against the backdrop of numerous negative reports on LLMs being used for mental health with catastrophic consequences. “Basically it’s the wild west and I think we’re right at the cusp of the full impact and fallout of AI chatbots on mental health.” I think there are viable and positive uses of LLMs to support wellness and mental health. But safeguards are needed. Those don’t exist yet.
Emotional manipulation by AI companions. Social media was expertly capable of manipulating human attention and increasing time on app. AI agents raise manipulation capabilities enormously. This study looks at the “goodbye” moment of a conversation, where agents begin to actively manipulate and drive continued interaction. We’re fish in a barrel.
The bad press about end-user maladaptive use of LLMs has spooked providers. OpenAI is now reporting some user conversations to police. And they have a new focus on helping people when they need it most.
A 10-20% chance AI will eliminate humanity. So says Geoffrey Hinton. Whew. I thought it was higher. It’s a good overview of short- and long-term risks. The discussion turns to how we should relate to AI that is smarter than us. Hinton offers the example of a mother and child as a circumstance where a smarter entity cares for another. AI mothers as our only hope. It feels inadequate, as do all examples of future human-AI relationships.
One slip and you’re guilty. Higher education’s plodding response to AI has been unfair to students. Students have only a limited understanding of what they should do, what the limits of AI use are, and what avenues exist to appeal when AI use has been wrongly assessed.
How should parents talk to their kids about AI? (thanks Pete!) Some good guidance from a largely AI-critical crowd currently promoting books. “Let’s just remember the point of all this — of caring for people, of caring about the world.” This focus, of course, is not exclusively the domain of critics. AI deployed properly will enable new research, personalized learning, ready access to medical advice, etc. Caring for people means making the knowledge landscape equitable and building easy on-ramps to opportunities that are currently limited to a smaller group.
General AI
Just following up on this from last month: Best open models are Chinese.
The big news in AI this week was a report on how entry-level jobs are being impacted by AI. The numbers are alarming: new grads are being hit particularly hard while existing employees actually continue to succeed. It’s exactly this delta that causes Noahpinion to comment that something doesn’t feel right.
Microsoft finally drops their own models - one is a voice model and the other is a foundation model (presumably to eventually reduce their reliance on OpenAI).
Announcements of advances in AI are getting mundane, but this improved performance by GPT-5 on medical reasoning is impressive. Better than humans…