SAIL: Agents, Mental Health
April 3, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I look at trends and impacts of AI on higher education.
The AI hypescape is vibrant, with the “AI will change everything” camp on one end and the “AI is meh (or worse)” camp on the other. Similarly, the AI influencer camp and the AI doomer camp bookend discussions of broader impact on society and humanity. What is indisputable is that no technology has received such rapid and widespread government interest, economic investment, and consumer uptake as AI. AI is inevitable. It will reorganize society, restructure human life, and remake work, institutions, and even war. There are some contrary voices, but system-level adoption of AI is where the intelligence lies - i.e., it’s the compound systems of existing tech, AI, automation, robotics, etc. that will drive change.
AI & Education
I am eager to see AI enable learning opportunities for those who are excluded from the current system. It’s not a surprise that leading AI labs are launching initiatives like Anthropic’s Claude for Education or OpenAI’s Academy. What I find interesting is how mundane their framing of learning and development feels. Anthropic has added the ability for learners to use its tools to create study plans, but this isn’t a moonshot rethinking of education.
Mental health issues are significant concerns on campuses (and in society broadly). A few recent reports suggest AI offers both help and cause for worry here. A report this week detailed the many benefits AI affords for supporting individuals in therapy: “Fine-tuned Gen-AI chatbots offer a feasible approach to delivering personalized mental health interventions at scale”.
Agents are promising in education. Anthropic released a popular post last year that I’ve linked to previously. Here is a short video from one of the authors of that post on how they build agents.
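The core pattern that post describes is simple: a model running in a loop, calling tools and acting on the results until the task is done. Here is a minimal sketch of that loop in Python. Everything in it - `call_model`, the `search_courses` tool, the message format - is a hypothetical stand-in of my own, not Anthropic’s actual API:

```python
# Minimal agent loop: a model calls tools until it decides it's done.
# `call_model` and the tool below are illustrative stand-ins, not a real API.

def search_courses(query: str) -> str:
    """Toy tool: pretend to search a course catalog."""
    return f"3 courses found for '{query}'"

TOOLS = {"search_courses": search_courses}

def call_model(messages: list[dict]) -> dict:
    """Stand-in for a real LLM call. A real implementation would send
    `messages` to a model API and return its (possibly tool-calling) reply."""
    # Canned behavior: request a tool once, then finish.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "search_courses",
                "args": {"query": "statistics"}}
    return {"type": "final", "text": "Here is a study plan based on 3 courses."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # cap steps so the loop always terminates
        reply = call_model(messages)
        if reply["type"] == "final":
            return reply["text"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."

print(run_agent("Build me a study plan for intro statistics."))
```

The step cap is the unglamorous part that matters in practice: without it, an agent that never reaches “done” loops forever.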
We haven’t seen many large-scale, effective examples of AI agents in higher education. Khanmigo has most of the hype here. However, Uplimit (focused on workforce re-skilling) has launched learning agents to rapidly skill employees: “new AI agents tackle what Uplimit identifies as critical pain points in corporate learning. The skill-building agents facilitate practice-based learning through AI role-plays and personalized feedback. Program management agents analyze learner progress, automatically identifying struggling participants and sending personalized interventions. Teaching assistants provide 24/7 support, answering questions and facilitating discussions.” Discrete problems being solved by agents are a start. A smoothly coordinated system is quite another.
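To make that gap concrete: coordination is, at minimum, a routing problem - something has to decide which discrete agent handles which learner event. A toy sketch below uses agent roles that mirror Uplimit’s categories, but the routing logic is my own illustration, not their system:

```python
# Toy coordinator over discrete learning agents. The agent roles mirror the
# Uplimit description; the dispatch logic is illustrative, not their design.

def skill_building_agent(event: dict) -> str:
    return f"Scheduling a role-play exercise on {event['topic']}."

def program_management_agent(event: dict) -> str:
    return f"Learner {event['learner']} flagged as struggling; sending a check-in."

def teaching_assistant_agent(event: dict) -> str:
    return f"Answering question: {event['question']}"

def route(event: dict) -> str:
    """Dispatch each learner event to the agent responsible for it."""
    if event["kind"] == "practice_due":
        return skill_building_agent(event)
    if event["kind"] == "falling_behind":
        return program_management_agent(event)
    if event["kind"] == "question":
        return teaching_assistant_agent(event)
    return "No agent available for this event."

events = [
    {"kind": "question", "question": "What is gradient descent?"},
    {"kind": "falling_behind", "learner": "A12"},
    {"kind": "practice_due", "topic": "difficult conversations"},
]
for e in events:
    print(route(e))
```

The dispatch is the easy part. The hard part of a “smoothly coordinated system” is shared state: each agent knowing what the others have already done for the same learner.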
General AI things
This has been making the rounds: AI 2027. Reception has been mixed, with some calling it basically ridiculous and others, like Bengio, saying it is worth reading. At minimum, the timeline is provocative.
OpenAI raises $40b, the largest private round in history.
Amazon launches Nova Act. They offer their agent infrastructure as a solution to this problem: “Our dream is for agents to perform wide-ranging, complex, multi-step tasks like organizing a wedding or handling complex IT tasks to increase business productivity. While some use cases are well-suited for today’s technology, multi-step agents prompted with high-level goals still require constant human hovering and supervision.”
PwC drops an Agent OS: “a consistent, scalable framework for building, orchestrating, and integrating AI agents across a wide range of platforms, tools and business functions”. Agents, orchestration, and the related function calling/tooling feel to me like shaky ground on which to build: this is the natural domain for OpenAI/Google/Anthropic to extend into as they capture more of the value chain.
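For readers newer to the plumbing: function calling is just the model emitting structured JSON that matches a schema you declare. A sketch of what declaring one tool looks like, loosely in the JSON Schema style the major providers use - the exact envelope varies by vendor, and this particular tool (`get_invoice_status`) is invented for illustration:

```python
import json

# A tool declaration in the JSON Schema style used for function calling.
# The exact wrapper differs by provider; this layout is illustrative only.
get_invoice_status = {
    "name": "get_invoice_status",
    "description": "Look up the payment status of an invoice by its ID.",
    "parameters": {
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string", "description": "Internal invoice ID"},
        },
        "required": ["invoice_id"],
    },
}

# Given this schema, the model returns arguments as JSON; your code parses
# and executes them. That round trip is the shaky ground: the vendor controls
# both the schema dialect and the model that emits calls against it.
model_output = '{"invoice_id": "INV-2041"}'
args = json.loads(model_output)
print(f"Calling get_invoice_status({args['invoice_id']!r})")
```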
Ok, this is cool. The CEO of Microsoft vibe codes Altair BASIC from 50 years ago. It took Bill Gates and Paul Allen six weeks. Satya Nadella delivers in 10 minutes.
AI & Safety
Google has put out a few AGI papers in the last year (see this article on levels of AGI). Their most recent, from a few days ago, outlines their current AI safety approach - identifying and restricting access to potentially harmful capabilities. It feels a bit quaint, though: if something like AGI is achieved, these plans may be easy to circumvent.