SAIL: GenAI and Feedback, Loneliness, Jobs, AI and LMS
March 22, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I review AI trends that impact higher education.
AI & Education
What is happening in the labor market? Paul Fain provides a thorough review: “A growing number of signals suggests that artificial intelligence will soon transform substantial swaths of the labor market, with serious but unclear implications for education and job training.”
New model of education. This rhetoric of educational change has been prominent for a few decades and is vague enough to apply to any era: “The four-year degree would become an antiquated relic, replaced by a modular learning system that allows students to acquire and demonstrate skills as needed.” We’re still waiting for the trend that delivers that promise.
Students’ Perceptions of GenAI-powered Learning Analytics in the Feedback Process. GenAI has the capacity to personalize all parts of learning, creating new opportunities to vastly improve learning design and the learning process itself. Assessment and feedback is one area in particular where AI can contribute to “augmenting feedback practices by providing innovative, personalized, and scalable feedback solutions.” The study is largely supportive of AI, but does note that the impact on engagement is lower than expected. The “feedback literacy” discussion early in the paper is good, and AI developers would benefit from attending to it.
How is AI reshaping the LMS? LMS providers are at risk due to the capabilities of AI. An LMS locks in format and bakes in pedagogy. The rather limitless and unstructured nature of LLMs (especially in dialogic learning) is confounded by the LMS’s forced structure. I’m not convinced that LMS providers can make the structural shift required to fully take advantage of AI. With that said, this article reviews the need for human-centered learning and trust in AI.
HCAST: Human-Calibrated Autonomy Software Tasks. Spend time with this. It’s essentially a benchmark to assess the performance of autonomous agents on real-world tasks. I’m more interested, though, in the breakdown of tasks between humans and AI, because that remains one of the key challenges for us to understand educationally: what is still worth teaching and learning? OpenAI’s CPO says AI will be able to exceed human performance in coding by 2026. The CEO of Anthropic says AI will do 90% of software coding in the next 3-6 months.
AI & Technology
Even if AI progress peaked today, the capabilities created for brainstorming, thinking, and coding would be sufficient to develop entirely new approaches to learning. One area that remains a substantive weakness for LLMs, but that universities have in abundance, is structured information and data. Course syllabi, outcomes, rubrics, evaluation criteria, etc. are all important for generating relevant and accurate engagements with learners. We’ve (Matter and Space) placed a bet on knowledge graphs as the organizing framework to leverage and accelerate AI as a tutor. Here’s a good article unpacking the value of structured data for AI use, ultimately also landing on graph models. Since AI is good at creating content, we mostly need to provide it with semantic structuring.
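To make the idea concrete, here is a minimal sketch (hypothetical, not Matter and Space’s actual implementation) of what “semantic structuring” can look like: course artifacts expressed as knowledge-graph triples, with the slice relevant to a topic serialized as grounding context for a tutor prompt. All node names and relations below are invented for illustration.

```python
# Hypothetical example: course structure as (subject, relation, object) triples.
TRIPLES = [
    ("Course: Intro Stats", "has_outcome", "Interpret confidence intervals"),
    ("Course: Intro Stats", "has_outcome", "Run a t-test"),
    ("Interpret confidence intervals", "assessed_by", "Rubric item 2.1"),
    ("Run a t-test", "requires", "Interpret confidence intervals"),
]

def neighborhood(node, triples):
    """Return every triple that mentions the given node."""
    return [t for t in triples if node in (t[0], t[2])]

def as_context(triples):
    """Serialize triples into plain statements an LLM can use as grounding."""
    return "\n".join(f"{s} -[{r}]-> {o}" for s, r, o in triples)

# The slice of the graph relevant to one topic becomes prompt context,
# so tutor responses stay aligned with stated outcomes and rubrics.
context = as_context(neighborhood("Run a t-test", TRIPLES))
```

A production system would use a real graph store and retrieval step, but the core move is the same: the institution supplies the structure, and the model supplies the language.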
AI is driving consulting profits. Accenture booked $1.4b in genAI revenue in Q2 2025.
Claude launches search. Anthropic builds more slowly than OpenAI…but what it builds is generally mature and well developed by release. OpenAI sometimes throws things out there before they’re fully baked. I’m still struggling with Operator and finding value in it.
Affective use and emotional wellbeing in conversation with ChatGPT. This is an excellent paper reviewing how advanced voice mode impacts wellbeing. The discussion section is thorough and covers the related topics of anthropomorphism, emotional resilience, and sociotechnical safety.
How does AI use interact with loneliness? “Overall, higher daily usage–across all modalities and conversation types–correlated with higher loneliness, dependence, and problematic use, and lower socialization.”
OpenAI launches a new series of advanced voice models (text-to-speech and speech-to-text). They offer significantly improved error rates and are part of the co-evolution of tools (i.e., last week’s agent SDK co-evolves with the new audio models).
Why do multi-agent systems fail? An excellent taxonomy in Figure 2 of why failure happens. Agentic workflows and multi-agent systems are key to creating effective educational products. Understanding why they fail is an excellent starting point.