SAIL: Transmutation, Assessment, Robots
May 2, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on how AI impacts higher education.
For people even adjacently interested in AI, we’ve all undergone a two-year “AI literacy by practicing” process. Most people now know the value of generative AI in brainstorming, co-thinking, and planning. They also know the limitations of AI in terms of memory, math, and occasionally off-kilter responses. Educationally, LLMs have drawn interest in two low-hanging-fruit application areas: AI for developing content and AI for chatbots/tutoring.
I’m interested in the ability of AI to drive the cost of content creation to zero. With the prevalence of open education resources, LLMs, and some planning on the part of designers, there should be no cost to students for generic content. Obviously, specialized games and VR/AR content are different. But one of the most valuable aspects of LLMs is information transmutation. This is somewhat underappreciated. Many years ago, Stephen Downes (who still runs the best daily overview of educational technology - I encourage you to sign up) mentioned that content is like a MacGuffin - it helps to advance learning but is in itself not critical to the plot of learning.
Why does this matter?
All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet-to-be-defined subelements) will be the foundation for being knowledgeable going forward.
That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change - to transmute - information will become a central part of how LLMs and AI are adopted in education.
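To make the mechanics concrete, here’s a minimal sketch of what that kind of transmutation looks like in code, using the OpenAI Python client. The model name, persona prompts, and lesson text are illustrative assumptions, not a prescribed setup - the point is that only the short system prompt changes, never the content itself.

```python
# Minimal sketch of "transmutation via system prompt" (assumes the openai
# package >= 1.0 and an OPENAI_API_KEY set in the environment).
from openai import OpenAI

client = OpenAI()

# Hypothetical lesson content - in practice this could come from an OER.
LESSON = "Large language models predict the next token in a sequence of text."

# Each persona is just a sentence or two of system prompt; the lesson
# content itself never changes.
PERSONAS = {
    "farmer": "Explain concepts with farming analogies: crops, soil, seasons.",
    "adhd": "Use short sentences, clear headings, and one idea per paragraph.",
    "pirate": "Respond in the voice of a pirate.",
}

def transmute(lesson: str, persona: str) -> str:
    """Re-render the same lesson for a different learner."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": f"Teach me this: {lesson}"},
        ],
    )
    return response.choices[0].message.content

print(transmute(LESSON, "farmer"))
```

Swapping "farmer" for "adhd" or "pirate" re-renders the identical lesson for a different learner - the content is fixed, the pedagogy is a parameter.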
AI and Learning
Ad Age picked up the release of our Butterflies video, noting the theme of “Indigenous boy uses AI learning for ecological justice in sci-fi film”. The campaign can be followed on Instagram. We wanted to portray a relentlessly optimistic vision of AI enabling a return to our humanity, which includes using technology to help humans solve problems of substance in their own communities. We were privileged to have the creative output led by The Work and Sarah Eagle Heart, a two-time Emmy winner. AI will rewrite our lives, our institutions, and our society. We want to shape its impact to align with values and ideals that center on all learners, especially those not served by today’s education system. We’re gearing up for our first group of university and industry partners in September and will continue to roll out with broader, international partners in 2026.
Pedagogical framework for hybrid intelligent feedback: “The paper conceptualizes the role of GenAI feedback as either an independent source or as part of a collaborative process with humans, referred to as “Hybrid Intelligent Feedback”. Building on this conceptualization, it discusses the approaches and principles of hybrid intelligent feedback and then proposes a pedagogical framework that outlines the implementation steps for hybrid intelligent feedback.”
Duolingo goes AI-first. This has early-2000s vibes, when organizations started going digital first. Eventually, all organizations will be AI-first. For now, it’s interesting to see the logic of early adopters of this shift (I shared Shopify’s shift a few weeks ago). In Duolingo’s case, it will result in job loss for contractors. Their view of the opportunity: “For the first time ever, teaching as well as the best human tutors is within our reach. Being AI-first means we will need to rethink much of how we work. Making minor tweaks to systems designed for humans won’t get us there. In many cases we’ll need to start from scratch. We’re not going to rebuild everything overnight, and some things - like getting AI to understand our codebase - will take time. However, we can’t wait until the technology is 100% perfect. We’d rather move with urgency and take occasional small hits on quality than move slowly and miss the moment.”
Speaking of Duolingo - it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact in education that is hard to overstate. “Instead of taking years to build a single course with humans, the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”
General AI
We’re in the middle of agentic AI - then we have to work through multimedia and AI, then wearables/VR/AR, and then we will be greeted by robots. This is a good overview of the state of robots and practical concerns (like foot design and weight balance) so you can prepare for 2027 :).
When new models are released, developers generally include some comparison of the model’s performance on a leaderboard. That’s not working well anymore: leaderboards can be gamed, and researchers are questioning their sustained value. In this case, they focus on Chatbot Arena and the overfitting that models do when they have access to the arena in advance (meaning a model succeeds in Chatbot Arena, but that success ceases to be an indication of the model’s overall quality).
Alibaba dropped Qwen3, now the top open model. If you scroll down, you’ll see them promoting increased agent capability and also MCP support.
AI Safety. Good talk by Bengio (who, with LeCun and Hinton, won the 2018 Turing Award) on the risks of AI. I find it slightly alarming (well, intensely terrifying) that the people who are most involved in the development of AI, and who aren’t explicitly pursuing monetary gain, are the ones basically saying “hey, there’s a functionally good chance that we won’t be able to control this soon”.
This is the most expensive build-out in human history: $7 Trillion to scale data centers. “Our research shows that by 2030, data centers are projected to require $6.7 trillion worldwide to keep pace with the demand for compute power. Data centers equipped to handle AI processing loads are projected to require $5.2 trillion in capital expenditures, while those powering traditional IT applications are projected to require $1.5 trillion in capital expenditures (see sidebar “What about non-AI workloads?”). Overall, that’s nearly $7 trillion in capital outlays needed by 2030 - a staggering number by any measure.”
This has been blowing up in various online forums this week: Unethical Research on Reddit with AI. “In a brief summary of the research posted online - but subsequently removed - the researchers report that the AI content was significantly more persuasive than human-generated content, receiving more “deltas” - awarded for a strong argument that resulted in changed beliefs - per comment than other accounts. The comments personalized with inferred user information performed best, in the 99th percentile of all commenters within the subreddit.” Basically, AI persuades better than humans do. Which means we’re kinda screwed.
OpenAI, somewhat related to the item above in that it’s messing with human emotions, had its own challenges last week as an update to their 4o model turned out to be ridiculously sycophantic. It assured me that I was likely the best person to have ever lived. Though it’s hard to validate that. However, in their defense, they have owned up to the update, rolled things back, and provided a fantastic explanation. It’s a glimpse behind the curtain of model deployment at the leading AI lab.