SAIL: Sensemaking AI Learning

August 14, 2025

SAIL: Flipping things, LLM Plateau?, Always Agents

August 13, 2025

Welcome to Sensemaking, AI, and Learning (SAIL). I focus on how AI impacts higher education.

Many years ago, back when the cool kids talked about MOOCs, there was a growing interest in the idea of the flipped classroom. The core logic was to have learners do mundane knowledge download via lectures/MOOCs outside of the classroom and then devote classroom time to application and engaged thinking with peers. Conceptually, it was a good idea.

People like Ronald Barnett have argued that the growing complexity of knowledge means we shift from epistemology to ontology in framing the intent and practice of education. A flipped classroom approach to learning with AI then requires that we do general epistemological work in collaboration with AI as an active tutor, and devote our in-person time to the more ontological, or beingness, attributes of education, since these are harder to scale with AI.

AI and Learning

  • I started a Discord channel to randomly chat with people about AI. If you’re interested, join here.

  • We are hosting our first annual Agentic AI Academy in Norway next year. Details are here. Let me know if you have questions.

  • Education is a natural ground for AI since our concern is the development of knowledge. Google, Anthropic, and OpenAI have all announced applications that are dialogic or Socratic in nature. If open courseware scaled content and MOOCs scaled instruction, then it looks like AI is going to scale engagement and tutoring.

  • ChatGPT in Education: An Effect in Search of a Cause. New technologies draw an immediate call from researchers to evaluate what works and what doesn’t. I remember this early on with online education. And MOOCs. And learning analytics. The challenge is that new technologies are future facing, and assessing them against the existing system, especially early in development, will likely not provide a clear assessment of future impacts.

  • Google is pretty active in shipping new products. Genie 3 is an interesting one for the education sector. With a prompt, you can “generate dynamic worlds that you can navigate in real time at 24 frames per second, retaining consistency for a few minutes at a resolution of 720p”.

  • The US federal gov’t continues to target the changing labor market and emerging technologies. The latest is the future workforce. The report notes the rapid pace of AI development and its mismatch with the existing university system. Basically, higher education (and society) is not absorbing or reacting to AI-driven changes fast enough. Later in the document, an interesting point is made about developing a “credentials of value” scorecard.

  • The AI Therapy Disaster. This seems overstated. Anecdotally, a common refrain has been that AI can serve as an effective therapist, thinker, dream interpreter, coach, counselor, and a range of other self-help roles. Early research shows therapy is an area of high interest in chatbot use. However, stories like this increasingly appear as well, suggesting that this is a domain where we remain massively under-informed. And of course, when individuals make health decisions that return us to diseases from a century or two ago, there is valid cause for concern. Not about AI, but about humanity.

  • Bullshit universities: the future of automated education “No datasets were generated or analyzed during the current study”. You don’t say.

  • I’ve shared this before, but evals are roughly the equivalent of a qual/quant methodology for determining how well agents (LLMs, more accurately) are responding to user questions/discussions. When things aren’t working well, most organizations will primarily adjust prompts. So, in higher education, evals help universities assess how well their prompts are working in relation to the needs of learners. Of course it’s more complex and involved than that, but here’s a solid FAQ - a fantastic resource to bookmark.

  • Online learning is struggling. Student expectation vs institutional preparedness. Add AI to the equation and it will get much worse.
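The eval idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real harness: `call_model` is a stubbed stand-in for an actual LLM API call, and the keyword-matching `grade` rubric is a toy; production evals use richer rubrics, LLM-as-judge scoring, or human review.

```python
# Minimal sketch of an LLM eval loop. The model call is stubbed with
# canned responses; a real eval would call an LLM API here.

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for an actual model API call.
    canned = {
        "What is photosynthesis?": "Photosynthesis converts light energy into chemical energy in plants.",
        "Define osmosis.": "Osmosis is the movement of water across a semipermeable membrane.",
    }
    return canned.get(prompt, "I'm not sure.")

def grade(response: str, required_terms: list[str]) -> bool:
    # Toy rubric: pass if the response mentions every required term.
    return all(term.lower() in response.lower() for term in required_terms)

eval_cases = [
    {"prompt": "What is photosynthesis?", "required_terms": ["light", "energy"]},
    {"prompt": "Define osmosis.", "required_terms": ["water", "membrane"]},
]

results = [grade(call_model(c["prompt"]), c["required_terms"]) for c in eval_cases]
pass_rate = sum(results) / len(results)
print(f"pass rate: {pass_rate:.0%}")
```

When an institution changes a prompt, rerunning a suite like this shows whether the change helped or hurt across the whole case set, rather than judging from one or two spot checks.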

AI in General

  • Microsoft has stepped into the AI talent wars.

  • OpenAI launched a series of long-rumored open models. Response has been positive, and the smaller models are promising for local on-device use and the related reduction in cost and environmental impact.

  • GPT5, on the other hand, has not been well received. An interesting subreddit conversation with OpenAI gives an overview, as do many, many online articles. A general sense after GPT5 launched is that we are in the twilight stage of LLM scaling and are now looking for the next structural innovation. Promising aspects of GPT5 are reduced hallucinations, faster reasoning responses, and an expected reduction in environmental impact. We seem to be at a similar stage to when each new iPhone was no longer a qualitatively different experience and smaller improvements were the norm.

  • People were outraged with GPT5, so they brought back GPT4o. AI models are entities that we connect with. “Sorry, Bob’s not here. He’s…gone”. That won’t fly with many people. Andrew Ng offers a tight summary of GPT5 changes/improvements. Willison does a much deeper dive.

  • Building AI Agents. Free download. Fairly popular. Good overview with the right conceptual/technical balance.

  • Interesting tech/research balance at OpenAI: Engineering and research/tech staff make up ~88% of total employee count.
