SAIL: Sensemaking AI Learning

SAIL: Meeker AI Trends, Fluency, Model Use, Data Centers

June 6, 2025

Welcome to Sensemaking, AI, and Learning (SAIL). I focus on AI’s impact on higher education.

I’m a broken record on this, but universities need to become AI product builders, not simply buyers of what is being built by startups and frontier labs. This raises questions around what a team would look like. If your university went out tomorrow and launched an AI product builders group, who should be represented? At minimum:

- Faculty
- Technical talent (someone from IT who has LLM expertise)
- Designer (learning)
- Project manager

That is the minimum. Beyond that, I found this post on the AI Engineer stack useful for unpacking the AI engineer role, along with this set of real-world build examples.

AI and Education

  • AI literacy is a big deal. In Canada, Google is partnering with AMII on an initiative “bringing together 25 post-secondary institutions across Canada. The consortium will develop and distribute AI curriculum resources to faculty, making it easier to incorporate AI concepts into existing coursework and reach an estimated 125,000 students.” This is exactly how universities should approach this: consortium models. Network or die (or be taken over by frontier models). I’m looking forward to seeing how this unfolds. Canada has deep technical capabilities (Waterloo); I recall this from the earliest days of online learning.

  • AI Fluency - I linked to Anthropic’s course on AI Fluency last week. Zapier states that all new hires must be AI fluent and offers a rough rubric.

  • Mary Meeker has released an AI Trends report. It’s exceptional and well worth the time to get an understanding of the state of AI.

  • AI Eats the World. A great presentation that balances the breathless optimism in some circles with the dismissiveness found in others. It looks both backward and forward, and leaves the viewer with excellent questions about impact.

  • LLM Course. Well worth the time if you want a one-stop “figure this LLM thing out” course. I believe I’ve shared it before, but I saw it on Twitter today and was reminded of how solid a resource it is.

  • How to prepare teams for AI coworkers. “AI agents are changing how work gets done, for the better. But the transition will not happen automatically. Adoption will require strong and transparent policies, where teams are not treated as afterthoughts to exciting new technology, but as an essential part of the journey.” Right. But we haven’t even figured out AI’s impact on teaching and learning processes.

  • When should you use different models? I’m finding myself shifting more to Claude and Gemini. Learning to use models is like learning which teammates are the most helpful in solving certain tasks. It takes some time to get the feel and focus right. ChatGPT seems to have the best memory across conversations, which adds significant value.

  • Superhuman performance. These reports are getting old, but this one is still worth calling out: “LLM displayed superhuman diagnostic and reasoning abilities, as well as continued improvement from prior generations of AI clinical decision support. Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning.”

  • Simulating human behavior. AI in its current form is a bonanza for psychologists and learning researchers. We’ve been playing with synthetic data and personas, but this paper provides an excellent overview of the opportunities once the technology has been built out. “If these agents achieve high accuracy, they could enable researchers to test a broad set of interventions and theories, such as how people would react to new public health messages, product launches, or major economic or political shocks. Across economics, sociology, organizations, and political science, new ways of simulating individual behavior—and the behavior of groups of individuals—could help expand our understanding of social interactions, institutions, and networks.”

AI Technology

  • Shocking, but there appears to be hype and deception in AI. “It turns out the company had no AI and instead was just a group of Indian developers pretending to write code as AI.” (It was, at one time, valued at $1.5B.)

  • It’s Waymo’s world. I was in San Francisco this week and was rather stunned by the dramatic increase in Waymos around the city since my visit late last year. They’re everywhere. “The company now has a safety record over more than 50 million driverless miles—the equivalent of driving across America roughly 20,000 times.”

  • AI supercomputers. This is pretty impressive. It’s all the USA and China; other countries barely make a blip. The EU is exceptionally out of the loop.

  • Data centers in the sky. Because, of course. “The lunar economy will grow, and within the next five years we will need digital infrastructure on the moon,” Eisele says. “We will have robots that will need to talk to each other. Governments will set up scientific bases and will need digital infrastructure to support their needs not only on the moon but also for going to Mars and beyond. That will be a big part of our future.”

  • Google has released its latest Gemini 2.5 Pro, and it’s leading on roughly all the benchmarks.
