SAIL: Halted Innovation
Feb 13, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on how AI impacts higher education.
The last time I saw this much daily innovation, with new apps, new technologies, and new opportunities, was in the early 2000s, when Berners-Lee’s vision of the read/write web was being realized. Instead of information creation being the domain of a small group of companies and experts, the web would be one to which we collaboratively and collectively contribute. Eventually, Web 2.0 became the preferred jargon. Universities, however, never fully absorbed the idea of read/write information ecosystems. Online learning needed an LMS and central control, exemplified by Coursera and edX adopting the one-way information flow of lectures and university classrooms.
Lost in the duplication of the classroom in the online environment is the idea that each one of us has latent knowledge and capability that we can contribute to the growth of others. Knowledge isn’t centralized. Learning isn’t a one-way flow requiring a teacher.
Ease of creating digital content (Twitter, FB, IG, YouTube) meant that we needed new skills. Absorbing what a prof taught was no longer the full learning experience. Engaging with others, learning to network, learning to filter information, understanding the creation-engagement loop, understanding that transparent learning is an act of teaching others, all became as important as traditional instruction. The prof, no longer a central hub, became a node in a knowledge network.
We’re there with AI again. I keep emphasizing the need for universities to be AI product creators. And it’s easier than ever. The range of tools (Replit, Cursor, OV, Windsurf, Bolt, Lovable) is the equivalent of the read/write web. Anyone can create software. Perhaps not incredibly complex software, but the skills have shifted somewhat from knowing how to code to the meta-skills of knowing how to ask the right questions and how to use AI to troubleshoot code and help you solve problems. Everyone can develop personal apps, personal information systems, personal APIs. What this means is that apps will be developed for AI to access and process, because agents are becoming the equivalent of clicking. Design for AI. Design for agents. The WWW infrastructure and standards were meant for humans to navigate. What’s happening now is about building the web for AI to connect to AI, with you at the center as the builder of your own apps/tools.
AI and Learning
Generative AI in Higher Education: A Global Perspective. “The findings reveal that universities are proactively addressing GAI integration by emphasising academic integrity, enhancing teaching and learning practices, and promoting equity.”
Read this: How much does it cost to build an online course? Then this: OELM. I’m thinking course design should cost about zero.
Wearables are a big deal. At Matter & Space, we’ve been playing with tools like Whoop, Meta Glasses, Apple Watches, etc. to help learners gain insight into themselves and their study habits. It feels like glasses/AR devices will be the phones of the future. There are secondary tools worth attending to, such as Bee: “Bee is a personal AI that transforms your conversations, tasks, places and more into summaries, personal insights and timely reminders.” Clearly there are privacy issues, but it’s still a neat idea.
The Impact of Gen AI on Critical Thinking: “Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking.” Btw, how do you read papers now? I read, drop into Gemini for summary. Upload to NotebookLM for a short audio. Discuss with ChatGPT. It’s less about reading and more about interacting and engaging.
A Week in My Life as a Product Leader with AI. I like articles like this - clear and practical examples of working where AI is a near constant partner. In education, learning designers, faculty, learners, and administrators would all benefit from exploring how their peers integrate and work with AI.
Hamel Husain is one of the best teachers and practitioners of LLMs, specifically AI testing and evaluation. His article on LLM as Judge gained significant attention. Here’s a great video: Look at Your Data: Debugging, Evaluating, and Iterating on Generative AI Systems. We’ve been evaluating the experiences of learners in dialogic processes. We’ve found Langfuse to be a great platform, using Hamel’s approach.
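For readers unfamiliar with the LLM-as-judge pattern, a minimal sketch is below. It is not Langfuse’s API or Hamel’s actual code: the judge prompt, the `call_model` stub, and the JSON field names are all illustrative assumptions, with the model call stubbed out so the example runs without credentials.

```python
import json

# Hypothetical judge prompt; a real one would include a rubric and examples.
JUDGE_PROMPT = """You are grading a tutor's reply to a learner.
Question: {question}
Reply: {reply}
Return JSON only: {{"pass": true/false, "reason": "..."}}"""

def call_model(prompt: str) -> str:
    # Stub standing in for any LLM API call; returns a canned verdict
    # so this sketch is runnable offline.
    return '{"pass": true, "reason": "Reply addresses the question directly."}'

def judge(question: str, reply: str) -> dict:
    # Ask a (stronger) model to grade the reply, then parse its verdict.
    raw = call_model(JUDGE_PROMPT.format(question=question, reply=reply))
    return json.loads(raw)  # production code should handle malformed JSON

verdict = judge(
    "What is retrieval practice?",
    "Recalling material from memory strengthens learning.",
)
print(verdict["pass"], "-", verdict["reason"])
```

In practice, each verdict would be logged alongside the learner conversation (this is where a platform like Langfuse comes in), so you can inspect the judge’s reasoning and iterate on the rubric.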
When should you use a GPT and when should you use a reasoning model? OpenAI breaks it down.
AI in General
There is a gulf between the EU and USA approaches to AI. VP Vance spoke in the EU and succinctly captured the distinctions. The talk is worth a listen. He states that his interest is not to talk about AI safety, but rather to talk about AI opportunity. The focus is to drive “pro-growth AI policies”. The USA and UK declined to sign a declaration that calls for an “open”, “inclusive” and “ethical” approach to the technology’s development.
The CEO of Anthropic responds: “Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.” And what, you ask, is the impact of a country of geniuses in a data center? Potentially it “could represent the largest change to the global labor market in human history.”
Changes to how OpenAI does models: “In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.” The brick-by-brick way that AI has been built is now being integrated into models that don’t require as much discrete decision making by the end user. The system is the thing.
Europe has been (and remains) behind in the AI game. They’re starting to step up with a new €200bn investment.