SAIL:
April 25, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on AI’s impact on higher education.
AI’s longer-term impact on higher education is still largely unknown. Short-term indications suggest that it will help learners be better at many of the tasks that are now required: writing, self-testing, brainstorming, etc. This requires a new range of skills and literacies in how technology is used. It doesn’t change the learning process itself. The longer-term impact is less clear. With the hype of imminent AGI, one could argue that if we hit exponentially accelerating AI that sets its own goals and completes self-defined tasks, we’re somewhat obsolete as learning and knowledge-holding entities as defined by the existing education system (which was my point in this TEDx talk from last year). At minimum, we need to work through how to engage with mind-like entities (see 39 min mark).
If the existing system of learning primarily remains as is, but with greater use of AI, then the current strategy of signing licenses with Microsoft, Google, OpenAI, and Anthropic is fine.
If AI can start to serve as a suitable content creator, tutor, and assessor (e.g., asking o3 to create a personal machine learning study plan based on MIT OCW lectures), then universities need to find value-add approaches. These will likely come in the form of graph models of content to ensure content is relevant, or pedagogical agents that promote specific skills (case study analysis, role playing). The question for faculty and universities should be: “what are we adding that students can’t get from ChatGPT?”
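To make the “graph models of content” idea concrete, here is a minimal, hypothetical sketch. The topic names and prerequisite structure are invented for illustration; the point is that a university-maintained knowledge graph can validate an AI-generated study plan in a way a generic chatbot cannot.

```python
# Hypothetical sketch: a tiny prerequisite graph for machine learning topics.
# Each topic maps to the topics a learner should cover first.
prereqs = {
    "linear_algebra": [],
    "probability": [],
    "regression": ["linear_algebra", "probability"],
    "neural_networks": ["regression"],
}

def missing_prereqs(plan):
    """Return topics in the plan whose prerequisites are absent from it."""
    covered = set(plan)
    gaps = {}
    for topic in plan:
        missing = [p for p in prereqs.get(topic, []) if p not in covered]
        if missing:
            gaps[topic] = missing
    return gaps

# An AI-generated study plan that quietly skips probability:
plan = ["linear_algebra", "regression", "neural_networks"]
print(missing_prereqs(plan))  # {'regression': ['probability']}
```

A real implementation would sit on top of curriculum metadata rather than a hand-written dictionary, but even this toy version shows the value-add: the institution supplies the validated structure, and the AI supplies the personalization.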
If, however, the concept of encoding declarative knowledge in the brains of learners becomes a financially foolish endeavor - because AI can outperform us on those tasks on many fronts - then a broader rethink of the entire system of learning is needed. What perplexes me is that no one seems to be thinking at this systems reorganization level.
AI and Learning
- Advancing AI education for American Youth. Focuses on AI literacy, including a task force and courses. I think this framing of AI literacy as a type of legal obligation is interesting.
- Hiring and building an AI engineering team. All universities should have an AI engineering team. They will be as fundamental as learning design or teaching innovation departments. We’ve been wrestling with exactly this challenge internally, and AI/LLM development expertise is not easy to come by.
- Are you ready for an AI university. Before starting with M&S with Paul LeBlanc, I was doing talks around the idea of an “AI-First University”. Many of the concepts in this doc align with the idea that AI is about structural change to education systems: “Imagine a university employing only a handful of humans, run entirely by AI: a true AI university. In the next few years, it’s likely that a group of investors in conjunction with a major tech company like X, Google, Amazon, or Meta will launch an AI university with no campus and very few human instructors. By the year 2030, there will be standalone, autonomous AI universities.” That goes harder than I would. I love universities and the spaces that they enable for people to develop, think, and grow. AI, by taking over mundane work, may end up freeing us to connect with ourselves, others, and nature. This vision of a human-less learning system isn’t appealing. With that said, I absolutely expect AI to scale universities to the point where we have systems with tens, even hundreds, of millions of learners. And it will likely be in conjunction with some of the big tech organizations suggested here.
- 2025: The year the frontier firm is born. This is the type of document that will look outdated in six months, but currently it is needed and helpful. An easy read, but it raises important points about the optimal ratio between humans and agents and introduces the idea of humans as agent bosses. Google has data and compute on its side, but Microsoft has much more detailed insights into employees. Consider this rather intense statement (first video, 18 second mark): “an intelligent data layer to infer skills from user activity, mapped to a built in but customizable taxonomy”. There is significant momentum to restructure work with agents, given that “specialized AI agents are already delivering productivity and speed-to-market boosts of 50% or more”. This should be read as marketing hype, so we might want to temper our enthusiasm. But only a little.
- AI companies want to give you a new job. “none of us wants to spend time doing tedious, repetitive tasks. But managing an army of digital agents working on our behalf comes with a host of difficult questions and open problems.”
- No agents in EU Commission meetings. The EU continues its cautious approach to AI by banning AI agents that record/transcribe/summarize meetings.
- AI in personalized learning. Scoble is all about the hype. Still worth a skim to see how some in the tech community see AI directing and impacting learning.
AI Advancement
- Anthropic continues to produce the best publicly accessible reports on AI. This one looks at AI welfare: “as we build those AI systems and as they begin to approximate or surpass many human qualities another question arises. Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare too?”
- Values in the wild. Another Anthropic output. “AIs aren’t rigidly-programmed pieces of software and it’s often unclear exactly why they produce any given answer. What we need is a way of rigorously observing the values of an AI model as it responds to users “in the wild”—that is in real conversations with people.”
- 7 Lessons for Building with AI. “The real unlock isn’t AI in isolation. It’s AI plus well-scaffolded humans. That’s how we scale our systems without losing our standards or our soul.”
- Trends in AI Supercomputers. “AI supercomputers double in performance every 9 months, cost billions of dollars, and require as much power as mid-sized cities. Companies now own 80% of all AI supercomputers while governments’ share has declined.” The rapid move of key AI research from university labs to more closed corporate environments is one of my primary motivations for calling for greater university engagement in AI product building.
- Bending without breaking: optimal design patterns for effective agents. Short overview - given the lofty heading, I was hoping to dive into something meatier. However, the trade-offs to consider when building with agents are clearly laid out, notably around where to manage the process and where to enable agency from the agent. In education, this process is critical because getting it wrong can negatively impact learning outcomes in ways that persist (versus, say, having to get irritated with a web agent while buying shoes). The stakes are higher for the education sector.