SAIL: Jobs and things we should learn
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on how AI impacts higher education.
We’ve been hearing for years that AI will destroy jobs. Frey and Osborne tried to classify the impact of automation in 2013, boldly proclaiming that up to 47% of the USA job market was at risk. Many WEF working papers have similarly argued for dramatically new skill sets in the future (WEF papers primarily tell people what they already think and then give them the confidence to state it boldly. Such as this one. If you cite a WEF paper, like I often do, you’re mainly saying “I thought this before and now I have proof that agrees with me”).
Now we’re starting to see the labor market impact of AI, and the backdrop of failed predictions makes it somewhat hard to trust new proclamations. Anthropic, however, released a good report today on the economic impact of AI. Computer science and related fields had high adoption and use. Of particular interest were the ways that AI either augmented or automated work tasks (p. 10, Figure 7).
If AI is taking over (it is), or at least augmenting, more of our cognitive tasks, the question I’ve been asking for several years, without a satisfying answer, remains prominent: “What should we be teaching in higher education?” Take any university program and ask yourself “how will this look in five years?” Or look at Grow with Google or Microsoft skills. What will remain relevant in five years and what will be replaced? And if replaced, what will be taught instead? Not generally or conceptually, but practically: what will the course calendar and four-year degree schedule look like for a computer science student in 2030?
AI and Education:
The Duolingo Handbook. A simple and accessible overview of the innovative culture and product developed in the process of building a learning app.
Higher education needs tool builders. These tool builders should be creating roughly all things AI - from agents to course builders to student help. I’m not seeing much of that happening, but for higher education to start leveraging AI effectively, we need to start building tools ourselves. How should you build an AI engineering team? Here’s a good video detailing roles.
We’re all trying to figure out the ways in which AI differs from human intelligence. The Vatican weighs in with many words: “While AI is an extraordinary technological achievement capable of imitating certain outputs associated with human intelligence, it operates by performing tasks, achieving goals, or making decisions based on quantitative data and computational logic…Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh”. The document covers education, misinformation, deepfakes, surveillance, privacy, our common identity, etc.
Transforming Science with Large Language Models. A somewhat odd paper that opens with a first-year-textbook visual of “this is science” and “this is the hypothesis generation process”. But if you move past that and the 300+ citations, the paper offers a review of how AI will impact everything from search to writing to analysis.
AI as teammates. In this case, in healthcare: “Unlike tools, agentic AI has the potential to take initiatives; rather than waiting for queries and data, agents can proactively monitor and pull data from the health-care system to identify issues and propose solutions. An AI agent can maintain long-term memory and context, tracking complex patient histories and interactions over time.” Obviously, in education, AI/human teaming is a logical focus.
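The proactive, memory-keeping behaviour described in that quote maps onto a fairly simple control loop. Below is a minimal Python sketch of that pattern, not anything from the article or a specific framework; the names (MonitoringAgent, fetch_records, propose_action) are hypothetical placeholders for whatever data source and model call an institution actually wires in.

```python
# Minimal sketch of an "agent as teammate": rather than waiting for a query,
# the agent polls a data source on its own, keeps long-term memory of what it
# has already seen, and proposes an action when something new appears.
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    """Long-term record of observations the agent has already processed."""
    history: list = field(default_factory=list)

    def remember(self, item):
        self.history.append(item)


class MonitoringAgent:
    def __init__(self, fetch_records, propose_action):
        # fetch_records() returns current records from some system (e.g. an LMS
        # or health-care database); propose_action(new, history) would typically
        # wrap a model call. Both are placeholders supplied by the caller.
        self.fetch_records = fetch_records
        self.propose_action = propose_action
        self.memory = AgentMemory()

    def step(self):
        """One proactive cycle: pull data, diff against memory, propose."""
        records = self.fetch_records()
        new = [r for r in records if r not in self.memory.history]
        for r in new:
            self.memory.remember(r)
        return self.propose_action(new, self.memory.history) if new else None
```

Run on a schedule, each `step()` call is the agent "taking initiative": the memory object is what lets it track context over time instead of treating every query as a blank slate.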
General AI:
Google was caught flat-footed by ChatGPT and has been scrambling since. They’ve now developed top-tier models. The difference, though, is price. Google will soon become product builders’ first preference, unless OpenAI starts dropping their prices. Altman says prices will drop 10x annually.
Hugging Face’s free Agents course opened today. It’s all agents these days. Might as well get up to speed.
How LLMs store facts. I think I’ve shared this before. But it’s excellent.
Building Effective Agents. Anthropic released this mid-December. An outstanding read.
The robots are coming. This is a development I’m looking forward to seeing unfold, especially the practical around-the-house/lab/university/yard type of robots. They’re coming and they’re getting almost affordable. I imagine by 2026 we’ll see this develop rapidly as a consumer product.
Europe has been busy regulating rather than building AI. Or at least, that’s the common framing. Mistral, however, has launched an impressive new AI assistant: Le Chat. The AI world is now essentially three domains: USA, China, and a bit of EU.
Amazon is investing $100bn, largely in AI, in 2025. That’s a ridiculous number, but set against Microsoft’s $80bn and OpenAI’s announced $500bn, it starts to look almost sane, since the anticipated rewards of AI dominance are so significant. And to grasp the full scope of investment: over $238bn by seven US big tech companies in 2025 alone.