SAIL: Critical thinking, More DeepSeek
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on how AI advances impact higher education.
What does innovation with AI look like in universities? It’s tiered somewhat like this:
AI agents/co-pilots. This is what we’re seeing adopted currently, ranging from recruitment to help-desk support to active tutoring. Low-hanging fruit.
AI to make academics, students, and administrators more effective in their teaching, writing, research, and work. Low-hanging fruit.
AI to create learning content. This is still relatively low-hanging fruit, but it’s a bit more involved to implement. David Wiley has been writing about this here and here and here. With the right planning by universities, students should never have to pay for educational content again.
AI to change teaching and learning at the university level. This requires more effort, as it involves multiple departments and can’t be implemented by a solo academic. IT, the faculty senate, department and school leadership, and student support services all need to be involved to ensure that innovations support and assist students.
AI to create a new system. This is what I think is most needed, and it requires that we move outside the limitations of existing universities. It’s increasingly difficult to justify the cost of higher education, especially in the USA. Once someone solves the credential bottleneck (i.e., the requirement that validation of learning come from an accredited agency), true systems change will begin in learning, especially for learners who are not served by the existing sector. The benefit here is that the anomalies that have accrued since the development of online learning (i.e., universities have not taken advantage of the opportunities and affordances it presents) will amplify the pace of change once credentialing is solved.
AI & Learning:
This is from last year, but it made the rounds on social media again: AI can durably reduce conspiracy thinking: “The AI chatbot’s ability to sustain tailored counterarguments and personalized in-depth conversations reduced their beliefs in conspiracies for months”
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking: “a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading… These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies.”
Also last year: AI Scientist: “This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models (LLMs) to perform research independently and communicate their findings.”
Responsible AI Consortium. There is an influx of consortia and collaborations aiming to “meet the challenge of AI” arriving in the university sector. Here is one.
General AI:
DeepSeek remains all the rage, though sanity seems to be setting in as people realize it isn’t some entirely magical new thing that surpasses existing AI at a fraction of the cost. It’s impressive, but there’s more to it than the simple narrative from a week ago. A few articles:
Collection of DeepSeek articles/tweets
DeepSeek impacts the stock market. But only a $1 trillion impact.
DeepSeek’s victory over American AI. The cool kids call this clickbait. It’s more about an established method (supervised learning) being effectively implemented and then openly shared.
AWS and Azure hosting DeepSeek models. Easy access and “click button” launch with existing cloud providers will drive adoption rapidly.
Good reflections by Andrew Ng, basically: China is in the game, open models win, and scaling isn’t the only way to get to AGI.
Interview with DeepSeek CEO. Excellent. “If the goal is to make applications, using the Llama structure for quick product deployment is reasonable. But our destination is AGI, which means we need to study new model structures to realize stronger model capability with limited resources.”
OpenAI partners with U.S. National Labs: “This is the beginning of a new era, where AI will advance science, strengthen national security, and support U.S. government initiatives.”
2025 State of AI development. Activity is accelerating, but only 25% report AI products at the deployment stage. Usage is also interesting: it’s mainly focused on document parsing, chatbots, and coding. Modality is mainly text, and OpenAI is the most widely used provider.
Audio/Voice is going to be big in the world of AI. ElevenLabs raises $250m.
Speaking of voice: this is an excellent overview of the state of AI voice.
OpenAI’s next reasoning model, o3-mini, dropped today. The system card is here.
An excellent technical dive into mixture-of-experts (MoE) approaches: “Using an MoE architecture makes it possible to attain better tradeoffs between model quality and inference efficiency than dense models typically achieve”
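To make that tradeoff concrete: in an MoE layer, a learned router sends each token to only a few of the experts, so compute per token stays close to that of a small dense network while total parameter count grows with the number of experts. Here is a minimal sketch of a top-k routed MoE layer in PyTorch (names and sizes are illustrative; this is not the implementation from the linked article):

```python
# Minimal sketch of a top-k gated mixture-of-experts (MoE) layer.
# All names and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        # The router scores each token against each expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                         # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only top_k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize the kept scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

x = torch.randn(16, 512)
print(MoELayer()(x).shape)  # torch.Size([16, 512])
```

With 8 experts and top_k=2, each token pays the compute cost of only 2 expert networks while the layer holds 8 experts’ worth of parameters. Production systems add load-balancing losses and expert capacity limits so tokens spread evenly across experts; this sketch omits those details.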
Human Things:
International AI Safety report. A significant report (~300 pages) by excellent researchers. Even just the opening section and executive summary are helpful. TL;DR: AI progress has been incredible. It’s not too late to be safe, but time is running out.
Goal of AI is to crash human wages: “A world in which human wages crash from AI — logically, necessarily — is a world in which productivity growth goes through the roof, and prices for goods and services crash to near zero”. No doubt this is the goal for the billionaire class… I personally think that if you don’t have at least a few moments each day where you stare into the AI abyss and realize that work will be dramatically different starting now, then you’re not understanding the current accelerating capacity of AI.
Copyright and AI. “some forms of AI generated content can, in fact, receive copyright protection, provided that a human substantially contributed”