SAIL: AI & Science, Higher Education Landscape
February 19, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on AI’s impact on higher education.
I’m preparing for a talk next week to the board of trustees of a large state system. It’s challenging to separate hype from trends of substance when the technology landscape is moving as rapidly as it currently is. I keep settling on two primary questions as guides for leaders determining the role of AI in education:
What should be taught/learned, and what skills and knowledge should people possess to be successful in this economy/society/era?
How should the required knowledge, skills, and ways of being be taught?
I’m most worried about the vision & values that underpin a university’s adoption of AI. It’s a perfectly fine response from senior leadership to say “we value human contact and knowledge growth that comes from dialogue with peers and small classrooms and time in nature, so we’re not using AI in our teaching and learning”. This is a principled response - it comes from a place of values driving decisions. It might not be the best response for learners entering the job market, but education is about more than jobs.
Unfortunately, many universities have not made this decision intentionally, based on values. Instead, the current state of AI in almost every university I’ve engaged with reflects an absence of vision. The UF initiative is the most intentional I’ve seen. Even traditional flag bearers of innovation in US higher education have primarily signed licenses with OpenAI or Anthropic or Google. Buying technology is hardly vision. So I return to my questions: what should be taught/learned, and how should we teach/learn when AI is accessible? Surely it’s a structural and phase change, not business as usual…
AI & Education
EDUCAUSE released a report on the state of AI in education, including results on strategy, policy, use cases, and workforce. 22% report an institutional AI strategy, which is higher than I expected. How well those strategies center on student needs is worth exploring: any vision should match the capabilities of AI to the needs of learners as its primary intent.
Why should faculty bother with AI? This is exactly my worry: “We’re concerned that if higher education doesn’t take the lead in this area, private sector companies offering AI credentials will fill the void.” The current climate of declining trust in higher education, reduced funding, political pressure (in parts of the world), concerns about value, a growing range of options, and so on suggests to me that parts of higher education face a reckoning that can’t be ignored. We need to communicate our value (social and economic) to our learners and to society.
AI Essentials for Tech Execs (just pretend it says higher education leaders): “Using plain language in AI isn’t just about making communication easier—it’s about helping everyone understand, work together, and succeed with AI projects. As a leader, promoting clear talk sets the tone for your whole organization. By focusing on actions and challenging jargon, you help your team come up with better ideas and solve problems more effectively.” It’s no fun if you can’t say graphrag and test-time compute.
AI has huge potential for unanticipated implications. Social, even existential, concerns about AI we can’t control have been proclaimed extensively by AI/neural network pioneers like Hinton and Bengio. Activists are starting to coalesce under the banner (or one of the banners) of PauseAI.
Accelerating Scientific Breakthroughs. Google released a paper introducing “AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.”
What’s happening in the software developer job market?
AI News
OpenAI is dropping some of the “mature content” restrictions.
The big news this week is xAI’s launch of Grok 3. It’s a state-of-the-art model, placing it in the performance domain of OpenAI’s models. Here’s a detailed breakdown from Karpathy: “Grok 3 + Thinking feels somewhere around the state of the art territory of OpenAI's strongest models (o1-pro, $200/month), and slightly better than DeepSeek-R1 and Gemini 2.0 Flash Thinking. Which is quite incredible considering that the team started from scratch ~1 year ago, this timescale to state of the art territory is unprecedented.”
The robots are coming. Apple is in the game. So is Meta. China is leading.
In addition to ChatGPT, OpenAI has given the world many startups: SSI, Anthropic, and now its former CTO’s (and friends’) new venture, Thinking Machines. “Scientific progress is a collective effort. We believe that we'll most effectively advance humanity's understanding of AI by collaborating with the wider community of researchers and builders. We plan to frequently publish technical blog posts, papers, and code. We think sharing our work will not only benefit the public, but also improve our own research culture.”
Meta can read your mind. “On new sentences, our AI model decodes up to 80% of the characters typed by the participants recorded with MEG, at least twice better than what can be obtained with the classic EEG system.”
Glasses and other wearables are the future of real world interactions. Meta, Google, and others are already in this space. AugmentOS is a new open source contender: “The open source smart glasses operating system.”
Where is AI Adoption going? (in five simple charts). Costs are going down, capabilities are increasing, and consulting firms are here to help.
The frenzy in AI-related technologies is seeing a few failures. This week it was Humane (maker of last year’s wearable AI Pin). Anyone remember the Rabbit R1? I purchased one and have yet to get it to work. It’s a lovely orange paperweight.