SAIL: Sensemaking AI Learning


SAIL: Only now matters, New Tools, Wellness

June 27, 2025

Welcome to Sensemaking, AI, and Learning (SAIL). I focus on higher education and AI.

Herbert A. Simon observed that information consumes attention: “a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

It’s worth noting that nothing matters anymore. There are still the naive ones in our midst who seek truth. But networks are antagonistic by nature in that they reduce the mediating effects of shared structures (like news organizations and even expert systems like government agencies). This allows us to coalesce around people we like and agree with. The rapid pace of news cycles means that events (political or otherwise) that would once have defined a generation and consumed our attention for months are now a hashtag for a few hours before being replaced by the next attention-absorbing event. Truth is a meme shared in the camaraderie of your globally networked in-group.

All indications are that AI will accelerate the assault on human attention and meaning making. How do we prepare learners for this type of future? How do we foster more artisanal thinking? Why aren’t our university leaders guiding us in this existential crisis?

AI and Learning

  • Radical change needed for higher education to survive. “We’re going to have to see some quite radical change. The attitude has always been ‘evolution not revolution’ in the university sector, but I do wonder whether it’s time for a bit of revolution, and there’s probably not going to be too much choice about some of this.” AI didn’t make the “worry about” list.

  • How do people use Claude for companionship? Anthropic continues to publish confronting research at a pace and quality unmatched by other labs.

  • How are people using AI? This is a good resource. We know AI is impacting traditional web search, but how is AI impacting daily life? Considering it has been a consumer technology for barely more than two years, the impact is already profound, with 20% of US adults using AI daily. Only about 3% pay. Despite the education sector being slow to make effective use of AI, learning remains one of the top five use categories.

  • 2 Sigma in 2 hours. An overview of AI in schools (K-12). Runs 80+ minutes, but worth the time since it walks through the process of bringing learning sciences, new models of education, and AI together to shape how students learn. In this framing, the focus is on AI as a lever for systems change. Learners in this program outperform most learners in traditional settings. They emphasize personalization as the key feature in those gains.

  • LLM providers are becoming operating systems. OpenAI has released two recent products that support this: 1. Connectors, which give ChatGPT access to email and files. (btw, how is it possible that OAI beat Google to making their own products useful? This reminds me of Google, about 15 years ago, solving desktop search because Microsoft was unable to address that need for their users.) 2. ChatGPT is playing in the space of Granola, Notion, Zoom, Otter, and others who offer meeting notes and summarization. Both avenues provide new data for OAI, though they say certain client-tier interactions are not used for model training.

  • Using copyrighted texts to train LLMs is fair use, according to a recent ruling. A short summary is here. “This decision indicates it is okay to train on legitimately acquired data to build models that generate transformational outputs, and to convert printed books to digital format for this purpose. However, downloading from pirate sites (as well as permanently building a “general purpose” library of texts, stored indefinitely for purposes to be determined, without permission from the relevant copyright holders) are not considered fair use.”

  • Memory across LLMs is a key need. Social networks have lock-in via peers (i.e. it’s hard to shift to a new network without losing peers and audience - consider how challenging it has been for any network to replace Twitter - fragmentation has resulted in smaller, more isolated communities with limited reach and impact). LLMs will likely use memory and feature integrations as their lock-in. Which is why I find the idea of a central memory service that works across multiple LLMs (see mem0’s Chrome extension) promising. It’s glitchy. But it’s a start.

  • I’m interested in the types of skills learners will need to effectively work with peers and AI. What parts of what we’re teaching in our schools are no longer relevant? One option is to teach meta-cognitive and knowledge application skills, such as learning how to engage in civil disagreement.

  • LLMs to create interactive lessons. Generating lessons, assessments, and even learning content is a quick and effective use of AI. This paper details one such process, using decomposed subtasks (versus one prompt to generate the entire lesson) for more effective outputs; a rough sketch of the decomposed approach appears at the end of this list.

  • Which jobs will AI not be able to automate? Twitter thread - the focus is (perhaps we’re whistling in the wind?) on jobs that affirm human interaction or that are experience-based. Another framing is the shift from “how” to “why”. Table 11 from this paper suggests that physical work has the most immunity.

  • Thou Shalt Use AI. So says Microsoft to its employees. And they are “considering formal metrics for evaluating how much employees use AI during the workday.”

  • Glasses are a fantastic multi-modal LLM learning tool. It’s not surprising that big tech providers are starting to offer their own versions. This one is from Xiaomi (one of the largest mobile phone manufacturers in China). When an architect, designer, or student wants to engage with AI, phones have a horrible form factor. Glasses, in contrast, provide improved engagement through ambient computing.

  • There is a mean person in my life (Pete, you know it’s you) who keeps sharing all the bad things about AI. Here is a recent article on the growing pushback to AI. Strangely timed with one of the interviewees having recently published a book decrying AI. AI will gain most of its value from being a central node in a broader products and services ecosystem.

  • Pearson and Google are going to be friends. This is a rather short article that provides no value and basically says “personalization is the salvation of learners”. Still, how industry players are connecting to address the education sector is worth following.

  • We’re not taking AI seriously enough. Politicians are speaking up as AI advances: “the terms of what it is like to be a human are about to change in ways that rival the transformations of the Enlightenment or the Industrial Revolution, only much more quickly.”
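
To make the decomposed-subtask approach from the interactive-lessons item above concrete, here is a minimal sketch in Python. It assumes the openai package and an API key in the environment; the particular breakdown (objectives, outline, activity, quiz) is my own illustration, not the paper’s actual pipeline.

```python
# A minimal sketch of decomposed-subtask lesson generation (assumed breakdown,
# not the paper's exact pipeline). Requires the openai package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """One focused model call per subtask."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

topic = "photosynthesis for 8th-grade students"

# Each subtask gets its own prompt; later steps reuse earlier outputs.
objectives = generate(f"List three learning objectives for a lesson on {topic}.")
outline = generate(f"Given these objectives:\n{objectives}\n\nWrite a five-part lesson outline.")
activity = generate(f"Design one interactive activity for this outline:\n{outline}")
quiz = generate(f"Write a four-question quiz aligned to these objectives:\n{objectives}")

print("\n\n".join([objectives, outline, activity, quiz]))
```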

AI Technology

  • AI labs are running out of data. They’re responding with a range of partnerships where OpenAI, Google, or Anthropic buy data (Reddit, Bloomberg) or slice up and use books.

  • We’re a bit fixated on LLMs as the marker of AI progress. However, there is rapid progress in related fields such as self-driving cars, robots, and human health. Google, long at the forefront of genomics research, announced AlphaGenome. It’s offering API access to the model, which handles a significant 1 million base-pair sequence length (sort of like a context window). Google humbly states that “This makes it a strong foundation for the wider community to build upon. Once the model is fully released, scientists will be able to adapt and fine-tune it on their own datasets to better tackle their unique research questions.”

  • On-device AI is where we’re trending. Google recently shared Gemma 3n developer documentation.

  • Meta is scoring a few victories against OpenAI in hiring talent. I wonder how long Meta will retain its “open source AI” focus. Once you start investing (raising) $29 billion, the economics shift.

  • Google released Gemini CLI this week. Simon Willison (must follow/read) covers it here: “All three of the largest AI labs now have their own version of what I am calling a "terminal agent" - a CLI tool that can read and write files and execute commands on your behalf in the terminal.”

  • OpenAI is focusing more on developers. OAI has been more consumer-focused, whereas Anthropic and Google have been developer-focused. When we first started with Matter & Space, it was a struggle to get OAI’s attention or any level of support. The landscape is changing (thanks, Google) and OAI is focusing on the developer needs of their technology. You can now build your own deep research tool with OpenAI’s API; a rough sketch follows below.
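
For the curious, here is a minimal sketch of what a do-it-yourself deep research tool might look like on top of the OpenAI API. The search_web helper is a hypothetical placeholder for whatever search backend you use, and the decompose, gather, synthesize loop is my assumption about how such tools are typically wired, not OpenAI’s own recipe.

```python
# A do-it-yourself "deep research" loop on top of the OpenAI API.
# search_web() is a hypothetical placeholder for whatever search backend you use;
# the decompose -> gather -> synthesize structure is an assumption, not OpenAI's recipe.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def search_web(query: str) -> str:
    # Hypothetical stub: swap in a real search API and return snippets here.
    return f"(search results for {query!r} would go here)"

def deep_research(question: str) -> str:
    # 1. Decompose the question into focused sub-questions.
    sub_questions = ask(f"Break this into three concrete sub-questions, one per line:\n{question}")
    # 2. Gather evidence for each sub-question.
    notes = [f"Q: {q}\nEvidence: {search_web(q)}" for q in sub_questions.splitlines() if q.strip()]
    # 3. Synthesize a short, evidence-grounded report.
    return ask(f"Using only this evidence, write a short report on: {question}\n\n" + "\n\n".join(notes))

print(deep_research("How is AI changing assessment practices in higher education?"))
```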
