SAIL: Learning Design?, AI Timeline, Future Automated Company
May 5, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on AI’s impact on higher education.
AI and Education
I’ve mentioned before that developing content now carries basically zero economic cost. This video, running at about two hours, is the best example of what I think learning design (content) with AI will look like in the near future. It’s a one-person media studio extravaganza: a web of content pieces, moving from a prompt to an image to a video to audio to a sequence. Sure, it’s clunky now, but content creation - rich media learning content - is cheap and the resulting media is impressively high quality.
Matt Tower and I spend some time talking about Matter and Space. The focus is on systems and systems change. We even get to connectivist MOOCs. But mainly we focus on learning and emerging AI/learning needs. Appreciated Matt’s preparation and conversation!
Deeplearning.ai (Andrew Ng’s project) is one of the best sources for courses that provide a quick introduction to key topics. Consider this one on vector databases. Courses are generally short with practical exercises. Well worth the time.
The structured data that universities hold - in the form of syllabi, course sequencing, student data, operational data, etc. - has tremendous value. This data flywheel serves as a moat against external AI startups and big tech, since it allows for personalization, model accuracy, and a sharpened content focus within a domain (i.e., it constrains LLMs to a focused domain).
Just AI things
Crazy to see this three-year (!!) timeline of AI. Lots has happened, and it’s nice to see the sheer scope of it in such a concise listing.
What fully automated companies will look like. This started as an essay but, with the low content creation effort of GenAI, has been turned into an interesting video describing a possible future AI-centric firm. Btw, Dwarkesh Patel is a must-follow podcast - he consistently brings on top thinkers/leaders in the AI and tech space.
LLM Benchmarks. I’m still sometimes intrigued by how well AI adds a layer of value to what used to be frustrating processes. This link has a listing of LLM benchmarks. Typically, I’d look through it and decide which links to look at (i.e., “what does this benchmark actually benchmark?”). That could easily take an hour. But I can drop the link in Gemini, ask it to produce a short summary of each link, and present them in a table. And then export to Google Docs. Which is here. Still kinda cool.
It’s Time To Get Concerned: Klarna, UPS, Duolingo, Cisco, And Many Other Companies Are Replacing Workers With AI. “The new workplace trend is not employee friendly. Artificial intelligence (AI) and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI.”
Glasses will be important learning technologies, and most Big Tech organizations either have a glasses/AR/VR play or one in planning. Brilliant offers a product vision, and code, for deploying with Gemini.
AI Agent Protocols. Good summary (and current with recent protocols launched over the last month). “In this paper we provide the first comprehensive analysis of existing agent protocols, proposing a systematic two-dimensional classification that differentiates context-oriented versus inter-agent protocols and general-purpose versus domain-specific protocols.”
Deep Dive into Long Context. Great video. Google (DeepMind) has one of the largest context windows (roughly the working memory of an LLM - what it can remember and engage with from the user, like chat history or profile attributes). When a context window can’t handle what users want (due to token quantity), RAG is useful for processing internal organizational data or adding information that wasn’t included in the LLM. An ongoing debate is whether large enough context windows will eventually kill RAG. That seems nonsensical given that organizations need LLMs to interact with their private data and, as Nikolay states in this interview, a context window can handle some of that data, but organizations have billions (or trillions) of tokens of it.
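The retrieval side of that debate can be sketched in a few lines. This is a toy illustration, not a production system: it uses bag-of-words cosine similarity where a real RAG pipeline would use learned embeddings and a vector database, and the corpus strings are invented stand-ins for organizational documents.

```python
# Minimal sketch of RAG's retrieval step: when a corpus is far larger than
# any context window, retrieve only the most relevant chunks and prepend
# them to the prompt, rather than stuffing everything into context.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts - a crude stand-in for an embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)[:k]

# Hypothetical institutional documents - imagine billions of tokens of these.
corpus = [
    "Enrollment data shows a 12 percent rise in online course registrations.",
    "The syllabus for BIOL 201 covers genetics and cell biology.",
    "Campus parking permits are issued each semester by facilities.",
]

# Only the retrieved chunk is sent to the LLM alongside the question.
context = retrieve("Which course covers genetics?", corpus, k=1)
prompt = "Answer using this context:\n" + "\n".join(context)
```

The point of the sketch is the shape of the trade-off: the context window only ever sees the top-k retrieved chunks, so the corpus can grow without bound while the prompt stays small.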