SAIL: Gearing up for 2025
Welcome to Sensemaking, AI, and Learning (SAIL).
The last year has been hectic on the AI advancement front, accelerating in December as Google (DeepMind) and OpenAI took turns announcing significant new capabilities in their models. One of my personal goals for 2025 is to increase the signal-to-noise ratio by doing deeper dives into substantial developments and spending less time on more transient advances. And, as always, finding spaces where consequential voices are speaking.
Here are a few reflections on 2024 and expectations for 2025 that are worth spending some time exploring:
2025 Beckons. Scroll down for some important orienting thoughts for this upcoming year from a range of entrepreneurs and researchers, including the obligatory focus on agents as a key trend this year, as well as AI as a uniting technology (Audrey Tang).
AI Orchestration. There is a growing ecosystem surrounding foundation models like the GPT series, Gemini, and Claude, as well as lesser-known tools like ElevenLabs for voice, discrete applications such as Hume for voice and emotion, and avatar-generation tools like HeyGen. Given the anticipated prevalence of agents and agentic systems, orchestration and coordination frameworks will become more critical. LangChain, LlamaIndex, OpenAI’s Swarm, Hugging Face’s just-announced smolagents, and others are part of a growing range of AI development platforms. The developers I speak with generally prefer to work outside of frameworks, since frameworks add a layer of abstraction that reduces their control. However, as more people get into AI building, orchestration frameworks and agentic workflows will see increased adoption.
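For readers curious what these frameworks actually abstract away, here is a minimal sketch of the agent loop that most of them wrap: call the model, dispatch any requested tool, feed the observation back, and repeat until the model gives a final answer. The model here is a stub standing in for an LLM API call, and all names (the `calculator` tool, the message format) are illustrative, not taken from any particular framework:

```python
def calculator(expression: str) -> str:
    """A toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(history):
    """Stand-in for an LLM: requests a tool once, then finishes.
    A real model would decide this from the conversation history."""
    if not any(m["role"] == "tool" for m in history):
        return {"action": "tool", "tool": "calculator", "input": "6 * 7"}
    result = next(m for m in history if m["role"] == "tool")["content"]
    return {"action": "final", "content": f"The answer is {result}."}

def run_agent(task: str, model, max_steps: int = 5) -> str:
    """The core orchestration loop: model call, tool dispatch,
    observation appended to history, until a final answer or step cap."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = model(history)
        if decision["action"] == "final":
            return decision["content"]
        output = TOOLS[decision["tool"]](decision["input"])
        history.append({"role": "tool", "content": output})
    return "Stopped: step limit reached."

print(run_agent("What is 6 times 7?", stub_model))  # The answer is 42.
```

The whole thing fits in thirty lines, which is exactly why many developers skip the frameworks: the value the frameworks add is in the parts this sketch omits, such as retries, tracing, memory, and multi-agent handoffs.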
Hamel Husain and friends launched an open course on LLMs. It’s exceptional and practitioner-driven. I paid for the Maven version, but all videos are free on this site.
Things we learned about LLMs in 2024. Excellent. Savor this.
Building effective agents. One of the rather wonderful things about the current state of AI is that many of the companies driving advancements share their research and experiences openly. This is an outstanding resource from Anthropic. Practical and accessible.
Factors Influencing Trust in Algorithmic Decision-Making. This was a final article that we got published at the end of the year (after much bouncing around between journals). My colleague Fernando Marmolejo-Ramos was lead author. Trust in algorithms (well, even before that, understanding when and how we are subject to algorithms in our daily lives) requires a strong focus on literacy: “policymakers should consider promoting statistical/AI literacy to address some of the complexities associated with trust in algorithms”
AI in its play era. Paul Fain and team offer a great overview of the education, work, and learning impacts of AI. (from August)
UpHop. AI and learning tools are starting to proliferate, and this is one of many. I find it interesting simply because it takes a traditional “click next” instructional model and inserts moments of AI assessment into the instructional flow.
Ilya Sutskever (formerly of OpenAI) offers a short look back at a paper that has stood the test of time, along with some thoughts on how pre-training will change and on the general advancement of LLMs.