SAIL: Claude Code, Obsidian, Robots
July 27, 2025
Welcome to Sensemaking, AI, and Learning (SAIL).
There are moments of change that are qualitative in nature, not merely an advancement of existing trajectories. Key to any technology transition is the way in which it integrates into the daily life and workflow of users. Going back about two decades, the capabilities of what became Facebook were readily available to anyone. However, the standardized social profile that underpins Twitter and Facebook was not yet present (MySpace is a great example of the un-moderated look/feel of the era). Blogs, Flickr, Delicious, wikis, etc. all required a bit of extra time and effort to coordinate. Facebook made it easy to be online.
Currently, AI is a chaotic space. We’re still going to multiple tools that are somewhat disconnected from our daily lives. For AI to be impactful in daily life and workflow, it must live in our spaces or match our habits. Instead of going to ChatGPT, it should be in the space where I already work (Apple tried to make some progress here with its partnership announcements last December, but nothing qualitatively different has happened on-device for users). Claude Code is interesting. It’s been a slow burn as it has come up in more and more circles. Basically, it’s a CLI that you access through your terminal. It’s surprisingly impressive (and will have a larger audience once it’s in an app). However, it becomes genuinely jaw-dropping when combined with other tools such as Obsidian (open-ish note-taking software). Here is an excellent video that details the setup process and leads right into Obsidian. It’s beginner accessible. Your future self will thank you.
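To make the CLI bit concrete, here’s a minimal sketch, assuming Claude Code is installed and on your PATH. Running it with -p (print mode) sends a single prompt non-interactively, and setting the working directory to your Obsidian vault lets it read the Markdown notes there. The vault path is a hypothetical placeholder.

```python
# Minimal sketch: invoke Claude Code non-interactively against an Obsidian vault.
# Assumes the `claude` CLI is installed; the vault path is a placeholder.
import subprocess

result = subprocess.run(
    ["claude", "-p", "Summarize the notes I added this week"],  # -p = print mode
    cwd="/path/to/ObsidianVault",  # run inside the vault so the notes are in scope
    capture_output=True,
    text=True,
)
print(result.stdout)
```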
AI and Learning
Why do multi-agent systems fail? In the ongoing theme of agents, here’s an overview of common failure modes. I think it’s reasonable to expect agents in all aspects of education, and it’s the area where universities can most readily start taking ownership of AI use because agents are easy-ish to get started with. Figure 2 presents a good taxonomy of why multi-agent systems fail. (The paper is from April.)
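As a toy illustration of one failure class from that taxonomy (an ambiguous inter-agent handoff with no verification step), here’s a hedged sketch with no LLM calls; the agent names and logic are illustrative, not from the paper.

```python
# Toy sketch of one multi-agent failure mode: an ambiguous handoff that no
# verifier ever checks. All names and logic are illustrative.

def planner(task: str) -> str:
    # Ambiguous spec: "recent" is never defined.
    return f"Collect recent papers about: {task}"

def executor(spec: str) -> list[str]:
    # Silently interprets "recent" as "last 30 days" -- a guess, not a decision.
    return [f"{spec} (filtered to the last 30 days)"]

def pipeline(task: str) -> list[str]:
    spec = planner(task)
    results = executor(spec)
    # Failure point: no verifier agent checks the results against the user's
    # original intent, so the silent guess propagates unchecked.
    return results

print(pipeline("multi-agent system failures"))
```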
AI and Higher Education: An Impending Collapse. “If students learn how to use AI to complete assignments and faculty use AI to design courses, assignments, and grade student work, then what is the value of higher education? How long until people dismiss the degree as an absurdly overpriced piece of paper?” Yup.
AI doesn’t understand, it just predicts. Eventually the philosophy of a system is displaced by the use of a system (it’s no longer about the argument over what the world should be and how; it’s about what I can do with a thing that helps me). David Wiley says it well: “If a model can make accurate predictions with a high degree of consistency and reliability, does that mean it understands? I don’t know. But when a person can make accurate predictions with a high degree of consistency and reliability, we award them a diploma and certify their understanding to the world.”
As the AI tooling and overall product ecosystem improves, universities will have strong incentives to run/deploy their own models (at least on highly important/sensitive data). Here’s a short guide on setting up your own local LLM. LM Studio is my current preferred on-device LLM tool. Ollama is good too. If you just want to play with multiple LLMs to see how they respond to prompts, Chorus is good.
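Once a local server is running, talking to it takes a few lines of code. A minimal sketch, assuming LM Studio (or Ollama) is serving its OpenAI-compatible endpoint on the default port with a model already loaded; the model id is a placeholder for whatever your server reports.

```python
# Minimal sketch: query a local LLM through the OpenAI-compatible API that
# LM Studio and Ollama both expose. Port and model id depend on your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio default; Ollama uses 11434
    api_key="not-needed",  # local servers ignore the key, but the client requires one
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder: use the id your local server lists
    messages=[{"role": "user", "content": "Summarize this syllabus in three bullets."}],
)
print(resp.choices[0].message.content)
```

The same snippet works against any provider that speaks the OpenAI API, which is part of why local-first setups are so easy to swap in.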
How do diffusion models work (think images, video)? Fantastic and well worth your time.
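The core idea fits in a few lines: gradually add Gaussian noise to data over many steps, then train a model to undo it. Here’s a minimal sketch of the forward (noising) process using the standard DDPM formulation; the schedule values are typical defaults, not from the linked video.

```python
# Forward diffusion (DDPM): mix data with Gaussian noise over T steps.
# A model is then trained to predict the noise; generation runs the chain
# backward, denoising pure noise step by step into an image.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # noise schedule (typical defaults)
alpha_bar = np.cumprod(1.0 - betas)     # cumulative fraction of signal kept

def noisy_sample(x0, t, rng):
    """Sample x_t ~ N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

rng = np.random.default_rng(0)
x0 = np.zeros((8, 8))                   # stand-in for a normalized image
x_t, eps = noisy_sample(x0, t=500)      # midway through the chain; mostly noise by now
```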
AI and Technology
Robots are getting…cheap-ish. Unitree launched a $6k humanoid. Apparently it does backflips. Not sure what else.
China is on a different path. “China and the US are not running the same race. Deployment is China’s dividend, and destiny is America’s dream. Each is chasing what it values most. Chinese companies integrate open-source models into daily life because speed-to-market incentives pay off fastest at the application layer; Silicon Valley pours capital into ever-larger proprietary models, hoping to reach AGI first – whatever that means at this point.”
AI regulation is a huge area of focus. Some LLM providers are attempting to position responsibility and safety as selling features. Anthropic is one, signing on to the EU’s AI Code of Practice, which “advances the principles of transparency, safety and accountability.”
AI Model performance. I’ve shared this before (and will share it again as the landscape changes). China is winning the openness game; the USA is still the center of top-end performance.
Alibaba drops smart glasses. We’re very interested in glasses as a learning tool. They’re not yet at a functionally deployable state, but I’m confident they will be by mid-2026, especially with so many entrants in the space (Snap, Meta, and Gemini, in addition to Alibaba).
Kimi K2 is the most important model of the year. I keep coming back to Kimi since it’s under-covered, high performing, and open. A good overview. The article is a bit heavy on GIFs but offers a helpful overview of Kimi’s key strengths. The agentic discussion is of most relevance for education.
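If you want to poke at Kimi K2 yourself, the weights are open and several hosts serve it through OpenAI-compatible APIs. A hedged sketch: the base URL and model id below are assumptions about one such aggregator; check your provider’s docs for the exact values.

```python
# Hedged sketch: call Kimi K2 through an OpenAI-compatible provider.
# The base_url and model id are assumptions; substitute your provider's values.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed aggregator endpoint
    api_key="YOUR_KEY",
)

resp = client.chat.completions.create(
    model="moonshotai/kimi-k2",  # assumed model id on this provider
    messages=[{"role": "user", "content": "Outline an agentic-AI classroom activity."}],
)
print(resp.choices[0].message.content)
```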