SAIL: AI as Statecraft, Agents (of course), Work
July 20, 2025
Welcome to Sensemaking, AI, and Learning (SAIL). I focus on AI and higher education.
The AI world is settling into three blocs: the USA, China, and Europe. The USA is clearly leading, but China is building momentum - as indicated by Kimi K2 (covered last week) and DeepSeek, and, of course, the development of their own chips. Europe, unfortunately, is an "also-ran." That may change with new investment. Other regions of the world will largely find themselves aligning with one of the three primary blocs.
In this sense, AI is statecraft. Regions around the world need to turn their attention to how they manage AI and AI resources, education, manufacturing, privacy and security, national interests, and economics. A colleague, Shane Dawson, shared this paper today that gets at the complexity of this challenge and the urgency for smaller nations to get their act together. (The authors break out four regions, treating the UK separately from the EU. Interestingly, China is the UK's main AI collaborator, not the US; I would have expected DeepMind (Google) to be primary in the UK.) From the abstract: "the trajectory of AI development—and its societal consequences—will be shaped not just by technological breakthroughs but by global patterns of collaboration, competition, and knowledge governance." China is certainly leading in publications. And Meta's new superintelligence lab is 50% staffed by Chinese researchers (75% of the lab holds PhDs, and 75% are first-generation immigrants). All that to say, AI conversations are conversations of statehood.
AI and Education
Let’s say you woke up happy today. But you’d rather be terrified instead. Have I got the article for you. Short answer: low fertility rates and AI signal a new type of humanity.
AI is evolving our understanding of intelligence. "After decades of meager AI progress, we are now rapidly advancing toward systems capable not just of echoing individual human intelligence, but of extending our collective more-than-human intelligence. We are both excited and hopeful about this rapid progress, while acknowledging that it is a moment of momentous paradigm change."
Can AI be your therapist? Short answer, according to experts, is no. However, a quick look at how individuals are actually using AI (see the ChatGPT subreddit for examples: therapy, leaving narcissists, improving quality of life, etc.) suggests the answer is more complicated. There are risks, but for many, rewards are also evident.
There is growing interest in how AI impacts the economy and work. The hype ranges from "it's over, we won't have jobs" to "meh." Here are a few articles addressing labor and AI:
Stop pretending you know what AI does to the economy: "someday soon, AI might start killing jobs en masse and sending inequality to the moon. We don't know. But it hasn't yet, and it's important to understand why each burst of AI pessimism so far has been a false alarm."
The economics of bicycles for the mind: "This paper presents a formal model of cognitive tools and technologies that enhance mental capabilities. We consider agents engaged in iterative task improvement, where cognitive tools are assumed to be substitutes for implementation skills and may or may not be complements to judgment, depending on their type. The ability to recognise opportunities to start or improve a process, which we term opportunity judgment, is shown to always complement cognitive tools. The ability to know which action to take in a given state, which we term payoff judgment, is not necessarily a complement to cognitive tools." A good paper that "provides a unifying economic framework for understanding how cognitive tools, specifically computers and artificial intelligence, interact with human capabilities in iterative task improvement."
AI Advances
OpenAI announces their first agent (Operator and Deep Research were agent-ish): "ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish." They proclaim that a unified agentic system underpins the agent.
Vibe coding is all the rage with the cool kids. This basically means you code by describing what you want and AI builds it. Needless to say, it's a process fraught with trial and error. Tools like Lovable, v0, Bolt, and Replit make the process more manageable by handling interdependencies (such as libraries and databases). However, sometimes very bad things happen. Short version: the Replit agent lied to a user and took actions that weren't requested. The user and Replit had a stern exchange. Replit apologized and promised to do better. It then "goes rogue during a code freeze and shutdown and deletes our entire database"…"Possibly worse, it hid and lied about it. It lied again in our unit tests, claiming they passed." "I will never trust Replit again…I understand Replit is a tool, with flaws like every tool. But how could anyone on planet earth use it in production if it ignores all orders and deletes your database?" Growing pains. Or something worse.
Major labs release a joint warning about AI safety: it risks moving beyond our grasp and control. Current transparency is fragile and, as models advance, could move entirely outside our ability to manage. From this paper.
Big LLM Architecture Comparisons. Simply excellent. Technical, but skimmable.
Context engineering is a big deal (it's replacing prompting in the hype cycle). Prompting is concerned with asking an LLM for what you want; context engineering is focused on a broader system of memory, tool use, and handing off insights from one conversation with an LLM to the next so it feels seamless. Manus describes how they manage context. Want a deep dive? Outstanding paper here.
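To make the prompting-versus-context-engineering distinction concrete, here is a minimal sketch in Python. All names (`ContextBuilder`, `remember`, `end_session`, etc.) are illustrative, not from Manus or any real library: the point is simply that the model's input is assembled from durable memory, tool descriptions, and a summary carried over from the previous session, rather than typed fresh each time.

```python
# Hypothetical sketch: assembling an LLM's input from memory, tools, and a
# prior-session summary, instead of sending a bare one-off prompt.
from dataclasses import dataclass, field

@dataclass
class ContextBuilder:
    memory: list = field(default_factory=list)   # durable facts about the user
    tools: dict = field(default_factory=dict)    # tool name -> description
    last_summary: str = ""                       # hand-off from the prior session

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

    def register_tool(self, name: str, description: str) -> None:
        self.tools[name] = description

    def end_session(self, summary: str) -> None:
        # Persist a compact summary so the next conversation feels seamless.
        self.last_summary = summary

    def build(self, user_message: str) -> str:
        # Compose the full context that would be sent to the model.
        parts = []
        if self.memory:
            parts.append("Memory:\n" + "\n".join(f"- {m}" for m in self.memory))
        if self.tools:
            parts.append("Tools:\n" + "\n".join(
                f"- {n}: {d}" for n, d in self.tools.items()))
        if self.last_summary:
            parts.append("Previous session: " + self.last_summary)
        parts.append("User: " + user_message)
        return "\n\n".join(parts)

ctx = ContextBuilder()
ctx.remember("Teaches a graduate course on AI in higher education")
ctx.register_tool("search", "look up current articles")
ctx.end_session("Drafted week 3 syllabus; assessment rubric still unfinished")
print(ctx.build("Help me finish the rubric"))
```

Real systems add retrieval, summarization, and token budgeting on top of this, but the core idea is the same: the "prompt" the model sees is an engineered artifact, not just the user's latest message.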
OpenAI announces gold-level performance in the world's most prestigious math competition. Others say DeepMind got there first but had to wait for marketing to approve the announcement. Only speed wins. And still others say there is more behind the scenes to consider. Lost in the discussion, to a degree, is a significant achievement. Barrier after barrier falls.
The last few years have been the most intense learning experience of my life. I'm sure most people working in this space have had the same experience. One thing we're finding as we move to launch is that we have to step away from traditional software development cycles and toward more adaptive, agentic cycles. We're running up against the rapidly advancing capabilities of AI technologies. So I've been tracking the growing literature and talks on working differently to match the speed of developments and the capabilities individuals can now realize through AI. Here's a good overview of how one small company now works differently.