What Changes When The UI Is Language?
Kicking off my newsletter, I'm diving into AI's impact on expertise and why it's worth simplifying complex concepts.
Hello all,
Welcome to my inaugural newsletter! This will primarily serve as a monthly capsule of recent writing and stray updates regarding the worlds of AI, data, and geo.
The last month has had me thinking a lot about the edges of LLMs: how people interact with them, and how they change our relationship with computing. I'm fascinated by what happens when the computing interface shifts to natural language – how does this change who holds the power, who creates the value, and how we understand these tools?
Recent Writing
The Dynamic Between Domain Experts & Developers Has Shifted: When the interface is natural language, domain experts are the differentiators, not programmers. How might this change business dynamics?
The first generation of AI-powered products (often called “AI wrapper” apps, because they “just” wrap an LLM API) was quickly brought to market by small teams of engineers picking off the low-hanging problems. But today, I’m seeing teams of domain experts wading into the field, hiring a programmer or two to handle the implementation while the experts themselves provide the prompts, data labeling, and evaluations.
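To make “wrapper” concrete: the core of many of these apps is little more than a domain-specific prompt around an API call. Here's a minimal sketch in Python using OpenAI's client – the model name, prompt, and contract-review use case are placeholders of my own, not any particular product:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "product" is essentially a domain-specific prompt around a model call.
SYSTEM_PROMPT = "You are an expert contract reviewer. Flag risky clauses."

def review(document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

print(review("The vendor may modify pricing at any time without notice."))
```

The engineering here is thin; the value sits in the system prompt and in evaluating the outputs – which is exactly where the domain experts come in.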
AI Chatbots Are Like Observational Comics: They’re incredibly good at creating authority through performance, but the trick fails when you’re an expert on the topic at hand. Noticing and remembering this helps you keep the guardrails up when using LLMs.
Overcoming Bad Prompts With Help From LLMs: When the interface is natural language, UI design ends up developing new tricks. In this piece we explore two features – one from OpenAI and one from Anthropic – that demonstrate how UI design can mitigate our bad prompts.
This Month's Explainer
MCPs are APIs for LLMs: Everyone is talking about MCPs without explaining what they are or how you can try them today.
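If you want the one-code-block version of that claim: an MCP server exposes tools to an LLM much the way a web service exposes endpoints. Here's a minimal sketch using the official MCP Python SDK – the weather tool is a made-up example, hard-coded for illustration:

```python
from mcp.server.fastmcp import FastMCP

# An MCP server is, in effect, an API the model can discover and call.
mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a forecast for a city (hard-coded here for illustration)."""
    return f"Forecast for {city}: sunny, 72°F"

if __name__ == "__main__":
    mcp.run()  # speaks the MCP protocol over stdio by default
```

Point an MCP-aware client (Claude Desktop, for instance) at this script and the model can list and call `get_forecast` on its own – the explainer walks through the details.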
Why I Write Explainers
So much of AI coverage avoids simple explanations. I chalk this up to two reasons.
First, this stuff is confusing and moves fast. Most people don't have the baseline of technical knowledge to efficiently get up to speed on a new nuance. Diving into an AI topic requires navigating a spiraling, fractal rabbit hole. It's easier to hit the headline, trust the press release or paper, and hand-wave the rest.
Second, people are incentivized to make this stuff sound complex or magical. When you understand something, it loses its aura. And auras are very useful. They make one's expertise more valuable. And they let you project any number of your audience's hopes and dreams onto the fuzzy mirage. (I wrote about this in January, showing how people used DeepSeek's mysterious power as evidence for whatever desires or fears they harbored.)
As a result, casual users of AI end up developing their own, inaccurate mental models for how these tools work. And when there's a disconnect between how we think our tools work and how they actually work, we risk inefficiency and misuse.
I'm going to try to publish an explainer once a month, aimed at non-technical audiences, in a small effort to close the gap between how we think AI works and how it actually works.
Past explainers include:
- A Gentle Intro to Running a Local LLM
- On Synthetic Data: How It's Improving & Shaping LLMs
- The 3 AI Use Cases: Gods, Interns, and Cogs
On deck is a reasoning model explainer. There's plenty of noise there, and not much clarity around what we mean when we say these models "think".
If there's a topic or question you'd like to understand better, or one you see plenty of confusion around, please shoot me a note. Gotta fill the backlog...
Thanks for signing up to my newsletter. I'll be sending it out roughly once a month, linking to recent writings and providing a bit of context.
If there's something you'd recommend or you have any feedback to share, don't hesitate to reach out. This is my first missive, after all.
Thanks again for joining the conversation.