Beyond the Hype: Simon Willison and the New Discipline of Agentic Engineering
As AI changes software development, Simon Willison highlights a critical divide between the chaotic 'AI slop' of low-effort automation and the emerging, structured discipline of Agentic Engineering. This new paradigm treats AI agents as active participants in a rigorous, feedback-driven development process.
In the fast-evolving landscape of software development, a clear dichotomy has emerged, separating casual AI usage from professional, high-leverage workflows. At the center of this debate is Simon Willison, the veteran web developer and co-creator of the Django framework, who has become a leading voice in distinguishing between 'AI Slop'—low-effort, high-volume automated output—and the nascent, highly disciplined practice of 'Agentic Engineering.'
Defining the Divide
The industry has recently grappled with the term 'AI Slop,' a pejorative referring to the glut of low-quality, mass-produced digital content generated by AI that prioritizes volume over substance. In the coding sphere, this is often associated with 'vibe coding'—a loose approach where developers prompt LLMs to generate code without deep understanding, verification, or structured architectural intent. The results are often brittle, buggy, and prone to technical debt.
Conversely, Willison champions 'Agentic Engineering.' He defines this as a practice built on the core capability of code execution. Unlike simple LLM wrappers that merely suggest syntax, true coding agents—such as Claude Code or OpenAI Codex—can execute the code they write, observe the results, and iterate autonomously. This creates a feedback loop that transforms the AI from a glorified autocomplete into an active participant in a disciplined engineering process.
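The execute-observe-iterate capability can be illustrated with a minimal sketch: write a candidate program to disk, run it in a subprocess, and capture the outcome so failures can be fed back to the model as context for the next attempt. The function name and return shape here are illustrative, not any particular agent's API.

```python
import os
import subprocess
import sys
import tempfile


def run_candidate(code: str, timeout: int = 10) -> tuple[bool, str]:
    """Execute generated code in a subprocess and capture the result.

    Returns (success, combined output) so a driving loop can feed
    compiler errors or tracebacks back into the next prompt.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr
    finally:
        os.remove(path)


ok, output = run_candidate("print(2 + 2)")
```

Real coding agents add sandboxing, resource limits, and richer tool access on top of this primitive, but the core contract is the same: the environment, not the model, decides whether the code worked.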
Technical Deep Dive: The Agentic Loop
At the technical core of Agentic Engineering is the concept of a 'loop.' An agent receives a goal, uses a set of tools (compilers, test runners, git) to generate and execute code, and refines that code based on the feedback from the execution environment. Willison emphasizes that the skill lies not in generating the code itself, but in three critical human-led pillars:
- Goal Definition: Clearly articulating the problem and desired outcome.
- Tool Preparation: Equipping the agent with the right harnesses, documentation, and sandboxed environments to succeed.
- Verification: Serving as an architect who reviews, tests, and integrates the agent's work into a stable, maintainable codebase.
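The three pillars above can be sketched as a single driver loop. In this hypothetical sketch, `generate` stands in for a call to a coding model and `execute` for the prepared harness (test runner, compiler, sandbox); neither is a real agent API.

```python
from typing import Callable, Optional


def agent_loop(
    goal: str,
    generate: Callable[[str, str], str],
    execute: Callable[[str], tuple[bool, str]],
    max_attempts: int = 3,
) -> Optional[str]:
    """Drive the generate -> execute -> observe loop.

    The goal definition comes from the human; the execution harness
    supplies feedback; the loop ends when the harness passes or the
    attempt budget runs out, at which point human verification begins.
    """
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(goal, feedback)   # agent writes code
        passed, feedback = execute(candidate)  # environment responds
        if passed:
            return candidate                   # hand off for human review
    return None


# Toy harness: the "environment" accepts only the string "fixed".
attempts = iter(["broken", "fixed"])
result = agent_loop(
    "make the test pass",
    generate=lambda goal, fb: next(attempts),
    execute=lambda c: (c == "fixed", "" if c == "fixed" else "test failed"),
)
```

Note that the loop never declares success on its own authority: `execute` is the arbiter, which is why tool preparation (pillar two) matters as much as prompting, and why a passing run is the start of verification rather than the end of the process.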
This shift moves the developer from a 'coder' to an 'agent lead,' a role that requires a high degree of domain expertise, architectural thinking, and quality control. As Willison notes, 'writing code is cheap now,' but navigating the tradeoffs of software architecture remains a profoundly human activity.
Why Quality Control Matters
As the industry matures, the distinction between these approaches will likely determine long-term success. Teams relying on 'vibe coding' risk being overwhelmed by the maintenance burden of the 'slop' they have generated. In contrast, those adopting Agentic Engineering practices are building modular, test-driven systems that leverage AI's speed while maintaining human oversight. This transition redefines the developer's role: less manual typing, more management of complex systems of intelligent agents, and a more rigorous, not less rigorous, understanding of the underlying software stack.