News From The Server Room

5 May 2026

AI took my job as a coder. Then it made me a better software engineer.

Mentor Monologue

My first foray into LLM-assisted coding was adding Copilot autocomplete to my Neovim configuration. Soon, the spooky fun of "Huh, that's exactly what I was going to write!" turned into "Ah, get out of my way already!", and I turned it off again.

What brought me back was my surprisingly positive experience when I started experimenting with agent-supported programming. I suspect I was lucky with my timing and choice of model because I read about many rough first encounters with different models and tools. But something clicked for me, and I started to notice a pattern that I hadn't expected.

The pattern was this: the quality of our web application and infrastructure code, with its coding standards and test suites, made all the difference. This is actually the second time I've seen good coding standards pay off. Our codebase was already a few years old when we introduced linting, unit tests, and integration tests. Back then, it didn't take us long to notice how steadily our confidence in deploying code during the day grew, and how the number of nightly emergency fixes dropped. This time, I saw how the agent could orient itself more easily because our code followed consistent conventions. When it added or fixed code, our tests sent a clear message about whether it actually was an improvement. I got off to a good start thanks to the kind of "boring" engineering investment that pays dividends in ways you can't always predict.

Another key to deploying generated code with confidence was that I consistently reviewed it. Over the years, I've developed a real appreciation for peer review. While it can be tedious at times, I'm convinced that the effort of polishing any non-trivial change pays off in the long run. It does so not only in the operation and maintenance of the software, but also in the growth of our own skills and standards. You know that's important to me when you look at my tagline, "I help DevOps people grow". That's why I don't mind looking at an agent's merge request with the same gentle but critical eye as a colleague's.

From what I read online, this is the point where many other programmers notice something disturbing.

They start experiencing skill atrophy, loss of job satisfaction, and a feeling of disconnection from their own codebase. It's almost like the AI equivalent of the Monkey's Paw: it gives you what you want, and then takes away something dear to your heart.

Not for me, though. I don't feel like the LLM is pushing me aside. It's pushing me upwards, into a different role. It helps me grow.

You see, I've never been primarily a programmer. I learned it in the 1980s, but focused on system administration in the 90s. I switched to a management role in the 2000s, and I've had to be a jack of all trades since I started a hosting company in 2010. Our whole team is punching above its weight; we're less than a handful of people operating a business that runs thousands of websites at nearly 100% uptime.

This setting didn't leave me any time to practice "formal software engineering", let alone to learn what that would even comprise. I just built, tested, and deployed code that I had thought out more or less thoroughly, depending on the situation. There wasn't much of a formal process to it. We couldn't have afforded one anyway.

This has changed now that I have access to a sparring partner that's always available and has read almost everything about software development. I started out by describing a plan and asking what was missing. I got back long lists in response. Then I noticed how much more comprehensive the descriptions in the agent's merge requests were, providing better context to the reviewer. This led me to write better issues, too: ones that tell the reader not only what's broken, but also what the conditions and consequences are. If there's one thing a Large Language Model can do well, it's language. Especially compared to me as a non-native English speaker. Next, I set up a custom workflow where the agent workshops a new project with me, resulting in a thorough description of its motivations, goals, and, most importantly, its scope. You see, technical debt is a constant struggle for our team, and so is the temptation of a "good opportunity to tackle this old and only tenuously related issue". With agent support, I can efficiently define and document both what a project entails and what it doesn't, which helps with another of our eternal struggles: getting projects across the finish line.
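To make the "conditions and consequences" idea concrete, here's a minimal sketch of such an issue. The headings and the incident itself are purely my illustration, not a template I'm prescribing:

```markdown
# Checkout requests intermittently fail with HTTP 502

## What's broken
The checkout endpoint sporadically returns 502 instead of completing the order.

## Conditions
Only during the nightly backup window, when disk I/O on the database host peaks.

## Consequences
A small share of checkout attempts fail between 02:00 and 03:00 UTC;
manual retries succeed, but customers see an error page first.
```

An issue written like this tells a reviewer (human or agent) not just what to fix, but when the bug bites and why it matters.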

I had already dabbled in Architecture Decision Records (ADRs) to better document our technological path. After 16 years at the same company, you've learned to appreciate the value of organizational memory. Especially when it's been lost. And we all have an easy time maintaining documentation, right? Right. It doesn't come as a surprise that an LLM can really help with that. At one point, another acronym popped up: PRD. I asked about it, and the agent explained what a Product Requirements Document is, what differentiates it from an ADR, and how the two complement each other. Today, I started a new project that comes with a PRD and five ADRs from the get-go. For someone like me who strongly believes in the Zen of Python, "explicit over implicit", this new amount of clarity up front is an epiphany.
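In case you haven't met an ADR yet: the classic format goes back to Michael Nygard's template, and its skeleton is short enough to quote in full. The PostgreSQL decision below is a made-up example:

```markdown
# ADR 0001: Use PostgreSQL as the primary datastore

## Status
Accepted

## Context
What forces are at play? What constraints and requirements led to this decision?

## Decision
We will use PostgreSQL for all persistent application data.

## Consequences
What becomes easier or harder as a result, including the trade-offs we accept.
```

Each record captures one decision at one point in time; superseding decisions get a new ADR that links back, which is exactly what makes them work as organizational memory.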

I'm now coding less than before, but I don't feel disenfranchised by the tool. My work isn't to deliver code, or to deploy servers, or to type words into a blog editor. My job is to contribute to a healthy business. I'm enjoying being able to do that job more effectively.

Have I become an AI Booster? I don't think so. I'm constantly struggling to reconcile the many quandaries around AI usage with my family's addiction to food and shelter. My message is not "Use AI!", it's "When you have proper software engineering practices, everything works better" — even with LLMs. I don't believe this new era is the "Game Over" for developers. It's the push for a "Level Up".

What could that level-up look like for you? Hit Reply and let me know!

Recommended reading

  1. I built an AI SRE in 60mins, you should too by Goutham City
    Using Claude Code and Grafana's gcx CLI, the author built a learning AI SRE agent that investigates alerts, writes post-mortems, and updates its own runbooks after each incident.
  2. Achieving High Availability with distributed database on Kubernetes at Airbnb by Artem Danilov
    Airbnb runs a distributed SQL database across three Kubernetes clusters in separate AWS AZs, using custom operators and EBS volumes to achieve 99.95% availability at 3M QPS.
  3. Introducing the Zen of DevOps by Tibo Beijen
    A re-imagining of the aforementioned Zen of Python for operations work, offering timeless principles like "Favor changes that make you faster over those that slow you down."
  4. Physics of Data Centers in Space by Kristian Köhntopp
    An explanation of why dense modern GPUs are physically unusable in orbit due to heat dissipation limits, radiation-induced bit flips, and exponentially higher leakage currents.
  5. Kubernetes Is Overkill for 99% of Apps (We Run 500k Logs/Day on Docker Compose) by Polliog
    The author makes the case for boring tech, running a production observability platform on a single server with Docker Compose, 99.8% uptime, and a $14/month bill.

New cohort starting soon: Basic Linux System Administration

Want to get into DevOps/SRE? It all starts with Linux.

Sign-up is open for our next cohort starting on 11 May 2026. Learn all the important basics of Linux system administration with a friendly group of peers, over the span of 12 weeks. All course materials and weekly live sessions with me included.

Learn more and sign up on my website!


Until next time, take care!

Jochen, the Monospace Mentor

Don't miss what's next. Subscribe to News From The Server Room: