July 23, 2025, 1 p.m.

How I Use AI as a Personal Stack Overflow

Sustainable Development and More


If you haven't checked out my new Ruby Web Framework, Brut, I did a 15ish minute screencast where I make a blog from scratch. Check it out on PeerTube, read the source, or, if you must, watch on YouTube.

And now, what I hope is perhaps the only post I make about AI for the time being. I'm not a booster or True Believer, and I'd be happy if this entire ecosystem were shot into the sun, but it's here and part of our lives, and I feel like I need to understand it by using it. This is where I'm at, and you might find it useful.

AI is Just as "Accurate" as Stack Overflow, but More Ergonomic

I've had mixed (though mostly negative) results with "AI agents" that take instructions and make code changes on my behalf. I've had more success using ChatGPT as my own personal Stack Overflow. It gives precise answers to my specific issues, which I can then triangulate against real documentation and testing to avoid wasting too much time when it's wrong.

It's been working great for when I know what I'm doing and need API reminders or syntax help. But it's also been extremely helpful when I don't know what I'm doing and need to learn something that doesn't have great docs or community.

When I Know What I'm Doing

When I know what I'm doing, like building a web form or a Bash script, I typically need help with API specifics or syntax. Pre-Internet, I would read man pages or documentation. Pre-AI, I'd also search the web, and usually Stack Overflow would produce an answer I could adapt.

With ChatGPT (or whatever), I can ask it instead, and it will give me a very specific answer to my question that I almost never have to adapt. If I include details like paths or variable names, those will usually be in the answer it gives me.

Like Stack Overflow, it gives wrong or outdated information on occasion, but since in this case I generally know what I'm doing, I can quickly tell that it's not what I need and either ask another way or use tried and true methods like web search and documentation. Of course, I can always try what it suggests in a safe way.

Even though ChatGPT is not 100% correct, the interaction is far better than web search or skimming documentation. Being able to give it my exact issues, exact error messages, or exact paths is hugely helpful in getting a useful answer back, including occasions when I don't notice something important but ChatGPT picks up on it.

Some of this could be accomplished with editor auto-complete, i.e. GitHub Copilot. I find Copilot's suggestions on this sort of thing generally wrong or unhelpful. It also doesn't provide any supplemental information about why it's suggesting what it is, so I end up having to take it or leave it. I usually leave it. ChatGPT's "justifications" when it's wrong are often useful in finding the right answer.

Now, when I don't know what I'm doing, using ChatGPT as a personal Stack Overflow still works, but the wrong answers are more costly, so more care must be taken.

When I Don't Know What I'm Doing

Pre-Internet, I read docs. Post-Internet, I read docs, source code, Stack Overflow, forums, GitHub Issues, etc. It works, but you quickly realize that certain technologies just aren't widely adopted enough for there to be a wide variety of help.

In these cases, ChatGPT will, of course, give confident answers to any question, but you must triangulate what it says against the documentation and by actually trying it. Having documentation to back up what it's saying is critical, but even with that extra step, it's faster than trying to figure out a rarely-used technology on your own.

A great example is OpenTelemetry (OTel), an open standard for application observability and runtime instrumentation. I needed to get OTel working in Brut, the new Ruby Web Framework I'm building.

OTel's client code is highly convoluted and abstracted, and their documentation is effectively non-existent (meaning that there exists a thing called documentation but it is not helpful in explaining how to use the software). There also isn't a lot of chatter online about integrating OTel into a Ruby web framework - most people consume OTel from their framework or use a wrapper API.

After banging my head against the wall, I relented and asked ChatGPT. It was surprisingly helpful! It was definitely working off of some outdated APIs, and got subtle details wrong (like suggesting symbols instead of strings), but it was still something to go on!

While I'm no expert in OTel, I was able to solve my problem. In light of the code ChatGPT helped me write, I was able to revisit the documentation and code and verify that what I was doing was correct and part of the public API.

There is a special form of not knowing what I'm doing: when I strongly suspect there is a solution to my problem, but cannot figure out how to describe it well enough to get Google or Kagi to return useful results.

When I Know There's a Solution

I needed to prevent double-tap to zoom on mobile browsers without sacrificing pinch to zoom or other accessibility features. Searching for this produced a lot of useless or off-topic results. I have a "project" in ChatGPT for this app I'm building, so when I asked ChatGPT directly how to do what I needed, it could include that context.

It led me to the touch-action CSS property, advising the use of the manipulation value. This sounded absolutely made up or, at best, compatible only with Android.

But, sure enough, as corroborated by both MDN and caniuse.com, it is, in fact, part of Baseline (i.e. widely supported) and does exactly what I need.
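For reference, the fix amounts to a one-line declaration (the selector here is illustrative; you'd scope it to whatever elements your app needs):

```css
/* Keep pinch-to-zoom and panning, but disable double-tap-to-zoom
   (and the associated tap delay) on interactive elements. */
button, a {
  touch-action: manipulation;
}
```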

I have countless examples of it being a far, far better search engine than Kagi or Google, even when accounting for its inaccuracies. I always triangulate the results with official documentation or source code, even when something works.

Of course, what's going to happen when it runs out of actual Stack Overflow answers to train on?

Addendum: Coding Agents Need Some Work

I did try using Cursor and OpenAI's Codex, which ostensibly allow you to write out what you want done and the AI Agent will do it for you. I find these tools either just bad, or requiring so much up-front specification that they're a net time loss.

  • I used Cursor to convert my two Brut apps from ERB to Phlex, when I decided to go with Phlex. It worked and was way faster than me doing it. This got my hopes up. They were quickly dashed.
  • I then asked Cursor to help add Twilio SMS support, which did not work well at all. I have done this myself before and had a way I wanted it done, and I ended up having to specify so much detail that it would've been easier for me to just do it.
  • I cancelled Cursor after asking it to help write a command line tool for Brut using Ruby's OptionParser. I had specified coding style and other rules and it just continually ignored them, causing me to rewrite a lot of what it had done. I have a particular way I want code written, which is not always idiomatic Ruby, so I guess it just couldn't do it.
  • I tried OpenAI's Codex to help with something extremely minor and was amused to watch it use all the tools I've used over the years to write code without an IDE: grep, sed, find, etc. Sadly, it was even worse than Cursor (though it was command-line based and didn't force me to use a garbage IDE). It proposed a file change that I accepted, but then it simply wouldn't apply the change and exited. Repeated attempts yielded the same result - a first for AI, I guess?

I do plan to try Claude Code, but I'm just not used to specifying what I want to do in such detail, even when working on a team. I always try to work with and hire people that can quickly understand context and have at least some intuition as to what to do and how to do it.

Even with junior engineers who need to spend time learning, it's far more effective (in my experience) for them to take a stab at something and then have a discussion about it, versus me over-explaining everything that needs to be done and exactly how.

I did find this Mastodon thread by Nate Berkopec interesting, as he described a regimented process for including AI Agents in your daily workflow. To me, this sounds absolutely dreadful and perhaps even worse than having to do 100% pair programming. Of course, even with guardrails to prevent the AI from doing things you don't want, life finds a way to delete your production database.


Unless otherwise noted, my emails were written entirely by me without any assistance from a generative AI.

You just read issue #20 of Sustainable Development and More. You can also browse the full archives of this newsletter.

