Alejandro's Eclectic Newsletter

January 19, 2025

EN 76: "Pairing with Claude"

Using LLMs while building a tiny programming language to enhance an internal tool

As surprising as it may sound, I’ve been writing a tiny language at work. It’s not the kind of thing I typically do, but the opportunity came up to investigate how to improve the UX of an internal tool, and having a code editor with a custom language was the ideal way to do it.

I’m not an expert in building programming languages or integrating web IDEs. On one hand, it’s been a bit scary; on the other, I’ve been enjoying the learning process a lot, and the feeling of being “in the zone” this week has been great. Learning about parsers and compilers had been on my list for a long while, which makes this research exciting: I can have my cake and eat it too.

The scary part is that I’m acutely aware that I'm not knowledgeable about the topic yet, which means that I’m bound to make plenty of mistakes, discover better ways to do things and change my mind—and the code—a dozen times. That’s also what makes it exciting, though. In a way, it’s like playing.

Here’s an example of that scary feeling of not knowing enough: I’ve already reached the point of considering my current approach flawed, and of envisioning better alternatives that will still be flawed. At the same time, my approach is more than enough for the use case and the time available: it doesn’t require writing a complex programming language or a language server.

This week of writing a parser for the language and integrating it with a code editor has also been an insightful exercise in how to use LLMs, and in discovering whether they could help me. The conclusion is that using an LLM has been a great help, and I’ve been more productive. At least, I have the feeling that it would’ve taken me longer to get to where I am without it.

My main approach to using the LLM was to treat it like a pairing session of sorts. Claude was my pairing partner: not Van Damme, but 3.5 Sonnet.

I’m not sure if GitHub Copilot influences it, but one of the first things I had to tell Claude was not to show me code and implementation details right away unless asked; it was too eager to do that. The second was to prompt it to treat our exchange like a pairing session: discussing approaches from a high-level perspective first, the plan, potential alternatives, and agreeing that we were going to move in small, atomic steps. I’m not interested in seeing a solution first. Thinking, designing and researching are the priority. For me, it was more valuable to walk through the problem, clarify issues, discover alternatives and have a back and forth, like a rubber duck that can talk back.

While the solution itself is not normally the goal, once I knew what I wanted, or there were things I had already written a few times, I got too lazy to write the code from scratch. In those situations, it was better to ask Claude to write a solution and either modify it myself, ask it to iterate on the solution based on my proposed changes, or both.

In working with LLMs, there has to be a balance. Part of the point of writing a custom language was for me to learn, and copying indiscriminately robs me of the opportunity to do so. Even worse, it might give me a false sense of competence. When using code from Claude that I didn’t write from scratch, I made sure to understand it, ask for clarifications, and explain back what I understood from the code or from what it said. I also shared my line of reasoning and what I would do for the next step or feature, to see what the AI thought about it.

Where the LLMs clearly shine is in summarising and synthesising information and finding relevant answers. For this project, I’ve been all over the place:

  • Read about parsers and lexers

  • Learnt how to use a lexer and parser toolkit and ended up migrating to another (kudos to chevrotain), as sketched right after this list

  • Explored the integration with the code editor

  • Looked at some dubious webpack plugin code and translated it to rollup
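
To give a flavour of the chevrotain part, here’s a minimal sketch of a lexer and a parser for a toy expression grammar. It’s purely illustrative: the tokens and the single rule are made up for this example and are not the grammar of the language I’m actually building.

    import { createToken, Lexer, CstParser } from "chevrotain";

    // Illustrative tokens: an integer literal, a plus sign, and skipped whitespace
    const Integer = createToken({ name: "Integer", pattern: /\d+/ });
    const Plus = createToken({ name: "Plus", pattern: /\+/ });
    const WhiteSpace = createToken({
      name: "WhiteSpace",
      pattern: /\s+/,
      group: Lexer.SKIPPED,
    });

    const allTokens = [WhiteSpace, Integer, Plus];
    const toyLexer = new Lexer(allTokens);

    // A parser with a single rule: Integer ("+" Integer)*
    class ToyParser extends CstParser {
      constructor() {
        super(allTokens);
        this.performSelfAnalysis();
      }

      public expression = this.RULE("expression", () => {
        this.CONSUME(Integer);
        this.MANY(() => {
          this.CONSUME(Plus);
          this.CONSUME2(Integer);
        });
      });
    }

    // Lex, then parse, then check for errors
    const parser = new ToyParser();
    const lexResult = toyLexer.tokenize("1 + 2 + 3");
    parser.input = lexResult.tokens;
    const cst = parser.expression();
    console.log(cst.name);        // "expression"
    console.log(parser.errors);   // [] when the input parses cleanly

Chevrotain builds the parser at runtime from plain code, so there is no separate grammar file or code generation step to wire into the build.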

And probably a few more things I don’t remember. In many of these areas, having Claude help with summarising, showing me various ways to glue things together or explaining concepts was great. There were times when I was debugging, found some convoluted information, and asked the LLM to tell me what it made of it based on my code, and what suggestions it had. It didn’t work all the time, but when it did, it was fantastic.
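
As for the editor integration mentioned above, I won’t pretend the snippet below is what I actually shipped; it’s a sketch assuming something like CodeMirror 6, and parseToy is a hypothetical wrapper around the toy lexer and parser from the previous example. The gist is simply turning parse errors into editor diagnostics:

    import { linter, Diagnostic } from "@codemirror/lint";

    // Hypothetical helper wrapping the toy lexer/parser: returns errors with
    // character offsets into the source string.
    import { parseToy } from "./toy-language";

    // A CodeMirror 6 lint source: re-parse the document and map parser errors
    // to editor diagnostics so they show up as squiggles in the editor.
    const toyLinter = linter((view) => {
      const diagnostics: Diagnostic[] = [];
      const { errors } = parseToy(view.state.doc.toString());
      for (const error of errors) {
        diagnostics.push({
          from: error.start, // offsets assumed to be provided by the parser
          to: error.end,
          severity: "error",
          message: error.message,
        });
      }
      return diagnostics;
    });

    // toyLinter then goes into the editor's list of extensions.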

These AI assistants aren’t perfect (let’s not get into the ethical and climate implications here): sometimes Claude gave me ridiculous or plain wrong code or suggestions, lost track of the context, or went frustratingly in circles, but my overall experience was positive. If anything, I realised that I chatted far more than I wrote lines of code, and that the autocomplete got rid of a lot of repetitive typing. It felt like a different way of coding: more high level, more conversational.

Interesting links

  • Delete that Test Column (Katja Obring). In the presentation, Katja talks about removing the test column from the board. Having a test column is such a bizarre thing; shouldn’t quality be embedded in every step of the process?

  • How I program with LLMs (David Crawshaw). David describes ways to program with LLMs that are similar to what I’ve found this past week. Highly recommended.

  • The problem with growth: why everything is failing now (Joanna Weber). Great article. “Something that provides a win-win-win for all the stakeholders in the equation, rather than ruthlessly exploiting everyone and making them miserable.”
