Reader Mailbag: June Crunch Edition
Hey everyone!
We gotta start with some housekeeping. First, you might have noticed that the newsletter's email address has changed. Several people informed me that Gmail was sending the newsletter to spam, and Buttondown suggested I send from a custom domain to fix it. There shouldn't be any changes needed on your end, though if you have message filters you might need to update them. And you can still email support@buttondown.email if you see any issues.
Second, registration for the TLA+ conference is now available! Register here. I'm teaching this year's TLA+ preconf workshop with an attendee cap of thirty. For the record, my usual cap is four. God help me.
Finally, this is the last weekly newsletter until July. I'm teaching a TLA+ workshop next week, and after that I've got to go all in on the learntla June deadline. I might do an off-the-cuff newsletter if something's really stuck in my head, but don't expect anything. By July we'll be back to the usual at-least-weekly schedule.
For the last newsletter in a while I wanted to do something interesting. Then, after several hours of getting two paragraphs into an interesting idea and abandoning it, I gave up and did a mailbag instead. It still took me three days; ugh, I've had brain fog all week. The first word of each question links to the asker.
Mailbag time!
Will the occupation of "programmer" still pretty much exist in 10 years or will it largely be replaced by asking an AI to do a task?
Instead of answering this question I'm gonna tell a completely unrelated story. Last year I got really into "GAN art", which is a form of AI-generated art. It's extremely simple: you give the AI a text prompt and it spits out an image. Here's what one generator does with the prompt "the gardens of paradise by Gerardo Dottori, oil on canvas":
Pretty cool stuff! I ended up making my own generator, the art machine, which is designed to be as easy as possible for nonprogrammers to use. I think there are now even easier options available, though.
Anyway, the stuff I can create is pretty cool, but it doesn't hold a candle to the real GAN jockeys. People like @rivershavewings. Here's what she got with the same prompt:
The art machine has four parameters that affect the image: prompt, seed, "flavor", and "weirdness". One generator popular among experts, Disco Diffusion, has over 70. You don't need a lot of skill to get something good from GAN art. But you need to deeply understand the AI and know how to set it up just right if you want to get something spectacular out.
AIs might be good enough to do end-to-end programming in ten years, but if they are, they'll only output good code when controlled by a skilled enough AI speaker. And developing that skill could be as difficult as learning to code well.
There's some fairly solid research and evidence towards "being able to revert changes" + "make changes quickly" being much more effective than improving your test suite (beyond the first 20% of effort). How do you make the argument for investing in rigorous testing, knowing that?
I'm going to reject the original premise and say we don't have solid research and evidence towards this. The usual source cited for this is Accelerate, but I don't think that argues that being able to revert changes is more effective than improving your testing. Also, to my understanding, Accelerate is based entirely on self-reported surveys, which can be very useful but have methodological limitations and need to be complemented with other forms of research. So I'd be skeptical of claims about the research being "solid" vs just "suggestive".
(I should clarify that I haven't read Accelerate yet, just articles about it.)
I think this is all tangential, though, because most people aren't going to decide between "investing in CI" and "investing in testing". It's more likely they'll use "CI is better than testing" as an excuse not to invest in testing, and then they won't ever get around to investing in CI, either. All I can do is make people enthusiastic about investing in testing. If they genuinely don't want to test, they'll always find another excuse.
Are comparisons like Alloy vs TLA+ helpful for beginners to get an understanding of the landscape or do they tend to confuse?
Turning it around: is comparing programming languages helpful for beginners? Depends on what you're comparing and why. Telling people that C# is object-oriented while Haskell is functional is worse than useless: it makes them grapple with complicated topics well before they're ready, and is bound to leave them more confused than they started. But you can talk about things at a high conceptual level: C# was designed by Microsoft for corporate and enterprise software; Haskell was originally a research language, and a lot of the ideas it explored turned out to be useful for all kinds of software, so people use it for business code too.
(Neither of these is very accurate, but beginners don't need accuracy, they need simple mental models they can easily build on. They'll learn the nuance in due time.)
What's useful for beginners to know about TLA+ vs Alloy? Probably their strong points and differences in tooling; that's the main stuff. Something that'd help them pick which one to learn.
If they're already learning one, I'd speculate it's more useful to do a short survey of other FMs, just for enrichment's sake. So if they're learning Alloy, you show them a little bit of TLA+, but also a bit of Spin, Event-B, PRISM, and mCRL2, or whatever you have the capacity to do. Something that helps them understand the broader context Alloy lives in.
Why do you think literate programming isn't widely used?
I don't know why it isn't widely used, but I have three predictions for contributing factors. First, literate programming isn't tool-composable: you can't easily integrate it with your normal workflow tools. Without tool-composability you have a lot of extra friction, which will turn people away from using it.
Second, it seems more suited for complete self-contained code than code under constant modification by different people, which describes the majority of production software.
Third, it involves a lot of extra prose writing, and getting software engineers to write prose is like pulling teeth. There are many devs who are morally opposed to writing comments, for Christ's sake.
Everyone says that learning a new language will help devs level up their skills, but that's very time-consuming (and is it always true?). In contrast, something like statecharts provides benefits and can be learned in an hour. What's in the middle? A skill that takes 10 hours to learn?
I think the best "ten-hour" skills come from learning your tools better. Some example questions (a couple of these are sketched after the list):
- How do you grep for 4-digit numbers in a document?
- What are some of the advanced things you can do with `git log`?
- How does your language's debugger work? What can you do with it?
- In your shell language, how do you send stdout to both the terminal output and a file?
- How do you add tab-autocompletion to your CLI?
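To make a couple of these concrete, here's a rough sketch of the kind of answers I have in mind, assuming GNU grep, git, and a POSIX-ish shell (the file and script names are just placeholders):

```bash
# Grep for 4-digit numbers: -E enables extended regexes, \b (a GNU grep
# extension) keeps longer numbers like 12345 from matching.
grep -E '\b[0-9]{4}\b' document.txt

# One of git log's lesser-known tricks: -S ("pickaxe") shows only commits
# whose diffs add or remove the given string.
git log -S parse_config --oneline

# Send stdout to both the terminal and a file with tee.
./run_tests.sh | tee test-output.log
```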
If I had to pick one specific type of tooling, I'd say automation: shell scripts and the like. That saves you time in the long run, which you can reinvest back into learning more stuff.
Why is there repeated discourse around the "right" way to do software testing? Why does there have to be a "right" way, instead of a broad spectrum of valuable approaches that are best when combined?
Charitable answer: saying "use a broad spectrum of techniques" honestly isn't that helpful for most people, since it doesn't give guidance. It's also a lot easier to teach one technique well than four techniques sorta-okay, and usually more useful for the student. So if you pick a "right" way to do software testing and get really good at it, you'll be a lot better at testing than most people. You'll be worse than the people who do several approaches really well, but they paid an opportunity cost that you don't have to.
Uncharitable answer: Tribalism.
Metagaming answer: "what's the broad spectrum of valuable approaches" doesn't trigger as much discussion as "what's the right way to do software testing", so most of the discussion you do find will be biased towards the latter.
Happy June everyone, and I'll touch base again once learntla is up!
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.
My new book, Logic for Programmers, is now in early access! Get it here.