Programming Proverbs in 1975 and 2025
As developers, we tend to think that our best practices are universal laws that we've discovered and which get refined over time. That's true to an extent, but I think we underrate the ways our environment and technology shape what a best practice even is, or what the best use of a developer's time might be. Looking at the past can help us calibrate what is and is not part of our environment.
Recently, I was at a used book sale and I came across this:
It's a book called Programming Proverbs by Henry F. Ledgard. It was published in 1975, and apparently written even earlier than that. Naturally, I was intrigued, especially given the price (effectively zero: the sale was $1 for all the books you could fit in a bag). As far as I can tell, by the way, Ledgard is 82 and still alive. And he wrote a bunch of books, so I respect that.
It's paperback and square-bound, not very thick; it looks like a chapbook. Honestly, it looks like a lot of 70s counterculture media: kind of cheap, not super well typeset.
I thought I would read this book and do a big compare-and-contrast of how best practice advice has changed. As it happens, most of the book is worked-out examples, and there's really only one big way in which practices have changed that colors all the other advice in the book.
To explain that, I need to give you the context of 1975. I'm not quite old enough to have professional memories of 1975, so this is a mixture of research and extrapolation. (If you think this was an excuse to research what programming was like in 1975... I can't really argue with that.)
There is basically no such thing as a home computer programmer -- The Apple I is still two years away, and whatever the earlier generation of home hobbyists were doing is much more oriented around hardware.
If you are programming computers for a living, you are likely working for the government, for a university, a research lab, or for one of a small but growing number of mostly large companies that are using mainframes for their business. If you are working at a company, you are probably writing Fortran or COBOL; researchers used a wider variety of languages. (The book mostly uses Algol 60 -- I'm not completely sure how to reconcile that with the sources I found for programming language popularity.)
C technically exists, but is only a couple of years old and has not really escaped Bell Labs yet. Object Oriented Programming, by and large, does not exist. Even conditionals like "if" are a little new-fangled. The oft-referenced "Goto Considered Harmful" letter is only a few years in the past (1968).
The number of professional computer programmers was a fraction of what it is now. The statistics I looked up suggest that it was a tenth of the current number, but I'm highly dubious; that sounds way too high to me. I think the definition of what counts as a "programmer" has changed over time... my guess is that the number of people professionally coding the kinds of logic Ledgard is talking about has grown something like 1000x since 1975. (Note that a lot of the things he talks about are easily managed by spreadsheets now.)
There's no UI to speak of. You may not even have a screen. Emacs doesn't exist. Vi doesn't exist. There are editors, but they are line by line editors, and they don't really do much beyond read and save keystrokes. You might not even have a keyboard. Unless I missed it, Ledgard doesn't talk about what computers he's working on, but he does make references to "typing or punching" in the programs (meaning using punch cards), and there are a lot of references to reading data off of punch cards.
The problems you are solving are mostly batch processing. There's data, there's a known set of logic, the program ends and it produces output to a terminal or a teletype or something. Debugging could mean trying the entire program again. Not only does this potentially mean messing around with punch cards, but it's also very likely that you are sharing an oversubscribed computer and can't just run your program again; you need to schedule time.
It's not much of a stretch to say that literally everything about my day-to-day work as a coder is different from what Ledgard was writing about.
What follows is a mix of advice that could have been written yesterday crossed with advice that is extremely specific to that particular time and place. And yet the similarities of language and terminology kind of hide how different Ledgard's working day was from mine.
The book exists to advocate for what Ledgard calls "Top-Down" programming, and I was quite sure that I knew what he meant by that. I was so sure that I actually misread the text at first.
Here's what Ledgard means by Top-Down programming:
Step one is to have an exact problem definition. Ledgard stresses this point: "It is senseless to start any program without a clear understanding of the problem". In the part of the book that actually contains the programming proverbs, the very first proverb is to define problems exactly and completely.
Right off the bat you can see this is going to have problems being applied to current programming practice. Ledgard's process is almost by definition a waterfall process (oh -- the term "waterfall" hadn't been popularized yet). One of the salient features of attempts to deal with software process from about 1990 on was the understanding that it was not always possible or desirable to have an exact problem definition at the beginning of the process, but that you still needed to be able to build software. The subtitle of the original XP book, from 2000, is "Embrace Change".
But fine, the problems being dealt with here are both amenable to exact definitions and small enough that, if the problem changes, just redoing the code from scratch is viable. This is an ecosystem change: as computer programming has been asked to do more and more complex things, starting with an exact, permanent problem definition becomes less and less viable.
We then get a description of the top down method. I'm paraphrasing here, and I think I've got it, but I'm not 100% sure.
You start with what we would now call a pseudocode implementation of the entire program, I quote...
The programmer initially uses expressions... in English that are relevant to the problem solution, even though the expressions cannot be directly translated into the target language.
Then we go into a loop:
- Pick a part of the code that is still too abstract and refine it to the next level of abstraction down, postponing details to lower levels as needed.
- Ensure that the program is correct; the idea here is that individual sections can be written and validated independently of each other.
- Continue to refine and debug until you have completed the program.
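To make the loop concrete, here's my own sketch in Ruby (the book uses Algol 60, and this example is mine, not Ledgard's) of what one refinement pass might look like: an English placeholder gets replaced by real logic, while other placeholders are postponed to a later pass.

```ruby
# Stage 1: the whole program, as English pseudocode.
#   read the list of numbers
#   find the median of the numbers
#   print the median

# Stage 2: one refinement pass. "find the median" is now concrete;
# "read the list of numbers" is postponed to a lower level of detail.
def median(numbers)
  sorted = numbers.sort
  mid = sorted.length / 2
  if sorted.length.odd?
    sorted[mid]
  else
    (sorted[mid - 1] + sorted[mid]) / 2.0
  end
end
```

The key difference from modern habits is that Ledgard would keep iterating at the pseudocode stage, on paper, before anything was typed in.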
I think I've got this correct, but I'm not sure, even though the book explains it at least twice.
Let me put it this way -- looking at that set of steps with your 2025 programmer brain and knowing you start with pseudocode, when would you start writing actual code?
I'm going to guess that you had the same reaction I did. Cued by the term "debug" and the idea of ensuring correctness, I initially assumed that you'd convert the pseudocode to real code immediately. This makes it similar to a structure I've used and called "top-down" before: I write the highest-level function, which calls a bunch of other functions; then I write those functions, and so on.
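For what I mean by that structure, here's a minimal Ruby sketch (a hypothetical report generator of my own invention, not anything from the book): the top-level method is written first, calling methods that don't exist yet, and then each of those gets filled in.

```ruby
class Report
  def initialize(lines)
    @lines = lines
  end

  # The top-level method comes first, reading almost like pseudocode...
  def generate
    render(summarize(parse(@lines)))
  end

  # ...then each method it calls gets written in a later pass.
  def parse(lines)
    lines.map(&:to_i)
  end

  def summarize(values)
    { count: values.length, total: values.sum }
  end

  def render(summary)
    "#{summary[:count]} values, total #{summary[:total]}"
  end
end
```

The crucial point: in this style the top-level code is real, runnable code from very early on, which is exactly what Ledgard's process does not assume.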
But that's not the process as Ledgard presents it. In the examples in the book, the programmer goes through the entire process in pseudocode until the program is complete, and only then translates it to an actual programming language. In fact, in the examples in the book, the pseudocode is often translated into multiple languages, and the programmer picks which language to use after seeing the code in each.
And yes, this means that Ledgard expects the programmer to debug pseudocode. There's an explicit example in the book where pseudocode is debugged; it's something like a formal proof of correctness.
When I realized this, my brain exploded.
I started to think about how the superficial similarities between 1975 and 2025 made me think one thing was going on, but actually the differences in our environments mean that in some ways, Ledgard and I are doing extremely different things, both called "programming".
In Ledgard's world, computer time is expensive -- much more expensive than programmer time. So it makes sense for a programmer to do pencil-and-paper work to debug a program that doesn't even exist so that computer time is not wasted on successive improvements of code that doesn't work.
In my world, programmer time is much more expensive than computer time, so it makes sense for me to offload testing, syntax checks, and whatever to the computer.
I'm not sure when the inflection point happened between computer and developer time, but I'd guess it was about 1979, and that the first group to experience an environment where computer time was significantly less expensive than programmer time was the Xerox PARC team working with Xerox Stars -- there are accounts of that team feeling guilty about leaving their computers idle overnight.
The relative value of computer and programmer time has actually continued to change in my lifetime -- when I started coding in earnest, developer tools were one of the most resource-intensive programs you could run. This is no longer true. All kinds of nice little code-management features that are in basically any 2025 tool, like syntax coloring or anything an LSP server does, are things that would have been difficult or impossible when I started coding for real in the late 1980s (at least, not without creating your own operating system, like Smalltalk did).
My point is that the practices that made sense given the problems and cost structures in 1975 don't all make sense now.
My follow-up point is that, wherever LLMs and the like land, we are in the middle of a rapid change in the relative cost of different kinds of programmer tasks. A lot of boilerplate tasks that were relatively expensive pre-LLMs either are already, or shortly will be, significantly less expensive.
Even if you are -- like me -- a little dubious that the entire development stack is headed for a 10x or more speed improvement, it seems to me pretty likely that parts of the process that are boilerplate or amenable to "just good enough" code (like, potentially, testing) are already at or near this level of improvement.
I'm not sure I've really seen a good reckoning with what that cost/benefit change is going to mean for individual developer practice, or team developer practice.
For instance, it might once again be feasible to ask an LLM to render a problem in multiple languages and choose the best one based on performance or whatever. It might be feasible to generate the same code in multiple designs to see which one you like the best.
Where this goes is anybody's guess, and the effect on developers as a whole might be weirder than we think (for instance, programming might have Baumol effects, where the cost of human labor rises even when the productivity of the human labor does not). Not that I understand Baumol effects well enough to predict.
I'm glad I picked this book up. I really like the history of programming, and I did learn something, and not what I expected, which was great.
Dynamic Ruby is brought to you by Noel Rappin.
Comments and archive at noelrappin.com, or contact me at noelrap@ruby.social on Mastodon or @noelrappin.com on Bluesky.
To support this newsletter, subscribe by following one of these two links: