Paradigms succeed when you can strip them for parts
On the value of "scavenging" for paradigm adoption
I'm speaking at DDD Europe about Empirical Software Engineering!1 I have complicated thoughts about ESE and foolishly decided to update my talk to cover studies on DDD, so I'm going to be spending a lot of time doing research. Newsletters for the next few weeks may be light.
The other day I was catching myself up on the recent ABC conjecture drama (if you know you know) and got reminded of this old comment:
The methods in the papers in question can be used relatively quickly to obtain new non-trivial results of interest (or even a new proof of an existing non-trivial result) in an existing field. In the case of Perelman’s work, already by the fifth page of the first paper Perelman had a novel interpretation of Ricci flow as a gradient flow which looked very promising, and by the seventh page he had used this interpretation to establish a “no breathers” theorem for the Ricci flow that, while being far short of what was needed to finish off the Poincare conjecture, was already a new and interesting result, and I think was one of the reasons why experts in the field were immediately convinced that there was lots of good stuff in these papers. — Terence Tao
"Perelman's work" was proving the Poincaré conjecture. His sketch was 68 pages of extremely dense math. Tao's heuristic for why it was worth working through was that it had interesting ideas that were useful to other mathematicians, even if the final paper was potentially wrong.
I use this idea a lot in thinking about software: the most interesting paradigms are ones you can "scavenge" ideas from. Even if the whole paradigm isn't useful to you, you can pull out some isolated ideas, throw away the rest, and still benefit from it. These paradigms are the ones that are most likely to spread, as opposed to paradigms that require total buyin.
Let's take functional programming (FP) as an example. ML/Haskell-style FP has a lot of interconnected ideas: higher-order functions, pattern matching, algebraic data types, immutable data structures, etc. But if you strip all of that away and stick to writing idiomatic Java/Python/C, making only one change, writing more pure functions, then you've already won.
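To make the scavenged idea concrete, here's a minimal sketch in ordinary Python (the function names and the sales scenario are hypothetical, purely for illustration): the same bookkeeping written first against shared mutable state, then as a pure function that takes its state as input and returns a new one.

```python
# Impure version: reads and mutates shared state, so calls are
# order-dependent and hard to test in isolation.
_totals = {}

def record_sale_impure(customer, amount):
    _totals[customer] = _totals.get(customer, 0) + amount

# Pure version: same inputs always produce the same output, and the
# caller's dict is never modified.
def record_sale(totals, customer, amount):
    new_totals = dict(totals)
    new_totals[customer] = new_totals.get(customer, 0) + amount
    return new_totals

totals = record_sale({}, "alice", 10)
totals = record_sale(totals, "alice", 5)
```

Nothing else about the codebase has to change: no pattern matching, no algebraic data types, just one habit borrowed from FP.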
This has several positive effects on FP adoption. It immediately demonstrates that FP has good ideas and is not just some weird cult practice. It also means people can "dip their toes in" and benefit without having to go all in. Then they can judge if they're satisfied or want to go further into learning FP. And if they do go further, they can learn new ideas gradually in stages.
By contrast, compare array programming languages (APLs). There are lots of cool ideas in APL, but they're so interconnected that it's hard to scavenge any of them out. If I want to use multidimensional arrays in Python, I can't just whip them up myself; I have to import an external library like numpy and learn a whole bunch of new ideas at once. So most of the ideas in APL stay in APLs.3
A related idea is that techniques are only adopted if you can "half-ass" them. TDD is useful even if you only use it sometimes and phone in the "red-green-refactor" steps, which helped its adoption. Cleanroom depends on everybody doing it properly all the time, which hindered its adoption.
Scavenging from formal methods
Part of the reason I'm thinking about this is that business has been real slow lately and I'm thinking about how I can make formal methods (FM) more popular. What ideas can we scavenge from writing specifications?
The obvious ones are predicate logic and the implication operator, but they're both poorly supported by existing programming languages, and I'm not sure how best to pitch them.2 They're also "too big": it takes a lot of practice to wrap your mind around using quantifiers and implication.
Sets, definitely. Sets are unordered, unique collections of objects. Lots of things we store as lists could be sets instead. I use sets way more after learning some FM.
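Python already ships sets, so this one scavenges with zero imports. A small illustrative example (the tag scenario is made up): duplicates collapse automatically, and membership questions read like the math they are.

```python
# Tags on a post are naturally a set, not a list: order doesn't matter
# and duplicates are meaningless.
tags = {"python", "fm", "python"}   # the duplicate silently collapses

required = {"fm"}

# Subset, intersection, and difference are one operator each.
has_required = required <= tags          # subset test
overlap = tags & {"fm", "tla"}           # intersection
missing = {"fm", "tla"} - tags           # what's not tagged yet
```

Anywhere you find yourself writing `if x not in lst: lst.append(x)`, a set was probably the right collection all along.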
There's also a lot of high-level ideas in FM that usefully apply to all other code. System invariants are things that should always hold in every possible state. More broadly, we can categorize properties: "safety" is roughly "bad things don't happen", "reachability" is "good things always could happen", and "liveness" is "good things always do happen". Fairness, maybe. Refinement is definitely too in-the-weeds to be scavengable.
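Invariants in particular scavenge well because you can check them in plain code. A minimal sketch, using a made-up bank-account example: the safety property "the balance never goes negative" is asserted after every state change.

```python
class Account:
    """Toy example: the invariant is checked on every state transition."""

    def __init__(self, balance=0):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Safety property: the "bad thing" (negative balance) never happens,
        # in any reachable state.
        assert self.balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount):
        self.balance -= amount
        self._check_invariant()

    def deposit(self, amount):
        self.balance += amount
        self._check_invariant()
```

This is the runtime-assertion version of what a model checker would verify over *all* states, but even the cheap version catches bugs at the moment the state first goes bad, not three calls later.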
I'd also call out decision tables as a particular technique that's extractable. You can teach decision tables in five minutes to a completely nontechnical person.
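A decision table also drops straight into code as plain data. A hypothetical discount policy as a sketch: each row maps one combination of boolean conditions to an outcome, and exhaustiveness is a one-line check.

```python
# Decision table: (logged_in, has_coupon) -> discount rate.
# Every combination of conditions gets exactly one row.
DISCOUNTS = {
    (True,  True):  0.20,
    (True,  False): 0.10,
    (False, True):  0.05,
    (False, False): 0.00,
}

def discount(logged_in, has_coupon):
    return DISCOUNTS[(logged_in, has_coupon)]

# With 2 boolean conditions there must be exactly 2**2 rows;
# a missing or duplicated case fails immediately.
assert len(DISCOUNTS) == 2 ** 2
```

Compare that to the equivalent nest of if/else branches, where a forgotten case is invisible until someone hits it.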
As I said, short newsletters for the rest of this month. Have a great week!
Things I'm curious about
- Logic Programming: What ideas are scavengable from LP? I know that pattern matching originated in LP and found its way to FP from there, but that's associated with FP more now.
- Nontechnical People: What's a good word for them? "Nondev" seems like it leaves out data scientists, DBAs, etc. "Normie" and "Muggle" are too offensive. "Layfolk" just sounds weird.
- TLA+ workshops: Would people be interested in an "intermediate" TLA+ workshop, for people who've already written production specs but want to get better? I'm thinking half a day, 6-8 people, with people bringing up what they want to learn about in advance. If there's enough people who'd want this I can plan something out.
1. Domain-Driven Design. ↩
2. This is one big reason my attempt to write a book on predicate logic has been in a long stall. ↩
3. One idea that mathematicians scavenged from APLs is the Iverson bracket: `[P] = P ? 1 : 0`. ↩
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.
My new book, Logic for Programmers, is now in early access! Get it here.