Formal Methods can't fix everything and that's okay
Finally done with conferences and glad to be back in Chicago. Next big event is the TLA+ workshop on October 16, which still has slots available! The faster I can fill those up, the sooner I can stop putting advertisements in my newsletters. I don't like doing it any more than you like reading it. Use the code C0MPUT3RTHINGS for 15% off.
Keeping on the topic of TLA+, I saw this question on LinkedIn:
I've always wondered - is there much use of building formally verified software on top of non-formally verified systems (like the OS the software runs on)? To me, the absolute layman here, it feels like we're building proofs on top of potentially/probably flawed axioms. If we can verify Raft is a solid algorithm but it's built on non-verified networking devices do we end up putting a wrong amount of trust into the system?
This is the impossible problem of trusting trust: is formally verified code really provably-correct if it's compiled with an unverified compiler? What if the code's correct and it's running on an unverified OS? You could use a verified OS like seL4 but what about the hardware itself? What if the hardware design is proven but the manufacturing process adds a backdoor? Verifying the whole process is too hard for anybody but the richest countries. So why do formal verification at all?
Mossad gonna Mossad
In times like these, I turn to the wisdom of James Mickens.
If you haven't heard of Dr. Mickens, he's a professor at Harvard who researches distributed systems. Also he writes like this:
BREAKING NEWS: I've received tenure at Harvard! I want to thank all of the enemies that I had to destroy to achieve this great honor. Roger Davis at Princeton’s department of nutrition—you questioned my research on the efficacy of an all-Pop-Tart diet, but I am living proof that the diet works. Yes, I have nose bleeds every day and my pancreas has the dysfunction of a failing Soviet client state, but I believe that having constant double vision makes me twice as optimistic about life. [3500 more words]
Basically all of his writing and talks are great but the relevant one here is This World of Ours, his rant about computer security.
If the security people are correct, then the only provably safe activity is to stare at a horseshoe whose integrity has been verified by a quorum of Rivest, Shamir, and Adleman. Somehow, I am not excited to live in the manner of a Pilgrim who magically has access to 3-choose-2 {Rivest, Shamir, Adleman}, mainly because, if I were a bored Pilgrim who possessed a kidnapping time machine, I would kidnap Samuel L. Jackson or Robocop, not mathematical wizards from the future who would taunt me with their knowledge of prime numbers and how “Breaking Bad” ends. […] My point is that security people need to get their priorities straight.
He helpfully lists the three different priorities people should worry about, which I'll painstakingly transcribe here:
| Threat | Solution |
|---|---|
| Ex-girlfriend/boyfriend breaking into your email account and publicly releasing your correspondence with the My Little Pony fan club | Strong passwords |
| Organized criminals breaking into your email account and sending spam using your identity | Strong passwords + common sense (don’t click on unsolicited herbal Viagra ads that result in keyloggers and sorrow) |
| The Mossad doing Mossad things with your email account | Magical amulets? Fake your own death, move into a submarine? YOU’RE STILL GONNA BE MOSSAD’ED UPON |
Yes, this was all an elaborate way of saying "threat modeling", but if I just said "threat modeling" I wouldn't have been able to introduce you to James Mickens, now would I?
Mossad gotta budget
Threat modeling is fundamentally about cost-benefit analysis, on both sides of the threat.
Attacking a weak password is really easy, so it pays to attack even low-value targets. Adopting a strong password is also easy. If you're a low-value target, this will protect you from a lot. But if you're a high-value target, attackers will be willing to spend more resources attacking you. So you'll need to invest more in your defenses, which makes fewer people able to attack you. As this escalates, both the attacks and defenses get more and more expensive, as ever smaller groups of people can afford them.
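To make the break-even point concrete, here's a minimal sketch of that cost-benefit logic. The functions and every number in it are hypothetical, just to illustrate the shape of the tradeoff:

```python
# Hypothetical model: an attack is "worth it" when the expected payoff exceeds
# its cost; a defense is "worth it" when it removes more expected loss than it
# costs to deploy. All numbers below are invented for illustration.

def attack_worth_it(target_value, success_prob, attack_cost):
    return target_value * success_prob > attack_cost

def defense_worth_it(loss_before, loss_after, defense_cost):
    return (loss_before - loss_after) > defense_cost

# Weak password: attacks are so cheap that even a low-value inbox is a target.
print(attack_worth_it(target_value=50, success_prob=0.5, attack_cost=1))           # True

# Strong password: the attack cost goes up, so only high-value targets stay profitable.
print(attack_worth_it(target_value=50, success_prob=0.5, attack_cost=500))         # False
print(attack_worth_it(target_value=1_000_000, success_prob=0.5, attack_cost=500))  # True

# And the defense itself is nearly free relative to the loss it prevents.
print(defense_worth_it(loss_before=25, loss_after=0, defense_cost=1))              # True
```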
This ends up being what protects us from the Mossad. We're ultimately beneath their notice because they want to spend their resources aiming at higher-value targets. Even so, we're vulnerable to people looking for easier meals, which means we still need strong passwords and 2FA and phish-literacy and stuff.
How this ties back to formal methods
Like with security, how much we do for correctness depends on how valuable our work is, and how easy it is to "attack" it.
Most work is "low-value", and bugs aren't going to cause much damage or cost a lot of developer time. This kind of work doesn't even need formal methods; some combination of tests, type systems, and code review is enough to deal with it.
Some work is extremely high value. Militaries want verified OSes in addition to verified code. For a long time all of the seL4 users were militaries, though I think that's changed in the past few years. I'd guess they also have really strict manufacturing standards or something to prevent supply chain attacks.
Finally, some work is high-value enough that we should verify the code/design but not valuable enough to be worth investing in a verified OS or hardware stack. There might still be bugs from the OS, but {cost of OS bugs} < {cost of verifying the OS} while also {cost of verifying code} < {cost of code bugs}.[1]
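Here's that pair of inequalities as a tiny sketch, with invented dollar figures standing in for the real (and much harder to estimate) costs:

```python
# Invented figures: for each layer, the cost to formally verify it vs. the
# expected cost of living with its bugs. Verify a layer only when verification
# is the cheaper option.
layers = {
    # layer: (cost_to_verify, expected_cost_of_bugs)
    "our code/design":  (50_000, 200_000),
    "operating system": (5_000_000, 100_000),
}

for layer, (verify_cost, bug_cost) in layers.items():
    decision = "verify" if verify_cost < bug_cost else "accept the bugs"
    print(f"{layer}: {decision}")
# our code/design: verify
# operating system: accept the bugs
```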
In fact we almost never want to verify the whole app, either! Generally we only apply formal methods to a subset of the system, like a single module or a single part of the abstract design, that is most important to get right. The point is that we apply it in ways that still add value.
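As a toy illustration of what verifying just one piece of the design can look like, here's a sketch that exhaustively checks a single design fragment (a hypothetical account with two concurrent transfers, nothing from the newsletter itself) while ignoring the rest of the imaginary app, the OS, and the hardware:

```python
from itertools import product

# Toy design fragment: one account, two concurrent transfers out of it.
# Each transfer has two steps: (1) check the balance is sufficient, (2) withdraw.
# We enumerate every interleaving of those steps and check one invariant --
# "the balance never goes negative" -- and nothing else.

START, AMOUNTS = 10, (6, 6)

def violates(schedule):
    balance = START
    checked = [False, False]            # has transfer t passed its balance check?
    for t in schedule:                  # t = which transfer takes its next step
        if not checked[t]:
            checked[t] = balance >= AMOUNTS[t]   # step 1: the balance check
        else:
            balance -= AMOUNTS[t]                # step 2: the withdrawal
            if balance < 0:
                return True
    return False

# All interleavings of the four steps (two per transfer).
schedules = [s for s in product((0, 1), repeat=4) if s.count(0) == 2]
bad = [s for s in schedules if violates(s)]
print(f"checked {len(schedules)} interleavings, {len(bad)} violate the invariant")
```

A real model checker explores a much bigger state space than six interleavings, but the principle is the same: you can check the one design decision that matters without verifying the OS underneath it.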
I think this is where FM's potential for "provable correctness" harms its adoption. We know type systems and unit tests can't catch all possible errors, so we don't expect them to; we're happy whenever they catch more errors than we would have caught without them. But we know it's possible for FM to catch everything, so we're reluctant to go only partway with it.
[1] I'm mixing verifying code and verifying designs here. I believe that often verifying code is not worth it while verifying designs is.
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.
My new book, Logic for Programmers, is now in early access! Get it here.