What does 'TLA+' mean, anyway?
The secret Illuminati meaning behind the Three Letter Acronym (plus)
TLA+ Workshop
Feb 12th. I've decided to reduce the class size from 20 to 15, so there are only a couple of slots left! I'll be making a little less money this way, but it should lead to a better teaching experience for the attendees. Use the code NEWSLETTERDISCOUNT for $100 off.
What does "TLA+" mean, anyway?
It's a running joke in the TLA+ community that we're afraid of saying what "TLA+" means, lest it scare people off. It's not on the front page of Lamport's website, or on the GitHub repo, or on learntla (though I put it in the FAQ). So here we go:
TLA+ stands for "Temporal Logic of Actions, Plus".
Okay, but that doesn't explain anything, so let's talk about what it actually means. I'm going to oversimplify both the history and the math to avoid spending two more days on fact-checking.
Modal Logics
A "logic" is a mathematical framework for reasoning about truth statements, like how "arithmetic" is a framework for reasoning about numbers. There are lots of logic but the one we're all familiar with is propositional logic, which is just manipulating raw true-false values. Propositional logic is what gives us stuff like
!(x || !y) == !x && y
Which we use to simplify our conditionals. Then comes predicate logic, where you add in the ability to say "P(x) is true for all x" or "for some x." Predicate logic is used almost everywhere in math. I'm writing a short book/long pamphlet on predicate logic; I think it's underused in programming.
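Over a finite domain, the two quantifiers have a direct executable reading: "for all" is a loop that must always succeed, "for some" a loop that must succeed at least once. A minimal sketch in Python (`forall` and `exists` are illustrative names, not standard library functions):

```python
# "P(x) is true for all x" and "P(x) is true for some x",
# restricted to a finite domain, map onto all() and any().
def forall(domain, pred):
    return all(pred(x) for x in domain)

def exists(domain, pred):
    return any(pred(x) for x in domain)

xs = range(1, 10)
print(forall(xs, lambda x: x > 0))       # True: every x in 1..9 is positive
print(exists(xs, lambda x: x % 7 == 0))  # True: 7 is in the range
print(forall(xs, lambda x: x % 2 == 0))  # False: 1 is odd
```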
Less common are the various modal logics, which add "modes", or qualifiers. □P means "P is necessarily true" and ◇P means "P is possibly true". Note that ◇P is defined as !□!P: P is possibly true if it's not necessarily false.
What does "necessarily true" mean? Ah, there's an interesting question. "Necessity" means different things in different contexts:
- When talking about "possible worlds", □P could mean that P is true in every possible circumstance, while ◇P means P is true in at least one. □(2+2=4), ◇(unicorns are real).
- When talking about proof, □P could mean that we have evidence that P is true, while ◇P means we don't have evidence that P is false. □(unicorns don't exist), ◇(aliens exist).
- When talking about ethics, □P could mean that P is something you must do, while ◇P means P is something you may do. □(don't punch babies), ◇(kick unicorns).
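One way to make this concrete is the possible-worlds reading: □P holds when P is true in every world, ◇P when it's true in at least one. A toy sketch in Python, with a made-up set of worlds for illustration:

```python
# □P: P holds in every possible world; ◇P: P holds in some world.
# Note diamond is defined as "not box(not P)", matching ◇P = !□!P.
worlds = [
    {"two_plus_two": 4, "unicorns": False},
    {"two_plus_two": 4, "unicorns": True},   # a stranger world
]

def box(pred):
    return all(pred(w) for w in worlds)

def diamond(pred):
    return not box(lambda w: not pred(w))

print(box(lambda w: w["two_plus_two"] == 4))  # True: holds in all worlds
print(diamond(lambda w: w["unicorns"]))       # True: holds in some world
print(box(lambda w: w["unicorns"]))           # False: fails in one world
```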
A cool one I ran into at SPLASH 2021 was belief programming for drones. Let's say you have an altimeter that's accurate to within 10 meters. If it reads that you're 100 meters off the ground, then □(height >= 90), while ◇(height >= 109).
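The same reading works for the altimeter: the "possible worlds" are all true heights consistent with the reading. A sketch, assuming a reading of 100 meters with ±10 meter accuracy:

```python
# Altimeter reads 100m, accurate to within 10m: the true height
# could be anywhere in [90, 110].
reading, error = 100, 10
possible_heights = range(reading - error, reading + error + 1)

def box(pred):       # necessarily: true for every possible height
    return all(pred(h) for h in possible_heights)

def diamond(pred):   # possibly: true for at least one possible height
    return any(pred(h) for h in possible_heights)

print(box(lambda h: h >= 90))       # True:  □(height >= 90)
print(diamond(lambda h: h >= 109))  # True:  ◇(height >= 109)
print(box(lambda h: h >= 109))      # False: 90 is a possible height
```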
In formal methods, the most interesting mode is time, because you can use that mode to represent algorithms.
Temporal Logics
Say that "necessarily P" means "P is always true" and "possibly P" means "P is sometimes true", and we get temporal logic. Or, more precisely, a temporal logic, because there are quite a few. One of the big questions is whether to model a coin flip as two totally separate timelines or a single timeline that branches in two. This matters because the two models lead to different interpretations of "sometimes". If you flip a coin once, is ◇heads true?
- Linear model: No, because there are two timelines, and in one of them, you flip tails.
- Branching model: Yes, because there's one timeline that branches, and in one branch you flip heads.
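The two models can be represented directly: linear time is a set of complete timelines, and a property of the system must hold on all of them, while branching time is one tree whose branches are the possible futures. An illustrative sketch:

```python
# Linear model: the system is a *set of timelines*. ◇heads is a
# property of a whole timeline, and the all-tails timeline refutes it.
linear_timelines = [["heads"], ["tails"]]
sometimes_heads_linear = all("heads" in t for t in linear_timelines)

# Branching model: one timeline that branches. ◇heads asks whether
# *some* branch reaches heads.
tree = {"flip": ["heads", "tails"]}
sometimes_heads_branching = any(b == "heads" for b in tree["flip"])

print(sometimes_heads_linear)     # False: one timeline never sees heads
print(sometimes_heads_branching)  # True: one branch does
```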
In 1980 Lamport wrote the influential paper "Sometime" is sometimes "not never", where he argued that linear models are better for modeling concurrent programs.¹ The next year, Clarke and Emerson would announce the first model checker, which checked a branching-time logic. So both approaches are really important in the history of formal methods, but Lamport preferred linear time, which decided the rest of this story.
Anyway, the dominant linear model of temporal logic is called... linear temporal logic.
Linear Temporal Logic
Time is broken down into a sequence of abstract steps, with three main temporal operators:
- G/□: true now and in all future steps.
- F/◇: true now or in at least one future step.
- N/X/◯: true in the next step.
(All of them have ASCII alternatives because computer scientists hate weird symbols as much as other mathematicians love them. There are also some more operators, like Until and Release, that aren't strictly necessary, so I'm leaving them out.)
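Over a finite trace of states, the three operators have a direct executable reading. A minimal sketch (real LTL is defined over infinite traces, so treating a trace's end as vacuously satisfying N is a simplification here):

```python
# A trace is a list of states; each operator evaluates a state
# predicate p starting at index i.
def G(p, trace, i=0):
    # true at i and in all future steps
    return all(p(s) for s in trace[i:])

def F(p, trace, i=0):
    # true at i or in at least one future step
    return any(p(s) for s in trace[i:])

def N(p, trace, i=0):
    # true in the next step (vacuously true at the end of a
    # finite trace - a simplification for this sketch)
    return i + 1 >= len(trace) or p(trace[i + 1])

trace = [False, True, True, True]
print(G(lambda x: x, trace))     # False: the first state is False
print(F(lambda x: x, trace))     # True:  some state is True
print(G(lambda x: x, trace, 1))  # True:  always True from step 1 on
print(N(lambda x: x, trace))     # True:  the next state is True
```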
Here's how we can model "x is true in this state and false in the next state":
x && N(!x)
Here's the boolean flipping between true and false in the next step:
(x && N(!x)) || (!x && N(x))
Here's a boolean that starts true and flips every step:
Flip(x) = (x && N(!x)) || (!x && N(x))
Spec = bool && G(Flip(bool))
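We can sanity-check this spec on a finite prefix of a behavior: it should start true and satisfy Flip at every step. An illustrative sketch:

```python
# Flip at step i: (x && N(!x)) || (!x && N(x)), i.e. the value
# changes between step i and step i+1.
def flip_at(trace, i):
    x, nx = trace[i], trace[i + 1]
    return (x and not nx) or (not x and nx)

# Spec: starts true, and Flip holds at every step of the prefix.
def spec(trace):
    return trace[0] and all(flip_at(trace, i) for i in range(len(trace) - 1))

print(spec([True, False, True, False, True]))  # True: starts true, alternates
print(spec([True, True, False]))               # False: fails to flip at step 0
print(spec([False, True, False]))              # False: doesn't start true
```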
Here's a bicycle blinker which turns on when you click it once, blinks while it's on, and turns off when you click it again.
Flip(x) = (x && N(!x)) || (!x && N(x))
Init = !light && !active
TurnOn = !active && Flip(active) && N(light)
TurnOff = active && Flip(active) && N(!light)
Spec = Init && G(
TurnOn
|| TurnOff
|| (active && N(active) && Flip(light))
)
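One way to sanity-check the blinker is to run it as a state machine, where each step is either a click or, while active, a blink. A sketch mirroring the disjuncts above (the `step` function is my own framing for illustration, not TLA+ or LTL syntax):

```python
# State: (active, light). Each step is a click (TurnOn/TurnOff) or,
# while active, a blink that flips the light.
def step(state, click):
    active, light = state
    if click and not active:
        return (True, True)       # TurnOn:  Flip(active) && N(light)
    if click and active:
        return (False, False)     # TurnOff: Flip(active) && N(!light)
    if active:
        return (True, not light)  # blink:   N(active) && Flip(light)
    return (False, False)         # off and unclicked: nothing happens

s = (False, False)       # Init: !light && !active
s = step(s, click=True)  # click it once: on and lit
print(s)                 # (True, True)
s = step(s, click=False) # blinks while on
print(s)                 # (True, False)
s = step(s, click=True)  # click again: off
print(s)                 # (False, False)
```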
I think I got that right! There are programs that convert LTL fragments if you feel like showing me how I'm wrong.
Action Logics
LTL is arguably the most widespread temporal logic, and lots of formal methods languages use it for expressing properties. But it's not commonly used for expressing entire systems. There are lots of very specialized reasons for this; the most easily-explainable one is that updating variables sucks. Say x is an integer you want to increment. You'd want to write something like N(x) = x + 1, but you can't do that: N(x) returns a boolean. You could also try N(x = x + 1), but that doesn't work because both x's in the N refer to the next value of x. There's no way to get one side of the = to be the current value of x and the other side to be the next value.
Instead you have to do something like this:
some z: Int | (z = x + 1) && N(x = z)
Oooooor we could just define x' to be "the value of x in the next state":
x' = x + 1
You read x' as "x-prime". Using prime to mean "the next value" goes back at least to Turing, but mathematicians and physicists were using it in the same-ish way for far longer.
Boolean formulae with primes are called actions. Lamport claims to be the first person to make an action-based temporal logic and I haven't found any information that refutes him. Epistemically ◇(he was the first), though I can't assert □(he was the first).
Anyway, Lamport published his new logic in 1990 and then more thoroughly in 1994. He called it the "Temporal Logic of Actions", or TLA.
PLUS
TLA is the formal logic: what model of time we're using, what □ and ◇ mean, what makes a valid formula, etc. TLA+ is the specification language built on top of the formal logic. It adds things like conditionals, sequences, integer arithmetic, etc. You could design a totally different specification logic using the same TLA core. Or you could embed just the TLA stuff in a theorem prover and leave out the + stuff.
1. ¿Por qué no los dos? You can have a model that contains both linear and branching timelines, like CTL*. The main downside is that it leads to much more difficult computational problems: satisfiability for CTL*, for example, is 2EXPTIME-complete. ↩
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.
My new book, Logic for Programmers, is now in early access! Get it here.