Computer Things

February 11, 2026

Proving What's Possible

When "what CAN happen" is as important as "what SHOULD happen"

As a formal methods consultant I have to mathematically express properties of systems. I generally do this with two "temporal operators":

  • A(x) means that x is always true. For example, a database table always satisfies all record-level constraints, and a state machine always makes valid transitions between states. If x is a statement about an individual state (as in the database example, but not the state machine one), we further call it an invariant.
  • E(x) means that x is "eventually" true, conventionally meaning "guaranteed true at some point in the future". A database transaction eventually completes or rolls back, a state machine eventually reaches the "done" state, etc.

These come from linear temporal logic, which is the mainstream notation for expressing system properties.[1] We like these operators because they elegantly cover safety and liveness properties, and because we can combine them. A(E(x)) means x is true an infinite number of times, while A(x => E(y)) means that whenever x is true, y is guaranteed to be true at some later point.
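
To make the examples below concrete, here's a minimal sketch of these ideas over an explicit state graph in Python (my choice of illustration, nothing TLA+- or Alloy-specific). The worker state machine here (Ready/Running/Retry/Done and its transitions) is made up purely for illustration, and the sketch only covers the invariant flavor of A: checking E properly means reasoning about traces and fairness, which a plain reachability search doesn't do.

    from collections import deque

    # Made-up worker state machine, used throughout the examples below.
    TRANSITIONS = {
        "Ready":   {"Running"},
        "Running": {"Done", "Retry"},
        "Retry":   {"Running"},
        "Done":    set(),              # terminal: no outgoing transitions
    }

    def reachable(start="Ready"):
        """All states reachable from `start`, including `start` itself."""
        seen, frontier = {start}, deque([start])
        while frontier:
            for nxt in TRANSITIONS[frontier.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def always(pred, start="Ready"):
        """A(pred), invariant flavor: pred holds in every reachable state."""
        return all(pred(s) for s in reachable(start))

    # Every reachable state is a legal worker state:
    print(always(lambda s: s in {"Ready", "Running", "Retry", "Done"}))   # True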

There's a third class of properties, which I will call possibility properties: P(x) asks "can x happen in this model?" Is it possible for a table to have more than ten records? Can a state machine transition from "Done" to "Retry", even if it doesn't? Importantly, x does not need to be possible immediately, just at some point in the future. It's possible to lose 100 dollars betting on slot machines, even if you only bet one dollar at a time. If x is a statement about an individual state, we can further call it a reachability property. I'm going to use the two interchangeably for flow.

A(P(x)) says that x is always possible. No matter what we've done in our system, we can make x happen again. There's no way to do this with just A and E. Other meaningful combinations include:

  • P(A(x)): there is a reachable state from which x is always true.
  • A(x => P(y)): y is possible from any state where x is true.
  • E(x && P(y)): x is eventually true in a state from which y is still reachable.
  • A(P(x) => E(x)): If x is ever possible, it will eventually happen.
  • E(P(x)) and P(E(x)) are the same as P(x).

See the paper "Sometime" is sometimes "not never" for a deeper discussion of E and P.
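
Continuing the toy Python sketch from above, P and A(P(x)) both reduce to reachability searches over the state graph:

    def possibly(pred, start="Ready"):
        """P(pred): some state satisfying pred is reachable from `start`."""
        return any(pred(s) for s in reachable(start))

    def always_possibly(pred, start="Ready"):
        """A(P(pred)): from every reachable state, pred is still reachable."""
        return all(possibly(pred, s) for s in reachable(start))

    print(always_possibly(lambda s: s == "Done"))    # True: "Done" stays reachable everywhere
    print(always_possibly(lambda s: s == "Retry"))   # False: once in "Done", no more retries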

The use case

Possibility properties are "something good can happen", which is generally less useful (in specifications) than "something bad can't happen" (safety) and "something good will happen" (liveness). But they still come up as important properties! My favorite example:

A guy who can't shut down his computer because System Preferences interrupts shutdown

The big use I've found for the idea is as a sense-check that we wrote the spec properly. Say I take the property "A worker in the 'Retry' state eventually leaves that state":

A(state == 'Retry' => E(state != 'Retry'))

The model checker checks this property and confirms it holds of the spec. Great! Our system is correct! ...Unless the system can never reach the "Retry" state, in which case the expression is vacuously true. I need to verify that 'Retry' is reachable, i.e. P(state == 'Retry'). Notice I can't use E to do this, because I don't want to say "the worker always needs to retry at least once".
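
In terms of the toy sketch above, the sense-check is just a reachability query (this is not how TLC or Alloy actually work, just the idea):

    # The liveness property about "Retry" only means something if "Retry"
    # is reachable in the first place:
    assert possibly(lambda s: s == "Retry"), "'Retry' unreachable: the liveness property is vacuous"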

It's not supported though

I say "use I've found for the idea" because the main formalisms I use (Alloy and TLA+) don't natively support P. 2 On top of P being less useful than A and E, simple reachability properties are mimickable with A(x). P(x) passes whenever A(!x) fails, meaning I can verify P(state == 'Retry') by testing that A(!(state == 'Retry')) finds a counterexample. We cannot mimic combined operators this way like A(P(x)) but those are significantly less common than state-reachability.

(Also, refinement doesn't preserve possibility properties, but that's a whole other kettle of worms.)

The one that's bitten me a little is that we can't mimic "P(x) from every starting state". "A(!x)" fails if there's at least one path from one starting state that leads to x, but other starting states might not make x possible.
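
Here's that gap on the toy graph, with a hypothetical second starting state ("Maintenance", made up for this example) that can never reach "Retry": one counterexample to A(!x) only proves reachability from some start, not from every start.

    # "Maintenance" can reach "Done" but never "Retry".
    TRANSITIONS["Maintenance"] = {"Done"}
    INITIAL_STATES = {"Ready", "Maintenance"}

    # What one A(!x) counterexample proves: "Retry" is reachable from SOME start.
    print(any(possibly(lambda s: s == "Retry", i) for i in INITIAL_STATES))   # True

    # What I actually want here: "Retry" is reachable from EVERY start.
    print(all(possibly(lambda s: s == "Retry", i) for i in INITIAL_STATES))   # False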

I suspect there's also a chicken-and-egg problem here. Since my tools can't verify possibility properties, I'm not used to noticing them in systems. I'd be interested in hearing if anybody works with codebases where possibility properties are important, especially if it's something complex like A(x => P(y)).


  1. Instead of A(x), the literature uses []x or Gx ("globally x"), and instead of E(x) it uses <>x or Fx ("finally x"). I'm using A and E because this isn't teaching material.

  2. There's some discussion about adding it to TLA+, though.

If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.

My new book, Logic for Programmers, is now in early access! Get it here.

Join the discussion:
    Dylan Sprague (February 11, 2026, evening):

    To check if I'm understanding this correctly, A and E are basically the universal and existential quantifiers over the set of all future states in the linear time theory, right? Whereas P is the existential quantifier over the possible future states in the branching time theory? I suppose A would also be the universal quantifier over possible future states in the branching theory, then.

