Website Hiatus, Info on the Learntla Rewrite
Website Hiatus
Fortunately not depression this time, just a responsibility thing. From the announcement:
I’m not letting myself work on software content for the website until I finish both the Alloydocs update and the learntla rewrite.
(I’ll still be writing the weekly newsletter.)
Yeah, I'm incentivizing myself to write by rewarding myself with more writing. Hey, if it works it works.
About that rewrite...
The Alloydocs update shouldn't take more than a week, so the real big goal is the learntla rewrite.
After I learned TLA+ back in 2016, I wondered why more companies didn't use it. I thought the answer was obvious: it took me weeks of wrestling with Specifying Systems before I could write even the most basic specs. Most people don't have my stubbornness. People needed a gentler introduction to TLA+, written by a non-Turing award winner. So I took a couple weeks off and hammered out learntla. That led to a Strange Loop talk, which led to a book deal, which led to consulting gigs, and now I'm the Formal Methods Guy.
I'm pretty proud of learntla, and I know it's helped a lot of people. It was also something I wrote just a few months into using TLA+, and that shows in a lot of places. Huge sections of TLA+ aren't even mentioned, and there's barely anything on writing specifications well. It's ultimately a resource written by a beginner, for other beginners. In the five years since, I've gotten much, much better at formal methods. More importantly, I've gotten much, much better at education.
So, a rewrite. I want to cover the existing material with better organization, better explanation, and better examples and exercises. I want to add new material on using TLA+ without PlusCal, how all the tooling works, and all of the advanced TLA+ features available to you. Finally, I want that all to be a minority of the content. The majority should be about technique: specifying quirky systems, writing clean operators, model optimization, how to quickly skim an error trace, stuff like that.
This is all going to be a complete ground-up rewrite. Partly because I think most of the current explanations are pretty poor, and partly because I'd have to convert things from Markdown to rST anyway. More on that later.
A very rough outline
Things are gonna change as I get deeper into the work, but here's what I'm thinking for version one:
- Introduction, conceptual overview
- Beginner Tier:
  - Setting up the tools
  - Basics of operators
  - Sets and sequences
  - Deterministic PlusCal with multiple starting states
  - Quantifiers
  - Structures and Functions
  - Concurrency and Nondeterminism
- Beginner+:
  - Constants and Models
  - Function sets
  - Temporal logic
  - Fairness
  - Procedures
  - Basic optimization
- Beginner++:
  - Higher-order operators
  - Pure TLA+ specifications
  - Action properties
  - Module Instances and Refinement
  - Fairness & Machine Closure
- Beginner+++:
  - Not sure yet
- Tooling
  - Toolbox IDE
  - Command line model-checking
  - Config files
  - TLC module
  - Community Modules
- Example specifications, with breakdowns
Some techniques I want to (eventually) cover: recursive computation, state-sweeping, conditional invariants, complex liveness properties, aux vars, variable organization, property-based refactoring, adversarial modeling, and general spec organization. I'll add other techniques to the list as I remember them.
Note that this is a different order from both learntla and Practical TLA+. The more I work on FM, the better sense I get of how to best teach it.
Also tooling stuff
So that's what I guarantee will happen. Now here are some wide-eyed things I'm promising myself. The original learntla is written in Hugo, which is a general-purpose static site generator. The rewrite is instead in Sphinx, which is a specialized documentation generator. I used Sphinx for Alloydocs and it was so wildly better than markdown pages that I can't imagine going back to Hugo for this. I realize this will make it harder for the community to contribute new content, but the community wasn't contributing new content anyway, so eh.
Using sphinx opens up a lot of new tricks. In Practical TLA+, I needed a way for people to check whether they modified a spec correctly. I came up with showing the (states / distinct states) for each run. I had to update this whenever I modified a specfile. With Sphinx, I can define a new directive like this:
# Note, this is just a prototype example
def state_space_role(name, rawtext, text, lineno, inliner,
                     options=None, content=None):
    # Populated in environment from a yaml file
    try:
        data = yaml_data["state_spaces"][text]
    except KeyError:
        # Error handling if the key is not found
        msg = inliner.reporter.error(
            f"role was not able to find a state space with name {text}",
            line=lineno)
        prb = inliner.problematic(rawtext, rawtext, msg)
        return [prb], [msg]
    body = f"({data['states']} states / {data['distinct']} distinct)"
    node = nodes.inline(rawtext, body)
    return [node], []

def setup(app: Sphinx):
    app.add_role('ss', state_space_role)
Where the yaml data looks like this:
state_spaces:
  example_model_run:
    states: 10
    distinct: 4
Running a corresponding model then populates the appropriate yaml key, which can be picked up by the ss role. In the text itself, I can write :ss:`example_model_run` and Sphinx will replace that text with (10 states / 4 distinct). It keeps things in sync better and requires less effort to make changes on my end.
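The "running a model populates the yaml" step is easy to script. As a sketch (the function names, regex, and YAML layout here are my own assumptions, not the actual build setup): TLC prints a summary line like "2941 states generated, 1207 distinct states found, 0 states left on queue.", which can be scraped and turned into an entry under the state_spaces key:

```python
import re

# TLC's run summary looks like:
#   "2941 states generated, 1207 distinct states found, 0 states left on queue."
SUMMARY_RE = re.compile(r"(\d+) states generated, (\d+) distinct states found")

def parse_tlc_summary(tlc_output: str) -> dict:
    """Pull the state-space numbers out of a TLC run's stdout."""
    m = SUMMARY_RE.search(tlc_output)
    if m is None:
        raise ValueError("no TLC state-space summary found in output")
    return {"states": int(m.group(1)), "distinct": int(m.group(2))}

def yaml_entry(name: str, summary: dict) -> str:
    """Render one entry under the state_spaces key (manual YAML, no deps)."""
    return (f"  {name}:\n"
            f"    states: {summary['states']}\n"
            f"    distinct: {summary['distinct']}\n")
```

A build script would run TLC on each spec, pass its captured stdout through parse_tlc_summary, and append the resulting yaml_entry to the data file the role reads.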
Sphinx makes some documentation ideas possible, if tricky. I like the idea of marking out techniques and examples in the text and aggregating them in special pages, or having embedded "refresher" popups, or having call-forwards. Jury's out on whether I'll actually do any of these; in practice most might take more effort than they're worth. But it's fun to daydream.
The current version of learntla is now also mirrored at old.learntla.com. I want 404s to redirect so as not to break existing links.
Leaving with a TLA+ trick
One of the techniques I mentioned up there was "property-based refactoring". It's one I started using only recently, but it's a huge help. Basically, if you want to refactor an operator like
Op(dog, bakery) ==
\* some complicated computation
You write a second operator and pair it with an invariant:
Op2(dog, bakery) ==
\* some simpler computation
RefactorInv ==
\A d \in Dog, b \in Bakery:
Op(d, b) = Op2(d, b)
You can do the same with actions, though you then have to write it as
RefactorActionProp == [][
\A d \in Dog, b \in Bakery:
Act(d, b) <=> Act2(d, b)
]_vars
In both cases the invariant/property fails if the two ever disagree, meaning your refactor is a breaking change.
(If you're replacing an action with a less restrictive action, i.e. one that covers a wider set of values but is the same on existing values, you can instead do Act2(d, b) => Act(d, b).)
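The same property-based-refactoring idea carries over to ordinary code: quantify over a (finite) input domain and assert that the old and new implementations agree everywhere. A minimal Python sketch, where op and op2 are hypothetical stand-ins for the Op/Op2 operators above, not anything from an actual spec:

```python
from itertools import product

def op(dog, bakery):
    # "Complicated computation": count index pairs (d, b) where d == b.
    return sum(1 for d in range(dog) for b in range(bakery) if d == b)

def op2(dog, bakery):
    # Proposed simpler replacement: the counts above collapse to min.
    return min(dog, bakery)

# Analogue of RefactorInv: quantify over the whole domain and fail
# if the two implementations ever disagree.
DOGS = range(10)
BAKERIES = range(10)

def refactor_inv():
    return all(op(d, b) == op2(d, b) for d, b in product(DOGS, BAKERIES))
```

As with the TLA+ invariant, a False result means the refactor is a breaking change; the difference is that TLC checks the equivalence in every reachable state for free, while in ordinary code you have to pick the domain yourself.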
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.
My new book, Logic for Programmers, is now in early access! Get it here.