
The Making of Making Sense

June 23, 2025

Two Christenings and a Funeral

Emerging from my cave to announce that the projects I've been working on for almost two decades are now online, so it's time to tell their stories.

I am emerging from my cave to announce that Sense Atlas, my tool for policy analysis, design rationale, and general “organizational cartography”, is now ready for a private alpha phase. This entails that Intertwingler, the application server I had to write to (among other ventures) make Sense Atlas possible, is now online for realsies. It’s no longer confined to my laptop; it’s fielding real internet traffic, which on its own is cause for celebration.

It’s still really slow though, since I have yet to write Intertwingler’s internal caching mechanism, which is the next thing I’m doing. There’s also plenty of janky visual design, mainly in places I haven’t decided how I want it to look yet. I want to underscore that the instance at senseatlas.net is the one I’m working on directly, so if it doesn’t impress you today, it’ll always be better tomorrow.

This milestone punctuates an odyssey that has spanned almost two decades. I feel like I can finally express what’s been beaning around in my head this entire time, and start tying together what has probably looked like disparate threads of effort into a single coherent body of work. I can also barely begin to describe the crushing irony of having no visibility into when this day would come, because the tool that would give me that visibility was the very thing I was trying to create. So I have something a little more for you today than groggy morning warmup videos or shitposting on BlueSky: I have the story of how it all came to be.

Sense Atlas

Screenshot of Sense Atlas where the subject is the task “Ship a front end with interfaces for IBIS, PM, and a rudimentary FOAF/ORG editor.”

A screenshot of Sense Atlas, which doesn’t do it justice. You’re probably better off watching a video.

The story of Sense Atlas goes all the way back to 2007. I was working in the infrastructure lab at an antivirus/​antispam company, and our project was to create a system that scanned the entire internet✱ looking for IP addresses that were busy sending e-mail when they shouldn’t have been.

✱ The company sold these appliances that business customers would plug into their networks to filter spam at the protocol level. These sent back a telemetry packet every five minutes (and there were thousands of these things deployed), so ours was a passive scan; we just needed to design a way to effectively deal with all that data. Essentially what we were looking to do was map out mainly residential networks so we could summarily junk them, since a cable modem (or DSL or whatever) has no business sending e-mail other than by way of its ISP.

The project was already running behind when I was brought in—​in flagrant violation of Brooks’ Law, but whatever—​and the reason why it was late was obvious: There was no clear vision for how this thing was going to work—​even what it was going to be. Nevertheless, my boss—​like most bosses—​was adamant that we provide estimates for the time it would take to complete our work, and that those estimates be accurate.

In other words, we had to attest to a deadline by which a given assignment would be complete, and then we had to make—​or preferably beat—​that deadline.

Of course, like most software shops, especially in the mid-2000s, we didn’t have a process for this that was better than just guessing—​and then maybe padding the guess. I had been writing code for a living for a few years at this point, so I saw this as an opportunity to level up my forecasting skills. So I set out to master the dark art of software development effort estimation. And to my ultimate surprise, I kinda…did?

It started out innocently enough. I reasoned, as one does, that the way you go about figuring out how long a given piece of software is going to take to write is by figuring out what it needs to do, and that will give you an indication of how much work it’s going to be. So I got an ordinary, off-the-shelf outliner, and started bashing out bullet points.

I should footnote that it’s not like I just dove in on first principles, but the techniques in the literature at the time were…not great. I even have a tattered copy of Barry Boehm’s 800-page tome Software Engineering Economics, from 1980. Perhaps the most sensible—​and contemporaneous—​thing written on the subject was a blog post by Joel Spolsky arguing that you should use timesheet data from comparable tasks as the basis for your estimates. But of course, we didn’t have timesheets for any of this because we were making something that nobody had made before.

What I ultimately did—​and I don’t remember exactly how I arrived at this—​was to write down a sort of free association exercise, one bullet point per prescription or proscription, of how the software artifact in question must or must not behave. I would indent the bullets to indicate a condition or drill-down of scope. This was English, not pseudo-code, and I found that each consideration, when written down, would either spur more considerations, or it wouldn’t. A peripatetic coworker, who would come and sit with me from time to time, would alert me to concerns that I had missed. When I was satisfied that I had exhausted all the salient concerns, I eyeballed the bullets into four-hour chunks, added them up, and that made my estimate.

The four-hour chunk thing is something I had come up with the year before, when I was complaining that the hour was a terrible timekeeping unit for software development, and suddenly realized that there’s no reason why you can’t just make up a new one.

It would also be irresponsible not to mention that my disposition toward effort estimation has fundamentally changed since this experience, and now I finally have the infrastructure to enact it.

The estimates I made using this technique, which I eventually dubbed behaviour sheets, were astonishingly accurate. There was, however, a catch: If I started a behaviour sheet first thing Monday morning, then I could tell you by lunchtime on Wednesday that the work product in question would be ready by close of business on Friday. In other words, it took as long to create the estimate as it did to write the code. In other other words, it was useless for its intended purpose. You see, my boss didn’t just want accurate time estimates (or really, slight overestimates), he wanted them quickly.

Now, I will concede that there’s no counterfactual to what I’m about to say, as that would require a parallel universe in which I didn’t use this technique, but I don’t think I would have written that code in less time if I hadn’t. Because what made the behaviour sheet work—​why it generated an accurate estimate—​was that the process surfaced, in advance, all of the ways and places I could have been surprised, or otherwise encountered some indeterminacy that I would have to resolve before I could continue. And it did it in a medium that is quicker and cheaper to work with than code.

This technique was manifestly valuable, but it wouldn’t be palatable if it couldn’t be sped up. (At least to people like my boss, who was not in the least bit impressed with this achievement. I didn’t hang around at that company for much longer than that.) The problem with a process like this, though, is that you can’t really speed it up so much as take away the things that slow it down:

  • Relax the hierarchical constraint: One thing I found when working with a conventional outliner is that when the list got big enough, I kept pathologically rearranging and re-rearranging it to group the elements together in the way that made the most sense, and because it was a strict tree structure, I could only pick one layout. So here, the appropriate structure for this information was obviously a graph, not a tree.
  • Reuse the elements: It also became clear, after doing a few behaviour sheets, that some of the elements would show up again and again, like de facto patterns. It sure would make sense to be able to reuse those, because they are invariably going to be attached to a whole wad of other de facto patterns, many of which will also be relevant.
  • Put it on the net: Having my coworker review my behaviour sheets was valuable, but contingent on his availability. Having the workspace online where he could review it asynchronously—​and add his own remarks—​would require less real-time coordination. Moreover, if you’re going to be building up a repository of reusable design and engineering concerns, it would be pretty silly if you didn’t have it provisioned in such a way that your whole team could access and add to it.

Sense Atlas isn’t (just) about estimates, mind you; that was just the catalyst. 2007 was also around the time I had started absorbing the work of Alan Cooper, who argued very persuasively that a great deal of the uncertainty around software development, including what people would use and pay for, could be resolved before a single line of code was ever written. He also argued, furthermore, that code could be disposable, if all the decisions about how the code ought to behave resided in some other artifact. I imagined a structure that would start with the outer concerns of the business, and burrow inward into increasing levels of specialist detail, and eventually find its way to the exact lines of code that do the job.

This thinking is also heavily influenced by the work of the architect Christopher Alexander, as well as shearing layers/​pace layers, respectively from Frank Duffy (another architect) and Stewart Brand. Many years later it became a conceptual framework I called (and later did a conference talk on) The Specificity Gradient.

Christopher Alexander, known to the software community for coining the concept of pattern languages (although those of his and his colleagues were of course for buildings), had written several other books besides. One of those—​a short, sleek paperback with the foreboding title Notes on the Synthesis of Form—​was his 1964 doctoral dissertation. In it, he asserts that the way you solve any complex problem is to break it into subproblems, recursively, until you have a set of problems that are each simple enough to solve. The rest of the book is about how to go about doing that.

Three identical copies of a connected graph with two island-like clusters connected by two edges. One is plain. One shows two circles around the two islands that only cut the two connecting edges. The other shows a blob-like object that snakes through the structure and cuts twelve edges.

The shaded, slug-like figure depicts what happens when we prematurely ascribe categories to the design concerns (Alexander called them “fitness variables”). The circles show the mathematically-derived solution.

The way you do this is exactly by creating a graph structure, where the nodes represent architectural concerns, and the links represent mutual influence. The task is (mathematically) to find the partitioning line that cuts the fewest links, and then repeat for each of the two new pieces, and then keep repeating until it stops making sense to keep cutting. I looked at this and was like holy shit, this is sure to inject some rigour into my tactic of “eyeballing four-hour chunks”. There were plenty of valuable insights besides:

  • Structure rules everything: Resist the urge to prematurely categorize. Doing so will doom you to failure. The reason why is that the categories represented by words are extremely unlikely to match up to the partitions that cut the fewest links. Why? Because there are so many more ways to partition even a small, 20-node graph (2²⁰ = 1,048,576) than there are words in the English language (~600,000), so the chances of a match are slim. The way you partition, rather, is with a min-cut algorithm (and those have improved considerably since 1964); see the sketch after this list.
  • New information can radically upset the structure: A new concern is likely to attach to several existing ones. Subsequent partitioning operations are therefore overwhelmingly likely to yield wildly different solutions. What this means is that you can’t commit too prescriptively to a specific sequence of tasks, because the sequence is liable to get blown up on the regular (not to mention the task descriptions themselves) by the inexorable influx of new information.
  • Stop haggling over requirements: People will generally agree on whether a design concern is valid; where they tend to disagree is on the degree to which it is important. Under certain (technologically mediated) conditions, it’s cheaper to just record a design concern than to argue over whether it merits being recorded. Just shut up and write it down. The structure is more than capable of accommodating it.
  • Prioritize by opportunity: A structure like this is bound to fill in incrementally over time, and a lot of the work is just getting the information to put into it. So-called “requirements analysis” is in reality an ongoing process that will last the entire life of the project (and potentially beyond). All other things being equal (like how valuable the outcome is), do the thing right now that you can do right now.
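
To make that partitioning step concrete, here is a minimal sketch of the recursive decomposition in Python, leaning on the networkx library’s Stoer-Wagner minimum cut. The concern names and the cutoff are invented for illustration; this is the general idea of “cut the fewest links, then recurse”, not a reconstruction of Alexander’s procedure or of anything in Sense Atlas.

```python
# Recursive Alexander-style decomposition: split along the cheapest cut until
# the pieces are small enough to tackle directly. Assumes networkx is installed;
# the concern names below are made up.
import networkx as nx

def decompose(graph: nx.Graph, leaf_size: int = 3) -> list[list[str]]:
    """Return clusters of design concerns, found by repeatedly cutting the fewest links."""
    if graph.number_of_nodes() <= leaf_size or graph.number_of_edges() == 0:
        return [sorted(graph.nodes)]
    if not nx.is_connected(graph):  # disconnected pieces are already "cut"
        return [piece for comp in nx.connected_components(graph)
                for piece in decompose(graph.subgraph(comp).copy(), leaf_size)]
    _, (left, right) = nx.stoer_wagner(graph)  # global minimum cut
    return (decompose(graph.subgraph(left).copy(), leaf_size)
            + decompose(graph.subgraph(right).copy(), leaf_size))

g = nx.Graph()
g.add_edges_from([
    ("login", "sessions"), ("login", "password reset"),
    ("sessions", "password reset"), ("sessions", "audit log"),
    ("audit log", "storage"), ("telemetry intake", "rate limiting"),
    ("telemetry intake", "storage"), ("rate limiting", "storage"),
])
for cluster in decompose(g):
    print(cluster)
```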

The last point on that list became clear when I tried to operationalize this method. Just getting a graph that I could apply the partitioning algorithm to was a ridiculous amount of work. But:

  • identifying a design concern,
  • connecting it to its neighbours,
  • attaching the evidence supporting it,
  • making that available to stakeholders…

…is a concrete and countable unit of progress. That is something you can show to your boss (or client) and say “we did this today”. The effort that it takes to do one of these—​assuming you’ve already gone and done the legwork—​is on the order of a Tweet.

What Alexander was describing here, in 1964, at the same age as I was when I was reading it, is a process that does not take well to either preemptive partitioning (effectively picking at random from the 2ⁿ possible partitions of n objects), or preemptive sequencing (that is, doing the same with the m! possible orderings of a sequence of length m). In other words, unless you take this stuff seriously when planning your project, you have a vanishingly small chance of breaking up the job into the appropriate pieces, and an even tinier chance of doing them in the most efficient order. And the result of that is that it’s going to cost you time, money, and possibly even the success of the project.

So it seemed really obvious to me, even back then (it would have been 2008 or 2009 at this point), that software people needed a different sort of deal that did not overprescribe either the structure or the sequence of operations, but nevertheless communicated to stakeholders, on an ongoing basis, that yes this excursion was finite, and yes we are making progress, and yes the result will be more valuable than what it cost to produce. To be sure this is what Agile proponents had in mind, but I think Agile, writ large, has lacked two things:

  1. A theory for why it has to be this way, and
  2. Tools—​and not just software tools—​for demonstrating the underlying order in the ostensible chaos.

The Agile tenet of “responding to change”, for example, I believe misses the mark. Genuine change doesn’t happen nearly as often as simply unearthing some latent requirement that has demonstrably been there all along. Nevertheless, the effect of an event like this is that it might as well have been a genuine change, because it blows up everything you’ve promised your stakeholders so far. Obviously you should “respond to change”, but even better would be a narrative that said something like “our process is first and foremost one of discovery”. So the key, in my opinion, is to craft a promise to stakeholders that can accommodate this kind of routine occurrence, and then back that up with the necessary representations.

When the client of a construction project visits the job site—​especially from one day or week or month to the next—​they can immediately see progress. Hole in ground gets deeper. Then gets filled up. Then floors go up. Then walls and windows. Signals a cave man would understand. We don’t have that in software, at least not naturally, so part of the goal of Sense Atlas is to atomize the process of gathering information—​which is most of the work—​so that it can be surfaced and presented to onlookers as a smooth (or at least much less staccato) progression of concrete and substantive steps forward.

And the target here is any work where most of the job is figuring out what the job even is, which is well-represented by, but far from limited to software.

I realize we still haven’t gotten past 2009 in this story, but I have one more episode and then I can fast-forward. So at this point I had established that representing works in progress as a graph was a generally good idea, and moreover what you could do with one when you had it. There was a missing piece that Alexander never discussed, which was how you work the graph to get all the concerns it represents to roughly the same “conceptual scope”. That is, the lofty and abstract concerns you start out with will skew the structure, because they’ll end up being part of everything. What you operate over is, rather, the details at the “bottom”. So what was needed was a way to express hierarchy—​not in the ordered-tree sense of an outliner, but in the sense of “a house is bigger than a car”.

A split-screen showing Doug Engelbart performing The Mother of All Demos; a route map is on the monitor part of the screen.

Here is a still of Engelbart giving his famous 1968 presentation, aptly nicknamed The Mother of All Demos.

I’m not sure how I happened across it, but I was watching a video of a salon presentation at Google by Doug Engelbart, who, if you’re not familiar, had been working with these kinds of structures since before even Alexander. In the Q&A near the end, he drops the phrase “structured argumentation”, and it was immediately salient to me that I had to go and figure out what that was. This brought me (via some spooky stuff that was interesting but much harder to operationalize) to IBIS.

It is worth noting that the spooky stuff—​some kind of counterintelligence support system—​was also out of SRI, Engelbart’s home for many years.

IBIS, or Issue-based Information System, is the project of one Horst Rittel, who, it incidentally turns out, was a colleague of Alexander at Berkeley in the ’70s. Rittel was the guy who coined the term “wicked problem”—​one that has many stakeholders, each with diverging or even conflicting agendas, and trade-offs which can be said to be relatively better or worse, rather than some singular, objectively correct solution. I was like, wow, that sounds a heck of a lot like what I do; I should really make one of these.

IBIS, it turns out, is a really simple system that is nevertheless capable of creating complex structures. It consists of:

  • Issues, which are exactly what they sound like: states of affairs in the world that need something done about them—​or steered around,
  • Positions, which are specific proposals about what to do about a particular issue, and
  • Arguments, that support or oppose a given position.

Each of these elements can then go on to suggest other issues, which generate positions, which generate arguments, and so on. There is also a way to express that an entity of a given type is a more general or special case of another—​the scaling hierarchy I was looking for.
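
To give a sense of how little machinery IBIS actually requires, here is a minimal sketch of those three element types and the links between them, written as plain Python dataclasses. The class and field names are my own shorthand for this illustration, not the vocabulary Sense Atlas actually uses.

```python
# A toy IBIS data model: issues, positions, arguments, plus the two kinds of
# links described above (suggesting new issues, and generalization).
from dataclasses import dataclass, field

@dataclass
class Element:
    text: str
    suggests: list["Issue"] = field(default_factory=list)      # any element can raise new issues
    generalizes: list["Element"] = field(default_factory=list) # "a house is bigger than a car"

@dataclass
class Issue(Element):       # a state of affairs that needs something done about it
    positions: list["Position"] = field(default_factory=list)

@dataclass
class Position(Element):    # a specific proposal responding to an issue
    arguments: list["Argument"] = field(default_factory=list)

@dataclass
class Argument(Element):    # supports or opposes a position
    supports: bool = True

# A tiny issue network:
issue = Issue("Estimates take as long to produce as the work itself.")
position = Position("Reuse previously recorded design concerns.")
position.arguments.append(Argument("The same concerns recur across projects.", supports=True))
issue.positions.append(position)
```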

Now, Rittel and his colleagues came up with IBIS in the late 1960s, and they implemented it on index cards. It’s an obvious candidate for digitization, given its structure, and people have, dating all the way back to the 80s. There are even extant IBIS tools on the market today. So why isn’t everybody using them?

Screenshot of gIBIS showing a diagram of an issue network, an alternate text representation, and a description of a node in focus.

The original digitized IBIS tool from 1988, by Conklin et al.

The answer, it looked to me, was something I ultimately termed the then-what problem. You put all your information into this structure, and you hook it all together. Then what? You need to be able to take the next step—​whatever that is—​and for that, you need the data. Other apps, ones not anticipated by whoever made a particular IBIS tool—​and even other IBIS tools—​need to be able to access and interpret this data. So my first move in this direction—​this would have been about 2012—​was to design an exchange format.

So I had the vocabulary, but it took me another year to make it go. I bashed out my first IBIS tool prototype in the latter two weeks of October 2013. I actually did so (initially) for mundane technical purposes: I had designed a protocol that was only tangentially related, and needed a data vocabulary that was relatively complete and self-contained to test it with. So I grabbed that one. The result was a lot more useful than any breadboard prototype had any business being, but for technical reasons it was fundamentally limited in exactly the way I had diagnosed the other tools: it existed in a bubble; it wasn’t permeable, and rectifying that was going to be A Lot Of Work™ that I neither had time for nor anybody to pay for.

If Sense Atlas was going to grow into a fully-fledged product (though note, I deliberately refrained from naming it until literally last month), it needed a suitable substrate. It took me another ten years to get around to making one.

Intertwingler

Just a note that this is going to be a lot more technical than the previous section—​which, yes, I know—​but it still takes the format of a personal narrative, so you may glean something from it even if not technically inclined.

To achieve the desired effect for Sense Atlas, it turned out that I needed to write my own application server. You’ve no doubt seen me write about it before; it’s the thing that eventually came to be known as Intertwingler. The story of Intertwingler actually goes back even farther than Sense Atlas, but I’ll try to be brief. One thing you need to know about me first, though, is that I consider myself more of a designer than an engineer; I just picked up the technical skills because at the time they paid more money. What this entails, I think, is that I have the skillset of an engineer but not so much the instincts, and that means I tend to go for weirder and more divergent solutions.

Yes, Intertwingler is named after Ted Nelson’s concept of intertwingularity. No, I did not come up with it. I didn’t come up with Sense Atlas either, which is weird, because I’m usually pretty good at coming up with names for things. So, thanks to my respective friends for those; you know who you are.

Even though I could definitely go earlier, I’m going to place the beginning at 2006. It was an accident of my employment for the previous several years that I had had exposure to much deeper, danker plumbing than your typical Web application developer is ever likely to encounter. I thought, hey, a lot of the things we do when we make websites, like apply presentation templates, resize images, and even sanitize input, could be construed as “dumb” filters that just operated over bytes of message data. The application code, that actually had to know what was going on, could also be much simpler, because it would be running in a milieu in which all those mundane details were already taken care of. What really kicked things off, though, was a job interview (which I got, by the way!) where one of the interviewers caught me bullshitting about REST, which embarrassed me enough that I felt obligated to go read up about it.

Shouts out to Ian Brown for making me feel sheepish enough to go and read Roy Fielding’s PhD dissertation, which, next to Alexander’s work, is probably one of the most professionally influential documents I’ve ever read.

It turns out that REST, which stands for Representational State Transfer, is a much, much more profound kind of thing than what I thought it was, which was yet another way to get data in and out of websites (though in my defense, most people who have heard the term still seem to think that). Fielding, who can be credited with designing the bulk of the nuts and bolts of how the Web actually functions, envisioned an abstract system of hypermedia resources, which he defined as many-to-many relations between a set of identifiers and a set of representations. These representations would embed the identifiers of the resources they were connected to. Fielding then imagined that the process of, for example, navigating a website (he was explicit that the Web was just one candidate system that could be brought into compliance with this architecture) was a state machine, where each state was identified by a URL and represented by a webpage, with the user (or agent thereof) as the state transition function. If you understood what I just wrote, then you would recognize this as a mind-blowingly elegant way to picture this system.

Three boxes; the first (labeled URIs) contains ellipses “my-manifesto”, “a-flower”, “a-rose”. The second, labeled Resources contains “A Document” and ”A Picture”. The third, labeled Representations, contains “Text” “HTML”, “JPEG Image”. Lines connect the ellipses “my-manifesto” to “A Document” to “Text” and “HTML”, and from “a-flower” and “a-rose” to “A Picture” to “JPEG Image”.

Unlike a file, which has (usually) one identifier and exactly one representation, a resource can have multiple identifiers and multiple representations. (Web resources should actually count the request method as well, though it is not pictured.)
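
The figure translates almost directly into data. Here is a toy sketch of the idea in Python, assuming nothing about how any real server implements it: a resource is an opaque identity reachable through several URIs and renderable as several representations, and serving a request amounts to resolving the URI and then negotiating a representation.

```python
# Many URIs -> one resource -> many representations. All names and content
# here are invented placeholders.
RESOURCES = {
    "doc-1": {"text/plain": "My manifesto...", "text/html": "<h1>My manifesto</h1>..."},
    "pic-1": {"image/jpeg": b"\xff\xd8..."},
}
URI_MAP = {
    "/my-manifesto": "doc-1",
    "/a-flower": "pic-1",
    "/a-rose": "pic-1",   # two identifiers, same resource
}

def get(uri: str, accept: str):
    """Resolve the URI, then pick the first representation the client accepts."""
    representations = RESOURCES[URI_MAP[uri]]
    for media_type in (m.strip() for m in accept.split(",")):
        if media_type in representations:
            return media_type, representations[media_type]
    raise LookupError("406 Not Acceptable")

print(get("/a-rose", "image/jpeg"))                   # same picture as /a-flower
print(get("/my-manifesto", "text/html, text/plain"))
```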

Another elegant feature of REST, as Fielding defined it, is something that bears the characteristically inelegant acronym HATEOAS, which stands for Hypermedia As The Engine Of Application State. (Fielding later renamed it “the hypermedia constraint”, but the new name never really stuck.) Under this regime you are prohibited from knowing the identities and locations of information resources (save, I suppose, for an initial entry point); your only legal move is to follow links. An interesting and valuable feature of this constraint is that because you’re not allowed to know in advance where the resources are, you are liberated from the burden of having to care. This is in stark contrast to the way Web development is still largely organized, which is by location. The conventional way is like saying that if an object is in the fork drawer, then it must be a fork. Under the hypermedia constraint, it is much more important to know what things are than where they are (indeed, as they can be “anywhere”), as well as what is meant by the links that connect them together. So the challenge to me was to figure out how to express that.

The solution that I found to this is the technology called RDF (Resource Description Framework). RDF is the thing that underpins the Semantic Web, the quasi-utopian project Tim Berners-Lee jumped straight onto almost immediately after he invented the original Web in the first place. RDF is another one of these ideas which is devilishly simple, but with extraordinarily complex consequences. The basic idea is that you use URLs (URIs for the pedants) to identify the types of information resources and the meanings of their properties, including the links between them. This gives you two things in particular:

  • Globally unambiguous terms so it was clear what you were talking about (i.e., my formal definition of the class Person is different from yours, but if we both just use Person in our instance data, other people won’t be able to determine which one of ours it is),
  • Control over the domain of discourse (literally), so anybody could put up a vocabulary of terms online that anybody else could use, so rather than us each coming up with our own Person, we could just use foaf:Person, and anybody else with a system that understands foaf:Person could consume our data without changing anything. (And of course, we can extend that with our own semantics, as RDF is more or less object-oriented).

There is a third item which didn’t materialize until later (about 2010 when people finally got their acts together), that the machine-actionable vocabulary could be embedded in its prose documentation—​which naturally would be up on the Web—​so you could click on a term and go straight to where it was defined, which would also be where you point your computer to interpret it. This is how I design all my vocabularies, and it frankly shocks me that anybody would put up with less.
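
To make that concrete, here is a minimal sketch using the Python rdflib library. The person and the homepage are made up, but the foaf: terms are the real, shared ones, and each resolves to its own documentation.

```python
# Describing a person with the shared FOAF vocabulary instead of a homemade
# "Person"; the individual described here is fictitious.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
me = URIRef("https://example.com/people/me")      # hypothetical identifier

g.add((me, RDF.type, FOAF.Person))                # foaf:Person, defined once, reused by everybody
g.add((me, FOAF.name, Literal("Example Person")))
g.add((me, FOAF.homepage, URIRef("https://example.com/")))

print(g.serialize(format="turtle"))
```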

RDF has been declared dead numerous times. The Semantic Web, which RDF was designed to power, indeed never materialized because it was co-opted and marginalized by the very forces it was intended to disrupt. Another major drawback that made—​and continues to make—​RDF hard to work with is a distinct lack of infrastructure: things like RDF Schema, OWL, RDFa, JSON-LD, SHACL, and SPARQL all had to be invented piece by piece. And finally, RDF uses a shitton of URLs, which have to be managed somehow. That’s where I come in.

It was Ted Nelson who coined the term “hypertext”, in 1960. He had a vision for a network of computer terminals he called Xanadu, whose aim was to liberate creative professionals by monetizing their work in the form of royalty micropayments that would get charged every time somebody accessed their copyrighted content—​which could be filleted all the way down to a single letter. The scope of this endeavour is utterly bananas, even by today’s standards—​to say nothing of the early ’60s. The effect of his design, though, is that not only can you link to other people’s work, but you can see everything that links to yours. To this end, Nelson has been a vocal critic of the Web, particularly of its linking system, which is embedded rather than overlaid, and only goes in one direction.

It is worth noting that Xanadu is the greatest vapourware project in the history of computing, with several unsuccessful attempts made over the decades. Nelson himself never learned how to code, at least in any load-bearing capacity, so he was always dependent on others executing his vision. I suspect that if he learned the necessary skills to do it himself, he probably would have realized that his vision was unattainable, and given up.

The thing is, Nelson isn’t wrong. The Web’s linking system does suck, and it sucks for the reason he gives (which in my opinion can be largely palliated, even within the Web’s existing constraints) and one other huge important one: URLs are a pain in the ass to manage.

Tim Berners-Lee is on record saying that the URL is his most important invention, and I agree with him: a little bit of text that can be embedded into any visual or symbolic medium (it can even be read out on the radio), that unambiguously denotes a link to more information. What makes URLs great, though, is also what makes them such a terrible pain: Once you mint one, it goes out into the wild, and you lose exclusive control over it. The result? A phenomenon called link rot.

Link rot is what happens when a Web resource gets moved, renamed, or deleted, but the URL used to refer to it is still out there, linking to nowhere. Follow that link, and you get the infamous 404 error. I reasoned that if you’re going to have a system with zillions of URLs, you’d better figure out a way to make them durable.

With this principle in mind to never break a link, I set out in 2008 to redesign my personal site. I started writing a policy manual that describes, more or less, what I’ve been talking about here, albeit in much greater detail. My objective was to create a true hypertext document in the Nelsonian sense, where individual units of information were pulverized into single scrollbar-free screenfuls, parentheticals and digressions were hived off onto their own pages, the structure was dense in the sense that there were many paths through the information, and every page had a list of everything that linked back to it. The overarching goal with this policy manual was rapid uptake and comprehension of technical information, that didn’t require you to read any more than you had to.

What I found, in the course of writing this thing, was that URLs were the biggest bottleneck by far. What would routinely happen is that I would be writing away, then I’d digress (as one does), so I’d chop out that part and open up a new page for it. Ah, but what to call it? A common, sensible practice is to name your URLs as some derivation of the title of the document. But what if you hadn’t decided on a title yet? Remember, this isn’t your private file system; renaming a URL has consequences. If you’re out to eliminate 404s, you have to treat every URL (at least, that you’ve ever exposed to the public) like somebody out there depends on it.

The fact that I was doing all this by hand was also pretty burdensome. Another side effect of the no-broken-links constraint was that I couldn’t publish until I had a closed set of documents, meaning every link—​at least pointing within the site—​had to have something on the other side of it. I’d set out to write one little item and a week later I’d find myself a dozen documents in, and the whole ensemble still in an unpublishable state. I got to about 40 before losing track of them, and went back to writing ordinary essays. That said, Intertwingler is precisely the kind of thing that would make a project like that bearable.

The conclusion that I ultimately came to was that renaming URLs is okay, as long as your system remembers what they used to be called. And the way you accomplish that is by assigning every resource a very large, randomly-generated identifier, which is minted once and treated as authoritative forever. Then you overlay a nice, friendly, human-readable one on top. If you decide to rename it, you can have the system maintain the association with all the previous names, and redirect all requests appropriately. I mocked this up initially just with plain HTML files, and by recording the renamings in version control. That’s how I ran things for almost a decade, until 2018, when I wrote the first sketch of what was eventually to become Intertwingler.
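
In miniature, the scheme looks something like the following sketch. This is a toy, not Intertwingler’s actual resolver: mint an opaque identifier exactly once, overlay human-readable slugs on top, and remember every slug a resource has ever had so that old links keep resolving.

```python
# "Never break a link": slugs are an overlay on permanent random identifiers,
# and renames are remembered rather than forgotten.
import uuid

canonical = {}   # slug -> UUID (current and historical)
preferred = {}   # UUID -> current slug

def mint(slug: str) -> uuid.UUID:
    ident = uuid.uuid4()            # minted once, authoritative forever
    canonical[slug] = ident
    preferred[ident] = slug
    return ident

def rename(old_slug: str, new_slug: str) -> None:
    ident = canonical[old_slug]     # the old slug stays in the table
    canonical[new_slug] = ident
    preferred[ident] = new_slug

def resolve(slug: str):
    """Return (status, current slug): 200 if current, 301 redirect if renamed."""
    ident = canonical[slug]         # a KeyError here is your 404
    current = preferred[ident]
    return (200, current) if current == slug else (301, current)

mint("untitled-digression")
rename("untitled-digression", "notes-on-url-durability")
print(resolve("untitled-digression"))      # (301, 'notes-on-url-durability')
print(resolve("notes-on-url-durability"))  # (200, 'notes-on-url-durability')
```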

Intertwingler began life as a thing called RDF::SAK, for Resource Description Framework Swiss Army Knife. It was an inchoate snarl of code that was really just a bundle of “good ideas that some Web infrastructure should do someday”. Effectively, it was a static website generator. It drove all my client extranets, and I’d just tack new capabilities onto it every time I needed it to do something it didn’t already do. This puttered along for about five years, until I got a grant in 2023 from the inaugural Summer of Protocols program, which enabled me to transform it into a fully-fledged application server. (I actually decided late in the program to make it the most ambitious version of itself, which cost me the rest of 2023, most of 2024, and part of this year too. I credit the additional support to Polyneme LLC, who are eager to use Intertwingler in their projects.)

I’ve found that using Intertwingler has forced me to re-learn how to make websites. Whereas the ordinary✱ way is something like, you determine all the sections you want, and then you determine all the pages that go in them, Intertwingler doesn’t require you to make those kinds of commitments. “Site structure” (at least qua “sections”) is really just a fiction anyway. You can decide on one when you’re good and ready.

✱ I’m assuming people still do it like this. I have not made a website like this in a very, very long time.

When working with Intertwingler, you really start having to think in terms of resources rather than pages. A resource, again, is simultaneously less and more than a page. A page (which I will concede is technically still a resource) is almost always a composite object with exactly one representation (i.e., HTML). The ideal resource, by contrast, is an elementary resource with potentially many representations (e.g., HTML and JSON). So there’s potentially multiple resources in a page, which can be repurposed and composed into other pages. This, moreover, brings URLs closer to a one-to-one relationship with units of actual content.

What you find, moreover—​what I found, at least, when making Sense Atlas—​is that you can get away with not having to write much (if any) server-side code. Indeed, I’ve been jokingly characterizing Sense Atlas as “barely software”, because all it’s doing is dressing up the data embedded in the pages it downloads. The only server-side code I had to write for Sense Atlas is a handful of what I called “catalogue resources”, which can be understood as a microservice that disgorges an inventory of what’s on the site, who’s logged in, etc., so it can orient itself. (I don’t consider that exclusive to Sense Atlas though; it would be useful for any app running on Intertwingler.) Presentational templates are routed by matching the type of the current resource. All data fetching is performed exclusively by following links (typed of course so it can tell what they mean), and all mutation operations consist of just adding and removing statements from the graph.

That last one has saved me an unbelievable amount of time. I wouldn’t operate it out on the open internet though, or in any situation where you can’t trust people to behave—​at least not yet. The Sense Atlas private alpha is all going to be private clients anyway, and they’re each going to get their own instance. But data integrity and fine-grained access control are both on the high-priority list.
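
For a sense of why that mutation model is so economical, here is a toy sketch using rdflib, with an invented vocabulary and resource: every write, whatever the form on the page looks like, boils down to removing some statements and adding others.

```python
# Mutation as a graph patch: take these statements out, put these ones in.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("https://example.com/vocab#")     # hypothetical vocabulary
task = URIRef("https://example.com/task/123")    # hypothetical resource

g = Graph()
g.add((task, EX.status, Literal("in progress")))

def apply_patch(graph: Graph, removals, additions):
    """The entire mutation protocol, in spirit."""
    for triple in removals:
        graph.remove(triple)
    for triple in additions:
        graph.add(triple)

# e.g. what a form submission marking the task complete might reduce to:
apply_patch(
    g,
    removals=[(task, EX.status, Literal("in progress"))],
    additions=[(task, EX.status, Literal("complete"))],
)
print(g.serialize(format="turtle"))
```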

I’ll make one last remark, and that’s the fact that Intertwingler is designed around an RDF graph database as its main source of state, which is a completely different beast if SQL is what you’re used to. Graph databases are definitely pretty bare-bones compared to SQL in many respects, but unlike SQL, which requires copious manual labour even when heavily instrumented (e.g. by ORM), you pretty much just copy and paste the data from inside the graph out onto the Web.

Diagram showing the flow of an HTTP request-response pair traveling through Intertwingler

URLs are resolved to their canonical identifiers, then the request passes through a series of transforms before hitting a stack of content handlers, then the response is passed out through another series of transforms.
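
In caricature, the flow in that diagram looks something like the sketch below. This is generic Python standing in for Intertwingler’s actual Ruby internals: resolve the URL to its canonical identifier, run the request transforms, let the first content handler that responds produce a response, then run the response transforms on the way out.

```python
# A caricature of the request/response pipeline pictured above; every name
# here is invented for illustration.
from typing import Callable, Optional

Handler = Callable[[dict], Optional[dict]]
Transform = Callable[[dict], dict]

def serve(request: dict,
          resolve: Callable[[str], str],
          request_transforms: list[Transform],
          handlers: list[Handler],
          response_transforms: list[Transform]) -> dict:
    request = {**request, "uri": resolve(request["uri"])}  # canonical identifier first
    for transform in request_transforms:                   # e.g. sanitize input
        request = transform(request)
    response = None
    for handler in handlers:                               # stack of content handlers
        response = handler(request)
        if response is not None:
            break
    response = response or {"status": 404, "body": b""}
    for transform in response_transforms:                  # e.g. apply templates, resize images
        response = transform(response)
    return response

# A trivial "site": one canonical resource and one response transform.
print(serve(
    {"uri": "/a-flower", "method": "GET"},
    resolve=lambda uri: {"/a-flower": "urn:uuid:1234"}.get(uri, uri),
    request_transforms=[],
    handlers=[lambda req: {"status": 200, "body": b"a rose"}
              if req["uri"] == "urn:uuid:1234" else None],
    response_transforms=[lambda res: {**res, "body": res["body"].upper()}],
))
```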

I could drone on about Intertwingler’s resolver, or its handler architecture, or its transformation queue, or its content-addressable store, but I’ve already written over 2,500 words here (plus almost 3,200 for Sense Atlas). So I’m going to have to save some of it for later.

Retiring the Prototype

The title of this newsletter is Two Christenings and a Funeral, so this is the funeral. I have officially decommissioned the IBIS tool prototype that taunted me with being just shy of serviceable for twelve entire years before I supplanted it with Sense Atlas. (Okay, maybe eleven and a half.)

In an old talk from 2002, Alan Cooper entreats us to imagine a world where we retire software with dignity when it has outlived its usefulness, so that’s what I’m doing here. I created the original prototype for the mundane purpose of testing a protocol I had designed for getting graph operations from the browser to the server using nothing but ordinary HTML forms—​the thing that I’ve mentioned twice already that has saved me a ridiculous amount of time. The test of the protocol was to make a simple app with it, to see if that surfaced any gaps or impossible constructs. I threw it together in the last two weeks of October 2013; I remember because it was Halloween when I finished it.

Even though the protocol implementation was successful, as an app in its own right, the IBIS tool prototype was constitutionally limited in its capabilities. Simply put, expanding it in any significant way was just way too much work, and the result would have been suboptimal. Specifically, the RDF framework for the language I had written it in (Perl) lacked the component called a reasoner, which infers additional graph statements from the inheritance structure of the terms explicitly asserted in the database. As such, all the inferencing the tool needed to function had to be hard-coded. What this meant in practice was that adding a new type of resource entailed transcribing hundreds of lines of inference rules, the kind of thing that the software should be giving you (better, faster, and with no mistakes) for free. As soon as I had written this prototype, I realized I had two options:

  1. Rewrite it in a language with an RDF framework that has a reasoner,
  2. Try to write a reasoner in Perl.

In my defense, I didn’t know I needed a reasoner until I had made the prototype (with a schwack of hard-coded inference rules), and realized that’s what a reasoner did. Also, the only language people seem to write RDF reasoners in is Java (although there is an abandoned and broken one for Python), and, as it turns out, Ruby.
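
For anybody wondering what a reasoner actually buys you, here is a minimal illustration using rdflib, with an invented vocabulary. It applies a single rule (propagate rdf:type up the rdfs:subClassOf hierarchy) until nothing new falls out; a real reasoner handles many more rules than this, which is exactly why you don’t want to hand-code them.

```python
# One reasoning rule, by hand: if something is a Bug and Bug is a subclass of
# Issue, conclude that it is also an Issue. The ex: vocabulary is made up.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("https://example.com/vocab#")
thing = URIRef("https://example.com/thing/42")

g = Graph()
g.add((EX.Bug, RDFS.subClassOf, EX.Issue))   # schema
g.add((thing, RDF.type, EX.Bug))             # instance data

def infer_types(graph: Graph) -> None:
    """Propagate rdf:type up rdfs:subClassOf until a fixed point is reached."""
    while True:
        new = set()
        for s, _, klass in graph.triples((None, RDF.type, None)):
            for _, _, superclass in graph.triples((klass, RDFS.subClassOf, None)):
                if (s, RDF.type, superclass) not in graph:
                    new.add((s, RDF.type, superclass))
        if not new:
            break
        for triple in new:
            graph.add(triple)

infer_types(g)
print((thing, RDF.type, EX.Issue) in g)      # True, and nobody had to write it down
```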

Writing a reasoner is hard—​like really hard—​and writing one in Perl is unwise. While it had been my daily driver for over two decades, by the twenty-teens it was looking a little long in the tooth. Once the lingua franca of the Web—​which is why I know it—​Perl’s best days are definitely behind it. I suppose, in a way, that this eulogy is also for my relationship with the programming language I had used for most of my career.

For the span from the end of 2013 through mid-2017, the IBIS prototype more or less sat idle, because I couldn’t really use it. I also couldn’t expand it into a useful state, and I didn’t have the time to rewrite it. I did actually, in 2017, take a fairly serious look at writing what would have been an Intertwingler-like thing in Clojure—​as sort of a Waldo (those robotic arms for handling radioactive material) for Java—​but I was still staring down the prospect of spending months writing the various wrappers and interfaces I needed before I could even begin. Not the kind of thing you can afford to do when you spend half your waking life chasing consulting clients.

It’s astonishing how long a project takes when you don’t have the resources to do it full-time. It was clear, certainly by 2018, but even much earlier, that a tool like this would need a novel application server, purpose-made to accommodate it. Even still, I could never really get up the gumption. I remember deciding that 2023 would be The Year™, even before I got the SoP grant; even before I knew it existed. In retrospect, I have no idea how I would have wrangled it if I had had to get that money some other way.

Even after the summer of 2023 it was a slog. As mentioned, I decided to go large with Intertwingler, which I strongly believe was ultimately the right move, though it pushed “summer” out until at least October. Then my girlfriend very suddenly (and probably illegally?) lost her home, so I put her up in my tiny Norwegian-prison-cell of an apartment until she could marshal a new one. That manifestly degraded my productivity for the rest of that year.

2024 was a bit of a blur. To port the IBIS tool over to Intertwingler, I had to first split the front-end from the back. This I accomplished in late January, but didn’t manage to pick it up on the Intertwingler side until late August. It’s funny, the story I tell myself of 2024 was a year of hard R&D mode, but the numbers suggest it was client work (and the pitching thereof) that was occupying the bulk of my time. The first boot-up of what was to become Sense Atlas didn’t happen until late January of this year, and I wouldn’t consider it to have been fully ported over until the end of March.

I’m just going to designate Q2 2025 as one-more-thing season. The first is that I have a client who is really eager to use the resource planning functionality that I designed, that had been lying in wait for a suitable substrate for nearly two decades (though he only found out about it a couple years ago), and it was only as of April that I could even consider finally implementing it. Unlike the original prototype, which only took two weeks end to end, this took the entire month. That said, the business-level dynamics of IBIS were worked out almost 60 years ago, and the computer-specific details almost 40. This process model stuff, by contrast, is something I invented from scratch, and I’d so far not had a way to try it out. That was actually the impetus to finally bring Sense Atlas (and by extension, Intertwingler) out into the open: I had reached the limit of what I could accomplish on the workbench.

There is a video of me on May 9th where I show Sense Atlas—​an old friend helped me pick the name—​but it’s still running on my laptop. The first bona fide deployment on the real live internet was actually May 25th (though the video from the 29th is better).

Finally, there was the matter of actually getting all this the hell online. Heretofore I had just been running Intertwingler on my laptop, because I hadn’t fortified it yet for the open internet (and still haven’t, so be gentle). For one, Intertwingler is supposed to drive arbitrarily many sites, but I hadn’t tested that yet. (Luckily that only required a light massage; it pretty much worked right away.) Another matter was that the real estate where Intertwingler was intended to go was already occupied. You see, the prototype had no way to express more than one universe of discourse (designing how that was to function is how I spent the first couple months of this year). As a hack, I had made it so a fresh instance would be automatically spun up on a wildcard domain. What this meant was I had 40 domains’ worth of data to coalesce into half as many (three public, including senseatlas.net, and one private; plus 16 client extranets that thankfully mapped one-to-one). That took the rest of May, but I’m proud to proclaim that Sense Atlas is now online, and I can focus completely on maturing it into a real product.

A question many people might have is why didn’t I try to get funding? The answer is I did try. I wasted five years trying to get people interested. I ultimately found that it was nigh-impossible to get people to understand, let alone value what I was trying to accomplish:

  • People are just ostensibly not acclimatized to the value of knowledge graphs in general,
  • If you don’t yet understand why structures like these are valuable, you won’t until you see one full of content that (at least tangentially) concerns you.

…so I was in the awkward position of trying to sell a thing for which the path of least resistance was to create the genuine article. Fake-it-till-you-make-it was emphatically not an option, because it cost more effort to fake it than to just do it for real. Only now do I feel confident enough to sell this thing, which is why I finally put it online—​even though there are still numerous flaws that desperately need to be rectified, I can confidently say that my vision has been realized.

There’s a sort of ship-of-Theseus phenomenon happening in Sense Atlas, in that the ghost of the original prototype still lives on inside it. The old code, however, can now be laid to rest. Godspeed.

Epilogue

Sense Atlas is actually serviceable today. I’ve been using it to plan its own development for over a month now, and I’m doing that out in the open, on senseatlas.net. I’m going to be provisioning Sense Atlas as a free add-on to my regular consulting practice until, tentatively, about this time next year. I can start taking clients for this immediately, so if you think I can help you with your business—​with or without Sense Atlas—​do reach out, either to projects@methodandstructure.com or by replying to this newsletter.

Intertwingler, on the other hand, is still a dog’s breakfast. However, the dynamics have now changed from supply-pushed to demand-pulled. Since most Sense Atlas issues are really Intertwingler issues, it’s now crystal-clear what the priorities✱ are. I also now have Sense Atlas to plan it. On top of fleshing out its capabilities and improving its performance, there’s still a lot of work to do in tests, tutorial documentation, and a serviceable management interface, as well as half a dozen diplomatic missions to other open-source developers to patch things I’ve hacked to make Intertwingler function. A reasonable target for an installable open-source package (at least outside of a Docker image) is probably the end of this year.

✱ For one, it’s still really slow because everybody knows you don’t work on performance until the thing works correctly (unless it’s so slow you can’t use it at all), so speeding it up is now priority #1.

At any rate, the seal is now broken. The eggs have hatched. The ancestors are in the ground. Time to get back to work.
