
The Making of Making Sense

May 16, 2026

Cutting the Gordian Hairball

A meditation on the close of a veritable odyssey, and how Intertwingler—the application server I'm creating—is actually a frontal assault on link rot.

As I sit mere hours away from completing a task that I have known for years I would eventually have to do, which has been “urgent” since August 1 of last year, which has been “critical” since around Christmas, and which has turned out to be by far the most surreptitiously punishing code I’ve had to write in ages, I figure the most on-brand thing I can possibly do is to halt all programming activity and write it up.

A gigantic dependency graph of all the crap I have to do.

This is the hairball in question: a dependency graph of most of the product work I’ve set out to do in the next while. The thing I’m hours away from finishing is the red object on the very left. I suppose I could have waited to write this until I was finished, but I probably wouldn’t be as motivated because I’d want to get to using this new capability to do all the things that have been waiting for it to be finished.

I have written many times that Intertwingler, my nascent application server, began life as “a bundle of opinions and good(?) ideas about how the Web ought to function”, but I haven’t gone into much detail about what any of those opinions or ideas are, or what outcomes I believe the Web ought to function for.

The term I’ve been using to describe the goal of Intertwingler has been dense hypermedia, which is succinct, but only a handful of people in the world seem to understand what that implies. I’ve tried to tell the story about how it contrasts with sparse hypermedia, which is how I see the Web as we’ve all experienced it for the last three decades, but again, unless you’re already plugged into the theory, it still doesn’t convey the value of the things I’ve been working so hard to change.

One of the challenges of working from the inside out is it takes years to arrive at language that speaks value to a general audience—​or at least “general enough” within an initial constraint of being Western (or at least anglophone), educated, urban, white-collar, et cetera. The motivation for developing this capability was better business information infrastructure, by which I mean better ways to assess the environment, manage relationships and risk, coordinate, strategize, plan, prioritize, recall (policies, procedures, concepts), and so on. Of all the many ways to articulate the veritable constellation of value propositions, I’ll pick two to focus on:

  • Understanding more while having to read less,
  • Conveying more while having to document less.

This is deeply personal to me. I feel—​keenly, painfully—​like I have to read too much to understand what’s going on, and I feel like I have to write too much in order to communicate. I feel like I have to read too much because a lot of what I read ends up being filler, if it isn’t stale, obsolete, or just plain garbage. I feel like I have to write too much—​or take your pick of whatever expressive medium—​because I routinely have to say, to a first approximation, the same things over and over, in different ways, to different people, over different channels. In proliferating these slightly varying and highly perishable messages, I exacerbate the problem of clearly communicating accurate and current information—and so does everybody else in that position.

And AI makes this situation even worse, because most of what it does is generate industrial quantities of florid, disposable, unstructured, and potentially misleading text.

The way to solve this problem, in my opinion, is to create a substrate that makes arbitrarily small pieces of information globally available for reuse, by people always, and by computers whenever possible—​and importantly, to do so in a way that you can tell where any piece of it comes from.

The World-Wide Web, as a medium, affords this vision in principle, but in practice there is a mass of habit and inertia that undermines bringing it to pass. I diagnose this situation as being largely due to satisficing conditions that we have ultimately outgrown—​and I say we here because I have trouble believing I’m the only one on this planet encountering this difficulty. The Web is “good enough” as-is for a certain set of outcomes that are desirable to most people, but my objectives fall outside that envelope. Intertwingler, therefore, has been an exercise in finding out how much convention I have to rip up to get the effect I’m after. The answer is apparently quite a bit, but not actually all that much in absolute terms. It’s (still) small enough to be a one-person job—​indeed, I’m not sure how it could have been anything but a one-person job, since I myself am learning as I go about how Intertwingler (and soon, Intermingler) ought to exist in the world.

Understanding more while having to read less

How I interpret, and therefore act toward, the goal of “understanding more while having to read less” is to supplant certain functions that would ordinarily be the role of the text with other techniques and media—​especially graphics and interactive processes. I submit that conventional business—​and academic—​communication is cluttered at its most basic level. Documents, and document-adjacent artifacts like this newsletter, tend to carry a lot of bulk. They have poor internal addressing—​by convention or even inherent structure—​and the information they contain is easily misplaced. This makes it easier to copy their contents than to reference them, which introduces errors and inconsistencies, especially as the contents of these documents—​which are fixed—​diverge from reality over time. This is a paper-centric view of the world, and we have had in our possession, for decades at this point, information technology that is categorically, paradigmatically better.

I want to be clear that paper, and paper-mimetic artifacts, still very much have a role to play. I just believe that paper’s time as the dominant paradigm for business information infrastructure is manifestly over.

One thing the Web does infinitely better than paper is that it affords putting a document at an address on the network that anybody in the world—​it is the World-Wide Web, after all—​can instantly access. And, if you update that document, the next time people access it, they will see the new version. This capability alone made the live document a meaningful concept. Not only is it unlike print in that the act of publication itself no longer dominates the authorial and editorial cycle, but it’s also unlike print—​for which even minor errors are a disaster—​in that you can just reach into the document and fix it, which also instantaneously fixes it for everybody. Like magic.

Granted, this introduces the new problem of making sneaky substantive changes to a text while nobody is looking, and not signalling that changes have been made or what had been written before. That said, this is a problem that has been solved in principle, and can be solved in practice. I view this as just another thing that Intertwingler can guarantee.

The other major advantage of the Web over paper, that everybody is familiar with, is its ability to link between resources. The linking mechanism also extends to embedded images and audiovisual media, as well as code and other assets for interactive interfaces and presentations. Linking is great because it affords a concrete mechanism to dereference information, something that in the paper paradigm—​all manner of digressions, parentheses, footnotes, endnotes, marginalia, glossaries, indices, and bibliographies—​is primitive and clumsy. While paper has to interleave these features all into the same document, the Web can hive them off, and thus take up much less space from the point of view of the reader.

Paradoxically, I’m inclined to describe the Web as having effectively unlimited space, and because of this, you can make any view of it at any given instant take up a lot less of it—​but this is both a blessing and a curse.

How the Web falls down in this regard—​and this affects both of these capabilities—​is that its addressing mechanism is notoriously unreliable. This has been a known problem for decades, and apparently we have yet to see it conclusively solved. Granted, you can’t solve it on your own for the entire Web, but you can make continuity guarantees about the addresses you expose, and you can take steps to palliate, for your own users, the failures of your neighbours. Intertwingler is such an attempt, and one with which I hope to lead by example.

The more you think about it, the more you realize that the inherent dodginess of Web URLs really is why we can’t have nice things. After all, why link to a page if there’s an even chance it won’t be there when your readers go to click on it? Or if it is there, will it be the same thing as it was when you linked it? These two related phenomena, respectively called link rot and content drift, are in my opinion the root cause of why the Web, despite being so successful as a vehicle for content—​and especially software—​still makes for such lousy hypertext.

What I’m saying again, roughly, is that if you could make certain pledges to your readers:

  • continuity guarantees for the addresses you expose on your Web properties,
  • continuity guarantees for links between resources on your own Web properties,
  • tools and procedures to help mitigate broken links that lead outside your Web properties…

…then the hypermedia features that afford “understanding more while having to read less” would actually be worth the investment. Why? Because the strategy depends on shrinking the individual pages and cranking up the density of the links between them by a couple of orders of magnitude. And if you’re going to do that, the links have to be reliable.
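To make the first of those pledges a little more concrete: in practice it can be as mundane as keeping a record of every address you have ever exposed and answering retired ones with a permanent redirect rather than a 404. Here is a minimal sketch in TypeScript (Node), with made-up paths; it is only an illustration of the idea, not how Intertwingler actually does it.

```typescript
// Minimal sketch of an address-continuity shim (illustrative only; not
// Intertwingler's actual mechanism). The idea: never let a previously
// exposed path die. Either serve it, or redirect it to its successor.
import { createServer } from "node:http";

// Hypothetical record of retired paths and where their content lives now.
const successors = new Map<string, string>([
  ["/2019/old-essay", "/essays/gordian-hairball"],
  ["/wp/?p=123",      "/essays/gordian-hairball"],
]);

const server = createServer((req, res) => {
  const path = req.url ?? "/";
  const target = successors.get(path);
  if (target) {
    // 308 tells clients (and crawlers) the move is permanent, so they can
    // update their own records instead of hitting a dead end.
    res.writeHead(308, { Location: target });
    res.end();
    return;
  }
  // ...otherwise fall through to whatever actually serves the content.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`placeholder content for ${path}\n`);
});

server.listen(8080);
```

The substance here is the mapping, which is data you curate deliberately rather than something you leave to chance; the server part is trivial.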

The particular characteristic of hypermedia I’m most interested in, at least in the near term, is the ability to address a superposition of audiences. When you’re constrained to static text that is nominally intended to convey some substantive information or other, you can only realistically address one audience at a time. This is because—​leaving aside the question of whether or not they’re even interested in the subject matter—​different people have different levels of understanding and varying expectations of intellectual rigour. In other words, some people will need concepts explained and elucidated, while others will insist on detailed defenses of your arguments—​backed by receipts. These components take up space just like any other, and so a conventional document that contains both will be extra long and satisfy nobody. It’s no problem, however, to pull this off using hypertext. And for the straight shot down the middle—​somebody who both understands all the relevant concepts and already buys all the arguments—​you don’t have to waste any of their time getting to the point.

In business contexts, furthermore, there are also often big egos in play. There is therefore great value in affording a person who doesn’t understand what’s being said the ability to educate themselves discreetly by going down a side track, and getting up to speed without losing face. Conversely, the wonks and the lawyers who insist on reading the fine print can follow it down a side track of their own, so you don’t have to clutter the central message for everybody else with caveats, footnotes, and citations.

There is another aspect of hypermedia that I am particularly keen about, and that’s using formal mathematical structures to convey information, tell stories, and undergird argumentation. This entails drawing diagrams and creating interactive behaviour out of structured data objects embedded in the content. It also relies, even more heavily than mere definitions or footnotes do, on what a given link means. Again, this is something that isn’t worth doing under conditions of link rot (and content drift, which for the time being I’ll consider a special case of link rot), and in my experience is essentially unmanageable with conventional Web development techniques. Another reason why I created Intertwingler.

Conveying more while having to document less

In many professional settings, the process of documenting is typically considered “extra” work on top of whatever it is you “actually” do. This is because documentation typically has to chase after whatever “real” process occurred, and write it down. Later on, if policies, procedures, or states of affairs change, you then have to chase down all the places the old information has gotten to, and update them all. There’s only so much chasing one can afford to do, and so some of it invariably gets neglected. When the whole thing gets too caked up with entropy, there is a strong impulse to just toss it all and start over. At this point, your organization loses valuable memory, because it has become inseparable from obsolete crud. In some contexts, one is even prompted to ask what the point of documenting even is, if it’s just going to fall out of sync before anybody reads it.

I did a talk at a conference a few years ago in which I said you can imagine different pieces of information as having varying degrees of perishability, like different ingredients for a meal, and a document is like the meal itself. And just like a meal, a document is often more perishable than the most perishable ingredient you put into it. Even if it doesn’t kill you, it could be like a cheeseburger that’s been in a take-out container for 20 minutes: cold, soggy, stiff, and generally unpalatable.

In my opinion, the root cause of this problem is that most documenting happens in documents—​or document-adjacent artifacts. What these have in common is that they tend to be monolithic objects with very little formal structure and poor internal addressability, and they lack a fundamental capability of hypertext, which leaves it far easier to copy information into a document than to reference it.

I speak, of course, of the capability known as transclusion, which is the embedding of arbitrarily narrow slices of information—​potentially recursively—​into other documents. Not only is the transcluded content something you nominally only have to write once and can reuse many times over, but any changes you do make to it propagate automatically. Despite the fact that true seamless transclusion was designed and implemented in hypertext systems long before the Web existed—​and plenty of ad-hoc solutions have been invented on top of the Web itself—​there is still no sensible standard mechanism for doing transclusion properly. Once again, I’m inclined to point the finger at link rot.

There have been proposals over the years, for example for “seamless” <iframe> tags, but the story typically goes something like blah blah security, blah blah how do you integrate the layout. Moreover, prior to stuff like React, most transclusion was taken care of on the server side. Client-side behaviour that generally passes for transclusion is also typically constructed from API endpoints that are specified in advance, rather than links in documents. As such, I wouldn’t consider contemporary specimens of transclusion to be especially ambitious.
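To make that contrast concrete, here is a rough browser-side sketch of what link-driven transclusion could look like: the document’s own links, rather than pre-arranged API endpoints, determine what gets pulled in. The data-transclude attribute is hypothetical, not a real standard, and the sketch glosses over everything that makes the real problem hard (security, layout integration, and of course whether the target still exists).

```typescript
// Purely illustrative: transclusion driven by links already present in the
// document, rather than by API endpoints agreed on in advance.
async function transclude(root: ParentNode = document): Promise<void> {
  const anchors = Array.from(
    root.querySelectorAll<HTMLAnchorElement>("a[data-transclude]")
  );
  for (const a of anchors) {
    const url = new URL(a.href, document.baseURI);
    // Only works same-origin or where the target opts in via CORS, which is
    // part of why "seamless" transclusion on the open Web is a hard sell.
    const res = await fetch(url.href);
    if (!res.ok) continue; // leave the ordinary link intact if the target is gone
    const doc = new DOMParser().parseFromString(await res.text(), "text/html");
    // A fragment identifier selects a slice; otherwise take the whole body.
    const slice = url.hash
      ? doc.getElementById(decodeURIComponent(url.hash.slice(1)))
      : doc.body;
    if (slice) a.replaceWith(slice.cloneNode(true));
  }
}
```

Notice that the moment the target address stops resolving, the whole exercise degrades back into an ordinary (broken) link, which is exactly the dependency on reliable addresses I keep harping on.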

My thinking on the matter goes like this: The reason why there’s no proper transclusion standard is that there isn’t a corpus of content to be transcluded—​at least outside of individual systems with their own idiosyncratic mechanisms. There’s no corpus of transcludable content because there are no (common, easily-accessible) tools for producing it. And there are no common tools, because leaving Microsoft’s hegemony (in this case over what it even means to communicate in a business context) aside, any tool that afforded composing transcludable elements would need a way to manage them. And all that falls apart if the addresses by which you locate them are fundamentally unreliable.

What I have found I needed, for this repository of arbitrarily small, finely-addressable pieces of information, was something that could be available on the network, to share with others and to maintain consistency across my own numerous display surfaces. In other words, what I need is a Web server. In fact, I’m inclined to make a pretty radical claim here: if you insist on treating a Web server as something you upload files to from your computer, you will forever be hamstrung. Files and file systems have too many pathologies (strict hierarchical structure, in-situ writes, arbitrary and mutable naming schemes, poor to nonexistent internal addressing, abysmal data semantics…) and they export those pathologies to systems that try to accommodate them. The file system can be used for backups and for projections, but the authoritative content should live on the Web. I’m sure I will incur vehement denunciation for this position (and I can already anticipate who from), but I think it’s time to leave files—​at least as the canonical interface to information storage—​behind.

This, incidentally, is why I am ultimately unsatisfied with the likes of Logseq and Obsidian.

While repairing the reliability of Web URLs does lay the groundwork for new authoring experiences, it also introduces new quandaries. For example, the goal is to create atomized chunks of information that are not devoid of context, as some people assume they must be, but rather written for multiple contexts at once. That entails being able to see every place your content is referenced, so you can better knit the edges together. That’s a formidable challenge, but it’s one I’m more than elated to take on.
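That “every place your content is referenced” requirement boils down to maintaining a reverse index of links alongside the forward ones. A minimal sketch follows, in TypeScript and with made-up URLs; it says nothing about how Intertwingler actually stores or exposes this.

```typescript
// Minimal sketch of a reverse-link ("what references this?") index.
class BacklinkIndex {
  // target URI -> set of URIs that link to it
  private incoming = new Map<string, Set<string>>();

  /** Record that `from` contains links to each URI in `targets`. */
  addDocument(from: string, targets: Iterable<string>): void {
    for (const target of targets) {
      let refs = this.incoming.get(target);
      if (!refs) this.incoming.set(target, (refs = new Set()));
      refs.add(from);
    }
  }

  /** Every place a given resource is referenced from. */
  referrersOf(target: string): string[] {
    return Array.from(this.incoming.get(target) ?? []);
  }
}

// Usage: after extracting a page's outbound links, feed them in.
const index = new BacklinkIndex();
index.addDocument("https://example.com/a", ["https://example.com/b"]);
index.addDocument("https://example.com/c", ["https://example.com/b"]);
index.referrersOf("https://example.com/b");
// => ["https://example.com/a", "https://example.com/c"]
```

The hard part, of course, is keeping an index like this current as content changes, which is yet another thing that only pays off when the addresses themselves hold still.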

None of what I’ve written here so far, however, engages head-on with the central problem of documentation, which is that it’s ostensibly condemned to forever chase after the “real” business processes and write down what happened. This seems backwards to me. If anything, documenting a process should drive that process whenever possible, because that will make the act of documenting essential, instead of “extra work”. That’s something we have the technology to do; we just haven’t sufficiently shifted our paradigm to do it.

And it’s kind of ironic, because we’re very close. Take Slack, for example. One thing Slack does (and Discord, and Zulip, and presumably Teams—​although mercifully I’ve never had to use it) over its predecessor, IRC—​I mean beyond custom emojis and GIFs—​is that every message gets its own URL. It’s a pretty reliable one too, if you bracket the fact that Slack’s business model is to hold those messages for ransom, requiring you to pay an admission fee for everybody who’s ever going to follow a link to one of those messages, in perpetuity. That fact alone is enough to undermine Slack’s original stated mission (and backronym): to be a “Searchable Log of All Communication and Knowledge.”

Of all these new-generation work-chat products, I actually like Zulip the most; it doesn’t have this problem. That said, I still mostly find it unsatisfying, and I’ll tell you why.

The main impediment to documentation driving business processes is what I view as an artificial (and indeed, anachronistic) distinction between messages and artifacts. This distinction is very real in the world of paper: messages are generally held to be ephemeral and targeted—​and thus assumed to be disposable—​whereas artifacts are meant to be durable, often widely-circulated, authoritative information resources. The thing is, on a computer, every message is also an artifact—​at least until you delete it.

So what I’m suggesting here, is to flip this around: make an interface for creating artifacts that are also messages. This is, at root, what I’m trying to accomplish with my eventually-to-be-retail-ish product, Sense Atlas. Instead of a chat-like structure which is primarily temporal, I am creating a collaborative environment where the elementary information objects are connected by semantic relation: what the elements mean to each other. And to create an environment like Sense Atlas, I ultimately needed a substrate like Intertwingler.

Where It’s At, Then

What has taken me almost a year to get around to doing—​albeit mostly not working on it because it turned out to be ten times the slog I had anticipated✱, plus client work and other endeavours—​is Intertwingler’s internal caching mechanism. It needed to be made from scratch for numerous Reasons™, a discussion of which was originally the purpose of writing this newsletter (I was going to put it out on the other feed, you see). I’m actually happier, though, that I wrote this first.

✱ There is the additional irony, that I’m sure I’ve mentioned before, that Sense Atlas—​as in the thing that is currently broken that this caching infrastructure is supposed to fix—​is purpose-made to help with resource planning and effort estimation.

As of this writing, I still have about a hundred lines of code left to write on this caching infrastructure before I can fire it up. It will for sure break all over the place—​as all new, untested software does—​and I’ll have to scurry around fixing it, but the end is in sight. The first thing you’ll notice is that Sense Atlas will no longer be excruciatingly slow to operate. The caching infrastructure also affords finishing other outstanding work that is keeping me from running all of my Web properties off of a live instance of Intertwingler (which I’ve been waiting to do for years), as well as all the other things that have to happen that will finally bring Sense Atlas to the open market.

Note: clients already get their own instance of the alpha version as a matter of course.

As a final note, I fund the development of Intertwingler and Sense Atlas through paid subscriptions to my newsletters, teaching software teams the techniques I use to create these products, client projects, and advisory services for technical leaders. If you are in the market for anything like that, or know anybody who is, I hope you keep me in mind, and share this material widely.

I’m also teeing up the eventual separation of Intertwingler into two parts, the other being a thing I’m tentatively calling Intermingler. This I imagine as a piece of high-performance infrastructure that isolates the anti-link-rot properties currently vested in Intertwingler, as well as a mechanism for what I call intelligent heterogeneity—​the ability to seamlessly mix back-end services from various vendors and/​or written in various programming languages, by crisply defining a set of standard interfaces over which they can communicate. Because I see Intermingler as being much more legible to people who work with the Web—​and therefore to more conventional funding strategies—​I intend to start shopping it around for funding options later this summer.

I also have in the pipe the technical discussion about creating this caching infrastructure, which I’ll be putting out on the Method & Structure newsletter, so if that’s your jam, subscribe to that if you haven’t already.

I’ll send you off with my latest morning warmup video, which was relatively well-received.
