
The Making of Making Sense

September 12, 2025

Why Pivot to Protocols

The current state of Sense Atlas (well, more accurately Intertwingler), and an announcement about office hours.

Since my last newsletter, in addition to expanding and improving Sense Atlas—​my knowledge graph tool I’m referring to as an organizational cartography kit—​I’ve been hard at work crafting new client service offerings around it. While Sense Atlas probably has at least another year to go before anybody on the internet will be able to come along and plug their credit card into it, for private clients in a supervised capacity, it’s perfectly serviceable as-is.

It really is humbling going from something that is merely technically internally consistent, to an artifact that is roughly product-shaped, to a product with the level of finish that the typical retail user has come to expect. I consider myself to have just achieved “product-shaped”.

This situation doesn’t leave me a lot of time to write, since I find it takes me at least a full day to produce one of these. As such, I spend my interstitial time on BlueSky (and I also monitor Mastodon), and on mornings when I wake up with a coherent thought in my head, I’ll cut a YouTube video.

lines of code

My morning warmup series gets a new dispatch whenever I roll out of bed with a spiel in my brain that’s coherent enough to talk about. The rules are: one take, no script, no longer than ten minutes, while I finish my coffee. Here’s one from a couple weeks ago about how vibe coders are using lines of code as a productivity metric.

A Calendrical Experiment

Taking some inspiration from people I know (namely Troy Winfrey and Dan Hon), I’ve put up a Calendly page for free 30-minute consulting calls. I’m opening up a modest block of hours as an experiment for the next few weeks to see how it resonates. So, if you’re in a leadership position—or even if not—and you’re familiar with what I’m about, schedule a call and we can chat for half an hour about what you’re working on, trying to work on, thinking about working on, or whatever professional topic you want. No cost, no strings attached.

Once again, the lines are open.

If you aren’t familiar with the kind of work I do, I can use my experience setting up Calendly as a prop for an example. Unsurprisingly, Calendly needs access to an online calendar, both to avoid scheduling conflicts and to propagate the bookings to somewhere you’ll actually see them. The only problem is that Calendly offers connectivity only to the two Microsoft offerings (365 and Exchange—not sure why these are distinct interfaces) and Google. The provider I use, however, only supports the standard protocol, CalDAV.
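
For the uninitiated, here is roughly what speaking CalDAV looks like on the wire: one standard request shape that any compliant server understands. This is a minimal sketch after RFC 4791, in Python; the endpoint and credentials are placeholders, not anything my provider actually uses.

```python
# A minimal CalDAV calendar-query REPORT (RFC 4791), sketched in Python.
# The endpoint and credentials are placeholders; any compliant server
# should answer this same request with a 207 Multi-Status.
import requests

CALENDAR_URL = "https://caldav.example.com/calendars/alice/default/"

QUERY = """<?xml version="1.0" encoding="utf-8"?>
<c:calendar-query xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
  <d:prop><c:calendar-data/></d:prop>
  <c:filter>
    <c:comp-filter name="VCALENDAR">
      <c:comp-filter name="VEVENT"/>
    </c:comp-filter>
  </c:filter>
</c:calendar-query>"""

response = requests.request(
    "REPORT",
    CALENDAR_URL,
    data=QUERY,
    headers={"Depth": "1", "Content-Type": "application/xml; charset=utf-8"},
    auth=("alice", "app-password"),
)
print(response.status_code)  # expect 207 (Multi-Status) with the events inside
```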

Hand-drawn diagram depicting a system with proprietary API clients

Proprietary API adapters attach directly to the system’s internal representation.

If I were advising Calendly, I would counsel them not only to implement CalDAV, but to design their infrastructure around it. Not only would they pick up the extraordinarily long tail of systems that only converse using the standard, but any proprietary interface—such as Google’s or Microsoft’s—could be implemented as an adapter. I would offer to help scope this project, perform valuations and risk assessments, and research existing adapter vendors for potential licensing, or even as acquisition targets. If there were any technical burrs, I would help reconcile those as well, since implementing creative readings of protocols and data format specifications is something I have over two decades of experience with.

Hand-drawn diagram depicting a system with proprietary API adapters plugged into a protocol implementation

The protocol implementation intermediates between the system’s internal representation and proprietary API adapters, as well as protocol-compliant systems.
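
To make that diagram concrete, here is a hedged sketch of what the arrangement could look like in code: an internal event model shaped like the standard’s, a CalDAV implementation as the default path, and each proprietary API demoted to an adapter behind one interface. Every name here is illustrative; none of it is Calendly’s (or anybody’s) actual code.

```python
# Illustrative only: an internal model kept close to the standard (an
# iCalendar VEVENT), with the proprietary APIs reduced to adapters that all
# satisfy the same interface as the CalDAV path.
from dataclasses import dataclass
from datetime import datetime
from typing import Protocol


@dataclass
class Event:
    """Internal representation, deliberately shaped like a VEVENT."""
    uid: str
    summary: str
    start: datetime
    end: datetime


class CalendarBackend(Protocol):
    """The one interface every backend, standard or proprietary, must satisfy."""
    def list_events(self, start: datetime, end: datetime) -> list[Event]: ...
    def create_event(self, event: Event) -> None: ...


class CalDAVBackend:
    """Speaks RFC 4791 to any compliant server; the default, long-tail path."""


class GoogleCalendarAdapter:
    """Translates Event to and from Google Calendar API payloads."""


class MicrosoftGraphAdapter:
    """Translates Event to and from Microsoft Graph event resources."""
```

The point of the shape is that the standard sits at the centre and the proprietary interfaces hang off of it, rather than the other way around; a new integration is just another adapter, not another fork of the internals.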

Now: it’s presumptuous of me to assume that the company that specializes in calendars didn’t already know about the networked calendar publishing standard that has been on the books since 2007. Perhaps they evaluated it and found it too cantankerous to work with (not entirely implausible if you know anything about CalDAV), and the story is simply one of a conscious trade-off to target (I assume) 80% of the addressable market with two presumably-easier proprietary implementations instead of three. Or, maybe Calendly has deals with Google and Microsoft to shepherd its own users onto their respective platforms.

Is that even a thing? I mean, I suppose it could be a thing. We could compare it to McDonald’s only serving Coca-Cola products. Except supporting two fiercely-competing productivity suite vendors and deliberately passing on the open protocol is like serving Coke and Pepsi but no tap water. I don’t know why you’d insist on that, unless you were ideologically committed—​or, I suppose, paid by somebody who was—​to visitors to your restaurant only being allowed to drink soda.

Setting aside the extremely weird (though not completely unheard-of) hypothetical where Google and Microsoft are paying Calendly enough to skip a standard protocol interface that the payola makes up for the lost revenue—or heck, even taking something like that into account—in my professional opinion, smaller companies benefit from embracing open protocols. The alternative is either rolling your own proprietary API—which is risky and a lot of effort—or sharecropping on somebody else’s, which is less effort but no less risky.

The thing about rolling your own API is that nobody wants to use it. People only implement API clients if they absolutely must have whatever’s on the other side of them, and are sufficiently resourced to get it. In fact, the experience of implementing one of these things on the client side—Calendly should know, as they’ve done it 44 times—largely consists of reimplementing manipulations of overlapping, yet frustratingly incompatible data semantics.
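
To give a flavour of what that work looks like, here is a toy example covering exactly one corner of it, namely answering “when does this event start?”. The payload shapes are simplified paraphrases of the public Google Calendar and Microsoft Graph documentation, so treat the details as approximate.

```python
# The same concept ("when does this event start?") spelled three different
# ways. Every proprietary client reimplements mappings like these; the payload
# shapes are simplified paraphrases of the respective public docs.
from datetime import datetime, timezone


def start_from_google(event: dict) -> datetime:
    # Google Calendar: an RFC 3339 timestamp with the UTC offset baked in,
    # e.g. {"start": {"dateTime": "2025-09-12T09:00:00-07:00"}}
    return datetime.fromisoformat(event["start"]["dateTime"])


def start_from_graph(event: dict) -> datetime:
    # Microsoft Graph: a naive timestamp plus a separately named time zone you
    # have to resolve yourself (glossed over as UTC here),
    # e.g. {"start": {"dateTime": "2025-09-12T16:00:00", "timeZone": "UTC"}}
    return datetime.fromisoformat(event["start"]["dateTime"]).replace(
        tzinfo=timezone.utc
    )


def start_from_icalendar(dtstart: str) -> datetime:
    # iCalendar, which CalDAV serves: DTSTART:20250912T160000Z
    return datetime.strptime(dtstart, "%Y%m%dT%H%M%SZ").replace(
        tzinfo=timezone.utc
    )
```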

Of course Calendly does have its own API, though it seems to largely consist of operations that are outside of the scope of CalDAV. In other words, reasonable. It’s also entirely reasonable to do what Amazon did and design your entire organization around a set of networked services with published interfaces. One thing that jumps out at me, however, is that at the time of this writing, there’s a banner across the top of their developer site with a deprecation notice saying that version 1 of the API is being terminated. That’s a constant hazard of proprietary APIs: they can change on a whim, and in a way that’ll mess you up. You don’t get that with standards, whose changes are glacial—​not to mention largely non-destructive—​by comparison.

The risk that comes with eschewing standards and protocols, I submit, is putting all your eggs in the “platform” basket. An extreme (and extremist?) example of this is Substack. This is a company whose backbone is a standard protocol—e-mail—and which is nevertheless on an indefatigable march to steer its users into a proprietary cul-de-sac. Substack has infamously declined to expose an API at all, and has even refrained from writing the meager amount of code that would let people manually paste HTML or even Markdown into its text editor, opting instead to insist that you type your words directly into their product. They are also in the process of supplanting (standard) e-mail subscribers with (proprietary) “followers”, who interface only through the proprietary app. Astute commentators have recently noticed that when payments are processed through the iPhone version of said app, not only does Apple take an additional 30% off the top of paid subscriptions (er, “followers”), but you’d lose whatever’s left of that revenue if you were ever to leave Substack.

My observation for tech startups in 2025—and what I’d advise clients—is that this is a precarious position. You’re basically betting the farm that you’re going to be so indispensable, forever, that everybody will put up with whatever shenanigans you inflict upon them. And Substack in particular is a poster-child for shenanigans. In fact, I think they’re the perfect specimen of a company that deliberately uses platform dynamics to advance a niche ideological position. And what they’re essentially betting is that enough of their authors either support that ideology, are sufficiently captured, will never know, or will never care enough to leave—in perpetuity.

Substack is also trying very hard to pull a Google (or Kleenex, Escalator, etc.), and make “Substack” synonymous with “newsletter”, which is weird considering they’re also trying very hard to make their newsletters no longer run on e-mail.

Platform users break down into the following categories:

  • the stans,
  • the captured,
  • the clueless,
  • and the apathetic.

The overwhelming majority of the biomass is concentrated in “clueless” and “apathetic”, and so what you’re ultimately concerned with, as your platform matures, is that the clueless stay clueless and the apathetic stay apathetic. Twitter is roughly in this situation now, although I reckon that a lot of people who believe themselves to be “captured” are actually “clueless”. The stans pay subscription fees (but then some of them get paid back?), and everybody else serves as eyeballs for the ad machine.

The refrain from putatively captured accounts on Twitter (sorry, 𝕏) is “I can’t abandon my audience”. Well, what if you were unable to detect that your audience has abandoned you? If you aren’t paying your eight dollars after all, your tweets go nowhere, and if you do pay, then your audience—​at least the part of it paying attention—​is just a bunch of Elon stans.

I’m also cheating a bit insofar as I’m not acknowledging a fifth category—arguably an offshoot of “apathetic”, in that they’re indifferent to the goings-on of the platform at large—and that’s the people in insular groups who settled on the platform incidentally, and only really interact with each other. The extent to which those people are a load-bearing constituency, though, would depend heavily on the platform in question. Indeed, I would say that for social networks, especially ones that are a big open area, they don’t factor in very much, but on platforms that aren’t first and foremost social networks, insular groups are the only kind there is.

What I’m suggesting is that due to their limited coercive power, platform monopolies ultimately hinge on the combination of ignorance and apathy. There have been multiple platform exoduses over the last twenty years and change, but it’s always been from one platform to another. Now we’re seeing the real possibility—led by entities like BlueSky—of the concept of a protocol entering the public consciousness. If enough people make the connection that open protocols mean power, control, and personal agency, then platform monopolies will be seen as pariahs that you only interact with if you absolutely have to. What this means is that if you aren’t already a multi-trillion-dollar platform, you get out in front of this dynamic. And the way you do that is with open protocols.

There are a lot of companies out there whose exit plan, barring an IPO, is to groom themselves into a sale to the gigacorp oligopsony. There are some, though, whose leadership is rather unobtrusively trundling along, one year to the next, creating value for customers, from which they earn a profit. The paradigm case to look at here is FastMail. This is a company that has been around for 26 years, and it converses exclusively using standard protocols. What makes them remarkable is that they didn’t just conform to the protocols that were available; they went and invented their own.

Calendly, take note: if you find CalDAV too unpalatable, there’s a JMAP profile for calendars, authored by none other than FastMail.
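
For a sense of the contrast, here is roughly what the JMAP flavour looks like on the wire: one JSON POST instead of WebDAV’s XML machinery. The capability and method names follow the JMAP for Calendars draft as I understand it at the time of writing, and the endpoint, account ID, and credentials are all placeholders.

```python
# A hedged sketch of a JMAP request for calendar events (RFC 8620 core plus
# the JMAP for Calendars draft). Endpoint, account id, and credentials are
# placeholders; the method names come from the draft and may shift.
import requests

jmap_request = {
    "using": [
        "urn:ietf:params:jmap:core",
        "urn:ietf:params:jmap:calendars",
    ],
    "methodCalls": [
        ["CalendarEvent/query", {"accountId": "u1"}, "q0"],
        ["CalendarEvent/get", {
            "accountId": "u1",
            # back-reference to the query result, per RFC 8620
            "#ids": {"resultOf": "q0", "name": "CalendarEvent/query", "path": "/ids"},
        }, "g0"],
    ],
}

response = requests.post(
    "https://jmap.example.com/api",  # the real API URL comes from the session resource
    json=jmap_request,
    auth=("alice", "app-password"),
)
print(response.json()["methodResponses"])
```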

And that is my final piece of advice: authority over a protocol—​even in a first-among-equals context like you often see on the internet—​is soft power projection. It’s a gift to the community that nevertheless puts you at the centre. On one hand, it eliminates a wheel that would otherwise have to be reinvented, and on the other, it makes every user of your protocol implicitly compatible with your products. It also makes them an advocate. So if anybody in the audience knows the leadership at Calendly, I’d be grateful if you forwarded them this newsletter.

Or, use the link to book a free 30-minute call.

Minor (Okay, Major-ish, or At Least Medium) Setback

About this time last month, somebody alerted me to an issue thread on the WHATWG’s GitHub repository. It was opened by a member of the Google Chrome team. The subject was a proposal to eliminate the markup transformation language XSLT from the suite of Web standards. The community ultimately revolted, which resulted in them locking the thread, and the browser engine with ~80% of the global install base is moving ahead with ripping out its XSLT implementation.

This represents a tremendous inconvenience to me. I have been a heavy user of XSLT since 2001. I use it on every single one of my Web properties. In my opinion it is by far the most elegant system for manipulating markup—​at least in principle. The net result is that I’m going to have to overhaul literally everything I have online, including my nascent knowledge graph product, Sense Atlas.

I started a more extensive write-up, but ultimately abandoned it to others. It’s not the best use of my time, as it’s kind of a fait accompli. After all, the same guy who opened the issue with WHATWG for “community feedback” had signaled his intent to remove XSLT from Chrome three hours earlier. Any day now, a Chrome update will drop, and everything I have put online will cease to function for 80% of the people on the internet. As such, I have to scramble to compensate.

The way I’m going to deal with this, uh, change, in the short term at least, is by moving XSLT to the server side—but that’s going to take me a couple weeks at least. To do that, I first have to add internal caching to Intertwingler—the application server I designed that powers Sense Atlas, and eventually everything else of mine. Adding internal caching is something I was planning on doing anyway, but Google has forced my hand to do it now.
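
In general terms (this is a sketch of the approach, not Intertwingler’s actual API), “moving XSLT to the server side” just means running the transform before the response goes out, and the caching part means only re-running it when the inputs change. Something like:

```python
# A minimal sketch of server-side XSLT with a naive cache, using lxml purely
# for illustration. This is not Intertwingler (which isn't even Python); it
# just shows the shape of the move: transform on the server, cache by input.
import hashlib
from lxml import etree

transform = etree.XSLT(etree.parse("site.xsl"))
_cache: dict[str, bytes] = {}


def render(source_xml: bytes) -> bytes:
    # Key the cache on the source document so the transform only re-runs
    # when the content actually changes.
    key = hashlib.sha256(source_xml).hexdigest()
    if key not in _cache:
        doc = etree.fromstring(source_xml)
        _cache[key] = etree.tostring(transform(doc), encoding="utf-8")
    return _cache[key]
```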

The silver lining is that I was planning on migrating all of my Web properties to Intertwingler, so this upgrade would take care of everything in one shot. It’s just weeks of effort is all, and the timing really couldn’t be any worse.

A Lesson on the Political Economy of Open-Source

The precipitating event for this move to eliminate XSLT was a set of security flaws in the code that parses it, which was inherited by Chrome back in 2011, by way of Apple’s Safari. This particular code has been in maintenance mode for over 20 years. The open-source maintainer—who also inherited it from somebody else—walked away from it, declaring it unfit for purpose. In private correspondence with me, he said it needed at least a year or two to rehabilitate, which is manifestly not worth the candle. As somebody who has an infosec background, I can kind of sympathize with the desire on the part of Google to cut this liability loose. I am less sympathetic, however, to the conduct of those same people when they take their Web browser maintainer hats off and don the hats of a standards body.

The argument to remove one particular irreparable XSLT implementation from one particular Web browser is one matter, but to eliminate it as a recognized specification for the Web as a whole is something else. What it means is that with a flourish of keystrokes, Google no longer has to allocate an infinitesimal sliver of its trillions of dollars to put that functionality back. People like me get screwed as a result.

There’s that soft power I was talking about.

One rationale for removing XSLT was that only a fraction of a percent of websites use it. Except a fraction of a percent of a billion websites is still millions of websites. Another reason given is that XSLT is old, and that we should be using more “modern” techniques. This is flatly disingenuous. XSLT is heavily used in the digital publishing industry, and has undergone three major revisions since it was first added to browsers in the Y2K era, most recently, by coincidence, right in the middle of this debacle. It’s the browsers that haven’t kept up.

These events highlight an important detail. If you go digging through the issue trackers, you’ll see that the browser developers actually blame the third-party, volunteer maintainers of the open-source, hobby-grade, unfit-for-purpose codebase for not keeping up to date with the XSLT specs. As if these people are supposed to work proactively to increase the value of Google’s product for free. While I concede there was ostensibly no coordinated lobbying of browser vendors to bring their XSLT implementations up to date, it’s troublesome if their prevailing attitude is “not my problem”.

The random-person-in-Nebraska conundrum of open-source software has been discussed fruitlessly for years. There needs to be a way to ensure that open-source developers at the very least get compensated for their time. I’m of a mind to either find or design a new open-source license, after the business about no warranty or expectation of merchantability or fitness for purpose, that says something like: If you are a business entity who uses my software anywhere in your business function, you agree to pay a reasonable time-and-materials fee for any requested changes to that software, including the cost of integrating any code you supply, the rights to which you agree to assign to me.

Or something like that. If you prefer, you can soften “assignment” to “worldwide, perpetual, irrevocable, unrestricted license”, but otherwise that’s fine for a hip shot.

Technical Details

These events highlight another important detail: XSLT is the closest✱ thing to a standard way that has ever existed—in the browser or otherwise—to compose pages and attach presentation markup. Template processing is something everybody who makes a website has to do. It’s a wheel that has been reinvented countless times, every time proprietary. The “modern” methods touted by the chief proponent of nuking XSLT (such as React) are all proprietary, subject to the whims of the companies that own them and to the increasing volatility of the JavaScript ecosystem. The fact that an XSLT template works the same as it did 25 years ago, by contrast, is not a flaw, but a feature.

✱ I say XSLT is the closest thing because despite actually being a standard way to schlep markup, it only consumes XML, not HTML. So if you want to use it with HTML, you have to use some other mechanism to ensure that it’s valid XHTML. That said, it’d be pretty straightforward to define XSLT processor behaviour for ordinary HTML.
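
The workaround that footnote alludes to isn’t exotic, for what it’s worth: parse the tag soup with an HTML parser, re-serialize it as XML, and hand that to the XSLT processor. A sketch, using lxml for illustration, with made-up file names:

```python
# Parse ordinary HTML with a forgiving HTML parser, then re-serialize it as
# XML so the XSLT processor will accept it. File names are placeholders.
from lxml import etree, html

with open("page.html", "rb") as fh:
    soup = html.fromstring(fh.read())   # tolerant HTML parse

# Round-trip through an XML serialization to get an XHTML-ish document.
xhtml = etree.fromstring(etree.tostring(soup, method="xml"))

transform = etree.XSLT(etree.parse("site.xsl"))
print(str(transform(xhtml)))
```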

This situation might actually get me off my ass to do something about the fact that there is no standard templating language for the Web. The benefits would go far beyond durability. One of the essential upsides of XSLT is—at least theoretically—its security. Malicious (at least, server-side) templates have long been an attack vector for elevating privileges and smuggling vital information out of poorly-protected databases. XSLT, by contrast, only has access to the information you give it. It also only operates over markup (and in later versions, JSON); it’s not a general-purpose scripting language. It was genuinely useful to be able to consider it separately from the noise and clutter of JavaScript.

Some might remark that React, or any other client-side JavaScript templating library, also doesn’t have access to any information you don’t explicitly give it, but what it does have access to is everything else JavaScript can do, whereas XSLT only has a limited repertoire of capabilities—​which are nevertheless more than adequate for applying presentation markup.

As I said in my “impassioned comment” in the GitHub thread, the semantics of XSLT could be dressed up in a compact, more palatable syntax, namespaces could be hidden from view (unless you insisted on them), and CSS selectors could be silently transformed under the hood into XPath. This is an idea I had years ago that is long overdue for action. From a technical (and marketing) perspective, it would be a labour-intensive, yet fairly well-defined project to turn XSLT (currently in the process of drafting version 4.0) into SWeT: Standard Web Templates.
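
The CSS-to-XPath part, at least, is not speculative: there is an existing third-party library (cssselect, the same one lxml leans on) that performs exactly that translation today, which is more or less what a friendlier surface syntax for XSLT would do under the hood.

```python
# cssselect (the translator lxml uses for its .cssselect() method) turns a
# CSS selector into an equivalent XPath 1.0 expression, which is roughly the
# kind of sugar a "Standard Web Templates" syntax would apply automatically.
from cssselect import GenericTranslator

xpath = GenericTranslator().css_to_xpath("nav li.active > a")
print(xpath)  # an equivalent XPath expression, with the class test spelled out
```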

The idea of Standard Web Templates would be to take the problem of Web templates, which was (in my opinion) definitively solved a quarter-century ago with XSLT, and make it more amenable to the tastes of contemporary Web developers. Under the hood it would still be XSLT—​since the technical matters have been settled since the mid-2000s—​it would just conceal its unfashionable XML pedigree. Precedents for this kind of thing include RelaxNG compact syntax, and JSON-LD.

One of the more frustrating things about the browsers not keeping up with XSLT is that pretty much all the major things about it that existed to complain about were fixed in 2.0, which shipped in 2007.

The goal here is to create a robust, secure, efficient, and, importantly, durable mechanism for something everybody who works with the Web has to do. If that’s something you’re interested in collaborating on—or, even better, sponsoring—let’s chat about it.
