Making the Solution Transparent
Two ways I've noticed video can actually take less effort than writing, plus a meditation on an alternative to product roadmapping.
I’ve noticed that the channels through which I broadcast to the world are a reflection of how busy I am, and I’m realizing that what I put out where can be read as proverbial tea leaves. BlueSky, for example, and to a lesser extent Mastodon—just as Twitter once was—are for when about all I have are a few moments here and there for the occasional shitpost (though I may pause for an itinerant meditation). And if I’m not posting there, I’m probably in the zone. The question is, which zone am I in?
Three and a half years ago (wow?) I started doing what I call “morning warmups”: short videos where the main constraint is that I have to talk about the first coherent thought that pops into my head when I wake up, and do so in one take. I started doing this because unlike any coherent written thought, I can go from zero to shipped in about an hour, and that’s including some light editing and the occasional sprinkle of B-roll; more recently that plus title cards and indexing in the video description.
This is an anomaly for video. I find the instant I introduce a script, all bets are off. So many times I’ve gone to jot down a handful of bullet points for things I want to remember to say, only to lose a week to the exercise. Same goes for any significant quantity of B-roll, even if it’s no more sophisticated than a slide deck.
The other anomalous video genre I’ve found I can easily make time for is streaming. That’s actually even easier than morning warmups, because what I’m invariably streaming is some other work I’m up to. I have arranged my gear to stream in three places at once:
- YouTube,
- Twitch,
- and the newest addition, Streamplace.
That last one is especially interesting because it’s designed on top of AT Protocol, which gives it a number of unique and interesting properties, most obvious of which at the moment is that you don’t need to sign up to use it—you just need your BlueSky account. It looks a lot like the federated identity stuff I worked on at some startup, twenty entire years ago.
I actually had probably one of my best streams so far the other afternoon, hashing out some otherwise boring plumbing in Intertwingler (the application server I wrote that runs Sense Atlas, which I also wrote) that will nevertheless speed it up by two orders of magnitude (which it desperately needs).
Writing has become for me something that requires a lot more dedicated attention. Even when I have absolute clarity over what I want to say, I find the minimum spend to be on the order of eight to twelve hours—and that’s all in one shot. The median time for one of these newsletters I’d say is on the order of two or three days, and it’s often the case that if I don’t finish it by that weekend, it doesn’t ship. I’ve got a number of 99%-finished newsletters that I couldn’t get back to in time for their subject matter to still be relevant. The essays on my website, which are true essays in the sense that I’m writing them to solidify what I think about this or that topic, are much more durable, and might take me in excess of a week. I long to once again have the luxury of an entire week to dedicate just to think; it’s been a while. So what’s had me so busy?
Busy doing what?
Since I got Sense Atlas properly online back in May, in addition to the baseline software development workload it demands, along with the occasional client excursion, I’ve been working on packaging up a set of offerings that, if they don’t strictly foreground Sense Atlas and/or Intertwingler, certainly use them, in the interim before Sense Atlas becomes a thing that anybody on the internet can sashay up to and plug a credit card into. Like I said at the time, it’s far enough along that it can deliver value to your organization in a number of ways right now—as long as I’m facilitating.
If your organization has anything to do with information—which just about all of them do—I entreat you to consider these questions:
- Do you need greater visibility into what is going on around you?
- Do you need more clarity in the language you use within and between teams and other stakeholders?
- Do you need tighter control over your messaging?
- Do you have a project where a big part of it is determining what the project even is?
These are just a sample of the kinds of issues people have come to me with over the years, whether they are looking to internationalize their codebase, or take a census of their vendors, or visualize HR activity, or come up with a taxonomy of concepts, or do a content inventory, or design a network protocol. What I have found in my career of doing more-information-than-technology consulting is that it’s rarely the same thing twice, though it often involves a lot of the same components. The job is always to gather and concentrate information, and then represent it, as Herbert Simon wrote, so as to make the solution transparent.
I drew this business-meta-ecosystem diagram recently to try to enumerate—and roughly express the interrelationships between—the classes of entities an organization has to be cognizant of. How this ultimately cashes out as real people and/or organizations, of course, will be different for each client.
Gathering information is an irreducible investment; there’s a certain amount of it you just aren’t going to be able to ChatGPT your way out of. That’s a matter of applied research, and a key step in that process is getting some sense of what the answer might be worth. That is definitely something I can help with.
It’s the concentrating and the representing, though, that are at the core of my expertise. Make things, make sense is my motto, and the reason why you hire me. It’s also the reason why I put so much effort into creating this software.
Not All Representations are Created Equal
One ultra-important aspect of how information is represented is how amenable it is to being transformed into other representations, while still preserving its meaning. This is not only essential for basic tasks like data visualization, but for operationalizing the information you work so hard to gather and concentrate.
Take something like a content inventory, or asset or vendor inventory, or even a taxonomy of terms. These are often delivered in a document or spreadsheet. The problem with an artifact like this is that it can only be so useful. About the best you can hope for is everybody in the organization knows where this document is, and recognizes it as authoritative. Even still, it’s an artifact designed purely for human consumption, which means any changes to it have to be propagated to the rest of your organization by hand. Which means that’s somebody’s job—a task that has to compete for time with others. Wouldn’t it be nice to just take that task right off the table?
Systems Run Best When Designed to Run Downhill✱
✱ With apologies to John Gall
I tend to see data semantics as being like a hill. At the bottom of the hill is something like a PDF, or worse, a JPEG image of a handwritten note. (Excel is maybe a third of the way up.) Pushing information in the direction of maximum utility costs effort, like pushing a boulder up a hill. OCRing the PDF or transcribing the note depends on expensive and/or unreliable methods, and only gets you a little bit of the way. Transforming said information into a format consumable by some piece of software or other, however, is like rolling the information downhill, provided of course that you’re starting out from higher up. So one of my principles is to deliver all my work product to my clients as far up the “information hill” as it will go. And one way I achieve that is to start as much of it as possible out at the top.
In the business this is called machine-actionable, to differentiate from being merely machine-readable, since all digital data is machine-“readable” to the extent that a machine can display it on a screen for you to interpret. What we’re looking for are representations of information crisp enough that the computer doesn’t need you to manually move it from one representation to another.
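As a toy illustration (the vendor records and field names here are invented, not anybody’s actual data model), machine-actionability means the same information can be mechanically re-represented, with nobody having to re-read a document:

```python
# A hypothetical vendor inventory as explicit records. Because each field
# is addressable by the machine, re-representing the data is a function
# call, not somebody's recurring chore.
vendors = [
    {"name": "Acme Corp", "category": "hosting", "renewal": "2025-03-01"},
    {"name": "Initech",   "category": "payroll", "renewal": "2025-06-15"},
]

def renewals_by_category(records):
    """Re-represent the same information, grouped by category."""
    out = {}
    for r in records:
        out.setdefault(r["category"], []).append((r["name"], r["renewal"]))
    return out

by_cat = renewals_by_category(vendors)
```

The same records could just as easily be rolled downhill into a chart, a calendar feed, or a report, because nothing about them depends on a human reading them first.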
I’m sure I’ve told this story before, but I was working with a client a few years ago and we did this impromptu exercise where I had the team enumerate all the modules in their product’s codebase, and place them on a plane, where one of the axes represented how complex the module was, and the other its value to the company. We did this exercise in Miro because it was convenient. After we finished, I was like wow, you know, this information would be extremely valuable to use as an input to a computational model that would help plan the project that they had hired me to help plan. Alas, since we had improvised the exercise in the first place, there was no budget to spend the time needed to transform what could be gleaned from Miro (the coordinates and text of its virtual sticky notes) to something that would have done the job—let alone that job itself.
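For what it’s worth, the missing transformation would have been small. A sketch, assuming a hypothetical export of sticky-note positions and text (Miro’s actual export format will differ):

```python
# Hypothetical sticky-note export: each note has text and a position on
# the board. The exercise placed notes on a complexity/value plane, so
# mapping coordinates back to the axes yields model-ready data.
stickies = [
    {"text": "auth module",    "x": 120, "y": 800},
    {"text": "report builder", "x": 900, "y": 150},
]

def to_model_input(notes, x_max=1000, y_max=1000):
    # Assume x encodes complexity and y (inverted, since screen
    # coordinates grow downward) encodes value, normalized to [0, 1].
    return [
        {"module": n["text"],
         "complexity": n["x"] / x_max,
         "value": 1 - n["y"] / y_max}
        for n in notes
    ]

model_input = to_model_input(stickies)
```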
Graphs Are All You Need†
†With apologies to the authors of the Transformer paper
This is the current state (as of December 2) of the issue network I’m using to plan Intertwingler’s minimum-viable-product. The nodes radiate out from accessible, easily-understood concerns, to dank specialist details. This is what I am now calling the “diagnostic view”, since while it’s clear I need others, this one shows everything.
Another major concern for representing information is its shape. For just shy of 20 years, I have been working pretty deliberately with structures inconveniently called graphs. Not like charts-and-graphs—though I do those too—but rather like networks. The thing about graphs is that it’s fairly common to encounter one that you can’t draw a picture of without it coming out looking like a hairball. This is one of the reasons, I suspect, why people prefer the much simpler structures known (for obvious reasons) as trees, but the difference in expressivity between a tree and a graph is the difference between your company’s org chart and all the ways the people in your company actually interact with one another.
If you’re going to faithfully model the complexity that actually exists out in the world, then you’re going to need a way to wrangle it. The way you get around the hairball is by amortizing its complexity over time. It’s possible to show a slice, or patch of the graph, and then connect to the next one with links, affording a makeshift trail through the information space. If you’ve ever gone down the Wikipedia rabbit hole, this is what’s happening.
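A minimal sketch of what slicing a patch out of the hairball looks like, with a toy graph: show one node’s immediate neighborhood, and turn the edges that leave the patch into links to the next view.

```python
# A toy graph as an adjacency map (undirected, names invented).
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

def patch(g, focus):
    """One slice of the graph: the focus node, its neighbors, and the
    onward edges that would become links to the next slice."""
    neighbors = g[focus]
    onward = {n: g[n] - neighbors - {focus} for n in neighbors}
    return {"focus": focus, "neighbors": neighbors, "onward": onward}

view = patch(graph, "alice")
```

Following one of the onward links (say, to `dave` via `bob`) re-centers the view, which is exactly the makeshift trail through information space that a Wikipedia rabbit hole traces.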
Incidentally, I think this is the single most powerful yet underutilized aspect of computers: being able to represent structures of information that would be indecipherable if printed out on paper. Instead, we still mainly use computers to simulate paper.
Single Source(s) of Truth(s)
The last concern I’ll talk about for now, when it comes to representing information, is preserving its integrity. This entails making sure everybody (human or otherwise) is seeing the same thing. Under the paper paradigm, the economics were such that the basic publishable unit had to be an entire document. This meant that any updates had to wait for a new edition. It also, incidentally, meant that locating a specific fact meant pulling out the enclosing document and scanning to a particular paragraph on a particular page. We’ve had the technology for decades to point directly at facts, ensure they’re up to date, and even do away with the vestigial documents that surround them. Nevertheless, we’re still largely shipping files around as if they’re paper documents, except now we can modify them in microseconds. We get none of the benefits of direct lookups, and all the hazards of encountering stale information.
The solution is to put information on the network in such a way that every piece of it has a location that is understood to be authoritative. This is often called a “single source of truth”. A naïve reading of this principle implies that there must be only one of these per organization, with the authority vested, whether de facto or de jure, in the IT department. This ignores the fact that there tend to be many truths that don’t compete with one another, and so to comply, you only need one single source per truth. Under the new regime, it’s not only possible to save people time by pointing them directly at the information they’re looking for, but it’s also possible to ensure everybody’s looking at the same information, by passing around references to live resources instead of potentially stale copies.
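A toy sketch of the reference-passing idea (the store and keys are invented; in practice the reference would be a URL into a live system):

```python
# One authoritative location per truth. Consumers hold a reference (here
# just a key; in practice a URL) rather than a copy, so an update at the
# source is what everyone sees on their next lookup.
store = {"pricing-policy": "v1: flat rate"}

def resolve(ref):
    """Dereference: always read from the authoritative location."""
    return store[ref]

ref = "pricing-policy"
before = resolve(ref)
store["pricing-policy"] = "v2: tiered"  # one update at the source...
after = resolve(ref)                    # ...and every dereference is current
```

Had `before` been e-mailed around as a copy, it would now be stale; the reference can’t be.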
Once again, my three prescriptions for representing information:
- Ensure information is machine-actionable (thus making it amenable to transformation and therefore reuse),
- Don’t prematurely simplify (use graphs, not trees),
- Make it addressable, and put it on the network.
These principles work together to produce a much better bang-to-buck ratio than conventional techniques.
To-Do What, Exactly?
Sense Atlas came out of what I perceived to be a gap in the knowledge worker’s toolchain. When you look at common tools like bug trackers, project management software, and even ordinary to-do lists, you’ll notice that they all operate under the assumption that you’ve already determined what goes into them. Whomst amongst us has never felt sheepish for writing out a to-do item that isn’t actually actionable? This is because as information tools go, to-do lists are strictly tactical. Bug trackers and project management software are just to-do lists in multiplayer mode.
I contend that for any sufficiently complex project—and it really doesn’t take a lot to qualify—determining what, precisely, gets onto the to-do list is a significant part of the job. This is when you smoke out all the concerns of the various stakeholders, wring out all the dependencies and the conflicts, and weigh out the costs, the benefits, and the risks of each aspect of the intervention. This is planning in the sense of strategic design, and you don’t see a lot of purpose-specific tooling for it. So I made some.
Rhetorical Role
I have found over my career that it’s difficult to advocate for an investment of effort that is either many degrees removed from palpable value, or that, despite many cumulative benefits, has no single big conspicuous payoff to obviously justify it. How this situation cashes out is that it doesn’t: these interventions, of potentially enormous value, never get the green light.
My diagnosis is that the problem has historically been one of logistics: telling a compelling story that fixing some gizmo ostensibly miles away from the bottom line will significantly affect it, or that an endeavour may not produce one big win, but several small ones that add up to big. The process of crafting the argument is just too damn resource-intensive. That on its own is a big speculative risk—after all, what if you spend all that effort, only to be told no?
This problem largely solves itself, in my opinion, if you do your planning work in a graph structure that happens to be made of machine-actionable data and is addressable on the network. Much of the time, all it will take is walking the stakeholders through the argument. If making your case requires visualizing it a different way, the data is already amenable. Even better: involve your stakeholders in the construction of the argument. Let them participate asynchronously, from wherever they happen to be.
Valuable Byproducts
In the process of planning, one typically has to mention all sorts of entities: people, organizations, events, places, products, concepts, and copious literature and other evidentiary material. It is useful on its face to represent these as—once again, network-addressable, machine-actionable—entities in their own right, if for no other reason than it is trivial to produce a response to a request like “show me everything in the network that has to do with this person.” In fact, it’s built right in when you click on their name.
Not only does such a structure lend itself to greater comprehension by connecting disparate entities that in other representations would only exist as words on a page, these are bona fide data objects with their own properties. They can be imported, exported, transformed, queried, and manipulated, just like any other.
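A sketch of what “show me everything in the network that has to do with this person” amounts to, assuming entities are stored as subject-predicate-object triples (the names are illustrative, not Sense Atlas’s actual schema):

```python
# Entities and their relationships as triples. Names are made up.
triples = [
    ("kickoff-meeting", "attendee", "alice"),
    ("report-draft",    "author",   "alice"),
    ("kickoff-meeting", "attendee", "bob"),
]

def about(entity, data):
    """Every triple in which the entity appears as subject or object."""
    return [t for t in data if entity in (t[0], t[2])]

alice_facts = about("alice", triples)
```

Clicking on a person’s name is just this query with a nicer interface; and because the result is itself data, it can be exported or fed into whatever comes next.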
“Road-Mapping” Under Optionality
I have long been irked by the concept of a “product roadmap”, because real road maps are artifacts that tell you how to get to places where people have already been. (After all, somebody saw fit to build a road to get there.) As such, it seems like a misleading metaphor to frame a course of action that by definition ventures into uncharted territory.
What’s more, though, is that this faux certitude of going somewhere you’ve already been doesn’t even really reflect how a lot of newer companies operate. They’re just as likely to “pivot” off course before the ink on the roadmap is dry, either because they have to, or to chase some opportunity that comes along. The software industry in particular understands in its bones a concept called optionality.
The person responsible for the term “optionality”—from his book Antifragile—is Nassim Taleb. He shares a number of proclivities with the software industry, for good or for ill.
Optionality can be understood as making a bet that has a fixed downside and a variable upside. You can’t lose any more than a certain amount, but you can win enormously. One can also arrange for each individual bet to be quite affordable, like in options trading. Options are the financial derivatives that give you the right—but not the obligation—to buy (or sell) an asset at an agreed-upon price. All you stand to lose in such a deal is the price of the option.
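The asymmetry is easy to see in code; a minimal sketch of a call option’s payoff at expiry, with illustrative numbers:

```python
# The shape of an optionality bet, in the options-trading sense: the
# downside is capped at the premium paid, the upside is open-ended.
def call_payoff(spot, strike, premium):
    """Profit on a call option at expiry: exercise only if it pays."""
    return max(spot - strike, 0) - premium

premium = 5
worst = call_payoff(spot=80, strike=100, premium=premium)   # fixed downside
best  = call_payoff(spot=150, strike=100, premium=premium)  # variable upside
```

However far the asset falls, `worst` never goes below the premium; however high it climbs, the upside keeps growing.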
Another, more colloquial way to think about optionality is that it literally gives you options, though personally I prefer the more formal definition. Again, the criterion is that your risk is fixed and your gains are variable, and you can bet in whichever direction you believe things are going to go. The highest-profile example of optionality in action, after all, is The Big Short.
My prescription for modeling this kind of growth is less like a road map, and more like a tech tree. This is the structure you see in strategy games like Civilization—and a misnomer, as a tech tree is almost always actually a directed acyclic graph. The nodes represent capabilities or analogous achievements (in the games they are literal technologies), and the links represent dependencies of one achievement upon another. As is the case in real life, one node may require several tributaries to become achievable, and may be one of many inputs to some other node downstream.
Unlike real life, however, tech trees in video games are exhaustive, produce predictable outcomes with a well-defined outlay, and importantly, have no side effects. But what they do capture is the space of possibilities, and the requisite paths to achieve them.
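A tech tree in this sense is straightforward to model as a dependency DAG; a minimal sketch with made-up nodes:

```python
# A tech tree as a dependency DAG: each node maps to the set of nodes it
# requires. A node becomes achievable once all prerequisites are achieved.
deps = {
    "pottery":  set(),
    "writing":  {"pottery"},
    "currency": {"pottery"},
    "trade":    {"writing", "currency"},
}

def achievable(achieved, tree):
    """Nodes not yet achieved whose prerequisites are all satisfied."""
    return {n for n, pre in tree.items()
            if n not in achieved and pre <= achieved}

next_up = achievable({"pottery"}, deps)
```

This is the structural difference from a roadmap: the data encodes the order in which things *have* to go, and the order you *want* is just one path traced through it.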
When you go looking through the contemporary literature on product roadmapping, there is a lot of talk about the order in which you want development to go, not the order in which it has to go. There are likewise nearly always proclamations of milestones, whose dates are derived through some unspecified methodology, and are almost certainly aspirational. This further contributes to roadmaps, at least in certain industries, earning a reputation as a class of document that is untrustworthy, and ultimately unserious.
Product roadmapping once upon a time looked a lot more like the alternative I’m describing. The practice ostensibly started at Motorola in the 1970s. Its function was a lot more about situation awareness than about task prioritization. Roadmaps (at least at Motorola) furthermore were a comprehensive eight-chapter document, rather than a single visualization. The goal was to smooth out growth in revenue and market share by plotting the maturity arcs of different product lines, and the technical achievements they depended on. In a big enough organization, these would get developed in parallel.
(Thanks to Bill Seitz for surfacing this and other historical documentation.)
The difference between a roadmap in the contemporary sense and my tech-tree proposal is a shift in the underlying assumptions. The tech tree shows the space of all the (currently known and/or supposed) possible paths you could take, into which your current priorities are merely embedded. That way, changing priorities to chase an opportunity that presents itself is not a “pivot”; the potential for change is implied. It is further assumed that both qualitative and quantitative aspects of the tech tree will change as new information is surfaced. The dates attached to each node in the network, furthermore, don’t represent when you hope they will be accomplished, but rather when they would have to be, in order to be sufficiently profitable.
It doesn’t matter how much noise you make about dates on roadmaps being noncommittal; putting dates on anything is an invitation to onlookers to forge an expectation. Under the new paradigm, forecasts of dates and dollar figures would actually be functions that return probability distributions. These in turn can be evaluated at different points to get a sense of the contours and frame expectations. Instead of saying “we believe we will ship by date X”, we say “we must ship by date X to maintain a Y% chance of earning (over some predetermined maturity horizon) at least \$Z in profit”. Profit requirements, furthermore, can be calibrated in either absolute or relative terms (or both), or substituted entirely by proxy metrics for other kinds of organizations.
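To make the “we must ship by date X to maintain a Y% chance of at least \$Z” framing concrete, here is a deliberately toy Monte Carlo sketch. The profit model and every number in it are invented for illustration; this is not the author’s actual methodology, just the shape of the computation:

```python
import random

random.seed(0)  # deterministic for the sake of the example

def profit_samples(ship_day, n=2000):
    # Toy model: later shipping erodes profit linearly over the maturity
    # horizon; Gaussian noise stands in for everything we don't know.
    return [max(0, 500_000 - 2_000 * ship_day + random.gauss(0, 50_000))
            for _ in range(n)]

def p_at_least(samples, z):
    """Estimated probability that profit clears the threshold z."""
    return sum(s >= z for s in samples) / len(samples)

def latest_ship_day(z, y, horizon=200):
    """Latest day that maintains at least probability y of profit >= z."""
    best = None
    for day in range(0, horizon, 5):
        if p_at_least(profit_samples(day), z) >= y:
            best = day
    return best

# "We must ship by this day to keep a 90% chance of at least $300k."
deadline = latest_ship_day(z=300_000, y=0.9)
```

The point is that `deadline` is an output of the model, not an aspiration typed into a slide; change the threshold or the confidence level and the date moves with it.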
I definitely have to write something just about this strategy. This is a resource planning methodology I designed a number of years ago, and I’m laying the groundwork to put it into Sense Atlas. You can watch an early timing test I did to learn more about it, or read the script for the talk.
What we’re fundamentally after with a planning methodology like this is a permission structure to engage in optionality. Rather than “roadmap” to assert a course of action we may have to justify abandoning, we commit only to pursuing value, by using our judgment to make fixed bets on unlimited payoffs.
Private Alpha Opportunity
It’ll still be several months before Sense Atlas is a mature enough product that anybody can just plug a credit card into, but like I maintain, it’s been serviceable enough for a while to use in facilitated contexts. As I’ve mentioned many times, I’d been using its predecessor as a matter of course in client projects for over a decade.
What I can do for your team today is what I’ve been doing for years, except with tooling that produces more powerful and valuable deliverables than a PDF, spreadsheet, or PowerPoint deck:
- General scoping/recon/applied research briefs for technical projects (1-2 week turn-around),
- Business ecosystem mapping,
- Content inventories/audits,
- Information architecture,
- Concept/audience mapping,
- Development of domain models, ontologies, data formats, APIs, network protocols,
- Computational modeling and data visualization.
I’m not quiiiite ready yet to use Sense Atlas as a real-time collaborative medium—for instance in a workshop—but I’m going to try to get that going over the next couple months. I’m also toying with the idea of doing seminars around the underlying design of Sense Atlas and Intertwingler, if your team is interested in how I’m putting together a FAIR Data app that adheres to linked data and REST/HATEOAS principles. Heck, if I get enough individual interest, I might do a retail one. I’ll have more details in the next newsletter.
In the interim, if you’re interested in chatting about any of this, do reach out by email or book a free 30-minute call.