For the Umarell In Your Life
Further elucidation of my plan to run a “cozy private alpha” of my organizational cartography kit, the essay that got me calling it that, and a final installment from last year's Summer of Protocols.
Last newsletter, I announced that I was feeling out the contours of starting a “cozy private alpha” of my yet-unnamed organizational cartography toolkit, which will be the first tangible product running on top of the application server I designed called Intertwingler. The first iteration will be an updated version of the prototype I've been waving around for the last decade, which has been patiently waiting all this time for a suitable substrate—and with the Intertwingler engine, it finally has one.
The point of this tool (the first installment in the kit) is to analyze what the creator of its underlying conceptual framework called wicked problems, for the purpose of ultimately solving them:
- Catalogue every issue—every state of affairs you want to do something about (or have to steer around)—that arises in the characterization of a (e.g. design) problem,
- as well as every position that responds with what, if anything, to do about a given issue,
- along with every argument that supports or opposes a given position.
I took an interest in this framework (it's called IBIS) all those years ago because I was concerned with the problem of project planning. Namely, I found that accurate time estimates for software development depended✱ on being able to wring out, as cheaply as possible, all the ways you could be taken by surprise. It turns out IBIS is a pretty decent fit for this. What's more, since the process yields a formal data structure, merely filling out said structure gives you the material you need to determine what parts to break a project down into, and what order to do them in.
✱ It's weird to have to say something so obvious, but I have seen so many cockamamie schemes for effort estimation that amount to little more than guessing. Only the most sophisticated ones actually rely on historical data (which you were supposed to be scrupulously collecting), but data about past projects isn't enough. No matter how similar a past project might have been (if it was identical, why not just reuse it?), it's the dissimilarities that will screw you. You have to knead those out.
This isn't only a problem in software development, but it features prominently there, because so much of the job is just figuring out what the job even is; the goal, then, is to do that figuring in as cheap a medium as possible.
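To make the shape of the data structure IBIS yields a little more concrete, here is a minimal sketch in Python. The class and field names are mine, not the tool's, and a real IBIS map is a graph rather than a strict tree (issues can respond to positions and arguments, and to each other), but the three node types and the relationships between them are the whole trick:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Stance(Enum):
    SUPPORTS = "supports"
    OPPOSES = "opposes"


@dataclass
class Argument:
    text: str
    stance: Stance  # supports or opposes the position it hangs off of


@dataclass
class Position:
    text: str  # what, if anything, to do about the issue
    arguments: List[Argument] = field(default_factory=list)


@dataclass
class Issue:
    text: str  # a state of affairs you want to do something about
    positions: List[Position] = field(default_factory=list)


# A tiny worked example: one issue, two competing positions, an argument apiece.
estimate = Issue("We can't estimate the migration until we know how dirty the data is.")

audit = Position("Time-box a one-week audit of the legacy database.")
audit.arguments.append(Argument("Cheapest way to surface the surprises early.", Stance.SUPPORTS))

wing_it = Position("Skip the audit and pad the schedule.")
wing_it.arguments.append(Argument("Padding hides the risk instead of removing it.", Stance.OPPOSES))

estimate.positions.extend([audit, wing_it])
```

Even a toy like this hints at where the project breakdown comes from: unresolved issues mark the uncertainty, and the positions attached to them are candidates for units of work.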
All this is exciting, to be sure, but to me the most exciting part is the ability to show stakeholders the path from concerns they have either personally voiced or are otherwise on record endorsing, to where the work is being done to address those concerns. Literally give them an entry point they can click through to see the accumulation of work product. Next time a grouchy client accuses you of not making any progress on a tough design problem, you can point to this thing and go “here”.
I have personally spent an absurd chunk of my career writing and presenting progress reports, and while I recognize that certain clients will always insist on being spoon-fed, a self-serve option will at the very least help me help them.
One thing that software development and related practices are missing is an analogue to the construction site. Bracketing the fact that you don't break ground on a construction site until a pile of design and engineering is already done, construction sites are self-documenting artifacts: hole goes down, building goes up. You can literally watch progress happening right in front of you. The Italians call the old guys who watch construction sites umarells, and a big goal of this tool for me is to create a construction site for the umarells to watch.
Being able to (literally!) click along the path from issues stakeholders have raised to units of work—whether scheduled, in progress, or done—also affords its own rhetorical leverage. Suppose a cluster of concerns converges on a set of empirical questions. “Well, friendly stakeholder”, you say, “given that you understand the value of having these concerns addressed, you can invest some fraction of that value into the necessary research.” Same deal for greenlighting design work: a list of issues with some key stakeholder's name attached to them is about as good a case as you can make for allocating the necessary resources to resolve them.
A final remark I'll make about this tool is that it was really important to me to make it not behave like “extra work”. Extra work is a surefire way for a tool to not get adopted. At root, this is just a networked, interlinked, collaborative note-taking device with slightly more structure than a more familiar tool like an outliner or PKM. All the value, however, is in that additional bit of structure. And since the tool wraps a programmatic interface to an open data specification, hooking it into your own favourite note-taking product is about as friction-free as I can make it.
I did a stint in cybersecurity-land for about five years in my twenties, and one category of tool that always intrigued me was the passive scanner. It just sits on your network and watches. Leave it running for a few days—or heck, minutes—and it'll surface all sorts of interesting stuff. That is, it creates a valuable artifact just as a byproduct of being switched on and plugged into the network. By analogy, you use the planning tool to plan, and it creates its report—or at least the raw material for one—as a byproduct.
Organizational Cartography & You
Maps pack a lot of information into a small area, though the metaphor is otherwise very loose. We're talking about “mapping” different aspects of your business, products, projects, and internal systems into a coherent, easily understood, and reusable structure that can be brought online piece by piece, aspect by aspect, and is designed to be truly open, extensible, and durable for years to come.
What the tool currently does really well—and has for over a decade, using a 50-plus-year-old framework—is help with policy analysis (or e.g. requirements analysis, or even forensic analysis of legacy systems) and design rationale (what I just described, though if you squint you'll realize these are the same process), as a precursor to project planning. Very early on I also found it useful to extend the tool to support the design of concept schemes (think a glossary with additional structure that relates the terms together), because consistent terminology is so important within teams, across them, and in communicating with customers and users. This is all stuff I currently do with clients, and I use this tool to support my more conventional work product. This week I'm moving it over to its new back-end, and once I do, I will be adding:
- A receptacle for business intelligence on organizations, people, products, etc.—like a kind of corporate social network analysis,
- proper structured bibliographic records, because having that all in one place, and in an exportable/reusable format, is extremely handy,
- the ability to just stick plain-vanilla notes anywhere you want them (useful),
- fleshing out a whole bunch of support for artifacts used in interaction design (personas, scenarios…) and content strategy (inventories, audits…)—this has been sitting, waiting to go for a while,
- eventually, a comprehensive infrastructure for organizational memory and resource planning, all based on open standards and open-source software,
- and in the near term, this probably means hooking into existing bug tracking and project management systems, whichever ones you're using.
The overarching structure that this information lives in is called a knowledge graph. You can kind of think of the technology as the opposite—or at least the complement—of AI. Instead of trying to make a smart machine, it's a dumb (and much simpler) machine that makes people smarter. What I just listed are examples of “knowledge graph applications”, although really they all connect together as a single informational fabric, using Intertwingler as a substrate. I firmly believe knowledge graphs are an essential technique for both resolving and communicating complex situations, because they smooth out the friction and strip away all the excess that gets in the way of a person truly understanding a situation. They just got upstaged in the last decade by AI hype—despite, again, complementing AI and reinforcing it. My focus over the last several years has been to develop both the technique and the infrastructure for adding knowledge graphs to the armamentarium of the modern professional team.
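If “single informational fabric” sounds abstract, here is a toy sketch of the idea using rdflib as a stand-in (the real substrate is Intertwingler, and the namespace and property names below are made-up placeholders, not the tool's actual vocabulary). The point is just that a glossary term, a stakeholder's issue, and a unit of work all live in one graph and can be linked directly:

```python
# Illustrative only: rdflib is a stand-in for the actual substrate, and the
# example.org namespace and property names are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, SKOS

EX = Namespace("https://example.org/graph/")

g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# A term from the shared glossary (a SKOS-style concept scheme)...
g.add((EX.onboarding, RDF.type, SKOS.Concept))
g.add((EX.onboarding, SKOS.prefLabel, Literal("Onboarding")))

# ...an issue raised by a named stakeholder, tagged with that concept...
g.add((EX.issue42, RDF.type, EX.Issue))
g.add((EX.issue42, RDFS.label, Literal("New users stall partway through onboarding")))
g.add((EX.issue42, EX.raisedBy, EX.alice))
g.add((EX.issue42, EX.concerns, EX.onboarding))

# ...and a unit of work that addresses it. The value is in the links:
# you can walk from the stakeholder's concern straight to the work.
g.add((EX.task7, RDF.type, EX.WorkItem))
g.add((EX.task7, EX.addresses, EX.issue42))

print(g.serialize(format="turtle"))
```

That last link is the umarell story from earlier: walk from the concern to the work being done about it, or from the work back to whoever raised the concern.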
The long-term goal is to create the kind of information infrastructure I described in my specificity gradient talk (or the shorter, original video if you like, or the article if you don't like video): organizational memory that enables you to “see down”—or really, in both directions—from macro-scale business goals, all the way through layers of product, design, and engineering decisions, to the resulting lines of code. That's the vision, and the capability I have to share with you today is headed in that direction—but even partway there, as it is, my clients and I get a heck of a lot of value out of it.
So, if you run a team in software development, digital media, information infrastructure, or in client services in this orbit, and you're interested in introducing this kind of capability into your workflow, here's what I have to offer you:
- Hire me, as a consultant,
- for an eminently reasonable, fixed-ish allocation of monthly hours/dollars,
- for a certain number of (let's say at least six) months,
- you get my attention, and access to my 25+ years of experience, wherever you can use it—strategy, product, infrastructure, etc.,
- I help you populate your instance of the tool, and do the initial curation,
- I teach your team how to use it, add to it, and extend its capabilities,
- anything interesting or weird or cool or useful or valuable I find I can do with its contents, I do at no additional charge,
- (if it isn't sufficiently implied, all of this activity is confidential and protected),
- you get some influence over the tool's development priorities,
- this capability becomes a “secret weapon” for your organization.
Your insurance here is that all of your data is always completely available in an open spec format, and there is code (at least) on GitHub and in Docker images, so even if I get hit by a bus, you could still use this thing. On a more strategic level, because all (as in literally 100% of) the data is open-spec and fully exportable, anything you put in this software can be pumped out at any time and (optionally) transformed into something else. The details of how to do this are something I will share with your team.
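To make that less abstract, here is the kind of thing that becomes possible once your data is sitting in a standard, exportable format. This continues the made-up toy graph from earlier (the file name and vocabulary are still hypothetical); the point is that ordinary off-the-shelf tooling, not my software, is doing the work:

```python
# Purely illustrative: querying an exported graph with standard SPARQL,
# using the hypothetical vocabulary from the earlier toy example.
from rdflib import Graph

g = Graph()
g.parse("export.ttl", format="turtle")  # a hypothetical dump of your data

# List every unit of work alongside the issue it addresses.
results = g.query("""
    PREFIX ex: <https://example.org/graph/>
    SELECT ?work ?issue WHERE {
        ?work a ex:WorkItem ;
              ex:addresses ?issue .
    }
""")

for work, issue in results:
    print(work, "->", issue)
```

Anything downstream of that (reports, spreadsheets, another tool entirely) can be generated the same way.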
I want to underscore that I am actually trying to play the opposite of the platform-capture game here. The ultimate goal is to ship a modestly-priced SaaS product, and while I anticipate it'll take another year or two to get to a fully shrinkwrapped offering, I'm sure even after six months we could arrange some affordable managed hosting for the tool if you didn't want to run it yourself—but you could run it yourself if you wanted to. Or, we could continue to work together like I just described!
I probably have capacity for up to half a dozen arrangements like this, at an average of, say, 12 hours a month apiece. That should be enough contact to understand your team and project, help set you up, do knowledge transfer, and hold regular consultations, all while preserving some time to actually develop the thing. If you are interested in this offer or want to learn more about it, get in touch at projects@methodandstructure.com.
Distilled Capability
What I didn't mention, because I'm trying to be cute with the segue here, is that people have already responded to my aforementioned proposal with interest. One of them, whom I wasn't expecting, was a consultant by the name of Troy Winfrey, who characterizes his work as “using techniques from psychological research to identify your ideal customer so you can make more money” (I'm inclined to call that positioning, but I don't want to be too reductive). He offered to do a call, which itself was fruitful, and he gave me some very helpful homework. I'm not gonna give away his secrets—you can book a call with him yourself and find out—but expect what I learned to diffuse out through my output over the next little while.
One thing we talked about is the fact that it has bothered me for years that promotional copy for software, for as long as it has existed, has overwhelmingly been fixated on features. It's something I have given a lot of thought to. A feature at once reflects a capability on the part of the user and represents an (unfortunately very flexible) unit of work on the part of the developer. In this sense, features are intra-organizational underpants talk. In other words, you're showing your ass to the customer when you talk about them.
Even Apple, which has all the money in the world, still advertises its products in terms of features. It's not just “now you can do X!”, but “now with 100 new features!”, with the implication that you can now do a hundred new things. What new things? Who cares! But you can now do a hundred of them.
A feature in software is the mirror image of a capability on the part of the user. The thing about capabilities is that they are binary: you can either do the thing, or you can't. My beef with features as an organizing principle is that, assuming you can achieve an outcome at all, feature-ese is silent on how excruciating and onerous the experience of achieving it is, or on how lobotomized and broken the feature itself might be. This spawned my clever little antimetabole: You can define features in terms of behaviour, but you can't define behaviour in terms of features.
Summer of Protocols: Retrofitting the Web
The other thing that happened since I shipped my last newsletter was that Summer of Protocols finally published my article from the program last year. The compendium is a giant three-ring binder with serialized inserts mailed out every couple months, and mine was in the last of those. The essay talks about the goals behind, and theoretical underpinnings of, Intertwingler, which I'm sure some of you are dying to read.
That's all for now!