The Specificity Gradient, and Other Things
A conceptual framework I coined last year shows up again, and apparently I am a YouTuber now. Also: a new, limited-run, subscriber-only newsletter: The Nature of Software.
The Specificity Gradient
The other night, I came home from seeing some friends and hadn’t checked Twitter in a while. When I did, I saw this:
In the last month my go-to in consulting situations is @doriantaylor’s Specificity Gradient.
I want to be clear here that I am actually pretty jazzed that at least one real live person is clocking actual (Australian) consulting dollars using a conceptual framework I came up with in a one-take, nine-minute video that I blasted out in February of 2021 and haven't thought much about since. It means there's some actual operational value to it, and it's not just me being a crazy person scrawling on a homemade blackboard at eleven at night.
That all said, this conceptual framework has some history. The “shearing layer” or “pace layer” concept has had some currency in the design community for some time. The basic idea is that certain concerns operate at certain time scales, and more importantly, the time scales vary enough that anything trying to bridge between them will be ripped apart in the quasi-tectonic shear. So if you’re going to build anything, you build within the respective layers, not across them.
The idea behind the specificity gradient is to map the pace layer framework onto software development. We begin at the outside, with the slow-moving, gross anatomy of the system—the organization and how it interacts with its environment—and we zoom in through product, user, and technology, until we reach the finest-grained—and I argue the most perishable—medium of executable code. I have a full write-up elsewhere, but I’ll list the layers and their transitions here:
Business ecosystem to user goals,
user goals to user tasks,
user tasks to system tasks,
system tasks to system behaviours,
system behaviours to executable code.
I decided to spare you newsletter subscribers the exhaustive write-up, but do go take a look at it if it’s your thing.
Also note that my original formulation had business goals, and I have since decided to change that, because I believe ecosystem is more appropriate for what I am trying to accomplish.
The goal of the specificity gradient is to gravitate all organizational knowledge and decision-making process toward the biggest, chunkiest, most durable, closest-to-the-surface layer in the gradient that is applicable to that particular piece of information. The ultimate test of success would be the ability to take any line of code and trace it all the way back through the decision process that brought it into being, to the function of the organization that it serves, or likewise to stand from the point of view of the business and see a synoptic view of all the infrastructure that exists to support it.
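As a minimal sketch of what that traceability might look like in practice (all names and the example chain here are hypothetical, not part of the original framework), each decision could record the layer it lives in and a link to the decision one layer up that motivated it, so any line of code can be walked back out to the business-ecosystem concern it serves:

```python
from dataclasses import dataclass
from typing import List, Optional

# The five transitions of the gradient, outermost layer first.
LAYERS = [
    "business ecosystem",
    "user goals",
    "user tasks",
    "system tasks",
    "system behaviours",
    "executable code",
]

@dataclass
class Decision:
    layer: str                                  # which layer this decision lives in
    summary: str                                # what was decided
    motivated_by: Optional["Decision"] = None   # link to the layer above, if any

    def trace(self) -> List[str]:
        """Walk from this decision back up to the outermost layer."""
        chain, node = [], self
        while node is not None:
            chain.append(f"{node.layer}: {node.summary}")
            node = node.motivated_by
        return chain

# A hypothetical chain, built outside in:
eco   = Decision("business ecosystem", "customers expect self-serve billing")
goal  = Decision("user goals", "a customer can update their own payment method", eco)
task  = Decision("user tasks", "fill in and submit the payment form", goal)
stask = Decision("system tasks", "validate and persist the new card details", task)
behav = Decision("system behaviours", "reject expired cards with a clear error", stask)
code  = Decision("executable code", "expiry check in the billing module", behav)

for step in code.trace():
    print(step)
```

Tracing from the `code` end of the chain prints one line per layer, finishing at the business ecosystem; the reverse view, from the business outward, is just the same links walked in the other direction.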
I came up with the specificity gradient because I have found, in my own experience and that of my clients, that the race to get to running code typically means neglecting—if not outright discarding—all the valuable precursor material that shaped it. This makes for a codebase full of dead code and worse: code that nobody wants to touch because it (purportedly) does something important, but nobody knows how it works or even really what.
My (perhaps controversial) claim is that code, while obviously necessary to have a product, could be de-emphasized in importance—not to mention volume—if it only reflected business decisions recorded elsewhere, rather than being the residual place where those decisions were made.
Go read the write-up on The Specificity Gradient
Trying the Video Thing
A while ago I started streaming while I work, in an effort to demystify my day-to-day activity. While it has its moments, going back to re-watch the videos has all the excitement of an uncut fishing show. I haven't really been doing much streaming recently, since the content has either been confidential (can't show it) or has required me to pore over reams of documentation (extra boring). I'm beginning to think the best context for live-streaming is semi-mindless work I can chug through while doing running commentary.
It is interesting to me that even minor changes in affordances within a medium lend themselves to sharply different characteristics. Consider Twitch versus YouTube. Both of these are nominally websites with videos on them—user-generated videos at that. Twitch lends itself to hours-long sessions featuring live interaction with the chat (plus a raft of other cutesy mechanics to drive engagement). YouTube has streaming with real-time chat too, though with a decidedly different vibe. It will always reign, however, as the go-to destination for short, no-budget, prerecorded, talking-head content.
This is exactly the kind of content I am interested in trying my hand at making. What I have found so far is that it takes about an hour to make a ten-minute YouTube video: ten minutes for the recording itself, 20 to cut out the worst of it, and another 20 to set it up in YouTube, give or take a rewatch. I've been doing what I've been calling "morning warmups", where I just talk about what's on my mind while I'm having my first cup of coffee.
This was a prompt from Visakan Veerasamy, who suggested to just pump out content regularly, like he does. I’m currently on week two of doing it every morning, but I anticipate maybe paring back to something a bit more like twice a week once I get a groove I’m satisfied with. I’m also considering going through my back catalogue of conference talks, especially the ones that were never recorded in the first place, and re-shooting those.
One thing I noticed while doing these videos is that if you constrain yourself to a single, sub-ten-minute take, with no B-roll or really any kind of post-production at all (save for cutting out garbage), you’re really constrained in what you can talk about. To wit, I have found that to stay on mission, I can only really discuss the ideas that have matured in my head for a while. If the test of understanding a subject is to be able to talk extemporaneously about it at length, then the test of really understanding it is to be able to compress that talk into a ten (or five!) minute précis. As such, I see the YouTube thing as a potentially viable way to smoke out ideas—like the specificity gradient above—that I know so well that I may have forgotten about them altogether.
A New Newsletter: The Nature of Software
In some sense, my next project was picked out for me when I wrote my retrospective on Christopher Alexander’s work in the wake of his passing some weeks ago. I have resolved to do a limited-run, subscriber-only newsletter, attempting to reconcile Alexander’s four-volume, 2500-page masterwork, The Nature of Order, with the craft of software development. This will be a personal interpretation, with a new installment every two to three weeks, starting soon. There will be at least 17 issues, one for each of the Fifteen Properties, plus an introductory issue, plus a conclusion. Unless I decide to add more, for some reason. USD $7 a month gets you access to both delivery and archive. See you there!
Please go to buttondown.email/natureofsoftware to subscribe.