In the Pipe, Five By Five
My first update since indefinitely suspending my activities on Substack. Nice to be back in the saddle.
I sent out my last newsletter on Substack in January. I would probably have sent one sooner, but I had mistakenly anticipated having to disentangle, by hand, the people who had subscribed on one platform from those who had unsubscribed on the other. I didn't say so within virtual earshot of Justin at Buttondown until a few weeks ago, whereupon he assured me his subscriber importer handles that just fine. That said, if you had unsubscribed and still got this email for whatever reason, that's what's going on.
It was around the end of March, I think, that I got Intertwingler, the application server I have been working on, to a done-enough state that I can flip around and start making things on top of it. One of the idiosyncrasies of making novel software that is really, really generic is that you can only write so much of it into a vacuum. Eventually you have to start trying to use it—as in, write applications on top of it—while you're still finishing it, nay, in order to finish it. The simple fact is that you can't write one side of an interface without also writing the other. I am likewise confident that Intertwingler is now in a state where I can at least start deploying it on intranets, which is what I originally wrote it for: helping my clients make sense of their business relationships, internal processes, and conceptual structures.
Since April, then, I've more or less been doing mop-up of the various things I had put on hold for the six or so unexpected extra months it took to get Intertwingler to work. First on the docket was to get another chapter of The Nature of Software out the door. This, if you aren't aware, is my serialized book project attempting to reconcile the later work of the architect Christopher Alexander—that is, after his infamous patterns—with the craft of software development, spurred by his passing in 2022. Each chapter takes one of what he termed the “fifteen fundamental properties of living structure”, which are kind of LEGO-like elementary procedures and their concomitant outcomes, and tries to imagine how—or even if—it could be applied to software. The current chapter is 8: Deep Interlock & Ambiguity, which at first blush doesn't sound like something you'd want in a software project. Chapter 9 (Contrast) is in the works, and subscribers of course get access to the much-slicker archive at the.natureof.software.
I am also in the (slow) process of introducing Intertwingler—which I am already using to generate the archive site—more prominently into The Nature of Software. I'm eventually imagining some kind of collaborative annotation/discussion community thing, but we aren't quite there yet. In the interim I'll be starting with simple things like a glossary/index (which is up but not finished), bibliography, and photo credits. Like all things Intertwingler, these artifacts contain embedded, machine-actionable data.
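To give a flavour of what “machine-actionable” means here: the human-readable artifact and the data ride in the same document, so a program can fish the data back out. Below is a minimal sketch of the general technique in Python; the sample markup and its JSON-LD payload are my own hypothetical illustration, not Intertwingler's actual output (which may well use RDFa instead):

```python
# Minimal sketch: recover embedded, machine-actionable data from a page.
# ASSUMPTION: the data is embedded as JSON-LD in a <script> element; the
# sample markup below is hypothetical, not Intertwingler's actual output.
import json
from html.parser import HTMLParser

SAMPLE_PAGE = """
<html><body>
<dl><dt>Contrast</dt><dd>Property 9 of Alexander's fifteen.</dd></dl>
<script type="application/ld+json">
{"@context": "https://schema.org",
 "@type": "DefinedTerm",
 "name": "Contrast",
 "inDefinedTermSet": "https://the.natureof.software/"}
</script>
</body></html>
"""

class JSONLDExtractor(HTMLParser):
    """Collect the payloads of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self.blocks, self._buf, self._active = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._active = True

    def handle_data(self, data):
        if self._active:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._active:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf, self._active = [], False

extractor = JSONLDExtractor()
extractor.feed(SAMPLE_PAGE)
print(extractor.blocks[0]["name"])  # -> Contrast
```

The point being that the same page a person reads doubles as data a program can repurpose, with no separate export step.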
Books Are Open for Q4 2024, Q1 2025
I still have some time in the rest of this year to take on client projects, and will happily start taking bookings for 2025 as well.
If you are in a leadership position at a company (it can be an association, university, government agency, or other institution too; I'm not picky), or know somebody who is, and you have a project that needs planning, or a website or knowledge base that needs organizing, or some other hairy, gnarly thing in your organization that needs understanding, I may be able to help.
I do some product stuff, but my main professional focus is infrastructure: internal processes and systems. For many companies these days, though, the infrastructure is the product, so maybe the distinction isn't necessary. If you're subscribed to this newsletter, then you probably know my motto is Make things. Make sense.—and I mean that literally. I help people in organizations think and talk about the things that are important to their work, and I do that by gathering information, concentrating it, and representing it in novel, insightful ways. I've also been writing software for over 25 years, so I can interface just fine with technical teams, do knowledge transfers, or even the occasional heavy technical lift.
The kinds of things people have hired me to do in the past include:
- Taxonomies of terminology,
- Ontologies (data semantics), format specifications, and protocol design,
- Content inventories/audits,
- Information architecture,
- Data visualization intranets,
- Simulations,
- Vendor inventories/capability assessments,
- Application security audits and design,
- Organization-wide internationalization/localization strategy,
- Codebase overhaul planning,
- General resource planning,
- Birthdays, weddings, and bar mitzvahs (kidding, obviously).
A new, bounded-scope service I'm currently offering is something I am tentatively calling a “lit review on steroids”. Most of my projects have had something like this at their core for years, but now I'm offering it as a stand-alone service. Every organization I've ever encountered has a set of concepts and procedures that everybody needs to understand and coordinate around, but to the extent that they're written down at all, the documentation is usually pretty ad hoc. The idea, then, is to pull all of these processes and concepts out and organize them, and, most importantly, set up a skeleton for them to stick to.
The utility of this kind of service ranges from helping disparate teams and departments standardize on terminology, to getting new hires up to speed, to unsticking a stuck project, to setting the organization on a trajectory where the governance of this aspect of its business is an attainable goal.
Another thing I want to call specific attention to is the line item “vendor inventories and capability assessments”. One principle that suffuses my work is that relationships ought to perform. A question my clients have found worth asking is, cost-benefit aside: to what extent do our vendors help us achieve our qualitative goals, and to what extent do they get in the way? Are there alternatives, and what are the implications of making a change?
Unlike many consultants, I try to imagine how my work product might actually get used, and as a rule I do not trade in mere PowerPoints and reports destined to go into a drawer somewhere. For starters, everything I make that has any kind of formal structure, like a concept scheme or data visualization, I deliver in a way such that the data can be extracted—mechanically, of course—and repurposed. Whether or not you're into that kind of thing, it's a bonus that I'm pretty confident you won't get anywhere else.
I can do engagements as short as one month; two to four is typical. Hiring me for the duration is in the same cost ballpark as hiring a senior IC (individual contributor). I also do one-on-one sparring sessions with leaders, either as a one-off or on retainer. If you want to schedule a chat to discuss how we might work together, email projects@methodandstructure.com.
Like Winning Two Dollars in the Algorithmic Scratch & Win Lottery
I have been busy enough over the last several months that I haven't been able to sequester the minimum amount of time it takes for me to write anything coherent. I find it takes me no fewer than four contiguous hours to get anything worth reading onto the page, and typically much longer. My median newsletter, I estimate, is about a 12- to 16-hour affair, which I typically disgorge in one shot—or close to it—over a day or two. Nature of Software chapters take me at least a solid week with nothing else on the docket. My ordinary website essays land somewhere in between, unless I'm doing an interactive gadget, in which case the gadget typically takes at least a week all by itself.
I actually found Substack itself was a significant hindrance to getting newsletters out the door, on account of their stubborn refusal to accommodate people who do the writing part of their newsletters anywhere other than Substack. It regularly took me at least two or three additional hours to reformat and re-link—by hand—a stripped, pasted-in text every time I sent something out on Substack. Not the case with Buttondown, which lets me paste in Markdown and be done with it.
What I've found to fill the gap—which I wasn't expecting—is video. Specifically, I seem to have stumbled across a format that goes from zero to shipped in under an hour✱:
- No rehearsals — the subject matter should already be pretty well-rehearsed,
- Five to ten minutes — the videos turn out to clump closer to either five or ten minutes: one or two related topics that nucleate surprisingly naturally at about five minutes apiece,
- One take — I don't want to have to spend lots of time editing,
- Minimal B-roll — ditto, though I will include some if it's something I can just chuck in there,
- First thing in the morning — while it's quiet and the day has yet to shovel its myriad distractions onto my plate,
- While I finish my coffee — which has turned out to be my tagline.
✱ Well, it was under an hour until I started making title cards, so more recently it's been closer to two. I anticipate I'll be able to beat this back down, though, once I streamline my method.
The videos, which I'm cheekily calling Morning Warmups, are on my YouTube channel, @methodandstructure. I try to do one or two a week. As of this writing I'm sitting at just shy of 200 subscribers, so you should, as the Zoomers say, smash that subscribe button.
Why I Actually Wanted To Bring This Up
Self-promotion aside, I noticed something odd the other day, namely that one of my videos was actually getting a little bit of traffic. More specifically, it was getting traffic by way of The Algorithm™.
The video in question, shipped on June 21st. Note no title card or anything, just little ol' me.
I find traffic to my little corner of Ye Olde Tube of You fades out pretty quickly after I post anything, so I was surprised to notice the little timeline visualization in the analytics dashboard humming away about three days after publishing. Further inspection revealed that it was all going to the same video. I had initially suspected somebody had put it on Hacker News, as had happened with my Agile as Trauma article, but nope: the calls were coming from inside the house.
Show me on the chart where Google cut off your traffic…
And then, after another day or so of feeding me eyeballs, Google unceremoniously shut off the traffic spigot. In total the attention didn't amount to even 400 views (I crossed that line under my own power). It has since happened again not once, but twice, albeit to a lesser extent.
I wanted to talk about this because, as modest as it was, the experience was like the first time a wave catches your board when you're a novice surfer—which I absolutely am—after every previous wave has slid underwhelmingly by beneath you. I have no idea what I did that got it to catch that time. My current working hypothesis is “probably nothing”. The subject matter was mildly provocative, but all of my morning warmup videos are precisely mildly provocative. I circulated it to the socials like I always do (I got a lot more reach just from Twitter before Musk put the clamps on, let me tell you) and even came back and boosted it later (since the new platforms are strictly chronological). Nothing different in the program.
The point is, there is no point in trying to do Kremlinology on the YouTube algorithm, especially at this stage in my, uhh, “creator journey”. Moreover, it's not my goal to become “a YouTuber”. More-moreover, do you have any idea what that entails? Holy shit. Before Google lets you past the velvet ropes of monetization, they want 500 subscribers (I have 193), and “3,000 public watch hours” in one (rolling) standard year. I have 188 of those. I would have to clock 8.2 hours of viewing time every day for a solid year before I could monetize my YouTube account. The only time, incidentally, that I got that kind of magnitude was at the peak of that first algorithmic boost. So if it happens, it happens, but I'm sure as hell not going to try to game something which has demonstrated itself to be as opaque as it is capricious.
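For the record, the 8.2 figure is just the watch-hour requirement spread evenly over a 365-day rolling window:

```latex
\frac{3000\ \text{watch hours}}{365\ \text{days}} \approx 8.2\ \text{hours per day}
```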
About the only change to my behaviour is that I now do title cards. As mentioned, this move has—hopefully only temporarily—doubled my overhead. Funnily, it wasn't my brush with the YouTube algorithm that motivated me, but a cold email from some marketing guy who tried to neg me into paying him to do them for me. Total non-starter, as the entire point of the morning warmups is to blast them out in one shot and ship them immediately. Besides, I've been doing graphic design for something like 30 years, so I'm well capable of doing title cards myself. I only started doing them because videos without title cards look kind of janky, which is a bit embarrassing. It's not clear so far whether they've made any difference in viewership.
P(Dumb)
I am actually proud of how this little Photoshoppage turned out; I busted out the Wacom and everything.
I am a voracious listener to Lawfare's podcasts, but their recent discussions around artificial intelligence have me concerned that putatively serious people, who have real, material influence on the shapers of law and public policy, are preoccupied exclusively with science-fiction AI doomsday scenarios, at the expense of any attention whatsoever to the real, documented harms and risks happening around AI today. I was irritated enough to write down why I think this is tantamount to a dereliction of duty. I started writing it here, but it grew to over 5,000 words, so I clipped it out and put it into its own article.
The TL;DR is that there is a sequence of events that has to happen before AI doom can even be put on the table as a possibility, let alone something you can assign a probability to. These events will be large and conspicuous enough to be in the news, at least within their respective fields. They involve significant advances in math, theoretical computer science, AI engineering, business, geopolitics, and, most importantly, computing hardware. What I argue in the piece is that, much as it doesn't make sense to worry about a shark attack in the middle of a desert, the probability of AI doom is zero until each of these supporting events comes to pass—and there's no guarantee that they will. Importantly, you will absolutely notice a change in your surroundings long before the risk becomes something to worry about. That is, you can reassess when you find yourself floating in the South Pacific, but there will be opportunities to do so before then.
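In probability terms (my gloss, not the article's notation, with E_1 through E_n standing in for the prerequisite events):

```latex
% Doom requires every prerequisite event, so by the chain rule:
P(\mathrm{doom})
  \le P(E_1 \wedge \cdots \wedge E_n)
  = \prod_{i=1}^{n} P(E_i \mid E_1, \ldots, E_{i-1})
```

Since each E_i is conspicuous when it occurs, the argument goes, any horizon over which none of them has come to pass carries a doom probability of zero, and each one that lands is your cue to re-estimate.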
I also have some bonus material that didn't make the cut, a devil's dictionary of disingenuous AI startup terminology:
- AI itself is and always has been a marketing term. What it refers to is a moving target, different today from when it started out. Roughly, it's supposed to refer to computers that can genuinely think, but conceptualizations of how to achieve that have changed. Nowadays the vogue is a thing called machine learning, which carries all the baggage of a statistical—rather than deterministic, and thus reliable—system.
- AGI stands for Artificial General Intelligence, basically an electronic mind that can perceive and think and reason and has goals and initiative. It is largely based on a naïve model of cognition.
- AI risk/AI safety is a tandem shibboleth used by peddlers of these systems to refer not to the real, documented risks and harms of AI, but to hypothetical future ones (see alignment).
- alignment refers to the propensity of AI to do what you want, versus doing what it wants. Since AI does not want anything, this is a red herring, concocted by vested interests who do want things very much, like your money.
- foundation model is an attempt to frame the discourse around AI models that are so big and expensive that the only way to interact with them is by renting access. The idea is that they will provide a foundation upon which you add your own tailoring, like some kind of digital sharecropper.
- frontier model refers to the biggest, most expensive AI models. It's just an attempt to make them sound cooler than “something somebody spent a lot of money on”.
- hallucination is when the AI makes a boo-boo. The purpose of this term is to conflate the cases where it simply produces garbage with the ones where it makes an error of serious consequence.
- P(Doom) is your personal confidence level, typically rendered as a percentage, that artificial intelligence—and nothing else—will annihilate humanity. To be meaningful, a figure like this needs a date attached to it, and it never has one, which is one way (of many) you know it's not serious.
- X-risk is existential risk (to humanity) due to AI launching nukes, inventing bioweapons, building a robot army, or whatever other sci-fi has been dreamt up in the last century.