This project was prompted in part by the overwhelming response to what I ended up calling a retrospective of Christopher Alexander's work at the time of his passing. In it, I referenced a keynote address he gave in 1996 to a room full of programmers. The keynote covers three major points:
For starters, there are an order of magnitude more programmers than there are architects.
Twenty-six years later and not only are we still talking about patterns, we're still talking about them wrong. Their inventor is no longer around to correct us, but he left instructions. The caveat, however, is that those instructions are laid out in a four-volume tome that weighs twelve pounds and is over 2500 pages long. Like the patterns, the text is calibrated to the construction of actual buildings, and it needs to be reinterpreted—not to mention significantly compressed—for software before we can make use of it. This is what I endeavour to do.
This series is going to take a personal view on Alexander's magnum opus, The Nature of Order. It is an essay in installments. I have no qualifications other than the fact that I have closely read it, as well as pretty much everything else Christopher Alexander has ever written, and that I started working in software around the time he gave that keynote. The best I can offer is to be clear and honest in my own interpretation; anybody who wishes to contest it can read the source material for themselves.
This series focuses exclusively on The Nature of Order. It begins with an introductory issue (the one you are currently reading), then, every 2-3 weeks or so, an issue for each of the 15 fundamental properties/transformations (if you don't currently know what I mean by that, I will be explaining it below), then at least one synthesis/concluding issue afterward. I will not be talking about patterns except insofar as they are relevant. I will also be assuming that my audience mainly consists of software professionals who know something about Christopher Alexander—at least about patterns—and are at least aware of The Nature of Order.
Christopher Alexander had a rare gift: not only to be able to see what many others could not, and not only to respond to what he saw with his own work, but also to put it into words so that others could do the same. This, however, does not make him a saint. It took him over six decades, hundreds of buildings, and thousands of pages of published text to articulate what he saw. If he had succeeded by his own standard, his ideas would have seen more uptake during his lifetime, by dint of being more accessible than they were and addressable by more people than they are. People wouldn't be able to publicly misrepresent his ideas or co-opt them for their own agendas. If he had been successful, promulgating his message wouldn't take a four-volume, 2500-page monsterpiece that costs a fortune and takes a year to read and several more to digest—on top of another 2000 or so pages of foundational material.
This is not to say that Christopher Alexander was overly verbose, or an obscurantist. Quite the opposite: his writing is eminently plain and accessible—there's just that much to discuss.
What Alexander did achieve during his lifetime is consistent with a person operating at the very edge of their capability. There are consequences to that. We in software have our own examples in people like Ted Nelson (who is, as of this writing, still alive) and Douglas Engelbart. Perhaps the problem is too big for one person to tackle in a career, or too nebulous to even articulate such that one can solicit effective help. In any case, it is a high-risk mode of operation that tends to involve the people around you. If you choose this path, your sacrifices are not your own. This is nothing to celebrate; nothing to beatify. I empathize with this position, but as I age, I find it gets harder to justify.
Buildings and software both shape human behaviour. Indeed, software is a newcomer, and in some ways a powerful solvent and catalyzing agent acting on buildings and other staid forms of organizing social activity: law, philosophy, mythology and religion, ritual and ceremony, science, art, et cetera. The Nature of Order touches on these topics, but it is proximately about buildings, so buildings are where we will be making our main comparisons.
All structure is born of constraints, because without constraints there are no affordances. Buildings dictate—or at least influence—how people move about them through the placement of materials. Software does something analogous, though both what it does and how it does it are harder to summarize.
All of us are likely to know what a constraint is, but I'm sure not as many have heard of an affordance, even though the idea will feel natural once you have. We can think of an affordance as the complement of a constraint; it's the capability you gain from applying a constraint. But it's a little more than that. An affordance is formally defined as a relationship between the environment (in our case, an artifact in the environment) and an individual. It's the set of capabilities, of potentialities, of next steps, that the object affords better than it does others.
For more about affordances in software design, see Andrew Hinton's book, Understanding Context.
The relationship between constraints and affordances is well demonstrated by contemplating the simple machines: the lever, the pulley, and the inclined plane each trade off how high you can raise an object (a constraint) for how heavy an object you can move (an affordance). In the wild, these relationships are going to be much more numerous and complex, but they will always be a function of the structure of the object in question, and its situation in an environment.
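The same trade-off shows up all over software. As a minimal sketch (the Python below, and the GridRef class in it, are my own invention for illustration): marking a dataclass frozen is a constraint, in that instances can never be mutated after construction, and that constraint buys you an affordance, namely hashability, so instances can serve as dictionary keys and set members.

```python
from dataclasses import dataclass

# Constraint: frozen=True forbids mutation after construction.
# Affordance: because instances can never change, Python can safely
# derive __hash__, so GridRef values work as set members and dict keys.
@dataclass(frozen=True)
class GridRef:
    row: int
    col: int

visited = {GridRef(0, 0), GridRef(0, 1)}   # afforded by the constraint
print(GridRef(0, 0) in visited)            # True

try:
    GridRef(0, 0).row = 5                  # the constraint in action
except Exception as e:
    print(type(e).__name__)                # FrozenInstanceError
```

Give up the freedom to mutate, gain the ability to index: the affordance is a function of the structure, just as it is with the lever.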
The fact that constraints and affordances derive from structure is important because this is how we make the connection from buildings to software. Structure in buildings reduces to selections of materials and their geometric configuration. Software is all made out of (effectively) the same material, and it does not have an inherent geometry. It does, however, have a topology. Certain configurations of materials and geometry in buildings and other physical artifacts also occupy a semiotic dimension: a building can be crafted to signal to people in ways other than direct influence over the movement of their bodies. Software, being made completely out of signs and symbols, is only ever✱ operating in this way.
✱ Proximately. Most software doesn't move atoms around directly—at least not entire ones. It usually enlists us humans for that.
The built environment likewise encodes all kinds of social norms, for good or for ill. Consider gendered bathrooms, segregated building entrances (by race, gender, socioeconomic status, etc.), gated communities, throne rooms, anti-homeless spikes, and so on. Even something as ordinary as locks on doors or frosted glass. Software goes one farther and actually encodes what does and does not get to be recognized as a legitimate concept. Everyday examples include:
While buildings act as a substrate for shaping human behaviour, software also exhibits behaviour of its own. Software can (nearly) always behave in ways that only its author is aware of, as well as in ways its author isn't aware of. There is hardly any way to guarantee that at any given moment, a piece of software is working for you and only you, even if you wrote it. This is a universal meta-affordance of software.
As you can see, there are parallels between buildings and software, and considering the constraints and affordances—that is to say, the configuration—of topology, semiosis, and cognition is how I intend to reconcile Alexander's work in architecture with the craft of software development.
One hurdle we will have to get over before we can proceed, that could be glossed over with the patterns but not The Nature of Order, is Alexander's metaphysical stance. Christopher Alexander was a Catholic, and as such, when he wrote the word God, it is a safe bet that he meant it exactly as a religious person would. The words spirit and spiritual likewise appear dozens of times in the text. This is an essential part of his thesis, and those among us who are allergic to this kind of language can't merely skip over it.
I will admit to having personal trouble with the word “spiritual”. My strongest association with the term is seeing it deployed disingenuously by sloppy thinkers pretending to have special access to hidden knowledge. Even earnest people define a “spiritual experience” in terms of its attribution to the supernatural, a logical leap (often into a circular loop) for which there is no basis.
Attributing intense emotional experiences to supernatural intervention is a bit like saying “I saw a UFO [that is, an unidentified flying object], therefore it must be aliens.” That is, if the flying object were actually aliens, then it would no longer be unidentified. By the same logic, what ostensibly defines a spiritual experience is the act of calling it one.
I am inclined to distinguish “being spiritual” from a “spiritual experience”. The former is an (often self-reported) personal attribute; the latter is a label for a (potentially) prematurely-categorized phenomenon. Words associated with these experiences include: warmth, light, love, safety, protection, meaning, purpose, infinity, eternity, connection, belonging—but ultimately the person undergoing the experience is unsatisfied with a verbal description. “Spiritual” people will often claim that espousing certain beliefs (typically their beliefs) is a prerequisite for a spiritual experience, but it isn't clear why an individual couldn't experience the phenomenological equivalent in every way, save for that final catapult into supernatural attribution.
Christopher Alexander was not a sloppy thinker; he was a demonstrably rigorous one. It would be a mistake to dismiss him as a crank—as many do—because of his proclamations that deep feeling and spirituality are essential to his method; that when he said he wanted to “make God appear in a field”, he meant it literally. It nevertheless sounds a heck of a lot like his methodologies are open only to those willing to make certain ontological commitments, and so we must find a way to reconcile this.
Baruch Spinoza (who gets an endnote in Book 4), in the 17th century, famously “proved” the existence of God by simply defining “God” as that which has infinite extent—in other words, as a synonym for “everything”. From this we can substitute all references to God, anywhere we see one, with an ordinary definition of everything and not lose any meaning. Indeed in so doing we remain perfectly consistent with Alexander's other central notion, that of wholeness. So unless your cosmology lacks a concept of everything, you don't need to change your beliefs in one direction or another.
For “spiritual”, we can perform a similar manoeuvre, but first we need to back up a little. Alexander asserts in The Nature of Order that the method prescribed therein is contingent on what he calls deep feeling. He criticizes the scientific establishment—rightly, in my opinion, though I come to it from a different angle—for discounting human feeling as a legitimate empirical measure. Works of art, literature, music, poetry, and so on, observably impel us emotionally, and trained artists can fairly reliably produce works that evoke the specific feelings they desire. To say that an architect could do the same with a building (or, for that matter, a programmer with software) is not much of a stretch. We may not be able to define a precise unit for feeling, and take measurements along its continuum like we do in physics, but we can observe and record that one artifact or process elicits a stronger emotional response, and does so in more people (and much more than can be accounted for by differences in taste), than another. This indeed is Alexander's essential testing criterion.
What I have described here, and will treat in greater detail later on, is a thing called the “mirror-of-the-self test”.
Even if we take our (read: my) cynical framing of “spirituality”, which consists of a belief in the unfalsifiable with its basis in private feeling, there is still room to move within it. The first move is to recognize that there is a difference between unfalsifiable in practice (say, when it isn't feasible to create the experimental conditions) and unfalsifiable in principle (i.e., questions that are fundamentally unanswerable). If we concentrate on the former (although really, we needn't), we can see that we effectively act on unsubstantiated belief all the time. We have a word for this in our discipline: heuristics, procedures that generate good-enough results under incomplete information and are much less costly to run than exhaustive algorithms.
The next component is to recognize that belief itself doesn't have to be binary. We in our discipline are familiar with this too: it's called Bayes' Theorem. With it, it's easy enough to talk about a degree of belief that scales with how much we're willing to stake on a particular position. Again, there's no rule saying a given bet has to be all or nothing.
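To make the graded-belief idea concrete, here is a small worked example (the scenario and the numbers are invented for illustration, not drawn from Alexander): a prior degree of belief, updated by one piece of evidence via Bayes' rule, lands somewhere between zero and one rather than at either pole.

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return the posterior degree of belief P(hypothesis | evidence)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Hypothetical wager: "this refactoring genuinely improves the design."
belief = 0.3  # a modest prior
# A favourable review is likelier if the hypothesis is true (0.8) than if it's false (0.4).
belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(round(belief, 2))  # 0.46 -- more confident than before, still far from certain
```

The point isn't the arithmetic; it's that the stake can move continuously with the evidence.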
And now we get to the feeling part: there is evidence from linguistics, cognitive science, and developmental psychology suggesting that symbolic concepts—that is, things that can be talked about with words—are rooted in the body. In other words, everything that can be said is but a subset—albeit a much more precise one—of that which can be felt.
In his landmark presentation Doing With Images Makes Symbols, Alan Kay invokes the work of Jean Piaget, and its expansion by Jerome Bruner, on the titular progression in the development of reasoning in children. George Lakoff (in Women, Fire, and Dangerous Things, and with Mark Johnson in Metaphors We Live By) advances the notion of a kinaesthetic image schema as one of three bases (alongside genus and metonymy) for concept formation. It is an entirely reasonable, testable hypothesis that thought and language are refined from feeling; one that is even reflected in the anatomy of the human brain.
So, if it isn't too presumptuous, we can do to “spiritual” what Spinoza did to “God”: append a compatible definition, one we can keep strictly phenomenological. We can say something like: a “spiritual experience” is one that is especially profoundly felt. We can furthermore conjecture (pending empirical study) that the profundity of the feeling is partly due precisely to its resistance to articulation: the emotional intensity is compounded by the losing struggle to put it into words. Whether the subject of the experience chooses to attribute it to a soul, spirits, demons, fairies, or god(s) is up to them, but none of that is necessary to account for the experience itself. A spiritual experience can be defined as, at the very least, a profound feeling, and you're free to take it farther than that if you want to. I don't think Christopher Alexander would have minded either way.
Alexander's core thesis is that an artifact's ability to elicit deep feeling is an empirical characteristic of its physical and geometric structure. Certain configurations are objectively more evocative than others, and the way to compare them is by actually feeling the difference for yourself. Furthermore, since (according to Alexander) this elicitation is a property of the artifact, different people will tend to report similar feelings with similar intensity, even though the feeling itself is, of course, an entirely private process. It will be interesting to imagine how we can apply deep feeling to the creation of software.
Christopher Alexander used a peculiar definition of life. In short, life according to him is not an exclusive feature of biological organisms, but rather a more general process of local elimination of entropy, which biological organisms just so happen to exhibit. This more expansive definition tracks very closely with Schrödinger's (who gets considerable treatment in Book 4), and it means that arbitrary regions of space can be compared to one another as more or less alive. Life in this expanded sense can moreover occur at wildly different scales of time and space. This non-biological “life” is what Alexander calls living structure.
Conventional examples of this living structure are things like anthills, termite mounds, beehives, wasp nests, coral reefs, et cetera. Colonies of eusocial insects are referred to as superorganisms, and their “bodies”, with their own metabolisms, immune systems, and goals, are made of dirt, wax, chalk, or mud. In a way, humanity can itself be construed as a superorganism, with a skeleton made of concrete, metal, wood, brick, and stone.
Perhaps the easiest way to understand living structure is to compare it to structure that is dead. The quintessential dead structure is a monument. The only thing to do with a monument is to restore it to the state that it was when it was built, the state away from which it is constantly decaying. Maintaining the monument is a sunk cost. Take away that support and the monument will crumble to dust. Living structure, by contrast, is lived in. It is continually being revised and expanded. Living structure attracts (biological) life because it supports, protects, nurtures, and encourages it. The inhabitants of living structure have an incentive to contribute back to it, thus making it more living. Thus living structure grows organically, as if it was itself organic.
This description of living structure sounds a lot like software. Software with any kind of currency is constantly being revised and expanded. If neglected, software detaches from its surroundings, becomes unusable and is quickly forgotten. Furthermore, software with deliberate monument-like characteristics is extremely rare.
I am inclined to give TeX as an example here, as it has been around for 44 years and has been steadily refined toward a fixed, crystalline state. However, there is such a vibrant ecosystem around TeX that even if its core fossilizes, it will still maintain its cult following long after its author, Donald Knuth—who is in his 80s—passes on.
The unmaintained building decays in its own right, due to water, wind, rust (the bane of reinforced concrete) and other chemical processes, fungus, algae, plants, and animals. Unmaintained software, rather insidiously (and modulo failures in the hardware upon which it is stored), stays put. It's the world that moves underneath it. Maintaining software is something of a misnomer; rather what we are “maintaining” is its connection to reality.
We can say then that the default assumption underpinning software is that it is intended to be living structure. Much like living structure in the ordinary building sense as Alexander meant it, it's what the code does—its effect on the world and the people around it—that attracts the resources and attention that keeps it alive.
As a Canadian, I spell the following word centre, but I will use the American spelling center to refer to the specific term of art. Indeed I think it's a little odd that somebody born in Austria and raised in England would adopt American spelling, but I suppose he spent most of his adult life in the US. Expect to see both centre and center going forward.
Another essential Alexandrian concept, after life, living structure, and wholeness, is that of a center. The idea might actually be easier to apply to software than the built environment, because software is inherently discrete. In the IRL formulation, a center is a field-like region of space which may or may not have a crisp boundary. A center nevertheless has a thing-ness—an identity—and that is really its essential property: A center is something you can point to, and at the very least say “that one”. Centers are recursive; they're made of centers and embedded within other centers, and living structure tends to have a lot (like, a lot) of them.
Again, since software is made up almost completely of discrete identities, we're going to have to think about how this concept of centers applies. (For one, just having lots of them isn't sufficient; their arrangement is also significant.)
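To make the recursion tangible, here is one way to picture it in code. This is purely my own sketch, not a formalism from the book: a center reduced to its essential property, a nameable identity that contains other centers and is contained by them.

```python
from dataclasses import dataclass, field

@dataclass
class Center:
    """A nameable 'that one': an identity made of, and embedded in, other centers."""
    name: str
    parts: list["Center"] = field(default_factory=list)

# Centers all the way down: a module made of functions made of smaller steps.
module = Center("payments", parts=[
    Center("charge", parts=[Center("validate card"), Center("post ledger entry")]),
    Center("refund", parts=[Center("reverse ledger entry")]),
])

def count_centers(c: Center) -> int:
    return 1 + sum(count_centers(p) for p in c.parts)

print(count_centers(module))  # 6 -- and genuinely living structure has vastly more
```

Counting them is the easy part; as noted above, the arrangement is where the rest of this series will spend its time.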
The natural draw of software people to Alexander's method, I suspect, is due to the fact that we actually have to work this way. Project management strategies based on how buildings are constructed have been repudiated over and over. Lobbying for iterative and incremental processes goes back to the sixties. The Agile Manifesto signatories are big Alexander fans. They were mobilized because Waterfall—which is (roughly) how you build contemporary buildings—doesn't work for software.
So what is Alexander's method? It reduces to the successive, recursive application of structure-preserving transformations. At its centre is the fundamental differentiating process, which looks a lot like what we would recognize as an event loop. It has been compared to Boyd's OODA loop, insofar as it contains steps to observe (the space under consideration), orient (toward the center most in need of transformation), decide (on which transformation to apply), and act (test that the intervention is the best possible one).
We can think of this process as a computation, where the main loop selects a transformation based on the current state of the system, applies it, and returns the system in its new state. This archetypal process should be intuitive and immediately recognizable to us.
The process can either run until changes become too small to measure, or simply terminate after a fixed number of iterations. In Alexander's practice, this equated to allocating a fixed budget, and then spending it.
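Read as code, the loop might look something like the sketch below. To be clear about what is mine and what is Alexander's: the shape of the loop (evaluate the whole, pick the most helpful structure-preserving step, apply it, stop when improvements vanish or the budget runs out) is a paraphrase of his prose, while the function names, the numeric budget, and the evaluate callback standing in for felt judgement are my own scaffolding.

```python
def differentiate(system, transformations, evaluate, budget=100, epsilon=1e-3):
    """Sketch of the fundamental differentiating process (my paraphrase).

    `system` is the current state of the whole, `transformations` are
    candidate structure-preserving moves, and `evaluate` stands in for
    the felt judgement of how much life a configuration has.
    """
    for _ in range(budget):                    # spend a fixed budget...
        baseline = evaluate(system)            # observe the space as it is
        best_gain, best_state = 0.0, None
        for transform in transformations:      # decide: audition each candidate
            candidate = transform(system)
            gain = evaluate(candidate) - baseline
            if gain > best_gain:
                best_gain, best_state = gain, candidate
        if best_state is None or best_gain < epsilon:
            break                              # ...or stop when changes become
        system = best_state                    # too small to matter; keep the best step
    return system
```

The important thing is the shape: one small, tested, structure-preserving step at a time, selected in response to the current state of the whole, which is why it reads more like an event loop than a blueprint.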
The fifteen properties are both the nouns that pick out the characteristics of living structure and the verbs that create it. That is, the properties are also structure-preserving transformations. Each issue, in succession, will examine one of these properties and try to apply it to software. Here they are:

1. Levels of scale
2. Strong centers
3. Boundaries
4. Alternating repetition
5. Positive space
6. Good shape
7. Local symmetries
8. Deep interlock and ambiguity
9. Contrast
10. Gradients
11. Roughness
12. Echoes
13. The void
14. Simplicity and inner calm
15. Not-separateness
We should note a few things about these properties/transformations. First, they primarily concern geometry, so we're going to have to abstract them in order to apply them to software. Second, the fifteenness is not especially significant—it's just the number Alexander ended up with. He had a separate list of eleven properties for colour, seven of which map 1:1 onto the geometric properties, while the other four map onto two apiece. So we might actually end up with a different number of software properties entirely.
The presence of a different set of properties just for colour is what prompted me to believe that yet another set of properties, for software, was feasible. Moreover, the mapping of the colour properties to the geometric ones was the principal clue that the number of properties, or even their exact identities, was not essential per se; rather, they were best-effort labels for something more fundamental that is harder to articulate.
To wit, I posited in an earlier work that when you look through an information-theoretic lens, the fifteen properties cluster around three categories: conveying information, compressing information, and throttling information to facilitate uptake—a connection we will surely be revisiting. This falls in line with the semiotic aspect of the properties: their capacity to communicate their significance just by being looked at.
So that's the program: one property per issue, mapped onto software, then a concluding synthesis issue at the end—at least one, or however many it takes to make sense of what we surface. Thanks for being part of the journey.