Christopher Alexander expressed a strong moral conviction that what he called living structure was good, and a lack of it was bad. The way to create living structure, according to Alexander, was through successive applications of (again, what he termed) the fundamental differentiating process, each step featuring one or more of these properties as structure-preserving transformations.
Whether living structure is a real phenomenon—and furthermore, whether it is immoral not to create it when you have the opportunity—are assertions you can have a genuine, good-faith debate about. You can even argue over whether a stepwise process of incremental differentiation is necessary to produce it (to the extent that it exists in the first place). That these fifteen properties are present in the built environment, in greater or lesser degree, is not as negotiable. They are conspicuous features of the geometry of the space in question; you shouldn't have to go digging into the drawings to perceive them. They require neither a certification in architecture nor any professional interpretation to recognize. The only training necessary is the list of properties itself, with a handful of examples, which anybody can memorize in an otherwise idle afternoon.
As I age, I inch farther toward the position that any sufficiently sophisticated and/or esoteric rationale for imposing one's will onto the public is indistinguishable from arbitrary. While said rationales need not be inherently disingenuous, they will nonetheless often be construed as such by the public. Where I've landed currently is what I'll call the Einstein Paraphrase: make your rationale as simple as possible, but no simpler—where “simple” not only denotes few moving parts, but those that remain are familiar. If you can't ground your rationale in language and concepts that ordinary people understand, that's a sign you don't understand it very well yet yourself.
Why we're interested in The Nature of Order, again, is because the fundamental differentiating process looks a heck of a lot like a particular known-good way of making software: a stepwise, incremental process of iteration and refinement, most recently incarnated as Agile, but one that had been on the books for decades prior. The fifteen geometric properties are at once the nouns and the verbs of an otherwise content-free grammar for creating buildings; the goal of this project is to get a clue to their semiotic and topological analogues for creating software.
Again, these properties are much simpler than any inventory of patterns is going to be.
Whether or not we land on exactly fifteen of our own properties is immaterial—Alexander always maintained that the number was incidental. The ultimate goal of The Nature of Software is to be able to attach labels to a handful of objectively observable phenomena—to the extent that anything is objective—that are “fundamental enough” to serve as a basis for analysis and critique of software, as well as elementary operations that organize its development.
or, What Even Is Symmetry, Anyway?
In recent years I have begun to think of symmetry in similar terms to how Lakoff and Nuñez characterize mathematics in general: Reality exhibits empirical regularities, and these regularities give reality certain properties—which are as real as real can be. It's we humans, however, who ascribe rulesets to these regularities—like sequences, quantities, ratios, distances, angles, algebra, calculi. These rulesets correspond to the regularities with great precision—if they didn't, they wouldn't work, and we would discard them. But correspond is all they do, because these concepts are ultimately artifacts of language, which describe reality but then generalize, abstract, and analogize out to things that have never been observed, and likely never will be. Another way of saying this is that we can never know if mathematics in general, or symmetry (in the strict mathematical sense) in particular, is actually “out there” in the environment, or if it's just part of the story we tell each other called mathematics, and real reality has some additional aspect to it that we just haven't happened to encounter yet.
There is a point to this. Keep reading.
Shape, or form, is one regularity that is “as real as real can get”. Of course if we get too specific with distances and angles, we start to pollute this statement with interpretation, but we can say that different objects✱ of different shapes will exhibit different properties: round objects roll, square ones stay put; the edge of a knife is much sharper than the edge of an axe, but the axe is stronger—that kind of thing. The shape of space is highly consequential.
✱ The very idea of a discrete “object” is itself highly laden with interpretation—at least, in my opinion, more so than “shape” is. One could imagine an ontology that was perfectly continuous: space just gets more and less viscous in places, and there are gradients, but nothing ever completely begins or ends. It would still require a concept of shape.
Nevertheless, I'm not really sure how to proceed without admitting a concept of discrete objects.
If you take two objects, each with a straight edge, and place them edge to edge, the edges will meet without a gap. If you take four objects, each with—what we happen to call—a right angle, not only can you make all four objects meet at their corners, but you can swap any two objects with each other. Moreover, you can take any one of these four objects and flip it over, and it will still fit. This, and other operations like it, appears to be an inherent property of space, at least at the scales we can directly manipulate. And, furthermore, from what we can observe, the shapes of objects at macro-scales are composites of the shapes of their parts, at micro-, nano-, pico-, and femto-scales. A salt crystal, for example, or iron pyrite, may form by all apperception a perfect cube, with right angles that will make either one a valid candidate for our aforementioned test. This is the result of regularities in the spatial arrangement of (again, what we call) molecules, that propagate to macro-scale structures that we can perceive without technological assistance. That said, if you zoom in close enough, you will detect aberrations—noise—in the regularities. The cubical salt crystal would be perfect if not for the stray carbon atom that showed up in the process that constructed it. Molecules—to say nothing of atoms—are so small, and so sparse, that any precise claim about their geometries is in the province of statistics. Perfect symmetry only exists as a conceptual artifact.
Entities like molecules, atoms, nucleons, and quarks are only observable through some kind of medium. Quarks themselves are only “observable” to the extent that they do a good job explaining what is observed. And that's the point: these concepts are extremely robust, despite nobody ever having directly seen one. Feynman once demurred that he couldn't say quantum electrodynamics is like a ball bearing on a spring, because a sufficiently precise description of a ball bearing on a spring means invoking quantum electrodynamics. Nevertheless, we had to begin with something like a ball bearing on a spring to arrive at quantum electrodynamics. It's an extraordinarily effective theory, but it's still a theory, and some other, more effective theory could always come along and subsume it.
As such, the claim that exact symmetry exists “out there” in reality is tenuous. Regularities exist “out there” to be sure, because without regularities, there wouldn't be any structure at all. So we can know regularities exist by deduction—or even by definition, given that we as observers are situated in the same milieu being described. Symmetry, by contrast, is a higher-order construct. The bilateral symmetry we observe in our own bodies, which provides the basis for the concept, is, after all, approximate, though it is grounded in real physical processes. A couple of weeks into gestation, an embryo produces a seam (the primitive streak) that orients its subsequent development. By this point, front and back have already been defined; the streak defines top and bottom, as well as left and right. A remarkable characteristic of morphogenesis is that it is highly path-dependent, embedding information about the next state transformation into the current state. What this means, from the perspective of analyzing generic processes, is that the agents (i.e., the cells) don't need to take their orchestration cues from anywhere besides their immediate neighbourhood. What that's going to yield, if it isn't disrupted, is a bilaterally symmetrical structure along the sagittal axis—or at least close enough to one.
It is actually really quite impressive that morphogenesis reliably produces symmetrical-enough body plans. It is even more interesting how easily we mentally elide minor asymmetries. For example, one would judge my own face to be pretty symmetrical, but if you look up at my nose from the underside, you can see that it is quite conspicuously crooked. You don't notice that looking at me straight on.
If any symmetry we apprehend directly with our senses is approximate, the fact that we platonize it is very interesting. The fact that we generalize that platonism, like some cognitive toy, is even more interesting, especially when that toy becomes instrumental many years later. Évariste Galois was not thinking about quantum mechanics when he concocted his nascent group theory, but it would be very hard to do quantum mechanics—described a century later—without symmetry groups. They work so well that one is inclined to say that the symmetries are “out there”, and maybe they are, but again, all you can really claim is that the regularities are out there, and the theory only corresponds to them, even though it corresponds better than anything else in the history of anything that has ever been measured.
“If you're an astronomer, and the contemporary paradigm says that the universe is made of omelettes, you build instruments to search for traces of intergalactic egg.”
If I were to organize a theory around what symmetry represents for us, it would be cognitive ease. Symmetrical things simply take less effort to contemplate than asymmetrical ones. Symmetry is almost universally considered beautiful, and I conjecture that what we're actually feeling when we experience beauty is relief that we don't have to think about it any harder than we do. Moreover, the folk wisdom that given competing theories, the more elegant theory is probably the correct one, can be explained by the fact that you're going to be more nimble with an elegant theory than a clumsy one. Even if it turns out to be wrong, you'll at least find out sooner. Where the testing of a given theory is less rigorous, or the results of testing are muddy, elegance can be ultimately deleterious: theories—and here I don't exclusively mean scientific theories—with conspicuous symmetry will have an operational efficiency that gives them a political advantage, even if they happen to be wrong.
As for beauty, Schmidhuber argues that what we're appreciating about beautiful-symmetric things is a form of compression, and then goes on to argue that interestingness is beauty's time derivative. In other words, the phenomenon's description is already compressed; how much more—through abstraction and generalization—can you compress it? What deep truths can you infer about the rest of the world? If symmetry is psycho-semiotic compression, then we get more compression from more symmetries, within symmetries, within symmetries.
A Rorschach test is an extreme example of global symmetry with no local symmetries.
If symmetry is about cognitive ease, then the question is ease for whom? Let's meditate on this question in the context of a quote from Volume 1, Chapter 2.7 of The Nature of Order:
“[T]he exact relation between life and symmetry is muddy. Living things, though often symmetrical, rarely have perfect symmetry. Indeed, perfect symmetry is often a mark of death in things, rather than life. I believe the lack of clarity on the subject has arisen because of a failure to distinguish overall symmetry from local symmetries.”
The first association that comes to mind regarding global symmetry is tombs, or monuments more generally. Since precisely symmetrical objects tend not to get very large in nature (besides trivial symmetries like planets), symmetry signals artificiality. A large, symmetrical structure is going to be a conspicuous landmark on the terrain. Tombs like the Giza pyramids and the Taj Mahal have global symmetry, as do many monuments, temples and (famously cruciform) churches, government buildings, and of course, modern skyscrapers. While a fixation on global symmetry could stem from religious piety, political interest, or just plain ostentation on the part of the patron, it can also be a vehicle of legibility for the architect. Le Corbusier, for instance, had highly prescriptive, utopian tendencies, and aspired to design for volume. Since his goal was modular, self-contained arcologies, they had to be organized somehow. The plan, it seems, for Ville Contemporaine, Plan Voisin, and Ville Radieuse, was to stick 'em in a symmetrical grid. Less ambitious—by Corbusian standards—but nevertheless completed projects got similar treatment.
Nowadays, with sophisticated parametric techniques, a wildly asymmetrical structure is arguably more en vogue than a precise, globally-symmetrical one. One wonders, if Le Corbusier were operating today, would he just generate his oeuvre using some parametric software called “Modulor”?
Le Corbusier's master plan, which had several incarnations, involved, at one point or another, knocking down half of Paris, underground highways, air traffic, and literally, physically stratifying society. Mercifully, it was never built.
Global symmetry can also have more mundane or instrumental applications, such as engineered structures, where the envelope represents a pure cost and the desire is to build it as cheaply as possible, or where an overall symmetry serves some material function of the building. Examples include warehouses and factories, but also scientific buildings. These are not confined to the modern era: ancient Indian stepwells—technically warehouses—tend to be globally symmetrical, as do India's medieval observatories. Another form of engineered structure, I submit, is the residential condo. Here, the “engineering” is the maximization, not only of profit, but of its predictability: symmetric floor plans are easy to build, and offer few surprises when it comes to laying out units, or calculating their prices.
Albert Speer loved him some global symmetry; real jackboot energy. I'm mainly including this image because Alexander did, but also because it demonstrates just how menacing global symmetry can be.
Furthermore, a globally-symmetric structure cannot get too big without making demands on the building site. It contravenes the pattern of site repair. Unless the site itself is completely featureless, something will have to be bulldozed. What this suggests is that the overall symmetric shape is more important than the space it occupies, implying it could really go anywhere. This primacy of the plan and indifference to the site are a sign of the most oppressive kinds of bureaucratic faits accomplis—those in the vicinity are left to deal with decisions that have been made somewhere else. The building has no interest in blending into the landscape; rather, it has come to colonize it.
A globally symmetrical plan may be just the ticket if you want to obscure what's going on inside the building, like the British MI6 here. I could have easily used the GCHQ building (a doughnut), the CIA (also globally bilateral) or the Pentagon.
Local symmetries, we may estimate, serve the cognitive ease of the user. The quintessential example, used by Alexander and myself, is the Nasrid palaces of the Alhambra. The word palaces is plural because while it presents as one building, it is composed of multiple apartments, built over several centuries. The craggy hilltop upon which the structure perches notwithstanding, the protracted timescale of construction meant there was no opportunity for a globally-symmetric master plan.
In stark contrast to the Nasrid palaces is the palace of Carlos V, which is in the same compound, just a few paces down the street. It is a gaudy, puffy, Renaissance-era thing, shaped like a squared-off doughnut, perfectly symmetrical with a circular courtyard. I will echo Alexander here in remarking that just because a building is old doesn't make it good.
The Nasrid palaces in the Alhambra abound with local symmetries. Contrast with the Carlos V palace, partially visible on the lower left.
More saliently, though, the Nasrid palaces aren't a monument: they were somebody's personal residence—albeit somebody who was extraordinarily wealthy. A top-down symmetric plan is neither necessary, nor especially desirable for the comfort and enjoyment of their inhabitants. You experience this on the ground when you move from one zone to the next. Each space has its own unique character, but they are all consistent in their placidity, and their local symmetries play a palpable role in that.
I had the pleasure of visiting the Alhambra in May 2015. We went through it twice—once at night, and again the following morning. Irony upon ironies, Alexander never went there. He had a piece of the damn thing but he never saw it in person. I'm sure he meant to go but just never got around to it. It's kind of a single-purpose trip, but it's worth it.
Global symmetry in buildings is effective when seen from afar—or perhaps from above—like a god, or some other overseer with similar ambitions. It really doesn't matter to any actual person standing on the site, whose physical scale is bound to exist within a certain order of magnitude, and whose eyes and ears will definitely be situated within a band spanning the same few feet off the ground. Past a certain scale, the cognitive effect of symmetry will naturally diminish. A sufficiently large symmetrical structure, from the point of view of a human being on the ground, is just going to look distorted, and it will have virtually no effect on other stimuli, like acoustics or air flow. By contrast, no single space in the Nasrid palaces is bigger than maybe a hundred feet on the longer side: it is eminently human-scaled. The bilateral symmetry of each space is readily apprehensible, and helps orient your position and bearing in the building as a whole. Your sense of where you are and what's adjacent to you is aided by the fact that no two spaces are even close to identical, which would not be the case under a regime of global symmetry.
Try, for example, to figure out where you are in a typical airport concourse. Try to infer, from where you're standing, what kind of shape it even is.
My final remark is that a globally symmetric structure only admits certain changes, because most changes would violate its symmetry. To take a contrived example: on a hypothetical building with global (bilateral) symmetry, if you wanted to put an access ramp on one side, you would have to put one on the other side too, even if that didn't make sense. I suppose you could elect to break the symmetry, but then why was it so important in the first place? This is what I infer Christopher Alexander meant by global symmetry being “a mark of death”: such designs are delivered frozen—they're not meant to be changed.
My first instinct here is that any perceptible symmetry in software—again, discounting any graphical interface—would be inherently local. What would it mean for software to have “global symmetry”? We can definitely make comparisons around the concept: a superficial elegance that trades off flexibility and extensibility, likely due to a childish stubbornness to adhere to some naïve understanding of “order”. Sound like any software you know?
I suppose we could consider symmetries in software being “more global”/“less local” and “less global”/“more local”. One phenomenon I often see is what I call mandalas: contrived symmetries that get installed at the upper conceptual level. You see them in management consulting and design thought-leader milieus too—they aren't exclusive to software. A mandala is a typology, or really just a bag of three to six—or maybe more, but not too many—concepts, often arranged in a regular polygon. Or the concepts rhyme, or alliterate, or form an initialism—or any combination of these. “The HURR Framework”, or “The Four P's” or something; those are mandalas too. People love making mandalas into things like UI menus and website navigations. Mandalas are usually bad (though not always), and typically deserve any mockery they get.
Mandalas tend to be bad, in the first place, because at least as important as the concepts at the vertices are the semantic relations that connect them. Hey, maybe there aren't any! But if there aren't, don't try to represent them like there are. Problem number one with mandalas is that they're typically just a bag of terms. Torturing them into a regular polygon implies more structure than is actually there, or elides structure that is there but only relates a subset of the concepts (like maybe two of them).
This diagram I drew for the now-defunct Information Architecture Institute is on the edge of mandala territory. It is supposed to represent five categories of mediating relationships between pairs of constituencies at the poles, plus another two categories of relationships the organization engages in directly. While I still believe the idea of organizing in terms of relationships is useful for business ecosystem modeling (which is what this was), I generally consider this diagram to be a failure. By mandalizing this conceptual structure, I blunted the most important message the diagram was trying to convey. I know this because the response I got when I presented it was “let's make that the main navigation of the website”.
Over the ten years since I first read The Nature of Order, I have numerous times observed the mandalization of Alexander's fifteen properties by onlookers trying to “make sense” of them. It's hard to watch. It's like Kepler struggling to fit the orbits of the planets into the five Platonic solids, ignorant that there are actually nine✱ of them. Which brings me to another point: coverage. Mandalas, or categorization schemes in general, are presumed to be exhaustive. The probability that you will discover a little jewel that perfectly encapsulates your universe of discourse is negligible. It has the same motivation as imposing a global symmetry on a building: compromising the functioning of the contents for some semblance of order.
✱ I grew up with the understanding that Pluto was a planet. The fact that it has since been demoted only reinforces my point: Kepler lacked the tools to see Pluto in the first place, let alone reclassify it.
Kepler's ill-fated model of the solar system.
Not all typologies are mandalas, but people will inevitably try to ascribe a geometry to them. It's like writing satire: no matter how over-the-top you make it, somebody will come along and take it seriously. The fifteen properties themselves represent a best effort, over two decades, to boil a number of concerns down to a set of handles that are useful. If you spend enough time with them, you'll find that they are not orthogonal. What they are, is good enough to work with. Alexander was not insistent that they were stable, and asserted that their number was incidental. Furthermore, his PhD thesis, Notes on the Synthesis of Form, which he completed decades earlier, was all about the topological decomposition of conceptual structures. The entire point of it was to communicate that:
- the underlying structure of a problem domain may elude articulation,
- the underlying structure may surprise you.
One thing these structures don't tend to do, is yield global symmetries. Anybody claiming otherwise is selling something.
Symmetries between concepts are a rare and special thing. Why should it be that concept A relates in any way to concept B, let alone in a symmetrical way? Well, software development is in the business of conceptual structures, and we can make them any damn way we like.
A program, or computation, is a procedure that runs on the computer for a while and (hopefully) terminates. The substance over which it operates is representational state. That state is then used to do something in the world outside of the computer. This means a given computation should be legible at some level, as should the state it produces. A programmer needs to have a mental model of the state of a computer program as it runs, up to a certain level of detail. Any detail elided needs to be immaterial to the program. I think of this like the real computer is running a simulation of a much simpler computer, which in turn is running the program.
Most programming languages don't require the level of detail of, say, knowing where memory is, but a C programmer absolutely does. Joke's on them though, because they're being told a fairy tale by the kernel, which in turn is being told a fairy tale by the processor, which itself could be a Matryoshka doll of virtualization. I suppose Turing equivalence could be considered a symmetry in its own right.
There is very little, if anything, inherently symmetrical about a computer program, either at rest or while it's running. That is structure that we have to add. Named subroutines (or function pointers, or lambdas for that matter), for example, could be construed as analogous to translation symmetry in the topological space in which the program is embedded—you can move it around and it doesn't affect its behaviour. Just like page numbers in books, programming languages didn't always have named subroutines—somebody had to invent them. Indeed, programming languages with symbols that were remotely intelligible at all had to be invented. The history of programming is lined with attempts at cognitive easing, some more successful than others.
The history of programming is a ratchet on cognitive easing, though this may be a bad metaphor: ratchets are one-way devices, but the mechanism that makes this possible is isomorphism, which is a symmetry I have already discussed. Each major advancement in programming languages can be understood as an attempt to make the medium simpler and more intelligible to humans. There is one recent easing that is extraordinarily powerful, and which we're just figuring out how to wield—and no, I don't mean AI, but rather functional programming.
I strongly doubt that generative AI will help people form accurate (at least up to isomorphism) mental models of the state or behaviour of their programs. Rather, I suspect on average it will do the opposite.
Programs usually (but not always) have inputs, also called parameters or arguments, and they have outputs. There's no point in computing anything if you don't have an output. That said, there is more than one flavour of output. There's the return value, which is what comes out the other end of the program, and then there's not the return value, otherwise known as a side effect. Side effects are how you make the computer do anything that touches the outside world, so they're important. But side effects are also the reason why it's so hard to mentally keep track of state. Interactions with hardware actually have to be considered part of the state, because the state includes what's on your screen, or whatever's happening with your network connection.
A language like C is almost completely driven by side effects. A common pattern is to pass in references to complex data structures as inputs, have the program twiddle those, and reserve the return value to indicate whether or not the operation was successful. Languages with more robust error-handling subsystems don't need to follow this pattern—that is, they can use the return value for what it's meant for—but they do still exhibit mutable structures passed in as inputs. Furthermore, the entire point of objects, and of their eponymously-oriented programming languages, is to conceal arbitrarily large and complex gobs of state. Any time you touch an object, you're potentially changing its state. You don't know what's going on inside there, and that's deliberate.
Now, a procedure that only communicates via its return value, and doesn't foul up its inputs, can be called a pure function. It's about as close as you can get in computer parlance to what “function” means in math. We like pure functions in the first place because they have no side effects. State, at least as far as we care, only appears at either end of the function, and nowhere else. This is closely related to the concept of referential transparency, which is a guarantee that a pure function given certain inputs will always produce the same output, so if you see that pattern of function plus inputs anywhere, you can just mentally substitute it for its output, because that is what the computer is guaranteed to do.
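To make the contrast concrete, here is a minimal sketch in Python; the function names and the image-scaling scenario are mine, purely for illustration:

```python
def scale_in_place(pixels, factor):
    """Side-effect style: mutate the input, reserve the return value
    for a status code."""
    if factor < 0:
        return False
    for i in range(len(pixels)):
        pixels[i] *= factor
    return True

def scaled(pixels, factor):
    """Pure style: touch nothing, communicate only via the return value."""
    if factor < 0:
        raise ValueError("factor must be non-negative")
    return [p * factor for p in pixels]

data = [1.0, 2.0]
assert scaled(data, 2.0) == [2.0, 4.0]  # referential transparency:
assert scaled(data, 2.0) == [2.0, 4.0]  # same inputs, same output, every time
assert data == [1.0, 2.0]               # and the input is unmolested
```

You could substitute `scaled(data, 2.0)` with `[2.0, 4.0]` anywhere it appears and the program would behave identically, which is exactly the guarantee referential transparency makes.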
Before the mathematicians in the audience remind me that a function is just a relation between sets and doesn't do anything, I will remark that I regret that the word “function” ever stood to mean what it does in programming. This is why I have been deliberately using the term program or procedure or subroutine. It's also why I go to the trouble of spelling out “pure function”. That said, you could say that the (pure) function relates two sets precisely by computing the point in the range from the point in the domain. Also: there is no rule of math that says functions must only relate sets of numbers.
Another thing we like about pure functions is that they compose, and the resulting composition is another pure function. This makes them like Lego: you can create arbitrarily complex computational structures, and you don't have to worry about anything weird happening. Among other beneficial properties, these compositions can include themselves, giving you recursion (which I would characterize as a symmetry), an elegant way to do looping without looping—and under these strict preconditions, writing recursive functions doesn't suffer the same brain-melting quandary it normally does.
I am referring to the fact that when you write recursive subroutines in most languages, you need to be extra careful not to write something that either loops forever, or blows the stack. Moreover, lots of higher-level languages are noticeably slower doing recursion than loops, even if they implement tail call optimization. So even though recursion is prettier, it often isn't the right thing to do.
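Here, at any rate, is a minimal sketch of both the Lego property and the recursion it buys you, in Python, with toy functions of my own devising:

```python
from functools import reduce

def compose(*fns):
    """Compose pure functions right to left: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

double    = lambda x: x * 2
increment = lambda x: x + 1

assert compose(double, increment)(3) == 8  # double(increment(3))

def total(xs):
    """Recursion as looping without looping; the input shrinks every call."""
    return 0 if not xs else xs[0] + total(xs[1:])

assert total([1, 2, 3, 4]) == 10  # (no tail calls in Python; mind the stack)
```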
Having elevated the floor for our little terrarium of pure functions, we can examine opportunities for even more local symmetries. A function, for example, can have an inverse, and indeed this is often considered essential. It isn't feasible for every function, though, or even for most of them. Say you have a function that shrinks an image—well, those pixels are gone now; there's no function that will put them back. For invertibility, I imagine a roughly concentric set of concepts, like this:
- only certain relations between sets are functions,
- only certain functions are computable,
- only certain computable functions are invertible,
- only certain invertible computable functions have inverses that cost a reasonable amount of time to run.
If you want to nitpick, you can conceivably have non-computable invertible functions, but those aren't very interesting to us here.
This asymmetry, by the way, is how public-key cryptography works. More mundane inverse function pairs would be like `zip` and `unzip`, or to nitpick again, their underlying `DEFLATE` algorithm. A function composed with its inverse is equivalent to the identity function. It puts you back where you started, with nothing to show for it but some spent CPU cycles. This to me has always been a very tangible expression of isomorphism. Some functions, even, are their own inverses, notably the pseudo-cipher `rot13`. In other words, put two of those end to end, and for anything that goes through that pipe, it's like nothing happened.
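Both situations are easy to demonstrate. In this sketch, Python's `zlib` module stands in for `zip`, since it wraps DEFLATE, and the `rot13` is hand-rolled to keep things self-contained:

```python
import zlib

def rot13(s):
    """Shift letters thirteen places; doing it twice is the identity."""
    def spin(c, base):
        return chr((ord(c) - base + 13) % 26 + base)
    return "".join(
        spin(c, ord("a")) if c.islower() else
        spin(c, ord("A")) if c.isupper() else c
        for c in s
    )

assert rot13(rot13("Attack at dawn")) == "Attack at dawn"

blob = b"some representational state" * 100
assert zlib.decompress(zlib.compress(blob)) == blob  # back where we started
```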
Another layer of local symmetry we can add to a set of pure functions comes from elementary school math. What are the arithmetic operators, anyway, besides some dressed-up binary functions? Nowhere is this more evident than in Lisp, where `a + b` is pronounced `(+ a b)`. Now, say you have some function `f` that takes two arguments, but you want to operate over more than two things. There is no inherent expectation that `f(a, f(b, c))` does the same thing as `f(f(a, b), c)`, for all `a`, `b`, and `c`. This property, associativity, is something you have to define into `f`.
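Addition, for instance, comes with associativity; subtraction does not. A couple of lines will demonstrate:

```python
f = lambda a, b: a + b  # associative
g = lambda a, b: a - b  # not associative

assert f(1, f(2, 3)) == f(f(1, 2), 3)  # 6 == 6
assert g(1, g(2, 3)) != g(g(1, 2), 3)  # 2 != -4
```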
A closely related property, commutativity, is when `f(a, b)` does the same thing as `f(b, a)`. The set of functions that are commutative is even smaller than that of the associative ones. But again, you're writing the code, so you can make it behave that way if you want. A binary function (as in, it takes two parameters) that is associative can be extended to take as many parameters as you like. If it's commutative too, then the parameters can be in any order. If we pretend for a moment that the Lisp `+` doesn't already do this (which it does), we could write it like `(reduce '+ '(a b c))` for any list of operands. If that `reduce` looks familiar, that's because it's the same `reduce` as in MapReduce. It takes as many operands as you like and successively applies a function to them, boiling them down to a single value. `map`, as many of us know, applies its function to its operands, but returns the same sequence of operands, transformed by the function. Just like any pure function, you can compose `map` and `reduce` into complex structures, and—provided the operator functions you plug into them are associative and commutative—the resulting computation will behave the same whether you're running it on one computer, or a million of them.
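A sketch of why this works, in Python rather than Lisp, with the split into two halves standing in for the million computers:

```python
from functools import reduce

add = lambda a, b: a + b  # associative and commutative

squared = list(map(lambda x: x * x, [1, 2, 3, 4]))
assert reduce(add, squared) == 30

# Chop the work in two, reduce each piece, combine the partial results;
# associativity and commutativity guarantee the same answer, even with
# the second batch out of order.
assert add(reduce(add, [1, 4]), reduce(add, [16, 9])) == 30
```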
There is also a kind of “commutativity” that operates at the level of function composition, and that's when `g(f(x))` does the same thing as `f(g(x))`, for specific `f` and `g`, and their shared domain (and range!) `x`. This property would be extremely symmetric, probably more symmetric than would be useful. But if you relax the condition somewhat, you can talk about distributivity, which, unlike associativity and commutativity, considers the rules governing interactions between different functions. Distributivity is the property that `a • (b + c)` means the same thing as `a • b + a • c`. We can extend this idea further and define rules for how pairs of different functions ought to interact with one another, and we can build those rules up into an algebra.
The “commutativity” remark here is probably better understood if I write the examples using the function composition operator: `(g ∘ f)(x)` versus `(f ∘ g)(x)`. It's kind of an abuse of terminology, as the composition operator itself is eminently not commutative under typical circumstances. The functions that make up its operands would nevertheless have some serious symmetry if you could compose them in either order and the resulting compositions were equivalent. The only thing I can think of off the top of my head that would do that is some contrived subset of Rubik's Cube (i.e., group) operations, or something close to that.
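A quick sketch of both phenomena, with deliberately simple functions of my own choosing: two translations commute under composition, while a translation and a scaling don't, though the latter pair interact according to a rule, which is the seed of an algebra.

```python
compose = lambda f, g: (lambda x: f(g(x)))

shift2 = lambda x: x + 2
shift3 = lambda x: x + 3
double = lambda x: x * 2

# Two translations commute: composed in either order, they're equivalent.
assert compose(shift2, shift3)(10) == compose(shift3, shift2)(10)

# A translation and a scaling do not...
assert compose(double, shift2)(10) != compose(shift2, double)(10)  # 24 vs 22

# ...but they obey a law: multiplication distributes over addition.
a, b, c = 2, 3, 4
assert a * (b + c) == a * b + a * c
```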
The symmetries in an algebra are extraordinarily powerful. Not only do they Lego-ize a set of formal expressions—as we can see with ordinary algebra (just “algebra”, no “an”) and arithmetic—to formulate questions and compute their answers, but they also let us rearrange the expressions into equivalent structures. This is highly germane to software, as two expressions that provably yield the same results may differ wildly in what each costs to run. A real-world version of this is an SQL query optimizer. The idea behind those—when they work—is that you write your query, and they use the relational algebra, and statistics about the contents of the database, to calculate the equivalent query that will be the cheapest to run. That is work you don't have to do.
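You can caricature the optimizer's reasoning in a few lines (this is my own toy sketch with made-up data, not how any actual optimizer is implemented) by comparing two provably equivalent query plans:

```python
# Toy "tables": a thousand users, of whom ten are active.
users  = [{"id": i, "active": i % 100 == 0} for i in range(1_000)]
orders = [{"user_id": i % 1_000} for i in range(5_000)]

# Plan A: join everything, then filter.
plan_a = [(u, o) for u in users for o in orders
          if u["id"] == o["user_id"] and u["active"]]

# Plan B: filter first, then join. Equivalent by the algebra,
# but it touches two orders of magnitude fewer rows.
active = [u for u in users if u["active"]]
plan_b = [(u, o) for u in active for o in orders if u["id"] == o["user_id"]]

assert plan_a == plan_b
```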
It is worth noting that the relational algebra is just something some guy made up, and yet it is a perfectly valid algebra. This implies you could make your own algebra too. The reason why we have SQL, though, and not what Codd envisioned, is that the engineers got their hands on it and decided they didn't need all that high-falutin' math stuff.
So if functional programming is so great, why not make programming languages 100% functional? Well, sometimes you want side effects—say, for interacting with hardware—and a purely functional programming language doesn't have those. If you want side effects, you have to cheat; you have to smuggle them in somehow. One particularly clever strategy comes from the language Haskell, which uses structures called monads to do the things that you just can't express as a pure function. It then keeps them around for a bunch of other useful stuff. You don't need to use exotic languages, however, to inject these local symmetries into your software—just a little discipline.
All that you need to know about monads is that they are monoids in the category of endofunctors. See? Simple.
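More seriously, the general move is to wrap a value in a structure that carries its own sequencing rules. Here is a toy Maybe-style container in Python: nothing like Haskell's IO, and eliding the monad laws entirely, but enough to show the shape of the idea.

```python
class Maybe:
    """A value plus a rule for chaining: absence short-circuits the rest."""
    def __init__(self, value):
        self.value = value

    def bind(self, fn):
        return self if self.value is None else fn(self.value)

def half(x):
    """Halve even numbers; 'fail' on odd ones."""
    return Maybe(x // 2) if x % 2 == 0 else Maybe(None)

assert Maybe(8).bind(half).bind(half).value == 2
assert Maybe(7).bind(half).bind(half).value is None  # failure threads through
```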
Functional programming is one particularly fertile source of local symmetries at the level of the program itself, yet these symmetries are about as local as you can get. They sit at the most elementary level, the “bottom” of scope. Concepts and conceptual structures sit at the top. Now we turn to the middle: the process of software, both creating it and running it.
Again, why I conjecture computer people gravitate toward Christopher Alexander has to do with his insistence that the construction process must be incremental, informed by the building site and the partially completed structure on top of it. It must also be iterative, insofar as one carries out in-situ tests and experiments with different forms and substructures, to find the most appropriate one. As I said above, this sounds a heck of a lot like the Agile methodologies, and the scholarship that preceded them.
As I argued in the introduction, if you are interested in Agile methodologies as well as pattern languages, you ought to be really interested in Alexander's fundamental differentiating process, which marshals the process of creating centers using each of the fifteen properties as elementary operations. Since we're coming up on halfway through the list, we can already see how the properties, as they relate to software, are starting to cluster.
The reasons, again, why Alexander advocated for local symmetries over one global symmetry are that global symmetry is brittle to changes in its environment, and that you can't do global symmetry without obliterating at least some features of the building site. The “building site” of software is the context in which it is operated. If we're going to have local symmetries, then, that are smaller than grand concepts but bigger than elementary operations, a question worth asking is: local to what?
One fairly common failure mode of software is that its makers fail to accurately model its context, namely who is going to use it and what they are going to use it for. Often the software is viewed in terms of its technical feasibility and little besides. This is analogous to an architect (or perhaps structural engineer) plunking down a building without bothering to look at the site where the building will be situated. Thankfully, user research and design are finally on the software development radar, though they still struggle to be taken seriously.
You can tell because we still talk about software in terms of features, both in the making and in the marketing of it.
No matter the genre, software has a definite sense to it. Batch programs run until they terminate, while apps, daemons, and even operating systems run in an event loop. You can think of the loop like a wrapper around one or more individual batch programs. Websites are different still, because each state is effectively the output of a program that has terminated, used to drive a different piece of software (the browser) that the Web developer doesn't even control. In this sense, the simplest website is actually a complex software system.
Once upon a time, we wrote CGI programs which started up when you hit their associated URL, and exited once they had disgorged their output. Later on, these were put into their own event loops to get rid of that overhead, making them little daemons in their own right. A Web page is always the output of a program though, even if it's just a simple program that locates a file and chucks it down the pipe.
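The wrapper relationship is easy to caricature in a few lines of Python (a toy, obviously; real event loops multiplex I/O, timers, and much else):

```python
import queue

events = queue.Queue()
for e in ("open", "keypress", "quit"):
    events.put(e)

def handle(event):
    """Each handler is a little batch program that runs to completion."""
    print("handled", event)
    return event != "quit"  # returning False terminates the loop

while handle(events.get()):
    pass
```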
We can imagine the software's user—and the software itself—as always occupying some point in state space. Unless the current state is a terminus, subsequent states are reachable via a directed graph of state transitions (this is particularly pronounced on the Web). Here is a place we can think about local symmetries: just as there are two paths from the northwest corner of an intersection to the southeast, we consider symmetrical paths through the program's state space. A counterexample to this is the wizard, mentioned briefly in the last chapter, which forces people to step through a single prescribed sequence of states, when the order—or at least most of it—often doesn't actually matter.
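Here is a sketch of that intersection analogy, with hypothetical profile-setup steps standing in for state transitions:

```python
# Each transition is a pure function from state to state.
def set_name(state):
    return {**state, "name": "Ada"}

def set_email(state):
    return {**state, "email": "ada@example.com"}

start = {}
one_way   = set_email(set_name(start))  # name first, then email
other_way = set_name(set_email(start))  # email first, then name

assert one_way == other_way  # a wizard would prescribe only one of these
```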
Every software system has a palpable, macro-scale state space which is “smaller” than an overarching concept scheme, but “bigger” than individual operations in the code. A batch program has a very simple state space: running and finished✱. Something like a word processor or Photoshop has a sprawling state space, bounded only by the totality of what a person can do with the software, so we may have to consider a “state schema” (cribbing the concept of an axiom schema) to say that for the purpose of analysis, we don't need to know the content of the system state, just that it maintains its integrity. Finally, operating systems and large multi-user systems can have “islands” of state: disconnected processes that don't—or can't—interact with one another.
✱ This is rather callously skating over error states, which will always vastly outnumber the good ones, something that software developers tend to forget.
When considering state, one obvious symmetry is undo, which I covered in a previous chapter. When every function has a proper inverse, undo is implicit. Nevertheless, having only invertible functions isn't a very likely situation. It is much more plausible that the software will make destructive changes to its state. To implement undo, then, you need to record every meaningful change in the system's state. If you can't record just the changes, then you'll have to record the entire thing, which, despite several orders of magnitude of growth in computing hardware capacity, can still get really big really fast. There is also the matter of what is considered worthy of “undoability”: you may be able to undo changes in your work, but not changes you make to the software's settings. In this sense, undo is not a feature you add when you have the budget for it, but a strategic posture that you have to consider from the very beginning.
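A minimal sketch of the snapshot strategy, recording the whole state before each destructive change, since we can't count on inverses:

```python
import copy

class Document:
    def __init__(self):
        self.state = {"text": ""}
        self._history = []

    def apply(self, change):
        """Snapshot the state, then let the change clobber it."""
        self._history.append(copy.deepcopy(self.state))
        change(self.state)

    def undo(self):
        """Restore the most recent snapshot, if there is one."""
        if self._history:
            self.state = self._history.pop()

doc = Document()
doc.apply(lambda s: s.update(text="hello"))
doc.apply(lambda s: s.update(text="hello world"))
doc.undo()
assert doc.state["text"] == "hello"
```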
Discussing undo invariably brings me to operations that are structurally incapable of being undone, such as when the software sends information across administrative boundaries, or otherwise mediates between two or more people or organizations. This I have already covered, but I will add here that despite the software having a unidirectional sense, one symmetry to take into account is that every message sender has a recipient. That is, if you're modeling what it's like to send a message, you should also be modeling what it's like to get one. The general phenomenon of complementarity has already come up numerous times in this work (alternating repetition, positive space), such that it is on track to be the most important symmetry that software exhibits.
One final complementarity symmetry that I want to recall attention to—which I wrote about in the chapter on strong centers—is the fourfold differentiation of the work product into normative (spec) and descriptive (implementation) along one axis, and prose and code along the other. So if you're carrying out the fundamental differentiating process and you're looking for local symmetries, ask yourself if the center you're looking at has something—potentially missing—on the other side of it. Then, just like this most recent example, you can ask if the resulting pair has a complement. One becomes two becomes four, just like in the real living process of morphogenesis.