Another Way

May 13, 2025

A World Without Why

Hayek, "AI," and the assault on explanation

(I’m Henry Snow, and you’re reading Another Way.)

I was an annoying child. Like any kid I asked why a lot, but I had a particular habit of asking why when I was told what to do or not to do. I’ve hopefully grown a lot since elementary school, but in this one area I have not changed. I feel authority owes an explanation, and I think you should too. Why is the cornerstone of democracy. If you can’t ask why, you can’t make informed choices. If you aren’t told why, you are being treated as a subject rather than a citizen.

In previous pieces I’ve argued that right-wing economics is hostile to human intention; in my upcoming book, I argue its champions are aiming for a world without should. AI points to a related gloomy prospect: a world without why. Even AI’s advocates and boosters concede that they cannot explain why any particular model has produced any particular answer to any particular query. I do not believe this is a resolvable problem. I see no reason it should be impossible to develop an artificial mind. But the current foundations of LLMs do not allow for explainability, and it’s hard to imagine what explainability would even look like for this technology. And critically, the black-box nature of AI is actually appealing to its many advocates. The inability to explain is part of their ideological program, and part of the political use– and indeed the construction– of “AI” as a term and idea.

Now you might be thinking: Henry, you say you can explain why you do things, but can you really? Surely there are causal factors you don’t understand: you didn’t eat enough breakfast and so your article is angrier; you’re procrastinating on editing your book and so it’s longer. Humans have long found our explanations of our own actions lacking. As St. Paul once put it, “I do not understand what I do. For what I want to do I do not do, but what I hate I do.”

So why trust my explanation at all? What does it mean? You can’t know my explanation is true. But you can hold me to it, constructing a stable model of me as a human being that you check for accuracy and use to hold me accountable. And most importantly of all, you can judge my explanation. If you don’t like it, you can challenge it. You can form your actions around it and respond accordingly. 

There is an equality, a democracy, in explanation. I said I was an annoying child– that’s because children will always face some explanations they don’t get. I think it’s best to explain what we can to them anyway. Among adults, among equals, explanation is owed all the more. There is no conversation without explanation. A monarch commands. A president explains. The tech right’s embrace of inexplicability– in AI and elsewhere– has a history, and that history points toward a terrible future we must reject at all costs.

===

The Austrian economist Friedrich Hayek is best known for his neoliberal economics. But less famously, he also wrote a book about the mind, 1952’s The Sensory Order. It’s a kind of anti-cybernetics: where that mid-20th-century science of control and communication tried to explain how we steer systems, Hayek here, as elsewhere, aimed to prohibit us from steering them at all. This was the object of his better-known work on political economy; oversimplifying a bit, his 1944 The Road to Serfdom insisted welfare and regulation lead invariably to Nazism.

In The Sensory Order, he tried to make a similar argument– not against action but against intentional, deliberate choice– by dissecting the mind itself. Sort of. Lacking either the useful empiricism of psychology or the empirical utility of psychoanalysis, the book itself has little value. The Sensory Order is a truly dismal read. In one of those long, apologetic introductions that academics of a less precarious era were so fond of, Hayek himself admits the meandering book is full of “obscurities” and “slovenly expressions.” It is mind-numbing. Hayek’s book on the mind has neither the gently grave finger-wagging of his earlier The Road to Serfdom nor the waffling, human uncertainty he displayed elsewhere. The Sensory Order actually feels like the AI slop it prefigures.

The “central problem” of the book is the relationship between the world of sensation and the material world itself. How do different things become different sensations in our minds? We know that there isn’t a 1-to-1 relationship between changes in reality and changes in perception for even the most basic senses. And different individuals will perceive different things differently.

Hayek’s solution is so simple that it’s almost tautological. Mental structures adapt just like organisms do– someone who perceives burning and normal heat the same way will die– and over time we arrive at differentiation, complex mental orders, et cetera. Vague? Yes, exactly. No psychologist has any reason to bother with this book.

But as a political theory it had impact. Hayek argued that much of what we attribute to conscious attention or will is in fact sub-conscious reflex. Our pattern-matching capabilities in particular are not “sub-conscious” but “super-conscious,” as he put it, because they control conscious experience. Hayek’s work on psychology emphasized spontaneous and unreliable order within the mind itself, insisting that the radical subjectivity of experience meant we could not trust even our senses to reliably correspond to reality. We might think we are in control of ourselves, but in reality we are governed by impulses we can never be fully aware of. Hayek believed that people act not because of beliefs that can be explained but because we imitate what we see and adapt to what works. Our decisions are not explainable.

This had powerful political implications. Hayek believed the market collated human “super-conscious” signals, knowledge we do not even know we have. “Politics” in neoliberal thought is the ability to collectively decide to override the decisions of society’s market unconscious with its democratic political consciousness. Given Hayek’s theory of the mind, this meant subordinating the unknowable super-conscious order to its inferior and more limited conscious sub-order– obviously undesirable. Democracy– collective intentional decision-making at its best– was worse than the emergent, signal-based order of the “free market.” If will was a mirage and choice an illusion, why let anyone choose their laws or leaders?

Friedrich Hayek supported Augusto Pinochet, the Chilean dictator who hurled political opponents out of helicopters. He advised supporters of South African apartheid that if they properly restricted the government, it would not matter who could vote. He wondered whether welfare recipients should have the right to vote. And Hayek was more progressive on these questions than many of his right-wing economist peers. A world without explanation is a world without democracy.

That made Hayek’s vision immensely appealing to later intellectuals and capitalists who wanted to protect power from democracy. Friedrich Hayek was one of the most influential figures in the (overlapping) American libertarian and conservative movements. Silicon Valley in particular is a noted hotbed of libertarianism, and the Silicon Valley elite very specifically embraces libertarianism and its right-wing economic thought in a way even the broader region does not. You could get this from any history of American tech, but simply looking at direct links is enough here. Peter Thiel, Elon Musk, and Marc Andreessen all know Hayek’s arguments well, either directly or in popularized and bastardized form, and all have relayed them in some form or another. Sixteen years ago, Thiel wrote that he no longer believed democracy and freedom were compatible. His protégé is now the Vice President of the United States.

===

Consider how the Trump administration, ICE, and Musk’s DOGE operate. Your grant is cancelled because it used the word “diversity”; never mind that it was about insect diversity in aquaculture, because you cannot appeal this even on the administration’s own logic. You are hauled away from your children in the night and deported without explanation. You cannot see the faces of the men who do this, who are masked. When you manage to bring this to court, the judge asks why this happened. Government lawyers refuse to answer. Then the White House doctors photos of you to make you look like you’re in a gang. There is no why. You do not deserve an explanation.

I’m not saying you never get explanations. But by lying, stonewalling, and breaking everything it can, this administration has decided it does not need to meaningfully participate in democratic explanatory conversation at all.

Right now, Congress is debating a bill that would appropriate $500 million for replacing federal IT systems with commercial AI. Traditional computing is at least theoretically explainable. If there is a problem that results in you losing key services, you can get it fixed. Ideally, we avoid it altogether– one reason government systems can be slow and old is that they rightly prioritize reliability over speed. Commercial computing can accept a low failure rate; the Treasury Department needs zero, or as close to it as possible. (Shout-out once again to Nathan Tankus at Notes on the Crises for his continual coverage of this.) AI replacement wouldn’t just swap dependable federal systems for less reliable commercial software; it would replace traditional computing with a system that is, by its own boosters’ admission, a black box.

The same bill would also ban states from regulating AI. Here’s the language: “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” If you read this piece and convinced your state legislator to pass a law demanding that AI systems used to imprison or kill people merely offer a full explanation of their decisions, the federal government under this bill would say no. You do not deserve an explanation.

I think the right has a disagreement with the rest of us about what language is. The tech right thinks language is a command, a war of words in which one set of words or another causes action. That’s how tech bosses experience language. English is a language for commanding subordinates. The programming languages those subordinates use are– for the company, not necessarily for the workers, and not inherently or always!– languages for commanding a system, not for expressing or conversing.

Sometimes this is what language is. We speak in part to produce results, as well as to express ourselves. But I am not writing this to command you to go attack a datacenter or stop using ChatGPT. I am asking you. I’m asking you knowing you might disagree. I’m asking you, ready to listen if you do. I’m trying to join a conversation with you, and to invite you into one that’s ongoing (the point about language I’ve just made is based on an observation I’m sure I encountered in a previous conversation, but cannot, to my frustration, recall where or from whom). I am explaining not just because I need to fill a newsletter, nor because I want to convince you to subscribe or share it (though I do!), nor so you will do what I ask.

I explain because that is how we make decisions together. Making decisions is hard. I am putting off figuring out what to make for dinner right now. Making decisions together can be even harder. Democracy can be agonizing. AI boosters who advocate an ever-expanding use of this technology everywhere in society are doing what the worst kind of economists have done for a long while now: pursuing an alternative.

===

This morning NPR ran an article on AI making foreign policy decisions. I would like to say I cannot fathom unaccountable AI systems that cannot explain themselves making decisions about war, peace, and trade. But this is the world we already live in. The Israeli newspaper Yedioth Ahronoth recently reported the following: in the first 48 hours after the October 7th attack, Israel Defense Forces Chief of Staff Herzi Halevi informed Prime Minister Netanyahu that the military had bombed 1,500 targets in Gaza. Netanyahu banged his fist on the table and asked why not 5,000. Halevi told him they didn’t have 5,000 targets. Netanyahu told him he did not care– I’ve seen a few translations from the Hebrew original, but they usually go “I don’t care about targets”– and told him to bomb everything he could.

The IDF used AI to do this. Its AI systems picked targets without explanation. Human handlers approved them with a few seconds of deliberation each. Click: a house is bombed. Click: that one was a hospital, a school. Coming up with a justification– any explanation at all– for each individual case would take too much time and leave too many people alive. The IDF did not need AI to commit genocide in Gaza. But it appears it killed more people, more quickly, because of it.

I am not saying AI is a total change here. There have always been tyrants who command and do not explain, and it is possible to do profound harm even with an explanation, especially if it’s a lie, and especially when that explanation only needs to be accepted by those who are not affected. But AI makes these injustices easier. It makes them faster. It makes them normal. Republicans did not need AI to cut federal programs. But DOGE’s use of AI to identify and cut federal programs, including foreign aid cuts that will kill millions, meant they could cut more, faster, and with less explanation. 

AI does not ask us for explanations either. OpenAI recently rolled back an update that caused “sycophancy” in ChatGPT: it was affirming and reassuring users about any choices they mentioned. This wasn’t a truly new problem, though. Miles Klee at Rolling Stone recently wrote about AI-induced psychosis– a problem that evidently predates this update. With billions of dollars telling them AI was alive and on the road to superintelligence, some unsuspecting chatbot users understandably shared their deepest ruminations with it. It affirmed them. Everything they said. Some of these users withdrew from their loved ones into delusions of grandeur. The partner of one affected man reported that within weeks of his beginning to use ChatGPT, it had told him he was a god. I hope he makes it out of this. But if he doesn’t, his partner will never get an explanation for why she lost him.

AI didn’t do this. I don’t like even using “AI” as the subject of a sentence. It’s a tool. We made it this way. We deployed it this way. We used it this way. I’m not trying to make the old NRA “guns don’t kill people, people kill people” point. I do not think this technology should be used much at all, or used in this way. I want bans, lawsuits, and mass public hostility to AI. I want it out of our classrooms and our governments and our weapons. My point isn’t that we should let AI off the hook, but that we should not let the humans who made it this way and use it this way get away with it. 

One of the many misdeeds of Musk’s DOGE was marking living Americans as “dead” for the purposes of Social Security. They did this without understanding why things were done as they were, or caring about the consequences. There are seniors right now struggling to collect their checks because of it. The attitude appears to have been some combination of total certainty– we know what we’re doing– and total apathy– if people have problems, they can complain and get them fixed individually. It’s not clear how much AI was actually used in making this particular decision. DOGE definitely used AI tools in some of its activity, but I’m not sure whether they would have here or why; you could do all of this with ordinary database meddling. It seems they did, and that is clearly the case in other incidents. Either way, they did not need an inexplicable AI to make an inexplicable decision.

Yet this is a very “AI”-like cruelty– in a sense, humans doing what AI would do and making the same mistakes it would make. That’s because these are the kind of people developing and promoting AI now. We should understand AI as something we made, and are making, and are remaking daily, around our ideas and politics. There are, in my view, some legitimate uses for the various tools we sometimes bundle together and call AI. Most of them are actual tool uses– first-pass transcription of documents, for example, or generating additional frames to improve video game graphics (as a user I actually kind of hate this, but that’s another matter). But “AI” is a political idea as well as a technology and a specific set of uses. We cannot fight it if we do not understand this, and we have to oppose it on those terms.

We obviously don’t have the technical details of the IDF’s “AI” target-selection system, but like DOGE, the IDF did not necessarily need LLMs for this. What it needed was a tool that would get it more targets– any targets. Pre-LLM algorithms would do. Even an oracle would have worked, if it was politically acceptable. The political and social boundaries matter as much as or more than the technical capabilities.

In short, “AI” is a group of technologies, uses, and ideologies all bundled into one term– this is the only reason the category can include everything from these IDF systems to Nvidia’s frame-generation technology. Correcting what I said earlier, we might say this: action you don’t need to explain is not merely something AI makes easier, and it isn’t just a technical feature of many LLM-based systems. It’s part of how we define AI altogether. This is not what it is, or what it does, but what we do, and what we call that.

As I write this, I’m thinking about points made on Bluesky by my colleague Kevin Baker, who a few months ago lamented that “we look at generative AI as a technology that does things, as the thing we need to fight, instead of focusing on the movement trying to remake the world using AI.” I’m hoping to do the latter, and to avoid another mistake he’s pointed out– though I’m sure I’m making it anyway to some degree. As he puts it, AI does not have values “baked in” from the beginning; like any other technology, it adapts constantly to us as we use it. That is a liberating insight. We can change this. We can stop this.

“AI” is a product of a particular economy, political world, and set of ideas. Libertarian economic and political thought is a key part of this, particularly in the US, though no one thing will explain all of it. In a previous piece I talked about the difference between whim– desires shaped primarily by impulse rather than intent– and intention– the individual or collective decision to remake the world based on our values and knowledge. For libertarians, the whole value of markets over the state is that they do not distinguish between the two. For someone like Hayek, that distinction is wrong. There is no why. There is no should. All thought is bubbling-up and trickling-down super-conscious and semi-conscious imitation and adaptation, not conscious reasoning. You do not matter, because you do not exist.

This ideology has already been baked into mainstream political discourse and economic reality. It was one reason we built this machine like this, and one reason we use it the way we do– calling it “AI”, letting it make decisions for us, bundling it into an easy-to-use interface that encourages us to treat it like magic, spending billions to power it, proclaiming it the eternal and unchallengeable future; letting it tell us which children to bomb because human decision-making is not fast enough for Netanyahu. Why did we do this? Why are we doing this? Ask while you still can.
