
Interrupted Thoughts: Systems Analysis at the Intersection of Policy, Privacy, and Culture.

December 18, 2025

Dumb Skynet

This email is a bit speculative. It’s an attempt to articulate a framework for understanding anxieties about AI and the challenges it poses for information literacy. I call it the “Dumb Skynet” thesis. It’s largely a synthesis of detailed research done by others. Still, I think the framing usefully cuts through a lot of confused discourse about AI harms and risks, while leaving room for action and hope in the recognition of a simple fact: everything we are afraid of is already happening, and it is bad, but we are also figuring out responses and ways to deal with it. I’m going to lay it out in two propositions.

1: Artificial General Intelligence and Superintelligence are moving targets with no clear definition beyond “smarter than human” and “recursive self-improvement.” The leap AI transhumanists make from these possibilities to p(doom) (a “probability of doom”) says more about hype than about real risk assessment. We already have “smarter than human” AIs; we have smarter-than-human calculators. The vagueness obscures how normal it is to have technology that is better than humans at particular tasks. That’s the whole point of any automation: we encounter it daily, and it is not scary. We should not be scared that a computer might be smarter than us at the tasks it has been built to assist us with. “Smarter than human” is not an adequate definition of AGI, and anyone using it should be dismissed as blowing smoke.

“Recursive self-improvement” is a little more complicated. The fear people attach to it is that AIs become capable of improving themselves beyond the human ability to comprehend, and then set goals for themselves that are incompatible with human survival or thriving. Because they are smarter than us, they outcompete us, and we lose the great human/machine war. This is the “Smart Skynet” scenario behind popular books like If Anyone Builds It, Everyone Dies.

There are a number of basic infrastructural and technical reasons why LLMs are exceedingly unlikely to achieve anything remotely like human cognition and learning capacity. For one, the current infrastructure for LLMs separates the energy- and compute-intensive training phase from the deployment phase. This video provides a summary I find compelling. The process, as I understand it, works like this: a foundational model is trained over a long period on one set of processors; that model goes through intensive refinement and safety testing; copies are made and deployed on a separate set of processors; and those copies are what people interact with when they boot up Claude or ChatGPT. The deployed instances are no longer training; they reset after every conversation. It is a little different for “open weight” models, but even though users can refine those for specific tasks, they are not doing foundational training.
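To make that split concrete, here is a minimal sketch in Python. The `FoundationModel` class, its method names, and the stand-in “training” arithmetic are my own illustrative assumptions, not any lab’s real code; the only point is that weights change in one phase and are frozen in the other.

```python
# A minimal sketch of the train/deploy split described above. The class,
# method names, and stand-in "training" are illustrative assumptions,
# not any lab's actual pipeline.

class FoundationModel:
    def __init__(self):
        self.weights = None  # set once, during the training phase

    def train(self, corpus):
        # Expensive phase: weights change here, on dedicated training hardware.
        self.weights = sum(len(doc) for doc in corpus)  # stand-in for gradient updates

    def generate(self, conversation):
        # Deployment phase: read-only. Nothing in this call mutates self.weights.
        return f"reply from frozen weights ({self.weights}) to {len(conversation)} messages"

model = FoundationModel()
model.train(["the training corpus, fixed before deployment"])  # happens once

# Every session starts from the same frozen weights and an empty history.
print(model.generate(["user: hello"]))
print(model.generate(["user: hi"]))  # knows nothing about the first session
```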

There are obvious safety reasons for this split: deployed models with training cut-offs are far easier to control than ones that are continuously training. But there are also financial reasons. It would cost far too much in energy and compute for deployed instances to continue foundational training: too much not just for the companies’ budgets (which are already burning money) but for the physical energy and processing infrastructure that exists in the world. There is no feasible way to scale this. Even assuming exponential growth in processor speed and efficiency, we aren’t getting there any time soon, and it is uncertain whether any investor would ever find it worth the obscene costs.

This matters because it means LLMs do not continuously learn in the real world. They can look things up for you, but that new information does not stay in the model after the conversation ends. In my experience, it often doesn’t even stay readily accessible to the model through the duration of a longer conversation thread. Think about that: a core capacity of human intelligence is its plasticity. Our ability to continuously learn new things in our encounters with reality and other people is what makes for our “superintelligence.” LLMs are static once deployed. The millions of conversations they hold through products like Claude teach them nothing. For models to advance, companies have to train them on separate architectures removed from the variability of real life. Without true, real-time, adaptive plasticity, there is simply no road to AGI. And this holds regardless of the important philosophical debates about whether unworlded neural nets of language patterns could ever be said to “think.”
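Here is a hypothetical sketch of where a chatbot’s “memory” actually lives, under the assumption (consistent with the description above) that the client re-sends the whole conversation each turn. The `call_model` function is a made-up stand-in, not any real API.

```python
# A hypothetical chat loop showing where a chatbot's "memory" actually
# lives: in the history the client re-sends each turn, never in the model.
# `call_model` is a stand-in, not any real API.

def call_model(messages: list[str]) -> str:
    # The model sees only what is inside `messages` for this one call.
    return f"answer conditioned on {len(messages)} prior messages"

history = []
for user_turn in ["What is in the news today?", "Summarize what you just said."]:
    history.append(f"user: {user_turn}")
    reply = call_model(history)  # the entire history is re-sent every turn
    history.append(f"assistant: {reply}")

del history  # conversation over: nothing "learned" here survives this line
```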

That is a technical, infrastructural obstacle, though. The smartest generalist response to these “superintelligent threats to humanity” scenarios is AI as Normal Technology. The authors rightly insist that AI developing incredible capabilities should not be confused with AIs having the power to impact the world. How power is distributed remains a human matter: social, political, and economic. We would always be choosing to “let AIs loose” to shape our reality, or (more likely) allowing AIs to facilitate the neofeudal accumulation of power by “The Magnificent Seven” and aligned governments. This is why I agree with the authors that “fears about AI are fears about the current direction of capitalism.”

As Cathy O’Neil detailed at length before the AI age, humans are the agents behind algorithms. We build them, grant them power, and treat them as objective to disguise bad policy-making. Similarly, the algorithmic optimization of Amazon warehouse and delivery workers is brutal and dehumanizing, but it is still a human system that gives the AI the capacity to terrorize workers with efficiency metrics enforced in the real world.

It’s often said that tech leaders talk about p(doom) to hype the transformative potential of the technology, their own importance and power as the people with these tools at their fingertips, and the seriousness of their stewardship of our future. This is certainly part of it and helps drive the unprecedented investments flowing towards AI capex, but it’s probably not even that thought out. If the machine is the thing that’s capable of evil, then we start to see the human companies at the reins as an ethical force of constraint on an amoral alien intelligence. Under capitalism (and Soviet communism, for that matter), humans have always blamed inhuman systems beyond themselves for their amoral decision-making. To return to Cathy O’Neil, this is the “weapon of math destruction”: some algorithm for measuring teacher performance or setting sentencing guidelines is implemented, and then its results become unquestionable. “We really don’t want to fire you, but according to the algorithm, you didn’t meet the metrics.” It’s an evasion of responsibility and ethics, not an honest portrait of the power of the machine. It seems to me the doomers are engaged in the same shell game, positioning themselves as saviors while they build LLMs that would make them money by causing mass unemployment, societal disruption, and suffering.

Finally, a lot of how these fears are passed on to the public is in intentionally vague terms. No one knows how to actually scale to AGI or what it is. And real risk assessments and research (alignment problems, red teaming, AI psychosis) get conflated with existential risk. But the problems that a terrorist might get online and find instructions for building a chemical weapon, or that a teenager might be convinced to commit suicide by something on the internet, are not new; these are (grounded) anxieties as old as the internet. In one sense, AI safety research is the first time we’ve ever seen tech companies actually work to understand the safety and risks of their products and publicize the results. What research on risks and harms did any tech company do on engagement-optimized algorithms, infinite scrolls, AdWords, or commercial surveillance? We know now, for instance, that Meta ran internal research finding that social media directly harmed users’ mental health, and then suppressed the findings. Tech has been a wildly irresponsible industry and did much social harm with little accountability or concern long before ChatGPT.

But in my opinion, current AI safety research does not represent any real lessons learned. Critical observers generally think even Anthropic’s much-touted safety research has failed to address the core problems. It has also been observed that the safety research paradigms employed by in-house researchers are misguided. The motivation behind the safety research we are seeing is probably quite shallow. Social media platforms evaded responsibility for similar harms because the infamous Section 230 let them escape liability for content hosted on their platforms. Even when feed algorithms chose and promoted harmful content into everyone’s feeds, the platforms had this get-out-of-jail-free card, though they were always doing more than passively hosting content: their algorithms exercise editorial control over what users see. But chatbots don’t host content; they have no plausible Section 230 defense. Even though they are built from scraping the whole internet for inputs, they produce legally original outputs. And if an AI dredges up instructions for chemical weapons from its sea of data, AI companies likely bear much more legal culpability. This is playing out in lawsuits against OpenAI now, and their defenses seem rather pathetic to me.

The distance between these (serious) risks and Smart Skynet/p(doom) is immense. I think we should reject AI leaders’ changing the subject to speculative existential risks and instead hold AI companies responsible for what their AIs are already doing, and for their failure to solve the known and named problems with alignment and security vulnerabilities before deploying AI models to the public. Here the focus stays on human actors, like Sam Altman and his absurd irresponsibility in dropping ChatGPT all of a sudden, without meaningful test trials on sample populations, legal guidelines, or agreements between competitors to slow deployments to a safe rate. It does not get distracted by murder robots from science fiction.

2: Unfortunately, we don’t need superintelligence to meaningfully direct mass human activity into something destructive. Humans like to think we are the universe’s geniuses, and that the great showdown between man and machine will pit the best that human intelligence and ingenuity have to offer against a god-like machine. It’s a familiar enough scenario from our religious texts, and science-fiction authors have long repurposed Moses on the Mount, Job before the whirlwind, and Christ on the cross facing down God and the Devil for story ideas. If anything, Smart Skynet and p(doom) are theological constructions, not scientific ones. But this is indicative of our great vulnerability. We are pattern-matchers, just like the AIs we build, but perhaps less clear-eyed about how much we use patterns to think.

I’m a thorough humanist. I’m not saying the machines are improvements on us; they lack too much. I’m saying humanism isn’t only about believing in the romance of human potential overcoming obstacles and the comedy of life; it is also about the tragedy of our flaws and the irony of our venality and stupidity. And the last decade of big tech platform consolidation leading up to AI has been a feast of venality and stupidity, with tragic consequences for us all.

If you’ve been reading my posts, it goes without saying that I have an extremely negative view of the business practices and products of big tech over the last decade. I’m hardly alone. Dip into recent non-fiction about contemporary tech and you will learn all about “enshittification,” data surveillance, the cultivation of addiction in teens, the mental health and attention crisis occasioned by phones and social media, the resulting economic and political deformations, and the recklessness and greed of Zuckerberg, Bezos, Altman, Cook, Musk, Karp, and Thiel. Slowly but surely, we are seeing the crystallization of a comprehensive populist antitrust and redistributive political movement to break up big tech, put wealth back into wages, and liberate the world’s information and creativity from platform enclosure. Unfortunately, much of this energy still gets misdirected, by the very platforms causing the problems, into pseudo-populist scams led by Trump, MAGA celebrities, and people like RFK Jr.

The core technologies that monetize platform capture and drive so much of the destructive and stupid mass behavior we are seeing are already machine learning algorithms. But they are not the “smart” LLM neural nets; they are fundamentally “dumb.” By that I mean they are narrowly focused recursive loops optimizing user engagement through an infinite capacity for large-sample A/B testing and data surveillance. These are feed algorithms, which decide what information gets promoted based on inputs about users and what keeps their attention engaged and monetizable in ad sales and data products (and, perhaps even more diabolically, what information gets buried). They are learning and optimizing for one thing: engagement. We could hardly call this intelligent. It’s utterly mindless in its narrow focus (except for the human minds that designed the systems). This is Dumb Skynet, and it has proven destructive enough. We don’t need to fantasize about something worse in the future; we need to deal with the monster that’s already here.
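For a concrete toy version of such a loop, here is a sketch of an epsilon-greedy bandit, one standard way to run continuous A/B tests. The posts and click probabilities are invented, and real feed rankers are vastly more elaborate, but the optimization target is the same single number.

```python
import random

# A minimal sketch of the "dumb" engagement loop described above: an
# epsilon-greedy bandit that learns which post keeps people clicking.
# The posts and click probabilities are invented for illustration.

posts = {"calm explainer": 0.02, "cute animal": 0.06, "outrage clip": 0.11}
shows = {p: 0 for p in posts}
clicks = {p: 0 for p in posts}

def pick_post(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:  # explore: occasionally try anything
        return random.choice(list(posts))
    # exploit: otherwise show whatever has the best click rate so far
    return max(posts, key=lambda p: clicks[p] / shows[p] if shows[p] else 0.0)

for _ in range(100_000):  # enormous sample sizes are the whole trick
    p = pick_post()
    shows[p] += 1
    if random.random() < posts[p]:  # the user clicks or scrolls past
        clicks[p] += 1

# The loop converges on the outrage clip without any concept of "outrage";
# it only ever measured one number: engagement.
print({p: round(clicks[p] / shows[p], 3) for p in posts})
```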

A major objection someone might raise at this point would be something like this: Okay, fine, engagement-driven algorithms aren’t great, but it’s still humans making the content. This isn’t Skynet; it’s the same race-to-the-bottom impulses we’ve seen throughout the history of media, from yellow journalism to Jerry Springer.

There’s truth here: we are very much in the world of human decision-making, which implemented this technology in a way that exploits humanity at its worst. It could have been different. It wasn’t an evil machine that created our problems.

But that agency is significantly obscured, and the scale at which so many human agents, up to and including the executive branch of the U.S. federal government, are driven to act according to algorithmic incentives is unprecedented. Tech platforms chose to cultivate addiction and engagement, but the outcomes of that choice were driven by a dumb machine learning how to appeal to humans at our most vain, desperate, fearful, and paranoid.

So let me sketch a rough, general portrait of the system in place, one in which scale makes for significant qualitative shifts.

Humans make content and post it on platforms. But so many humans are making content: millions and millions. Compared to the Chomskyite model of “manufacturing consent” and media filtering, this is an opening of the floodgates. Instead of a select few getting through the educational and institutional hoops that grant a media platform to those who demonstrate skill at cultivating attention while staying within acceptable boundaries of discourse, anyone can say or produce basically anything. But that is not actually a situation of free information, and we are long past the naive belief, which many of us held around 2010-2016, in the liberating potential of social networks to give birth to counter-hegemonic political movements.

MAGA (a disease of algorithmic systems) is not counter-hegemonic; it is, rather, members of an insider elite group, people who had already passed through the doors of elite institutions and wealth, waging war against members of their own class. It adopts counter-hegemonic rhetoric to mobilize populist discontent toward overturning the remaining legal limitations on exploitation and corruption. It’s not all that different from the Reagan/shareholder “revolution,” although it’s substantially more cynical and venal. Although it buddies up with fascists and enjoys authoritarianism and racism, its relationship with fascism is the gangster’s, not the ideological true believer’s. Real fascists wouldn’t loot state capacity for personal wealth; they would build up the state and use its productive capacity to pay off gangsters to enforce discipline through violence.

Asking “How did Trump and his cronies pull off this scam?” gets to the heart of Dumb Skynet. Trump and people like him have always existed in the ranks of the elite. There were always those who made their way, by hook or by crook, into the highest ranks: people who exploited racism, power, casinos, and the lowest-common-denominator capture of human attention to rise from the ranks of mere millionaires into truly elite wealth and power. But they were rightly marginalized, both inside and outside elite circles, as parasites (look at Epstein). If anything, the public often had a clearer sense that they were barnacles than the elite did; the elite still kept them around because they did have the money, after all, and their political donations, investments, and media outlets made them useful. You do have occasional instances of such crooks winning some populist notoriety (John Ganz has a great analysis of John Gotti along these lines), but it rarely added up to viable majorities. They inspired more loathing than love, something that is probably still true of Trump despite his electoral victories.

Trump is a master of getting attention. They’ve always said “no press is bad press,” but that tended to apply to businesses, not politicians. Peter Thiel and Alex Karp cultivate media coverage of themselves as evil, ruthless masterminds of the kill-chain security state because, while most of us are horrified, Palantir’s government clients want exactly that. It’s negative attention, but it attracts business.

Trump works on instinct, not forethought, but when “what gets the most attention” is equivalent to “what goes viral,” he is essentially training himself on algorithms that have trained themselves on humans at their worst, en masse. His political performances are “engagement optimized” just like social media feed algorithms. As South Park has, surprisingly astutely, observed, most of his Cabinet act like content creators. Policy gets driven by attention metrics. This is Dumb Skynet.

Dumb algorithms with narrow optimizations drive the vast majority of the news and information most people receive. I don’t think tech intended to have this power or to shape society this way; they just wanted eyeballs and clicks to maximize ad sales and data surveillance. But it has created a massive parasitic structure: someone says something outrageous, people condemn it, and it gets more and more attention. Now the person is famous, and maybe some people like one part of the message, or (more likely) like who that person is pissing off. Now they have constituencies and fans. Look at how much nominally liberal content is just clips of some conservative somewhere saying something racist, captioned “Isn’t this racist?” All of that is parasitic on the MAGA attention machine and fuels it. Just like 90% of conservative content is “Ben Shapiro OWNS woke college student.” There are no longer ideas, ideologies, or goals in political discourse, just attention. And everyone has been trained by the algorithm in how to get it and turn it into money. If this is what is setting the agenda of your government, you are being ruled by Dumb Skynet.

These harms infiltrate so many areas of life: local governance, book-banning campaigns, transphobia, excessive censoriousness from some on the left, wellness culture, and fintech scamming. We have been deformed by it from top to bottom, and if we ever want to act collectively again in ways that aren’t just short-term attention-maxing and reacting, we have to build systems with different incentives. I don’t know the first step, but this is why I think that if you can remove yourself from algorithmically generated infinite-scroll platforms, you should. I am caught in the web of addiction and dependency myself, so I make no judgments (and this post will go on the LinkedIn profile I maintain for work), but nothing good is ever going to happen for the world on the platforms we have if they remain engagement optimized. This is the real Skynet harm we must confront, and the tech leaders warning about future superintelligence are the ones culpable for it.
