M.A.D.
I’ve been writing a lot about the future of A.I. lately. Largely that it doesn’t have one. But I recently came across a video that made me think the whole debacle may be a lot worse - and a lot stupider - than even I had feared.
I’m going to try to stop banging on about this subject pretty soon, although that’s no guarantee that I’ll succeed, but if you want a little more of my pontificating about all the ways that artificial intelligence looks to be harmful, then strap in. This is going to be “Complaining About A.I.: Doomsday Edition.”
No Such Thing As Bad Apocalypse Publicity.
Plenty of people are worried that A.I. is going to bring about the end of the world. Or at least, they claim to be. For the most part, I suspect anyone who brings this up is one of three main types:
1) Paranoiacs.
People who genuinely think that ChatGPT is the forerunner of Skynet from the Terminator movies. It isn’t, we don’t need to worry about that, it’s still telling people to eat rocks and put glue on pizza. Anyone who earnestly thinks that chatbots are going to seize power and exterminate us all is certainly misinformed and probably a little crazy.
2) Fantasists with boring lives.
These are the same people who started declaring World War 3 when Russia invaded Ukraine. To be clear, it’s not GOOD that Russia invaded Ukraine, but there’s a very, very large gap between “conflict” and “nuclear armageddon.” People who are quick to declare the end of the world, usually in excited tones, are often harbouring secret fantasies of surviving and becoming Mad Max at some unspecified future point. They really just need hobbies or something else to distract them.
3) People who work in Silicon Valley.
You’d think this third group would be cause for concern, given that they’re the people who work most intimately with Artificial Intelligence. When Sam Altman, a man who resembles a shop mannequin remembering that it left the gas on but who is actually the CEO of OpenAI, says that we should be worried that A.I. will take over the world, this might logically be seen as a bad sign.
Except that it’s not. It’s just advertising.
There’s really not much difference between Altman and friends declaring that A.I. could become sentient and wipe out humanity, and packaging in the 90s that declared snack foods to be “X-TREME.” Often, X-TREME to the max.
The resulting products were usually pretty bland. The packet just had to promise that new super-hot, mega-spicy Doritos were going to literally melt your face off because they were so hardcore, and then some people would buy them. And be disappointed. Claiming that ChatGPT is so smart that it’s going to take over the world and enslave your family is really the same type of marketing, in that it’s over-hyped bullshit. If Sam Altman really gave a shit about reining in the supposedly godlike power of A.I., then he wouldn’t have lobbied against proposed restraints on A.I. development every single time they’ve come up.
Altman is like Sarah Connor, targeted by the machine overlords of the future and narrowly escaping assassination at the hands of the Terminator, spending the rest of her life writing letters to her Congressman about how there should be more funding for robotics companies.
Altman is a bullshit artist, and also potentially a puppet carved by an old Italian man in a fairytale, and if he were really worried that A.I. could be a world-ending threat then he’d act differently. But I suspect he knows the truth - A.I. is about as good as it’s going to get for the foreseeable future, which is to say: not great. Altman and friends need people to believe that A.I. is about to become dangerously, superhumanly smart because they need the money to keep coming in. They need us to believe that it can either cure cancer and make us into an interstellar civilisation and fix global warming, or else that it’s going to be so smart that it’s going to crush us under its digital boot and rule over humanity, because either of those things is preferable to the truth that it’s not actually “smart” in any meaningful way at all. Because if people realised that, the bubble might burst.
Yeahhhh, about that bubble…
The Iron Bubble.
The whole A.I. con should, by any sane metric, have fallen apart by now. People hate it. People don’t want A.I., and don’t trust it. It’s driving up electricity bills for anyone who lives near one of the data centres these companies rely on, because the extra demand on the local grid gets passed straight on to household bills. Households, in other words, are quietly subsidising Big Tech’s power consumption. Of course, giving public money to people who need help is good if those “people” are wealthy corporations, and despicable socialist heresy when it’s anyone else. Remember: Socialism is only for the rich. Elsewhere, A.I. is disastrous for the environment and harmful to the brains and wallets of everyday people.
Almost none of the theorised benefits of A.I. seem to have materialised, although a lot of the theorised job cuts have become crushingly real. Google is worse, art is worse, online discourse is worse, and the staunch defenders of A.I. remain a small minority.
So why hasn’t the whole industry collapsed?
People like me (and I should know - I’m one of them) have been decrying A.I. as a useless, harmful dead end for a while. If we’re right, then why does it persist?
Well, it turns out that tech CEOs aren’t entirely stupid. Or maybe it turns out that people in government ARE entirely stupid. Either way, the result of one or both factors is that big tech has been allowed to embed itself in various national governments while promising them the world, and is now being supported in part by fat defense contracts. Aside from the fact that A.I. development is now literally the only thing keeping the American economy (and by extension the global economy) moving forwards, it’s also become a part of national defense strategy for multiple nations. OpenAI had previously promised not to allow their tech to be used for military or weapons purposes, but then decided that actually, no, the opposite of that and fuck you.
In America, Big Tech has become so symbiotic with the military that the Chief Technology Officers of Palantir (a surveillance company already used by the U.S. government, founded by PayPal lunatic Peter Thiel) and Meta, along with an OpenAI executive, were sworn in as military reservists in a fancy ceremony this year.
There’s an old saying on the Left that “there’s always money for war,” but it’s true. This means that as long as A.I. firms can continue to convince governments that there are important defence applications for their product, they will always have funding. This has some very bad implications.
Korea(i).
In the 1950s, American pilots who were shot down over Korea and captured by the Chinese began to cause a problem for Washington.
Specifically, they began to confess to war crimes, admitting that they had bombed civilian targets and slaughtered non-combatants.
This isn’t normally a problem for the American military. If their troops kill innocent people or break the Geneva Conventions, the official reactions typically range from “no we didn’t” all the way down to “so?”, and the wider world usually nods along and admits that these are excellent points.
In the case of the pilots in Korea, however, something was amiss. The pilots had not, in fact, committed any of the crimes they were confessing to. U.S. commanders knew exactly what missions the pilots had flown, and the data didn’t add up. These men were confessing to crimes that they provably couldn’t have committed.
The lesson was clear: Dastardly Communists had invented a mind control drug!
They hadn’t, obviously. What the Chinese had done was starve these men, deprive them of sleep and beat them. This is a lesson as old as humanity; if you don’t let someone sleep, or eat, and hit them with a stick, after a week or so they’ll say whatever you want. There’s no need for anything fancier than that.
The Americans, however, put two and two together and decided the answer was about seven million. It wasn’t that sleep deprivation and regular beatings cause people to crack. Rather, the Communists, either the Chinese kind or their brothers in arms in the Soviet Union, had created secret mind control drugs, and if the Communists had that kind of technology available, then by god, Uncle Sam needed to start developing it, too! They needed help with their own mind-control serum, and that was EXACTLY the kind of thing they’d been keeping a castle full of escaped Nazis for!
…Okay, yeah, that also needs explaining.
“We Did Nazi Anything Suspicious…”
At the close of World War Two, with the full horrors of the Third Reich laid bare, the victors encountered an awkward problem.
Initially, the plan had been to jail or execute every single Nazi and then re-start Germany with a government composed of people who hadn’t been involved with Hitler’s regime. Pretty quickly, however, it became obvious that anyone with any experience of running a country, or any experience in a position of authority, was already a Nazi.
There are whole books dedicated to how culpable the German people were for what happened in their country in the 1930s, but at the very least, if you wanted a decent job in Nazi Germany then you had to become a member of the party.

As such, there was literally nobody left to run Germany who wasn’t in some way tied to the Nazi regime.
This led the Allies to adopt a policy of coughing awkwardly and looking at their shoes while numerous former Nazis were put into government to begin the rebuilding of Germany.
“Okay, sure,” a useful straw man might say, “that seems like there weren’t a lot of good options on the table, so it’s not ideal, but it’s not totally morally bankrupt…”
The total moral bankruptcy came when the Allied victors realised that while the Nazis had committed crimes against humanity on an industrial scale and carried out innumerable bizarre and horrifying experiments, some of the stuff they’d learned was potentially useful.
The poster child for this was Wernher von Braun, who had developed the V-2 rockets that fell on London in the second half of the war. The Americans liked the sound of “rockets”, and so von Braun started pronouncing it “von Brown” and went to live in America, working for NASA on their rockets and claiming that he was only ever a humble, everyday, run-of-the-mill… er… rocket scientist, and whatever his giant exploding steel tubes might have once been used for wasn’t anything to do with him.
It wasn’t just rockets, however. German scientists of all disciplines were scooped up, given a swift talking to, and then put on the payroll of the American government in case they knew anything useful. It was referred to as Operation Paperclip.

The Russians were doing the exact same thing in the East, by the way, and just to make sure nobody comes out of this looking good, Winston Churchill was busy mulling over Operation Unthinkable, which would have involved immediately re-arming the Nazis and using them to march on Moscow before Stalin got any ideas. Sure, the Nazis were bad, and all, but The Enemy (whoever that was from whichever perspective) was forgiving them and using their secrets, so the best plan was for your side to start doing that, too!
The upshot of all this was that when the Americans became convinced that the Communists had invented mind control drugs, they literally had a castle full of escaped Nazi scientists they could call on. When asked, the Nazi scientists said that they’d looked into the idea of mind control drugs and truth serums, and never really cracked it, but that they’d discovered some potentially promising avenues around an obscure chemical called lysergic acid diethylamide, or L.S.D.
This is how the C.I.A. got involved with L.S.D., which at the time was legal and unregulated. They never found a way to control minds with it, but they did do a lot of unethical experiments on people who had no idea what was happening to them, first in Project Artichoke and later in Project MKUltra. We know some of what MKUltra got up to because, when the whole thing was in danger of being discovered, the U.S. Government destroyed the files but accidentally left several thousand documents in a warehouse.*
We know that Ted Kaczynski, the Unabomber, was probably a test subject for MKUltra while he was in college. There are interesting, semi-proven ties between Charles Manson and MKUltra, although there isn’t room to go into them here. Sirhan Sirhan, who shot Robert F. Kennedy Sr. in 1968 and radically altered the path of both American and Brain Worm history, claimed to have been a victim of MKUltra experiments.
The point of this whole long digression is that America spent millions of dollars and committed countless crimes against its own citizens in order to create mind control drugs that never worked, and all based on the justification that The Other Side Was Doing It, Too.
They did this with the help of escaped Nazis, whose crimes they’d ignored in exchange for scientific data, in part because The Other Side Was Doing It, Too.
A lot of the worst things people do are justified by arguing that someone else is already committing whatever crime is being proposed, and that OUR side can’t afford to be left behind.
Which brings us back to A.I.
Competitive A(i)dvantage.
Every major government that is trying to incorporate Artificial Intelligence into its military would plausibly be able to claim that they’re doing it so they don’t get left behind.
As Big Tech claim that A.I. could become superhumanly smart and take over the world, credulous governments listen and decide that if there’s going to be a world-ending God A.I., then it had better be their world-ending God A.I. Whether these people’s brains are so poisoned by an Us-and-Them mentality that they don’t understand the concept of “humanity” as a shared species, or whether they’re just hoping that Skynet won’t shit where it eats and will therefore only wipe out all the OTHER countries is a weird, depressing thought experiment, but ultimately pointless, because - and I keep saying it - there isn’t going to be an all-powerful computer intelligence. Smart A.I. is a con.
Trying to develop a super-smart A.I. before “they” can perfect theirs will lead to the same results as the attempts to make a mind control drug to combat the mind control drugs the Soviets allegedly already had: Nobody will achieve anything, it will cost a ton of money, and a lot of laws will get broken.
Unfortunately, just because it’s not going to work doesn’t mean that there can’t be horrific consequences. Let’s turn back the clock again and look at missiles.
The Miss(a)ile Gap.
Once the Korean War was over, and while the C.I.A. were busy tripping balls and trying to hypnotise brothel clients, the nuclear arms race between the U.S. and the Soviets kicked into gear.
What stopped this arms race from spilling over into the end of life on earth was, in part, the doctrine of Mutually Assured Destruction, which grew out of the Eisenhower-era policy of “massive retaliation.” The acronym was far from ideal.
Basically, U.S. President Dwight Eisenhower told the Soviet Union that they were welcome to start trouble if they wanted, but that if they did, the U.S. would respond with nuclear weapons. As the Soviets had similar weapons, they would inevitably respond in kind, and then everyone, everywhere would wind up dead. The threat was simple: If you start a fight, it’s literally the end of the world. So you’ve got to ask yourself a question: do I feel lucky? Well do you, comrade?!
Playing nuclear chicken is a fucking lunatic idea, but it worked. Given the choice between “everyone dies” and “any other option at all”, most people go with the latter. This didn’t, however, stop there from being a lot of dick swinging. Every year, each side built more and bigger missiles, and tested bigger and more deadly bombs.
In the U.S., no less a maniac than Henry Kissinger was a regular commentator on the “missile gap” - the disparity between the number of nuclear missiles the Soviets had and the number in the American arsenal. America, in the eyes of the pro-nuclear-war lobby, couldn’t afford to fall behind.
Interestingly, there’s a similar thing at work here to the “starve and beat people vs. mind control potion” confusion. The Soviets didn’t have anywhere near as many missiles as it appeared. If the Russians held a parade in Red Square to show off their military prowess, they’d often drive a missile through on a truck, have it go around the block and join the parade again, doubling the number of missiles people saw passing through. Even the missiles that were on show often didn’t have engines. This level of simple trickery was once again enough to panic the C.I.A. into thinking the Soviets were way more capable than they actually were, although in the C.I.A.’s defense, a lot of them were on acid.
America and the Soviet Union spent vast amounts of money and effort building up stockpiles of weapons they were realistically never going to use. Nuclear weapons only ever sat in silos, unfired, until they were decommissioned.
One could lament the hideous waste of it all - how many starving people could have been fed for the price of nuclear arsenals that were largely intended as a bluff? - but even if we acknowledge that A.I. is also going to be a colossal waste of money that could have done something useful, there’s a key difference between A.I. and nuclear weapons: Nobody let the public fuck around with the nukes.
Wa(i)r.
Every nation on earth is probably involved in digital warfare against its enemies.** This has been going on for decades. What’s new is that A.I. is now involved, so where thousands of simple Russian bots were employed on social media to try to swing the Brexit vote or elect Donald Trump (both successful operations), it’s no longer just bots making individual Twitter posts. Now there are A.I. “people” that can interact with the gullible and therefore seem more convincing. There are A.I. videos of things that never happened, that can be produced with no effort and used as propaganda.
Every nation will be doing this soon, if they aren’t already, but that’s not what I’m worried about. Not really. It might make the internet even more unusable than it has become in recent years, and that will be a shame, but the real danger of A.I. is not that it’s going to become smart and kill us all, and it’s not that Russia, or China, or Iran, might use it to seduce us with chatbots espousing devious Russo-Chino-Iranian ideologies. Although I’d love to know what THOSE look like.
The biggest danger is what it’s always been: That stupid assholes are going to bet big on A.I., and it’s going to fail. This is going to tank every economy that is invested in big tech, which is all of them.
A.I. is going to be promised as a saviour to greedy CEOs, so they’re going to fire their employees in droves. Everyone is going to end up poor and unemployed, in whatever country they live in, and other hostile nations won’t have to lift a finger to cause this.
People are already using A.I. instead of thinking. It’s having a noticeable effect, and ChatGPT, the flagship brand, has only been with us two years. Countries are propping up A.I. firms with fat defense contracts and subsidies of public money, even as their products make the citizens of those countries measurably dumber and more reliant on a technology that can’t be trusted.
Art is becoming worthless because too many people think they can get it for free from generative A.I. News is becoming meaningless because deepfakes are letting people choose their own realities.
Of the people who habitually use A.I., a frightening number are developing psychoses and god complexes. A.I. is not infrequently driving people to madness and delusion, and this isn’t something being inflicted on them from outside; they’re doing it to themselves. And as long as governments keep supporting these companies out of fear of losing a god-computer arms race that is never going to lead anywhere, entire countries are doing it to themselves.
Nations don’t need war, or even enemies, to collapse, so long as they keep propping up companies that are attempting to turn their citizens into delusional, impoverished halfwits who never leave their homes. And if every nation is pumping money into a technology that causes this outcome, then every nation is doing that to its own people. It’s like building an arsenal of nuclear weapons that will never be launched, except that the company building them is also encouraging your citizens to go in and lick the uranium cores because they think it might give them superpowers.
If this comes to pass - economic, artistic, intellectual collapse of every nation that ensured its A.I. sector was too big to fail - then the machines really will have brought about a doomsday scenario. Or our governments will have caused them to bring one about. Except that the A.I. still won’t be smart, and won’t want to rule us. It will just jig about nonsensically for our amusement without ever becoming anything clever, slowly making everything stupider, like humanity has rolled over and capitulated in order to make way for Barney the fucking Dinosaur.
By rights, the A.I. bubble should have burst. What frightens me is that government defense contracts all over the world might be invested in ensuring that it doesn’t burst. Because the longer the A.I. charade is allowed to go on, the more harm it does, and the harm it’s doing isn’t to the “other side.” It’s to everyone.
*These kinds of fuck-ups are, incidentally, why most conspiracy theories don’t pan out. If a lot of people are involved in a plot, then someone is going to talk, or someone is going to forget where he stashed a filing cabinet full of evidence. You think the hundreds - if not thousands - of people who would have to be involved with collapsing the World Trade Centre wouldn’t have included at least one idiot who would leave the paperwork lying around somewhere? Statistically, it’s impossible.
**Maybe not Costa Rica. Costa Rica has no standing military. Although it leans on the U.S. for its defence, so it’s debatable. Either way, at the end of the book Jurassic Park, the island is bombed by the Costa Rican air force, which spoiled the whole “cloned dinosaurs” story for me by making it unrealistic…