Luddites Unite: Keeping ScOR #21

In South Carolina, as in many other places, the sprouting of AI data centers is putting stress on communities and local governments, raising questions about how to manage it all. What guardrails should local agencies put around the construction of data centers, the new sources of energy needed to power them, and the facilities’ use of water and other resources? You might think this could prompt the need for new regulations, or at least the thoughtful application of existing ones. (In New York, lawmakers have introduced a bill that would place a moratorium on new data centers in that state.)
But lawmakers in South Carolina are busy pushing in the other direction. They’re moving to strip away any regulations that might create speed bumps for the AI boom. “We do not want to have unelected bureaucrats making rules and laws that are forced upon people,” the Republican speaker of the State House, G. Murrell Smith, told the New York Times.
No word on whether Rep. Smith is concerned about having AI itself, and all that comes with it, forced upon people by unelected forces.
Artificial intelligence is a vast topic with sprawling ramifications — economic, scientific, moral, cultural, aesthetic. Like a lot of non-experts, I’m trying to read enough to gain some basic understanding of generative large language model (LLM) technologies and how they’re altering our world — that is, when I’m not actively shoving my head into the sand and wishing the whole thing would blow over. It will not, of course.
In this short piece I won’t try to address all the big questions surrounding AI, or to argue against any and all use of the technology. But I do find myself thinking about a few aspects of the AI tsunami that happen to echo themes we’ve explored on Scene on Radio. (See especially Season 5, The Repair, and Season 7, Capitalism.)
Slaves to Growth
AI probably won’t enslave the human race. It probably won’t exterminate us. Then again, maybe it will. It’s not really comforting to hear from a leading AI executive that his p(doom), or “probability of doom” estimate, is “only” 10 to 25 percent. That’s the likelihood, says Anthropic CEO Dario Amodei, that the AI experiment will go catastrophically wrong, potentially dooming human civilization.
But even if nothing so calamitous comes to pass, we’re already shackling ourselves to AI — and the fragile, unsustainable economic system that it’s currently keeping afloat. The vast investments in AI by a handful of giant corporations are the only reason the stock market has continued to rise in an otherwise stagnant economy. “By one measure, more than 90 percent of economic growth in the first half of 2025 came from investments in computer equipment and software, which economists chalk up to projects linked to the rush to build data centers and remain in the AI race,” the Times reported in December. The AI giant Nvidia, all by itself, accounted for 15% of the S&P 500’s total return last year.
AI is the new frontier, the next slice of terra incognita into which capitalists are now plowing their capital to leverage the next wave of growth in our (well, mostly their) economy. For 500 years, capital has gone in search of new lands, new stuff to pull from the earth, new sources of labor with which to build wealth. Early on, it was the enclosure of European land, the trade in kidnapped and enslaved Africans, colonial conquest and gunboat capitalism. Then came other forms of investment — in the early U.S. we saw the violent push into the cotton-growing states, then into Mexico and the west, and beyond. Capitalists poured money into lumber, steel, railroads, oil, automobiles, and the Web.
Now the big bet is on AI. It came along just in time, answering the perennial question: where will GDP growth come from next? The big money made its choice before most people had begun to wrap our minds around AI — and before the mainstream of society could take seriously the debate about whether, just perhaps, it might be time to turn away from a regime of endless economic growth. Climate scientists and heterodox economists have spent decades explaining why an economic system based on ever-expanding extraction and consumption is plainly unsustainable and has us hurtling toward ecological disaster as we barrel through one planetary boundary after another. Earth can only give — and take — so much.
So much for that talk! Like an addict who pauses for a moment to consider his life choices, then says “ah, screw it” and goes off on a bender, we’ve rushed to embrace AI and what it will do for productivity, GDP, and the Nasdaq. This choice is doubly suicidal given that AI requires vast new expenditures of energy, much of which will come from the burning of fossil fuels.
What are we even doing here?
I’m saying “we,” but who’s we? Most of us are spectators — and guinea pigs and potential victims — as the people with real power, the movers of capital, hurl humanity down this dangerous path. AI is already driving up your utility bills and exacerbating global heating, but on the other hand, it will soon take your job. It’s always heartwarming when tech moguls talk about the potential for AI to “free” people from work — as if, in this you’re-on-your-own political environment, freeing people from their work will not also free them from their paychecks.
Honestly, though, who needs people when you’ve got AI? Sam Altman, the CEO of OpenAI, made waves at a recent conference in India with his innovative defense of the energy required to train an AI model. “It also takes a lot of energy to train a human,” Altman said. “It takes like 20 years of life, and all the food you eat before that time, before you get smart. And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever to produce you, and then you took whatever you took.” Altman is essentially calling humans “inefficient meat computers,” one person retorted on social media — which, come to think of it, is precisely what the AI models themselves may conclude before voting us off the island. “Anybody who talks like this about humans should not be allowed a job that in any way impacts other humans,” wrote another poster.
Given this rant of mine, it won’t surprise you that I’ve never been what you’d call an early adopter. I got my first cell phone a couple of years after many people did. I was similarly slow to trade in my flip phone for a smart one. But I’m no technophobe. I use technology all day long — including non-generative AI that transcribes my audio recordings. When I’m exercising or washing dishes, I will listen to the AI-generated audio version of a newspaper article. But the onslaught of generative AI feels qualitatively different from other tech “advances” in our lifetime, even the Internet revolution, and it’s helped me discover my inner Luddite. I know I’m far from alone. As historians have often pointed out, the Luddites didn’t smash machines because they hated technology. “They just wanted machines that made high-quality goods,” historian Kevin Binfield has said, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages.”
So, yes, it’s about the absurd energy-guzzling and the accelerated roasting of the planet, but it’s also about labor, quality, dignity. No thanks, I don’t want to try out the new AI tool on your platform. I don’t want to click on your AI-generated video. I have zero interest in your novel, your movie, or your podcast created by AI. Call me old-fashioned, but I don’t want an LLM to write anything for me — not an essay, not an email, not even an outline or a syllabus. I’d rather not read emails from friends or colleagues that were drafted by AI.
People I respect use generative AI and express high hopes for its potential. Here’s hoping the whole thing turns out great. Maybe AI will cure cancer and Alzheimer’s and solve countless other scientific conundrums. Hell, maybe it’ll somehow solve the climate crisis. Maybe it will help us organize and analyze our data and our lives, freeing us from much drudgery and busywork.
Forgive me if I’m skeptical, in part because I don’t believe that any of these noble goals are fueling this runaway locomotive. The hundreds of billions of dollars being invested are seeking trillions in return — which will come if and when productivity soars and labor costs are driven down on an enormous scale. To the detriment, almost certainly, of millions of us.
The politics of it all have been slow to gain steam — largely, I suppose, because there’s so much else going on. But it’s coming. Politicians as diverse as Mikie Sherrill, New Jersey’s new Democratic governor, and Florida’s right-wing leader, Ron DeSantis, have taken up the mantle of regulating data centers and their impact on energy bills. The news media is increasingly taking note that Americans, on the whole, hate AI and want it tightly regulated.
You’d think one political party or the other would position itself as the party that will save us from unconstrained AI. But that would require such a party to declare independence from the corporations investing those hundreds of billions. Those of us raging against this machine, or just deeply uneasy about it, are waiting with bated breath.
*
Like this? Share Keeping ScOR with a friend, or several, by forwarding this message to them.
If you missed past posts, find the archive here.