How the powerful want us to see AI
A funny thing happened when Isaac Newton discovered gravity. As Peter Dear writes in his new history of science, The World As We Know It, English authorities in the early 1700s advanced a particular interpretation of what gravity meant. It wasn’t merely the mutual attraction between two masses – like the earth and the sun, or an apple and the earth. Dear writes (emphasis mine):
For the Newtonian guardians of orthodoxy, God ruled the universe by imposing His will upon it; gravity was a law of nature willed by God on inert, passive matter. Similarly, in human society, laws are imposed by those at the top on the rest of the population, which has no say in the matter. . . .
Newtonianism’s rapid acceptance in England . . . was in significant part due to its usefulness in promoting orthodox religion and supporting the established political order.
You might not think that gravity, of all things, could be politicized, but here we see that Newton’s scientific findings were co-opted by people who wanted to preserve the status quo. Those in power promoted an interpretation of Newton’s work that served their own interests. (Meanwhile, across the Channel, Voltaire advocated an opposing idea, which was that gravity stemmed from the objects themselves – and thus the people, not the church, should have the power.)
History rhymes, goes the old saying, and today in the age of AI hype we’re seeing the same pattern play out. People in power want to use a new, not-very-well-understood discovery to serve their ends. So any new observation about an AI system is an excuse for the boosters to excitedly ask questions like: “Is it sentient?” “Does it deserve rights?” “Maybe it’s a god? Or at least godlike enough to replace most jobs?” In all cases, they’re hoping we’ll agree to the answer that benefits them: “Yes.”
Every week of tech news provides multiple examples; a recent one comes from AI-company CEO Matt Shumer, whose essay Something Big is Happening has gotten a lot of attention. Shumer suggests that recent AI models have hit an inflection point that is so mind-blowingly awesome that – well, let me quote Shumer:
Amodei has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.
Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?
Think about what that means for your work.
He mentions “Amodei.” That’s Dario Amodei, CEO of Anthropic, the (Google- and Amazon-funded) company behind the Claude AI platform. Yes, that means the viral column about AI’s amazing capabilities, written by an AI CEO, quotes the breathless prediction of – another AI CEO. Let that land for a second.
Nonetheless, the claims are bold, stating that AI is on track to be smarter than all of us – a prediction that, of course, fits the purposes of an AI booster. As AI expert Gary Marcus writes in response, the essay is “a masterpiece in hype . . . completely one-sided, omitting lots of concerns that have been widely expressed here and elsewhere.”
Much like Newton’s findings about gravity, recent developments in AI are being interpreted in a very particular way by the most powerful people in the world.
If you need a refresher on where power resides today, take a look at Wikipedia’s List of public corporations by market capitalization. The top five are Nvidia ($4.5 trillion), Apple ($4 trillion), Google ($3.8 trillion), Microsoft ($3 trillion), and Amazon ($2.2 trillion). That’s over $17 trillion of wealth tied up in convincing us that their AI platforms are the most important technology ever created.
Working in partnership with the current occupant and his regime, a blob of tech billionaires and authoritarians has bet the entire American economy on a particular vision of AI – one dominated by a few companies, served up by multi-billion-dollar data centers, and paid for (via pension fund investments, higher electricity prices, drained aquifers, polluted air, and so on) by American citizens and surrounding ecosystems. It’s an unthinkably huge gamble, bound to fail, but – and this rhymes with the 2008 financial crisis – the oligarchs only need to maintain the ruse long enough for them to get paid. Then they’ll leave the rest of us holding the bag.
Hence the self-serving interpretation of AI that’s sentient, all-powerful, godlike, inevitable, and so on.
As long as we’re looking at interpretations, I’ll offer mine. I think AI can actually be a helpful tool. After all, if you peel away the layers and layers of hype, AI is essentially just advanced statistical computation. And – to state the obvious – computation can lead to good outcomes, if it’s deployed right. Our world is saturated in computation, from the apps we use to the infrastructure that connects everything around us. In theory, nothing should prevent new models of computation – new ideas in AI – from generating real benefits in the future.
The problem, and the obstacle to those benefits, is that AI today is dominated by a handful of unethical companies. Until we find a way to route around, or directly oppose, those companies, AI will continue to be defined by a group of predatory oligarchs. (I do mean, literally, predatory.) The way to make AI better, it turns out, is by resisting the current power structure – and beginning to build our own.
. . .
Listen to my interview with Peter Dear, author of The World As We Know It, on this week’s Techtonic:

Other AI items:
Eryk Salvaggio writes that there’s a big difference between “AI” that looks for patterns in medical imaging (helpful) and “AI” that relies only on a language model to make diagnoses (dangerous). Read The Illusion of AI (Feb 9, 2026).
Interesting short video showing off the latest Seedance 2.0 AI model from China-based ByteDance. Technically impressive.
For Creative Good members, I’ve posted more resources on our members-only Forum:
AI agents are posting on Moltbook, a thread about how a social network for bots has gotten lots of attention – and, to be fair, generated a few interesting examples. Predictably, the boosters are interpreting this as sentient! godlike! etc.
Instagram CEO claims his platform isn’t addictive – speaking of unethical companies and predatory leaders. (See also my post about Facebook/Meta’s $10 billion data center in Indiana.)
Why I use Buttondown for the newsletter, and not Substack
Good Reports listing of Amazon Ring alternatives: Home cameras that can record to local storage, not the cloud.
To support my writing, and to get access to all my posts, columns, comments, and Good Reports alternatives on the Forum: please join Creative Good.
Until next time,
-mark
Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon