Full of slop until the bubble pops
There are growing warnings about the imminent end of the AI bubble. Brian Merchant (who I've written about) posted about "the beginning of the end of the generative AI boom," while Ed Zitron predicted that the "generative AI boom is unsustainable, and will ultimately collapse."
Even more striking is the warning from Goldman Sachs, a company that would not hesitate to boost AI if it seemed profitable. But prospects apparently look so grim that it issued a report describing the pitfalls of AI investments. A New York Times profile of the report's author followed (gift link, Sep 23, 2024).
Amidst all the noise, I noticed one other signal that got no attention. Axios reported on Sep 24:
A new startup using AI to transcribe and summarize pet appointments for veterinarians has raised an $8.2 million seed round led by Andreessen Horowitz.
Pet appointments? Anyone who lived through the late-90s dotcom bubble knows the particular danger sign of millions of dollars going into a startup for pets. (Read Wikipedia on Pets.com.) Yes, the "vet scribe" software probably has some use - just as a pet-supply website had a few customers, too, until the entire dotcom economy imploded.
The AI bubble is going to pop. The difficulty, of course, is that no one knows when. (As the old saying goes, "The market can stay irrational longer than you can stay solvent.")
In the meantime, we're stuck with the spreading effects of Big Tech's AI platforms. Max Read, in New York magazine, writes that we're drowning in AI slop (Sep 25, 2024):
- Sci-fi magazine Clarkesworld had to pause accepting submissions because so many ChatGPT-generated short stories were coming in.
- WordFreq, a project that tracks the frequency of word use online, will stop updating its database. “I don’t think anyone has reliable information about post-2021 language usage by humans,” says its creator, Robyn Speer.
- Scientific research is getting slopped, too, with some estimates suggesting that up to 10% of academic papers are now written with the help of generative AI.
- Most nightmarishly, entire AI-generated books are now sitting on library shelves. Max Read writes:
In the worst version of the slop future, your overwhelmed and underfunded local library is half-filled with these unchecked, unreviewed, unedited AI-generated artifacts, dispensing hallucinated facts and inhuman advice and distinguishable from their human-authored competition only through ceaseless effort.
What if no one wants AI?
In my Techtonic episode this week, I asked: What if no one wants AI? (Sep 23, 2024). Sure, there are some legitimate uses for AI, but increasingly we're being sold an insidious vision of the future in which AI inhabits everything, bringing with it the surveillance, the inaccurate results, the embedded bias, and the security risks - all brought about by unnecessary computation, powered by an obscene amount of energy and water.
Some examples of AI-enabled devices are worth a laugh. A Gothamist article (Sep 9, 2024) covers the "huupe," a $10,000 basketball hoop with a computer screen on the backboard, topped by a surveillance camera.
There are also AI-enabled dumbbells (I'm not making this up), which "count reps, track velocity, and analyze form in real-time." As I said on Techtonic, this product clearly addresses the key pain point of using dumbbells: the sheer pain of counting reps.
And then there are the Big Tech platforms, which continue to roll out AI-and-surveillance features that exploit their customers.
For example, LinkedIn Is Training AI on User Data Before Updating Its Terms of Service (Joseph Cox in 404 Media, Sep 18, 2024):
LinkedIn appears to have gone ahead with training AI on its users’ data, even creating a new option in its settings, without updating its terms of service.
Microsoft-owned LinkedIn defaulted the surveillance setting to ON for all users, and didn't notify anyone about it. (You'll have to dive into Settings -> Data Privacy to turn it off.)
There is an enduring link between AI and surveillance, made clear by the comments of one Big Tech billionaire. From Business Insider (Sep 15, 2024):
Larry Ellison, the billionaire cofounder of Oracle . . . said AI will usher in a new era of surveillance that he gleefully said will ensure “citizens will be on their best behavior.”
Does anyone notice anymore when the world's most powerful people - tech billionaires - speak openly about their aim to build an authoritarian surveillance state?
None of these examples show any long-term benefit for citizens, communities, or the environment. And the companies are showing that they have no interest in us, either.
Over at OpenAI, Sam Altman has been pushing the company away from its nonprofit roots and toward becoming "a for-profit benefit corporation that will no longer be controlled by its non-profit board" (says Reuters). Whenever OpenAI faces a choice between serving humanity and harming it to benefit shareholders, you know where Altman stands.
We have to ask: What if no one wants this?
The bright side
The good news is that some people and organizations are pushing back. I was happy to see this Register article (Sep 10, 2024) about the Vivaldi browser, whose team is explicitly refusing to add any AI-enabled features.
“LLMs are essentially confident-sounding lying machines with a penchant to occasionally disclose private data or plagiarize existing work,” Julien Picalausa, a software developer at Vivaldi, said in a memo to users. “While they do this, they also use vast amounts of energy and are happy using all the GPUs you can throw at them, which is a problem we’ve seen before in the field of cryptocurrencies.”
And in the public sector, FTC chair Lina Khan continues to do excellent work in taking on the cartels and monopolies that dominate the economy.
I was happy to see Khan profiled on 60 Minutes (Sep 22, 2024), showing how her efforts have already drastically lowered prices on asthma inhalers and other meds, even as she takes on the Big Tech giants for their exploitative behavior.
AI slop is not inevitable. The tech industry's latest bubble will pop eventually. And until it does, we can choose to stand with fellow citizens to say: no.
Invitation: Another way you can resist Big Tech, and support my work on this newsletter, is by joining Creative Good. (Our Forum is all member-generated, so there's no AI slop!)
Until next time,
–mark
Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon
P.S. If someone was nice enough to forward this to you, please subscribe to this newsletter.