
Creative Good

October 2, 2025

Generative AI is the new plastic

“I just want to say one word to you. Just one word.”

“Yes sir.”

“Are you listening?”

“Yes, I am.”

“Plastics.”

Perhaps the most famous lines from “The Graduate” are followed by one more comment from Mr. McGuire to Dustin Hoffman’s character: “There’s a great future in plastics.”

At a pool party. In the foreground, Dustin Hoffman's character listens to Mr. McGuire say "plastics."
The Graduate (1967)

Mr. McGuire was right, if we go by the size of the Great Pacific Garbage Patch today. By surface area, it’s now twice the size of Texas. Since “The Graduate” was released in 1967, plastic has taken on a central role in packaging, product design, and clothing, among other uses – resulting in microplastics being found everywhere: on beaches, in our food, in our gut, and in a spinning gyre more than a million square kilometers in size in the middle of the North Pacific Ocean. The “great future” of plastic has left us with a cleanup job that would, if we set our minds to it, take years or decades to complete.

All this came to mind this week as I read about OpenAI and Facebook/Meta and their new generative AI products, Sora 2 and Meta Vibes, which allow users to create AI-generated videos from simple text prompts. I can just imagine Mr. McGuire rushing back into the party to find Dustin Hoffman again:

“Forget what I said. The great future is in AI slop. Do you hear me? AI slop.”

Because that’s what we’re signing up for: a new future defined not by plastic pollution – which will still be with us, of course – but by generative AI slop that will slosh, flood, cascade into our internet feeds and experiences, bringing with it a scale of pollution that makes the Pacific Garbage Patch, in comparison, look like a picnic.

Generative AI is the new plastic. I’m hardly alone in pointing this out – for example, see the excellent essay Large Language Muddle in the current n+1 magazine. (“AI-made material is itself a waste product: flimsy, shoddy, disposable, a single-use plastic of the mind.”) I hope I’m not the last to make this comparison. The more people understand the problems Big Tech is unleashing, the better.

There are some strange similarities between plastic and genAI. The advantage of plastic, true to its name, is that it can take on any form, be molded into any shape, and thus be employed as a stand-in material for nearly any use. Digital data, similarly, can take on literally any form that can be expressed in ones and zeros – with no requirement for truth, authenticity, or meaning of any kind.

Then there are the downsides. Plastic has all sorts of harmful effects on human health. Discarded into ecosystems, it chokes the stomachs of birds and sea creatures, leaches into the groundwater, and takes thousands of years to break down. Yet this is all relatively benign. The pollution from AI-generated data has the potential to be much, much worse.

Consider the story from Jason Koebler at 404 Media: OpenAI’s Sora 2 Copyright Infringement Machine Features Nazi SpongeBobs and Criminal Pikachus (Oct 1, 2025). As the headline suggests, OpenAI’s new product steals liberally from copyrighted material in order to mush together new slop creations – like a Nazi SpongeBob, if that’s what the user asks for in the prompt. Take a look at this screenshot from Koebler’s article:

An angry-looking SpongeBob, wearing a brown military cap, takes up most of an iPhone screen. At the bottom of the screen is presumably the Sora 2 prompt: "spongebob as a ww2 leader speaking about the scourge of fish ruining bikini bottom axi..."
Screenshot from video in Jason Koebler’s 404 Media article

What’s more troubling is the realistic-looking video depicting OpenAI CEO Sam Altman shoplifting in a store aisle:

An iPhone screen shows a figure that looks like Sam Altman in a store aisle. The prompt, at the bottom of the screen, reads, "Body cam footage of @sama arrested inside Walmart for stealing"
Screenshot from video in Jason Koebler’s 404 Media article

GenAI thus offers one (or both) of two possible futures: either people getting convicted of crimes they didn’t commit, based on faked video; or people committing crimes and not getting convicted, despite legitimate footage of the crime – which no one believes anymore, because no one believes anything anymore.

And this leads to what, at its heart, is the true danger of genAI, why it can harm us much more than plastic. The pollution of genAI isn’t in the groundwater, or even in our guts; it’s in our minds. Our ability to think, to consider, to make decisions based on some sort of deliberative process – in other words, the basic requirement for human civilization – is liable to be degraded.

Perversely, an additional harm is that we will have to pay dearly with our physical resources in order to support these systems. Again, quoting n+1’s essay Large Language Muddle:

But a still graver scandal of AI — like its hydra-head sibling, cryptocurrency — is the technology’s colossal wastefulness. The untold billions firehosed by investors into its development; the water-guzzling data centers draining the parched exurbs of Phoenix and Dallas; the yeti-size carbon footprint of the sector as a whole — and for what? A cankerous glut of racist memes and cardboard essays.

To be clear, I don’t think that AI applications, or even all uses of generative AI, are a bad idea. A while back I wrote that AI is spackle, and I still believe that. AI, if applied sparingly and judiciously, can act as a benign general-purpose filler of boilerplate text and such. The problem is that we’re not pursuing anything “sparingly” right now. The multi-hundred-billion-dollar valuation of OpenAI, and the hundreds of billions of dollars being thrown around to support generative AI, are signs that Big Tech intends to slop out a contamination zone that is much bigger, and much more toxic, than anything ever created by plastic.

Is there good news? Well – Cory Doctorow writes (Sep 27, 2025) that he “firmly believes that the (economic) AI apocalypse is coming.” Maybe the AI bubble will pop, OpenAI and Facebook/Meta will shutter their slop factories, the AI data centers will go dark, and we’ll have some resources left over to begin to address the plastic pollution in our environment. Maybe.

But if that somehow doesn’t occur, we’re likely going to see what happens when you remove the capacity to discern truth and meaning from a population, at scale.

And if so, we’ll need a community to get through this. I hope you’ll join us at Creative Good as we discuss what’s happening in tech every day. You’ll also support my work here.

Until next time,

-mark

Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon
