I Hate Wasting Time on Identifying AI Slop
By Alex
I hate spending time trying to decipher whether something is AI slop or not. I do not enjoy committing a portion of my cognitive energy to staring at ads to figure out what I'm looking at. I do not enjoy reading synthetic text (as we say in The AI Con, "If they couldn't be bothered to write this, why should we be bothered to read it?"), and I do not think anybody needs to waste time and energy reading synthetic text. I disdain watching YouTube Shorts or Instagram Reels and wondering if the content I'm scrolling through is something of interest or if it's cheaply-generated trash that brute-forces the algorithms for engagement.
Why does it matter? There are a few reasons. First, if I am looking at an ad or some other type of media, it gives me an idea of what the brand thinks of me and the rest of their audience. This signals to me: Oh, the brand was too cheap to employ a designer or artist to do this work. They instead decided to engage in cheap practices to produce media that is, as Tressie McMillan Cottom has summed it up, "mid." The brand probably generated a few dozen images, consuming gallons of water and expelling kilos of carbon emissions, to produce this?
Second, it puts the brand on the same level as scammers and engagement farmers. As I've mentioned here before, I'm obsessed with the Glasgow Willy Wonka scam for many reasons, one of them being that it took the online scam into the world of the corporeal. But another is that AI slop is the stuff of con artists, pyramid schemers, and anyone trying to get as many eyeballs on content as possible.
Third, and I think this is the thing that grinds my gears the most, is that it has forced another level of cognitive load onto consumers, and worse, students. Many in the field of education (and I should say, this goes for both AI optimists and pessimists) have suggested that we need to teach students to be critical AI consumers, to understand that if they are going to use "AI" in the classroom, they need to be savvy about the outputs and take them with a grain of salt. These people argue that "AI literacy" has become part and parcel of fostering holistic information literacy. Which, sure. Students need information literacy. They need to understand where the information they consume comes from, as Emily has argued often. They need to foster the ability to understand the context of speakers, the incentives of the people creating that media, what the author's intended goals are, and so on. This is at the heart of good journalism, but also of discourse analysis, which investigates what kinds of language people use and why.
Deciding whether something is AI slop is none of those things. It's an annoying cognitive task: detecting weird photo artifacts, bizarre movement in videos, impossible animals and body horror, and reading through reams of anodyne text to determine whether the person who prompted the synthetic media machine cared enough to dedicate time and energy to the task of communicating with their audience.
I hate that this is the bleak future which venture capitalists and AI boosters have gleefully laid out for us, and that they consider this a "democratizing" technology in any real sense of the word. Far from strengthening democracy, these are technologies better suited to propping up scam capitalism and multi-level marketing schemes. I would like my time and mental space back.
Our book, The AI Con, is out on May 13, 2025, but you can pre-order it now wherever fine books are sold!