If it ain't broke, break it

2026-02-04


(note: I started this post last week but then so much newsworthy shit started happening, it seemed tone deaf to post something about AI art when every conspiracy theory of the last thirty years turned out to be true. Anyway I have plenty of more current-event-y thoughts but wanted to get this one off my plate)

[Tab] to improve, [Esc] to dismiss 

This afternoon I sat down to write a quick email to a new vendor. I've been using Gmail for a few decades now because in the early aughts, at least compared to dread Outlook, it was the Second Coming of Christ in mailbox provider form. It was sleek, quick, had a GUI that even a caveman like me could navigate, and searching for that potentially lawsuit-avoiding exchange from 6 years ago was a breeze. 

But today: Either something had changed, or I clicked the wrong button, or there was a glitch in the Matrix… but before I'd even had a chance to type the first letter there was a fully-composed email staring back at me that read like a Google re-translation (into Swahili, then into Khmer, then back into English again) of an ancient Chinese take-out menu stuck to the bottom of a kitchen junk drawer. 

In my adult life I've written probably a million emails. Is that an exaggeration? Perhaps. But at least many tens of thousands.  In my corporate days, in the decades between fax machines and Slack, it was our primary means of communication. I'd often send an email to someone sitting in the office right next to mine because I'd be damned if I was going to hand off a project to Trevor without putting every last detail in writing so I couldn't be blamed for Trevor's inevitable fuckups.  Also, as a moonlighting writer, I took pride in crafting clear, concise-ish, and easy-on-the-eyes correspondence. This was something I was good at. And no Computer Robot Man was going to best me. 

Google -- or Alphabet? -- or Gemini? -- seems to think differently. They are so gung-ho on me using their Computer Robot Man that they're shoving it deep up my dick and are forcing me to pull it out with a pair of pliers. It took me far longer to "fix" Gemini's cardboard missive than it would've taken me to write the email myself from scratch, in a language I didn't understand, with one finger on my bad hand.  

Here is the one and only situation where it maaaaybe might be worth outsourcing emails to the Computer Robot Man: Correspondence with dull middle-management types with poor comprehension skills, a lobotomy victim's grasp of subtext, and zero interests outside of work besides the suit they've been making from the skins of their victims. Not because LLMs would do a better job. Quite the opposite. It would be shit text for shit people who only understand shit text and I wouldn't have to waste a beautiful email on someone who wouldn't appreciate it. 

However I now work for myself so there's no HR or management miniboss I need to appease. Just me, my clients, my vendors, my lawyer, and occasionally my accountant, all of whom are wonderful people chosen by me specifically for their wonderfulness. Self-employment is a bitch sometimes, and by that I mean most of the time, but being able to choose whom I work with and when is something I'd never trade for all the AI-startup stock options in the world.  

What I have described above, as you might be aware, is what's known as "enshittification," a term coined by the hilarious Cory Doctorow to describe the process by which companies gradually make their product or service worse after achieving market domination. Either because it saves money or just because fuck you, that's why. 

However what makes AI-generated content different is that it was never not shitty. The technology has certainly evolved by leaps and bounds since those early days of DALL-E "art" that looked like low-res Angelfire GIFs. But the end results are now just better-rendered shit. 

In some cases, AI has actually gotten worse by getting better. I'll admit to following a few AI video content generators on Instagram. Not because their content is good but because it’s bad... but not so bad as to be intentionally bad. This is what's known as the "Uncanny Valley Effect." Like early CG-animated films e.g. Final Fantasy and Polar Express, primordial AI video was an unnerving Lynchian hellscape culled from a quagmire of errant ones and zeros, like a bad DMT trip with a machine elf who hates you. In other words: What made it interesting was the technology getting it wrong. Now that it gets it right, it's not worth looking at. 

You'll note that I have not called these AI generations "art" except thusly quotation-mark-ensconced, so as to indicate sarcasm.   

This is because Artificial Intelligence cannot -- and I mean all-caps CANNOT -- create art.

And I don't mean that I don't like it. This is not a criticism of the sort that a MAGA-capped midwestern tourist might make when visiting MoMA for the first time. 

As a matter of fact, in some instances I actually DO like it...  e.g. the Lynchian hellscape videos mentioned above. However just because it interests me does not make it art. A commuter train derailment is interesting. Not art. A crow picking at a moldy Jersey Mike's sub is interesting. But again, not art.  And so it is with AI generations.  AI can make words, sounds, images, perhaps even smells. But it cannot, will not, ever be able to produce ART. 

Here is a common argument that a slow-witted person might make on one of those final-stage enshittified social media platforms: 

"But it's just a tool! It's no different than Photoshop or Pro Tools or Final Cut Pro or..."

Yes it is different, you dolt. A tool is something that you, a human being, would use to pull a piece of your preconsciousness from the void and render it into a shareable medium. It's a hammer. It's not the finished house. 

And code-created tools have been around for a while, AI or otherwise, long before the stock market boom and marketing deluge. Plugins for Ableton Live and Photoshop and Premiere that might speed up a mundane process, emulate a real-world analog tool, or even offer a new tool born from the coder's imagination. But they were not doing the actual "creating" part for us. That came straight from our domes. 

Art requires a human being because art requires human emotion. It's not just a thing you look at, or listen to, or read, for a brief distraction. It has meaning TO ITS CREATOR that cannot be expressed any other way. It is a form of COMMUNICATION. 

Further it requires a sense of impermanence. Of death looming just over the horizon ready to take us at any time. Art is what we mortals do to make a small part of us immortal. But AI has no concept of its own mortality. It can live forever, or at least until the next software update.  There's no urgency, no regret, no grief, no pining for an old flame, no memories of a loved one long gone to tragedy, sickness, or disease. No joy. No wonder. No lust. No disgust. No love. And without these things lurking in the cracks and crevices of an artist's work, often unseen or at most merely hinted at, what you have before you is something distinctly non-art-like. 

By letting AI do the actual lifting for you, you are skipping out on the most important part of the creative process. A text prompt is barely a rough sketch. A proof-of-concept. And for the Computer Robot Man to even understand it, it has to be absolutely literal. Subtext and deeper meanings would be impossible. You can't tell Midjourney to "remember that time I fell off my bike and scraped my knee and cried more from the humiliation than the pain but then I saw my mother in the window and she was wearing the yellow dress with the flowers on it and she came out to the porch with a washcloth to clean the wound and give me a hug and I felt better. But also waterpaint a grizzly bear riding a Harley." 

Ironically, one of the worst AI hot takes of all time came from someone who was perhaps in the best position to make a legitimate case for it. Holly Herndon is a composer and installation artist who holds a PhD from Stanford's Center for Computer Research in Music and Acoustics. Her use of AI in her work is what I consider "best case." She creates her own models, trains them on her own compositions, and indeed uses them as "tools" to bring her compositions to life. It's exactly how I imagine 20th century composers like Iannis Xenakis and John Cage would use this technology if they had access to it. 

Her work is experimental and bold and is, without a doubt, non-quotation-mark-ensconced art. More on the museum side, down in NEA grant corner, but nonetheless worthy of the distinction. 

So it was disappointing to see Holly release such a shit-for-brains take as this: 

"If an expert uses AI to help them write an article they otherwise would not have the time to complete, it will not be of less substance than a non-expert with time on their hands writing something by hand."

Jesus fucking Christ, Holly. You're better than this. 

First of all, there's no such thing as an expert who would not have time to complete an article. By nature of them being an expert, time would never be the impediment. When I write, I am never at a loss for words.  They gush out of me like a science fair volcano.  This is what's known as being "in the zone." And all true artists know exactly what I mean by that. It's when the non-egoistic parts of your brain take over and it feels more like stenography than creating. This is not the time-suck. This is not the "hard part." This is the only reason we do this shit. Everything else is gravy. 

The time-suck is paring it down into something more concise, more readable, and of course cleaning up all those nasty typos. But this part of the process is what editors are for. And editors are not copyright holders. They don't generate the art; they refine it. If what Holly were defending were a tool that puts editors out of work, fine. Not for editors -- and I'd always argue that the human version is better than the machine version -- but at least that would avoid the conundrums of plagiarism and dehumanization. 

But also there are already tools that do this and they've been in existence for several decades. Who could forget Clippy, the Jar Jar Binks of Office 97? I would also argue that Clippy, for all his annoying intrusions, condescension, and creepy Bill Cosbyian rape-eyes, was better at this particular job than Gemini is today, as of this morning. 

But also this is not the AI use-case that anyone is upset about. If AI were simply correcting my syntax I'd be all for it. What's upsetting is the idea that AI can do something a) it can't, by its very nature as AI; b) that positions AI to take jobs away from the actual humans who CAN do the job AI can't do but is taking anyway; and c) that only serves to benefit a count-em-on-one-hand number of AI mega-scale investors WHO ARE ALREADY BILLIONAIRES.

Ironically, the best thing for AI will be the bubble bursting and investors running away from it like a magneto-optical drive full of NFTs. Once it’s back in the labs where grad students and PhDs can explore its full possibilities based on needs-driven applications, not profits, we might finally see its true potential. To IMPROVE our world. Not break it.

