AI isn't going to take your job – just your joy
This week, I reflect on AI's hype versus its creative limitations and my recent artistic journey.
I apologise for the lateness of this week’s issue, normally scheduled for Friday mornings. I racked my brain for a good excuse but couldn’t come up with anything convincing, so I asked an AI bot to write some instead:
"Sorry it’s late – email platform outage — but we’re back now!"
"Sorry it’s late – I held out for a last-minute update — and it’s here!"
"Sorry it’s late – quick 24-hour bug, but I’m back on track!"
Convincing.
This is relevant because I’ve been thinking a lot about AI this week. It’s the topic du jour, and in the last few days I read this piece by Ed Zitron, who argues that AI is “a technology that lacks any real purpose, sold as if it could do literally anything”.
He refers to tech writer Casey Newton as “Chief Valley Cheerleader”; Newton, in turn, wrote a piece this week about “the phony comforts of AI skepticism”.
In that piece, Newton criticises academic (and AI critic) Gary Marcus for what he sees as moving the goalposts when attacking AI’s failures. Marcus then hit back with a blogpost detailing the technical faults in Newton’s piece and the ways he was allegedly misquoted. Whew.
I share all of this because I think it demonstrates the strength of feeling in the tech world about AI/LLMs and their application—and impact. This week’s newsletter is about my own feelings and experiences rather than a summary of the latest high drama in Silicon Valley, but I think the context is relevant.
The sunk cost fallacy
We’re at a crossroads where a huge amount of money has been sunk into artificial intelligence and lots of folks are expecting to start seeing returns on it. My software developer friends and colleagues seem to be split into two opposing camps: those who embrace it and let it write code for them (and think anybody not using it must be a Luddite), and those on the glass-half-empty side who worry about surrendering control to machines, or suspect that most of the excitement is hype and that AI will never do all the things everyone keeps insisting it soon will.
I was going to reflect on how most public AI proponents are loud, rich, arrogant men – but then it’s rare to find anyone in a tech leadership role who doesn’t match that description. It certainly fits the technocrat dream of punting problems into the future for the technology itself to solve: e.g. it doesn’t matter that AI uses an obscene amount of energy and water, because AI will help us invent more environmentally friendly ways of solving that in the future.
I’m always instinctively suspicious of anyone who claims to work in technology but possesses an unshakeable faith in the ability of technology to, well, work.
Rachel Coldicutt has written consistently on how “FOMO is not a strategy” – e.g. following the crowd and jumping on the AI bandwagon because everyone else is doing it and you’re worried you’ll look like a dinosaur if you don’t. It’s business strategy at its worst: a self-fulfilling prophecy that will see many people lose their jobs – not because ChatGPT is going to do those jobs better than them, but because credulous C-suite folks will sink company money into fruitless, poorly conceived AI projects and then make unrelated cuts to cover the losses.
Ghosts in the machine
When I made my first album last year, I used AI to generate the album cover artwork, reasoning that I wasn’t an artist and couldn’t afford to pay someone to paint the image I envisioned. I was happy enough with the resulting artwork, though it has weird artefacts – I’m sure one of the distant birds is flying upside down.
For my new album—available on all good streaming services—I shot the cover photography myself after buying some cheap theatre mask props off eBay. I’m not a visual artist, so I leaned on a photographer buddy (thanks Mark!) for advice on choosing the best shot. But I’m so much happier with the results – because I made the art.
Outsourcing creativity—or seeing it as a chore or a product to pay for—is the future the AI zealots want. The (perceived) cheapness, the convenience and the lack of messy human involvement are presented as selling points, when in reality they just take away from the rough, human reality of art.
Sure, the AI thought leaders keep telling us that their goal is for their tools to do the boring work for us to free us up to do the stuff that’s really fun. But once these tools are out there, nobody can control what gets built using them. Look at “Spine”: a new company plotting to publish 8000 AI-written books next year. Who wants this? But who can stop it?
Back in 2011 I deleted my Facebook account, and I wrote about it at the time:
I'm glad, for once, not to surrender control to technology. Some things need a warm-blooded, absent-minded, stubby-fingered and fiercely alive human being to work best.
I still believe this today. We’ve all lived through enough of the ups and downs of the social web to know the risks, and the inevitable troughs that follow the peaks. Why are we so quick now to put all our eggs in the basket of AI hype?
Mini-feels this week
A mysterious endnote
I’m in the midst of some big life-changing stuff right now but can’t write about it for various reasons – sorry to tease, but you’ll hear about it soon. That’s perhaps a better explanation for this email’s lateness than the poorly constructed ChatGPT excuses above. Until next time!
— Matt