Welcoming (Or Not) Our New AI Overlords
Everything Is True
Ada Hoffmann's author newsletter
I've been thinking about AI writing a lot these past few months, which is no surprise, because so has half the Internet. But for me, it feels personal in a different way, because I worked on AI creativity for my PhD, back in 2018, before it was cool. The program I wrote was pretty useless (and it was basically a toy that was supposed to do a cool/funny thing - even if it had worked well, it wasn't going to replace humans!). But my theoretical writing - about how we define creativity and how it could be evaluated, taking cues from psychology and other fields that computer scientists often ignore - was judged good enough for a doctorate.
Nowadays I teach cognitive science to undergraduate students, and their interest in this topic is huge. I rewrote a whole unit in one of my courses last fall so that we could talk about large language models. I vividly remember the discussion I got into with one student who'd read Blake Lemoine's LaMDA transcripts and was convinced, like Lemoine, that LaMDA might be sentient. (Spoiler: it is not, but it is very good at pretending to be, based on our cultural expectations of the kinds of things a sentient AI would say.)
I've also written, in the Outside series, about tropey AI that takes over the world. I wrote it that way, not because I think AI will actually take over in that way, but because I was reaching for something oppressive and religious to hang the worldbuilding off of; "AI Gods" seemed like as good a concept as any.
I was always clear - in my head, at least - that the AI Gods were just fantasy and not a representation of AI from real life. At the beginning of the series I was fine with that. By the time I got to the third book, I'd started to question it more. I'd had time to think about how AI hype and inflated notions of AI's abilities can themselves be harmful. As Ted Chiang writes so eloquently, the problem isn't AI "getting out of control"; it's corporations that deliberately or callously use AI to harm people. The idea that the AI is somehow all-powerful, unerring, or wiser than us enables a lot of that harm. Sometimes we have to puncture the hype, not so we can downplay the harm, but so that we can see the harm clearly in the first place.
(Speaking of Ted Chiang, he also wrote one of the best articles demystifying how LLMs actually work, and why they aren't as clever as they appear to be.)
In THE INFINITE I started to play a little more with the idea of where the Gods came from. (Mild spoiler: they did not just randomly decide to take over; they took over because a specific, powerful group of humans programmed them to do it.)
After that, I took a break. I tried a few WIP ideas, many of which fizzled, one of which is definitely becoming a novel-length manuscript*, most of which were fantasy and none of which involve AI at all.
(*Too early to say anything about it now, though. This is a finicky business.)
It's important to be clear about these things because of the sheer amount of hype there is - and the vested interest that many corporations have in maintaining this hype. As just one example: OpenAI wrote a paper claiming that around 80% of the workforce will have at least 10% of their work tasks disrupted by LLMs in the near future. The paper was not peer reviewed (and didn't offer any policy suggestions for how to mitigate the potential harms of this disruption; OpenAI seems to think that's someone else's job). Is the disruption actually going to be that big? Who knows! Maybe! Could be! But making you think it's that big is to OpenAI's advantage in obvious ways.
To a great extent, if AI takes your job, it's not because the AI is better than you. It's because someone convinced your boss that the AI was better - or cheaper.
At the same time, just because there's hype doesn't mean we can shrug and assume it will blow over. These models are already changing how people think about writing. (And that's not even to begin to address the stark shifts they've caused in other fields, like when they write code for the tech industry, or when they do university students' homework for them. And boy have they ever been doing my students' homework for them. More on that in another post.) Traditionally published fiction is slightly insulated because it depends on forms of artistry that LLMs are, frankly, bad at; but my acquaintances who do freelance nonfiction are already feeling the pinch. Clients expect them to rewrite ChatGPT's output instead of writing something of their own, and expect to pay less for this work, even though it's just as difficult and time-consuming.
The worst case scenario here isn't that AI ends up surpassing human abilities; it's that we all end up drowning in a sea of Content (tm), of a type which is worse and shoddier than what a human could produce, but which is all any of the major players will pay for anymore. Frankly, there were a lot of groups inclined to take things in this direction even before ChatGPT showed up.
Or maybe that won't happen. In the US, the government is actively consulting with stakeholders to try to figure out what it should do, and lawsuits about it are also pending. There are a lot of proposed reforms (especially with regards to how companies train these models, and whether it violates copyright for them to train the models on human work without permission) that could take some of the scariest teeth out of the problem. Maybe those reforms will work. Maybe something else will. Maybe the field of writing fundamentally changes, but in a way that is manageable and doesn't leave the majority of human writers out of a job. Maybe very little actually changes, the hype blows over, people get bored by the characteristic problems in what LLMs generate, tech corporations' stocks plummet, AI winter finally comes, and we're on to the next problem.
It's trite to say that the outcomes are up to how we respond as a society, but it's at least somewhat true. Some of us are in a better position to create those responses than others.
Which is why I've been coming back to this topic lately, not primarily as a fiction author,* but as a scholar.
(*I have some story ideas about it too, because when I have a sufficiently intense interest it affects all areas of life, but it's too early to say if they’ll amount to anything. I'm finding that my writing process these days involves spinning up a lot of ideas, trying them for a chapter or two to see how they feel, and discarding most.)
A colleague convinced me to write a short paper with him, based loosely on what I said in my "Midjourney Mess" posts ([1] and [2]), and this weekend I got the acceptance notification. It's called "Should we have seen the coming storm? Transformers, ethics, and CC." This is a bit of an inside-baseball paper - it's about the specific research community I was part of in grad school and why we were unprepared for the directions the tech sector went in with these technologies, not to mention the social furore that resulted. It's a position paper supported by a literature review, rather than a source of new data. But I've got a grant application in the works that will hopefully allow me to gather new data in the near future. I'd like to study the actual effects that LLMs have on authors over time. Some groups, notably Humanity In Fiction and the Authors Guild, have already started this form of research through surveys; I'd like to find a way to use academic resources to go deeper.
This is notable because, since 2019, I basically haven't been doing any research. I've been adjuncting and dealing with burnout (which I'm no longer in, but it lasted a few years post-PhD, in part because I wasn't happy with my living situation). Now I've got the research bug again! Like, wow, I actually love this? I forgot that I love it? It is reaching special interest levels - which is fun because my special interests don't normally have much to do with my day job. (Although also it is weird having a special interest in something that bothers/worries me; I'm not the first autistic person to experience this mix of emotions, but it's new for me.)
And part of why this has a hold on my emotions right now is because, frankly, I spent the entire period from like 2014-2021 hiding in my room, mired in my mental illnesses and personal problems, feeling helpless about the state of the world. (So that covers the entire Trump presidency, among other things.) It really is true what they say about helplessness. I have no delusions that I can single-handedly save the world from LLMs, but being able to do something useful - even if that thing, in classic academic style, is "gather data and then bitch about it" - is immensely good for me, emotionally. I've known this abstractly for a long time, but I feel like it didn't click for me in terms of actual practice until now.
So, I'm probably going to be talking about AI on this Substack a lot more. Hopefully that's of interest! There will still of course be Autism Content and other posts related to the fiction world. But this really is the Ada Hoffmann Substack and I'm not really good enough at masking to make it about anything other than The Stuff That's On Ada Hoffmann's Mind at a given moment.
I can't promise anything, but I hope I can help protect the needs of authors while the world changes around us.