Footnotes

August 3, 2025

What of the Storyteller in an AI Economy?

I have frequently returned to Ted Chiang’s 2024 New Yorker column, “Why A.I. Isn’t Going to Make Art,” even as my own impressions of AI (which I use here as shorthand for LLMs) have evolved. Chiang’s piece begins:

In 1953, Roald Dahl published “The Great Automatic Grammatizator,” a short story about an electrical engineer who secretly desires to be a writer. One day, after completing construction of the world’s fastest calculating machine, the engineer realizes that “English grammar is governed by rules that are almost mathematical in their strictness.” He constructs a fiction-writing machine that can produce a five-thousand-word short story in thirty seconds; a novel takes fifteen minutes . . . .

But Chiang muses that there is more to art than mere calculations. Rather, “art is something that results from making a lot of choices.” He argues that AI cannot make choices in the same way, and floats words like “meaning” and “intention” throughout the essay to clarify what he means.

These qualifications are necessary. I’ve played around enough with machine translation in my capacity as a translator to see that an LLM is able, thanks to rulesets and programming, to change its output depending on context and other specified variables. As Chiang notes, to be as successful as a human storyteller, AI must successfully “fill in for all of the choices” that a human storyteller would make. Proponents of LLMs would instead tell you that the AI can and does make these choices, as demonstrated by its ability to output different text when given different variables.

But it is quite different. Such a change in “behaviour,” based on inputted variables, is not spontaneous. At most, human programmers (or, woe betide us, programs writing programs based on programs once written by humans) are instructing the program which “choices” to make and how. In short, machines must be told not only to do this, but how to do it, or they will not do it at all. But when someone tells us which choice to make, it is no longer a choice—it is an instruction.

Perhaps this is still enough to “fill in for” the choices that go into composing a story. If there is a book at the end of it, perhaps it is all the same?

It seems useful to clarify what it means for a writer to make a choice. Chiang wonders whether AI-generated writing is “interesting art,” but I think this does not matter when talking about the viability of an AI-generated novel. One need only look at any bestsellers list to see that a great many readers are interested in books for reasons other than how interesting the art is. I am myself extremely in my mystery era, a genre hardly known for its originality: there is a murder, and it will be solved. It is very seldom interesting art. It is, however, a lovely way to unwind.

Reading for leisure is a hard-won skill for me, and though I also read a great deal to learn about craft, to reflect, to be moved, or for the aforementioned interesting art, most books are not bought because they are interesting art. Many consumers find the idea of an AI-generated book appealing precisely because they can tailor the story to their specifications. AI, once having been given the prompts it requires to “fill in for the choices” a writer would make, aggregates various choices other writers have already made into something resembling a story.

But it is not a story; it is merely a string of symbols. Chiang uses the analogy of an empty apology: “I’m sorry” without the feeling of regret behind it is tantamount to air. To borrow badly from semiotics, AI may be perfectly capable of using signifiers (the words themselves), but it lacks the required cognition to apply the important second element—the signified, or the actual state of being sorry—that is necessary for a symbol to have meaning. In other words, AI does not generate a story; it generates words. Chiang, again, said it better than me: “A large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate.”

As any student of literature can tell you, the author is dead. And as any sane writer will tell you, this is for the best; to be wholly responsible for how others react to your text is a very effective impetus to never produce another one. But what this sentiment means at its core is not that the author’s intention does not matter; rather, that a text and the society that receives it are in dialogue. To incorporate one’s environment, life, and experiences is fundamental to the task of telling a story, to which readers then respond.

Whether or not it makes for interesting art, this is a task an LLM is simply unable to perform. In observing the debate about the uses of LLM AIs, I am most interested in what people believe AI is capable of. There is what it is clearly capable of; and then there are the delusions. Because venture capital relies so heavily on hype to garner the investment that keeps the industry afloat, VCs are highly motivated to lie about what their product can do. Moreover, they are motivated to lie about what their product may be able to do tomorrow—and more motivated again to actually believe in the shit they’re saying.

Believing that AI may someday exceed humanity’s capacity for thought requires a huge amount of support in the form of linguistic scaffolding. It is useful to talk about AI in human terms to help others understand what it does or is supposed to do—but somewhat surprisingly to me, the analogy is also used the other way.

Can you describe what a memory is without using computer terms like “store” and “retrieve”? No one can, even though memories don’t really work like that. This problem of language has been around since well before we started worrying about our current iteration of AI. Robert Epstein wrote in 2016 about the faulty ways we talk about memory and how the human brain operates nothing at all like a computer.

This is one of those lovely pieces that’s a whole lot of takedown without being quite able to float an alternative framework, but I love it for that. That’s science, baby! We know fucking nothing! Cognition in particular is so unknown to us. But we do know how human babies come into the world. Per Epstein:

Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. … Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world.

Newborns can learn from birth—making learning one of the most essential elements of human experience. Children quite famously need to experience things in order to learn. Babies who enjoy a great deal of sensory play can advance their sensory development, learn about the world around them, and—crucially—develop their brain function when all five of their senses are engaged. For older children and adults, experiential learning as pioneered by Kolb posits that ideal learning entails a four-step cycle of experience, reflection, conceptualization, and experimentation. We already know that kids whose education was derailed by the pandemic are experiencing higher rates of learning, cognitive, and social delays due to, among other things, the two-year limitation on their ability to have new social and practical experiences.

It is this experience—one’s environment, life, memories, learning—that storytellers put into their work. In his 2017 Nobel Prize speech, Kazuo Ishiguro put words to what seems to me the essential task of the storyteller:

Stories can entertain, sometimes teach or argue a point. But for me the essential thing is that they communicate feelings. … In the end, stories are about one person saying to another: This is the way it feels to me. Can you understand what I’m saying? Does it also feel this way to you?

AI is fundamentally incapable of integrating any of this into the text it produces. Call it meaning, feeling, choice, learning, memory, cognition—the machine cannot perceive or convey any of it, because it has itself experienced nothing. Just as human memories have no associated data file, AI has no experience to draw on. It has, per Epstein, “information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently.”

But, Epstein again: “Not only are [humans] not born with such things, we also don’t develop them – ever.” And, just the same, AI cannot develop what Homo sapiens has developed through evolution over hundreds of thousands of years.

Human beings and machines are two different things. No matter how confused our language becomes, they cannot be compared. Machines may offer a capable analogy of thinking, but that doesn’t mean they are capable of thinking. They are capable only of responding—to us—by synthesizing material we have already written, based on the prompts we are feeding them.

They are not making choices; they are following instructions. They are not life forms; they are information processors. And information processors may be able to produce a book, but a true story cannot be told by something that cannot experience, learn, intend, or choose.

The market for storytelling is another matter. But in this way—certainly in our current day and near future—humanity simply cannot be replaced. And I deeply hope, no matter the market conditions, that you and I never decide to leave the matter to the machines.

Such a text is quite literally not the same.
