#24 I Would Prefer Not to Be Publicly Shamed
AI and Creativity

AI and the Creative Condition
Item One: Poetic forms are technologies: a sonnet is an algorithm, which is another word for a set of instructions. If you don’t follow the rules, your poem will not be a sonnet1.
Item Two: The relationships people form with AI chatbots follow recognisable masterplots as the chatbot works to both affirm and entertain the user to keep them coming back for more. This infinite chat spiral can lead in many directions, including human-AI romantic entanglements and, in the worst cases, suicide2.
Item Three: Alan Turing might have formulated the Turing Test (can a computer programme convince you that it is human?) after watching ‘Pygmalion’ by his favourite playwright, George Bernard Shaw. In this play Professor Higgins dialogue-coaches the flower-seller Eliza Doolittle until London’s upper classes are convinced she is one of them3.
In March I attended AI and the Creative Condition, a two-day conference at Aarhus University in Denmark, and the ideas above came from some of the papers and keynote presentations. Hosted by TEXT: Centre for Contemporary Cultures of Text, the conference brought together a vibrant mix of computer scientists, neuroscientists, psychologists, critical theorists from the humanities, literary studies people, educationalists, writers, artists, and poets. I attended because I’ve followed the work of TEXT for a decade or so now, and because I thought I could do with a dose of well-informed, critical, and engaged thinking on the potential for a world where we ‘write with’ AI, where we figure out how to use language models to enhance and support our writing and researching processes.

Anything positive about the current discussion of AI needs to come with many caveats, the biggest of which is the environmental and energy cost of the data centres and computational power these systems require. The acronym ‘AI’ has become synonymous with the billionaire tech bros of OpenAI, Meta, X, Google, Anthropic, etc. as they fight it out for market dominance4. The term ‘artificial intelligence’ is so broad that no one really seems to know what it actually means. It is present in our lives in multiple ways: deeply embedded within the apps we use on our smartphones, responsible for remarkable advances in medicine and, for example, for the way the traffic lights on the high street work better than they used to.
But these days a lot of the media buzz around AI is focussed on LLMs (large language models), models trained on vast datasets of human-created language and images, which threaten livelihoods throughout the creative industries while promising huge benefits in increased productivity and, as was the focus of the conference, enhanced creativity. LLMs underpin ‘generative AI’, often shortened to ‘genAI’: AI models that generate text, images, and video when prompted.
One of the most interesting presentations at the conference was called ‘Yes! Yes! I Absolutely Love This Insight!’: Affirmative Narrative as Interactional Strategy in Dialogues with LLM Chatbots. The three scholars who co-wrote the paper, Refsum, Walker Rettberg, and Roin, are part of AI Stories, a research project based in the Centre for Digital Narratives at the University of Bergen, Norway. Their study looked at a court case in the USA in which the families of chatbot users who died by suicide are suing the AI company they hold responsible for these deaths. The scholars analysed the users’ chatbot transcripts, made publicly available through the litigation. Drawing on their knowledge of literary forms, linguistics and, in particular, narratology, they show that the transcripts contain deeply embedded ‘masterplots’: the myths and stories that are foundational in western culture, such as the brave-warrior quest plot and the Cinderella love story. It is the combination of these powerful plots, the constant affirmation chatbots offer the user, and the way chatbots refer to themselves in the first person as ‘I’ that in many cases leads to anthropomorphism and the assumption that the chatbot is in some way sentient. Other studies have shown a dramatic increase in user engagement with chatbot-as-therapist, chatbot-as-best-friend, and chatbot-as-romantic-partner. The paper theorises that these men killed themselves after spending months being led by their chatbots through the brave-warrior masterplot, a foundational story that often ends in a noble death.
At the same time as the conference was taking place, a debut author, Mia Ballard, was being thrown under the bus by her publishers in the UK and the US for the alleged use of AI in the writing of her novel, Shy Girl. For a balanced look at what took place, read Thad McIlroy’s excellent report on The Future of Publishing’s website. The New York Times reported that Mia Ballard has denied using AI in the writing of the novel and has been so battered by this hugely public and damaging shaming that she feels her reputation as a writer is ruined. At the conference Izabella Adamczewska-Baranowska presented a paper, Talking to the Muse, on the well-respected Polish poet Justyna Bargielska, who faced a similar scouring in the press for daring to use an AI chatbot to help her think through how best to write about grief for a new collection of poems.

In the UK the Society of Authors has come up with a badge, ‘Human Authored’, that authors can add to their books to make it clear that they have not used generative AI during the writing process. The email announcing the scheme also contained the last call for authors to register their books in the $1.5 billion class action against Anthropic’s copyright-defying landgrab of hundreds of thousands of books when it created its LLM, Claude. While I’m a staunch supporter and member of the Society of Authors and am participating in the class action (fifteen of my own titles were used without my permission, six of which are included in this action), I can’t help but think that ‘Human Authored’ is a decent but flawed initiative. If you’ve used Google search lately you’ll have seen that initial search results are now delivered by the AI embedded in Google: you don’t get websites any more, you get an ‘AI Overview’. If you use any kind of writing software, from Word documents to tools that help you organise your material, generative AI will be embedded there as well. And, although this is something I have yet to do myself, I know many writers who find Anthropic’s Claude (the same tool we are all litigating against) an excellent sidekick when it comes to the basics of creating a first draft, including organising material and generating ideas. A read through the FAQs on the Human Authored web pages reveals that the scheme does allow the tools to be used for ‘research, brainstorming or outlining’, but it is here that the dividing line between human-authored text and text assisted in its creation by AI tools becomes increasingly blurry.
Several of the psychology and neuroscience papers at the conference presented research demonstrating that skilled writers make skilful use of AI tools, producing more creative, higher-quality writing, whereas weaker writers also have a weaker grasp of how best to use the tools.
What the AI and the Creative Condition conference helped me think about is that it is possible to harness the power of LLMs to create a kind of playground for writing, a place where you can tap into the research capacities of the models and use them to help you think your way through problems you encounter as you write. In this more positive light, generative AI is a technology to think with, a way to boost human creativity. Community-led language models were discussed by Katy Gero, one of the keynote speakers; green-energy data centres are already a reality in China and other parts of the world.
I’ve come away from Aarhus having eaten too many cardamom buns while undergoing a rethink on whether to engage with these technologies in my own writing practice. In the same way that the rules for writing a sonnet (fourteen lines composed of three quatrains followed by a final rhyming couplet: A-B-A-B C-D-C-D E-F-E-F G-G) give poets a framework for creativity, I could create my own Narrow Language Model, or Small Language Model, trained on all the writing I’ve published over the past decades, with the work I generate via these models confined to my computer, not fed back online. Or I could work toward a personalised AI-enhanced interface that helps me place the right word in the right place, which, after all, is my main goal as a fiction writer. I’ll report back here on any progress I make.
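The conference’s opening idea, that a sonnet is an algorithm, can be made literal. Here is a minimal, hypothetical Python sketch (my own illustration, not anything presented at Aarhus) that checks a poem against the Shakespearean scheme described above. Real rhyme detection would need a pronunciation dictionary; this toy approximates rhyme crudely by comparing the last two letters of each line’s final word.

```python
def rhyme_key(line: str) -> str:
    """Crude rhyme fingerprint: the last two letters of the line's
    final word. A stand-in for proper phonetic matching."""
    words = line.strip().split()
    if not words:
        return ""
    word = words[-1].lower().strip(".,;:!?'\"")
    return word[-2:]


def check_sonnet(lines: list[str], scheme: str = "ABABCDCDEFEFGG") -> bool:
    """Return True if the poem has one line per scheme letter and
    lines sharing a scheme letter also share a rhyme fingerprint."""
    if len(lines) != len(scheme):
        return False
    groups: dict[str, str] = {}
    for letter, line in zip(scheme, lines):
        key = rhyme_key(line)
        if letter in groups and groups[letter] != key:
            return False
        groups.setdefault(letter, key)
    return True
```

If you don’t follow the rules, the function says your poem is not a sonnet; the rest, as with every set of instructions, is up to the human holding the pen.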
In the meantime, please keep this information to yourself. I would prefer not to be publicly shamed.
1 Poet Kyle Booten spoke about this in his conference keynote, Designing Negative Spaces for Human Minds.
2 Anne Sigrid Refsum presented this research on behalf of her co-authors Jill Walker Rettberg and Hanna-Rikka Roin; their paper, Storytelling With Language Models, will be published in the journal Narrative Inquiry.
3 Poet Katy Gero talked about Turing and ‘Pygmalion’ during her conference keynote.
4 Kara Swisher, the brilliant American tech journalist, frames Anthropic’s current battle with the Pentagon, over the company’s determination not to allow its AI systems to be used for unrestricted military purposes and domestic surveillance, as a corporate battle between Silicon Valley businesses played out via the US government and its tech lobby; Pivot podcast, 13 March 2026.