#16 Too Human
It’s 2025, which definitely sounds like a year in the future where it rains all the time and you’ve begun to suspect that your boyfriend is a robot. Today in London it is cold and sunny, which is unusual enough to make it feel like a holiday when, really, I’m back at work. But I seem to be unable to do much more than faff about with a new notebook and some cool post-it notes I got in my stocking. I’m meant to be finishing a final rewrite on a children’s book but …
I’ve been listening to podcasts and reading articles about AI over the holiday. One of the most interesting things I came across was the 2 January episode of Kara Swisher’s podcast, on AI Ethics and Safety. In a discussion about why large language models (LLMs) reflect back society’s own biases, Dr Rumman Chowdhury, who used to be the AI and Ethics lead at Twitter, talked about how we anthropomorphise AI tools. We expect a near-human response from these tools, and this is part of what elicits bias.
What’s fascinating about generative AI is people interact with these models in a very different way than we interact with search engines […] People tell these models a lot about themselves […] What we found is people would say things like, I’m a single mom and I’m on a low income, I can’t afford medication, but my kid has COVID, how much Vitamin C can I give to cure him? And that’s very different from Googling, does Vitamin C cure COVID. Because this AI model kicks in, it’s taking all that context and trying to give you an answer that will be helpful to you, in doing so may actually spread misinformation.
The whole episode is recommended listening. The speakers did not discuss why we are so prone to anthropomorphise - god that is hard to spell - AI, but no doubt it is at least in part because of the long history of representations of robots and/or artificial intelligence in cinema, TV, and novels. At least that’s what my robot boyfriend thinks.
Last year I saw three plays where the protagonist had created an AI version of a dead loved one in order to prolong their interaction with them (spoiler: this never goes well). My new year’s resolution is to not see any more of these plays.
At Bath Spa Uni, we’re hosting a second series of webinars on AI and writing. Our first took place in November with an amazingly international array of speakers: writers James Bradley in Sydney, Yudhanjaya Wijeratne in Sri Lanka, and Mujie Li in the UK, chaired by Patrick Flanery in Adelaide. You can watch it here on The Writing Platform - it makes for a good listen as well.
Writing With Technologies, Webinar Series

Do hang on for the discussion. And check out other articles and posts on The Writing Platform where we are building up our resources and articles about writing and machine learning. Our next webinar, this time on AI and Creative Expression, takes place on Wednesday 22 Jan:
https://www.ticketsource.co.uk/thestudiobathspa/writing-with-technologies-webinar-series-ai-and-creative-expression/e-zvlggz

One of the many interesting comments in the first webinar comes from Wijeratne, who has spent many years experimenting with large language models and generative AI tools for writing. He notes that as LLMs become more sophisticated, they no longer produce the kind of outputs that are weird and interesting but instead produce serviceable writing. So they are no longer useful to him - it was the non-human weirdness that he found creatively engaging.
Lastly, I read my lovely Bath Spa Uni colleague Sam Harvey’s novel, Orbital, before Christmas. So beautiful. And Sam won the Booker Prize! Hurrah!!
So, that’s it from me. Happy New Year, and please forward this to anyone you think might be interested.
Kate Pullinger