distillations/constellations

February 1, 2026

Distillations/Constellations #14: AI myths and AI slop

Lessons from a session held at a library in Berlin debunking myths around AI.

Purple background with orange and white blobs across it, with writing in black capital letters. The writing says: KI: Macht, Mythen, Missverständnisse, and underneath in smaller writing Eine Gesprächsreihe über Künstliche Intelligenz. In the bottom left hand corner are the logos of Stadt Bibliothek Pankow and SUPERRR.
In English: AI: Power, Myths and Misunderstandings. A discussion series about artificial intelligence.

On Wednesday, I joined Julia Kloiber and Frederike Kaltheuner at the Bibliothek am Wasserturm (a public library in Pankow) here in Berlin, to talk about AI myths, together with a group of around 50 members of the public. It was the third and final week of the AI myth sessions, organised by SUPERRR, and our focus last week was what AI can’t and won’t ever know about us.

It was such a lovely evening, full of surprises and really engaged discussion! A few things became clear throughout the session: first, that people had a real desire to talk about AI, the impact it’s having and the myths or narratives that they’re hearing in a non-judgemental space. Total credit to Julia for facilitating and creating a format that made people feel comfortable enough to say what they were thinking openly.

An example: after hearing Frederike and me give very short inputs (3 minutes!), someone asked about the recently published ‘constitution’ for Claude, Anthropic's generative AI product. They said that when they first saw that a ‘constitution’ had been published, they took it as a good thing – it meant that the creators were thinking holistically about the social impacts of Claude. But after hearing us, they were wondering: maybe it’s not?!

(My answer: publishing a constitution for a tech product is a fantastic example of myth creation, of contributing to a narrative where generative AI is much more than just a tech product, that it has rights and can have as much influence as actual countries. And also: trying to create that narrative most certainly does not make it true.)

Second: that, somewhat counter to the narrative I often read in mainstream spaces, the people who joined us were both quite open to hearing our input and pretty critical of the role of AI. Of course, there was some significant self-selection happening – if you’re willing to leave your house in sub-zero temperatures on a dark January evening to hear about AI myths, you must be quite keen to learn more. But even so, the level of critical thinking that everyone was already engaged in about AI was fantastic to see – I feel like ‘the public’ writ large gets a bad rap for how it engages with AI, or technology more generally, and it’s unwarranted.

I saw that underestimation in practice when I told someone the following day about the session – before I could finish my thought about how surprisingly critical and uncertain people felt about AI, he interrupted me with a somewhat dismissive comment about ‘the public’ not being able to understand complex topics like these. I really disagree with that view: most people (especially those who have experienced some kind of power imbalance, marginalisation or structural inequality) are extremely able to understand the impacts of power concentration and to critically assess what’s happening. If they’re making decisions about technology that don’t align with their values, that says more about the failure of communication from those of us working in this space, and about the lack of viable alternatives to Big Tech.

Another example: after hearing us speak for an hour or so, an older man put his hand up to say something along the lines of “But everything you’re describing doesn’t sound like tech problems, aren’t they society problems? So they can’t possibly be solved using tech anyway?!” (Honestly, I felt like cheering. Yes!)

And third: that creating this kind of space actually did much more than ‘just’ make space for discussion about AI. There were people who had come to all three sessions that we’d held, who recognised each other and chatted during the break – and at the end, wondered out loud “where will I see you next week?”

That was a consequence of the sessions that, I have to admit, I didn’t see coming. In these times of rising authoritarianism and weakening democracy, it feels like creating spaces where people can build relationships with one another is an invaluable contribution, almost regardless of the actual topic we were talking about. That we got the opportunity to engage the public in critical discussion of AI, listen to what they’re experiencing and create a space where they strengthened their ties with one another, makes me really happy.

A few people mentioned that AI chatbots are becoming the place that people go to for therapy, support, or to talk about their days. Therapy aside, listening to someone’s worries or just hearing them talk about their day sounds a lot like what family or friends might also do, and it all made me think about the connection between the rise in use of generative AI and rising social isolation and loneliness. A research project from 2022 reported that more than one third of respondents were lonely at least sometimes and 13% were lonely most of the time – with loneliness having huge social and health impacts on people. Interestingly, the World Health Organisation press release I linked to about the health impacts of loneliness includes a quote starting with “Even in a digitally connected world…” – honestly, I wouldn’t at all rule out that the spread of digital technologies in certain contexts actually exacerbates our feelings of loneliness, rather than mitigating them.

On a personal note, too – it was a fun challenge to do this entirely in German. For the sake of accessibility to the public, we decided to avoid using ‘Denglish’ (Deutsch-English) or simply importing English terms into German.

So, my new German term of the week: KI-Grütze, known in English as AI slop – AI-generated content produced in high volume (somewhat similar to spam, also called ‘digital clutter’) that often looks plausible (eg. emails with all the right words) but doesn’t actually contribute anything helpful. It’s the reason why a report by the MIT Media Lab found that “95% of organisations [who invested into generative AI] see no measurable return on their investment.”

What became clear throughout the workshop series (of which there will, hopefully, be more!) is the amount of effort being put into creating grand myths around AI. None of that is by accident: it’s the people who stand to benefit from money being spent on AI realising that they don’t have a viable business model, and trying their best to convince others (and themselves) that AI really is a game changer. And while there are extremely valid uses of purpose-built AI (quick translation; increasing accessibility; specific programming tasks, and more) – the grand myths and legends we’re being sold are just that. Stories.

Links from around the web

  • I really appreciated this analysis by Dizzy Zaba on the structural crisis facing community organising – specifically, that ‘digital’ and field/in-person organising are treated as different skillsets, housed in different teams within organisations, instead of being integrated in a way that would compound, rather than fragment, their impact.

  • Similarly, this long read from Aarathi Krishnan gave me a lot of food for thought. It covers a lot of ground, including the importance of building despite (or because of) a lack of clarity. She writes that we need to stop speaking the old language, listen for the new, and develop a ‘different way of seeing’.

  • A group of people in Quilicura – a community in the north of Santiago, Chile, home to “one of the highest concentrations of data centres dedicated to artificial intelligence” – set up a chat interface, Quili.AI, where instead of asking AI your questions, you could ask members of the community. Their goal was to help people “reevaluate the unnecessary use of AI” and save their water resources during a time of massive drought in the country. The chat interface was open for just one day – yesterday – but perhaps this approach of responsible prompting will take off. (Thanks for the link, Julia!)

  • I mentioned this in my input on Wednesday – the Slop Evader, a browser extension from artist Tega Brain which only returns content created before ChatGPT’s first public release in November 2022. That way you can be sure the recipe you find isn’t just a randomly plausible-sounding collection of ingredients, and was (most probably) written by a human.
