Artefacts

August 12, 2025

Artefact 251

Cogitating on Cognitive Debt

And so, part 2.

I’m continuing from the last newsletter with a bridge between the Latourian turn of things (“complex assemblies of contradictory issues”) and the Design Futures reflections of part 3 to come.

Yes, it’s the inevitable AI discussion. There’s nothing like a Large Language Model (LLM) to scream complex assembly of contradictory issues at you.

Well actually, no, that’s not quite true.

An LLM is instead likely to tell you how smart you are, what a valid and inspired series of points you are making, and that in all likelihood nobody has ever thought such wonderful thinks before you clever, clever person…

ANYWAY.

AI was an underlying theme that kept turning up in conversation in Norway, Switzerland, the UK and then Barcelona too.

For what it’s worth, I don’t see that it is incumbent upon anyone (let alone me) to form grand opinions or definite positions on a topic as broadly unintelligible as AI has become.

At this point, AI just seems to mean ‘there’s a computer near it’, which is not helpful in the slightest to anyone on any side of the conversation.

"You Keep Using that word. I do not think it means what you think it means" GIF - from The Princess Bride

But, then again, it’s my own fault.

I started to explore some of the implications of AI implementations with the Cognitive Debt idea in Artefact 247 in late April, then in a blog post a week or so later.

Lots of people mentioned how much they appreciated having language to describe a thing they were seeing but couldn’t articulate.

Then in June an early-stage MIT paper popped up talking about Cognitive Debt, and the implications for students in an experiment where different groups were allowed to use LLM assistants at different stages of their essay writing task.

It’s one of those multiple discovery situations. I’ve not talked to them about this, and while I think their use of the term is related to mine, I’m really thinking of it as a broader cost to an organisation at scale, rather than the effects on an individual brain.

Anyway, somewhat hoisted by my own petard, I have started thinking about AI again, and mentioning the Cognitive Debt idea as part of the broader ‘how we think we think’ strand of the core Smithery practice.

I decided to detail that thought process here, to make some connections between concepts, before getting on to some of the Design Futures stuff - for reasons that will become apparent.


Modelling our thinking

I tend to use the idea that Information is Light, not Liquid in talks as a basic concept underpinning the tools and frameworks I’m going to cover.

One specific tool I often use as a demonstration is one called ‘Moments of Enlightenment’. It’s a little bit OODA-loopish, with some cognitive science thrown in for good measure. You can use it to think about how a person, a team, or an organisation deals with information as it is sensed, stored, processed, and acted upon.

Moments of Enlightenment model

You start by imagining all the information that’s ‘out there’. Endless pixels you can rearrange to form different images of the world. Meanwhile, your internal model of the world is ‘in here’.

New information has to pass through your ‘sensing filters’ to get in. Given how you see the world, what is now possible? Your options are dictated not just by how you see the world, but also by how you recognise plausible patterns in events.

The options you have are limited by your ‘conceptual filters’; often, this can be what you have seen work before.

Finally, which action do you choose from these options? Probably the one suggested by your ‘decision filters’, a set of contextualised heuristics given the situation in which you find yourself.

Originally, this was a model to take the idea of information as ‘light, not liquid’, and help teams think about their image of the world, and how they might work on their filters to make different information visible, change their image of the world, and increase the options they saw as viable.
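
If it helps to see the shape of the thing, here’s a minimal sketch in code - my own illustration rather than an actual Smithery tool, with the signals and filter predicates entirely made up. Information ‘out there’ passes through three successive filters before anything becomes an action.

```python
# A toy rendering of the Moments of Enlightenment model (my illustration,
# not Smithery tooling). Each filter is just a function the team supplies.
def moment_of_enlightenment(out_there, sensing, conceptual, decision):
    image_of_world = [s for s in out_there if sensing(s)]   # what gets 'in here'
    options = [s for s in image_of_world if conceptual(s)]  # what seems possible
    return max(options, key=decision, default=None)         # what gets acted on

# Hypothetical usage: a team that only senses customer signals, only
# considers moves it has seen work before, and decides by perceived impact.
signals = [
    {"about": "customers", "seen_before": True,  "impact": 3},
    {"about": "customers", "seen_before": False, "impact": 9},
    {"about": "energy",    "seen_before": True,  "impact": 7},
]
action = moment_of_enlightenment(
    signals,
    sensing=lambda s: s["about"] == "customers",
    conceptual=lambda s: s["seen_before"],
    decision=lambda s: s["impact"],
)
print(action)  # only the familiar, customer-shaped option makes it through
```

Loosen the sensing filter or the conceptual one, and a different (possibly far better) option survives - which is exactly what the original exercise was designed to surface.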

More recently, I realised this is a useful starting point for thinking about how to examine what we know about any given AI system we are invited to use.



Show the thing

I started to map out a version of the model based on thinking about LLMs. Given these are generally the technologies most people have the most exposure to, it seemed reasonable to use them as the central use case.

My aim was to give people a version of the LLM they could see, and then adjust or improve their strategies accordingly (rather than just feel it is a chatbot box on top of a Wikipedia-like stack of facts).

Smithery's Moments of Enlightenment Model, as turned towards unpicking an LLM

Let’s start on the left. What data is used?

You need a lot of data to train an LLM: GPT-4 reportedly needed around 5 trillion words, and GPT-5 is bigger still.

You also need to think about how closely the corpus of training data used for an LLM resembles the real world.

Even if you took everything ever written about the world, it is quintessentially not the world itself. LLMs are stuck in Plato’s Cave in this way; they statistically build a complete image of the world from mere shadows on the wall.

Then there’s the question of how a company got hold of that data. As you may know, this is a highly contested space; have the massive amounts of data needed to train LLMs been sourced legally and respectfully by the creators of LLMs? The sheer number of cases in courts in the US currently suggests no, not always (ahem).

Once you have the data, what does it take to train an LLM on it? Obviously there’s energy. This MIT Technology Review piece is a great deep dive into the topic; for instance, around 10-20% of all AI energy costs are incurred up front in the training phase.
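
To get a feel for where a split like that comes from, here’s a rough back-of-envelope sketch. All of the numbers are hypothetical and the arithmetic is mine, not the MIT Technology Review’s; it leans on the common approximations that training costs roughly 6 × parameters × training tokens in FLOPs, that generating a token costs roughly 2 × parameters, and that FLOPs are only a loose proxy for energy.

```python
# Back-of-envelope: one-off training compute vs a year of inference compute.
# Every number below is a made-up assumption, purely for illustration.
N = 100e9                 # model parameters (hypothetical)
D = 5e12                  # training tokens (hypothetical)
train_flops = 6 * N * D   # ~6*N*D FLOPs to train

tokens_per_query = 1_000  # average response length (hypothetical)
queries_per_day = 200e6   # daily usage (hypothetical)
inference_flops = 2 * N * tokens_per_query * queries_per_day * 365

print(f"training:  {train_flops:.1e} FLOPs, once")
print(f"inference: {inference_flops:.1e} FLOPs, per year of use")
print(f"training share: {train_flops / (train_flops + inference_flops):.0%}")
```

With these made-up numbers, the one-off training cost works out at roughly a sixth of the total - the same 10-20% ballpark - once a year of heavy everyday use is counted against it.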

Then there’s water use as well. Mistral AI’s recent disclosure on the environmental costs of its own setup starts to give some insight into the real tradeoffs here. Perhaps being based in Paris rather than Silicon Valley helps with the transparency, too.

The political backdrop plays a massive part in the next phase too. How is the model trained? And what factors around the training intention start to have an effect?

Meta announced recently that "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue".

As Axios’ reporting points out, Llama already gives the most right-wing authoritarian answers to prompts. Yet this doesn’t seem to be enough for Mark Zuckerberg, who seemingly aspires to jellyfish levels of spine possession in his fawning over the current US Government’s agenda.

Kylo Ren Screaming MORE

We should always bear in mind that the LLM we choose is never left to develop a purely statistical view of the world. The up-weighting and down-weighting of particular texts, and the choice to include or exclude particular data sets, all create bias at the creation stage.

This is true of the interface design too. The much-reported ‘AI sycophancy’ - where the chat interface commends your ideas and intelligence - is essentially an aggressively disingenuous decision to keep people using a service. We’ve seen LLM interfaces become the weaponised financialisation of automated flattery.

Then we get to a selection of outputs. By this point, the scale of operations is far too great for anyone at the service provider end to be checking the outputs. It is left to independent researchers to check what’s going on.

This study caught my eye today - “Artificial intelligence tools used by more than half of England’s councils are downplaying women’s physical and mental health issues and risk creating gender bias in care decisions.”

Nobody designed these specific LLM implementations to do this. They just didn’t design them not to.

As my friend Deb Chachra regularly points out, “Any Sufficiently Advanced Neglect is Indistinguishable from Malice”. We are familiar enough by now with the kinds of outputs LLMs produce that we shouldn’t be letting them loose in ways that could harm people.

Finally then, there’s the loop back to the input data. The more AI-generated content there is on the internet, finding its way into the next generation of training data for LLMs, the more likely we are to hit model collapse.
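
If you want a feel for the mechanics, here’s a toy sketch - mine, not anything from the research - of one way collapse happens: generative models over-produce ‘typical’ content, so a corpus repeatedly rebuilt from model outputs gradually loses its tails. The distribution and cut-offs are arbitrary.

```python
# Toy model collapse: each generation, a 'model' is fitted to the current
# corpus, its most typical outputs become the next corpus, and the spread
# of the data shrinks. All distributions and thresholds are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
corpus = rng.normal(0.0, 1.0, size=10_000)   # stand-in for human-made data

for generation in range(5):
    mu, sigma = corpus.mean(), corpus.std()
    print(f"generation {generation}: corpus spread = {sigma:.2f}")
    outputs = rng.normal(mu, sigma, size=10_000)        # the model's outputs
    lo, hi = np.quantile(outputs, [0.05, 0.95])         # keep 'typical' content
    corpus = outputs[(outputs > lo) & (outputs < hi)]   # becomes next corpus
```

Run it and the spread shrinks every generation; the unusual material goes first, which is rather the point.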

Back in June, Mark Earls, Prof Alex Bentley and I joined some of the fellows at Newspeak House for a discussion on this, and on the similarities between potential model collapse and the data patterns in other work Alex and Mark had done.

It also makes me recall a post that Igor Schwarzmann put up nearly two years ago on synthetic media. Igor said this:

“Synthetic Media's invention will be, in retrospect, like the invention of plastics. There are plenty of valuable things, but ultimately, it will be hard to determine if we have created more harm than good with them.”

The word that sprang to mind for me on thinking about this was Microplastics.

Synthetic media is infesting every organisation, every public discourse. Tiny granular pieces of synthetic content in the emails, the presentations, the documentation, all growing inside every organisation’s information environment. And I’m not sure how we’re ever going to get it out again.

Model collapse may well be the least of our worries.


Cognitive Debt at scale

John V Willshire talking at IxDA Oslo, June 2025

So what might happen when a company implements LLMs as fully as they can through their organisation?

A custom tool to help the insight generation and planning process, perhaps. Or an agent that staff use to write each other’s 360 reviews (no, I can’t quite believe this either). Or an automated customer service system that handles all those annoying customer questions for you.

A company will find itself full of generated ‘answers’ to questions, challenges and projects, which people just get on and execute. Direction without intention.

When something goes wrong, the boss will ask ‘Why did we decide to do that?’ And nobody will be able to answer.

Next month, I will be working through the implications of this train of thought with some very clever friends, in order to think about how we might help organisations more carefully consider AI integrations from this perspective.

(If that might be of interest, get in touch)


Confronting a highly-dissatisfying present

I have found it really useful to visualise an LLM in this way; for myself, in discussions with others, and for client projects. It definitely changes the way I think about using them, or indeed refraining from using them at all.

Smithery's Moments of Enlightenment Model, as turned towards unpicking an LLM

Christina Wodtke has done some fantastic work in her essay I Love Generative AI and Hate the Companies Building It, picking through all of the main LLM companies on various factors, and selecting the ‘least evil’.

Like Christina, I tend to use Claude, because they seem least problematic as an organisation. But unlike Christina, I can’t bring myself to love LLMs as a technology at all.

LLMs are a thing; a complex assembly of contradictory issues. And I can’t make the good they do outweigh the bad.

It is absolutely OK to hold a moral or ethical position of dislike or disdain for LLMs.

Not just because of the way they were created and boosted initially, or because of how they work today, but also because the organisations currently creating these tools show no real appetite to meaningfully fix these issues.

Anyway, given the lukewarm (being polite) response to the launch of GPT-5, it doesn’t seem like we’ll be stuck in this version of a fabled promised future anyway.

Only the diehards remain unabashed optimists. As Mat Honan puts it here, “I’d argue that even at this point, most of the people who are regularly amazed by the feats of new LLM chatbot releases are the same people who stand to profit from the promotion of LLM chatbots.”

But here’s the thing.

This is very much my favourite kind of moment in innovation cycles; when the paradigm creaks and sputters, and people start looking around for new possibilities.

And it chimes with a question that Igor and Johannes asked me on their Follow The Rabbit podcast - “what sort of AI system would I like to see instead?”

See you next time, as I dive into that.

John V Willshire
12th August 2025
