"I've got an idea!" On pluralism and resisting deskilling
A ramble around plural knowledges, data and AI, inspired by the wisdom of friends and colleagues in Nairobi and watching The Pitt. Plus! A random picture of an elephant
After a hiatus of 364 days, Just enough Internet is back back back with an update from the frontline of technology pragmatism. This one is a ramble through the importance of plural knowledges and technology infrastructures, the problems of “cognitive offloading” (blech), and watching The Pitt. But really it’s all about why being able to say “I have an idea!” is so important. There’s also a picture of an elephant at the end.
What I’ve been up to
If you opened this hoping for a Society for Hopeful Technologists update, I’d encourage you to scoot over to the SoHoT newsletter. We’re in the process of forming as a co-operative and are just a few weeks away from being able to launch as a membership organisation 🙌. If you’re interested in helping out as we move into the next phase, there’s more information on how to get involved on the website.
In day job news, Careful Industries has been conducting a foresight review into the safe adoption of AI in engineered systems for Lloyd's Register Foundation. I’m writing this newsletter as I travel back from a workshop we co-hosted in Nairobi with the Global Center on AI Governance, where we previewed the findings of some research we’ve done with the Data Labelers' Association on good work across the AI supply chain.

The project is huge but the tl;dr is that we're describing "AI safety" as technology that is safely created, safely deployed, and safely used and maintained. We're setting a high bar, and it kicks in way before concerns about existential risk become material. Is your technology made in ways that exploit human labour and disregard planetary boundaries? If so, there's a strong argument to be made that it's not safe and should be subject to appropriate mitigations.
Pluralism is great!
One of the things coming through at this stage of the project is how difficult it is to maintain a vibrant and plural AI ecosystem, or set of ecosystems, and enable different approaches to development. The dominance of the big labs and the preoccupation with AGI mean that most “responsible” efforts get distracted by tidying up after obvious mistakes. As such, there’s relatively little time or funding left to invest in alternative approaches such as pursuing computing within planetary boundaries or building capabilities around small AI. On the one hand, this looks like a complex problem that brings together issues of funding, infrastructure, and geopolitical power, but it’s really a choice about direction. To grossly simplify, the choice is between opting in to either US or Chinese models of development, or investing in new coalitions and national capabilities.
Whether I’m speaking with sustainable computing researchers in the UK, multilingual specialists in India, or governance experts in Kenya, it’s really clear there’s a need for commissioning and development models that can integrate into different cultural and political contexts without taking over. In the UK, “sovereignty” is currently being mistaken for building unicorns rather than taking a longer-term approach to infrastructure development, but that’s what happens when short-term economics dictate priorities. (As Dr Florence Ojongo of CIPIT at Strathmore University said at the workshop, AI applications often “follow the money” rather than the aggregate benefit of a technology to a place, community, or a nation.)
Anyway, I'm on record all over the place saying I think LLMs are extractive and that we can do better - make better choices and build less extractive technologies - but conversations during this visit to Kenya really demonstrated that extraction is just one dimension of the problem. The other significant problem is the way that monopolistic AI systems flatten and colonise everything, from culture and languages to economies.
Plural knowledges

Several case studies we worked through at the workshop in Nairobi examined the role of AI in environmental stewardship in Kenya. We discussed speculative and present-day scenarios including better tools for safeguarding biodiversity, use of data and predictive analytics to enable precision agriculture in a changing climate, and the use of autonomous vehicles (AVs) for food-chain and agricultural logistics. (Hat tip to brilliant co-convener and facilitator Selam Abdella from the Global Center on AI Governance, who stewarded us through these.) Common to everything we talked about was the importance of well-governed, participatory knowledge systems and high-quality data that represents the many kinds of knowledge and expertise required in complex systems. Fundamentally, the answer to this is not new and is no different to Schumacher’s ideas of appropriate technology: technology that works with people and communities, rather than technology that erases them.
It’s also important to stress that “high-quality data” in a well-stewarded context may mean something very different to the kinds of formats an LLM-builder would expect: it may never be computer-readable, never be accessible in the open, and never be replicable at scale. Some knowledge should only ever be intelligible to the communities that safeguard it across generations. And while in technology circles it has become a truism that technology systems demand all information be connected, this assumption of frictionless consent prioritises technological agency over community agency.
Indigenous knowledges and knowledge systems are a vital part of Kenyan environmental stewardship, and we returned again and again across the two days to the fact that technology-mediated solutions can contribute to that complex ecosystem of knowledge and skills at a wider scale, but they cannot and should not replace it. This isn’t really a technological issue but a governance and commissioning one. As Dr Elizabeth Wamicha of Qhala explained, the ultimate power lies further upstream: who decides what is made and deployed? And while nuts-and-bolts mechanisms such as effective assurance methodologies are important, the most vital part of the supply chain isn’t really about data or access to compute; it’s about agency and decision-making.
Resisting sameification
For people who routinely use LLMs in their daily life, this question of agency and decision-making is also a big issue. To what extent are we working with these tools, and to what extent are we working for them? When we offload to an LLM rather than using or developing our own skills and capabilities, what choices are we making? While I’m advocating for better participatory processes to open up national commissioning, it feels important to also think about how LLMs are affecting our day-to-day sense of agency. After all, ChatGPT is unlikely to orchestrate a political revolution.
And that might seem like a far-flung concern, but the impacts of what is known in AI jargon as either "cognitive offloading" or “deskilling” are becoming more and more apparent in the workplace and in educational contexts. (See recent research by Anthropic, which suggests this is a significant risk for some professions, and the unfolding crisis in higher education.) As with many sociotechnical impacts, this is happening both slowly and all at once: on the one hand, we’re experiencing the linguistic flattening of the everyday, with text-based tools that normalise the stochastic sameness of US Business English; on the other, we’re living through major infrastructural shifts as a handful of corporate knowledge products become the default for everything that happens on a computer.
And none of that might seem like a big deal. Using an LLM might seem like a fine choice for your boring admin task, for throwing up a prototype, or for getting a fast summary of that long PDF, but it's important to remember that when we use these tools we’re making a choice. We’re choosing not to build our own power or invest in our own learning and capabilities; instead, we’re choosing to outsource. Sometimes that’s fine: just like the days we choose to get in the car or take the train rather than walking all the way, we don’t always need to do the hard work ourselves, but we do need to do it sometimes.
And what does this have to do with The Pitt?
And what does any of this have to do with The Pitt, I hear you ask? If you've not watched it, it's an emotionally exhausting medical drama that follows a team of fictional emergency doctors through a single day. I watched it on the plane home, slightly overwrought with altitude and cabin pressure, and there's a bit in the last episode where the Old(er) Guy Who Isn't Noah Wyle is presented with a bafflingly difficult medical problem. After everyone in the room looks surprised and defeated, he takes a beat and says, "I've got an idea." He briefly spitballs with others, they trade experiences to see who’s done this particular daring procedure before, and then together they agree a pathway that uses everything modern medicine offers; but without ingenuity - built through knowledge and experience - that idea would never have emerged. And I know this is a made-up story, but made-up stories have been teaching us about the world for as long as communication has existed, so I’m taking it as a teachable moment.
For me, this is an example of what technology should be great at: offering just enough support so that we can be at our best when we need to be. Enabling and facilitating, but not taking over. How can choices about technologies enable more people to have ideas? When is technology appropriate to invoke and when is it not? In particular, how can we collectively safeguard the use of AI so that we don't just become stochastic replacements - echoing an idea someone had once, somewhere on the Internet - but instead create something richer, more plural, more creative, so that more people can say, "I have an idea!" and be heard by someone who needs to hear it?
And to bring this back to where I started, being able to say “I have an idea!” relies on pluralism. For my last couple of days in Kenya, I was lucky to spend time exploring nature in Laikipia County. Experiencing just a little bit of that extraordinary natural ecosystem was mind-blowing. Our planet depends on this plurality - to appropriate and build on the Zapatista slogan, the whole trip brought home the fact that a safer world is one where many, many worlds don’t just fit, but also thrive and flourish. Homogeneity in our technology ecosystems is ultimately as damaging for people and planet as it is in the natural world.
Anyway, enough from me. A proper workshop write-up will come soon (if you want to follow the project you can do so here), and in the meantime I’ll leave you with an entirely gratuitous picture of an elephant in Ol Pejeta Conservancy.
And perhaps I’ll write the next edition before 20 April 2027.
Rachel

(PS As an aside, I’m definitely not a gifted photographer, but my iPhone 14 was a bit flummoxed by taking wildlife pictures. All the animals either look like they’ve been C+P’d onto another image or they disappear into the background. Clearly I should have pretended to be an analogue hipster and taken an SLR with me, but you live and learn.)