Internet main characters, and a new home on Buttondown
Hello hello,
It's a new year and a new home for my astonishingly infrequent newsletter. I'm emailing mostly because I've moved to Buttondown and thought I should let you know before I start filling your inbox with more thrilling takes on data segmentation and technology governance. The main bit of today's update is some musing on the intersection of Internet Main Characters, the Post Office Horizon Scandal, and community power. But first, some self-justification...
Why did I leave Substack?
Well, like many other people I read this edition of Casey Newton's Platformer newsletter and thought it was time for a change.
I knew Substack's moderation processes were imperfect when I set up there, but it seems particularly futile to write about building a better Internet on a platform that won't remove pro-Nazi content. For reasons I can't explain, I'm still hanging around on Twitter (and yep, still calling it that), so I'm definitely not consistent, but there we go.
Internet Main Characters
Anyway, speaking of Twitter, I'm going to spend the main bit of this newsletter thinking aloud about Internet Main Characters - and in particular how they relate to the Post Office Horizon Scandal.
There are so many aspects of this whole sorry tale that bear closer examination. These include the fact that English and Welsh law assumes computers to be "reliable" unless proven otherwise; that many sub-postmasters experienced "indirect, oppressive" racism when they raised concerns; the lack of oversight from senior officials and politicians; and the reality that 1,800 victims are yet to receive compensation. The scale of failure is staggering and should lead to significant legal and institutional reform. If I had a magic wand, I'd ensure current and future policymakers take this as a cue to go back to basics and examine the fundamental building blocks of good digital service delivery. Realistically, I know this will be difficult in an election year, when all parties will be tempted to make unrealistic promises about the transformative power of AI, but it's essential that this reset happens and that everyone in the tech community keeps rallying to ensure that better accountability is in place.
The particular thing I want to spend a moment unpicking here, however, is slightly different. It's the fact that, in spite of years of excellent coverage by Computer Weekly, Private Eye, Radio 4 and others, it took a prime-time Christmas drama with a cast of national treasures to create the environment needed for redress.
What makes a good story?
Much has been written over the last couple of weeks about the power of art and drama to bring us together around human stories, and I'm sure that's a factor - but there's another dimension, which is that gradually unfolding stories about technological and systemic failure are complex, and increasingly difficult to land. This is in part because clickbaity memes make the post-Facebook-and-Google media ecosystem go round, and in part because people prefer stories about Main Characters to stories about technological systems and processes.
And this is not new.
When the Cambridge Analytica story broke in March 2018, the organisation I ran had just published a piece of research about public attitudes to tech. Our report showed that most people had low levels of faith in Facebook, and I was suddenly pulled into lots of media interviews and conversations about trust and technology. As the story continued to unfold there were many twists and turns and it was almost impossible to keep up. I remember, sometime that summer, being in Barcelona and seeing Wylie's face on a massive outdoor screen and feeling little curiosity about what the ticker below said. If Wylie - at that time, the Main Character in the Cambridge Analytica story - was talking, he was almost certainly saying that something bad had happened.
While I know many journalists and privacy experts kept on top of the whole thing, it eventually became exhausting - even for me, someone with a professional interest and a borderline addiction to current affairs. And although I've not done a representative survey of UK citizens to ascertain this, my strong guess is that many people's memory of the first wave of the Cambridge Analytica drama was that the guy with the pink/green/blue hair said some things and - yada yada yada - it turns out that Facebook does some pretty dodgy things with data.
In the intervening years, the Internet's cast of Main Characters has shifted from being whistleblowers to a small (male) cast of CEOs and investors. Along with it, the mainstream media story of the Internet has moved to a mythical register, populated by heroes and villains in a space-age drama, when it could instead be a soap opera, filled with everyday stories of smartphone folk.
Last year this Main Character phenomenon extended even further beyond the human to include AI itself - or, more accurately, the existential risk AI might create. I've written about this before so won't replay it now, but the framing of existential risk lent itself extremely well to news tickers and muscular headline verbs, conferring an even more godlike or, as Marc Andreessen would have it, superhuman status on those responsible for it.
And while Altman and Musk make headlines, and Bezos appears in Vogue, the real-life consequences of technologies are playing out in the dust of day-to-day experience.
As flies to wanton boys are we to the gods...
The Post Office Horizon Scandal is a miscarriage of justice that has devastated the lives of thousands. If you've not already read it, Nick Wallis's book The Great Post Office Scandal brings together many of those stories, and it starts with the words of subpostmaster Seema Misra: "You may think this could never happen to you - or to someone you love. This book shows you would be wrong. It happened to me."
The roll-out of more AI and automation in public life makes it possible that similar things will happen to more of us, and that more of us will have to fight against the system to gain justice and redress. Cathy O'Neil's seminal 2016 book Weapons of Math Destruction showed what happens to ordinary people when the numbers don't add up, and not that much has changed since (certainly in the UK). Most of us don't have the ability to stand up to massive systems on our own, and data-driven harms can be particularly difficult to unpick.
What can we do together that we can't do alone?
Which brings me to community power.
The Justice for Subpostmasters Alliance website charts the journey of collective action that brought individuals' concerns to more public notice. It began with a meeting of 30 victims in 2009 and
over the intervening years, the determination of the group was solid and at meetings of the victims of Post Office's brutality, people who had run businesses often in the heart of a community, met to offer support to others and confirm their resolve to expose the real truth no matter how long it took. It was a long slow laborious process for the JFSA to eventually get Post Office into court in 2017 in a group litigation action by over 550 mainly ex-Subpostmasters.
There are many steps on the path to justice. Heroes, villains, and Main Characters may play a part, but the solid, often unsung work of community power and collective action is at the heart of much real change. It can be very tempting to think that technological systems can only be changed or resisted by technological means - in fact, the UK AI Safety Institute is currently advertising for a Loss of Control Workstream Lead to run technical evaluations - but it is people who make the difference. From the JFSA's convening to the SAG-AFTRA and WGA strikes, the potential of people coming together to reshape outcomes should not be underestimated.
I'm going to end this by pointing to some incredibly nerdy blog posts about community governance and AI that I wrote before Christmas, tying together some of the work we've done at Careful Trouble over the last year. More money than ever is being poured into AI development, but the communities that resist and shape the roll-out of those technologies are often expected to be self-supporting, running on hot air and good luck.
In this first post - Recognising AI as a pillar of community governance - I point to the fact that while it's great to see civil society investment growing in North America, digital technologies already over-index on North American values, and healthy democratic resistance needs global funding. In the second, I map some of the ways community power shapes AI systems and call for a Civil Society Observatory.
Unlike legislative systems or regulatory procedures, community power is responsive, diverse and distributed. As technologies created by a few large companies gain deeper influence over our daily lives, we need more space, opportunities and funding for civic convening if we're going to avoid the inevitability of national IT scandals becoming an everyday occurrence.
Here's to 2024 being a year of community power. And the next edition of this infrequent newsletter will definitely, finally, be about what politics can learn from fandoms.