# Cold days, chilling headlines
Ah, January. Trees yeeted headlong into woodchippers, ill birds quivering on hostile architecture. Londoners all look alike this time of year, huddled under formless wool coats, skin parchment-pale and eyes bloodshot:
![Promo shot from Nosferatu (2024) showing a shadowy Count Orlok, over which runs the superimposed text "[Over PA] Southeastern apologises for the late running of this train, which is due to…"](https://assets.buttondown.email/images/b3df951b-4cd3-4a28-9d57-595b0c679a96.png?w=960&fit=max)
In the land of the free, the cyberlibertarian mask has slipped. Musk has finally emerged from his edgelord chrysalis to stretch his new far-right conspiracist wings, while his peers are busy genuflecting to tyrants and the war industry. The ’20s has at last found its themes – it’s all AI maximalism, isolationism, and knee-jerk chauvinism from here.
Soul-corroding stuff. But we persist, if only from spite.
## Academics
I can finally talk about my dissertation. In short, I argue that A/B testing is often benign, but not always. If:
- your experiment amounts to legitimate social science research (think Facebook’s emotional contagion study),
- you’re delivering a welfare-critical benefit such as healthcare,
- (a decent proportion of) your users are vulnerable,
- users can plausibly claim property rights over their software,
- your test violates promises your product has previously made,
- your intention is to use customers solely as a means of achieving corporate goals,
- an experiment is likely to have differential effects on different (morally salient) groups of users,
- people have no choice but to use your product,
- your product plays a key role in helping people interpret their shared environment (e.g. news sites, government information bureaux), or
- your product has been designed by AI with minimal human intervention,
then A/B testing raises significant moral questions and deserves mitigatory action. At the very least, the consent bar should be higher in these cases.
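For the technically inclined, the shape of the argument – any single condition is enough to trigger heightened scrutiny – can be sketched as a toy checklist. The field names below are my own shorthand for the conditions above, not terminology from the dissertation:

```python
from dataclasses import dataclass, fields


@dataclass
class ABTestContext:
    """Each flag mirrors one condition from the list above.

    Names are illustrative shorthand, not terms of art.
    """
    is_social_science_research: bool = False
    welfare_critical: bool = False
    vulnerable_users: bool = False
    user_property_claims: bool = False
    violates_prior_promises: bool = False
    users_solely_as_means: bool = False
    differential_group_effects: bool = False
    users_have_no_alternative: bool = False
    interprets_shared_environment: bool = False
    ai_designed_minimal_oversight: bool = False


def needs_heightened_consent(ctx: ABTestContext) -> bool:
    # The conditions are disjunctive: any one of them raises the consent bar.
    return any(getattr(ctx, f.name) for f in fields(ctx))
```

A sketch only, of course – in practice each judgement (who counts as vulnerable? what was promised?) is where the real philosophical work lives.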
I still hope to write a practitioner-friendly translation at some point, but shout if you’d like the 15,000-word philosophical version. I missed a distinction by a few marks in the end. It was always a stretch, and a merit from – if league tables are worth anything – the world’s leading ethics programme is still something to be proud of. With hireable space in Oxford apparently a severe bottleneck, it’ll be November before I formally graduate. (And it will be formal. Latin and everything.)
I’ve snoozed my PhD plans for a year, needing a study break and time to calculate what financial and mental expenditure I can commit. Even so, I’m still hoping to get a paper or two submitted this year, and I spent much of the break reviewing papers for a journal I’m guest editing. Ghost of Christmases yet to come.
## 2025
2024 was a lonely year. Granted, I moved alone to America for six months. My fault. But 2025 can be different.
First, I’m back taking occasional clients. Right now I’m helping the RSPCA kickstart a responsible AI initiative. So far so good, in part because it features a dynamic I love: committed IC staff who get it but recognise good intentions don’t scale without robust policies, processes, and commitments. I’ll have capacity again from March so, as always, drop me a line if you know someone who needs help: training, growing responsible tech programmes, ethical product reviews, in-house talks, etc.
But my main 2025 project is a new book. I’m still honing the elevator pitch but at heart it’s an essay collection, with each (short) chapter addressing an important question, concern, and/or misconception about responsible tech. Definitely a practitioner work, albeit academically informed. So, if I may, a question – or more accurately, a request for free audience research:
What big questions do you have about ethical and/or responsible tech?
I’m working on the obvious stuff – How do I convince management? Is ethics subjective? How can we anticipate harms that haven’t happened yet? Isn’t this all futile under capitalism? – but if you’ve ever thought ‘I wish someone would write about [x]’, I’d love to hear what [x] is. Please reply to this email and you will have my gratitude.
## Divertissements

I had a blast at the London Chess Classic, the strongest – and, at nine days long, the most punishing – tournament I’ve played yet (and certainly my first at a football stadium). I finished with 4½ points from 9, way ahead of my predicted score and including this superb victory. The astute reader will note my attack stemmed entirely from my early decision to wreck my opponent’s queenside, leaving her king nowhere to hide.

Improvement is a curious drug: months of plateau drudgery, then clinging on with your fingertips during sudden accelerations. Having added 60-odd rating points in the last two months, perhaps 1600 is in play this season.
My other wasteful pastime of late has been multiplication simulator Balatro. I have less and less appetite for time-chugging epics (with the odd exception e.g. Final Fantasy 7 Rebirth, which was astounding), so a toe-dippable game that still inspires daft lore and eye-widening Reddit fan art is a rare treat.
I’m off X and Threads altogether now, as I hope you are too, and have nestled into Bluesky pretty well.
Of course they’ll hit problems eventually, probably once they start hiring growth goons. But for now it feels a little like home. In a world this grim, these small pockets of mutual support feel ever more important. I’m glad to have you all with me.
Yours,
Cennydd
NowNext Ltd, company 07945946 (England & Wales) · The Old Bank, 257 New Church Road, Hove, East Sussex, BN3 4EL.