Dumb Skynet (addendum)
In rushing to get a newsletter out on a regular basis, I realize I may have flubbed the end of my Dumb Skynet argument a bit. I don’t think I adequately fulfilled my promise to explain why I think the scale of algorithmic deployments of information in our attention economy is qualitatively different from just a lot more tabloid and yellow journalism. Instead, I got hung up on naming and describing its deforming effects. While it is important to my argument to show that algorithms are a driving force of actual governmental power, I also need to describe better what is going on under the surface that is algorithmically driven (rather than just driven by media, attention, and markets).
So, to return to the million monkeys on a million typewriters analogy: humans are producing content, but so many humans are producing so much content that the vast majority of it is all but lost, buried in the infinite scroll (or never appearing in the first place). You could imagine, hypothetically, that for anything you can imagine anyone saying, someone somewhere actually said it. This is why, although the vast majority of liberals did not celebrate the death of Charlie Kirk and found public assassinations on college campuses to be a horror, conservative attention merchants were nonetheless able to find some people who did celebrate it. Someone somewhere, in the millions of posts, is going to say the exact thing that will make you the most mad, on literally every single issue you can think of. If the topic is sufficiently in the public eye, and there are agents (like conservative attention merchants) with incentives to elevate the outrageous, it is going to become a high-attention post. Posting about the original post will also draw high attention, so suddenly some random small-town math teacher is in everyone’s feed. Marginal opinions can seem normal because the algorithm creates the illusion that they are by magnifying them.
The more people look, the more the algorithm promotes the posts that are getting attention. But this attention says nothing about the value of the post. Tabloids have crazy headlines to catch attention, but most people know tabloids are full of shit, so they can resist more than a momentary look, and tabloid stories stay outside of real discourse. The momentary glance you give the Bat Boy cover in the supermarket doesn’t increase the prominence of Bat Boy stories. But if you linger for a bit over a post in your doomscrolling, that gets registered, and the algorithm learns that the post is getting attention.
Social media feeds are social environments created by the algorithm. If everyone jumps on a topic, or you start seeing posts about it, suddenly you have a huge social influence suggesting to you, “wow, maybe this is a real thing people are saying,” either over there among the weirdos you hate, or here in your ingroup. But it’s just an algorithmic machine picking winners. The marginal position becomes what “everyone has been saying,” and now you are influenced to post a response. That response gets more attention (impressions, likes, follows) than your average post, and now you are trained to get dopamine hits by jumping into whatever “discourse” seems to be on everyone’s mind. The algorithm that was training itself on the habits of human attention is now training human attention on its narrow engagement metrics.
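For readers who like to see the mechanics, the feedback loop I’m describing, where attention begets promotion begets more attention, is essentially a rich-get-richer dynamic that you can watch emerge from pure chance. The following is a toy sketch, not any real platform’s ranking code; every number and name in it is invented for illustration. All posts start identical, and the feed simply shows posts in proportion to the attention they have already received:

```python
import random

def simulate_feed(num_posts=1000, impressions=50000, seed=42):
    """Toy model: every post starts with equal standing; the feed
    shows posts with probability proportional to attention already
    received, so early random wins compound into dominance."""
    random.seed(seed)
    attention = [1] * num_posts  # each post starts with one unit
    for _ in range(impressions):
        # the feed picks a post to surface, weighted by prior attention
        post = random.choices(range(num_posts), weights=attention)[0]
        # a surfaced post gets lingered on, registering more attention
        attention[post] += 1
    return sorted(attention, reverse=True)

ranked = simulate_feed()
top_share = sum(ranked[:10]) / sum(ranked)
print(f"Top 10 of 1000 identical posts capture {top_share:.0%} of attention")
```

The point of the sketch is that the winners capture far more than the 1% of attention that 10 equal posts out of 1000 “deserve,” even though nothing distinguishes them but luck and the compounding loop. Real engagement ranking is vastly more complicated, but the same self-reinforcement sits at its core.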
Now factor in that attention is monetizable and that everyone thinks they are smart enough to make some money on all this posting. Suddenly, the whole web is consumed by tabloid merchants chasing clicks to monetize a hobby through ads, and the only way to keep making money is jumping into the story of the day in some form, however marginal its protagonists or, more likely, antagonists are in any reasonable picture of our social and political reality. Everyone is playing by the rules of engagement optimization, and if you try to resist and pursue other values, you lose your platform very rapidly. You need to feed the content beast to survive. All the accounts you like are doing this in some way, but I think the most frequent form is the simple parasitism of clipping an outrageous statement someone made and then telling your audience they are right to think it’s outrageous. They always make sure they are sharing and amplifying the original statement that got attention in the first place, so the algorithm catches and promotes their post or video.
For a while, this algorithmically driven attention herding seemed relatively apolitical, if not always helpful. The herding of outrage around, say, the Covington Catholic kids was not exactly useful in promoting progressive values, and pile-ons like this probably helped fuel the wave of backlash we’ve seen. But “cancel culture” was never the property of one political side; the right thrives on it and does far more damage than costing someone a job (Mark Bray being forced to flee the country with his family comes to mind).
It’s impossible to establish this with certainty, but to me it has always seemed that these mechanisms favor the right. The one story of a migrant committing a crime is bound to get exponentially more attention than the statistics showing that immigrants of all types commit crimes at far lower rates than native-born citizens. The machine has made celebrities out of Nazis like Richard Spencer (even when he gets punched) and Nick Fuentes. And Trump is its king, more able to commandeer the attention machine than anyone: he has no filter on saying the most outrageous things constantly, cultivates the attention it brings, and uses it to build his political machine. But he is also its ultimate rube. He falls for the illusions of the attention machine harder than anyone, measures everything in terms of attention, and only wants people around him who show up frequently in the media he consumes. As a result, we have policy set by chasing clicks and a content-creator cabinet. Trump, more than anyone, has been trained by the algorithm that was meant to be trained by us.
This is what I mean by Dumb Skynet.
This harm is real and immediate. Tech is riding MAGA cynically to defeat regulation of the harms it causes and to destroy the antitrust movement that could reorient its priorities through competitive pressures. I also think some of the worst potential harms of chatbots are largely amplifications of Dumb Skynet, but that’s probably a different post.
The good news is that we are not just letting this continue unabated. Lina Khan (better) be back with the next Democrat; Cory Doctorow is writing antitrust bestsellers; and tech workers like Tristan Harris are defecting and starting nonprofits, like the Center for Humane Technology, to call attention to and study these harms. And teachers like me are realizing that information literacy and source verification are probably the most important skills we teach, and that we need to reinvigorate our teaching by understanding critical information studies in terms of the algorithmic mediation of information on social media sites and in chatbots. It is a hard problem, but it is not some existential future terror; it is here now, and we are still alive and still fighting. It is ultimately human systems that need to change, not a godlike robot.