Surveilled 96 — We need more serious conversations about AI
AI may be transformational, but likely not in the way you’re being told today.
After a run of Surveillance logs, I'm shifting gears before the holidays with a new issue of Surveilled proper. Wishing everyone a merry Christmas if you celebrate, a happy new year and a great start to 2026!
We need more serious conversations about AI
The conversation about AI today is strangely binary. For proponents, AI will do our jobs for us, solve cancer, make us immortal, and take us to Mars. For opponents, it will scorch the Earth, take away our jobs, destroy arts and culture, and eventually enslave us. Interestingly, both sides of this debate accord AI quasi-mystical powers.
And yet, history teaches us that no new technology is likely to be quite this disruptive. This dualistic framing distracts from the real issues AI raises, issues we will need to grapple with sooner rather than later. To add insult to injury, the current framing is mostly the result of crafty public relations by the technology sector and its cheerleaders in the press, business and politics. To use a phrase from investment banking, they are talking their own book.
Reason is hard to find in the debate, even though plenty of academics, technologists and others do offer reasonable and well-founded commentary. There is Yann LeCun, for example, who maintains that Large Language Models (LLMs) like ChatGPT are a dead end in the quest for artificial general intelligence. LeCun has skin in the game: he is leaving Facebook (sorry, Meta) to start a new venture focused on “world models”, in his view a more promising path to human-level intelligence.
Academics Henry Farrell, Alison Gopnik¹, Cosma Shalizi and James Evans provide another sober take. In a recent paper in Science, they set out a far more credible account of the potential, limitations and impact of “large models”. These include not only LLMs like ChatGPT, but also image and video generators like Sora.
The authors start by sketching the limits of these models: they merely weave together atomic pieces of information (tokens) in the statistically most likely manner. LLMs are really just models of language. Crucially, they are not capable of autonomous thought, even though they may give the impression that they are.
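To make “the statistically most likely manner” concrete, here is a minimal, purely illustrative sketch in Python. The toy vocabulary and hand-written probabilities are invented for the example; a real LLM learns such distributions from enormous text corpora and conditions on the whole preceding context rather than on a single word.

```python
import random

# Toy, hand-written conditional probabilities: given the previous token,
# how likely is each possible next token? (Illustrative values only; a real
# LLM encodes such distributions in billions of learned parameters.)
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "market": 0.3, "model": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "market": {"decides": 0.6, "crashed": 0.4},
    "model": {"predicts": 0.8, "fails": 0.2},
}

def generate(start: str, max_new_tokens: int = 4) -> str:
    """Build a sequence by repeatedly sampling a likely next token."""
    tokens = [start]
    for _ in range(max_new_tokens):
        options = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not options:  # no known continuation for this token: stop
            break
        candidates = list(options)
        weights = [options[t] for t in candidates]
        # The core move: pick the next token in proportion to its probability.
        tokens.append(random.choices(candidates, weights=weights, k=1)[0])
    return " ".join(tokens)

print(generate("the"))
```

Running it yields plausible-sounding fragments with no intent or understanding behind them, which is precisely the authors’ point.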
In a deep dive into LLMs specifically, Benjamin Riley points out that, according to current neuroscience, “thinking is largely independent of language”. Mastery of language does not imply or guarantee original and independent thought. Given that image generators are in effect arranging pixels on a canvas, it seems safe to extend that argument to them too.
But large models do not need to exhibit autonomous thought to be disruptive. In fact, they can be construed as a new iteration of cultural and social technologies. Farrell et al. define these technologies as “allowing humans to take advantage of information other humans have accumulated.” Obvious examples would be the printing press and the internet, but the authors argue that we should also consider institutions, such as markets and even democracy, as falling under this definition.
After all, markets transform the aggregated knowledge of innumerable participants into a single, imperfect data point, a price, which then shapes knowledge and decision making in turn. Similarly, democratic institutions transform individual political opinions into votes and election results. Defined as such, it’s clear that cultural and social technologies can be very impactful indeed.
Large models fit the definition of a cultural and social technology to a tee. They summarise an impossibly large and complex body of information, and can reinterpret it by combining it in new ways. Even though they are merely statistical models, the patterns they surface carry a great deal of information, and they can transform the material they are given by condensing it or, on the contrary, expanding it. Through these abilities, they render an intractable body of knowledge usable.
Ironically, the abilities of the best models still depend heavily on human intelligence. A key ingredient of their performance is reinforcement learning from human feedback, in which humans teach the model which outputs are better than others. It is this, together with the libraries of information on which a model is built, that determines its strength, not its algorithms. This also explains why training a new model is so expensive and time-consuming.
Understanding large models as another example of a cultural and social technology allows us to frame and probe their impact on society better. For instance, we know there are many issues inherent to the categorisation and summarisation² that are essential to large models. Where categorisation tends to suppress identity and individuality, summarisation tends to surface the most common situations found in the models’ training data. Both processes threaten to turn large models into a force for homogenisation. Worries about misinformation and the blurring of truth and fiction are also well-founded.
Large models will affect economic relationships as well. Any cultural and social technology creates tension between those who produce information and the systems that process and distribute it. Large models tend to concentrate power in their “owners”, even more so than previous technologies. The rewards of their use will accrue mostly to them, rather than to the owners of the information on which they are built.
To put it differently, large models have the potential to decisively shift economic power away from knowledge workers and towards the owners of capital. The same dynamic played out for physical labour in the 19th century, and was ultimately the starting point for many of the institutions still shaping society today: regulation, the welfare state and competition law, to name just a few. AI technologies could end up having a similar impact.
The media industry is already feeling the full effect of these changing dynamics, with the looming threat of “Google Zero”. Since introducing AI-generated summaries at the top of its search results pages, Google has been sending far less traffic to the websites that originally published the information. Loss of traffic means loss of revenue, threatening the survival of these owners of information.
Extrapolate this scenario, to a greater or lesser degree, to other industries, and the potential of large models to reconfigure society becomes clear. Our existing systems of redistribution, for example, will have to change, even if we never reach the Pollyanna-like vision of AI doing all the work while humans pocket their universal basic income.
The good news is that these are not entirely novel situations. Similar disruptions accompanied the introduction of earlier cultural and social technologies. New institutions emerged to curb the greatest excesses and to align the impact of those technologies with the wishes of the political majority. Crucially, these institutions did not emerge on their own. They were the result of debate and, ultimately, coordinated action by a majority of the population. We can and should expect similar action today.
So instead of wasting our time on hypothetical AI boom or doom, we should focus on the very real questions that large models raise. Before the current frenzy, Shoshana Zuboff wrote the seminal book The Age of Surveillance Capitalism. She identified the threat that data-driven technology poses to our principles of social organisation, which she summarised in three questions: “Who knows? Who decides? Who decides who decides?” Seven years later, broad and intellectually honest engagement with these questions is more necessary and more urgent than ever.
¹ Gopnik, a cognitive scientist specialising in learning, is also the author of my favourite book on bringing up kids, The Gardener and the Carpenter.

² Sociologists Marion Fourcade and Kieran Healy engaged with this topic in authoritative fashion in their recent book The Ordinal Society.