Surveilled

June 4, 2025

Surveilled 93 — AI and the Management Singularity


'The Management Singularity', by Henry Farrell

The conversation around generative AI seems to remain stuck in broad generalisations, usually describing it as an all-powerful technology that will put vast numbers of knowledge workers out of a job. A more nuanced analysis of how that might happen, and which functions would actually be automated away, is harder to find.

Henry Farrell, professor of International Affairs at Johns Hopkins, bucks the trend by attempting to chart the impact of AI—specifically Large Language Models (LLMs)—on the kind of knowledge work that we do today, most of which happens in large corporations. Approaching the topic from a sociological perspective, Farrell looks at LLMs' impact on the management of large organisations, and posits that this is where we might see the earliest real impact. To quote:

LLMs are engines for summarizing and making useful vast amounts of information. This is also the most difficult challenge facing large organizations such as government and big business.

It doesn’t take a lot of introspection as a corporate drone to realise that summarising and communicating information up and down the management chain is one of the key features of a large organisation, so intuitively it makes sense that LLMs would have an impact here.

The sociological framework that Farrell applies to the corporation provides a useful structure to assess the areas where LLMs will have an impact, and the author finds four main ones:

  1. Microtasks — automating tedious tasks at the individual level, for example making unstructured information usable.
  2. Knowledge maps — creating usable (but imperfect!) summaries of large bodies of information.
  3. Organisational “prayer wheels” — generating ritualised organisational text, such as management summaries, presentations and so on.
  4. Translations — bridging communication gaps by translating goals, procedures and general understanding between different parts of a large organisation.

LLMs’ value for these four types of tasks is immediately obvious, but at the same time, these tasks are not usually the ones where we locate the critical success factors of an organisation. This leads Farrell to conclude that:

If LLMs are radically transformative, it will be in the apparently boring ways that the filing cabinet and the spreadsheet were transformative, providing new tools for accessing and manipulating the kinds of complex knowledge and solving the big coordination problems that are the bread-and-butter of big organizations.

It also seems fair to say that we haven’t yet seen a radical transformation brought by AI in large organisations. At best, it is being used for the first item in the above list, at the individual level by some workers.

One explanation for this is the ritualistic nature of large organisations. Employing LLMs more structurally to take on all four tasks in the list would likely require extensive cultural transformation of the organisation, and that is not an easy task.

Other caveats also apply. First, there is the challenge of authenticity, which is arguably a requirement for good management. Imagine, for example, how you would feel if you knew your boss used an LLM to write your performance review.

At an organisational level, LLMs could also hollow out the rituals now performed, even though those rituals arose precisely to generate real knowledge and judgment. LLMs’ output tends to be homogeneous and "maximally unsurprising", and as a result they may erase real opportunities for competitive differentiation that purely human interpretation would have brought to the fore.
