It was only a matter of time, really, before “a team” (two guys) of “AI experts” (surgeons who have alchemized AI stock hysteria into obscure assistant deanships) descended on my workplace with a “mandate” (that no one asked for) to shove some dogshit LLM tools into various aspects of my job. At least, that’s what it was at first, because these palookas don’t even know enough about what they’re doing to lie about it. When it became clear that I don’t need an LLM to “assist” me with any step of my work process, because I know how to do my job, the rationale shifted. Now, the dogshit LLM tools are supposed to “help” (deskill) my clients, and ensure they never learn how to do the stuff that I know how to do. In one of the meetings we’ve had about this, a third “expert” – a self-styled guru on AI ethics – actually suggested that all of my work interactions be filmed in order to train a model to replace me. (Excuse me, to assist the clients who are currently using ChatGPT, or whatever justification for their own existence they’ve cooked up this week.) Really! It’s so offensively stupid that they should be ashamed, but in the end it doesn’t matter how they feel, because the tools are not going to work. I mean, they might succeed in deskilling all clinical trainees and in laying me off, replacing me with a wrong-answers chatbot, but that’s all they will have done. And even though they don’t understand this, they will be worse off for it.
It really is irresistibly easy to make fun of these guys, for the same reasons we all love to hate confidently stupid people. Pride goeth before the inevitable stock market crash. It remains, however, important not to overpsychologize these folks. They are the most “they know not what they do” people in history, besides maybe the original ones. They have, to an unclear degree of conscious involvement, been conscripted into the national project of buoying obscenely overvalued tech stocks, because as soon as that stops happening, line goes down, and we all know what happens after that. The arguments against American AI are all correct and still apply: the technology simply can’t and won’t ever be able to do what it is being marketed for; it’s deskilling the workforce, which is something that, were it not run by senile boomers, I might caution an institution of higher learning about participating in; it’s insanely wasteful and unethical to use when a regular piece of computer software, a handheld calculator, or a human brain can do the same thing without slurping up half the world’s remaining fresh water. Tech critics have been making these and other arguments – let’s call them moral critiques – for years, and yet the power of the tech sector has only grown over those years. And now we’re in really deep shit. Bad omens accumulate. Dead frogs keep piling up on the banks of the river that powers the data center. OpenAI sort of let it slip recently that they plan to ask for a bailout when the bubble bursts. Just yesterday Nvidia reassured Wall Street analysts that it is “not Enron.”
This is, if you ask any of the people at my workplace whose job it is to follow this stuff, probably fine! Right? It’s charitable to assume they even know about it – it’s not like any of these guys read the news, or understand themselves as part of a dynamic society. A timeless question thus arises again, like a perennial flower: if this is so obviously a bad idea, and so obviously unsustainable and destructive, then why is it still happening? What are the reasons for this that don’t route through the ablated psychologies of our nation’s boomer middle managers? Yesterday morning I saw a tweet from CNBC anchor Carl Quintanilla (welcome to the resistance?) citing a Goldman Sachs report that partially attributes the astronomical unemployment rate among young college graduates (8.5%!!) to AI, noting that “a further deterioration in employment opportunities… could have a disproportionate impact on consumer spending.” Each of my chakras switched on, like the lights on a pinball machine, from root to crown. I am levitating six inches off the ground and smiling with the serenity of divine wisdom when I tell you that this is what I was writing about over the last few days as I worked on this very issue of the newsletter. The scenario that Goldman describes in the report is what David Harvey calls the “contradiction between production and realization.”
Here’s what that looks like. Employers want to appropriate as much surplus value as possible, so they do things like lengthen the working day and depress wages. This is Volume 1 stuff. Crucially, though, the commodities workers make also need to be sold and bought to “realize” the surplus value solidified in them through the labor process. If workers don’t have any time to buy things because they’re always working, or if they don’t make enough money to afford to buy things, then a huge part of the surplus value involved in the commodity circuit can’t be realized. This is Volume 2 stuff, one of many contradictions that capitalism, to use a favorite Marxist word, “internalizes.” If we run these AI innovations through the labor theory of value, this contradiction takes on an obvious, and obviously self-annihilating, appearance. AI has very little labor content; it uses instead a lot of water and coal. It functions as a “good enough” replacement for human skills and labor in a variety of industries, but it degrades the real productive economy in the process. AI might help institutions meet short-term goals of downsizing and layoffs, but in the slightly longer term, by sapping them of the thing that creates value – labor – AI sort of vitiates these firms, rendering their products and services valueless.
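The mechanism is simple enough to actually run, so here’s a toy version of the arithmetic. To be clear, this is my own sketch, not a model from Harvey or Marx, and every number in it is invented:

```python
# Toy model of the production/realization contradiction. My own
# illustration, not Harvey's or Marx's; all the numbers are made up.

def realized_surplus(wage_bill, surplus_produced, capitalist_spending):
    """Workers can spend at most their wages. Whatever part of total
    output nobody can afford goes unsold, and the surplus value
    solidified in it is never realized."""
    total_output = wage_bill + surplus_produced
    total_demand = wage_bill + capitalist_spending
    sold = min(total_output, total_demand)
    return sold - wage_bill  # surplus actually realized through sale

# Depress wages to appropriate more surplus in production...
for wages in (100, 80, 60, 40):
    surplus = 140 - wages  # hold total value added fixed, for simplicity
    realized = realized_surplus(wages, surplus, capitalist_spending=30)
    print(f"wages={wages:>3}  produced={surplus:>3}  "
          f"realized={realized:>3}  unrealized={surplus - realized:>3}")
```

Squeeze wages and the surplus produced goes up every time, but the surplus realized stays pinned at whatever buyers can actually spend. The widening gap between those two columns is the contradiction.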
Capitalism is always shifting around to accommodate its constitutive internal contradictions, like a person with a bad case of indigestion. And so a natural question following from the above is: how is the movement of capitalism responding to this contradiction as it manifests in the contemporary tech economy? A recent paper that someone randomly sent me, by Kohei Saito and Ryuji Sasaki, presents one convincing candidate explanation: a shift to rent extraction. I need to be clear that this is my use/appropriation of their work. Saito and Sasaki don’t talk about rent in the terms I’m discussing here. They view it as capitalism’s possible “final form” rather than as, in my admittedly inchoate view, one possible system-level response to contradictions in the tech sector and the broader economy. Saito and Sasaki explain that surplus profits are “fixed” as rent when there’s some kind of natural monopoly on a scarce resource or on something that doesn’t scale easily; most digital infrastructure fits one or the other description. Digital rent is then the “appropriation” of “social wealth” arising from this monopoly structure. The collection and use of data do indeed depend on huge investments in technical capacity and in establishing structures of dependency. Rent is the real endgame of so-called platform capitalism; platforms are ruthlessly designed at the user level and seamlessly integrated into the productive economy (via a very particular regulatory infrastructure, or lack thereof) to ensure that everybody has to be on some kind of platform pretty much all the time, for work and for leisure. AI technology itself sucks raw ass, but that’s not even really the point – seen from the perspective of rent extraction, the wastefulness and inefficiency of the technology are actually strategies to secure tech firms’ monopoly on the huge share of public and private resources collectively termed “computing power” (as I discussed in my extensive criticism of Justin Joque’s book). As Saito and Sasaki put it, “lock in users, extract data, enclose knowledge, maximize rent.”
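The strategy is crude enough to sketch in a few lines, too. Again, this is my own toy illustration of the lock-in dynamic, not anything in Saito and Sasaki’s paper; the churn rule and all the numbers are invented:

```python
# Toy sketch of rent extraction under lock-in. The churn rule and the
# numbers are invented for illustration; this is not Saito and Sasaki's
# model, just the shape of the strategy they describe.

def users_next_year(users, rent, switching_cost):
    """Users defect only once the rent outruns the pain of leaving;
    high switching costs let the platform raise rents with impunity."""
    churn = max(0.0, (rent - switching_cost) / rent)
    return int(users * (1 - churn))

users, rent = 1_000_000, 10.0
for year in range(5):
    print(f"year {year}: rent=${rent:,.2f}/seat  users={users:,}  "
          f"revenue=${users * rent:,.0f}")
    rent *= 1.3  # the monopolist ratchets the toll
    users = users_next_year(users, rent, switching_cost=25.0)
```

Notice that there is no labor content anywhere in that loop. Revenue climbs for years on the strength of the switching cost alone, which is roughly what “lock in users, extract data, enclose knowledge, maximize rent” cashes out to.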
Though I find Saito and Sasaki’s analysis useful as an account of how tech capitalism works, and helpful for thinking about how tech firms’ strategy of rent extraction might function as an attempt to transcend the production/realization contradiction, I must again stress that this last part is a heterodox reading of mine, and I do take issue with some of the argument they’re actually making. First of all, it’s framed as a critique of the concept of “technofeudalism,” care of Daddy Varoufakis, which leads in my view to some strange and unnecessary preoccupations with the technical definition of feudalism. Saito and Sasaki think our current conjuncture (to use another favorite Marxist word) can’t be described as feudalism because we’re still in capitalism, which I think is a bit obvious. Where I would really object to technofeudalism as a heuristic is the historical comparison – why describe current events as a return to feudalism, rather than as a newer regime of accumulation layering atop an older one, the two functioning awkwardly together, as so often happens? A further problem: Saito and Sasaki argue on the one hand that we are still firmly in capitalism (this is why an analysis based on technofeudalism is misguided) and, simultaneously, that this form of digital rentier capitalism must be capitalism’s “final form.” This is the case, they argue, because this regime of rent extraction destroys the basis for “human prosperity,” the commons, thus fatally undermining capitalism itself as well as what they call “anti-systemic” social solidarity movements. They prefer “technofascism” to “technofeudalism,” which is fine, but (permission to lib out) fascism is a mode of governance and not a mode of production. And they don’t really consider how rentier capitalism might undermine or contradict itself as capitalism.
Take it as a given, for example, that this level of rent extraction and appropriation (Vol. 3 stuff, which S&S derive from an admirable reading of the dreadful Marx-Engels Gesamtausgabe, or MEGA) cannot coexist with the more traditional structures of capitalism. What then? Do those older structures just wither away, or what? Saito and Sasaki make an astute point that, in our recent experience, authoritarian governments and digital rentier capitalism make cozy bedfellows. Even so, the older and underlying contradictions don’t just disappear. The authors know this, and concede that generative AI and cloud platforms “function as a means of production, either by reducing manufacturing costs or by increasing labor intensity, thereby generating surplus profits.” Which brings me to my original thought. How does digital rentier capitalism – which we’ve established is something more than just the malign death wish of evil billionaires – articulate itself within and through the tensions of the mode of production, particularly as it is arranged in the tech sector? Again, how are rentier strategies an attempt, at the system level, to transcend the ways that AI and other digital technologies tend to degrade the bases of both production and realization of surplus value? There might be some opportunities for contestation on this basis, rather than an increasingly bleak future of atomization and surveillance.
Something that I continue to emphasize in my thinking and writing these days is that this could all be undone. This is my personal crusade to combat or at least delay the onset of left melancholy. It seems fatalistic to think that capitalism will actually just destroy itself, and with it all prospects for human solidarity and flourishing, in its hunger for growth and expansion. It has adapted to extreme destabilizations in the past, including extreme polar concentrations of wealth and poverty. This is kind of capitalism’s whole thing, and the point of the MEGA. The Volume 1 stuff, Volume 2 stuff, and Volume 3 stuff are (to paraphrase David Harvey again) “moments in a unity,” separated only artificially for the purposes of Marx’s analytic method. I still believe that various strategies of political resistance are possible, and possibly fruitful. Can we disengage from the platforms? It seems like no longer working for them for free is a good place to start. Can we stop using AI, or (my preferred strategy) never start in the first place? What exactly are the competitive advantages that these digital tools confer, anyway, and are they the same in every sector? My guess is no – in something like supply chain management, there are likely a lot of advantages, and thus the pressure to take them up is likely structural, and much greater than in something like my job, where efficiency is not even remotely an important criterion and where “demand” has to be stimulated with embarrassing dog and pony shows. I think it is still possible, within our system of governance, to enact political changes that could reverse some of this stuff. We could take money away from the tech sector and redistribute it, nationalize whatever infrastructure is important, and jettison the rest. We could take away the money and assets of the eight guys who have hoovered up almost all the wealth (real, social, and fictitious) of the world, and redistribute that. Over the next few years, I’ll be very interested to see whether the remnants of rule of law in our system of tenuously representative government will last long enough for some kind of reconfiguration in a more positive direction. A reconfiguration will happen, but what it will look like, in light of the actually-existing material contradictions in the system, is an open question.