Sam Altman, Hollow Man
What kind of guy would give us ChatGPT? What kind of guy is it making us?
(I’m Henry Snow, and you’re reading Another Way.)
As a child, an image from Michael Chabon’s 2002 fantasy/baseball novel Summerland stuck with me. The protagonist’s father is recruited into the development of a world-ending weapon of mass destruction. As he works away on it, first as a reluctant prisoner and then as an intellectually curious and semi-willing participant, he physically becomes a flat, two-dimensional man. What now feels like a literalization of T.S. Eliot’s “The Hollow Men” was striking to me as a middle schooler. Today it reminds me of the kind of men I have previously talked about in this newsletter, and I think it describes an irritatingly important public figure now as well: OpenAI’s Sam Altman.
A preface: there’s a lot to say about Sam Altman and Silicon Valley libertarianism– we could go down a rabbit hole. I want to be more focused here. Asking “what kind of person produces AI?” isn’t a good question– putting aside the problem that we’ve all somehow agreed to call ChatGPT “intelligence,” AI is the product of many, many people. Altman’s not an engineer. He’s a manager.
Let’s invert that original question and instead ask: what kind of person does AI produce? What kind of person does it need in charge? Sam Altman is what his creations have made him. Asking what that means lets us ask who AI might make us. This is also an opportunity to follow up on my earlier essay “Machines that Cannot Err” and unpack what some of the dynamics I discussed in it mean right now.
If you’re as chronically online as I am, Altman’s been hard to miss. A good piece of advice is never to write just about something that happens on Twitter, but I’m going to break that for a moment because of what Altman’s public pronouncements reveal.
March 17th: this is the most interesting year in human history, except for all future years
Why? AI, obviously. He doesn’t even need to tell you.
February 11th: you can grind to help secure our collective future or you can write substacks about why we are going to fail
(Theoretically I am trying to do both; consider this my obligatory please subscribe and share notice)
February 9th: openai now generates about 100 billion words per day.
All people on earth generate about 100 trillion words per day.
This one was unsurprisingly pilloried on social media, and here we’re getting into this man’s values– at least, those of his public persona. Behold: the number’s getting bigger! It’s possible to read this with a kind of winking self-awareness. I suspect Altman knows how people like me are going to receive this. Still, the content matters as much as the tone, and the content is unambiguous: OpenAI is on its way toward surpassing humanity itself, and we know this from its quantitative output.
Also February 9th: one of the great pleasures in life is finding undiscovered talent, enabling them with high conviction, and watching them bend the trajectory of things
I find this one fascinating, because Sam Altman’s idea of ‘bending the trajectory of things’ is… questionable. At its founding, OpenAI itself at least theoretically represented an attempt to bend ‘the trajectory of things’: it was originally a nonprofit, meant to pursue AI for public good rather than private profit. This was built into the nonprofit’s structure… until 2019, when it spun out a for-profit entity governed by the nonprofit. The results have been obvious. In practice, the hand that gathers the money is the hand that decides. Now OpenAI is simply another tech company.
In November 2023, Altman was briefly dismissed from his position by board members seeking to defend the theoretically controlling nonprofit’s mission from its growing profits. Now, without going into too much detail, I want to stress that I’m skeptical of those who fired him as well: their own ideas on existential risk (from AI) strike me as crankish and implausible, and they are linked with the same utilitarian “Effective Altruist” movement Altman was (and perhaps still is) part of. The board members who fired Altman were probably wrong that their actions would have saved humanity had they succeeded.
But they were undoubtedly right that Altman’s increasingly close relationship with Microsoft was turning OpenAI into the kind of company Altman had previously incubated at the startup accelerator Y Combinator. The company’s actions are indistinguishable from those of a profit-seeking entity, because that is what it is. It pursues the development of model after model, heedless of the impacts its products might have on our information environment, education, or workers’ lives. When OpenAI’s board of directors tried to actually enforce its principles by firing Altman, he promised to simply go do the same work at Microsoft, which pushed for his return. Employees did as well. He was back in the captain’s seat in a matter of days.
Larry Summers, perhaps the most establishment man imaginable (Harvard president, former Treasury Secretary, architect of post-Soviet privatization), serves on the board now.
Sam Altman appears in the story of AI not as someone who bends the trajectory of things, but as someone who adheres to it, even enforces it. OpenAI’s mission and nonprofit structure were probably doomed within capitalism, but the intent behind them was at least to bend that trajectory– to do something different. The growing role of its profit arm as investment pours in has washed any good intent OpenAI once had right into San Francisco Bay. Altman has served as a man who makes excuses for power rather than a visionary who changes it.
Reporting on the November debacle at OpenAI, as well as more recent reporting on the mood among workers at AI firms, has suggested a sense of dread among the people who make these software products. Quite literally, these workers tell us they fear they are producing something that could end the world. And they are doing this willingly. Why?
Workers at OpenAI competitor Anthropic– which is trying to be the moral organization OpenAI was originally meant to be, complete with close ties to effective altruism– report fearing “neural network scaling laws.” These come down to the same rule as Altman’s tweet about words: bigger number means more power. The more data a model ingests, the more powerful it becomes. This is partially right and partially wrong: models do improve in performance with more data, but scale cannot overcome fundamental limits (too lengthy a topic to detail right now). But what matters is that many AI true believers believe in the power of scale.
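For readers who haven’t met them: the “scaling laws” in question are usually summarized as an empirical power law. A sketch of the commonly cited form follows; the symbols and the fitted-constant framing are standard in the scaling-law literature, not something this essay specifies.

```latex
% Empirical neural scaling law: test loss L falls as a power law
% in N (parameters, data, or compute); N_c and alpha are fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha}
```

Bigger N means lower loss, but a power law flattens quickly: each constant-factor improvement in loss demands a multiplicative increase in scale, which is one reason “just add more data” eventually runs into limits.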
—
Altman himself doesn’t seem to actually dispute the dread-havers’ descriptive analysis of the situation: AI is enormously powerful and growing beyond our control. He just thinks this is good. Consider Altman’s 2021 blog post “Moore’s Law for Everything.” Moore’s Law refers to an empirical observation: every two years, the number of transistors on a computer chip (and so, roughly, its processing power) doubles. This isn’t inevitable; it has only been made possible by successive engineering insights, and some argue it’s breaking down now.
What is Moore’s Law for Everything? Big numbers for everybody, via AI. Once AI and “robots” dominate the economy, Altman argues we can expect the same dynamics as Moore’s Law to constantly cut costs. “Imagine a world,” Altman writes, “where, for decades, everything–housing, education, food, clothing, etc.–became half as expensive every two years.” This is great for a consumer, but horrific for producers: does your pay go down by half every two years? Does your job change constantly? Altman tells us we will find new jobs, but should we have to?
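The arithmetic behind that promise is simple compounding, and it’s worth seeing how fast it moves. A minimal sketch of Altman’s premise (the halving rate is his assumption, not an established fact; the function name is mine):

```python
def cost_after(years: float, halving_period: float = 2.0) -> float:
    """Fraction of today's price remaining after `years` of steady halving."""
    return 0.5 ** (years / halving_period)

# Altman's premise: prices halve every two years.
for years in (2, 10, 20):
    print(years, round(cost_after(years), 5))
# 2 years -> 0.5, 10 years -> 0.03125, 20 years -> ~0.001
```

After two decades, prices fall to roughly a thousandth of today’s– which is exactly why the symmetric question about wages and jobs is so uncomfortable: the same exponent applies to what producers are paid for their work.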
Sam Altman’s utopia is functionally a universal extension of the 2010s-era Silicon Valley startup economy that gave us companies like Airbnb, Lyft, and Uber. Disrupt everything, maximize the role of markets, cut costs, and voila, a democratic gig economy. OpenAI’s actual practice matches these firms as well. Like many of the big firms of the 2010s– Netflix, for example– it gorges itself on investor cash premised on vast future returns, while constantly burning money (it lost more than it made in 2022 and 2023, and it will continue to do so for the foreseeable future). These firms marketed themselves as a techno-optimistic reinvention of their sectors of the economy.
In practice, their business model has been a combination of regulatory arbitrage and investor subsidy. When this fails, they raise prices and lower wages. If they cannot lower wages enough, they collapse; otherwise they lumber on with unhappy consumers, investors, and workers alike, while the precaritization they contribute to continues to advance. This is probably a best-case scenario for OpenAI and its place in our economy going forward. Ask Russians– millions of whom died prematurely because of Larry Summers’s privatization program– how they feel about a utopian vision with him anywhere near it.
These and any other objections are irrelevant in Altmanworld for two reasons. First, they aren’t quantitative: AI will make numbers bigger and therefore must be good. All else is details. If I say AI firms will prioritize nickel-and-diming workers rather than world-changing improvements in productivity, he’d say the market will fix that. Second, there’s an argument AI advocates hold in reserve. Altman concludes by writing: “The changes coming are unstoppable. If we embrace them and plan for them, we can use them to create a much fairer, happier, and more prosperous society. The future can be almost unimaginably great.” The unstated alternative to this big “if” is that if we do not, we will suffer.
Sometimes this is explicitly stated, in even more unhinged philosophies adjacent to Altman’s. In venture capitalist Marc Andreessen’s “Techno-Optimist Manifesto” (which required me and now you, sorry, to become aware that a billionaire is taking advice from somebody who goes by “BasedBeffJezos”), we are told that “any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is [sic] a form of murder.”
If your product can destroy or save the world, surely it’s worth investing in. Both the hope of AI utopia and the dread of AI dystopia make for a good marketing strategy. Is that what they are for Altman, or are they his deeply held beliefs?
I don’t know. I don’t think Altman knows. And I do not think the answer to this question is knowable at all. The Hollow Man is exactly what he appears to be. OpenAI is simultaneously a mission-driven nonprofit and a world-straddling business colossus in the making. Sam Altman is at once a trajectory-bending visionary and a flat man, defined only by the space around him– in this case, markets. Truth is in the numbers alone, and Sam Altman has indeed made big numbers.
At the heart of both his business and the ideology it is connected to is oblivion: the wholesale reduction of human intent, a self-excusing inevitability, an agency dedicated to its own annihilation. As the villain of 2007 puzzle-platformer game Portal put it, “we do what we must, because we can.” Action without intent. This is true whether or not AI actually achieves any of its supposed cosmic powers. The fact that these ideas are held by people who already control vast resources and can credibly request yet more is a problem.
The sense of inevitability shared by those who dread AI and those who desire it comes from the hollow man ethos of Silicon Valley capitalism itself. Let us imagine tomorrow that Sam Altman read this newsletter and was inspired to become a socialist, or simply decided to restrict which clients his firm should serve. Perhaps AI shouldn’t replace artists, or direct bombs?
He would be replaced. If Altman did not exist, it would be necessary in market terms to invent him. Microsoft CEO Satya Nadella promised that the once nonprofit OpenAI had no power to stop his company’s AI dreams: he insisted Microsoft was “below them, above them, around them,” and that their work could be taken over without issue if need be. Nadella and Microsoft make a fitting executioner: ChatGPT has been unleashed not by a hungry startup with dreams of changing the world, but by a boring purveyor of enterprise software taking such a startup over.
—
It would be easy and accurate to see Altmania as a product of a market-dependent world in general. Silicon Valley techno-optimism is novel and alarming, but it has deep roots and clear historical antecedents. Profit over all is, after all, the logic of capitalism itself. To close out, I’ll give you one more flat man as an example.
As far as I know, there is only one recorded incident in which the early twentieth-century capitalist Alfred Sloan ever cried. He devoted his life to his company– in fact, the reason for this weeping episode seems to have been reflection on how little time he’d spent with his wife before her death. Sloan was an important player in GM’s development of tetraethyl lead (TEL) as a gasoline additive, despite obvious and known health risks. He was directly responsible for the transfer of TEL production technology to Nazi Germany. This enabled the mechanized warfare strategy critical to the rapid conquest of Poland in 1939.
Henry Ford is rightly remembered as an anti-Semite because of his seething bigotry. Alfred Sloan’s aid to the Nazis is much less infamous, in part because it was driven only by the cold pursuit of profit. He too was a hollow man, with a terrifying lack of interiority. In private, maybe he and Sam Altman were both something more. But the private Sam Altman who wants a big family and apparently laughs loudly is not the man we deal with– nor is he the man who runs OpenAI. We worship or cower before or throw stones at “Sam Altman,” the man who makes numbers larger.
Immanuel Kant wrote that humans should treat each other as ends in and of ourselves, not as a means to an end; it is not difficult to find the same lesson elsewhere in humanity’s millennia of moral thought. Altman’s work suggests a clear but not especially rare example of the inversion of this ethic: everything is a means. The only end is the means.
This replacement of should with is has a long history. I write about it at length in my upcoming book Control Science, and lately I’ve taken to discussing it in my classroom as a “collapse of the normative into the positive,” an ideological move bound up with capitalism itself. It isn’t just cynicism or pragmatism; for centuries, men like Altman have proclaimed it variously as objective truth, moral necessity, and economic inevitability.
One reason I like the newsletter format– other than the fact that it offers an increasingly rare, if at this point only theoretical for me, opportunity to actually be paid for writing– is the way it allows for building over time on particular themes and ideas in a way disconnected essays cannot. The capitalist collapse of the normative into the positive is more than anything else my single-minded intellectual obsession at the moment, and my angle into the crises of the present. So you can expect to hear a great deal more about it.