It’s remarkable speculation, but it’s also grounded in a pretty clear-eyed view of the trajectory of AI, with Aschenbrenner explicitly comparing the technology to the Manhattan Project – “The scientists built it and then the bomb was shipped away and it was out of their hands,” he observed.
From here, I’m not going to leap ahead to the idea that we’re very close to an AI version of The Bomb. (Though it is something worth worrying about.) Nor do I want to suggest that state and intelligence institutions are already aggressively positioning to take control of the AI labs. I merely want to draw attention to some interesting recent events:
The Pentagon’s showdown with Anthropic. In March, just before the conflict in Iran began, the Department of War got into a public dispute with Anthropic over how the latter’s AI could be used in war. Anthropic insisted that it would not allow its models to be used for the surveillance of American citizens, nor for autonomous weapons. The DoW, meanwhile, basically argued that it would never violate the law in its use of AI, and that this should be good enough for Anthropic – that the US government shouldn’t have to check with a private tech company about any of the decisions it makes in war. They couldn’t come to terms, and the Pentagon ended up designating Anthropic a supply-chain risk, barring it from government use – an unprecedented move against an American company.
China’s block of Meta’s Manus deal. Manus is a Chinese startup that was one of the first in the world to launch a semi-autonomous agent system that could perform complex actions without much human supervision. It got a lot of attention last year, and Meta announced a $2 billion deal to buy the startup in December. The following month, the Chinese Ministry of Commerce announced a formal review of the acquisition under export control and technology transfer rules. And last month, co-founders Xiao Hong (CEO) and Ji Yichao (Chief Scientist) were summoned to a meeting in Beijing with the National Development and Reform Commission (NDRC), which informed them that they were not permitted to leave the country while the review was underway.
The governments in question are playing a delicate game here. Or rather, they are playing a tricky game, and the Chinese government is playing it delicately: It doesn’t want to scare AI talent out of the country by taking too hard a line on the Manus co-founders, but it’s also reacting to their company’s transfer of tech and IP from the mainland to Singapore, where Manus is now headquartered. Chinese authorities really do not want AI talent and tech flowing out of the country, and they are clearly prepared to use state force to stop that from happening.
The US Department of War is playing less delicately. It clearly sees the value of frontier AI technology in wartime applications; if it didn’t, there wouldn’t have been any dispute. It is already using Anthropic models. But it also has access to other cutting-edge, American-made AI, most notably from Anthropic arch-rival OpenAI, which swept in to sign a deal with the Pentagon within days of the dispute going public. By seeking to punish Anthropic so severely with a blacklisting (which is already being challenged in court), the DoW may be trying to intimidate the other labs into cooperating in the future.
Game that trend out a couple of years into the future. That’s what Aschenbrenner was doing back in 2024, when he had that long chat with Dwarkesh. He was thinking about the value of AI in a scenario of serious, large-scale conflict.
“If we’re in a volatile international situation, initial applications will focus on stabilizing it,” he said. “It’ll suck.”