OpenAI Does a Sinophobia
By Emily
The book tour and other summer travel have been keeping us busy recently, hence the low frequency of posting on this newsletter, but I wanted to return to a news item from early June that has continued to bother me.
On June 5, NPR reported "OpenAI takes down covert operations tied to China and other countries". The only sources cited in this piece are an OpenAI employee (Ben Nimmo, "principal investigator on OpenAI's intelligence and investigations team") and OpenAI's own "threat reports" (the latest one as of June, and the previous one from February).
This reporting allows OpenAI to position itself as the good guy, busy identifying, blocking, and reporting harmful activity, with China in particular cast as the evil actor that must be defended against. This extremely Sinophobic framing does several things at once:
- It furthers OpenAI's lobbying with the US government, wherein we frequently see the trope that any regulation on the part of the US will hamper US companies' ability to compete with China. China (and by extension, all Chinese people) is framed as the other, the enemy, the bad actor, and this framing happens in the background without nuance. There are echoes here of the ridiculous "AI 2027" document we took apart in Episode 56, wherein the protagonists are working against a fictitious Chinese company (as well as their imagined AGI).
- It displaces accountability from OpenAI. It is, after all, OpenAI's technology that the actors it points to are using for nefarious ends. OpenAI might argue that their tech is "neutral" and can be used for both good and bad, but the legitimate use cases for ChatGPT (or similar) extruded text are few and far between (at best).
OpenAI might also argue that if they didn't make this tech available, someone else would (e.g., DeepSeek). However, this tech is predicated on amassing enormous amounts of data, compute, and financial capital. In addition, OpenAI is the actor that not only provides a convenient interface to their enormous models, but also pushed everyone else in the field to create and deploy similar tech. OpenAI has also consistently refused to make any attempt to watermark their textual output, leaving it extremely useful for bad actors who want to use synthetic text in manipulation campaigns.
- It minimizes all of the other harm enabled by the synthetic text extruding machine. The "threats" being highlighted here are propaganda and disinformation campaigns by China (and, in a side note, by Russia and Iran). But what about everyone else (including in the US!) who uses synthetic text for fraud? What about the people promoting the use of chatbots for psychotherapy, and the increasing reports of ChatGPT exacerbating mental health crises? And so on. OpenAI isn't solely responsible for these outcomes, but it does bear a large part of that responsibility.
- It normalizes large-scale surveillance by an unaccountable company. OpenAI describes itself as acting in the national interest here, providing "intelligence" on the actions of states considered enemies of the US. But what ensures that OpenAI works "in our national interest", much less in the interest of actual people? OpenAI is claiming to take on a governmental function, but it is not constrained in its actions the way a (democratically elected) government might be. Among other things, it can carry out surveillance activities without warrants.
Furthermore, the data OpenAI has appropriated and continues to collect (by encouraging users to interact with its conversation simulator) concerns not just the people who choose to do that interaction, but anyone they might be talking about or sharing data from. ChatGPT is framed as patient, discreet, non-judgmental, a place to take questions you'd be embarrassed to ask an actual person. People are likely often unaware that, far from having a discreet, ephemeral conversation, they are in fact producing written records, centralized with a single, unaccountable entity, even when they have ostensibly deleted their conversations.
(On the topic of sharing other people's data with ChatGPT and similar services, it is darkly hilarious that Tulsi Gabbard, the current US Director of National Intelligence, uploaded the JFK files to ask "an AI program" which parts could be declassified.)
As part of our book tour, I got to be in conversation with Carl Rhodes of the University of Technology Sydney in late June, and I think the arguments in his recent book Stinking Rich: The Four Myths of the Good Billionaire are relevant here too: when people (or, in this case, corporations) amass power based on wealth, that leaves us with unaccountable powerholders.
But just because they are unaccountable in the present system doesn't mean that we have to just give up and live with that. We can and should work collectively to assert the power of the people over the interests of the few. I would argue that a key starting point towards that future is supporting and insisting on journalism that does more than the NPR piece linked above. It's helpful to have OpenAI's actions exposed in this case, but we deserve reporting that takes a critical lens to what OpenAI is up to, rather than leaving that up to the reader.
Our book, The AI Con, is now available wherever fine books are sold!
