The Escalator From Hell (Across the Sundering Seas, #4)
I.
From a news story (if this seems weird, just stick with me):
> Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today.
>
> The 19-year-old singer was caught on camera being escorted out of the store by security guards.
>
> The singer was wearing a black hoodie with the label ‘Blurred Lines’ on the front and ‘Fashion Police’ on the back.
>
> Scroll down for video
>
> Shoplifting: Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today (pictured)
>
> The singer was wearing a black hoodie with the label ‘Blurred Lines’ on the front and ‘Fashion Police’ on the back
>
> The singer was also wearing a pair of black-rimmed glasses, a black jacket, black jeans and black sandals.
Why am I sharing a news story about Miley Cyrus with you today? Because every part of it except the first sentence was written by a machine learning model, an AI—including the “Scroll down for video” bit! That same AI generated this mediocre high-school essay:
> It is easy to identify why the Civil War happened, because so many people and so many books and so much television and films tell us that it was the cause, that it has something to do with race or economics or religion. There is much agreement that it was essentially a war of slavery on behalf of capitalism, about a century of slavery. But that’s not what most people think of when they think of the Civil War. Many people think of the war as about states rights. You’re not wrong about that, but it’s kind of misleading to say that the Civil War was a conflict between states’ rights and federalism. So let’s try again. What’s the idea that has inspired so many people to go onto the battlefields of Gettysburg and Antietam and Gettysberg and Petersburg and Fredericksburg? The American idea of the republic–a notion of limited government–is a great part of the history.…
And this fake social media comment:
> Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.…
If you saw the first on Gawker, or had the second handed to you by a run-of-the-mill high school student, or came across the third on Facebook, I doubt you’d think anything of them (with the possible exception of getting Miley Cyrus’ age wrong). They’re horrifyingly normal.
II.
A reader, in response to Issue #2, offered this thought:
> I might be misusing Niebuhr here, but in some of his essays on love and justice, he talks about a kind of imagination that helps us consider beyond the practicalities of what is right and ethical today and towards a transcendent understanding of love + justice that stems from a genuine change in the hearts of people.
>
> I wonder about a similar kind of imagination around how technology impacts not just things of convenience or artificially created ecosystems of commerce, but where our relationship to it is grounded in a vision and understanding of who we are and how we want to relate to each other as humans.
Another reader, responding to Issue #3, wrote of the oligopolies that currently entangle us:
> Given that the overindulgence in what Facebook offers is a key part of the current cultural backlash against Facebook, do we then need to wait until the evils of the monopoly fully materialize or can we somehow create a contrasting alternative which has a chance of succeeding purely on its own merits?
The common thread here, as I noted to each of them individually, is imagination—and a thicker notion of what is good for humans than “people say they like this thing.” The Silicon Valley boom has been built on refusing even to ask whether any given idea is good not merely in the small but in the large, or what its costs might be: if there is profit and some basic attractiveness to the proposal, build it.
We need people doing two kinds of imaginative work to counter that tendency: some closely engaged, interrogating specific new ideas in those terms; and others sufficiently disengaged to be thinking of better ways to use (and ways not to use!) technologies at all. That kind of imaginative work requires time—but time is the one thing no one can afford in the world of venture capital.
III.
I wasn’t actually sure what articles I was going to share this week, until I opened my RSS feed this morning and saw that L. M. Sacasas had commented (insightfully as ever) on this week’s news:
- of the AI-generated text of which I shared a few samples above
- and of AI-generated faces that are difficult to distinguish from real ones, even if you know what you’re looking for.
This kind of news is no longer surprising (even if it is increasingly horrifying). As I noted when that story about digitally-faked pornography came out—you should read it, if you can stomach it—the gist of Silicon Valley’s take on all of these horrors is: “Yeah, that’s bad, but it’s worth it.” No one seems to understand the (incredibly basic!) idea that not every technical advance is worth its societal cost.
Except that, for a welcome change, the team at OpenAI behind this new natural language AI does. They have chosen not to release either their trained natural language AI or the code they used to create it—explicitly because they want to take time to work through the implications of this. Fake news stories, mass-generated fake reviews, mass-generated fake social media comments… there are a lot of ways this capability could be abused. More: if OpenAI can do this today, then the technological capability will be widely available soon, given the current rate at which these things are developing. As one member of the team noted to the Guardian:
> [The] goal is to show what is possible to prepare the world for what will be mainstream in a year or two’s time. “I have a term for this. The escalator from hell,” Clark said. “It’s always bringing the technology down in cost and down in price. The rules by which you can control technology have fundamentally changed.”
As a result, they are experimenting with responsible disclosure: not publishing everything one accomplishes, not racing forward simply because a particular technical advance is achievable. This is the norm in other industries where research carries high risks, such as cybersecurity and biotech. For the modern tech sector in general, and for AI researchers in particular, though, this idea is novel—to the industry’s great shame. Credit to OpenAI for attempting to change the game here.
I encourage you to read and think on their whole news release, but I conclude here on a note of hope, with part of their conclusion—emphasis mine.
> Today, malicious actors — some of which are political in nature — have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”. We should consider how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures. Furthermore, the underlying technical innovations inherent to these systems are core to fundamental artificial intelligence research, so it is not possible to control research in these domains without slowing down the progress of AI as a whole.
>
> …
>
> This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas. Other disciplines such as biotechnology and cybersecurity have long had active debates about responsible publication in cases with clear misuse potential, and we hope that our experiment will serve as a case study for more nuanced discussions of model and code release decisions in the AI community.