Learning to Not Believe Your Eyes
It feels like deepfakes hit the mainstream this week, with disastrous results.
A deepfake is a “video of a person in which their face or body has been digitally altered so that they appear to be someone else.” The first notable deepfake appeared in 2017; since then, the technology to create them has become widely available, and its cost has fallen to next to nothing.
This is a recipe for malfeasance.
In last week’s Takes & Typos, we talked about how online scammers—promoting non-existent government benefit programs—have posted over 1,600 videos on YouTube, garnering nearly 200 million collective views. Many of those videos featured AI-generated images of celebrities such as Oprah, the Rock, Will Smith, and Taylor Swift hawking fake benefits. This week saw an escalation as Zombie Twitter was flooded with pornographic deepfake videos of Taylor Swift. According to The Verge, thanks to nearly non-existent content moderation practices, the videos were everywhere:
One of the most prominent examples on X [Twitter] attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy. The post was live on the platform for around 17 hours prior to its removal.
But as users began to discuss the viral post, the images began to spread and were reposted across other accounts. Many still remain up, and a deluge of new graphic fakes have since appeared.
While we weren’t looking, the capability of movie special effects studios was democratized. The computer processing power to create these videos costs about $4 an hour, using programs like deepfakesweb. You may think this is no big deal, but it’s a ticking time bomb.
I invite you to imagine two scenarios.
Scenario One: This fall, after the nominating conventions, deepfake videos of a candidate making racist comments in a private conversation get posted on 4chan. First, they reach TikTok, whipping up anger among young voters. As the videos spread, media fact-checkers can’t keep up. The message “no, Candidate X didn’t call Group Y a bunch of Slur Zs at a dinner in Aspen” can’t get traction. As the videos cross 200 million views, more views than there are voters in the US, the candidate’s support craters.
Far-fetched? This scenario unfolded in Taiwan earlier this month, as various factions flooded the country with misinformation and deepfakes in the run-up to the January 13th election.
In the US, a country with high levels of political polarization and low levels of media literacy, such a campaign could upend the election.
As a teacher, I find scenario two darker and more personal. It involves young men using deepfakes to cyberbully female students, similar to what happened to Swift this week. Sent via WhatsApp and in Telegram groups, the shocking images shoot around the student body and school community. The young women in them have no idea their images have been manipulated, and as the images spread, lives and reputations are ruined.
This, too, has already happened in Almendralejo, Spain.
According to the BBC, the suspects accused of creating the images are between the ages of twelve and fourteen.
Take it from someone who was previously the target of maliciously circulated photoshopped images: y’all don’t want this in your lives.
Trust me.
As deepfake technology becomes cheaper and more readily available, we must be more vigilant about what we share and believe. According to GZERO, we’ve already seen AI-generated phone calls trying to suppress votes in the New Hampshire primary:
The New Hampshire Justice Department said it is investigating reports of robocalls impersonating President Joe Biden. The calls, allegedly featuring an AI version of Biden’s voice, encourage voters to stay home on Tuesday and instead save their vote for November. “Your vote makes a difference in November, not this Tuesday,” the faux Biden said.
While some may disagree, I stand by my belief that technology remains (largely) value-neutral. The concern lies with the unethical actors exploiting it and the absence of regulations guiding its deployment.
As the 2024 campaign unfolds, we’re going to see more of this, especially targeting low-income and low-information voters. On 404 Media’s podcast this week, in talking about an adjacent problem—AI-generated spam news stories flooding Google search results—host Jason Koebler said, “With AI, the internet is becoming robots writing for robots, on a scale humans will never be able to do.”
We are simply not ready for the deluge of AI-generated nonsense heading our way.
As always, if you have any thoughts or feedback about the newsletter, I welcome it, and I really appreciate it when folks share the newsletter with their friends.