Countering Dangerous Speech
Hi friends --
There is so much debate about how the big tech platforms police hate speech, and whether they made the right call in kicking off Donald Trump for inciting violence.
But the interesting thing about violence is that we know a bit about what incites it, and it's not hate so much as fear. Fear is what leaders use to inspire violence. Fear that the election has been stolen. Fear that women are being erased by trans people. Fear that children are being groomed by pedophiles.
This is dangerous speech. Susan Benesch of the Dangerous Speech Project says the key feature of it is that it persuades “people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous.”
In my latest New York Times Opinion piece, titled "Few Are Addressing One of Social Media’s Greatest Perils," I describe the dangers of fear-inducing speech, how prevalent it is online, and outline some strategies to contain it.
A few key points:
Fear speech is hard for automated systems to identify because it doesn't always rely on the slurs and derogatory words often found in hate speech, Rutgers University professor Kiran Garimella and coauthors say in the first large-scale quantitative study of fear speech, published in 2021.
Users prefer fear speech to hate speech, Garimella and his coauthors found in their second study of fear speech published earlier this year. The “nontoxic and argumentative nature” of fear speech prompts more engagement than hate speech, they found.
Most tech platforms do not shut down false fear-inciting claims such as “Antifa is coming to invade your town” and “Your political enemies are pedophiles coming for your children.” But by allowing lies like these to spread, the platforms are allowing the most perilous types of speech to permeate our society.
Researchers are looking at ways to design online platforms to reduce the amplification of fear speech. In an upcoming paper, Ravi Iyer, Jonathan Stray and Helen Puig Larrauri suggest that platforms can reduce "destructive conflict" by relying less on "engagement metrics" that boost posts with high numbers of comments, shares or time spent. Instead, platforms could boost posts that users explicitly indicate they found valuable.
In fact, Facebook has quietly shifted away from engagement metrics when it comes to how it amplifies political content. In a blog post update last month, the company said it is “continuing to move away from ranking based on engagement” and is instead giving more weight to “learning what is informative, worth their time or meaningful” to users.
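To make the distinction concrete, here is a minimal sketch in Python of the two ranking approaches. The post fields, weights, and example numbers are all invented for illustration; no platform publishes its actual formula.

```python
# Hypothetical comparison of engagement-based ranking vs. ranking by
# explicit "this was valuable" signals. All fields and weights are
# illustrative assumptions, not any platform's real scoring function.

def engagement_score(post):
    """Rank by raw engagement: comments, shares, time spent."""
    return post["comments"] + 2 * post["shares"] + post["seconds_viewed"] / 60

def value_score(post):
    """Rank by explicit user feedback, e.g. survey responses
    indicating a post was worth their time."""
    return 3 * post["marked_valuable"] - 2 * post["marked_not_valuable"]

posts = [
    # A fear-inciting post: huge engagement, but users rate it poorly.
    {"id": "outrage", "comments": 900, "shares": 400,
     "seconds_viewed": 30000, "marked_valuable": 5, "marked_not_valuable": 40},
    # An informative post: modest engagement, rated highly by users.
    {"id": "informative", "comments": 50, "shares": 30,
     "seconds_viewed": 4000, "marked_valuable": 60, "marked_not_valuable": 2},
]

by_engagement = sorted(posts, key=engagement_score, reverse=True)
by_value = sorted(posts, key=value_score, reverse=True)

print([p["id"] for p in by_engagement])  # the outrage post ranks first
print([p["id"] for p in by_value])       # the informative post ranks first
```

The point of the sketch is simply that the same two posts trade places depending on which signal the ranker optimizes, which is why the shift Iyer, Stray and Puig Larrauri propose matters.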
But in the end, the algorithms alone aren't going to save us. We, the users of the platforms, also have a role to play in challenging dangerous speech by calling out fear-based incitement through "counterspeech." The goal of counterspeech is not necessarily to change the views of true believers but rather to provide a counternarrative for people watching on the sidelines.
So next time you see some fear-inciting speech online, maybe take a moment to call it out. Pro tip: using humor is a particularly effective response.
Thanks for reading.
Best,
Julia