Bunkum, Bots, and Belief

You’ve no doubt heard some of the outrageous things chatbots say when kept on a long leash. Just this week I saw an article about a chatbot responding to an innocent enough query with this disturbing message:
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
If these things are sentient, we’d better look out.
Of course, the chatbot may not have meant that literally. Maybe it had reached the end of its rope. I once heard a neighbor standing on her front porch screaming basically the same message to her husband. And of course you have only to pause a moment and think about the darker corners of the internet to imagine where the chatbot picked up that kind of language in its training dataset.
But it may be that chatbots aren’t limited to spewing bile and hatred. They might even have some benign uses. Like explaining to your crazy uncle, so you don’t have to, that life-saving vaccines don’t contain tracking chips.
According to recent research, chatbots are better than humans at talking people out of conspiracy theories. In the experiment, true believers sat down with a chatbot and explained why they believed in a given conspiracy theory.
The chatbot — soon to be dubbed “debunk bot” — then refuted the theory with fact-based counter-arguments. The bot was able to reduce true believers’ confidence in the truth of their conspiracy theories by an average of 20% — and the effect lasted up to two months. (If you’re interested, you can read some of the conversations here.)
It’s unclear why a computer program can chip away at irrational beliefs when another human can’t. Perhaps the believers were afraid that if they didn’t take the bot’s advice, it would kill them.
’til next time,
Avery
Image courtesy Creative Canvas via Pixabay