Last week, white supremacists and nazis stormed the American Capitol building; this week, everyone is wondering how we got to this point. It’s safe to say that there has been a fair amount of enabling within the tech world. We should take advantage of this moment to reflect on what we can do with our own tech platforms. We have a lot to reckon with as creators of technology, and a responsibility to recognize that all code is political.
In this issue: Why de-platforming Parler is not enough, what free speech even is, the consequences of speaking out against actual nazis, and how identifying insurrectionists isn’t enough to rehabilitate facial recognition’s bad rap.
If you have a recommendation for a theme, or an article you’ve seen recently that you think I ought to share, please do let me know. You can reply to this email, or hit me up on Twitter.
As always, you can find back issues of The Ethical Technologist in the archives. And if you found this issue thought-provoking and informative, please share with your friends and colleagues!
If you’ve not heard of Parler, it’s like Twitter, only the founders refuse responsibility for how the platform is used. Unsurprisingly, it’s become the platform of choice for the alt-right, and it served as the organizing grounds for last week’s American insurrection. Within days, Parler was removed from Apple’s and Google’s app stores, and its hosting providers abandoned it. But it has already found a new host and is back online. One might wonder: If Parler was in violation of various services’ terms of service, why weren’t those terms enforced sooner? Terms of service are not just documents we pay lawyers to assemble so our product can be listed in the app store. They are, when enforced, powerful tools for holding our customers to standards of good behavior.
Parler’s de-platforming and Trump’s social media bans have led the right to decry these actions as attacks on free speech. But these claims are not made in good faith—they are an attempt to distort a principle designed to encourage thriving democratic discourse in order to achieve decidedly non-democratic ends. J.S. Mill, an early proponent of the concept of free speech, famously noted that inciting an angry mob to violence was not truly speech, but an act unworthy of protection. Here, Peter Ives encourages us to reflect critically on the underpinnings of the concept of free speech, why they were relevant in the 19th century, and which of them remain relevant today. Just because I have something to say doesn’t mean you are obligated to give me a platform to say it.
Last week, a Jewish GitHub employee based in DC warned his colleagues that nazis were roaming the streets. As a consequence, he was reprimanded by HR for using the word “nazi” in the company’s Slack. Two days later he was fired. Although it’s going to be almost impossible for GitHub CEO Nat Friedman to justify the firing, it’s worth reflecting on how at-will employment laws stifle employees’ willingness to communicate openly and honestly. This event is likely to have a significant chilling effect on public discourse at tech companies in the US, at a moment when what we need more than anything is open and honest dialog inside companies about the ethical implications of their platforms and their policies.
It turns out the revolution was not only televised, it was telegraphed in advance on social media, and live-streamed to adoring viewers. The result is a very large pool of facial images that are being fed to facial recognition programs to identify the culprits. But Joan Donovan and Chris Gilliard argue that good outcomes cannot offset harms created elsewhere: The function of a system is its output. And the output of facial recognition technology is a transfer of power away from vulnerable groups, even if it achieves some good along the way.
Case in point: NiJeer Parks, arrested for a crime he did not commit entirely on the basis of a facial recognition match. It turns out that the database used for matches consists primarily of previous offenders, and because Parks had a record, his face was in that database. The bias here should be obvious: Relying so heavily on this database for evidence means that ex-convicts are more likely to be matched against photographs—and therefore accused—than the broader population.
Does your business need a developer strategy? You’ve heard of developer relations, but what is that? Do you even need it? Katsudon.tech can help you navigate the complex world of developer relations. Every business is different, and we can help you evaluate your developer goals, determine the best way to get started, and ensure you feel confident that you are spending your money effectively. To find out more, and contact us to set up an initial consultation, visit our website: https://katsudon.tech/.
So much for Issue 15! Thanks for subscribing and reading, and thanks to the many folks who have continued to share content with us! If you enjoyed this issue, share with your friends! If you have an article that should be featured in an upcoming issue of The Ethical Technologist, let me know by either replying to this email (what, a newsletter you can reply to!?), or pinging me on Twitter.
Until next time, yours, Chip Hollingsworth & Don Goodman-Wilson