Stay Human
First post! Why AI/ML harm is a pressing equity issue.
by Matt May
Hi there! Welcome to Practical Tips, the official newsletter of Practical Equity and Inclusion. My name is Matt May, and I’m Practical’s founder. Over the next several weeks I’ll talk a little about Practical and the kind of work we’ll be doing, but the main reason this exists is to be able to talk about my fields of study—product equity, inclusive design and accessibility—and the challenges we face, particularly in tech.
So naturally, this first post is about AI.
I’ll start by saying something uncontroversial: I am a humanist. I think that the things we create can only succeed if they meet humans where they are, here and now. I do not believe in technology for technology’s sake. Nor do I believe in designing for generations of humans who have yet to be (though we do owe them a livable planet on which to do what they will). We as designers and engineers need to direct our efforts toward learning from our past rather than trying to control the future.
I actually have hope for artificial intelligence and machine learning (AI/ML). Some of it, at least. But I want there to be some guard rails around how it impacts actual living people, and an acknowledgment that replacing labor with technology has economic impacts that need to be accounted for, most likely through mechanisms like taxation and universal basic income. I think AI/ML model cards are a good first step, but interpretability should be part of every model. Consumers of AI/ML, and those impacted directly and indirectly, must have a meaningful voice in how this technology is deployed, wherever and whenever it is used.
Humanism, it is safe to say, ain’t where the money is at right now. This year has been brutal for career tracks I consider to be the most humanist in tech: user research; diversity, equity and inclusion; accessibility; and human resources. It’s no surprise where the momentum is: it’s all swung toward AI/ML. It’s hard to find new roles in tech that do not touch on it.
The AI/ML space is not being driven by humanists. Far from it. In fact, those who have documented actual harms with existing AI models have been fired from companies developing them. Google, Microsoft and Twitch each eliminated their AI ethics teams in 2023. The social sciences are being forced out of product and into academia, external advocacy and activism. Increasingly, the AI/ML platforms in current use are in the control of the technologists alone.
A long time ago, I referred to the technologist view of humans as being “frustratingly analog.” Where the humanist perspective on a product is individualized—how is this working for you?—the technologist’s is often a purely quantitative measure.
I cringe whenever I hear a given organization is “data-driven.” It reminds me of all the times I brought a list of accessibility issues to an engineering manager, only to hear those dreaded words: “how many people are we talking about here, really?” One gets the sense from talking with a lot of engineers that life would be a lot easier if we all just happened to do tasks the same way. If we all fit neatly in a small number of categories. Preferably two or fewer. In other words, it’d make a lot of technologists’ lives easier if we behaved more like machines.
Take that attitude and supercharge it, and you’ll get close to capturing the AI/ML vibe. Evaluating harm on a case-by-case basis in these models is effectively impossible: they’re trained on billions of images and documents, and in some models now represent trillions of data points. None of the companies that make them would survive the lawsuits seeking relief for actual damage to actual humans, and few of them can afford to work proactively to prevent harm. That’s why they care so much about governance. Why should we have to mess with judges and juries when they don’t scale? Just give us an aggregate dollar figure measuring how much harm we did, and we’ll write a check.
What does all that have to do with equitable design? Literally everything. The technologists, the folks who’d rather measure harm like it’s Net Promoter Score, are closing ranks around the work they’re doing. Those who stand to gain the most from AI/ML are coming out with “manifestos” (CN: sophistry) naming enemies like (these are direct quotes) “sustainability,” “ESG,” “social responsibility,” “stakeholder capitalism,” “trust and safety,” “tech ethics,” “bureaucracy,” and “the ivory tower.”
They’re saying the quiet part out loud.
The technologists are actively trying to cut the humanists out of the picture. If you’ve been sitting this one out because you think AI/ML is a fad, watch out. They’re already trying to rewrite the rules governing their own accountability—and liability—with governments who understand a fraction of what they’re proposing. What’s going on at a policy level is going to affect everyone, much sooner than most think.
As if on cue, the Biden administration issued an executive order today, setting “new standards on security and privacy protections for AI.” There’s some important stuff in there: watermarking requirements to guard against deepfakes and fraud, for example. But when it comes to actual issues of equity and bias, it’s full of lip service. While Biden is actively directing federal agencies to engage on some issues, on privacy, equity and civil rights the order mostly begs Congress to pass legislation, and points to previous actions. It also directs the Department of Education to “shape AI’s potential to transform education,” a direction that would invariably lead to less hands-on instruction in underinvested communities that actually need so much more of it.
Given the US government’s power as a customer for these products, this order amounts to a buyer’s guide—one that was written with the direct input of the sellers. There will be tens of billions of dollars available to help big companies comply with the technical requirements that come out of this work. How much will go to the academics, critics and advocates for a more equitable future, and how much will be used to offset big companies’ own accountability for the harm they’ve already inflicted?
So far, in the AI/ML space, the technologists aren’t just beating the humanists. They’re shutting us out. It’s time to get in the game.
What to read
Speaking of good timing, Joy Buolamwini’s book, Unmasking AI, is out tomorrow. Her article in MIT Technology Review from last weekend is worth a read, as well, and you should take a look at the Algorithmic Justice League, which Buolamwini co-founded with an all-star team of humanist researchers. You can read about her and several of her colleagues in this August article in Rolling Stone.
—
Anyway. Wow! What a weekend. I threw out a subscribe link after 5pm ET on a Friday, and Practical Tips already has followers on five continents! I’ve been greeted with a stream of new-subscriber emails all weekend.
Thank you. Hang in there. They’re not all going to be this heavy. 😳
Office Hours
…are open for this week. For those of you who don’t know, I’ve held free office hours weekly for the last six years, and I don’t plan on slowing down. Sign up for a slot here.
I asked my LinkedIn connections if I should offer paid appointments (after I was informed that making money is a common business practice), and the answer was an overwhelming yes. I’ve got a plan where more paid clients mean more free slots. That’s coming soon.
Okay, folks. We’ve got our work cut out for us. Get your rest. Speak your mind.
-
m