Two gaps and three pillars
by Matt May
Happy Monday! Hope you had a good weekend and weren’t fired by your board for unspecified lapses in communication, before spending the weekend architecting your return and succeeding in getting your foot back in the door, until the board that fired you picks a new replacement at the last second. Or whatever else has happened in the 8 hours since I wrote this.
This is the fourth issue of Practical Tips. If you’re new, welcome! You can read the first three in the archives if you want to catch up.
I’ve been thinking about this one for a while, and when I started writing it up, what came out was more like a book chapter than a newsletter. I decided to strip it down. Maybe the long form will be worth a read someday.
A lot of companies are doing DEI work wrong. And a lot of people are being left behind as a result.
If you ask the average Fortune 500 company what they’re doing for disabled people, they will likely point you to their accessibility team and the work they’re doing. But if you ask what they’re doing for racial or gender equity, they will point you to the DEI teams in their human resources organizations. In other words, companies direct their disability activities primarily toward the products they make, while they direct their efforts for the benefit of other historically underinvested groups primarily into hiring and employee experience.
That leaves two gaps. Accessibility teams, who typically work on products, don’t work in HR, and when they do take on HR issues, it’s because disabled employees are more likely to work on those teams and need to lobby for (and often even build) their own supports. Disability-related DEI efforts (in my jaded opinion) are directed more toward legally required workplace accommodations than toward attracting and promoting qualified disabled employees.
But the inverse is also true. In the US, the Equal Employment Opportunity Commission states:
Applicants, employees and former employees are protected from employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age (40 or older), disability and genetic information (including family medical history).
Human resources organizations are usually firewalled away from product work, so DEI programs in those organizations tend to focus on hiring, promotion and workplace inclusion—but not representation in the work the company conducts. This is increasingly an issue in generative AI, as employees find unrepresentative or even traumatic content being produced about their own communities, but are often told to route those complaints through unpaid employee resource groups.
As I mentioned last week, approaching accessibility as a technical or legal compliance activity gives companies reasonable protection against lawsuits, without necessarily acknowledging that access by disabled users is a civil right.
The inverse is true for all other forms of exclusion and marginalization. Employees of all different backgrounds and identities are owed a say not only in how the company treats them, and how it handles disputes involving them and the communities they represent, but also in how it conducts business in an equitable and nondiscriminatory way.
Right now, with both DEI and accessibility teams bearing the brunt of this year’s wave of downsizing, is a good time to think about how to approach the next rising wave of DEI. Many folks in the field will get a chance to rebuild corporate DEI programs in the future, and I think it’s important that any modern program address three key areas, across all identities:
Who makes up this organization?
How do we work together and manage conflict?
How do we express our values through the work we do?
Those are my three pillars for an equity-focused organization: Who are we? How do we work? What do we make?
“Who are we” is an existing human resources function: recruitment, hiring and promotion. “How do we work” is also HR, in the form of employee experience programs. The third pillar, I believe, is just as important as the first two. Turning your values into ethical and equitable products is critical, and now is the time to raise that work to the same standing as the existing HR functions. That’s an emerging field called product equity, and one I think many accessibility organizations should be looking into.
Next week, I’ll talk about the relationships between accessibility, inclusive design, and product equity.
Per my previous email
It took me exactly three weeks to write something I needed to refer back to. This is from issue #1:
The technologists are actively trying to cut the humanists out of the picture (…) They’re already trying to rewrite the rules governing their own accountability.
If you’ve been following the weekend drama over at OpenAI, make sure you look closely. The battle appears to be mostly between business types (once-and-future CEO Sam Altman, Greg Brockman, and allegedly Microsoft CEO Satya Nadella) and chief scientist Ilya Sutskever, whose focus is on “superalignment” (OpenAI’s term for keeping superintelligent AIs from killing us).
Meanwhile, venture capitalist Marc Andreessen, having recently outed himself as a cringey, regulation-hating, technology-at-all-costs extremist, has reportedly been lighting up the jerkstore formerly known as Twitter with support for an Altman-led OpenAI without any of that “oversight” foolishness.
Anyway, if you do read through all this mess, and I don’t necessarily recommend doing so until the dust has settled, what you will not find is any representation on behalf of marginalized citizens. I can’t even find anyone in DEI or accessibility roles at OpenAI, for one. Curious. Their website says they really care about that stuff.
What to read
Perhaps it’s just a coincidence that a report came out on Saturday that Meta cut its “responsible AI” team (hat tip: Karim Ginena). If you’re reading this and thinking, “didn’t they already do that last year?” Yes and no: that was their “responsible innovation” team.
In case you think I’m being alarmist when I say humanists are being cut out of the debate, here’s a Wall Street Journal article from earlier in the year detailing ethics teams being cut at Microsoft, Google, Amazon’s Twitch, and, of course, Twitter.
But don’t take it from me. Jeff Jarvis is both smarter and better-connected than I am, and he’s got the same question I do about the rarefied air of AI discourse:
Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL?
Finally, Eric Bailey pointed out an article about a lawsuit against US-based insurance company UnitedHealth, claiming the AI it uses to deny care facility claims has a 90% error rate. More people should be worried about the here-and-now, life-and-death issues AI is creating than about the fever dreams of armageddon that suck up all the attention.
That’s all for this week. Make it a good one. And may all your power struggles be resolved without coverage in the tech press.