Liberation, or lock-in? The disability innovation trap
by Matt May
One thing you all should know about me is that I have a relationship with gadgets that, for an extended period, bordered on obsession. If it’s a tablet or a pair of headphones, I’ve probably owned it. I’ve shelled out an ungodly amount for VR goggles that I only ended up using for a handful of hours. Weird keyboards, pointing devices, lenses, 3D printers, tiny computers, projectors, you name it: it’s probably sitting somewhere in my basement.
I also have a small collection of assistive technology. For example, I have a small braille notetaker that I use to show off how refreshable braille displays work to people who’ve never encountered them. I have an Eone Bradley watch, which is custom-built with ball bearings and magnets to read the time with a touch. And I bought an Xbox Adaptive Controller and Logitech’s tactile switch kit just so I could noodle around with them. But in general, I don’t keep a lot of these things I’ve used across my career because they’re made in small batches, which makes them expensive, and honestly, kind of fiddly a lot of the time. They’re working tools, and as such, they’re better off in the hands of someone who can use them, rather than buried in my geek closet.
And that’s the thing about commodity hardware, all these other gadgets. They make a lot of them, and they’re cheap. Sometimes, we even find that sweet spot where a mass-market product has a feature set that could provide a service to disabled users that it wasn’t originally designed for. One such product is Meta’s Ray-Ban line, which marries a chunky-looking pair of Wayfarer sunglasses to a camera, a mic, some hidden speakers in the temples, and the thing Meta really needs in order to gain a foothold: its cloud-based AI engine.
For the last year, TV audiences have seen Chris Pratt and Chris Hemsworth slapstick their way through making these things look marginally useful in the real world. It’s a big ask for a gizmo that goes for around $400 and up, not counting any prescription you may need for it. But somehow, Meta has managed to sell 2 million of the things in the first two years, partly at the expense of its Oculus VR headsets.
So what did they lean on in their second-generation product announcement? Accessibility. Meta knows that the only real, marketable success story for its first-gen glasses is their adoption among blind customers, who have taken to them in droves for their ability to describe scenes in near-real time, read signs and menus, and recognize objects in view, while also answering time-sensitive queries in less time than it would take to pull out a phone and fiddle around with it. The promise of inclusive design, in a nutshell, is that by starting with the needs of the most excluded users, you can make something more useful all around than if you go for the biggest possible audience first. But sometimes, the market makes that decision for you.
The company leaned in (ahem) to another accessibility story with the release of its $800, second-generation glasses with graphics support, showing off a real-time captioning feature. Apple announced something similar—a live translation feature—in September with its iOS 26 and AirPods Pro 3 releases. Both companies have a shared need: to become (or in Apple’s case, remain) a necessary go-between for everyday communication. In other words, it’s a race to lock their products into your everyday life. Meta CEO Mark Zuckerberg, with characteristic subtlety, basically just said you won’t be as smart as the people around you if you don’t have his company’s AI resting on your forehead.
Am I going to add these glasses to my collection, either because they’re cool new tech, or because I need to track them for work? Hell no. It’s safe to say, given that I’ve advocated for employees to quit and for users to grey-rock its platforms, and have called the company the free space in Tech F**kery Bingo, that my confidence in Meta is somewhat low. I use Facebook from one browser, sandboxed, over VPN, and I log in as little as possible. I’ve made my peace with giving Apple privileged access to my daily life in a way that I can never see myself doing for Meta, particularly where it involves them seeing what I see and hearing what I hear. I do not trust Meta as far as I can throw them.
And that’s just speaking as someone who doesn’t need live captions or audio description. Relying on a company—any company—to provide critical information like that requires a deep level of trust, because you are not just passing them your location and account information, but intimate and potentially legally sensitive material which is often protected from search and seizure. A human relay captioner, for example, is usually sworn to pass each participant’s spoken or typed messages to the other, irrespective of their personal opinions on the subject. By law, telecommunications relay service (TRS) workers in the US aren’t allowed to alter or disclose a relayed conversation, whether or not it’s graphic or offensive, and even if they may be aware its content is criminal in nature. While a rise in IP relay fraud has been an unfortunate side effect, this limitation serves an important purpose: to ensure that nobody is tampering with the user’s communications, or their privacy.
This should already be setting off some alarm bells. At the low end, will these glasses display spoken profanities, or will they sanitize them? Will they adapt to the names, dialects and uncommon words the user is most frequently exposed to? Will they describe adult content to adults who want that, or would Meta’s content censorship policies apply to the open world? Will users have any expectation of privacy over the text transcripts of what their glasses hear, or what their camera lenses see, or is it all fair game for the training algorithm, and by extension, any government agency that might be able to subpoena that data from Meta? In 2025, providing a trail of where you went and who you saw there is not just a theoretical risk, but a practical one—for the user, and everyone in their camera’s field of view.
In other words: what is the real cost of adding this piece of tech to your life, and coming to depend on it year after year? Is this actually designed for the needs of disabled people, with the implications it may have on their long-term safety and privacy, for example, or will this feature set just wither on the vine when it gets a little annoying to keep track of?
It seems like we’ve come to take for granted that a new device like this is just a neutral hardware platform, like PCs were, and that the real innovation can happen thanks to third parties building assistive technology on top of it, which is how we got the screen reader. What we tend to forget is that this is an integrated package of hardware, software and cloud services, and that for all intents and purposes, it’s locked down. You won’t be running a custom Linux build on your Wayfarers; any apps will run at the pleasure of Meta and subject to the constraints of its app store. So the real question is just how much of this device would actually be under the control of software developers, and by extension, users.
After all the Meta-bashing I’ve done so far, I have to admit that last week they surprised me by opening developer access to their glasses. Third parties (who, for the record, will be vetted and most likely have to sign a license) will get access to device APIs controlling the camera, microphone and speakers (but not the new display or Meta Neural Band accessory), with a view to releasing apps sometime in 2026. The FAQ also mentions support for non-Meta AI engines, which is promising if they can be trusted not to backtrack on it later on.
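To make that question concrete, here is a purely illustrative sketch of the shape such third-party access could take. None of these names come from Meta’s published toolkit; every interface, function and type below is a hypothetical stand-in, written only to show where the control points sit. A live-captioning app under this model is mostly glue between the glasses’ microphone, whichever captioning engine the user trusts, and the speakers, and every one of those seams is a place the platform owner can either open up or lock down.

```typescript
// Hypothetical interfaces only: Meta has not published this API surface,
// so these names are illustrative stand-ins, not the real SDK.

interface GlassesAudioStream {
  // Delivers raw microphone chunks captured by the glasses.
  onAudioChunk(handler: (pcm: Int16Array) => void): void;
}

interface CaptionEngine {
  // Any speech-to-text backend the user picks (Meta's or a third party's).
  transcribe(pcm: Int16Array): Promise<string>;
}

interface GlassesSpeaker {
  // Text-to-speech or earcon output through the temple speakers.
  speak(text: string): Promise<void>;
}

// The app itself is just glue: capture -> user-chosen engine -> output.
// Which engines are allowed, and whether audio ever leaves the device,
// is exactly the kind of policy the platform owner still controls.
function runLiveCaptions(
  mic: GlassesAudioStream,
  engine: CaptionEngine,
  out: GlassesSpeaker,
): void {
  mic.onAudioChunk(async (chunk) => {
    const text = await engine.transcribe(chunk);
    if (text.trim().length > 0) {
      await out.speak(text); // or render to the display, if and when that opens up
    }
  });
}

// Mock wiring, just so the sketch is self-contained and runnable.
const mockMic: GlassesAudioStream = {
  onAudioChunk(handler) {
    handler(new Int16Array(1600)); // one fake 100 ms chunk of silence
  },
};
const mockEngine: CaptionEngine = {
  async transcribe() {
    return "hello world";
  },
};
const mockSpeaker: GlassesSpeaker = {
  async speak(text) {
    console.log(`[caption] ${text}`);
  },
};

runLiveCaptions(mockMic, mockEngine, mockSpeaker);
```

The interesting part isn’t the plumbing; it’s whether users are genuinely free to swap in the engine behind that seam, and to keep doing so later, which is the difference between an assistive platform and a captive one.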
There’s a clear bottom line here. Gearing mainstream devices for assistive technology tasks is not just a cool marketing exercise. When you propose to put a device on someone’s face, they will want it to help them navigate public spaces. And if you do that without planning for their safety at all times, you run the risk that some of them are going to die. This isn’t just an issue for blind and low-vision people but, with that new display, for anybody whose field of vision might be obstructed by a notification being raised. Or even anyone being distracted while walking, wheeling or driving around.
Beyond that, there are countless smaller details that need to be thought through. If a device can track your location and capture audio and video, then anything it keeps, whether or not it’s saved to your camera roll, is called “evidence.” If you’re okay with that as a non-disabled customer, that’s fine. But once this crosses over to something a person relies upon like they would a white cane or a hearing aid, mitigating, or at least disclosing, the additional personal risk a user incurs to their own privacy becomes part of the maker’s responsibility.
Finally, AI models have biases: both those imposed by the model makers through their training material and system prompts, and unintentional ones we find through their interactions with users. They’re also updated at an astonishing clip. I wouldn’t entrust any one of them with my physical safety; unfortunately, thanks to the pre-approval of automated driving models in Teslas, Waymos, et al., I don’t have that choice as long as I walk the streets in a number of states. But the worst-case scenario for users who choose these devices would be not knowing what model they’re using, not being able to update it, or not being able to change to a competitor’s. These are tradeoffs that we need to understand before we can agree to them. Once we’re locked into these platforms, they don’t tend to be opened up again. For those of us who come to rely on them as assistive technology, that may end up becoming as dangerous to personal liberty as it is liberating in everyday use.
Office hours
I keep my calendar open on Thursdays for people who want to talk about working in DEI roles in tech, especially given, you know, all this. These are free, as usual. I’ll be doing them through October 23rd before taking a break.

Calendly: My office hours are for people with questions about product equity, inclusive design, accessibility in general, careers in all of the above, and dealing with depression, anxiety or stress due to all of the above. Free sessions are available on Thursdays.
I have been asked about offering paid sessions again, and I’m planning to do that starting in November. These will be for people who want more in-depth career support, or are having doubts about staying in tech. If that’s you, drop me a line and I’ll give you a preview of what I’m planning.