Let's talk tech Thursday #21
This week we talk Ray-Ban Meta glasses, Proton Mail, and a new pendant for audio transcription. Also, why it's so hard to leave social media.
Another week, another newsletter. Welcome back!
Our top story this week concerns the Ray-Ban Meta glasses and the breathtaking breaches of privacy Meta is currently being sued over. Among the accusations: workers are going through footage from users' glasses that includes credit card details, bathroom visits, and sex, often without the users' understanding or consent.
We also check in on the darling of the privacy-conscious, Proton. And, rounding off the data consent/privacy theme of the week, we take a look at a new pendant that will transcribe your voice and only your voice.
Finally, for our blog spotlight this week, there's a deep-dive look at "algorithmic media" and why it's so difficult to leave places like X/Twitter and Facebook.
Let's dig in...
Top Story
👓 Meta are being sued for their Ray-Ban Meta glasses
A quick summary
Ray-Ban Meta glasses - that tech gadget you kind of wanted but felt a little weird about on the grounds that it felt kind of pervy - are at the centre of a lawsuit accusing Meta of being very pervy.
I'm being glib for the sake of a line, and I shouldn't be. This is actually a deeply troubling story. In a report by a Swedish newspaper, it transpired that Meta's AI models were being trained by workers in Kenya who were watching footage from the smartglasses. Contractors were tagging objects in clips that included "bathroom visits, sex and other intimate moments". In the subsequent class action lawsuit against Meta, it's being argued that they (Meta) failed to make clear the use of human moderators in the video review process.
Lots of organisations are covering this, as you might imagine. As well as the original article linked above, you can read about it from PetaPixel, the BBC, or Mashable, amongst others.
Why were people watching the videos?
We're far enough into the "age of AI" now, that it's sometimes easy to forget that these models do still need to be trained in order to work. If you've got a security camera that can tell the difference between your dog and the postman, it's because (to vastly oversimplify the process) someone has sat through hours of footage of dogs and postmen and told the AI "this one's a dog", "this one's a postman", "this one's a dog", "this one's actually a cat, but I can see how you got confused". That's essentially what the Kenyan workers identified in the newspaper article are doing. Only they're using the footage from people's smartglasses to do it.
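For the curious, here's what that labelling step looks like in deliberately toy form. This has nothing to do with Meta's actual pipeline; the data, labels, and nearest-centroid "model" are all invented to show the shape of the idea: humans attach labels, and the labelled examples become the thing the model learns from.

```python
# Toy illustration of supervised labelling: a human reviewer has tagged
# each clip, and the (features, label) pairs become training data.
# The feature numbers stand in for whatever a vision model would
# actually extract from video frames.

labelled_clips = [
    ([0.9, 0.1], "dog"),      # "this one's a dog"
    ([0.8, 0.2], "dog"),      # "this one's a dog"
    ([0.1, 0.9], "postman"),  # "this one's a postman"
    ([0.2, 0.8], "postman"),  # "this one's a postman"
]

def train_centroids(data):
    """Average the feature vectors per label: a nearest-centroid 'model'."""
    sums, counts = {}, {}
    for features, label in data:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(model, features):
    """Predict the label whose centroid is closest to the features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lbl: sq_dist(model[lbl], features))

model = train_centroids(labelled_clips)
print(classify(model, [0.85, 0.15]))  # a dog-like clip -> "dog"
```

The point is simply that none of this works without someone, somewhere, having watched the footage and written the labels down.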
Why? In a suggested upcoming feature, the glasses will be able to recognise other people by sight and do things like remind you of meetings you have with that person (for... reasons...?). That's on top of already being able to translate text you're looking at, and ongoing work to allow the glasses to identify specific objects.
There's a clear use case for glasses that recognise people and objects for those who are blind or have low vision. But even if you believe that those are the only things Meta would do with that kind of training data, they are still training their AI on videos of people going to the bathroom.
Our days of not reading the fine print are certainly coming to a middle
One big problem with all of this, and what one has to assume will be the core tenet of Meta's legal defence, is that they put all this in the terms and conditions. As ZDNET reports, Meta "reserves the right to share user data from Meta AI and wearable devices, such as the Meta Ray-Ban smart glasses, with moderators for review".
Will that be enough to get Meta off the hook? Who knows. They do have very expensive lawyers. But even if people accept that the glasses can record you while you go about your private business, part of this issue is that users weren't always clear on when the glasses were recording.
Here again, Meta tries for an out. It claims that the videos sent to its human workers have already had privacy filters applied, such as redacting bank card numbers or blurring faces. The workers in Kenya say that this is true for maybe 50% of the content they see.
This impacts everyone
A while back, we talked about Microsoft Recall, and how a big concern wasn't just that your messages were being sent to Microsoft, but that the contents of messages from your friends, colleagues, etc were also being sent. While you can at least pretend to read the terms and conditions before blindly accepting them, your friends don't have that option. There is no possible legal - or actual - way in which they can give consent.
Here we have the same problem, but infinitely more scary. A person wearing Ray-Ban Meta glasses could record everything you do and say in a public space, in a meeting, or as they pass you on the street. All without you even knowing it's happening. In a sentence that seems absurd to have to write out, you cannot give consent to something you don't know is happening.
And as a final thought on this, if you don't think that having people walking around with AI-powered facial recognition devices on their face is cause for concern, you may have missed the Amazon Ring Search Party backlash from a few weeks ago.
What else is happening in the world of tech?
📧 Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester
Proton are a Swiss tech company that provide, amongst other products, VPN services and email accounts. One of their big selling points is that they pride themselves on privacy: capturing as little of your data as possible, highly secure encryption, and being answerable only to the Swiss government. It makes them a popular choice for those worried about the tendency of big tech to overreach.
This story is doing the rounds this week because, on the face of it, it looks as though Proton have kowtowed to an FBI request for information on individuals involved in protesting a police training facility in Atlanta.
In actuality, what happened was the FBI asked the Swiss government, who then asked Proton, who then released a limited amount of information. It might seem like a technicality, but the head of comms behind Proton Mail, Edward Shone, argues it's an important one.
For many though, the fact that Proton is sharing any data at all with third parties is upsetting. Whatever your takeaway, it's an important reminder that, however much an entity might align with your values, to do business they still need to operate within the construct of the law. Even if that is Swiss law.
🤖 Former Apple engineer raises $5M for a note-taking pendant that only records your voice
This is three weeks in a row now where I've featured some new product (our main story this week notwithstanding). I'd love to say it's because I'm getting a kickback from each of them, but alas I'm still waiting for the day I can sell out to big tech (j/k). I'm sharing this story because I think there is a small but growing trend towards less "Orwellian" tech.
In this case, we're looking at a pendant that transcribes audio to text - but crucially only the audio of the wearer. We've talked before on this newsletter about the inherent problems of consent when it comes to (e.g.) the continued persistence of AI Agents joining Zoom meetings uninvited. And of course with our top story this week there is a plethora of issues around the lack of consent of people in your eyeline while wearing smart glasses. This, then, looks to solve some of those consent worries by only working for the owner.
There are still a lot of questions. It's early days for the pendant: will the market really bear it out? Also, how does it only record the wearer? There must be some processing of non-wearer audio involved to make that determination.
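One plausible answer to that last question, and this is pure speculation on my part since the company hasn't published how it works, is speaker verification: reduce each audio segment to a "voiceprint" embedding, compare it against one the owner enrolled during setup, and discard anything that doesn't match. A minimal sketch, with every name and number invented for illustration:

```python
# Speculative sketch of owner-only audio filtering via speaker
# verification. Each segment's embedding ("voiceprint") is compared to
# the owner's enrolled one; non-matching segments are dropped before
# transcription. The embedding step itself is assumed, not shown.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

OWNER_VOICEPRINT = [0.9, 0.3, 0.1]  # captured once, during device setup
MATCH_THRESHOLD = 0.95              # tune for false accept vs false reject

def keep_segment(segment_embedding):
    """Only segments whose voiceprint matches the owner's survive."""
    return cosine_similarity(segment_embedding, OWNER_VOICEPRINT) >= MATCH_THRESHOLD

print(keep_segment([0.88, 0.32, 0.12]))  # owner-like voice -> True
print(keep_segment([0.10, 0.90, 0.40]))  # someone else -> False
```

Note that even under this design, the device has to process a stranger's voice at least far enough to decide it isn't the owner's, which is exactly the consent wrinkle raised above.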
Either way, I think we're at the bottom of a slow ramp up to a different way of thinking about technology. Which brings us nicely onto this week's blog spotlight...
Blog spotlight
🕸️ Caught in a Multi-Billion Dollar Web: Why Leaving the Legacy Algorithmic Media is So Hard
This week's blog comes from the Technically Good website, the side project of Izzy, a senior data analyst with a passion for data sovereignty and ethics in analytics.
It's actually a follow-up piece to a previous post of hers, but she TL;DRs it in the opening so you don't need to have read it (but if you feel like it, you should).
The reason I liked this post is that it puts into words a lot of things I've been grappling with when it comes to online platforms. She talks about the "switching cost" inherent in online communities, something we definitely noticed in the wake of the Twitter buy-out in 2022. Countless charities and social organisations, at odds with the values and rhetoric of Elon Musk, wanted to leave the platform. But they had nowhere to go. The network effect of a then-hierarchy-less platform like Twitter, built up over a decade and a half, was so strong that it would be basically impossible to replicate without time and effort that most organisations don't have.
Anyway, do give this a read. I think it's a really in-depth but clear summation of the state of online "algorithmic media" (read: mainstream social media).
And so we wrap up another LT3!
If you've any thoughts or comments on any of the stories from this week, I'd love to hear them. Feel free to reply to this and drop me an email!
Finally, another quick plug for the 2026 Charity Digital Skills Report survey, if you or anyone you know might want to fill it out you/they can do so right here.
Have a good rest of your week, and weekend when it gets here.
Will