October 10, 2021
"the most egregious of a very bad bunch" | The Cat Herder, Volume 4, Issue 39
October 10 · Issue #152
Facial recognition, a bit of Facebook, fundamental rights. 😼
Although enforcement may be patchy and, as we’ve seen recently, will be fiercely contested by companies from the Facebook family, data protection law in Europe is just as concerned with the acquisition of personal data as with the uses to which that personal data is subsequently put.
Instagram and Facebook use surveillance-driven algorithms that show you whatever content they think will keep you on the platform the longest, to sell ads. The way to stop that is to finally pass a Federal privacy law that makes it illegal to collect the data they need to do that.
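To make the mechanism in that quote concrete, here is a minimal, entirely hypothetical sketch of engagement-driven ranking. The Post record, the predicted_seconds_on_screen score and rank_feed are illustrative assumptions rather than any platform’s real code; the point is simply that the only sort key is predicted attention.

```python
# Hypothetical sketch of an engagement-maximising feed ranker.
# Nothing here is any platform's actual code; it only illustrates the
# incentive: rank by whatever a behavioural model predicts will hold
# the user's attention longest, because attention is what sells ads.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_seconds_on_screen: float  # score from a model trained on collected behavioural data

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement. Accuracy, wellbeing and societal
    # effect simply do not appear in the objective.
    return sorted(posts, key=lambda p: p.predicted_seconds_on_screen, reverse=True)

feed = [
    Post("friend-update", 9.5),
    Post("outrage-bait", 38.0),
    Post("calm-photo", 4.0),
]
print([p.post_id for p in rank_feed(feed)])
# ['outrage-bait', 'friend-update', 'calm-photo']
```

Starve the model of the behavioural data and the ranking collapses, which is exactly the lever a federal privacy law of the kind described above would pull.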
Of all the pieces written about this latest scandal, Maria Farrell’s ‘Blowing the whistle on Facebook is just the first step’ is the one which best addresses the real issue.
Haugen has taken a great risk with her future career, and has provided the documentation that regulators and policymakers need. For this we should be grateful. But she is not the arbiter of what should be done. So far, when asked about solutions, she’s made vague gestures toward “regulation,” but in the context of her belief that “the version of Facebook that exists today is tearing our societies apart.” To this way of thinking, there is a reachable version of Facebook that would do less harm and be OK. This incremental approach is no surprise. Haugen has already worked for 15 years for companies with names that are synonymous with surveillance capitalism. She doesn’t have a problem with the basic business model of extracting people’s data to sell ads. She just has a problem with Facebook being the most egregious of a very bad bunch.
Haugen’s suggestion of a regulator which would be staffed by people from the surveillance-based advertising industry is charmingly naive and would suit Facebook and its peers just fine.
The European Parliament today called for a ban on police use of facial recognition technology in public places, and on predictive policing, a controversial practice that involves using AI tools in hopes of profiling potential criminals before a crime is even committed. In a resolution adopted overwhelmingly in favor, MEPs also asked for a ban on private facial recognition databases, like the ones used by the controversial company Clearview AI. The Parliament also supports the European Commission’s attempt in its AI bill to ban social scoring systems, such as the ones launched by China that rate citizens’ trustworthiness based on their behavior.
European Parliament calls for a ban on facial recognition – POLITICO
Non-binding resolution also asks for AI-based predictive policing ban.
Meanwhile Clearview just keeps on going and growing in questionable directions.
Some of Clearview’s new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images … “I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process I would expect a plethora of unintended bias to creep in,” says Aleksander Madry, a professor at MIT who specializes in machine learning. Without due care, for example, the approach might make people with certain features more likely to be wrongly identified. Even if the technology works as promised, Madry says, the ethics of unmasking people is problematic. “Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy,” he says.
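Madry’s warning can be made concrete with a deliberately crude sketch. It assumes nothing about Clearview’s actual models; real systems use learned generative networks rather than the per-pixel dataset mean used below, but the core limitation is the same: the “recovered” pixels come from statistical patterns in other people’s images, not from the person in the photo.

```python
# Toy illustration of why "mask removal" is a statistical guess, not evidence.
# Masked pixels are filled with the per-pixel mean of a (synthetic) dataset,
# so the output reflects whoever dominates the training data rather than the
# individual in the photo. NumPy only; all data here is random.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training set": 100 tiny 8x8 grayscale images.
training_faces = rng.uniform(0.0, 1.0, size=(100, 8, 8))

# A new photo with its lower half masked out (e.g. a face covering).
photo = rng.uniform(0.0, 1.0, size=(8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[4:, :] = True  # True where pixels are unknown

def inpaint_with_dataset_mean(image, unknown, dataset):
    # Fill unknown pixels with the dataset's per-pixel mean: a "best guess
    # based on statistical patterns found in other images".
    filled = image.copy()
    filled[unknown] = dataset.mean(axis=0)[unknown]
    return filled

reconstruction = inpaint_with_dataset_mean(photo, mask, training_faces)

# The reconstructed half carries no information about this individual at all:
print(np.allclose(reconstruction[4:, :], training_faces.mean(axis=0)[4:, :]))  # True
```

Whoever dominates the training set dominates the guess, which is precisely how the “plethora of unintended bias” Madry describes creeps in, and why an unmasked face produced this way identifies a statistical composite rather than a suspect.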
Clearview AI Has New Tools to Identify You in Photos | WIRED
In an interview with WIRED, CEO Hoan Ton-That said the company has scraped 10 billion photos from the web—and developed new ways to aid police surveillance.
In this thread on Twitter, Philip Boucher-Hayes outlines the latest developments in his efforts to get to the bottom of the use of Clearview by An Garda Síochána.
Spoiler: the Garda Press Office has a form of words it’s happy with, hinging on a vague phrase about the technology not being “deployed in this State”, something which could be narrowly and technically true of any cloud-based service.
This is an interesting twist to a slowly unfolding saga. In Feb 2020 I asked An Garda Siochana were they using this facial recognition tech. They said they “had no relationship” with the company. 1/ https://t.co/qT3QuXiOiD
No matter what the area, it seems there’s always someone with a ‘solution’ which involves layering some kind of monitoring and surveillance on top of everyday human activity.
It wasn’t an accident. When Hootman emailed the teacher, she says she was told, “‘Oh, surprise, we have this new software where we can monitor everything your child is doing throughout the day and can see exactly what they’re seeing, and we can close all their tabs if we want.’”
Borrowed a School Laptop? Mind Your Open Tabs | WIRED
Students—many from lower-income households—were likely to use school-issued devices for remote learning. But the devices often contained monitoring software.
This persistent monitoring has immediate and, no doubt, longer-term effects.
“We found that six in 10 students agreed with the statement ‘I do not share my true thoughts or ideas because I know what I do online is being monitored,’” she says. “When you think about this happening in an educational environment where you want students to express themselves, you want students to be learning, you want students to feel free to make mistakes, that response raises questions about whether this will actually undermine the whole purpose of education.”
The ICO published a response to the UK government’s “Data: a new direction” consultation document. Not surprisingly, the ICO has reservations about the blatant attempts to undermine its independence.
The Finnish DPA issued a reprimand to the National Police Board for “illegal processing of special categories of personal data during a facial recognition technology trial.” Spoiler: Yes, it was Clearview again.
This is not news, or new, but it is well worth reiterating from time to time. Data protection law in Europe is underpinned by the fundamental rights in the Charter.
The EDPB (and EDPS) has reiterated that personal data cannot be considered as a “tradeable commodity”. An important consequence of this is that, even if the data subject can agree to the processing of his or her personal data, he or she cannot waive his or her fundamental rights. https://t.co/O6RciME5Eb
-
“I have always thought that I had been quite careful online – giving away enough about myself to enjoy conversations with people I’d never met, yet avoiding those games where you reveal the names of your first pet, your mum’s maiden name and simultaneously all of your bank passwords. But the demonstration showed me there were things I’d forgotten about and made it clear that information other people were sharing was adding to the picture. The starting point was Facebook. Thanks to that, and my failure to ever make my account private, Goddard was able to declare: “We know where you work, we know where you went to school and we know where you come from.” From there, via my tweets about Scouting, Goddard had been able to find several of my old addresses. And via old copies of my school magazine uploaded to its online archive he was able to remind me of my success in talking about Welsh rugby and feminism without deviation or hesitation in a sixth form Just a Minute competition.” From ‘How fraudsters can use the forgotten details of your online life to reel you in’ by Hilary Osborne for The Observer.
-
“Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations result in negative consequences, the effects are often small – frustration, aggravation, anxiety, inconvenience – and dispersed among a large number of people. When these minor harms are suffered at a vast scale, they produce significant harm to individuals, groups, and society. But these harms do not fit well with existing cramped judicial understandings of harm. This article makes two central contributions. The first is the construction of a typology for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various different types, which to date have been recognized by courts in inconsistent ways. Our typology of privacy harms elucidates why certain types of privacy harms should be recognized as cognizable. The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of enforcement goals and remedies. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.” From a draft of a new version of ‘Privacy Harms’ by Danielle Citron and Daniel Solove.
-
‘Mishcon’s representative suit is similar to a class-action lawsuit in the US and will have important ramifications for large scale access and use of health data by tech companies in a post-pandemic, post-Brexit UK. Ben Lasserson, the lead partner on the case, stated: “This important claim should help to answer fundamental questions about the handling of sensitive personal data and special category data…It comes at a time of heightened public interest and understandable concern over who has access to people’s personal data and medical records and how this access is managed.”’ From ‘UK law firm sues Google subsidiary for breach of data protection laws’ by Ananaya Agrawal for Jurist.org.
—
Endnotes & Credits
Find us on the web at myprivacykit.com and on Twitter at @PrivacyKit. Of course we’re not on Facebook or LinkedIn.
If you know someone who might enjoy this newsletter do please forward it on to them.
Privacy Kit, Made with 💚 in Dublin, Ireland