Facial recognition in Gaza; Guest Cohost Timnit Gebru Joins Us!
A Fresh AI Hell Roundup: March 29, 2024 Edition
By Alex
Senator Scott Wiener has sponsored a "frontier model" bill in the California State Senate, which would protect against what the bill calls "critical harms". The Big Tech-friendly bill proposes a separate agency to oversee the development of such models and would require annual certification with that agency. It wouldn't, however, create important bright-line rules against uses that are already hurting people (like facial recognition and predictive policing), as recommended by AI Now and others. The bill also leans heavily on language drawn from "AI safety" proponents: "existential risk" is here redefined as "critical harm", including "[t]he creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties." As Emily has written in a prior issue of this newsletter, this is hype from the jump, and "frontier" is both a settler-colonialist term and a signal of full industry capture, since industry is the only group that uses such terminology.
"California’s 3,696-person Department of Tax and Fee Administration plans to use generative artificial intelligence to advise its approximately 375 call center agents on state tax code," writes Khari Johnson for CalMatters. The state has put out an request for proposals for an AI vendor to produce such a tool, and a meeting with potential vendors attracted 100 attendees for a meeting in January. Getting tax advice from an LLM (which has, perhaps, been screened by an overworked call center employee) sounds like a nightmare use case.
AI companies are using the likenesses of female influencers and celebrities to advertise offensive products and ideas, such as erectile dysfunction medication and diet supplements. Similarly, Leonardo AI, an image generator backed by Samsung and many others, is being used to generate nonconsensual porn of female celebrities. These images are being trafficked on Telegram and *chan sites, and the tool's existing guardrails against such use are easily circumvented.
Unit 8200, the Israeli military intelligence agency, is broadly deploying a facial recognition tool in Gaza created by the startup Corsight. Israel says it is using this tool, along with photos uploaded to Google Photos, in an attempt to identify members of Hamas. This is the same system that misidentified Palestinian poet Mosab Abu Toha as he fled from Gaza to Cairo; he faced violence and interrogation by the IDF throughout the ordeal. We agree with Matt Mahmoudi, who notes how the system contributes to "a complete dehumanization of Palestinians". You can read more on Matt's work in this area in the Amnesty Tech brief, Automating Apartheid.
Relatedly, 404 Media has discovered that the US Air Force bought an AI-powered chatbot for intelligence and surveillance as part of a $1.2 million deal. The tool, created by Misram LLC (also known as Spectronn), places an LLM layer over existing intelligence analytics. This looks very much like the Palantir LLM for command-and-control which we discussed in episode 14 with Lucy Suchman.
Episode 30 Stream - feat. Dr. Timnit Gebru!
For next week's stream, we have a special guest: DAIR Founder and Executive Director, Dr. Timnit Gebru! Join us as we break down Marc Andreessen's e/acc manifesto screed and go after related TESCREAL-y nonsense.