This Week in Responsible AI: Nov 19, 2023
Fairness

  • Know Your Digital Rights: Digital Discrimination in Hiring

  • Clinical algorithms, racism, and “fairness” in healthcare: A case of bounded justice

AI Art

  • 'Herndon uses the phrase “identity play”—a pun of sorts on “I.P.”—to describe the act of allowing other people to use her voice. “What if people were performing through me, on tour?” she said. “Kind of like body swapping, or identity swapping. I think that sounds exciting.”'

  • YouTube is going to start cracking down on AI clones of musicians

  • Stability AI VP resigns over disagreement with the company's stance on "fair use"

Privacy

  • Debunking the Myth of “Anonymous” Data

  • Private UK health data donated for medical research shared with insurance companies

  • 'LexisNexis is providing CBP with social media surveillance, access to jail booking data, face recognition and “geolocation analysis & geographic mapping” of cellphones. All this data can be queried in “large volume online batching,” allowing CBP investigators to target broad groups of people and discern “connections among individuals, incidents, activities, and locations,” handily visualized through Google Maps.'

Policy

  • Biden’s Elusive AI Whisperer Finally Goes On the Record. Here’s His Warning.

  • Feds Have No Idea How Many Times Cruise Driverless Cars Hit Pedestrians

  • I Implemented a Federal Government Executive Order on Technology in Mexico. Here’s What I Learned

Disinformation

  • Campaigns are using AI. Tech companies are figuring out how to disclose what’s real.

  • Generative AI will create a 'tsunami of disinformation' during the 2024 election

  • YouTube to roll out labels for "realistic" AI-generated content

Security

  • Watch out: Generative AI will level up cyber attacks, according to new Google report

  • 'this new jailbreaking method, dubbed “SneakyPrompt” by its creators from Johns Hopkins University and Duke University, uses reinforcement learning to create written prompts that look like garbled nonsense to us but that AI models learn to recognize as hidden requests for disturbing images.'

  • "the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision."

Other

  • Meta disbanded its Responsible AI team

  • UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges

  • A Material Lens on Coloniality in NLP

  • "The 'Brand Safety' and 'Suitability' industries have financially crushed the news business by keeping ads away from articles that its 'sentiment analysis' algorithms think will make people sad or upset."

Compiled by Leif Hancox-Li
