This Week in Responsible AI: Oct 7, 2023

New Research

  • Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising

  • Making Retrieval-Augmented Language Models Robust to Irrelevant Context

  • Engaging on Responsible AI terms: Rewriting the small print of everyday AI systems

  • Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks

  • The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice

  • “A strong placebo effect works to shape what people think of a particular AI tool”

  • Data Worlds

  • Overwriting Pretrained Bias with Finetuning Data

  • “predictive models, fed on massive datasets labeled by gamers from different countries, offer better personalized gaming recommendations than those labeled by gamers from a single country.”

  • Open-Sourcing Highly Capable Foundation Models

  • “using Large Language Models like Bing Chat as a source of information for deciding how to vote is a very bad idea”

Guidelines and Techniques

  • Organisational Policies for Generative AI

  • Practical steps for companies to do AI right

  • How to Promote Responsible Open Foundation Models

Surveillance and Privacy

  • Generative Artificial Intelligence is slowly entering children’s lives

  • Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling

  • “When you’re databasing my face off my ads, which I have to post to pay my bills, you’re putting me in closer proximity to people who could arrest me, deport me, or evict me”

  • How the “Surveillance AI Pipeline” Literally Objectifies Human Beings

Law/Policy

  • What can the U.S. government learn about participation in AI from Queer in AI?

  • Imagining democratic governance of autonomous vehicles

  • How authoritarian governments are using generative AI

  • Missing Persons: The Case of National AI Strategies

Snake Oil

  • Predictive Policing Software Terrible At Predicting Crimes

Labor

  • NASA’s Mars rovers could inspire a more ethical future for AI

  • Monitoring, Streamlining and Reorganizing Work with Digital Technology

Generative AI

  • How generative AI is boosting the spread of disinformation and propaganda

  • “Try as they might, the team was unable to get Black doctors and white patients in one image. Out of 150 images of HIV patients, 148 were Black and two were white. Some results put African wildlife like giraffes and elephants next to Black physicians.”

  • How much can artists make from generative AI? Vendors won’t say

  • $260 Million AI Company Releases Undeletable Chatbot That Gives Detailed Instructions on Murder, Ethnic Cleansing

  • Default No to AI Training on Your Stories

  • Truepic and Hugging Face Partner to Highlight the Latest Innovations in Transparency to AI-Generated Content

  • Bing Is Generating Images of SpongeBob Doing 9/11

  • 4chan Uses Bing to Flood the Internet With Racist Images

  • Critics Furious Microsoft Is Training AI by Sucking Up Water During Drought

  • AI chatbots let you 'interview' historical figures like Harriet Tubman. That's probably not a good idea

  • Nearly 10% of people ask AI chatbots for explicit content. Will it lead LLMs astray?

  • Google adds a switch for publishers to opt out of becoming AI training data

Compiled by Leif Hancox-Li
