This Week in Responsible AI

July 16, 2023

This Week in Responsible AI: Jul 16, 2023

General

  • AI safety on whose terms?

  • 9 ways to see a dataset

  • How to report better on artificial intelligence

  • ACM releases principles for the development, deployment, and use of generative AI technologies

  • AI and climate: 1, 2

  • Artificial intelligence has entered a new era. Here’s how we stay human.

Bias

  • "We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts."

  • NLPositionality: Characterizing Design Biases of Datasets and Models

  • "How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given language model?"

  • WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models

Law/Policy

  • The controversy around NYC’s Automated Employment Decision Tool law

  • UK government report: Enabling responsible access to demographic data to make AI systems fairer

  • AI and Law: The Next Generation

  • The FTC’s biggest AI enforcement tool? Forcing companies to delete their algorithms

  • China’s AI Regulations and How They Get Made

  • Japan’s new AI rules favor copycats over artists, experts say

  • Sarah Silverman is suing OpenAI and Meta for copyright infringement

Privacy

  • Need to Get Plan B or an HIV Test Online? Facebook May Know About It

  • Google Updates Privacy Policy To Collect Public Data For AI Training

  • Tax prep companies shared private taxpayer data with Google and Meta for years, congressional probe finds

Generative AI

  • Generative AI’s secret sauce, data scraping, comes under attack. See also: Google hit with lawsuit alleging it stole data from millions of users to train its AI tools

  • Programs to detect AI discriminate against non-native English speakers, study shows

  • Why AI detectors think the US Constitution was written by AI

  • AI-text detection tools are really easy to fool

  • FairPrism: a dataset of annotated harms in AI-generated English text

  • AI moderation is no match for hate speech in Ethiopian languages. See also: The AI startup outperforming Google Translate in Ethiopian languages

  • Google’s AI Chatbot Is Trained by Humans Who Say They’re Overworked, Underpaid and Frustrated

  • MISGENDERED: Limits of Large Language Models in Understanding Pronouns

  • 'Gizmodo Deputy Editor James Whitbrook told the Post in an interview that he’d never dealt with “this basic level of incompetence with any of the colleagues that I have ever worked with,” adding that the chatbot’s seeming inability to even put Star Wars movies in the right order meant it couldn’t be trusted to report anything accurately.'

Compiled by Leif Hancox-Li
