This Week in Responsible AI, March 9, 2024

Copyright/data

  • A poster’s guide to who’s selling your data to train AI
  • What Happens When Your Art Is Used to Train AI
  • China court says AI broke copyright law in apparent world first

Labor

  • Emotion-tracking AI on the job: Workers fear being watched – and misunderstood
  • The job applicants shut out by AI: ‘The interviewer sounded like Siri’

Government

  • NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute
  • Making AI Work for Government: It All Comes Down to Trust

Evaluation

  • A safe harbor for AI evaluation and red teaming
  • Evaluating LLMs Through a Federated, Scenario-Writing Approach
  • Why most AI benchmarks tell us so little
  • Open Source Audit Tooling (OAT) Landscape

Fakes

  • LexisNexis' Legal AI tool makes up fake legal cases
  • Trump supporters target black voters with faked AI images
  • Fake Image Factories: How AI image generators threaten election integrity and democracy
  • GPT-4 reproduces more copyrighted text than other popular LLMs, according to one test
  • Instagram and Facebook ran ads for a deepfake app that "undressed" a photo of 16-year-old Jenna Ortega

Toxicity

  • From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models
  • Microsoft begins blocking some terms that caused its AI tool to create violent, sexual images

Fairness

  • OpenAI's GPT discriminates against certain race- or gender-coded names when used to rank resumes
  • Dialect prejudice predicts AI decisions about people's character, employability, and criminality
  • Queer = Bad in Automatic Sentiment Analysis
  • "Our analysis identifies significant biases in the current state of sign language AI research, including an overfocus on addressing perceived communication barriers, a lack of use of representative datasets, use of annotations lacking linguistic foundations, and development of methods that build on flawed models."

Other

  • How Automated Content Moderation Works (Even When It Doesn’t)
  • What the digital streaming revolution of the 2000s can teach us about the AI revolution today, according to a former musician
  • The Politics of Data Science: Institutionalizing Algorithmic Regimes of Knowledge Production
  • Artificial intelligence and illusions of understanding in scientific research
  • The Situate AI Guidebook: Co-Designing a Toolkit to Support Multi-Stakeholder Early-stage Deliberations Around Public Sector AI Proposals

Compiled by Leif Hancox-Li
