This Week in Responsible AI


This Week in Responsible AI: Apr 7, 2024

Bias

  • Meta’s AI image generator can’t imagine an Asian man with a white woman
  • Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
  • Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation

Law/Policy

  • Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database
  • Washington state judge blocks use of AI-enhanced video as evidence in possible first-of-its-kind ruling

AI and Knowledge

  • AI and the Problem of Knowledge Collapse
  • LLMs meet misinformation
  • China tests US voter fault lines and ramps AI content to boost its geopolitical interests

Jailbreaking LLMs

  • Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
  • Many-shot jailbreaking
  • JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models

Data

  • For AI firms, anything "public" is fair game
  • AI, Encryption, and the Sins of the 90s (NDSS 2024 Keynote by Meredith Whittaker)

Other

  • NLP for Maternal Healthcare: Perspectives and Guiding Principles in the Age of LLMs
  • Resistance in the Black Box Society
  • ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
  • The mechanisms of AI hype and its planetary and social costs
  • Meta plans to more broadly label AI-generated content
  • Context Before Code: Meta’s Oversight Board Policy Advisory Opinion on the Word “Shaheed” Calls for Language and Cultural Nuance in Content Moderation

Compiled by Leif Hancox-Li
