This Week in Responsible AI: Apr 23, 2024

Bias

  • 'If AI-ese sounds like African English, then African English sounds like AI-ese. Calling people a “bot” is already a schoolyard insult... how much worse will it get when a significant chunk of humanity sounds like the AI systems they were paid to train?'
  • White Men Lead, Black Women Help: Uncovering Gender, Racial, and Intersectional Bias in Language Agency

Tools and Resources

  • Announcing MLCommons AI Safety v0.5 Proof of Concept
  • CYBERSECEVAL 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models
  • "CodeShield is a robust inference time filtering tool engineered to prevent the introduction of insecure code generated by LLMs into production systems."
  • Appropriate reliance on Generative AI: Research synthesis
  • The Markup provides a guide to spotting deepfaked video and audio.
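
As a loose illustration of the inference-time filtering pattern the CodeShield quote describes (this is not CodeShield's actual implementation or API, which relies on far richer static analysis; every name and rule below is hypothetical):

    import re

    # Hypothetical inference-time filter for LLM-generated code: scan a
    # completion against insecure-code rules before it reaches production.
    # Toy regex rules for illustration only; real tools use deeper analysis.
    INSECURE_PATTERNS = {
        r"\beval\s*\(": "use of eval() on dynamic input",
        r"\bpickle\.loads?\s*\(": "deserializing untrusted data with pickle",
        r"subprocess\..*shell\s*=\s*True": "shell=True invites command injection",
    }

    def scan_generated_code(code: str) -> list[str]:
        """Return a list of findings; an empty list means no rule matched."""
        return [warning for pattern, warning in INSECURE_PATTERNS.items()
                if re.search(pattern, code)]

    def filter_llm_output(code: str) -> str:
        """Block flagged completions instead of passing them downstream."""
        findings = scan_generated_code(code)
        if findings:
            raise ValueError(f"insecure code blocked: {findings}")
        return code

The point of running the filter at inference time, rather than after deployment, is that a flagged completion can be rejected or regenerated before it ever lands in a codebase.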

Law/Policy

  • Will You Take This Algorithm to Court?
  • NIST adds 5 new members to its AI Safety Institute
  • Can AI Standards Have Politics?

Other

  • Technological risks are not the end of the world
  • Newsweek is making generative AI a fixture in its newsroom
  • From “AI” to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust?
  • Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

Compiled by Leif Hancox-Li
