
This Week in Responsible AI, Dec 20, 2023

Tools

  • OpenAI's "Preparedness Framework"
  • Risky Analysis: Assessing and Improving AI Governance Tools
  • VonGoom: A Novel Approach for Data Poisoning in Large Language Models

Data

  • DICES Dataset: Diversity in Conversational AI Evaluation for Safety
  • Ethical Considerations for Responsible Data Curation
  • Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
  • The “Trolley Problem” Doesn’t Work for Self-Driving Cars
  • China Using AI to Create Anti-American Memes Capitalizing on Israel-Palestine, Researchers Find

Fakes

  • Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real
  • Take It to the Spank Bank

Policy / Law

  • Human rights protections…with exceptions: what’s (not) in the EU’s AI Act deal
  • Five things you need to know about the EU’s new AI Act
  • Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards
  • Google takes steps to prevent abuse of its generative AI tools during U.S. elections
  • FTC Staff Report Details Key Takeaways from AI and Creative Fields Panel Discussion
  • AI cannot patent inventions, UK Supreme Court confirms
  • Model legislation for online civil rights

Privacy

  • Marketing Company Claims That It Actually Is Listening to Your Phone and Smart Speakers to Target Ads

Fairness

  • Repairing Regressors for Fair Binary Classification at Any Decision Threshold

Other

  • Speculative F(r)iction in Generative AI
  • These six questions will dictate the future of generative AI

Compiled by Leif Hancox-Li
