This Week in Responsible AI

October 12, 2023


New Research

  • Embedding Societal Values into Social Media Algorithms

  • Foundation models in the public sector

  • Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem

  • Rigorously Assessing Natural Language Explanations of Neurons

  • Sparse Autoencoders Find Highly Interpretable Features in Language Models

  • Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models

Representation

  • Talking AI with Data and Society’s Janet Haven

  • Blooming in Muddy Waters: DEI at AI Ethics Conferences

  • Who Authors the Internet? Analyzing Gender Diversity in ChatGPT-3 Training Material

Policy

  • CODE IS SPEECH, AND SPEECH IS FREE: An argument in favor of open-sourcing AI

  • Governing Artificial Intelligence

  • Fight for the Future’s Lia Holland On A.I. Copyright, Human Art and More

Privacy

  • Snap’s AI chatbot draws scrutiny in UK over kids’ privacy concerns

Generative AI

  • AI firms working on “constitutions” to keep AI from spewing toxic content

  • How AI reduces the world to stereotypes

  • Stable Signature: A new method for watermarking images created by open source generative AI

  • (CW: descriptions of anti-semitism) The Folly of DALL-E: How 4chan is Abusing Bing’s New Image Model

Compiled by Leif Hancox-Li
