This Week in Responsible AI

November 4, 2023

This (past two) weeks in Responsible AI: Nov 4, 2023

Apologies for the delay, and for the jumbo edition!

New Research

  • Digital resignation and the datafied welfare state

  • "One-size-fits-all"? Observations and Expectations of NLG Systems Across Identity-Related Language Features

  • Auditing Fairness by Betting

  • Bridging Systems: Open Problems for Countering Destructive Divisiveness Across Ranking, Recommenders, and Governance

  • Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI

  • Data Refusal From Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design

  • Seamful XAI: Operationalizing Seamful Design in Explainable AI

  • How Robust is Google's Bard to Adversarial Image Attacks?

  • Composite Backdoor Attacks Against Large Language Models

  • "generative AI outputs (at least the speech-like ones) are likely entitled to First Amendment protections"

  • Automated Tax Planning: Who’s Liable When AI Gets It Wrong?

  • People Perceive Algorithmic Assessments as Less Fair and Trustworthy Than Identical Human Assessments

  • The Authoritarian Data Problem

  • Reimagining Democracy for AI

  • Building the Epistemic Community of AI Safety

  • Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting

  • Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models

  • Datasheets for Digital Cultural Heritage Datasets

  • "both humans and preference models (PMs) prefer convincingly-written sycophantic responses over correct ones"

  • The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis

  • RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model

Talks

  • Arvind Narayanan | Resistance or harm reduction?

  • Sasha Luccioni: AI is dangerous, but not for the reasons you think

Policy

  • Supporting Open Source and Open Science in the EU AI Act

  • "any kind of effective AI regulation would need to regulate personal computing"

  • Commentary on the White House's executive order on AI: Biden seeks to rein in AI, Tate Ryan-Mosley on the executive order's emphasis on watermarking and content authentication, Three things to know about the White House’s executive order on AI

  • UK government report (PDF): Safety and Security Risks of Generative Artificial Intelligence to 2025

  • The fingerprints on a letter to Congress about AI

  • Belt and road forum: China launches AI framework, urging equal rights and opportunities for all nations

  • Inside ICE’s Database for Finding ‘Derogatory’ Online Speech

  • "Companies including Meta, Google DeepMind and OpenAI have agreed to allow regulators to test their latest AI products before releasing them to the public"

  • NSF invests $10.9M in the development of safe artificial intelligence technologies

  • EDPB issues binding decision banning Meta's targeted advertising practices

  • How AI Can Be Regulated Like Nuclear Energy

  • Joy Buolamwini: “We’re giving AI companies a free pass”

  • Funding is available from the NEH for Humanities Perspectives on Artificial Intelligence

  • The Case for Including the Global South in AI Governance Discussions

  • AI Red-Teaming Is Not a One-Stop Solution to AI Harms: Recommendations for Using Red-Teaming for AI Accountability

Fairness

  • "the algorithm's ability to identify the risk of depression varied by up to 15% between different groups."

  • Can AI Be Fair?

Transparency

  • A method to interpret AI might not be so interpretable after all

  • How the Foundation Model Transparency Index Distorts Transparency

  • AI Cameras Took Over One Small American Town. Now They're Everywhere

  • Could Cruise be the Theranos of AI?

Tools

  • Data Provenance Explorer

  • How advances in AI can make content moderation harder — and easier

  • This new data poisoning tool lets artists fight back against generative AI

Privacy

  • Privacy in the Age of AI

Generative AI

  • AI Modi started as a joke, but it could win him votes

  • Hackers Are Weaponizing AI to Improve a Favorite Attack

  • An AI-generated poll speculating on the cause of a woman's death appeared next to a Guardian article on the same death. In response, Microsoft has "deactivated Microsoft-generated polls for all news articles".

  • Propaganda or Science: Open Source AI and Bioterrorism Risk

  • AI doomsday warnings a distraction from the danger it already poses, warns expert

  • Riley Reid on AI: ‘I Don’t Want Porn to Get Left Behind’

Compiled by Leif Hancox-Li
