This Week in Responsible AI

April 1, 2024

This Week in Responsible AI, Apr 1, 2024

"Safety"

  • Why Are Large AI Models Being Red Teamed?
  • Using ASCII art to jailbreak chatbots
  • AI safety is not a model property
  • $250,000 in prizes for ML Safety benchmarks
  • Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Fairness

  • SCoFT: Self-Contrastive Fine-Tuning for Equitable Image Generation
  • Maximizing Equity in Acute Coronary Syndrome Screening across Sociodemographic Characteristics of Patients
  • Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting
  • Digital Fairness for Consumers
  • Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias

Participation

  • "How can NLP/AI practitioners engage with oral societies and develop locally appropriate language technologies?"
  • Recourse for reclamation: Chatting with generative language models
  • From Fitting Participation to Forging Relationships: The Art of Participatory ML

Transparency

  • Technical Readout - Columbia Convening on Openness and AI
  • Models All The Way Down
  • Revealed: the secret algorithm that controls the lives of Serco’s immigration detainees
  • New YouTube policy: creators must disclose 'realistic' AI-generated content
  • The Impact of Explanations on Fairness in Human-AI Decision-Making
  • Anyone Can Audit! Users Can Lead Their Own Algorithmic Audits with IndieLabel

Law/Policy

  • "This is not a data problem": Algorithms and Power in Public Higher Education in Canada
  • Africa’s push to regulate AI starts now
  • NTIA Artificial Intelligence Accountability Policy Report
  • The EU AI Act passes. Here’s what will (and won’t) change. Some are unhappy with it.
  • VP Harris announces new requirements for how federal agencies use AI technology
  • FTC and DOJ File Statement of Interest in Hotel Room Algorithmic Price-Fixing Case
  • Constructing AI Speech
  • AI Countergovernance
  • We’re headed for big problems if gardaí get facial recognition technology
  • Google fined $272M by French government over AI use of news content
  • NYC’s AI Chatbot Tells Businesses to Break the Law
  • Why we’re fighting to make sure labor unions have a voice in how AI is implemented
  • AI Nationalism(s)
  • U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector
  • Envisioning a Global Regime Complex to Govern Artificial Intelligence
  • Revealed: a California city is training AI to spot homeless encampments

Data

  • Data Collection in Music Generation Training Sets: A Critical Analysis
  • A Data-Centered Approach to Education AI

Elections

  • Google restricts AI chatbot Gemini from answering questions on 2024 elections
  • AI image-generator Midjourney blocks images of Biden and Trump as election looms

Privacy

  • Mitigating a token-length side-channel attack in our AI products
  • "it is possible to learn a surprisingly large amount of non-public information about an API-protected LLM from a relatively small number of API queries"
  • Hackers can read private AI-assistant chats even though they’re encrypted
  • Facial recognition technology and protests

Other

  • The Guild of St. Luke: Reassessing Digital-Cultural Infrastructures
  • Towards Optimizing Human-Centric Objectives in AI-Assisted Decision-Making With Offline Reinforcement Learning
  • “Companies continue to promise to deliver the moon when it comes to AI and still provide moldy green cheese.”
  • Artificial Intelligence and Machine Learning + Accessibility
  • GPT 3.5 and 4 "have been globally exposed to ∼4.7M samples from 263 benchmarks"
  • Dating Your (Potential) Executioner
  • Let’s not make the same mistakes with AI that we made with social media
  • Power and Play: Investigating "License to Critique" in Teams' AI Ethics Discussions

Compiled by Leif Hancox-Li
