This Week in Responsible AI

August 10, 2023

This Week in Responsible AI: Aug 10, 2023

I was on vacation, so today's update is a belated one covering roughly the last three weeks!

Research

  • Are Model Explanations Useful in Practice? Rethinking How to Support Human-ML Interactions.

  • A Human Rights-Based Approach to Responsible AI

  • The shaky foundations of large language models and foundation models for electronic health records

  • On the transparency of large AI models

  • White Paper: AI Outputs and the First Amendment

  • When Personalization Harms Performance: Reconsidering the Use of Group Attributes in Prediction

Law/Policy

  • New deal on EU-US data flows sparks privacy fears and business uncertainty

  • Generative AI services pulled from Apple App Store in China ahead of new regulations

  • Equal Employment Opportunity Commission (EEOC) enters into a first of its kind consent decree with a tutoring company for algorithmic/AI age discrimination

  • Seven AI companies commit to safeguards at the White House's request

  • The AI rules that US policymakers are considering, explained

  • UK’s approach to AI safety lacks credibility, report warns

  • TikTok’s algorithm will be optional in Europe

  • The movement to limit face recognition tech might finally get a win

  • AI regulation is taking shape, but startups are being left out

  • Zero Trust AI Governance

  • The AI Crackdown is Coming

  • A.I. Microdirectives Could Soon Be Used for Law Enforcement

  • Why Chinese entities are turning to People’s Daily censorship AI to avoid political mines

AI Harms

  • Q&A: When automation in government services fails

  • In Mannheim, an automated system reports hugs to the police

  • Why watermarking AI-generated content won’t guarantee trust online

  • Sneak Preview: A blueprint for an AI Bill of Rights for Education

  • Unregulated AI Will Worsen Inequality, Warns Nobel-Winning Economist Joseph Stiglitz

  • Does social media polarize voters? Unprecedented experiments on Facebook users reveal surprises

  • Six ways that AI could change politics

  • We need a Weizenbaum test for AI

  • Don't Let the Math Distract You: Together, We Can Fight Algorithmic Injustice

  • Justine Bateman on AI, Labor, and the Future of Entertainment

  • Automated Firing & Algorithmic Management: Mounting a Resistance, with Veena Dubal, Zephyr Teachout and Zubin Soleimany | AI Now Salons

Privacy

  • How to Quickly Get to the Important Truth Inside Any Privacy Policy

  • DHS Used Clearview AI Facial Recognition In Thousands Of Child Exploitation Cold Cases

  • Convicted fraudster Martin Shkreli is touting a medical AI chatbot—much to experts’ concern

  • Climate Justice and Labor Rights | Part I: AI Supply Chains and Workflows

Generative AI

  • The human decisions that shape generative AI: Who is accountable for what?

  • The tricky truth about how generative AI uses your data

  • Remini tops the App Store for its viral ‘AI headshots’ but its body edits go too far, some say

  • Authors are losing their patience with AI, part 349235. See also: The Fear Of AI Just Killed A Very Useful Tool

  • AI language models are rife with political biases

  • Researchers Identify False Twitter Personas Likely Powered by ChatGPT

  • Language Is a Poor Heuristic For Intelligence

  • Exploring Generative AI

Compiled by Leif Hancox-Li
