This Week in Responsible AI

June 29, 2023

General

  • Stop talking about tomorrow’s AI doomsday when AI poses risks today

  • REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research

  • Ethics Teams in Tech Are Stymied by Lack of Support

  • How AI can distort human beliefs

  • AI Hype Machine w/ Meredith Whittaker, Ed Ongweso, and Sarah West

  • The Last AI Boom Didn't Kill Jobs. Feel Better?

  • Responsible AI Licenses: social vehicles toward decentralized control of AI

Fairness

  • New evidence of Facebook’s sexist algorithm

  • Is My Prediction Arbitrary? Measuring Self-Consistency in Fair Classification

Law/Policy

  • Chuck Schumer Wants AI to Be Explainable. It’s Harder Than It Sounds

  • EU advances rules that wrestle control of user data away from Big Tech

  • Europe to Open Artificial Intelligence 'Crash Test' Centers

  • Reexamining "Fair Use" in the Age of AI

  • How the White House is moving into the action phase of its effort to regulate AI

Privacy

  • With a €40 million GDPR fine against Criteo, French regulators target the Parisian giant over its data practices

  • "with few established guardrails around wellness tech companies' handling of employees’ sensitive health data, workers who grant employers access to their health data could conceivably be left vulnerable to discrimination... The federal health privacy rules in HIPAA don’t apply to employers"

  • Suicide Hotlines Promise Anonymity. Dozens of Their Websites Send Sensitive Data to Facebook

  • Hey, Alexa! What are you doing with my data?

  • Military AI’s Next Frontier: Your Work Computer

  • Tom Morello, Zack de la Rocha, and Boots Riley Boycotting Venues That Use Face-Scanning Technology

  • Google forced to postpone Bard chatbot’s EU launch over privacy concerns

Inclusive NLP

  • Mind the Language Gap: NLP Researchers & Advocates Weigh in on Automated Content Analysis in Non-English Languages

  • Non bigtech African NLP - Asmelash Teka

  • Addressing Equity in Natural Language Processing of English Dialects

Generative AI

  • Bias in Text-to-Image Models

  • Generative AI companies must publish transparency reports

  • Preprint claiming that GPT can ace MIT exams was released without some authors' consent despite privacy and research integrity concerns: 1, 2, 3

  • Using Large Language Models With Care

  • How to Prepare for the Deluge of Generative AI on Social Media

  • The vast underclass of 'taskers' who make AI work. See also: The human labellers that make AI work

  • How AI could spark the next pandemic

  • A storefront for robots

  • Should countries build their own AIs?

  • How energy intensive are AI apps like ChatGPT?

  • Bringing People Together to Inform Decision-Making on Generative AI

  • 'Over 140 major brands are paying for ads that end up on unreliable AI-written sites, likely without their knowledge. Ninety percent of the ads from major brands found on these AI-generated news sites were served by Google, despite the company's own policies that prohibit sites from placing Google-served ads on pages that include "spammy automatically-generated content".'

  • Artificial intelligence will change the future of psychotherapy: A proposal for responsible, psychologist-led development

Compiled by Leif Hancox-Li
