This Week in Responsible AI: June 20, 2023
This edition actually covers the last three weeks, because I was on vacation and then at FAccT. If the mega-update below isn’t enough for you, you should also check out the papers at this year’s FAccT. I presented one on how perspectives from the debate over human intelligence measurement can inform how we design and use machine learning benchmarks.
General
- “Weak democracies, capitalism, and artificial intelligence are a dangerous combination”
- ‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI
- Artifact news app now uses AI to rewrite headline of a clickbait article
AI fearmongering
- Another Warning Letter from A.I. Researchers and Executives
- Fantasy fears about AI are obscuring how we already abuse machine intelligence
- Artificial Intelligence Isn’t Going to Kill Everyone (At Least Not Right Away)
- AI Doesn’t Pose an Existential Risk—but Silicon Valley Does
- Artificial Intelligence and the Ever-Receding Horizon of the Future
Labor
- ‘I feel constantly watched’: the employees working under surveillance
- The New Age of Hiring: AI Is Changing the Game for Job Seekers
- Stack Overflow Moderators Stop Work in Protest of Lax AI-Generated Content Guidelines
- Tech layoffs ravage the teams that fight online misinformation and hate speech
- Chatbots Can’t Care Like We Do: Helpline Workers Speak Out on World Eating Disorders Action Day. See also: Eating Disorder Helpline Disables Chatbot for ‘Harmful’ Responses After Firing Human Staff
Harms
- Black men were likely underdiagnosed with lung problems because of bias in software, study suggests
- As the AI industry booms, what toll will it take on the environment?
- Instagram’s algorithms are promoting accounts that share child sex abuse content
- An algorithm intended to reduce poverty might disqualify people in need. See also Human Rights Watch.
Privacy
- Illinois residents allege facial image search engine violates BIPA
- Revealed: the contentious tool US immigration uses to get your data from tech firms
- A view from DC: The FTC says ‘Let It Go,’ don’t hold that data anymore
- Within the Operational Enclosure: Surveillance and Subversion in Northwest China
- Tech that automatically detects and reports protest signs in China
Tools/Papers
- Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
- Stable Bias: Analyzing Societal Representations in Diffusion Models
- Building Robust RAI Programs as Third-Party AI Tools Proliferate
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
- Framing Online Speech Governance As An Algorithmic Accountability Issue
- Twitter’s Algorithm: Amplifying Anger, Animosity, and Affective Polarization
- DC-Check: A Data-Centric AI checklist to guide the development of reliable machine learning systems
Law/Policy
- EU parliament passes AI Act. Here are 5 key takeaways from it and some shortcomings.
- Reforms to Home Appraisal Bias Target Algorithms and Tech. See also the CFPB on this.
- National Artificial Intelligence Advisory Committee is having virtual briefing sessions. Register to attend.
- This is not legal advice: Do Foundation Model Providers Comply with the Draft EU AI Act?
- Licensing is neither feasible nor effective for addressing AI risks
Generative AI
- Are Responsible AI Programs Ready for Generative AI? Experts Are Doubtful
- The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content
- OpenAI isn’t doing enough to make ChatGPT’s limitations clear
- No, GPT4 can’t ace MIT. Also: Did ChatGPT cheat on your test?
- Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers
- DeSantis campaign shares fake Trump/Fauci images, prompting new AI fears
- Google’s beta Search Generative Experience plagiarizes without citation and gives faulty medical advice. See also this.
- Evaluating the Social Impact of Generative AI Systems in Systems and Society
Compiled by Leif Hancox-Li