This Week in Responsible AI

June 20, 2023

This Week in Responsible AI: June 20, 2023

This issue actually covers the last three weeks, because I was on vacation and then at FAccT. If the mega-update below isn’t enough for you, you should also check out the papers at this year’s FAccT. I presented one on how perspectives from the debate over human intelligence measurement can inform how we design and use machine learning benchmarks.

General

  • “Weak democracies, capitalism, and artificial intelligence are a dangerous combination”

  • ‘The authors of The Smartness Mandate want to understand how and why we have “come to see the planet and its denizens as data-collecting instruments.” In other words, why do we fetishize computation as the key to just about everything that matters?’

  • What is ‘ethical AI’ and how can companies achieve it?

  • ‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI

  • To avoid AI doom, learn from nuclear safety

  • Artifact news app now uses AI to rewrite headline of a clickbait article

  • The Myth of Objective Data

AI fearmongering

  • Another Warning Letter from A.I. Researchers and Executives

  • Fantasy fears about AI are obscuring how we already abuse machine intelligence

  • Artificial Intelligence Isn’t Going to Kill Everyone (At Least Not Right Away)

  • AI Doesn’t Pose an Existential Risk—but Silicon Valley Does

  • Artificial Intelligence and the Ever-Receding Horizon of the Future

Labor

  • ‘I feel constantly watched’: the employees working under surveillance

  • We are all AI’s free data workers

  • The New Age of Hiring: AI Is Changing the Game for Job Seekers

  • “Gig workers contend with uncertain working conditions and algorithmic wage discrimination from the platforms they rely on to match with clients.”

  • “Babbage’s work developing theories of factory labor control and his lifelong pursuit of his calculating engines can be read together as two approaches to answering the same question: how to standardize and discipline work in service of capitalism and the British empire.”

  • Stack Overflow Moderators Stop Work in Protest of Lax AI-Generated Content Guidelines

  • Tech layoffs ravage the teams that fight online misinformation and hate speech

  • Chatbots Can’t Care Like We Do: Helpline Workers Speak Out on World Eating Disorders Action Day. See also: Eating Disorder Helpline Disables Chatbot for ‘Harmful’ Responses After Firing Human Staff

Harms

  • Black men were likely underdiagnosed with lung problems because of bias in software, study suggests

  • As the AI industry booms, what toll will it take on the environment?

  • Instagram’s algorithms are promoting accounts that share child sex abuse content

  • An algorithm intended to reduce poverty might disqualify people in need. See also Human Rights Watch.

  • AI Is Steeped in Big Tech’s ‘Digital Colonialism’

  • Beware of the Binary

Privacy

  • Illinois residents allege facial image search engine violates BIPA

  • Revealed: the contentious tool US immigration uses to get your data from tech firms

  • From “Heavy Purchasers” of Pregnancy Tests to the Depression-Prone: We Found 650,000 Ways Advertisers Label You

  • FTC Says Ring Employees Illegally Surveilled Customers, Failed to Stop Hackers from Taking Control of Users’ Cameras

  • A view from DC: The FTC says ‘Let It Go,’ don’t hold that data anymore

  • Within the Operational Enclosure: Surveillance and Subversion in Northwest China

  • Tech that automatically detects and reports protest signs in China

Tools/Papers

  • Zeno: An Interactive Tool For AI Model Evaluation

  • A New Framework for Coming to Terms with Algorithms

  • How to talk about AI (even if you don’t know much about AI)

  • Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models

  • Stable Bias: Analyzing Societal Representations in Diffusion Models

  • Building Robust RAI Programs as Third-Party AI Tools Proliferate

  • Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

  • An early warning system for novel AI risks

  • “empowering designers and users to document the entire system over time so that all of these feedback loops between users, between system components, or between the company and regulators, are able to be understood and made legible.”

  • Framing Online Speech Governance As An Algorithmic Accountability Issue

  • Twitter’s Algorithm: Amplifying Anger, Animosity, and Affective Polarization

  • Trust in Artificial Intelligence: A global study (PDF)

  • DC-Check: A Data-Centric AI checklist to guide the development of reliable machine learning systems

Law/Policy

  • EU parliament passes AI Act. Here are 5 key takeaways from it and some shortcomings.

  • Using AI for loans and mortgages is big risk, warns EU boss

  • About Face: How Should Government Regulate Emerging Tech?

  • Japan Goes All In: Copyright Doesn’t Apply To AI Training

  • Reforms to Home Appraisal Bias Target Algorithms and Tech. See also the CFPB on this.

  • “Europe’s smaller but most tech-oriented members rarely feel heard in the halls of Brussels, even as they often disagree with the Commission’s agenda.”

  • OpenAI Lobbied the E.U. to Water Down AI Regulation

  • National Artificial Intelligence Advisory Committee is having virtual briefing sessions. Register to attend.

  • This is not legal advice: Do Foundation Model Providers Comply with the Draft EU AI Act?

  • Licensing is neither feasible nor effective for addressing AI risks

  • Sam Altman Charmed Congress. But He Made a Slip-Up.

Generative AI

  • Are Responsible AI Programs Ready for Generative AI? Experts Are Doubtful

  • The poisoning of ChatGPT

  • “we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, arguing that it can reinforce stereotypes of gender roles and notions of acceptable language.”

  • The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content

  • OpenAI isn’t doing enough to make ChatGPT’s limitations clear

  • “I was surprised, however, by how different the conversations about the risks of generative AI were at RightsCon from all the warnings from big Silicon Valley voices that I’ve been reading in the news.”

  • No, GPT4 can’t ace MIT. Also: Did ChatGPT cheat on your test?

  • “One recommendation stemming from our research is to cease using LLMs that do not properly document training data in scientific papers until there is proof they are not contaminated.”

  • Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers

  • Large Language Models, Innovation, and Capitalism

  • DeSantis campaign shares fake Trump/Fauci images, prompting new AI fears

  • Google’s beta Search Generative Experience plagiarizes without citation and gives faulty medical advice. See also this.

  • How the media is covering ChatGPT

  • Evaluating the Social Impact of Generative AI Systems in Systems and Society

  • Explainable AI Reloaded: Do we need to Rethink our XAI Expectations in the Era of Large Language Models like ChatGPT?

  • From Human-Centered to Social-Centered Artificial Intelligence: Assessing ChatGPT’s Impact through Disruptive Events

  • Where We Stand on AI in Publishing

Compiled by Leif Hancox-Li
