This Week in Responsible AI

July 7, 2024

This Fortnight in Responsible AI, Jul 7, 2024

Data

  • Building Better AI: The Importance of Data Quality
  • The Ghost Stays in the Picture, Part 2: Data Casts Shadows

Labor

  • AI could kill creative jobs that ‘shouldn’t have been there in the first place,’ OpenAI’s CTO says
  • AI Employees Should Have a “Right To Warn” About Looming Trouble

Security

  • Why I attack
  • Glazing over security

"Alignment"

  • Adversaries Can Misuse Combinations of Safe Models
  • UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI
  • Evaluating Human Alignment and Model Faithfulness of LLM Rationale
  • The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm

Magazines

  • Latest from Kernel Mag: Algorithmaxxing, Labor Organizing, & More
  • new XRDS issue: "Technology & Social Justice: Shifting Power through Resistance"

Copyright

  • How to Fix “AI’s Original Sin”
  • Major record labels sue AI company behind ‘BBL Drizzy’. See also: The RIAA versus AI, explained

Consent

  • "if you want to prevent Copilot from using data from your web page in response to a user’s question then you probably need to get de-indexed from the search engine"
  • "I don't want attribution. I want an opt-out that is enforceable."

Climate

  • The Hidden Environmental Impact of AI
  • Google’s emissions climb nearly 50% in five years due to AI energy demand

Fakes

  • Are there any humans left on the internet?
  • "A network of Russia-based websites masquerading as local American newspapers is pumping out fake stories as part of an AI-powered operation that is increasingly targeting the US election"
  • TikTok’s AI tool accidentally let you put Hitler’s words in a paid actor’s mouth
  • Meta is incorrectly marking real photos as ‘Made by AI’
  • AI Tools Make It Easy to Clone Someone’s Voice Without Consent
  • All my beautiful AI children

Perplexity

  • Perplexity’s grand theft AI
  • Perplexity.AI Is Susceptible to Prompt Injection From Arbitrary Pages (and some other issues)
  • Garbage In, Garbage Out: Perplexity Spreads Misinformation From Spammy AI Blog Posts

Other

  • Detroit paying $300,000 to man wrongly accused of theft, making changes in use of facial technology
  • Understanding "Democratization" in NLP and ML Research
  • "Although model cards have been adopted on a broad scale, there is a lack of compliance with established community standards and a striking disparity in the attention given to different sections of these cards"
  • "Contrary to some U.S. discussions of China’s views of military AI, many of the Chinese experts whose arguments have been analyzed in this report voice misgivings about using insufficiently trustworthy AI systems in military contexts"
  • UK housing benefit algorithm "wrongly flags 200,000 people for possible fraud and error"
  • 'The perspectives, lived experiences, and contributions that would transition AI products from “expensive skeuomorph” to “meaningful innovation” won’t, and can’t, come from tech’s nouveau riche.'
  • Safe beyond sale: post-deployment monitoring of AI
  • Destroy AI

Compiled by Leif Hancox-Li
