
This Week in Responsible AI, July 29, 2024

Labor

  • Everlasting jobstoppers: How an AI bot-war destroyed the online job market
  • 77% of employees in various Anglophone countries say AI tools have decreased their productivity

Surveillance

  • At the Olympics, AI is watching you
  • NYPD Coppelgänger: turning facial recognition on NYPD cops

AI Slop

  • Everyone Hates That Google AI Olympics Commercial
  • ‘Google says I’m a dead physicist’: is the world’s biggest search engine broken?
  • Twitter owner violates his own policies by posting a video of Kamala Harris with an AI-generated voice

Security

  • SAP AI vulnerabilities expose customers’ cloud environments and private AI artifacts
  • CYBERSECEVAL 3: Advancing the Evaluation of Cybersecurity Risks and Capabilities in Large Language Models

Law/Policy

  • AI existential risk probabilities are too unreliable to inform policy
  • FCC pursues new rules for AI in political ads, but changes may not take effect before the election

Fairness

  • What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice
  • Michigan’s “Fair and Reasonable” Reforms Allowed Car Insurers to Charge More in Black Neighborhoods

Data rights

  • Here’s how to disable X (Twitter) from using your data to train its Grok AI
  • Anthropic’s crawler is ignoring websites’ anti-AI scraping policies. See also: Read the Docs' post on this.
  • A new tool for copyright holders can show if their work is in AI training data
  • AI video startup Runway reportedly trained on ‘thousands’ of YouTube videos without permission
  • Google's Exclusive Reddit Access
  • Meta is training its AI with public Instagram posts. Artists in Latin America can’t opt out

Other

  • "people are going to rush in to use AI, to implement AI, even when they don't know what to do with it. And automation will often appeal to them because it's like the easiest thing to do."
  • Ada Lovelace Institute report on evaluating foundation models
  • Predictive Performance Comparison of Decision Policies Under Confounding
  • "when access [to GPT] is subsequently taken away, students actually perform worse than those who never had access"
  • AI models collapse when trained on recursively generated data
  • Open Problems in Technical AI Governance
  • Developer Blog: Moderating LLM Inputs with PromptGuard
  • TikTok’s algorithm is highly sensitive – and could send you down a hate-filled rabbit hole before you know it

Compiled by Leif Hancox-Li
