This Week in Responsible AI

October 20, 2023

This Week in Responsible AI, Oct 20, 2023

Transparency

  • 'An “algorithmically driven fog of war” is how one journalist described the deluge of disinformation and mislabelled footage on X.'

  • Can you break the algorithm?

  • Food delivery service Glovo: tracking riders’ private location and other infringements

Inclusion

  • Why we need participatory methods in AI

  • Notably Inaccessible -- Data Driven Understanding of Data Science Notebook (In)Accessibility

Privacy

  • Delete-your-data laws have a perennial problem: Data brokers who fail to register

  • Selfie-scraper, Clearview AI, wins appeal against UK privacy sanction

  • ChatGPT Can 'Infer' Personal Details From Anonymous Text

Labor

  • 'for as little as $300, [actors] appear to have authorized Realeyes, Meta, and other parties of the two companies’ choosing to access and use not just their faces but also their expressions, and anything derived from them, almost however and whenever they want—as long as they do not reproduce any individual likenesses.'

  • Uber ordered to pay €584,000 for failure to comply with court order in robo-firing case

Generative AI

  • How ChatGPT and other AI tools could disrupt scientific publishing

  • Large language models propagate race-based medicine

  • A New Tool Helps Artists Thwart AI---With a Middle Finger

  • Multi-modal prompt injection image attacks against GPT-4V

  • AI Image Detectors Are Being Used to Discredit the Real Horrors of War

  • Testing AI or Not: How Well Does an AI Image Detector Do Its Job?

  • NYC Mayor Casually Announces He's Deepfaking Himself, Experts Horrified

  • OpenAI’s flagship AI model has gotten more trustworthy but easier to trick

  • Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown

  • Invisible AI watermarks won’t stop bad actors. But they are a ‘really big deal’ for good ones

Law/Policy

  • Generative AI Legal Explainer

  • New York City Unveils AI Action Plan that Develops Rules Framework

  • It is 'nearly unavoidable' that AI will cause a financial crash within a decade, SEC head says

  • Fugees’ Pras Michel says lawyer bungled his case by using AI to write arguments

  • Universal Music sues AI company Anthropic for distributing song lyrics

  • Marc Andreessen Manifesto Says AI Regulation “Is a Form of Murder”. See also: 'Our options are not limited to “Fast AI” vs “Slow AI.” You get a different variation on the future of AI if, for instance, the biggest financial payoffs are in military applications vs workbench science applications.'

  • How a billionaire-backed network of AI advisers took over Washington

  • China has a new plan for judging the safety of generative AI---and it’s packed with details

New Research

  • Feminist AI

  • Less Discriminatory Algorithms

  • Evaluating social and ethical risks from generative AI

  • AI safety guardrails easily thwarted, security study finds

  • Collective Constitutional AI: Aligning a Language Model with Public Input

  • The Operational Risks of AI in Large-Scale Biological Attacks

  • The problem with annotation. Human labour and outsourcing between France and Madagascar

Compiled by Leif Hancox-Li
