This Week in Responsible AI

This Week in Responsible AI: September 4, 2023

Tools and How-tos

  • 3 Questions to Ask When Buying AI to Assess Responsibility and Trustworthiness

  • How to stop Meta from using some of your personal data to train generative AI models

  • Meta releases a dataset to probe computer vision models for biases

Transparency

  • YouTube demystifies the Shorts algorithm, views and answers other creator questions

Algorithmic harms

  • 'Our findings suggest that YouTube’s algorithms were not sending people down “rabbit holes” during our observation window in 2020, possibly due to changes that the company made to its recommender system in 2019. However, the platform continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.'

  • 'When AI systems are used, they are usually used for surveillance'

  • Person died after Cruise cars blocked ambulance, SFFD says

  • How AI Researcher Dylan Baker Uses Technical Communication to Reduce Algorithmic Harm

Law/Policy

  • A comprehensive and distributed approach to AI regulation

  • All hail the new EU law that lets social media users quiet quit the algorithm

  • US federal agencies put out a request for comment on Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing

Generative AI

  • Generative AI and intellectual property

  • A Harbinger of the Future of Content? The New York Times Starts a Data Strike

  • Large language models aren’t people. Let’s stop testing them as if they were.

  • Datasets as Imagination

  • AI tools make things up a lot, and that’s a huge problem

  • Generative AI closes off a better future

Compiled by Leif Hancox-Li
