Dispatch 3: Hallucinations
Here's some news and research that came across my feeds in the back half of November.
📣 security news
'AI pimping' industry takes off - The gen AI nightmare continues to metastasize in layers and layers of theft that seems to disproportionately impact women. There's now a network of "creators" hiding behind AI personas and teaching others their methods: using generative AI tools to create Instagram model personas, then ripping off the content of sex workers and influencers and essentially face-swapping it onto their persona's identity. Stolen influencer content is served up for free at the top of the funnel on Instagram, and stolen explicit content is sold behind a paywall on other sites. The "creators" can scale this method up to run many accounts at almost no cost.
So-called learning platform hacked - Andrew Tate's learning platform (and source of income) got hacked and defaced. Leaked data is available to researchers and journalists at DDoSecrets.
🛟 safer tech
AI medical transcription tool writes fiction instead - Unsurprisingly, a generative AI tool used by over 30,000 clinicians to transcribe and summarize patient interactions has been shown by researchers to hallucinate, or create inaccurate output. In a medical setting, it shouldn't be a stretch to call this unacceptable. For privacy, the tool deletes the original audio recording after transcription. This makes it impossible to compare the audio to the transcription after the fact unless it's been recorded elsewhere. And if checking the tool's output creates more work for providers, do we really think most of them have the time for that? These tools were supposed to save time and boost efficiency, not add the overhead of an "AI babysitter" role to existing jobs.
🤿 culture dive
Tech leaders wanna court Trump like Tim Cook - More tech CEOs are apparently looking to emulate Tim Cook's approach of courting the administration to shape tariffs and tax policy in their businesses' favor. *the most exhausted sigh* Capital, like life, finds a way...
The effects of critical race algorithmic literacy (PDF) - This paper came across my feed and I was so happy to see it. This is precisely the type of research the moment calls for from our technologists and computer scientists. Building a more just future means we cannot continue to separate our humanity from the designs of our technology.
> Broadly, the data suggests that critical race algorithmic literacies prepare Black students to critically read the algorithmic word (e.g. data, code, machine learning models, etc.) so that they can not only resist and survive, but also rebuild and reimagine the algorithmic world.
Check out Algorithmic Justice League!