A Fresh AI Hell Roundup: March 21, 2024 Edition
By Alex
Data workers in Africa are being locked out of their accounts. Journalist Karen Hao reported on Twitter that workers employed by Remotasks, which is owned by Scale AI and used by OpenAI, have been blocked from their accounts, with lockouts reported in Nigeria, Rwanda, and South Africa. This comes after Amazon locked out Mechanical Turkers for nearly two weeks with no explanation. For many workers, this is their only source of income, or a toehold into the tech industry. Neither company has provided any transparency into what happened.
Healthcare bots are replacing vital functions in patient triage. Amazon's recent acquisition of the healthcare giant One Medical has been concerning for many folks who use the service. Apparently, Amazon is training a chatbot on patient messages, with no opportunity for patients to opt out. This means that in the future, patients may be shunted to a chatbot rather than a real healthcare professional.
Meanwhile, Microsoft suggests developing "patient embeddings" for precision medicine, which they say will create a "digital twin" for patients. This sounds like a medical data privacy nightmare, with nearly all medical information (visual, textual, and otherwise) going into one big model. All the while, LLMs are rife with biases that perpetuate myths about race and medicine.
New York Mayor Eric Adams spoke at a Google-sponsored "Stand with Israel" tech conference, where security assaulted a journalist. Tech journalist Caroline Haskins reported from the conference, which focused on the Israeli tech industry. One pro-Palestine Google worker interrupted the proceedings, and many more protested out front. The worker who interrupted was subsequently fired, and Haskins was forcibly pushed out of the event for covering the protest.
Academic papers, especially those published by for-profit publisher Elsevier, are filled with easily-detectable AI-generated content. Surprising absolutely no one, people are using ChatGPT to generate academic papers. But you'd think academics would be smart enough to remove telltale signs of its use, such as "Certainly, here is a possible introduction for your topic". 404 Media found that the phrase "As of my last knowledge update" returns 115 results in Google Scholar (although on my last search it had increased to 186). This exposes deeper problems with peer review at these journals, or rather, its complete absence.
Next Episode: Karen Hao and AI in Journalism
Speaking of Karen Hao, she joins us next week to discuss the use of AI in journalism! We're starting a bit later, at 5 PM Pacific, on Monday, March 25. Join us over on our Twitch channel!