AI Week Apr 8th: Eclipse edition! Hairy mutants, AI targeting, and more
AI Week for April 8, 2024
Happy Eclipse Day! If you're in North America, you may have caught this afternoon's solar eclipse. We live a few hours from the path of totality and drove to see it.
I didn't take pics; the photo above is actually the 1999 eclipse in France, captured by Luc Viatour / Lucnix.be. We saw different coronal flares, and, thanks to the light cloud cover, a rainbow effect.
It was an unmatched experience. I recently read a review of The Power of Wonder, a book on the importance of opening ourselves to the experience of wonder. Watching an eclipse brought me to that deep sense of wonder. To be honest, following the developments in AI has also brought me some wonder over the last two years. I can be pretty critical at times, because I don't want to lose sight of the downsides in our very hype-rich environment, but that doesn't mean we can't marvel at the upsides.
What to play with this week: Hairy Mutant
I also get a fair bit of amusement out of generative AI, and this week's image generator is a lot of fun to play with. So, back in 2021, before it was possible to make a vocal deepfake of anyone from 15 minutes of audio, musician Holly Herndon trained a neural network on her voice and released Holly+, allowing anyone to make music with her voice. Last year, she fine-tuned an image model on... well, not herself, exactly, but herself in costume as a character version of herself:
To this end, we began with the goal of amplifying Holly’s cliche, and constructed a costume (tailored by Franziska Muller and Lenna Stam) in which Holly was overrun by her hair. She takes on mutant, promethean, proportions, and her hair, like kudzu, begins to invade and envelop her. We used images of Holly wearing this costume to fine-tune an image model, and that model was recursively refined to produce a consistent character that is able to be spawned by anyone using the interface provided. This model can produce infinite images of this new character. The images produced by this model will mostly all, in some way, be infected by the hairy mutant.
Hairy Mutant model outputs for Toilet, Jellyfish, and Garfield Eating Lasagna
The model is here, and it's enormously fun to play with, because it manages to work ropes of red hair and a weird green suit into nearly anything. Save images you like to Holly's website, where they will be tagged "Holly Herndon" in order to poison future AI models slurping down training data. Someday all AI image generation will respond to "Holly Herndon" prompts with green-suited, red-hair-tangled women! And that's a beautiful thing.
xhairymutantx | Holly Herndon + Mat Dryhurst
Do we get to choose how we are represented on the AI substrate?
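If you'd rather poke at a character fine-tune like this from code than through the web interface, loading a fine-tuned Stable Diffusion checkpoint with Hugging Face diffusers only takes a few lines. A minimal sketch; the repo id and trigger phrase below are hypothetical placeholders, not the actual xhairymutantx release:

```python
# Minimal sketch: generating from a character fine-tune with Hugging Face
# diffusers. The model id and trigger phrase are hypothetical placeholders,
# not the published xhairymutantx checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "example-org/hairy-mutant",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

# Character fine-tunes usually key on a trigger token; this one is an
# assumption, not the model's documented trigger.
image = pipe("a jellyfish, xhairymutantx style").images[0]
image.save("jellyfish_mutant.png")
```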
Speaking of slurping down training data:
1. The Generative AI gold rush is coming for Photobucket
Photobucket CEO Ted Leonard says he is on solid legal ground, citing an October update to the company's terms of service that grants it the "unrestricted right" to sell any uploaded content for the purpose of training AI systems. He sees licensing data as an alternative to selling ads.
IANAL, but this seems legally marginal for pics uploaded years ago, whose owners may never visit the site to see the updated TOS, and may never receive the updated TOS by email.
https://www.reuters.com/technology/inside-big-techs-underground-race-buy-ai-training-data-2024-04-05/
2. Billie Eilish, Bon Jovi, Katy Perry, Stevie Wonder, and others signed an open letter noting that "some of the biggest and most powerful companies are, without permission, using our work to train AI models." The letter calls on AI companies and music services "to pledge that they will not develop or deploy AI music-generation technology, content or tools that undermine or replace the human artistry of songwriters or artists and deny us fair compensation for our work."
Speaking of copyright infringement:
Some bots in OpenAI's "GPT Store" are illegally scraping textbooks. Ars Technica and TechCrunch found a pile of copyright-infringing GPTs (among a throng of "spammy, legally dubious and perhaps even harmful" GPTs). It looks like whatever OpenAI is doing for moderation -- a blend of automated and human review -- isn't enough.
Publisher: OpenAI’s GPT Store bots are illegally scraping our textbooks | Ars Technica
OpenAI has taken down some bots from GPT Store, but copyright complaints continue.
Sometimes NYC has bad ideas
And sometimes they relate to AI, and I get to include them here.
Bad idea 1: A chatbot trained on NYC's laws. The MyCity chatbot has the same tendency to "hallucinate," i.e. make stuff up, that all LLMs do.
Playing with the Hairy Mutant model above is a nice way to visualize a model's tendency to deviate from its fine-tuning, because you know exactly what it's been fine-tuned on. Most of the time, when you give it a prompt, it responds as fine-tuned, adding a bunch of distinctively styled red hair and/or a green suit. But every now and then it just responds to the prompt without reference to the fine-tuning (like this image of a bottle of "chihuahua flavored vitamin water").
NYC’s AI Chatbot Tells Businesses to Break the Law – The Markup
The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income
NYC defends it as a prototype:
"It's wrong in some areas, and we've got to fix it," [NYC Mayor Eric] Adams, a Democrat, told reporters on Tuesday, emphasizing that it was a pilot program. ... The chatbot remained online on Thursday and was still sometimes giving wrong answers.... The city has updated disclaimers on the MyCity chatbot website.
Bad idea 2: Weapons detectors that don't detect weapons
NYC announced plans to install weapons-detecting scanners in its subways, but it chose AI-powered weapons detectors that don't seem to work. Kids have gotten stabbed in schools where these scanners missed knives, and the company's shareholders are suing it for misrepresenting what the technology can do. It sounds like a Theranos for security screening instead of blood tests.
Shareholders Sue AI Weapon-Detecting Company, Allege It 'Does Not Reliably Detect Knives or Guns'
Evolv, which will be used by the NYC subway, is under investigation by the FTC and SEC, and also had to retract a claim that the UK government validated its product.
Much worse idea: AI targeting of humans in war
An article in an Israeli-Palestinian magazine reports on the use of an AI system to identify human targets. The Guardian piece below summarizes a longer, more in-depth article in +972 Magazine. A quote from the longer article:
During the early stages of the war, the army gave sweeping approval for officers to adopt [AI system] Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases... Lavender marks people — and puts them on a kill list.
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets | Israel-Gaza war | The Guardian
Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants
FINAL UPDATE on Zombie George Carlin (I hope)
Zombie George Carlin has been laid to rest for good.
If you missed out on Zombie George Carlin's comedy set, you aren't missing much. The set claimed to be an AI-generated parody, but as I noted at the time, "the set sounds like it was written by human comedians with their own axes to grind, then cast in George Carlin's voice by generative AI." When the podcasters who put the set online were sued, they admitted that that's exactly what they did: they wrote a Carlinesque set and then deepfaked Carlin's voice onto it. And in all honesty, it wasn't a particularly good Carlin imitation.
George Carlin estate forces “AI Carlin” off the Internet for good | Ars Technica
Settlement bars Dudesy podcast from re-uploading its ersatz Carlin comedy special.
Some headlines
OpenAI holds back release of voice cloning tech that we basically already have:
OpenAI holds back wide release of voice-cloning tech due to misuse concerns | Ars Technica
Voice Engine can clone voices with 15 seconds of audio, but OpenAI is warning of potential harms.
Instability at Stability AI:
Stability AI is one of the most high-profile generative AI companies around. Its Stable Diffusion image-generating models (which you might have seen as SD or SDXL) are extremely well known and widely used; its just-updated music generator Stable Audio can now take a text prompt and produce 3-minute music clips, which are "songs" in the same sense that AI-generated art is "paintings." (BTW, they released this update just days after Billie Eilish, Stevie Wonder, Katy Perry, and others signed the open letter asking AI companies not to do exactly this.) So it's really interesting that Stability AI's CEO recently resigned, developers have been leaving, and the company is running out of money and falling behind on cloud payments. This Forbes deep dive was a good read:
Stability AI Founder Emad Mostaque Tanked His Billion-Dollar Startup
Unpaid bills, bungled contracts and a disastrous meeting with Nvidia's kingmaker CEO. Inside the stunning downfall of Emad Mostaque.
An awesome reader emailed to ask me whether this was going to be a common failure mode for AI companies, which I thought was a really smart question. The Forbes article puts a lot of the blame on huge cloud-computing costs, coupled with an inability to close enough deals using their tech. Apparently they wound up with so much unused compute that they were planning to resell it -- a desperation move, something like "renting out all this office space we're not using".
Google's thinking of paywalling AI-powered search:
So... I'll be able to turn off the AI-powered search results by not paying? The same AI-powered search results that bring me gems like this list of Animals Starting With C? I'm not seeing the downside here.
https://www.theregister.com/2024/04/04/google_ponders_making_ai_search/
AI-Generated Nudes File
$10 gets you (nonconsensual) nudes of anyone on Earth
Unfortunately, this article is paywalled, but the TL;DR: if you're a wannabe slimeball who can't find your own NSFW AI art generator and can't even manage to install a nudify app successfully, a guy on Telegram will use AI to generate a nude of whoever you like for $10.
(It's not like it's hard to find NSFW art generators if you have access to Google, and, failing that, people seem to manage to get around the guardrails of SFW generators; not to mention the proliferation of free and paid "nudify" apps that have put nonconsensual porn within reach of any skeezeball. But if you can manage a Telegram chat and nothing else, I guess now you just need to know a guy on Telegram.)
‘IRL Fakes:’ Where People Pay for AI-Generated Porn of Normal People
A Telegram user who advertises their services on Twitter will create AI-generated porn of anyone for a price, and has also targeted minors.
NUCA, the camera that takes nudes
This is an interesting camera-slash-art-object. Instead of the usual and gross workflow of "I took a photo of someone, ran it thru a nudify app without their knowledge, and nonconsensually turned it into a nude", NUCA turns every photo it takes into a nude on the spot, right in front of the photo's subject. The nudes don't pretend to represent the body of the person, and don't even look particularly real, which is important in making this interesting rather than gross. An LLM describes the image of the person; that description is used as the base for a prompt requesting a nude from an image-generating model; the model generates an idealized nude matching that description; then a faceswap app pastes the subject's face onto the image.
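The article doesn't spell out NUCA's actual stack, so here's a minimal sketch of the general shape of that describe-then-generate-then-faceswap pipeline. The OpenAI calls are stand-ins I'm assuming for illustration, and swap_face() is a hypothetical stub for whatever faceswap tool the last step would use:

```python
# Rough sketch of a NUCA-style describe -> generate -> faceswap pipeline.
# The OpenAI calls are stand-ins (the article doesn't name NUCA's models),
# and swap_face() is a hypothetical placeholder for a faceswap tool.
import base64
from openai import OpenAI

client = OpenAI()

def describe_subject(photo_path: str) -> str:
    """Step 1: a vision-capable LLM describes the photo's subject in text."""
    with open(photo_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this person's pose and setting."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

def generate_from_description(description: str) -> str:
    """Step 2: use the description as the base of an image-generation prompt."""
    resp = client.images.generate(model="dall-e-3", prompt=description)
    return resp.data[0].url  # URL of the generated image

def swap_face(source_photo: str, generated_url: str) -> str:
    """Step 3 (hypothetical stub): paste the subject's face onto the image."""
    raise NotImplementedError("stand-in for a faceswap tool")

def nuca_style_pipeline(photo_path: str) -> str:
    description = describe_subject(photo_path)
    return swap_face(photo_path, generate_from_description(description))
```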
The article below is SFW, with all naughty bits pixellated out.
This Camera Turns Every Photo Into a Nude
NUCA, a 3D printed prototype camera and art project, uses AI to instantly generate a nude of any subject.
Lottery website generated nonconsensual nude of a user
The Washington State Lottery has taken down a promotional AI-powered web app after a local mother reported that the site generated an image with her face on the body of a topless woman.
How did this even come up on a lottery site, you might ask? The lottery's promotional idea was to tempt users to throw money away on lottery tickets by generating pics of their dream vacation, with their uploaded pic faceswapped in. So, kind of like NUCA above, except that the lottery's image-generation model wasn't supposed to generate nudes. It did anyway.
As Ars Technica notes, "it might not be that simple to effectively rein in the endless variety of visual output an AI model can generate."
After AI-generated porn report, Washington Lottery pulls down interactive web app | Ars Technica
User says promo site put her uploaded selfie on a topless woman's body.
Longreads
1. Full article about AI targeting in +972 Magazine
https://www.972mag.com/lavender-ai-israeli-army-gaza/
2. Effective altruism
Back in November, EA (Effective Altruism) came up in the context of OpenAI's former board, who were mostly EAs. I tried to give a brief explanation then (scroll down to "Sidebar: Wait, what's effective altruism (EA)?") but the TL;DR is that it's a philanthropic philosophy that's huge in Silicon Valley, especially among AI people, and it's a... unique way to look at giving. From the November newsletter:
The EA principle in a nutshell is to philanthropize more effectively, i.e., to make sure the money you donate is having the maximum possible impact. This principle has led EAs to some surprising places:
* Organizing your life around making as much money as possible as a philanthropic goal, in order to give that money away.
* Not donating that money to charity, in order to use that money to make more money so that you can donate the largest possible sum on your death -- a philosophy attributed to Binance founder Changpeng Zhao, aka CZ.
* Considering possible catastrophic future impacts means prioritizing funding goals like "mitigating the harms of AI" over more traditional charitable goals, like helping people who are at risk of dying today.
Wired has a detailed, thorough, fiery article about how EA works, how EAs think, and why it's a load of hooey.
“As I use the term,” MacAskill says, “altruism simply means improving the lives of others.” No competent philosopher could have written that sentence. Their flesh would have melted off and the bones dissolved before their fingers hit the keyboard.