AI week April 22nd
Meme of the week: It runs on sandwiches
Source: https://www.globalnerdy.com/2024/01/03/the-first-agi-meme-of-2024-look-what-they-need-to-mimic-a-fraction-of-our-power/
Happy Earth Day, AI is sucking up electricity
Power-hungry AI is putting the hurt on global electricity supply | Ars Technica
Data centers are becoming a bottleneck for AI development.
The same day I read this, I got a marketing email from IBM about how AI can help the environment. In the context of AI’s power hunger, that seems a bit like helping a cook by looking up flambé recipes while the kitchen is on fire...
AI-"generated" "movie"
Chinese TV manufacturer TCL released a trailer for a romcom it calls an AI-powered love story. The writers are actual humans (although possibly not actual writers: it was written by a Chief Content Officer and a Chief Creative Officer); human actors were involved in a motion-capture sort of way; and "AI animation employed by teams of artists" does the rest. The trailer is sort of staggeringly awful and you should watch it.
https://boingboing.net/2024/04/16/watch-the-trailer-for-the-first-fully-ai-generated-romcom-coming-soon-from-tv-manufacturer-tcl.html
(Why is a TV manufacturer making movies, anyway? It's because TCL ships their TVs with a free ad-supported streaming service called TCLtv+, with 200 channels plus movies. They've launched TCLtv+ Studios to feed the beast: kind of like Netflix Originals but, if this trailer is an accurate reflection, much, much worse.)
The Dystopian Future of TV Is AI-Generated Garbage
TCL's new AI-generated movie "Next Stop Paris" is the next evolution in the algorithmification of TV.
AGI vs AI
Last week I mentioned that Elon Musk predicted AGI by next year, and an astute reader wrote me to point out that I hadn't actually defined AGI. AGI, or artificial general intelligence, is more or less what we used to mean by AI. I went into a little more detail in this blog post:
https://rdbms-insight.com/wp/2024/agi-vs-ai/
And now, here's some more context for that AGI prediction of Elon's:
Elon Musk's Worst Predictions and Broken Promises of the Past 15 Years
"I feel very confident predicting autonomous robotaxis for Tesla next year," Musk said in 2019.
AI changes everything in software, and here is how
Pat McGuinness, whose substack I follow, has a really interesting post about the kind of change AI may bring to software development.
AI as Software And Schillace Laws - AI Changes Everything
The AI Software Platform Shift and the Rules for AI Engineers
Dept. of What Could Go Wrong
1. It fires paintballs and tear gas. WCGW?
Porch Piracy Deterrent: A Security Camera that Fires Paintballs and Tear Gas - Core77
For the record I think this is a terrible, terrible idea. Sadly, I think it's also one that a subset of Americans will love, particularly those who suffer from package theft. The PaintCam is a night-vision-equipped security camera that uses facial recognition and, its developers write, "deters intruders with paintball markers."
2. A fighter jet piloted by DARPA AI. WCGW?
I'm not saying this is not impressive. It is super impressive. But is it, you know, a good idea?
DARPA’s AI test pilot successfully flew a dogfight against a human | Ars Technica
After flying against simulated opponents, the AI agent has taken on humans.
Dept. of Information Pollution
1. Meta ranks a Meta AI hallucination as top comment
It has everything a really helpful comment should have... the only tiny problem is that it's complete bullshit, because Meta AI doesn't actually have a child.
Technoptimism, TED, and The Road to the Future
It’s kind of surreal to compare some of the talks at TED yesterday with reality. Microsoft’s Mustafa Suleyman promised that hallucinations would be cured “soon”, yet my X feed is still filled with examples like these from Princeton Professor Aleksandra Korolova:
2. Documentarish
Is it still a true crime documentary if some of the photos it documents were actually created with AI?
Netflix doc accused of using AI to manipulate true crime story | Ars Technica
Producer remained vague about whether AI was used to edit photos.
AI-generated still images were also used in a horror film last year, with one major difference: that film was fiction.
3. Grok doesn't grok the joke
The platform formerly known as Twitter has been using xAI's Grok to create "news articles" based on trending posts. But Twitter was a funny, sarcastic place, and X still has enough of that spirit to thoroughly confuse Grok, which has been making up "news articles" based on users' jokes.
Elon Musk’s Grok keeps making up fake news based on X users’ jokes | Ars Technica
X likely hopes to avoid liability with disclaimer that Grok "can make mistakes."
4. yikes
Microsoft’s VASA-1 can deepfake a person with one photo and one audio track | Ars Technica
YouTube videos of 6K celebrities helped train AI model to animate photos in real time.
Hey gamers, earn money by renting out ur GPUs!
GPUs, or graphics processing units, are in high demand for AI model training. (GPUs aren't the only kind of chip that can be used: there are also TPUs, or tensor processing units, and NPUs, or neural processing units. Here's a rundown of the differences.) The supply of GPUs is limited, but consumer gaming PCs have GPUs that are just sitting around idle while their owners are at work.
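To make the idle-GPU point concrete, here's a minimal sketch (my illustration, not from any of the linked articles) of how a typical PyTorch workload grabs whatever GPU it finds and falls back to the CPU otherwise. This is exactly why a gaming rig's graphics card can be put to work on someone else's model:

    # Minimal sketch: most AI frameworks use whatever accelerator they find.
    # PyTorch shown here; the matrix sizes are arbitrary.
    import torch

    # Prefer an NVIDIA GPU (via CUDA) if one is present, otherwise use the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A toy "AI workload": multiply two large random matrices on that device.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b

    print(f"Ran a 4096x4096 matrix multiply on: {device}")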
So now gamers can rent out their GPUs while they're at work...
... buuuuut that idle hardware might be used to generate porn, possibly including borderline child porn. If the AI workload is image generation, who knows?
Idle GPUs Are the Devil's Workshop
Salad, a company that pays gamers in Fortnite skins for their idle PCs, also generates AI porn.
I’m now kind of curious how much of the global generative AI workload is porn. What fraction of "putting the hurt" on the global electricity supply is for generating boobies?
Speaking of AI-generated porn
Meta's oversight board is reviewing the way it handled two recent cases of deepfake porn of famous people. TL;DR: They took the deepfaked American woman down ASAP but left up the deepfaked Indian woman much longer.
https://www.reuters.com/technology/meta-oversight-board-reviews-handling-ai-created-celebrity-porn-2024-04-16/
Followup: Google firings
https://www.reuters.com/technology/google-terminates-28-employees-protest-israeli-cloud-contract-2024-04-18/
Meanwhile, Stability AI is having layoffs:
https://www.reuters.com/technology/stability-ai-lay-off-staff-weeks-after-founder-mostaque-resigned-ceo-2024-04-18/
Major copyright news
Author granted copyright over book with AI-generated text—with a twist | Ars Technica
Copyright Office changed course after initially denying request.
AI software engineer not engineering (thanks, Louie!)
Devin is "an AI software engineer" introduced last month by Cognition Labs (https://www.cognition-labs.com/introducing-devin):
Meet Devin, the world’s first fully autonomous AI software engineer. Devin is a tireless, skilled teammate, equally ready to build alongside you or independently complete tasks for you to review.
With Devin, engineers can focus on more interesting problems and engineering teams can strive for more ambitious goals.
The company released a video purporting to show Devin completing an Upwork job. The only problem is, Devin doesn't do what the video says it does: it doesn't actually solve the problem posed by the job.
Maybe this shouldn't be surprising! From Devin's own page:
Devin correctly resolves 13.86%* of the [programming] issues end-to-end, far exceeding the previous state-of-the-art of 1.96%. Even when given the exact files to edit, the best previous models can only resolve 4.80% of issues.
A jump from roughly 2% (or about 5%, even when the older models were told exactly which files to edit) to about 14% is a really impressive leap! But resolving about one issue in seven end-to-end is too low to support the claim that "Devin is a tireless, skilled teammate, equally ready to build alongside you or independently complete tasks for you to review." At the moment, software engineers might find that working with Devin is more like working with an enthusiastic intern who doesn't learn from your feedback.
Longreads
1: LLM from zero to hero
Rent cloud compute, they said. It’ll be easy, they said.
This blog post is from someone who left Google (that's what he means by "in the wilderness") to found a startup, Reka, which just launched a "multimodal" LLM (large language model) that takes text, image, video, or audio inputs.
The "in the wilderness" bit feels a little weird until you read this blog post and realize just how much really excellent tech infrastructure Google's engineers just take for granted. That's what made this a fascinating read for me: it winds up being an in-depth, on-the-ground look at the advantages the big movers have over startups in AI innovation.
Training great LLMs entirely from ground up in the wilderness as a startup — Yi Tay
Chronicles of training strong LLMs from scratch in the wild
Reka may no longer be "in the wilderness," though, since they've just announced they're going to run on Oracle Cloud.