AI Week for Monday, Nov 20th
Hi! This is Natalka with your AI Week for November 20th. Thanks for sticking with me while I get this newsletter off the ground! Please let me know what you think by replying to this email.
This was a big week for AI news, in the sense that there was one giant story: OpenAI, makers of ChatGPT, fired their CEO.
- Big news: OpenAI board fires Sam Altman
- Smaller news: Xbox trying AI-assisted moderation; Google photo AI won't touch faces, bodies
- Two must-read longreads
- ICYMI: Boston Dynamics' creepy robot dog gets a ChatGPT upgrade
1. OpenAI board fires CEO Sam Altman
The big story in AI news this week is former OpenAI CEO Sam Altman's firing. OpenAI are the makers of ChatGPT, arguably the first LLM-based chatbot to reach widespread public use. The for-profit arm of OpenAI has drawn a lot of investment, most notably from Microsoft, but it is controlled by the OpenAI 501(c)(3) non-profit; on Friday, that non-profit's board abruptly fired then-CEO Sam Altman over a Google Meet call and appointed CTO Mira Murati as interim CEO.
And then it got messy.
The company's co-founder and president, Greg Brockman, responded to the news with a tweet on X: "based on today's news, i quit". And he wasn't the only one: three senior researchers quit the same night. Altman, for his part, quickly started letting people know that he was thinking about forming a new company, perhaps with Brockman. Microsoft was reportedly not pleased.
By Saturday, the board was negotiating with Altman to come back, with interim CEO Mira Murati actively working to bring him back. Meanwhile, a lot of OpenAI employees started signaling their willingness to follow Altman away from OpenAI, by retweeting one of his posts with a heart emoji.
(screencap from The Verge)
However, Altman put a condition on his return: the board had to go first. The board members agreed in principle to resign, but then... didn't. The drama dragged on through the weekend. On Monday, Microsoft CEO Satya Nadella announced that Sam Altman would be joining Microsoft -- and a majority of OpenAI employees pledged to quit if the board didn't resign.
OpenAI's board then chose a new CEO, replacing Mira Murati -- their 3rd CEO in three days, if you're keeping track -- with former Twitch CEO Emmett Shear. Shear is more than just the past leader of Twitch: he's sufficiently embedded in the "rationalist"/"effective altruism" community that he was namechecked in Eliezer Yudkowsky's epic fanfiction-slash-recruitment-text-for-the-rationalist-movement, Harry Potter and the Methods of Rationality. (H/T to Priya Chand for this link!)
(By the way, if you'd like to know more about the connection between some of Silicon Valley's biggest names in AI and a 660K-word Harry Potter fanfic, reply to this email to let me know. It's a whole thing.)
What kicked this mess off?
The OpenAI board's initial statement about Altman's firing on Friday -- a blog post on their website, very classy -- said "he was not consistently candid in his communications with the board," which is often corporate-speak for lying, fraud, or similar malfeasance. Understandably, people had a lot of questions. On Saturday, OpenAI's COO, Brad Lightcap, clarified in an internal memo that the firing was not about "malfeasance or anything related to our financial, business, safety, or security/privacy practices," but "a breakdown in communications between Sam Altman and the board." Eventually it came out that OpenAI's chief scientist, Ilya Sutskever, played a key role in the ouster -- and then regretted it, signing an open letter threatening to quit and join Microsoft unless the board resigned.
Phew: that was a rollercoaster ride, and it's not over yet! It's not exactly clear what Microsoft is going to do with Sam Altman (although they're planning on making him CEO of a new AI research group), or how many OpenAI people will join him there. As of this afternoon, it's not even clear that he'll stay there. I'll come back to this next week.
For more info, The Verge has some really thorough and as-clear-as-possible coverage of this messy story; their full timeline is here.
2. Smaller news
Xbox needs moderation help, is trying AI
I don't think it's news to anyone that Xbox players can be very rude. They also cheat and create fake accounts, which leaves Microsoft with a lot of moderation to do. Last week Ars Technica reported that Microsoft has been using Community Sift, a product from its subsidiary TwoHat that bills itself as "an AI and Human-Powered Content Moderation solution".
Google Photos' AI "Magic Editor" won't touch faces, bodies, paperwork
On Monday, the Register reported that Google Photos' new AI-powered "Magic Editor" won't change pictures of IDs, receipts, faces, or body parts. So you can move Grandpa around in the photo, but you can't change his nose or give him boobs. The idea is to block activities that could harm others, and to avoid helping users commit fraud. Of course, the guardrails aren't perfect.
And in direct opposition to that philosophy: A16z, a Silicon Valley VC firm, made news this week by investing in an AI-image marketplace that's just fine letting its members solicit AI-generated pornographic pictures of real, non-consenting people. (404 Media, via boingboing.net)
3. Two longreads worth reading
I read two excellent longreads this week, both in the New Yorker, and both worth your time.
1. Profile of Geoffrey Hinton
https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai
Geoffrey Hinton's been working on machine learning and neural networks since their earliest days. This gently meandering profile covers his life, his career, and how both have informed his current concerns.
Quote:
“I wanted you to know about Roz and Jackie because they’re an important part of my life,” he said. “But, actually, it’s also quite relevant to artificial intelligence. There are two approaches to A.I. There’s denial, and there’s stoicism. Everybody’s first reaction to A.I. is ‘We’ve got to stop this.’ Just like everybody’s first reaction to cancer is ‘How are we going to cut it out?’ ” But it was important to recognize when cutting it out was just a fantasy. He sighed. “We can’t be in denial,” he said. “We have to be real. We need to think, How do we make it not as awful for humanity as it might be?”
2. Is my toddler a stochastic parrot?
https://www.newyorker.com/humor/sketchbook/is-my-toddler-a-stochastic-parrot
This is a beautiful, touching, and thoughtful illustrated essay. Don't miss it. Tagline:
The world is racing to develop ever more sophisticated large language models while a small language model unfurls itself in my home.
4. In case you missed it
I just started this newsletter last week, but there's a lot of interesting stuff that happened earlier that I'd still like to share. So I'm experimenting with adding an occasional ICYMI: something that's not last-week recent, but is too good not to share. If you love the idea (or hate it), reply to this email to let me know.
ICYMI: Boston Dynamics' creepy robot dog gets a ChatGPT upgrade (and googly eyes)
https://bostondynamics.com/blog/robots-that-can-chat/
In case you missed it, earlier this year Boston Dynamics hooked up ChatGPT to their creepy robot dog and trained it to act as a tour guide. Part of what makes their robot dog so creepy is its lack of a head: it has a gooseneck manipulator arm right where anything that evolved on Earth would have a neck and head. The engineers' solution? They programmed the arm to act kind of like a head--the manipulator even opens and shuts when it talks, exactly the way you'd make your hand "talk"--and stuck googly eyes on it. The video of this thing in action is so, so, so worth watching.
https://www.youtube.com/watch?v=djzOBZUFzTw (8:27, sound on)