※ Turing detectives
Hi there,
Normally I wouldn’t send out another newsletter so soon, but I’ve got a new piece of work I’m eager to share: an interview I did for Quanta with Ellie Pavlick, a computer scientist trying to figure out what it means for a large language model to contain “meaning”. Not philosophically speaking. Empirically. Because science.
I think of Pavlick as a Turing detective—a variation on William Gibson’s “Turing police” concept from Neuromancer. Those guys were hardboiled cops making sure the AIs of the future (and their useful-idiot enablers) didn’t go rogue. The real world of AI commentary has researchers like Gary Marcus, Emily Bender, and Melanie Mitchell: they’re out there walking the beat, calling bullshit, and nailing bad actors to the wall. They’re our Turing police. Pavlick is doing something complementary, but more interesting. Like a detective, she’s trying to solve cases.
In 2019, something seemed to have happened: AIs could apparently “understand” text as well as or better than humans. But what really happened? Pavlick was one of the scientists who came onto the scene like Lieutenant Columbo, methodically showing that breathless claims about AI “understanding” were often the result of something much less dazzling. (That was how I first heard of her.) In 2021, I reported on research showing that neural networks couldn’t reliably encode the concepts of “same” and “different”; in 2023, Pavlick showed that, actually, they can. Now I was hooked: I had to talk to her.