※ Turing detectives
Hi there,
Normally I wouldn’t send out another newsletter so soon, but I’ve got a new piece of work I’m eager to share: an interview I did for Quanta with Ellie Pavlick, a computer scientist trying to figure out what it means for a large language model to contain “meaning”. Not philosophically speaking. Empirically. Because, science.
I think of Pavlick as a Turing detective—a variation on William Gibson’s “Turing police” concept from Neuromancer. Those guys were hardboiled cops making sure the AIs of the future (and their useful-idiot enablers) didn’t go rogue. In the real world of AI commentary, researchers like Gary Marcus, Emily Bender, and Melanie Mitchell play that role: they’re out there walking the beat, calling bullshit, and nailing bad actors to the wall. They’re our Turing police. Pavlick is doing something complementary to that, but more interesting. Like a detective, she’s trying to solve cases.
In 2019, something seemed to have happened: AIs could apparently “understand” text as well as, or better than, humans. But what really happened? Pavlick was one of the scientists who came onto the scene like Lieutenant Columbo, methodically showing that breathless claims about AI “understanding” were often the result of something much less dazzling. (That was how I first heard of her.) In 2021, I reported on some research showing that neural networks couldn’t reliably encode the concepts of “same” and “different”; in 2023, Pavlick showed that, actually, they can. Now I was hooked: I had to talk to her.
I don’t believe she’s solved the case of whether large language models “mean” or “know” things. But neither does she. That’s what’s fascinating (and all too rare) about Pavlick: her down-to-earth, nonpartisan unfussiness about sacred-cow concepts like “meaning”, “understanding”, “knowledge”, and the like. Post-ChatGPT LLMs sure do seem to contain those things. Pavlick doesn’t know either way. So instead of blowing hot air about them, she puts her head down and looks for hard evidence. It’s “unsexy”, in her own words. Just shoe-leather scientific detective work. But we need more of it. More Turing detectives.
(Know any?)
3 Good Things I Read On The Internet Recently
I realized (duh) that most things that capture my attention online are worry- or anger-inducing. So here are some curiosity- and fascination-inducing things instead.
A New Yorker profile of a scientist who thinks that sentience required warm-blooded nervous systems in order to evolve. (Here’s a non-paywalled copy of the article text, if you need it.)
AI isn’t useless. But is it worth it? Great, thoughtful essay that doesn’t take either of the usual black-and-white stances.
Why You’ve Never Been In A Plane Crash. My mouth was hanging open multiple times while I read this deep dive into a moral-intuition-defying-but-undeniably-safety-ensuring professional culture.
That’s all for now. Take it easy,
J