The Outside View
In Superintelligence: The Idea That Eats Smart People, Maciej Cegłowski talks about analyzing AI risk from the perspective of the "outside view":
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming. […] The outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. […] The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
This is one of the most powerful tools for doing research. Analyzing the technical merits of a practice, or technology, or $THING is really hard. Ideally we'd give everything we explore a fair shake, but we have limited time and need heuristics to decide what's worth that time. If something fails the outside view, then we can probably skip it without investigating further.
What does the outside view look like in software? In my experience, here are a few red flags:
- Is $THING presented as a radical paradigm shift or a revolution in software?
- Do the ideas of $THING mostly come from one or two people, with nobody else further developing the ideas?
- Are $THING advocates dismissive of other practices and ideas without bothering to properly understand them?
- Does the topic have "racing thoughts": people connecting it to a wide variety of different topics without bothering to establish a firm grounding in any of them? [1]
- Are they fixated on an "enemy" technology they blame for modern computing's problems? For example, "Von Neumann architecture", OOP, HTML.
- Do they obsess over categorization, either claiming lots of unrelated things are really $THING, or saying stuff isn't really $THING because it lacks some minor detail?
- Do they avoid the usual channels of idea diffusion: papers, conferences, arguments on Twitter?
- Do they refuse to acknowledge any downsides or tradeoffs of $THING versus other approaches? If there are tradeoffs, are they just humblebrags like "this doesn't work with corporate drones"?
- Do they have bad behavior on Wikipedia? Sockpuppeting, writing articles as advertisements, picking fights in talk pages, stuff like that.
None of these are definitive: some ideas can be valid while still having some red flags, and crackpot stuff might not check every box. But the more red flags you see, the more you should be nodding politely and moving to the door.
Notice that none of these are even about $THING itself! That's the power of the outside view: it helps you make these decisions before you dive into the details.
Examples
Let's start with something that checks most of these boxes: Carl Hewitt and the actor model of computation.
(Not the actor model of concurrency, which was primarily developed by Gul Agha, though Hewitt nowadays takes credit for it. I'm talking about the original actor model.)
- Most def, he thinks the actor model solves the halting problem, refutes the Church-Turing thesis, and proves Gödel wrong.
- Yes. There have been a few attempts to build on the actor model of computation, but they all notably abandoned it to make progress. For example, Gerald Sussman and Guy Steele tried to explore the model with Scheme, but dropped it in favor of continuations. The model of computation remains entirely Carl Hewitt's.
- Yes.
- Yup. Hewitt connects the actor model to cybersecurity, strong AI, "paraconsistent logic", and the fate of the free world.
- Hewitt spends a lot of time villainizing lambda calculus and Turing machines.
- Hewitt will complain at length that things "aren't really the actor model".
- Hewitt was banned from ArXiv and now publishes on SSRN and viXra.
- Uncertain; I've not seen him challenged on the actor model in a way that'd reveal his stance on its drawbacks.
- There is a Wikipedia category "Suspected Sockpuppets of Carl Hewitt".
So yeah, we can safely ignore the actor model of computation, at least until someone who's not Carl Hewitt (or his disciples) makes major improvements.
Note that it doesn't go both ways: someone can be wrong without being a crackpot. As a negative example, consider Uncle Bob Martin and his corpus of work. I think he's wrong about many things and find his personal views odious. But I cannot dismiss him with the outside view test!
- He associates "clean code" with professionalism, but he doesn't consider it radical or revolutionary.
- No, there are many people riffing on TDD, clean code, and SOLID with their own takes.
- While he's dismissive of things like formal methods and type systems, he at least tries to form coherent arguments about why he's dismissing them.
- No.
- No, his issues with computing are mostly social ("we're not professional enough").
- Nope.
- He's published several books and is active in mainstream channels.
- He's willing to admit you can take clean code too far and TDD doesn't work in some contexts.
- Not that I know of. I just checked his talk page and didn't see any of the usual signs of bad behavior.
The outside view doesn't give us an easy answer here, so Bob's software ideas should be evaluated on their technical merits.
Advocacy
As a researcher, the outside view helps me filter information. As an advocate, the outside view tells me what not to do. Not just because people can sense it, but because these things are corrosive. They make the community unhealthy. So I'm always checking that my formal methods advocacy doesn't fall prey to these things.
Maybe that's why I use the outside view so much. I'm afraid of seeing it in myself.
[1] This is a little subtle. Plenty of good ideas connect to a lot of other things, but they do the groundwork to make those connections meaningful. "Racing thoughts" is more when people spray connections from a firehose without bothering to check if they actually make sense.