GPT is revolutionary
Even a stochastic parrot can sing.
I don't feel comfortable making predictions about the future. There's just too much that goes into it and it's way too easy to be wrong. But if I don't occasionally write my riskiest thoughts, do I really deserve a newsletter?! If this ends up being wrong, I promise I'll do a postmortem. Here goes:
I think GPT-4 will be revolutionary. I think ChatGPT already is revolutionary, and in fact the revolution started in February, when OpenAI released ChatGPT Plus. We haven't seen the full consequences of it yet.
Half of you are thinking that I'm saying the obvious and the other half think I drank the AI koolaid. I think I'm approaching this in a slightly different way than most of what I've read, so please give me a chance to justify why I think this is a big deal.
GPT is different from other AI hype cycles
When I read about how GPT is going to change everything, it's usually one of three things:
- GPT is the precursor to artificial general intelligence. Singularity ahoy!
- GPT is so good at programming it will put programmers out of business.
- GPT is going to create a flood of misinformation, SEO spam, and propaganda.
Really quickly: the first one is laughable. The second one is probably not true; everyone who's shown how easily GPT can write programs is a programmer guiding it with their programming experience, and I don't know how well nonprogrammers would do. I'm more sympathetic to the third one: we've already seen troubling signs, like fiction magazines having to close submissions due to the flood of ChatGPT-generated stories, and I've read tech "stories" that were pretty transparently AI-generated. I'm not looking forward to the full extent of the negative consequences.
At the same time, there's something special here. To see why, consider some of the big AI hype cycles of the past fifteen years or so:[^1]
- IBM's Watson winning Jeopardy, being sold as a medical device, and being subsequently shuttered.
- AlexNet sweeping image recognition contests and bringing in the age of neural networks.
- Apple, Microsoft, Google, and Amazon all releasing voice assistants.
- AlphaGo and AlphaZero.
- All the self-driving car hype.
- Geoff Hinton's claim that human-driven radiology would be obsolete by 2021.
Some of these ended up being really important; some went nowhere. But in all of them, the AI wasn't the product, it was an implementation detail: a company was selling a product and saying "this has an AI powering new feature X."
OpenAI has done something different. They gave us the AI itself as a product. Put in text and it gives you back more text. Pay per token, do whatever you want.
GPT-3 already did this, but it was pitched at businesses and so wasn't as accessible. When ChatGPT came out, I remember a lot of programmers playing with it and arguing about how it would change software engineering. And I'm including myself in that group! But I think we all missed the important thing, which is that the tool was free for the general public. Originally it seemed like that was only going to last through the "research preview", but when they announced ChatGPT Plus they explicitly said they plan to keep offering a free version. They could change their mind at any time, but at least for now, free AI access is here to stay.
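For a sense of what "accessible" meant before the chat interface, here's roughly what using GPT-3 through the API looked like. This is a minimal sketch using the pre-chat openai Python library of that era; the model name, prompt, and key are illustrative, not a recommendation:

```python
import openai  # the pre-1.0 OpenAI Python client

openai.api_key = "sk-..."  # requires signing up for API access and setting up billing

draft = "hey boss, gonna be out tmrw, something came up, sorry"

# Text in, text out, pay per token.
response = openai.Completion.create(
    model="text-davinci-003",  # one of the GPT-3 models offered at the time
    prompt="Rewrite this email so it sounds more professional:\n\n" + draft,
    max_tokens=200,
)

print(response.choices[0].text.strip())
```

Trivial for a programmer; a non-starter for everyone else. The chat box removed that barrier.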
In retrospect, I feel kinda bad for not paying attention to this sooner. Back in November a friend excitedly told me that ChatGPT was sentient, because it gave him good answers to a lot of "tough questions." I tried (and failed) to explain that it was just statistical prediction and wasn't actually conscious. It didn't occur to me that this friend was not a programmer and couldn't call an API to save his life, and yet had no problem accessing and using ChatGPT. That's where the revolution happens.
What's the Big Deal?
Imagine a genie that can make any tool you describe, except it's really shitty. You know, a stamped aluminum cooking pan, or a screwdriver where one side of the bit is slightly wider than the other. Something you wouldn't pay five dollars for. How useful would this genie be?
Well, for one thing, it's convenient. I don't have needlenose pliers in my toolbox because I almost never need them, but there was one time I needed them for one thing. I had to find a friend who did have a pair, because I wasn't going to buy one for something that would never come up again. Even a shitty pair would have done the job. I imagine that's how people would use the genie at first: for one-off things they don't have tools for yet.
But there's another thing we can do: ask for tools that don't yet exist. I want a chain fork for a very specific chocolatiering thing, but I can't buy a chain fork, because in no sane world would there be a manufacturing supply chain for a chain fork. But now I can have one. A shitty one, to be sure, and I'd much prefer a high quality one, but I'd rather have a crappy tool than no tool at all.
ChatGPT has over 100 million active users: 100 million people using it in 100 million different ways. I've read a few threads of people sharing their use cases, paying particular attention to people who don't seem to have a technical background. Some uses:
- Converting video transcripts to essays, or cleaning up voice-to-text transcriptions
- Summarizing invoices for reimbursements
- Converting a paragraph describing meetings into a list-style schedule
- Making an email sound more professional
- Writing a resignation letter
To these people, ChatGPT must seem like a minor miracle. I mean, the video transcript idea seems like one to me. I guess I could implement something like it myself given time to learn all the relevant domains and libraries, but I've been programming for ten years and this person has not.
Repeat these discoveries and scale them up to 100 million people. How could that not change the world?
> I’ve created 4 semi-large (to me) scripts using ChatGPT since it came out, and that’s 4 more than I’ve ever made. I don’t want a language to gatekeep me from projects and automation. — /u/padenormous
What about the problems?
So now it's worth talking about a few of the problems with large language model output.
It does a lot of basic stuff poorly
Most of the discussion is about what AIs can and can't do well: whether they're actually bad at something or you just have the wrong prompt, whether it'll be fixed with GPT-4 or we'll discover big gaps there, too. The genie can only make shitty tools; GPT will not add numbers better than a calculator.
But even if we freeze AI capabilities with ChatGPT, I'd argue this doesn't matter. Think of it like a Pascal's Wager:
| Cost/benefit | AI can do X | AI can't do X |
|---|---|---|
| You try the AI | Amazing time saver! | You wasted two minutes |
| You don't try | You don't have the tool you need | You saved two minutes |
When it's easy to try a lot of ideas, people will try a lot of ideas, even if most of them don't work out.
AIs hallucinate[^2]
This is more dangerous: the AI gives you something that looks like what you want but is subtly wrong, so you need to proofread everything you get. To which I'd respond that checking a solution is still faster than finding the solution, so it's still a net benefit. To which you'd point out that I myself have argued we are really bad at proofreading, and there have been high-profile cases where "proofread" AI articles had major mistakes. Most notoriously, the CNET scandal.
Okay, you got me there. To be fair, though, in the CNET case it seemed like they were using AIs to automate writing entire articles and then passing them off to editors. That's hallucination at scale. At the individual level, there's a much tighter feedback loop between generating output and verifying it. It's one thing to proofread an article someone handed you; it's another to proofread a rewrite you generated yourself, from text you supplied.
There are ways to minimize the consequences of hallucination. I sometimes use ChatGPT to research topics by asking it to give me keywords that I then look up manually. If it confabulates a term, I'll discover it isn't real as soon as I try to look it up. That said, "anti-hallucination" techniques aren't failsafe, and I don't expect most people will use them. I guess I'm just not as worried about hallucination for people conjuring up shitty tools as I am for people using AI-augmented search engines.
It's in the hands of a single, mostly unaccountable corporation
This is true, but it doesn't make the impact on individuals' lives any less significant. I imagine that multiple competing, publicly-accessible, transparent LLMs would have even more impact.
People will do bad things with GPT
Inarguable, and already happening. This follows from the same thing I'm arguing: ChatGPT is accessible to nontechnical people, including malefactors. I don't know how to solve this; I don't think it's even solvable at the technical level, and there's no way to perfectly distinguish between normal and bad uses. For example, if I ask ChatGPT to list "Slurs for Jews", I could be an anti-Semite, or I could be an academic who studies anti-Semitism. It's more likely the former, but the engine can't know that for sure.
There's also the question of how it will affect people who are already employed. I'm pretty confident that programmers are safe, because the genie only makes shitty tools, but as I said earlier, I'm seeing more AI-generated clickfarm junk. So at least one group of people (desperate freelance writers) is losing a source of revenue, and if there's one such group, there will probably be more.
But it would be a mistake to see the (significant) negative consequences and ignore that there are positive ones too, or to dismiss them as negligible. GPT has these negative externalities because it's such a flexible tool, one that can be used for both good and bad. Society can't neglect the bad, and it shouldn't neglect the good.
All in all, I think ChatGPT, as it exists now, is already revolutionary. Not because it's AGI or will replace programmers or anything, but because it can and already does 1) offer an incredible amount of power, 2) for free, 3) to a nontechnical public. A million personal revolutions unfolding in parallel. I have no idea if the future 10 or 20 years from now will be better or worse, but I believe it will be significantly different than it would have been if GPT had remained just an API.
And if I'm wrong, well, you'll get to read a thorough postmortem. That'd probably be interesting to at least some of you. Have a great weekend!
[^1]: I know I'm missing a bunch; this is all just off the top of my head.

[^2]: I never liked the word "hallucinate": it means seeing or hearing something that isn't there, but it doesn't imply you believe what you perceive is real. I like the term "confabulation" better.