190 - What's the deal with AI?
[Seinfeld at the comedy club] "I mean, isn't all intelligence artificial? What does natural intelligence even mean? Just the other day...I left the house without my keys - boom, I'm locked out! Now that's not very intelligent, is it?" [raucous, canned laughter; fade to black]
Hey there!
Look, I just wanna talk about AI.
I keep mentioning it here and there - links and allusions to cool AI things, and hints of what I think about it - but I figured it would be good to get one thread I've been thinking about down properly, specifically:
The perceived danger of AI is proportional to the mental model being used to conceptualise it
or more succinctly:
Whatever you think AI is, it is
Let me explain.
1. The AI Hype
There's a lot of stuff going around at the moment about AI since ChatGPT, DALL-E and other models came out. It's been a significant leap to be able to talk to a computer and have it respond in a really conversational way, sound relatively confident about its answers, and feel like you're talking to another person. Some of the stuff from MidJourney and the other image generation models is also incredible, like:
outpainting memes, like the distracted boyfriend meme:
But as with anything AI, there are so many ways they get tricked, for example:
tricking ChatGPT with maths...
tricking it playing paper scissors rock...
It's just like taking candy from a...well, b-AI-by (hurhur) I guess.
The thing is, I've seen it create some wonderful stuff, and I'm sure you have as well. It's become routine for me to use it to accelerate ideation and to do a first draft of things - whether that's emails, or comms, or generating ideas for a newsletter. It does all of this really quickly - much quicker than you would expect - and that drives excitement about what's next.
But there are also very respected scientists and researchers who don't like what's been built and want everyone to slow down. There are many different views on when the ultimate goal of Artificial General Intelligence will be realised, and though the range has narrowed over time, forecasters still disagree by a pretty large margin.
The most interesting thing to me is that there are some people who think it's going to be here in 5 years, and some who think it's going to be 50, and some who think it'll never happen!
I've tried my best to understand why this is the case, and I think one aspect of it is how we conceptualise AI.
2. Anthropomorphisation
In a lot of movies, literature, whatever - there's always an AI that is essentially like another human being, but quicker and more logical and rational. Think about HAL 9000 from 2001: A Space Odyssey (or AUTO from WALL-E if that's easier) - "I'm sorry, Dave. I'm afraid I can't do that." shivers. There's always a temptation in movies to anthropomorphise AI (i.e. make them more human-like) so that they're more relatable for us to understand...
...but once we do that, we can ascribe emotion to what AI can do, and subsequently we assign it more "consciousness" than we might otherwise - in some cases, with malice (HAL again), with hope (literally WALL-E who shows so much emotion) or even sadness (Marvin the Paranoid Android anyone?). These things make AI seem more 'human', and for the longest time, our computers weren't able to copy that syntax / expression. You'd get stuff like 'ERROR 404 DOES NOT COMPUTE' or 'REQUEST NOT FOUND' - things that we grew to expect from computers.
In more recent times, we had AI assistants who could kind of speak back at us...but they weren't able to hold great conversations, and their responses would have to fall back on 'oh sorry, I couldn't find that, but here are some results from the internet if that works?' I mean, nowadays I use those assistants to just like...report on the weather when I'm getting ready, or help play music across my house.
like seriously...are you guys really AI...?
Now, with ChatGPT and similar LLMs, that concept has started to change. We can hold some long, thoughtful conversations with AI that sound like coherent human speech, and so we start to go 'oh no it's actually real now it has emotions and consciousness!'.
And that's what makes it seem dangerous.
3. Vince Gets To The Point
To get rapidly to the point, I think the main question that makes people feel AI is dangerous at the moment is:
Can computers think? Do they have minds? They're so advanced - will they be taking over the world soon?!
And I think the reason we ask this question is that we believe an AI that can speak and sound like a human (including reasoning out concepts, responding in logical ways, being 'creative' and providing unexpected answers) actually has a mind of its own, just like a human. And a mind entails a bunch of other scary Frankenstein-like thoughts...we've made a monster!
But in examining the edge cases of what it can and can't do, I think there are other ways that we can conceptualise what AI is like:
- A really quick 10-year-old child, or fresh grad: If we think of AI as a child, or a relatively smart graduate that can parrot back what they've learnt from reading lots of books, then we would instead think 'oh okay, so it's just regurgitating information that it's learnt from the books it's been given'. It probably isn't very good at maths (sometimes true) and sounds overly confident when it actually doesn't know shit (definitely true). We wouldn't trust the AI to actually know things, and we would be checking over everything it did or tried to assert.
- An advanced computer program: which, okay, obvious, but this essentially takes out the idea that it has 'intelligence' in any shape or form. It's a really good simulation of what a person might sound like, and it can find information quickly, but it just doesn't have the...agency or consciousness in the same way that you or I might. It's responding to inputs and providing an output - if you never engage with ChatGPT, how is it ever going to produce an output of its own? (There's a little code sketch after this list that makes this point concrete.) It cannot set its own objectives, it cannot continuously learn, and it can't make a viral meme (yet...I mean I probably also can't make a viral meme, but I have potential).
- A toy: I mean, for now, it's a toy. It's helping with business, sure, and saving time for people, but it's not really meant for anything serious. The accuracy of its conclusions is fantastically bad, and it will confidently assert things it has absolutely no basis for. How can you trust it to do anything right?! It can be somewhat creative, but not really - nothing groundbreaking or completely new and different. It's just a well-designed toy that we get to play around with.
- A gun: Any tool in the hands of the wrong person can be used for harm. AI is no different - if it can be harnessed for the wrong things, then it would be dangerous to leave out in the wild. That's what Sam Altman's testimony to Congress was about - making sure that the risks of AI are curtailed and minimised. On this, I feel like a gun nut when I say 'well, if only some countries slow down AI, there's absolutely no stopping the outlaws from continuing their development - they'll just win instead!'...yada yada yada, good guns vs bad guns. In this case I think winning at AI is going to matter so much more for the future than anything else that slowing down early like this seems pointless unless it's a global agreement.
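Quick aside on the 'advanced computer program' frame - here's a minimal sketch of what I mean, assuming the OpenAI Python client and an API key set in your environment (the model name and the prompt are just placeholders I picked). The whole point is that the model produces nothing until someone sends it an input; every output is a single response to a prompt.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Nothing happens until we send a prompt - the model has no loop of its own,
# no goals, and no memory beyond what we pass in `messages`.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Give me three newsletter topic ideas about AI."}],
)

# One input in, one output back. If we never call this, it never "does" anything.
print(response.choices[0].message.content)
```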
There's a lot to unpack here, and definitely more frames and metaphors we could use, but I think you get the point - however dangerous you think AI is, you're probably right. But you're probably wrong, too.
4. Whatever you think AI is, it is
That's it really, for now. I wanted to make sure I wrote down what I thought about this whole AI thing so that I could clarify my thinking on it...if I have more thoughts I'll just keep writing on this :D
Personally, in most instances I still think of it as an advanced computer program, and definitely don't ascribe any traces of humanity to it. It's still just regurgitating stuff that it's learned from humans, and it can't get to the level of creativity in concepts that we might. It doesn't do anything unless someone tells it to do something - it can't come up with its own objectives and sort them out. Humans need to be there to trigger it first. Sometimes I think it's good to think of it as a bit of an exoskeleton - helping the mind go quicker, generate things a bit faster, but otherwise not an independent agent of change.
Happy to be proved wrong though; we'll see ^^
It's a tad longer than usual, but I like to make sure these are long, but not too long, but not too short either. To be honest...
If I had more time, I would have written a shorter letter... - Blaise Pascal
Chat soon :)
Let me know if you have any feedback for the newsletter!
Real Life Recommendations
- Spider-Man: Across the Spider-Verse - honestly, just go watch it. It's so freakin' good. EXTREMELY highly recommended. So many shots from this movie that I want as art, and a great story to boot AGH it's so good, just go watch it. I've been listening to the soundtrack as well, it's so GOOD.
- Death on the Nile - by Agatha Christie...yes, I'm a bit late to the party on this one, but I really enjoyed it. The Poirot stories are different to Sherlock - much more 'oh, I think I've got it' from Poirot, rather than 'okay, let's go do some investigating' and having it slowly unfold through clues. The style is a bit annoying BUT I love how the plot and the characters come together - I love these sorts of stories that feel like 'jokes': all this mystery and set-up that gets a very satisfying payoff at the end. Recommended!
Adventures on the Information Super-Highway
- Why do railway tracks have crushed stones alongside them? - ever thought about this? I never did - before you click the link, have a go, and then reveal the answer here :)
- Own-goal football - a story about incentives encouraging the wrong sorts of behaviours...always great to see these in the wild.
- Crooks' Mistaken Bet on Encrypted Phones - justice, perhaps? Fantastic breakdown of how encrypted messaging apps and trying to stay hidden can end in...well, not-so-great endings.