Intelligence and adaptability
Last week on my long run I listened to episodes of two different podcasts that connected in a really interesting way for me. The first was on AI, the second on evolution and adaptability. What connects the two for me is learning. Learning and adaptability are the same thing in my view, and intelligence is closely connected to both.
The first podcast was titled “The AI Hoax” and featured Mazviita Chirimuuta, a philosopher of perception at the University of Edinburgh (here’s a link to the video, but it is also available as a podcast; the show is called Philosophy for our Times). Mazviita argues that human-like AI is and will remain a fantasy. I’m not going to try to reproduce her argument, but she essentially claims that the connection between intelligence and life is stronger than previously believed and hence cannot be matched by any computer-based algorithm lacking life (there is something tautological in this argument now that I’m writing it down, but I cannot really put my finger on it - also not that important). Towards the end of her talk, she makes the point that while technologies like ChatGPT seem to outperform humans in analysing and synthesising vast amounts of data - while articulating themselves quite well - they are very narrow in what they can do and not adaptable at all. She likens it to a car: much better than a horse at transporting humans across long distances, but it requires a specific infrastructure to do so (roads, fuel stations or chargers, etc.) and is not very adaptable in that regard - it cannot adjust to a different terrain the way a horse can, or suddenly run on a different type of fuel. Nor will it naturally evolve to be able to do so on its own.
The second podcast was an episode of one of my favourite shows, Radiolab. It’s titled “Crabs all the way down” (what a fantastic title!) and talks about the evolution of crabs. The show shares the story of a team of researchers who wanted to map the evolutionary tree of crabs but found that there is not one evolutionary line of crabs but five. The crab shape has evolved independently five times on Earth. That is quite incredible and speaks to the adaptability of that particular shape. The different species of crabs live in all kinds of habitats, including on trees (which surprised me). The explosion in crab species happened at a time in the Earth’s history when the climate changed dramatically and sea levels rose sharply. While many other species struggled to adapt and went extinct, the crabs were able to benefit from the new circumstances.
What struck me about the connection between the two stories? It’s the direction of movement in each. In the first, the human-made evolution of technology moves towards more specialisation and optimisation - becoming much better at fewer things. Convergence. In the second, we heard about an episode of natural evolution moving towards the ability to do more things, while maybe not doing each of them in the most optimal way possible (a crab might not be as good a swimmer as a fish, but a fish definitely cannot climb a tree).
[Btw, I’m not saying that there are no cases in natural evolution that show the pattern of optimisation and specialisation - indeed, in stable environments that is the predominant pattern. Yet in these cases, what remains interesting is that all the different highly specialised species interlink in intricate ways to generate a complex ecosystem. Which is not really true for human inventions.]
It was the specific juxtaposition of the two stories that made me wonder what we value as a human society. The ongoing craze about generative AI like ChatGPT seems to indicate that we really like things that can do only a very limited set of things, but in a very impressive way. We even assign human-like qualities to this technology even though it is far from resembling anything human.
I am not sure I have a specific point to make with this post. I could now go down the road of arguing for building resilience rather than providing highly effective and optimised solutions for specific problems in a social change context. But even that is not a general recipe (although I think it’s a good rule of thumb).
I’m curious to read what you think about these different dynamics.
Photo by jean wimmerlin on Unsplash