Stop calling it "AI"!
That turkey is far more intelligent than ChatGPT and all its ilk. All these so-called “intelligent chatbots” are certainly artificial by definition, and while they may be the products of some level of intelligence in their creators, they are not in and of themselves intelligent.
As Stephen Wolfram says, “It’s just adding one word at a time.” They may even pass a Turing test, and may in some ridiculous cases convince a human that they are conscious and should therefore have rights equal to the fool who was taken in by the random word generator that is, at its core, the chatbot. But there are no qualia behind the words. For those who believe in the existence of souls, there is no soul in the machine.
So what should these things be called? I have a suggestion. Let’s call them ASPs - Artificial Stupidity Programs. The name fits them far better. They are definitely “Artificial”. Since they are not intelligent, they can be considered the other extreme - “Stupid”. And they are definitely “Programs” running on a computer.
They are also dangerous. Far too many people, some of them otherwise quite intelligent, are being taken in by these things. With Microsoft building ChatGPT into its Bing search engine and Google rushing its own chatbot into search, the results from these ASPs are being taken as verifiable truth. They are not, as has been amply shown recently.
The world is already facing a plague of misinformation and outright lies from influential sources. Relying on ASPs will enormously increase that trend. With no possible audit trail for the information presented, how are we to determine if the ASP-generated information on which we base major decisions is, in fact, fact?
There is a more subtle danger as well. ChatGPT is what’s called an LLM - a Large Language Model. The program is trained on enormous libraries of existing text, and patterns in that text are analyzed for consistency and frequency. When ChatGPT generates a response to a query, it predicts, one word at a time, which word is most likely to come next, extrapolating from similar passages in the training source text. A randomizing element - the sampling “temperature” - is inserted, which gives the impression of originality.
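To make that mechanism concrete, here is a minimal sketch in Python. It is emphatically not how ChatGPT is built - a real LLM uses a neural network with billions of parameters, not word-pair counts - but it illustrates the same two ingredients: predicting the next word from frequencies in the training text, and a sampling “temperature” as the randomizing element. The tiny corpus, the temperature value, and the helper name `next_word` are all invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the "enormous libraries of existing text".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word -- a bigram model,
# vastly cruder than a real LLM, but the same "one word at a time" idea.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    """Sample a likely next word from the observed frequencies.

    The temperature is the randomizing element: below 1.0 it favors the
    most frequent continuation; above 1.0 it lets rarer words through.
    """
    candidates = list(follows[prev])
    weights = [count ** (1.0 / temperature) for count in follows[prev].values()]
    return random.choices(candidates, weights=weights)[0]

# Generate text by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(8):
    if not follows[word]:  # no observed continuation; stop
        break
    word = next_word(word, temperature=1.2)
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"
```

Run it twice and you will get two different, equally plausible, equally meaningless sentences. No understanding anywhere - just frequencies and a dice roll.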
While the libraries of source text are currently mostly human-generated, that will not remain so. The one thing that computers and the programs running on them truly excel at is turning something humans find tedious, slow, or minimally productive into a high-volume process. A human may spend weeks, months, or years writing a novel. ChatGPT can do it in seconds. Will the results be of comparable quality? Not yet. But the computer-generated version will be out there, available for the next generation of LLMs to absorb into the training process.
With their enormously higher “productivity”, computers can (and most likely will) flood the Internet with meaningless content. It can be argued that this has already happened. The danger is that this false, misleading, random content will quickly contaminate the overall resource pool for LLM training. Crap will train more crap.
Humans won’t have a chance to compete. We will be drowned in a flood of crap. It is what Cory Doctorow calls “Enshittification”. He’s referring to specific Internet platforms, but it won’t stop there. Our entire world is being enshittified.
The word “asp” has another cultural meaning. It is the anglicization of “aspis”, which once referred to any one of several venomous snake species. William Shakespeare gave us the most common reference, in “Antony and Cleopatra”, as the snake which killed Cleopatra. Our modern ASPs may kill us off as well. Let’s hope it’s “the least terrible way to die; the venom brought sleepiness and heaviness without spasms of pain”.