[Seth Says] It's easy to predict the future...
...as long as you don't need to get it right.
People are predicting the future all the time, and if you predict enough futures, some of them are bound to be right by sheer happenstance. Of course, some mediums prefer to use more general predictions to avoid easy falsification. But if you want to go large, you can make lots of big predictions with only a small chance of coming true, because if one does come true, people may size you up as someone with prognosticative prowess.
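The arithmetic behind this trick is simple: if each of n independent long-shot predictions has probability p of landing, the chance that at least one lands is 1 − (1 − p)^n, which climbs surprisingly fast. A quick sketch, with hypothetical numbers purely for illustration:

```python
# Probability that at least one of n independent long-shot
# predictions comes true, when each has probability p.
def chance_of_at_least_one_hit(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Hypothetical pundit: 50 bold predictions, each with a 5% chance.
print(round(chance_of_at_least_one_hit(0.05, 50), 2))  # → 0.92
```

So a pundit making fifty 5%-chance calls will very probably get to brag about at least one of them.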
Of course, if all of your wrong predictions are as visible and well-publicized as your correct predictions, you might get less social credit. (In some cases, people will even call you Nostra-dumbass, which, credit where credit's due, is a pretty great portmanteau insult.) Certainly after the 17th end-of-the-world prediction (Y2K! Mayan calendar! God hates shrimp!), it's easy to ignore those people. And it's easy to complain about the weatherman being wrong again. But more often, wrong predictions are simply forgotten, and when something does happen, we go back, find, and publicize the amazingly correct prediction that foresaw the present moment.
BACK TO SQUARE ONE
When I was a young child, a mere 184 years ago, I used to watch a television program about math called Square One. It was my favorite program because I loved math, I had many friends as a kid, and only one of those two things is true. It was educational television, but smart and funny. As a budding videogamer, I appreciated the Mathman clips, a parody of Pac-Man that involved solving math problems.
But one of the best recurring sketches on the program was called Mathnet, a parody of Dragnet in which a pair of detectives solved crimes and mysteries with the awesome power of mathematical reasoning. And one episode (to the best of my recollection) involved a mysterious fortune teller who had sent someone very specific, falsifiable predictions about who would win various sports games. (Sporting events? Pretty sure my mastery of the terminology here marks me as the Steve Buscemi "hello fellow kids" of sports fans.) Anyway, the fortune teller had been correct five times running. So the presumptive prognosticator (what can I say, it's really fun to use alliterative phrases with words like prognosticate) then asked for a large cash payment in exchange for correctly predicting an upcoming big game, so the mark could place a bet and make tons of money.
As you might guess from my use of the term "mark", the whole thing was a scam, and the way it worked was pure simplicity: The fortune teller simply sent mysterious messages to 64 people to start with, only following up with the 32 that had received the correct prediction, then only the 16 that had received two correct predictions, etc. People easily ignored a wrong prediction, so nobody heard about those. But one person saw five correct predictions, and that's what got noticed.
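The funnel is easy to sketch: each round, half the remaining marks get "team A wins" and half get "team B wins," and only the half that received the correct call ever hears back. (Function name is mine, not the episode's; and note that halving 64 five times technically leaves two very impressed marks rather than one, but the principle stands.)

```python
# Sketch of the prediction-scam funnel: each round, only the half
# of the pool that happened to receive the correct call is kept.
def marks_with_perfect_streak(starting_marks: int, rounds: int) -> int:
    marks = starting_marks
    for _ in range(rounds):
        marks //= 2  # the half sent the wrong prediction is dropped
    return marks

for streak in range(6):
    print(streak, marks_with_perfect_streak(64, streak))
# streaks 0..5 leave 64, 32, 16, 8, 4, 2 marks respectively
```

The scammer never predicts anything; the survivors simply are the lucky coin flips.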
This seems like something worth keeping in mind when someone "correctly predicts" that a certain investment will pay off. The question isn't just whether the prediction was correct, but how many incorrect predictions they also made. I think a lot of ostensible experts probably talk up dozens of companies, and then when one hits it big they get to claim it as a feather in their cap for correctly predicting that Facebook would be a hit. But nobody remembers that they also suggested that TheRockOnline and iToilet would be great investments -- except the people who flushed their inheritance down the Dwayne.
FORWARD TO TRIANGLE THREE
I've done some ghostwriting for various Thought Leaders who have big ideas about the future of the world. All of them have made big predictions about the future, and I haven't remembered most of them, because they haven't been especially correct. But one guy (and I can't be more specific, because ghostwriting and thought leadership) has turned out to be astoundingly correct about AI so far.
Now admittedly, this was only four years ago, so it's not like correctly predicting the Internet two decades out. But he foresaw not only the rapid advancement and availability of AI, but also the resultant job displacement, and how it would hit middle-class thought-job types the hardest (not that the middle class in this country has been doing superbly for the past couple of decades to begin with). And I guess we're still just on the beginning edge of that, but it certainly looks likely that if trends continue, he'll turn out to be even more correct.
FWIW, if you're worried about losing your job to AI (and why wouldn't you be?), he suggested that reskilling into growth industries like nursing and AI-wrangling would probably be the safest bet. Me, I'm going to keep writing, because I'm hoping I'm on a high enough floor that it'll take a long while before the flood waters reach me. That presumes I can keep my writing away from AI regurgitators; nobody will get anything like my writing by feeding an AI writing that isn't mine. But we already know that AIs trained on the complete works of comedic legends like Dave Barry or Bill Burr can produce things in their style, which is why I've been avoiding any jobs where my writing would be fed into an AI, a fate I'd like to put off for as long as humanly possible.
But the CEO of Upwork (where I do a good chunk of my freelance writing) just announced that they were laying off 15% of their staff, and in some of my writers' forums, it is starting to sound like the lack of writing is on the wall.
ABRUPT CHANGE OF TOPIC
Well, enough talk about the AI-pocalypse. (Not to be confused with the Al-pocalypse, because it's always a great idea to talk more about Weird Al.) My latest column is about being accidentally mean. I even titled it
because that's the kind of creative title-writing that can never be replaced by AI. Oops, I failed to change the topic. I guess my only play left is
ABRUPT ENDING
Thanks for reading, and back in two weeks with another very human column and accompanying ramble.
Beep Boop,
Sethbot v.9.4