Whence did your scepticism come?
Hello 👋
Before I launch in, I briefly wanted to say 'greetings!' to my new subscribers, as there have been a fair few of you in the last couple of weeks. Thank you for joining in.
By way of an intro, I try to write regularly about the influence of technology, often through my own experiences of delivering digital programmes in the public, arts, and third sectors.
I also have a genuine interest in the way tech feeds into culture, politics and society in general. More on that below.
If you know anyone who might be interested in my wittering reflections, please share the love.
I think therefore I introspect
I've tried not to make this post too navel gazey.
To date, much of what I've focused on in my writing ends up looking back: an evolving set of notes-to-self both personal and professional.
I've found this to be not just a cathartic exercise, but also a useful way of distilling information and working out what I can apply to my current thinking.
It's not what you'd call a knowledge bank, more of an aide-mémoire – tidbits I can use to build the case for particular approaches, or as justification for pushing back.
Sometimes I feel I come across quite negatively, or perhaps flippantly, and sound like I'm griping for the sake of it.
When I look back at my Twitter or LinkedIn feeds of yore, the links I've posted, the content I've retweeted, the podcasts I've promoted, the articles I've liked ❤️ – they frequently seem to revel in the glass-half-empty.
But I really try to tread a line between what I consider to be healthy scepticism and barefaced cynicism.
While my internal monologue is a lot more kneejerky, what I end up publishing has almost always passed through a filter and multiple edits (and more often than not my wife's glass-half-full lens).
This post explores the origins of my scepticism; I hope it comes across as something more than a frustrated person howling at the moon.
Behold the zeitgeist
From where I'm standing, it feels like we’re living through a golden age of Internet reminiscence.
In the last few months a number of prominent books have been published that take a long look back at pivotal moments, people, and platforms that have influenced the development of online culture, 35 years on from the creation of the World Wide Web.
I’ve already posted musings about Ben Smith’s Traffic and Kyle Chayka's Filterworld. More recently Kara Swisher’s Burn Book offers a retrospective on all the main characters from the Internet hall of fame.
Naomi Klein's Doppelganger – while so much more than a technology book – makes clear the influential role that platforms have played in the spread of dis/misinformation.
All of these books, to one degree or another, try to make sense of the messy, formative period where the tools which now dominate the digital landscape were forged.
None of it feels like nostalgia for nostalgia's sake. Indeed, while there's a common theme around the hopefulness and energy of the early days of digital technology, there's also a strong emphasis on the inability of those wielding power to do so responsibly, and why we seem so ill-equipped to learn from mistakes of the past.
Swisher's book has attracted criticism for being too cosy, too much of an insider's account, to speak truth to power. Yet her criticism of Mark Zuckerberg is pretty unflinching.
While Elon Musk may be public enemy number one in many people's eyes, she calls out Zuckerberg's naivety, unpreparedness, and lack of capability in dealing with the behemoth he created, and the enormous damage that Meta has done.
I'm sure all this recollection reflects the unsettling times we’re living in. A way of expressing the tipping point where technology stopped feeling like a great enabler, and started to cast a long dark shadow.
This shadow is very much in evidence in Klein's book, which lays bare the dystopia of what she calls the "Mirror World". For a snapshot of this, there's a wide-ranging interview on the Tech Won't Save Us podcast:
How the Mirror World Distorts Our Reality w/ Naomi Klein - Episodes - Tech Won’t Save Us
Given this proliferation of cautionary tales you might think we'd all be taking a little time out to carefully ponder any negative consequences before leaping on the next bandwagon.
After all, if a sizeable collective of observers, authors and prominent fellow sceptics are, directly or indirectly, encouraging a more cautious approach to our relationship with technology, maybe this is the moment to step back from full-on utopianism?
But there are, of course, many who would reject the very notion of stopping to ask questions first.
When the rot sets in
I've previously referenced Marc Andreessen's Techno-Optimist Manifesto as a viewpoint at the extreme end of things.
What I find particularly depressing is that you can hear echoes of Andreessen's perspective in many different forms, all over the Internet.
Plenty of people are content to carry on regardless, as if all innovation is righteous, boundaries must be pushed, growth must be hacked, line must go up.
It strikes me that in the last five years the convergence of two distinct occurrences:
the end of one era of digital media, with the last drops being squeezed out of the web 2.0 glory days, and
a global pandemic redefining so many people's relationships with technology,
seems to have ushered in a period of near constant noise over signal.
If you're working at the digital coal face you get used to an influx of solutions looking for problems, but in recent years it's felt ever more flailing and desperate.
The amount of airtime, and resultant headspace, taken up by initiatives that still haven't really made it past the concept stage has kept on racking up.
😺 The Metaverse!
😸 Cryptocurrencies!
😹 NFTs!
😼 Web3!
🙀 Blockchain!
😿 AI!
They're dangled in front of us, demanding our attention, plugged to the max, yet their use cases remain woefully undefined.
Sometimes they all blur into one. I love this quote in a Harvard Business Review article:
Put very simply, Web3 is an extension of cryptocurrency, using blockchain in new ways to new ends.
New ways to new ends you say? Thanks for explaining.
For clarity, I'm not trying to maintain that nothing on the list above has valuable uses, or could be the basis of great, transformative products. But I am saying that the scale and reach of their application are probably overblown.
When every journalist piled onto Twitter, the skew towards technology news crossing into the mainstream was noticeable, and I don't see that trend abating.
So if wunderkind Sam Altman scratches his arse, or some awful dashed-off AI solution goes off the rails, we all get to hear about it. Stuff that would have been deservedly niche ten or fifteen years ago floats to the top of our feeds.
As I've written before, the problem this causes is everyone's eye gets taken off the ball. Or there are too many balls and not enough eyes. Or we don't know which ball we should be eyeing. Or something about balls and eyes.
Let's start over: we focus on the wrong things.
Y u a sceptic?
The origins of my techno-scepticism are undoubtedly rooted in the bursting of the first dotcom bubble in 2000.
At that point I worked for a web agency with a decent crop of clients and a solid homegrown content management system.
Even with the limited scope of the early WWW, glacial modem speeds, and browser wars, the possibilities seemed enormous.
But just as some of us were starting to get a handle on what those possibilities could yield – design standards, re-usable components, properly scalable solutions – everything spectacularly crashed and burned.
There were multiple reasons for this, but a big contributory factor was the gargantuan sums of money poured into concepts and products that weren't remotely fit for the road.
So while idea generation and marketing campaigns were in full swing, business models were often shaky at best, and the necessary infrastructure was nowhere near ready.
Many early dotcom projects were based on an excitable 'hold the front page: you could sell X product online!' model.
Much enthusiasm was then piled into website design, flashy ads, and PR, with little time for the dull-but-necessary components that actually make things work.
Why bother with secure transactions, customer service, storage, shipping, and returns when there's a hype train to ride?
If you want a full picture of the hubris, preposterous amounts of cash, and general insanity of the late 90s, it's worth digging into the story behind boo.com – a clothing website that was so ahead of its time, it didn't even load.
Or you could watch 2001's Startup.com to witness another catastrophic web fail. The full documentary is on YouTube, but it's worth jumping 45 minutes in to flinch at the company chant:
Those examples of ego and folly weren't isolated cases. And sadly, when the grossly over-inflated market exploded it wasn't discerning – good companies fell by the wayside alongside those founded on ineptitude.
Those were weird, uncertain times. A website called Fucked Company offered a daily chronicle of the slow motion car crash as company after company folded in succession.
The web agency I worked for went from a peak of about thirty staff to a handful of us working from our bedrooms, unable to attract new clients or sustain projects.
The net result of all this was a massive blow to a flourishing industry. Companies survived, but they scraped by. Confidence was knocked, investment dried up, and it would take years to recover.
But at least we learned some lessons, right?
Best not to pump huge amounts of money into the sketchiest of ideas. Best to consider long-term impact before embarking on a superficial plan. Best to focus on solving real world problems.
Right!
Right...?
Less doing, more inquiring
So yeah, I sit on the sceptical step.
I can't help cocking a snook at solutions that pitch some inevitable future state. I can't help digging into the backstory of new + shiny stuff to find out if due diligence has been done.
And I can't help questioning the motivations of those who choose to boost stuff that seems too good to be true.
Take this recent LinkedIn post from the Director of AI at IBM, telling you that your company needs to employ a Chief AI Officer 🤔
It's a classic case of sowing the seeds of doubt: 'forward-thinking' organisations have already embraced this; you'll get left behind if your AI strategy isn't, ummmm, architected; get back to your cave, loser, you will never realize at scale 😭
The people who are really pushing AI right now are those who stand to make money, sell consultancy, and intertwine you forever more with their services.
(On a related note, Jon Stewart – riding high once again on The Daily Show – turned his attention to AI this week. "We'll hold down the fort on toast.")
The effect of this spin is already in evidence. In the last couple of weeks I've seen some dubious examples of charities, museums, and public services throwing their hats into the AI ring.
These have felt like untested, ethically questionable, low-value solutions, geared towards grabbing headlines rather than offering any practical use. Remember kids, it only takes one shit idea to sully your brand.
My career has featured many moments where there's been intense pressure to adopt or adapt to the next big thing.
A lot of people of my era have been dragged through the mill with apps, AR/VR, location-based services, short-lived social networks, and novelty dashboard tools.
Sometimes it's been exciting, sometimes it's been frustrating, sometimes I've been proved completely wrong, sometimes I've been so right it hurts (usually when the money and the enthusiasm are finito).
In most cases though, I've searched for an evidence base to help inform decisions. That means finding an array of articles, seeking advice, digging into analytics, subscribing to trial versions, and consulting forums.
This hasn't solely been about snarkily proving a point. Even when I've delivered projects where I don't necessarily agree with the strategic direction or technology choices, there are invariably points to learn from.
And it's important to remember there are nearly always alternative options – innovation can happen without throwing your weight behind technology that is largely untested and is, y’know, causing actual harms.
In Blood in the Machine, Brian Merchant reframes the Luddites as people who weren't just opposing change for the sake of it, but protesting the inequality that large-scale automation was bringing about. He argues:
"...that Luddism stood not against technology per se but for the rights of workers above the inequitable profitability of machines."
So if you're feeling under pressure to be at the bleeding edge, or can’t quite articulate a good reason for a shiny product to exist, why not embrace your sceptical side?
In this day and age, there are no foolish questions.
🤨 Thank you for reading