Loose Wire

December 20, 2023

[loose wire] bubblenomics


Dear Subscriber, hope this finds you well. If you're subscribing to me via Substack (over here), you may have received a missive from me yesterday about journalists. As the topic isn't the usual Loose Wire fare, and because I intend to write more on it (why PR doesn't understand journalists, and vice versa), I've limited the circulation of that piece. If you do want to read it and subscribe to future ponderings, please drop by here. Those of you familiar with my consulting work will recognise the topic; those of you at the pointy end of some of my frustrations as a journalist might go there just so you can yell at me and demand apologies for past misdemeanours. All are welcome.

 
This piece is more in line with my year-long concern about where AI is taking us, based on a column by Cory Doctorow, a science fiction author who has been thinking about tech since the early days of the web. I tend to agree with him on most things, and I think he gets AI just about right here. 
 
As ever, if you don't want any more of this stuff in your inbox, or you've shifted over to Substack, unsubscribe below. All the best for the holidays if you mark them, and a happy 2024. 
 
Best

Jeremy
 
 

Bubblenomics

Cory Doctorow is one of those people I’ve never met, but I think of as one of the Elders of Web 2.0. Someone who was there for the first bubble (the one that popped in 2000/1) and so has seen the fundamental subterranean dynamic of Silicon Valley-inspired innovation.1 I was there as well, though not from such a lofty perch. But even from the other side of the world I could see what he describes in his most recent piece about bubbles. There are bubbles that leave nothing behind, and those that leave an interesting residue that becomes the foundation for the next layer of innovation. And that one, now 23 years old, was one of those.

He describes how the bubble left behind lots of university dropouts, whose education in HTML, Perl and Python was financed by the influx of VC money in the late 1990s. Now all these young folks were jobless, but they had a bedrock of expertise and the helter-skelter experience of startup-land. As Doctorow writes:

People started making technology because it served a need, or because it delighted them, or both. Technologists briefly operated without the goad of VCs’ growth-at-all-costs spurs.

This I could definitely feel from afar. I've bored readers for years about how the stuff that came after the bust was much more interesting, and more solid, than what came before. A lot of it was what could be called infrastructure: open source stuff for behind the scenes (MySQL, Apache, Linux and Python predated the crash, but usage ramped up in the early 2000s), web content management (RSS, blogging platforms, social bookmarking), and file-sharing and media (BitTorrent, podcasting). Social media, essentially what we think of today as the web, was built on these tools.

So what of it? Doctorow argues that AI right now is a bubble, and not the kind that will yield much residue. He says "the massive investor subsidies for AI have produced a sugar high of temporarily satisfied users", but the apparent ecosystem flourishing around the likes of OpenAI should not be mistaken for some thriving hotbed of innovation. Everything relies on large language models, the largest of which are expensive, both to make and to run.

The question, then, is whether people will be willing to pay for this once the hoopla is over. For Doctorow this is the key question, the one that will determine whether the bursting bubble leaves a useful legacy or leaves nothing behind (for him, Enron, or crypto: more on that anon).

As he points out, the thing that got me so worked up almost a year ago now is still the major stumbling block: who would use an LLM to make big decisions when it confabulates and hallucinates? But remarkably, that is still the technology's selling point: to replace, or make more efficient, existing people, machines and processes. Using an LLM to look at an X-ray should make the process more expensive, Doctorow argues, because an LLM cannot (or let's say, should not) be treated as accurate. The radiologist would need to spend time on her own assessment and then more time checking the LLM's diagnosis.

But as Doctorow says, that's not the business model. AI is being presented as a money saver, a chance to shed those useless people and to create content, analysis and processes that are just about good enough. AI's promise is not better quality; it's the promise of profitable mediocrity.

So, Doctorow argues, AI is a bubble, and not a good one. When it pops, nothing will be left that can be repurposed, apart from some of what's going on in the open source and federated learning spaces. If you want a taste of what generative AI might look like without all the expensive servers, check out FreedomGPT, an open source GPT which works pretty well, so long as you're not in a hurry.

I suspect Doctorow is right; I believe we're essentially playing with subsidised toys, and if the true cost of delivering them were reflected in the price, we would not be willing to pony up. It is, after all, a fancy search engine, a fancy but less reliable Wikipedia (another phoenix from the dot-com ashes), or an unreliable way to populate eBay listings.

Doctorow is dismissive of crypto, which, as mentioned above, he ranks as a bubble on the scale of Enron. I have to declare an interest: I have had clients in the space, though none at the moment. I do agree the space is largely driven by greed, and that the DeFi world is largely focused on the wrong things. And each crypto winter so far hasn't really concentrated minds on what might be useful out of all this effort.

One day, though, I think it will provide the bedrock of a better infrastructure for transferring and trading value over the internet, something that still hasn't been fixed. Libertarianism has become so ingrained in the philosophy of crypto that the origins of Bitcoin, which I see as more akin to the early-2000s mood of "why can't we just build something simple to fix this annoying problem, and forget about trying to make money out of it?", have somehow been lost. But yes, I can quite see how people might have lost patience with the space.

In fact, I think something similar might happen with AI. Yes, it's too early to worry too much about "AI safety" as it's generally meant; AI is not about to remove us as impediments to efficiency. But I do think AI, in the wrong hands, can cause serious damage as a tool of information warfare. I'll talk more about that another day. For now there's this: Why are we suddenly talking about an AI catastrophe?, and this: Generative AI: Another way at looking at our new overlord.

I don’t think we should assume that the only route to artificial general intelligence, or AGI (the version of AI that most closely mimics our own intelligence), is through the brute force and black box of LLMs. I think ChatGPT may have filled in some of the terra incognita ahead of us, and it may fall to more agile, logical approaches to start navigating that world.

For now, I think it’s still worth playing with these tools while they’re still available and priced for ordinary joes like us. Only by exploring them and kicking them until they break will we understand what might (and might not) lie ahead.

  1. Inspired does not necessarily mean led, but I’m differentiating it from the innovation that has taken place elsewhere, both before and since, and I’m extremely reluctant to join the throng which feels that Silicon Valley is the only source of tech innovation.

