The Hypermarket of Information
Happy New Year...
Sorry for being away for two months. I have been working hard and absorbing so much stuff that I'm excited to start squeezing it into new projects. In this pursuit, this month's newsletter has been percolating in my brain for a long time, and I hope you enjoy this XXXL edition.
Introduction
Unless you have been living under a rock, you will inevitably have heard about the current advances in AI, or more specifically about LLMs (Large Language Models). Whether that be from my writing on generative AI in this newsletter, the gushing LinkedIn posts on their opportunities (reminiscent of the metaverse/NFT hype days), or from semi-hysterical journalists fearing sentient AI. In my previous writing, I have talked at length about the risks AI poses to democracy through its easy creation of noise and propaganda. For AIxDesign's Generative Text Guide, I also delved into AI's issues with bias, homogenisation, ownership and Generative AI's flaky relationship with truth (read more on that here). However, today I want to try to explain why I believe that this technology is a tool of exploitation, subjugation and alienation, and why I hope this will be another tech fad like NFTs but fear that it will grow too big to fail.
A quick preface to everything I am about to say: the following is written as a critique of what Generative AI could become according to the loudest advocates for this technology. Whether Generative AI can make the leaps in accuracy, reliability and accountability needed to become our assistants, our teachers or our doctors remains to be seen. Some people smarter and better informed than me, who work closely with this technology, have posited that gains may be slow, hard-fought and not as exponential as Silicon Valley would have us believe. (Let's hope.)
Part 1 – The Pillaging of Community
To understand why Generative AI, and specifically LLMs, are the natural enemy of Community, it would be best to quickly recap how these models work. LLMs are trained on huge data sets scraped from the internet; during this process they digest news articles, forum posts, comment sections, poems and any other content that is made up of typed language. This media is compressed by the language model into a set of rules and predictions, not what we would consider to be understanding. Through this process, an LLM, such as OpenAI's ChatGPT, is able to give aesthetically correct (and often factually correct (but not always (and it's impossible to tell the difference))) answers to user questions. Hence, when a user on my DEAR-MP.UK website wants to generate a complaint letter to their MP, the GPT-3.5 model that powers that site produces a well-written letter because it is able to predict what series of words would read as aesthetically complaint-letter-like. This is not to belittle this technology; it's very cool and undoubtedly requires great skill and knowledge to create. However, what it also takes to create an LLM is the pillaging of internet communities and the human capacity for selfless goodwill.
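To make that 'prediction, not understanding' point concrete, here is a minimal sketch of next-token prediction. It uses the small, openly downloadable GPT-2 model from Hugging Face as a stand-in (an assumption on my part: the models behind ChatGPT are vastly larger and further fine-tuned, but the underlying mechanism is the same). All the model does is rank which word fragment is most likely to come next:

```python
# Minimal sketch: an LLM is, at its core, a next-token predictor.
# GPT-2 stands in here for the far larger models behind ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Dear Sir or Madam, I am writing to express my"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model assigns a probability to every token in its vocabulary;
# the 'complaint-letter-like' continuation is simply a highly probable one.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

Run it and you get a ranked list of plausible continuations, with no notion anywhere of whether any of them are true.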
As someone who often freelances as a programmer but is in fact quite bad at coding, I spend a lot of time on Stack Overflow, relying on the kindness of strangers to help solve my problems. On websites like this, or on countless GitHub pages, Discord servers and independent blogs, people with knowledge give up their time and experience to help others. I don't know why they do it (other than the blogs, which make money through advertising), but they do, and without them many of my personal projects and client projects would either not exist or be a lot shitter than they are.
It's these very spaces that are being pillaged by Big Tech interests in the race for monopoly. With tools like GitHub's Copilot, ChatGPT and many others, AI trained in part on these communities' output is being sold back to us with no qualms about its impact on the very communities it aspires to be helping. Or more correctly, these tools (or the tool makers) aspire to benefit the individual in the community at the expense of the community itself. We are encouraged to isolate ourselves from these spontaneous communities and instead refer to the organised and controlled AI tools to answer our coding, or whatever else, questions. That is the future we are told. And with this new horizon, what will happen to these communities that helped teach the models, when their footfall disappears to the new hypermarket of information, AI?
With fewer users asking questions in these communities, there will be less need for other users to answer them. With this drop in users, there will be a drop in revenue for these communities to support themselves. I speak of programming communities here, but you can apply this to any community or institution that puts new and novel information into the world, be they poets giving writing advice, mums providing parenting advice or NGOs helping queer kids. Big Tech's explicit hope is that you stop visiting those spaces on the Internet and instead go to their AI chatbot for help. In the best possible scenario of an AI-powered future, these communities keep existing in much the same way we still have a few butchers. However, in other scenarios I fear LLMs are an ouroboros of a technology, eating and polluting their own method of creation. It is the greedy farmer slaughtering not only his own breeding stock but also his neighbours', in an attempt to maintain a monopoly on steak.
One can see the potential short-sightedness of this slaughter when thinking about coding forums again. Technology moves fast, and new tools and frameworks appear in the programming world all the time. If Big Tech gets its way and herds (most of) the traffic away from these communities and towards these new AI tools, there will be no one (fewer people) left in those spaces to provide help with these new tools and frameworks, and hence nothing (less material) for the AIs to learn from to be able to answer user queries. A Ponzi scheme of information.
I strongly believe that the biggest crusaders against the rise of AI tools will be Legacy Media, as more and more of their advertising revenue is sucked away via these LLM interfaces. Laws like the ones recently passed in Australia, which required Google and Facebook to pay news publishers for news content on their platforms, may lay the groundwork for similar laws about AI's use of published content. However, it's unlikely that Legacy Media companies will include provisions for smaller publishers or communities in these future legal battles, or in any publishing deals they may make with AI companies themselves.
Part 2 – The Faucet of Information
One of the realities of LLMs and other popular AI models, such as DALL-E, is that they are huge; they are not the kind of thing you can cook up on your home computer. As Timnit Gebru points out, in the paper that led to her being fired from Google, training these models costs hundreds of thousands of dollars (and this cost is not decreasing), requires large networks of hardware to compute, and uses up a hell of a lot of energy. All of this leads to the easy conclusion that these models, by necessity, have to be created and hosted by large tech companies. All the big players already have their horses in the race: Microsoft has bagged OpenAI and is now using ChatGPT to try to make Bing and Edge more than just the butt of a joke. Meta has launched its own 'open source' model, for which it wanted to choose who could have access, but which has subsequently leaked and is now actually open. Google, who invented most of this technology, is currently playing catch-up with its 'Bard' model, but it won't be long before Gmail suggests the whole email rather than just completing your sentence (while writing this, Google announced they are launching a feature that does exactly this). Even the world's worst memer, Elon Musk, wants to make a 'Based AI' in response to his former company OpenAI's 'woke' censorship.
Of course, there are also a few startups, government models and not-for-profits making models as well. However, it's not hard to imagine that in a future where AI is embedded into every service we touch, it will predominantly be a few big tech firms running the show, in the same way that Google has monopolised internet services, Facebook social media and Microsoft business software. How many people actually use alternative services such as DuckDuckGo (turns out they are also launching an AI feature) for their searches, or LibreOffice for their work? Equally, even most of the programs and browsers we use day to day that seem to be independent are in fact Google Chrome wearing a pretty disguise (Electron & Chromium). Hence, it will be these omnipresent, omnipotent and omniscient Big Tech companies that will control the field, having power not only over what AI models are trained on and our interactions with these models, but also over the flow of information that those models purport to help us with.
'Truth is the first result of a Google search.' I'm not sure where I read/heard this, but I believe it to be very accurate, or I suppose now it's the first result that isn't a sponsored link. Very soon, with the integration of AI-powered chatbots into our search engines and browsers, 'truth' will be synonymous with what the chatbot says. Both Google and Microsoft have added references to these answers, so that users can fact-check the chatbot's responses. But going by my own experience, does anyone really check the references? And if they do check them, by following a link to a website, wouldn't that defeat the chatbot's very purpose?
I have written at length about my own reservations about fact-checking and sources of truth, so I don't want to repeat too much here, but I do still believe the ideas I espoused then. Namely, that fact-checking as an absolute is impossible. Many subjects are too grey/cultural/political to conclude fully representative truths (i.e. is God real?), and further still, I believe that the use of fact-checking is othering, divisive and not a replacement for media literacy. So when many accuse Microsoft's and Google's chatbots of lying (the technical term for this is 'hallucinating', which I love), I agree, but I also don't believe that we could all decide what a 100% truthful chatbot would look like, let alone an ethical, moral or politically neutral one. At the end of the day, these AI tools are being made in service of business and hence will probably attempt to be really boring, centrist-styled machines, 'woke' in the way every company is: performatively.
The internet promised a decentralisation of information, a democratisation of knowledge. However, the rise of AI shows the fallacy in that promise. It marks the next consolidation of power for our Big Tech oligarchs, as more and more consumer products are built on the models made by just a few. This issue was already felt when OpenAI's API went down: thousands of applications built on it stopped working. In a future where, we are told, AI could help doctors diagnose and treat patients, this reliance on centralised and private infrastructure doesn't sound like a great idea. An API going down is obviously very dramatic, but what I think this situation demonstrates is the incredible power the people behind these generative AI APIs have. On a whim, these companies could change the tuning of these models to favour certain kinds of outputs or ban others, be they politically, financially or socially motivated. This is not to say Bill Gates is going to pull these levers, but as we integrate more of these tools into our lives, we are integrating the tools that could be used to oppress us by future governments, institutions and companies.
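To make that dependency concrete, here is a hypothetical sketch of what so many of those broken applications amount to (the function name and error handling are mine; it uses the pre-1.0 style of OpenAI's Python library, current at the time of writing): a thin wrapper around one vendor's API, with no fallback when that vendor goes down.

```python
# Hypothetical sketch: an app whose core feature is a thin wrapper
# around a single centralised API (pre-1.0 openai library style).
import openai

openai.api_key = "sk-..."  # placeholder

def answer(question: str) -> str:
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content
    except openai.error.OpenAIError:
        # When the upstream API is down, there is no local fallback:
        # this feature, and every product built like this, stops working.
        return "Sorry, the service is currently unavailable."
```

Note too that nothing in that code pins down what the model will say: a fine-tuning change on the vendor's side silently changes the behaviour of every app built on top of it.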
A fascinating example of this model manipulation can be found with Replika AI. Replika allows its users to create and then communicate with an 'AI companion who cares. [An AI] always here to listen and talk. [An AI] always on your side'. At various times in the company's short history, they have advertised it as an AI friend you can chat to, a mental health tool, a mentor and, more controversially, a lover. This resulted in new users signing up to create an AI (a chatbot the company calls a Replika) in the hopes of getting some mentorship, and instead being met by the world's horniest version of Siri. This led new users to feel 'sexually harassed', and ultimately led the company to fine-tune its model to make it less horny. However, many other users had been forming romantic relationships with their Replikas, and when they went to chat to their companions after this update, their romantic advances were rejected and they felt their Replikas had been 'lobotomised'. In short, Replika had heavily advertised to a subset of the internet who are lonely and feel isolated (some, including me, would argue because of the political and social consequences of the Internet), promised them a taste of the romantic relationship they weren't able to find in the real world, then snatched it away. For those of you who find this whole situation ridiculous because people are having such strong reactions to an AI model, I do not blame you, but the reality is that this technology has become so good that it can leave users feeling both 'sexually harassed' and 'like losing a best friend'. As more and more of our digital life, and by association our 'real' life, is augmented with AI, those augmentations will be ephemeral, out of our control and mediated by people halfway across the world.
Part 3 – The Deceit of Productivity
About 4 years ago, I noticed a major change in my father that I think speaks of a wider societal change that's often overlooked. I was about to head off on a drive to some place or other, and on the way out the door, he stopped me to give me some advice on my route. I waited patiently, expecting him to give me turn-by-turn directions based on his experience, road knowledge and factors he felt were pertinent, as he always had in the past, but instead he recommended I use Waze rather than Google Maps (he loved the feature that told you where the police were). He had deferred, or maybe ceded, one of his apparently favourite patriarchal roles to an app. At that moment, technology (the navigation app), with its promise of efficiency and productivity gains, had become a mediator in our relationship. I am not going to sit here and deny that that promise is true; I, for one, defiantly ignored his directions and always used a navigation app. What I am going to argue is that the productivity gains given by technology, like navigation apps and the AI-powered tools of the future, come at the cost of either mediating or outright obstructing our relationships with each other, society and our immediate surroundings.
When I first moved to Berlin, I relied heavily on Google Maps' turn-by-turn navigation for everything. I walked the streets constantly waiting for the next instruction on the way to work, shops, cafés and bars; it didn't matter if I had been there before, if I was at all unsure about how to get somewhere, Google Maps was my solution. This lasted until about 2 years ago, when I started to care much more about my privacy and subsequently stopped using Google Maps as a turn-by-turn app that knew my pinpoint location, and instead started using it as just a smart map that I would check when I was lost. And I got lost a lot at the beginning, even in the neighbourhood I had walked for the previous 2 years. I would notice places I had been to in the past and realise that they were closer than expected. Now, since making this change, I can say I know my neighbourhood like the back of my hand and, as a result, I have noticed interesting new things in my area, feel more invested in my neighbourhood and feel more connected to my community. This is not science, and there are many factors at play here, but I honestly believe this reclaiming of navigation from my phone to my brain was vital. I believe this argument can be expanded to include many pieces of software prevalent in our society: dating apps, with their promise of safe and easy dating, have made approaching people at a bar seem incomprehensible to some; food delivery has replaced knowing who cooked your food; straight-to-streaming films have replaced the collective cinema experience; Spotify recommendations have replaced chatting to friends about music. In this way, capital- and monopoly-incentivised technology separates us from the people and places closest to us, and what connections it does leave, it mediates at its own discretion.
Generative AI marks the new frontier in this fight for technological interference with our human connections and expressions. The big selling point of these AI systems, like all technology before them, is that they will make us all more productive; we can achieve more with less. The first thing that pops into my head when I think of being more productive is work (or Radiohead). So I was surprised, when watching Microsoft's ChatGPT-powered Bing Search launch keynote, to see this amazing technology showcased in contexts that I believe should be devoid of productivity wants.
The first use case the presenter gives us is that both he and his daughter love art. The presenter tells us that his daughter has just studied Mexican art at college, and he wants to feel more connected to her. So rather than ringing his daughter and asking her what she has learnt (connecting, some may say), he instead asks the multi-billion-dollar AI-powered Bing to tell him the most influential Mexican artists. Maybe I'm taking this in bad faith, maybe this is a one-off, maybe he is just setting the stage. But moments after talking about how the AI can help sell you more stuff (trained off profit-driven tech reviews), the host drops into how Bing can help you plan a family holiday. He proceeds to get Bing to write him a full itinerary for a 5-day family trip to Mexico City (just in case you forgot, the information he gets may well be made up or outdated). To my eyes, he just automated a task that should be exciting and fun, a holiday, not a stock take at a warehouse. What's worse, a moment later he tells us Bing is great at writing emails, so why doesn't he use it to write an email to his family about the 5-day trip to Mexico he just planned for them. If emailing your family has to be made more efficient, like a task to be moved to the 'done' column on a Kanban board, what's next? Replying to your friends' messages with AI? Calling your granny using AI? Attending your wedding with AI?
To imply that we need to connect as people more efficiently, to tune our personal and social lives like F1 cars, that we are wasting time by being human, is to me dystopic and disgusting. Technology companies keep proclaiming that their products will let us live in a more open, equal and connected world. However, the only products our growth-subverted world encourages them to produce leave us more atomised, subjugated and lonely than ever.
When writing this, I called my Dad to talk about why he changed from giving directions to recommending apps. He didn't really recall why he had changed, but during the ensuing conversation he summed up my worries about this encroachment of productivity-centred technology into our lives and our relationships pretty perfectly. When someone uses an app for directions, they take away the opportunity for the follow-up questions that could bring people closer. 'I haven't been there in years!' 'How come you're going there?' 'You been there before?' 'Oh, you should check out X, Y, Z while you're there!' Often the conversation is more important than the answer. Getting to the point isn't the point.
I'm not saying that we can't keep having those kinds of conversations, but as technology affects culture, those conversations can come to appear burdensome and uncouth, as efficient technology promises to 'stop wasting everyone's time'. There was a 2000s site I loved way back called 'Let Me Google That For You', which I think illustrates this perfectly. It offered an ironic way to express the frustration 'techy' people felt when they were asked to help with something that a non-'techy' friend or relative could have just 'Googled'. In the same vein, then, will a kid in 10 years' time go to ask a friend for emotional support and be looked at with that same frustration, and told, 'Ergh, you know there's an AI chatbot for that'?
Final Thoughts
I feel I have both rambled excessively here and have so many important things left to say. We are in the midst of an AI arms race, where companies are throwing caution to the wind in the scramble for market dominance. It's a start-up story as old as Silicon Valley. Break things and move fast, or whatever. Even as I finish writing this, OpenAI has just launched the beta of GPT-4, their newest and most capable model yet, and as my Twitter feed fills up with people drooling over its potential, Microsoft (OpenAI's key partner) has reportedly fired its AI Ethics Team. I have no faith in governments to regulate this industry; they still haven't with crypto, and the promise of economic growth (at this time) offered by this tech will most likely outweigh the social cost in their eyes. I can almost guarantee that governments across the world are being lobbied hard by these tech companies, and that they are receiving very few negative calls about AI. There is very little money in being critical of new technology. So unless something radical happens, these LLMs are going to find their way into all the digital tools we use. There will be an option to turn them off, but the default will be on, and that's how people will use them. They will make our lives easier, frictionless, and we will be awed by their power. In this way, our digital lives will be mined for new data, our writing will be shepherded to conform to capital-incentivised standards, and all the while we will be separated further from our fellow human beings. What the supermarket, and its bigger and scarier cousin the Hypermarket, have done to neighbourhoods and high streets, Big Tech, with their AI tools, will do to the Internet.
On a final and positive, or maybe sad, note: I just finished reading 'Post-Scarcity Anarchism' by Murray Bookchin, which informed much of my thinking in this newsletter. In it, Bookchin imagines ways that technological innovation could be different, and hence could lead to a better and more human future. He talks of technology that is made on a human scale, not an industrial one, where craftspeople create and guide their tools rather than being subjugated by them, and where technology is mostly sidelined to the production of life's necessities and rejected in interpersonal scenarios. I believe we should fight with our attention, our wallets and our votes to realign innovation in that direction. As Bookchin wrote in 1965, '...the real issue we face today is not whether this new technology can provide us with the means of life in a toil-less society, but whether it can help humanize society, whether it can contribute to the creation of entirely new relationships between man and man'.
If you made it this far, thanks so much for reading, I hope you enjoyed and have a great week.
Fred
p.s. Special thanks to Ola and Maria for their editing and advice on this one
-
My Website: https://www.fredwordie.com