how do you tell the difference between good and bad tech?
capitalism strikes again by injecting generative models ("AI") that we didn't consent to or ask for into our lives. I try to think about how this is or is not like a photograph.
by Zoë Hayden
I’ve been thinking a lot about “AI,” obviously, or “artificial intelligence” (which is the popular term for generative language and image models despite the fact that they are neither artificial nor intelligent). It’s become an inescapable topic in many fields, and unfortunately I’m in IT, specifically in the education sector, where debates are ongoing as to whether “AI” can be considered a legitimate teaching tool or teachers’ aid. The consensus seems to be yes, for some reason. I don’t think this is necessarily wrong, but ever since ChatGPT and predictive language models became a hot topic, I’ve felt very strongly that people are putting the cart before the horse. I loved Ted Chiang’s piece in the New Yorker, which used the analogy of lossy compression algorithms to explain why the results from a generative language model, like ChatGPT, can inherently only be poorer imitations of information that already exists. In talking about this with a group of my friends, who also work in tech/education/media, I ranted about the comparison of ChatGPT to a calculator, and about the inherent differences between math and language. Math always has a correct answer, I said. Language and art require choices and discernment:
i don't think any majority of people ever thought that like, doing math on an abacus or by hand for example enriched the human experience overall? it wasn't a universal thing that people did that was part of everyday life. but... language always has been, and so has decision-making based on experience or data. data calculation is one thing but interpretation has always been subjective and part of what makes us human
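(Chiang’s compression analogy is easy to see in miniature, for what it’s worth. Here’s a toy sketch of my own, not anything from his piece: “compressing” some data means keeping only a couple of summary numbers, and “decompressing” it can only ever give you back a plausible-looking approximation of what was already there.)

```python
# Toy sketch of the lossy-compression point (my own illustration, not from
# Chiang's piece): "compress" some data down to a few summary numbers, then
# "decompress" it. The reconstruction looks plausible, but the particulars
# are gone for good; it can only approximate what already existed.

def compress(samples: list[float]) -> tuple[float, float, int]:
    """Keep only the mean, the spread, and the length of the original data."""
    mean = sum(samples) / len(samples)
    spread = max(samples) - min(samples)
    return mean, spread, len(samples)

def decompress(mean: float, spread: float, n: int) -> list[float]:
    """Rebuild something the same shape as the original: a smooth ramp
    centered on the mean. It resembles the data without containing it."""
    low = mean - spread / 2
    return [low + spread * i / (n - 1) for i in range(n)]

original = [3.0, 7.5, 4.2, 9.1, 5.6, 8.3, 2.9, 6.4]
approximation = decompress(*compress(original))
print(original)       # the real thing, with all of its particular choices
print(approximation)  # a blurry, average-shaped guess at it
```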
But again, I work in tech. I’ve always been a bit of a gadget nerd and interested in new pieces of technology. I’ve been an early adopter of some things, like the iPod and the Kindle, but have held out on others. I delayed getting a smartphone until 2012. I have refused a smartwatch and wearable tech. I love posting on Twitter (though it took me a few years to start using it in earnest) but haven’t been able to get the hang of TikTok. I think to myself: Some of this is just aging, right? I’m 32 now and I’m statistically less and less likely to adopt new technology as I get older.
But I keep telling myself that maybe there’s a line in the sand I’m not crossing, and I want that line to make sense. I want to know why I’m resistant to something. If I’m opposing something on principle, or on ethical grounds (which is basically how I’d characterize my opposition to generative models or “AI”), I want to be sure of my argument.
My instinct is that the line has to do with exploitation and misuse, and whether exploitation and misuse is inherent to the implementation of a technology. Amazon is a massively exploitative company that habitually immiserates workers and transformed the book publishing industry in myriad ways thanks to its online commerce platform and eBook technology — but the e-reader itself isn’t inherently a tool of exploitation. The Kindle is a very well-designed piece of hardware that does make books more accessible. During my senior year of college, when I couldn’t afford a book I needed for class, I’d purchase or pirate a copy I could read on my Kindle instead, saving myself hundreds of dollars that I could spend on rent and food. The Kindle itself was a gift from my boyfriend at the time who worked at a secondhand electronics store.
With aging also comes experience. Back in 2006, I was so excited to get a Facebook account. I was 16, and I was familiar with the idea of the personal webpage already. I’d made dozens of them before on Geocities and Tripod; I’d had a Xanga and a LiveJournal and a MySpace page for years. Signing up for this new-to-me Facebook page felt similar at the time. I didn’t think about data gathering and digital surveillance because the Internet had not been publicly used in those ways yet, certainly not on a mass scale. But Facebook was (and is) exploitative by design, and it juiced up targeted online advertising into the colossus we know it as today.
We fed it thousands of our personal photos, allowed it to track us as we clicked on different pages, and gave up huge amounts of personal demographic and potentially intimate data to third parties via integrated apps. The data we shoveled into Facebook’s gaping maw has been used for many purposes. It’s currently selling you leggings on Instagram. It was used in attempts to manipulate elections. It became part of a digital surveillance apparatus that can be used to know when you’re near a Starbucks but also potentially reveal your precise location to law enforcement without them needing a search warrant. It became part of an algorithm that has, rather famously, driven users repeatedly towards right-wing conspiracy theories, hate speech, and violent or otherwise disturbing content — and in turn created a whole industry of human content moderators who are responsible for filtering such content despite the severe mental health consequences. (That same industry of content moderators now also regulates the content that OpenAI models are trained on, which is necessary since computers can’t make value judgments and don’t have any way of knowing what is true or whether Nazis and child porn are bad.)
I deleted my Facebook account in 2018, but I still use other Meta products, like Instagram and WhatsApp, so my line in the sand obviously isn’t perfect. Instagram is something a lot of my friends use, and feels like one of the only ways to passively keep in touch with them (a feature of the social media age that I actually like a lot). This is also a justification I have heard from people, including my wife, about keeping an active Facebook account. But everything owned by Meta is inherently a tool of exploitation. By that measure, my beloved Kindle is too. Nothing is ever going to be perfect, and you can’t un-ring a bell. Things that have become part of our everyday lives already (social media, smartphones, e-readers, and even the things that I personally reject because I simply don’t like them, like an Apple Watch) are hard to remove without meticulous care and intent.
Which underscores to me the importance of using that same meticulous care and intent about something before it becomes inextricably woven into our everyday lives. And that’s where I sense we as a society are failing with “AI”, though perhaps through no fault of our own.
Obviously, big tech companies have come to believe that they can make these choices for us, without getting our consent. And it’s consent that is another part of my line in the sand about “AI” and technology — we didn’t consent to this. I didn’t consent to having Microsoft auto-generate possible responses to messages in my work email; I had to manually turn them off. But surely they’re still using my emails to tune their models because that’s something I’m not allowed to revoke my consent for — I can’t stop using my Microsoft email because I need to use it for my job. Capitalism is so good at manufacturing consent. Everything is a “choice” that you are making in the free market. Except it isn’t, because you are dependent on private companies that use your data for survival and communication. No one asked for “AI” language and image models, but just like Facebook’s takeover of our personal data via the “backdoor” of social media, these models are now being trained on everything we post online — pictures, text, video, audio. The implications are chilling and people are already suffering as a result. The generative models are already being used to “reduce labor costs” in creative and customer service fields. It isn’t hard to extrapolate how health insurance companies like Cigna, for example, could use an “AI” model to increase the efficiency with which they deny claims — something they already automate to withhold important medical treatment from patients without actual human review. For something that is actually currently happening, check out the inimitable Molly White writing about Feedly’s anti-worker use case for its “AI”-powered news services:
Feedly launches strikebreaking as a service
The company claims to have not considered before launch whether their new protest and strike surveillance tool could be misused.
“Technology does not always equal progress,” wrote Douglas Coupland in his 1994 short story collection Life After God. It’s a line that has stuck with me since I first read the book, probably around the same time I was making my first LiveJournal posts and learning about what Facebook was. Through Coupland’s writing, and Douglas Adams’ novels and tech criticism work, and TechTV, I was developing a passion for technology, and learning about the relationships and intimacies between humans and technology. I was also living with my father in a rambling log home, constructed in pieces probably between the 1790s and 1850s, and I was fascinated by its history, by old methods of living and building, by abandoned buildings and stuff I could find at thrift stores. I became an analog photography enthusiast in high school, first with point-and-shoots and Polaroids. I took pictures of those abandoned buildings and thrift stores, and of railroad tracks and friends and family and food. Documenting them in this way was a deliberate anti-technology act for me, and a sort of experimental visual diary.
In college, I signed up for a photography class, and bought a Zenit 412LS on eBay for the occasion — a not particularly beloved Russian-manufactured SLR. The Zenit annoyed my college photography professor because I was shooting it fully manual without giving much thought to what I was doing, and he was a former advertising photographer who was actually trying to teach us the technical ins and outs of making a photographic image. This involves understanding your settings, using a light meter, and paying attention to detail, especially if you don’t have the benefit of a higher-end Nikon, which can do some of that work on your behalf. (Most of the other students in my class were shooting on Nikons.)
But I didn’t want to think about it. The unpredictability of my images was always part of the fun for me, and it’s what has kept film photography in my life when of course I could get technically better and more consistent results from my iPhone or a digital camera. It took me years to learn to shoot on my own, since I wouldn’t listen to my teacher. I’d come into the class with the wrong mindset, the assumption that I knew everything already because I’d been taking pictures for years. Of course I knew nothing; I’d not actually been trying to know anything, and I was doing analog for the sake of doing analog. As I’ve gotten older, I’ve also become more discerning about where I forgo technology; I think it’s made me a “better” photographer, even with the Zenit and my sometimes ham-handed aperture adjustments. It’s made me a better digital and cell phone photographer, too.
More recently, at age 32, I took a vacation with my wife to Sicily. I agonized for months about getting a new 35mm SLR for the occasion and I decided I wanted something that did have some computerized features that would help me with aperture and exposure so that I could shoot more quickly and freely. I settled on the Nikon FA, an “advanced amateur” SLR from the 1980s that is somewhat hard to find in good working condition. My mom’s partner had a 1960s-era Nikon lens which he gifted me for Christmas to go with the body, which arrived from Japan via eBay a few weeks before we left for Italy. I already felt guilty about taking a vacation and spending money; the only reason we could comfortably afford a trip like this and a new camera was because my father had passed away the previous year and we had some life insurance money in the bank. I had always told my dad that we would go on a vacation like this to Italy, and I’d pay for it, if he got his shit together. Instead he died, and I was going to Italy without him.
I also felt guilty about the Nikon itself, about using the “programmed” mode when I’d been shooting manual for so long. The images that come out of the Nikon shouldn’t have the technical mishaps that caused some of my previous photography to be charming or interesting. How much of a gap is there between using my Nikon’s microcomputer to help me with exposures and using a generative image model, really?
Prompting a generative model to create the output you want is, of course, a skill in and of itself — just like framing a photograph is, even if you are just taking it on your phone camera in half a second.
But, unlike DALL-E, humans can make aesthetic and moral choices. Any generative model is trained on choices that humans have made. The “compression” metaphor feels especially stark when you remember that. The generative model hasn’t actually created anything — it’s responded with an approximation of something that already existed. Any illusion of “choice” is just an often bland, or sometimes horrifying, distillation of choices that humans have already made, resulting in deceptive illusions and outright lies. Or, what would be a deception or a lie if the generative model knew the difference between truth and fabulation, or was even capable of debating that difference. There’s a reason even very “photorealistic” images generated by these models don’t hold up to close scrutiny. There’s a reason the tone they “write” responses in has a certain repetitive cadence. There’s a reason why it feels especially insulting to be driven towards an article that was clearly not written by a person when you are trying to find useful information.
https://x.com/ChrisShehanArt/status/1641645930544766977
The “choices” made by a generative model are actually also the choices of its programmers and its content moderators, not to mention the billionaires and executives helming the companies that invest in it. The “choice” on display is, at its root, a choice to train an algorithmic model to do things badly that humans can, generally, do well, if given time and space and resources to do so. By diverting those resources to the model, and away from actual humans capable of making choices, these people are already lining their own pockets at the expense of workers and information consumers.
As insecure as I am about my photography sometimes, I have to say that in making photographs, I make a lot of choices. I eliminated two choices by using my Nikon FA — I didn’t need to set the aperture or shutter speed. That’s essentially using a calculator instead of doing long division. I chose my film stock, framed the shot, focused the lens, evaluated the light, picked the moment to fire the shutter. It’s still the same visual diary I’ve been keeping since I was 14, but I’ve learned a lot along the way about the technical side of photography and how to make my images count. All photographs are poorer representations of reality, just like the whole-cloth-WinZip-reality of a ChatGPT or DALL-E response, but they are informed by human choices, discernments, and creativity. These choices are made in the moment and in real-time, as opposed to condensed into a generative model.
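(If it helps to make the calculator comparison concrete: here’s a rough sketch of the arithmetic a program mode automates, my own simplification rather than Nikon’s actual metering or program curve. The meter reduces the scene to a target exposure value, and the camera just picks an aperture and shutter speed pair that adds up to it. Everything else about the picture is still on me.)

```python
# Rough sketch of the arithmetic a program mode automates. This is my own
# simplification, not Nikon's actual program curve: the meter reduces the
# scene to a target exposure value (EV, at ISO 100), and the camera picks
# an aperture/shutter pair whose combined EV comes closest to it.
import math

APERTURES = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16]          # f-numbers, full stops
SHUTTERS = [1/1000, 1/500, 1/250, 1/125, 1/60, 1/30]   # seconds, full stops

def pair_ev(f_number: float, shutter_s: float) -> float:
    """Exposure value of an aperture/shutter combination: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

def program_mode(target_ev: float) -> tuple[float, float]:
    """Pick the aperture/shutter pair whose EV is closest to the metered value."""
    return min(
        ((f, t) for f in APERTURES for t in SHUTTERS),
        key=lambda pair: abs(pair_ev(*pair) - target_ev),
    )

# A bright, overcast day meters at roughly EV 13 (ISO 100):
f_number, shutter = program_mode(13.0)
print(f"f/{f_number} at 1/{round(1 / shutter)}s")  # prints: f/8 at 1/125s
```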
So maybe my “line in the sand” is really about real choice and discernment, and my desire to use my brain to make those choices over and over again, rather than abdicate responsibility for them. It’s also about the choice I make to participate in something that can have serious material implications for workers. I’ve heard it pitched as a “productivity tool,” something to “get you started,” much like reading a Wikipedia article on a subject — but again, Wikipedia editors are humans, who arguably subject their work to some of the most rigorous, transparent, and thoughtful editorial practices in human history (including discussing the potential utility of “AI” in the context of Wikipedia).
The problem, of course, is that even if we as individuals make these choices, the models will continue to be used under capitalism to do things without choice and without discernment and without evaluation, because people need to get their work done, or risk losing their jobs. Unlike the workplace productivity changes that came with the advent of typewriters, photocopiers, or computers, these models can’t simply be treated as tools, because they do something quite different from expediting your output, solving math problems, or creating facsimiles. They can, quite literally, write something instead of having a person write it, or design a logo without having a person draw it. It will of course be low-quality, unoriginal, and perhaps even disturbing or wrong, but that doesn’t really matter in a lot of contexts that capitalism has created for us. What matters is that it exists, that someone will potentially click on it, that it’s words on a page.
Because we are human, and because we have an inherent desire to see and experience things like beauty and art and original work, I believe that these things will always exist. My concern lies in the fact that these things are becoming increasingly commodified and inaccessible, and that as a result, other inherent human desires — the ones we have for agency, independence, and freedom — will be harder to engage with and achieve.
The progression of the “information age” has been curious this way. For all the technological “progress” we’ve made in disseminating information around the globe instantly, we have faced startling resistance to efforts towards making that information accessible, thoughtful, and true. It’s obvious that this is, if not necessarily by design, by willful omission. Misinformation is not new, and has been common across every communication medium probably since the dawn of humanity. The Donner Party was led astray in the 1840s by a grifter named Lansford Hastings, whose emigrants’ guide directed settlers to a shortcut to California that he himself had never traveled, though of course they had no way of knowing that; the delays his route caused left the party snowbound in the Sierra Nevada mountains. In the 1820s, a Scottish soldier named Gregor MacGregor sought to draw settlers and investors to a Central American territory called Poyais, which did not even exist. Grifts like these rely on the marks not being able to verify information, and often have tragic consequences. And a generative model couldn’t know any better than a member of the Donner Party that Hastings’ claims about his route were a fabrication, if the route was made available to it as a possibility.
The proliferation of data in our current age would seem to be a bulwark against lying, bullshitting, grifting, deception, and oppression. The Internet allows us to look anything up at any time. But in reality, the near-instantaneous nature of our information sharing in 2023 has made these things more prevalent than ever. The lying and grifting always finds space between the lines of documented fact, or gives you reason to doubt what you’ve seen and know to be true. On top of that, “data” is used to defend choices that defy our hearts and better natures. We live in an era of unprecedented “choice”, but many of the choices are bad — every cell phone you can buy is produced with minerals which were probably mined by children; your morning commute pumps toxic gas into your lived environment; healthy food you buy at the store is produced with unsustainable farming practices and by union-busting companies. The instinct of the tech industry to drive us inexorably towards non-choice belies the shame that billionaires perhaps don’t know they can still feel. Nobody would choose this world if given the choice of something better.
If capitalism eliminates our real choices from the equation, with data, generative models, and manufactured consent thereof, then those in charge abdicate the real and urgent responsibilities they have to improve our conditions and our possible futures. To me, that’s what “AI” feels like. That’s why it rings hollow as something that can improve our future — because it doesn’t expose the richness of human experience, or grant us greater access to it. It reduces it, obfuscates it, and poorly curates it. It feeds us bullshit.
So, if you have worried like me that you’re being unreasonable for rejecting generative models as useful tech — I really do not think we are wrong. We know exactly what this tech does. The problem is that it sucks. I won’t know for a few weeks yet whether my Nikon pictures from Italy suck or not, but at least I’ll be making that determination for myself.