The Mythology of the Algorithm
Welcome to the 70th edition of this newsletter!
With each email I'm sharing material that has inspired me recently. I'm hoping it will inspire you, too. If you want to support my work, you can sign up for my Patreon. This will get you access to exclusive material every week.
If Patreon is not your thing but you enjoy what I'm doing, feel free to send me a little something via PayPal. I'll use the funds to pay the monthly fee charged by the provider of this mailing list. If there's money left, I'll invest it in the Japanese green tea that fuels much of my creative work.
Turns out, my own count of these newsletters was in conflict with the provider's. I adjusted things, so the number above should now agree with the number listed at the very bottom of this email (70 -- geez, did I already write 70 of these emails?).
If you're on Instagram, you know that once you click on an ad, you'll be inundated with similar ads. Sometimes, this can save you a lot of research time, even if things of course get pretty random. For example, I'm generally interested in apps that use artificial intelligence (AI) to work over photographs. It's not that I actually want to use them, though. I mostly want to see what is being offered; where possible, I test some of the products.
By writing "where possible" I really mean "if it's free". I don't mind paying for software at all. But if I merely test something, I see that more as the equivalent of a review copy: it should be free.
So far, I've tested a number of apps, some of which I kept on my phone and most of which I deleted again. The picture above was partly produced by an app called Prequel. The advertising promised that the app would create what looks like a classical Roman statue from a photograph. Of course I had to try that. As it turned out, the result needed a little bit of manual Photoshop work after the fact, and I arrived at this picture.
Maybe you'll think "that looks pretty good". I tried a couple of variants, though, and the second one came out a lot more disturbing, looking more like a fascist take on a classical Roman statue:
Yikes!
Please note that the image resolution isn't super great. It might (or might not) be better with the paid version of the software.
This is all fun, except, of course, it's not. What I'm really after isn't so much producing a bunch of selfies -- even though that's what I do. I'm really interested in what these types of software do -- in other words, what possible variants of myself they have on offer. After all, these are commercial products. In a Barthesian sense (see Roland Barthes' book Mythologies), I am interested in the underlying ideology that is on sale, the ideology that you can literally buy into.
Lately, a different app, Lensa, has caused quite the stir. I don't think I've tried it. Or maybe it was one of those apps where you need to sign up for a short trial period before it kicks in with an expensive subscription (something I never do).
A few days ago, I came across an article entitled The inherent misogyny of AI portraits – Amelia Earhart rendered naked on a bed. If this sounds pretty bad, it is. "To test the software," Alaina Demopoulos writes, "I also submitted 10 photos of myself to the app, all fully clothed, and received two AI-generated nudes."
That article contains a link to a different one. As a brief aside, this appears to have become very common these days: many newspapers will publish articles that are based on someone else's usually much more detailed research. In the German press, this gets particularly bad. There, you typically find articles that are based on something that appeared in English somewhere a few days earlier. It takes a while to track things back to the actual in-depth articles.
In this particular case, Olivia Snow wrote an article for Wired. It's entitled ‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos. There also is a Wired piece by Reece Rogers entitled What You Should Know Before Using the Lensa AI App. As far as I can tell, Rogers' advice applies to pretty much all of these types of apps.
"I’ve already been lectured about the dangers of how using the app implicates us in teaching the AI," Snow writes, "stealing from artists, and engaging in predatory data-sharing practices. Each concern is legitimate, but less discussed are the more sinister violations inherent in the app, namely the algorithmic tendency to sexualize subjects to a degree that is not only uncomfortable but also potentially dangerous." You want to read the full piece, but be warned: there are some pretty disturbing details included.
The outcome of all of this is sadly predictable: there are generous amounts of sexism, misogyny, and racism caked into these apps. In many ways, that's not surprising -- after all, how should apps somehow be able to present a world other than our own? But of course, it's completely infuriating in more ways than one. To begin with, there's the sexism, misogyny, and racism that we need to get rid of -- instead of downloading it onto our phones.
But there is another aspect that has bugged me a lot over the course of the past few years. With the rise of all of this AI stuff (remember that incredibly stupid biography that a language-based AI machine wrote for me?), we are somehow led to believe that 100% of what we're witnessing is the outcome of AI -- or algorithms, for that matter. In a narrow sense, that is correct.
However, algorithms do not write themselves (let's simply include AI in there). Instead, they are written, tested, and fine-tuned by people. Consequently, even when we talk about algorithms -- "the Instagram algorithm removed one of my posts" -- that is a misleading way to think about them. The algorithm only does what someone wants it to do. In a nutshell, all of those tech companies have deftly outsourced responsibility for what they do: instead of blaming them -- meaning their management and employees -- for everything that's wrong with the algorithms, people instead blame the algorithms themselves.
If you want, you can think of "the algorithm" as the equivalent of the passive voice. You might be aware of the fact that if you phrase something using the passive voice, you can avoid mentioning the person responsible: my partner looked into the fridge and found that the plums had been eaten. Who would know who ate them? Obviously, when we write that an algorithm does something, we're using the active voice. But an algorithm is not a sentient being. It does what someone wants it to do. Talking about the algorithm thus serves to omit the people who are responsible.
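To make this concrete, here's a deliberately simplified, entirely hypothetical sketch (in Python) of what a tiny piece of such an "algorithm" might look like. Every name and every number in it is made up by me, but note how each rule and threshold is something a person chose and typed in:

    # A hypothetical, vastly simplified moderation rule.
    # Nothing here decides anything on its own: every category
    # and every number below was chosen and typed in by a person.

    FLAGGED_TOPICS = {"nudity", "violence"}  # someone picked these labels
    REMOVAL_THRESHOLD = 0.8                  # someone picked this number

    def should_remove_post(predicted_topics):
        # predicted_topics maps a topic label to a model's confidence
        # score between 0 and 1. The "decision" is nothing more than
        # the human-written rules above being followed to the letter.
        return any(
            predicted_topics.get(topic, 0.0) >= REMOVAL_THRESHOLD
            for topic in FLAGGED_TOPICS
        )

    # "The algorithm removed my post" really means: people chose these
    # topics and this threshold, and the code followed their orders.
    print(should_remove_post({"nudity": 0.9}))  # True
    print(should_remove_post({"cats": 0.99}))   # False

When something goes wrong with a system like this, the responsibility sits with the people who wrote and approved lines like these, not with the lines themselves.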
When I worked on my doctorate in computational physics, I would have never gotten away with blaming the various algorithms I coded for things that didn't work. The university would have never given me my doctorate for that. But here we are, about 25 years later, talking about algorithms and how they're the problem.
Algorithms might be the problem, but they're not the source of the problem. And we need to talk more about the sources of the problems we're facing. Who is responsible for sexism, misogyny, and racism being perpetuated in apps like Lensa?
By buying into the algorithm diversion, we allow tech companies to get away with what they're doing. We can't do that any longer. Too much is at stake.
I didn't want to end on too heavy a note, so I had Prequel work over a selfie with Tobey from earlier this morning. Maybe it will make you laugh.
As always, thank you for reading!
-- Jörg