Jacky Alcine

September 2, 2025

Commercially Encoded Perspectives, Communally Decoded Intent

This is the first newsletter entry that I’d actively turn on analytics for. As I’d like to take my writing more seriously this month, having numbers helps. But what helps more is hearing what you think about what I write. All sorts of feedback - positive, negative, or expansive - are welcome.

On the last day of June a decade ago, I posted an image from my Google account showing a friend of mine from high school (as well as members of my family) in Google Photos’ preview, showcasing its new vision features. That began a two-year situation of having to hide her identity and of my moving away from New York City to California. I don’t talk much about the personal impact of that, but it’s definitely a contributor to my angst towards the false benevolence that folks in the tech industry displayed when it went down. I do think that folks only got a sliver of what I said, so here’s the post reproduced as a copy from the Internet Archive:

A screenshot, via the Internet Archive, of the Twitter thread in which I highlighted Google Photos’ explicit categorization of my friend as a gorilla. Which kind, it’s not clear.

The comments online were as expected: folks using the excuse of beta software to justify a sociological (at best) issue, or separating the act of production from the impact of its outputs. I’m used to this (now) from the Internet in its collective inability to perceive anyone who isn’t a white straight (optionally, Christian) man as effectively human. Some more public figures who are people of color came to Google’s defense, saying that they did take care to include “diverse people” in their photo collection efforts. But collecting enough data to satisfy internal QA isn’t necessarily the same as preventing generationally inherited assumptions about how technology works from being repeated.

Image capture especially was invented around, and calibrated for, the capturing of white skin. This changed not because of rational choice theory, but because it needed to satisfy the needs of capital - namely, an industry that used the labor of African children to distribute its wares around the world: chocolate. Lorna Roth chronicled the push to correct this issue, which began in the late 1960s, decades before the Internet as we know it came to be. Earl Kage, Kodak’s former manager of research, said it himself: “It was never black flesh that was addressed as a serious problem at the time”. This trend of research treating other people as something to bolt on later is a theme I found in Brian Christian’s book, The Alignment Problem. Scientists, computer programmers, government officials, and executives tend to lead with “what if” and fail to add meaningful constraints beyond those two words when crafting their visions.

Defend, Deploy and Disseminate: Replacing Pipes with Guessing Tubes

The idyllic nature of the industry of computer and information science, similar to economics, encourages a particular objective: to view the world as a complex variable and to mutate the fuck out of it until it does what we tell it to do. There’s documentary evidence all around us that the (literal) filters we perceive ourselves through in the webcam (and then camera phone) period have had a significant impact on how “mass media transmits sociocultural symbols that are unrealistic and unachievable for most users, especially women”. If the technology designed to capture and mainline whiteness is now failing to do that for a not-small demographic of people through its (primary, now-social) application, how can it reconcile itself - can it?

I focus on this because it (computer vision) falls under the umbrella-and-disgraced term, artificial intelligence. And as much disdain as I have for AI, we need to be critical of the environments in which these technologies produce their works. Once a tool is launched to the public, it becomes very difficult to undo whatever expectations and biases it has baked in. Works like Cyber Racism (2009) and the works of Nakamura like Cybertypes (2002) provide evidence refuting the digital egalitarianism that’s professed about technology. This is vital as we have companies working to embed a form of technology meant to guide, if not completely replace, the decision-making of many parts of our society. This choice to replace direct bureaucracy (or interruptible participation in a system) with machines that use some mix of predetermined statistical decisions from undisclosed corpora of text, “prompting” (suggestive or instructive text given to a machine to guide its intentions), and “context restraint” (again, suggestively constrained by a pattern of retrieval from select data) is characteristic of commercial technocracy enacted via software instead of PACs. Despite the mountains of evidence of the need to restrain the broad application of these services, companies - thanks to now-federally supported efforts and executive actions - have even less reason to concern themselves with the impact of technologies in the generative space.

It’s frustrating because it’s managed to dwarf conversations about how we can use things like optical character recognition or image pattern detection to help with things that folks do every day. Instead of working to fix the things we have today that are extending the institutional harms we see and rail against, we’re advocating for the acceleration of technology for “innovation” because it allows for apolitical progress:

In the late 1960s in the face of the Vietnam War, environmental degradation, the Kennedy and King assassinations, and other social and technological disappointments, it grew more difficult for many to have faith in moral and social progress. To take the place of progress, ‘innovation’, a smaller, and morally neutral, concept arose. Innovation provided a way to celebrate the accomplishments of a high-tech age without expecting too much from them in the way of moral and social improvement.

This is the banner that folks are choosing to stand behind because of the short-term benefits and perks it provides them. For example, I used a self-hosted version of LanguageTool, an (effectively AI) grammar checker that uses similar technology for detection that one would use for generating text. But its corpora collection and compilation are done incredibly differently and for an optimized purpose: making it easier to hold to its objective and limit its scope of perceived authority (it can’t reject my resume, for example, but it can help me correct potential mistakes I find in one). There are folks who take contention with the notion of discussing AI and its critique in depth, including peers who begin with a robust critique of how seemingly alarmist folks can be towards AI, only to devolve into repeating the notion that deployment is inevitable without addressing the systems that make it so. Others have made it more of a moral standpoint to reject the deployment altogether. Where does that put people like myself, who understand and empathize with the impact, labor, and wrath necessary for generative artificial intelligence but are actively conscripted to integrate these services into federal government systems? Is it better to have someone who has contentions with these systems than someone else who’s hungrier for innovation than for actual impact? This dilemma mirrors, to a lesser degree, the conversation around the polluting effect of a bad apple in an orchard. I wrestle with the idea that, by still choosing to develop these tools, I immediately negate my stances - and it’s been close to a year that I’ve been involved in this work.
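
To make that scope concrete: a self-hosted LanguageTool instance is just a small HTTP service you query for suggestions that you’re free to ignore. Here’s a minimal sketch of that interaction in Python, assuming a LanguageTool server is already running locally (for instance, on port 8010 via its Docker image); the endpoint, port, and sample text are illustrative stand-ins rather than my actual setup.

    # Query a locally hosted LanguageTool server over its public HTTP API.
    # Assumes a server is reachable at localhost:8010 (an assumption for this
    # sketch); LanguageTool's /v2/check endpoint accepts form-encoded
    # `text` and `language` fields and returns a JSON list of matches.
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    LT_URL = "http://localhost:8010/v2/check"  # assumed local endpoint


    def check_text(text: str, language: str = "en-US") -> list[dict]:
        """Send text to the /v2/check endpoint and return its suggested matches."""
        payload = urlencode({"text": text, "language": language}).encode("utf-8")
        with urlopen(LT_URL, data=payload) as response:
            result = json.load(response)
        return result.get("matches", [])


    if __name__ == "__main__":
        # Each match carries a message and candidate replacements; nothing is
        # applied automatically - accepting or ignoring a suggestion stays with me.
        for match in check_text("This sentence have a error."):
            print(match["message"], "->", [r["value"] for r in match["replacements"]])

That boundedness is the design choice I care about: the tool answers one narrow question on demand and leaves every decision about the text with the person writing it.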

It’s difficult: I’m not in favor of, nor can I politically align with, the notion of mass-replacing the levers of decision-making venues and labor-management decisions in a society where we’re barely holding on to tech workers having a noncommercial voice in how production works - a society that rewards labor reduction by injecting automation at every turn. The abolitionist perspective in me sees the parallels with the abstraction and “disappearing” of labor when we examine the situation of prison workers being used as “augmented” staff on behalf of the State for private corporations, and the internationalist perspective in me connects that to the underpaid and exploited Kenyan workers that OpenAI used for its model work. It sees the commonality: capital exploiting hyper-disposable Black bodies for the purpose of minimizing its future need for said workers. Dangerously, I also see the cementing of liberal ideas in the tech industry from folks who are unintentionally upholding institutional racism (and sexism) and how it manifests - from Kodak’s Shirley Cards to OpenAI’s washing of data and Amazon’s AI labor automation. Nothing is unconnected in a world where we all depend on each other in the act of production, regardless of how we appear in it.

I also needed a job, and even took a role reduction due to the difficulty of finding work in this landscape. I’ve talked publicly a lot about leaving tech in an engineering capacity to study it from a social or historical perspective, hoping that my participation in it could provide a sharper answer about the role of producers and consumers of systems that produce and consume other systems (“Meta-workers”? This is mildly applicable to lawyers, too) and the kind of landscape we need for the mantra of egalitarian living to be achieved. One thing I can agree on is the following:

If we simply dismiss this technology, people may believe us, and find that a whole new technological paradigm has passed them by, curtailing their power and agency.

If we let this technology become the plaything of the affluent exclusively, we’ll deepen our digital divide in a way we may not be able to recover from.

I posit that the divide has already existed and has now become more prominent - as Ruha Benjamin, who introduces us to the New Jim Code, makes clear in Race After Technology - in a landscape that places so little importance on the encoded differences of its mother environment that it supports their redevelopment (a redevelopment that, when wielded in a capitalist system, will stretch out to all of those not deemed desirable).

I’ll end with her caution to us technologists to be more thoughtful about how, and for whom, we want to make things:

The issue is not simply that innovation and inequity can go hand in hand but that a view of technology as value-free means that we are less likely to question the New Jim Code in the same way we would the unjust laws of a previous era, assuming in the process that our hands are clean.
