It's time to get serious
A guest post essay by Deryn Walker
After much reflection, we at Lowered Techspectations decided that this newsletter has gotten far too silly, and it’s time to rein it in. With this post, we’re increasing our word count, leading some thoughts, and telling it like it is the best way we know how: by asking someone else to do it.
We’re proud to present today’s guest post by friend of the newsletter and fellow techworker Deryn Walker.
AI and its Human Impact
When I was a kid, someone told me about a movie with a man who’d been given a magic ball of string. Every time he wanted to jump forward he could clip the string. Unpleasant job interview? Cut. Painful dentist appointment? Cut. As time went on, he cut more and more—the crying baby, the weeks before vacation, the annoyances and boring details and painful tasks. And suddenly, there was no string left. He hadn’t lived the life he’d been given, and there was no life left.
I don’t think this story actually exists as I remember it (it was most likely a mashup of an Adam Sandler movie and an old folk story), but the concept stayed with me. Skipping the difficult stuff is skipping life. We only find meaning by walking through it.
Until AI arrived! A magic ball of string that lets us skip the unpleasant parts, like “thinking” or “trying,” while giving us “time back.”
AI is distinctly American, from a culture that prizes capital—the quicker earned, the better. We use money metaphors for time (save it, waste it, spend it). We prioritize speed, ease, comfort and personalization. Our tech industry rakes in the money by selling “faster” and “easier” products to its users.
AI promises these things in the extreme. It’s touted as a tool to reduce creative and generative effort. No longer do we have to think of what to make, figure out the structure, begin to create it, edit the structure, create some more—all part of the process of writing, coding, and illustrating. AI reduces this cycle to “decide what you want and ask for it” (with an optional follow-up: “tweak the request until it’s the way you like”).
But the impact of embracing reduced effort and increased speed is becoming increasingly evident. The more we outsource, the more we forget. Tech structures already disincentivize exposure to new things and the work of learning them: algorithms promote echo chambers and “customers like you also bought.” Serendipity is dying.
‘Reduced effort and increased speed’ also disincentivizes connection in favor of personal comfort. Interpersonal relationships come with friction—differences, disagreements, inconvenience. Why experience the inconvenience of friends and their schedules, emotional output, and conversation if I can watch funny 30-second videos, send a few memes to people who already agree with me, and go to bed early?
It’s true: it’s often difficult to relate to people who aren’t you. One draw of AI is how it eliminates relational friction (for example, as a “thought partner” or advice tool). This sounds great, because it’s easier to create a me-centered reality where answers are quick, my imaginary relationship is perfectly spicy, my ideas are validated, and everything I want is immediately done. But in reality, a world of mirrors reflecting back one flattened version of its user—who is never required to stretch or change—is a shallow, lonely world.
(And potentially an expensive one: friends don’t charge, but subscription services will. Lonely people are a captive audience. Without the interpersonal skills or practice, will AI users be able to opt out?)
AI exchanges answers for personal autonomy and trust. It tells us everything’s easier, for free! But like a parasite, it gorges itself on human thought and thought patterns, working toward a reality where it replaces our ability to do the thinking at all. It takes away critical thinking (as trust increases, fact-checking drops), our potential to create (otherwise we’d do the work ourselves, acquiring the skills in the process), and our resilience: we won’t know how to navigate friction if we never practice doing so. And friction is inevitable.
You will get sick. You will argue with friends. You will die. You cannot Claude your way out of that, or what you believe it means, or what you believe will happen to you afterwards. You'll have to face some things unmediated.
By removing the need to think, we will not practice thinking, and when we need to think for ourselves—what do I value? what do I want? what do I believe?—we will not know how.
Behavior both reflects and shapes our values. I believe you try to do good; you will also increasingly consider ‘good’ whatever you consistently do. Take an honest look at the values you hold, and the values you’re actually expressing.
Using and perpetuating AI both shapes us and reveals us. Using it already implies the time spent gazing into its magic orb is “saved,” to be better spent elsewhere. I invite you to reflect: where is your time best spent? What do you choose to do yourself?
Do you value relationships, but you’re limiting true personal interaction?
Do you value learning, but you’re taking a shortcut to the answer?
Do you have an ethical framework, but you’re willing to cut corners because of ease?
Human experience is embodied learning and is the lens through which we view the world. Experience shapes the narratives we tell about reality, and what we believe should be reality. Removing experience—jumping to answers via a tool that imitates humanity without embodying it—will fundamentally change what we believe about living, about humanness, and ultimately, who we are.
What to do about it
Examine the values you implicitly uphold and practice.
Do you believe faster is always better?
Do you believe easier is always better?
When is it not better?
When and how can you opt out?
Start with your own humanity. Don’t assume that the story AI sells is the truth.
Identify your values. Are they evident in the ways you spend your time?
Identify the values built into the tool you’re using. What narrative is it trying to sell you about your life, your needs, your capabilities? Do you believe that narrative? Are you willing to perpetuate it?
Take a tech Hippocratic oath. Each time you consider using AI, note the risks and the impact your use might have—personally, locally, nationally, globally. How can you do no harm?
Change your metaphors. The metaphors we use shape how we see the world, and a tech view will be tech-centered. What other perspectives might you gain if you used nature metaphors? Relational metaphors?
Take responsibility for your individual involvement: articulate your priorities and shift your use (via dollars). We can’t rely on a tech industry that got us to this place to proactively evaluate its harms and design these models differently. That would require a priority shift, and priorities will only shift with incentives. In the US, that means consumer power. That means you.
AI offers so much for “free,” when really, it's an exchange of personal autonomy for the answers it gives. Be intentional about that exchange.
Thank you so much for reading, and to Deryn for this sprint’s contribution. Check out more of Deryn’s work via her Substack. And… April Fools - we’ll be back next week with more comics! 😜