AI and Tech Ethics
AI doesn't have to make things worse, but it absolutely will with these guys in charge

I want to start this week by offering an apology. Apparently, because of the way I inserted an image, many of you got a borked version of the prior newsletter. My bad.
The copy in the archives was intact, but what hit many of your inboxes was a string of unreadable code. Thanks to those who reached out to me. It's rewarding to know that so many people saw it and cared enough to let me know about the issue.
If you missed that newsletter, it was a banger (in my opinion) and is the starting point for the conversation I want to have with you today.
Before we get to that, one personal update.
After seven weeks of remote learning, I'll be returning to my classroom on Monday and will welcome the majority of my students back to #308. A portion of the students remain abroad, and a smaller number will continue with remote learning due to security concerns, but school marches on. We have seven weeks left in the academic year, and I should be back in Tacoma on or around July 1. This all assumes clear heads prevail and ceasefires hold. Going back is going to be weird, but I've been doing this long enough that I can roll with the punches.
Okay, on to the conversation at hand.
The term artificial intelligence itself was coined in 1955 by John McCarthy at Dartmouth, but the mainstreaming of intelligent computers into our lives had been foreshadowed in fiction even earlier, in Isaac Asimov's Robot and Foundation books.
Broad adoption of artificial intelligence is an inevitability that could bring much good to people's lives. But it won't, not with the current actors in charge or the current set of incentives at play.
Put differently, my quibble with artificial intelligence is not existential. My concerns are about means and methods. I have deep reservations about data privacy, the protection of intellectual property, the gradual dumbing down of the public, and displacement in the workforce. I've articulated each of those previously. In particular, my statement that “the technology is value neutral” drew some ire from readers, but I want to reward longtime subscribers by not telling them the same thing over again.
There is a way to integrate this technology into our lives that respects personal privacy, one that introduces it in a consensual and deliberate manner rather than forcing it into every possible application by default. That kind of approach would treat technology as something people opt into, not something quietly imposed on them.
But we do not live in that world. Instead, we live in one shaped by tech oligarchs whose STEM education came without a meaningful grounding in ethics or moral reasoning. The result is a system optimized for capability and scale, not for consent, restraint, or human consequences.
Put differently, again: sharp knives and shotguns are tools I've greatly appreciated at times in my life, but not when they're placed in the hands of people with no moral compass.
I asked you last week for your takes on AI and how you are currently using LLMs. Among the responses were two worth discussing.
Reader B.D. deploys the tech in a limited manner and notes the importance of quality-checking AI outputs for hallucinations:
I think that it is useful for data mining and correcting spelling on term papers and such (as long as you read over the changes to make sure it wasn’t hallucinating)... It is not useful for original thought. In that kind of application it is more like a very talented parrot. It does not have 5 senses with which to experience the world.
This is roughly how I use AI when I use it.
For instance, “help me take some heat off this email and copy-edit it for missing words and syntax” was a recent prompt I used. But that process involved me going to the LLM, not it swooping in like Clippy in the middle of my email. If the technology is as world altering and life changing as its advocates claim, they shouldn’t need to force it on people.
The language of consent is important with AI. I shouldn't have to find out from The Verge or Mashable that my posts, chats, and Zoom calls are being used to train AI models.
Reader G.W., after also noting the non-consensual nature of much of AI's deployment, pointed out how AI may drive skill atrophy: “I find it’s not a helpful tool for research and writing… it disables human critical thinking skills.” This is a concern I share and one I observe in my practice. With one notable exception, the students I observe to be most dependent on AI have some of the worst long-term retention of information and earn some of the lowest marks on assessments. But they defend their use of the tech because “it’s so fast” or “it’s easier.”
The more they use it, the less they understand.
I want to close by highlighting a growing number of reported incidents in which AI intersected with acts of violence. This week, OpenAI's Sam Altman issued an apology after the company banned a user whose chats it deemed alarming but never alerted law enforcement because those chats did not meet internal thresholds for escalation. The individual later carried out a shooting that killed eight people and wounded twenty-seven more.
This follows similar incidents in Florida, Finland, and Las Vegas.
Again, I would love to hear your thoughts on any of this.
Feel free to opine.