
Signals & Subtractions

September 16, 2025

Wishing ≠ Collaborating

Ditching double standards with AI: stop treating it like an infallible computer and start fostering collaborative interactions.

One signal 🔭
One prompt 🧠
One subtraction opportunity ➖

Created by Sam Rogers · Powered by Snap Synapse
Now also freely available on Substack 🎉


🔭 Signal: How we’re talking to AI

Yeah, so as you may have noticed, it’s getting kinda awkward.

Some people talk to AI like a vending machine: push the right buttons, get the right snacks.

[GIF: Homer Simpson presses buttons on a vending machine, hoping for a candy bar. Caption: “D’oh! It’s Homer button-roulette.”]

Others treat it like an oracle: vague, reverent, and weirdly apologetic.

[GIF: Scotty from Star Trek picks up a computer mouse and says “hello, computer” into it. Caption: “Captain, she cannot take much more of this!”]

But here’s the thing:
We already have the skills to work with AI. We’ve just been trained to ignore them.

The real challenge isn’t that AI is non-deterministic.
The challenge is that we expect computers to be perfect, while we’ve always given our fellow humans some wiggle room.

The double standard is breaking things. Fast.


🧠 Strategic (Human) Prompt: Craft a Human Prompt

How would we change the request if this were a person?

  • Not a robot.

  • Not a search engine.

  • Not an all-knowing deity.

Instead:

  • A new colleague.

  • A not-yet-trusted partner.

  • A slightly-too-literal teammate... who, it turns out, never sleeps.

They're a bit eccentric, sure: they hand out compliments nobody asked for, and they're often overconfident or overly enthusiastic. But for now you aren't inviting them into your home; you're just inviting them into your work.

Frame it this way with your teams and you'll raise your odds of getting better results, or at least healthier expectations.
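To make the contrast concrete, here's what the two framings can look like in an actual API call. This is a minimal sketch assuming Anthropic's Python SDK (pip install anthropic); the model name and prompt wording are placeholders, not a recommendation.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Vending-machine framing: push buttons, give no context, leave no room to push back.
    vending_machine = "Summarize this report. No hallucinations. Bullets. 150 words."

    # Colleague framing: state the intent and audience, and give permission to flag uncertainty.
    # (In practice you'd also include the report text itself in the message.)
    colleague = (
        "I'm briefing executives who haven't read this report. "
        "Summarize it in roughly 150 words of bullet points, and if "
        "anything is ambiguous or missing, say so instead of guessing."
    )

    for label, prompt in [("vending machine", vending_machine), ("colleague", colleague)]:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder: use a model you have access to
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---\n{response.content[0].text}\n")

Same task, same model. The second framing just gives your "colleague" enough context to do the job, and enough room to tell you when it can't.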


➖ Subtraction: Ditch the Double Standard

We’ve been navigating multiple non-deterministic systems for millennia.
They’re called other people.

So why are we scrambling to invent entirely new standards and behaviors with AI?

We already know what to do:

  • Clarify our intent

  • Stay flexible and allow for interpretation

  • Course-correct if misunderstood

And we don't take leave of our senses by placing blind trust in any single person, or any single tool.
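Those three moves translate directly into a chat session. A minimal sketch of the course-correction loop, again assuming Anthropic's Python SDK and a placeholder model name:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-sonnet-4-20250514"  # placeholder: use a model you have access to

    # Clarify intent up front.
    history = [{"role": "user",
                "content": "Draft a two-sentence status update on the data migration."}]
    first = client.messages.create(model=MODEL, max_tokens=300, messages=history)

    # Misunderstood? Don't start over. Keep the thread, clarify, and redirect.
    history.append({"role": "assistant", "content": first.content[0].text})
    history.append({"role": "user",
                    "content": "Close, but too formal. Keep the facts, lose the jargon."})
    second = client.messages.create(model=MODEL, max_tokens=300, messages=history)
    print(second.content[0].text)

Exactly how you'd handle a new colleague: same thread, clearer ask, no hard feelings.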


🧞‍♂️ Analogy of the Week: Arguing With a Genie (While Still Holding the Lamp)

[GIF: Aladdin holds the lamp and yells instructions at an annoyed genie. Caption: “One thousand and one prompts.”]

User: “Okay! Summarize this PDF. No hallucinations. Use bullets. 150 words.”
Genie: nods, obeys, delivers
User: “Ugh. This isn’t what I meant. This isn’t right. And not 151 or 149, exactly 150 words! Can’t you count?”

Here’s the deal:
The AI is standing by and granting wishes.

But if we treat it like a wish engine, it’ll keep granting literal, low-risk results.
And if we expect disappointment, we’ll almost always get it.

Want better?
Get clearer. Get conversational.
We’re not just issuing instructions.

We’re shaping intent.
And we’re the ones still holding on to the lamp.


🎥 Extra Bonus: Claude Privacy Settings in 10 Seconds

Anthropic recently changed its training defaults for Claude. And people flipped out.

  • No, nobody else likes this change either.

  • No, there's no reason to stop using Claude.

  • Yes, you still control the setting. So go, y'know...control the setting, already!

Here’s a quick clip that shows exactly which two clicks update your defaults. You have about two weeks to make the change, but it’s best to do it right now.

[Animation: how to update Claude’s data privacy settings. Caption: “Claude’s data privacy defaults changed. Just do this.”]

🎵 Closing Notes

We don’t need prompt guides the size of car repair manuals.
We need to stop pretending this is all brand new.
It isn't. It's more like interacting with someone from a vastly different culture, which, to be fair, most of us aren't great at either. But we can still stumble through it.

The real risk isn’t AI making mistakes. After all, people do that all the time, right?
The real risk is getting scared, getting impatient, or getting unrealistic with a tool that sometimes outperforms us and sometimes fails, but doesn't fail like we do.

Just ask the autonomous vehicle industry.
Self-driving cars weren’t always safer than humans.
But today, by most crash-rate measures, they are.
We still struggle when the machine errs. But it's hard to ignore the countless mistakes it doesn’t make that humans routinely do.

If you'd like to talk to a human instead of a machine about AI transformation in your organization, please book a meeting here.

Until next week,

Sam Rogers
Conversational UX Realist
Snap Synapse – tools and thinking partners to fuel your AI transformation
