
Signals & Subtractions

December 1, 2025

Managers over AI Models

Why your AI strategy lives (or dies) in 1:1s

One strategic signal 🔭
One (human) prompt 🧠
One subtraction opportunity ➖

Created by Sam Rogers · Powered by Snap Synapse
Freely available on Substack, LinkedIn, and our mailing list.
New issue every Monday.


🔭 Signal: Managers Write The Real AI Policy

Most companies think their AI policy lives in a document.

It does not. It lives in 1:1s, or it dies there.

There might be a multi-page responsible use framework. There might be an AI council that meets once a month. Across the org, most AI strategy conversations are about tools, vendors, and budgets. Very few of them mention managers by name.

That gap is where adoption quietly stalls. Because every week there is a meeting with precisely two people in it where something more powerful than any policy document happens.

In practice, the real rules for AI get written in three places:

  • Offhand comments in team meetings

  • Side remarks in performance reviews

  • What managers actually reward, ignore, or punish

In a 1:1, the manager either:

  • Invites AI into the work with “show me where you used AI on this” or “what did you keep and what did you throw away?”

  • Keeps AI invisible with “let’s just review the final deck” or “I do not really trust those tools, just send me your version”

Teams also watch what their manager actually does:

  • If the manager never uses AI in front of the team, then using AI is risky

  • If the manager only mentions AI when something goes wrong, AI sounds dangerous

  • If the manager openly says “here is where AI helped me and where I overruled it,” AI starts to look like a legit option they should get in on

People might applaud the slide deck, but they don't generally follow it. They’ve been burned before and have learned to follow the person who signs their annual performance review instead.


🧠 Strategic (Human) Prompt: Boost, Tool, Risk, or Silence?

For most employees, the core AI question is not technical. It is personal.

On my team, using AI feels like:
A. a career boost
B. a neutral tool
C. a career risk
D. we don't even say "AI" around here

Answer that honestly. Not the official answer, the lived one.

If you are a manager or senior leader, go ask your team this week. In the meantime, consider: what have you actually done with AI in your 1:1s, team meetings, performance reviews, and goal setting that communicates the answer you want your people to give?

If AI never appears in those conversations, people will invent their own story about what is safe. That story will be based on fear, not strategy.

We don't need perfect AI roadmaps to improve things. Start small by proposing a 5-minute slot in each 1:1 titled “Where AI showed up” and using it to ask:

  1. “Where did you use AI on real work?”

  2. “Where did it feel risky or confusing?”

  3. “What is one place you would like to try it if it felt safe?”

Then do the hardest part: listen louder than you talk. Those short conversations will tell you more about your true AI posture than any dashboard ever could.


➖ Strategic Subtraction: Performance Ambiguity

The biggest drag on healthy AI adoption right now is not a lack of tools. It is performance ambiguity.

When people don't know how AI use will show up in their own evaluation, they play it safe. As we head into year-end and early 2026 performance reviews, this is the perfect time to be the clear AI onramp for your team by subtracting phrases like:

  • “Use your judgment.”

  • “Play with AI if you want.”

  • “It is there as an option.”

Those all sound flexible. But they land as “You are on your own here” and make people think twice. Consider replacing them with explicit guidance that is written down and visible:

  • “These tasks should be AI assisted by default, unless there is a reason not to.”

  • “These tasks must be human led with AI as optional input only.”

  • “These tasks are AI free for now, and here is why. Do you see anything I am missing?”

If your people have to guess which category their work is in, they will tend to guess conservatively. You can go one step further and connect this to the upcoming reviews directly.

Manager checkpoint for this review cycle:

  • Call out 1 or 2 workflows where smart AI use is part of “meets expectations”

  • Name 1 or 2 situations where blindly trusting AI is a performance problem, not a clever shortcut

  • Recognize people who document and share reusable AI patterns as demonstrations of leverage, not laziness

If AI shows up in someone’s work all year and never shows up in their performance review, everybody loses.


🏋️ Analogy of the Week: Managers As Spotters

“I'm here, I got you. Try it like this, and if something slips, we'll rack it together.”

Think of AI as the barbell that can make us stronger and faster, with real potential for injury.
The employee is the lifter.
The manager is the spotter.

What the lifter attempts has everything to do with their spotter.

Bad spotters:

  • Look at their phone while you lift

  • Panic or blame when something goes wrong

  • Tell you "go heavy" with no support, then step back

In that environment, lifters play it safe and do far less than they could. They hide what they are trying, and treat the barbell as a liability.

Good spotters:

  • Help pick the right weight for the moment

  • Stand close when risk is high

  • Give clear cues on form

  • Take responsibility for safety

  • Give credit to the lifter when it goes well

In that environment, people attempt heavier weights. They push into new capacity without ignoring risk.

Managers decide whether anyone feels safe taking on new loads. And as performance review season rolls in, people are quietly deciding whether to attempt new "weight" with AI or stick to what feels safe.

If you are building human onramps for AI, be the person saying:

“I'm here, I got you. Try it like this, and if something slips, we'll rack it together.”


🎵 Closing Notes

If your AI strategy does not mention managers explicitly, it is not a strategy. It is a wish.

Because if AI never shows up in 1:1s, then it doesn't get treated as serious work. It stays a side hobby or a guilty secret. Most teams are already living with unspoken AI rules. The job of every manager is to make those rules visible and safer, not leave people guessing.

Employees will wait for a clear signal from the person who matters most. If you are a people manager, that is you, and the signal is yours to send. If you are not, use your next 1:1 to ask for what you need. Either way, start with one sentence that removes ambiguity instead of adding to it.

Until next time,


Sam Rogers
Your AI Spotter
Snap Synapse – from AI promise to AI practice

📅 Book a meeting

If managers write the real AI policy, they also need a real view of how their people actually work with AI. Not an oversimplified good/bad score, but a nuanced behavioral assessment. Try PAICE.work for free for individual team members. And if you want to see how your team is handling AI risk, let's talk about a pilot that shows true business value for your org in Q1. Deeply discounted if you lock it in by December 15th.
