What Survived 2025
Top 10 most relevant patterns from this year's issues. Steal them for 2026.
Created by Sam Rogers Β· Powered by Snap Synapse
Freely available on Substack, LinkedIn, and our mailing list.
Since June, Signals & Subtractions has published a new issue every week with the same format:
- One strategic signal
- One (human) prompt
- One subtraction opportunity
That frame (what's accelerating, what's stalling, and what to subtract so your organization can move faster with less drag) has proven helpful for tracking AI transformation patterns.
Some of those frameworks held up well. Some were overtaken by events. This issue is a survivorship filter: the ten that remain most relevant heading into 2026, ranked from useful to essential. What still holds after contact with 2025.
Each entry includes a Steal This section with a question, diagnostic, or one-liner you can drop into a meeting, a Slack thread, or your own thinking. Think of this as a field kit: ten tools that didn't break this year. Consider them a gift to share with your network.
The most fun part of these newsletters has consistently been the Analogy of the Week. This week, instead, I highly recommend this video interview from The AI Download with Shira Lazar: "The AI Moments That Shaped 2025 and Predictions for 2026 with Nate Jones".
Next week: our own signals to watch in 2026.
10. Align to the Spec, Not the Prompt (Issue 008)
The Framework: The most successful AI integrations define expected behaviors outside the prompt, in the spec. Prompts nudge outputs. Specs anchor systems.
Why It Holds: This became orthodoxy among serious practitioners. Anyone still treating prompts as the source of truth will hit scaling walls when they need versioning, rollbacks, or auditability.
Steal This:
If the spec isn't versioned, it's not real.
9. Culture is a Technical Dependency (Issue 007)
The Framework: Your AI roadmap is only as fast as your slowest cultural bottleneck. Technology ships in weeks. Human systems still run on quarterly cadences. Culture never "catches up" by accident.
Why It Holds: The 5% success stat later in the year validated this early call. Pilots that scored quick wins stalled because Legal hadn't built a review lane, retraining was deferred to the next fiscal year, and leadership assumed behavior would magically adapt.
Steal This:
What part of our rollout plan assumes the culture will magically adapt?
8. Confidence Saturation (Issue 023)
The Framework: Confidence used to signal competence. Then AI arrived, and our machines always sound sure. Now polished phrasing and synthetic eloquence flood every feed. Confidence is ambient noise, not signal.
Why It Holds: The difference between performance and precision is calibration. As AI fluency spreads, the people who can distinguish "sounds right" from "is right" become essential.
Steal This:
Right 80% of the time and 80% confident? Great.
Right 50% of the time and still 80% confident? There's the problem.
7. The Governance Gap (Issue 002)
The Framework: AI tools multiply while decision rights, governance structures, and coordination mechanisms lag behind. Everyone's using AI, but nobody's decided who decides.
Why It Holds: This was true in June and remains true now. The duct-tape governance that survived 2025 won't survive agentic AI speeds in 2026. Who owns the outcomes of AI-assisted decisions? Most orgs still can't name anyone.
Steal This:
When we can't name who owns AI-assisted outcomes, it's wishful thinking, not governance.
6. Yeast vs. Bread (Issue 014)
The Framework: MIT reported 95% of organizations getting zero return from GenAI. Same failure rate as digital transformation in 2019. Same root cause: mistaking the ingredient for the meal. AI is the yeast. Business pressure is the heat. Neither makes bread alone.
Why It Holds: Most orgs are still stockpiling yeast. The few who learned the recipe in 2025 (right proportions, right timing, right environment) will compound in 2026.
Steal This:
Is our organization making bread, or just stockpiling yeast and avoiding heat?
5. Synthetic Trust (Issue 025)
The Framework: Teams trust tone more than truth. Under deadline pressure, confident output passes as credible long before its reasoning is verified. AI's fluency masks uncertainty; coherence substitutes for correctness.
Why It Holds: Synthetic trust rises faster than our ability to detect it. Before teams can scale AI responsibly, they need verification infrastructure: not just policies, but defensible audit trails.
Steal This:
If we had to defend every AI-generated decision in court, how would we verify it?
4. Governance as UX (Issue 010)
The Framework: Employees aren't ignoring policies; they can't tell when they're breaking them. The interface for following governance is missing. Policies written for Legal don't help when the risk is downstream of a chatbox.
Why It Holds: Just-in-time clarity embedded in workflow beats training and documentation every time. If the path to good judgment is unclear, most people will choose fast over safe, especially under deadline pressure.
Steal This:
If it requires memorization, reading a PDF, or digging through the intranet, it's not governance; it's theater.
3. Measurement Theater (Issue 022)
The Framework: AI maturity metrics built on self-assessments and inflated usage data are like grading your own math test, then publishing the average as strategy. Scores are simple. Understanding rarely is.
Why It Holds: This becomes urgent during planning and review season. When "number go up" but "performance go down," the dashboards aren't protecting anyone. They're just documenting the failure in color-coded detail.
Steal This:
Ask three people how that metric is calculated. Three different answers means we're measuring misunderstanding, not maturity. What's one metric you'd pause until someone can show the formula end-to-end?
2. The Trust Gap (Issue 018)
The Framework: Consumer adoption leads; enterprises follow. AI is stalled at the classic chasm of Diffusion of Innovation fame, which we update for the Age of AI. Architects and Catalysts race ahead. Integrators wait for workflows that actually work in their hands. Conformers need policies and proof. Legacy Loyalists are dug in.
Why It Holds: Leadership funds the pilots, but Integrators decide whether AI sticks. They don't care that something could work. They care that it does work in their hands, in their constraints, in their real day.
Steal This:
Of all the AI pilots so far, which ones would the Integrators actually depend on?
1. The Adoption Relay (Issues 027 → 028 → 029)
This is a three-part arc that forms one framework.
The Framework: AI adoption is a baton pass through three functions, and dropping any handoff stalls the race.
Managers decide whether AI feels safe. If it never shows up in 1:1s, it stays a side hobby or a guilty secret. (Issue 027)
Learning & Development decides whether AI becomes practice. Workshops that produce satisfaction scores instead of working agreements are theater, not onramps. (Issue 028)
Ops/Workflow owners decide whether AI becomes the default path. If a normal person can complete the workflow without seeing the AI step, you hosted a demo; you didn't embed AI. (Issue 029)
Why It Holds: This is the 2026 playbook. Psychological safety → working agreements → workflow defaults. Clear roles, clear handoffs, clear accountability. Each function owns one leg. Skip one and the baton hits the floor.
Steal This:
On my team, using AI feels like:
A) career boost
B) neutral tool
C) career risk
D) we don't say 'AI' around here
Answer honestly. Not the official answer, the lived one.
Closing Notes
Frameworks don't survive because they're clever. They survive because they name something real that people are already feeling but couldn't articulate.
If any of these gave you language for a problem you're facing, share them. That's how this newsletter grows: readers who look smart passing something useful to their network.
Next week: what I'm watching in 2026. Not predictions, exactly. More like fault lines worth monitoring as agentic AI hits enterprise reality.
Happy Holidays,
Sam Rogers
Framework Archivist
Snap Synapse: from AI promise to AI practice
Book a meeting
Explore the PAICE Pilot Program and lock in 2025 pricing before the calendar/budget turns.
Full Archive
For reference, here's the complete index of all our 2025 issues:
| Issue | Title | Key Ideas |
| --- | --- | --- |
| 001 | The Automation Arms Race | Speed vs. velocity; pause one automation layer |
| 002 | AI Moving Faster Than Orgs | Governance gap; who decides who decides |
| 003 | Tool Choice Is Strategy | Tooling encodes worldview; sunset pacifier tools |
| 004 | Org Charts Fighting Last War | Decision latency; remove one approval |
| 005 | Vanity Metrics | Dashboard horoscopes; archive unused reports |
| 006 | Machines With Alien Contexts | Draft vs. deliverable confusion; label AI outputs |
| 007 | Culture as Technical Dependency | Slowest bottleneck; 30-day modular tests |
| 008 | Align to Spec, Not Prompt | Versioned specs; tuning the guitar |
| 009 | Open Auditable AI Stacks | Proof-of-benefit; rooftop solar |
| 010 | Governance as UX | Just-in-time clarity; funhouse sign |
| 011 | Funnel to Strainer | Buyers start in ChatGPT; map actual steps |
| 012 | GPT-5 Changes the Game (Mostly Doesn't) | Model disruption; comfy boots |
| 013 | Security Isn't Adding More | Subtract exposure; handwashing vs. medicine |
| 014 | 5% Success | Yeast vs. bread; prune pilots pronto |
| 015 | AI Breaks Training Workflow | Parallel learning/doing; surgery in progress |
| 016 | How We Talk to AI | Double standard; arguing with a genie |
| 017 | Tool Becomes Talent | Jobs vs. tasks; hiring the hammer |
| 018 | Planning for Change | Trust gap; Integrators decide; freeway onramp |
| 019 | Personal Stories | IEP for everyone; working agreements |
| 020 | Turbulence Not Thrust | Measure the air, not the plane |
| 021 | PAICE Launch | Credit score for AI collaboration |
| 022 | Measurement Theater | Three people, three answers; pedal to metal |
| 023 | Confidence Saturation | Calibration over confidence; off-key singing |
| 024 | Performative Empathy | Flow over friendly; noise-cancelling |
| 025 | Synthetic Trust | Verification infrastructure; vanilla flavoring |
| 026 | Friction Creates Shape | Thank the skeptics; checking the oven |
| 027 | Managers Write AI Policy | 1:1s decide; managers as spotters |
| 028 | Workshops & Walkways | L&D as onramp, not hype; moving walkway |
| 029 | AI as Default Path | Ops owns defaults; stage manager cue sheets |
| 030 | Frameworks that Survive 2025 | (this issue) |
| 031 (coming soon) | Exponentially into 2026 | (next Monday, as usual) |