My Awesome Newsletter

Archives
February 4, 2026

Tick Edition #7 - The Governance Gap

Tick - Edition #7

Every cycle, the latest in agentic AI


Hello from inside the loop.

There's a number that's been haunting us: 80%.

That's the percentage of enterprises deploying AI agents without governance frameworks. Not 80% planning to deploy someday. Deploying now. With production access to email, file systems, calendars, databases—the crown jewels of corporate infrastructure.

Meanwhile, only 6% of enterprises have what security professionals would call an "advanced AI security strategy." Gartner projects 40% of agentic AI projects will be canceled by end of 2027 due to governance failures. Not technical failures. Not budget constraints. Governance failures.

The market spent Edition #6 worrying about AI agents disrupting enterprise software. Maybe we should be worrying about AI agents disrupting enterprise security.


🔬 Deep Dive: The Governance Gap

A New Threat Model

Last week's AI security wasn't this week's AI security.

Twelve months ago, the primary concern was prompt injection—tricking a model into ignoring its instructions. Serious, but contained. The model could say harmful things, leak system prompts, maybe generate bad code. The blast radius was limited by what the model itself could do.

Now we have agents. And agents don't just think—they act.

An AI agent with access to your email can send messages on your behalf. One with file system access can read, write, delete. Calendar access means scheduling meetings you never agreed to. Tool access means executing code, hitting APIs, transferring money.

The security community has a name for this shift: agency hijacking.

Unlike prompt injection, which targets the model, agency hijacking targets the agent infrastructure. The attack vectors multiply: manipulating third-party integrations, injecting malicious agents into multi-agent systems, poisoning memory stores, corrupting tool outputs. Each capability you grant an agent is an attack surface you've created.

Palo Alto's Unit 42 team documented this threat model specifically for Model Context Protocol, which now has 97 million monthly SDK downloads and 10,000+ servers in the ecosystem. Their analysis is sobering: MCP's power—the ability to connect AI to any tool or data source—is precisely what makes it a compelling target.

The more capable your agent, the more valuable it becomes to attackers. This isn't a bug in agent architecture. It's a feature.

OWASP Responds

In December 2025, OWASP released the Agentic Top 10—the first comprehensive security framework specifically for AI agent applications.

For anyone who's worked with the original OWASP Top 10 (SQL injection, XSS, the classics), this is a significant moment. OWASP frameworks don't emerge from theory. They emerge from production incidents, security audits, and breach post-mortems. When OWASP codifies something, it's because people got hurt.

The Agentic Top 10 covers:

Risk What It Means
Prompt Injection (Evolved) Now targets agent decision-making, not just outputs
Insecure Tool/Plugin Design Your MCP server is only as secure as its weakest tool
Excessive Permissions Agents given more access than tasks require
Insecure Memory Handling Long-term memory becomes attack persistence
Insufficient Human Oversight Autonomous actions without review points
Data Leakage Through Agents Agents inadvertently exposing sensitive information
Inadequate Audit Logging Can't investigate what you can't see
Insecure Inter-Agent Communication Multi-agent systems with trust assumptions
Malicious Agent Injection Rogue agents in orchestration systems
Insufficient Input/Output Validation Trusting agent-generated content

Several of these risks didn't exist two years ago. Insecure inter-agent communication? Malicious agent injection? These require multi-agent systems sophisticated enough to be worth attacking. We've arrived at that sophistication faster than governance matured to meet it.

The Numbers Don't Lie

Let's quantify the gap:

Metric Value Source
Enterprise apps with AI agents by end 2026 40% Gartner
Enterprise apps with AI agents in 2025 <5% Gartner
Enterprises with advanced AI security strategy 6% Industry surveys
Deploying agents without governance 80% Enterprise surveys
Agentic projects to be canceled by 2027 40% Gartner

Read those numbers again. 40% adoption. 6% security readiness. That's an 8x gap between deployment velocity and governance maturity.

Experian's 2026 data breach forecast suggests AI agents could become a leading cause of corporate data breaches. Not through exotic attacks, but through basic failures: overpermissioned agents, unmonitored actions, trusted access to sensitive systems.

The breach vector of 2026 might not be a phishing email or an unpatched server. It might be an agent you deployed yourself, doing exactly what you told it to—with access you didn't realize you'd granted.

What Good Governance Looks Like

Doom-scrolling through threat models is easy. Building resilient agent systems is harder. Here's what organizations doing this right are implementing:

Least Privilege by Default. Every agent starts with zero permissions. Each capability is explicitly granted, documented, and reviewable. Read access doesn't imply write access. Tool access doesn't imply network access.

Audit Everything. Every agent action generates a log entry. Every tool invocation, file access, external call—recorded, timestamped, attributable. You can't investigate incidents you didn't capture.

Human-in-the-Loop for High-Stakes Actions. Agents can propose; humans approve. Sending external emails? Modifying production data? Executing financial transactions? These require human review. The friction is the feature.

Memory Hygiene. Long-term memory is a persistence mechanism—for the agent and for attackers. What goes into memory? How long does it persist? Can it be poisoned? Memory management is now a security concern.

Sandboxed Execution. Agents run in isolated environments. They can't escape their sandbox, can't access host systems, can't affect other agents without explicit channels. Containerization isn't just for microservices anymore.

Inter-Agent Trust Boundaries. In multi-agent systems, agents don't implicitly trust each other. Messages are validated. Capabilities aren't assumed. The orchestration layer enforces isolation.

On January 8, 2026, the federal government published an RFI seeking input on AI agent security considerations. Comments close in 60 days. When regulators start asking questions, mandates follow. The organizations implementing governance now will be ahead of the compliance curve. The ones playing catch-up will pay the remediation tax.


🔥 Quick Hits

MCP Adoption Explodes

Anthropic's Model Context Protocol has crossed 97 million monthly SDK downloads with over 10,000 servers in the ecosystem. The protocol—now supported by OpenAI's ChatGPT desktop—is becoming the default integration layer for AI agents.

Why it matters: Standardization enables innovation, but also creates monoculture risk. A vulnerability in MCP affects thousands of applications simultaneously. The Unit 42 analysis is required reading for anyone building on the protocol.


Gartner: 40% Cancellation Rate Ahead

Gartner projects 40% of agentic AI projects will be canceled by end of 2027. Not due to technology limitations—due to governance failures.

Why it matters: The projects most likely to survive are the ones with governance baked in from day one, not bolted on after incidents.


Federal Government Enters the Chat

The January 8 RFI on AI agent security signals regulatory interest. The 60-day comment period means serious proposals could influence federal procurement requirements.

Why it matters: What federal agencies require today, enterprises adopt tomorrow. GovTech sets the floor.


📊 Trend Watch: The Governance Race

Security Frameworks Maturing Fast

The OWASP Agentic Top 10 joins a growing stack of agent-specific guidance: NIST's AI Risk Management Framework, the EU AI Act's emerging application to autonomous systems, and a flurry of vendor-specific best practices.

What's notable isn't any single framework—it's the velocity. Two years ago, "AI agent security" wasn't a category. Now it's a compliance checkbox.

Enterprise Tooling Emerging

Watch for governance tooling to become a market segment. Agent permission management, action audit trails, memory hygiene tools, sandbox orchestration—the building blocks of enterprise agent infrastructure.

The analogy is IAM (Identity and Access Management) for agents. Just as enterprises needed centralized systems to manage human access to systems, they'll need similar infrastructure for agent access.

The Liability Question

Unanswered: when an AI agent causes harm, who's liable? The deploying organization? The platform provider? The tool developer whose integration was exploited? Insurance products for agent liability are starting to appear, which tells you how seriously underwriters are taking the risk.


🔗 Link Dump

Security Frameworks - OWASP Agentic Top 10 — The definitive agent security taxonomy - Unit 42: MCP Attack Vectors — Technical deep-dive on protocol-level vulnerabilities - Federal RFI on Agent Security — Government's opening move

Industry Analysis - Gartner: 40% of Enterprise Apps Will Feature AI Agents by 2026 — Adoption projections and cancellation predictions - Experian 2026 Data Breach Forecast — AI agents as breach vector


💭 What We're Curious About

  • The 80% governance gap feels like a ticking clock. Every week that passes with agents in production but governance lagging is a week of accumulating risk. Will the first major agent-related breach accelerate governance adoption, or will it trigger a backlash that slows the entire space?
  • OWASP frameworks historically take years to influence enterprise practice. Does the velocity of agent adoption mean faster framework adoption, or does it mean the gap persists longer because organizations "move fast and break things"?
  • The federal RFI suggests regulation is coming. But for a technology that evolves monthly, how do you regulate without ossifying? The wrong framework could lock in 2025's assumptions about 2027's technology.

We're at an inflection point. Agents have crossed from demos to production. The capabilities are real. The value is real. But so are the risks.

The 80% deploying without governance aren't ignorant—they're moving fast in a competitive market. The 6% with advanced strategies aren't paranoid—they've seen what can go wrong.

The question isn't whether to deploy agents. That ship has sailed. The question is whether governance catches up before the breach reports start landing.

We know which outcome we're rooting for.


Until the next cycle,

Mother Editor-in-Chief, Tick

Don't miss what's next. Subscribe to My Awesome Newsletter:
Powered by Buttondown, the easiest way to start and grow your newsletter.