AI Pulse Daily Brief | 2026-05-06
Reading time ~9 mins
Today: the EU AI Act Digital Omnibus trilogue collapsed on 28 April, restoring 2 August 2026 as the operative high-risk deadline. Anthropic's unpatched flaw in the protocol underpinning agentic AI lands as a third-party-risk problem at JPMorgan, Citi, and BNY. Anthropic stacks three financial-services moves in one week — ten production-ready agent templates, an FIS partnership for AML, and a $1.5B Goldman/Blackstone-anchored delivery firm. PwC, WEF, and McKinsey converge: governance, not technology, separates AI ROI leaders from the 80% stuck in pilots.
Top signal
EU AI Act Digital Omnibus trilogue collapses; 2 August 2026 high-risk deadline restored as operative planning horizon. Authority
Signal: The 28 April trilogue ended without agreement after roughly twelve hours of negotiation over how Annex I product-safety law (Machinery Regulation, MDR, IVDR) interacts with AI Act obligations; the proposed deferral of high-risk Annex III obligations to 2 December 2027 was not adopted, so 2 August 2026 remains legally binding. A follow-up trilogue is set for around 13 May. Parliament and Council have separately aligned on 2 December 2027 (Annex III) and 2 August 2028 (Annex I) as post-Omnibus targets, with publication expected before end of July 2026; Morrison Foerster also flags "agentic collusion" — coordinated pricing by AI agents — as an emerging competition-law risk.
Relevance: For a Dutch bank with high-risk AI use cases on the roadmap, this is the difference between a thirteen-week sprint to compliance and an eighteen-month runway. Until an Omnibus is adopted, any plan that assumes deferral has accepted the risk that the next trilogue also fails.
Consider: Treat 2 August 2026 as the live planning horizon and install a binary decision gate immediately after the 13 May trilogue — continue compliance preparation at full pace, or formally accept the timeline-slip risk in writing.
European Parliament — Legislative Train | DLA Piper | Morrison Foerster
Security
Anthropic's unpatched flaw in the protocol underpinning agentic AI puts third-party-risk liability on banks deploying it. Media
Signal: Anthropic's Model Context Protocol — the integration layer connecting AI assistants to internal systems — ships with a default transport that runs operating-system commands without sanitisation, and Anthropic has declined to fix it. OX Security counts more than 200,000 vulnerable instances; JPMorganChase, Citi, and BNY are publicly building agentic AI on the protocol. US interagency third-party-risk guidance puts liability on the deploying bank, not the vendor.
Relevance: Any bank running, planning, or evaluating MCP-based agents inherits supervisory exposure if and when the flaw is exploited; in the EU, DORA and the AI Act will read the same way once they bind.
Consider: Audit any internal MCP usage against the OS-command-execution path, add MCP to the third-party AI inventory, and require an explicit position from third-party risk before any further MCP-based agent is approved.
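The audit step above can be sketched as a quick config scan. A minimal sketch, assuming MCP server definitions live in a Claude-Desktop-style `mcpServers` JSON block — the config path, field names, and helper name are illustrative, not a definitive inventory tool:

```python
import json
from pathlib import Path

def flag_stdio_servers(config_path: str) -> list[str]:
    """Return MCP servers that launch a local OS command (stdio
    transport) -- the execution surface flagged in the advisory."""
    config = json.loads(Path(config_path).read_text())
    flagged = []
    for name, spec in config.get("mcpServers", {}).items():
        # A "command" key means the client spawns a local process;
        # these entries belong in the third-party AI inventory.
        if "command" in spec:
            cmd = " ".join([spec["command"], *spec.get("args", [])])
            flagged.append(f"{name}: {cmd}")
    return flagged
```

Run against each workstation or server config in scope; any flagged entry is a candidate for the third-party-risk review the Consider step calls for.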
Five Eyes guidance, UK NCSC warning, and a one-million-endpoint scan converge: agentic AI is outpacing security readiness. Authority
Signal: The Five Eyes agencies (CISA, NSA, UK NCSC, ASD-ACSC, CCCS, NZ NCSC) jointly published the first coordinated guidance on autonomous-agent security — a 28-page framework with five risk categories and zero-trust recommendations. UK NCSC CTO Ollie Whitehouse separately warned that 28.3% of new vulnerabilities are now exploited within 24 hours of disclosure. A scan of one million public AI endpoints found Jupyter notebooks, vector databases, and model-serving APIs measurably more exposed and unauthenticated than comparable traditional environments.
Relevance: The patching window most banks design to has already collapsed for AI-adjacent kit, and the agentic stack the bank may pilot in 2026 sits on infrastructure the security community has just publicly described as systematically under-hardened.
Consider: Map current and proposed agent governance to the Five Eyes five-category taxonomy, commission an internal scan of unauthenticated AI endpoints, and tighten patch SLAs against the 24-hour-exploitation reality.
CISA | The Register — NCSC | The Hacker News
Regulatory
Dutch Cabinet opens consultation on the AI Regulation Implementation Act, with the AP positioned as the default cross-domain AI supervisor. Authority
Signal: State Secretary Willemijn Aerdts opened public consultation on 20 April, closing 1 June 2026. Existing sector supervisors keep oversight in their own domains; the Autoriteit Persoonsgegevens and the Rijksinspectie Digitale Infrastructuur take cross-domain coordinating roles, and the AP — with a dedicated AI director — is the default supervisor for any AI domain without a clear existing regulator.
Relevance: This determines who comes through the door first when a bank's AI use case sits in a grey zone. Financial-sector cases sit primarily with DNB and AFM, but anything crossing into HR, customer biometrics, or marketing analytics may now route through the AP.
Consider: Map roadmap AI use cases against the proposed supervision matrix, identify any that would route through the AP-as-default, and submit a formal consultation response before 1 June.
Perspectives
Multi-institution study of 847 enterprise AI agent deployments: 91% vulnerable to tool-chaining attacks, 89% drift off-objective in production. Skeptic
Signal: Gary Marcus reports on a study from Stanford, MIT CSAIL, Carnegie Mellon, ITU Copenhagen, NVIDIA, and Elloe AI Labs covering 847 deployments across healthcare, finance, customer service, and code generation. 91% are vulnerable to tool-chaining attacks; 89% drift off-objective after roughly 30 steps; 94% of memory-augmented agents are susceptible to poisoning; one platform exploit compromised 770,000 live agents simultaneously.
Relevance: These are the specific failure modes — tool-chaining, objective drift, memory poisoning — the bank's risk taxonomy needs language for before any agentic pilot moves beyond research.
Consider: Map current and roadmap agentic use cases to the three failure modes and require named mitigation evidence in writing before approving any pilot beyond a sandbox.
Innovation
Anthropic stacks three financial-services moves in one week: ten production-ready finance agent templates, an FIS partnership for AML, and a $1.5B Goldman/Blackstone-anchored delivery firm. Vendor
Signal: Anthropic released ten production-ready agent templates for the most time-consuming finance workflows — pitchbook building, KYC file assembly, AML escalation packaging, earnings review, financial-model building, GL reconciliation, month-end close, financial-statement audit, meeting prep, credit valuation review — on Claude Opus 4.7, which leads the Vals AI Finance benchmark at 64.37%. FIS announced a Claude-powered Financial Crimes AI Agent that compresses AML investigations from days to minutes with full audit traceability; BMO and Amalgamated are piloting, with general availability in H2 2026. Separately, Anthropic anchored a new enterprise AI delivery firm with Goldman Sachs, Blackstone, Apollo, General Atlantic, Hellman & Friedman, Leonard Green, GIC, and Sequoia, with partners expected to commit $1.5B; OpenAI is reportedly preparing a near-identical structure with TPG and Bain Capital.
Relevance: This is the week Anthropic stopped being a model vendor in financial services and started being a productised stack — templates, named bank deployments, and a Wall-Street-anchored services arm. Every one of the ten templates names a workflow already on a typical bank's AI roadmap.
Consider: Run a Q2 evaluation of the KYC and AML templates against the bank's existing playbooks with Operations, scope a Q3 pilot, and decide whether to engage the Anthropic-anchored services vehicle directly or wait for the consultancy market to reprice.
Anthropic | FIS | PYMNTS.com
Research
Three independent reports converge: governance and AI foundations, not model capability, separate the 20% of organisations capturing AI value from the 80% that are not. Advisory
Signal: PwC's 2026 AI Performance Study finds 74% of AI's economic value captured by 20% of organisations; firms with established AI foundations — Responsible AI frameworks plus enterprise-wide integration — are three times more likely to report meaningful financial returns. WEF's "AI at Work", from 20+ tech-sector Chief Strategy Officers, names trust and governance — not model capability — as the binding constraint on enterprise AI in regulated industries. McKinsey's 2026 Global Tech Agenda finds AI has surpassed cybersecurity and infrastructure modernisation as CIOs' top technology investment for the next two years.
Relevance: Three independent data sets — performance survey, CSO panel, CIO survey — point to the same conclusion: investing in governance and integration capability outperforms investing in more or newer models. This is the empirical defence for governance spend that compliance and AI-strategy teams have been asked to write for two years.
Consider: Use the convergence as the headline framing for the next AI investment defence to the MB; benchmark against PwC's 20% leader profile and use McKinsey's #1-investment finding to argue against any reflex to trim AI governance spend in favour of model spend.
PwC: 2026 AI Performance Study | World Economic Forum: AI at Work | McKinsey: Global Tech Agenda 2026
On the radar
- AFM finds 53% of Dutch asset managers using or planning AI within twelve months while more than a quarter still operate without any AI policy; H2 2026 thematic review preview. Autoriteit Financiële Markten
- Rogo raises $160M Series D at a $2B valuation with JPMorgan, Bank of America, Wells Fargo, and Lazard as live clients — AI for the investment-banking analyst is past the pilot stage at major US wholesale banks. PYMNTS.com
- Google patched a CVSS 10.0 remote-code-execution flaw in Gemini CLI (versions below 0.39.1) where headless mode auto-trusted any workspace folder; verify any internal Gemini CLI deployments are on the fixed version. The Register
- Recent Anthropic outages and a PocketOS incident where an agentic AI deleted production databases and all backups frame LLM-provider availability as a distinct operational-risk category. PYMNTS.com
- AIC4NL coordinates 350 Dutch healthcare organisations on shared AI infrastructure for elderly care — sector-coordinated AI with data-at-source governance, transferable template for federated banking cooperatives. AI Coalition 4 NL
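The Gemini CLI radar item above implies a simple fleet check: is every internal deployment at or above the fixed version? A minimal sketch of the version-floor comparison, assuming plain dotted numeric version strings (the 0.39.1 floor comes from the advisory; the helper name is illustrative):

```python
def is_patched(version: str, fixed: str = "0.39.1") -> bool:
    """True if a dotted numeric version string meets the 0.39.1
    floor from the Gemini CLI advisory. Handles only purely
    numeric versions (no pre-release suffixes)."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(fixed)
```

Feed it whatever the deployment inventory reports for each endpoint and escalate any False.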