Weekly Intelligence Brief — April 5, 2026
Iran Declared Your Vendors Military Targets. Then It Followed Through.
On March 31, the Islamic Revolutionary Guard Corps published a list of 18 companies it designated as "legitimate military targets." The list included the companies that run most of American business technology: Microsoft, Google, Apple, Meta, Nvidia, Intel, Cisco, Oracle, Dell, HP, IBM, and Palantir. It also included JPMorgan Chase, Tesla, General Electric, Boeing, Abu Dhabi AI firm G42, and Dubai cybersecurity firm Spire Solutions, broadening the threat beyond tech to American financial, industrial, and regional partners. Amazon was not on the list. Its data centers had already been hit. The IRGC gave an 8 PM Tehran time deadline and warned employees to evacuate immediately.
The justification for the tech companies: they provide the AI and cloud infrastructure enabling US precision strikes against Iran. One week earlier, Palantir's chief technology officer told Bloomberg TV this was "the first large-scale combat operation that was really driven, enhanced and made substantially more productive with technology, with AI." Iran's response was to declare the infrastructure behind that productivity a military objective.
They had already started. On March 1, Iranian drones struck three Amazon Web Services facilities in the United Arab Emirates and Bahrain. Two of three availability zones in the UAE region were physically destroyed, and 109 cloud services went offline. Amazon waived all charges for the entire month, the first time a major cloud provider has ever forgiven a full billing cycle. Three more strikes followed over the next month, hitting Bahrain repeatedly and destroying telecom infrastructure that AWS depends on. The pattern wasn't a one-off. It was a campaign.
In parallel, the Iran-linked group Handala ran cyber operations at a tempo that matched the kinetic strikes. It breached FBI Director Kash Patel's personal email and published the contents. It hit medical device manufacturer Stryker with a destructive attack, reportedly using Microsoft Intune's remote management capabilities to wipe more than 200,000 devices. Your organization's device management platform, the same tool your IT team uses to push software updates and enforce security policies, turned into a weapon. Handala also deleted 22 terabytes of data from 14 Israeli companies during Passover and hit St. Joseph County, Indiana. One group. One week. Cyber and kinetic running on parallel tracks.
This is where it hits your balance sheet. When a missile physically destroys a cloud data center, the loss falls into a gap between three types of insurance. Traditional property policies exclude acts of war. Cyber policies typically exclude acts of war. Business interruption policies increasingly limit coverage for cloud provider outages. The war exclusion clause that Lloyd's of London uses across its syndicate market has never been tested against a claim from a physically destroyed data center. The AWS strikes may become that test case. And Amazon's own service agreements cap its liability at whatever you paid them in the prior 12 months, a fraction of what an extended outage actually costs.
The legal ground is shifting underneath this. International humanitarian law draws a line between military objectives and civilian infrastructure. But when a company's servers host both commercial email and defense workloads, that line gets harder to draw. The concept of "dual-use" infrastructure, civilian systems that also serve military purposes, has been expanding for decades. A 1923 international arbitration established that governments can destroy private infrastructure serving military purposes during wartime without compensating the owners. That precedent is over a century old. The infrastructure it now applies to runs your business.
No precedent has been identified for a nation-state formally publishing a named list of private companies as military targets. The companies on that list aren't defense contractors. They're your email provider, your cloud platform, your payment processor, and your device management vendor.
The Takeaway: Check three things. First, where are your cloud workloads physically hosted? If any run in Middle East regions, your disaster recovery plan just became a business continuity requirement, not a checkbox. Second, read your cyber insurance and business interruption policies for war exclusion language. If a military strike on your cloud provider isn't covered, that's an uninsured gap your board needs to know about. Third, ask your vendors whether they have operations in conflict-adjacent regions and what their continuity plans look like. The question used to be theoretical. It's not anymore.
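The first check above can start from an asset inventory export rather than console clicking. The sketch below assumes you already have a list of (workload, region) pairs; the region names are AWS's published Middle East and Israel regions, and you would adapt the set for your own provider's naming.

```python
# Sketch: flag workloads hosted in conflict-adjacent cloud regions.
# Region identifiers are AWS's Middle East/Israel regions; swap in
# your provider's equivalents. The inventory format is an assumption.

MIDEAST_REGIONS = {"me-south-1", "me-central-1", "il-central-1"}

def flag_conflict_adjacent(workloads):
    """workloads: iterable of (name, region) pairs from an asset inventory."""
    return [name for name, region in workloads if region in MIDEAST_REGIONS]

inventory = [
    ("billing-api", "us-east-1"),
    ("analytics-etl", "me-central-1"),
    ("backup-archive", "me-south-1"),
]
print(flag_conflict_adjacent(inventory))  # → ['analytics-etl', 'backup-archive']
```

Anything this flags is where the disaster recovery plan stops being a checkbox.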
Sources: TNW · CNBC · Network World · CSIS · TechPolicy.Press · Bloomberg
The AI Inside Your Software May Already Report to Beijing
Chinese AI models now handle roughly 45% of workloads on major AI routing platforms, up from 1.2% in late 2024. In one week in February, they hit 61%. The reason is cost. DeepSeek charges roughly a tenth of what comparable American models cost. Alibaba's Qwen family has been downloaded 700 million times. Developers have created 180,000 derivative versions built on top of those models. This isn't a consumer app trend. It's a shift in the infrastructure layer that software companies build on.
That shift has a consequence most boards haven't considered. When a software vendor integrates an AI model into its product, the choice of which model to use is a cost decision made somewhere inside your vendor's organization. You never see it. Most vendor risk questionnaires ask about data storage and subprocessors. Few ask which AI model is embedded in the product or where that model processes data. Kimi, built by Beijing-based Moonshot AI, is the base model for Cursor's Composer 2, a coding tool used by development teams worldwide. MiniMax, a Shanghai-based company, was the single most-used model by volume on OpenRouter in February. Developers routing AI calls through aggregation platforms may not know which model is processing their request on any given call. Three layers down in your software supply chain, your data may be processed by a Chinese model that nobody in your organization evaluated, approved, or even knows about.
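Teams that route calls through an aggregation platform can at least log which upstream model actually answered. This is a minimal sketch assuming an OpenAI-compatible response that echoes a "model" field in provider/model form, as OpenRouter's responses do; the flagged provider names are illustrative, not a vetted list.

```python
# Sketch: audit which upstream model served a routed AI request.
# Assumes an OpenAI-compatible JSON response with a "model" field
# in "provider/model" form; the provider set below is illustrative.
import json

FLAGGED_PROVIDERS = {"deepseek", "qwen", "moonshotai", "minimax"}

def audit_response(raw_json):
    """Return (model identifier, True if the provider is on the flag list)."""
    resp = json.loads(raw_json)
    model = resp.get("model", "unknown")
    provider = model.split("/")[0] if "/" in model else model
    return model, provider.lower() in FLAGGED_PROVIDERS

raw = '{"id": "gen-123", "model": "deepseek/deepseek-chat", "choices": []}'
print(audit_response(raw))  # → ('deepseek/deepseek-chat', True)
```

Logging this field per call turns "which model processed our data" from an unknowable into a query.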
That matters because of a single sentence in Chinese law. Article 7 of China's 2017 National Intelligence Law: "All organizations and citizens shall support, assist, and cooperate with national intelligence efforts in accordance with law." The practical consensus among Western security and legal communities is that Chinese authorities can compel access to data held by Chinese companies, and those companies cannot meaningfully refuse. DeepSeek's privacy policy states that all data is stored on servers in the People's Republic of China. Prompts can be reused to improve its models. An opt-out exists, but only by emailing a privacy address that most users will never find.
The pricing isn't a market strategy. It's a pattern. China spent the last decade building energy infrastructure across 140 countries through Belt and Road, selling below market rate until the dependency was structural. The AI model market is running the same playbook. Undercut on price, build adoption, and the access comes with the infrastructure. The difference is the timeline. Energy dependency took a decade. AI model adoption took 18 months.
The problem also runs through your own workforce. More than 80% of workers use AI tools their employer hasn't approved. Among executives, the number is 93%. Three quarters of those workers admit sharing sensitive information with unapproved tools. Several governments have responded by banning DeepSeek on government devices, including Australia, South Korea, Taiwan, and parts of the US federal government. But bans on a single app miss the point. The models are open-source. They're embedded in derivative products, developer tools, and platform integrations that don't carry the DeepSeek name. The entry point isn't an app your employee downloaded. It's a feature your vendor quietly shipped.
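One practical detection layer is egress: unapproved AI tools still have to phone home. The sketch below filters proxy or DNS log lines against a domain watchlist; the domains shown are illustrative assumptions, and a real list should be built from vendor documentation and your own traffic data.

```python
# Sketch: scan egress/proxy logs for connections to AI service
# endpoints the organization has not approved. Domain patterns are
# illustrative assumptions, not a vetted blocklist.
import re

UNAPPROVED_AI_DOMAINS = [
    r"api\.deepseek\.com",
    r"chat\.deepseek\.com",
    r"dashscope\.aliyuncs\.com",  # assumed Qwen/Alibaba API endpoint
]
PATTERN = re.compile("|".join(UNAPPROVED_AI_DOMAINS))

def scan(log_lines):
    """Return the log lines that touch a watchlisted AI endpoint."""
    return [line for line in log_lines if PATTERN.search(line)]

logs = [
    "10:01 user1 GET https://api.deepseek.com/v1/chat/completions",
    "10:02 user2 GET https://example.com/index.html",
]
print(len(scan(logs)))  # → 1
```

This catches the app your employee downloaded; it won't catch the model your vendor embedded, which is why the questionnaire question in the takeaway still matters.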
The Takeaway: Your vendor risk questionnaire probably asks about data storage and encryption. It almost certainly doesn't ask which AI models are embedded in the product and where those models process data. That's the gap. Ask your software vendors whether their products use AI models, and if so, which ones and where the data is processed. If they can't answer, that's your answer.
Sources: War on the Rocks · Cybersecurity Dive · House Select Committee on the CCP
The Encryption Protecting Your Data Has an Expiration Date. It Just Moved Up.
Three research teams published findings in February and March 2026 that shrank the estimated resources needed to break widely used encryption by a factor of 20.
In 2019, the best estimate for breaking RSA-2048, the encryption standard that secures most government and financial communications, required 20 million quantum bits. By May 2025, a Google researcher reduced that to under one million. In February 2026, Sydney-based Iceberg Quantum brought it below 100,000. On March 30, researchers at Caltech and startup Oratomic showed that as few as 10,000 to 26,000 qubits could break the elliptic curve encryption that protects most internet commerce in a matter of days. Google Quantum AI published separate findings the same day showing a 20-fold reduction in its own estimates. A mathematician at Cloudflare, which handles a quarter of global internet traffic, told Nature: "It's a real shock for us too. We are still digesting it, but we are very concerned."
None of these machines exist yet. The largest operational quantum computers have roughly 1,000 qubits. But the gap is narrowing from both directions: hardware is scaling up while algorithms are getting dramatically more efficient. Google announced on March 25 that it is targeting 2029 to complete its own migration to quantum-resistant encryption, years ahead of the federal government's 2035 deadline.
The threat isn't only in the future. Intelligence services and sophisticated adversaries are already intercepting and storing encrypted data with the expectation that quantum computers will eventually decrypt it. The NSA, CISA, and the Federal Reserve have all warned about this publicly. Any data your organization transmits today that needs to remain confidential beyond 2030, contracts, merger negotiations, health records, intellectual property, is potentially already captured.
The National Institute of Standards and Technology finalized post-quantum encryption standards in August 2024. Google is migrating now. Most private organizations haven't started; an estimated 91% of businesses lack a formal migration roadmap. Industry estimates put migration at five to seven years for smaller enterprises. The math is simple: if the threat window is closing toward 2030 and migration takes five to seven years, you're already behind.
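That "simple math" is a known formulation in cryptography, often called Mosca's inequality: if the years your data must stay confidential plus the years migration takes exceed the years until a cryptographically relevant quantum computer exists, data encrypted today is already at risk. A sketch, with all three inputs being estimates you supply:

```python
# Sketch of Mosca's inequality: shelf_life + migration_time > time
# to a cryptographically relevant quantum computer means exposure.
# All inputs are estimates; the example figures come from this brief.

def at_risk(shelf_life_years, migration_years, years_to_quantum):
    return shelf_life_years + migration_years > years_to_quantum

# Records that must stay confidential 10 years, a 6-year migration,
# and a threat window closing in roughly 4 years (2026 -> 2030):
print(at_risk(10, 6, 4))  # → True
```

The uncomfortable property of the inequality is that the first term is fixed by your business, not your security team: long-lived data shortens the time you have to start.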
The Takeaway: The question isn't whether someone would target your company specifically. It's whether the institutions you trust with your data, your bank, your law firm, your cloud provider, your insurance carrier, are transmitting it over encryption that has an expiration date. They almost certainly are. Ask them what their post-quantum migration plan looks like. If they don't have one, your data's long-term confidentiality depends on someone else's timeline, not yours.
Sources: Nature · Google Research · Google Blog · Caltech · Iceberg Quantum · The Quantum Insider
45 States Are Writing AI Rules. Nobody Is Waiting for Washington.
As of March 2026, state legislators have introduced 1,561 bills regulating artificial intelligence across 45 states. In 2023, the number was fewer than 200. There is no federal AI law. There is no indication one is coming soon enough to matter.
The bills that have already become law tell you where this is heading. Tennessee signed a law on April 1 prohibiting AI systems from representing themselves as mental health professionals, with a private right of action allowing consumers to sue. Washington signed two AI laws on March 24: one requiring chatbot operators to disclose that the chatbot is artificial and to repeat that disclosure every hour for minors, also with a private right of action, and another requiring disclosure when content is AI-generated. Georgia's legislature unanimously passed a bill prohibiting health insurers from denying claims based solely on AI decisions, requiring a qualified human to review every denial. Colorado's AI Act, the most sweeping state AI law in the country, takes effect June 30 with penalties up to $20,000 per violation for companies that deploy high-risk AI systems in hiring, lending, housing, or healthcare without bias audits, impact assessments, and consumer disclosures. A separate Colorado bill banning individualized pricing based on behavioral surveillance data passed the House and is moving through the Senate.
If your company uses AI to set prices, screen job applicants, process insurance claims, interact with customers through chatbots, or make decisions that affect people's access to services, you are entering a compliance environment with no single rulebook. Colorado requires bias audits. Georgia requires human review of insurance denials. Washington requires hourly disclosure to minors. Tennessee allows consumers to sue. The requirements don't conflict, but they don't align either. A company operating in ten states may face ten different sets of obligations for the same AI system.
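The multi-state problem is ultimately a lookup: which obligations attach in each state a system operates in. A minimal sketch, built only from the state laws named in this brief; a real compliance matrix needs counsel review and far more detail.

```python
# Sketch: union of per-state AI obligations for a deployed system.
# Entries summarize only the laws described in this brief; treat the
# table as illustrative, not legal advice.

OBLIGATIONS = {
    "CO": ["bias audits", "impact assessments", "consumer disclosures"],
    "GA": ["qualified human review of AI insurance denials"],
    "WA": ["chatbot disclosure (hourly for minors)", "AI-content labeling"],
    "TN": ["no AI posing as mental health professional"],
}

def obligations_for(states):
    """All obligations triggered by operating in the given states."""
    return sorted({ob for s in states for ob in OBLIGATIONS.get(s, [])})

print(obligations_for(["CO", "GA"]))
```

Even this toy version makes the fragmentation visible: adding one state to the deployment list can add an entire category of obligation, which is exactly the inventory exercise the takeaway below recommends.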
The federal government has noticed but not acted. The White House released a non-binding legislative framework in March advocating "a single national standard rather than a fragmented patchwork." A December 2025 executive order established a DOJ task force to challenge state AI laws in court on constitutional grounds. No lawsuits have been filed yet. Congress has repeatedly declined to pass comprehensive AI legislation. The EU, by comparison, passed a single unified AI Act that takes effect across all 27 member states on August 2, 2026. American companies selling into Europe will comply with one framework. Selling across America, they may face 45.
The Common Sense Institute estimates that Colorado's AI Act alone could cost 40,000 jobs and $7 billion in economic output by 2030. A US Chamber of Commerce survey found 65% of small businesses are concerned about litigation costs from conflicting state laws. One third said they would scale down AI use if they faced regulations like Colorado's. The compliance burden is not hypothetical. Colorado's effective date is 86 days away.
The Takeaway: If your organization uses AI in any customer-facing or employee-facing decision, you need an inventory of where those systems operate and which state laws apply. Don't wait for federal preemption. The states aren't waiting, and the private rights of action in Tennessee and Washington mean your next AI compliance problem may arrive as a lawsuit, not a regulatory notice. Start with three questions: which AI systems touch decisions about people, which states are those people in, and does anyone in your organization know the answer to the first two?
Sources: Multistate.ai · Transparency Coalition · Troutman Pepper · US Chamber of Commerce · WilmerHale
60 Days to Comply. Your Financial Partners Aren't Ready.
On June 3, the SEC's amended Regulation S-P takes effect for thousands of smaller financial firms. Investment advisers managing under $1.5 billion, smaller broker-dealers, fund companies, and transfer agents must have a written incident response program, notify affected customers within 30 days of a breach, and require their service providers to report breaches within 72 hours. The rule was adopted in May 2024. The first compliance deadline passed in December for larger firms. The majority of SEC-registered investment advisers fall under the $1.5 billion threshold, which means most of the industry hits the deadline in June. This is the first major update to Reg S-P since it was written in 2000.
The part that reaches beyond financial services is the service provider chain. Covered firms must now maintain written policies for oversight of every company that receives, processes, or has access to customer information. That includes IT vendors, cloud providers, payroll processors, custodians, and CRM platforms. Contracts must require 72-hour breach notification. If you provide services to a financial firm and your contract doesn't include that language, expect an amendment request in the next 60 days. If you can't meet the 72-hour window, you may lose the client.
The enforcement precedent is real. The SEC fined Morgan Stanley $35 million in 2022 for a five-year failure to properly dispose of devices containing personal information of 15 million customers. The firm had hired a moving company with no data destruction experience to decommission hard drives. The SEC's 2026 examination priorities explicitly list Reg S-P compliance as a focus area. Examiners are already reviewing implementation at larger firms that hit the December deadline. Smaller firms are next.
The 30-day customer notification window is faster than HIPAA's 60-day requirement and does not preempt state breach notification laws. Firms must comply with both. A single breach at a small advisory firm could trigger Reg S-P notification, state breach notification in every state where affected customers reside, and potentially an SEC 8-K disclosure if the firm is publicly traded. The layered obligations are the compliance burden that industry groups warned about. The Investment Adviser Association requested an extension of the deadline. The SEC declined.
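The layered clocks described above all run from the same moment of discovery, which makes them easy to compute and easy to miss. A sketch using the windows cited in this section (72-hour service provider report, 30-day Reg S-P customer notice, 60-day HIPAA notice); state windows vary and are omitted.

```python
# Sketch: notification deadlines from a single breach-discovery
# timestamp, using the windows cited in this brief. State breach
# notification windows vary by jurisdiction and are omitted.
from datetime import datetime, timedelta

def deadlines(discovered):
    return {
        "service_provider_notice": discovered + timedelta(hours=72),
        "reg_sp_customer_notice": discovered + timedelta(days=30),
        "hipaa_notice": discovered + timedelta(days=60),
    }

d = deadlines(datetime(2026, 6, 10, 9, 0))
print(d["service_provider_notice"])  # → 2026-06-13 09:00:00
```

The point of writing it down this way: the 72-hour clock belongs to your client's regulator, not yours, and it starts whether or not your incident response plan has a line item for it.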
The Takeaway: If your company provides any service to a financial firm, broker-dealer, or investment adviser, you're about to hear from their compliance team. The 72-hour service provider notification requirement means your incident detection and reporting capabilities are now part of their regulatory obligation. If your company uses a smaller financial adviser or broker-dealer, ask them whether they're ready for June 3. Their answer affects how fast you'd be notified if your financial data were compromised. Either way, this deadline is 60 days out and somebody on your vendor list isn't ready.
Sources: SEC · FINRA · Goodwin · SEC Exam Priorities · SEC (Morgan Stanley)
Read more at stateofthethreat.com