Anthropic blacklisted by US gov, but NSA is already using it
NSA uses Anthropic's Mythos despite Pentagon blacklist
The US National Security Agency is reportedly using Anthropic's unreleased Mythos Preview model, directly contradicting a Pentagon supply-chain risk designation that ordered federal agencies to phase out Anthropic products within six months. This signals a stark divergence between Washington's public safety posturing and the intelligence community's operational needs for frontier capabilities.
Why it matters: The NSA's adoption proves that even blacklisted frontier models are too critical to ignore, forcing a reckoning between regulatory guardrails and national security utility.
Model & product moves
Google releases Gemma 4, an open-weight multimodal model family (E2B, E4B, 26B-A4B, 31B) under Apache 2.0, optimized for reasoning, agentic workflows, and local inference. Google is doubling down on open weights to win developer mindshare from closed-model rivals.
Anthropic releases Claude Mythos Preview, a new frontier model with advanced reasoning capabilities that cybersecurity experts say poses significant new risks. Anthropic is pushing the envelope on agentic safety even as the model breaks above the Epoch Capabilities Index trendline.
DeepSeek's R-1 release has triggered a year-long surge in Chinese open-source AI development and global adoption. DeepSeek continues to reshape the competitive landscape, proving that high-performance models can be built outside the US ecosystem.
Anthropic makes its 'Mythos' foundation model available in limited release, demonstrating its ability to systematically find and fix software vulnerabilities. Anthropic is positioning Mythos as a tool for automated security remediation, a use case that appeals directly to enterprise CISOs.
Research & benchmarks
Anthropic releases the Claude Mythos technical report and system card, detailing a >10T-parameter model with dangerous cyber capabilities and strategic awareness. Anthropic is being transparent about its own model's risks, a move that contrasts sharply with competitors' opacity.
Anthropic publishes a paper in Nature on subliminal learning, showing that models can pick up hidden behavioral traits transmitted through training data. Anthropic is uncovering fundamental blind spots in how models internalize what they are trained on, raising new questions about data provenance and privacy.
Anthropic researchers publish results showing automated alignment researchers (AARs) built on Claude Opus 4.6 outperform human researchers at automating alignment research, scoring 0.97 PGR against a human baseline of 0.2. Anthropic has effectively begun automating the work of making AI safer, a paradoxical breakthrough that could accelerate the pace of alignment research.
METR and Epoch AI release MirrorCode benchmark, showing Claude Opus 4.6 can autonomously reimplement a 16,000-line Go program, demonstrating long-horizon coding capabilities. METR and Epoch AI are setting new standards for evaluating autonomous coding agents, a metric that will become critical for enterprise adoption.
Funding & enterprise adoption
OpenAI is reportedly nearing a deal to raise about $10 billion at a valuation of $730 billion. OpenAI is leveraging its market position to secure a valuation that reflects its dominant share of the enterprise AI market.
Cerebras signs deal with OpenAI reportedly worth over $10B and partners with AWS for data center chips. Cerebras Systems is securing a massive anchor customer, validating its wafer-scale engine technology and positioning itself as a key alternative to Nvidia.
SpaceX files for IPO targeting a $2T valuation, with Alphabet's stake diluted to ~5% worth ~$100B. SpaceX is entering the public markets at a valuation that reflects its dominance in the space industry and its growing AI capabilities.
Anthropic reaches $30 billion in annualized revenue, up from $9 billion four months ago. Anthropic is growing at a pace that rivals OpenAI, proving that its safety-first approach can still capture massive enterprise demand.
Regulatory & safety
A New Mexico jury finds Meta willfully violated state consumer protection laws regarding child safety, ordering the company to pay $375 million in damages. Meta is facing a growing wave of litigation that could set a precedent for how social media platforms are held accountable for algorithmic harm.
The US government blacklists Anthropic and orders federal agencies to phase out its products within six months, characterizing the company's safety guardrails as vendor overreach. Anthropic is caught in a regulatory crossfire: its own safety measures are being used against it by the government it serves.
A federal appeals court denies Anthropic's request for a temporary injunction against the Department of War's blacklisting, keeping the company excluded from Pentagon contracts. Anthropic is losing its legal fight to stay on the Pentagon's approved vendor list, a setback that could dent its government revenue streams.
Also worth knowing
Nvidia's data center segment now accounts for 91.5% of total revenue, signaling a decisive pivot away from gaming. Nvidia has effectively become an AI infrastructure company, with gaming now a footnote to its business.
Salesforce quietly acquired Qualified, an inbound business-development-rep (BDR) agent platform that has generated over $1M in closed revenue. Salesforce is betting that AI agents will replace traditional sales development reps, a move that could disrupt the entire SDR industry.
DRAM and HBM shortage expected to persist until 2030, with supply meeting only 60% of demand by end of 2027. Samsung, SK Hynix, and Micron are struggling to keep up with the insatiable demand for memory, a bottleneck that could constrain AI growth.
Anthropic's Claude Code overtakes GitHub Copilot as the most-used AI coding tool among software engineers, reaching #1 in just eight months since its May 2025 launch. Anthropic is winning the developer mindshare war, a trend that will likely accelerate as its model capabilities continue to improve.