OpenAI Handed the Pentagon a Quick Yes — Then Came the Fine Print
1. OpenAI Struck a Pentagon Deal in Hours. The Fine Print Shows What It Gave Up. Anthropic spent months negotiating with the Pentagon over two conditions: that its AI would not power mass domestic surveillance or direct lethal autonomous weapons.
2. The Supreme Court and London's Streets Both Rejected AI Last Weekend A one-line order in Washington. Hundreds chanting "Pull the plug!" outside OpenAI's London office. Same weekend, same word: no. The reasons had nothing in common.
3. Nvidia Bets $4 Billion on Photonics as Apple Turns to Google for AI Servers Nvidia committed $4 billion on Monday to two photonics companies, investing $2 billion each in Lumentum and Coherent.
In Brief
- Anthropic's Claude Hit by Widespread Service Outage Thousands of users reported problems accessing Claude on Monday morning. Anthropic acknowledged the disruptions but has not disclosed a root cause. TechCrunch
- 14.ai Sells AI Agents That Replace Startup Customer Support Teams Married co-founders built 14.ai to automate full customer support workflows at startups. The company also launched a consumer brand to measure how much of the support workload AI can realistically handle. TechCrunch
- Lenovo Shows AI Desktop Companion Concepts at MWC Lenovo revealed two standalone desk devices at MWC: an always-on "AI Workmate" and a robot arm with expressive eyes. Both target office workers as productivity assistants. Neither has a ship date. The Verge
- CUDA Agent Applies Reinforcement Learning to GPU Kernel Optimization A new paper introduces CUDA Agent, a system that uses large-scale agentic RL to generate high-performance CUDA kernels. Current LLM-based approaches to CUDA code generation still underperform compiler tools like torch.compile; a baseline-timing sketch follows these briefs. Hugging Face Papers
- Memento Proposes Embedding AI Coding Sessions Into Git Commits An open-source project called Memento captures the full AI interaction transcript and attaches it to the corresponding commit. The goal: let future developers audit how and why AI-generated code was written. One plausible storage mechanism is sketched after these briefs. GitHub
- CiteAudit Benchmark Targets Hallucinated Scientific References Researchers released CiteAudit, a benchmark for verifying whether citations in LLM-generated text point to real publications. Fabricated references have already appeared in submissions and accepted papers at major ML conferences. A toy verification check appears after these briefs. Hugging Face Papers
- New Training Method Extends Video Generation From Seconds to Minutes A paper proposes decoupling local visual fidelity from long-term coherence using a Decoupled Diffusion Transformer. Separate training heads handle short-clip quality and long-sequence consistency, sidestepping the scarcity of high-quality long-form video data; a schematic of the two-head setup follows these briefs. Hugging Face Papers
- dLLM Provides Unified Open-Source Framework for Diffusion Language Models Researchers released dLLM, a standardized framework for building diffusion-based language models. The project consolidates components scattered across ad-hoc research codebases into one reproducible library. Hugging Face Papers
- LK Losses Directly Optimize Acceptance Rates in Speculative Decoding A new training objective called LK Losses optimizes the token acceptance rate in speculative decoding instead of using KL divergence as a proxy. Standard KL training leaves performance on the table when draft models have limited capacity; an acceptance-rate objective is sketched after these briefs. Hugging Face Papers
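Code Sketches

On CUDA Agent: torch.compile is the compiler baseline that LLM-generated kernels are measured against. A minimal timing comparison, assuming PyTorch 2.x; the fused elementwise op is an arbitrary example, not a kernel from the paper:

```python
# Compare eager execution against torch.compile on a fusable op chain.
# The op itself is illustrative; CUDA Agent targets far harder kernels.
import time
import torch

def fused_op(x: torch.Tensor) -> torch.Tensor:
    # A chain of elementwise ops a compiler can fuse into one kernel.
    return torch.relu(x * 2.0 + 1.0).sin()

compiled_op = torch.compile(fused_op)

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)
compiled_op(x)  # warm-up: kernel generation happens on the first call

for name, fn in [("eager", fused_op), ("compiled", compiled_op)]:
    start = time.perf_counter()
    for _ in range(100):
        fn(x)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued kernels before timing
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```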
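On Memento: the brief doesn't say how transcripts are physically stored. Git's built-in notes feature, which attaches arbitrary text to a commit without rewriting it, is one plausible mechanism; the ai-sessions ref name and session.md path below are made up for illustration:

```python
# Attach and read back an AI session transcript on a commit via git notes.
# The ref name and file path are assumptions, not Memento's actual format.
import subprocess

def attach_transcript(transcript_path: str, commit: str = "HEAD") -> None:
    # Stores the file's contents as a note object pointing at the commit.
    subprocess.run(
        ["git", "notes", "--ref=ai-sessions", "add", "-F", transcript_path, commit],
        check=True,
    )

def read_transcript(commit: str = "HEAD") -> str:
    result = subprocess.run(
        ["git", "notes", "--ref=ai-sessions", "show", commit],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    attach_transcript("session.md")  # run inside a git repo
    print(read_transcript())
```

One appeal of notes over commit-message trailers: the transcript can be added after the fact and fetched or ignored independently of the commits themselves.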
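On CiteAudit: the benchmark's own verification pipeline isn't described in the brief. A naive existence check against the public Crossref API shows the shape of the task; the exact-title matching rule is a deliberately crude assumption:

```python
# Check whether a cited title resolves to a real publication on Crossref.
# This is a toy check, not CiteAudit's methodology or scoring.
import requests

def crossref_top_hit(title: str) -> dict | None:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

def looks_real(title: str) -> bool:
    hit = crossref_top_hit(title)
    if hit is None or not hit.get("title"):
        return False
    found = hit["title"][0].lower()
    # Crude heuristic: fabricated references rarely have a near-exact hit.
    return title.lower() in found or found in title.lower()

if __name__ == "__main__":
    print(looks_real("Attention Is All You Need"))  # a real, findable paper
```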
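On the long-video paper: only the idea of separate heads is stated in the brief, so the module names, window sizes, and loss weighting below are placeholders showing how decoupled supervision could be wired up, not the paper's architecture:

```python
# Schematic of decoupled heads: dense loss on a short window for local
# fidelity, sparse loss on widely spaced frames for long-range coherence.
import torch
import torch.nn as nn

class DecoupledHeads(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)        # stand-in for the DiT trunk
        self.local_head = nn.Linear(dim, dim)      # short-clip fidelity head
        self.coherence_head = nn.Linear(dim, dim)  # long-sequence consistency head

    def forward(self, frames: torch.Tensor):
        h = self.backbone(frames)                  # (batch, time, dim) latents
        return self.local_head(h), self.coherence_head(h)

model = DecoupledHeads()
frames = torch.randn(2, 64, 256)                  # 64-frame latent sequence
local_pred, coherent_pred = model(frames)

# Dense supervision on a short window, where high-quality clips exist.
local_loss = (local_pred[:, :16] - frames[:, :16]).pow(2).mean()
# Sparse supervision on strided frames to enforce long-range consistency.
coherence_loss = (coherent_pred[:, ::8] - frames[:, ::8]).pow(2).mean()
loss = local_loss + 0.5 * coherence_loss          # the 0.5 weight is arbitrary
loss.backward()
```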
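On LK Losses: the brief doesn't give the objective's form. In standard speculative sampling, the probability a drafted token is accepted is sum_x min(p(x), q(x)) for target distribution p and draft distribution q, so one direct objective is to minimize the rejection mass 1 - sum_x min(p, q); whether this matches the paper's exact loss is an assumption:

```python
# Train a draft model to maximize the expected acceptance rate of
# speculative sampling directly, rather than minimizing a KL proxy.
import torch

def rejection_mass_loss(draft_logits: torch.Tensor,
                        target_logits: torch.Tensor) -> torch.Tensor:
    q = torch.softmax(draft_logits, dim=-1)             # draft distribution
    p = torch.softmax(target_logits, dim=-1).detach()   # frozen target model
    accept = torch.minimum(p, q).sum(dim=-1)            # per-token acceptance rate
    return 1.0 - accept.mean()                          # minimize rejection mass

draft_logits = torch.randn(8, 32000, requires_grad=True)   # (tokens, vocab)
target_logits = torch.randn(8, 32000)
loss = rejection_mass_loss(draft_logits, target_logits)
loss.backward()
print(float(loss))
```

The rejection mass equals the total variation distance between p and q, which KL divergence only bounds indirectly (Pinsker's inequality); that gap is one way to read the brief's "proxy" framing.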