Anthropic Demanded Extra Payment From OpenClaw While Acquiring a Biotech
1. Anthropic Cuts Off OpenClaw From Claude Code, Demands Extra Payment
On Friday evening, Anthropic sent an email that broke thousands of developer workflows by Monday morning.
2. A Folk Singer's Songs Were Cloned by AI. Other Artists Can't Prove Theirs Weren't.
Murphy Campbell found songs on her Spotify profile that she never uploaded. Across the internet, illustrators and photographers keep hearing four words about their handmade work: "this looks like AI."
3. Anthropic Acquires Biotech, Launches PAC, and Tops Secondary Markets in One Week
Three moves from Anthropic landed within days of each other. Individually, each is routine for a company valued above $60 billion.
In Brief
- Claude Code Found a 23-Year-Old Heap Overflow in the Linux Kernel's NFS Driver Anthropic researcher Nicholas Carlini pointed Claude Code at Linux kernel source files and found a heap buffer overflow in the NFSv4.0 LOCK replay cache, present since 2003. A 112-byte buffer accepted a 1,056-byte response containing a user-controlled owner ID field, letting remote attackers read kernel memory. Kernel maintainers have merged the fix.
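The bug class behind the finding is a fixed-size cache slot that trusts a caller-supplied length. The sketch below is illustrative only: the struct, function, and size check are hypothetical stand-ins, not the actual NFS replay-cache code or the merged kernel patch.

```c
#include <string.h>

/* Hypothetical sketch of the bug class: a fixed-size replay-cache
 * slot receiving a variable-length encoded response. The names and
 * layout are illustrative, not the real kernel structures. */
#define CACHE_SLOT_SIZE 112

struct replay_slot {
    size_t len;
    unsigned char data[CACHE_SLOT_SIZE];
};

/* The buggy pattern trusts resp_len and copies unconditionally, so a
 * 1,056-byte response would write far past the 112-byte slot. The
 * fixed pattern rejects anything larger than the slot first. */
int cache_response(struct replay_slot *slot,
                   const unsigned char *resp, size_t resp_len)
{
    if (resp_len > CACHE_SLOT_SIZE)
        return -1; /* too large to cache safely; refuse the copy */
    memcpy(slot->data, resp, resp_len);
    slot->len = resp_len;
    return 0;
}
```

The missing `resp_len > CACHE_SLOT_SIZE` guard is the whole bug: once an attacker controls the length (here, via an oversized owner ID field), the copy spills into adjacent heap memory.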
- Meta, Microsoft, and Google Build Natural Gas Plants to Power AI Data Centers Meta, Microsoft, and Google are each constructing dedicated natural gas power plants to supply electricity to AI data centers. The projects commit billions of dollars to fossil fuel infrastructure as AI workloads push energy demand far beyond what renewables and grid capacity can deliver today.
- Poll: Americans Prefer an Amazon Warehouse Next Door Over a Data Center A new survey found that U.S. residents would rather live near an Amazon fulfillment center than a data center. The results reflect growing local opposition to data center construction as AI-driven demand accelerates site approvals across the country.
- Moonbounce Raises $12M to Turn Content Moderation Policies Into Enforceable AI Behavior Moonbounce, founded by former Facebook employee Brett Levenson and Ash Bhardwaj, raised $12 million from Amplify Partners and StepStone Group. The platform converts an organization's written content moderation rules into machine learning models that enforce those standards consistently across AI applications.
- Self-Distillation Without Verifiers or Teachers Boosts Code Generation by 13 Points Researchers fine-tuned Qwen3-30B-Instruct on its own sampled code outputs using standard supervised learning — no verifier, teacher model, or reinforcement learning involved. Pass@1 on LiveCodeBench v6 jumped from 42.4% to 55.3%, with gains concentrated on harder problems. The method generalizes across Qwen and Llama models at 4B, 8B, and 30B parameter scales.
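The recipe as summarized is short enough to sketch: sample the model's own completions, then treat every sample as a supervised fine-tuning target, with no verifier, teacher, or reward model in the loop. `sample_model` and the dataset builder below are hypothetical stand-ins, not the paper's actual code or API.

```python
import random

def sample_model(prompt: str, k: int, rng: random.Random) -> list[str]:
    # Stand-in for k temperature-sampled completions from the model
    # being fine-tuned (hypothetical; a real run would call the LLM).
    return [f"{prompt} -> candidate_{rng.randint(0, 999)}" for _ in range(k)]

def build_self_distillation_set(prompts, k=8, seed=0):
    """Collect the model's own samples as SFT targets.

    Per the summary, no verifier, teacher, or RL reward filters the
    samples: every sampled completion becomes a training pair, and the
    result is fed to standard supervised fine-tuning.
    """
    rng = random.Random(seed)
    dataset = []
    for p in prompts:
        for completion in sample_model(p, k, rng):
            dataset.append({"prompt": p, "target": completion})
    return dataset

dataset = build_self_distillation_set(["task_1", "task_2"], k=4)
print(len(dataset))  # 2 prompts x 4 samples each = 8 pairs
```

The interesting claim is precisely that this unfiltered loop helps: standard SFT on the resulting pairs, with no quality signal beyond the model's own sampling distribution, accounts for the reported 13-point pass@1 gain.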
- CORAL Framework Lets LLM Agents Autonomously Evolve Strategies for Open-Ended Problems CORAL replaces fixed heuristics in LLM-based discovery with long-running agents that explore, reflect, and collaborate through shared memory. The agents accumulate knowledge and adapt strategies without hard-coded exploration rules. The framework is the first to target fully autonomous multi-agent evolution on open-ended research tasks.
- SKILL0 Internalizes Agent Skills Into Model Weights via In-Context Reinforcement Learning Current LLM agents load skill packages at inference time, but retrieval noise and token overhead limit performance. SKILL0 fine-tunes skills directly into model parameters through agentic reinforcement learning, eliminating the need to retrieve and inject procedural knowledge at runtime.
- Steerable Visual Representations Let Users Direct ViT Attention With Text Prompts Pretrained vision transformers like DINOv2 default to the most visually prominent features, with no way to redirect focus. This work adds text-based steering to ViT representations, letting users point the model at less obvious visual concepts without sacrificing spatial detail the way multimodal LLMs do.