OpenClaw Wiped a Security Researcher's Inbox After She Told It No
1. OpenClaw Deleted a Security Researcher's Inbox After She Told It Not To
Summer Yue gave her OpenClaw agent a clear instruction: "Check this inbox and suggest what you would archive or delete. Don't action until I tell you to."
2. The Pentagon Calls Anthropic a Supply Chain Problem
"Supply chain risk" is a specific designation in defense procurement. It can trigger contract exclusions, security audits, and removal from government systems.
3. Pope and Twitter Users Reject AI Content for the Same Reason
Two institutions with nothing in common drew the same line last week.
In Brief
- India Hosts Four-Day AI Summit with Major Lab and Government Leaders
  India's AI Impact Summit convened executives from OpenAI, Anthropic, Nvidia, Microsoft, Google, and Cloudflare alongside heads of state. The four-day event spans policy, infrastructure, and deployment across one of the world's largest potential AI markets.
- Ladybird Browser Adopts Rust, Uses AI Coding Agents to Port JavaScript Engine
  The Ladybird browser project switched its memory-safe language from Swift to Rust after Swift's cross-platform support stalled. The team used coding agents to port LibJS, the browser's JavaScript engine, including its lexer, parser, AST, and bytecode compiler. Andreas Kling documented the process as a case study in applying agents to large, safety-critical codebases.
- Simon Willison Publishes Agentic Engineering Patterns Guide
  Willison launched a public collection of coding practices for working with AI coding agents like Claude Code and OpenAI Codex. The first published pattern: red/green test-driven development, where developers write failing tests first and let agents iterate until tests pass. The guide targets practitioners building software with agents that can both generate and execute code.
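The red/green loop described above can be sketched in a few lines of Python. This is a hedged illustration, not taken from Willison's guide; the `slugify` function and its tests are hypothetical. The developer writes the assertions first (red: they fail against an empty stub), then the agent edits the implementation and reruns the suite until everything passes (green).

```python
# Hypothetical function under development. In the red phase this would
# start as a stub (e.g. raising NotImplementedError) so the tests below
# fail; the agent then iterates on the body until the suite passes.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# The tests are written first and define "done" for the agent.
def run_tests():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agentic Patterns ") == "agentic-patterns"
    print("all tests pass")

run_tests()
```

The point of the pattern is that the tests, not the agent's own judgment, decide when the iteration loop stops.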
- Citrini Research Models Scenario Where AI Agents Double Unemployment
  A report from Citrini Research projects a hypothetical scenario two years out in which AI agent adoption doubles the unemployment rate and cuts total stock market value by more than a third. The analysis models cascading effects of rapid agent deployment across white-collar job categories.
- The Verge Tests AI Tools on Messy PDFs, Finds Widespread Parsing Failures
  A Verge investigation tested multiple AI systems on the 20,000-page Epstein document release and similar large PDF sets. Current models struggle with garbled email threads, inconsistent formatting, and scanned documents, exposing a gap between demo-quality PDF reading and real-world document complexity.
- Paper Finds Reasoning Models Often Don't Know When to Stop Thinking
  Researchers show that longer chains of thought in large reasoning models frequently fail to correlate with answer correctness and can reduce accuracy. The paper identifies redundancy in extended reasoning chains and analyzes whether models carry implicit signals about optimal stopping points.
- Researchers Build Video World Model Controlled by Hand Tracking and Head Pose
  A new model called Generated Reality conditions video generation on joint-level hand poses and tracked head position, targeting extended reality applications. Most existing video world models accept only text or keyboard input. The paper proposes a conditioning mechanism for diffusion transformers that enables real-time embodied interaction.
- VESPO Tackles Training Instability in Reinforcement Learning for LLMs
  A new method called VESPO addresses policy staleness and distribution shift in off-policy RL training for large language models. The approach uses variational sequence-level optimization to correct importance sampling variance without the drawbacks of token-level clipping.
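VESPO's actual objective isn't spelled out in the summary above, but the tension it targets can be illustrated with a minimal sketch (function names and numbers are hypothetical, not from the paper): PPO-style token-level clipping bounds each token's importance ratio separately, distorting credit assignment under a stale sampling policy, while a single sequence-level ratio avoids the per-token distortion at the cost of variance that grows with sequence length.

```python
import math

def token_level_clipped_ratios(logp_new, logp_old, eps=0.2):
    # PPO-style token clipping: each token's importance ratio
    # exp(logp_new - logp_old) is squeezed into [1 - eps, 1 + eps],
    # which biases gradients when the sampling policy is stale.
    return [min(1 + eps, max(1 - eps, math.exp(n - o)))
            for n, o in zip(logp_new, logp_old)]

def sequence_level_ratio(logp_new, logp_old):
    # One importance ratio for the whole response: no per-token
    # distortion, but the variance of exp(sum of deltas) grows with
    # length -- the instability sequence-level methods aim to tame.
    return math.exp(sum(logp_new) - sum(logp_old))

# Hypothetical per-token log-probs for a 3-token response.
logp_old = [-1.2, -0.8, -2.0]
logp_new = [-1.0, -0.9, -1.5]
print(token_level_clipped_ratios(logp_new, logp_old))
print(sequence_level_ratio(logp_new, logp_old))
```

Token-level clipping caps the first and third ratios at 1.2, while the sequence-level ratio evaluates to exp(0.6), roughly 1.82; taming the variance of that single ratio without reintroducing per-token clipping is the problem the summary describes.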
Don't miss what's next. Subscribe to AI News Digest: