AI Typed 25,000 Lines of Rust in Two Weeks, Added Nothing to GDP
1. Ladybird Ported 25,000 Lines of C++ to Rust in Two Weeks. AI Did the Typing.
Andreas Kling had a mass translation problem. Ladybird, the independent browser engine he founded, runs on roughly a million lines of C++. The project decided to adopt Rust for memory safety.
2. Goldman Says AI Added Nothing to U.S. GDP Last Year
Goldman Sachs calculated that artificial intelligence contributed "basically zero" to U.S. economic growth in 2025.
3. AI Safety Moves from Pledges to Scores, Papers, and Kill Switches
Three announcements landed within days of each other. Anthropic published version 3.0 of its Responsible Scaling Policy.
In Brief
- MIT Technology Review exposes hidden human labor powering humanoid robot demos. Nvidia's Jensen Huang proclaimed January the start of "physical AI," but humanoid robot demonstrations still rely heavily on human teleoperation and manual data collection. Companies routinely obscure the gap between staged demos and actual autonomous capability.
- ByteDance ships Seedance 2.0 video generation model. Filmmaker Ruairi Robinson posted clips featuring a digital Tom Cruise that outperformed competing AI video tools in motion coherence. The model still produces artifacts and inconsistencies common to current generators.
- Simon Willison publishes guide to agentic engineering patterns. The guide covers red/green TDD, running tests before trusting AI output, and using agents for structured code walkthroughs. Willison argues automated tests are no longer optional with coding agents: unexecuted AI-generated code works only by luck.
- OpenAI appoints Arvind KC as Chief People Officer. KC will lead hiring, culture, and organizational scaling as OpenAI continues rapid headcount growth.
- Michael Pollan argues AI will never achieve consciousness. In his new book A World Appears, Pollan draws a hard line between AI capability and subjective experience. He contends no amount of processing power produces personhood. Excerpted in Wired.
- Researchers release Mobile-O, a multimodal model built for phones. Mobile-O combines vision, language understanding, and image generation in one compact architecture using depthwise-separable convolutions. The model targets on-device deployment without cloud dependency.
- VLANeXt benchmarks which design choices actually matter in robotics foundation models. The paper systematically tests Vision-Language-Action model architectures under consistent training and evaluation conditions. Prior VLA research used inconsistent protocols, making it hard to isolate what drove performance gains.
- New benchmark targets video reasoning over visual quality. Current video models optimize for image fidelity but lag on spatiotemporal reasoning: continuity, object interaction, and causality. The authors built a large-scale training set to enable systematic study of these capabilities.
- Paul Ford describes backlash after explaining vibe coding to mainstream readers. Ford wrote a newspaper piece introducing vibe coding to general audiences. Readers responded with hostility, prompting his reflection on the difficulty of communicating technical shifts across audiences.
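The red/green loop Willison recommends is simple to picture in code: write the test first, run it and watch it fail (red), then implement until it passes (green), so any AI-generated code has actually been executed before you trust it. A minimal sketch; the `slugify` helper is a hypothetical function under test, not an example from the guide.

```python
# Red/green TDD sketch: the test exists before the implementation.
# slugify() is a hypothetical function under test, not from Willison's guide.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  AI   News ") == "ai-news"

# Calling test_slugify() at this point would raise NameError, since slugify
# does not exist yet. That deliberate failure is the "red" step: it proves
# the test can fail, so a later pass actually means something.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_slugify()  # green: the test passes, so the code has been executed, not just read
```

The point of the ordering is Willison's "only by luck" warning: code an agent emitted but nobody ran carries no evidence of working, while a test that went from red to green does.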
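For context on the Mobile-O item: a depthwise-separable convolution factors a standard KxK convolution into a per-channel KxK depthwise pass plus a 1x1 pointwise pass that mixes channels, and the resulting parameter savings are what make such architectures attractive on phones. A back-of-the-envelope sketch with hypothetical channel sizes (Mobile-O's actual dimensions are not given here):

```python
# Parameter counts for a standard conv vs. a depthwise-separable one.
# Channel sizes below (128 in, 128 out, 3x3 kernel) are illustrative only.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Every output channel has its own KxK filter over all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    depthwise = c_in * k * k   # one KxK filter per input channel
    pointwise = c_in * c_out   # 1x1 conv mixing channels into c_out outputs
    return depthwise + pointwise

std = standard_conv_params(128, 128, 3)        # 147456
sep = depthwise_separable_params(128, 128, 3)  # 1152 + 16384 = 17536
print(std, sep, round(std / sep, 1))           # roughly an 8x reduction here
```

For large channel counts the savings approach a factor of K squared, which is why the technique (popularized by MobileNet-style architectures) recurs in on-device models.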