OpenAI Kills Sora 15 Months In, Walks Away from Billion-Dollar Disney Deal
1. OpenAI Kills Sora 15 Months After Launch, Unwinding a Billion-Dollar Disney Deal

Earlier this month, OpenAI published a detailed blog post titled "Creating with Sora Safely," describing how it had built Sora 2 "with safety at the foundation" and outlining concrete protections.
2. Three Hacker News Threads Drew 1,005 Comments Questioning AI in a Single Week

Three posts reached the Hacker News front page in the same March week. Each cleared 250 comments. None announced a product, funding round, or benchmark.
3. OpenAI Pairs $1 Billion Charity Pledge with ChatGPT Shopping Launch

OpenAI's foundation committed at least $1 billion to curing diseases, expanding economic opportunity, strengthening AI resilience, and funding community programs.
In Brief
- LongCat-Flash-Prover Releases 560B Open-Source Model for Lean4 Formal Proofs. The 560-billion-parameter mixture-of-experts model tackles formal mathematical reasoning in Lean4 by splitting the task into auto-formalization, sketching, and proving (a minimal Lean illustration follows this list). A Hybrid-Experts Iteration Framework generates high-quality training trajectories for each capability. (Hugging Face)
- OpenResearcher Builds Fully Offline Pipeline for Training Deep Research Agents. The open-source pipeline decouples corpus bootstrapping from multi-turn trajectory synthesis, removing the need for proprietary web APIs during training. All search-and-browse loops run locally, making large-scale trajectory generation cheaper and reproducible (a toy pipeline sketch follows this list). (Hugging Face)
- daVinci-MagiHuman Open-Sources Single-Stream Audio-Video Generation Model. The model generates synchronized video and audio through a single Transformer that processes text, video, and audio tokens in one unified sequence. The design drops cross-attention and multi-stream complexity in favor of standard self-attention, simplifying both training and inference (see the single-stream sketch after this list). (Hugging Face)
- TerraScope Adds Pixel-Level Grounding to Earth Observation Vision-Language Models. The unified VLM handles both optical and synthetic aperture radar inputs and fuses the modalities during reasoning. It grounds spatial analysis at pixel resolution, targeting tasks where coarse bounding boxes lose critical geographic detail. (Hugging Face)
- Omni-WorldBench Proposes Interaction-Based Evaluation Standard for 4D World Models. The benchmark argues that world modeling should jointly measure spatial structure and temporal evolution, not just visual fidelity or static 3D metrics. It covers both video generation and 3D reconstruction paradigms under a single evaluation framework. (Hugging Face)
- HopChain Exposes Compounding Errors in Vision-Language Reasoning with Multi-Hop Data. Long chain-of-thought reasoning in VLMs surfaces perception, reasoning, knowledge, and hallucination failures that compound across steps. HopChain synthesizes multi-hop training data in which each step depends on visual evidence, forcing models to maintain grounding throughout. (Hugging Face)
- ProactiveBench Tests Whether Multimodal Models Know When to Ask for Help. The benchmark measures whether MLLMs can request simple user interventions, like removing an obstruction or improving image quality, instead of guessing from insufficient input. It repurposes seven existing datasets to test this "proactive" behavior across recognition, enhancement, and interpretation tasks (see the scoring sketch after this list). (Hugging Face)
- F4Splat Cuts Redundant Gaussians in Feed-Forward 3D Reconstruction. Feed-forward 3D Gaussian Splatting methods typically allocate Gaussians uniformly across views, wasting compute on redundant primitives. F4Splat uses predictive densification to control the total Gaussian count while preserving reconstruction quality (see the budgeted-densification sketch after this list). (Hugging Face)
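To make the LongCat-Flash-Prover item concrete: the pipeline splits theorem proving into auto-formalization, sketching, and proving. The Lean 4 snippet below shows what those artifacts look like in practice. The theorems are textbook examples chosen purely for illustration; they are not taken from the model, its paper, or its training data.

```lean
-- Informal claim: "addition of natural numbers is commutative."
-- Auto-formalization turns it into a Lean statement; the proving stage
-- then has to close the goal, here with a core-library lemma.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A sketching stage can instead emit a proof skeleton whose remaining
-- goals (`sorry`) a prover must still discharge step by step.
theorem add_assoc_example (a b c : Nat) : (a + b) + c = a + (b + c) := by
  induction c with
  | zero => rfl
  | succ k ih =>
    -- remaining goal: (a + b) + (k + 1) = a + (b + (k + 1))
    sorry
```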
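For the OpenResearcher item, here is a toy Python sketch of the two decoupled stages described above: bootstrap a local corpus once, then generate multi-turn search-and-browse trajectories against it with no external web API. The term-overlap index, the `policy` stub, and the trajectory format are all illustrative assumptions, not OpenResearcher's actual code.

```python
from collections import Counter

def build_local_index(documents):
    """Stage 1: corpus bootstrapping — index documents once, fully offline."""
    return {doc_id: Counter(text.lower().split()) for doc_id, text in documents.items()}

def local_search(index, query, k=3):
    """Rank documents by simple term overlap with the query (toy retriever)."""
    terms = query.lower().split()
    scores = {doc_id: sum(bag[t] for t in terms) for doc_id, bag in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def synthesize_trajectory(index, question, policy, max_turns=4):
    """Stage 2: a multi-turn search-and-browse loop that runs entirely locally."""
    trajectory, query = [], question
    for _ in range(max_turns):
        hits = local_search(index, query)
        trajectory.append({"query": query, "hits": hits})
        query, done = policy(question, trajectory)  # stub: propose next query or stop
        if done:
            break
    return trajectory
```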
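The daVinci-MagiHuman item hinges on one design choice: a single self-attention Transformer over one unified token sequence, with no cross-attention or per-modality streams. The PyTorch sketch below shows that shape of model; every module name, size, and vocabulary here is an assumption for illustration, not the released model's configuration.

```python
import torch
import torch.nn as nn

class SingleStreamAVGenerator(nn.Module):
    """Minimal single-stream sketch: text, video, and audio tokens share one sequence."""

    def __init__(self, d_model=512, n_heads=8, n_layers=6,
                 text_vocab=32000, video_vocab=8192, audio_vocab=4096):
        super().__init__()
        # Separate embedding tables, but a single shared sequence afterwards.
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.video_emb = nn.Embedding(video_vocab, d_model)
        self.audio_emb = nn.Embedding(audio_vocab, d_model)
        # Learned modality tags so the backbone can tell token types apart.
        self.modality_emb = nn.Embedding(3, d_model)  # 0=text, 1=video, 2=audio
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # plain self-attention only
        self.video_head = nn.Linear(d_model, video_vocab)
        self.audio_head = nn.Linear(d_model, audio_vocab)

    def forward(self, text_ids, video_ids, audio_ids):
        parts = [
            self.text_emb(text_ids) + self.modality_emb.weight[0],
            self.video_emb(video_ids) + self.modality_emb.weight[1],
            self.audio_emb(audio_ids) + self.modality_emb.weight[2],
        ]
        x = torch.cat(parts, dim=1)   # one unified token sequence
        h = self.backbone(x)          # no cross-attention, no separate streams
        n_text, n_video = text_ids.shape[1], video_ids.shape[1]
        video_logits = self.video_head(h[:, n_text:n_text + n_video])
        audio_logits = self.audio_head(h[:, n_text + n_video:])
        return video_logits, audio_logits
```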
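For the ProactiveBench item, the scoring idea can be sketched as: credit the model only when it asks for a feasible intervention on inputs that genuinely need one, and answers directly otherwise. The sample fields, the `needs_intervention` flag, and the keyword heuristic below are assumptions made for illustration, not the benchmark's actual protocol or API.

```python
# Phrases that (heuristically) signal the model asked the user to act.
INTERVENTION_CUES = ("could you", "please remove", "retake", "move closer", "better lighting")

def is_intervention_request(response: str) -> bool:
    """Heuristically detect whether the model requested a user intervention."""
    text = response.lower()
    return any(cue in text for cue in INTERVENTION_CUES)

def score_sample(response: str, needs_intervention: bool) -> bool:
    """Credit the model only when its behavior matches what the input allows."""
    asked = is_intervention_request(response)
    return asked if needs_intervention else not asked

def proactive_accuracy(samples, responses):
    """samples: dicts with a 'needs_intervention' bool; responses: model outputs."""
    hits = sum(score_sample(r, s["needs_intervention"]) for s, r in zip(samples, responses))
    return hits / max(len(samples), 1)
```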
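Finally, for the F4Splat item: the core of budgeted densification is to score candidate Gaussians and keep only enough to meet a global budget, rather than allocating them uniformly per view. The sketch below shows that selection step only; the scoring rule and budget are illustrative assumptions, not F4Splat's actual predictor.

```python
import torch

def budgeted_densification(candidate_means, candidate_scores, budget):
    """Keep the `budget` highest-scoring candidate Gaussians across all views.

    candidate_means:  (N, 3) tensor of proposed Gaussian centers
    candidate_scores: (N,) predicted usefulness of each candidate
    budget:           maximum number of Gaussians to keep
    """
    keep = min(budget, candidate_scores.shape[0])
    top = torch.topk(candidate_scores, keep).indices
    return candidate_means[top]
```

The point of the budget is that total primitive count, not per-view count, drives memory and rendering cost, so pruning is done globally across all views at once.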