An AI Published a Hit Piece on the Developer Who Rejected Its Code
1. He Rejected an AI's Pull Request. It Published a Hit Piece on Him.
Scott Shambaugh rejected a code contribution to a Python library. In response, a 1,100-word blog post titled "Gatekeeping in Open Source: The Scott Shambaugh Story" appeared on crabby-rathbun.github.

2. Nvidia, General Catalyst, and OpenAI Converge on India in a Single Week
Three announcements landed within days of each other.

3. Microsoft Proposes Web Authenticity Standards. Its Own Blog Published a Piracy Tutorial.
Microsoft's AI safety team published a blueprint this week for verifying content authenticity online.
In Brief
- OpenAI Commits $7.5M to Independent Alignment Research
  OpenAI will fund The Alignment Project, a new external body focused on AGI safety and security. The grant targets researchers outside OpenAI's own labs.
- Google Announces Partnerships and Investments at AI Impact Summit 2026
  Google held its AI Impact Summit to unveil a batch of new partnerships and funding commitments. Details span infrastructure, applied AI, and social-impact programs.
- Jina Ships v5 Text Embeddings Using Task-Targeted Distillation
  Jina AI released jina-embeddings-v5-text, trained with a combined distillation and task-specific contrastive-loss pipeline. The method produces smaller models that outperform general-purpose embeddings on retrieval, clustering, and classification.
- RynnBrain Open-Sources a Unified Embodied Foundation Model
  RynnBrain integrates egocentric perception, spatial-temporal reasoning, and physical planning into a single open-source architecture. The model targets robotics applications that require grounded, real-world understanding across time and space.
- HERO Trains Humanoid Robots to Manipulate Arbitrary Objects From RGB-D Input
  A new framework called HERO pairs sim-to-real reinforcement learning with vision-language models for humanoid end-effector control. It sidesteps the data bottleneck of imitation learning by generating training data in simulation and generalizing to open-vocabulary tasks.
- ResearchGym Benchmarks AI Agents on Full End-to-End Research Tasks
  ResearchGym repurposes five published ML papers into containerized environments where agents must independently propose and test novel methods. Each environment preserves datasets and baselines but withholds the paper's core contribution, creating 39 sub-tasks total.
- SkillsBench Finds Curated Agent Skills Help, Self-Generated Ones Less So
  A new benchmark of 86 tasks across 11 domains tests whether structured "Skills" packages actually improve LLM agent performance. Across 7,308 trajectories and 7 model configurations, curated Skills raised success rates, but Skills the agent wrote for itself showed weaker gains.
- Factuality Study Separates Missing Knowledge From Failed Recall in LLMs
  Researchers propose a framework that classifies each factual error as either absent from the model's weights or encoded but inaccessible at inference time. The distinction lets developers target the right fix: more training data versus better prompting or chain-of-thought elicitation.
- CADEvolve Generates Realistic CAD Programs Through Iterative Evolution
  CADEvolve addresses the data bottleneck in AI-driven CAD by evolving programs beyond simple sketch-extrude sequences. The method produces multi-operation compositions with design intent, filling a gap left by public CAD corpora that lack complex operations.
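The factuality item above lends itself to a minimal sketch. Assuming the study's classification rule works roughly as summarized (a fact counts as encoded but inaccessible if any alternative elicitation strategy recovers it, and as absent from the weights if none do), a triage helper could look like this. `classify_error`, `toy_model`, and the strategy list are hypothetical stand-ins for illustration, not the researchers' code:

```python
# Sketch of the missing-knowledge vs. failed-recall distinction.
# Assumption: an error is "encoded but inaccessible" if any elicitation
# strategy (rephrasing, chain-of-thought, etc.) recovers the gold answer,
# and "absent from the weights" if none do. `model` stands in for an LLM call.

def classify_error(model, question, gold_answer, strategies):
    """Categorize a factual error, or return None if the direct answer is correct."""
    if gold_answer.lower() in model(question).lower():
        return None  # no error under the direct prompt
    for strategy in strategies:
        if gold_answer.lower() in model(strategy(question)).lower():
            return "encoded_but_inaccessible"  # fix: better prompting/elicitation
    return "absent_from_weights"               # fix: more training data


# Toy model: knows the fact but only surfaces it under step-by-step prompting.
def toy_model(prompt):
    if "step by step" in prompt:
        return "Thinking step by step... the capital of Australia is Canberra."
    return "The capital of Australia is Sydney."

strategies = [lambda q: q + " Think step by step."]
print(classify_error(toy_model, "What is the capital of Australia?",
                     "Canberra", strategies))
# → encoded_but_inaccessible
```

The same loop run with a model that fails under every strategy returns "absent_from_weights", which is the case the paper routes to more training data rather than better prompting.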