AI Research Brief

May 6, 2026

Gradient Boosting Turns Out to Be Diffusion's Asymptotic Optimum

  • Multi-Object Generation Failures Need Attribution Before Solutions: T2I multi-object failures come from scene complexity, not class imbalance. Concept-level issues respond to more data; compositional issues don't scale away.
  • VLM Plays Mario to 100+ Turns With a New RL Recipe: Odysseus uses a turn-level critic PPO variant to push the RL horizon from 20-30 turns to 100+. Pre-trained VLM action priors replace hand-designed action engineering.
  • GFlowNets Move From Demo to Usable in Red-Teaming: Stable-GFN uses contrastive trajectory balance to sidestep partition function estimation, addressing mode collapse directly.
  • Gradient Boosting Is Diffusion's Asymptotic Optimum: Decision trees and diffusion share the GTSM optimization principle. TreeFlow's 2x speedup on tabular generation is early evidence of the idea landing in practice.

Also Notable

  • VLM Anti-Hallucination Takes a Different Route — online self-calibration replaces GPT distillation. One less external dependency for independent teams deploying LVLMs.

