AI Research Brief

February 23, 2026

Model Folding Beats Pruning, XR Gets Hand-Level Control

  • Weight folding outperforms pruning at most compression rates. ICLR 2026 work proves folding yields lower reconstruction error and validates the result across 1,000+ checkpoints (a toy folding-vs.-pruning sketch follows this list).
  • Video generation models can now track your fingers. Joint-level hand control makes XR scenes interactive, not just watchable.
  • VR conversational agents finally know where you're standing. SARAH generates spatially aware full-body motion at 300 FPS for streaming VR deployment.
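
A minimal sketch of the general folding-vs.-pruning idea from the first item above. This is an illustrative toy, not the ICLR 2026 paper's method: when two hidden units are redundant, pruning drops one and loses its contribution, while folding merges its outgoing weights into the surviving unit, preserving the layer's output.

    # Toy comparison of pruning vs. weight folding on a duplicated hidden unit.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # first layer: hidden x in
    W1[3] = W1[1]                  # make hidden unit 3 a duplicate of unit 1
    W2 = rng.normal(size=(2, 4))   # second layer: out x hidden

    x = rng.normal(size=8)
    h = np.maximum(W1 @ x, 0.0)    # ReLU hidden activations
    y_ref = W2 @ h                 # original output

    # Pruning: drop unit 3 and its outgoing weights -> its contribution is lost.
    keep = [0, 1, 2]
    y_pruned = W2[:, keep] @ np.maximum(W1[keep] @ x, 0.0)

    # Folding: drop unit 3 but fold its outgoing weights into its duplicate
    # (unit 1), so the downstream computation is preserved.
    W2_fold = W2[:, keep].copy()
    W2_fold[:, 1] += W2[:, 3]
    y_folded = W2_fold @ np.maximum(W1[keep] @ x, 0.0)

    print("pruning error:", np.linalg.norm(y_ref - y_pruned))   # > 0
    print("folding error:", np.linalg.norm(y_ref - y_folded))   # ~ 0

With exactly duplicated units the folded network reproduces the original output, which is the intuition behind folding's lower reconstruction error at a matched compression rate.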

Also Notable

  • Flow model as critic regularizer for offline RL. Flow Actor-Critic sets a new state of the art on D4RL and OGBench by using the flow model's expressiveness to prevent Q-value explosion in out-of-data regions. ICLR 2026.
  • Agent memory doesn't need raw logs for every query. TierMem escalates to raw records only when summaries are insufficient, cutting tokens by 54% and latency by 61% with only a 2-point accuracy drop (see the tiered-lookup sketch after this list).
  • Attribute leakage in multi-instance generation gets a systematic fix. DEIG uses instance-level masked attention to isolate semantics across objects. AAAI 2026.
  • VLA models lack 3D spatial understanding? Fix it with residual stream alignment. ROCKET reaches 98.5% success on LIBERO using only 4% of the compute budget.
  • LLM-guided RL without constant LLM supervision. MIRA stores LLM knowledge in a memory graph, querying the graph instead of the model during training. ICLR 2026.
  • Learning when to pre-filter vs. post-filter in vector search. A learned query planner achieves a 4x speedup on filtered ANN with 90%+ recall (a minimal planner sketch also follows this list).
  • Medical QA can't ignore patient conditions. CondMedQA is the first conditional biomedical QA benchmark; CGR prunes knowledge graph reasoning paths based on patient-specific factors.
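
A minimal sketch of the tiered lookup behind the TierMem item. The TieredMemory class, the covers heuristic, and the 0.6 threshold are illustrative stand-ins, not TierMem's actual API: answer from compact summaries first, and escalate to raw records only when the summaries don't cover the query.

    # Toy tiered memory: cheap summary tier first, raw-log tier on escalation.
    from dataclasses import dataclass, field

    @dataclass
    class TieredMemory:
        summaries: list = field(default_factory=list)  # cheap tier: compact summaries
        raw_log: list = field(default_factory=list)    # expensive tier: full records

        def covers(self, query, text):
            # Toy relevance score: fraction of query words found in the text.
            words = query.lower().split()
            return sum(w in text.lower() for w in words) / max(len(words), 1)

        def retrieve(self, query, threshold=0.6):
            # Tier 1: try the summaries first (few tokens, low latency).
            hits = [s for s in self.summaries if self.covers(query, s) >= threshold]
            if hits:
                return hits
            # Tier 2: escalate to raw records only when summaries are insufficient.
            return [r for r in self.raw_log if self.covers(query, r) > 0]

    mem = TieredMemory(
        summaries=["User prefers Python and dark mode."],
        raw_log=["2026-02-10 user asked how to set the flake8 line length to 100"],
    )
    print(mem.retrieve("what line length does the user want for flake8"))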
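
And a minimal sketch of the pre-filter vs. post-filter decision in filtered vector search. The selectivity cutoff and the exact scan standing in for an ANN index are illustrative, and the paper learns this decision rather than hard-coding a rule: highly selective filters favor scanning only the matching vectors, while broad filters favor searching everything and filtering the results.

    # Toy query planner: pre-filter for selective filters, post-filter otherwise.
    import numpy as np

    rng = np.random.default_rng(0)
    vectors = rng.normal(size=(10_000, 64))
    tags = rng.integers(0, 100, size=10_000)   # metadata attribute per vector

    def top_k(query, idx, k=10):
        # Exact scan over the candidate subset (a real system would use an ANN index).
        sims = vectors[idx] @ query
        return idx[np.argsort(-sims)[:k]]

    def filtered_search(query, tag, k=10, selectivity_cutoff=0.05):
        matches = np.flatnonzero(tags == tag)
        selectivity = len(matches) / len(vectors)
        if selectivity < selectivity_cutoff:
            # Pre-filter: the filter is selective, so search only the matching vectors.
            return top_k(query, matches, k)
        # Post-filter: the filter is broad, so search everything and drop non-matches,
        # over-fetching so enough candidates survive the filter.
        cand = top_k(query, np.arange(len(vectors)), k=int(k / selectivity))
        return cand[tags[cand] == tag][:k]

    print(filtered_search(rng.normal(size=64), tag=7))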

Read the full edition →
