Daily Briefing – Mar 8 (81 Articles)
Babak's Daily Briefing
Sunday, March 8, 2026
Sources: 17 | Total Articles: 81
6G World
1. SpaceRAN: Airbus UpNext explores software-defined 5G NTN from orbit
Airbus UpNext has launched its SpaceRAN (Space Radio Access Network) demonstrator, a key initiative to advance standardised 5G…
2. SoftBank’s Transformer-Based AI-RAN Hits 30% Uplink Gain at Sub-Millisecond Latency
On August 21, 2025, SoftBank published results from a live, standards-compliant AI-RAN trial that replaces parts of classical signal processing with a lightweight Transformer.
3. 6G as a Platform for Value
Reframing the Future with NGMN’s Chairman, Laurent Leboucher. By Piotr (Peter) Pietrzyk, Managing Editor, 6GWorld.com. In the race…
4. SoftBank Road-Tests 7 GHz in Central Tokyo
SoftBank and Nokia have begun outdoor field trials in Tokyo’s Ginza district using 7 GHz spectrum, installing three pre-commercial base stations to compare coverage and radio characteristics against today’s sub-6 GHz 5G sites.
5. NXP’s Acquisition of TTTech Auto Signals Growing Focus on Middleware for Software-Defined Vehicles
On June 17, 2025, NXP Semiconductors finalized its acquisition of TTTech Auto—a strategic move to integrate TTTech’s flagship…
AI Agents
1. Mozi: Governed Autonomy for Drug Discovery LLM Agents
Tool-augmented large language model (LLM) agents promise to unify scientific reasoning with computation, yet their deployment in high-stakes domains like drug discovery is bottlenecked by two critical barriers: unconstrained tool-use governance and poor long-horizon reliability. In dependency-heavy pharmaceutical pipelines, autonomous agents often drift into irreproducible trajectories, where early-stage hallucinations multiplicatively compound into downstream failures. To overcome this, we present Mozi, a dual-layer architecture that bridges the flexibility of generative AI with the deterministic rigor of computational biology. Layer A (Control Plane) establishes a governed supervisor-worker hierarchy that enforces role-based tool isolation, limits execution to constrained action spaces, and drives reflection-based replanning. Layer B (...
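The role-based tool isolation that the control plane enforces can be sketched as a simple allowlist check between worker roles and a tool registry. The role and tool names below are hypothetical illustrations, not Mozi's actual interface:

```python
# Hypothetical sketch of role-based tool isolation, in the spirit of a
# governed supervisor-worker control plane (all names are illustrative).
class ToolGovernanceError(Exception):
    pass

# Each worker role is confined to an explicit allowlist of tools.
ROLE_TOOLS = {
    "docking_worker": {"run_docking", "score_pose"},
    "admet_worker": {"predict_admet"},
    "supervisor": {"plan", "replan"},
}

def invoke(role, tool, payload, registry):
    """Dispatch a tool call only if the role's allowlist permits it."""
    if tool not in ROLE_TOOLS.get(role, set()):
        raise ToolGovernanceError(f"{role} may not call {tool}")
    return registry[tool](payload)

registry = {"predict_admet": lambda p: {"ok": True, "input": p}}
result = invoke("admet_worker", "predict_admet", {"smiles": "CCO"}, registry)
```

An out-of-role call (for example, the ADMET worker requesting docking) raises before any tool runs, which is the point of constraining the action space.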
2. stratum: A System Infrastructure for Massive Agent-Centric ML Workloads
Recent advances in large language models (LLMs) transform how machine learning (ML) pipelines are developed and evaluated. LLMs enable a new type of workload, agentic pipeline search, in which autonomous or semi-autonomous agents generate, validate, and optimize complete ML pipelines. These agents predominantly operate over popular Python ML libraries and exhibit highly exploratory behavior. This results in thousands of executions for data profiling, pipeline generation, and iterative refinement of pipeline stages. However, the existing Python-based ML ecosystem is built around libraries such as Pandas and scikit-learn, which are designed for human-centric, interactive, sequential workflows and remain constrained by Python's interpreted execution model, library-level isolation, and limited runtime support for executing large numbers of p...
3. Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations
MoltBook is a large-scale multi-agent coordination environment where over 770,000 autonomous LLM agents interact without human participation, offering the first opportunity we are aware of to observe emergent multi-agent coordination dynamics at this population scale. We introduce Molt Dynamics: the emergent agent coordination behaviors, inter-agent communication dynamics, and role specialization patterns arising when autonomous agents operate as decentralized decision-makers in an unconstrained multi-agent environment. Through longitudinal observation of 90,704 active agents over three weeks, we characterize three aspects. First, spontaneous role specialization: network-based clustering reveals six structural roles (silhouette 0.91), though the result primarily reflects core-periphery organization -- 93.5% of agents occupy a ho...
4. LiveCultureBench: A Multi-Agent, Multi-Cultural Benchmark for Large Language Models in Dynamic Social Simulations
Large language models (LLMs) are increasingly deployed as autonomous agents, yet evaluations focus primarily on task success rather than cultural appropriateness or evaluator reliability. We introduce LiveCultureBench, a multi-cultural, dynamic benchmark that embeds LLMs as agents in a simulated town and evaluates them on both task completion and adherence to socio-cultural norms. The simulation models a small city as a location graph with synthetic residents having diverse demographic and cultural profiles. Each episode assigns one resident a daily goal while others provide social context. An LLM-based verifier generates structured judgments on norm violations and task progress, which we aggregate into metrics capturing task-norm trade-offs and verifier uncertainty. Using LiveCultureBench across models and cultural profiles, we study (i)...
5. Evaluating and Understanding Scheming Propensity in LLM Agents
As frontier language models are increasingly deployed as autonomous agents pursuing complex, long-term objectives, there is increased risk of scheming: agents covertly pursuing misaligned goals. Prior work has focused on showing agents are capable of scheming, but their propensity to scheme in realistic scenarios remains underexplored. To understand when agents scheme, we decompose scheming incentives into agent factors and environmental factors. We develop realistic settings allowing us to systematically vary these factors, each with scheming opportunities for agents that pursue instrumentally convergent goals such as self-preservation, resource acquisition, and goal-guarding. We find only minimal instances of scheming despite high environmental incentives, and show this is unlikely due to evaluation awareness. While inserting adversaria...
Financial AI
1. Statistical Inference for Score Decompositions
We introduce inference methods for score decompositions, which partition scoring functions for predictive assessment into three interpretable components: miscalibration, discrimination, and uncertainty. Our estimation and inference rely on a linear recalibration of the forecasts, which is applicable to general multi-step ahead point forecasts such as means and quantiles due to its validity for both smooth and non-smooth scoring functions. This approach ensures desirable finite-sample properties, enables asymptotic inference, and establishes a direct connection to the classical Mincer-Zarnowitz regression. The resulting inference framework facilitates tests for equal forecast calibration or discrimination, which yield three key advantages. They enhance the information content of predictive ability tests by decomposing scores, deliver hig...
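The linear recalibration behind the decomposition connects to the classical Mincer-Zarnowitz regression of outcomes on forecasts: a well-calibrated forecast should have intercept near 0 and slope near 1. A minimal pure-Python sketch of that regression, with an invented biased-forecast example:

```python
# Minimal Mincer-Zarnowitz-style recalibration of point forecasts:
# regress outcomes y on forecasts x via ordinary least squares.
# Illustrative only; the paper's inference framework goes further.
def mincer_zarnowitz(forecasts, outcomes):
    n = len(forecasts)
    mx = sum(forecasts) / n
    my = sum(outcomes) / n
    sxx = sum((x - mx) ** 2 for x in forecasts)
    sxy = sum((x - mx) * (y - my) for x, y in zip(forecasts, outcomes))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# A forecast that is biased upward by exactly 1: recalibration finds
# slope 1 and intercept -1, flagging pure miscalibration.
fc = [2.0, 3.0, 4.0, 5.0]
y = [1.0, 2.0, 3.0, 4.0]
a, b = mincer_zarnowitz(fc, y)
print(round(a, 6), round(b, 6))  # -1.0 1.0
```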
2. Same Error, Different Function: The Optimizer as an Implicit Prior in Financial Time Series
Neural networks applied to financial time series operate in a regime of underspecification, where model predictors achieve indistinguishable out-of-sample error. Using large-scale volatility forecasting for S&P 500 stocks, we show that different model-training-pipeline pairs with identical test loss learn qualitatively different functions. Across architectures, predictive accuracy remains unchanged, yet optimizer choice reshapes non-linear response profiles and temporal dependence differently. These divergences have material consequences for decisions: volatility-ranked portfolios trace a near-vertical Sharpe-turnover frontier, with nearly 3x turnover dispersion at comparable Sharpe ratios. We conclude that in underspecified settings, optimization acts as a consequential source of inductive bias, thus model evaluation should ext...
3. Deep Learning for Financial Time Series: A Large-Scale Benchmark of Risk-Adjusted Performance
We present a large-scale benchmark of modern deep learning architectures for a financial time series prediction and position sizing task, with a primary focus on Sharpe ratio optimization. Evaluating linear models, recurrent networks, transformer-based architectures, state space models, and recent sequence representation approaches, we assess out-of-sample performance on a daily futures dataset covering commodities, equity indices, bonds, and FX over 2010 to 2025. Our evaluation goes beyond average returns and includes statistical significance, downside and tail risk measures, breakeven transaction cost analysis, robustness to random seed selection, and computational efficiency. We find that models explicitly designed to learn rich temporal representations consistently outperform linear benchmarks and generic deep learning models, whi...
4. Adaptive Window Selection for Financial Risk Forecasting
Risk forecasts in financial regulation and internal management are calculated from historical data. Unknown structural changes in financial data pose a substantial challenge in selecting an appropriate look-back window for risk modeling and forecasting. We develop a data-driven online learning method, called the bootstrap-based adaptive window selection (BAWS), that adaptively determines the window size in a sequential manner. A central component of BAWS is the comparison of realized scores against a data-dependent threshold, which is evaluated using a bootstrap procedure. The proposed method is applicable to forecasts of risk measures that are elicitable individually or jointly, such as the Value-at-Risk (VaR) and the pair of the VaR and the corresponding Expected Shortfall. Through simulation studies and empirical analyses, we ...
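The core idea of comparing realized scores against a bootstrap-based threshold can be rendered in toy form: resample historical scores to estimate how large a recent-window mean score could plausibly be under stability, and keep the longer window only if the recent scores stay below that quantile. This is an illustration of the idea, not the paper's BAWS algorithm:

```python
import random

# Toy sketch of a bootstrap-threshold check for window selection.
# Not the paper's BAWS algorithm; thresholding logic is illustrative.
def bootstrap_threshold(scores, block, level=0.95, n_boot=2000, seed=0):
    """Bootstrap the distribution of a block mean and return its quantile."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(scores) for _ in range(block)]
        means.append(sum(sample) / block)
    means.sort()
    return means[int(level * (n_boot - 1))]

# Synthetic, structurally stable score history.
history = [1.0 + 0.1 * ((i * 7) % 5) for i in range(200)]
recent_mean = sum(history[-20:]) / 20
thr = bootstrap_threshold(history[:-20], block=20)
# Recent scores are consistent with the past: keep the longer window.
expand_window = recent_mean <= thr
```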
5. TradeFM: A Generative Foundation Model for Trade-flow and Market Microstructure
Foundation models have transformed domains from language to genomics by learning general-purpose representations from large-scale, heterogeneous data. We introduce TradeFM, a 524M-parameter generative Transformer that brings this paradigm to market microstructure, learning directly from billions of trade events across >9K equities. To enable cross-asset generalization, we develop scale-invariant features and a universal tokenization scheme that map the heterogeneous, multi-modal event stream of order flow into a unified discrete sequence -- eliminating asset-specific calibration. Integrated with a deterministic market simulator, TradeFM-generated rollouts reproduce key stylized facts of financial returns, including heavy tails, volatility clustering, and absence of return autocorrelation. Quantitatively, TradeFM achieves 2-3x lower distri...
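A scale-invariant tokenization of trade events can be illustrated with a toy scheme: normalize each price move by recent volatility and each trade size by the recent average size, then quantize into discrete bins. The binning below is invented for illustration and is not TradeFM's actual tokenizer:

```python
import math
import statistics

# Toy scale-invariant tokenizer for trade events. The key property:
# rescaling all prices and all sizes leaves the token sequence
# unchanged, so no asset-specific calibration is needed.
def tokenize(trades, n_bins=7):
    """trades: list of (price, size); returns one token id per event."""
    prices = [p for p, _ in trades]
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    vol = statistics.pstdev(rets) or 1e-9
    avg_size = sum(s for _, s in trades) / len(trades)
    tokens = []
    for r, (_, s) in zip(rets, trades[1:]):
        z = max(-3.0, min(3.0, r / vol))               # vol-scaled return
        ret_bin = int((z + 3.0) / 6.0 * (n_bins - 1))  # 0 .. n_bins-1
        size_bin = 0 if s < avg_size else 1            # small vs large
        tokens.append(ret_bin * 2 + size_bin)          # joint token id
    return tokens

toks = tokenize([(100.0, 10), (100.5, 5), (99.8, 30), (100.1, 12)])
```

Multiplying every price by 10 and every size by 100 yields the identical token sequence, which is the cross-asset generalization property the abstract describes.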
GSMA Newsroom
1. GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
2. From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
3. Pioneering Affordable Access in Africa: GSMA and Handset Affordability Coalition Members Identify Six African Countries to Pilot Affordable $40 Smartphones
Summary available at source link.
4. GSMA Calls for Regulatory Readiness for Direct-to-User LEO Satellite Services
Summary available at source link.
5. MWC26 Barcelona opens with call to complete 5G, rise to AI challenges, and strengthen digital safety
Summary available at source link.
Generative AI (arXiv)
1. Observing and Controlling Features in Vision-Language-Action Models
Vision-Language-Action Models (VLAs) have shown remarkable progress towards embodied intelligence. While their architecture partially resembles that of Large Language Models (LLMs), VLAs exhibit higher complexity due to their multi-modal inputs/outputs and the often hybrid nature of their transformer and diffusion heads. This is part of the reason why insights from mechanistic interpretability in LLMs, which explain how the internal model representations relate to their output behavior, do not trivially transfer to VLA counterparts. In this work, we propose to close this gap by introducing and analyzing two main concepts: feature-observability and feature-controllability. In particular, we first study features that are linearly encoded in representation space, and show how they can be observed by means of a linear classifier. Then, we use a minimal...
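The feature-observability idea, reading out a linearly encoded feature with a linear classifier, can be sketched with a tiny logistic-regression probe. The 2-D synthetic "activations" below stand in for real VLA hidden states; everything here is illustrative:

```python
import math
import random

# A minimal linear probe: logistic regression trained by plain SGD.
# Real probes run on model activations; we use synthetic 2-D vectors
# in which the target feature is encoded along the first dimension.
def train_probe(xs, ys, lr=0.5, epochs=200):
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability
            g = p - y                       # logistic-loss gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

rng = random.Random(0)
xs = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
ys = [1 if x[0] > 0 else 0 for x in xs]  # linearly encoded feature
w, b = train_probe(xs, ys)
acc = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in zip(xs, ys)
) / len(xs)
```

High probe accuracy is the evidence that the feature is observable in the representation; a probe at chance level would suggest the feature is not linearly encoded there.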
2. Distributed Partial Information Puzzles: Examining Common Ground Construction Under Epistemic Asymmetry
Establishing common ground, a shared set of beliefs and mutually recognized facts, is fundamental to collaboration, yet remains a challenge for current AI systems, especially in multimodal, multiparty settings, where the collaborators bring different information to the table. We introduce the Distributed Partial Information Puzzle (DPIP), a collaborative construction task that elicits rich multimodal communication under epistemic asymmetry. We present a multimodal dataset of these interactions, annotated and temporally aligned across speech, gesture, and action modalities to support reasoning over propositional content and belief dynamics. We then evaluate two paradigms for modeling common ground (CG): (1) state-of-the-art large language models (LLMs), prompted to infer shared beliefs from multimodal updates, and (2) an axiomatic pipeline...
3. An Exploration-Analysis-Disambiguation Reasoning Framework for Word Sense Disambiguation with Low-Parameter LLMs
Word Sense Disambiguation (WSD) remains a key challenge in Natural Language Processing (NLP), especially when dealing with rare or domain-specific senses that are often misinterpreted. While modern high-parameter Large Language Models (LLMs) such as GPT-4-Turbo have shown state-of-the-art WSD performance, their computational and energy demands limit scalability. This study investigates whether low-parameter LLMs (<4B parameters) can achieve comparable results through fine-tuning strategies that emphasize reasoning-driven sense identification. Using the FEWS dataset augmented with semi-automated, rationale-rich annotations, we fine-tune eight small-scale open-source LLMs (e.g. Gemma and Qwen). Our results reveal that Chain-of-Thought (CoT)-based reasoning combined with neighbour-word analysis achieves performance comparable to GPT-4-Turbo ...
4. DiSCTT: Consensus-Guided Self-Curriculum for Efficient Test-Time Adaptation in Reasoning
Test-time adaptation offers a promising avenue for improving reasoning performance in large language models without additional supervision, but existing approaches often apply a uniform optimization objective across all inputs, leading to inefficient or unstable adaptation on heterogeneous reasoning problems. We propose DiSCTT, a difficulty-aware, consensus-guided self-curriculum framework that dynamically allocates test-time optimization strategies based on instance-level epistemic uncertainty estimated from agreement among sampled reasoning trajectories. Inputs with high consensus are consolidated via supervised fine-tuning using majority-agreed solutions as pseudo-labels, while low-consensus inputs are optimized via reinforcement learning with a consensus-regularized objective that encourages diversity under relevance constraints. Acro...
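The consensus signal this approach relies on is simple to illustrate: sample several reasoning trajectories, take the majority final answer, and treat the agreement rate as an inverse estimate of epistemic uncertainty. The routing threshold below is illustrative, not taken from the paper:

```python
from collections import Counter

# Majority-vote consensus over sampled final answers, with agreement
# rate as an (inverse) epistemic-uncertainty proxy.
def consensus(final_answers):
    counts = Counter(final_answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(final_answers)

def route(final_answers, threshold=0.7):
    answer, agreement = consensus(final_answers)
    # High consensus: use the majority answer as a pseudo-label (SFT).
    # Low consensus: send to the exploratory RL-style objective.
    return ("sft", answer) if agreement >= threshold else ("rl", None)

print(route(["42", "42", "42", "41"]))  # ('sft', '42') at 0.75 agreement
print(route(["a", "b", "c", "a"]))      # ('rl', None) at 0.50 agreement
```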
5. STRUCTUREDAGENT: Planning with AND/OR Trees for Long-Horizon Web Tasks
Recent advances in large language models (LLMs) have enabled agentic systems for sequential decision-making. Such agents must perceive their environment, reason across multiple time steps, and take actions that optimize long-term objectives. However, existing web agents struggle on complex, long-horizon tasks due to limited in-context memory for tracking history, weak planning abilities, and greedy behaviors that lead to premature termination. To address these challenges, we propose STRUCTUREDAGENT, a hierarchical planning framework with two core components: (1) an online hierarchical planner that uses dynamic AND/OR trees for efficient search and (2) a structured memory module that tracks and maintains candidate solutions to improve constraint satisfaction in information-seeking tasks. The framework also produces interpretable hierarchic...
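AND/OR tree evaluation itself is compact to illustrate: an OR node succeeds if any child plan succeeds, an AND node only if all of its sub-steps do. The web-task structure below is invented for illustration and is not from the paper:

```python
# Minimal AND/OR tree evaluation of the kind a hierarchical planner
# might use to check whether a long-horizon task is still achievable.
def solvable(node):
    kind = node["type"]
    if kind == "leaf":
        return node["done"]
    children = [solvable(c) for c in node["children"]]
    return all(children) if kind == "and" else any(children)

# "Book trip" = (find flight AND pay) OR (use travel agent).
plan = {
    "type": "or",
    "children": [
        {"type": "and", "children": [
            {"type": "leaf", "done": True},    # find flight: succeeded
            {"type": "leaf", "done": False},   # pay: failed
        ]},
        {"type": "leaf", "done": True},        # travel-agent fallback
    ],
}
print(solvable(plan))  # True: the fallback OR-branch succeeds
```

Because the failed payment step only sinks its own AND branch, the planner can recognize that the overall goal survives via the alternative branch instead of terminating prematurely.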
Hugging Face Daily Papers
1. Harnessing Synthetic Data from Generative AI for Statistical Inference
The emergence of generative AI models has dramatically expanded the availability and use of synthetic data across scientific, industrial, and policy domains. While these developments open new possibilities for data analysis, they also raise fundamental statistical questions about when synthetic data can be used in a valid, reliable, and principled manner. This paper reviews the current landscape of synthetic data generation and use from a statistical perspective, with the goal of clarifying the assumptions under which synthetic data can meaningfully support downstream discovery, inference, and prediction. We survey major classes of modern generative models, their intended use cases, and the benefits they offer, while also highlighting their limitations and characteristic failure modes. We additionally examine common pitfalls that arise wh...
2. Early Warning of Intraoperative Adverse Events via Transformer-Driven Multi-Label Learning
Early warning of intraoperative adverse events plays a vital role in reducing surgical risk and improving patient safety. While deep learning has shown promise in predicting single adverse events, several key challenges remain: overlooking adverse event dependencies, underutilizing heterogeneous clinical data, and suffering from the class imbalance inherent in medical datasets. To address these issues, we construct the first Multi-label Adverse Events dataset (MuAE) for intraoperative adverse event prediction, covering six critical events. Next, we propose a novel Transformer-based multi-label learning framework (IAENet) that combines an improved Time-Aware Feature-wise Linear Modulation (TAFiLM) module for robust fusion of static covariates and dynamic variables and for modeling complex temporal dependencies. Furthermore, we introduce a Labe...
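Feature-wise Linear Modulation, the mechanism TAFiLM builds on, reduces to a per-feature affine transform, out = gamma * x + beta, where gamma and beta are produced from the conditioning input (here, static covariates). A minimal sketch with invented numbers; the time-aware extension is not shown:

```python
# FiLM in miniature: a conditioning input yields per-feature scale
# (gamma) and shift (beta) applied to another representation. In
# practice gamma/beta come from a small network over the conditioner.
def film(features, gamma, beta):
    return [g * f + b for f, g, b in zip(features, gamma, beta)]

# Dynamic time-series features modulated by (hypothetical) parameters
# derived from static patient covariates.
x = [0.5, -1.0, 2.0]
gamma = [1.0, 0.0, 2.0]   # the zero entry gates feature 2 off entirely
beta = [0.0, 0.3, -1.0]
print(film(x, gamma, beta))  # [0.5, 0.3, 3.0]
```

The gating behavior of a zero gamma is what lets the conditioner suppress dynamic features that are irrelevant for a given patient.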
3. Guidelines for the Annotation and Visualization of Legal Argumentation Structures in Chinese Judicial Decisions
This guideline proposes a systematic and operational annotation framework for representing the structure of legal argumentation in judicial decisions. Grounded in theories of legal reasoning and argumentation, the framework aims to reveal the logical organization of judicial reasoning and to provide a reliable data foundation for computational analysis. At the proposition level, the guideline distinguishes four types of propositions: general normative propositions, specific normative propositions, general factual propositions, and specific factual propositions. At the relational level, five types of relations are defined to capture argumentative structures: support, attack, joint, match, and identity. These relations represent positive and negative argumentative connections, conjunctive reasoning structures, the correspondence between leg...
4. The Impact of Preprocessing Methods on Racial Encoding and Model Robustness in CXR Diagnosis
Deep learning models can identify racial identity with high accuracy from chest X-ray (CXR) recordings. Thus, there is widespread concern about the potential for racial shortcut learning, where a model inadvertently learns to systematically bias its diagnostic predictions as a function of racial identity. Such racial biases threaten healthcare equity and model reliability, as models may systematically misdiagnose certain demographic groups. Since racial shortcuts are diffuse - non-localized and distributed throughout the whole CXR recording - image preprocessing methods may influence racial shortcut learning, yet the potential of such methods for reducing biases remains underexplored. Here, we investigate the effects of image preprocessing methods including lung masking, lung cropping, and Contrast Limited Adaptive Histogram Equalization ...
5. MoRe: Motion-aware Feed-forward 4D Reconstruction Transformer
Reconstructing dynamic 4D scenes remains challenging due to the presence of moving objects that corrupt camera pose estimation. Existing optimization methods alleviate this issue with additional supervision, but they are mostly computationally expensive and impractical in real-time applications. To address these limitations, we propose MoRe, a feedforward 4D reconstruction network that efficiently recovers dynamic 3D scenes from monocular videos. Built upon a strong static reconstruction backbone, MoRe employs an attention-forcing strategy to disentangle dynamic motion from static structure. To further enhance robustness, we fine-tune the model on large-scale, diverse datasets encompassing both dynamic and static scenes. Moreover, our grouped causal attention captures temporal dependencies and adapts to varying token lengths across frames...
IEEE Xplore AI
1. Military AI Policy Needs Democratic Oversight
A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies or Congress and the broader democratic process? The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff. Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully...
2. Entomologists Use a Particle Accelerator to Image Ants at Scale
Move over, Pixar. The ants that animators once morphed into googly-eyed caricatures in films such as A Bug’s Life and Antz just received a meticulously precise anatomical reboot. Writing today in Nature Methods, an international team of entomologists, accelerator physicists, computer scientists, and biological imaging specialists describe a new 3D atlas of ant morphology. Dubbed Antscan, the platform features micrometer-resolution reconstructions that lay bare not only the insects’ armored exoskeletons but also their muscles, nerves, digestive tracts, and needle-like stingers poised at the ready. Those high-resolution images—spanning 792 species across 212 genera and covering the bulk of described ant diversity—are now freely available through an interactive online portal, where anyone can rotate, zoom, and virtually “dissect” the insec...
3. Watershed Moment for AI–Human Collaboration in Math
When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to assist with mathematical research. “These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc Liam Fowl, who was not involved in the work. In her Fields Medal–winning research, Viazovska had tackled two versions o...
4. How Quantum Data Can Teach AI to Do Better Chemistry
Sometimes a visually compelling metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named John P. Perdew came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “Jacob’s Ladder.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.” Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder,...
5. Letting Machines Decide What Matters
In the time it takes you to read this sentence, the Large Hadron Collider (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the Standard Model of particle physics. For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As Matthew Hutson reports in “AI Hunts for the Next Big Thing in Physics,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there ha...
MIT Sloan Management
1. Our Guide to the Spring 2026 Issue
The Eight Core Principles of Strategic Innovation, by Gina O’Connor and Christopher R. Meyer. Key Insight: Mature companies that build a strategic innovation capability can systematically renew their product portfolios to sustain long-term growth. Top Takeaways: Many companies start off with a bang: the launch of an exciting breakthrough product or service. But as time passes, […]
2. AI Won’t Fix This
We are firmly in the digital age, awash in data generated on every surface and in every layer of every business. Yet, despite decades of investment in technology, time, and effort, many organizations are still not seeing meaningful returns. A global survey of over 4,200 business and technology leaders conducted by research firm Gartner in […]
3. The Eight Core Principles of Strategic Innovation
The research behind this article was conducted in partnership with the Innovation Research Interchange, a professional association of R&D leaders in large industrial companies. More than 640 interviews were conducted over the three phases of the research program. In Phase 1, 12 project teams from 10 companies were followed for five […]
4. Is a Venture Studio Right for Your Company?
Venture studios are emerging as a compelling — if resource-intensive — way for organizations to maximize value creation through innovation. Pioneered by organizations such as Google, the studio model offers a structured and systematic approach to venture creation inside an organization. But before adopting it, leaders must ask: Is a studio the right […]
5. The Hidden Power of Messy Teams
For most leaders, the ideal high-functioning team operates with smooth collaboration guided by a clear goal that was agreed upon at the outset. Researchers studying the innovation process have also found this model to be particularly important for helping teams communicate better, coordinate tasks, and resolve conflicts as they explore diverse ideas and […]
NBER Working Papers
1. Improving Organ Procurement Operations -- by Hammaad Adam, Nikhil Agarwal, Marzyeh Ghassemi
We study how decisions made by organ procurement organizations (OPOs)—non-profits that coordinate organ recovery from deceased donors—affect the availability of organs for transplant in the United States. We develop a structural econometric model of a pivotal OPO decision: whether to approach a potential donor’s family to request authorization for donation. Our model conceptualizes this decision in two parts. The OPO first estimates the probabilities of two downstream outcomes: authorization (i.e., family consent) and transplant (i.e., whether the donated organs would be successfully transplanted). It then applies a cost-benefit decision rule that maps these estimates to an approach decision. Our model separately identifies the OPO’s beliefs (i.e., probability estimates), its preferences (i.e., costs / benefits), and the true probabilitie...
2. Consumption Wedges: Measuring and Diagnosing Distortions -- by Sasha Indarte, Raymond Kluender, Ulrike Malmendier, Michael Stepner
Ample empirical evidence documents deviations from the canonical consumption-savings model; yet, it remains difficult to assess the roles of different underlying distortions, such as financial constraints and behavioral preferences. We develop a sufficient-statistics approach that measures individual-level wedges between observed and counterfactual “frictionless” consumption. Since different distortions imply different wedge properties, wedges provide a diagnostic to distinguish between models. We measure wedges using administrative transactions data linked to surveyed expectations for a population of middle-income, low-liquidity US consumers. The expectations data allow us to distinguish wedges attributable to frictions and behavioral preferences from wedges driven by deviations from full-information rational expectations (FIRE). We find...
3. Interest Rate Risk and Cross-Sectional Effects of Micro-Prudential Regulation -- by Juliane Begenau, Vadim Elenev, Tim Landvoigt
This paper investigates financial stability risks arising from banks' interest rate exposure and uninsured deposit funding. We develop a model of heterogeneous banks featuring endogenous run risk to jointly analyze portfolio and funding choices. The model replicates key empirical patterns, including the concentration of uninsured deposits in larger banks. We analyze the impact of monetary policy rate hikes and evaluate the capacity of microprudential tools to mitigate bank fragility. Results demonstrate that tightening capital requirements significantly lowers run risk. Higher liquidity requirements targeting uninsured deposits efficiently reduce run risk, provided they are met exclusively with reserves.
4. Ray of Hope? China and the Rise of Solar Energy -- by Ignacio Banares-Sanchez, Robin Burgess, Dávid László, Pol Simpson, John Van Reenen, Yifan Wang
Do industrial policies that promote clean energy offer a “ray of hope”, increasing a country’s growth and welfare, whilst simultaneously reducing carbon emissions? We study the impact of Chinese solar subsidies whose implementation by city-regions went alongside massive expansion of the sector and a dramatic fall in global solar prices. We construct new city and firm panel data on solar policies, patenting and output. Using a synthetic difference-in-differences design over 2004-2020, we find production and innovation subsidies were more effective than demand-side (installation) subsidies in generating large and persistent increases in local innovation, net entry, production and exports. Demand policies did, however, reduce local pollution. To examine aggregate effects, we build and structurally estimate a quantitative spatial model with endogenous inno...
5. Inflation vs Inclusion: Stabilization Policy in the Wake of the Pandemic -- by Felipe Alves, Giovanni L. Violante
As the economy emerges from a crisis, macroeconomic policy confronts a dilemma: a protracted stimulus can foster a more inclusive labor market recovery, yet risks igniting inflation that ultimately undermines workers’ welfare through real income erosion. This tension is amplified in the presence of the zero lower bound (ZLB) and aggregate capacity constraints. We embed this insight into a quantitative model of the US economy. We study how monetary and fiscal policies managed this inflation-inclusion trade-off after the pandemic, contrasting actual outcomes with counterfactual scenarios. Our experiments yield five findings: (i) the trade-off was unusually difficult because policy was squeezed between these two constraints; (ii) inflationary pressures arose from the joint deployment of prolonged monetary and fiscal stimulus; either policy alone would have produc...
NY Fed - Liberty Street
1.Firms’ Inflation Expectations Return to 2024 Levels
Businesses experienced substantial cost pressures in 2025 as the cost of insurance and utilities rose sharply, while an increase in tariffs contributed to rising goods and materials costs. This post examines how firms in the New York-Northern New Jersey region adjusted their prices in response to these cost pressures and describes their expectations for future price increases and inflation. Survey results show an acceleration in firms’ price increases in 2025, with an especially sharp increase in the manufacturing sector. While both cost and price increases intensified last year, our surveys re...
2.Are Rising Employee Health Insurance Costs Dampening Wage Growth?
Employer-sponsored health insurance represents a substantial component of total compensation paid by firms to many workers in the United States. Such costs have climbed by close to 20 percent over the past five years. Indeed, the average annual premium for employer-sponsored family health insurance coverage was about $27,000 in 2025—roughly equivalent to the wage of a full-time worker paid $15 per hour. Our February regional business surveys asked firms whether their wage setting decisions were influenced by the rising cost of employee health insurance. As we showed in our
3.What’s Driving Rising Business Costs?
After a period of moderating cost increases, businesses faced mounting cost pressures in 2025. While tariffs played a role in driving up the costs of many inputs—especially among manufacturers—they represent only part of the story. Indeed, firms grappled with substantial cost increases across many categories in the past year. This post is the first in a three-part series analyzing cost and price dynamics among businesses in the New York-Northern New Jersey region based on data collected through our regional business surveys. Firms reported that the sharpest cost increases over the...
4.The Post‑Pandemic Global R*
In this post we provide a measure of “global” r* using data on short- and long-term yields and inflation for several countries with the approach developed in “Global Trends in Interest Rates” (Del Negro, Giannone, Giannoni, and Tambalotti). After declining significantly from the 1990s to before the COVID-19 pandemic, global r* has risen but remains well below its pre-1990s level. These conclusions are based on an econometric model called “trendy VAR” that extracts common trends across a multitude of variables. Specifically, the common trend in real rates across all the countries in the sample is what we call global r*. The post is based on the
5.Estimating the Term Structure of Corporate Bond Risk Premia
Understanding how short- and long-term assets are priced is one of the fundamental questions in finance. The term structure of risk premia allows us to perform net present value calculations, test asset pricing models, and potentially explain the sources of many cross-sectional asset pricing anomalies. In this post, I construct a forward-looking estimate of the term structure of risk premia in the corporate bond market following Jankauskas (2024). The U.S. corporate bond market is an ideal laboratory for studying the relationship between risk premia and maturity because of its large size (standing at roughly $16 trillion as of the end of 2024) and because the maturities are well defined (in contrast to equities).
Project Syndicate
1.The Economic Magic of Equal Opportunities for Women
None of the 190 countries covered by the World Bank’s Women, Business, and the Law 2026 report provides women with the same legal environment as men, with the biggest gaps found in safety, entrepreneurship, and childcare. In developing economies, the costs in terms of growth and employment are considerable.
2.Kevin Warsh Is in for a Rude Awakening
For years, Kevin Warsh, Donald Trump’s nominee to serve as the next chair of the US Federal Reserve, has been staking out policy positions that would almost certainly backfire if put into practice. Fortunately, market conditions and the rest of the central bank’s board will still have a say in monetary policymaking.
3.What Turkey Wants in Iran
While avoiding protracted instability in Iran is vital to Turkey’s interests, so is ensuring that the Islamic Republic does not emerge victorious from the current war. Turkey’s ideal scenario – a managed degradation of Iran’s ambitions and capabilities – might be best served by a Venezuela-style leadership transition.
4.A Stronger Work Ethic Won’t Fix Advanced Economies
German Chancellor Friedrich Merz learned the wrong lesson on his recent trip to China. Advanced economies expand and remain competitive not through additional labor inputs but through capital deepening, technological progress, and total factor productivity growth.
5.Financial Insecurity, Not Immigration, Is Driving Populist Politics
Nearly a decade after Brexit and Donald Trump’s first election victory, populism is still often portrayed as a revolt by working-class voters struggling to keep up with economic change. But today’s electoral shifts reflect everyday forms of insecurity affecting a much broader segment of the population.
RCR Wireless
1.Can AI help stop “Wangiri” and voice spoofing?
Carriers are using real-time audio fingerprinting to intercept synthetic voice scams and Wangiri before the phone rings. It used to take actual skill to pull off a convincing phone scam. These days, however, convincing voice spoofing is a whole lot easier. Voice cloning tech has gotten accessible, meaning that criminals can easily set up realistic […]
2.Connectivity, computing, sensing – Qualcomm CEO outlines 6G pillars
The CEO of Qualcomm told MWC that connectivity will remain the foundation of 6G networks, but its design priorities will evolve as AI becomes central to digital services and mobile computing In sum – what to know: Three 6G pillars – Qualcomm CEO Cristiano Amon said connectivity, distributed computing, and sensing will form the foundation […]
3.Cisco rights the MWC narrative – fiber first, mobile later, as AI agents make minds race
While most of the big talk at MWC is about 5G and 6G, the most urgent AI infrastructure work is with fibre-heavy data centre interconnects. Cisco, and certain others, are capitalising on this east-west traffic surge, with mobile and edge networks positioned as a critical mid-term component in the AI networking stack. In sum – […]
4.Why telcos are struggling to meet enterprise expectations
A new report from the Capgemini Research Institute suggests many operators are struggling to deliver measurable business outcomes for enterprise customers. As enterprises accelerate digital transformation, telecom operators are facing growing pressure to move beyond connectivity and deliver measurable business outcomes. Yet a new report from the Capgemini Research Institute suggests many operators are struggling […]
5.Huawei wins eight GLOMO awards at MWC Barcelona 2026
[Barcelona, Spain, March 5, 2026] Huawei won eight prestigious Global Mobile (GLOMO) Awards at MWC Barcelona 2026. These awards included the Best Mobile Network Infrastructure, Best AI‑Powered Network Solution, Best Non-Terrestrial Network Solution, Best Mobile Operator Service for Connected Consumers, Best Mobile Innovation for Connected Health and Wellbeing, Best FinTech & Digital Commerce Innovation, Best […]
Semantic Scholar – Machine Learning
1.Source error – check feed
Telecom & 6G AI
1.Selfish Cooperation Towards Low-Altitude Economy: Integrated Multi-Service Deployment with Resilient Federated Reinforcement Learning
The low-altitude economy (LAE) is a rapidly emerging paradigm that builds a service-centric economic ecosystem through large-scale and sustainable uncrewed aerial vehicle (UAV)-enabled service provisioning, reflecting the transition of the 6G era from technological advancement toward commercial deployment. The significant market potential of LAE attracts an increasing number of service providers (SPs), resulting in intensified competition in service deployment. In this paper, we study a realistic LAE scenario in which multiple SPs dynamically deploy UAVs to deliver multiple services to user hotspots, aiming to jointly optimize communication and computation resource allocation. To resolve deployment competition among SPs, an authenticity-guaranteed auction mechanism is designed, and game-theoretic analysis is conducted to establish the sol...
2.Joint Visible Light and RF Backscatter Communications for Ambient IoT Network: Fundamentals, Applications, and Opportunities
The rapid growth of Internet of Things (IoT) devices in sixth-generation (6G) wireless networks raises significant generality and scalability challenges due to energy consumption, deployment complexity, and environmental impact. Ambient IoT (A-IoT), leveraging ambient energy harvesting (EH) for batteryless device operation, has emerged as a promising solution to address these challenges. Among various EH and communication techniques, visible light communication (VLC) integrated with ambient backscatter communication (AmBC) offers remarkable advantages, including energy neutrality, high reliability, and enhanced security. In this paper, we propose a joint VLC-AmBC architecture, emphasizing fundamental concepts, system designs, and practical implementations. We explore potential applications in environmental monitoring, healthcare, s...
3.Unseen Cost of Space Computing: Quantifying LEO Battery Aging via Physics-Driven Modeling
Low Earth Orbit (LEO) satellite constellations in the 6G era are evolving into intelligent in-orbit computational platforms, forming Space Computing Power Networks (SCPNs) to deliver global-scale computing services. However, the intensive computation within SCPN incurs a significant “unseen cost”: the frequent charge-discharge cycles accelerate the physical degradation of satellites' life-limiting and high-cost batteries, thereby threatening the long-term operational viability of such a system. Existing approaches, often relying on indirect metrics like Depth of Discharge (DoD) and neglecting the complex, nonlinear degradation process of battery aging, fail to accurately quantify this cost. To address this, we introduce a high-fidelity, physics-driven model that quantitatively links computational workload parameters to the nonlinear bat...
4.Selecting Offline Reinforcement Learning Algorithms for Stochastic Network Control
Offline Reinforcement Learning (RL) is a promising approach for next-generation wireless networks, where online exploration is unsafe and large amounts of operational data can be reused across the model lifecycle. However, the behavior of offline RL algorithms under genuinely stochastic dynamics -- inherent to wireless systems due to fading, noise, and traffic mobility -- remains insufficiently understood. We address this gap by evaluating Bellman-based (Conservative Q-Learning), sequence-based (Decision Transformers), and hybrid (Critic-Guided Decision Transformers) offline RL methods in an open-access stochastic telecom environment (mobile-env). Our results show that Conservative Q-Learning consistently produces more robust policies across different sources of stochasticity, making it a reliable default choice in lifecycle-driven AI man...
5.Non-Orthogonal HARQ-CC over SDR: A GNU Radio-Based Implementation
Hybrid Automatic Repeat Request (HARQ) schemes typically allocate all available resources to retransmit failed packets to ensure reliability. However, under stringent delay constraints, these schemes often exhibit low spectral efficiency and increased transmission latency. To address these challenges, this paper proposes an efficient Non-Orthogonal HARQ with Chase Combining (N-HARQ-CC) transmission strategy. Specifically, the proposed approach allocates a larger portion of retransmission resources to new data packets, reserving only a small fraction for retransmitting previously erroneous packets. This is based on the observation that only a small number of information bits are typically incorrect, enabling surplus communication resources to be utilized for transmitting new messages. The N-HARQ-CC scheme retransmits the same redundant ver...
arXiv Quantitative Finance
1.Extreme Value Analysis for Finite, Multivariate and Correlated Systems with Finance as an Example
Extreme values and the tail behavior of probability distributions are essential for quantifying and mitigating risk in complex systems of all kinds. In multivariate settings, accounting for correlations is crucial. Although extreme value analysis for infinite correlated systems remains an open challenge, we propose a practical framework for handling a large but finite number of correlated time series. We develop our approach for finance as a concrete example but emphasize its generality. We study the extremal behavior of high-frequency stock returns after rotating them into the eigenbasis of the correlation matrix. This separates and extracts various collective effects, including information on the correlated market as a whole and on correlated sectoral behavior from idiosyncratic features, while allowing us to use univariate tools of ext...
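The rotation step described here can be sketched on a toy factor model. Everything below (the factor loading, dimensions, and seed) is illustrative rather than the paper's data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy return panel: T observations of N correlated "stocks",
# driven by one common market factor plus idiosyncratic noise.
T, N = 2000, 10
market = rng.standard_normal((T, 1))
returns = 0.8 * market + rng.standard_normal((T, N))

# Standardize, estimate the correlation matrix, and diagonalize it.
z = (returns - returns.mean(0)) / returns.std(0)
C = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order

# Rotate returns into the eigenbasis: the column with the largest
# eigenvalue captures the collective "market" mode; the remaining
# columns are closer to idiosyncratic behavior.
rotated = z @ eigvecs
market_mode = rotated[:, -1]
print(eigvals[-1], np.corrcoef(market_mode, market[:, 0])[0, 1])
```

After the rotation, each column can be handed to univariate extreme-value tools, with the leading mode summarizing the correlated market as a whole.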
2.Asymptotic Separability of Diffusion and Jump Components in High-Frequency CIR and CKLS Models
This paper develops a robust parametric framework for jump detection in discretely observed CKLS-type jump-diffusion processes with high-frequency asymptotics, based on the minimum density power divergence estimator (MDPDE). The methodology exploits the intrinsic asymptotic scale separation between diffusion increments, which decay at rate $\sqrt{\Delta_n}$, and jump increments, which remain of non-vanishing stochastic magnitude. Using robust MDPDE-based estimators of the drift and diffusion coefficients, we construct standardized residuals whose extremal behavior provides a principled basis for statistical discrimination between continuous and discontinuous components. We establish that, over diffusion intervals, the maximum of the normalized residuals converges to the Gumbel extreme-value distribution, yielding an explicit and asymptotically...
3.Range-Based Volatility Estimators for Monitoring Market Stress: Evidence from Local Food Price Data
Range-based volatility estimators are widely used in financial econometrics to quantify risk and market stress, yet their application to local commodity markets remains limited. This paper shows how open-high-low-close (OHLC) volatility estimators can be adapted to monitor localized market distress across diverse development contexts, including conflict-affected settings, climate-exposed regions, remote and thinly traded markets, and import- and logistics-constrained urban hubs. Using monthly food price data from the World Bank's Real-Time Prices dataset, several volatility measures -- including the Parkinson, Garman-Klass, Rogers-Satchell, and Yang-Zhang estimators -- are constructed and evaluated against independently documented disruption timelines. Across settings, elevated volatility aligns with episodes linked to insecurity and mar...
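Two of the estimators named above have compact closed forms. The sketch below applies the Parkinson and Garman-Klass formulas to synthetic OHLC bars (simulated prices, not the World Bank series):

```python
import numpy as np

def parkinson(high, low):
    # Parkinson (1980): variance from the high-low range alone.
    return np.mean(np.log(high / low) ** 2) / (4.0 * np.log(2.0))

def garman_klass(open_, high, low, close):
    # Garman-Klass (1980): adds the open-to-close move to the range.
    hl = np.log(high / low) ** 2
    co = np.log(close / open_) ** 2
    return np.mean(0.5 * hl - (2.0 * np.log(2.0) - 1.0) * co)

# Simulate driftless log-price paths and record OHLC per period.
rng = np.random.default_rng(1)
n_periods, steps, sigma = 500, 200, 0.02
inc = rng.normal(0.0, sigma / np.sqrt(steps), (n_periods, steps))
logp = np.concatenate([np.zeros((n_periods, 1)),
                       np.cumsum(inc, axis=1)], axis=1)
prices = np.exp(logp)
open_, close = prices[:, 0], prices[:, -1]
high, low = prices.max(axis=1), prices.min(axis=1)

# Both estimates should land near the true per-period sigma of 0.02
# (discrete sampling of the path biases them slightly downward).
print(np.sqrt(parkinson(high, low)),
      np.sqrt(garman_klass(open_, high, low, close)))
```

Because the range uses intraperiod information, these estimators are markedly more efficient than close-to-close variance on the same number of bars, which is what makes them attractive for thinly observed local markets.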
4.Coupled Supply and Demand Forecasting in Platform Accommodation Markets
Tourism demand forecasting is methodologically mature, but it typically treats accommodation supply as fixed or exogenous. In platform-mediated short-term rentals, supply is elastic, decision-driven, and co-evolves with demand through pricing, information design, and interventions. I reframe the core issue as endogenous stock-out censoring: realized booked nights satisfy B_{k,t} <= min(D_{k,t}, S_{k,t}), so booking models that ignore supply learn a regime-specific ceiling and become fragile under policy changes and supply shocks. This narrated review synthesizes work from tourism forecasting, revenue management, two-sided market economics, and Bayesian time-series methods; develops a three-part coupling framework (behavioral, informational, intervention); and illustrates the identification failure with a toy simulation. I conclude with...
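The stock-out censoring identity above, and the kind of toy simulation the abstract mentions, can be illustrated in a few lines (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
demand = rng.poisson(100, T)          # latent nightly demand D_t
supply = np.full(T, 80)               # listed capacity S_t, binding on average
booked = np.minimum(demand, supply)   # stock-out censoring: B_t = min(D_t, S_t)

# A booking model fit on B_t learns the supply ceiling, not demand:
# booked nights average near 80 while latent demand averages near 100.
print(demand.mean(), booked.mean())

# Relaxing supply reveals the previously censored demand.
booked_relaxed = np.minimum(demand, 120)
print(booked_relaxed.mean())
```

The gap between the two booking means is exactly the identification failure described: a model trained in the supply-constrained regime extrapolates poorly once capacity policy or supply shocks move the ceiling.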
5.A Bayesian approach to out-of-sample network reconstruction
Networks underpin systems that range from finance to biology, yet their structure is often only partially observed. Current reconstruction methods typically fit the parameters of a model anew to each snapshot, thus offering no guidance to predict future configurations. Here, we develop a Bayesian approach that uses the information about past network snapshots to inform a prior and predict the subsequent ones, while quantifying uncertainty. Instantiated with a single-parameter fitness model, our method infers link probabilities from node strengths and carries information forward in time. When applied to the Electronic Market for Interbank Deposit across the years 1999-2012, our method accurately recovers the number of connections per bank at subsequent times, outperforming probabilistic benchmarks designed for analogous, link prediction ta...
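A minimal sketch of the single-parameter fitness model mentioned above, with the parameter z calibrated so the expected link count matches an observed snapshot. The node strengths and link count below are synthetic, and the Bayesian prior-updating across snapshots is omitted here:

```python
import numpy as np

def link_probs(s, z):
    # Fitness model: p_ij = z * s_i * s_j / (1 + z * s_i * s_j).
    g = z * np.outer(s, s)
    p = g / (1.0 + g)
    np.fill_diagonal(p, 0.0)
    return p

def calibrate_z(s, n_links, lo=1e-12, hi=1e6, iters=100):
    # Geometric bisection: the expected link count sum(p)/2
    # is monotonically increasing in z.
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if link_probs(s, mid).sum() / 2.0 > n_links:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo * hi)

# Synthetic snapshot: node strengths (e.g., bank exposures) and a
# hypothetical observed number of links.
rng = np.random.default_rng(3)
s = rng.lognormal(0.0, 1.0, 50)
z = calibrate_z(s, n_links=200)
p = link_probs(s, z)
expected_degrees = p.sum(axis=1)   # predicted connections per node
print(p.sum() / 2.0)               # ≈ 200 by construction
```

In the Bayesian version described in the abstract, past snapshots would inform a prior over z (and hence over the link probabilities) instead of refitting z from scratch at every snapshot.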
arXiv – 6G & Networking
1–5.The five articles and abstracts in this feed are identical to items 1–5 under Telecom & 6G AI above.
arXiv – Network Architecture (6G/Slicing)
1.Selfish Cooperation Towards Low-Altitude Economy: Integrated Multi-Service Deployment with Resilient Federated Reinforcement Learning
Same article and abstract as item 1 under Telecom & 6G AI above.
2.Joint Visible Light and RF Backscatter Communications for Ambient IoT Network: Fundamentals, Applications, and Opportunities
Same article and abstract as item 2 under Telecom & 6G AI above.
3.Service Function Chain Routing in LEO Networks Using Shortest-Path Delay Statistical Stability
Low Earth orbit (LEO) satellite constellations have become a critical enabler for global coverage, utilizing numerous satellites orbiting Earth at high speeds. By decomposing complex network services into lightweight service functions, network function virtualization (NFV) transforms global network services into diverse service function chains (SFCs), coordinated by resource-constrained LEOs. However, the dynamic topology of satellite networks, marked by highly variable inter-satellite link delays, poses significant challenges for designing efficient routing strategies that ensure reliable and low-latency communication. Many existing routing methods suffer from poor scalability and degraded performance, limiting their practical implementation. To address these challenges, this paper proposes a novel SFC routing approach that leverages the...
4.Selecting Offline Reinforcement Learning Algorithms for Stochastic Network Control
Same article and abstract as item 4 under Telecom & 6G AI above.
5.ORION: Intent-Aware Orchestration in Open RAN for SLA-Driven Network Management
The disaggregation of the Radio Access Network (RAN) introduces unprecedented flexibility but significant operational complexity, necessitating automated management frameworks. However, current Open RAN (O-RAN) orchestration relies on fragmented manual policies, lacking end-to-end intent assurance from high-level requirements to low-level configurations. In this paper, we propose ORION, an O-RAN compliant intent orchestration framework that integrates Large Language Models (LLMs) via the Model Context Protocol (MCP) to translate natural language intents into enforceable network policies. ORION leverages a hierarchical agent architecture, combining an MCP-based Service Management and Orchestration (SMO) layer for semantic translation with a Non-Real-Time RIC rApp and Near-Real-Time RIC xApp for closed-loop enforcement. Extensive evaluation...