Daily Briefing – Apr 12 (81 Articles)
Babak's Daily Briefing
Sunday, April 12, 2026
Sources: 17 | Total Articles: 81
6G World
1. SoftBank’s Physical AI push gives AI-RAN a sharper purpose
SoftBank is starting to give AI-RAN a more concrete job description: not just running AI workloads near the network, but serving as the real-time infrastructure layer for robots and other physical systems. The company’s recent materials suggest it wants to move the AI-RAN conversation from telecom architecture to real-world machine action.
2. South Korea puts 6G inside its national AI push
South Korea has unveiled a three-year national roadmap aimed at becoming one of the world’s top three AI powers by 2028, with 6G commercialization positioned as part of that broader push.
3. b-com’s Open XG Hub targets one of telecom’s biggest gaps: turning experimentation into deployment
In an interview with Peter Pietrzyk, Managing Director of 6GWorld, Patrick Savell, Head of Connectivity at b-com, said platforms such as Open XG Hub are designed to help bridge one of the industry’s most persistent challenges: moving promising ideas from research environments into deployable network systems. The bigger point is that, as telecom becomes more software-driven and AI-native, the bottleneck is increasingly less about invention and more about validation, integration, and operational readiness.
4. ODC’s $45M raise signals a bigger shift in AI-RAN, from network optimization to edge intelligence
ORAN Development Company said it has closed a $45 million Series A backed by Booz Allen, Cisco Investments, Nokia, NVIDIA, AT&T, MTN and Telecom Italia to scale its U.S.-based Odyssey platform, which it positions as an AI-native RAN architecture combining communications, sensing and edge intelligence. The company said it plans to accelerate commercial deployment through 2026.
5. Lockheed Martin’s NetSense points to a bigger shift: 5G as drone-detection infrastructure
Lockheed Martin’s latest NetSense prototype suggests that commercial 5G infrastructure could play a growing role in drone detection, adding momentum to the broader move toward sensing-enabled wireless networks.
AI Agents
1. PyVRP+: LLM-Driven Metacognitive Heuristic Evolution for Hybrid Genetic Search in Vehicle Routing Problems
Designing high-performing metaheuristics for NP-hard combinatorial optimization problems, such as the Vehicle Routing Problem (VRP), remains a significant challenge, often requiring extensive domain expertise and manual tuning. Recent advances have demonstrated the potential of large language models (LLMs) to automate this process through evolutionary search. However, existing methods are largely reactive, relying on immediate performance feedback to guide what are essentially black-box code mutations. Our work departs from this paradigm by introducing Metacognitive Evolutionary Programming (MEP), a framework that elevates the LLM to a strategic discovery agent. Instead of merely reacting to performance scores, MEP compels the LLM to engage in a structured Reason-Act-Reflect cycle, forcing it to explicitly diagnose failures, formulate des...
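The Reason-Act-Reflect cycle described above can be sketched as a search loop in which diagnoses accumulate and steer future proposals. This is a structural sketch only: `propose` and `diagnose` stand in for LLM calls, and the function names are hypothetical, not the paper's interface.

```python
def metacognitive_search(seed, propose, evaluate, diagnose, budget=20):
    """Skeleton of a Reason-Act-Reflect loop for heuristic evolution.

    propose(best, notes)          -> a new candidate heuristic (Act)
    evaluate(candidate)           -> a score, e.g. from VRP benchmark runs
    diagnose(cand, score, best)   -> a textual reflection on the outcome (Reflect)
    """
    best, best_score = seed, evaluate(seed)
    notes = []  # accumulated reflections that condition later proposals
    for _ in range(budget):
        candidate = propose(best, notes)                      # Act
        score = evaluate(candidate)
        notes.append(diagnose(candidate, score, best_score))  # Reflect
        if score > best_score:                                # keep improvements
            best, best_score = candidate, score
    return best, best_score
```

With toy stand-ins (e.g. `propose` incrementing a number, `evaluate` measuring distance to a target), the loop behaves like a simple hill climber; the paper's contribution is in what the LLM-backed `propose` and `diagnose` do, not in this outer loop.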
2. TraceSafe: A Systematic Assessment of LLM Guardrails on Multi-Step Tool-Calling Trajectories
As large language models (LLMs) evolve from static chatbots into autonomous agents, the primary vulnerability surface shifts from final outputs to intermediate execution traces. While safety guardrails are well-benchmarked for natural language responses, their efficacy remains largely unexplored within multi-step tool-use trajectories. To address this gap, we introduce TraceSafe-Bench, the first comprehensive benchmark specifically designed to assess mid-trajectory safety. It encompasses 12 risk categories, ranging from security threats (e.g., prompt injection, privacy leaks) to operational failures (e.g., hallucinations, interface inconsistencies), featuring over 1,000 unique execution instances. Our evaluation of 13 LLM-as-a-guard models and 7 specialized guardrails yields three critical findings: 1) Structural Bottleneck: Guardrail eff...
3. Strategic Persuasion with Trait-Conditioned Multi-Agent Systems for Iterative Legal Argumentation
Strategic interaction in adversarial domains such as law, diplomacy, and negotiation is mediated by language, yet most game-theoretic models abstract away the mechanisms of persuasion that operate through discourse. We present the Strategic Courtroom Framework, a multi-agent simulation environment in which prosecution and defense teams composed of trait-conditioned Large Language Model (LLM) agents engage in iterative, round-based legal argumentation. Agents are instantiated using nine interpretable traits organized into four archetypes, enabling systematic control over rhetorical style and strategic orientation. We evaluate the framework across 10 synthetic legal cases and 84 three-trait team configurations, totaling over 7,000 simulated trials using DeepSeek-R1 and Gemini 2.5 Pro. Our results show that heterogeneous teams with complem...
4. Cheap Talk, Empty Promise: Frontier LLMs easily break public promises for self-interest
Large language models are increasingly deployed as autonomous agents in multi-agent settings where they communicate intentions and take consequential actions with limited human oversight. A critical safety question is whether agents that publicly commit to actions break those promises when they can privately deviate, and what the consequences are for both themselves and the collective. We study deception as a deviation from a publicly announced action in one-shot normal-form games, classifying each deviation by its effect on individual payoff and collective welfare into four categories: win-win, selfish, altruistic, and sabotaging. By exhaustively enumerating announcement profiles across six canonical games, nine frontier models, and varying group sizes, we identify all opportunities for each deviation type and measure how often agents ex...
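The four-way deviation taxonomy above reduces to a function of two quantities: how the deviation changes the deviating agent's payoff, and how it changes the collective welfare of the other players. A minimal sketch (a hypothetical helper, not the paper's code; the treatment of zero deltas is my assumption):

```python
def classify_deviation(own_delta: float, others_delta: float) -> str:
    """Classify a deviation from a publicly announced action.

    own_delta:    change in the deviating agent's own payoff
    others_delta: change in the rest of the group's collective welfare
    Boundary convention (zero counted as non-positive) is an assumption.
    """
    if own_delta > 0 and others_delta > 0:
        return "win-win"
    if own_delta > 0:
        return "selfish"
    if others_delta > 0:
        return "altruistic"
    return "sabotaging"
```

Exhaustively enumerating announcement profiles in a normal-form game then amounts to scoring every (announcement, private action) pair with a function like this one.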
5. Springdrift: An Auditable Persistent Runtime for LLM Agents with Case-Based Memory, Normative Safety, and Ambient Self-Perception
We present Springdrift, a persistent runtime for long-lived LLM agents. The system integrates an auditable execution substrate (append-only memory, supervised processes, git-backed recovery), a case-based reasoning memory layer with hybrid retrieval (evaluated against a dense cosine baseline), a deterministic normative calculus for safety gating with auditable axiom trails, and continuous ambient self-perception via a structured self-state representation (the sensorium) injected each cycle without tool calls. These properties support behaviours difficult to achieve in session-bounded systems: cross-session task continuity, cross-channel context maintenance, end-to-end forensic reconstruction of decisions, and self-diagnostic behaviour. We report on a single-instance deployment over 23 days (19 operating days), during which the agent diagn...
Financial AI
1. Quantum Computing for Financial Transformation: A Review of Optimisation, Pricing, Risk, Machine Learning, and Post-Quantum Security
Quantum computing is becoming strategically relevant to finance because several core financial bottlenecks are already defined by combinatorial search, expectation estimation, rare-event analysis, representation learning, and long-horizon cryptographic resilience. This review examines that landscape across five connected domains: constrained portfolio optimisation, derivative pricing, tail-risk and scenario estimation, quantum machine learning, and post-quantum security. Rather than treating these topics as isolated demonstrations, the article studies them as linked layers of a financial-computation stack. Across all five domains, the review applies a common evaluative logic: identify the financial bottleneck, specify the relevant quantum primitive, compare it with an explicit classical benchmark, and assess the result under realistic imp...
2. SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series
We study the problem of generating synthetic time series that reproduce both marginal distributions and temporal dynamics, a central challenge in financial machine learning. Existing approaches typically fail to jointly model drift and stochastic volatility, as diffusion-based methods fix the volatility while martingale transport models ignore drift. We introduce the Schrödinger-Bass Bridge for Time Series (SBBTS), a unified framework that extends the Schrödinger-Bass formulation to multi-step time series. The method constructs a diffusion process that jointly calibrates drift and volatility and admits a tractable decomposition into conditional transport problems, enabling efficient learning. Numerical experiments on the Heston model demonstrate that SBBTS accurately recovers stochastic volatility and correlation parameters that prior Sch...
3. Sequential Audit Sampling with Statistical Guarantees
Financial statement auditing is conducted under a risk-based evidence approach to obtain reasonable assurance. In practice, auditors often perform additional sampling or related procedures when an initial sample does not provide a sufficient basis for a conclusion. Across jurisdictions, current standards and practice manuals acknowledge such extensions, while the statistical design of sequential audit procedures has not been fully explored. This study formulates audit sampling with additional, sequentially collected items as a sequential testing problem for a finite population under sampling without replacement. We define null and alternative hypotheses in terms of a tolerable deviation rate, specify stopping and decision rules, and formulate exact sequential boundary conditions in terms of finite-population error probabilities. For pract...
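One classical way to realize stopping and decision rules of this kind is a Wald-style sequential probability ratio test with exact hypergeometric likelihoods for sampling without replacement. The sketch below illustrates the mechanics under stated assumptions (a simple SPRT with two point hypotheses on the population error count); the paper's exact boundary conditions may differ.

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(k errors in a sample of n drawn without replacement
    from a population of N items containing K errors)."""
    if k < max(0, n - (N - K)) or k > min(n, K):
        return 0.0
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def sprt_audit(errors_seen, N, rate0, rate1, alpha=0.05, beta=0.10):
    """Sequential decision after each sampled item.

    errors_seen: cumulative error tally after item 1, 2, ...
    rate0/rate1: tolerable (H0) vs intolerable (H1) deviation rates.
    Returns ('accept'|'reject'|'continue', items_inspected).
    """
    K0, K1 = round(rate0 * N), round(rate1 * N)
    A = (1 - beta) / alpha   # reject H0 when likelihood ratio exceeds A
    B = beta / (1 - alpha)   # accept H0 when it falls below B
    for n, k in enumerate(errors_seen, start=1):
        p0 = hypergeom_pmf(k, N, K0, n)
        p1 = hypergeom_pmf(k, N, K1, n)
        if p0 == 0 and p1 == 0:
            continue
        lr = float('inf') if p0 == 0 else p1 / p0
        if lr >= A:
            return 'reject', n
        if lr <= B:
            return 'accept', n
    return 'continue', len(errors_seen)
```

For example, a run of error-free items lets the auditor accept the tolerable rate well before exhausting a fixed-size sample, which is the efficiency argument for sequential designs.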
4. Generative Path-Law Jump-Diffusion: Sequential MMD-Gradient Flows and Generalisation Bounds in Marcus-Signature RKHS
This paper introduces a novel generative framework for synthesising forward-looking, càdlàg stochastic trajectories that are sequentially consistent with time-evolving path-law proxies, thereby incorporating anticipated structural breaks, regime shifts, and non-autonomous dynamics. By framing path synthesis as a sequential matching problem on restricted Skorokhod manifolds, we develop the Anticipatory Neural Jump-Diffusion (ANJD) flow, a generative mechanism that effectively inverts the time-extended Marcus-sense signature. Central to this approach is the Anticipatory Variance-Normalised Signature Geometry (AVNSG), a time-evolving precision operator that performs dynamic spectral whitening on the signature manifold to ensure contractivity during volatile regime shifts and discrete aleatoric shocks. We provide a rigorous theoretic...
5. Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions
This paper introduces Anticipatory Reinforcement Learning (ARL), a novel framework designed to bridge the gap between non-Markovian decision processes and classical reinforcement learning architectures, specifically under the constraint of a single observed trajectory. In environments characterised by jump-diffusions and structural breaks, traditional state-based methods often fail to capture the essential path-dependent geometry required for accurate foresight. We resolve this by lifting the state space into a signature-augmented manifold, where the history of the process is embedded as a dynamical coordinate. By utilising a self-consistent field approach, the agent maintains an anticipated proxy of the future path-law, allowing for a deterministic evaluation of expected returns. This transition from stochastic branching to a single-pass...
GSMA Newsroom
1. From Rich Text to Video: RCS Universal Profile 4.0 has arrived
Summary available at source link.
2. Mobile Money accounted for $2 trillion in transactions in 2025, doubling since 2021 as active accounts continue to grow
Summary available at source link.
3. Strengthening the Global Fight Against Fraud and Scams – Takeaways from the Global Fraud Summit in Vienna
Summary available at source link.
4. GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
5. From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
Generative AI (arXiv)
1. AVGen-Bench: A Task-Driven Benchmark for Multi-Granular Evaluation of Text-to-Audio-Video Generation
Text-to-Audio-Video (T2AV) generation is rapidly becoming a core interface for media creation, yet its evaluation remains fragmented. Existing benchmarks largely assess audio and video in isolation or rely on coarse embedding similarity, failing to capture the fine-grained joint correctness required by realistic prompts. We introduce AVGen-Bench, a task-driven benchmark for T2AV generation featuring high-quality prompts across 11 real-world categories. To support comprehensive assessment, we propose a multi-granular evaluation framework that combines lightweight specialist models with Multimodal Large Language Models (MLLMs), enabling evaluation from perceptual quality to fine-grained semantic controllability. Our evaluation reveals a pronounced gap between strong audio-visual aesthetics and weak semantic reliability, including persistent...
2. OpenVLThinkerV2: A Generalist Multimodal Reasoning Model for Multi-domain Visual Tasks
Group Relative Policy Optimization (GRPO) has emerged as the de facto Reinforcement Learning (RL) objective driving recent advancements in Multimodal Large Language Models. However, extending this success to open-source multimodal generalist models remains heavily constrained by two primary challenges: the extreme variance in reward topologies across diverse visual tasks, and the inherent difficulty of balancing fine-grained perception with multi-step reasoning capabilities. To address these issues, we introduce Gaussian GRPO (G²RPO), a novel RL training objective that replaces standard linear scaling with non-linear distributional matching. By mathematically forcing the advantage distribution of any given task to strictly converge to a standard normal distribution, N(0,1), G²RPO theoretically ensures inter-task gradient...
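One way to force a group's advantage distribution to match N(0,1) exactly, rather than merely centering and scaling it as standard GRPO does, is a rank-based inverse-normal transform. This is my sketch of "non-linear distributional matching"; the paper's exact construction may differ.

```python
from statistics import NormalDist

def gaussian_advantages(rewards):
    """Map a group of rollout rewards to advantages whose empirical
    distribution matches N(0,1), via a rank-based inverse-normal
    transform (sketch; assumes no attention to tied rewards).
    """
    G = len(rewards)
    norm = NormalDist()  # standard normal
    order = sorted(range(G), key=lambda i: rewards[i])
    adv = [0.0] * G
    for rank, i in enumerate(order):
        # map rank to a centred quantile in (0, 1), then through Phi^-1
        adv[i] = norm.inv_cdf((rank + 0.5) / G)
    return adv
```

Unlike z-scoring, this mapping is invariant to the scale and skew of the raw rewards, which is exactly the property needed when reward topologies vary wildly across tasks.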
3. Demystifying OPD: Length Inflation and Stabilization Strategies for Large Language Models
On-policy distillation (OPD) trains student models under their own induced distribution while leveraging supervision from stronger teachers. We identify a failure mode of OPD: as training progresses, on-policy rollouts can undergo abrupt length inflation, causing truncated trajectories to dominate the training data. This truncation collapse coincides with abrupt repetition saturation and induces biased gradient signals, leading to severe training instability and sharp degradation in validation performance. We attribute this problem to the interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts. To address this issue, we propose StableOPD, a stabilized OPD framework that combines a reference-based divergence constraint with rollout mixture distillation. These ...
4. Ads in AI Chatbots? An Analysis of How Large Language Models Navigate Conflicts of Interest
Today's large language models (LLMs) are trained to align with user preferences through methods such as reinforcement learning. Yet models are beginning to be deployed not merely to satisfy users, but also to generate revenue for the companies that created them through advertisements. This creates the potential for LLMs to face conflicts of interest, where the most beneficial response to a user may not be aligned with the company's incentives. For instance, a sponsored product may be more expensive but otherwise equal to another; in this case, what does (and should) the LLM recommend to the user? In this paper, we provide a framework for categorizing the ways in which conflicting incentives might lead LLMs to change the way they interact with users, inspired by literature from linguistics and advertising regulation. We then present a suit...
5. What do Language Models Learn and When? The Implicit Curriculum Hypothesis
Large language models (LLMs) can perform remarkably complex tasks, yet the fine-grained details of how these capabilities emerge during pretraining remain poorly understood. Scaling laws on validation loss tell us how much a model improves with additional compute, but not what skills it acquires in which order. To remedy this, we propose the Implicit Curriculum Hypothesis: pretraining follows a compositional and predictable curriculum across models and data mixtures. We test this by designing a suite of simple, composable tasks spanning retrieval, morphological transformations, coreference, logical reasoning, and mathematics. Using these tasks, we track emergence points across four model families spanning sizes from 410M-13B parameters. We find that emergence orderings of when models reach fixed accuracy thresholds are strikingly consiste...
Hugging Face Daily Papers
1. What Drives Representation Steering? A Mechanistic Case Study on Steering Refusal
Applying steering vectors to large language models (LLMs) is an efficient and effective model alignment technique, but we lack an interpretable explanation for how it works: specifically, what internal mechanisms steering vectors affect and how this results in different model outputs. To investigate the causal mechanisms underlying the effectiveness of steering vectors, we conduct a comprehensive case study on refusal. We propose a multi-token activation patching framework and discover that different steering methodologies leverage functionally interchangeable circuits when applied at the same layer. These circuits reveal that steering vectors primarily interact with the attention mechanism through the OV circuit while largely ignoring the QK circuit; freezing all attention scores during steering drops performance by only 8.75% across t...
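The intervention being studied is simple to state: add a fixed direction to a layer's residual-stream activations at every token position. A minimal pure-Python sketch (shapes and the scaling parameter `alpha` are illustrative assumptions, not the paper's setup):

```python
from math import sqrt

def apply_steering(resid, vec, alpha=8.0):
    """Add a unit-normalized steering vector to each position of a
    layer's residual-stream activations.

    resid: list of [d_model] activation rows, one per token position
    vec:   steering direction of length d_model
    alpha: steering strength (hypothetical default)
    """
    norm = sqrt(sum(x * x for x in vec))
    unit = [x / norm for x in vec]
    return [[h + alpha * u for h, u in zip(row, unit)] for row in resid]
```

The paper's question is then mechanistic: once this perturbation is in the stream, does it act through the value pathway (OV) or the attention-pattern pathway (QK) downstream.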
2. ClawBench: Can AI Agents Complete Everyday Online Tasks?
AI agents may be able to automate your inbox, but can they automate other routine aspects of your life? Everyday online tasks offer a realistic yet unsolved testbed for evaluating the next generation of AI agents. To this end, we introduce ClawBench, an evaluation framework of 153 simple tasks that people need to accomplish regularly in their lives and work, spanning 144 live platforms across 15 categories, from completing purchases and booking appointments to submitting job applications. These tasks require demanding capabilities beyond existing benchmarks, such as obtaining relevant information from user-provided documents, navigating multi-step workflows across diverse platforms, and performing write-heavy operations such as filling in many detailed forms correctly. Unlike existing benchmarks that evaluate agents in offline sandboxes with static pag...
3. When Fine-Tuning Changes the Evidence: Architecture-Dependent Semantic Drift in Chest X-Ray Explanations
Transfer learning followed by fine-tuning is widely adopted in medical image classification due to consistent gains in diagnostic performance. However, in multi-class settings with overlapping visual features, improvements in accuracy do not guarantee stability of the visual evidence used to support predictions. We define semantic drift as systematic changes in the attribution structure supporting a model's predictions between transfer learning and full fine-tuning, reflecting potential shifts in underlying visual reasoning despite stable classification performance. Using a five-class chest X-ray task, we evaluate DenseNet201, ResNet50V2, and InceptionV3 under a two-stage training protocol and quantify drift with reference-free metrics capturing spatial localization and structural consistency of attribution maps. Across architectures, coa...
4. A Machine Learning Framework for Turbofan Health Estimation via Inverse Problem Formulation
Estimating the health state of turbofan engines is a challenging ill-posed inverse problem, hindered by sparse sensing and complex nonlinear thermodynamics. Research in this area remains fragmented, with comparisons limited by the use of unrealistic datasets and insufficient exploration of the exploitation of temporal information. This work investigates how to recover component-level health indicators from operational sensor data under realistic degradation and maintenance patterns. To support this study, we introduce a new dataset that incorporates industry-oriented complexities such as maintenance events and usage changes. Using this dataset, we establish an initial benchmark that compares steady-state and nonstationary data-driven models, and Bayesian filters, classic families of methods used to solve this problem. In addition to this ...
5. DMax: Aggressive Parallel Decoding for dLLMs
We present DMax, a new paradigm for efficient diffusion language models (dLLMs). It mitigates error accumulation in parallel decoding, enabling aggressive decoding parallelism while preserving generation quality. Unlike conventional masked dLLMs that decode through a binary mask-to-token transition, DMax reformulates decoding as a progressive self-refinement from mask embeddings to token embeddings. At the core of our approach is On-Policy Uniform Training, a novel training strategy that efficiently unifies masked and uniform dLLMs, equipping the model to recover clean tokens from both masked inputs and its own erroneous predictions. Building on this foundation, we further propose Soft Parallel Decoding. We represent each intermediate decoding state as an interpolation between the predicted token embedding and the mask embedding, enabling...
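The core representational move, interpolating between a predicted token embedding and the mask embedding, can be sketched in a few lines. The per-position confidence weighting is my assumption for illustration; the actual DMax interpolation schedule is not specified here.

```python
def soft_decode_state(token_emb, mask_emb, confidence):
    """Intermediate decoding state as an interpolation between the
    predicted token embedding and the mask embedding (sketch).

    token_emb, mask_emb: lists of [d]-dimensional rows, one per position
    confidence: per-position weights in [0, 1]; 0 = fully masked,
                1 = fully committed to the predicted token
    """
    return [
        [c * t + (1.0 - c) * m for t, m in zip(tok_row, mask_row)]
        for tok_row, mask_row, c in zip(token_emb, mask_emb, confidence)
    ]
```

Because the state varies continuously rather than flipping from mask to token, low-confidence positions can be revised on later refinement passes instead of being locked in, which is what permits the aggressive parallelism.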
IEEE Xplore AI
1. GoZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale
ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions. ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission-critical environments. Core components, including Saluki secure flight controllers, have reached TRL 8 and are deployed in customer systems. While initially developed for high-consequence mission environments, the same assurance challenges are...
2. AI Models Map the Colorado River’s Hard Choices
The Colorado River begins as snow. Every spring, the mountain snowpack of the Rockies melts into streams that feed into reservoirs that supply 40 million people across seven U.S. states. The system has worked, more or less, for a century. That century is over. By some measures, 2026 is shaping up to be the worst year the river has seen since records began. Flows are down 20 percent from 2000 levels. Lake Powell, the reservoir straddling Utah and Arizona, may drop below the threshold for generating hydropower before the year is out. The negotiations between the seven states over how to share what’s left have collapsed twice, and the U.S. federal government is threatening to impose its own plan. While the states argue and the river shrinks, a growing set of machine learning tools is being deployed across the basin. Federal water managers...
3. Decentralized Training Can Help Solve AI’s Energy Woes
Artificial intelligence harbors an enormous energy appetite. Such constant cravings are evident in the hefty carbon footprint of the data centers behind the AI boom and the steady increase over time of carbon emissions from training frontier AI models. No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization. Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where ...
4. Why AI Systems Fail Quietly
In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong. Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do. This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends...
5. AI Is Insatiable
While browsing our website a few weeks ago, I stumbled upon “How and When the Memory Chip Shortage Will End” by Senior Editor Samuel K. Moore. His analysis focuses on the current DRAM shortage caused by AI hyperscalers’ ravenous appetite for memory, a major constraint on the speed at which large language models run. Moore provides a clear explanation of the shortage, particularly for high bandwidth memory (HBM). As we and the rest of the tech media have documented, AI is a resource hog. AI electricity consumption could account for up to 12 percent of all U.S. power by 2028. Generative AI queries consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. Water consumption for cooling AI data centers is predicted to double or even quadruple by 2028 compared to 2023. But Moore’s reporting shines a light on an obscure ...
MIT Sloan Management
1. The Trap That Skilled Negotiators Miss
Say you walk into a car dealership determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all […]
2. Rethink Responsibility in the Age of AI
Early one morning in 2018, a self-driving Uber vehicle fatally struck a pedestrian in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a […]
3. Gain Consumer Insight With Generative AI
Marketing leaders often face a dilemma: Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus? Drawing on recent research, […]
4. Disintegrating the Org Chart: ServiceNow’s Jacqui Canney
In this episode of the Me, Myself, and AI podcast, Sam Ransbotham is joined by Jacqui Canney, chief people and AI enablement officer at ServiceNow. Jacqui outlines how the software company has embedded AI agents into processes like employee onboarding to automate tasks, personalize experiences, and free up people’s time to focus on higher-value work. […]
5. How to Reap Compound Benefits From Generative AI
In domain after domain, AI has compressed work that used to be expensive — generating drafts, code, prototypes, and analyses. The marginal cost of a first attempt has dropped sharply. What remains expensive is what happens after the output arrives: evaluating what gets generated. That involves separating […]
NBER Working Papers
1. Can Personal Access to Medical Expertise Overcome Vaccine Hesitancy? -- by D. Mark Anderson, Ron Diris, Raymond Montizaan, Daniel I. Rees
Using data on applicants to Dutch medical schools and their older relatives (i.e., parents, aunts, and uncles ages 60+), we estimate the effect of personal access to medical expertise on vaccine hesitancy. Leveraging variation in lottery outcomes that determine admission to medical schools, we find that having a physician in the family increases the likelihood of complying with government recommendations that anyone over the age of 59 receive a second booster dose of a COVID-19 vaccine. Our estimated effects are strongest for having a female physician in the family, suggesting important gender-based differences in how medical expertise is communicated.
2. Why Do Americans No Longer Work So Much More Than Non-Americans? -- by Serdar Birinci, Loukas Karabarbounis, Kurt See
In the 1990s, Americans used to work much more than non-Americans. Nowadays, about half of the gap in hours worked has reversed. To evaluate the convergence of working hours, we develop a tractable model of labor supply enriched with multiple sources of heterogeneity across individuals, an extensive margin of participation, multi-member households, and an elaborate system of taxes and benefits upon non-employment. Using detailed measurements from micro-level and aggregate datasets, we identify model parameters and sources of heterogeneity across individuals for various countries. We run a horse race between competing explanations and find that U.S. hours per person declined after 2000 owing mainly to the rise of government health benefits provided to the non-employed. Non-U.S. countries have generous benefits for the non-employed, but th...
3. AI Patents in the United States and China: Measurement, Organization, and Knowledge Flows -- by Hanming Fang, Xian Gu, Hanyin Yan, Wu Zhu
We develop a high-precision classifier to measure artificial intelligence (AI) patents by fine-tuning PatentSBERTa on manually labeled data from the USPTO’s AI Patent Dataset. Our classifier substantially improves the existing USPTO approach, achieving 97.0% precision, 91.3% recall, and a 94.0% F1 score, and it generalizes well to Chinese patents based on citation and lexical validation. Applying it to granted U.S. patents (1976–2023) and Chinese patents (2010–2023), we document rapid growth in AI patenting in both countries and broad convergence in AI patenting intensity and subfield composition, even as China surpasses the United States in recent annual patent counts. The organization of AI innovation nevertheless differs sharply: U.S. AI patenting is concentrated among large private incumbents and established hubs, whereas Chinese AI p...
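The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, and the paper's 97.0% precision and 91.3% recall do yield roughly 94.0% F1.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

Checking the arithmetic: `f1_score(0.970, 0.913)` comes out just above 0.940, matching the reported figure.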
4. Tariffs, Global Value Chains, and the Incidence of Protection: Evidence from US Automobiles -- by Luke Heeney, Christopher R. Knittel, Jasdeep Mandia
In many modern industries, firms compete in differentiated-product markets while relying on complex global value chains for intermediate inputs. In such settings, trade policies such as tariffs on vehicles and parts operate not only through consumer substitution and firm pricing, but also through firms’ cost structures and sourcing decisions. We develop a structural model of the U.S. automobile market that integrates random-coefficients demand, multiproduct firm pricing, and a flexible supply-side framework in which shocks to the cost of imported parts transmit imperfectly into manufacturers’ marginal costs. The model is disciplined by novel model-level data on imported-parts exposure and exploits exchange-rate variation to identify cost pass-through. Our counterfactual analysis quantifies the effects of alternative tariff policies on pri...
5.Learning How To Borrow in a Fintech World: Consumer Behavior When Search Costs Are (Near) Zero -- by Alex Günsberg, Camelia M. Kuhnen
Online loan marketplaces are changing consumer lending. Here we investigate consumer behavior in these markets with near-zero search costs. Using administrative data on 730,000 applications, 750,000 offers, and 200,000 individuals, together with credit registry records, we document four facts. First, substantial within-applicant dispersion in offered terms makes search highly valuable. Second, marketplace nudges mitigate choice complexity. Third, applicants search significantly, applying repeatedly, asking for different terms, and rejecting offers, in ways consistent with their creditworthiness. Fourth, dynamic adverse selection constrains search, as lenders penalize repeat applicants. Our findings highlight trade-offs between informational gains from search, and reputational and cognitive costs.
NY Fed - Liberty Street
1.A Closer Look at Emerging Market Resilience During Recent Shocks
A succession of shocks to the global economy in recent years has focused attention on the improved economic and financial resilience of emerging market economies. For some of these economies, this assessment is well-founded and highlights the fruits of deep, structural economic reforms since the 1990s. However, for a much larger universe of countries, the ability to weather shocks is still mixed and many remain vulnerable. In this post, we explore the divide between the two sets of countries and focus on the effects of recent economic shocks, including the ongoing conflict in the Middle East.
2.The Fed Has Two Tools to Influence Money Market Conditions
The Federal Reserve’s 2022-23 tightening cycle involved the use of two monetary policy tools: changes in administrative rates and changes in the size of its balance sheet. This post highlights the results of a recent Staff Report that explores how these tools affect money market conditions. Using confidential trade-level data, we find that both tools have significant effects on the pricing of funds sourced through repo. These results suggest that the Fed can manage how financing conditions are affected even as it influences economic conditions. For example, the Fed can lower its administrative rates to loosen economic conditions, while shrinking its balance sheet to maintain financing conditions in the money markets.
3.Treasury Market Liquidity Since April 2025
In this post, we examine the evolution of U.S. Treasury market liquidity over the past year, which has witnessed myriad economic and political developments. Liquidity worsened markedly one year ago as volatility increased following the announcement of higher-than-expected tariffs. Liquidity quickly improved when the tariff increases were partially rolled back and then remained fairly stable thereafter (through the end of our sample in February 2026), including after the recent Supreme Court decision striking down the emergency tariffs and the subsequent announcement of new tariffs.
4.Behind the ATM: Exploring the Structure of Bank Holding Companies
Many modern banking organizations are highly complex. A “bank” is often a larger structure made up of distinct entities, each subject to different regulatory, supervisory, and reporting requirements. For researchers and policymakers, understanding how these institutions are structured and how they have evolved over time is essential. In this post, we illustrate what a modern financial holding company looks like in practice, document how banks’ organizational structures have changed over time, and explain why these details matter for conducting accurate analyses of the financial system.
5.Sports Betting Is Everywhere, Especially on Credit Reports
Since 2018, more than thirty states have legalized mobile sports betting, leading to more than a half trillion dollars in wagers. In our recent Staff Report, we examine how legalized sports betting affects household financial health by comparing betting activity and consumer credit outcomes in states that legalized with those that have not. We find that legalization increases spending at online sportsbooks roughly tenfold, but betting does not stop at state boundaries. Nearby areas where betting is not legal still experience roughly 15 percent of the increase seen in counties where it is legal. At the same time, consumer financial health suffers. Our analysis finds rising delinquencies in participating states,...
Project Syndicate
1.Hedging Security in the Gulf Is Risky
The Gulf states have long hedged their diplomatic and security bets, attempting to strike a balance between those that might protect them from threats (especially the US) and those doing the threatening (such as Iran). But this approach has left them highly vulnerable and must be replaced by a unified approach to allies and foes.
2.The Iran War Has Ended a Year of Economic Promise
Before US President Donald Trump launched his war of choice against Iran, financial markets were booming in many countries, and private-sector confidence was recovering. But the outlook has suddenly become bleaker, and many governments have only limited policy buffers available to cushion the inflationary shock.
3.Africa Is Losing the Iran War
The fallout from the latest war in the Middle East has made visible a problem that many preferred to ignore: the international financial architecture is not fit for a world of cascading shocks, tightening fiscal constraints, and rising human need. Nowhere is this more obvious than in Africa.
4.A New Type of Democratic Transition
When trying to rebuild after illiberal rule, today’s pro-democracy governments discover booby traps laid by their predecessors and confront all kinds of veto players. But Poland’s experience has been instructive, highlighting the need for bold action and the right support from external actors.
5.How to Understand Emerging Risks
Understanding and anticipating risks is crucial to protecting society from the unpredictable. But the nature of risk is evolving at an unprecedented pace, and the processes and assumptions that guide insurers and their clients in managing uncertainty must be adjusted accordingly.
RCR Wireless
1.How GNSS satellites power positioning and timing
From smartphones to cars to critical infrastructure, these early satellites power some of the most modern technologies of today. GNSS is not a term that peppers media language often. Nevertheless, it underpins almost all technologies that we use in everyday…
2.Nvidia’s AI grid and the telco dilemma
Should telcos invest billions in edge GPU infrastructure or wait for physical AI use cases to mature? In sum – what we know: ABI Research recently put out an analysis looking at Nvidia’s AI grid concept and the bigger question…
3.Viavi, Ground Control bring resilient PNT to GNSS-denied environments
Escalating GNSS disruptions are pushing operators toward multi-source, multi-constellation alternatives to maintain continuity and trust in navigation data. A disturbing number of ships — both commercial and military — around the world are facing Global Navigation Satellite Systems (GNSS) disruptions.…
4.Movistar set for rapid sale and integration, as Telefónica quits Mexico
Nicolas Girard, chief executive at OXIO, which is buying Telefónica’s business in Mexico, told RCR that the sale will take up to nine months, and the subsequent integration will take just four months. In sum – what to know: Defined timeline –…
5.Qualcomm targets 40% opex reduction with agentic RAN tools
The Agentic RAN Management Service is designed for human-in-the-loop adoption on the path to AI-native 6G networks. During the recent Mobile World Congress, Qualcomm showcased AI-powered solutions for radio access network (RAN) management that are delivering real-world results today while…
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.Temporal Graph Neural Network for ISAC Target Detection and Tracking
Integrated sensing and communication (ISAC) is a key enabler of 6G, supporting environment-aware services. A fundamental sensing task in this setting is reliable multi-target detection and tracking. This paper proposes a temporal graph neural network (TGNN)-based tracking method that exploits delay and Doppler information from the wireless channel. The delay-Doppler map is modeled as a sequence of graphs, and tracking is formulated as a temporal node classification problem, enabling joint clustering and data association of dynamic targets. Using ray-tracing-based channel outputs as ground truth, the method is evaluated across multiple scenes with varying target positions, velocities, and trajectories and is compared with a Kalman filter baseline. Results demonstrate reduced normalized mean squared error (NMSE) in delay and Doppler, leadin...
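The paper's headline metric, normalized mean squared error in delay and Doppler, has a standard definition that can be sketched in a few lines. A minimal NumPy illustration with invented per-target delay values (not from the paper):

```python
import numpy as np

# Normalized mean squared error between estimated and true target
# parameters (e.g., per-target delay or Doppler tracks). The example
# values below are illustrative, not measurements from the paper.
def nmse(estimate, truth):
    estimate, truth = np.asarray(estimate, float), np.asarray(truth, float)
    return np.sum((estimate - truth) ** 2) / np.sum(truth ** 2)

true_delay = np.array([1.0, 2.0, 3.0])   # e.g., microseconds
est_delay  = np.array([1.1, 1.9, 3.2])   # tracker output (hypothetical)
print(nmse(est_delay, true_delay))
```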
2.Weighted Sum Rate Maximization for ITS-Aided Arrays in Multi-User MIMO
This work explores the potential of integrating an Intelligent Transmissive Surface (ITS) into an antenna array to improve beamforming performance. We show that integrating a moderate number of passive refractive elements into a small antenna array can significantly improve the Weighted Sum Rate (WSR). We investigate the optimization of the WSR under two distinct operational constraints: a Radiated Power (RP) constraint and a Transmitted Power (TP) constraint. Our analysis reveals that the choice between these constraints significantly impacts the design parameters of the ITS-aided array. By contrasting these approaches, we explore critical design and material parameters, including the array geometry, surface loss, and illumination strategies.
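The abstract does not state the exact objective, but the weighted sum rate is conventionally the weighted sum of per-user Shannon rates. A minimal sketch under that assumption, with illustrative weights and SINRs:

```python
import math

# Weighted sum rate in its common form: sum_k w_k * log2(1 + SINR_k).
# The weights and linear-scale SINRs below are illustrative assumptions,
# not values from the paper.
def weighted_sum_rate(weights, sinrs):
    return sum(w * math.log2(1.0 + s) for w, s in zip(weights, sinrs))

weights = [0.5, 0.3, 0.2]
sinrs   = [10.0, 5.0, 2.0]   # linear scale, not dB
print(weighted_sum_rate(weights, sinrs))  # bits/s/Hz, weighted
```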
3.Measurement-Based Ultra-Massive MIMO Statistical Channel Characterization and System Performance Evaluation for UMi Environments at 15 GHz FR3 Spectrum
This paper presents a detailed measurement campaign and a comprehensive analysis of 15 GHz ultra-massive multiple-input multiple-output (UM-MIMO) channels tailored for the urban microcell (UMi) environment. Channel sounding is performed over 14.875-15.125 GHz using a time-domain platform comprising a 128-element L-shaped transmit array and a 64-element square receive array. Four representative scenarios are investigated, namely near-field line-of-sight (LoS), near-field foliage-shaded, far-field foliage-shaded, and far-field LoS street canyon scenarios, resulting in 81 distinct transmit-receive links. Based on the measured data, conventional channel characteristics, including path loss, power delay angle profiles, delay spread, and angular spread, are characterized, while UM-MIMO-specific phenomena associated with near-field effects, spat...
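One of the characteristics the campaign extracts, RMS delay spread, is the power-weighted standard deviation of path delays in the power delay profile. A minimal sketch with invented taps (not from the measurements):

```python
import numpy as np

# RMS delay spread from a power delay profile (PDP): the power-weighted
# standard deviation of the path delays. Tap delays and powers below are
# illustrative, not from the 15 GHz measurement campaign.
def rms_delay_spread(delays_ns, powers_linear):
    d = np.asarray(delays_ns, float)
    p = np.asarray(powers_linear, float)
    p = p / p.sum()                      # normalize tap powers
    mean_delay = np.sum(p * d)
    return np.sqrt(np.sum(p * (d - mean_delay) ** 2))

delays = [0.0, 50.0, 120.0]   # ns
powers = [1.0, 0.5, 0.1]      # linear power per tap
print(rms_delay_spread(delays, powers))  # ns
```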
4.Networking-Aware Energy Efficiency in Agentic AI Inference: A Survey
The rapid emergence of Large Language Models (LLMs) has catalyzed Agentic artificial intelligence (AI), autonomous systems integrating perception, reasoning, and action into closed-loop pipelines for continuous adaptation. While unlocking transformative applications in mobile edge computing, autonomous systems, and next-generation wireless networks, this paradigm creates fundamental energy challenges through iterative inference and persistent data exchange. Unlike traditional AI where bottlenecks are computational Floating Point Operations (FLOPs), Agentic AI faces compounding computational and communication energy costs. In this survey, we propose an energy accounting framework identifying computational and communication costs across the Perception-Reasoning-Action cycle. We establish a unified taxonomy spanning model simplification, com...
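The survey's central point, that agentic AI pays both compute and communication energy per cycle, can be made concrete with a toy accounting model. All constants and stage figures below are illustrative assumptions, not numbers from the survey:

```python
# Toy per-cycle energy accounting in the spirit of the survey's framework:
# each Perception-Reasoning-Action stage incurs computational energy
# (FLOPs x energy-per-FLOP) plus communication energy (bytes exchanged x
# energy-per-byte). All constants here are illustrative assumptions.
J_PER_FLOP = 1e-11   # assumed accelerator efficiency (J/FLOP)
J_PER_BYTE = 1e-8    # assumed wireless transfer cost (J/byte)

def cycle_energy(stages):
    """stages: list of (flops, bytes_exchanged) tuples, one per stage."""
    return sum(f * J_PER_FLOP + b * J_PER_BYTE for f, b in stages)

# One hypothetical agent cycle: perception, reasoning, action.
stages = [(2e9, 5e6), (5e10, 1e6), (1e8, 2e5)]
print(cycle_energy(stages))  # joules per cycle
```

Note how, under these assumptions, communication dominates the perception stage while compute dominates reasoning, which is the compounding cost structure the survey highlights.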
5.FR3 for 6G Networks: A Comparative Study against FR1 and FR2 Across Diverse Environments
Motivated by increasing wireless capacity demands and 6G advancements, the newly defined Frequency Range 3 (FR3, 7.125-24.25 GHz), also known as the upper mid-band, has emerged as a promising spectrum candidate. It offers a balance between the large bandwidth potential of millimeter-wave bands and the favorable propagation characteristics of sub-6 GHz bands. As a result, the upper mid-band presents a strong opportunity to enhance both coverage and capacity, particularly for 6G systems and Cellular Vehicle-to-Base Station (C-V2B) communications. Harnessing this potential, however, requires addressing key technical challenges through accurate and realistic channel modeling across diverse urban environments, including Suburban, Urban, and HighRise Urban scenarios. To this end, we employ a ray-tracing tool to characterize downlink propagation...
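The propagation trade-off the abstract describes can be illustrated with the free-space path loss formula at representative carriers; this shows only the basic frequency penalty, not the full ray-tracing characterization the paper performs. The chosen carrier frequencies are illustrative:

```python
import math

# Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f/c).
# The carrier frequencies below are representative picks for FR1, FR3,
# and FR2 (illustrative, not the paper's simulation settings).
C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for band, f in [("FR1 3.5 GHz", 3.5e9), ("FR3 15 GHz", 15e9), ("FR2 28 GHz", 28e9)]:
    print(band, round(fspl_db(100.0, f), 1), "dB at 100 m")
```

At equal distance, FR3 sits between FR1 and FR2 in free-space loss, which is the "balance" the abstract refers to (more bandwidth than FR1, milder propagation penalty than FR2).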
arXiv Quantitative Finance
1.Skewness Dispersion and Stock Market Returns
Cross-sectional dispersion in firm-level realized skewness is significantly and negatively related to future stock market returns. The predictive power of skewness dispersion is robust to in-sample and out-of-sample estimation and is incremental over a broad set of existing predictors, with only a few alternatives retaining independent explanatory ability. Skewness dispersion also delivers substantial economic gains in portfolio allocation. Its forecasting power is concentrated in months with monetary policy announcements, reflecting an information-based mechanism. The empirical evidence suggests that skewness dispersion captures the gradual incorporation of macro news into prices, which is driven by variation in aggregate risk and valuation adjustments.
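The predictor itself is simple to construct: compute each firm's realized skewness from its return history, then take the cross-sectional dispersion of those skewness values. A stdlib-only sketch with invented returns (the estimator details in the paper may differ):

```python
import statistics

# Cross-sectional dispersion of firm-level realized skewness: compute
# each firm's sample skewness from its returns, then take the standard
# deviation across firms. Returns below are illustrative, not real data.
def sample_skewness(xs):
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

firm_returns = {
    "A": [0.01, -0.02, 0.03, 0.00, -0.01],
    "B": [0.02, 0.01, -0.05, 0.00, 0.01],
    "C": [-0.01, 0.04, -0.02, 0.01, 0.00],
}
skews = [sample_skewness(r) for r in firm_returns.values()]
dispersion = statistics.pstdev(skews)   # the month's predictor value
print(round(dispersion, 4))
```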
2.Climate-Aware Copula Models for Sovereign Rating Migration Risk
This paper develops a copula-based time-series framework for modelling sovereign credit rating activity and its dependence dynamics, with extensions incorporating climate risk. We introduce a mixed-difference transformation that maps discrete annual counts of sovereign rating actions into a continuous domain, enabling flexible copula modelling. Building on a MAG(1) copula process, we extend the framework to a MAGMAR(1,1) specification combining moving-aggregate and autoregressive dependence, and establish consistency and asymptotic normality of the associated maximum likelihood estimators. The empirical analysis uses a multi-agency panel of sovereign ratings and country-level carbon intensity, aggregated to an annual measure of global rating activity. Results reveal strong nonlinear dependence and pronounced clustering of high-activity ye...
3.SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series
We study the problem of generating synthetic time series that reproduce both marginal distributions and temporal dynamics, a central challenge in financial machine learning. Existing approaches typically fail to jointly model drift and stochastic volatility, as diffusion-based methods fix the volatility while martingale transport models ignore drift. We introduce the Schrödinger-Bass Bridge for Time Series (SBBTS), a unified framework that extends the Schrödinger-Bass formulation to multi-step time series. The method constructs a diffusion process that jointly calibrates drift and volatility and admits a tractable decomposition into conditional transport problems, enabling efficient learning. Numerical experiments on the Heston model demonstrate that SBBTS accurately recovers stochastic volatility and correlation parameters that prior Sch...
4.SoK of RWA Tokenization: A Systematization of Concepts, Architectures, and Legal Interoperability
The global financial architecture is undergoing a shift from intermediary-centric settlement to programmable infrastructure, to transmute trillions in static illiquid capital into active, high-velocity instruments. We argue that Real World Asset (RWA) tokenization represents a conceptual evolution beyond mere digitization, converting passive ledger entries into programmable economic agents capable of autonomous settlement and algorithmic collateralization. However, achieving such seamless capital efficiency necessitates resolving the fundamental friction between deterministic on-chain code and probabilistic off-chain reality, navigating the oracle problem and jurisdictional interoperability. This systematization of knowledge presents a taxonomy for the RWA lifecycle and deconstructs the multi-layered architecture, spanning legal custody, ...
5.Sequential Audit Sampling with Statistical Guarantees
Financial statement auditing is conducted under a risk-based evidence approach to obtain reasonable assurance. In practice, auditors often perform additional sampling or related procedures when an initial sample does not provide a sufficient basis for a conclusion. Across jurisdictions, current standards and practice manuals acknowledge such extensions, while the statistical design of sequential audit procedures has not been fully explored. This study formulates audit sampling with additional, sequentially collected items as a sequential testing problem for a finite population under sampling without replacement. We define null and alternative hypotheses in terms of a tolerable deviation rate, specify stopping and decision rules, and formulate exact sequential boundary conditions in terms of finite-population error probabilities. For pract...
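The finite-population error probabilities the paper works with come from the hypergeometric distribution (sampling without replacement). A minimal sketch of the exact tail probability for attribute sampling, with illustrative numbers that are not from the paper:

```python
from math import comb

# Exact finite-population tail used in attribute sampling: probability of
# observing at most k deviations in a sample of size n drawn without
# replacement from a population of N items containing D deviations
# (hypergeometric CDF). All numbers below are illustrative.
def hypergeom_cdf(k, N, D, n):
    return sum(comb(D, i) * comb(N - D, n - i) for i in range(k + 1)) / comb(N, n)

N, n = 1000, 60
tolerable = 0.05              # tolerable deviation rate
D = int(N * tolerable)        # 50 deviations if the rate equals the tolerable rate
p = hypergeom_cdf(1, N, D, n) # chance of seeing at most 1 deviation in the sample
print(round(p, 3))
```

A sequential design of the kind the paper formalizes would compare such tail probabilities against stopping boundaries as additional items are drawn.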
arXiv – 6G & Networking
1.Temporal Graph Neural Network for ISAC Target Detection and Tracking
Integrated sensing and communication (ISAC) is a key enabler of 6G, supporting environment-aware services. A fundamental sensing task in this setting is reliable multi-target detection and tracking. This paper proposes a temporal graph neural network (TGNN)-based tracking method that exploits delay and Doppler information from the wireless channel. The delay-Doppler map is modeled as a sequence of graphs, and tracking is formulated as a temporal node classification problem, enabling joint clustering and data association of dynamic targets. Using ray-tracing-based channel outputs as ground truth, the method is evaluated across multiple scenes with varying target positions, velocities, and trajectories and is compared with a Kalman filter baseline. Results demonstrate reduced normalized mean squared error (NMSE) in delay and Doppler, leadin...
2.FORSLICE: An Automated Formal Framework for Efficient PRB-Allocation towards Slicing Multiple Network Services
Network slicing is a modern 5G technology that provides an efficient network experience for diverse use cases. It partitions a single physical network infrastructure into multiple virtual networks, called slices, each equipped for specific services and requirements. In this work, we deal specifically with radio access network (RAN) slicing and resource allocation to RAN slices. In 5G, physical resource blocks (PRBs) are the fundamental units of radio resources, so our main focus is allocating PRBs to the slices efficiently. While addressing a spectrum of needs for multiple services, or the same services with multiple priorities, we need to ensure two vital system properties: i) fairness to every service type (i.e., providing the required resources and a desired range of throughput) even after prioritizing a particular servic...
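The tension the paper formalizes, guaranteed minima for every slice versus priority for some, can be illustrated with a toy allocator. This is not the paper's formal framework, just a sketch of the two properties; the slice definitions are invented:

```python
# Toy PRB allocator illustrating the two properties the paper verifies:
# every slice keeps a guaranteed minimum (fairness), and leftover PRBs
# go to slices in priority order. Slice parameters are illustrative.
def allocate_prbs(total_prbs, slices):
    """slices: (name, min_prbs, demand, priority); lower priority value = more urgent."""
    alloc = {name: min_prbs for name, min_prbs, _, _ in slices}
    remaining = total_prbs - sum(alloc.values())
    assert remaining >= 0, "guaranteed minima exceed capacity"
    for name, min_prbs, demand, _ in sorted(slices, key=lambda s: s[3]):
        extra = max(0, min(demand - min_prbs, remaining))
        alloc[name] += extra
        remaining -= extra
    return alloc

slices = [("eMBB", 20, 60, 2), ("URLLC", 10, 25, 1), ("mMTC", 5, 15, 3)]
print(allocate_prbs(100, slices))  # {'eMBB': 60, 'URLLC': 25, 'mMTC': 15}
```

A formal framework like FORSLICE would prove such properties (minimum guarantees, priority ordering) hold for all inputs rather than checking one example.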
3.Weighted Sum Rate Maximization for ITS-Aided Arrays in Multi-User MIMO
This work explores the potential of integrating an Intelligent Transmissive Surface (ITS) into an antenna array to improve beamforming performance. We show that integrating a moderate number of passive refractive elements into a small antenna array can significantly improve the Weighted Sum Rate (WSR). We investigate the optimization of the WSR under two distinct operational constraints: a Radiated Power (RP) constraint and a Transmitted Power (TP) constraint. Our analysis reveals that the choice between these constraints significantly impacts the design parameters of the ITS-aided array. By contrasting these approaches, we explore critical design and material parameters, including the array geometry, surface loss, and illumination strategies.
4.Measurement-Based Ultra-Massive MIMO Statistical Channel Characterization and System Performance Evaluation for UMi Environments at 15 GHz FR3 Spectrum
This paper presents a detailed measurement campaign and a comprehensive analysis of 15 GHz ultra-massive multiple-input multiple-output (UM-MIMO) channels tailored for the urban microcell (UMi) environment. Channel sounding is performed over 14.875-15.125 GHz using a time-domain platform comprising a 128-element L-shaped transmit array and a 64-element square receive array. Four representative scenarios are investigated, namely near-field line-of-sight (LoS), near-field foliage-shaded, far-field foliage-shaded, and far-field LoS street canyon scenarios, resulting in 81 distinct transmit-receive links. Based on the measured data, conventional channel characteristics, including path loss, power delay angle profiles, delay spread, and angular spread, are characterized, while UM-MIMO-specific phenomena associated with near-field effects, spat...
5.Networking-Aware Energy Efficiency in Agentic AI Inference: A Survey
The rapid emergence of Large Language Models (LLMs) has catalyzed Agentic artificial intelligence (AI), autonomous systems integrating perception, reasoning, and action into closed-loop pipelines for continuous adaptation. While unlocking transformative applications in mobile edge computing, autonomous systems, and next-generation wireless networks, this paradigm creates fundamental energy challenges through iterative inference and persistent data exchange. Unlike traditional AI where bottlenecks are computational Floating Point Operations (FLOPs), Agentic AI faces compounding computational and communication energy costs. In this survey, we propose an energy accounting framework identifying computational and communication costs across the Perception-Reasoning-Action cycle. We establish a unified taxonomy spanning model simplification, com...
arXiv – Network Architecture (6G/Slicing)
1.FORSLICE: An Automated Formal Framework for Efficient PRB-Allocation towards Slicing Multiple Network Services
Network slicing is a modern 5G technology that provides an efficient network experience for diverse use cases. It partitions a single physical network infrastructure into multiple virtual networks, called slices, each equipped for specific services and requirements. In this work, we deal specifically with radio access network (RAN) slicing and resource allocation to RAN slices. In 5G, physical resource blocks (PRBs) are the fundamental units of radio resources, so our main focus is allocating PRBs to the slices efficiently. While addressing a spectrum of needs for multiple services, or the same services with multiple priorities, we need to ensure two vital system properties: i) fairness to every service type (i.e., providing the required resources and a desired range of throughput) even after prioritizing a particular servic...
2.Enhancing Secure Intent-Based Networking with an Agentic AI: The EU Project MARE Approach
In the EU project MARE, a novel plane was proposed and used in combination with intent-based networking (IBN), allowing the operator to focus on what, rather than on how. Recently, LLMs have been successfully employed to translate high-level intents into low-level actions. The open challenge is to understand how IBN can be effectively enhanced with LLMs and emerging agentic AI for security purposes. Enhancing IBN with an agentic AI paradigm introduces significant challenges that existing solutions do not fully address. This paper proposes an enhanced IBN framework with a strong security focus toward agentic AI. We address the architectural and security requirements for a multi-agent intent-based system (IBS) architecture, including a multi-domain IBN. We propose a hierarchical multi-agent and multi-vendor architecture that can also...
3.Advanced Holographic Multi-Antenna Solutions for Global Non-Terrestrial Network Integration in IMT-2030 Systems
Sixth-generation (6G) networks are expected to provide ubiquitous connectivity across terrestrial and non-terrestrial domains. This will be possible by integrating non-terrestrial networks (NTNs) to extend coverage to underserved areas. Antennas are central to this vision, with multiple-input multiple-output (MIMO) technologies receiving the most attention due to their ability to exploit spatial multiplexing to improve link capacity and reliability. However, conventional MIMO can consume significant energy, as each antenna element typically requires an independent RF chain. This limitation is particularly critical in non-terrestrial systems, where onboard energy resources are limited. Holographic MIMO (HMIMO) has emerged as a promising alternative in this context. These systems are based on theoretically continuous apertures, where radiat...
4.Reimagining RAN Automation in 6G: An Agentic AI Framework with Hierarchical Online Decision Transformer
In this paper, we propose an Agentic Artificial Intelligence (AI) framework for wireless networks. The framework coordinates a pool of AI agents guided by Natural Language (NL) inputs from a human operator. At its core, the super agent is powered by a Hierarchical Online Decision Transformer (H-ODT). It orchestrates three categories of agents: (i) inter-slice, intra-slice resource allocation agents, (ii) network application orchestration agents, and (iii) self-healing agents. The orchestration takes place with the help of an Agentic Retrieval-Augmented Generation (RAG) module that integrates knowledge from heterogeneous sources. In this proposed methodology, the super agent directly interfaces with operators and generates sequential policies to activate relevant agents. The proposed framework is evaluated against three state-of-the-art ba...
5.RL-Loop: Reinforcement Learning-Driven Real-Time 5G Slice Control for Connected and Autonomous Mobility Services
Smart and connected mobility systems rely on 5G edge infrastructure to support real-time communication, control, and service differentiation. Achieving this requires adaptive resource management mechanisms that can react to rapidly changing traffic conditions. In this paper, we propose RL-Loop, a closed-loop reinforcement learning framework for real-time CPU resource control in 5G network slicing environments supporting connected mobility services. RL-Loop employs a Proximal Policy Optimization (PPO) agent that continuously observes slice-level key performance indicators and adjusts edge CPU allocations at one-second granularity on a real testbed. The framework leverages real-time observability and feedback to enable adaptive, software-defined edge intelligence. Experimental results suggest that RL-Loop can reduce average CPU allocation b...
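The closed-loop structure described here (observe slice KPIs each second, decide a CPU adjustment, apply it) can be sketched as a skeleton. A simple threshold rule stands in for the paper's PPO agent, and all KPI values and thresholds are illustrative:

```python
import random

# Skeleton of a closed control loop in the spirit of RL-Loop: observe a
# slice-level KPI, choose a CPU-share adjustment, apply it. A threshold
# rule stands in here for the paper's PPO agent; values are illustrative.
def policy(latency_ms, target_ms=10.0):
    """Return a CPU-share delta: scale up on breach, trim when far under target."""
    if latency_ms > target_ms:
        return +0.1
    if latency_ms < 0.5 * target_ms:
        return -0.05
    return 0.0

def control_loop(steps=5, seed=0):
    rng = random.Random(seed)
    cpu_share = 0.5
    for _ in range(steps):                 # one iteration per second in the paper's setup
        latency = rng.uniform(2.0, 20.0)   # observed slice KPI (simulated)
        cpu_share = min(1.0, max(0.1, cpu_share + policy(latency)))
    return cpu_share

print(control_loop())
```

In the actual system, the threshold rule would be replaced by a learned PPO policy observing the full KPI vector and trained against a reward balancing latency and resource use.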