Daily Briefing – Mar 10 (96 Articles)
Babak's Daily Briefing
Tuesday, March 10, 2026
Sources: 20 | Total Articles: 96
6G World
1.SpaceRAN: Airbus UpNext explores software-defined 5G NTN from orbit
Airbus UpNext has launched its SpaceRAN (Space Radio Access Network) demonstrator, a key initiative to advance standardised 5G…
2.SoftBank’s Transformer-Based AI-RAN Hits 30% Uplink Gain at Sub-Millisecond Latency
On August 21, 2025, SoftBank published results from a live, standards-compliant AI-RAN trial that replaces parts of classical signal processing with a lightweight Transformer.
3.6G as a Platform for Value
Reframing the Future with NGMN’s Chairman, Laurent Leboucher By Piotr (Peter) Pietrzyk, Managing Editor, 6GWorld.com In the race…
4.SoftBank Road-Tests 7 GHz in Central Tokyo
SoftBank and Nokia have begun outdoor field trials in Tokyo’s Ginza district using 7 GHz spectrum, installing three pre-commercial base stations to compare coverage and radio characteristics against today’s sub-6 GHz 5G sites.
5.NXP’s Acquisition of TTTech Auto Signals Growing Focus on Middleware for Software-Defined Vehicles
On June 17, 2025, NXP Semiconductors finalized its acquisition of TTTech Auto—a strategic move to integrate TTTech’s flagship…
AI Agents
1.Agentic Critical Training
Training large language models (LLMs) as autonomous agents often begins with imitation learning, but it only teaches agents what to do without understanding why: agents never contrast successful actions against suboptimal alternatives and thus lack awareness of action quality. Recent approaches attempt to address this by introducing self-reflection supervision derived from contrasts between expert and alternative actions. However, the training paradigm fundamentally remains imitation learning: the model imitates pre-constructed reflection text rather than learning to reason autonomously. We propose Agentic Critical Training (ACT), a reinforcement learning paradigm that trains agents to identify the better action among alternatives. By rewarding whether the model's judgment is correct, ACT drives the model to autonomously develop reasoning...
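The reward at the heart of ACT (score 1 when the agent correctly identifies the expert action among alternatives) can be sketched in a few lines. The policy interface, candidate format, and scoring loop below are illustrative assumptions, not the paper's implementation:

```python
def act_reward(model_choice: int, expert_index: int) -> float:
    """Binary reward: 1.0 when the agent picked the expert (better)
    action among the candidates, else 0.0."""
    return 1.0 if model_choice == expert_index else 0.0

def judgment_step(policy, candidates, expert_index):
    """One scoring step of the comparison task. `policy` maps a list of
    candidate actions to the index it judges best; a real RL trainer
    would scale the chosen action's log-probability by this reward."""
    choice = policy(candidates)
    return act_reward(choice, expert_index)

# Toy policy that always judges the first candidate best
pick_first = lambda cands: 0
print(judgment_step(pick_first, ["open the drawer", "force the drawer"], 0))  # 1.0
```

The point of the binary signal is that the model must produce its own comparison reasoning to earn it, rather than imitating pre-written reflection text.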
2.Reachability-based Temporal Logic Verification for Reliable LLM-guided Human-Autonomy Teaming
We propose a reachability-based framework for reliable LLM-guided human-autonomy teaming (HAT) using signal temporal logic (STL). In the proposed framework, LLM is leveraged as a translator that transfers natural language commands given by a human operator into corresponding STL specifications or vice versa. An STL feasibility filter (SFF) is proposed to check the feasibility of the generated STL. The SFF first decomposes the complex and nested LLM translation into a set of simpler subformulas for parallelization and informative feedback generation. The reachability analysis method is then applied to verify if each subformula is feasible for a target dynamical system: if feasible, perform mission planning, otherwise, reject it. The proposed SFF can identify infeasible subformulas, more than simply providing the boolean verification result...
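The decompose-then-verify flow of the SFF can be illustrated with a toy tuple encoding of STL and a one-dimensional reachability check; both the encoding and the dynamics below are stand-ins for the paper's formalism:

```python
def decompose(formula):
    """Split a top-level conjunction into subformulas. Hypothetical
    tuple encoding: ('and', f1, f2, ...) or ('F', deadline, region)."""
    if formula[0] == 'and':
        subs = []
        for f in formula[1:]:
            subs.extend(decompose(f))
        return subs
    return [formula]

def feasible(sub, x0, vmax):
    """Toy reachability check for a 1D robot with speed limit vmax:
    ('F', deadline, (lo, hi)) is feasible iff the region is reachable
    from x0 within the deadline."""
    _, deadline, (lo, hi) = sub
    dist = 0.0 if lo <= x0 <= hi else min(abs(x0 - lo), abs(x0 - hi))
    return dist <= vmax * deadline

spec = ('and', ('F', 5.0, (4.0, 6.0)), ('F', 1.0, (100.0, 101.0)))
subs = decompose(spec)
verdicts = [feasible(s, x0=0.0, vmax=2.0) for s in subs]
print(verdicts)  # [True, False]: reject, and report which subformula fails
```

This mirrors the feedback property the abstract emphasizes: rather than a single boolean verdict, the filter points at the specific infeasible subformula.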
3.A Hierarchical Error-Corrective Graph Framework for Autonomous Agents with LLM-Based Action Generation
We propose a Hierarchical Error-Corrective Graph Framework for Autonomous Agents with LLM-Based Action Generation (HECG), which incorporates three core innovations: (1) Multi-Dimensional Transferable Strategy (MDTS): by integrating task quality metrics (Q), confidence/cost metrics (C), reward metrics (R), and LLM-based semantic reasoning scores (LLM-Score), MDTS achieves multi-dimensional alignment between quantitative performance and semantic context, enabling more precise selection of high-quality candidate strategies and effectively reducing the risk of negative transfer. (2) Error Matrix Classification (EMC): unlike simple confusion matrices or overall performance metrics, EMC provides structured attribution of task failures by categorizing errors into ten types, such as Strategy Errors (Strategy Whe) and Script Parsing Errors (Script-Parsing-...
4.ConflictBench: Evaluating Human-AI Conflict via Interactive and Visually Grounded Environments
As large language models (LLMs) evolve into autonomous agents capable of acting in open-ended environments, ensuring behavioral alignment with human values becomes a critical safety concern. Existing benchmarks, focused on static, single-turn prompts, fail to capture the interactive and multi-modal nature of real-world conflicts. We introduce ConflictBench, a benchmark for evaluating human-AI conflict through 150 multi-turn scenarios derived from prior alignment queries. ConflictBench integrates a text-based simulation engine with a visually grounded world model, enabling agents to perceive, plan, and act under dynamic conditions. Empirical results show that while agents often act safely when human harm is immediate, they frequently prioritize self-preservation or adopt deceptive strategies in delayed or low-risk settings. A regret test f...
5.Uncertainty Mitigation and Intent Inference: A Dual-Mode Human-Machine Joint Planning System
Effective human-robot collaboration in open-world environments requires joint planning under uncertain conditions. However, existing approaches often treat humans as passive supervisors, preventing autonomous agents from becoming human-like teammates that can actively model teammate behaviors, reason about knowledge gaps, query, and elicit responses through communication to resolve uncertainties. To address these limitations, we propose a unified human-robot joint planning system designed to tackle dual sources of uncertainty: task-relevant knowledge gaps and latent human intent. Our system operates in two complementary modes. First, an uncertainty-mitigation joint planning module enables two-way conversations to resolve semantic ambiguity and object uncertainty. It utilizes an LLM-assisted active elicitation mechanism and a hypothesis-au...
AI Computation & Hardware
1.ARC-AGI-2 Technical Report
arXiv:2603.06590v1 Announce Type: new Abstract: The Abstraction and Reasoning Corpus (ARC) is designed to assess generalization beyond pattern matching, requiring models to infer symbolic rules from very few examples. In this work, we present a transformer-based system that advances ARC performance by combining neural inference with structure-aware priors and online task adaptation. Our approach is built on four key ideas. First, we reformulate ARC reasoning as a sequence modeling problem using a compact task encoding with only 125 tokens, enabling efficient long-context processing with a modified LongT5 architecture. Second, we introduce a principled augmentation framework based on group symmetries, grid traversals, and automata perturbations, enforcing invariance to representation changes. Third, we apply test-time training (TTT) with ...
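The group-symmetry part of the augmentation framework is straightforward to reproduce for ARC-style grids. This sketch applies the eight D4 symmetries of the square (the paper's full augmentation set, with grid traversals and automata perturbations, is broader):

```python
import numpy as np

def d4_augmentations(grid: np.ndarray):
    """The 8 symmetries of the square (4 rotations, each with and
    without a horizontal flip), a standard invariance group for
    ARC-style grid tasks."""
    out = []
    g = grid
    for _ in range(4):
        out.append(g)
        out.append(np.fliplr(g))
        g = np.rot90(g)
    return out

grid = np.array([[1, 2], [3, 4]])
augs = d4_augmentations(grid)
print(len(augs))                         # 8
print(len({a.tobytes() for a in augs}))  # 8 distinct views of this grid
```

Training on all eight views enforces the invariance the abstract describes: the inferred rule must not depend on the grid's orientation.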
2.Hierarchical Latent Structures in Data Generation Process Unify Mechanistic Phenomena across Scale
arXiv:2603.06592v1 Announce Type: new Abstract: Contemporary studies have uncovered many puzzling phenomena in the neural information processing of Transformer-based language models. Building a robust, unified understanding of these phenomena requires disassembling a model within the scope of its training. While the intractable scale of pretraining corpora limits a bottom-up investigation in this direction, simplistic assumptions of the data generation process limit the expressivity and fail to explain complex patterns. In this work, we use probabilistic context-free grammars (PCFGs) to generate synthetic corpora that are faithful and computationally efficient proxies for web-scale text corpora. We investigate the emergence of three mechanistic phenomena: induction heads, function vectors, and the Hydra effect, under our designed data ge...
3.Hierarchical Embedding Fusion for Retrieval-Augmented Code Generation
arXiv:2603.06593v1 Announce Type: new Abstract: Retrieval-augmented code generation often conditions the decoder on large retrieved code snippets. This ties online inference cost to repository size and introduces noise from long contexts. We present Hierarchical Embedding Fusion (HEF), a two-stage approach to repository representation for code completion. First, an offline cache compresses repository chunks into a reusable hierarchy of dense vectors using a small fuser model. Second, an online interface maps a small number of retrieved vectors into learned pseudo-tokens that are consumed by the code generator. This replaces thousands of retrieved tokens with a fixed pseudo-token budget while preserving access to repository-level information. On RepoBench and RepoEval, HEF with a 1.8B-parameter pipeline achieves exact-match accuracy com...
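The fixed pseudo-token budget can be illustrated in plain NumPy. The sizes, the single linear projection, and the dot-product retrieval below are placeholders for the learned fuser and online interface described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_model, budget = 64, 128, 8      # hypothetical sizes

# Offline: each repository chunk is compressed to one dense vector
# (stand-in for the small fuser model's hierarchy).
cache = rng.normal(size=(1000, d_embed))   # 1000 chunks -> 1000 vectors

# Online: retrieve the top-`budget` vectors for a query, then map each
# into the generator's embedding space as a pseudo-token.
W_proj = rng.normal(size=(d_embed, d_model))   # learned in the paper

query = rng.normal(size=d_embed)
scores = cache @ query
top = np.argsort(scores)[-budget:]
pseudo_tokens = cache[top] @ W_proj            # (8, 128)

print(pseudo_tokens.shape)  # fixed budget, regardless of repository size
```

The payoff is the one the abstract names: the decoder consumes 8 pseudo-tokens instead of thousands of retrieved code tokens, decoupling inference cost from repository size.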
4.A Coin Flip for Safety: LLM Judges Fail to Reliably Measure Adversarial Robustness
arXiv:2603.06594v1 Announce Type: new Abstract: Automated "LLM-as-a-Judge" frameworks have become the de facto standard for scalable evaluation across natural language processing. For instance, in safety evaluation, these judges are relied upon to evaluate harmfulness in order to benchmark the robustness of safety against adversarial attacks. However, we show that existing validation protocols fail to account for substantial distribution shifts inherent to red-teaming: diverse victim models exhibit distinct generation styles, attacks distort output patterns, and semantic ambiguity varies significantly across jailbreak scenarios. Through a comprehensive audit using 6642 human-verified labels, we reveal that the unpredictable interaction of these shifts often causes judge performance to degrade to near random chance. This stands in...
5.Rethinking Personalization in Large Language Models at the Token Level
arXiv:2603.06595v1 Announce Type: new Abstract: With large language models (LLMs) now performing strongly across diverse tasks, there is growing demand for them to personalize outputs for individual users. Personalization is typically framed as an additional layer on top of a base NLP task, requiring model responses to meet user-specific needs while still accomplishing the underlying task. From a token-level perspective, different tokens in a response contribute to personalization to varying degrees. Tokens with higher personalization relevance should therefore receive greater emphasis when developing personalized LLMs. However, accurately estimating such personalization degrees remains challenging. To address this challenge, we propose PerContrast, a self-contrast method that estimates each output token's dependence on user-specific inf...
AI Machine Learning
1.vLLM Hook v0: A Plug-in for Programming Model Internals on vLLM
arXiv:2603.06588v1 Announce Type: new Abstract: Modern artificial intelligence (AI) models are deployed on inference engines to optimize runtime efficiency and resource allocation, particularly for transformer-based large language models (LLMs). The vLLM project is a major open-source library to support model serving and inference. However, the current implementation of vLLM limits programmability of the internal states of deployed models. This prevents the use of popular test-time model alignment and enhancement methods. For example, it prevents the detection of adversarial prompts based on attention patterns or the adjustment of model responses based on activation steering. To bridge this critical gap, we present vLLM Hook, an open-source plug-in to enable the programming of internal states for vLLM models. Based on a configuration file ...
2.How Attention Sinks Emerge in Large Language Models: An Interpretability Perspective
arXiv:2603.06591v1 Announce Type: new Abstract: Large Language Models (LLMs) often allocate disproportionate attention to specific tokens, a phenomenon commonly referred to as the attention sink. While such sinks are generally considered detrimental, prior studies have identified a notable exception: the model's consistent emphasis on the first token of the input sequence. This structural bias can influence a wide range of downstream applications and warrants careful consideration. Despite its prevalence, the precise mechanisms underlying the emergence and persistence of attention sinks remain poorly understood. In this work, we trace the formation of attention sinks around the first token of the input. We identify a simple mechanism, referred to as the P0 Sink Circuit, that enables the model to recognize the token at position zero and induce...
3.FuzzingRL: Reinforcement Fuzz-Testing for Revealing VLM Failures
arXiv:2603.06600v1 Announce Type: new Abstract: Vision Language Models (VLMs) are prone to errors, and identifying where these errors occur is critical for ensuring the reliability and safety of AI systems. In this paper, we propose an approach that automatically generates questions designed to deliberately induce incorrect responses from VLMs, thereby revealing their vulnerabilities. The core of this approach lies in fuzz testing and reinforcement finetuning: we transform a single input query into a large set of diverse variants through vision and language fuzzing. Based on the fuzzing outcomes, the question generator is further instructed by adversarial reinforcement fine-tuning to produce increasingly challenging queries that trigger model failures. With this approach, we can consistently drive down a target VLM's answer accuracy -- fo...
4.Switchable Activation Networks
arXiv:2603.06601v1 Announce Type: new Abstract: Deep neural networks, and more recently large-scale generative models such as large language models (LLMs) and large vision-action models (LVAs), achieve remarkable performance across diverse domains, yet their prohibitive computational cost hinders deployment in resource-constrained environments. Existing efficiency techniques offer only partial remedies: dropout improves regularization during training but leaves inference unchanged, while pruning and low-rank factorization compress models post hoc into static forms with limited adaptability. Here we introduce SWAN (Switchable Activation Networks), a framework that equips each neural unit with a deterministic, input-dependent binary gate, enabling the network to learn when a unit should be active or inactive. This dynamic control mechanism ...
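The core mechanism, a deterministic input-dependent binary gate per unit, can be sketched as follows. The threshold-based gating parameterization here is an assumption for illustration, not SWAN's actual design:

```python
import numpy as np

def swan_layer(x, W, b, Wg, bg):
    """A linear layer whose units are switched on/off by a
    deterministic, input-dependent binary gate: inactive units
    contribute exactly zero, so their compute can be skipped."""
    pre = x @ W + b
    gate = ((x @ Wg + bg) > 0).astype(pre.dtype)   # hard 0/1 gate
    return pre * gate, gate

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 16))                       # batch of 4 inputs
W, b = rng.normal(size=(16, 32)), np.zeros(32)
Wg, bg = rng.normal(size=(16, 32)), np.zeros(32)   # gating parameters
y, gate = swan_layer(x, W, b, Wg, bg)
print(y.shape, float(gate.mean()))  # gate.mean() = fraction of active units
```

Unlike dropout (random, training-only) or pruning (static), the gate is a function of the input, which is what makes the active sub-network adaptive at inference time.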
5.Khatri-Rao Clustering for Data Summarization
arXiv:2603.06602v1 Announce Type: new Abstract: As datasets continue to grow in size and complexity, finding succinct yet accurate data summaries poses a key challenge. Centroid-based clustering, a widely adopted approach to address this challenge, finds informative summaries of datasets in terms of few prototypes, each representing a cluster in the data. Despite their wide adoption, the resulting data summaries often contain redundancies, limiting their effectiveness particularly in datasets characterized by a large number of underlying clusters. To overcome this limitation, we introduce the Khatri-Rao clustering paradigm that extends traditional centroid-based clustering to produce more succinct but equally accurate data summaries by postulating that centroids arise from the interaction of two or more succinct sets of protocentroids. We...
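The Khatri-Rao (column-wise Kronecker) product that names the paradigm makes the succinctness argument concrete: each full-dimensional centroid is the interaction of two low-dimensional protocentroids, so k centroids in an (m*n)-dimensional space are stored with only (m+n)*k numbers. How the paper actually fits these factors to data is not shown here; this is just the product itself:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product: A (m, k) and B (n, k)
    give C (m*n, k), where column j is kron(A[:, j], B[:, j])."""
    m, k = A.shape
    n, k2 = B.shape
    assert k == k2, "both factor sets need one column per centroid"
    return np.einsum('ik,jk->ijk', A, B).reshape(m * n, k)

A = np.arange(6.0).reshape(2, 3)     # 3 protocentroids in R^2
B = np.arange(9.0).reshape(3, 3)     # 3 protocentroids in R^3
C = khatri_rao(A, B)                 # 3 centroids in R^6
print(C.shape)                       # (6, 3)
print(np.allclose(C[:, 0], np.kron(A[:, 0], B[:, 0])))  # True
```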
AI Robotics
1.A Pivot-Based Kirigami Utensil for Hand-Held and Robot-Assisted Feeding
arXiv:2603.06716v1 Announce Type: new Abstract: Eating is a daily challenge for over 60 million adults with essential tremors and other mobility limitations. For these users, traditional utensils like forks or spoons are difficult to manipulate -- resulting in accidental spills and restricting the types of food that can be consumed. Prior work has developed rigid, hand-held utensils that often fail to secure food, as well as soft, shape-changing utensils made strictly for robot-assisted feeding. To assist a broader range of users, we introduce a re-designed kiri-spoon that can be leveraged as either a hand-held utensil or a robot-mounted attachment. Our key idea -- developed in collaboration with stakeholders -- is a pivot-based design. With this design the kiri-spoon behaves like a pair of pliers: users squeeze the handles to change the ...
2.Dynamic Targeting of Satellite Observations Using Supplemental Geostationary Satellite Data and Hierarchical Planning
arXiv:2603.06719v1 Announce Type: new Abstract: The Dynamic Targeting (DT) mission concept is an approach to satellite observation in which a lookahead sensor gathers information about the upcoming environment and uses this information to intelligently plan observations. Previous work has shown that DT has the potential to increase the science return across applications. However, DT mission concepts must address challenges, such as the limited spatial extent of onboard lookahead data and instrument mobility, data throughput, and onboard computation constraints. In this work, we show how the performance of DT systems can be improved by using supplementary data streamed from geostationary satellites that provide lookahead information up to 35 minutes ahead of time rather than the 1 minute latency from an onboard lookahead sensor. While ther...
3.Robotic Foundation Models for Industrial Control: A Comprehensive Survey and Readiness Assessment Framework
arXiv:2603.06749v1 Announce Type: new Abstract: Robotic foundation models (RFMs) are emerging as a promising route towards flexible, instruction- and demonstration-driven robot control; however, a critical investigation of their industrial applicability is still lacking. This survey gives an extensive overview of the RFM landscape and analyses, driven by concrete implications, how industrial domains and use cases shape the requirements of RFMs, with particular focus on collaborative robot platforms, heterogeneous sensing and actuation, edge-computing constraints, and safety-critical operation. We synthesise industrial deployment perspectives into eleven interdependent implications and operationalise them into an assessment framework comprising a catalogue of 149 concrete criteria, spanning both model capabilities and ecosystem requireme...
4.Gradient-based Nested Co-Design of Aerodynamic Shape and Control for Winged Robots
arXiv:2603.06760v1 Announce Type: new Abstract: Designing aerial robots for specialized tasks, from perching to payload delivery, requires tailoring their aerodynamic shape to specific mission requirements. For tasks involving wide flight envelopes, the usual sequential process of first determining the shape and then the motion planner is likely to be suboptimal due to the inherent nonlinear interactions between them. This limitation has been motivating co-design research, which involves jointly optimizing the aerodynamic shape and the motion planner. In this paper, we present a general-purpose, gradient-based, nested co-design framework where the motion planner solves an optimal control problem and the aerodynamic forces used in the dynamics model are determined by a neural surrogate model. This enables us to model complex subsonic flow ...
5.Stability-Guided Exploration for Diverse Motion Generation
arXiv:2603.06773v1 Announce Type: new Abstract: Scaling up datasets is highly effective in improving the performance of deep learning models, including in the field of robot learning. However, data collection still proves to be a bottleneck. Approaches relying on collecting human demonstrations are labor-intensive and inherently limited: they tend to be narrow, task-specific, and fail to adequately explore the full space of feasible states. Synthetic data generation could remedy this, but current techniques mostly rely on local trajectory optimization and fail to find diverse solutions. In this work, we propose a novel method capable of finding diverse long-horizon manipulations through black-box simulation. We achieve this by combining an RRT-style search with sampling-based MPC, together with a novel sampling scheme that guides the expl...
Financial AI
1.Generative Adversarial Regression (GAR): Learning Conditional Risk Scenarios
We propose Generative Adversarial Regression (GAR), a framework for learning conditional risk scenarios through generators aligned with downstream risk objectives. GAR builds on a regression characterization of conditional risk for elicitable functionals, including quantiles, expectiles, and jointly elicitable pairs. We extend this principle from point prediction to generative modeling by training generators whose policy-induced risk matches that of real data under the same context. To ensure robustness across all policies, GAR adopts a minimax formulation in which an adversarial policy identifies worst-case discrepancies in risk evaluation while the generator adapts to eliminate them. This structure preserves alignment with the risk functional across a broad class of policies rather than a fixed, pre-specified set. We illustrate GAR thro...
2.Differential Machine Learning for 0DTE Options with Stochastic Volatility and Jumps
We present a differential machine learning method for zero-days-to-expiry (0DTE) options under a stochastic-volatility jump-diffusion model that computes prices and Greeks in a single network evaluation. To handle the ultra-short-maturity regime, we represent the price in Black--Scholes form with a maturity-gated variance correction, and combine supervision on prices and Greeks with a PIDE-residual penalty. To make the jump contribution identifiable, we introduce a separate jump-operator network and train it with a three-stage procedure. In Bates-model simulations, the method improves jump-term approximation relative to one-stage baselines, keeps price errors close to one-stage alternatives while improving Greeks accuracy, produces stable one-day delta hedges, and is substantially faster than a Fourier-based pricing benchmark.
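The Black-Scholes skeleton that the network corrects, returning a price and a Greek from one evaluation, fits in a few lines. The learned maturity-gated variance correction and the jump-operator network are not reproduced here; only the closed-form base is shown:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price and delta in one pass: the closed-form
    base that the paper's network corrects for the 0DTE regime."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    delta = norm_cdf(d1)            # analytic Greek, same evaluation
    return price, delta

# 0DTE-like maturity: one trading day to expiry
price, delta = bs_call(S=100, K=100, T=1 / 252, r=0.0, sigma=0.2)
print(round(price, 3), round(delta, 3))
```

Representing the model's price in this form is what makes the ultra-short-maturity regime tractable: the network only has to learn a correction to a function whose Greeks are already analytic.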
3.Stochastic Attention via Langevin Dynamics on the Modern Hopfield Energy
Attention heads retrieve: given a query, they return a softmax-weighted average of stored values. We show that this computation is one step of gradient descent on a classical energy function, and that Langevin sampling from the corresponding distribution yields "stochastic attention": a training-free sampler controlled by a single temperature. Lowering the temperature gives exact retrieval; raising it gives open-ended generation. Because the energy gradient equals the attention map, no score network, training loop, or learned model is required. We validate on four domains (64 to 4,096 dimensions). At generation temperature, stochastic attention is 2.6 times more novel and 2.0 times more diverse than the best learned baseline (a variational autoencoder trained on the same patterns), while matching a Metropolis-corrected gold standard....
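Since the energy gradient equals the attention map, the sampler needs nothing beyond the stored patterns themselves. A minimal sketch with orthonormal patterns (dimensions, inverse temperature, and step size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Stored patterns X (keys = values); modern Hopfield energy:
#   E(q) = -(1/beta) * logsumexp(beta * X @ q) + 0.5 * ||q||^2
X = np.eye(4)            # 4 orthonormal stored patterns
beta = 8.0

def grad_E(q):
    # dE/dq = q - X^T softmax(beta * X q): exactly the attention retrieval map
    return q - X.T @ softmax(beta * (X @ q))

def langevin(q, steps, eta, temp):
    for _ in range(steps):
        q = q - eta * grad_E(q) + np.sqrt(2 * eta * temp) * rng.normal(size=q.shape)
    return q

q0 = np.array([0.9, 0.1, 0.0, 0.0])
q_cold = langevin(q0, steps=200, eta=0.1, temp=0.0)   # temp -> 0: exact retrieval
print(np.round(q_cold, 2))   # converges to (approximately) stored pattern 0
```

At temp=0 the noise term vanishes and the iteration is plain gradient descent on the energy, i.e., repeated attention updates; raising temp turns the same loop into a sampler.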
4.Stock Market Prediction Using Node Transformer Architecture Integrated with BERT Sentiment Analysis
Stock market prediction presents considerable challenges for investors, financial institutions, and policymakers operating in complex market environments characterized by noise, non-stationarity, and behavioral dynamics. Traditional forecasting methods often fail to capture the intricate patterns and cross-sectional dependencies inherent in financial markets. This paper presents an integrated framework combining a node transformer architecture with BERT-based sentiment analysis for stock price forecasting. The proposed model represents the stock market as a graph structure where individual stocks form nodes and edges capture relationships including sectoral affiliations, correlated price movements, and supply chain connections. A fine-tuned BERT model extracts sentiment from social media posts and combines it with quantitative market feat...
5.Calibrated Credit Intelligence: Shift-Robust and Fair Risk Scoring with Bayesian Uncertainty and Gradient Boosting
Credit risk scoring must support high-stakes lending decisions where data distributions change over time, probability estimates must be reliable, and group-level fairness is required. While modern machine learning models improve default prediction accuracy, they often produce poorly calibrated scores under distribution shift and may create unfair outcomes when trained without explicit constraints. This paper proposes Calibrated Credit Intelligence (CCI), a deployment-oriented framework that combines (i) a Bayesian neural risk scorer to capture epistemic uncertainty and reduce overconfident errors, (ii) a fairness-constrained gradient boosting model to control group disparities while preserving strong tabular performance, and (iii) a shift-aware fusion strategy followed by post-hoc probability calibration to stabilize decision thresholds in ...
GSMA Newsroom
1.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
2.From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
3.Pioneering Affordable Access in Africa: GSMA and Handset Affordability Coalition Members Identify Six African Countries to Pilot Affordable $40 Smartphones
Summary available at source link.
4.GSMA Calls for Regulatory Readiness for Direct-to-User LEO Satellite Services
Summary available at source link.
5.MWC26 Barcelona opens with call to complete 5G, rise to AI challenges, and strengthen digital safety
Summary available at source link.
Generative AI (arXiv)
1.Agentic Critical Training
Training large language models (LLMs) as autonomous agents often begins with imitation learning, but it only teaches agents what to do without understanding why: agents never contrast successful actions against suboptimal alternatives and thus lack awareness of action quality. Recent approaches attempt to address this by introducing self-reflection supervision derived from contrasts between expert and alternative actions. However, the training paradigm fundamentally remains imitation learning: the model imitates pre-constructed reflection text rather than learning to reason autonomously. We propose Agentic Critical Training (ACT), a reinforcement learning paradigm that trains agents to identify the better action among alternatives. By rewarding whether the model's judgment is correct, ACT drives the model to autonomously develop reasoning...
2.Evaluating Financial Intelligence in Large Language Models: Benchmarking SuperInvesting AI with LLM Engines
Large language models are increasingly used for financial analysis and investment research, yet systematic evaluation of their financial reasoning capabilities remains limited. In this work, we introduce the AI Financial Intelligence Benchmark (AFIB), a multi-dimensional evaluation framework designed to assess financial analysis capabilities across five dimensions: factual accuracy, analytical completeness, data recency, model consistency, and failure patterns. We evaluate five AI systems: GPT, Gemini, Perplexity, Claude, and SuperInvesting, using a dataset of 95+ structured financial analysis questions derived from real-world equity research tasks. The results reveal substantial differences in performance across models. Within this benchmark setting, SuperInvesting achieves the highest aggregate performance, with an average factual accur...
3.UNBOX: Unveiling Black-box visual models with Natural-language
Ensuring trustworthiness in open-world visual recognition requires models that are interpretable, fair, and robust to distribution shifts. Yet modern vision systems are increasingly deployed as proprietary black-box APIs, exposing only output probabilities and hiding architecture, parameters, gradients, and training data. This opacity prevents meaningful auditing, bias detection, and failure analysis. Existing explanation methods assume white- or gray-box access or knowledge of the training distribution, making them unusable in these real-world settings. We introduce UNBOX, a framework for class-wise model dissection under fully data-free, gradient-free, and backpropagation-free constraints. UNBOX leverages Large Language Models and text-to-image diffusion models to recast activation maximization as a purely semantic search driven by outp...
4.Boosting MLLM Spatial Reasoning with Geometrically Referenced 3D Scene Representations
While Multimodal Large Language Models (MLLMs) have achieved remarkable success in 2D visual understanding, their ability to reason about 3D space remains limited. To address this gap, we introduce geometrically referenced 3D scene representations (GR3D). Given a set of input images, GR3D annotates objects in the images with unique IDs and encodes their 3D geometric attributes as textual references indexed by these IDs. This representation enables MLLMs to interpret 3D cues using their advanced language-based skills in mathematical reasoning, while concurrently analyzing 2D visual features in a tightly coupled way. We present a simple yet effective approach based on GR3D, which requires no additional training and is readily applicable to different MLLMs. Implemented in a zero-shot setting, our approach boosts GPT-5's performance on VSI-Be...
5.Behavioral Generative Agents for Power Dispatch and Auction
This paper presents positive initial evidence that generative agents can relax the rigidity of traditional mathematical models for human decision-making in power dispatch and auction settings. We design two proof-of-concept energy experiments with generative agents powered by a large language model (LLM). First, we construct a home battery management testbed with stochastic electricity prices and blackout interventions, and benchmark LLM decisions against dynamic programming. By incorporating an in-context learning (ICL) module, we show that behavioral patterns discovered by a stronger reasoning model can be transferred to a smaller LLM via example-based prompting, leading agents to prioritize post-blackout energy reserves over short-term profit. Second, we study LLM agents in simultaneous ascending auctions (SAA) for power network access...
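The battery testbed's baseline (optimal dispatch against known prices) is easy to reproduce in miniature. The prices, capacity, and the exhaustive search standing in for the paper's dynamic-programming benchmark are all toy assumptions:

```python
import itertools

# Toy home-battery dispatch: choose charge(+1)/idle(0)/discharge(-1) each
# hour to minimize total cost, by exhaustive finite-horizon search.
prices = [0.10, 0.30, 0.05, 0.40]   # hypothetical $/kWh per hour
capacity = 2                        # battery holds 0..2 kWh

def best_plan(soc0=0):
    best = (float('inf'), None)
    for plan in itertools.product((-1, 0, 1), repeat=len(prices)):
        soc, cost = soc0, 0.0
        ok = True
        for a, p in zip(plan, prices):
            soc += a
            if not 0 <= soc <= capacity:   # infeasible state of charge
                ok = False
                break
            cost += a * p                  # buying costs, selling earns
        if ok and cost < best[0]:
            best = (cost, plan)
    return best

cost, plan = best_plan()
print(plan, round(cost, 2))  # buy the cheap hours, sell the expensive ones
```

A baseline like this is what the LLM agents are benchmarked against; the interesting behavioral finding is when agents deviate from it, e.g., holding post-blackout reserves instead of chasing profit.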
Hugging Face Daily Papers
1.FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models
CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to efficiently adapt to downstream tasks. Although existing studies have made significant progress, they pay limited attention to changes in the internal attention representations of VLMs during the tuning process. In this paper, we attribute the failure modes of prompt tuning predictions to shifts in foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive plug-and-play foreground attention guidance module, to alleviate the shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance the foreground view quality, applies a Foreground Distillation Compensation module to guide visual attention toward the foreground, and further introduces a Prior Calibration module to m...
2.Evaluating Financial Intelligence in Large Language Models: Benchmarking SuperInvesting AI with LLM Engines
Large language models are increasingly used for financial analysis and investment research, yet systematic evaluation of their financial reasoning capabilities remains limited. In this work, we introduce the AI Financial Intelligence Benchmark (AFIB), a multi-dimensional evaluation framework designed to assess financial analysis capabilities across five dimensions: factual accuracy, analytical completeness, data recency, model consistency, and failure patterns. We evaluate five AI systems: GPT, Gemini, Perplexity, Claude, and SuperInvesting, using a dataset of 95+ structured financial analysis questions derived from real-world equity research tasks. The results reveal substantial differences in performance across models. Within this benchmark setting, SuperInvesting achieves the highest aggregate performance, with an average factual accur...
3.A Multi-Objective Optimization Approach for Sustainable AI-Driven Entrepreneurship in Resilient Economies
The rapid advancement of artificial intelligence (AI) technologies presents both unprecedented opportunities and significant challenges for sustainable economic development. While AI offers transformative potential for addressing environmental challenges and enhancing economic resilience, its deployment often involves substantial energy consumption and environmental costs. This research introduces the EcoAI-Resilience framework, a multi-objective optimization approach designed to maximize the sustainability benefits of AI deployment while minimizing environmental costs and enhancing economic resilience. The framework addresses three critical objectives through mathematical optimization: sustainability impact maximization, economic resilience enhancement, and environmental cost minimization. The methodology integrates diverse data sources,...
4.Benchmarking Language Modeling for Lossless Compression of Full-Fidelity Audio
Autoregressive "language" models (LMs) trained on raw waveforms can be repurposed for lossless audio compression, but prior work is limited to 8-bit audio, leaving open whether such approaches work for practical settings (16/24-bit) and can compete with existing codecs. We benchmark LM-based compression on full-fidelity audio across diverse domains (music, speech, bioacoustics), sampling rates (16kHz-48kHz), and bit depths (8, 16, 24-bit). Standard sample-level tokenization becomes intractable at higher bit depths due to vocabulary size (65K for 16-bit; 16.7M for 24-bit). We propose Trilobyte, a byte-level tokenization schema for full resolution audio, improving vocabulary scaling from $O(2^{b})$ to $O(1)$ and enabling the first tractable 24-bit LM-based lossless compression. While LMs consistently outperform FLAC and yield state-of-the-a...
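The byte-level idea is easy to sketch: each signed b-bit PCM sample is offset to unsigned and split into b/8 big-endian bytes, so the vocabulary stays at 256 symbols for any bit depth. A minimal round-trip illustration (hypothetical function names; the paper's actual Trilobyte schema may differ):

```python
import numpy as np

def bytes_tokenize(samples: np.ndarray, bit_depth: int) -> np.ndarray:
    """Split signed PCM samples into big-endian bytes (vocabulary = 256)."""
    n_bytes = bit_depth // 8
    # Offset to unsigned so byte extraction is well defined for negatives.
    u = (samples.astype(np.int64) + (1 << (bit_depth - 1))).astype(np.uint64)
    shifts = (np.arange(n_bytes - 1, -1, -1) * 8).astype(np.uint64)
    tokens = (u[:, None] >> shifts) & np.uint64(0xFF)
    return tokens.reshape(-1).astype(np.uint8)

def bytes_detokenize(tokens: np.ndarray, bit_depth: int) -> np.ndarray:
    """Inverse of bytes_tokenize: reassemble bytes into signed samples."""
    n_bytes = bit_depth // 8
    groups = tokens.astype(np.uint64).reshape(-1, n_bytes)
    shifts = (np.arange(n_bytes - 1, -1, -1) * 8).astype(np.uint64)
    u = (groups << shifts).sum(axis=1)
    return u.astype(np.int64) - (1 << (bit_depth - 1))
```

A 16-bit stream then costs two tokens per sample and a 24-bit stream three, versus a 65,536- or 16.7M-entry sample-level vocabulary.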
5.ImprovedGS+: A High-Performance C++/CUDA Re-Implementation Strategy for 3D Gaussian Splatting
Recent advancements in 3D Gaussian Splatting (3DGS) have shifted the focus toward balancing reconstruction fidelity with computational efficiency. In this work, we propose ImprovedGS+, a high-performance, low-level reinvention of the ImprovedGS strategy, implemented natively within the LichtFeld-Studio framework. By transitioning from high-level Python logic to hardware-optimized C++/CUDA kernels, we achieve a significant reduction in host-device synchronization and training latency. Our implementation introduces a Long-Axis-Split (LAS) CUDA kernel, custom Laplacian-based importance kernels with Non-Maximum Suppression (NMS) for edge scores, and an adaptive Exponential Scale Scheduler. Experimental results on the Mip-NeRF360 dataset demonstrate that ImprovedGS+ establishes a new Pareto-optimal front for scene reconstruction. Our 1M-budget...
IEEE Xplore AI
1.Military AI Policy Needs Democratic Oversight
A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies or Congress and the broader democratic process? The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff. Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully...
2.Entomologists Use a Particle Accelerator to Image Ants at Scale
Move over, Pixar. The ants that animators once morphed into googly-eyed caricatures in films such as A Bug’s Life and Antz just received a meticulously precise anatomical reboot. Writing today in Nature Methods, an international team of entomologists, accelerator physicists, computer scientists, and biological imaging specialists describes a new 3D atlas of ant morphology. Dubbed Antscan, the platform features micrometer-resolution reconstructions that lay bare not only the insects’ armored exoskeletons but also their muscles, nerves, digestive tracts, and needle-like stingers poised at the ready. Those high-resolution images—spanning 792 species across 212 genera and covering the bulk of described ant diversity—are now freely available through an interactive online portal, where anyone can rotate, zoom, and virtually “dissect” the insec...
3.Watershed Moment for AI–Human Collaboration in Math
When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to assist with mathematical research. “These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc Liam Fowl, who was not involved in the work. In her Fields Medal–winning research, Viazovska had tackled two versions o...
4.How Quantum Data Can Teach AI to Do Better Chemistry
Sometimes a visually compelling metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named John P. Perdew came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “Jacob’s Ladder.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.” Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder,...
5.Letting Machines Decide What Matters
In the time it takes you to read this sentence, the Large Hadron Collider (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the Standard Model of particle physics. For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As Matthew Hutson reports in “AI Hunts for the Next Big Thing in Physics,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there ha...
MIT Sloan Management
1.An Industry Benchmark for Data Fairness: Sony’s Alice Xiang
On today’s episode of Me, Myself, and AI, host Sam Ransbotham talks with Alice Xiang, global head of AI governance at Sony and lead research scientist for AI ethics at Sony AI, about what it actually takes to put responsible artificial intelligence into practice at scale. Alice shares how Sony moved early on AI ethics […]
2.Why Visibility Has Become the New Test of Leadership
Carolyn Geason-Beissel/MIT SMR In professional service firms, quiet excellence once defined leadership. A partner earned influence through expertise, loyalty, and discretion. But in an era of high transparency, where every meeting can be replayed, every comment rated, and every decision scrutinized online, competence alone no longer sustains trust. Visibility has become the new test of […]
3.Our Guide to the Spring 2026 Issue
The Eight Core Principles of Strategic Innovation Gina O’Connor and Christopher R. Meyer Key Insight: Mature companies that build a strategic innovation capability can systematically renew their product portfolios to sustain long-term growth. Top Takeaways: Many companies start off with a bang: the launch of an exciting breakthrough product or service. But as time passes, […]
4.AI Won’t Fix This
We are firmly in the digital age, awash in data generated on every surface and in every layer of every business. Yet, despite decades of investment in technology, time, and effort, many organizations are still not seeing meaningful returns. A global survey of over 4,200 business and technology leaders conducted by research firm Gartner in […]
5.The Eight Core Principles of Strategic Innovation
Matt Chinworth/theispot.com The Research The research behind this article was conducted in partnership with the Innovation Research Interchange, a professional association of R&D leaders in large industrial companies. More than 640 interviews were conducted over the three phases of the research program. In Phase 1, 12 project teams from 10 companies were followed for five […]
NBER Working Papers
1.Pricing Protection: Credit Scores, Disaster Risk, and Home Insurance Affordability -- by Joshua Blonz, Mallick Hossain, Benjamin J. Keys, Philip Mulder, Joakim A. Weill
We use 70 million policies linked to mortgages and property-level disaster risk to show that credit scores impact homeowners insurance premiums as much as disaster risk. Homeowners with low credit pay 24% more for identical coverage than high–credit score homeowners. Leveraging a natural experiment in Washington State, we find that banning the use of credit information considerably weakens the relationship between credit score and pricing. We discuss the role of credit information in pricing and show that, although insurance is often overlooked in discussions of home affordability, a low credit score increases premiums roughly as much as it raises mortgage rates.
2.When Incentives Aren't Enough: Evidence on Inattention and Imperfect Memory from HIV Medication Adherence -- by Hang Yu, Jared Stolove, Dean Yang, James Riddell IV, Arlete Mahumane
Financial incentives are widely used to encourage beneficial behaviors, but their effectiveness may be limited by inattention and imperfect memory. We study this in a randomized trial of HIV medication adherence in Mozambique. Financial incentives alone increase adherence by 10.6 percentage points, while pairing incentives with reminders increases adherence by 24.3 percentage points. We develop a model in which inattention to daily adherence and imperfect memory of payment eligibility reduce incentive effectiveness and show that reminders mitigate both frictions. Detailed medication refill data support the model’s predictions. The results suggest combining incentives with reminders can substantially increase program effectiveness.
3.Pay Now, Buy Never: The Economics of Consumer Prepayment Schemes -- by Yixuan Liu, Hua Zhang, Eric Zou
Prepaid consumption is a common feature of modern consumer markets and is often presented as a mutually beneficial arrangement: consumers receive upfront discounts, and firms secure future sales. We analyze a large-scale Pay Now, Buy Later (PNBL) program in which consumers prepay for restaurant credit with bonuses, and spend the balance later. Using detailed transaction data from over 4 million consumers, we document widespread balance breakage: approximately 40% of prepaid value is never used. Because many consumers underutilize their balances, merchants recover significantly more than the bonus cost. The median firm earns roughly $5.5 in breakage profit for every $1 of bonus credit issued. While PNBL participation does lead to modest increases in consumer spending over time, firms gain substantially more from breakage than from any loya...
4.How does AI Distribute the pie? Large Language Models and the Ultimatum Game. -- by Douglas K.G. Araujo, Harald Uhlig
As Large Language Models (LLMs) are increasingly tasked with autonomous decision making, understanding their behavior in strategic settings is crucial. We investigate the choices of various LLMs in the Ultimatum Game, a setting where human behavior notably deviates from theoretical rationality. We conduct experiments varying the stake size and the nature of the opponent (Human vs. AI) across both Proposer and Responder roles. Three key results emerge. First, LLM behavior is heterogeneous but predictable when conditioning on stake size and player types. Second, while some models approximate the rational benchmark and others mimic human social preferences, a distinct “altruistic” mode emerges where LLMs propose hyper-fair distributions (greater than 50%). Third, LLM Proposers forgo a large share of total payoff, and an even larger share whe...
5.Mergers and Non-contractible Benefits: The Employees' Perspective -- by Wei Cai, Andrea Prat, Jiehang Yu
Incomplete contract theory, supported by anecdotal evidence, suggests that when a firm is acquired, workers may be adversely affected in non-contractible aspects of their work experience. This paper empirically investigates this prediction by combining M&A events from the Refinitiv database and web-scraped Glassdoor review data. We find that: (a) Controlling for pre-trends, mergers lead to lower satisfaction, especially on non-contractible dimensions of the employee experience (about 6% of a standard deviation); (b) The effect is stronger in the target firm than in the acquiring firm; (c) Text analysis of employee comments indicates that the decline in satisfaction is primarily associated with perceived breaches of implicit contracts. Our findings indicate that mergers may reduce workers' job utility through non-monetary channels.
NY Fed - Liberty Street
1.Firms’ Inflation Expectations Return to 2024 Levels
Businesses experienced substantial cost pressures in 2025 as the cost of insurance and utilities rose sharply, while an increase in tariffs contributed to rising goods and materials costs. This post examines how firms in the New York-Northern New Jersey region adjusted their prices in response to these cost pressures and describes their expectations for future price increases and inflation. Survey results show an acceleration in firms’ price increases in 2025, with an especially sharp increase in the manufacturing sector. While both cost and price increases intensified last year, our surveys re...
2.Are Rising Employee Health Insurance Costs Dampening Wage Growth?
Employer-sponsored health insurance represents a substantial component of total compensation paid by firms to many workers in the United States. Such costs have climbed by close to 20 percent over the past five years. Indeed, the average annual premium for employer-sponsored family health insurance coverage was about $27,000 in 2025—roughly equivalent to the wage of a full-time worker paid $15 per hour. Our February regional business surveys asked firms whether their wage setting decisions were influenced by the rising cost of employee health insurance. As we showed in our
3.What’s Driving Rising Business Costs?
After a period of moderating cost increases, businesses faced mounting cost pressures in 2025. While tariffs played a role in driving up the costs of many inputs—especially among manufacturers—they represent only part of the story. Indeed, firms grappled with substantial cost increases across many categories in the past year. This post is the first in a three-part series analyzing cost and price dynamics among businesses in the New York-Northern New Jersey region based on data collected through our regional business surveys. Firms reported that the sharpest cost increases over the...
4.The Post‑Pandemic Global R*
In this post we provide a measure of “global” r* using data on short- and long-term yields and inflation for several countries with the approach developed in “Global Trends in Interest Rates” (Del Negro, Giannone, Giannoni, and Tambalotti). After declining significantly from the 1990s to before the COVID-19 pandemic, global r* has risen but remains well below its pre-1990s level. These conclusions are based on an econometric model called “trendy VAR” that extracts common trends across a multitude of variables. Specifically, the common trend in real rates across all the countries in the sample is what we call global r*. The post is based on the
5.Estimating the Term Structure of Corporate Bond Risk Premia
Understanding how short- and long-term assets are priced is one of the fundamental questions in finance. The term structure of risk premia allows us to perform net present value calculations, test asset pricing models, and potentially explain the sources of many cross-sectional asset pricing anomalies. In this post, I construct a forward-looking estimate of the term structure of risk premia in the corporate bond market following Jankauskas (2024). The U.S. corporate bond market is an ideal laboratory for studying the relationship between risk premia and maturity because of its large size (standing at roughly $16 trillion as of the end of 2024) and because the maturities are well defined (in contrast to equities).
Project Syndicate
1.India’s Promising New Counter-Terrorism Strategy
India's new counter-terrorism strategy seeks to replace the reactive and disjointed approach of recent decades with a holistic doctrine that supports unity and coordination across India’s vast and often fragmented security apparatus. But the path from an eight-page document to a safer reality is littered with practical hurdles.
2.A Deal With Iran Requires an Iran that Can Make One
US President Donald Trump’s Venezuela gambit worked, to the extent that it did, because the state did not collapse. This might not be the outcome in Iran, because the Trump administration's war partners, Israel and the Kurds, have their own interests and ambitions, which do not necessarily include avoiding the fragmentation of authority.
3.China’s Big Bet on Central Asia is Paying Off
The Western caricature of Chinese “debt-trap diplomacy” ignores the realities of China’s economic presence in Central Asia. What is emerging across the region is not dependency but interdependence, as Chinese foreign direct investment finances industrialization and mutually beneficial infrastructure projects.
4.Europe’s “Limited Responsibility” Model Must Go
In interventions at an informal European Council meeting and the Munich Security Conference, European Central Bank President Christine Lagarde recently explained exactly what Europe must do to secure its prosperity and sovereignty in the years ahead. The question now is whether European leaders will heed the call.
5.An AI Bubble Won’t Trigger a Financial Crisis
The AI boom may be speculative, excessive, and reminiscent of earlier episodes like the dot-com crash. But given the nature of the financing and the investments being made, the risks to the financial system are minimal, and policymakers should turn their attention to the impact of AI on the real economy.
RCR Wireless
1.Exciting time for Wi-Fi, AI PCs and Intel
At MWC Barcelona, RCRTech principal analyst Sean Kinney spoke with Intel’s Eric McLaughlin, VP, Client Computing and GM, Connectivity Solutions, about the evolution from 5G to 6G, and how the 6 GHz band (along with Wi-Fi 7 and 6E) will be a bridge to higher-frequency performance. Expansion into 6 GHz band, Wi-Fi 6E, Access points […]
2.Keysight MWC roundup: AI-RAN testing, AI-driven uplink performance, and pre-6G interoperability validation
Here are three Keysight launches at MWC that are worth a second look On the ground at MWC, where our team was all week, there was one unifying theme across keynotes, demonstrations, and hallway conversations: AI. AI workloads are taking over the network and telcos, data centers, and enterprises are looking to tap into AI-native solutions […]
3.AI era will reset telecom economics, says Jio chief
Mathew Oommen, group chief at Jio Platforms, told MWC that the AI transition is a “generational transformation” that will redefine telecom networks and create entirely new economic opportunities. In sum – what to know: AI reset – The AI era will fundamentally reset telecom economics and create trillions of dollars in new digital businesses. Infrastructure shift […]
4.Malaysian telcos take full control of DNB
MOF Inc. held a 41.67% stake in DNB from May 2025, after U Mobile sold its shares as part of conditions tied to its license to build Malaysia’s second 5G wholesale network In sum – what to know Government exit – MOF Inc has sold its shares in DNB to CelcomDigi, Maxis, and YTL Power, […]
5.Ericsson plays long game as AI boosts ‘mid-cycle’ 5G pay-offs
Ericsson’s networks chief Per Narvinger offered a measured view of AI’s impact on telecoms at MWC: fiber may lead the infrastructure boom today, but AI will also boost the mid-cycle 5G evolution through AI-driven RAN optimisation. In sum – what to know: New traffic – despite the AI focus on fiber, today, mobile networks […]
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.Predicting Conflict Impact on Performance in O-RAN
The O-RAN Alliance promotes the integration of intelligent autonomous agents to control the Radio Access Network (RAN). This improves flexibility, performance, and observability in the RAN, but introduces new challenges, such as the detection and management of conflicts among the intelligent autonomous agents. A solution consists of profiling the agents before deployment to gather statistical information about their decision-making behavior, then using the information to estimate the level of conflict among agents with different goals. This approach enables determining the occurrence of conflicts among agents, but does not provide information about the impact on RAN performance, including potential service degradation. The problem becomes more complex when agents generate control actions at different timescales, which makes conflict sever...
2.Graph Based Semantic Encoder Decoder Framework for Task Oriented Communications in Connected Autonomous Vehicles
Connected autonomous vehicles (CAVs) require reliable and efficient communication frameworks to support safety-critical and task-oriented applications such as collision avoidance, cooperative perception, and traffic risk assessment. Traditional communication paradigms, which focus on transmitting raw bits, often incur excessive bandwidth consumption and fail to preserve the semantic relevance of transmitted information. To bridge this gap, we propose a Graph-Based Semantic Encoder-Decoder (GBSED) architecture tailored for task-oriented communications in CAV networks. The encoder leverages scene graphs to capture spatial and semantic relationships among road entities, combined with a semantic compression algorithm that reduces the size of the extracted graph-based representations by up to 99% compared to raw images, while the decoder recon...
3.Hard/Soft NLoS Detection via Combinatorial Data Augmentation for 6G Positioning
A key enabler for meeting the stringent requirements of 6G positioning is the ability to exploit site-dependent information governing line-of-sight (LoS) and non-line-of-sight (NLoS) propagation. However, acquiring such environmental information in real time is challenging in practice. To address this issue, we propose a novel NLoS detection algorithm termed combinatorial data augmentation-guided NLoS detection (CDA-ND), which builds upon our prior work. CDA-ND generates numerous preliminary estimated locations (PELs) by applying multilateration over many gNodeB (gNB) combinations using a single snapshot of range measurements. When a target gNB is in NLoS, the resulting PELs split into two clusters: one derived using the target gNB's range measurement and the other derived without it. Their displacement is summarized by a single vector, c...
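The clustering idea can be illustrated with a toy 2-D multilateration sketch (an illustration of the mechanism only, not the paper's CDA-ND algorithm): fix a position from every size-k subset of gNBs via linearized least squares, then compare the mean fix from subsets that include a suspect gNB against the mean fix from subsets that exclude it.

```python
import numpy as np
from itertools import combinations

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Linearized least-squares fix: subtract the first range equation
    |x - p_i|^2 = r_i^2 from the rest to obtain a linear system in x."""
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - (ranges[1:] ** 2 - r0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def cluster_displacement(anchors, ranges, target: int, k: int = 3):
    """Mean fix from subsets containing `target` minus mean fix from the
    rest; a large displacement suggests the target gNB is in NLoS."""
    with_t, without_t = [], []
    for combo in combinations(range(len(anchors)), k):
        pel = multilaterate(anchors[list(combo)], ranges[list(combo)])
        (with_t if target in combo else without_t).append(pel)
    return np.mean(with_t, axis=0) - np.mean(without_t, axis=0)
```

With exact (LoS) ranges every subset agrees and the displacement is near zero; a positive range bias on one gNB pulls only the subsets containing it, producing a clearly nonzero displacement vector.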
4.GP Bandit-Assisted Two-Stage Sparse Phase Retrieval for Amplitude-Only Near-Field Beam Training
The transition to Extremely Large Antenna Arrays (ELAA) in 6G introduces significant near-field effects, necessitating robust near-field beam training strategies in multi-path environments. Because signal phases are frequently compromised by hardware impairments such as phase noise and frequency offsets, amplitude-only channel recovery is a critical alternative to coherent beam training. However, existing near-field amplitude-based training methods often assume simplistic line-of-sight conditions. Conversely, far-field phase retrieval (PR) methods lack the sensing flexibility required to optimize training efficiency and are fundamentally limited by plane-wave models, making them ill-suited for near-field propagation. We propose a two-stage sparse PR framework for amplitude-only near-field beam training in multipath channels. Stage I perfo...
5.Radar Enabled Adaptive Modulation for Millimeter Wave Integrated Sensing and Communication
An integrated sensing and communication (ISAC) framework comprises radar sensing to enable reliable directional beam-based communication between a base station (BS) and a mobile user (MU). ISAC will be an integral part of 6G, with potential applications in high-speed vehicular communications. Existing works have explored azimuth and Doppler velocity estimated via radar sensing for beam identification in dynamic environments. In this work, we propose radar-enabled modulation scheme selection for ISAC, thereby eliminating conventional time-consuming downlink-uplink feedback-based modulation scheme selection. We have analyzed the performance of the proposed approach for four different trajectories and shown an improvement in throughput of 54-209% over state-of-the-art ISAC.
arXiv Quantitative Finance
1.From debt crises to financial crashes (and back): a stock-flow consistent model for stock price bubbles
We develop a stochastic macro-financial model in continuous time by integrating two specifications of the Keen economic framework with a financial market driven by a jump-diffusion process. The economic block of the model combines monetary debt-deflation mechanisms with Ponzi-type financial destabilization and is influenced by the financial market through a stochastic interest rate that depends on asset price returns. The financial market block of the model consists of an asset with jump-diffusion price process with endogenous, state-dependent jump intensities driven by speculative credit flows. The model formalizes a feedback loop linking credit expansion, crash risk, perceived return dynamics, and bank lending spreads. Under suitable parameter restrictions, we establish global existence and non-explosion of the coupled system. Numerica...
2.Stock Market Prediction Using Node Transformer Architecture Integrated with BERT Sentiment Analysis
Stock market prediction presents considerable challenges for investors, financial institutions, and policymakers operating in complex market environments characterized by noise, non-stationarity, and behavioral dynamics. Traditional forecasting methods often fail to capture the intricate patterns and cross-sectional dependencies inherent in financial markets. This paper presents an integrated framework combining a node transformer architecture with BERT-based sentiment analysis for stock price forecasting. The proposed model represents the stock market as a graph structure where individual stocks form nodes and edges capture relationships including sectoral affiliations, correlated price movements, and supply chain connections. A fine-tuned BERT model extracts sentiment from social media posts and combines it with quantitative market feat...
3.Extreme Value Analysis for Finite, Multivariate and Correlated Systems with Finance as an Example
Extreme values and the tail behavior of probability distributions are essential for quantifying and mitigating risk in complex systems of all kinds. In multivariate settings, accounting for correlations is crucial. Although extreme value analysis for infinite correlated systems remains an open challenge, we propose a practical framework for handling a large but finite number of correlated time series. We develop our approach for finance as a concrete example but emphasize its generality. We study the extremal behavior of high-frequency stock returns after rotating them into the eigenbasis of the correlation matrix. This separates and extracts various collective effects, including information on the correlated market as a whole and on correlated sectoral behavior from idiosyncratic features, while allowing us to use univariate tools of ext...
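The rotation step described above is a few lines of linear algebra; a minimal sketch (hypothetical function name), with the univariate tail fitting on the rotated series left out:

```python
import numpy as np

def rotate_to_eigenbasis(returns: np.ndarray):
    """Project standardized returns (T x K) onto the eigenvectors of
    their correlation matrix, separating collective from idiosyncratic
    directions. Returns (rotated series, eigenvalues, eigenvectors)."""
    z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    C = np.corrcoef(z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(C)       # ascending order
    order = np.argsort(eigval)[::-1]         # largest (market mode) first
    eigval, eigvec = eigval[order], eigvec[:, order]
    return z @ eigvec, eigval, eigvec
```

The rotated series are mutually uncorrelated, so standard univariate extreme value tools (block maxima, peaks over threshold) can be applied to each one; the leading component captures the correlated market as a whole.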
4.Asymptotic Separability of Diffusion and Jump Components in High-Frequency CIR and CKLS Models
This paper develops a robust parametric framework for jump detection in discretely observed CKLS-type jump-diffusion processes with high-frequency asymptotics, based on the minimum density power divergence estimator (MDPDE). The methodology exploits the intrinsic asymptotic scale separation between diffusion increments, which decay at rate $\sqrt{\Delta_n}$, and jump increments, which remain of non-vanishing stochastic magnitude. Using robust MDPDE-based estimators of the drift and diffusion coefficients, we construct standardized residuals whose extremal behavior provides a principled basis for statistical discrimination between continuous and discontinuous components. We establish that, over diffusion intervals, the maximum of the normalized residuals converges to the Gumbel extreme-value distribution, yielding an explicit and asymptotically...
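The scale separation the abstract relies on (diffusion increments shrink like $\sqrt{\Delta_n}$ while jump increments do not) can be illustrated with a much simpler threshold rule; this is a stand-in for the paper's MDPDE-based residual construction, not a reproduction of it:

```python
import numpy as np

def detect_jumps(path: np.ndarray, delta: float, c: float = 5.0) -> np.ndarray:
    """Flag increments larger than c * sigma_hat * sqrt(delta).

    sigma_hat is estimated from the median absolute increment, which is
    robust to a few jumps (0.6745 is the normal quartile Phi^{-1}(0.75))."""
    dx = np.diff(path)
    sigma_hat = np.median(np.abs(dx)) / (0.6745 * np.sqrt(delta))
    return np.where(np.abs(dx) > c * sigma_hat * np.sqrt(delta))[0]
```

As delta shrinks, the threshold shrinks with sqrt(delta) while jump sizes stay fixed, so the two components separate more cleanly at higher sampling frequencies.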
5.Range-Based Volatility Estimators for Monitoring Market Stress: Evidence from Local Food Price Data
Range-based volatility estimators are widely used in financial econometrics to quantify risk and market stress, yet their application to local commodity markets remains limited. This paper shows how open-high-low-close (OHLC) volatility estimators can be adapted to monitor localized market distress across diverse development contexts, including conflict-affected settings, climate-exposed regions, remote and thinly traded markets, and import- and logistics-constrained urban hubs. Using monthly food price data from the World Bank's Real-Time Prices dataset, several volatility measures, including the Parkinson, Garman-Klass, Rogers-Satchell, and Yang-Zhang estimators, are constructed and evaluated against independently documented disruption timelines. Across settings, elevated volatility aligns with episodes linked to insecurity and mar...
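The named range-based estimators have standard closed forms computable directly from OHLC bars; a minimal numpy sketch (Yang-Zhang omitted, since it additionally combines overnight and open-to-close terms):

```python
import numpy as np

def parkinson(h, l):
    """Parkinson (1980): per-period variance from the high-low range."""
    return np.mean(np.log(h / l) ** 2) / (4.0 * np.log(2.0))

def garman_klass(o, h, l, c):
    """Garman-Klass (1980): range term plus open-to-close correction."""
    return np.mean(0.5 * np.log(h / l) ** 2
                   - (2.0 * np.log(2.0) - 1.0) * np.log(c / o) ** 2)

def rogers_satchell(o, h, l, c):
    """Rogers-Satchell (1991): drift-independent range-based variance."""
    return np.mean(np.log(h / c) * np.log(h / o)
                   + np.log(l / c) * np.log(l / o))
```

Each returns a per-period variance; multiply by the number of periods per year and take the square root for an annualized volatility.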
arXiv – 6G & Networking
1.Predicting Conflict Impact on Performance in O-RAN
The O-RAN Alliance promotes the integration of intelligent autonomous agents to control the Radio Access Network (RAN). This improves flexibility, performance, and observability in the RAN, but introduces new challenges, such as the detection and management of conflicts among the intelligent autonomous agents. A solution consists of profiling the agents before deployment to gather statistical information about their decision-making behavior, then using the information to estimate the level of conflict among agents with different goals. This approach enables determining the occurrence of conflicts among agents, but does not provide information about the impact on RAN performance, including potential service degradation. The problem becomes more complex when agents generate control actions at different timescales, which makes conflict sever...
2.Graph Based Semantic Encoder Decoder Framework for Task Oriented Communications in Connected Autonomous Vehicles
Connected autonomous vehicles (CAVs) require reliable and efficient communication frameworks to support safety-critical and task-oriented applications such as collision avoidance, cooperative perception, and traffic risk assessment. Traditional communication paradigms, which focus on transmitting raw bits, often incur excessive bandwidth consumption and fail to preserve the semantic relevance of transmitted information. To bridge this gap, we propose a Graph-Based Semantic Encoder-Decoder (GBSED) architecture tailored for task-oriented communications in CAV networks. The encoder leverages scene graphs to capture spatial and semantic relationships among road entities, combined with a semantic compression algorithm that reduces the size of the extracted graph-based representations by up to 99% compared to raw images, while the decoder recon...
3.Hard/Soft NLoS Detection via Combinatorial Data Augmentation for 6G Positioning
A key enabler for meeting the stringent requirements of 6G positioning is the ability to exploit site-dependent information governing line-of-sight (LoS) and non-line-of-sight (NLoS) propagation. However, acquiring such environmental information in real time is challenging in practice. To address this issue, we propose a novel NLoS detection algorithm termed combinatorial data augmentation-guided NLoS detection (CDA-ND), which builds upon our prior work. CDA-ND generates numerous preliminary estimated locations (PELs) by applying multilateration over many gNodeB (gNB) combinations using a single snapshot of range measurements. When a target gNB is in NLoS, the resulting PELs split into two clusters: one derived using the target gNB's range measurement and the other derived without it. Their displacement is summarized by a single vector, c...
4.GP Bandit-Assisted Two-Stage Sparse Phase Retrieval for Amplitude-Only Near-Field Beam Training
The transition to Extremely Large Antenna Arrays (ELAA) in 6G introduces significant near-field effects, necessitating robust near-field beam training strategies in multi-path environments. Because signal phases are frequently compromised by hardware impairments such as phase noise and frequency offsets, amplitude-only channel recovery is a critical alternative to coherent beam training. However, existing near-field amplitude-based training methods often assume simplistic line-of-sight conditions. Conversely, far-field phase retrieval (PR) methods lack the sensing flexibility required to optimize training efficiency and are fundamentally limited by plane-wave models, making them ill-suited for near-field propagation. We propose a two-stage sparse PR framework for amplitude-only near-field beam training in multipath channels. Stage I perfo...
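The amplitude-only, two-stage search structure can be illustrated with a generic coarse-to-fine beam sweep. This is a deliberately simplified far-field, single-path stand-in (array size, grids, and noise level are assumptions) and does not implement the paper's GP-bandit assistance, sparse phase retrieval, or near-field model; it only shows that beams can be ranked using received amplitude alone.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-stage amplitude-only beam search sketch (a generic coarse-to-fine
# sweep, not the paper's GP-bandit / sparse-PR algorithm): only |y|,
# never the phase, is used to rank candidate beams.
N = 16                                   # ULA elements (assumed)

def steer(theta):
    """Far-field steering vector of a half-wavelength ULA."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

true_theta = 0.31
h = steer(true_theta)                    # toy single-path channel

def measure(theta):
    """Amplitude-only observation: magnitude of the beamformed output."""
    return np.abs(steer(theta).conj() @ h + 0.01 * rng.standard_normal())

# Stage I: coarse sweep over the angular sector.
coarse = np.linspace(-np.pi / 2, np.pi / 2, 32)
best = coarse[np.argmax([measure(t) for t in coarse])]
# Stage II: refine on a dense local grid around the coarse winner.
fine = np.linspace(best - 0.1, best + 0.1, 64)
est = fine[np.argmax([measure(t) for t in fine])]
print(f"estimated angle {est:.3f} rad (true {true_theta})")
```

In the near-field ELAA setting the codebook must additionally sample distance, which is exactly where the exhaustive sweep becomes too expensive and the bandit-guided sensing of the paper comes in.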
5.Multi-Agentic AI for Conflict-Aware rApp Policy Orchestration in Open RAN
Open Radio Access Network (RAN) enables flexible, AI-driven control of mobile networks through disaggregated, multi-vendor components. In this architecture, xApps handle real-time functions, whereas rApps in the non-real-time controller generate strategic policies. However, current rApp development remains largely manual, brittle, and poorly scalable as xApp diversity proliferates. In this work, we propose a Multi-Agentic AI framework to automate rApp policy generation and orchestration. The architecture integrates three specialized large language model (LLM)-based agents, Perception, Reasoning, and Refinement, supported by retrieval-augmented generation (RAG) and memory-based analogical reasoning. These agents collectively analyze potential conflicts, synthesize intent-aligned control pipelines, and incrementally refine deployment decisi...
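The three-stage agent loop described above can be sketched with the LLM calls stubbed out. Everything below is a structural illustration only: the KPI names, the conflict rule, and the policy fields are invented stand-ins for what the paper's Perception, Reasoning, and Refinement agents (backed by RAG and memory) would produce.

```python
from dataclasses import dataclass, field

# Minimal sketch of the Perception -> Reasoning -> Refinement loop the
# abstract describes. Each stage is a rule stub here; in the paper it
# would be an LLM call supported by retrieval and analogical memory.
@dataclass
class Context:
    intent: str
    kpi: dict
    conflicts: list = field(default_factory=list)
    policy: dict = field(default_factory=dict)

def perception(ctx: Context) -> Context:
    # Detect a potential xApp/rApp conflict from observed KPIs.
    if ctx.kpi["prb_util"] > 0.9 and ctx.kpi["energy_saver_on"]:
        ctx.conflicts.append("energy-saving rApp vs. throughput xApp")
    return ctx

def reasoning(ctx: Context) -> Context:
    # Synthesize an intent-aligned control policy.
    ctx.policy = ({"target": "throughput", "cap_energy_saving": True}
                  if ctx.conflicts else {"target": ctx.intent})
    return ctx

def refinement(ctx: Context) -> Context:
    # Incrementally tighten the policy before deployment.
    if ctx.policy.get("cap_energy_saving"):
        ctx.policy["max_sleep_ratio"] = 0.2  # illustrative bound
    return ctx

ctx = Context(intent="maximize throughput",
              kpi={"prb_util": 0.95, "energy_saver_on": True})
for stage in (perception, reasoning, refinement):
    ctx = stage(ctx)
print(ctx.conflicts, ctx.policy)
```

The point of the staging is separation of concerns: conflict detection, policy synthesis, and incremental refinement can each be audited and re-prompted independently.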
arXiv – Network Architecture (6G/Slicing)
1.Predicting Conflict Impact on Performance in O-RAN
The O-RAN Alliance promotes the integration of intelligent autonomous agents to control the Radio Access Network (RAN). This improves flexibility, performance, and observability in the RAN, but introduces new challenges, such as the detection and management of conflicts among the intelligent autonomous agents. A solution consists of profiling the agents before deployment to gather statistical information about their decision-making behavior, then using the information to estimate the level of conflict among agents with different goals. This approach enables determining the occurrence of conflicts among agents, but does not provide information about the impact on RAN performance, including potential service degradation. The problem becomes more complex when agents generate control actions at different timescales, which makes conflict sever...
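The pre-deployment profiling idea can be made concrete with a toy estimator. This is not the paper's metric: the action distributions and the "opposing sign" conflict score below are assumptions chosen to show how profiled decision behavior yields a numeric conflict level for two agents that touch the same RAN parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative profiling sketch: record each agent's control actions
# on a shared RAN parameter (e.g., a transmit-power delta) during
# pre-deployment profiling, then score conflict as how often the two
# agents would pull the parameter in opposite directions.
coverage_agent = rng.normal(+1.0, 0.8, 1000)  # tends to raise power
energy_agent   = rng.normal(-0.8, 0.8, 1000)  # tends to lower power

def conflict_level(a, b):
    """Fraction of randomly paired actions with opposing signs."""
    return float(np.mean(np.sign(a) != np.sign(b[rng.permutation(len(b))])))

cl = conflict_level(coverage_agent, energy_agent)
print(f"estimated conflict level: {cl:.2f}")
```

A score like this only says that the agents frequently disagree; as the abstract notes, it does not by itself quantify the resulting RAN performance impact, which is the gap the paper targets.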
2.Hard/Soft NLoS Detection via Combinatorial Data Augmentation for 6G Positioning
A key enabler for meeting the stringent requirements of 6G positioning is the ability to exploit site-dependent information governing line-of-sight (LoS) and non-line-of-sight (NLoS) propagation. However, acquiring such environmental information in real time is challenging in practice. To address this issue, we propose a novel NLoS detection algorithm termed combinatorial data augmentation-guided NLoS detection (CDA-ND), which builds upon our prior work. CDA-ND generates numerous preliminary estimated locations (PELs) by applying multilateration over many gNodeB (gNB) combinations using a single snapshot of range measurements. When a target gNB is in NLoS, the resulting PELs split into two clusters: one derived using the target gNB's range measurement and the other derived without it. Their displacement is summarized by a single vector, c...
3.Selfish Cooperation Towards Low-Altitude Economy: Integrated Multi-Service Deployment with Resilient Federated Reinforcement Learning
The low-altitude economy (LAE) is a rapidly emerging paradigm that builds a service-centric economic ecosystem through large-scale and sustainable uncrewed aerial vehicle (UAV)-enabled service provisioning, reflecting the transition of the 6G era from technological advancement toward commercial deployment. The significant market potential of LAE attracts an increasing number of service providers (SPs), resulting in intensified competition in service deployment. In this paper, we study a realistic LAE scenario in which multiple SPs dynamically deploy UAVs to deliver multiple services to user hotspots, aiming to jointly optimize communication and computation resource allocation. To resolve deployment competition among SPs, an authenticity-guaranteed auction mechanism is designed, and game-theoretic analysis is conducted to establish the sol...
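A truthfulness-preserving auction, the property the abstract's "authenticity-guaranteed" mechanism provides, can be illustrated with the classic second-price (Vickrey) rule. The paper's mechanism is more elaborate; the SP names and valuations below are invented, and this sketch only shows why bidding one's true valuation is a dominant strategy when the winner pays the runner-up's bid.

```python
# Second-price (Vickrey) auction sketch for a single UAV deployment
# slot at a user hotspot: highest bidder wins, pays the second bid.
def vickrey(bids):
    """bids: {service_provider: bid}. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

bids = {"SP-A": 12.0, "SP-B": 9.5, "SP-C": 7.0}  # illustrative values
winner, price = vickrey(bids)
print(winner, price)  # SP-A wins but pays SP-B's bid of 9.5
```

Because the price is independent of the winner's own bid, overbidding risks paying above value and underbidding risks losing a profitable slot, so truthful reporting maximizes each SP's payoff.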
4.Joint Visible Light and RF Backscatter Communications for Ambient IoT Network: Fundamentals, Applications, and Opportunities
The rapid growth of the Internet of Things (IoT) devices in the sixth-generation (6G) wireless networks raises significant generality and scalability challenges due to energy consumption, deployment complexity, and environmental impact. Ambient IoT (A-IoT), leveraging ambient energy harvesting (EH) for batteryless device operation, has emerged as a promising solution to address these challenges. Among various EH and communication techniques, visible light communication (VLC) integrated with ambient backscatter communication (AmBC) offers remarkable advantages, including energy neutrality, high reliability, and enhanced security. In this paper, we propose a joint VLC-AmBC architecture, emphasizing fundamental concepts, system designs, and practical implementations. We explore potential applications in environmental monitoring, healthcare, s...
5.Service Function Chain Routing in LEO Networks Using Shortest-Path Delay Statistical Stability
Low Earth orbit (LEO) satellite constellations have become a critical enabler for global coverage, utilizing numerous satellites orbiting Earth at high speeds. By decomposing complex network services into lightweight service functions, network function virtualization (NFV) transforms global network services into diverse service function chains (SFCs), coordinated by resource-constrained LEOs. However, the dynamic topology of satellite networks, marked by highly variable inter-satellite link delays, poses significant challenges for designing efficient routing strategies that ensure reliable and low-latency communication. Many existing routing methods suffer from poor scalability and degraded performance, limiting their practical implementation. To address these challenges, this paper proposes a novel SFC routing approach that leverages the...
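The "shortest-path delay statistical stability" notion can be illustrated by running an ordinary shortest-path computation over several topology snapshots and summarizing the resulting delays. The four-satellite topology and millisecond link delays below are made-up values, not real LEO measurements, and this is a generic Dijkstra sketch rather than the paper's SFC routing algorithm.

```python
import heapq, statistics

# Toy time-varying LEO topology: each snapshot maps an inter-satellite
# link to its delay in ms (all values illustrative).
snapshots = [
    {("s1", "s2"): 12, ("s2", "s4"): 15, ("s1", "s3"): 20, ("s3", "s4"): 9},
    {("s1", "s2"): 14, ("s2", "s4"): 13, ("s1", "s3"): 21, ("s3", "s4"): 10},
    {("s1", "s2"): 30, ("s2", "s4"): 15, ("s1", "s3"): 22, ("s3", "s4"): 9},
]

def dijkstra(links, src, dst):
    """Standard Dijkstra on an undirected delay-weighted graph."""
    adj = {}
    for (u, v), w in links.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# Per-snapshot end-to-end delay and its statistics: a route whose
# shortest-path delay barely moves across snapshots is "stable".
delays = [dijkstra(s, "s1", "s4") for s in snapshots]
mu, sigma = statistics.mean(delays), statistics.pstdev(delays)
print(f"shortest-path delay: mean={mu:.1f} ms, stdev={sigma:.2f} ms")
```

Note how the best path flips from s1-s2-s4 to s1-s3-s4 in the third snapshot when the s1-s2 link degrades; the delay statistics over snapshots capture exactly this variability that an instantaneous shortest-path choice ignores.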