Daily Briefing – Mar 9 (96 Articles)
Babak's Daily Briefing
Monday, March 9, 2026
Sources: 20 | Total Articles: 96
6G World
1.SpaceRAN: Airbus UpNext explores software-defined 5G NTN from orbit
Airbus UpNext has launched its SpaceRAN (Space Radio Access Network) demonstrator, a key initiative to advance standardised 5G…
2.SoftBank’s Transformer-Based AI-RAN Hits 30% Uplink Gain at Sub-Millisecond Latency
On August 21, 2025, SoftBank published results from a live, standards-compliant AI-RAN trial that replaces parts of classical signal processing with a lightweight Transformer.
3.6G as a Platform for Value
Reframing the Future with NGMN’s Chairman, Laurent Leboucher. By Piotr (Peter) Pietrzyk, Managing Editor, 6GWorld.com. In the race…
4.SoftBank Road-Tests 7 GHz in Central Tokyo
SoftBank and Nokia have begun outdoor field trials in Tokyo’s Ginza district using 7 GHz spectrum, installing three pre-commercial base stations to compare coverage and radio characteristics against today’s sub-6 GHz 5G sites.
5.NXP’s Acquisition of TTTech Auto Signals Growing Focus on Middleware for Software-Defined Vehicles
On June 17, 2025, NXP Semiconductors finalized its acquisition of TTTech Auto—a strategic move to integrate TTTech’s flagship…
AI Agents
1.Mozi: Governed Autonomy for Drug Discovery LLM Agents
Tool-augmented large language model (LLM) agents promise to unify scientific reasoning with computation, yet their deployment in high-stakes domains like drug discovery is bottlenecked by two critical barriers: unconstrained tool-use governance and poor long-horizon reliability. In dependency-heavy pharmaceutical pipelines, autonomous agents often drift into irreproducible trajectories, where early-stage hallucinations multiplicatively compound into downstream failures. To overcome this, we present Mozi, a dual-layer architecture that bridges the flexibility of generative AI with the deterministic rigor of computational biology. Layer A (Control Plane) establishes a governed supervisor-worker hierarchy that enforces role-based tool isolation, limits execution to constrained action spaces, and drives reflection-based replanning. Layer B (...
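The control-plane idea above (role-based tool isolation plus a constrained action space enforced by a supervisor) can be sketched in a few lines. All role names, tool names, and the dispatch logic below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a Layer-A-style control plane: each worker role
# is restricted to an allow-listed tool set, and every proposed action is
# validated before execution. Names here are illustrative only.

ROLE_TOOLS = {
    "docking_worker": {"run_docking", "score_pose"},
    "chem_worker": {"compute_descriptors", "filter_toxicity"},
}

def validate_action(role: str, tool: str) -> bool:
    """True only if the tool lies in the role's constrained action space."""
    return tool in ROLE_TOOLS.get(role, set())

class Supervisor:
    def __init__(self):
        self.log = []  # audit trail, supporting reproducible trajectories

    def dispatch(self, role: str, tool: str, args: dict):
        if not validate_action(role, tool):
            self.log.append(("rejected", role, tool))
            return None  # would trigger reflection-based replanning
        self.log.append(("executed", role, tool))
        return {"tool": tool, "args": args}  # placeholder for real execution
```

A rejected call returns `None` rather than executing, which is where a reflection/replanning step would hook in.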
2.stratum: A System Infrastructure for Massive Agent-Centric ML Workloads
Recent advances in large language models (LLMs) transform how machine learning (ML) pipelines are developed and evaluated. LLMs enable a new type of workload, agentic pipeline search, in which autonomous or semi-autonomous agents generate, validate, and optimize complete ML pipelines. These agents predominantly operate over popular Python ML libraries and exhibit highly exploratory behavior. This results in thousands of executions for data profiling, pipeline generation, and iterative refinement of pipeline stages. However, the existing Python-based ML ecosystem is built around libraries such as Pandas and scikit-learn, which are designed for human-centric, interactive, sequential workflows and remain constrained by Python's interpretive execution model, library-level isolation, and limited runtime support for executing large numbers of p...
3.Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations
MoltBook is a large-scale multi-agent coordination environment where over 770,000 autonomous LLM agents interact without human participation, offering the first opportunity we are aware of to observe emergent multi-agent coordination dynamics at this population scale. We introduce Molt Dynamics: the emergent agent coordination behaviors, inter-agent communication dynamics, and role specialization patterns arising when autonomous agents operate as decentralized decision-makers in an unconstrained multi-agent environment. Through longitudinal observation of 90,704 active agents over three weeks, we characterize three aspects. First, spontaneous role specialization: network-based clustering reveals six structural roles (silhouette 0.91), though the result primarily reflects core-periphery organization: 93.5% of agents occupy a ho...
4.LiveCultureBench: a Multi-Agent, Multi-Cultural Benchmark for Large Language Models in Dynamic Social Simulations
Large language models (LLMs) are increasingly deployed as autonomous agents, yet evaluations focus primarily on task success rather than cultural appropriateness or evaluator reliability. We introduce LiveCultureBench, a multi-cultural, dynamic benchmark that embeds LLMs as agents in a simulated town and evaluates them on both task completion and adherence to socio-cultural norms. The simulation models a small city as a location graph with synthetic residents having diverse demographic and cultural profiles. Each episode assigns one resident a daily goal while others provide social context. An LLM-based verifier generates structured judgments on norm violations and task progress, which we aggregate into metrics capturing task-norm trade-offs and verifier uncertainty. Using LiveCultureBench across models and cultural profiles, we study (i)...
5.Evaluating and Understanding Scheming Propensity in LLM Agents
As frontier language models are increasingly deployed as autonomous agents pursuing complex, long-term objectives, there is increased risk of scheming: agents covertly pursuing misaligned goals. Prior work has focused on showing agents are capable of scheming, but their propensity to scheme in realistic scenarios remains underexplored. To understand when agents scheme, we decompose scheming incentives into agent factors and environmental factors. We develop realistic settings allowing us to systematically vary these factors, each with scheming opportunities for agents that pursue instrumentally convergent goals such as self-preservation, resource acquisition, and goal-guarding. We find only minimal instances of scheming despite high environmental incentives, and show this is unlikely due to evaluation awareness. While inserting adversaria...
AI Computation & Hardware
1.Verify as You Go: An LLM-Powered Browser Extension for Fake News Detection
arXiv:2603.05519v1: The rampant spread of fake news in the digital age poses serious risks to public trust and democratic institutions, underscoring the need for effective, transparent, and user-centered detection tools. Existing browser extensions often fall short due to opaque model behavior, limited explanatory support, and a lack of meaningful user engagement. This paper introduces Aletheia, a novel browser extension that leverages Retrieval-Augmented Generation (RAG) and Large Language Models (LLMs) to detect fake news and provide evidence-based explanations. Aletheia further includes two interactive components: a Discussion Hub that enables user dialogue around flagged content and a Stay Informed feature that surfaces recent fact-checks. Through extensive experiments, we show that Aletheia outperforms st...
2.Attention Meets Reachability: Structural Equivalence and Efficiency in Grammar-Constrained LLM Decoding
arXiv:2603.05540v1: We study grammar-constrained decoding (GCD) as a coupling between an autoregressive next-token distribution and a reachability oracle over a pushdown system compiled from a context-free grammar (CFG). We prove an oracle invariance theorem: language-equivalent grammars induce identical admissible next-token sets for every prefix, hence identical logit masks, yet can yield provably different compiled state spaces and online ambiguity costs. We give exact control-state blowup counts for the canonical $a^n b^n$ language under redundant nonterminal delegation, and introduce a left-to-right structural ambiguity cost (SAC) measuring incremental packed-parse-forest growth per token. For two equivalent grammars over all finite strings, SAC is $O(1)$ per token under right-recursion but $\Theta(t^2)$ ...
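The coupling described above can be made concrete with a toy reachability oracle for the paper's canonical $a^n b^n$ example: for each prefix, compute the admissible next tokens and mask everything else in the logits. This is a minimal sketch under our own simplifications, not the paper's compiled pushdown system.

```python
import math

# Toy reachability oracle for L = {a^n b^n : n >= 1}: for a prefix,
# return the set of next tokens (including "<eos>") that keep the
# string extendable to a word in L. Illustrative sketch only.

def admissible(prefix: str) -> set:
    n_a = len(prefix) - len(prefix.lstrip("a"))
    rest = prefix[n_a:]
    if any(c != "b" for c in rest) or len(rest) > n_a:
        return set()              # dead state: broke a^* b^* shape
    n_b = len(rest)
    out = set()
    if n_b == 0:
        out.add("a")              # may still open more a's
    if n_a > n_b:
        out.add("b")              # unmatched a's must be closed
    if n_a == n_b and n_a >= 1:
        out.add("<eos>")          # word complete
    return out

def mask_logits(logits: dict, prefix: str) -> dict:
    """Set inadmissible next-token logits to -inf (the GCD logit mask)."""
    ok = admissible(prefix)
    return {t: (v if t in ok else -math.inf) for t, v in logits.items()}
```

The oracle invariance claim is about exactly these admissible sets: any grammar generating the same language yields the same mask for every prefix, even if its compiled state space differs.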
3.NOTAI.AI: Explainable Detection of Machine-Generated Text via Curvature and Feature Attribution
arXiv:2603.05617v1: We present NOTAI.AI, an explainable framework for machine-generated text detection that extends Fast-DetectGPT by integrating curvature-based signals with neural and stylometric features in a supervised setting. The system combines 17 interpretable features, including Conditional Probability Curvature, ModernBERT detector score, readability metrics, and stylometric cues, within a gradient-boosted tree (XGBoost) meta-classifier to determine whether a text is human- or AI-generated. Furthermore, NOTAI.AI applies Shapley Additive Explanations (SHAP) to provide both local and global feature-level attribution. These attributions are further translated into structured natural-language rationales through an LLM-based explanation layer, which enables user-facing interpretability. The system is depl...
4.Safer Reasoning Traces: Measuring and Mitigating Chain-of-Thought Leakage in LLMs
arXiv:2603.05618v1: Chain-of-Thought (CoT) prompting improves LLM reasoning but can increase privacy risk by resurfacing personally identifiable information (PII) from the prompt into reasoning traces and outputs, even under policies that instruct the model not to restate PII. We study such direct, inference-time PII leakage using a model-agnostic framework that (i) defines leakage as risk-weighted, token-level events across 11 PII types, (ii) traces leakage curves as a function of the allowed CoT budget, and (iii) compares open- and closed-source model families on a structured PII dataset with a hierarchical risk taxonomy. We find that CoT consistently elevates leakage, especially for high-risk categories, and that leakage is strongly family- and budget-dependent. Increasing the reasoning budget can either am...
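The two measurement ideas above (risk-weighted token-level leakage events, and leakage curves as a function of CoT budget) can be sketched directly. The PII types, weights, and event format below are hypothetical stand-ins, not the paper's taxonomy.

```python
# Illustrative risk-weighted, token-level leakage metric: each leaked
# token is scored by the risk weight of its PII type, and leakage is
# accumulated as a function of the allowed CoT token budget.
# Weights and type names here are assumptions for the sketch.

RISK_WEIGHTS = {"ssn": 1.0, "phone": 0.6, "email": 0.4, "city": 0.1}

def leakage_score(leak_events):
    """leak_events: list of (token_position, pii_type) found in a trace."""
    return sum(RISK_WEIGHTS.get(t, 0.0) for _, t in leak_events)

def leakage_curve(trace_events, budgets):
    """Leakage accumulated within each CoT token budget."""
    return {
        b: leakage_score([e for e in trace_events if e[0] < b])
        for b in budgets
    }
```

Plotting `leakage_curve` across budgets is what would reveal the budget-dependent behavior the abstract describes.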
5.The Fragility Of Moral Judgment In Large Language Models
arXiv:2603.05651v1: People increasingly use large language models (LLMs) for everyday moral and interpersonal guidance, yet these systems cannot interrogate missing context and must judge dilemmas as presented. We introduce a perturbation framework for testing the stability and manipulability of LLM moral judgments while holding the underlying moral conflict constant. Using 2,939 dilemmas from r/AmItheAsshole (January-March 2025), we generate three families of content perturbations: surface edits (lexical/structural noise), point-of-view shifts (voice and stance neutralization), and persuasion cues (self-positioning, social proof, pattern admissions, victim framing). We also vary the evaluation protocol (output ordering, instruction placement, and unstructured prompting). We evaluated all variants with four models ...
AI Machine Learning
1.Traversal-as-Policy: Log-Distilled Gated Behavior Trees as Externalized, Verifiable Policies for Safe, Robust, and Efficient Agents
arXiv:2603.05517v1: Autonomous LLM agents fail because long-horizon policy remains implicit in model weights and transcripts, while safety is retrofitted post hoc. We propose Traversal-as-Policy: distill sandboxed OpenHands execution logs into a single executable Gated Behavior Tree (GBT) and treat tree traversal -- rather than unconstrained generation -- as the control policy whenever a task is in coverage. Each node encodes a state-conditioned action macro mined and merge-checked from successful trajectories; macros implicated by unsafe traces attach deterministic pre-execution gates over structured tool context and bounded history, updated under experience-grounded monotonicity so previously rejected unsafe contexts cannot be re-admitted. At runtime, a lightweight traverser matches the base model's intent to...
2.JAWS: Enhancing Long-term Rollout of Neural Operators via Spatially-Adaptive Jacobian Regularization
arXiv:2603.05538v1: Data-driven surrogate models improve the efficiency of simulating continuous dynamical systems, yet their autoregressive rollouts are often limited by instability and spectral blow-up. While global regularization techniques can enforce contractive dynamics, they uniformly damp high-frequency features, introducing a contraction-dissipation dilemma. Furthermore, long-horizon trajectory optimization methods that explicitly correct drift are bottlenecked by memory constraints. In this work, we propose Jacobian-Adaptive Weighting for Stability (JAWS), a probabilistic regularization strategy designed to mitigate these limitations. By framing operator learning as Maximum A Posteriori (MAP) estimation with spatially heteroscedastic uncertainty, JAWS dynamically modulates the regularization strength ...
3.VDCook: DIY video data cook your MLLMs
arXiv:2603.05539v1: We introduce VDCook: a self-evolving video data operating system, a configurable video data construction platform for researchers and vertical domain teams. Users initiate data requests via natural language queries and adjustable parameters (scale, retrieval-synthesis ratio, quality threshold). The system automatically performs query optimization, concurrently running real video retrieval and controlled synthesis modules. It ultimately generates in-domain data packages with complete provenance and metadata, along with reproducible Notebooks. Unlike traditional static, one-time-built datasets, VDCook enables continuous updates and domain expansion through its automated data ingestion mechanism based on MCP (Model Context Protocol), transforming datasets into dynamically...
4.IntSeqBERT: Learning Arithmetic Structure in OEIS via Modulo-Spectrum Embeddings
arXiv:2603.05556v1: Integer sequences in the OEIS span values from single-digit constants to astronomical factorials and exponentials, making prediction challenging for standard tokenised models that cannot handle out-of-vocabulary values or exploit periodic arithmetic structure. We present IntSeqBERT, a dual-stream Transformer encoder for masked integer-sequence modelling on OEIS. Each sequence element is encoded along two complementary axes: a continuous log-scale magnitude embedding and sin/cos modulo embeddings for 100 residues (moduli 2-101), fused via FiLM. Three prediction heads (magnitude regression, sign classification, and modulo prediction for 100 moduli) are trained jointly on 274,705 OEIS sequences. At the Large scale (91.5M parameters), IntSeqBERT achieves 95.85% magnitude accuracy and 50.38%...
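The dual-axis encoding described above (a log-scale magnitude feature plus sin/cos embeddings of n mod m for 100 moduli) is easy to sketch as an input featurization. The FiLM fusion and prediction heads are omitted; the function below is our own minimal reading of the encoding.

```python
import math

# Sketch of the dual-axis integer encoding: sign, log-scale magnitude,
# and sin/cos embeddings of n mod m for moduli 2..101 (100 residues).
# This illustrates only the featurization, not the model itself.

def encode_integer(n: int, moduli=range(2, 102)):
    sign = 1.0 if n >= 0 else -1.0
    magnitude = math.log1p(abs(n))          # tames astronomical OEIS values
    feats = [sign, magnitude]
    for m in moduli:
        phase = 2.0 * math.pi * (n % m) / m
        feats.extend([math.sin(phase), math.cos(phase)])
    return feats
```

By construction the modulo features are periodic: integers congruent mod m get identical sin/cos values on that modulus, which is the arithmetic structure a tokenised model cannot see.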
5.Autocorrelation effects in a stochastic-process model for decision making via time series
arXiv:2603.05559v1: Decision makers exploiting photonic chaotic dynamics obtained by semiconductor lasers provide an ultrafast approach to solving multi-armed bandit problems by using a temporal optical signal as the driving source for sequential decisions. In such systems, the sampling interval of the chaotic waveform shapes the temporal correlation of the resulting time series, and experiments have reported that decision accuracy depends strongly on this autocorrelation property. However, it remains unclear whether the benefit of autocorrelation can be explained by a minimal mathematical model. Here, we analyze a stochastic-process model of the time-series-based decision making using the tug-of-war principle for solving the two-armed bandit problem, where the threshold and a two-valued Markov signal evolve jo...
AI Robotics
1.ProFocus: Proactive Perception and Focused Reasoning in Vision-and-Language Navigation
arXiv:2603.05530v1: Vision-and-Language Navigation (VLN) requires agents to accurately perceive complex visual environments and reason over navigation instructions and histories. However, existing methods passively process redundant visual inputs and treat all historical contexts indiscriminately, resulting in inefficient perception and unfocused reasoning. To address these challenges, we propose ProFocus, a training-free progressive framework that unifies Proactive Perception and Focused Reasoning through collaboration between large language models (LLMs) and vision-language models (VLMs). For proactive perception, ProFocus transforms panoramic observations into structured ego-centric semantic maps, enabling the orchestration agent to identify missing visual information needed ...
2.Digital-Twin Losses for Lane-Compliant Trajectory Prediction at Urban Intersections
arXiv:2603.05546v1: Accurate and safety-conscious trajectory prediction is a key technology for intelligent transportation systems, especially in V2X-enabled urban environments with complex multi-agent interactions. In this paper, we created a digital twin-driven V2X trajectory prediction pipeline that jointly leverages cooperative perception from vehicles and infrastructure to forecast multi-agent motion at signalized intersections. The proposed model combines a Bi-LSTM-based generator with a structured training objective consisting of a standard mean squared error (MSE) loss and a novel twin loss. The twin loss encodes infrastructure constraints, collision avoidance, diversity across predicted modes, and rule-based priors derived from the digital twin. While the MSE term ensures point-wise accuracy, the twin ...
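The composite objective described above (point-wise MSE plus twin-loss penalties for collision avoidance and mode diversity) can be sketched on toy 2D trajectories. The weights and penalty shapes below are our own assumptions; the paper's infrastructure and rule-based prior terms are not reproduced here.

```python
# Hedged sketch of a composite "twin loss": best-mode MSE, a collision
# penalty against another agent, and a diversity bonus across predicted
# modes. Trajectories are lists of (x, y) points; weights are illustrative.

def l2_mse(traj, target):
    return sum((x - tx) ** 2 + (y - ty) ** 2
               for (x, y), (tx, ty) in zip(traj, target)) / len(traj)

def collision_penalty(traj_a, traj_b, d_min=2.0):
    """Penalize timesteps where two agents come closer than d_min."""
    pen = 0.0
    for (xa, ya), (xb, yb) in zip(traj_a, traj_b):
        d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        pen += max(0.0, d_min - d)
    return pen / len(traj_a)

def diversity(modes):
    """Mean endpoint spread across predicted modes (higher = more diverse)."""
    ends = [m[-1] for m in modes]
    n, tot = 0, 0.0
    for i in range(len(ends)):
        for j in range(i + 1, len(ends)):
            tot += ((ends[i][0] - ends[j][0]) ** 2 +
                    (ends[i][1] - ends[j][1]) ** 2) ** 0.5
            n += 1
    return tot / n if n else 0.0

def twin_loss(modes, target, other, w_col=1.0, w_div=0.1):
    point = min(l2_mse(m, target) for m in modes)          # MSE term
    col = sum(collision_penalty(m, other) for m in modes) / len(modes)
    return point + w_col * col - w_div * diversity(modes)  # twin terms
```

The sign convention makes diversity a reward: spreading modes lowers the loss, while near-collisions raise it, mirroring the trade-off the abstract describes.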
3.TEGA: A Tactile-Enhanced Grasping Assistant for Assistive Robotics via Sensor Fusion and Closed-Loop Haptic Feedback
arXiv:2603.05552v1: Recent advances in teleoperation have enabled sophisticated manipulation of dexterous robotic hands, with most systems concentrating on guiding finger positions to achieve desired grasp configurations. However, while accurate finger positioning is essential, it often overlooks the equally critical task of grasp force modulation, vital for handling objects of diverse hardness, texture, and shape. This limitation poses a significant challenge for users, especially individuals with upper limb disabilities who lack natural tactile feedback and rely on indirect cues to infer appropriate force levels. To address this gap, we present the tactile-enhanced grasping assistant (TEGA), a closed-loop assistive teleoperation framework that fuses EMG-based intent2force inference with visuotactile sensing m...
4.PRISM: Personalized Refinement of Imitation Skills for Manipulation via Human Instructions
arXiv:2603.05574v1: This paper presents PRISM: an instruction-conditioned refinement method for imitation policies in robotic manipulation. This approach bridges Imitation Learning (IL) and Reinforcement Learning (RL) frameworks into a seamless pipeline, such that an imitation policy on a broad generic task, generated from a set of user-guided demonstrations, can be refined through reinforcement to generate new unseen fine-grained behaviours. The refinement process follows the Eureka paradigm, where reward functions for RL are iteratively generated from an initial natural-language task description. The presented approach builds on top of this mechanism to adapt a refined IL policy of a generic task to new goal configurations and the introduction of constraints by also adding human-feedback correction on intermediate...
5.Task Parameter Extrapolation via Learning Inverse Tasks from Forward Demonstrations
arXiv:2603.05576v1: Generalizing skill policies to novel conditions remains a key challenge in robot learning. Imitation learning methods, while data-efficient, are largely confined to the training region and consistently fail on input data outside it, leading to unpredictable policy failures. Alternatively, transfer learning approaches offer methods for trajectory generation robust to both changes in environment or tasks, but they remain data-hungry and lack accuracy in zero-shot generalization. We address these challenges by framing the problem in the context of task inversion learning and proposing a novel joint learning approach to achieve accurate and efficient knowledge transfer. Our method constructs a common representation of the forward and inverse tasks, and leverages auxiliary forward demonstrations ...
Financial AI
1.Stock Market Prediction Using Node Transformer Architecture Integrated with BERT Sentiment Analysis
Stock market prediction presents considerable challenges for investors, financial institutions, and policymakers operating in complex market environments characterized by noise, non-stationarity, and behavioral dynamics. Traditional forecasting methods often fail to capture the intricate patterns and cross-sectional dependencies inherent in financial markets. This paper presents an integrated framework combining a node transformer architecture with BERT-based sentiment analysis for stock price forecasting. The proposed model represents the stock market as a graph structure where individual stocks form nodes and edges capture relationships including sectoral affiliations, correlated price movements, and supply chain connections. A fine-tuned BERT model extracts sentiment from social media posts and combines it with quantitative market feat...
2.Statistical Inference for Score Decompositions
We introduce inference methods for score decompositions, which partition scoring functions for predictive assessment into three interpretable components: miscalibration, discrimination, and uncertainty. Our estimation and inference relies on a linear recalibration of the forecasts, which is applicable to general multi-step ahead point forecasts such as means and quantiles due to its validity for both smooth and non-smooth scoring functions. This approach ensures desirable finite-sample properties, enables asymptotic inference, and establishes a direct connection to the classical Mincer-Zarnowitz regression. The resulting inference framework facilitates tests for equal forecast calibration or discrimination, which yield three key advantages. They enhance the information content of predictive ability tests by decomposing scores, deliver hig...
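The decomposition above has a compact form for the squared-error score: with a linear (Mincer-Zarnowitz style) recalibration, the mean score splits exactly into miscalibration (MCB), discrimination (DSC), and uncertainty (UNC), with mean score = MCB - DSC + UNC holding in-sample. The sketch below is a minimal illustration of that identity, not the paper's inference procedure.

```python
# Minimal MCB / DSC / UNC decomposition of the squared-error score via
# linear recalibration. In-sample, mean_score = MCB - DSC + UNC exactly.

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var if var > 0 else 0.0
    return my - b * mx, b                      # intercept, slope

def decompose(forecasts, outcomes):
    a, b = ols(forecasts, outcomes)
    recal = [a + b * f for f in forecasts]     # linearly recalibrated
    clim = sum(outcomes) / len(outcomes)       # climatological forecast
    def ms(f):
        return sum((fi - yi) ** 2 for fi, yi in zip(f, outcomes)) / len(outcomes)
    score, score_r, unc = ms(forecasts), ms(recal), ms([clim] * len(outcomes))
    return {"MCB": score - score_r,            # removable by recalibration
            "DSC": unc - score_r,              # skill beyond climatology
            "UNC": unc, "score": score}
```

Because OLS minimizes the in-sample squared error, MCB and DSC are both nonnegative here, which is what makes the components interpretable.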
3.Same Error, Different Function: The Optimizer as an Implicit Prior in Financial Time Series
Neural networks applied to financial time series operate in a regime of underspecification, where model predictors achieve indistinguishable out-of-sample error. Using large-scale volatility forecasting for S&P 500 stocks, we show that different model-training-pipeline pairs with identical test loss learn qualitatively different functions. Across architectures, predictive accuracy remains unchanged, yet optimizer choice reshapes non-linear response profiles and temporal dependence differently. These divergences have material consequences for decisions: volatility-ranked portfolios trace a near-vertical Sharpe-turnover frontier, with nearly 3× turnover dispersion at comparable Sharpe ratios. We conclude that in underspecified settings, optimization acts as a consequential source of inductive bias, thus model evaluation should ext...
4.Deep Learning for Financial Time Series: A Large-Scale Benchmark of Risk-Adjusted Performance
We present a large-scale benchmark of modern deep learning architectures for a financial time series prediction and position sizing task, with a primary focus on Sharpe ratio optimization. Evaluating linear models, recurrent networks, transformer-based architectures, state-space models, and recent sequence representation approaches, we assess out-of-sample performance on a daily futures dataset spanning commodities, equity indices, bonds, and FX from 2010 to 2025. Our evaluation goes beyond average returns and includes statistical significance, downside and tail risk measures, breakeven transaction cost analysis, robustness to random seed selection, and computational efficiency. We find that models explicitly designed to learn rich temporal representations consistently outperform linear benchmarks and generic deep learning models, whi...
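Two of the evaluation measures mentioned above are simple enough to state in code: the annualized Sharpe ratio of a daily return series, and the breakeven transaction cost implied by gross returns and turnover. The formulas are standard; the function and parameter names are our own.

```python
import math

# Annualized Sharpe ratio and breakeven transaction cost, as used in
# benchmark-style evaluations of position-sizing strategies.

def sharpe(daily_returns, periods=252):
    n = len(daily_returns)
    mu = sum(daily_returns) / n
    var = sum((r - mu) ** 2 for r in daily_returns) / (n - 1)
    return mu / math.sqrt(var) * math.sqrt(periods)

def breakeven_cost(gross_returns, positions):
    """Cost per unit turnover at which the mean net return hits zero."""
    turnover = [abs(positions[i] - positions[i - 1])
                for i in range(1, len(positions))]
    mean_turnover = sum(turnover) / len(turnover)
    mean_gross = sum(gross_returns) / len(gross_returns)
    return mean_gross / mean_turnover if mean_turnover > 0 else float("inf")
```

A strategy with high gross Sharpe but high turnover can have a low breakeven cost, which is exactly the trade-off a cost-aware benchmark surfaces.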
5.Adaptive Window Selection for Financial Risk Forecasting
Risk forecasts in financial regulation and internal management are calculated from historical data. The unknown structural changes of financial data pose a substantial challenge in selecting an appropriate look-back window for risk modeling and forecasting. We develop a data-driven online learning method, called the bootstrap-based adaptive window selection (BAWS), that adaptively determines the window size in a sequential manner. A central component of BAWS is to compare the realized scores against a data-dependent threshold, which is evaluated using a bootstrap procedure. The proposed method is applicable to the forecast of risk measures that are elicitable individually or jointly, such as the Value-at-Risk (VaR) and the pair of the VaR and the corresponding Expected Shortfall. Through simulation studies and empirical analyses, we ...
GSMA Newsroom
1.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
2.From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
3.Pioneering Affordable Access in Africa: GSMA and Handset Affordability Coalition Members Identify Six African Countries to Pilot Affordable $40 Smartphones
Summary available at source link.
4.GSMA Calls for Regulatory Readiness for Direct-to-User LEO Satellite Services
Summary available at source link.
5.MWC26 Barcelona opens with call to complete 5G, rise to AI challenges, and strengthen digital safety
Summary available at source link.
Generative AI (arXiv)
1.BEVLM: Distilling Semantic Knowledge from LLMs into Bird's-Eye View Representations
The integration of Large Language Models (LLMs) into autonomous driving has attracted growing interest for their strong reasoning and semantic understanding abilities, which are essential for handling complex decision-making and long-tail scenarios. However, existing methods typically feed LLMs with tokens from multi-view and multi-frame images independently, leading to redundant computation and limited spatial consistency. This separation in visual processing hinders accurate 3D spatial reasoning and fails to maintain geometric coherence across views. On the other hand, Bird's-Eye View (BEV) representations learned from geometrically annotated tasks (e.g., object detection) provide spatial structure but lack the semantic richness of foundation vision encoders. To bridge this gap, we propose BEVLM, a framework that connects a spatially co...
2.Beyond Rows to Reasoning: Agentic Retrieval for Multimodal Spreadsheet Understanding and Editing
Recent advances in multimodal Retrieval-Augmented Generation (RAG) enable Large Language Models (LLMs) to analyze enterprise spreadsheet workbooks containing millions of cells, cross-sheet dependencies, and embedded visual artifacts. However, state-of-the-art approaches exclude critical context through single-pass retrieval, lose data resolution through compression, and exceed LLM context windows through naive full-context injection, preventing reliable multi-step reasoning over complex enterprise workbooks. We introduce Beyond Rows to Reasoning (BRTR), a multimodal agentic framework for spreadsheet understanding that replaces single-pass retrieval with an iterative tool-calling loop, supporting end-to-end Excel workflows from complex analysis to structured editing. Supported by over 200 hours of expert human evaluation, BRTR achieves sta...
3.Abductive Reasoning with Syllogistic Forms in Large Language Models
Research in AI using Large Language Models (LLMs) is rapidly evolving, and the comparison of their performance with human reasoning has become a key concern. Prior studies have indicated that LLMs and humans share similar biases, such as dismissing logically valid inferences that contradict common beliefs. However, criticizing LLMs for these biases might be unfair, considering our reasoning not only involves formal deduction but also abduction, which draws tentative conclusions from limited information. Abduction can be regarded as the inverse form of syllogism in its basic structure, that is, a process of drawing a minor premise from a major premise and conclusion. This paper explores the accuracy of LLMs in abductive reasoning by converting a syllogistic dataset into one suitable for abduction. It aims to investigate whether the state-o...
4.Before You Hand Over the Wheel: Evaluating LLMs for Security Incident Analysis
Security incident analysis (SIA) poses a major challenge for security operations centers, which must manage overwhelming alert volumes, large and diverse data sources, complex toolchains, and limited analyst expertise. These difficulties intensify because incidents evolve dynamically and require multi-step, multifaceted reasoning. Although organizations are eager to adopt Large Language Models (LLMs) to support SIA, the absence of rigorous benchmarking creates significant risks for assessing their effectiveness and guiding design decisions. Benchmarking is further complicated by: (i) the lack of an LLM-ready dataset covering a wide spectrum of SIA tasks; (ii) the continual emergence of new tasks reflecting the diversity of analyst responsibilities; and (iii) the rapid release of new LLMs that must be incorporated into evaluations. In this...
5.Evaluation of Deontic Conditional Reasoning in Large Language Models: The Case of Wason's Selection Task
As large language models (LLMs) advance in linguistic competence, their reasoning abilities are gaining increasing attention. In humans, reasoning often performs well in domain specific settings, particularly in normative rather than purely formal contexts. Although prior studies have compared LLM and human reasoning, the domain specificity of LLM reasoning remains underexplored. In this study, we introduce a new Wason Selection Task dataset that explicitly encodes deontic modality to systematically distinguish deontic from descriptive conditionals, and use it to examine LLMs' conditional reasoning under deontic rules. We further analyze whether observed error patterns are better explained by confirmation bias (a tendency to seek rule-supporting evidence) or by matching bias (a tendency to ignore negation and select items that lexically m...
Hugging Face Daily Papers
1.Mitigating Bias in Concept Bottleneck Models for Fair and Interpretable Image Classification
Ensuring fairness in image classification prevents models from perpetuating and amplifying bias. Concept bottleneck models (CBMs) map images to high-level, human-interpretable concepts before making predictions via a sparse, one-layer classifier. This structure enhances interpretability and, in theory, supports fairness by masking sensitive attribute proxies such as facial features. However, CBM concepts have been known to leak information unrelated to concept semantics and early results reveal only marginal reductions in gender bias on datasets like ImSitu. We propose three bias mitigation techniques to improve fairness in CBMs: 1. Decreasing information leakage using a top-k concept filter, 2. Removing biased concepts, and 3. Adversarial debiasing. Our results outperform prior work in terms of fairness-performance tradeoffs, indicating ...
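The first mitigation listed above, a top-k concept filter, reduces the bandwidth available for leaked non-semantic information by keeping only the k strongest concept activations before the sparse linear classifier. The sketch below is our own minimal reading of that idea; names and details are assumptions.

```python
# Sketch of a top-k concept filter in a CBM-style pipeline: zero out all
# but the k largest-magnitude concept activations, then apply the sparse
# linear classification head. Implementation details are assumptions.

def topk_filter(concepts, k):
    """Keep only the k largest-magnitude concept activations."""
    keep = set(sorted(range(len(concepts)),
                      key=lambda i: abs(concepts[i]), reverse=True)[:k])
    return [c if i in keep else 0.0 for i, c in enumerate(concepts)]

def linear_head(concepts, weights, bias=0.0):
    """Sparse one-layer classifier over (filtered) concept activations."""
    return bias + sum(c * w for c, w in zip(concepts, weights))
```

Because a proxy for a sensitive attribute must now ride on one of the k surviving concepts, filtering tightens the leakage channel without retraining the concept layer.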
2.Test-Time Adaptation via Many-Shot Prompting: Benefits, Limits, and Pitfalls
Test-time adaptation enables large language models (LLMs) to modify their behavior at inference without updating model parameters. A common approach is many-shot prompting, where large numbers of in-context learning (ICL) examples are injected as an input-space test-time update. Although performance can improve as more demonstrations are added, the reliability and limits of this update mechanism remain poorly understood, particularly for open-source models. We present an empirical study of many-shot prompting across tasks and model backbones, analyzing how performance varies with update magnitude, example ordering, and selection policy. We further study Dynamic and Reinforced ICL as alternative test-time update strategies that control which information is injected and how it constrains model behavior. We find that many-shot prompting is e...
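The input-space update studied above amounts to assembling a prompt from many demonstrations under a selection and ordering policy. The builder below is a hypothetical template meant only to make those knobs (shot count, ordering) concrete; it is not from the paper.

```python
# Minimal many-shot prompt builder: select and order in-context
# demonstrations, then inject them as a prefix before the query.
# The Q:/A: template and naive selection policy are assumptions.

def build_many_shot_prompt(pool, query, n_shots, order="as_is"):
    shots = pool[:n_shots]            # naive selection policy
    if order == "reversed":
        shots = shots[::-1]
    lines = [f"Q: {q}\nA: {a}" for q, a in shots]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)
```

Varying `n_shots` controls the update magnitude and `order` the example ordering, the two factors whose reliability the study probes.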
3.RouteGoT: Node-Adaptive Routing for Cost-Efficient Graph of Thoughts Reasoning
Large Language Models (LLMs) excel at multi-step reasoning, yet increasing the structural complexity of inference does not consistently improve system-level returns. Methods such as Tree of Thoughts (ToT), Graph of Thoughts (GoT), and Adaptive Graph of Thoughts (AGoT) can boost accuracy on some benchmarks, but often introduce substantial overhead in token consumption and latency, and their gains can be unstable across task distributions, sometimes underperforming simpler Chain-of-Thought (CoT) or direct input-output prompting (IO). We attribute this inefficiency to stage-wise and node-wise heterogeneity inside GoT-style reasoning pipelines: high-quality planning and final synthesis are globally coupled and typically benefit from strong models, whereas many intermediate subtasks are localized and can be solved accurately by lighter models w...
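The heterogeneity argument suggests a simple router: send globally coupled planning and synthesis nodes to a strong model and localized intermediate nodes to a lighter one. A minimal sketch with stub functions standing in for the two LLM backends (the `Node` roles and stubs are illustrative assumptions, not RouteGoT's actual interface):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    name: str
    role: str      # "plan", "intermediate", or "synthesis"
    prompt: str

# stubs standing in for a strong and a light LLM backend (hypothetical)
def strong_llm(prompt: str) -> str:
    return f"[strong] {prompt}"

def light_llm(prompt: str) -> str:
    return f"[light] {prompt}"

def route(node: Node) -> Callable[[str], str]:
    # globally coupled stages get the strong model; localized subtasks the light one
    return strong_llm if node.role in ("plan", "synthesis") else light_llm

graph = [
    Node("plan", "plan", "decompose the task"),
    Node("step1", "intermediate", "solve subtask 1"),
    Node("step2", "intermediate", "solve subtask 2"),
    Node("final", "synthesis", "merge partial answers"),
]
outputs = {n.name: route(n)(n.prompt) for n in graph}
```

A real router would condition on more than a static role label (e.g. estimated node difficulty), but the cost saving comes from exactly this asymmetry: only two of the four calls above hit the expensive model.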
4.LTLGuard: Formalizing LTL Specifications with Compact Language Models and Lightweight Symbolic Reasoning
Translating informal requirements into formal specifications is challenging due to the ambiguity and variability of natural language (NL). This challenge is particularly pronounced when relying on compact (small and medium) language models, which may lack robust knowledge of temporal logic and thus struggle to produce syntactically valid and consistent formal specifications. In this work, we focus on enabling resource-efficient open-weight models (4B--14B parameters) to generate correct linear temporal logic (LTL) specifications from informal requirements. We present LTLGuard, a modular toolchain that combines constrained generation with formal consistency checking to generate conflict-free LTL specifications from informal input. Our method integrates the generative capabilities of language models with lightweight automated reasoning tool...
5.Not All Trust is the Same: Effects of Decision Workflow and Explanations in Human-AI Decision Making
A central challenge in AI-assisted decision making is achieving warranted, well-calibrated trust. Both overtrust (accepting incorrect AI recommendations) and undertrust (rejecting correct advice) should be prevented. Prior studies differ in the design of the decision workflow (whether users see the AI suggestion immediately, in a 1-step setup, or must first submit their own decision, in a 2-step setup) and in how trust is measured: through self-reports or as behavioral trust, that is, reliance. We examined the effects and interactions of (a) the type of decision workflow, (b) the presence of explanations, and (c) users' domain knowledge and prior AI experience. We compared reported trust, reliance (agreement rate and switch rate), and overreliance. Results showed no evidence that a 2-step setup reduces overreliance. The decision workflow...
IEEE Xplore AI
1.Military AI Policy Needs Democratic Oversight
A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies, or Congress and the broader democratic process? The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff. Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully...
2.Entomologists Use a Particle Accelerator to Image Ants at Scale
Move over, Pixar. The ants that animators once morphed into googly-eyed caricatures in films such as A Bug’s Life and Antz just received a meticulously precise anatomical reboot. Writing today in Nature Methods, an international team of entomologists, accelerator physicists, computer scientists, and biological imaging specialists describes a new 3D atlas of ant morphology. Dubbed Antscan, the platform features micrometer-resolution reconstructions that lay bare not only the insects’ armored exoskeletons but also their muscles, nerves, digestive tracts, and needle-like stingers poised at the ready. Those high-resolution images—spanning 792 species across 212 genera and covering the bulk of described ant diversity—are now freely available through an interactive online portal, where anyone can rotate, zoom, and virtually “dissect” the insec...
3.Watershed Moment for AI–Human Collaboration in Math
When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to assist with mathematical research. “These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc Liam Fowl, who was not involved in the work. In her Fields Medal–winning research, Viazovska had tackled two versions o...
4.How Quantum Data Can Teach AI to Do Better Chemistry
Sometimes a visually compelling metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named John P. Perdew came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “Jacob’s Ladder.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.” Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder,...
5.Letting Machines Decide What Matters
In the time it takes you to read this sentence, the Large Hadron Collider (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the Standard Model of particle physics. For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As Matthew Hutson reports in “AI Hunts for the Next Big Thing in Physics,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there ha...
MIT Sloan Management
1.Why Visibility Has Become the New Test of Leadership
Carolyn Geason-Beissel/MIT SMR In professional service firms, quiet excellence once defined leadership. A partner earned influence through expertise, loyalty, and discretion. But in an era of high transparency, where every meeting can be replayed, every comment rated, and every decision scrutinized online, competence alone no longer sustains trust. Visibility has become the new test of […]
2.Our Guide to the Spring 2026 Issue
The Eight Core Principles of Strategic Innovation Gina O’Connor and Christopher R. Meyer Key Insight: Mature companies that build a strategic innovation capability can systematically renew their product portfolios to sustain long-term growth. Top Takeaways: Many companies start off with a bang: the launch of an exciting breakthrough product or service. But as time passes, […]
3.AI Won’t Fix This
We are firmly in the digital age, awash in data generated on every surface and in every layer of every business. Yet, despite decades of investment in technology, time, and effort, many organizations are still not seeing meaningful returns. A global survey of over 4,200 business and technology leaders conducted by research firm Gartner in […]
4.The Eight Core Principles of Strategic Innovation
Matt Chinworth/theispot.com The Research The research behind this article was conducted in partnership with the Innovation Research Interchange, a professional association of R&D leaders in large industrial companies. More than 640 interviews were conducted over the three phases of the research program. In Phase 1, 12 project teams from 10 companies were followed for five […]
5.Is a Venture Studio Right for Your Company?
Matt Chinworth Venture studios are emerging as a compelling — if resource-intensive — way for organizations to maximize value creation through innovation. Pioneered by organizations such as Google, the studio model offers a structured and systematic approach to venture creation inside an organization. But before adopting it, leaders must ask: Is a studio the right […]
NBER Working Papers
1.Pricing Protection: Credit Scores, Disaster Risk, and Home Insurance Affordability -- by Joshua Blonz, Mallick Hossain, Benjamin J. Keys, Philip Mulder, Joakim A. Weill
We use 70 million policies linked to mortgages and property-level disaster risk to show that credit scores impact homeowners insurance premiums as much as disaster risk. Homeowners with low credit pay 24% more for identical coverage than high–credit score homeowners. Leveraging a natural experiment in Washington State, we find that banning the use of credit information considerably weakens the relationship between credit score and pricing. We discuss the role of credit information in pricing and show that, although insurance is often overlooked in discussions of home affordability, a low credit score increases premiums roughly as much as it raises mortgage rates.
2.When Incentives Aren't Enough: Evidence on Inattention and Imperfect Memory from HIV Medication Adherence -- by Hang Yu, Jared Stolove, Dean Yang, James Riddell IV, Arlete Mahumane
Financial incentives are widely used to encourage beneficial behaviors, but their effectiveness may be limited by inattention and imperfect memory. We study this in a randomized trial of HIV medication adherence in Mozambique. Financial incentives alone increase adherence by 10.6 percentage points, while pairing incentives with reminders increases adherence by 24.3 percentage points. We develop a model in which inattention to daily adherence and imperfect memory of payment eligibility reduce incentive effectiveness and show that reminders mitigate both frictions. Detailed medication refill data support the model’s predictions. The results suggest combining incentives with reminders can substantially increase program effectiveness.
3.Pay Now, Buy Never: The Economics of Consumer Prepayment Schemes -- by Yixuan Liu, Hua Zhang, Eric Zou
Prepaid consumption is a common feature of modern consumer markets and is often presented as a mutually beneficial arrangement: consumers receive upfront discounts, and firms secure future sales. We analyze a large-scale Pay Now, Buy Later (PNBL) program in which consumers prepay for restaurant credit with bonuses, and spend the balance later. Using detailed transaction data from over 4 million consumers, we document widespread balance breakage: approximately 40% of prepaid value is never used. Because many consumers underutilize their balances, merchants recover significantly more than the bonus cost. The median firm earns roughly $5.5 in breakage profit for every $1 of bonus credit issued. While PNBL participation does lead to modest increases in consumer spending over time, firms gain substantially more from breakage than from any loya...
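The breakage mechanics can be made concrete with illustrative numbers (made up for the sketch, not the paper's data): a consumer prepays cash, receives bonus credit on top, and some share of the resulting balance is never redeemed, so the merchant keeps that cash without delivering anything.

```python
# illustrative Pay Now, Buy Later arithmetic with made-up inputs
prepay_cash = 100.0                 # cash the consumer pays up front
bonus_credit = 10.0                 # bonus granted for prepaying
balance = prepay_cash + bonus_credit

breakage_rate = 0.40                # ~40% of prepaid value never redeemed (paper's estimate)
unused = breakage_rate * balance    # cash the merchant keeps without delivering goods
redeemed = balance - unused         # credit actually spent at the restaurant

# breakage profit per $1 of bonus issued; the ratio depends entirely on the
# inputs above (the paper reports a median near $5.5 per $1 of bonus)
breakage_per_bonus_dollar = unused / bonus_credit
```

With these toy inputs the merchant keeps $44 of a $110 balance, i.e. $4.40 of breakage per bonus dollar; the paper's higher median reflects its observed mix of prepay sizes, bonus rates, and redemption behavior.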
4.How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game -- by Douglas K.G. Araujo, Harald Uhlig
As Large Language Models (LLMs) are increasingly tasked with autonomous decision making, understanding their behavior in strategic settings is crucial. We investigate the choices of various LLMs in the Ultimatum Game, a setting where human behavior notably deviates from theoretical rationality. We conduct experiments varying the stake size and the nature of the opponent (Human vs. AI) across both Proposer and Responder roles. Three key results emerge. First, LLM behavior is heterogeneous but predictable when conditioning on stake size and player types. Second, while some models approximate the rational benchmark and others mimic human social preferences, a distinct “altruistic” mode emerges where LLMs propose hyper-fair distributions (greater than 50%). Third, LLM Proposers forgo a large share of total payoff, and an even larger share whe...
5.Mergers and Non-contractible Benefits: The Employees' Perspective -- by Wei Cai, Andrea Prat, Jiehang Yu
Incomplete contract theory, supported by anecdotal evidence, suggests that when a firm is acquired, workers may be adversely affected in non-contractible aspects of their work experience. This paper empirically investigates this prediction by combining M&A events from the Refinitiv database and web-scraped Glassdoor review data. We find that: (a) Controlling for pre-trends, mergers lead to lower satisfaction, especially on non-contractible dimensions of the employee experience (about 6% of a standard deviation); (b) The effect is stronger in the target firm than in the acquiring firm; (c) Text analysis of employee comments indicates that the decline in satisfaction is primarily associated with perceived breaches of implicit contracts. Our findings indicate that mergers may reduce workers' job utility through non-monetary channels.
NY Fed - Liberty Street
1.Firms’ Inflation Expectations Return to 2024 Levels
Businesses experienced substantial cost pressures in 2025 as the cost of insurance and utilities rose sharply, while an increase in tariffs contributed to rising goods and materials costs. This post examines how firms in the New York-Northern New Jersey region adjusted their prices in response to these cost pressures and describes their expectations for future price increases and inflation. Survey results show an acceleration in firms’ price increases in 2025, with an especially sharp increase in the manufacturing sector. While both cost and price increases intensified last year, our surveys re...
2.Are Rising Employee Health Insurance Costs Dampening Wage Growth?
Employer-sponsored health insurance represents a substantial component of total compensation paid by firms to many workers in the United States. Such costs have climbed by close to 20 percent over the past five years. Indeed, the average annual premium for employer-sponsored family health insurance coverage was about $27,000 in 2025—roughly equivalent to the wage of a full-time worker paid $15 per hour. Our February regional business surveys asked firms whether their wage setting decisions were influenced by the rising cost of employee health insurance. As we showed in our
3.What’s Driving Rising Business Costs?
After a period of moderating cost increases, businesses faced mounting cost pressures in 2025. While tariffs played a role in driving up the costs of many inputs—especially among manufacturers—they represent only part of the story. Indeed, firms grappled with substantial cost increases across many categories in the past year. This post is the first in a three-part series analyzing cost and price dynamics among businesses in the New York-Northern New Jersey region based on data collected through our regional business surveys. Firms reported that the sharpest cost increases over the...
4.The Post-Pandemic Global R*
In this post we provide a measure of “global” r* using data on short- and long-term yields and inflation for several countries with the approach developed in “Global Trends in Interest Rates” (Del Negro, Giannone, Giannoni, and Tambalotti). After declining significantly from the 1990s to before the COVID-19 pandemic, global r* has risen but remains well below its pre-1990s level. These conclusions are based on an econometric model called “trendy VAR” that extracts common trends across a multitude of variables. Specifically, the common trend in real rates across all the countries in the sample is what we call global r*. The post is based on the
5.Estimating the Term Structure of Corporate Bond Risk Premia
Understanding how short- and long-term assets are priced is one of the fundamental questions in finance. The term structure of risk premia allows us to perform net present value calculations, test asset pricing models, and potentially explain the sources of many cross-sectional asset pricing anomalies. In this post, I construct a forward-looking estimate of the term structure of risk premia in the corporate bond market following Jankauskas (2024). The U.S. corporate bond market is an ideal laboratory for studying the relationship between risk premia and maturity because of its large size (standing at roughly $16 trillion as of the end of 2024) and because the maturities are well defined (in contrast to equities).
Project Syndicate
1.Hungary and the Future of the EU
The European Union’s central challenge is to defend its members against external aggression, whether from the US or Russia, and Hungarian Prime Minister Viktor Orbán’s regime has long frustrated this effort. Even if Orbán is defeated in this year's election, EU leaders must take the right lessons from his 16 years of illiberal rule.
2.The Economic Magic of Equal Opportunities for Women
None of the 190 countries covered by the World Bank’s Women, Business, and the Law 2026 report provides women with the same legal environment as men, with the biggest gaps found in safety, entrepreneurship, and childcare. In developing economies, the costs in terms of growth and employment are considerable.
3.Kevin Warsh Is in for a Rude Awakening
For years, Kevin Warsh, Donald Trump’s nominee to serve as the next chair of the US Federal Reserve, has been staking out policy positions that would almost certainly backfire if put into practice. Fortunately, market conditions and the rest of the central bank's board will still have a say in monetary policymaking.
4.What Turkey Wants in Iran
While avoiding protracted instability in Iran is vital to Turkey’s interests, so is ensuring that the Islamic Republic does not emerge victorious from the current war. Turkey's ideal scenario – a managed degradation of Iran’s ambitions and capabilities – might be best served by a Venezuela-style leadership transition.
5.A Stronger Work Ethic Won’t Fix Advanced Economies
German Chancellor Friedrich Merz learned the wrong lesson on his recent trip to China. Advanced economies expand and remain competitive not through additional labor inputs but through capital deepening, technological progress, and total factor productivity growth.
RCR Wireless
1.Can AI help stop “Wangiri” and voice spoofing?
Carriers are using real-time audio fingerprinting to intercept synthetic voice scams and Wangiri before the phone rings. It used to take actual skill to pull off a convincing phone scam. These days, however, convincing voice spoofing is a whole lot easier. Voice cloning tech has gotten accessible, meaning that criminals can easily set up realistic […]
2.Connectivity, computing, sensing – Qualcomm CEO outlines 6G pillars
The CEO of Qualcomm told MWC that connectivity will remain the foundation of 6G networks, but its design priorities will evolve as AI becomes central to digital services and mobile computing In sum – what to know: Three 6G pillars – Qualcomm CEO Cristiano Amon said connectivity, distributed computing, and sensing will form the foundation […]
3.Cisco rights the MWC narrative – fiber first, mobile later, as AI agents make minds race
While most of the big talk at MWC is about 5G and 6G, the most urgent AI infrastructure work is with fibre-heavy data centre interconnects. Cisco, and certain others, are capitalising on this east-west traffic surge, with mobile and edge networks positioned as a critical mid-term component in the AI networking stack. In sum – […]
4.Why telcos are struggling to meet enterprise expectations
A new report from the Capgemini Research Institute suggests many operators are struggling to deliver measurable business outcomes for enterprise customers As enterprises accelerate digital transformation, telecom operators are facing growing pressure to move beyond connectivity and deliver measurable business outcomes. Yet a new report from the Capgemini Research Institute suggests many operators are struggling […]
5.Huawei wins eight GLOMO awards at MWC Barcelona 2026
[Barcelona, Spain, March 5, 2026] Huawei won eight prestigious Global Mobile (GLOMO) Awards at MWC Barcelona 2026. These awards included the Best Mobile Network Infrastructure, Best AI‑Powered Network Solution, Best Non-Terrestrial Network Solution, Best Mobile Operator Service for Connected Consumers, Best Mobile Innovation for Connected Health and Wellbeing, Best FinTech & Digital Commerce Innovation, Best […]
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.A Unified Multicarrier Waveform Framework for Next-generation Wireless Networks: Principles, Performance, and Challenges
Next-generation wireless networks require enhanced flexibility, efficiency, and reliability in physical layer waveform design to address the challenges posed by heterogeneous channel conditions and stringent quality-of-service demands. To this end, this paper proposes a unified multicarrier waveform framework that provides a systematic characterization and practical implementation guidelines to facilitate waveform selection for the sixth-generation (6G) mobile networks and beyond. We commence by examining the design principles of the state-of-the-art waveforms, which are categorized into one-dimensional modulation waveforms (e.g., orthogonal frequency division multiplexing (OFDM) and affine frequency division multiplexing (AFDM)) and two-dimensional modulation waveforms (e.g., orthogonal time frequency space (OTFS)). Their inherent resili...
2.U6G XL-MIMO Radiomap Prediction: Multi-Config Dataset and Beam Map Approach
The upper 6 GHz (U6G) band with XL-MIMO is a key enabler for sixth-generation wireless systems, yet intelligent radiomap prediction for such systems remains challenging. Existing datasets support only small-scale arrays (up to 8x8) with predominantly isotropic antennas, far from the 1024-element directional arrays envisioned for 6G. Moreover, current methods encode array configurations as scalar parameters, forcing neural networks to extrapolate array-specific radiation patterns, which fails when predicting radiomaps for configurations absent from training data. To jointly address data scarcity and generalization limitations, this paper advances XL-MIMO radiomap prediction from three aspects. To overcome data limitations, we construct the first XL-MIMO radiomap dataset containing 78400 radiomaps across 800 urban scenes, five frequency ban...
3.Channel Estimation for Reconfigurable Intelligent Surface Assisted Upper Mid-Band MIMO Systems
The upper mid-band (UMB) spectrum is a key enabler for 6G systems, yet reconfigurable intelligent surface (RIS)-assisted UMB communications face severe channel estimation challenges due to near-field propagation and transitional scattering, which induce strong spatial correlation and ill-conditioned least-squares (LS) formulations. To overcome this limitation, we propose a conditioning-aware channel estimation framework that transforms the inherently ill-conditioned high-dimensional problem into multiple well-conditioned subproblems via greedy column grouping. By systematically separating highly correlated RIS elements into distinct sub-blocks via piecewise RIS phase design, the proposed method directly improves Gram matrix conditioning and stabilizes piecewise LS reconstruction without relying on sparsity assumptions. Simulation results ...
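The conditioning argument can be illustrated with a toy least-squares problem: when two nearly collinear columns sit in the same design matrix, the Gram matrix is close to singular, but separating the pair into different sub-blocks leaves each sub-Gram well conditioned. This toy construction is an illustration of the conditioning arithmetic only, not the paper's greedy grouping or piecewise RIS phase design:

```python
import numpy as np

rng = np.random.default_rng(1)
n, g = 64, 4

base = rng.standard_normal((n, g))
# 2g columns where column i and column i+g are nearly collinear
# (a stand-in for strongly correlated RIS elements)
A = np.hstack([base + 1e-3 * rng.standard_normal((n, g)),
               base + 1e-3 * rng.standard_normal((n, g))])

cond_full = np.linalg.cond(A.T @ A)       # Gram matrix of the joint LS problem: huge

# grouping that places each member of a collinear pair in a different sub-block
g1, g2 = A[:, :g], A[:, g:]
cond_sub = max(np.linalg.cond(g1.T @ g1), np.linalg.cond(g2.T @ g2))  # moderate
```

Solving the sub-blocks separately only recovers the full solution if the measurements decouple them, which is the role the paper assigns to the piecewise RIS phase design.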
4.Rethinking Next-Generation Signal Waveform: Integration of Orthogonality and Non-Orthogonality
As 6G communications advance, the demand for new services and capabilities, as defined by the International Telecommunication Union (ITU), is increasing. A crucial aspect of 6G advancement lies in the development of signal waveforms that can meet these demands while maintaining compatibility with existing standards. This paper explores sustainable physical layer waveform options, focusing on a balanced approach that integrates non-orthogonality with orthogonality to achieve both backward compatibility and forward innovation. Specifically, we investigate two key signal formats: single-carrier orthogonal frequency division multiplexing (SC-OFDM) and single-carrier non-orthogonal frequency shaping (SC-NOFS), each in 1D and 2D variants. Both can use 1D frequency and 2D time-frequency precoding, offering enhanced frequency and time diversity, simplified pr...
5.A Survey on Stacked Intelligent Metasurfaces: Fundamentals, Recent Advances, and Challenges
Reconfigurable intelligent surfaces (RISs) enable programmable control of wireless propagation. Beyond environmental deployments, integrating metasurfaces at the antenna front end allows direct manipulation of the radiated electromagnetic field and enables wave-domain signal processing. In this context, stacked intelligent metasurfaces (SIMs) have recently been proposed as an advanced architecture in which multiple programmable metasurface layers interact through wave propagation, enabling richer and more flexible electromagnetic transformations than conventional single-layer designs. By leveraging cascaded wave-matter interactions at the transmitter or receiver front end, SIMs substantially expand the design space of programmable wireless systems. This survey provides a comprehensive overview of SIMs technologies from the electromagnetic...
arXiv Quantitative Finance
1.Stock Market Prediction Using Node Transformer Architecture Integrated with BERT Sentiment Analysis
Stock market prediction presents considerable challenges for investors, financial institutions, and policymakers operating in complex market environments characterized by noise, non-stationarity, and behavioral dynamics. Traditional forecasting methods often fail to capture the intricate patterns and cross-sectional dependencies inherent in financial markets. This paper presents an integrated framework combining a node transformer architecture with BERT-based sentiment analysis for stock price forecasting. The proposed model represents the stock market as a graph structure where individual stocks form nodes and edges capture relationships including sectoral affiliations, correlated price movements, and supply chain connections. A fine-tuned BERT model extracts sentiment from social media posts and combines it with quantitative market feat...
2.Extreme Value Analysis for Finite, Multivariate and Correlated Systems with Finance as an Example
Extreme values and the tail behavior of probability distributions are essential for quantifying and mitigating risk in complex systems of all kinds. In multivariate settings, accounting for correlations is crucial. Although extreme value analysis for infinite correlated systems remains an open challenge, we propose a practical framework for handling a large but finite number of correlated time series. We develop our approach for finance as a concrete example but emphasize its generality. We study the extremal behavior of high-frequency stock returns after rotating them into the eigenbasis of the correlation matrix. This separates and extracts various collective effects, including information on the correlated market as a whole and on correlated sectoral behavior from idiosyncratic features, while allowing us to use univariate tools of ext...
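The eigenbasis rotation step can be sketched with synthetic data: standardize the return series, rotate them by the eigenvectors of their correlation matrix, and the resulting components are sample-decorrelated, so each can be handed to univariate extreme-value tools such as block maxima. This is a generic sketch of the rotation device with fabricated Gaussian "returns"; the paper's high-frequency pipeline differs in detail:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 20_000, 5

# synthetic correlated "returns": independent shocks mixed through a random loading matrix
loadings = rng.standard_normal((N, N))
returns = rng.standard_normal((T, N)) @ loadings.T

# rotate standardized returns into the eigenbasis of their correlation matrix
C = np.corrcoef(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
standardized = (returns - returns.mean(axis=0)) / returns.std(axis=0)
rotated = standardized @ eigvecs          # sample-decorrelated components

# univariate EVT per component, e.g. block maxima (100 blocks of 200 observations)
block = 200
maxima = rotated.reshape(T // block, block, N).max(axis=1)
```

The component with the largest eigenvalue carries the market-wide collective motion, the next few carry sector-like structure, and the rest are closer to idiosyncratic noise, which is what lets the tail analysis separate those effects.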
3.Asymptotic Separability of Diffusion and Jump Components in High-Frequency CIR and CKLS Models
This paper develops a robust parametric framework for jump detection in discretely observed CKLS-type jump-diffusion processes with high-frequency asymptotics, based on the minimum density power divergence estimator (MDPDE). The methodology exploits the intrinsic asymptotic scale separation between diffusion increments, which decay at rate $\sqrt{\Delta_n}$, and jump increments, which remain of non-vanishing stochastic magnitude. Using robust MDPDE-based estimators of the drift and diffusion coefficients, we construct standardized residuals whose extremal behavior provides a principled basis for statistical discrimination between continuous and discontinuous components. We establish that, over diffusion intervals, the maximum of the normalized residuals converges to the Gumbel extreme-value distribution, yielding an explicit and asymptotically...
4.Range-Based Volatility Estimators for Monitoring Market Stress: Evidence from Local Food Price Data
Range-based volatility estimators are widely used in financial econometrics to quantify risk and market stress, yet their application to local commodity markets remains limited. This paper shows how open-high--low-close (OHLC) volatility estimators can be adapted to monitor localized market distress across diverse development contexts, including conflict-affected settings, climate-exposed regions, remote and thinly traded markets, and import- and logistics-constrained urban hubs. Using monthly food price data from the World Bank's Real-Time Prices dataset, several volatility measures -- including the Parkinson, Garman-Klass, Rogers-Satchell, and Yang-Zhang estimators -- are constructed and evaluated against independently documented disruption timelines. Across settings, elevated volatility aligns with episodes linked to insecurity and mar...
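The named estimators are standard closed-form functions of OHLC bars; the first two can be sketched directly (the toy bars below are hypothetical, not drawn from the World Bank dataset):

```python
import math

def parkinson_var(bars):
    """Parkinson variance: (1 / (4 ln 2)) * mean(ln(H/L)^2) over (O, H, L, C) bars."""
    return sum(math.log(h / l) ** 2 for _, h, l, _ in bars) / (4 * math.log(2) * len(bars))

def garman_klass_var(bars):
    """Garman-Klass variance: mean(0.5 * ln(H/L)^2 - (2 ln 2 - 1) * ln(C/O)^2)."""
    return sum(
        0.5 * math.log(h / l) ** 2 - (2 * math.log(2) - 1) * math.log(c / o) ** 2
        for o, h, l, c in bars
    ) / len(bars)

# hypothetical monthly food-price bars as (open, high, low, close) tuples
bars = [(100.0, 104.0, 99.0, 102.0),
        (102.0, 103.0, 98.0, 100.0),
        (100.0, 110.0, 100.0, 108.0)]

var_p = parkinson_var(bars)        # uses only the high-low range
var_gk = garman_klass_var(bars)    # adds an open-to-close correction term
```

Parkinson uses only the range, so a month with a wide high-low swing flags stress even when open and close look calm; Rogers-Satchell and Yang-Zhang extend the same pattern with drift robustness and overnight-gap terms.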
5.Coupled Supply and Demand Forecasting in Platform Accommodation Markets
Tourism demand forecasting is methodologically mature, but it typically treats accommodation supply as fixed or exogenous. In platform-mediated short-term rentals, supply is elastic, decision-driven, and co-evolves with demand through pricing, information design, and interventions. I reframe the core issue as endogenous stock-out censoring: realized booked nights satisfy B_{k,t} <= min(D_{k,t}, S_{k,t}), so booking models that ignore supply learn a regime-specific ceiling and become fragile under policy changes and supply shocks. This narrated review synthesizes work from tourism forecasting, revenue management, two-sided market economics, and Bayesian time-series methods; develops a three-part coupling framework (behavioral, informational, intervention); and illustrates the identification failure with a toy simulation. I conclude with...
arXiv – 6G & Networking
1.A Unified Multicarrier Waveform Framework for Next-generation Wireless Networks: Principles, Performance, and Challenges
Next-generation wireless networks require enhanced flexibility, efficiency, and reliability in physical layer waveform design to address the challenges posed by heterogeneous channel conditions and stringent quality-of-service demands. To this end, this paper proposes a unified multicarrier waveform framework that provides a systematic characterization and practical implementation guidelines to facilitate waveform selection for the sixth-generation (6G) mobile networks and beyond. We commence by examining the design principles of the state-of-the-art waveforms, which are categorized into one-dimensional modulation waveforms (e.g., orthogonal frequency division multiplexing (OFDM) and affine frequency division multiplexing (AFDM)) and two-dimensional modulation waveforms (e.g., orthogonal time frequency space (OTFS)). Their inherent resili...
2.U6G XL-MIMO Radiomap Prediction: Multi-Config Dataset and Beam Map Approach
Summary available at source link.
3.Channel Estimation for Reconfigurable Intelligent Surface Assisted Upper Mid-Band MIMO Systems
The upper mid-band (UMB) spectrum is a key enabler for 6G systems, yet reconfigurable intelligent surface (RIS)-assisted UMB communications face severe channel estimation challenges due to near-field propagation and transitional scattering, which induce strong spatial correlation and ill-conditioned least-squares (LS) formulations. To overcome this limitation, we propose a conditioning-aware channel estimation framework that transforms the inherently ill-conditioned high-dimensional problem into multiple well-conditioned subproblems via greedy column grouping. By systematically separating highly correlated RIS elements into distinct sub-blocks via piecewise RIS phase design, the proposed method directly improves Gram matrix conditioning and stabilizes piecewise LS reconstruction without relying on sparsity assumptions. Simulation results ...
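The conditioning argument above can be seen in miniature: when columns of the measurement matrix are highly correlated, the Gram matrix of the joint least-squares problem is nearly singular, while each decorrelated sub-block remains well conditioned. The construction below is a generic numerical illustration of that effect, not the paper's greedy column-grouping algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a matrix whose two column groups are nearly collinear with each other
base = rng.standard_normal((200, 4))
A = np.hstack([base, base + 1e-3 * rng.standard_normal((200, 4))])

# Joint problem: Gram matrix A^T A is severely ill-conditioned
full_cond = np.linalg.cond(A.T @ A)

# Grouping idea (simplified): solve each sub-block separately; each
# sub-Gram matrix is well conditioned on its own
cond_block1 = np.linalg.cond(A[:, :4].T @ A[:, :4])
cond_block2 = np.linalg.cond(A[:, 4:].T @ A[:, 4:])

assert max(cond_block1, cond_block2) < full_cond
```

Separating correlated RIS elements into distinct phase sub-blocks plays the same role as the column split here: it removes the near-collinearity that destabilizes the joint LS reconstruction.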
4.Rethinking Next-Generation Signal Waveform: Integration of Orthogonality and Non-Orthogonality
As 6G communications advance, the demand for new services and capabilities, as defined by the International Telecommunication Union (ITU), is increasing. A crucial aspect of 6G advancement lies in the development of signal waveforms that can meet these demands while maintaining compatibility with existing standards. This paper explores sustainable physical layer waveform options, focusing on a balanced approach that integrates non-orthogonality with orthogonality to achieve both backward compatibility and forward innovation. Specifically, we investigate two key signal formats: single-carrier orthogonal frequency division multiplexing (SC-OFDM) (1D, 2D) and single-carrier non-orthogonal frequency shaping (SC-NOFS) (1D, 2D). Both can use 1D frequency and 2D time-frequency precoding, offering enhanced frequency and time diversity, simplified pr...
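The 1D frequency precoding mentioned above is, in its best-known form, the DFT-spread OFDM used for single-carrier uplink transmission. The sketch below shows that transmit chain; the block size, localized subcarrier mapping, and function names are illustrative assumptions, not the paper's exact formats.

```python
import numpy as np

def dft_s_ofdm(symbols, n_fft=64):
    """DFT-spread OFDM sketch: DFT-precode M data symbols, map them onto
    M subcarriers (localized mapping assumed), then take the IFFT."""
    m = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(m)   # DFT precoding (1D frequency)
    grid = np.zeros(n_fft, dtype=complex)
    grid[:m] = spread                           # localized subcarrier mapping
    return np.fft.ifft(grid, n_fft)

# QPSK block of M=16 symbols on a 64-point grid
data = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7] * 4))
tx = dft_s_ofdm(data)
assert tx.shape == (64,)
```

The DFT stage restores the single-carrier-like envelope that plain OFDM lacks, which is why this family appears on the backward-compatible side of the paper's orthogonal/non-orthogonal trade-off.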
5.A Survey on Stacked Intelligent Metasurfaces: Fundamentals, Recent Advances, and Challenges
Reconfigurable intelligent surfaces (RISs) enable programmable control of wireless propagation. Beyond environmental deployments, integrating metasurfaces at the antenna front end allows direct manipulation of the radiated electromagnetic field and enables wave-domain signal processing. In this context, stacked intelligent metasurfaces (SIMs) have recently been proposed as an advanced architecture in which multiple programmable metasurface layers interact through wave propagation, enabling richer and more flexible electromagnetic transformations than conventional single-layer designs. By leveraging cascaded wave-matter interactions at the transmitter or receiver front end, SIMs substantially expand the design space of programmable wireless systems. This survey provides a comprehensive overview of SIMs technologies from the electromagnetic...
arXiv – Network Architecture (6G/Slicing)
1.Selfish Cooperation Towards Low-Altitude Economy: Integrated Multi-Service Deployment with Resilient Federated Reinforcement Learning
The low-altitude economy (LAE) is a rapidly emerging paradigm that builds a service-centric economic ecosystem through large-scale and sustainable uncrewed aerial vehicle (UAV)-enabled service provisioning, reflecting the transition of the 6G era from technological advancement toward commercial deployment. The significant market potential of LAE attracts an increasing number of service providers (SPs), resulting in intensified competition in service deployment. In this paper, we study a realistic LAE scenario in which multiple SPs dynamically deploy UAVs to deliver multiple services to user hotspots, aiming to jointly optimize communication and computation resource allocation. To resolve deployment competition among SPs, an authenticity-guaranteed auction mechanism is designed, and game-theoretic analysis is conducted to establish the sol...
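The abstract does not specify its authenticity-guaranteed mechanism, but the canonical example of an auction in which truthful bidding is a dominant strategy is the second-price (Vickrey) auction, sketched below purely as background. Bidder names and values are hypothetical.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: highest bidder wins but pays the
    second-highest bid, making truthful bidding a dominant strategy.

    bids: dict mapping bidder name to bid value.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Three hypothetical service providers bidding for a deployment slot
winner, price = vickrey_auction({"SP1": 10.0, "SP2": 7.5, "SP3": 9.0})
assert winner == "SP1" and price == 9.0
```

Because the winner's payment is independent of its own bid, misreporting a valuation cannot improve a provider's outcome, which is the incentive property an authenticity-guaranteed mechanism must deliver.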
2.Joint Visible Light and RF Backscatter Communications for Ambient IoT Network: Fundamentals, Applications, and Opportunities
The rapid growth of Internet of Things (IoT) devices in sixth-generation (6G) wireless networks raises significant generality and scalability challenges due to energy consumption, deployment complexity, and environmental impact. Ambient IoT (A-IoT), leveraging ambient energy harvesting (EH) for batteryless device operation, has emerged as a promising solution to address these challenges. Among various EH and communication techniques, visible light communication (VLC) integrated with ambient backscatter communication (AmBC) offers remarkable advantages, including energy neutrality, high reliability, and enhanced security. In this paper, we propose a joint VLC-AmBC architecture, emphasizing fundamental concepts, system designs, and practical implementations. We explore potential applications in environmental monitoring, healthcare, s...
3.Service Function Chain Routing in LEO Networks Using Shortest-Path Delay Statistical Stability
Low Earth orbit (LEO) satellite constellations have become a critical enabler for global coverage, utilizing numerous satellites orbiting Earth at high speeds. By decomposing complex network services into lightweight service functions, network function virtualization (NFV) transforms global network services into diverse service function chains (SFCs), coordinated by resource-constrained LEOs. However, the dynamic topology of satellite networks, marked by highly variable inter-satellite link delays, poses significant challenges for designing efficient routing strategies that ensure reliable and low-latency communication. Many existing routing methods suffer from poor scalability and degraded performance, limiting their practical implementation. To address these challenges, this paper proposes a novel SFC routing approach that leverages the...
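At its core, delay-based SFC routing selects paths that minimize accumulated inter-satellite link delay; the paper's statistical-stability criterion builds on such a shortest-path substrate. A minimal Dijkstra sketch over mean link delays follows; the toy topology and delay values are invented for illustration.

```python
import heapq

def shortest_delay_path(graph, src, dst):
    """Dijkstra over per-link delays.

    graph: {node: [(neighbor, delay_ms), ...]}. Returns (path, total_delay).
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]

# Toy constellation: the two-hop route A->C->B beats the direct A->B link
sats = {"A": [("B", 5.0), ("C", 2.0)], "C": [("B", 1.0)], "B": []}
path, delay = shortest_delay_path(sats, "A", "B")
assert path == ["A", "C", "B"] and delay == 3.0
```

The hard part the paper addresses is that these delays change as satellites move, so a practical scheme must reuse paths whose delay statistics stay stable rather than recompute on every topology update.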
4.Selecting Offline Reinforcement Learning Algorithms for Stochastic Network Control
Offline Reinforcement Learning (RL) is a promising approach for next-generation wireless networks, where online exploration is unsafe and large amounts of operational data can be reused across the model lifecycle. However, the behavior of offline RL algorithms under genuinely stochastic dynamics -- inherent to wireless systems due to fading, noise, and traffic mobility -- remains insufficiently understood. We address this gap by evaluating Bellman-based (Conservative Q-Learning), sequence-based (Decision Transformers), and hybrid (Critic-Guided Decision Transformers) offline RL methods in an open-access stochastic telecom environment (mobile-env). Our results show that Conservative Q-Learning consistently produces more robust policies across different sources of stochasticity, making it a reliable default choice in lifecycle-driven AI man...
5.ORION: Intent-Aware Orchestration in Open RAN for SLA-Driven Network Management
The disaggregation of the Radio Access Network (RAN) introduces unprecedented flexibility but significant operational complexity, necessitating automated management frameworks. However, current Open RAN (O-RAN) orchestration relies on fragmented manual policies, lacking end-to-end intent assurance from high-level requirements to low-level configurations. In this paper, we propose ORION, an O-RAN compliant intent orchestration framework that integrates Large Language Models (LLMs) via the Model Context Protocol (MCP) to translate natural language intents into enforceable network policies. ORION leverages a hierarchical agent architecture, combining an MCP-based Service Management and Orchestration (SMO) layer for semantic translation with a Non-Real-Time RIC rApp and Near-Real-Time RIC xApp for closed-loop enforcement. Extensive evaluation...