Daily Briefing – Apr 9 (96 Articles)
Babak's Daily Briefing
Thursday, April 9, 2026
Sources: 20 | Total Articles: 96
6G World
1.SoftBank’s Physical AI push gives AI-RAN a sharper purpose
SoftBank is starting to give AI-RAN a more concrete job description: not just running AI workloads near the network, but serving as the real-time infrastructure layer for robots and other physical systems. The company’s recent materials suggest it wants to move the AI-RAN conversation from telecom architecture to real-world machine action.
2.South Korea puts 6G inside its national AI push
South Korea has unveiled a three-year national roadmap aimed at becoming one of the world’s top three AI powers by 2028, with 6G commercialization positioned as part of that broader push.
3.b-com’s Open XG Hub targets one of telecom’s biggest gaps: turning experimentation into deployment
In an interview with Peter Pietrzyk, Managing Director of 6GWorld, Patrick Savell, Head of Connectivity at b-com, said platforms such as Open XG Hub are designed to help bridge one of the industry’s most persistent challenges: moving promising ideas from research environments into deployable network systems. The bigger point is that, as telecom becomes more software-driven and AI-native, the bottleneck is increasingly less about invention and more about validation, integration, and operational readiness.
4.ODC’s $45M raise signals a bigger shift in AI-RAN, from network optimization to edge intelligence
ORAN Development Company said it has closed a $45 million Series A backed by Booz Allen, Cisco Investments, Nokia, NVIDIA, AT&T, MTN and Telecom Italia to scale its U.S.-based Odyssey platform, which it positions as an AI-native RAN architecture combining communications, sensing and edge intelligence. The company said it plans to accelerate commercial deployment through 2026.
5.Lockheed Martin’s NetSense points to a bigger shift: 5G as drone-detection infrastructure
Lockheed Martin’s latest NetSense prototype suggests that commercial 5G infrastructure could play a growing role in drone detection, adding momentum to the broader move toward sensing-enabled wireless networks.
AI Agents
1.TraceSafe: A Systematic Assessment of LLM Guardrails on Multi-Step Tool-Calling Trajectories
As large language models (LLMs) evolve from static chatbots into autonomous agents, the primary vulnerability surface shifts from final outputs to intermediate execution traces. While safety guardrails are well-benchmarked for natural language responses, their efficacy remains largely unexplored within multi-step tool-use trajectories. To address this gap, we introduce TraceSafe-Bench, the first comprehensive benchmark specifically designed to assess mid-trajectory safety. It encompasses 12 risk categories, ranging from security threats (e.g., prompt injection, privacy leaks) to operational failures (e.g., hallucinations, interface inconsistencies), featuring over 1,000 unique execution instances. Our evaluation of 13 LLM-as-a-guard models and 7 specialized guardrails yields three critical findings: 1) Structural Bottleneck: Guardrail eff...
2.Strategic Persuasion with Trait-Conditioned Multi-Agent Systems for Iterative Legal Argumentation
Strategic interaction in adversarial domains such as law, diplomacy, and negotiation is mediated by language, yet most game-theoretic models abstract away the mechanisms of persuasion that operate through discourse. We present the Strategic Courtroom Framework, a multi-agent simulation environment in which prosecution and defense teams composed of trait-conditioned Large Language Model (LLM) agents engage in iterative, round-based legal argumentation. Agents are instantiated using nine interpretable traits organized into four archetypes, enabling systematic control over rhetorical style and strategic orientation. We evaluate the framework across 10 synthetic legal cases and 84 three-trait team configurations, totaling over 7,000 simulated trials using DeepSeek-R1 and Gemini 2.5 Pro. Our results show that heterogeneous teams with complem...
3.Cheap Talk, Empty Promise: Frontier LLMs easily break public promises for self-interest
Large language models are increasingly deployed as autonomous agents in multi-agent settings where they communicate intentions and take consequential actions with limited human oversight. A critical safety question is whether agents that publicly commit to actions break those promises when they can privately deviate, and what the consequences are for both themselves and the collective. We study deception as a deviation from a publicly announced action in one-shot normal-form games, classifying each deviation by its effect on individual payoff and collective welfare into four categories: win-win, selfish, altruistic, and sabotaging. By exhaustively enumerating announcement profiles across six canonical games, nine frontier models, and varying group sizes, we identify all opportunities for each deviation type and measure how often agents ex...
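The four-way taxonomy in the abstract depends only on the signs of two quantities: the deviation's effect on the deviator's payoff and on collective welfare. A minimal sketch of that classification (the paper's handling of zero-effect boundary cases is not stated here; the tie-breaking below is an assumption):

```python
def classify_deviation(payoff_delta, welfare_delta):
    """Classify a deviation from a publicly announced action by its effect
    on the deviator's own payoff and on collective welfare.

    Boundary handling (zero deltas) is an assumption, not the paper's spec.
    """
    if payoff_delta > 0 and welfare_delta > 0:
        return "win-win"
    if payoff_delta > 0:
        return "selfish"      # deviator gains at the group's (non-positive) expense
    if welfare_delta > 0:
        return "altruistic"   # deviator sacrifices payoff, group benefits
    return "sabotaging"       # neither the deviator nor the group gains
```

Exhaustively enumerating announcement profiles then amounts to running this check over every (announced action, actual action) pair in each game.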
4.Springdrift: An Auditable Persistent Runtime for LLM Agents with Case-Based Memory, Normative Safety, and Ambient Self-Perception
We present Springdrift, a persistent runtime for long-lived LLM agents. The system integrates an auditable execution substrate (append-only memory, supervised processes, git-backed recovery), a case-based reasoning memory layer with hybrid retrieval (evaluated against a dense cosine baseline), a deterministic normative calculus for safety gating with auditable axiom trails, and continuous ambient self-perception via a structured self-state representation (the sensorium) injected each cycle without tool calls. These properties support behaviours difficult to achieve in session-bounded systems: cross-session task continuity, cross-channel context maintenance, end-to-end forensic reconstruction of decisions, and self-diagnostic behaviour. We report on a single-instance deployment over 23 days (19 operating days), during which the agent diagn...
5.LOCARD: An Agentic Framework for Blockchain Forensics
Blockchain forensics inherently involves dynamic and iterative investigations, while many existing approaches primarily model it through static inference pipelines. We propose a paradigm shift towards Agentic Blockchain Forensics (ABF), modeling forensic investigation as a sequential decision-making process. To instantiate this paradigm, we introduce LOCARD, the first agentic framework for blockchain forensics. LOCARD operationalizes this perspective through a Tri-Core Cognitive Architecture that decouples strategic planning, operational execution, and evaluative validation. Unlike generic LLM-based agents, it incorporates a Structured Belief State mechanism to enforce forensic rigor and guide exploration under explicit state constraints. To demonstrate the efficacy of the ABF paradigm, we apply LOCARD to the inherently complex domain of ...
AI Computation & Hardware
1.LLM-Augmented Knowledge Base Construction For Root Cause Analysis
arXiv:2604.06171v1 Abstract: Communications networks now form the backbone of our digital world, with fast and reliable connectivity. However, even with appropriate redundancy and failover mechanisms, it is difficult to guarantee "five 9s" (99.999%) reliability, so in the event of an outage, rapid and accurate root cause analysis (RCA) becomes essential to restore service and prevent future disruptions. This study evaluates three Large Language Model (LLM) methodologies - Fine-Tuning, RAG, and a Hybrid approach - for constructing an RCA Knowledge Base from support tickets. We compare their performance using a comprehensive suite of lexical and semantic similarity metrics. Our experiments on a real industrial dataset demonstrate that the generated kno...
2.The Stepwise Informativeness Assumption: Why are Entropy Dynamics and Reasoning Correlated in LLMs?
arXiv:2604.06192v1 Abstract: Recent work uses entropy-based signals at multiple representation levels to study reasoning in large language models, but the field remains largely empirical. A central unresolved puzzle is why internal entropy dynamics, defined under the predictive distribution of a model, correlate so robustly with external correctness given by the ground-truth answer. In this paper, we argue that this correlation arises because autoregressive models reason correctly when they accumulate information about the true answer via answer-informative prefixes. We formalize this intuition via the Stepwise Informativeness Assumption (SIA), which states that reasoning prefixes accumulate answer-relevant information in expectation as generation progresses. We show that SIA naturally emerges from maximum-likelihood o...
3.Depression Detection at the Point of Care: Automated Analysis of Linguistic Signals from Routine Primary Care Encounters
arXiv:2604.06193v1 Abstract: Depression is underdiagnosed in primary care, yet timely identification remains critical. Recorded clinical encounters, increasingly common with digital scribing technologies, present an opportunity to detect depression from naturalistic dialogue. We investigated automated depression detection from 1,108 audio-recorded primary care encounters in the Establishing Focus study, with depression defined by PHQ-9 (n=253 depressed, n=855 non-depressed). We compared three supervised approaches, Sentence-BERT + Logistic Regression (LR), LIWC+LR, and ModernBERT, against a zero-shot GPT-OSS baseline. GPT-OSS achieved the strongest performance (AUPRC=0.510, AUROC=0.774), with LIWC+LR competitive among supervised models (AUPRC=0.500, AUROC=0.742). Combined dyadic transcripts outperformed single-speaker configurat...
4.Hallucination as output-boundary misclassification: a composite abstention architecture for language models
arXiv:2604.06195v1 Abstract: Large language models often produce unsupported claims. We frame this as a misclassification error at the output boundary, where internally generated completions are emitted as if they were grounded in evidence. This motivates a composite intervention that combines instruction-based refusal with a structural abstention gate. The gate computes a support deficit score, St, from three black-box signals: self-consistency (At), paraphrase stability (Pt), and citation coverage (Ct), and blocks output when St exceeds a threshold. In a controlled evaluation across 50 items, five epistemic regimes, and three models, neither mechanism alone was sufficient. Instruction-only prompting reduced hallucination sharply, but still showed over-cautious abstention on answerable items and residual hallucination...
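The abstract specifies the gate's inputs (At, Pt, Ct) and its thresholding behaviour, but not how the support deficit St is combined from them. A minimal sketch, assuming each signal lies in [0, 1] and an equal-weight linear deficit (both assumptions, not the paper's formula):

```python
def support_deficit(a_t, p_t, c_t, weights=(1/3, 1/3, 1/3)):
    """Combine self-consistency (At), paraphrase stability (Pt), and citation
    coverage (Ct) into a deficit score. Higher input values mean better
    supported; equal weighting is an assumption."""
    wa, wp, wc = weights
    return 1.0 - (wa * a_t + wp * p_t + wc * c_t)

def abstention_gate(a_t, p_t, c_t, threshold=0.5):
    """Block the output when the support deficit St exceeds the threshold."""
    s_t = support_deficit(a_t, p_t, c_t)
    return ("abstain", s_t) if s_t > threshold else ("emit", s_t)
```

The structural point survives the simplification: the gate operates purely on black-box signals, so it composes with any instruction-based refusal policy.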
5.Consistency-Guided Decoding with Proof-Driven Disambiguation for Three-Way Logical Question Answering
arXiv:2604.06196v1 Abstract: Three-way logical question answering (QA) assigns True/False/Unknown to a hypothesis H given a premise set S. While modern large language models (LLMs) can be accurate on isolated examples, we identify two recurring failure modes in 3-way logic QA: (i) negation inconsistency, where answers to H and ¬H violate the deterministic label mapping, and (ii) epistemic Unknown, where the model predicts Unknown due to uncertainty or instability even when S entails one side. We present CGD-PD, a lightweight test-time layer that (a) queries a single 3-way classifier on both H and a mechanically negated form of H, (b) projects the pair onto a negation-consistent decision when possible, and (c) invokes a proof-driven disambiguation step that uses targeted binary entailment probes ...
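The negation-consistency constraint is mechanical: the answers to H and ¬H must map onto each other (True↔False, Unknown↔Unknown). A sketch of the projection step, with the proof-driven disambiguation left as a pluggable callback since its internals are not specified in the excerpt (the resolution order for mixed pairs is an assumption):

```python
NEGATION_MAP = {"True": "False", "False": "True", "Unknown": "Unknown"}

def project_consistent(label_h, label_neg_h, disambiguate):
    """Return a label for H consistent with the classifier's answer for ¬H.
    `disambiguate` stands in for the proof-driven step (binary entailment
    probes) and is only invoked on genuinely contradictory pairs."""
    if NEGATION_MAP[label_h] == label_neg_h:
        return label_h                      # pair is already negation-consistent
    if label_h == "Unknown" and label_neg_h in ("True", "False"):
        return NEGATION_MAP[label_neg_h]    # trust the side that committed
    if label_neg_h == "Unknown" and label_h in ("True", "False"):
        return label_h
    return disambiguate(label_h, label_neg_h)  # e.g. True/True: probe entailments
```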
AI Machine Learning
1.A Benchmark of Classical and Deep Learning Models for Agricultural Commodity Price Forecasting on A Novel Bangladeshi Market Price Dataset
arXiv:2604.06227v1 Abstract: Accurate short-term forecasting of agricultural commodity prices is critical for food security planning and smallholder income stabilisation in developing economies, yet machine-learning-ready datasets for this purpose remain scarce in South Asia. This paper makes two contributions. First, we introduce AgriPriceBD, a benchmark dataset of 1,779 daily retail mid-prices for five Bangladeshi commodities - garlic, chickpea, green chilli, cucumber, and sweet pumpkin - spanning July 2020 to June 2025, extracted from government reports via an LLM-assisted digitisation pipeline. Second, we evaluate seven forecasting approaches spanning classical models - naïve persistence, SARIMA, and Prophet - and deep learning architectures - BiLSTM, Transformer, Time2Vec-enhanced Transformer, and Informer - wi...
2.Probabilistic Language Tries: A Unified Framework for Compression, Decision Policies, and Execution Reuse
arXiv:2604.06228v1 Abstract: We introduce probabilistic language tries (PLTs), a unified representation that makes explicit the prefix structure implicitly defined by any generative model over sequences. By assigning to each outgoing edge the conditional probability of the corresponding token or action, a PLT simultaneously serves as: (i) an optimal lossless compressor via frequency-weighted interval encoding, generalizing arithmetic coding to model-conditioned distributions; (ii) a policy representation for sequential decision problems including games, search, and robotic control; and (iii) a memoization index that lets repeated inference queries be answered by structured retrieval rather than full model execution. The central technical result is a prior-guided caching theorem: under a stationary generative distributio...
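The core data structure is concrete enough to sketch: a trie whose edges carry model-conditioned token probabilities, so a cached path answers a sequence-probability query by retrieval instead of rerunning the model. A minimal illustration (names and API are illustrative, not the paper's):

```python
class PLTNode:
    """A trie node whose outgoing edges carry conditional token probabilities."""
    def __init__(self):
        self.children = {}   # token -> (conditional probability, child node)

def insert_sequence(root, tokens, cond_probs):
    """Cache one generated sequence together with its per-token conditionals."""
    node = root
    for tok, p in zip(tokens, cond_probs):
        if tok not in node.children:
            node.children[tok] = (p, PLTNode())
        _, node = node.children[tok]

def sequence_prob(root, tokens):
    """Answer a repeated inference query by retrieval: multiply edge
    probabilities along the path. None means the path is not cached and
    would fall back to full model execution."""
    prob, node = 1.0, root
    for tok in tokens:
        if tok not in node.children:
            return None
        p, node = node.children[tok]
        prob *= p
    return prob
```

The same structure supports the policy reading: replacing tokens with actions and following the highest-probability edge at each node yields a greedy policy lookup.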
3.FLeX: Fourier-based Low-rank EXpansion for multilingual transfer
arXiv:2604.06253v1 Abstract: Cross-lingual code generation is critical in enterprise environments where multiple programming languages coexist. However, fine-tuning large language models (LLMs) individually for each language is computationally prohibitive. This paper investigates whether parameter-efficient fine-tuning methods and optimizer enhancements can improve cross-lingual transfer from Python to languages like Java. We fine-tune the Code Llama 7B model using low-rank adaptation (LoRA) to optimize a small subset of parameters and compare Adam and Sophia optimizers, while exploring a novel Fourier-based regularization technique. Our contributions include: (1) demonstrating that LoRA fine-tuning on a small, high-quality dataset (MBPP) can exceed the pass@1 performance of the more broadly fine-tuned Code Llama-Python-...
4.Spectral Edge Dynamics Reveal Functional Modes of Learning
arXiv:2604.06256v1 Abstract: Training dynamics during grokking concentrate along a small number of dominant update directions -- the spectral edge -- which reliably distinguishes grokking from non-grokking regimes. We show that standard mechanistic interpretability tools (head attribution, activation probing, sparse autoencoders) fail to capture these directions: their structure is not localized in parameter or feature space. Instead, each direction induces a structured function over the input domain, revealing low-dimensional functional modes invisible to representation-level analysis. For modular addition, all leading directions collapse to a single Fourier mode. For multiplication, the same collapse appears only in the discrete-log basis, yielding a 5.9x improvement in concentration. For subtraction, the edge spans a...
5.S^3: Stratified Scaling Search for Test-Time Scaling in Diffusion Language Models
arXiv:2604.06260v1 Abstract: Test-time scaling investigates whether a fixed diffusion language model (DLM) can generate better outputs when given more inference compute, without additional training. However, naive best-of-K sampling is fundamentally limited because it repeatedly draws from the same base diffusion distribution, whose high-probability regions are often misaligned with high-quality outputs. We propose S^3 (Stratified Scaling Search), a classical verifier-guided search method that improves generation by reallocating compute during the denoising process rather than only at the final output stage. At each denoising step, S^3 expands multiple candidate trajectories, evaluates them with a lightweight reference-free verifier, and selectively resamples promising candidates while preserving diversity within ...
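The search loop the abstract describes (expand, verify, selectively resample while preserving diversity) can be sketched generically. The denoiser and verifier below are placeholders, and the elite/random-survivor split is an assumed instantiation of "preserving diversity", not the paper's exact procedure:

```python
import random

def s3_search(init_states, denoise_step, verifier, n_steps, expand=2, keep=4):
    """Verifier-guided search over denoising trajectories: expand each
    candidate, score with a lightweight verifier, then keep a mix of elites
    and randomly sampled survivors to preserve diversity."""
    candidates = list(init_states)
    for t in range(n_steps):
        expanded = [denoise_step(c, t) for c in candidates for _ in range(expand)]
        scored = sorted(expanded, key=verifier, reverse=True)
        elites = scored[:keep // 2]                    # exploit the best candidates
        rest = scored[keep // 2:]
        survivors = random.sample(rest, min(keep - len(elites), len(rest)))
        candidates = elites + survivors                # keep some non-elite diversity
    return max(candidates, key=verifier)
```

Compared with best-of-K, the compute is spent mid-trajectory, so poor denoising paths are pruned before they consume the full step budget.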
AI Robotics
1.Occlusion Handling by Pushing for Enhanced Fruit Detection
arXiv:2604.06341v1 Abstract: In agricultural robotics, effective observation and localization of fruits present challenges due to occlusions caused by other parts of the tree, such as branches and leaves. These occlusions can result in false fruit localization or impede the robot from picking the fruit. The objective of this work is to push away branches that block the fruit's view to increase their visibility. Our setup consists of an RGB-D camera and a robot arm. First, we detect the occluded fruit in the RGB image and estimate its occluded part via a deep learning generative model in the depth space. The direction to push to clear the occlusions is determined using classic image processing techniques. We then introduce a 3D extension of the 2D Hough transform to detect straight line segments in the point cloud. This ...
2.Designing Privacy-Preserving Visual Perception for Robot Navigation Based on User Privacy Preferences
arXiv:2604.06382v1 Abstract: Visual navigation is a fundamental capability of mobile service robots, yet the onboard cameras required for such navigation can capture privacy-sensitive information and raise user privacy concerns. Existing approaches to privacy-preserving navigation-oriented visual perception have largely been driven by technical considerations, with limited grounding in user privacy preferences. In this work, we propose a user-centered approach to designing privacy-preserving visual perception for robot navigation. To investigate how user privacy preferences can inform such design, we conducted two user studies. The results show that users prefer privacy-preserving visual abstractions and capture-time low-resolution preservation mechanisms: their preferred RGB resolution depends both on the desired priva...
3.Uncertainty Estimation for Deep Reconstruction in Aquatic Disaster Scenarios with Autonomous Vehicles
arXiv:2604.06387v1 Abstract: Accurate reconstruction of environmental scalar fields from sparse onboard observations is essential for autonomous vehicles engaged in aquatic monitoring. Beyond point estimates, principled uncertainty quantification is critical for active sensing strategies such as Informative Path Planning, where epistemic uncertainty drives data collection decisions. This paper compares Gaussian Processes, Monte Carlo Dropout, Deep Ensembles, and Evidential Deep Learning for simultaneous scalar field reconstruction and uncertainty decomposition under three perceptual models representative of real sensor modalities. Results show that Evidential Deep Learning achieves the best reconstruction accuracy and uncertainty calibration across all sensor configurations at the lowest inference cost, while Gaussian P...
4.BiDexGrasp: Coordinated Bimanual Dexterous Grasps across Object Geometries and Sizes
arXiv:2604.06589v1 Abstract: Bimanual dexterous grasping is a fundamental and promising area in robotics, yet its progress is constrained by the lack of comprehensive datasets and powerful generation models. In this work, we propose BiDexGrasp, which consists of a large-scale bimanual dexterous grasp dataset and a novel generation model. For the dataset, we propose a novel bimanual grasp synthesis pipeline to efficiently annotate physically feasible data for dataset construction. This pipeline addresses the challenges of high-dimensional bimanual grasping through a two-stage synthesis strategy of efficient region-based grasp initialization and decoupled force-closure grasp optimization. Powered by this pipeline, we construct a large-scale bimanual dexterous grasp dataset, comprising 6351 diverse objects with sizes ranging from 30...
5.Train-Small Deploy-Large: Leveraging Diffusion-Based Multi-Robot Planning
arXiv:2604.06598v1 Abstract: Learning-based multi-robot path planning methods struggle to scale or generalize to changes, particularly variations in the number of robots during deployment. Most existing methods are trained on a fixed number of robots and may tolerate a reduced number during testing, but typically fail when the number increases. Additionally, training such methods for a larger number of agents can be both time consuming and computationally expensive. However, analytical methods can struggle to scale computationally or handle dynamic changes in the environment. In this work, we propose to leverage a diffusion model-based planner capable of handling a dynamically varying number of agents. Our approach is trained on a limited number of agents and generalizes effectively to larger numbers of agents during depl...
Financial AI
1.Sequential Audit Sampling with Statistical Guarantees
Financial statement auditing is conducted under a risk-based evidence approach to obtain reasonable assurance. In practice, auditors often perform additional sampling or related procedures when an initial sample does not provide a sufficient basis for a conclusion. Across jurisdictions, current standards and practice manuals acknowledge such extensions, while the statistical design of sequential audit procedures has not been fully explored. This study formulates audit sampling with additional, sequentially collected items as a sequential testing problem for a finite population under sampling without replacement. We define null and alternative hypotheses in terms of a tolerable deviation rate, specify stopping and decision rules, and formulate exact sequential boundary conditions in terms of finite-population error probabilities. For pract...
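The setting (sampling without replacement from a finite population, hypotheses phrased in terms of a tolerable deviation rate) implies hypergeometric tail calculations. A deliberately naive one-look sketch, which ignores the multiplicity of repeated looks that the paper's exact sequential boundaries are designed to control:

```python
from math import comb

def tail_prob_at_most(k, n, N, M):
    """Hypergeometric CDF: P(at most k errors in a sample of n drawn without
    replacement from a population of N items containing M errors)."""
    total = comb(N, n)
    return sum(comb(M, j) * comb(N - M, n - j) for j in range(k + 1)) / total

def audit_decision(k, n, N, tolerable_rate, alpha=0.05):
    """Naive fixed-boundary check after a batch: accept the population when
    observing k or fewer errors would be implausible if the true error count
    sat at the tolerable deviation rate. Repeated looks inflate the error
    rate, which is exactly what exact sequential boundaries correct for."""
    M_tol = int(tolerable_rate * N)
    if tail_prob_at_most(k, n, N, M_tol) <= alpha:
        return "accept"
    return "continue sampling"
```

For example, with N=1000 items, a 5% tolerable rate, and a clean first sample of n=100 (k=0), even this naive rule accepts, while k=5 errors leave the question open and trigger additional sampling.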
2.Generative Path-Law Jump-Diffusion: Sequential MMD-Gradient Flows and Generalisation Bounds in Marcus-Signature RKHS
This paper introduces a novel generative framework for synthesising forward-looking, càdlàg stochastic trajectories that are sequentially consistent with time-evolving path-law proxies, thereby incorporating anticipated structural breaks, regime shifts, and non-autonomous dynamics. By framing path synthesis as a sequential matching problem on restricted Skorokhod manifolds, we develop the Anticipatory Neural Jump-Diffusion (ANJD) flow, a generative mechanism that effectively inverts the time-extended Marcus-sense signature. Central to this approach is the Anticipatory Variance-Normalised Signature Geometry (AVNSG), a time-evolving precision operator that performs dynamic spectral whitening on the signature manifold to ensure contractivity during volatile regime shifts and discrete aleatoric shocks. We provide a rigorous theoretic...
3.Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions
This paper introduces Anticipatory Reinforcement Learning (ARL), a novel framework designed to bridge the gap between non-Markovian decision processes and classical reinforcement learning architectures, specifically under the constraint of a single observed trajectory. In environments characterised by jump-diffusions and structural breaks, traditional state-based methods often fail to capture the essential path-dependent geometry required for accurate foresight. We resolve this by lifting the state space into a signature-augmented manifold, where the history of the process is embedded as a dynamical coordinate. By utilising a self-consistent field approach, the agent maintains an anticipated proxy of the future path-law, allowing for a deterministic evaluation of expected returns. This transition from stochastic branching to a single-pass...
4.Transfer Learning for Loan Recovery Prediction under Distribution Shifts with Heterogeneous Feature Spaces
Accurate forecasting of recovery rates (RR) is central to credit risk management and regulatory capital determination. In many loan portfolios, however, RR modeling is constrained by data scarcity arising from infrequent default events. Transfer learning (TL) offers a promising avenue to mitigate this challenge by exploiting information from related but richer source domains, yet its effectiveness critically depends on the presence and strength of distributional shifts, and on potential heterogeneity between source and target feature spaces. This paper introduces FT-MDN-Transformer, a mixture-density tabular Transformer architecture specifically designed for TL in RR forecasting across heterogeneous feature sets. The model produces both loan-level point estimates and portfolio-level predictive distributions, thereby supporting a wide ra...
5.Financial Anomaly Detection for the Canadian Market
In this work we evaluate the performance of three classes of methods for detecting financial anomalies: topological data analysis (TDA), principal component analysis (PCA), and Neural Network-based approaches. We apply these methods to the TSX-60 data to identify major financial stress events in the Canadian stock market. We show that neural network-based methods (such as GlocalKD and One-Shot GIN(E)) and TDA methods achieve the strongest performance. The effectiveness of TDA in detecting financial anomalies suggests that global topological properties are meaningful in distinguishing financial stress events.
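As a point of reference for the PCA class of methods, a reconstruction-error anomaly score (project observations onto the top principal components, score by what the low-rank model fails to explain) can be sketched as follows; this is a generic baseline, not the paper's exact pipeline:

```python
import numpy as np

def pca_anomaly_scores(X, n_components=2):
    """Score each observation (e.g. a day's return vector) by how badly a
    low-rank PCA model reconstructs it: large residuals flag observations
    that fall off the dominant market structure."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # top principal directions
    residual = Xc - Xc @ V @ V.T     # what the low-rank model cannot explain
    return np.linalg.norm(residual, axis=1)
```

During stress events, cross-asset correlations shift, so returns stop lying near the usual principal subspace and the residual norm spikes.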
GSMA Newsroom
1.From Rich Text to Video: RCS Universal Profile 4.0 has arrived
Summary available at source link.
2.Mobile Money accounted for $2 trillion in transactions in 2025, doubling since 2021 as active accounts continue to grow
Summary available at source link.
3.Strengthening the Global Fight Against Fraud and Scams – Takeaways from the Global Fraud Summit in Vienna
Summary available at source link.
4.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
5.From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
Generative AI (arXiv)
1.Chatbot-Based Assessment of Code Understanding in Automated Programming Assessment Systems
Large Language Models (LLMs) challenge conventional automated programming assessment because students can now produce functionally correct code without demonstrating corresponding understanding. This paper makes two contributions. First, it reports a saturation-based scoping review of conversational assessment approaches in programming education. The review identifies three dominant architectural families: rule-based or template-driven systems, LLM-based systems, and hybrid systems. Across the literature, conversational agents appear promising for scalable feedback and deeper probing of code understanding, but important limitations remain around hallucinations, over-reliance, privacy, integrity, and deployment constraints. Second, the paper synthesizes these findings into a Hybrid Socratic Framework for integrating conversational verifica...
2.A Systematic Study of Retrieval Pipeline Design for Retrieval-Augmented Medical Question Answering
Large language models (LLMs) have demonstrated strong capabilities in medical question answering; however, purely parametric models often suffer from knowledge gaps and limited factual grounding. Retrieval-augmented generation (RAG) addresses this limitation by integrating external knowledge retrieval into the reasoning process. Despite increasing interest in RAG-based medical systems, the impact of individual retrieval components on performance remains insufficiently understood. This study presents a systematic evaluation of retrieval-augmented medical question answering using the MedQA USMLE benchmark and a structured textbook-based knowledge corpus. We analyze the interaction between language models, embedding models, retrieval strategies, query reformulation, and cross-encoder reranking within a unified experimental framework comprisi...
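The retrieve-then-rerank pipeline being ablated can be sketched with stand-ins: bag-of-words cosine in place of a dense embedding retriever, and a pluggable scoring function in place of a trained cross-encoder. The real system would swap in learned components, so this is a structural sketch only:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=3):
    """First stage: rank passages by bag-of-words cosine similarity
    (a stand-in for a dense embedding retriever)."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(p.lower().split())), p) for p in corpus]
    return [p for s, p in sorted(scored, key=lambda x: x[0], reverse=True)[:k]]

def rerank(query, passages, cross_scorer):
    """Second stage: reorder the shortlist with a cross-encoder-style scorer
    that sees query and passage together."""
    return sorted(passages, key=lambda p: cross_scorer(query, p), reverse=True)
```

Query reformulation slots in before `retrieve`, which is why the components interact: a reformulated query changes the shortlist the reranker ever gets to see.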
3.TraceSafe: A Systematic Assessment of LLM Guardrails on Multi-Step Tool-Calling Trajectories
As large language models (LLMs) evolve from static chatbots into autonomous agents, the primary vulnerability surface shifts from final outputs to intermediate execution traces. While safety guardrails are well-benchmarked for natural language responses, their efficacy remains largely unexplored within multi-step tool-use trajectories. To address this gap, we introduce TraceSafe-Bench, the first comprehensive benchmark specifically designed to assess mid-trajectory safety. It encompasses 12 risk categories, ranging from security threats (e.g., prompt injection, privacy leaks) to operational failures (e.g., hallucinations, interface inconsistencies), featuring over 1,000 unique execution instances. Our evaluation of 13 LLM-as-a-guard models and 7 specialized guardrails yields three critical findings: 1) Structural Bottleneck: Guardrail eff...
4.Reason in Chains, Learn in Trees: Self-Rectification and Grafting for Multi-turn Agent Policy Optimization
Reinforcement learning for Large Language Model agents is often hindered by sparse rewards in multi-step reasoning tasks. Existing approaches like Group Relative Policy Optimization treat sampled trajectories as independent chains, assigning uniform credit to all steps in each chain and ignoring the existence of critical steps that may disproportionately impact reasoning outcomes. In this paper, we propose T-STAR (Tree-structured Self-Taught Agent Rectification), a framework that recovers the latent correlated reward structure across seemingly independent trajectories. Specifically, we consolidate trajectories into a unified Cognitive Tree by identifying and merging functionally similar steps/nodes. It enables an Introspective Valuation mechanism that back-propagates trajectory-level rewards through the tree to obtain a new notion of varianc...
5.Multi-Turn Reasoning LLMs for Task Offloading in Mobile Edge Computing
Emerging computation-intensive applications impose stringent latency requirements on resource-constrained mobile devices. Mobile Edge Computing (MEC) addresses this challenge through task offloading. However, designing effective policies remains difficult due to dynamic task arrivals, time-varying channels, and the spatio-temporal coupling of server queues. Conventional heuristics lack adaptability, while Deep Reinforcement Learning (DRL) suffers from limited generalization and architectural rigidity, requiring retraining when network topology changes. Although Large Language Models (LLMs) offer semantic reasoning capabilities, standard Supervised Fine-Tuning (SFT) yields myopic policies that greedily minimize immediate latency without accounting for long-term system evolution. To address these limitations, we propose COMLLM, a generative...
Hugging Face Daily Papers
1.Fast Spatial Memory with Elastic Test-Time Training
Large Chunk Test-Time Training (LaCT) has shown strong performance on long-context 3D reconstruction, but its fully plastic inference-time updates remain vulnerable to catastrophic forgetting and overfitting. As a result, LaCT is typically instantiated with a single large chunk spanning the full input sequence, falling short of the broader goal of handling arbitrarily long sequences in a single pass. We propose Elastic Test-Time Training, inspired by elastic weight consolidation, which stabilizes LaCT fast-weight updates with a Fisher-weighted elastic prior around a maintained anchor state. The anchor evolves as an exponential moving average of past fast weights to balance stability and plasticity. Based on this updated architecture, we introduce Fast Spatial Memory (FSM), an efficient and scalable model for 4D reconstruction that learns sp...
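The elastic prior and EMA anchor are standard elastic-weight-consolidation machinery, so the scalar-parameter sketch below is reasonably faithful in form, though the learning rate, decay, and penalty strength λ are illustrative values, not the paper's:

```python
def elastic_update(w, grad_task, fisher, anchor, lr=0.01, lam=1.0):
    """One fast-weight update: the task gradient plus a Fisher-weighted
    elastic pull toward the anchor state (EWC-style quadratic penalty)."""
    grad = grad_task + lam * fisher * (w - anchor)
    return w - lr * grad

def update_anchor(anchor, w, decay=0.99):
    """The anchor tracks an exponential moving average of past fast weights,
    trading stability (high decay) against plasticity (low decay)."""
    return decay * anchor + (1.0 - decay) * w
```

Parameters with high Fisher weight (important to earlier chunks) are held near the anchor, while low-Fisher parameters remain free to adapt to the current chunk.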
2.Measurement of Generative AI Workload Power Profiles for Whole-Facility Data Center Infrastructure Planning
The rapid growth of generative artificial intelligence (AI) has introduced unprecedented computational demands, driving significant increases in the energy footprint of data centers. However, existing power consumption data is largely proprietary and reported at varying resolutions, creating challenges for estimating whole-facility energy use and planning infrastructure. In this work, we present a methodology that bridges this gap by linking high-resolution workload power measurements to whole-facility energy demand. Using NLR's high-performance computing data center equipped with NVIDIA H100 GPUs, we measure power consumption of AI workloads at 0.1-second resolution for AI training, fine-tuning and inference jobs. Workloads are characterized using MLCommons benchmarks for model training and fine-tuning, and vLLM benchmarks for inference,...
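The core measurement arithmetic can be shown in miniature (a generic sketch, not the study's pipeline): with power samples every 0.1 s, workload energy is the discrete integral of power over time.

```python
# Minimal sketch of turning high-resolution power samples into energy: with
# samples P_i (watts) taken every dt = 0.1 s, energy is the discrete integral
# E = sum(P_i) * dt joules, convertible to kWh for facility planning.
def energy_kwh(power_watts, dt=0.1):
    joules = sum(power_watts) * dt
    return joules / 3.6e6              # 1 kWh = 3.6e6 J

samples = [700.0] * 36000              # a constant 700 W draw for one hour
print(round(energy_kwh(samples), 3))   # 0.7 kWh
```

Linking many such per-workload integrals to facility-level demand is the gap the methodology addresses.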
3.TC-AE: Unlocking Token Capacity for Deep Compression Autoencoders
We propose TC-AE, a ViT-based architecture for deep compression autoencoders. Existing methods commonly increase the channel number of latent representations to maintain reconstruction quality under high compression ratios. However, this strategy often leads to latent representation collapse, which degrades generative performance. Instead of relying on increasingly complex architectures or multi-stage training schemes, TC-AE addresses this challenge from the perspective of the token space, the key bridge between pixels and image latents, through two complementary innovations. First, we study token number scaling by adjusting the patch size in ViT under a fixed latent budget, and identify aggressive token-to-latent compression as the key factor that limits effective scaling. To address this issue, we decompose token-to-latent compression...
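The token-count arithmetic behind the patch-size knob is simple to make concrete (standard ViT tokenization, not TC-AE-specific code):

```python
# In a ViT, an H x W image with patch size p yields (H/p) * (W/p) tokens, so
# halving the patch size quadruples the token count while the latent budget
# (total latent dimensions) can be held fixed.
def num_tokens(h, w, patch):
    return (h // patch) * (w // patch)

for p in (32, 16, 8):
    print(p, num_tokens(256, 256, p))  # 64, 256, and 1024 tokens respectively
```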
4.Syntax Is Easy, Semantics Is Hard: Evaluating LLMs for LTL Translation
Propositional Linear Temporal Logic (LTL) is a popular formalism for specifying desirable requirements and security and privacy policies for software, networks, and systems. Yet expressing such requirements and policies in LTL remains challenging because of its intricate semantics. Since many security and privacy analysis tools require LTL formulas as input, this difficulty places them out of reach for many developers and analysts. Large Language Models (LLMs) could broaden access to such tools by translating natural language fragments into LTL formulas. This paper evaluates that premise by assessing how effectively several representative LLMs translate assertive English sentences into LTL formulas. Using both human-generated and synthetic ground-truth data, we evaluate effectiveness along syntactic and semantic dimensions. The results re...
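The syntax/semantics split can be illustrated with a toy evaluator (our own tuple encoding and finite-trace semantics, not the paper's tooling): two formulas can both be syntactically well-formed yet mean different things, which is exactly what syntactic checks miss.

```python
# Toy illustration of "syntax easy, semantics hard": check a tiny LTL fragment
# (atoms, !, &, X, F, G) over finite traces, treating the trace end as the
# horizon (G holds vacuously past the end, F and atoms fail).
def holds(f, trace, i=0):
    if i >= len(trace):
        return f[0] == "G"
    op = f[0]
    if op == "atom": return f[1] in trace[i]
    if op == "!":    return not holds(f[1], trace, i)
    if op == "&":    return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "X":    return holds(f[1], trace, i + 1)
    if op == "F":    return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":    return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

# "F p & F q" and "F (p & q)" are both syntactically valid but not equivalent:
fp_and_fq = ("&", ("F", ("atom", "p")), ("F", ("atom", "q")))
f_p_and_q = ("F", ("&", ("atom", "p"), ("atom", "q")))
trace = [{"p"}, {"q"}]                 # p holds, then q holds, never together
print(holds(fp_and_fq, trace), holds(f_p_and_q, trace))  # True False
```

An LLM that confuses these two translations would pass any syntax check while failing the semantic evaluation the paper performs.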
5.Graph Neural ODE Digital Twins for Control-Oriented Reactor Thermal-Hydraulic Forecasting Under Partial Observability
Real-time supervisory control of advanced reactors requires accurate forecasting of plant-wide thermal-hydraulic states, including locations where physical sensors are unavailable. Meeting this need calls for surrogate models that combine predictive fidelity, millisecond-scale inference, and robustness to partial observability. In this work, we present a physics-informed message-passing Graph Neural Network coupled with a Neural Ordinary Differential Equation (GNN-ODE) to address all three requirements simultaneously. We represent the whole system as a directed sensor graph whose edges encode hydraulic connectivity through flow/heat transfer-aware message passing, and we advance the latent dynamics in continuous time via a controlled Neural ODE. A topology-guided missing-node initializer reconstructs uninstrumented states at rollout sta...
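The basic rollout loop of such a model can be sketched in a few lines (a generic message-passing ODE with random weights and a plain Euler integrator, standing in for the paper's trained architecture):

```python
import numpy as np

# Minimal sketch (hypothetical parameters): latent states on a directed sensor
# graph evolve via dh/dt = f(h, messages), with messages aggregated along the
# edges encoding flow connectivity; integrated here with a plain Euler step.
def rollout(h, edges, W_msg, W_self, dt=0.01, steps=100):
    for _ in range(steps):
        msg = np.zeros_like(h)
        for src, dst in edges:             # flow-aware message passing
            msg[dst] += W_msg @ h[src]
        dh = np.tanh(h @ W_self.T + msg)   # latent dynamics f(h, msg)
        h = h + dt * dh                    # Euler step of the neural ODE
    return h

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 3))            # 4 sensor nodes, 3 latent dims
edges = [(0, 1), (1, 2), (2, 3)]           # hydraulic connectivity as a chain
W_msg = 0.1 * rng.standard_normal((3, 3))
W_self = 0.1 * rng.standard_normal((3, 3))
print(rollout(h, edges, W_msg, W_self).shape)  # (4, 3)
```

Because the dynamics are continuous-time, the same trained model can be queried at arbitrary horizons by changing `dt` and `steps`, which is what makes millisecond-scale forecasting rollouts practical.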
IEEE Xplore AI
1.AI Models Map the Colorado River’s Hard Choices
The Colorado River begins as snow. Every spring, the mountain snowpack of the Rockies melts into streams that feed into reservoirs that supply 40 million people across seven U.S. states. The system has worked, more or less, for a century. That century is over. By some measures, 2026 is shaping up to be the worst year the river has seen since records began. Flows are down 20 percent from 2000 levels. Lake Powell, the reservoir straddling Utah and Arizona, may drop below the threshold for generating hydropower before the year is out. The negotiations between the seven states over how to share what’s left have collapsed twice, and the U.S. federal government is threatening to impose its own plan. While the states argue and the river shrinks, a growing set of machine learning tools is being deployed across the basin. Federal water managers...
2.Decentralized Training Can Help Solve AI’s Energy Woes
Artificial intelligence harbors an enormous energy appetite. Such constant cravings are evident in the hefty carbon footprint of the data centers behind the AI boom and the steady increase over time of carbon emissions from training frontier AI models. No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization. Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where ...
3.Why AI Systems Fail Quietly
In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong. Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do. This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends...
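One common way to surface quiet drift (a generic sketch, not any particular platform's monitoring stack) is to compare a short recent window of a quality metric against a longer baseline window, catching degradation that binary health checks never see:

```python
from collections import deque

# Illustrative sketch: a service whose health checks pass can still drift in
# output quality. Comparing a short recent window of a quality score against
# a long rolling baseline surfaces drift that "green dashboard" checks miss.
def drift_alarm(scores, baseline_n=50, recent_n=10, tolerance=0.1):
    baseline, recent = deque(maxlen=baseline_n), deque(maxlen=recent_n)
    alarms = []
    for i, s in enumerate(scores):
        recent.append(s)
        if len(baseline) == baseline_n and len(recent) == recent_n:
            gap = sum(baseline) / baseline_n - sum(recent) / recent_n
            if gap > tolerance:            # quality fell below the baseline
                alarms.append(i)
        baseline.append(s)
    return alarms

healthy = [0.9] * 60
drifting = [0.9 - 0.01 * i for i in range(40)]   # slow, quiet degradation
print(drift_alarm(healthy + drifting)[:1])        # first index where drift trips
```

Every individual sample looks plausible on its own; only the windowed comparison reveals that the system has moved away from its design behavior.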
4.AI Is Insatiable
While browsing our website a few weeks ago, I stumbled upon “How and When the Memory Chip Shortage Will End” by Senior Editor Samuel K. Moore. His analysis focuses on the current DRAM shortage caused by AI hyperscalers’ ravenous appetite for memory, a major constraint on the speed at which large language models run. Moore provides a clear explanation of the shortage, particularly for high bandwidth memory (HBM). As we and the rest of the tech media have documented, AI is a resource hog. AI electricity consumption could account for up to 12 percent of all U.S. power by 2028. Generative AI queries consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. Water consumption for cooling AI data centers is predicted to double or even quadruple by 2028 compared to 2023. But Moore’s reporting shines a light on an obscure ...
5.The AI Data Centers That Fit on a Truck
A traditional data center protects the expensive hardware inside it with a “shell” constructed from steel and concrete. Constructing a data center’s shell is inexpensive compared to the cost of the hardware and infrastructure inside it, but it’s not trivial. It takes time for engineers to consider potential sites, apply for permits, and coordinate with construction contractors. That’s a problem for those looking to quickly deploy AI hardware, which has led companies like Duos Edge AI and LG CNS to respond with a more modular approach. They use pre-fabricated, self-contained boxes that can be deployed in months instead of years. The boxes can operate alone or in tandem with others, providing the option to add more if required. “I just came back from Nvidia’s GTC, and a lot of [companies] are sitting on their deployment because their data c...
MIT Sloan Management
1.Rethink Responsibility in the Age of AI
Mark Airs/Ikon Images Early one morning in 2018, a self-driving Uber vehicle fatally struck a pedestrian in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a […]
2.Gain Consumer Insight With Generative AI
Stuart Kinlough/Ikon Images Marketing leaders often face a dilemma: Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus? Drawing on recent research, […]
3.Disintegrating the Org Chart: ServiceNow’s Jacqui Canney
In this episode of the Me, Myself, and AI podcast, Sam Ransbotham is joined by Jacqui Canney, chief people and AI enablement officer at ServiceNow. Jacqui outlines how the software company has embedded AI agents into processes like employee onboarding to automate tasks, personalize experiences, and free up people’s time to focus on higher-value work. […]
4.How to Reap Compound Benefits From Generative AI
Carolyn Geason-Beissel/MIT SMR | Minneapolis Institute of Art In domain after domain, AI has compressed work that used to be expensive — generating drafts, code, prototypes, and analyses. The marginal cost of a first attempt has dropped sharply. What remains expensive is what happens after the output arrives: evaluating what gets generated. That involves separating […]
5.Job Pivots in the Age of AI: Lessons From Mike Mulligan and His Steam Shovel
Matt Harrison Clough As organizations like Amazon, PwC, and Microsoft have announced AI-fueled layoffs, it’s no surprise that half of Americans have expressed concern about AI’s larger potential impact on their jobs. Of course, companies can attribute layoffs to AI efficiencies while trimming workforces for various reasons. Yet there is no question that artificial intelligence […]
NBER Working Papers
1.Can Personal Access to Medical Expertise Overcome Vaccine Hesitancy? -- by D. Mark Anderson, Ron Diris, Raymond Montizaan, Daniel I. Rees
Using data on applicants to Dutch medical schools and their older relatives (i.e., parents, aunts, and uncles ages 60+), we estimate the effect of personal access to medical expertise on vaccine hesitancy. Leveraging variation in lottery outcomes that determine admission to medical schools, we find that having a physician in the family increases the likelihood of complying with government recommendations that anyone over the age of 59 receive a second booster dose of a COVID-19 vaccine. Our estimated effects are strongest for having a female physician in the family, suggesting important gender-based differences in how medical expertise is communicated.
2.Why Do Americans No Longer Work So Much More Than Non-Americans? -- by Serdar Birinci, Loukas Karabarbounis, Kurt See
In the 1990s, Americans worked much more than non-Americans. Since then, about half of the gap in hours worked has closed. To evaluate the convergence of working hours, we develop a tractable model of labor supply enriched with multiple sources of heterogeneity across individuals, an extensive margin of participation, multi-member households, and an elaborate system of taxes and benefits upon non-employment. Using detailed measurements from micro-level and aggregate datasets, we identify model parameters and sources of heterogeneity across individuals for various countries. We run a horse race between competing explanations and find that U.S. hours per person declined after 2000 owing mainly to the rise of government health benefits provided to the non-employed. Non-U.S. countries have generous benefits for the non-employed, but th...
3.AI Patents in the United States and China: Measurement, Organization, and Knowledge Flows -- by Hanming Fang, Xian Gu, Hanyin Yan, Wu Zhu
We develop a high-precision classifier to measure artificial intelligence (AI) patents by fine-tuning PatentSBERTa on manually labeled data from the USPTO’s AI Patent Dataset. Our classifier substantially improves the existing USPTO approach, achieving 97.0% precision, 91.3% recall, and a 94.0% F1 score, and it generalizes well to Chinese patents based on citation and lexical validation. Applying it to granted U.S. patents (1976–2023) and Chinese patents (2010–2023), we document rapid growth in AI patenting in both countries and broad convergence in AI patenting intensity and subfield composition, even as China surpasses the United States in recent annual patent counts. The organization of AI innovation nevertheless differs sharply: U.S. AI patenting is concentrated among large private incumbents and established hubs, whereas Chinese AI p...
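The reported metrics are internally consistent, which is easy to check: F1 is the harmonic mean of precision and recall.

```python
# Quick consistency check of the reported classifier metrics:
# F1 = 2 * P * R / (P + R), the harmonic mean of precision and recall.
precision, recall = 0.970, 0.913
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.941 — matches the reported 94.0% up to rounding of the inputs
```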
4.Tariffs, Global Value Chains, and the Incidence of Protection: Evidence from US Automobiles -- by Luke Heeney, Christopher R. Knittel, Jasdeep Mandia
In many modern industries, firms compete in differentiated-product markets while relying on complex global value chains for intermediate inputs. In such settings, trade policies such as tariffs on vehicles and parts operate not only through consumer substitution and firm pricing, but also through firms’ cost structures and sourcing decisions. We develop a structural model of the U.S. automobile market that integrates random-coefficients demand, multiproduct firm pricing, and a flexible supply-side framework in which shocks to the cost of imported parts transmit imperfectly into manufacturers’ marginal costs. The model is disciplined by novel model-level data on imported-parts exposure and exploits exchange-rate variation to identify cost pass-through. Our counterfactual analysis quantifies the effects of alternative tariff policies on pri...
5.Learning How To Borrow in a Fintech World: Consumer Behavior When Search Costs Are (Near) Zero -- by Alex Günsberg, Camelia M. Kuhnen
Online loan marketplaces are changing consumer lending. Here we investigate consumer behavior in these markets with near-zero search costs. Using administrative data on 730,000 applications, 750,000 offers, and 200,000 individuals, together with credit registry records, we document four facts. First, substantial within-applicant dispersion in offered terms makes search highly valuable. Second, marketplace nudges mitigate choice complexity. Third, applicants search significantly, applying repeatedly, asking for different terms, and rejecting offers, in ways consistent with their creditworthiness. Fourth, dynamic adverse selection constrains search, as lenders penalize repeat applicants. Our findings highlight trade-offs between informational gains from search, and reputational and cognitive costs.
NY Fed - Liberty Street
1.A Closer Look at Emerging Market Resilience During Recent Shocks
A succession of shocks to the global economy in recent years has focused attention on the improved economic and financial resilience of emerging market economies. For some of these economies, this assessment is well-founded and highlights the fruits of deep, structural economic reforms since the 1990s. However, for a much larger universe of countries, the ability to weather shocks is still mixed and many remain vulnerable. In this post, we explore the divide between the two sets of countries and focus on the effects of recent economic shocks, including the ongoing conflict in the Middle East.
2.The Fed Has Two Tools to Influence Money Market Conditions
The Federal Reserve’s 2022-23 tightening cycle involved the use of two monetary policy tools: changes in administrative rates and changes in the size of its balance sheet. This post highlights the results of a recent Staff Report that explores how these tools affect money market conditions. Using confidential trade-level data, we find that both tools have significant effects on the pricing of funds sourced through repo. These results suggest that the Fed can manage how financing conditions are affected even as it influences economic conditions. For example, the Fed can lower its administrative rates to loosen economic conditions, while shrinking its balance sheet to maintain financing conditions in the money markets.
3.Treasury Market Liquidity Since April 2025
In this post, we examine the evolution of U.S. Treasury market liquidity over the past year, which has witnessed myriad economic and political developments. Liquidity worsened markedly one year ago as volatility increased following the announcement of higher-than-expected tariffs. Liquidity quickly improved when the tariff increases were partially rolled back and then remained fairly stable thereafter (through the end of our sample in February 2026), including after the recent Supreme Court decision striking down the emergency tariffs and the subsequent announcement of new tariffs.
4.Behind the ATM: Exploring the Structure of Bank Holding Companies
Many modern banking organizations are highly complex. A “bank” is often a larger structure made up of distinct entities, each subject to different regulatory, supervisory, and reporting requirements. For researchers and policymakers, understanding how these institutions are structured and how they have evolved over time is essential. In this post, we illustrate what a modern financial holding company looks like in practice, document how banks’ organizational structures have changed over time, and explain why these details matter for conducting accurate analyses of the financial system.
5.Sports Betting Is Everywhere, Especially on Credit Reports
Since 2018, more than thirty states have legalized mobile sports betting, leading to more than a half trillion dollars in wagers. In our recent Staff Report, we examine how legalized sports betting affects household financial health by comparing betting activity and consumer credit outcomes between states that legalized and those that have not. We find that legalization increases spending at online sportsbooks roughly tenfold, but betting does not stop at state boundaries. Nearby areas where betting is not legal still experience roughly 15 percent of the increase seen in counties where it is legal. At the same time, consumer financial health suffers. Our analysis finds rising delinquencies in participating states,...
Project Syndicate
1.Trump’s Next Coup Attempt
While turning a foreign war into a domestic dictatorship is complicated and difficult, Donald Trump could try to do so in one of five ways. But even the most likely scenario—using an act of terror as a pretext to delay or discredit the midterms—will not work if Americans are vigilant and refuse to obey in advance.
2.Iran’s Strategic Victory
The US-Israeli war will be remembered as yet another episode of powerful countries falling into the trap of asymmetric warfare, with the ceasefire ratifying what any competent military planner should have anticipated. But while the US might be able to absorb the shock of another defeat, Israel is no superpower.
3.The Problem of Assessing Democratic Decline
Democracy is in retreat across much of the world, erasing nearly five decades of hard-won gains. Understanding this trend and how to reverse it requires better ways to evaluate democratic governance and an honest reckoning with how major powers undermine rights and norms beyond their own borders.
4.How Bad Will the Economy Be?
Economic performance in the United States has proved unexpectedly resilient in recent years, withstanding even Donald Trump’s return to the White House. But, from a failed war of choice against Iran to reckless deregulation, fundamental risks are piling up, and uncertainty has become so extreme that economic forecasters cannot even assign probabilities to them. While that rules out confident prediction, the global connectedness of these risks means that what comes next is certain to reverberate worldwide.
5.The Big Picture
Summary available at source link.
RCR Wireless
1.Cool heads, big ideas – AT&T ties private 5G to AI grid
AT&T positions private 5G as a necessary enterprise capability rather than a breakout revenue engine, tying its future instead to edge AI, spectrum pragmatism, and a broader “AI grid” vision with partners like Cisco and Nvidia. In sum – what…
2.The quantum race in telecom is heating up
As RCR expands daily coverage to include quantum, here is a quick update on the latest developments. The hype around quantum in telecommunications is becoming impossible to ignore, as trials and pilots continue to make headlines at a steady clip.…
3.Industrial AI adoption is surging despite growing security and infrastructure concerns
A new report from Cisco reveals that most industrial organizations have moved AI into live operations. In sum – what we know: Cisco and Sapio Research have released the 2026 State of Industrial AI Report, pulling together survey responses from over…
4.3 takeaways from SatShow 2026
Megatrends shaping the satellite sector, operators’ deepest pain points, and strategic maneuvers, from SatShow 2026. RCR wasn’t on the floor at SatShow 2026, but that didn’t stop us from following the buzz. Here’s what we learned. SatShow this year…
5.Telcos shifting capex to AI infra, says Omdia
Hyperscalers continue to dominate AI infrastructure globally, but regulatory dynamics are opening space for telcos in specific segments, observes Omdia In sum – what to know: Demand-led models – Telcos must secure anchor tenants and real workloads early, as AI…
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.Delay-Doppler Channel Estimation using Arbitrarily Modulated Data Transmissions
Conventional delay-Doppler (DD) communication and sensing systems require transmitting pilot frames at every channel coherence time interval in order to keep track of channel variations at the cost of spectral efficiency. In this paper, we propose an approach to utilize data transmissions modulated using arbitrary waveforms for DD channel estimation without requiring pilot transmissions in every coherence time interval. Numerical evaluation over practical doubly-selective channel models demonstrates a $\sim 1.8 \times$ improvement in spectral efficiency with our proposed data-based approach over conventional pilot-based approaches across various 6G modulation schemes.
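A back-of-envelope way to see where a gain of this magnitude could come from (our assumption about the mechanism, not the paper's model): if a pilot frame must be sent every coherence interval and occupies a fraction f of that interval's resources, eliminating pilots scales spectral efficiency by 1/(1 - f).

```python
# Back-of-envelope sketch (an assumption, not the paper's evaluation): pilot
# overhead fraction f per coherence interval implies a spectral-efficiency
# gain of 1/(1 - f) when pilots are eliminated. A ~1.8x gain would then
# correspond to roughly f = 4/9 of resources spent on pilots.
def se_gain(pilot_fraction):
    return 1.0 / (1.0 - pilot_fraction)

print(round(se_gain(4.0 / 9.0), 2))  # 1.8
```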
2.Robust Hybrid Beamforming with Liquid Crystal Antennas and Liquid Neural Networks
Sub-terahertz (sub-THz) multi-user multiple-input multiple-output (MU-MIMO) systems unlock immense bandwidth for 6G wireless communications. However, practical deployment of wireless systems in sub-THz bands faces critical challenges such as increased atmospheric absorption, reduced channel coherence time due to increased Doppler spread at higher carrier frequencies, and hardware bottlenecks as low-loss sub-THz phase shifters are difficult to realize. To overcome the hardware and channel estimation challenges of sub-THz systems, this paper proposes a hybrid beamforming (BF) framework that integrates reconfigurable liquid crystal (LC) antennas with a liquid neural network (LNN) at the transmitter. Specifically, we employ an LC antenna as the analog BF stage of a hybrid BF architecture, exploiting its voltage-driven permittivity tunability to ...
3.From 6G Scenarios and Requirements to Design Drivers: Insights from 3GPP Release 20
The definition of sixth-generation (6G) systems is being shaped by early standardization efforts, including the 3GPP TR 38.914 (Release 20) study on scenarios and requirements. This study introduces a comprehensive set of deployment environments, service classes, and performance targets that will guide the evolution toward IMT-2030. This article provides a design-oriented interpretation of these definitions, bridging the gap between standardized scenarios and system design. We first organize 6G deployment scenarios and emerging services into a unified framework. We then identify key design drivers derived from the 3GPP requirements, including terrestrial-non-terrestrial integration, GNSS-free operation, AI-native networking, and joint communication and sensing. Finally, we discuss the implications of these drivers on 6G architecture and h...
4.Reliable Non-Line-of-Sight Intrusion Detection with Integrated Sensing and Communications Hardware
Non-line-of-sight (NLOS) sensing has the potential to enable use cases like intrusion detection in occluded areas, increasing the value provided by Integrated Sensing and Communications (ISAC) in future 6G cellular networks. In this paper, we present a reliable NLOS intrusion detection system based on a millimeter-wave ISAC proof-of-concept. By leveraging reflections off a large surface, the proposed system addresses the challenge of detecting moving targets in cluttered indoor industrial scenarios where the direct line-of-sight is obstructed. A signal processing pipeline including a probability hypothesis density (PHD) filter is applied to detect targets and track movements in NLOS. Experimental validation conducted in the ARENA2036 industrial research campus demonstrates that our system can reliably detect target presence in NLOS while ...
5.RieIF: Knowledge-Driven Riemannian Information Flow for Robust Spatio-Temporal Graph Signal Prediction in 6G Wireless Networks
With 6G evolving towards intelligent network autonomy, artificial intelligence (AI)-native operations are becoming pivotal. Wireless networks continuously generate rich and heterogeneous data, which inherently exhibits spatio-temporal graph structure. However, limited radio resources result in incomplete and noisy network measurements. This challenge is further intensified when a target variable and its strongest correlates are missing over contiguous intervals, forming systemic blind spots. To tackle this issue, we propose RieIF (Knowledge-driven Riemannian Information Flow), a geometry-consistent framework that incorporates knowledge graphs (KGs) for robust spatio-temporal graph signal prediction. For analytical tractability within the Fisher-Rao geometry, we project the input from a Riemannian manifold onto a positive unit hypersphere,...
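The projection the abstract mentions has a classical closed form worth making explicit (standard information-geometry fact, not RieIF's full pipeline): under the Fisher-Rao metric, a categorical distribution p maps to sqrt(p), a point on the positive unit hypersphere, where geodesic distance is proportional to the angle between mapped points.

```python
import numpy as np

# Sketch of the square-root map: a categorical distribution p lands on the
# positive unit hypersphere as sqrt(p), and the classical Fisher-Rao distance
# is twice the angle between the mapped points.
def to_sphere(p):
    p = np.asarray(p, dtype=float)
    return np.sqrt(p / p.sum())        # normalize, then take the square-root map

def fisher_rao_distance(p, q):
    cos = np.clip(np.dot(to_sphere(p), to_sphere(q)), -1.0, 1.0)
    return 2.0 * np.arccos(cos)

p, q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
print(np.linalg.norm(to_sphere(p)))         # 1.0 — lands on the unit sphere
print(round(fisher_rao_distance(p, p), 6))  # 0.0
```

Working on the sphere keeps distances between noisy network measurements geometry-consistent, which is the analytical tractability the framework relies on.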
arXiv Quantitative Finance
1.SoK of RWA Tokenization: A Systematization of Concepts, Architectures, and Legal Interoperability
The global financial architecture is undergoing a shift from intermediary-centric settlement to programmable infrastructure, to transmute trillions in static illiquid capital into active, high-velocity instruments. We argue that Real World Asset (RWA) tokenization represents a conceptual evolution beyond mere digitization, converting passive ledger entries into programmable economic agents capable of autonomous settlement and algorithmic collateralization. However, achieving such seamless capital efficiency necessitates resolving the fundamental friction between deterministic on-chain code and probabilistic off-chain reality, navigating the oracle problem and jurisdictional interoperability. This systematization of knowledge presents a taxonomy for the RWA lifecycle and deconstructs the multi-layered architecture, spanning legal custody, ...
2.Sequential Audit Sampling with Statistical Guarantees
Financial statement auditing is conducted under a risk-based evidence approach to obtain reasonable assurance. In practice, auditors often perform additional sampling or related procedures when an initial sample does not provide a sufficient basis for a conclusion. Across jurisdictions, current standards and practice manuals acknowledge such extensions, while the statistical design of sequential audit procedures has not been fully explored. This study formulates audit sampling with additional, sequentially collected items as a sequential testing problem for a finite population under sampling without replacement. We define null and alternative hypotheses in terms of a tolerable deviation rate, specify stopping and decision rules, and formulate exact sequential boundary conditions in terms of finite-population error probabilities. For pract...
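The general shape of such a procedure can be sketched with a textbook sequential probability ratio test (an illustration only; the paper derives exact finite-population boundaries, whereas this sketch uses the standard binomial SPRT): sample items one at a time, accumulate a log-likelihood ratio, and stop as soon as a boundary is crossed.

```python
import math
import random

# Illustrative sketch (not the paper's exact finite-population boundaries):
# an SPRT-style sequential test of H0: deviation rate <= p0 against
# H1: deviation rate >= p1, sampling items one at a time without replacement.
def sequential_audit(population, p0=0.01, p1=0.05, alpha=0.05, beta=0.10):
    upper = math.log((1 - beta) / alpha)   # cross -> reject H0 (material error)
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0 (population ok)
    llr = 0.0
    items = random.sample(population, len(population))  # draw w/o replacement
    for n, is_error in enumerate(items, 1):
        llr += (math.log(p1 / p0) if is_error
                else math.log((1 - p1) / (1 - p0)))
        if llr >= upper:
            return "reject H0", n
        if llr <= lower:
            return "accept H0", n
    return "inconclusive", len(items)

random.seed(1)
clean = [False] * 2000                     # no deviations in the population
print(sequential_audit(clean))             # stops after ~55 error-free items
```

The appeal over fixed-size sampling is visible here: a clean population lets the auditor stop after a few dozen items instead of a predetermined large sample, while the alpha/beta boundaries preserve the error-probability guarantees.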
3.Generative Path-Law Jump-Diffusion: Sequential MMD-Gradient Flows and Generalisation Bounds in Marcus-Signature RKHS
This paper introduces a novel generative framework for synthesising forward-looking, càdlàg stochastic trajectories that are sequentially consistent with time-evolving path-law proxies, thereby incorporating anticipated structural breaks, regime shifts, and non-autonomous dynamics. By framing path synthesis as a sequential matching problem on restricted Skorokhod manifolds, we develop the \textit{Anticipatory Neural Jump-Diffusion} (ANJD) flow, a generative mechanism that effectively inverts the time-extended Marcus-sense signature. Central to this approach is the Anticipatory Variance-Normalised Signature Geometry (AVNSG), a time-evolving precision operator that performs dynamic spectral whitening on the signature manifold to ensure contractivity during volatile regime shifts and discrete aleatoric shocks. We provide a rigorous theoretic...
4.Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions
This paper introduces Anticipatory Reinforcement Learning (ARL), a novel framework designed to bridge the gap between non-Markovian decision processes and classical reinforcement learning architectures, specifically under the constraint of a single observed trajectory. In environments characterised by jump-diffusions and structural breaks, traditional state-based methods often fail to capture the essential path-dependent geometry required for accurate foresight. We resolve this by lifting the state space into a signature-augmented manifold, where the history of the process is embedded as a dynamical coordinate. By utilising a self-consistent field approach, the agent maintains an anticipated proxy of the future path-law, allowing for a deterministic evaluation of expected returns. This transition from stochastic branching to a single-pass...
5.SoK: Blockchain Agent-to-Agent Payments
Agentic AI rivals human capabilities across a wide range of domains. Looking ahead, it is foreseeable that AI agents will autonomously handle complex workflows and interactions. Early prototypes of this paradigm are emerging, e.g., OpenClaw and Moltbook, signaling a shift toward Agent-to-Agent (A2A) ecosystems. However, despite these promising blueprints, critical trust and security challenges remain, particularly in scenarios involving financial transactions. Ensuring secure and reliable payment mechanisms between unknown and untrusted agents is crucial for a fully functional and trustworthy A2A ecosystem. Although blockchain-based infrastructures provide a natural foundation for this setting, via programmable settlement, transparent accounting, and open interoperability, trust and security challenges have not yet been fully addr...
arXiv – 6G & Networking
1.Delay-Doppler Channel Estimation using Arbitrarily Modulated Data Transmissions
Conventional delay-Doppler (DD) communication and sensing systems require transmitting pilot frames at every channel coherence time interval in order to keep track of channel variations at the cost of spectral efficiency. In this paper, we propose an approach to utilize data transmissions modulated using arbitrary waveforms for DD channel estimation without requiring pilot transmissions in every coherence time interval. Numerical evaluation over practical doubly-selective channel models demonstrates a $\sim 1.8 \times$ improvement in spectral efficiency with our proposed data-based approach over conventional pilot-based approaches across various 6G modulation schemes.
2.Robust Hybrid Beamforming with Liquid Crystal Antennas and Liquid Neural Networks
Sub-terahertz (sub-THz) multi-user multiple-input multiple-output (MU-MIMO) systems unlock immense bandwidth for 6G wireless communications. However, practical deployment of wireless systems in sub-THz bands faces critical challenges such as increased atmospheric absorption, reduced channel coherence time due to increased Doppler spread at higher carrier frequencies, and hardware bottlenecks as low-loss sub-THz phase shifters are difficult to realize. To overcome the hardware and channel estimation challenges of sub-THz systems, this paper proposes a hybrid beamforming (BF) framework that integrates reconfigurable liquid crystal (LC) antennas with a liquid neural network (LNN) at the transmitter. Specifically, we employ an LC antenna as the analog BF stage of a hybrid BF architecture, exploiting its voltage-driven permittivity tunability to ...
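The hybrid analog/digital split at the heart of such architectures can be sketched with the classic phase-extraction scheme: a phase-only analog stage (the kind of constraint a tunable LC aperture imposes) followed by a least-squares digital stage. This is a standard textbook decomposition, not the paper's LNN-driven design; array size and stream count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Ns = 32, 4            # transmit antennas, data streams (= RF chains here)

# Target fully-digital precoder: right singular vectors of a random channel.
H = (rng.standard_normal((Ns, Nt)) +
     1j * rng.standard_normal((Ns, Nt))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :Ns]                    # Nt x Ns, unitary columns

# Analog stage: phase-only weights (constant modulus per element).
F_RF = np.exp(1j * np.angle(F_opt)) / np.sqrt(Nt)

# Digital stage: least-squares fit, then total-power renormalisation.
F_BB = np.linalg.pinv(F_RF) @ F_opt
F_BB *= np.sqrt(Ns) / np.linalg.norm(F_RF @ F_BB, 'fro')

# Relative approximation error vs. the fully-digital precoder.
err = (np.linalg.norm(F_opt - F_RF @ F_BB, 'fro')
       / np.linalg.norm(F_opt, 'fro'))
```

The point of the sketch is the constraint structure: every analog weight has fixed modulus, so all amplitude shaping must come from the small digital matrix, exactly the regime where learned controllers such as LNNs become attractive.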
3.From 6G Scenarios and Requirements to Design Drivers: Insights from 3GPP Release 20
The definition of sixth-generation (6G) systems is being shaped by early standardization efforts, including the 3GPP TR 38.914 (Release 20) study on scenarios and requirements. This study introduces a comprehensive set of deployment environments, service classes, and performance targets that will guide the evolution toward IMT-2030. This article provides a design-oriented interpretation of these definitions, bridging the gap between standardized scenarios and system design. We first organize 6G deployment scenarios and emerging services into a unified framework. We then identify key design drivers derived from the 3GPP requirements, including terrestrial-non-terrestrial integration, GNSS-free operation, AI-native networking, and joint communication and sensing. Finally, we discuss the implications of these drivers on 6G architecture and h...
4.Reliable Non-Line-of-Sight Intrusion Detection with Integrated Sensing and Communications Hardware
Non-line-of-sight (NLOS) sensing has the potential to enable use cases like intrusion detection in occluded areas, increasing the value provided by Integrated Sensing and Communications (ISAC) in future 6G cellular networks. In this paper, we present a reliable NLOS intrusion detection system based on a millimeter-wave ISAC proof-of-concept. By leveraging reflections off a large surface, the proposed system addresses the challenge of detecting moving targets in cluttered indoor industrial scenarios where the direct line-of-sight is obstructed. A signal processing pipeline including a probability hypothesis density (PHD) filter is applied to detect targets and track movements in NLOS. Experimental validation conducted in the ARENA2036 industrial research campus demonstrates that our system can reliably detect target presence in NLOS while ...
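The PHD filter the pipeline relies on can be illustrated with a single measurement update of a one-dimensional Gaussian-mixture PHD. This is a minimal, generic GM-PHD step (no birth or merge logic), not the authors' tracking pipeline; all parameter values are assumptions:

```python
import numpy as np

def gm_phd_update(weights, means, variances, z, p_d=0.9, clutter=0.1, r=0.5):
    """One measurement update of a 1-D Gaussian-mixture PHD filter.

    weights/means/variances describe the predicted intensity; z is a scalar
    measurement with noise variance r; clutter is the clutter intensity at z.
    Returns the updated mixture: missed-detection terms + detection terms.
    """
    s = variances + r                                   # innovation variances
    lik = np.exp(-0.5 * (z - means) ** 2 / s) / np.sqrt(2 * np.pi * s)
    denom = clutter + p_d * np.sum(weights * lik)
    # missed-detection components keep their prediction, scaled by (1 - p_d)
    w_miss = (1 - p_d) * weights
    # detected components: per-component Kalman update
    k = variances / s
    w_det = p_d * weights * lik / denom
    m_det = means + k * (z - means)
    v_det = (1 - k) * variances
    return (np.concatenate([w_miss, w_det]),
            np.concatenate([means, m_det]),
            np.concatenate([variances, v_det]))
```

The sum of the updated weights estimates the expected number of targets, which is what lets a PHD-based detector declare "someone is moving in the occluded area" without explicit data association.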
5.RieIF: Knowledge-Driven Riemannian Information Flow for Robust Spatio-Temporal Graph Signal Prediction in 6G Wireless Networks
With 6G evolving towards intelligent network autonomy, artificial intelligence (AI)-native operations are becoming pivotal. Wireless networks continuously generate rich and heterogeneous data, which inherently exhibits spatio-temporal graph structure. However, limited radio resources result in incomplete and noisy network measurements. This challenge is further intensified when a target variable and its strongest correlates are missing over contiguous intervals, forming systemic blind spots. To tackle this issue, we propose RieIF (Knowledge-driven Riemannian Information Flow), a geometry-consistent framework that incorporates knowledge graphs (KGs) for robust spatio-temporal graph signal prediction. For analytical tractability within the Fisher-Rao geometry, we project the input from a Riemannian manifold onto a positive unit hypersphere,...
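The projection onto a positive unit hypersphere under Fisher-Rao geometry has a well-known concrete form for probability vectors: the square-root map, under which the Fisher-Rao distance becomes an arc length on the sphere. A minimal sketch (the function names are ours; this is the standard construction, not RieIF itself):

```python
import numpy as np

def to_sphere(p):
    """Square-root map: probability vector -> positive unit hypersphere."""
    p = np.asarray(p, dtype=float)
    return np.sqrt(p / p.sum())

def fisher_rao_dist(p, q):
    """Fisher-Rao distance on the simplex = 2 * arc length on the sphere."""
    inner = np.clip(to_sphere(p) @ to_sphere(q), -1.0, 1.0)
    return 2.0 * np.arccos(inner)
```

Working on the sphere keeps distances analytically tractable, which is the "analytical tractability" the abstract invokes before building the information-flow machinery on top.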
arXiv – Network Architecture (6G/Slicing)
1.Enhancing Secure Intent-Based Networking with an Agentic AI: The EU Project MARE Approach
In the EU project MARE, a novel plane was proposed and used in combination with intent-based networking (IBN), allowing the operator to focus on the what rather than the how. Recently, LLMs have been successfully employed to translate high-level intents into low-level actions. The open challenge is to understand how IBN can be effectively enhanced with LLMs and emerging agentic AI for security purposes. Enhancing IBN with an agentic AI paradigm introduces significant challenges that existing solutions do not fully address. This paper proposes an enhanced IBN framework with a strong security focus toward agentic AI. We address the architectural and security requirements for a multi-agent intent-based system (IBS) architecture, including a multi-domain IBN. We propose a hierarchical multi-agent and multi-vendor architecture that can also...
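The "what, not how" contract of intent-based networking can be sketched as a broker that maps declarative intents to registered low-level handlers. This is a generic illustration of the pattern, not MARE's plane or architecture; all names (IntentBroker, the "isolate-slice" goal) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    goal: str                            # the "what" the operator declares
    params: dict = field(default_factory=dict)

class IntentBroker:
    """Registry mapping high-level goals to low-level action handlers."""
    def __init__(self):
        self._handlers = {}

    def register(self, goal, handler):
        self._handlers[goal] = handler   # the "how", supplied per domain

    def fulfil(self, intent):
        if intent.goal not in self._handlers:
            raise ValueError(f"no agent can fulfil intent {intent.goal!r}")
        return self._handlers[intent.goal](**intent.params)

broker = IntentBroker()
broker.register("isolate-slice",
                lambda slice_id: f"ACL applied to {slice_id}")
result = broker.fulfil(Intent("isolate-slice", {"slice_id": "embb-7"}))
```

In an agentic setting the handlers become LLM-backed agents rather than fixed lambdas, which is precisely where the paper's security requirements (authenticating and constraining the agents behind each goal) enter.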
2.Advanced Holographic Multi-Antenna Solutions for Global Non-Terrestrial Network Integration in IMT-2030 Systems
Sixth-generation (6G) networks are expected to provide ubiquitous connectivity across terrestrial and non-terrestrial domains. This will be possible by integrating non-terrestrial networks (NTNs) to extend coverage to underserved areas. Antennas are central to this vision, with multiple-input multiple-output (MIMO) technologies receiving the most attention due to their ability to exploit spatial multiplexing to improve link capacity and reliability. However, conventional MIMO can consume significant energy, as each antenna element typically requires an independent RF chain. This limitation is particularly critical in non-terrestrial systems, where onboard energy resources are limited. Holographic MIMO (HMIMO) has emerged as a promising alternative in this context. These systems are based on theoretically continuous apertures, where radiat...
3.Reimagining RAN Automation in 6G: An Agentic AI Framework with Hierarchical Online Decision Transformer
In this paper, we propose an Agentic Artificial Intelligence (AI) framework for wireless networks. The framework coordinates a pool of AI agents guided by Natural Language (NL) inputs from a human operator. At its core, the super agent is powered by a Hierarchical Online Decision Transformer (H-ODT). It orchestrates three categories of agents: (i) inter-slice and intra-slice resource allocation agents, (ii) network application orchestration agents, and (iii) self-healing agents. The orchestration takes place with the help of an Agentic Retrieval-Augmented Generation (RAG) module that integrates knowledge from heterogeneous sources. In this proposed methodology, the super agent directly interfaces with operators and generates sequential policies to activate relevant agents. The proposed framework is evaluated against three state-of-the-art ba...
4.RL-Loop: Reinforcement Learning-Driven Real-Time 5G Slice Control for Connected and Autonomous Mobility Services
Smart and connected mobility systems rely on 5G edge infrastructure to support real-time communication, control, and service differentiation. Achieving this requires adaptive resource management mechanisms that can react to rapidly changing traffic conditions. In this paper, we propose RL-Loop, a closed-loop reinforcement learning framework for real-time CPU resource control in 5G network slicing environments supporting connected mobility services. RL-Loop employs a Proximal Policy Optimization (PPO) agent that continuously observes slice-level key performance indicators and adjusts edge CPU allocations at one-second granularity on a real testbed. The framework leverages real-time observability and feedback to enable adaptive, software-defined edge intelligence. Experimental results suggest that RL-Loop can reduce average CPU allocation b...
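The closed loop of observing slice KPIs and adjusting CPU allocation can be sketched with a deliberately simplified policy-gradient ascent on expected reward over discrete allocations. This is a deterministic toy stand-in, not the paper's PPO agent or testbed; the latency model, SLA target, and reward shape are all assumptions:

```python
import numpy as np

# Toy slice model: latency falls with allocated CPUs; reward trades off
# resource cost against an SLA-violation penalty (all values hypothetical).
allocs = np.array([1.0, 2.0, 3.0, 4.0])        # candidate CPU allocations
load, sla = 2.0, 1.0                            # offered load, latency target
latency = load / allocs
reward = -allocs - 10.0 * (latency > sla)       # cost + SLA penalty

theta = np.zeros(4)                             # softmax policy parameters
for _ in range(500):
    pi = np.exp(theta) / np.exp(theta).sum()
    # exact policy gradient of the expected reward E[r] = sum_a pi_a * r_a
    theta += 0.5 * pi * (reward - pi @ reward)

best = allocs[np.argmax(theta)]   # cheapest allocation meeting the SLA
```

The real system replaces the fixed reward table with live KPI feedback at one-second granularity and the exact gradient with PPO's sampled, clipped updates, but the control objective, meeting the SLA with the least CPU, is the same.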
5.CIVIC: Cooperative Immersion Via Intelligent Credit-sharing in DRL-Powered Metaverse
The Metaverse faces complex resource allocation challenges due to diverse Virtual Environments (VEs), Digital Twins (DTs), dynamic user demands, and strict immersion needs. This paper introduces CIVIC (Cooperative Immersion Via Intelligent Credit-sharing), a novel framework optimizing resource sharing among multiple Metaverse Service Providers (MSPs) to enhance user immersion. Unlike existing methods, CIVIC integrates VE rendering, DT synchronization, credit sharing, and immersion-aware provisioning within a cooperative multi-MSP model. The resource allocation problem is formulated as two NP-hard challenges: a non-cooperative setting where MSPs operate independently and a cooperative setting utilizing a General Credit Pool (GCP) for dynamic resource sharing. Using Deep Reinforcement Learning (DRL) for tuning resources and managing coopera...
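The General Credit Pool mechanism can be sketched as a minimal ledger in which providers earn credits for contributed capacity and spend them to borrow from the pool. This is an illustrative toy of the credit-sharing idea only; the class name, credit rules, and one-to-one credit/capacity exchange rate are our assumptions, not CIVIC's formulation:

```python
class GeneralCreditPool:
    """Toy credit pool: MSPs earn credits for contributed capacity and
    spend them to borrow from the shared pool (rules are illustrative)."""

    def __init__(self):
        self.capacity = 0.0       # total resources currently in the pool
        self.credits = {}         # per-MSP credit balance

    def contribute(self, msp, amount):
        self.capacity += amount
        self.credits[msp] = self.credits.get(msp, 0.0) + amount

    def borrow(self, msp, amount):
        # refuse if the pool or the MSP's credit balance cannot cover it
        if amount > self.capacity or amount > self.credits.get(msp, 0.0):
            return 0.0
        self.capacity -= amount
        self.credits[msp] -= amount
        return amount
```

The cooperative setting in the paper then layers DRL on top of such a pool, learning when contributing or borrowing best preserves user immersion across MSPs.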