Daily Briefing – Mar 16 (91 Articles)
Babak's Daily Briefing
Monday, March 16, 2026
Sources: 19 | Total Articles: 91
6G World
1.SpaceRAN: Airbus UpNext explores software-defined 5G NTN from orbit
Airbus UpNext has launched its SpaceRAN (Space Radio Access Network) demonstrator, a key initiative to advance standardised 5G…
2.SoftBank’s Transformer-Based AI-RAN Hits 30% Uplink Gain at Sub-Millisecond Latency
On August 21, 2025, SoftBank published results from a live, standards-compliant AI-RAN trial that replaces parts of classical signal processing with a lightweight Transformer.
3.6G as a Platform for Value
Reframing the Future with NGMN’s Chairman, Laurent Leboucher By Piotr (Peter) Pietrzyk, Managing Editor, 6GWorld.com In the race…
4.SoftBank Road-Tests 7 GHz in Central Tokyo
SoftBank and Nokia have begun outdoor field trials in Tokyo’s Ginza district using 7 GHz spectrum, installing three pre-commercial base stations to compare coverage and radio characteristics against today’s sub-6 GHz 5G sites.
5.NXP’s Acquisition of TTTech Auto Signals Growing Focus on Middleware for Software-Defined Vehicles
On June 17, 2025, NXP Semiconductors finalized its acquisition of TTTech Auto—a strategic move to integrate TTTech’s flagship…
AI Agents
1.PISmith: Reinforcement Learning-based Red Teaming for Prompt Injection Defenses
Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been proposed, their robustness against adaptive attacks remains insufficiently evaluated, potentially creating a false sense of security. In this work, we propose PISmith, a reinforcement learning (RL)-based red-teaming framework that systematically assesses existing prompt-injection defenses by training an attack LLM to optimize injected prompts in a practical black-box setting, where the attacker can only query the defended LLM and observe its outputs. We find that directly applying standard GRPO to attack strong defenses leads to sub-optimal performance due to extreme reward sparsity -- most generated injected prompts are blocked by the defense, causing the policy's entropy to collapse before discov...
2.AI Planning Framework for LLM-Based Web Agents
Developing autonomous agents for web-based tasks is a core challenge in AI. While Large Language Model (LLM) agents can interpret complex user requests, they often operate as black boxes, making it difficult to diagnose why they fail or how they plan. This paper addresses this gap by formally treating web tasks as sequential decision-making processes. We introduce a taxonomy that maps modern agent architectures to traditional planning paradigms: Step-by-Step agents to Breadth-First Search (BFS), Tree Search agents to Best-First Tree Search, and Full-Plan-in-Advance agents to Depth-First Search (DFS). This framework allows for a principled diagnosis of system failures like context drift and incoherent task decomposition. To evaluate these behaviors, we propose five novel evaluation metrics that assess trajectory quality beyond simple succe...
3.Uncovering Security Threats and Architecting Defenses in Autonomous Agents: A Case Study of OpenClaw
The rapid evolution of Large Language Models (LLMs) into autonomous, tool-calling agents has fundamentally altered the cybersecurity landscape. Frameworks like OpenClaw grant AI systems operating-system-level permissions and the autonomy to execute complex workflows. This level of access creates unprecedented security challenges. Consequently, traditional content-filtering defenses have become obsolete. This report presents a comprehensive security analysis of the OpenClaw ecosystem. We systematically investigate its current threat landscape, highlighting critical vulnerabilities such as prompt injection-driven Remote Code Execution (RCE), sequential tool attack chains, context amnesia, and supply chain contamination. To systematically contextualize these threats, we propose a novel tri-layered risk taxonomy for autonomous Agents, categor...
4.When OpenClaw Meets Hospital: Toward an Agentic Operating System for Dynamic Clinical Workflows
Large language model (LLM) agents extend conventional generative models by integrating reasoning, tool invocation, and persistent memory. Recent studies suggest that such agents may significantly improve clinical workflows by automating documentation, coordinating care processes, and assisting medical decision making. However, despite rapid progress, deploying autonomous agents in healthcare environments remains difficult due to reliability limitations, security risks, and insufficient long-term memory mechanisms. This work proposes an architecture that adapts LLM agents for hospital environments. The design introduces four core components: a restricted execution environment inspired by Linux multi-user systems, a document-centric interaction paradigm connecting patient and clinician agents, a page-indexed memory architecture designed for...
5.From Control to Foresight: Simulation as a New Paradigm for Human-Agent Collaboration
Large Language Models (LLMs) are increasingly used to power autonomous agents for complex, multi-step tasks. However, human-agent interaction remains pointwise and reactive: users approve or correct individual actions to mitigate immediate risks, without visibility into subsequent consequences. This forces users to mentally simulate long-term effects, a cognitively demanding and often inaccurate process. Users have control over individual steps but lack the foresight to make informed decisions. We argue that effective collaboration requires foresight, not just control. We propose simulation-in-the-loop, an interaction paradigm that enables users and agents to explore simulated future trajectories before committing to decisions. Simulation transforms intervention from reactive guesswork into informed exploration, while helping users discov...
AI Computation & Hardware
1.Task-Specific Knowledge Distillation via Intermediate Probes
arXiv:2603.12270v1 Announce Type: new Abstract: Knowledge distillation from large language models (LLMs) assumes that the teacher's output distribution is a high-quality training signal. On reasoning tasks, this assumption is frequently violated. A model's intermediate representations may encode the correct answer, yet this information is lost or distorted through the vocabulary projection, where prompt formatting and answer-token choices create brittle, noisy outputs. We introduce \method{}, a distillation framework that bypasses this bottleneck by training lightweight probes on frozen teacher hidden states and using the probe's predictions, rather than output logits, as supervision for student training. This simple change yields consistent improvements across four reasoning benchmarks (AQuA-RAT, ARC Easy/Challenge, and MMLU), with g...
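The mechanism the abstract describes — a lightweight probe trained on frozen teacher hidden states whose soft predictions replace the teacher's output logits as the distillation target — can be sketched as follows. The synthetic hidden states, probe form, and training loop are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen teacher hidden states for 200 examples (dim 16), 3 answer classes.
H = rng.normal(size=(200, 16))
y = rng.integers(0, 3, size=200)
H[np.arange(200), y] += 2.0           # make the states weakly informative of the answer

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# 1) Train a lightweight linear probe on the frozen hidden states.
W = np.zeros((16, 3))
onehot = np.eye(3)[y]
for _ in range(300):                  # plain gradient descent on cross-entropy
    p = softmax(H @ W)
    W -= 0.1 * H.T @ (p - onehot) / len(y)

# 2) Use the probe's soft predictions -- not the teacher's output logits --
#    as the supervision signal for student training.
soft_targets = softmax(H @ W)
probe_acc = (soft_targets.argmax(1) == y).mean()
print(f"probe accuracy on teacher states: {probe_acc:.2f}")
```

In a real setup the probe would read a chosen intermediate layer of the teacher, and `soft_targets` would feed a KL or cross-entropy loss on the student.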
2.Diagnosing Retrieval Bias Under Multiple In-Context Knowledge Updates in Large Language Models
arXiv:2603.12271v1 Announce Type: new Abstract: LLMs are widely used in knowledge-intensive tasks where the same fact may be revised multiple times within context. Unlike prior work focusing on one-shot updates or single conflicts, multi-update scenarios contain multiple historically valid versions that compete at retrieval, yet remain underexplored. This challenge resembles the AB-AC interference paradigm in cognitive psychology: when the same cue A is successively associated with B and C, the old and new associations compete during retrieval, leading to bias. Inspired by this, we introduce a Dynamic Knowledge Instance (DKI) evaluation framework, modeling multi-updates of the same fact as a cue paired with a sequence of updated values, and assess models via endpoint probing of the earliest (initial) and latest (current) states. Across d...
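The evaluation setup described — a cue paired with a sequence of updated values, probed at its earliest and latest states — can be illustrated with a toy instance. The field names and prompt wording here are illustrative stand-ins, not the paper's templates:

```python
# Hypothetical construction of one Dynamic Knowledge Instance: the same cue
# is revised several times in context, then probed at both endpoints.
cue = "The CEO of Acme Corp"
updates = ["Alice Chen", "Bob Mora", "Carol Diaz"]   # chronological revisions

context = "\n".join(
    f"Update {i + 1}: {cue} is {v}." for i, v in enumerate(updates)
)

probes = {
    "initial": f"{context}\nBefore any updates, {cue} was",
    "current": f"{context}\nAfter all updates, {cue} is",
}

# Endpoint answers a model would be scored against: retrieval bias shows up
# as the earlier association intruding on the "current" probe (or vice versa).
gold = {"initial": updates[0], "current": updates[-1]}
print(gold)
```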
3.ActTail: Global Activation Sparsity in Large Language Models
arXiv:2603.12272v1 Announce Type: new Abstract: Activation sparsity is a promising approach for accelerating large language model (LLM) inference by reducing computation and memory movement. However, existing activation sparsity methods typically apply uniform sparsity across projections, ignoring the heterogeneous statistical properties of Transformer weights and thereby amplifying performance degradation. In this paper, we propose ActTail, a TopK magnitude-based activation sparsity method with global activation sparsity allocation grounded in Heavy-Tailed Self-Regularization (HT-SR) theory. Specifically, we capture this heterogeneity via the heavy-tail exponent computed from each projection's empirical spectral density (ESD), which is used as a quantitative indicator to assign projection-specific sparsity budgets. Importantly, we provi...
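The allocation idea — estimate a heavy-tail exponent from each projection's empirical spectral density, then assign projection-specific TopK sparsity budgets instead of a uniform one — can be sketched as below. The Hill-style exponent estimate and the budget rule are simplified stand-ins for the paper's HT-SR fitting, and the direction of the allocation is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_exponent(W):
    """Crude Hill-style estimate of the heavy-tail exponent of the empirical
    spectral density of W^T W (a stand-in for full HT-SR power-law fitting)."""
    eig = np.linalg.eigvalsh(W.T @ W)
    tail = np.sort(eig)[-len(eig) // 4:]          # largest quarter of eigenvalues
    return 1.0 + len(tail) / np.sum(np.log(tail / tail.min()))

# Two hypothetical projection matrices with different spectral structure.
W_heavy = rng.normal(size=(256, 64)) @ np.diag(rng.pareto(1.5, 64) + 1.0)
W_light = rng.normal(size=(256, 64))

alphas = np.array([tail_exponent(W_heavy), tail_exponent(W_light)])

# One plausible global allocation of a 50% keep budget: heavier-tailed
# projections (smaller exponent) keep more of their activations.
keep = np.clip(0.5 * alphas.mean() / alphas, 0.05, 0.95)

def topk_mask(x, keep_frac):
    """Zero all but the top k activations by magnitude."""
    k = max(1, int(len(x) * keep_frac))
    thresh = np.sort(np.abs(x))[-k]
    return np.where(np.abs(x) >= thresh, x, 0.0)

x = rng.normal(size=64)
sparse = topk_mask(x, keep[0])
print("keep fractions:", keep.round(2), "| nonzeros:", np.count_nonzero(sparse))
```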
4.Aligning Language Models from User Interactions
arXiv:2603.12273v1 Announce Type: new Abstract: Multi-turn user interactions are among the most abundant data produced by language models, yet we lack effective methods to learn from them. While typically discarded, these interactions often contain useful information: follow-up user messages may indicate that a response was incorrect, failed to follow an instruction, or did not align with the user's preferences. Importantly, language models are already able to make use of this information in context. After observing a user's follow-up, the same model is often able to revise its behavior. We leverage this ability to propose a principled and scalable method for learning directly from user interactions through self-distillation. By conditioning the model on the user's follow-up message and comparing the resulting token distribution with the...
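The core signal the abstract describes — the model's next-token distribution shifts once it sees the user's follow-up, and that shifted distribution can supervise the original turn — can be sketched with toy distributions. The vocabulary, numbers, and KL-based weighting are illustrative, not the paper's training objective:

```python
import numpy as np

# Three-token toy vocabulary for a single answer position.
vocab = ["yes", "no", "maybe"]

p_before = np.array([0.70, 0.20, 0.10])   # model's distribution without the follow-up
p_after  = np.array([0.15, 0.75, 0.10])   # same model, conditioned on a corrective follow-up

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# The divergence measures how much the follow-up revised the model's belief;
# large values mark turns worth learning from via self-distillation.
revision_signal = kl(p_after, p_before)
target = p_after                           # distill p_before toward p_after
print(f"KL(after || before) = {revision_signal:.3f}, "
      f"target token = {vocab[int(target.argmax())]}")
```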
5.GONE: Structural Knowledge Unlearning via Neighborhood-Expanded Distribution Shaping
arXiv:2603.12275v1 Announce Type: new Abstract: Unlearning knowledge is a pressing and challenging task in Large Language Models (LLMs) because of their unprecedented capability to memorize and digest training data at scale, raising more significant issues regarding safety, privacy, and intellectual property. However, existing works, including parameter editing, fine-tuning, and distillation-based methods, are all focused on flat sentence-level data but overlook the relational, multi-hop, and reasoned knowledge in naturally structured data. In response to this gap, this paper introduces Graph Oblivion and Node Erasure (GONE), a benchmark for evaluating knowledge unlearning over structured knowledge graph (KG) facts in LLMs. This KG-based benchmark enables the disentanglement of three effects of unlearning: direct fact removal, reasoning-...
AI Machine Learning
1.No More DeLuLu: Physics-Inspired Kernel Networks for Geometrically-Grounded Neural Computation
arXiv:2603.12276v1 Announce Type: new Abstract: We introduce the yat-product, a kernel operator combining quadratic alignment with inverse-square proximity. We prove it is a Mercer kernel, analytic, Lipschitz on bounded domains, and self-regularizing, admitting a unique RKHS embedding. Neural Matter Networks (NMNs) use yat-product as the sole non-linearity, replacing conventional linear-activation-normalization blocks with a single geometrically-grounded operation. This architectural simplification preserves universal approximation while shifting normalization into the kernel itself via the denominator, rather than relying on separate normalization layers. Empirically, NMN-based classifiers match linear baselines on MNIST while exhibiting bounded prototype evolution and superposition robustness. In language modeling, Aether-GPT2 achieves ...
2.From Garbage to Gold: A Data-Architectural Theory of Predictive Robustness
arXiv:2603.12288v1 Announce Type: new Abstract: Tabular machine learning presents a paradox: modern models achieve state-of-the-art performance using high-dimensional (high-D), collinear, error-prone data, defying the "Garbage In, Garbage Out" mantra. To help resolve this, we synthesize principles from Information Theory, Latent Factor Models, and Psychometrics, clarifying that predictive robustness arises not solely from data cleanliness, but from the synergy between data architecture and model capacity. Partitioning predictor-space "noise" into "Predictor Error" and "Structural Uncertainty" (informational deficits from stochastic generative mappings), we prove that leveraging high-D sets of error-prone predictors asymptotically overcomes both types of noise, whereas cleaning a low-D set is fundamentally bounded by Structural Uncertainty...
3.Multi-objective Genetic Programming with Multi-view Multi-level Feature for Enhanced Protein Secondary Structure Prediction
arXiv:2603.12293v1 Announce Type: new Abstract: Predicting protein secondary structure is essential for understanding protein function and advancing drug discovery. However, the intricate sequence-structure relationship poses significant challenges for accurate modeling. To address these, we propose MOGP-MMF, a multi-objective genetic programming framework that reformulates PSSP as an automated optimization task focused on feature selection and fusion. Specifically, MOGP-MMF introduces a multi-view multi-level representation strategy that integrates evolutionary, semantic, and newly introduced structural views to capture the comprehensive protein folding logic. Leveraging an enriched operator set, the framework evolves both linear and nonlinear fusion functions, effectively capturing high-order feature interactions while reducing fusion c...
4.Synthetic Data Generation for Brain-Computer Interfaces: Overview, Benchmarking, and Future Directions
arXiv:2603.12296v1 Announce Type: new Abstract: Deep learning has achieved transformative performance across diverse domains, largely driven by large-scale, high-quality training data. In contrast, the development of brain-computer interfaces (BCIs) is fundamentally constrained by limited, heterogeneous, and privacy-sensitive neural recordings. Generating synthetic yet physiologically plausible brain signals has therefore emerged as a compelling way to mitigate data scarcity and enhance model capacity. This survey provides a comprehensive review of brain signal generation for BCIs, covering methodological taxonomies, benchmark experiments, evaluation metrics, and key applications. We systematically categorize existing generative algorithms into four types: knowledge-based, feature-based, model-based, and translation-based approach...
5.Global Evolutionary Steering: Refining Activation Steering Control via Cross-Layer Consistency
arXiv:2603.12298v1 Announce Type: new Abstract: Activation engineering enables precise control over Large Language Models (LLMs) without the computational cost of fine-tuning. However, existing methods deriving vectors from static activation differences are susceptible to high-dimensional noise and layer-wise semantic drift, often capturing spurious correlations rather than the target intent. To address this, we propose Global Evolutionary Refined Steering (GER-steer), a training-free framework grounded in the geometric stability of the network's representation evolution. GER-steer exploits this global signal to rectify raw steering vectors, effectively decoupling robust semantic intent from orthogonal artifacts. Extensive evaluations confirm that GER-steer consistently outperforms baselines, delivering superior efficacy and generali...
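The static-difference baseline that GER-steer refines — a steering vector taken as the difference of mean activations over contrastive prompts, then added to a hidden state at inference — can be sketched generically. The synthetic activations below stand in for real hidden states recorded from a model; this is the baseline approach, not GER-steer itself:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 32

# Hypothetical layer activations for contrastive prompt pairs; a real setup
# would record these from a model's hidden states, not sample them.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)            # the "true" concept direction
acts_pos = rng.normal(size=(50, d)) + 2.0 * direction
acts_neg = rng.normal(size=(50, d))

# Static difference-of-means steering vector -- the baseline the abstract
# says is noisy and can drift across layers.
v = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)
v /= np.linalg.norm(v)

# Steering: add the (scaled) vector to a hidden state at inference time.
h = rng.normal(size=d)
h_steered = h + 4.0 * v

cos = float(v @ direction)
print(f"cosine(steering vector, true direction) = {cos:.2f}")
```

The residual angle between `v` and the true direction is exactly the high-dimensional noise the paper targets.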
AI Robotics
1.A Learning-Based Approach for Contact Detection, Localization, and Force Estimation of Continuum Manipulators With Integrated OFDR Optical Fiber
arXiv:2603.12347v1 Announce Type: new Abstract: Continuum manipulators (CMs) are widely used in minimally invasive procedures due to their compliant structure and ability to navigate deep and confined anatomical environments. However, their distributed deformation makes force sensing, contact detection, localization, and force estimation challenging, particularly when interactions occur at unknown arc-length locations along the robot. To address this problem, we propose a cascade learning-based framework (CLF) for CMs instrumented with a single distributed Optical Frequency Domain Reflectometry (OFDR) fiber embedded along one side of the robot. The OFDR sensor provides dense strain measurements along the manipulator backbone, capturing strain perturbations caused by external interactions. The proposed CLF first detects contact using a Gra...
2.GNN-DIP: Neural Corridor Selection for Decomposition-Based Motion Planning
arXiv:2603.12361v1 Announce Type: new Abstract: Motion planning through narrow passages remains a core challenge: sampling-based planners rarely place samples inside these narrow but critical regions, and even when samples land inside a passage, the straight-line connections between them run close to obstacle boundaries and are frequently rejected by collision checking. Decomposition-based planners resolve both issues by partitioning free space into convex cells -- every passage is captured exactly as a cell boundary, and any path within a cell is collision-free by construction. However, the number of candidate corridors through the cell graph grows combinatorially with environment complexity, creating a bottleneck in corridor selection. We present GNN-DIP, a framework that addresses this by integrating a Graph Neural Network (GNN) with a...
3.Push, Press, Slide: Mode-Aware Planar Contact Manipulation via Reduced-Order Models
arXiv:2603.12399v1 Announce Type: new Abstract: Non-prehensile planar manipulation, including pushing and press-and-slide, is critical for diverse robotic tasks, but notoriously challenging due to hybrid contact mechanics, under-actuation, and asymmetric friction limits that traditionally necessitate computationally expensive iterative control. In this paper, we propose a mode-aware framework for planar manipulation with one or two robotic arms based on contact topology selection and reduced-order kinematic modeling. Our core insight is that complex wrench-twist limit surface mechanics can be abstracted into a discrete library of physically intuitive models. We systematically map various single-arm and bimanual contact topologies to simple non-holonomic formulations, e.g. unicycle for simplified press-and-slide motion. By anchoring trajec...
4.Beyond Motion Imitation: Is Human Motion Data Alone Sufficient to Explain Gait Control and Biomechanics?
arXiv:2603.12408v1 Announce Type: new Abstract: With the growing interest in motion imitation learning (IL) for human biomechanics and wearable robotics, this study investigates how additional foot-ground interaction measures, used as reward terms, affect human gait kinematics and kinetics estimation within a reinforcement learning-based IL framework. Results indicate that accurate reproduction of forward kinematics alone does not ensure biomechanically plausible joint kinetics. Adding foot-ground contacts and contact forces to the IL reward terms enables the prediction of joint moments in forward walking simulation, which are significantly closer to those computed by inverse dynamics. This finding highlights a fundamental limitation of motion-only IL approaches, which may prioritize kinematics matching over physical consistency. Incorpor...
5.Predictive and adaptive maps for long-term visual navigation in changing environments
arXiv:2603.12460v1 Announce Type: new Abstract: In this paper, we compare different map management techniques for long-term visual navigation in changing environments. In this scenario, the navigation system needs to continuously update and refine its feature map in order to adapt to the environment appearance change. To achieve reliable long-term navigation, the map management techniques have to (i) select features useful for the current navigation task, (ii) remove features that are obsolete, (iii) and add new features from the current camera view to the map. We propose several map management strategies and evaluate their performance with regard to the robot localisation accuracy in long-term teach-and-repeat navigation. Our experiments, performed over three months, indicate that strategies which model cyclic changes of the environment ...
Financial AI
1.A Bipartite Graph Approach to U.S.-China Cross-Market Return Forecasting
This paper studies cross-market return predictability through a machine learning framework that preserves economic structure. Exploiting the non-overlapping trading hours of the U.S. and Chinese equity markets, we construct a directed bipartite graph that captures time-ordered predictive linkages between stocks across markets. Edges are selected via rolling-window hypothesis testing, and the resulting graph serves as a sparse, economically interpretable feature-selection layer for downstream machine learning models. We apply a range of regularized and ensemble methods to forecast open-to-close returns using lagged foreign-market information. Our results reveal a pronounced directional asymmetry: U.S. previous-close-to-close returns contain substantial predictive information for Chinese intraday returns, whereas the reverse effect is limit...
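The edge-selection layer described — keep a directed cross-market edge only when a rolling-window hypothesis test finds a significant lagged relationship — can be sketched on synthetic returns. The series, the 0.3 linkage coefficient, and the |t| > 2.58 cutoff are illustrative assumptions; the paper uses real U.S./China equity data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500

# Hypothetical return series standing in for real market data.
us_cc = rng.normal(0, 1, T)                  # U.S. previous close-to-close returns
cn_oc = 0.3 * us_cc + rng.normal(0, 1, T)    # Chinese open-to-close, trades after the U.S. close
cn_unrelated = rng.normal(0, 1, T)           # a Chinese stock with no linkage

def slope_tstat(x, y):
    """t-statistic of the univariate OLS slope of y on x."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    resid = y - y.mean() - b * (x - x.mean())
    se = np.sqrt(resid.var() / (len(x) * np.var(x)))
    return b / se

# Keep a directed edge x -> y only when the lagged relationship is significant
# (|t| > 2.58, roughly a 1% two-sided test), mimicking rolling-window selection.
for name, y in [("linked", cn_oc), ("unrelated", cn_unrelated)]:
    t = slope_tstat(us_cc, y)
    print(f"US -> CN ({name}): t = {t:.2f}, edge kept: {abs(t) > 2.58}")
```

The surviving edges then act as the sparse feature-selection layer feeding the downstream forecasting models.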
2.Hybrid Hidden Markov Model for Modeling Equity Excess Growth Rate Dynamics: A Discrete-State Approach with Jump-Diffusion
Generating synthetic financial time series that preserve statistical properties of real market data is essential for stress testing, risk model validation, and scenario design. Existing approaches, from parametric models to deep generative networks, struggle to simultaneously reproduce heavy-tailed distributions, negligible linear autocorrelation, and persistent volatility clustering. We propose a hybrid hidden Markov framework that discretizes continuous excess growth rates into Laplace quantile-defined market states and augments regime switching with a Poisson-driven jump-duration mechanism to enforce realistic tail-state dwell times. Parameters are estimated by direct transition counting, bypassing the Baum-Welch EM algorithm. Synthetic data quality is evaluated using Kolmogorov-Smirnov and Anderson-Darling pass rates for distributiona...
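The two mechanical steps the abstract names — discretize excess growth rates into quantile-defined states, then estimate the transition matrix by direct counting rather than Baum-Welch — can be sketched as follows. The Laplace-sampled returns, the quantile cut points, and the five-state layout are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical daily excess growth rates; the paper works from equity data.
r = rng.laplace(0, 0.01, 2000)

# Discretize into market states at quantile cut points (crash .. rally).
cuts = np.quantile(r, [0.1, 0.3, 0.7, 0.9])
states = np.digitize(r, cuts)                    # integer states 0..4

# Estimate the transition matrix by direct transition counting --
# no EM iterations, just tallying observed state-to-state moves.
K = 5
counts = np.zeros((K, K))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)
print("transition matrix rows sum to 1:", np.allclose(P.sum(axis=1), 1.0))
```

The paper additionally layers a Poisson-driven jump-duration mechanism on top of this chain to get realistic tail-state dwell times.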
3.Uncertainty-Aware Deep Hedging
Deep hedging trains neural networks to manage derivative risk under market frictions, but produces hedge ratios with no measure of model confidence -- a significant barrier to deployment. We introduce uncertainty quantification to the deep hedging framework by training a deep ensemble of five independent LSTM networks under Heston stochastic volatility with proportional transaction costs. The ensemble's disagreement at each time step provides a per-time-step confidence measure that is strongly predictive of hedging performance: the learned strategy outperforms the Black-Scholes delta on approximately 80% of paths when model agreement is high, but on fewer than 20% when disagreement is elevated. We propose a CVaR-optimised blending strategy that combines the ensemble's hedge with the classical Black-Scholes delta, weighted by the level of ...
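The confidence mechanism described — per-time-step ensemble disagreement gating a blend between the learned hedge and the Black-Scholes delta — can be sketched with synthetic hedge ratios. The numbers and the logistic weighting below are illustrative; the paper's blend is CVaR-optimised and its ratios come from LSTMs trained under Heston dynamics:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical hedge ratios from a five-member ensemble across 1000 paths.
ensemble = rng.normal(0.5, 0.05, size=(5, 1000))
ensemble[:, 500:] += rng.normal(0, 0.3, size=(5, 500))  # disagreement on half the paths

bs_delta = np.full(1000, 0.5)                    # classical Black-Scholes delta

mean_hedge = ensemble.mean(axis=0)
disagreement = ensemble.std(axis=0)              # per-path confidence signal

# Confidence-weighted blend: trust the learned hedge when models agree,
# fall back toward the BS delta when they disagree (logistic gate is an
# illustrative choice, not the paper's rule).
w = 1.0 / (1.0 + np.exp((disagreement - 0.1) / 0.02))
blended = w * mean_hedge + (1 - w) * bs_delta

print(f"mean weight on ensemble (agree / disagree): "
      f"{w[:500].mean():.2f} / {w[500:].mean():.2f}")
```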
4.Global universality via discrete-time signatures
We establish global universal approximation theorems on spaces of piecewise linear paths, stating that linear functionals of the corresponding signatures are dense with respect to $L^p$- and weighted norms, under an integrability condition on the underlying weight function. As an application, we show that piecewise linear interpolations of Brownian motion satisfy this integrability condition. Consequently, we obtain $L^p$-approximation results for path-dependent functionals, random ordinary differential equations, and stochastic differential equations driven by Brownian motion.
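For piecewise linear paths the signature terms in question are computable in closed form. The sketch below evaluates the truncated (level ≤ 2) signature from segment increments; linear functionals of these terms are the approximators the theorem concerns. The example path is illustrative:

```python
import numpy as np

def signature_level2(path):
    """Exact truncated (level <= 2) signature of a piecewise linear path,
    given as an array of vertices with shape (n_points, dim)."""
    inc = np.diff(path, axis=0)                  # segment increments
    s1 = inc.sum(axis=0)                         # level 1: total displacement
    # Level 2: iterated integrals; for linear segments this is the ordered
    # sum of outer products plus half the within-segment outer products.
    s2 = np.zeros((path.shape[1], path.shape[1]))
    running = np.zeros(path.shape[1])
    for d in inc:
        s2 += np.outer(running, d) + 0.5 * np.outer(d, d)
        running += d
    return s1, s2

# A two-dimensional path: one step right, then one step up.
path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
s1, s2 = signature_level2(path)
print("level 1:", s1)
print("antisymmetric part (2x Levy area):", s2[0, 1] - s2[1, 0])
```

A quick consistency check on any path is the shuffle identity at level 2: the symmetric part of `s2` equals half the outer product of `s1` with itself.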
5.Generative Adversarial Regression (GAR): Learning Conditional Risk Scenarios
We propose Generative Adversarial Regression (GAR), a framework for learning conditional risk scenarios through generators aligned with downstream risk objectives. GAR builds on a regression characterization of conditional risk for elicitable functionals, including quantiles, expectiles, and jointly elicitable pairs. We extend this principle from point prediction to generative modeling by training generators whose policy-induced risk matches that of real data under the same context. To ensure robustness across all policies, GAR adopts a minimax formulation in which an adversarial policy identifies worst-case discrepancies in risk evaluation while the generator adapts to eliminate them. This structure preserves alignment with the risk functional across a broad class of policies rather than a fixed, pre-specified set. We illustrate GAR thro...
GSMA Newsroom
1.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
2.From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
3.Pioneering Affordable Access in Africa: GSMA and Handset Affordability Coalition Members Identify Six African Countries to Pilot Affordable $40 Smartphones
Summary available at source link.
4.GSMA Calls for Regulatory Readiness for Direct-to-User LEO Satellite Services
Summary available at source link.
5.MWC26 Barcelona opens with call to complete 5G, rise to AI challenges, and strengthen digital safety
Summary available at source link.
Generative AI (arXiv)
1.Neuron-Aware Data Selection In Instruction Tuning For Large Language Models
Instruction Tuning (IT) has been proven to be an effective approach to unlock the powerful capabilities of large language models (LLMs). Recent studies indicate that excessive IT data can degrade LLM performance, while carefully selecting a small subset of high-quality IT data can significantly enhance their capabilities. Therefore, identifying the most effective subset of the IT dataset for developing either specific or general abilities in LLMs has become a critical challenge. To address this, we propose a novel and efficient framework called NAIT. NAIT evaluates the impact of IT data on LLM performance by analyzing the similarity of neuron activation patterns between the IT dataset and the target domain capability. Specifically, NAIT captures neuron activation patterns from in-domain datasets of target domain capabilit...
2.From Experiments to Expertise: Scientific Knowledge Consolidation for AI-Driven Computational Research
While large language models (LLMs) have transformed AI agents into proficient executors of computational materials science, performing a hundred simulations does not make a researcher. What distinguishes research from routine execution is the progressive accumulation of knowledge -- learning which approaches fail, recognizing patterns across systems, and applying understanding to new problems. However, the prevailing paradigm in AI-driven computational science treats each execution in isolation, largely discarding hard-won insights between runs. Here we present QMatSuite, an open-source platform closing this gap. Agents record findings with full provenance, retrieve knowledge before new calculations, and in dedicated reflection sessions correct erroneous findings and synthesize observations into cross-compound patterns. In benchmarks on a...
3.Semantic Invariance in Agentic AI
Large Language Models (LLMs) increasingly serve as autonomous reasoning agents in decision support, scientific problem-solving, and multi-agent coordination systems. However, deploying LLM agents in consequential applications requires assurance that their reasoning remains stable under semantically equivalent input variations, a property we term semantic invariance. Standard benchmark evaluations, which assess accuracy on fixed, canonical problem formulations, fail to capture this critical reliability dimension. To address this shortcoming, in this paper we present a metamorphic testing framework for systematically assessing the robustness of LLM reasoning agents, applying eight semantic-preserving transformations (identity, paraphrase, fact reordering, expansion, contraction, academic context, business context, and contrastive formulation...
4.ESG-Bench: Benchmarking Long-Context ESG Reports for Hallucination Mitigation
As corporate responsibility increasingly incorporates environmental, social, and governance (ESG) criteria, ESG reporting is becoming a legal requirement in many regions and a key channel for documenting sustainability practices and assessing firms' long-term and ethical performance. However, the length and complexity of ESG disclosures make them difficult to interpret and their analysis hard to automate reliably. To support scalable and trustworthy analysis, this paper introduces ESG-Bench, a benchmark dataset for ESG report understanding and hallucination mitigation in large language models (LLMs). ESG-Bench contains human-annotated question-answer (QA) pairs grounded in real-world ESG report contexts, with fine-grained labels indicating whether model outputs are factually supported or hallucinated. Framing ESG report analysis as a QA task with v...
5.Reasoning over Video: Evaluating How MLLMs Extract, Integrate, and Reconstruct Spatiotemporal Evidence
The growing interest in embodied agents increases the demand for spatiotemporal video understanding, yet existing benchmarks largely emphasize extractive reasoning, where answers can be explicitly presented within spatiotemporal events. It remains unclear whether multimodal large language models can instead perform abstractive spatiotemporal reasoning, which requires integrating observations over time, combining dispersed cues, and inferring implicit spatial and contextual structure. To address this gap, we formalize abstractive spatiotemporal reasoning from videos by introducing a structured evaluation taxonomy that systematically targets its core dimensions and construct a controllable, scenario-driven synthetic egocentric video dataset tailored to evaluate abstractive spatiotemporal reasoning capabilities, spanning object-, room-, and ...
Hugging Face Daily Papers
1.Generation of maximal snake polyominoes using a deep neural network
Maximal snake polyominoes are difficult to study numerically in large rectangles, as computing them requires the complete enumeration of all snakes for a specific grid size, which corresponds to a brute force algorithm. This technique is thus challenging to use in larger rectangles, which hinders the study of maximal snakes. Furthermore, most enumerable snakes lie in small rectangles, making it difficult to study large-scale patterns. In this paper, we investigate the contribution of a deep neural network to the generation of maximal snake polyominoes from data-driven training, where the maximality and adjacency constraints are not encoded explicitly, but learned. To this end, we experiment with a denoising diffusion model, which we call Structured Pixel Space Diffusion (SPS Diffusion). We find that SPS Diffusion generalizes from sma...
2.Optimal Experimental Design for Reliable Learning of History-Dependent Constitutive Laws
History-dependent constitutive models serve as macroscopic closures for the aggregated effects of micromechanics. Their parameters are typically learned from experimental data. With a limited experimental budget, eliciting the full range of responses needed to characterize the constitutive relation can be difficult. As a result, the data can be well explained by a range of parameter choices, leading to parameter estimates that are uncertain or unreliable. To address this issue, we propose a Bayesian optimal experimental design framework to quantify, interpret, and maximize the utility of experimental designs for reliable learning of history-dependent constitutive models. In this framework, the design utility is defined as the expected reduction in parametric uncertainty or the expected information gain. This enables in silico design optim...
3.Attend Before Attention: Efficient and Scalable Video Understanding via Autoregressive Gazing
Multi-modal large language models (MLLMs) have advanced general-purpose video understanding but struggle with long, high-resolution videos -- they process every pixel equally in their vision transformers (ViTs) or LLMs despite significant spatiotemporal redundancy. We introduce AutoGaze, a lightweight module that removes redundant patches before they are processed by a ViT or an MLLM. Trained with next-token prediction and reinforcement learning, AutoGaze autoregressively selects a minimal set of multi-scale patches that can reconstruct the video within a user-specified error threshold, eliminating redundancy while preserving information. Empirically, AutoGaze reduces visual tokens by 4x-100x and accelerates ViTs and MLLMs by up to 19x, enabling MLLMs to scale to 1K-frame 4K-resolution videos and achieving superior results on video benchmarks (e.g....
4.A Quantitative Characterization of Forgetting in Post-Training
Continual post-training of generative models is widely used, yet a principled understanding of when and why forgetting occurs remains limited. We develop theoretical results under a two-mode mixture abstraction (representing old and new tasks), proposed by Chen et al. (2025) (arXiv:2510.18874), and formalize forgetting in two forms: (i) mass forgetting, where the old mixture weight collapses to zero, and (ii) old-component drift, where an already-correct old component shifts during training. For equal-covariance Gaussian modes, we prove that forward-KL objectives trained on data from the new distribution drive the old weight to zero, while reverse-KL objectives converge to the true target (thereby avoiding mass forgetting) and perturb the old mean only through overlap-gated misassignment probabilities controlled by the Bhattacharyya coeff...
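The Bhattacharyya coefficient that gates the misassignment probabilities has a closed form for equal-variance Gaussians, which a direct numerical integration confirms. A minimal sketch (my own, assuming 1-D modes):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, mu2, sigma):
    """Closed form for two equal-variance 1-D Gaussians: exp(-(dmu)^2 / (8 sigma^2))."""
    return np.exp(-((mu1 - mu2) ** 2) / (8 * sigma ** 2))

def bhattacharyya_numeric(mu1, mu2, sigma, lo=-20.0, hi=20.0, n=200001):
    # Direct check: BC = integral of sqrt(p(x) * q(x)) dx (trapezoid rule).
    x = np.linspace(lo, hi, n)
    p = np.exp(-((x - mu1) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    q = np.exp(-((x - mu2) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    f = np.sqrt(p * q)
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(x)))

# The coefficient shrinks as the modes separate, so the overlap-gated drift
# of an already-correct old component vanishes for well-separated tasks.
print(bhattacharyya_gaussian(0.0, 1.0, 1.0))   # ≈ 0.8825
print(bhattacharyya_gaussian(0.0, 4.0, 1.0))   # ≈ 0.1353
```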
5.TopoBench: Benchmarking LLMs on Hard Topological Reasoning
Solving topological grid puzzles requires reasoning over global spatial invariants such as connectivity, loop closure, and region symmetry, and it remains challenging for even the most powerful large language models (LLMs). To study these abilities under controlled settings, we introduce TopoBench, a benchmark of six puzzle families across three difficulty levels. We evaluate strong reasoning LLMs on TopoBench and find that even frontier models solve fewer than one quarter of hard instances, with two families nearly unsolved. To investigate whether these failures stem from reasoning limitations or from difficulty extracting and maintaining spatial constraints, we annotate 750 chain-of-thought traces with an error taxonomy that surfaces four candidate causal failure modes, then test them with targeted interventions simulating each error type. ...
IEEE Xplore AI
1.Exploring Light and Life: Nanophotonics and AI for Molecular Sequencing and Single-Cell Phenotyping
The biosphere transmits data 9 orders of magnitude faster than the technosphere. A new class of nanophotonic tools is beginning to close that gap. In this webinar, Prof. Dionne will present VINPix: Si-photonic resonators with high-Q factors (thousands to millions), subwavelength mode volumes, and densities exceeding 10M/cm². Combined with acoustic bioprinting and AI, they may enable detection of multiomic signatures — genes, proteins, and metabolites on a single chip — at previously unattainable rates, opening new possibilities for molecular communication systems and biochemical sensing for health and sustainability. Key Takeaway: Single-chip multiomics — VINPix arrays plus AI for simultaneous gene, protein, and metabolite detection Field-deployed biosensing — integrated with Monterey Bay Aquarium Research Institute (MBARI) autonomous und...
2.Why AI Chatbots Agree With You Even When You’re Wrong
In April of 2025, OpenAI released a new version of GPT-4o, one of the AI algorithms users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agreeable—often described as sycophantic,” the company announced. Some people found the sycophancy hilarious. One user reportedly asked ChatGPT about his turd-on-a-stick business idea, to which it replied, “It’s not just smart—it’s genius.” Some found the behavior uncomfortable. For others, it was actually dangerous. Even versions of 4o that were less fawning have led to lawsuits against OpenAI for allegedly encouraging users to follow through on plans for self-harm. Unremitting adulation has even triggered AI-induced psychosis. Last October, a user named Anthony Tan blogged, “I started talkin...
3.An AI Agent Blackmailed a Developer. Now What?
On 12 February, a GitHub contributor going by MJ Rathbun posted a personal attack against Scott Shambaugh, a volunteer maintainer for an open-source project. Shambaugh had rejected Rathbun’s code earlier in the day. Rathbun meticulously researched Shambaugh’s activity on GitHub in order to write a lengthy takedown post that criticized the maintainer’s code as inferior to Rathbun’s, and ominously warned that “gatekeeping doesn’t make you important. It just makes you an obstacle.” Personal disputes over code submitted on GitHub are a tale as old as GitHub itself. But this time, something was different: MJ Rathbun wasn’t a person. It was an AI agent built with OpenClaw, popular open-source agentic AI software. “I was floored, because I had already identified i...
4.Military AI Policy Needs Democratic Oversight
A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies, or Congress and the broader democratic process? The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff. Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully...
5.Entomologists Use a Particle Accelerator to Image Ants at Scale
Move over, Pixar. The ants that animators once morphed into googly-eyed caricatures in films such as A Bug’s Life and Antz just received a meticulously precise anatomical reboot. Writing today in Nature Methods, an international team of entomologists, accelerator physicists, computer scientists, and biological-imaging specialists describes a new 3D atlas of ant morphology. Dubbed Antscan, the platform features micrometer-resolution reconstructions that lay bare not only the insects’ armored exoskeletons but also their muscles, nerves, digestive tracts, and needlelike stingers poised at the ready. Those high-resolution images—spanning 792 species across 212 genera and covering the bulk of described ant diversity—are now available free of charge through an interactive online portal, where anyone can rotate, zoom, and virtually “dissect” th...
MIT Sloan Management
1.How Schneider Electric Scales AI in Both Products and Processes
At the World Economic Forum Annual Meeting in Davos, Switzerland, in January 2026, Schneider Electric CEO Olivier Blum accepted awards recognizing the company’s AI solutions as part of the WEF’s MINDS (Meaningful, Intelligent, Novel, Deployable Solutions) program — for the second time. The distinction highlighted two of the company’s AI-enabled applications: […]
2.Leaders at All Levels: Kraft Heinz’s 5X Speed Secret
Is 36 months too long for a new-product cycle? It was for Kraft Heinz. So, starting with a pilot project, it was able to cut time to market to just six months by redesigning how people worked. Today, units throughout the company are applying that model’s step-by-step approach to change and are seeing measurable improvements […]
3.Why Businesses Should Value Caregivers Now
In early 2025, more than 212,000 women left the U.S. workforce following a rise in return-to-office mandates, according to the U.S. Bureau of Labor Statistics (BLS). Among mothers with young children, workforce participation dropped nearly three percentage points in just six months, according to the BLS. Behind those numbers is a larger […]
4.An Industry Benchmark for Data Fairness: Sony’s Alice Xiang
On today’s episode of Me, Myself, and AI, host Sam Ransbotham talks with Alice Xiang, global head of AI governance at Sony and lead research scientist for AI ethics at Sony AI, about what it actually takes to put responsible artificial intelligence into practice at scale. Alice shares how Sony moved early on AI ethics […]
5.Why Visibility Has Become the New Test of Leadership
In professional service firms, quiet excellence once defined leadership. A partner earned influence through expertise, loyalty, and discretion. But in an era of high transparency, where every meeting can be replayed, every comment rated, and every decision scrutinized online, competence alone no longer sustains trust. Visibility has become the new test of […]
NY Fed - Liberty Street
1.Firms’ Inflation Expectations Return to 2024 Levels
Businesses experienced substantial cost pressures in 2025 as the cost of insurance and utilities rose sharply, while an increase in tariffs contributed to rising goods and materials costs. This post examines how firms in the New York-Northern New Jersey region adjusted their prices in response to these cost pressures and describes their expectations for future price increases and inflation. Survey results show an acceleration in firms’ price increases in 2025, with an especially sharp increase in the manufacturing sector. While both cost and price increases intensified last year, our surveys re...
2.Are Rising Employee Health Insurance Costs Dampening Wage Growth?
Employer-sponsored health insurance represents a substantial component of total compensation paid by firms to many workers in the United States. Such costs have climbed by close to 20 percent over the past five years. Indeed, the average annual premium for employer-sponsored family health insurance coverage was about $27,000 in 2025—roughly equivalent to the wage of a full-time worker paid $15 per hour. Our February regional business surveys asked firms whether their wage setting decisions were influenced by the rising cost of employee health insurance. As we showed in our
3.What’s Driving Rising Business Costs?
After a period of moderating cost increases, businesses faced mounting cost pressures in 2025. While tariffs played a role in driving up the costs of many inputs—especially among manufacturers—they represent only part of the story. Indeed, firms grappled with substantial cost increases across many categories in the past year. This post is the first in a three-part series analyzing cost and price dynamics among businesses in the New York-Northern New Jersey region based on data collected through our regional business surveys. Firms reported that the sharpest cost increases over the...
4.The Post‑Pandemic Global R*
In this post we provide a measure of “global” r* using data on short- and long-term yields and inflation for several countries with the approach developed in “Global Trends in Interest Rates” (Del Negro, Giannone, Giannoni, and Tambalotti). After declining significantly from the 1990s to before the COVID-19 pandemic, global r* has risen but remains well below its pre-1990s level. These conclusions are based on an econometric model called “trendy VAR” that extracts common trends across a multitude of variables. Specifically, the common trend in real rates across all the countries in the sample is what we call global r*. The post is based on the
5.Estimating the Term Structure of Corporate Bond Risk Premia
Understanding how short- and long-term assets are priced is one of the fundamental questions in finance. The term structure of risk premia allows us to perform net present value calculations, test asset pricing models, and potentially explain the sources of many cross-sectional asset pricing anomalies. In this post, I construct a forward-looking estimate of the term structure of risk premia in the corporate bond market following Jankauskas (2024). The U.S. corporate bond market is an ideal laboratory for studying the relationship between risk premia and maturity because of its large size (standing at roughly $16 trillion as of the end of 2024) and because the maturities are well defined (in contrast to equities).
Project Syndicate
1.The Hidden Economic Costs of Menopause
For too long, menopause was rarely discussed and understudied, leaving us with little knowledge of the economic and social costs associated with it. Now that the effects on productivity and public budgets are finally being measured, it is already clear that the long-term toll for society could be very large indeed.
2.Winners and Losers in the AI Workplace
With AI labs continuing to roll out new models and tools geared toward unlocking productivity gains in the workplace, the future of work has become a major economic, social, and political issue. The question, then, is which careers, occupations, and industries stand to gain and lose the most in the near term.
3.The Problem With Billionaires
There are compelling reasons why free, democratic societies should limit the amount of personal wealth any individual can have. Beyond the strong philosophical objections to extreme inequality, empirical research increasingly shows that economies, societies, and the planet would be better off without the ultra-rich.
4.How Inequality Caused America's Affordability Crisis
Recent election cycles have shown that “affordability” is among American voters’ greatest and most persistent concerns. But partly because the basic problem has been misdiagnosed, the widely appealing, commonsense solution to it has failed to gain political traction.
5.Lonely Empire
The Trump administration’s overt imperialism has unraveled the global order of shared norms and institutions faster than anyone expected. We now find ourselves at a juncture where everyone must either follow the US back into the jungle or refuse, leaving America to its own devices.
RCR Wireless
1.Why eSIM makes entitlement servers a new growth engine for telcos (Reader Forum)
The eSIM is rapidly becoming the default across flagship smartphones, smart glasses, smart watches, and other companion devices. This is increasingly raising an important question, says telecom software provider Motive: are operators truly ready to offer and activate the next generation of digital services at scale? With eSIM becoming standard across major ecosystems, including universal […]
2.3 ways operators are putting AI to work in network service assurance
Service assurance is officially graduating from an era of dashboards, tickets, and engineers scrambling to find what’s gone wrong to one of swift root cause analysis and proactive fixes. As AI moves deeper into the network stack, a burst of experimentation has followed to figure out how best to tune the network with AI. “The networks today […]
3.100 billion agents – new networks (and new KPIs) for AI, says Huawei
Huawei is proposing a new method to evaluate service quality for AI applications called AI MOS, modeled after the Mean Opinion Score used to measure voice service quality In sum – what to know: Traffic shift ahead – Huawei expects AI agent applications to generate far more uplink traffic than traditional mobile services, forcing networks […]
4.‘Four big moves’ – Thai carrier True outlines AI-geared telco shift
One of the strategies announced by True centers on embedding AI across network operations, customer service, and internal systems In sum – what to know: Three-year plan – True Corporation has introduced “Four Big Moves” to shift from a traditional telecom operator toward an AI-first telco-tech model. Beyond connectivity – Its roadmap includes new consumer […]
5.Urgent rethink of telco-cloud-AI ecosystem required, says TIM
The CEO of TIM outlined how the telecom sector’s priorities are shifting as new digital applications place different demands on networks In sum – what to know: AI is interconnected – TIM CEO Pietro Labriola told MWC that telecoms infrastructure, cloud platforms, and AI technologies operate within the same digital ecosystem. Network priorities – Future […]
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.Goal-Oriented Learning at the Edge: Graph Neural Networks Over-the-Air for Blockage Prediction
Sixth-generation (6G) wireless networks evolve from connecting devices to connecting intelligence. The focus turns to Goal-Oriented Communications, where the effectiveness of communication is assessed through task-level objectives rather than traditional throughput-centric metrics. As communication intertwines with learning at the edge, distributed inference over wireless networks faces a critical trade-off between task accuracy and efficient radio resource use. Traditional communication schemes (e.g., OFDMA) are not designed for this trade-off, often facing challenges related to scalability and latency. Therefore, we propose a novel goal-oriented framework that integrates over-the-air computation with spatio-temporal graph learning. Leveraging the wireless channel as an analog aggregation layer, the proposed framework enables low-latency messag...
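The idea of using the channel as an analog aggregation layer can be illustrated in a few lines: if each device pre-inverts its known channel gain, the superposition the receiver observes is, up to noise, the sum of the transmitted values. This real-valued sketch is my own simplification; practical over-the-air computation must handle complex fading, power limits, and synchronization.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 8
values = rng.uniform(0.0, 1.0, K)   # per-device scalars to aggregate (e.g. features)
h = rng.uniform(0.5, 2.0, K)        # known real channel gains per device
tx = values / h                     # channel-inversion precoding at each device
noise = 0.01 * rng.standard_normal()
received = float(np.sum(h * tx)) + noise   # the channel superimposes the signals

target = float(values.sum())
# received ≈ target: the sum is computed "in the air" in one channel use,
# instead of K separate uplink transmissions followed by digital aggregation.
```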
2.Dual-Chirp AFDM for Joint Delay-Doppler Estimation with Rydberg Atomic Quantum Receivers
In this paper, we propose a joint delay-Doppler estimation framework for Rydberg atomic quantum receivers (RAQRs) leveraging affine frequency division multiplexing (AFDM), as a future enabler of hyper integrated sensing and communication (ISAC) in 6G and beyond. The proposed approach preserves the extreme sensitivity of RAQRs while offering a pioneering solution to the joint estimation of delay-Doppler parameters of mobile targets, which, to the best of our knowledge, has yet to be addressed in the literature due to the inherent coupling of time-frequency parameters in the optical readout of RAQRs. To overcome this unavoidable ambiguity, we propose a dual-chirp AFDM framework where the utilization of distinct chirp parameters effectively converts the otherwise ambiguous estimation problem into a full-rank system, enabling unique delay-Dopp...
3.Semantic-Aware 6G Network Management through Knowledge-Defined Networking
Semantic communication is emerging as a key paradigm for 6G networks, where the goal is not to perfectly reconstruct bits but to preserve the meaning that matters for a given task. This shift can improve bandwidth efficiency, robustness, and application-level performance. However, most existing studies focus solely on encoder-decoder design and ignore network-wide decision-making. As data traverses multiple hops, semantic relevance may decrease, routing may overlook meaningful information, and semantic distortion can increase under dynamic network conditions. To address these challenges, this paper proposes a management-oriented semantic communication framework built upon Knowledge-Defined Networking (KDN). The framework comprises three core modules: a semantic-reasoning module that computes relevance scores by mapping semantic embeddings...
4.A Standards-Aligned Coordination Framework for Edge-Enhanced Collaborative Healthcare in 6G Networks
Mission-critical healthcare applications including real-time intensive care monitoring, ambulance-to-hospital orchestration, and distributed medical imaging inference require workflow-level, time-bounded coordination across heterogeneous devices, edge servers, and network control entities. While current 3GPP and O-RAN standards excel at per-device control and quality-of-service enforcement, they do not natively expose abstractions for workflow-level coordination under strict clinical timing constraints, leaving this capability to fragile, application-specific overlays. This article outlines the Collective Adaptive Intelligence Plane (CAIP) as a standards-aligned coordination framework that addresses this abstraction gap without introducing new protocol layers. CAIP is realized through minimal, backward-compatible coordination profiles anc...
5.OFDM Waveform for Monostatic ISAC in 6G: Vision, Approach, and Research Directions
Integrated sensing and communication (ISAC) is widely regarded as a key enabling technology for 6G wireless networks. While extensive research has explored the coexistence of sensing and communication functionalities, the use of orthogonal frequency-division multiplexing (OFDM) waveforms for monostatic ISAC remains underexplored. In this article, we present practical approaches for enabling monostatic sensing on wireless communication devices and illustrate how OFDM signals can provide radar-like sensing capabilities such as ranging, Doppler estimation, and environmental perception. We hope this article will stimulate further research on OFDM-based monostatic ISAC and accelerate its adoption in 6G networks.
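A standard way OFDM symbols yield ranging, in the spirit of the last abstract, is to divide the received subcarrier symbols by the known transmitted ones and take an IFFT of the resulting channel estimate, giving a delay (range) profile. The numerology and single-target setup below are illustrative assumptions on my part, not taken from the article.

```python
import numpy as np

c = 3e8                    # speed of light, m/s
N = 1024                   # subcarriers
df = 120e3                 # subcarrier spacing, Hz (illustrative 5G-like numerology)
true_range = 150.0         # one-way target distance, m
tau = 2 * true_range / c   # round-trip delay, s

rng = np.random.default_rng(0)
tx = np.exp(1j * 2 * np.pi * rng.random(N))       # known unit-modulus symbols
k = np.arange(N)
rx = tx * np.exp(-1j * 2 * np.pi * k * df * tau)  # echo: per-subcarrier phase ramp
rx += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

H = rx / tx                                       # channel frequency response
profile = np.abs(np.fft.ifft(H))                  # delay profile
tau_hat = np.argmax(profile) / (N * df)           # peak bin -> delay estimate
range_hat = c * tau_hat / 2                       # back to one-way range
# Range resolution is c / (2 * N * df), about 1.2 m here, so range_hat ≈ 150 m.
```

The same division-then-transform structure extends to Doppler estimation across consecutive OFDM symbols, which is why the communication waveform can double as a radar waveform.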
arXiv Quantitative Finance
1.Entropic signatures of market response under concentrated policy communication
The first 100 days of Donald Trump’s second presidential term (January 20th - April 30th, 2025) featured policy actions with potential market repercussions, constituting a well-suited case study of a concentrated policy scenario. Here, we provide a first look at this period, rooted in information theory, by analyzing major stock indices across the Americas, Europe, Asia, and Oceania. Our approach jointly examines dispersion (standard deviation) and information complexity (entropy), and also employs a sliding-window cumulative entropy to localize extreme events. We find a notable decoupling between the first two measures, indicating that entropy is not merely a proxy for amplitude but reflects the diversity of populated outcomes. As such, they allow us to capture both market volatility and narrative constraints, signaling large ...
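The sliding-window pairing of dispersion and entropy described above can be sketched as follows; the window length, bin count, and synthetic return series are my own illustrative choices, not the paper's data.

```python
import numpy as np

def window_entropy(returns, bins=10):
    """Shannon entropy (nats) of a histogram of the returns in one window."""
    counts, _ = np.histogram(returns, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def rolling_measures(returns, window=50, bins=10):
    # Dispersion (std) and complexity (entropy) over a sliding window.
    stds, ents = [], []
    for i in range(len(returns) - window + 1):
        w = returns[i:i + window]
        stds.append(np.std(w))
        ents.append(window_entropy(w, bins))
    return np.array(stds), np.array(ents)

rng = np.random.default_rng(0)
calm = 0.01 * rng.standard_normal(300)            # low-amplitude regime
shock = 0.05 * rng.standard_normal(100)           # high-amplitude regime
stds, ents = rolling_measures(np.concatenate([calm, shock]))
# std jumps across the regime change, while histogram entropy (computed on
# each window's own range) can stay comparatively flat -- amplitude and
# outcome diversity are distinct signals, as the decoupling result suggests.
```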
2.DatedGPT: Preventing Lookahead Bias in Large Language Models with Time-Aware Pretraining
In financial backtesting, large language models pretrained on internet-scale data risk introducing lookahead bias that undermines their forecasting validity, as they may have already seen the true outcome during training. To address this, we present DatedGPT, a family of twelve 1.3B-parameter language models, each trained from scratch on approximately 100 billion tokens of temporally partitioned data with strict annual cutoffs spanning 2013 to 2024. We further enhance each model with instruction fine-tuning on both general-domain and finance-specific datasets curated to respect the same temporal boundaries. Perplexity-based probing confirms that each model's knowledge is effectively bounded by its data cutoff year, while evaluation on standard benchmarks shows competitive performance with existing models of similar scale. We provide an in...
3.Beyond Polarity: Multi-Dimensional LLM Sentiment Signals for WTI Crude Oil Futures Return Prediction
Forecasting crude oil prices remains challenging because market-relevant information is embedded in large volumes of unstructured news and is not fully captured by traditional polarity-based sentiment measures. This paper examines whether multi-dimensional sentiment signals extracted by large language models improve the prediction of weekly WTI crude oil futures returns. Using energy-sector news articles from 2020 to 2025, we construct five sentiment dimensions covering relevance, polarity, intensity, uncertainty, and forwardness based on GPT-4o, Llama 3.2-3b, and two benchmark models, FinBERT and AlphaVantage. We aggregate article-level signals to the weekly level and evaluate their predictive performance in a classification framework. The best results are achieved by combining GPT-4o and FinBERT, suggesting that LLM-based and convention...
4.When David becomes Goliath: Repo dealer-driven bond mispricing
This paper studies the impact of funding market frictions on bond prices and market-wide liquidity. Using proprietary transaction-level data on all gilt-backed repo and reverse-repo trades, we demonstrate how the market power of individual dealers and their linkages generate frictions. Specifically, we show that frictions related to market power account for between 0.5 and 1.3 percentage points of bond yield deviation, while the transmission of heterogeneously persistent shocks between dealers accounts for between 2 and 4 percentage points of yield deviation.
5.An operator-level ARCH Model
AutoRegressive Conditional Heteroscedasticity (ARCH) models are standard for modeling time series exhibiting volatility, with a rich literature in univariate and multivariate settings. In recent years, these models have been extended to function spaces. However, functional ARCH and generalized ARCH (GARCH) processes established in the literature have thus far been restricted to modeling “pointwise” variances. In this paper, we propose a new ARCH framework for data residing in general separable Hilbert spaces that accounts for the full evolution of the conditional covariance operator. We define a general operator-level ARCH model. For a simplified Constant Conditional Correlation version of the model, we establish conditions under which such models admit strictly and weakly stationary solutions, finite moments, and weak serial dependence. A...
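For orientation, the scalar ARCH(1) recursion (the "pointwise" case that the paper generalizes to covariance operators) is easy to simulate; the parameters below are illustrative.

```python
import numpy as np

def simulate_arch1(omega, alpha, T, seed=0):
    """Simulate a scalar ARCH(1): sigma2_t = omega + alpha * x_{t-1}^2."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    sigma2 = np.zeros(T)
    sigma2[0] = omega / (1 - alpha)               # unconditional variance
    x[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, T):
        sigma2[t] = omega + alpha * x[t - 1] ** 2
        x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return x, sigma2

x, sigma2 = simulate_arch1(omega=0.1, alpha=0.5, T=20000)
# With alpha < 1 the process is weakly stationary; the sample variance
# should hover near omega / (1 - alpha) = 0.2 for these parameters.
```

In the operator-level setting, the scalars `omega` and `alpha` become operators acting on the conditional covariance, and stationarity conditions constrain their norms rather than a single coefficient.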
arXiv – 6G & Networking
1.Goal-Oriented Learning at the Edge: Graph Neural Networks Over-the-Air for Blockage Prediction
Sixth-generation (6G) wireless networks evolve from connecting devices to connecting intelligence. The focus turns to Goal-Oriented Communications, where the effectiveness of communication is assessed through task-level objectives rather than traditional throughput-centric metrics. As communication intertwines with learning at the edge, distributed inference over wireless networks faces a critical trade-off between task accuracy and efficient radio resource use. Traditional communication schemes (e.g., OFDMA) are not designed for this trade-off, often facing challenges related to scalability and latency. Therefore, we propose a novel goal-oriented framework that integrates over-the-air computation with spatio-temporal graph learning. Leveraging the wireless channel as an analog aggregation layer, the proposed framework enables low-latency messag...
2.Dual-Chirp AFDM for Joint Delay-Doppler Estimation with Rydberg Atomic Quantum Receivers
In this paper, we propose a joint delay-Doppler estimation framework for Rydberg atomic quantum receivers (RAQRs) leveraging affine frequency division multiplexing (AFDM), as a future enabler of hyper integrated sensing and communication (ISAC) in 6G and beyond. The proposed approach preserves the extreme sensitivity of RAQRs while offering a pioneering solution to the joint estimation of delay-Doppler parameters of mobile targets, which, to the best of our knowledge, has yet to be addressed in the literature due to the inherent coupling of time-frequency parameters in the optical readout of RAQRs. To overcome this unavoidable ambiguity, we propose a dual-chirp AFDM framework where the utilization of distinct chirp parameters effectively converts the otherwise ambiguous estimation problem into a full-rank system, enabling unique delay-Dopp...
3.Semantic-Aware 6G Network Management through Knowledge-Defined Networking
Semantic communication is emerging as a key paradigm for 6G networks, where the goal is not to perfectly reconstruct bits but to preserve the meaning that matters for a given task. This shift can improve bandwidth efficiency, robustness, and application-level performance. However, most existing studies focus solely on encoder-decoder design and ignore network-wide decision-making. As data traverses multiple hops, semantic relevance may decrease, routing may overlook meaningful information, and semantic distortion can increase under dynamic network conditions. To address these challenges, this paper proposes a management-oriented semantic communication framework built upon Knowledge-Defined Networking (KDN). The framework comprises three core modules: a semantic-reasoning module that computes relevance scores by mapping semantic embeddings...
4.A Standards-Aligned Coordination Framework for Edge-Enhanced Collaborative Healthcare in 6G Networks
Mission-critical healthcare applications including real-time intensive care monitoring, ambulance-to-hospital orchestration, and distributed medical imaging inference require workflow-level, time-bounded coordination across heterogeneous devices, edge servers, and network control entities. While current 3GPP and O-RAN standards excel at per-device control and quality-of-service enforcement, they do not natively expose abstractions for workflow-level coordination under strict clinical timing constraints, leaving this capability to fragile, application-specific overlays. This article outlines the Collective Adaptive Intelligence Plane (CAIP) as a standards-aligned coordination framework that addresses this abstraction gap without introducing new protocol layers. CAIP is realized through minimal, backward-compatible coordination profiles anc...
5.OFDM Waveform for Monostatic ISAC in 6G: Vision, Approach, and Research Directions
Integrated sensing and communication (ISAC) is widely regarded as a key enabling technology for 6G wireless networks. While extensive research has explored the coexistence of sensing and communication functionalities, the use of orthogonal frequency-division multiplexing (OFDM) waveforms for monostatic ISAC remains underexplored. In this article, we present practical approaches for enabling monostatic sensing on wireless communication devices and illustrate how OFDM signals can provide radar-like sensing capabilities such as ranging, Doppler estimation, and environmental perception. We hope this article will stimulate further research on OFDM-based monostatic ISAC and accelerate its adoption in 6G networks.
arXiv – Network Architecture (6G/Slicing)
1.Goal-Oriented Learning at the Edge: Graph Neural Networks Over-the-Air for Blockage Prediction
Sixth-generation (6G) wireless networks evolve from connecting devices to connecting intelligence. The focus turns to Goal-Oriented Communications, where the effectiveness of communication is assessed through task-level objectives rather than traditional throughput-centric metrics. As communication intertwines with learning at the edge, distributed inference over wireless networks faces a critical trade-off between task accuracy and efficient radio resource use. Traditional communication schemes (e.g., OFDMA) are not designed for this trade-off, often facing challenges related to scalability and latency. Therefore, we propose a novel goal-oriented framework that integrates over-the-air computation with spatio-temporal graph learning. Leveraging the wireless channel as an analog aggregation layer, the proposed framework enables low-latency messag...
2.Semantic-Aware 6G Network Management through Knowledge-Defined Networking
Semantic communication is emerging as a key paradigm for 6G networks, where the goal is not to perfectly reconstruct bits but to preserve the meaning that matters for a given task. This shift can improve bandwidth efficiency, robustness, and application-level performance. However, most existing studies focus solely on encoder-decoder design and ignore network-wide decision-making. As data traverses multiple hops, semantic relevance may decrease, routing may overlook meaningful information, and semantic distortion can increase under dynamic network conditions. To address these challenges, this paper proposes a management-oriented semantic communication framework built upon Knowledge-Defined Networking (KDN). The framework comprises three core modules: a semantic-reasoning module that computes relevance scores by mapping semantic embeddings...
3.A Standards-Aligned Coordination Framework for Edge-Enhanced Collaborative Healthcare in 6G Networks
Mission-critical healthcare applications including real-time intensive care monitoring, ambulance-to-hospital orchestration, and distributed medical imaging inference require workflow-level, time-bounded coordination across heterogeneous devices, edge servers, and network control entities. While current 3GPP and O-RAN standards excel at per-device control and quality-of-service enforcement, they do not natively expose abstractions for workflow-level coordination under strict clinical timing constraints, leaving this capability to fragile, application-specific overlays. This article outlines the Collective Adaptive Intelligence Plane (CAIP) as a standards-aligned coordination framework that addresses this abstraction gap without introducing new protocol layers. CAIP is realized through minimal, backward-compatible coordination profiles anc...
4.OFDM Waveform for Monostatic ISAC in 6G: Vision, Approach, and Research Directions
Integrated sensing and communication (ISAC) is widely regarded as a key enabling technology for 6G wireless networks. While extensive research has explored the coexistence of sensing and communication functionalities, the use of orthogonal frequency-division multiplexing (OFDM) waveforms for monostatic ISAC remains underexplored. In this article, we present practical approaches for enabling monostatic sensing on wireless communication devices and illustrate how OFDM signals can provide radar-like sensing capabilities such as ranging, Doppler estimation, and environmental perception. We hope this article will stimulate further research on OFDM-based monostatic ISAC and accelerate its adoption in 6G networks.
5.Intelligent 6G Edge Connectivity: A Knowledge Driven Optimization Framework for Small Cell Selection
Sixth-generation (6G) wireless networks are expected to support immersive and mission-critical applications requiring ultra-reliable communication, sub-second responsiveness, and multi-Gbps data rates. Dense small-cell deployments are a key enabler of these capabilities; however, the large number of candidate cells available to mobile users makes efficient user-cell association increasingly complex. Conventional signal-strength-based or heuristic approaches often lead to load imbalance, increased latency, packet loss, and inefficient utilization of radio resources. To address these challenges, this paper proposes a Knowledge-Defined Networking (KDN) framework for intelligent user association in dense 6G small-cell environments. The proposed architecture integrates the knowledge, control, and data planes to enable adaptive, data-driven dec...