Daily Briefing – Apr 14 (96 Articles)
Babak's Daily Briefing
Tuesday, April 14, 2026
Sources: 20 | Total Articles: 96
6G World
1.SoftBank’s Physical AI push gives AI-RAN a sharper purpose
SoftBank is starting to give AI-RAN a more concrete job description: not just running AI workloads near the network, but serving as the real-time infrastructure layer for robots and other physical systems. The company’s recent materials suggest it wants to move the AI-RAN conversation from telecom architecture to real-world machine action.
2.South Korea puts 6G inside its national AI push
South Korea has unveiled a three-year national roadmap aimed at becoming one of the world’s top three AI powers by 2028, with 6G commercialization positioned as part of that broader push.
3.b-com’s Open XG Hub targets one of telecom’s biggest gaps: turning experimentation into deployment
In an interview with Peter Pietrzyk, Managing Director of 6GWorld, Patrick Savell, Head of Connectivity at b-com, said platforms such as Open XG Hub are designed to help bridge one of the industry’s most persistent challenges: moving promising ideas from research environments into deployable network systems. The bigger point is that, as telecom becomes more software-driven and AI-native, the bottleneck is increasingly less about invention and more about validation, integration, and operational readiness.
4.ODC’s $45M raise signals a bigger shift in AI-RAN, from network optimization to edge intelligence
ORAN Development Company said it has closed a $45 million Series A backed by Booz Allen, Cisco Investments, Nokia, NVIDIA, AT&T, MTN and Telecom Italia to scale its U.S.-based Odyssey platform, which it positions as an AI-native RAN architecture combining communications, sensing and edge intelligence. The company said it plans to accelerate commercial deployment through 2026.
5.Lockheed Martin’s NetSense points to a bigger shift: 5G as drone-detection infrastructure
Lockheed Martin’s latest NetSense prototype suggests that commercial 5G infrastructure could play a growing role in drone detection, adding momentum to the broader move toward sensing-enabled wireless networks.
AI Agents
1.Time is Not a Label: Continuous Phase Rotation for Temporal Knowledge Graphs and Agentic Memory
Structured memory representations such as knowledge graphs are central to autonomous agents and other long-lived systems. However, most existing approaches model time as discrete metadata, either sorting by recency (burying old-yet-permanent knowledge), simply overwriting outdated facts, or requiring an expensive LLM call at every ingestion step, leaving them unable to distinguish persistent facts from evolving ones. To address this, we introduce RoMem, a drop-in temporal knowledge graph module for structured memory systems, applicable to agentic memory and beyond. A pretrained Semantic Speed Gate maps each relation's text embedding to a volatility score, learning from data that evolving relations (e.g., "president of") should rotate fast while persistent ones (e.g., "born in") should remain stable. Combined with continuous phase rotation...
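A minimal sketch of the core idea, assuming the paper's own component names (a Semantic Speed Gate producing per-relation volatility scores, and a phase rotation that grows with elapsed time). The sigmoid gate, the planar-pair rotation, and all shapes below are our illustrative reading, not the authors' exact construction:

```python
import numpy as np

def volatility_gate(relation_embedding, w, b):
    """Map a relation's text embedding to a volatility score in (0, 1).
    A sigmoid over a toy linear projection; the real gate is pretrained."""
    return 1.0 / (1.0 + np.exp(-(relation_embedding @ w + b)))

def rotate_by_time(fact_embedding, volatility, elapsed_time):
    """Continuous phase rotation applied to pairs of embedding dimensions.
    The angle grows with elapsed time, scaled by the relation's volatility,
    so an evolving relation drifts quickly while a persistent one barely moves."""
    theta = volatility * elapsed_time
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    x = fact_embedding.reshape(-1, 2)              # treat dims as 2-D planes
    rotated = np.stack([cos_t * x[:, 0] - sin_t * x[:, 1],
                        sin_t * x[:, 0] + cos_t * x[:, 1]], axis=1)
    return rotated.reshape(-1)

# Toy usage: two relations, same fact embedding, same elapsed time.
rng = np.random.default_rng(0)
dim = 8
w, b = rng.normal(size=dim), 0.0
fact = rng.normal(size=dim)
for name in ["relation_a", "relation_b"]:
    rel = rng.normal(size=dim)
    v = volatility_gate(rel, w, b)
    drift = np.linalg.norm(rotate_by_time(fact, v, elapsed_time=5.0) - fact)
    print(f"{name}: volatility={v:.2f}, drift={drift:.2f}")
```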
2.OOM-RL: Out-of-Money Reinforcement Learning Market-Driven Alignment for LLM-Based Multi-Agent Systems
The alignment of Multi-Agent Systems (MAS) for autonomous software engineering is constrained by evaluator epistemic uncertainty. Current paradigms, such as Reinforcement Learning from Human Feedback (RLHF) and AI Feedback (RLAIF), frequently induce model sycophancy, while execution-based environments suffer from adversarial "Test Evasion" by unconstrained agents. In this paper, we introduce an objective alignment paradigm: Out-of-Money Reinforcement Learning (OOM-RL). By deploying agents into the non-stationary, high-friction reality of live financial markets, we utilize critical capital depletion as an un-hackable negative gradient. Our longitudinal 20-month empirical study (July 2024 -- February 2026) chronicles the system's evolution from a high-turnover, sycophantic baseline to a robust, liquidity-aware architecture. We demo...
3.AgentWebBench: Benchmarking Multi-Agent Coordination in Agentic Web
Agentic Web is an emerging paradigm where autonomous agents help users access online information. As the paradigm develops, content providers are also deploying agents to manage their data and serve it through controlled interfaces. This shift moves information access from centralized retrieval to decentralized coordination. To study this setting, we introduce AgentWebBench, a benchmark that evaluates how well a user agent synthesizes answers by interacting with website-specific content agents. We evaluate four tasks that cover common web information needs, spanning ranked retrieval (web search, web recommendation) and open-ended synthesis (question answering, deep research). Across seven advanced LLMs and three coordination strategies, multi-agent coordination generally lags behind centralized retrieval as expected, because user agent canno...
4.In-situ process monitoring for defect detection in wire-arc additive manufacturing: an agentic AI approach
AI agents are being increasingly deployed across a wide range of real-world applications. In this paper, we propose an agentic AI framework for in-situ process monitoring for defect detection in wire-arc additive manufacturing (WAAM). The autonomous agent leverages a WAAM process monitoring dataset and a trained classification tool to build AI agents and uses a large language model (LLM) for in-situ process monitoring decision-making for defect detection. A processing agent is developed based on welder process signals, such as current and voltage, and a monitoring agent is developed based on acoustic data collected during the process. Both agents are tasked with identifying porosity defects from processing and monitoring signals, respectively. Ground truth X-ray computed tomography (XCT) data are used to develop classification tools for b...
5.CONSCIENTIA: Can LLM Agents Learn to Strategize? Emergent Deception and Trust in a Multi-Agent NYC Simulation
As large language models (LLMs) are increasingly deployed as autonomous agents, understanding how strategic behavior emerges in multi-agent environments has become an important alignment challenge. We take a neutral empirical stance and construct a controlled environment in which strategic behavior can be directly observed and measured. We introduce a large-scale multi-agent simulation in a simplified model of New York City, where LLM-driven agents interact under opposing incentives. Blue agents aim to reach their destinations efficiently, while Red agents attempt to divert them toward billboard-heavy routes using persuasive language to maximize advertising revenue. Hidden identities make navigation socially mediated, forcing agents to decide when to trust or deceive. We study policy learning through an iterative simulation pipeline that ...
AI Computation & Hardware
1.Self-Calibrating Language Models via Test-Time Discriminative Distillation
Large language models (LLMs) are systematically overconfident: they routinely express high certainty on questions they often answer incorrectly. Existing calibration methods either require labeled validation data, degrade under distribution shifts, or incur substantial inference costs. Recent work has shown that LLMs already contain a better-calibrated signal than the one they verbalize: the token probability of "True" when the model is asked "Is this answer correct?" (P(True)) consistently outperforms their stated confidence, a gap that is theoretically grounded as generative error is lower-bounded by roughly twice the corresponding discriminative error. We introduce SECL (Self-Calibrating Language Models), a test-time training (TTT) pip...
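A minimal sketch of the P(True) probe this work builds on, using the Hugging Face transformers API. The prompt wording, the placeholder model, and renormalizing over the "True"/"False" tokens are our assumptions, not details taken from the paper:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def p_true(question: str, answer: str) -> float:
    """Probability mass the model puts on 'True' when asked to verify an answer."""
    prompt = (f"Question: {question}\nProposed answer: {answer}\n"
              "Is this answer correct? Answer True or False:")
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]            # next-token distribution
    probs = torch.softmax(logits, dim=-1)
    true_id = tok.encode(" True", add_special_tokens=False)[0]
    false_id = tok.encode(" False", add_special_tokens=False)[0]
    # Renormalize over the two verification tokens (a common P(True)-style choice).
    return (probs[true_id] / (probs[true_id] + probs[false_id])).item()

print(p_true("What is the capital of France?", "Paris"))
```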
2.Toward Generalized Cross-Lingual Hateful Language Detection with Web-Scale Data and Ensemble LLM Annotations
We study whether large-scale unlabelled web data and LLM-based synthetic annotations can improve multilingual hate speech detection. Starting from texts crawled via OpenWebSearch.eu (OWS) in four languages (English, German, Spanish, Vietnamese), we pursue two complementary strategies. First, we apply continued pre-training to BERT models by continuing masked language modelling on unlabelled OWS texts before supervised fine-tuning, and show that this yields an average macro-F1 gain of approximately 3% over standard baselines across sixteen benchmarks, with stronger gains in low-resource settings. Second, we use four open-source LLMs (Mistral-7B, Llama3.1-8B, Gemma2-9B, Qwen2.5-14B) to produce synthetic annotations through three ensemble strategies: mean averaging, majority voting, and a Ligh...
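A toy sketch of two of the named ensemble strategies (majority voting over hard labels, mean averaging over per-model scores). The labels, scores, and 0.5 threshold are illustrative assumptions:

```python
from collections import Counter
from statistics import mean

def majority_vote(labels):
    """Hard label most annotator models agreed on (ties broken by first-seen order)."""
    return Counter(labels).most_common(1)[0][0]

def mean_average(scores, threshold=0.5):
    """Soft ensemble: average each model's hate-speech probability, then threshold."""
    return int(mean(scores) >= threshold)

# Toy usage with four hypothetical annotator models per example.
example_labels = ["hate", "not_hate", "hate", "hate"]
example_scores = [0.81, 0.34, 0.66, 0.72]
print(majority_vote(example_labels))   # 'hate'
print(mean_average(example_scores))    # 1
```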
3.HumorGen: Cognitive Synergy for Humor Generation in Large Language Models via Persona-Based Distillation
Humor generation poses a significant challenge for Large Language Models (LLMs), because their standard training objective - predicting the most likely next word - inherently conflicts with the surprise and incongruity needed for comedy. To bridge this gap, we introduce the Cognitive Synergy Framework, a theoretically grounded methodology for generating high-quality humor data inspired by psychological theories of humor. Utilizing a Mixture-of-Thought (MoT) approach, we deploy six cognitive personas (e.g., The Absurdist, The Cynic) to synthesize diverse comedic perspectives for a given prompt. This framework creates a theoretically grounded dataset, which we use to fine-tune a 7B-parameter student model. We compare Direct Preference Optimization (DPO) and a novel Offline Group Relative Poli...
4.Generating High Quality Synthetic Data for Dutch Medical Conversations
Medical conversations offer insights into clinical communication often absent from Electronic Health Records. However, developing reliable clinical Natural Language Processing (NLP) models is hampered by the scarcity of domain-specific datasets, as clinical data are typically inaccessible due to privacy and ethical constraints. To address these challenges, we present a pipeline for generating synthetic Dutch medical dialogues using a Dutch fine-tuned Large Language Model, with real medical conversations serving as linguistic and structural reference. The generated dialogues were evaluated through quantitative metrics and qualitative review by native speakers and medical practitioners. Quantitative analysis revealed strong lexical variety and overly regular turn-taking, suggesting scripted r...
5.GIANTS: Generative Insight Anticipation from Scientific Literature
Scientific breakthroughs often emerge from synthesizing prior ideas into novel contributions. While language models (LMs) show promise in scientific discovery, their ability to perform this targeted, literature-grounded synthesis remains underexplored. We introduce insight anticipation, a generation task in which a model predicts a downstream paper's core insight from its foundational parent papers. To evaluate this capability, we develop GiantsBench, a benchmark of 17k examples across eight scientific domains, where each example consists of a set of parent papers paired with the core insight of a downstream paper. We evaluate models using an LM judge that scores similarity between generated and ground-truth insights, and show that these similarity scores correlate with expert human ratings...
AI Machine Learning
1.The Diffusion-Attention Connection
Transformers, diffusion maps, and magnetic Laplacians are usually treated as separate tools; we show they are all different regimes of a single Markov geometry built from pre-softmax query scores. We define a QK "bidivergence" whose exponentiated and normalized forms yield attention, diffusion maps, and magnetic diffusion. We then use products of experts and Schrödinger bridges to connect and organize them into equilibrium, nonequilibrium steady-state, and driven dynamics.
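A small NumPy sketch of the connection, assuming the usual scaled QK scores: row-softmax of the exponentiated scores gives attention, while degree-normalizing a symmetrized version of the same kernel gives a diffusion-map-style Markov operator. The symmetrization step is our simplification, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))

scores = Q @ K.T / np.sqrt(d)          # pre-softmax query-key scores
W = np.exp(scores)                     # shared exponentiated kernel

# Attention: row-stochastic normalization (each query distributes mass over keys).
attention = W / W.sum(axis=1, keepdims=True)

# Diffusion map: degree-normalized Markov operator on the same kernel,
# symmetrized here so it defines a reversible random walk.
W_sym = 0.5 * (W + W.T)
P_diffusion = W_sym / W_sym.sum(axis=1, keepdims=True)

print(np.allclose(attention.sum(axis=1), 1.0), np.allclose(P_diffusion.sum(axis=1), 1.0))
```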
2.Fairboard: a quantitative framework for equity assessment of healthcare models
Despite there now being more than 1,000 FDA-authorised AI medical devices, formal equity assessments -- whether model performance is uniform across patient subgroups -- are rare. Here, we evaluate the equity of 18 open-source brain tumour segmentation models across 648 glioma patients from two independent datasets (n = 11,664 model inferences) along distinct univariate, Bayesian multivariate, spatial, and representational dimensions. We find that patient identity consistently explains more performance variance than model choice, with clinical factors, including molecular diagnosis, tumour grade, and extent of resection, predicting segmentation accuracy more strongly than model architecture. A voxel-wise spatial meta-analysis identifies neuroanatomically localised biases that are compartment-...
3.Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
While the wide adoption of refusal training in large language models (LLMs) has showcased improvements in model safety, recent works have highlighted shortcomings due to the shallow nature of these alignment methods. To this end, the work on Deliberative alignment proposed distilling reasoning capabilities from stronger reasoning models, thereby instilling deeper safety in LLMs. In this work, we study the impact of deliberative alignment in language models. First, we show that despite being larger in model size and stronger in safety capability, there exists an alignment gap between teacher and student language models, which affects both the safety and general utility of the student model. Furthermore, we show that models aligned through deliberative alignment can retain unsafe behaviors fro...
4.Human-like Working Memory Interference in Large Language Models
Intelligent systems must maintain and manipulate task-relevant information online to adapt to dynamic environments and changing goals. This capacity, known as working memory, is fundamental to human reasoning and intelligence. Despite having on the order of 100 billion neurons, both biological and artificial systems exhibit limitations in working memory. This raises a key question: why do large language models (LLMs) show such limitations, given that transformers have full access to prior context through attention? We find that although a two-layer transformer can be trained to solve working memory tasks perfectly, a diverse set of pretrained LLMs continues to show working memory limitations. Notably, LLMs reproduce interference signatures observed in humans: performance degrades with increa...
5.Belief-State RWKV for Reinforcement Learning under Partial Observability
We propose a stronger formulation of RL on top of RWKV-style recurrent sequence models, in which the fixed-size recurrent state is explicitly interpreted as a belief state rather than an opaque hidden vector. Instead of conditioning policy and value on a single summary h_t, we maintain a compact uncertainty-aware state b_t = (μ_t, Σ_t) derived from RWKV-style recurrent statistics and let control depend on both memory and uncertainty. This design targets a key weakness of plain fixed-state policies in partially observed settings: they may store evidence, but not necessarily confidence. We present the method, a theoretical program, and a pilot RL experiment with hidden episode-level observation noise together with a test-time noise sweep. The pilot shows that belief-state policies nearl...
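A toy sketch of the belief-state idea, assuming b_t = (μ_t, Σ_t) is built from running statistics of the recurrent state; the exponential-moving-average update and diagonal variance below stand in for the RWKV-derived statistics described in the abstract:

```python
import numpy as np

class BeliefState:
    """Compact uncertainty-aware state b_t = (mu_t, diag Sigma_t) over recurrent states.
    Exponential moving mean/variance stand in for the RWKV-derived statistics."""
    def __init__(self, dim, decay=0.9):
        self.mu = np.zeros(dim)
        self.var = np.ones(dim)
        self.decay = decay

    def update(self, h_t):
        self.mu = self.decay * self.mu + (1 - self.decay) * h_t
        self.var = self.decay * self.var + (1 - self.decay) * (h_t - self.mu) ** 2
        return np.concatenate([self.mu, self.var])   # policy/value input: memory + confidence

# Toy rollout: the policy input carries both the summary and its uncertainty.
rng = np.random.default_rng(0)
belief = BeliefState(dim=4)
for t in range(10):
    h_t = rng.normal(scale=1.0 + 0.5 * t, size=4)    # noisier observations over time
    policy_input = belief.update(h_t)
print(policy_input.shape)  # (8,) -> mu (4) + var (4)
```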
AI Robotics
1.Spectral Kernel Dynamics via Maximum Caliber: Fixed Points, Geodesics, and Phase Transitions
We derive a closed-form geometric functional for kernel dynamics on finite graphs by applying the Maximum Caliber (MaxCal) variational principle to the spectral transfer function h(lambda) of the graph Laplacian eigenbasis. The main result is that the MaxCal stationarity condition decouples into N one-dimensional problems with explicit solution: h*(lambda_l) = h_0(lambda_l) exp(-1 - T_l[h*]), yielding self-consistent (fixed-point) kernels via exponential tilting (Corollary 1), log-linear Fisher-Rao geodesics (Corollary 2), a diagonal Hessian stability criterion (Corollary 3), and an l^2_+ isometry for the spectral kernel space (Proposition 3). The spectral entropy H[h_t] provides a computable O(N) early-warning signal for network-structural phase transitions (Remark 7). All claims are numeri...
2.Kinematics of continuum planar grasping
This paper presents an analytical framework to study the geometry arising when a soft continuum arm grasps a planar object. Both the arm centerline and the object boundary are modeled as smooth curves. The grasping problem is formulated as a kinematic boundary following problem, in which the object boundary acts as the arm's 'shadow curve'. This formulation leads to a set of reduced kinematic equations expressed in terms of relative geometric shape variables, with the arm curvature serving as the control input. An optimal control problem is formulated to determine feasible arm shapes that achieve optimal grasping configurations, and its solution is obtained using Pontryagin's Maximum Principle. Based on the resulting optimal grasp kinematics, a class of continuum grasp quality metrics is pro...
3.ProGAL-VLA: Grounded Alignment through Prospective Reasoning in Vision-Language-Action Models
Vision language action (VLA) models enable generalist robotic agents but often exhibit language ignorance, relying on visual shortcuts and remaining insensitive to instruction changes. We present Prospective Grounding and Alignment VLA (ProGAL-VLA), which constructs a 3D entity-centric graph (GSM), uses a slow planner to produce symbolic sub-goals, and aligns them with grounded entities via a Grounding Alignment Contrastive (GAC) loss. All actions are conditioned on a verified goal embedding g_t, whose attention entropy provides an intrinsic ambiguity signal. On LIBERO-Plus, ProGAL-VLA increases robustness under robot perturbations from 30.3 to 71.5 percent, reduces language ignorance by 3x-4x, and improves entity retrieval from 0.41 to 0.71 Recall@1. On the Custom Ambiguity Benchmark, it ...
4.Perception Is All You Need: A Neuroscience Framework for Low Cost Sensorless Gaze in HRI
Gaze-following in child-robot interaction improves attention, recall, and learning, but requires expensive platforms ($30,000+), sensors, algorithms, and raises privacy concerns. We propose a framework that avoids sensors and computation entirely, instead relying on the human visual system's assumption of convexity to produce perceptual gaze-following between a robot and its viewer. Specifically, we motivate a sub-dollar cardboard robot design that directly implements the brain's own gaze computation pipeline in reverse, making the viewer's perceptual system the robot's "actuator", with no sensors, no power, and no privacy concerns. We ground this framework in three converging lines of theoretical and empirical neuroscience evidence. Namely, the distributed face processing network that comput...
5.RoboLab: A High-Fidelity Simulation Benchmark for Analysis of Task Generalist Policies
The pursuit of general-purpose robotics has yielded impressive foundation models, yet simulation-based benchmarking remains a bottleneck due to rapid performance saturation and a lack of true generalization testing. Existing benchmarks often exhibit significant domain overlap between training and evaluation, trivializing success rates and obscuring insights into robustness. We introduce RoboLab, a simulation benchmarking framework designed to address these challenges. Concretely, our framework is designed to answer two questions: (1) to what extent can we understand the performance of a real-world policy by analyzing its behavior in simulation, and (2) which external factors most strongly affect that behavior under controlled perturbations. First, RoboLab enables human-authored and LLM-enabl...
Financial AI
1.PRAGMA: Revolut Foundation Model
Modern financial systems generate vast quantities of transactional and event-level data that encode rich economic signals. This paper presents PRAGMA, a family of foundation models for multi-source banking event sequences. Our approach pre-trains a Transformer-based architecture with masked modelling on a large-scale, heterogeneous banking event corpus using a self-supervised objective tailored to the discrete, variable-length nature of financial records. The resulting model supports a wide range of downstream tasks such as credit scoring, fraud detection, and lifetime value prediction: strong performance can be achieved by training a simple linear model on top of the extracted embeddings and can be further improved with lightweight fine-tuning. Through extensive evaluation on downstream tasks, we demonstrate that PRAGMA achieves superior...
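A minimal sketch of the linear-probe evaluation the summary describes (a simple linear model trained on frozen embeddings). The synthetic embeddings and fraud labels below are placeholders for the extracted PRAGMA representations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for embeddings extracted by a frozen sequence encoder over each
# customer's banking event history (one vector per customer).
rng = np.random.default_rng(0)
n_customers, emb_dim = 2000, 64
embeddings = rng.normal(size=(n_customers, emb_dim))
fraud_label = (embeddings[:, :4].sum(axis=1) + rng.normal(size=n_customers) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, fraud_label, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # simple linear head on frozen features
print("AUC:", round(roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1]), 3))
```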
2.Quantum Computing for Financial Transformation: A Review of Optimisation, Pricing, Risk, Machine Learning, and Post-Quantum Security
Quantum computing is becoming strategically relevant to finance because several core financial bottlenecks are already defined by combinatorial search, expectation estimation, rare-event analysis, representation learning, and long-horizon cryptographic resilience. This review examines that landscape across five connected domains: constrained portfolio optimisation, derivative pricing, tail-risk and scenario estimation, quantum machine learning, and post-quantum security. Rather than treating these topics as isolated demonstrations, the article studies them as linked layers of a financial-computation stack. Across all five domains, the review applies a common evaluative logic: identify the financial bottleneck, specify the relevant quantum primitive, compare it with an explicit classical benchmark, and assess the result under realistic imp...
3.SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series
We study the problem of generating synthetic time series that reproduce both marginal distributions and temporal dynamics, a central challenge in financial machine learning. Existing approaches typically fail to jointly model drift and stochastic volatility, as diffusion-based methods fix the volatility while martingale transport models ignore drift. We introduce the Schrödinger-Bass Bridge for Time Series (SBBTS), a unified framework that extends the Schrödinger-Bass formulation to multi-step time series. The method constructs a diffusion process that jointly calibrates drift and volatility and admits a tractable decomposition into conditional transport problems, enabling efficient learning. Numerical experiments on the Heston model demonstrate that SBBTS accurately recovers stochastic volatility and correlation parameters that prior Sch...
4.Sequential Audit Sampling with Statistical Guarantees
Financial statement auditing is conducted under a risk-based evidence approach to obtain reasonable assurance. In practice, auditors often perform additional sampling or related procedures when an initial sample does not provide a sufficient basis for a conclusion. Across jurisdictions, current standards and practice manuals acknowledge such extensions, while the statistical design of sequential audit procedures has not been fully explored. This study formulates audit sampling with additional, sequentially collected items as a sequential testing problem for a finite population under sampling without replacement. We define null and alternative hypotheses in terms of a tolerable deviation rate, specify stopping and decision rules, and formulate exact sequential boundary conditions in terms of finite-population error probabilities. For pract...
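A toy sketch of one possible stopping rule in this spirit: an exact one-sided hypergeometric check against a tolerable deviation rate under sampling without replacement. The batch sizes, alpha level, and acceptance logic are illustrative assumptions, not the paper's boundary design:

```python
from math import ceil
from scipy.stats import hypergeom

def can_stop_and_accept(pop_size, n_sampled, errors_found, tolerable_rate, alpha=0.05):
    """Exact one-sided check: if, even with the tolerable number of deviations present in
    the population, seeing this few errors in the sample is unlikely (<= alpha), stop and accept."""
    k0 = ceil(tolerable_rate * pop_size)                  # worst case still consistent with H0
    p_value = hypergeom.cdf(errors_found, pop_size, k0, n_sampled)
    return p_value <= alpha, p_value

# Toy sequential run: keep sampling batches until the evidence suffices.
pop_size, tolerable_rate = 5000, 0.05
errors, sampled = 1, 0                                    # one deviation found early, none after
for batch in [60, 40, 40, 40]:                            # illustrative batch sizes
    sampled += batch
    stop, p = can_stop_and_accept(pop_size, sampled, errors, tolerable_rate)
    print(f"n={sampled}, errors={errors}, p-value={p:.3f}, stop={stop}")
    if stop:
        break
```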
5.Generative Path-Law Jump-Diffusion: Sequential MMD-Gradient Flows and Generalisation Bounds in Marcus-Signature RKHS
This paper introduces a novel generative framework for synthesising forward-looking, càdlàg stochastic trajectories that are sequentially consistent with time-evolving path-law proxies, thereby incorporating anticipated structural breaks, regime shifts, and non-autonomous dynamics. By framing path synthesis as a sequential matching problem on restricted Skorokhod manifolds, we develop the Anticipatory Neural Jump-Diffusion (ANJD) flow, a generative mechanism that effectively inverts the time-extended Marcus-sense signature. Central to this approach is the Anticipatory Variance-Normalised Signature Geometry (AVNSG), a time-evolving precision operator that performs dynamic spectral whitening on the signature manifold to ensure contractivity during volatile regime shifts and discrete aleatoric shocks. We provide a rigorous theoretic...
GSMA Newsroom
1.From Rich Text to Video: RCS Universal Profile 4.0 has arrived
Summary available at source link.
2.Mobile Money accounted for $2 trillion in transactions in 2025, doubling since 2021 as active accounts continue to grow
Summary available at source link.
3.Strengthening the Global Fight Against Fraud and Scams – Takeaways from the Global Fraud Summit in Vienna
Summary available at source link.
4.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
5.From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
Generative AI (arXiv)
1.A Mechanistic Analysis of Looped Reasoning Language Models
Reasoning has become a central capability in large language models. Recent research has shown that reasoning performance can be improved by looping an LLM's layers in the latent dimension, resulting in looped reasoning language models. Despite promising results, few works have investigated how their internal dynamics differ from those of standard feedforward models. In this paper, we conduct a mechanistic analysis of the latent states in looped language models, focusing in particular on how the stages of inference observed in feedforward models compare to those observed in looped ones. To this end, we analyze cyclic recurrence and show that for many of the studied models each layer in the cycle converges to a distinct fixed point; consequently, the recurrent block follows a consistent cyclic trajectory in the latent space. We provide evid...
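A toy sketch of the fixed-point behavior the analysis describes: iterating a single shared block whose update norms shrink as the latent state settles. The contraction map below is a stand-in, not a trained looped language model:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
W = rng.normal(size=(dim, dim))
W *= 0.9 / np.linalg.norm(W, 2)        # spectral norm < 1 => the block is a contraction
b = rng.normal(size=dim)

def looped_block(h):
    """Stand-in for one pass through the shared (looped) layer."""
    return np.tanh(W @ h + b)

h = rng.normal(size=dim)
for step in range(1, 13):
    h_next = looped_block(h)
    print(f"iteration {step:2d}: update norm = {np.linalg.norm(h_next - h):.4f}")
    h = h_next
```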
2.General365: Benchmarking General Reasoning in Large Language Models Across Diverse and Challenging Tasks
Contemporary large language models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in specialized domains like mathematics and physics. However, their ability to generalize these reasoning skills to more general and broader contexts--often termed general reasoning--remains under-explored. Unlike domain-specific reasoning, general reasoning relies less on expert knowledge but still presents formidable reasoning challenges, such as complex constraints, nested logical branches, and semantic interference. To address this gap, we introduce General365, a benchmark specifically designed to assess general reasoning in LLMs. By restricting background knowledge to a K-12 level, General365 explicitly decouples reasoning from specialized expertise. The benchmark comprises 365 seed problems and 1,095 variant problems across ei...
3.Enhancing Program Repair with Specification Guidance and Intermediate Behavioral Signals
Automated Program Repair (APR) has recently benefited from large language models (LLMs). However, most LLM-based APR approaches still rely primarily on coarse end-to-end signals from test-suite outcomes to guide repair, providing limited insight into where a program's internal logic deviates from its intended behavior. In contrast, human debugging often relies on intermediate reasoning about program states through localized correctness conditions or assertions. Inspired by this observation, we propose SpecTune, a specification-guided debugging framework that incorporates intermediate behavioral reasoning into APR. SpecTune decomposes the repair task into suspicious regions connected by execution checkpoints and derives localized postconditions representing expected program behaviors at those points. By executing the buggy program and eval...
4.DreamKG: A KG-Augmented Conversational System for People Experiencing Homelessness
People experiencing homelessness (PEH) face substantial barriers to accessing timely, accurate information about community services. DreamKG addresses this through a knowledge graph-augmented conversational system that grounds responses in verified, up-to-date data about Philadelphia organizations, services, locations, and hours. Unlike standard large language models (LLMs) prone to hallucinations, DreamKG combines Neo4j knowledge graphs with structured query understanding to handle location-aware and time-sensitive queries reliably. The system performs spatial reasoning for distance-based recommendations and temporal filtering for operating hours. Preliminary evaluation shows 59% superiority over Google Search AI on relevant queries and 84% rejection of irrelevant queries. This demonstration highlights the potential of hybrid architectur...
5.EA-Agent: A Structured Multi-Step Reasoning Agent for Entity Alignment
Entity alignment (EA) aims to identify entities across different knowledge graphs (KGs) that refer to the same real-world object and plays a critical role in knowledge fusion and integration. Traditional EA methods mainly rely on knowledge representation learning, but their performance is often limited under noisy or sparsely supervised scenarios. Recently, large language models (LLMs) have been introduced to EA and achieved notable improvements by leveraging rich semantic knowledge. However, existing LLM-based EA approaches typically treat LLMs as black-box decision makers, resulting in limited interpretability, and the direct use of large-scale triples substantially increases inference cost. To address these challenges, we propose EA-Agent, a reasoning-driven agent for EA. EA-Agent formulates EA as a structured reasoning proces...
Hugging Face Daily Papers
1.Psychological Concept Neurons: Can Neural Control Bias Probing and Shift Generation in LLMs?
Using psychological constructs such as the Big Five, large language models (LLMs) can imitate specific personality profiles and predict a user's personality. While LLMs can exhibit behaviors consistent with these constructs, it remains unclear where and how they are represented inside the model and how they relate to behavioral outputs. To address this gap, we focus on questionnaire-operationalized Big Five concepts, analyze the formation and localization of their internal representations, and use interventions to examine how these representations relate to behavioral outputs. In our experiment, we first use probing to examine where Big Five information emerges across model depth. We then identify neurons that respond selectively to each Big Five concept and test whether enhancing or suppressing their activations can bias latent represent...
2.LMMs Meet Object-Centric Vision: Understanding, Segmentation, Editing and Generation
Large Multimodal Models (LMMs) have achieved remarkable progress in general-purpose vision--language understanding, yet they remain limited in tasks requiring precise object-level grounding, fine-grained spatial reasoning, and controllable visual manipulation. In particular, existing systems often struggle to identify the correct instance, preserve object identity across interactions, and localize or modify designated regions with high precision. Object-centric vision provides a principled framework for addressing these challenges by promoting explicit representations and operations over visual entities, thereby extending multimodal systems from global scene understanding to object-level understanding, segmentation, editing, and generation. This paper presents a comprehensive review of recent advances at the convergence of LMMs and object...
3.GenTac: Generative Modeling and Forecasting of Soccer Tactics
Modeling open-play soccer tactics is a formidable challenge due to the stochastic, multi-agent nature of the game. Existing computational approaches typically produce single, deterministic trajectory forecasts or focus on highly structured set-pieces, fundamentally failing to capture the inherent variance and branching possibilities of real-world match evolution. Here, we introduce GenTac, a diffusion-based generative framework that conceptualizes soccer tactics as a stochastic process over continuous multi-player trajectories and discrete semantic events. By learning the underlying distribution of player movements from historical tracking data, GenTac samples diverse, plausible, long-horizon future trajectories. The framework supports rich contextual conditioning, including opponent behavior, specific team or league playing styles, and s...
4.Efficient KernelSHAP Explanations for Patch-based 3D Medical Image Segmentation
Perturbation-based explainability methods such as KernelSHAP provide model-agnostic attributions but are typically impractical for patch-based 3D medical image segmentation due to the large number of coalition evaluations and the high cost of sliding-window inference. We present an efficient KernelSHAP framework for volumetric CT segmentation that restricts computation to a user-defined region of interest and its receptive-field support, and accelerates inference via patch logit caching, reusing baseline predictions for unaffected patches while preserving nnU-Net's fusion scheme. To enable clinically meaningful attributions, we compare three automatically generated feature abstractions within the receptive-field crop: whole-organ units, regular FCC supervoxels, and hybrid organ-aware supervoxels, and we study multiple aggregation/value fu...
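A minimal sketch of the patch logit caching idea, assuming a sliding-window segmenter whose per-patch predictions can be cached: only patches whose window overlaps the perturbed coalition region are recomputed, while cached baseline logits are reused elsewhere. The 1-D "volume" and dummy predictor are placeholders:

```python
import numpy as np

def predict_patch(volume, patch_slice):
    """Stand-in for one sliding-window patch inference (expensive in practice)."""
    return volume[patch_slice].mean(keepdims=True)          # dummy 'logit'

def cached_sliding_window(volume, patches, perturbed_mask, cache):
    """Recompute only patches touched by the perturbation; reuse cached baseline logits."""
    logits = []
    for i, sl in enumerate(patches):
        if perturbed_mask[sl].any():                         # patch overlaps the coalition's region
            logits.append(predict_patch(volume, sl))
        else:
            if i not in cache:
                cache[i] = predict_patch(volume, sl)         # baseline logit, computed once
            logits.append(cache[i])
    return np.concatenate(logits)

# Toy 1-D volume, four overlapping patches, perturbation confined to one region.
volume = np.arange(16, dtype=float)
patches = [slice(0, 6), slice(4, 10), slice(8, 14), slice(10, 16)]
perturbed_mask = np.zeros(16, dtype=bool)
perturbed_mask[1:3] = True                                   # only the first patch is affected
cache = {}
out = cached_sliding_window(volume, patches, perturbed_mask, cache)
print(out, "cached patches:", sorted(cache))
```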
5.Agentic Aggregation for Parallel Scaling of Long-Horizon Agentic Tasks
We study parallel test-time scaling for long-horizon agentic tasks such as agentic search and deep research, where multiple rollouts are generated in parallel and aggregated into a final response. While such scaling has proven effective for chain-of-thought reasoning, agentic tasks pose unique challenges: trajectories are long, multi-turn, and tool-augmented, and outputs are often open-ended. Aggregating only final answers discards rich information from trajectories, while concatenating all trajectories exceeds the model's context window. To address this, we propose AggAgent, an aggregation agent that treats parallel trajectories as an environment. We equip it with lightweight tools to inspect candidate solutions and search across trajectories, enabling it to navigate and synthesize information on demand. Across six benchmarks and three m...
IEEE Xplore AI
1.12 Graphs That Explain the State of AI in 2026
The capabilities of leading AI models continue to accelerate and the largest AI companies, including OpenAI and Anthropic, are hurtling toward IPOs later this year. Yet resentment towards AI continues to simmer and in some cases has boiled over, especially in the United States, where local governments are beginning to embrace restrictions or outright bans on new data center development. It’s a lot to keep track of, but the 2026 edition of the AI Index from Stanford University’s Human-Centered Artificial Intelligence center pulls it off. The report, which comes in at over 400 pages, includes dozens of data points and graphs that approach the topic from multiple angles, from benchmark scores to investment and public perception. As in prior years (see our coverage from 2021, 2022, 2023, 2024, and 2025), we’ve read the report and identi...
2.GoZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale
ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions. ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission critical environments. Core components, including Saluki secure flight controllers, have reached TRL8 and are deployed in customer systems. While initially developed for high-consequence mission environments, the same assurance challenges are...
3.AI Models Map the Colorado River’s Hard Choices
The Colorado River begins as snow. Every spring, the mountain snowpack of the Rockies melts into streams that feed into reservoirs that supply 40 million people across seven U.S. states. The system has worked, more or less, for a century. That century is over. By some measures, 2026 is shaping up to be the worst year the river has seen since records began. Flows are down 20 percent from 2000 levels. Lake Powell, the reservoir straddling Utah and Arizona, may drop below the threshold for generating hydropower before the year is out. The negotiations between the seven states over how to share what’s left have collapsed twice, and the U.S. federal government is threatening to impose its own plan. While the states argue and the river shrinks, a growing set of machine learning tools is being deployed across the basin. Federal water managers...
4.Decentralized Training Can Help Solve AI’s Energy Woes
Artificial intelligence harbors an enormous energy appetite. Such constant cravings are evident in the hefty carbon footprint of the data centers behind the AI boom and the steady increase over time of carbon emissions from training frontier AI models. No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization. Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where ...
5.Why AI Systems Fail Quietly
In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong. Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do. This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends...
MIT Sloan Management
1.The Human Side of AI Adoption: Lessons From the Field
Not a day goes by without another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. Many examples of successful early adoption of artificial intelligence […]
2.Managing Up: A Skill Set That Matters Now
Are you skilled at managing up? If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by […]
3.The Trap That Skilled Negotiators Miss
Say you walk into a car dealership determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all […]
4.Rethink Responsibility in the Age of AI
Early one morning in 2018, a self-driving Uber vehicle fatally struck a pedestrian in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a […]
5.Gain Consumer Insight With Generative AI
Marketing leaders often face a dilemma: Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus? Drawing on recent research, […]
NBER Working Papers
1.The Empathy Channel in Fertility -- by Sebastian Galiani, Raul A. Sosa
Being around babies makes people want babies. We formalize this observation as the empathy channel: exposure to infants in the social environment activates neurobiological mechanisms that increase the desire for parenthood. As children become scarcer, this affective stimulus weakens, further eroding the motivation to have children. We embed the mechanism in a two-group overlapping-generations quantity-quality model. The empathy channel generates a positive externality, since each birth raises others’ desire for children, making the decentralized equilibrium inefficient. We characterize the optimal per-child subsidy and show that the first-order Pigouvian rate substantially overshoots the general-equilibrium optimum. The optimal targeting rule follows a Ramsey-like logic, directing the subsidy at the group with the most externality per fis...
2.Profit Regulation and Strategic Transfer Pricing by Vertically Integrated Firms: Evidence from Health Care -- by Pragya Kakani, Eric Yde, Genevieve P. Kanter, Richard G. Frank, Amelia M. Bond
We provide evidence of strategic transfer pricing by vertically integrated health care firms in response to insurer profit regulations. Insurers increased prices at vertically integrated pharmacies by 9.5% following the introduction of caps on insurer profits in Medicare Part D. We detect larger price increases by insurers that were at greatest risk of exceeding the allowable profit level. More than one-fifth of these higher prices were borne by the federal government. Our analysis illustrates that vertically integrated firms can evade profit regulation by “tunneling” profits to unregulated subsidiaries, undermining regulatory intent and increasing health care spending.
3.Predicted Incrementality by Experimentation (PIE) for Ad Measurement -- by Brett R. Gordon, Robert Moakler, Florian Zettelmeyer
Randomized controlled trials (RCTs) provide the most credible estimates of advertising incrementality but are difficult to scale. We propose Predicted Incrementality by Experimentation (PIE), which reframes ad measurement as a campaign-level prediction problem. PIE uses a sample of RCTs to learn a mapping from campaign features to causal effects, then applies it to campaigns not run as RCTs. Because the RCTs identify the causal effects, PIE can incorporate post-determined features—campaign-level aggregates such as test-group outcomes, exposure rates, and last-click conversions, computed after campaign completion. These metrics reflect the consumer behaviors that generate treatment effects, so they carry predictive information about incrementality even though they would be invalid controls in a causal model. Using 2,226 Meta ad experiments...
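A minimal sketch of the PIE-style prediction step, assuming campaign-level features and RCT-estimated lifts are already available as arrays; the feature set, model choice, and synthetic data are illustrative, not the paper's specification:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in campaign-level features (e.g., spend, exposure rate, last-click conversions,
# test-group outcome). Targets are per-campaign lifts estimated by RCTs.
n_rct = 400
X_rct = rng.normal(size=(n_rct, 4))
lift_rct = 0.5 * X_rct[:, 1] + 0.3 * X_rct[:, 3] + rng.normal(scale=0.2, size=n_rct)

model = GradientBoostingRegressor(random_state=0)
cv = cross_val_score(model, X_rct, lift_rct, cv=5, scoring="r2")
print("CV R^2 on held-out experiments:", cv.round(2))

# Apply the fitted mapping to campaigns that were never run as experiments.
model.fit(X_rct, lift_rct)
X_new = rng.normal(size=(5, 4))
print("Predicted incrementality:", model.predict(X_new).round(2))
```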
4.Bad News and Policy Views: Expectations, Disappointment, and Opposition to Affirmative Action -- by Louis-Pierre Lepage, Heather Sarsons, Michael Thaler
There is widespread opposition to affirmative action policies. We study whether personal disappointments shape preferences for such policies. Specifically, we test whether individuals' college admissions outcomes, relative to their expectations, influence their attitudes toward affirmative action policies. Using a retrospective survey among recent White and Asian college applicants, we find that disappointed individuals—those who were admitted to fewer schools than anticipated—are relatively more likely to believe that affirmative action played an important role in their admissions outcomes, have the lowest support for affirmative action policies, and are more willing to donate to an anti-affirmative action organization. They also hold more negative views about the academic qualifications of under-represented minorities. To isolate the ca...
5.Forecasting the Economic Effects of AI -- by Ezra Karger, Otto Kuusela, Jason Abaluck, Kevin A. Bryan, Basil Halperin, Todd R. Jones, Connacher Murphy, Philip Trammell, Matt Reynolds, Dan Mayland, Ria Viswanathan, Ananaya Mittal, Rebecca Ceppas de Castro, Josh Rosenberg, Philip Tetlock
We elicit forecasts of how AI will affect the U.S. economy, comparing the beliefs of five groups: academic economists, employees at AI companies, policy researchers focused on AI, highly accurate forecasters, and the general public. The median respondent in each group expects substantial advances in AI capabilities by 2030, small declines in labor force participation consistent with demographic shifts, and an annual GDP growth rate of 2.5%, which exceeds both the typical medium-run (2.0%) and long-run (1.7%) baseline forecasts from government agencies and private-sector forecasters. Conditional on a “rapid” AI progress scenario, in which AI systems surpass human performance on many cognitive and physical tasks, experts forecast substantial, though not historically unprecedented, economic shifts: annualized GDP growth rising to around 4% a...
NY Fed - Liberty Street
1.What Millions of Homeowner’s Insurance Contracts Reveal About Risk Sharing
Housing is the largest component of assets held by households in the United States, totaling $48 trillion in 2025. When natural disasters strike, the resulting damage to homes can be large relative to households’ liquid savings. Homeowner’s insurance is the primary financial tool households use to protect themselves against property risk. Despite the economic importance of homeowner’s insurance, we know surprisingly little about how insurance contracts are actually designed with respect to property risk. In this post, which is based on our new paper, “Economics of Property Insurance,” we examine how homeowner’s insurance contracts are structured in practice. Using a new granular dataset covering millions of homeowner’s insurance policies, we document ...
2.A Closer Look at Emerging Market Resilience During Recent Shocks
A succession of shocks to the global economy in recent years has focused attention on the improved economic and financial resilience of emerging market economies. For some of these economies, this assessment is well-founded and highlights the fruits of deep, structural economic reforms since the 1990s. However, for a much larger universe of countries, the ability to weather shocks is still mixed and many remain vulnerable. In this post, we explore the divide between the two sets of countries and focus on the effects of recent economic shocks, including the ongoing conflict in the Middle East.
3.The Fed Has Two Tools to Influence Money Market Conditions
The Federal Reserve’s 2022-23 tightening cycle involved the use of two monetary policy tools: changes in administrative rates and changes in the size of its balance sheet. This post highlights the results of a recent Staff Report that explores how these tools affect money market conditions. Using confidential trade-level data, we find that both tools have significant effects on the pricing of funds sourced through repo. These results suggest that the Fed can manage how financing conditions are affected even as it influences economic conditions. For example, the Fed can lower its administrative rates to loosen economic conditions, while shrinking its balance sheet to maintain financing conditions in the money markets.
4.Treasury Market Liquidity Since April 2025
In this post, we examine the evolution of U.S. Treasury market liquidity over the past year, which has witnessed myriad economic and political developments. Liquidity worsened markedly one year ago as volatility increased following the announcement of higher-than-expected tariffs. Liquidity quickly improved when the tariff increases were partially rolled back and then remained fairly stable thereafter (through the end of our sample in February 2026), including after the recent Supreme Court decision striking down the emergency tariffs and the subsequent announcement of new tariffs.
5.Behind the ATM: Exploring the Structure of Bank Holding Companies
Many modern banking organizations are highly complex. A “bank” is often a larger structure made up of distinct entities, each subject to different regulatory, supervisory, and reporting requirements. For researchers and policymakers, understanding how these institutions are structured and how they have evolved over time is essential. In this post, we illustrate what a modern financial holding company looks like in practice, document how banks’ organizational structures have changed over time, and explain why these details matter for conducting accurate analyses of the financial system.
Project Syndicate
1.Why Development Doesn’t Prevent War
Violent conflicts have reached levels not seen since World War II, even as global poverty has fallen to historic lows, challenging long-held assumptions about the relationship between development and peace. This outcome calls for a reassessment of the theory of change that underpins development aid.
2.Russians Go Home
At a time when illiberalism has presented itself as the wave of the future, Hungarians just voted overwhelmingly to reverse course. But unlike the country’s liberation in 1989, this week’s electoral result represents only the beginning of a longer process, the outcome of which will remain uncertain.
3.How Trump’s Crypto Push Is Undermining American Power
US President Donald Trump’s deregulation of cryptocurrency markets has introduced new financial risks and threatens to undermine the dollar’s global dominance. Russia, Iran, and North Korea have been the biggest beneficiaries, using crypto to evade American sanctions with impunity.
4.Will the IMF Ever Learn?
From austerity to taxation, the International Monetary Fund has consistently failed to incorporate its own findings into its lending programs. The Fund's once-a-decade Review of Program Design and Conditionality, which is now underway, offers a critical opportunity to change this.
5.To Work for Us, AI Must Not Think for Us
The public discussion about AI’s impact on society focuses largely on the potential displacement of workers and the loss of jobs. But an even greater risk is the displacement of human thought and the processes that produce the knowledge base on which AI models themselves rely.
RCR Wireless
1.EU clears Orange’s full acquisition of MasOrange
The deal will see Orange take sole control of MasOrange, the operator formed through the merger of Orange and MásMóvil in 2024. In sum – what to know: Regulatory clearance – The European Commission approved the deal under a simplified…
2.InfiniG taps Nokia to bring ‘carrier-grade’ CBRS indoors, prep for enterprise AI-RAN
A new partnership between InfiniG and Nokia upgrades in-building neutral-host coverage to ‘carrier-class’ standards while quietly positioning enterprise networks for AI-driven RAN innovation – which ties to both the Finnish firm’s ‘super-cycle’ AI vision and its strange enterprise 5G strategy. …
3.How GNSS satellites power positioning and timing
From smartphones to cars to critical infrastructure, these early satellites power some of the most modern technologies of today. GNSS is not a term that peppers headlines often. Nevertheless, it underpins almost all technologies that we use in everyday life.…
4.Nvidia’s AI grid and the telco dilemma
Should telcos invest billions in edge GPU infrastructure or wait for physical AI use cases to mature? In sum – what we know: ABI Research recently put out an analysis looking at Nvidia’s AI grid concept and the bigger question…
5.Viavi, Ground Control bring resilient PNT to GNSS-denied environments
Escalating GNSS disruptions are pushing operators toward multi-source, multi-constellation alternatives to maintain continuity and trust in navigation data. A disturbing number of ships — both commercial and military — around the world are facing Global Navigation Satellite Systems (GNSS) disruptions.…
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.ISAC-Enabled Non-Terrestrial Networks for 6G: Design Principles, Standardization, Performance Tradeoffs, and Use Cases
Non-Terrestrial Networks (NTN) have emerged as a key enabler to fully realize the vision of integrated, intelligent, and ubiquitous connectivity in 6G systems. However, several operational challenges, including severe Doppler effects, interference, and latency, hinder the seamless integration of NTN and Terrestrial Networks (TN). In this context, Integrated Sensing and Communication (ISAC), which unifies sensing and communication functionalities within a common framework, offers great potential to address these challenges while enabling new network capabilities. Due to its complementary functionalities, ISAC can play a pivotal role in enhancing NTN performance, although its practical adoption requires a fundamental rethinking of existing architectural and standardization frameworks. Motivated by this need, this article examines key aspect...
2.Prior-Guided Movable Antenna Control for Agile Multi-Path Sensing (extended version)
Multi-path sensing, which aims to extract the geometric attributes of multiple propagation paths, is expected to be a key functionality of 6G. A movable antenna (MA) can enable this functionality by creating a synthetic aperture through sequential mechanical motion. However, existing MA-based sensing methods typically rely on exhaustive scanning over the entire movable plate, resulting in significant control overhead and sensing latency, which limits their practicality for agile sensing. To address this challenge, this paper develops a prior-guided agile multi-path sensing framework that leverages weak prior angle-of-arrival (AoA) statistics as side information. The proposed framework comprises two steps. First, the movable plate's three-dimensional orientation is optimized only once to maximize path visibility while preserving path discr...
3.Adaptive Structured Sparse Bayesian Learning for Near-Field Non-Stationary Channel Estimation in XL-MIMO Systems
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a key enabler for sixth-generation (6G) communications. However, near-field channel estimation is particularly challenging due to spherical-wave propagation and spatial non-stationarity. To tackle this challenge, we propose a structured sparse Bayesian learning framework with adaptive dictionary updating for near-field non-stationary channel estimation. Specifically, the proposed method iteratively updates the distance parameters within an adaptive dictionary, thereby enhancing the representation capability without increasing the dictionary size. Moreover, we develop a hierarchical prior model that jointly captures polar-domain sparsity and structured dependency, enabling efficient Bayesian inference. Simulation results demonstrate that the proposed approach outperforms exi...
4.Quantum Graph Neural Networks for Double-Sided Reconfigurable Intelligent Surface Optimization
As a key enabler for sixth-generation (6G) wireless communications, reconfigurable intelligent surfaces (RISs) provide the flexibility to control signal strength. Nevertheless, optimizing hundreds of elements is computationally expensive. To overcome this challenge, we present a quantum framework (QGCN) to jointly optimize the physical and electromagnetic response of a double-sided RIS design that incorporates discrete phase shifts and inter-element coupling. The core contribution is the adaptive activation or deactivation of elements, allowing a virtual spacing mechanism using PIN diode switches. We then solve a multi-objective problem that maximizes the minimum user data rate subject to constraints on aperture length and mutual coupling between active elements. Experimental results on IBM Quantum's 127-qubit ibm_kyiv superconducting pro...
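A rough classical analogue of the element-activation idea (without the quantum GNN) can be sketched as a greedy search: activate elements one at a time under a virtual-spacing constraint and pick binary phases to raise the weakest user's received power, used here as a stand-in for the min-rate objective. The channel model and greedy rule are assumptions for illustration.

```python
# Toy classical baseline for element activation: greedily switch RIS elements on
# (PIN-diode style) with a virtual-spacing constraint (no two adjacent active elements),
# choosing a binary phase per element to maximize the minimum user's received power.
# Channel model and greedy rule are illustrative; the paper uses a quantum GNN.
import numpy as np

rng = np.random.default_rng(6)
N, K = 64, 3                                   # RIS elements (on a line), users
h = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))   # cascaded channels per user

active = np.zeros(N, dtype=bool)
phase = np.ones(N, dtype=complex)              # binary phases: +1 or -1

def min_user_power(active, phase):
    contrib = (h * (active * phase)).sum(axis=1)
    return np.min(np.abs(contrib) ** 2)

for _ in range(N):
    best = (min_user_power(active, phase), None, None)
    for n in range(N):
        # Virtual spacing: skip if already active or if a neighbour is active.
        if active[n] or (n > 0 and active[n - 1]) or (n < N - 1 and active[n + 1]):
            continue
        for ph in (1.0, -1.0):
            trial_active, trial_phase = active.copy(), phase.copy()
            trial_active[n], trial_phase[n] = True, ph
            score = min_user_power(trial_active, trial_phase)
            if score > best[0]:
                best = (score, n, ph)
    if best[1] is None:
        break                                  # no further activation helps the weakest user
    active[best[1]], phase[best[1]] = True, best[2]

print(f"active elements: {active.sum()}  min user power: {min_user_power(active, phase):.3f}")
```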
5.Energy-Efficient Hybrid Data Computation via Coordinated AirComp and Edge Offloading
The development of 6G networks brings an increasing variety of data services, which motivates the hybrid computation paradigm that coordinates over-the-air computation (AirComp) and edge computing for diverse and effective data processing. In this paper, we address this emerging issue of hybrid data computation from an energy-efficiency perspective, where the coexistence of both computation types induces resource competition and interference and thus complicates network management. Accordingly, we formulate the problem to minimize the overall energy consumption, including data transmission and computation, subject to the offloading capacity and aggregation accuracy. We then propose a block coordinate descent framework that decomposes and solves the subproblems, including user scheduling, power control, and transceiver scaling, which ar...
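A generic block coordinate descent skeleton conveys the solution structure: fix all but one block of variables, solve for that block (here by 1-D grid search), and cycle. The offloading/energy model, constants, and per-block solvers below are toy assumptions, not the paper's formulation.

```python
# Toy block coordinate descent: alternately update offload fractions and transmit
# powers to minimize a simple transmission-plus-computation energy model.
import numpy as np

rng = np.random.default_rng(2)
U = 4                                  # users
D = rng.uniform(1e6, 4e6, U)           # bits to process per user
g = rng.uniform(1e-7, 1e-6, U)         # channel gains
B, N0 = 1e6, 1e-13                     # bandwidth (Hz), noise power (W)
kappa = 1e-27                          # local computing energy coefficient
cycles_per_bit, f_local = 1e3, 1e9     # CPU cycles per bit, local CPU frequency (Hz)

def energy(a, p):
    """Total energy: radio energy for the offloaded share plus local compute energy."""
    rate = B * np.log2(1.0 + p * g / N0)
    tx = p * (a * D) / rate
    local = kappa * cycles_per_bit * (1.0 - a) * D * f_local**2
    return np.sum(tx + local)

a = np.full(U, 0.5)                    # block 1: offload fractions
p = np.full(U, 0.1)                    # block 2: transmit powers (W)
a_grid = np.linspace(0.0, 1.0, 51)
p_grid = np.linspace(0.01, 1.0, 100)

for it in range(20):
    # Block 1: per-user 1-D search over the offload fraction, other variables fixed.
    for i in range(U):
        cand = [energy(np.r_[a[:i], x, a[i+1:]], p) for x in a_grid]
        a[i] = a_grid[int(np.argmin(cand))]
    # Block 2: per-user 1-D search over transmit power, other variables fixed.
    for i in range(U):
        cand = [energy(a, np.r_[p[:i], x, p[i+1:]]) for x in p_grid]
        p[i] = p_grid[int(np.argmin(cand))]

print(f"energy after BCD: {energy(a, p):.4e} J, offload fractions: {np.round(a, 2)}")
```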
arXiv Quantitative Finance
1.A Herding-Based Model of Technological Transfer and Economic Convergence: Evidence from Central and Eastern Europe
The long-run convergence of developing economies toward advanced countries exhibits robust empirical regularities, yet the mechanisms underlying technological diffusion remain insufficiently specified in standard growth models. In this paper, we extend the neoclassical framework by introducing a micro-founded mechanism of technological transfer as a driver of total factor productivity. Rather than treating technological progress as exogenous or purely innovation-driven, we model productivity growth as a process of adopting existing technologies from the global frontier. The diffusion process is described using a herding-type interaction mechanism, in which agents transition from non-adopters to adopters under the combined influence of individual incentives and peer effects. This approach yields a tractable aggregate representation of TFP ...
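In mean-field form, a herding-type adoption process of this kind reduces to a simple differential equation for the adopter share, with one term for individual incentives and one for peer effects. A minimal simulation follows, with an assumed logistic-style form and illustrative parameters:

```python
# Mean-field sketch of herding-driven technology adoption: the share of adopters A(t)
# grows at a rate combining an individual incentive (alpha) and a peer effect (beta*A).
# Functional form and parameters are illustrative assumptions, not the paper's model.
import numpy as np

alpha, beta = 0.02, 0.45   # individual incentive vs. peer-effect strength
dt, T = 0.1, 600           # Euler step and number of steps
A = 0.01                   # initial adopter share

trajectory = []
for _ in range(T):
    dA = (alpha + beta * A) * (1.0 - A)   # non-adopters convert at rate alpha + beta*A
    A += dt * dA
    trajectory.append(A)

# TFP growth can then be tied to the adoption share, e.g. proportional to dA/dt.
print(f"adopter share after {T * dt:.0f} periods: {trajectory[-1]:.3f}")
```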
2.AI Patents in the United States and China: Measurement, Organization, and Knowledge Flows
We develop a high-precision classifier to measure artificial intelligence (AI) patents by fine-tuning PatentSBERTa on manually labeled data from the USPTO's AI Patent Dataset. Our classifier substantially improves the existing USPTO approach, achieving 97.0% precision, 91.3% recall, and a 94.0% F1 score, and it generalizes well to Chinese patents based on citation and lexical validation. Applying it to granted U.S. patents (1976-2023) and Chinese patents (2010-2023), we document rapid growth in AI patenting in both countries and broad convergence in AI patenting intensity and subfield composition, even as China surpasses the United States in recent annual patent counts. The organization of AI innovation nevertheless differs sharply: U.S. AI patenting is concentrated among large private incumbents and established hubs, whereas Chinese AI p...
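A simplified stand-in for such a classifier can be built from frozen sentence embeddings plus a linear head; the paper instead fine-tunes PatentSBERTa end to end on labeled USPTO data. The checkpoint id and the tiny toy dataset below are assumptions.

```python
# Simplified sketch of an AI-patent classifier: frozen PatentSBERTa-style embeddings plus
# a logistic-regression head, reporting precision/recall/F1 on a held-out split.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

# Toy labeled abstracts (1 = AI patent, 0 = not AI); replace with the USPTO AI Patent Dataset.
texts = [
    "A neural network method for classifying images using convolutional layers.",
    "A reinforcement learning controller for autonomous vehicle navigation.",
    "A transformer-based language model for machine translation.",
    "A hinge assembly for a folding door with improved durability.",
    "A chemical composition for corrosion-resistant pipeline coatings.",
    "A mechanical gear train for a bicycle transmission system.",
] * 10
labels = [1, 1, 1, 0, 0, 0] * 10

model = SentenceTransformer("AI-Growth-Lab/PatentSBERTa")   # assumed checkpoint id
X = model.encode(texts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

prec, rec, f1, _ = precision_recall_fscore_support(y_te, clf.predict(X_te), average="binary")
print(f"precision={prec:.3f} recall={rec:.3f} F1={f1:.3f}")
```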
3.Regime-Aware Specialist Routing for Volatility Forecasting
Volatility forecasting becomes challenging when market conditions change and model performance varies across regimes. Motivated by this instability, we develop a regime-aware specialist routing framework for ETF volatility forecasting. The framework uses online risk-sensitive evaluation and state-dependent gating to combine different forecasting specialists across calm and stressed market states. Using a daily panel of six ETFs under a rolling walk-forward design, we find that the strongest forecaster is regime-dependent rather than global. Relative to the rolling-best baseline, the proposed routing framework reduces high-volatility forecast loss by about 24% and underprediction loss by about 22%. These results suggest that specialist routing provides a practical adaptive forecasting architecture for changing market conditions.
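The routing idea can be sketched compactly: label each day calm or stressed, track each specialist's trailing loss within the current regime, and forecast with whichever specialist has done better there. The regime rule, the two specialists (EWMA and a rolling mean of squared returns), and the synthetic data below are illustrative assumptions, not the paper's design.

```python
# Minimal regime-aware routing between two volatility "specialists": per regime
# (calm vs. stressed), route to whichever specialist has the lower trailing loss.
import numpy as np

rng = np.random.default_rng(3)
T = 1500
# Synthetic returns with a volatility shift to create calm/stressed regimes.
sigma = np.where(np.arange(T) % 500 < 350, 0.01, 0.03)
r = rng.normal(0.0, sigma)
rv = r**2                                        # realized variance proxy (the target)

def ewma_forecast(x, lam=0.94):
    f = np.empty_like(x)
    f[0] = x[0]
    for t in range(1, len(x)):
        f[t] = lam * f[t - 1] + (1 - lam) * x[t - 1]
    return f

def rolling_mean_forecast(x, w=60):
    f = np.empty_like(x)
    for t in range(len(x)):
        f[t] = x[max(0, t - w):t].mean() if t > 0 else x[0]
    return f

specialists = np.stack([ewma_forecast(rv), rolling_mean_forecast(rv)])  # (2, T)
loss = (specialists - rv) ** 2                                          # squared error

# Regime label: stressed if trailing 20-day realized variance exceeds its expanding median.
trail = np.array([rv[max(0, t - 20):t].mean() if t > 0 else rv[0] for t in range(T)])
stressed = np.array([trail[t] > np.median(trail[: t + 1]) for t in range(T)])

routed_loss = []
for t in range(100, T):
    regime_mask = stressed[:t] == stressed[t]                # same regime, past days only
    if not regime_mask.any():
        regime_mask[:] = True                                # fall back to all history
    recent = loss[:, :t][:, regime_mask][:, -60:]            # trailing per-regime losses
    best = int(np.argmin(recent.mean(axis=1)))               # route to the better specialist
    routed_loss.append(loss[best, t])

print(f"routed mean loss: {np.mean(routed_loss):.3e}  "
      f"vs specialists: {loss[0, 100:].mean():.3e}, {loss[1, 100:].mean():.3e}")
```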
4.Global Persistence, Local Residual Structure: Forecasting Heterogeneous Investment Panels
On a 93-actor quarterly panel mixing macro indicators, institutional data, and firm-level investment ratios, global factor augmentation degrades prediction for actor subgroups whose dynamics are misrepresented by the shared basis. A two-stage architecture (a global pooled AR(1) for shared persistence plus block-specific local models for residual dynamics) improves full-panel out-of-sample R² from 0.630 to 0.677 (Δ = +0.047, CI [+0.036, +0.058], 10/10 windows, placebo p ≤ 0.001). A held-out decade test (block partition frozen on 2005–2014 data, evaluated on unseen 2015–2024 windows) confirms the gain (Δ = +0.050, 10/10). Dropping the tech/health block eliminates roughly 72% of the gain, making it the primary driver; rank-matched decomposition confirms this reflects a genuine cross-sector co-movement factor, not a rank-capa...
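A compact sketch of the two-stage construction on synthetic panel data: a pooled AR(1) for shared persistence, then block-specific AR(1) models fit on the pooled residuals, compared on out-of-sample R². The data-generating process, block structure, and split are invented for illustration.

```python
# Two-stage sketch: a pooled AR(1) captures shared persistence across a panel,
# then block-specific AR(1) models are fit on the pooled residuals.
import numpy as np

rng = np.random.default_rng(4)
n_actors, T = 30, 120
blocks = np.repeat([0, 1, 2], 10)           # three blocks of ten actors each

# Shared persistence plus block-specific residual dynamics.
rho_global, rho_block = 0.6, np.array([0.0, 0.3, -0.3])
y = np.zeros((n_actors, T))
for t in range(1, T):
    shared = rho_global * y[:, t - 1]
    local = rho_block[blocks] * (y[:, t - 1] - shared)
    y[:, t] = shared + local + rng.normal(0.0, 0.5, n_actors)

split = 90
x_tr, y_tr = y[:, :split - 1].ravel(), y[:, 1:split].ravel()

# Stage 1: pooled AR(1) coefficient (OLS through the origin).
phi = (x_tr @ y_tr) / (x_tr @ x_tr)

# Stage 2: block-specific AR(1) coefficients fit on the pooled residuals.
resid = y[:, 1:split] - phi * y[:, :split - 1]
psi = np.zeros(3)
for b in range(3):
    rx = y[blocks == b, :split - 1].ravel()
    rr = resid[blocks == b].ravel()
    psi[b] = (rx @ rr) / (rx @ rx)

def oos_r2(pred, actual):
    return 1.0 - np.sum((actual - pred) ** 2) / np.sum((actual - actual.mean()) ** 2)

x_te, y_te = y[:, split - 1:-1], y[:, split:]
pred_global = phi * x_te
pred_two_stage = pred_global + psi[blocks][:, None] * x_te

print(f"OOS R^2 global only : {oos_r2(pred_global, y_te):.3f}")
print(f"OOS R^2 two-stage   : {oos_r2(pred_two_stage, y_te):.3f}")
```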
5.The Geoeconomics of Venture Capital: An Economic Complexity Approach to Emerging Technological Sovereignty
We explore a quantitative approach to emerging technological sovereignty and geoeconomic power by assessing the relative positioning of countries with economic complexity methods applied to the structure of national venture-capital (VC) portfolios and their associated Revealed Venture Advantage (RVA) metrics. Using Crunchbase firm- and deal-level data, we map venture-backed startups to 18 emerging technology domains via a probabilistic multi-label large-language-model classifier, and construct an RVA-based country-technology specialization matrix for the 17 countries with the highest aggregate VC funding. From this matrix, we derive two eigenvector-based measures: a Geoeconomic Complexity Index (GCI) that ranks countries by the composition of their venture specializations, and an Emerging Technology Geoeconomic Complexity Index (ETGCI) th...
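The index construction follows the standard economic-complexity recipe, which a short sketch makes concrete: compute a revealed-advantage ratio from a country-by-technology funding matrix, binarize it, and take the second eigenvector of the normalized country-country matrix. The toy funding matrix and the RVA threshold of 1.0 are assumptions here.

```python
# Sketch of an RVA-based complexity index: build a revealed-advantage matrix from
# country x technology VC funding shares, binarize it, and extract an eigenvector-based
# complexity score (standard ECI construction). The toy data is invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
n_countries, n_tech = 17, 18
funding = rng.gamma(shape=1.0, scale=1.0, size=(n_countries, n_tech))   # VC funding by domain

# Revealed Venture Advantage: share of tech t in country c relative to the global share.
share_ct = funding / funding.sum(axis=1, keepdims=True)
share_t = funding.sum(axis=0) / funding.sum()
rva = share_ct / share_t
M = (rva >= 1.0).astype(float)          # binarized specialization matrix

# Method-of-reflections / eigenvector form of the complexity index.
diversity = M.sum(axis=1)               # k_c,0: number of specializations per country
ubiquity = M.sum(axis=0)                # k_t,0: number of countries per technology
M_tilde = (M / diversity[:, None]) @ (M.T / ubiquity[:, None])
eigvals, eigvecs = np.linalg.eig(M_tilde)
order = np.argsort(-eigvals.real)
k = eigvecs[:, order[1]].real            # second eigenvector carries the ranking

gci = (k - k.mean()) / k.std()           # standardized complexity score per country
print("country complexity scores (standardized):", np.round(gci, 2))
```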
arXiv – 6G & Networking
1.ISAC-Enabled Non-Terrestrial Networks for 6G: Design Principles, Standardization, Performance Tradeoffs, and Use Cases
Non-Terrestrial Networks (NTN) have emerged as a key enabler to fully realize the vision of integrated, intelligent, and ubiquitous connectivity in 6G systems. However, several operational challenges, including severe Doppler effects, interference, and latency, hinder the seamless integration of NTN and Terrestrial Networks (TN). In this context, Integrated Sensing and Communication (ISAC), which unifies sensing and communication functionalities within a common framework, offers great potential to address these challenges while enabling new network capabilities. Due to its complementary functionalities, ISAC can play a pivotal role in enhancing NTN performance, although its practical adoption requires a fundamental rethinking of existing architectural and standardization frameworks. Motivated by this need, this article examines key aspect...
2.Prior-Guided Movable Antenna Control for Agile Multi-Path Sensing (extended version)
Multi-path sensing, which aims to extract the geometric attributes of multiple propagation paths, is expected to be a key functionality of 6G. A movable antenna (MA) can enable this functionality by creating a synthetic aperture through sequential mechanical motion. However, existing MA-based sensing methods typically rely on exhaustive scanning over the entire movable plate, resulting in significant control overhead and sensing latency, which limits their practicality for agile sensing. To address this challenge, this paper develops a prior-guided agile multi-path sensing framework that leverages weak prior angle-of-arrival (AoA) statistics as side information. The proposed framework comprises two steps. First, the movable plate's three-dimensional orientation is optimized only once to maximize path visibility while preserving path discr...
3.Adaptive Structured Sparse Bayesian Learning for Near-Field Non-Stationary Channel Estimation in XL-MIMO Systems
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a key enabler for sixth-generation (6G) communications. However, near-field channel estimation is particularly challenging due to spherical-wave propagation and spatial non-stationarity. To tackle this challenge, we propose a structured sparse Bayesian learning framework with adaptive dictionary updating for near-field non-stationary channel estimation. Specifically, the proposed method iteratively updates the distance parameters within an adaptive dictionary, thereby enhancing the representation capability without increasing the dictionary size. Moreover, we develop a hierarchical prior model that jointly captures polar-domain sparsity and structured dependency, enabling efficient Bayesian inference. Simulation results demonstrate that the proposed approach outperforms exi...
4.Quantum Graph Neural Networks for Double-Sided Reconfigurable Intelligent Surface Optimization
As a key enabler for sixth-generation (6G) wireless communications, reconfigurable intelligent surfaces (RISs) provide the flexibility to control signal strength. Nevertheless, optimizing hundreds of elements is computationally expensive. To overcome this challenge, we present a quantum framework (QGCN) to jointly optimize the physical and electromagnetic response of a double-sided RIS design that incorporates discrete phase shifts and inter-element coupling. The core contribution is the adaptive activation or deactivation of elements, allowing a virtual spacing mechanism using PIN diode switches. We then solve a multi-objective problem that maximizes the minimum user data rate subject to constraints on aperture length and mutual coupling between active elements. Experimental results on IBM Quantum's 127-qubit ibm_kyiv superconducting pro...
5.Energy-Efficient Hybrid Data Computation via Coordinated AirComp and Edge Offloading
The development of 6G networks brings an increasing variety of data services, which motivates the hybrid computation paradigm that coordinates over-the-air computation (AirComp) and edge computing for diverse and effective data processing. In this paper, we address this emerging issue of hybrid data computation from an energy-efficiency perspective, where the coexistence of both computation types induces resource competition and interference and thus complicates network management. Accordingly, we formulate the problem to minimize the overall energy consumption, including data transmission and computation, subject to the offloading capacity and aggregation accuracy. We then propose a block coordinate descent framework that decomposes and solves the subproblems, including user scheduling, power control, and transceiver scaling, which ar...
arXiv – Network Architecture (6G/Slicing)
1.ISAC-Enabled Non-Terrestrial Networks for 6G: Design Principles, Standardization, Performance Tradeoffs, and Use Cases
Non-Terrestrial Networks (NTN) have emerged as a key enabler to fully realize the vision of integrated, intelligent, and ubiquitous connectivity in 6G systems. However, several operational challenges, including severe Doppler effects, interference, and latency, hinder the seamless integration of NTN and Terrestrial Networks (TN). In this context, Integrated Sensing and Communication (ISAC), which unifies sensing and communication functionalities within a common framework, offers great potential to address these challenges while enabling new network capabilities. Due to its complementary functionalities, ISAC can play a pivotal role in enhancing NTN performance, although its practical adoption requires a fundamental rethinking of existing architectural and standardization frameworks. Motivated by this need, this article examines key aspect...
2.Security Implications of 5G Communication in Industrial Systems
Traditionally, industrial control systems (ICS) were designed without security in mind, prioritizing availability and real-time communication. As these systems increasingly become targets of powerful adversaries, security can no longer be neglected. Driven by flexibility and automation needs, ICS are transitioning from wired to 5G communication, introducing new attack surfaces and a less reliable communication medium, thereby exacerbating existing security challenges. Given their critical role in society, a comprehensive evaluation of their security is imperative. To this end, we introduce SWICS, a fully virtual testbed simulating an ICS in a realistic 5G environment, and study how this transition affects security under varying channel conditions. Our results show three key findings: under optimal channel conditions, industrial 5G network...
3.EYWA: Elastic Load-Balancing and High-Availability Wired Virtual Network Architecture
Infrastructure as a Service (IaaS) in cloud environments provides compute, storage, networking, and other fundamental resources that allow consumers to deploy and run arbitrary software, including operating systems and applications. To support multi-tenant environments, IaaS leverages virtualization, but conventional overlay network architectures have become a direct cause of scalability limitations. In particular, current IaaS virtual networks face challenges in high availability and load balancing. To address these issues, we present EYWA, a virtual network architecture that scales to support very large data centers with high availability, efficient load balancing, and large layer-2 semantics. EYWA overcomes scalability limitations by: (1) accommodating a large number of tenants (about 2^24 = 16,777,216) through logically isolated vir...
4.Generative AI Agent Empowered Power Allocation for HAP Propulsion and Communication Systems
High altitude platforms (HAPs) are emerging as a key enabler for 6G coverage, yet limited energy must be split between propulsion and communications. Most prior HAP studies ignore propulsion power or rely on surrogates that miss hull-propeller interference, leading to misestimated communication power budgets and degraded beamforming. More importantly, HAP power allocation is intrinsically a multi-system, multidisciplinary problem in which aerodynamics, propulsion-system efficiency, and communication-system performance (quality of service (QoS) and energy efficiency (EE)) are tightly coupled. To address these challenges, this paper designs an interactive generative artificial intelligence (AI)-empowered HAP power allocation agent. By interacting with the AI agent, we develop an accurate propulsion power consumption model that takes into acco...
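The core trade-off can be illustrated with a toy split of a fixed onboard budget between a propulsion floor and communication power, maximizing bits per joule under a minimum-rate QoS constraint. The propulsion floor, channel constant, and budget below are assumptions and stand in for the paper's aerodynamics-aware model and AI agent.

```python
# Toy power split for a HAP: allocate a fixed budget between propulsion and
# communications, maximizing energy efficiency under a minimum-rate QoS constraint.
import numpy as np

P_total = 2000.0          # onboard power budget (W), assumed
P_prop_min = 1200.0       # minimum propulsion power for station keeping (W), assumed
B = 20e6                  # downlink bandwidth (Hz)
g_over_noise = 0.05       # channel gain over noise power (1/W), assumed
R_min = 50e6              # QoS: minimum downlink rate (bit/s)

best = None
for p_comm in np.linspace(1.0, P_total - P_prop_min, 2000):
    rate = B * np.log2(1.0 + p_comm * g_over_noise)
    if rate < R_min:
        continue                                    # QoS violated at this comm power
    ee = rate / (p_comm + P_prop_min)               # bits per joule
    if best is None or ee > best[0]:
        best = (ee, p_comm, rate)

if best:
    ee, p_comm, rate = best
    print(f"comm power {p_comm:.0f} W, rate {rate / 1e6:.0f} Mbit/s, EE {ee / 1e3:.1f} kbit/J")
else:
    print("QoS infeasible under this budget")
```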
5.FORSLICE: An Automated Formal Framework for Efficient PRB-Allocation towards Slicing Multiple Network Services
Network slicing is a modern 5G technology that provides an efficient network experience for diverse use cases. It is a technique for partitioning a single physical network infrastructure into multiple virtual networks, called slices, each equipped for specific services and requirements. In this work, we particularly deal with radio access network (RAN) slicing and resource allocation to RAN slices. In 5G, physical resource blocks (PRBs) are the fundamental units of radio resources, and our main focus is to allocate PRBs to the slices efficiently. While addressing a spectrum of needs for multiple services or the same services with multiple priorities, we need to ensure two vital system properties: i) fairness to every service type (i.e., providing the required resources and a desired range of throughput) even after prioritizing a particular servic...
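A toy priority-aware allocator illustrates the two properties in miniature: guarantee each slice its minimum PRBs in priority order, share the surplus by weight, and verify that no admitted slice falls below its requirement. The slice parameters and the greedy policy are assumptions, not the paper's formal framework.

```python
# Toy priority-aware PRB allocation across RAN slices: guarantee each slice its
# minimum PRBs in priority order, then share the remainder by weight, and verify the
# fairness property that no admitted slice falls below its requirement.

TOTAL_PRBS = 273   # e.g. a 100 MHz carrier at 30 kHz subcarrier spacing

slices = [  # (name, priority: lower = more urgent, min PRBs required, weight for surplus)
    ("URLLC",    0, 40,  1.0),
    ("eMBB",     1, 120, 3.0),
    ("mMTC",     2, 30,  0.5),
    ("best-eff", 3, 0,   1.0),
]

def allocate(slices, total):
    alloc = {name: 0 for name, *_ in slices}
    remaining = total
    # Phase 1: satisfy minimum requirements in priority order (simple admission control).
    for name, _, req, _ in sorted(slices, key=lambda s: s[1]):
        grant = min(req, remaining)
        alloc[name] = grant
        remaining -= grant
    # Phase 2: distribute the surplus proportionally to slice weights.
    # (Integer rounding may leave a few PRBs unassigned in this sketch.)
    weights = {name: w for name, _, _, w in slices}
    total_w = sum(weights.values())
    for name in alloc:
        alloc[name] += int(remaining * weights[name] / total_w)
    return alloc

alloc = allocate(slices, TOTAL_PRBS)
for name, _, req, _ in slices:
    status = "OK" if alloc[name] >= req else "VIOLATED"
    print(f"{name:9s} required {req:3d}  allocated {alloc[name]:3d}  {status}")
```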