Daily Briefing – Apr 27 (96 Articles)
Babak's Daily Briefing
Monday, April 27, 2026
Sources: 20 | Total Articles: 96
6G World
1.Evaluating 6G PHY Evolution: What the Industry Is Really Trying to Solve
Summary available at source link.
2.Amazon’s Globalstar deal gives Amazon Leo a faster path into D2D
Amazon’s planned acquisition of Globalstar is about far more than satellites. It gives Amazon Leo a faster path into direct-to-device connectivity, combining spectrum, operational assets, and Apple-facing service continuity in a move that could reshape the hybrid terrestrial-NTN landscape.
3.SoftBank’s Physical AI push gives AI-RAN a sharper purpose
SoftBank is starting to give AI-RAN a more concrete job description: not just running AI workloads near the network, but serving as the real-time infrastructure layer for robots and other physical systems. The company’s recent materials suggest it wants to move the AI-RAN conversation from telecom architecture to real-world machine action.
4.South Korea puts 6G inside its national AI push
South Korea has unveiled a three-year national roadmap aimed at becoming one of the world’s top three AI powers by 2028, with 6G commercialization positioned as part of that broader push.
5.b-com’s Open XG Hub targets one of telecom’s biggest gaps: turning experimentation into deployment
In an interview with Peter Pietrzyk, Managing Director of 6GWorld, Patrick Savell, Head of Connectivity at b-com, said platforms such as Open XG Hub are designed to help bridge one of the industry’s most persistent challenges: moving promising ideas from research environments into deployable network systems. The bigger point is that, as telecom becomes more software-driven and AI-native, the bottleneck is increasingly less about invention and more about validation, integration, and operational readiness.
AI Agents
1.CHORUS: An Agentic Framework for Generating Realistic Deliberation Data
Understanding the intricate dynamics of online discourse depends on large-scale deliberation data, a resource that remains scarce across interactive web platforms due to restrictive accessibility policies, ethical concerns and inconsistent data quality. In this paper, we propose Chorus, an agentic framework, which orchestrates LLM-powered actors with behaviorally consistent personas to generate realistic deliberation discussions. Each actor is governed by an autonomous agent equipped with memory of the evolving discussion, while participation timing follows a principled Poisson process-based temporal model, which approximates the heterogeneous engagement patterns of real users. The framework is further supported by structured tool usage, enabling actors to access external resources and facilitating integration with interactive web ...
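The participation-timing idea can be sketched with a homogeneous Poisson process, where inter-arrival times are exponential with mean 1/rate. This is a minimal stand-in for the paper's temporal model; the actor names and rates below are invented for illustration.

```python
import random

def sample_participation_times(rate_per_min, horizon_min, seed=0):
    # Homogeneous Poisson process: exponential inter-arrival times.
    # A per-actor rate approximates heterogeneous engagement patterns.
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_min)
        if t > horizon_min:
            return times
        times.append(t)

# Hypothetical actors: a lurker vs. an active poster over a 2-hour thread.
lurker = sample_participation_times(0.05, 120, seed=1)
poster = sample_participation_times(0.5, 120, seed=2)
```

Sampling each actor independently yields the staggered, bursty timelines a real discussion exhibits; the paper's model is richer, but this captures the core mechanism.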
2.An Agentic Approach to Metadata Reasoning
As LLM-driven autonomous agents evolve to perform complex, multi-step tasks that require integrating multiple datasets, the problem of discovering relevant data sources becomes a key bottleneck. Beyond the challenge posed by the sheer volume of available data sources, data-source selection is difficult because the semantics of data are extremely nuanced and require considering many aspects of the data. To address this, we introduce the Metadata Reasoner, an agentic approach to metadata reasoning, designed to identify a small set of data sources that are both sufficient and minimal for a given analytical task. The Metadata Reasoner leverages a table-search engine to retrieve candidate tables, and then autonomously consults various aspects of the available metadata to determine whether the candidates fit the requirements of the task. We dem...
3.ECLASS-Augmented Semantic Product Search for Electronic Components
Efficient semantic access to industrial product data is a key enabler for factory automation and emerging LLM-based agent workflows, where both human engineers and autonomous agents must identify suitable components from highly structured catalogs. However, the vocabulary mismatch between natural-language queries and attribute-centric product descriptions limits the effectiveness of traditional retrieval approaches, e.g., BM25. In this work, we present a systematic evaluation of LLM-assisted dense retrieval for semantic product search on industrial electronic components, and investigate the integration of hierarchical semantics from the ECLASS standard into embedding-based retrieval. Our results show that dense retrieval combined with re-ranking substantially outperforms classical lexical methods and foundation model web-search baselines....
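The retrieve-then-rerank pipeline the summary evaluates can be sketched in a few lines. The trigram "embedding" below is a toy stand-in for a learned dense encoder, and the term-overlap re-ranker stands in for a cross-encoder; both are assumptions for illustration, not the paper's models.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a dense encoder: normalized character-trigram counts.
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    norm = math.sqrt(sum(v * v for v in grams.values()))
    return {g: v / norm for g, v in grams.items()}

def cosine(a, b):
    return sum(v * b.get(g, 0.0) for g, v in a.items())

def retrieve_then_rerank(query, docs, k=3):
    q = embed(query.lower())
    # Stage 1: dense retrieval of top-k candidates by cosine similarity.
    cands = sorted(docs, key=lambda d: cosine(q, embed(d.lower())),
                   reverse=True)[:k]
    # Stage 2: re-rank candidates (exact-term overlap as a stand-in
    # for a cross-encoder re-ranker).
    q_terms = set(query.lower().split())
    return sorted(cands,
                  key=lambda d: len(q_terms & set(d.lower().split())),
                  reverse=True)
```

The two-stage shape is the point: a cheap recall stage narrows the catalog, and a more precise scorer orders the survivors, which is where the reported gains over BM25 come from.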
4.Mesh Memory Protocol: Semantic Infrastructure for Multi-Agent LLM Systems
Teams of LLM agents increasingly collaborate on tasks spanning days or weeks: multi-day data-generation sprints where generator, reviewer, and auditor agents coordinate in real time on overlapping batches; specialists carrying findings forward across session restarts; product decisions compounding over many review rounds. This requires agents to share, evaluate, and combine each other's cognitive state in real time across sessions. We call this cross-session agent-to-agent cognitive collaboration, distinct from parallel agent execution. To enable it, three problems must be solved together. (P1) Each agent decides field by field what to accept from peers, rather than accepting or rejecting whole messages. (P2) Every claim is traceable to source, so returning claims are recognised as echoes of the receiver's own prior thinking. (P3) Memory that survives ...
5.WebUncertainty: Dual-Level Uncertainty Driven Planning and Reasoning For Autonomous Web Agent
Recent advancements in large language models (LLMs) have empowered autonomous web agents to execute natural language instructions directly on real-world webpages. However, existing agents often struggle with complex tasks involving dynamic interactions and long-horizon execution due to rigid planning strategies and hallucination-prone reasoning. To address these limitations, we propose WebUncertainty, a novel autonomous agent framework designed to tackle dual-level uncertainty in planning and reasoning. Specifically, we design a Task Uncertainty-Driven Adaptive Planning Mechanism that adaptively selects planning modes to navigate unknown environments. Furthermore, we introduce an Action Uncertainty-Driven Monte Carlo tree search (MCTS) Reasoning Mechanism. This mechanism incorporates the Confidence-induced Action Uncertainty (ConActU) str...
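The uncertainty-driven action selection inside the MCTS component can be illustrated with standard UCT scoring. The ConActU term the paper introduces is not specified here, so the generic exploration bonus stands in for it; action names and statistics are invented.

```python
import math

def ucb_select(stats, c=1.4):
    # stats: action -> (visit_count, total_value). Generic UCT selection;
    # the exploration bonus plays the uncertainty-driven role that the
    # paper's confidence-induced term (ConActU) refines.
    total = sum(n for n, _ in stats.values())
    def score(item):
        n, value_sum = item[1]
        if n == 0:
            return float("inf")  # unvisited actions are tried first
        return value_sum / n + c * math.sqrt(math.log(total) / n)
    return max(stats.items(), key=score)[0]

# Hypothetical web-agent actions with (visits, accumulated reward).
ucb_select({"click": (10, 7.0), "type": (3, 2.4), "scroll": (0, 0.0)})
```

Under-explored actions get a larger bonus, so the search spends budget where the agent's value estimate is least certain rather than greedily repeating its best-known action.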
AI Computation & Hardware
1.When Cow Urine Cures Constipation on YouTube: Limits of LLMs in Detecting Culture-specific Health Misinformation
arXiv:2604.22002v1 – Social media platforms have become primary channels for health information in the Global South. Using gomutra (cow urine) discourse on YouTube in India as a case study, we present a post-facto Large Language Model (LLM)-assisted discourse analysis of 30 multilingual transcripts showing that promotional content blends sacred traditional language with pseudo-scientific claims in ways that sophisticated debunking content itself mirrors, creating a rhetorical register that LLMs, trained predominantly on Western corpora, are systematically ill-equipped to analyse. Varying prompt tone across three LLMs (GPT-4o, Gemini 2.5 Pro, DeepSeek-V3.1), we find that culturally embedded health misinformation does not look like ordinary misinformation, and this cultural obfuscation extends to gendered rhetori...
2.Shared Lexical Task Representations Explain Behavioral Variability In LLMs
arXiv:2604.22027v1 – One of the most common complaints about large language models (LLMs) is their prompt sensitivity -- that is, the fact that their ability to perform a task or provide a correct answer to a question can depend unpredictably on the way the question is posed. We investigate this variation by comparing two very different but commonly-used styles of prompting: instruction-based prompts, which describe the task in natural language, and example-based prompts, which provide in-context few-shot demonstration pairs to illustrate the task. We find that, despite large variation in performance as a function of the prompt, the model engages some common underlying mechanisms across different prompts of a task. Specifically, we identify task-specific attention heads whose outputs literally describe the task...
3.Source-Modality Monitoring in Vision-Language Models
arXiv:2604.22038v1 – We define and investigate source-modality monitoring -- the ability of multimodal models to track and communicate the input source from which pieces of information originate. We consider source-modality monitoring as an instance of the more general binding problem, and evaluate the extent to which models exploit syntactic vs. semantic signals in order to bind words like image in a user-provided prompt to specific components of their input and context (i.e., actual images). Across experiments spanning 11 vision-language models (VLMs) performing target-modality information retrieval tasks, we find that both syntactic and semantic signals play an important role, but that the latter tend to outweigh the former in cases when modalities are highly distinct distributionally. We discuss the implica...
4.Lightweight Retrieval-Augmented Generation and Large Language Model-Based Modeling for Scalable Patient-Trial Matching
arXiv:2604.22061v1 – Patient-trial matching requires reasoning over long, heterogeneous electronic health records (EHRs) and complex eligibility criteria, posing significant challenges for scalability, generalization, and computational efficiency. Existing approaches either rely on full-document processing with large language models (LLMs), which is computationally expensive, or use traditional machine learning methods that struggle to capture unstructured clinical narratives. In this work, we propose a lightweight framework that combines retrieval-augmented generation and large language model-based modeling for scalable patient-trial matching. The framework explicitly separates two key components: retrieval-augmented generation is used to identify clinically relevant segments from long EHRs, reducing input com...
5.Incentivizing Neuro-symbolic Language-based Reasoning in VLMs via Reinforcement Learning
arXiv:2604.22062v1 – There are 7,407 languages in the world. But, what about the languages that are not there in the world? Are humans so narrow-minded that we don't care about the languages aliens communicate in? Aliens are humans too! In the 2016 movie Arrival, Amy Adams plays a linguist, Dr. Louise Banks who, by learning to think in an alien language (Heptapod) formed of non-sequential sentences, gains the ability to transcend time and look into the future. In this work, I aim to explore the representation and reasoning of vision-language concepts in a neuro-symbolic language, and study improvement in analytical reasoning abilities and efficiency of "thinking systems". With Qwen3-VL-2B-Instruct as base model and 4 $\times$ Nvidia H200 GPU nodes, I achieve an accuracy improvement of 3.33\% on a vision-languag...
AI Machine Learning
1.Focus Session: Hardware and Software Techniques for Accelerating Multimodal Foundation Models
arXiv:2604.21952v1 – This work presents a multi-layered methodology for efficiently accelerating multimodal foundation models (MFMs). It combines hardware and software co-design of transformer blocks with an optimization pipeline that reduces computational and memory requirements. During model development, it employs performance enhancements through fine-tuning for domain-specific adaptation. Our methodology further incorporates hardware and software techniques for optimizing MFMs. Specifically, it employs MFM compression using hierarchy-aware mixed-precision quantization and structural pruning for transformer blocks and MLP channels. It also optimizes operations through speculative decoding, model cascading that routes queries through a small-to-large cascade and uses lightweight self-tests to determine when to...
2.Performance Anomaly Detection in Athletics: A Benchmarking System with Visual Analytics
arXiv:2604.21953v1 – Anti-doping programs rely on biological testing to detect performance-enhancing drugs, but such testing costs over $800 per sample and is limited by short detection windows for many prohibited substances. These constraints leave large portions of athletes without regular testing, motivating complementary screening approaches that analyze routine competition results to identify suspicious performance patterns. We present a system that processes 1.6 million athletics performances from over 19,000 competitions (2010-2025) using eight detection methods ranging from statistical rules to machine learning and trajectory analysis. We validate all methods against publicly confirmed anti-doping violations to measure their effectiveness in identifying sanctioned athletes. Trajectory-based methods, whic...
3.Conditional anomaly detection using soft harmonic functions: An application to clinical alerting
arXiv:2604.21956v1 – Timely detection of concerning events is an important problem in clinical practice. In this paper, we consider the problem of conditional anomaly detection that aims to identify data instances with an unusual response, such as the omission of an important lab test. We develop a new non-parametric approach for conditional anomaly detection based on the soft harmonic solution, with which we estimate the confidence of the label to detect anomalous mislabeling. We further regularize the solution to avoid the detection of isolated examples and examples on the boundary of the distribution support. We demonstrate the efficacy of the proposed method in detecting unusual labels on a real-world electronic health record dataset and compare it to several baseline approaches.
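The soft harmonic idea can be sketched on a toy graph: node scores are pulled toward their neighbors and, softly, toward their observed labels, and a node whose label disagrees with its smoothed score is flagged. The graph, labels, and gamma below are invented; the paper's construction and regularization are richer.

```python
def soft_harmonic(adj, y, gamma=0.1, iters=500):
    # Fixed-point iteration for the soft harmonic solution of
    #   min_f  f' L f + gamma * ||f - y||^2
    # on a weighted graph, i.e.
    #   f_i = (sum_j w_ij f_j + gamma * y_i) / (deg_i + gamma).
    n = len(y)
    f = list(y)
    for _ in range(iters):
        f = [(sum(adj[i][j] * f[j] for j in range(n)) + gamma * y[i])
             / (sum(adj[i]) + gamma)
             for i in range(n)]
    return f

# Path graph of 6 records; node 3 carries a label that disagrees with
# all of its neighbors, so its harmonic confidence exposes it.
n = 6
adj = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]
y = [1, 1, 1, 0, 1, 1]
f = soft_harmonic(adj, y)
anomaly = max(range(n), key=lambda i: abs(f[i] - y[i]))
```

A small gamma trusts the graph's smoothness over individual labels, which is exactly what lets a mislabeled (or genuinely anomalous) response stand out.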
4.Multi-Task Optimization over Networks of Tasks
arXiv:2604.21991v1 – Multi-task optimization is a powerful approach for solving a large number of tasks in parallel. However, existing algorithms face distinct limitations: Population-based methods scale poorly and remain underexplored for large task sets. Approaches that do scale beyond a thousand tasks are mostly MAP-Elites variants and rely on a fixed, discretized archive that disregards the topology of the task space. We introduce MONET (Multi-Task Optimization over Networks of Tasks), a multi-task optimization algorithm that models the task space as a graph: tasks are nodes, and edges connect tasks in the task parameter space. This representation enables knowledge transfer between tasks and remains tractable for high-dimensional problems while exploiting the topology of the task space. MONET combines social...
5.When Quotes Crumble: Detecting Transient Mechanical Liquidity Erosion in Limit Order Books
arXiv:2604.21993v1 – We study the detection of transient liquidity erosion ("crumbling quotes") in electronic limit order books, where observable quote deterioration may reflect either mechanical liquidity withdrawal or informational repricing. Using the ABIDES agent-based simulator, we construct a multi-agent environment in which crumbling emerges from stochastic regime switches in a market maker, providing time-resolved ground truth unavailable in real market data. We develop a detection pipeline that identifies mechanically driven quote erosion using order book features, and train a neural model to produce calibrated crumbling probabilities. Experiments demonstrate that the proposed framework reliably identifies crumbling events against agent-level ground truth, with the neural model achieving +36% AUC improv...
AI Robotics
1.Robust Localization for Autonomous Vehicles in Highway Scenes
arXiv:2604.22040v1 – Localization for autonomous vehicles on highways remains under-explored compared to urban roads, and state-of-the-art methods for urban scenes degrade when directly applied to highways. We identify key challenges including environment changes under information homogeneity, heavy occlusion, degraded GNSS signals, and stringent downstream requirements on accuracy and latency. We propose a robust localization system to address highway challenges, which uses a dual-likelihood LiDAR front end that decouples 3D geometric structures and 2D road-texture cues to handle environment changes; a Control-EKF further leverages steering and acceleration commands to reduce lag and improve closed-loop behavior. An automated offline mapping and ground-truth pipeline keeps maps fresh at high cadence for optimal ...
2.SNGR: Selective Non-Gaussian Refinement for Ambiguous SLAM Factor Graphs
arXiv:2604.22065v1 – We present Selective Non-Gaussian Refinement (SNGR), a SLAM framework that augments iSAM2 with targeted nested sampling on windows where Gaussian approximations are likely to fail. We detect such regions using the condition number of joint marginal covariances and selectively refine them using the full nonlinear factor graph likelihood, with a gating mechanism to avoid degradation in multimodal cases. Experiments on range-only SLAM with wrong data association show that SNGR achieves high-precision failure detection and consistent local likelihood improvements while reducing computational cost relative to exhaustive non-Gaussian inference. These results highlight both the promise and the limitations of selective refinement for approximate SLAM posteriors.
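The condition-number gate is easy to sketch for a symmetric 2x2 marginal covariance: a large ratio of eigenvalues signals an ill-conditioned (stretched) posterior where a Gaussian approximation is suspect. The threshold value here is an invented placeholder, not the paper's.

```python
import math

def cond_2x2(cov):
    # Condition number (ratio of eigenvalues) of a symmetric 2x2 covariance,
    # via the closed-form eigenvalues (tr/2 +/- sqrt(tr^2/4 - det)).
    (a, b), (_, d) = cov
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
    return float("inf") if lam_min <= 0 else lam_max / lam_min

def needs_refinement(cov, threshold=1e3):
    # Gate: only ill-conditioned windows get the expensive
    # nested-sampling refinement.
    return cond_2x2(cov) > threshold
```

Gating on a cheap scalar like this is what keeps selective refinement tractable relative to exhaustive non-Gaussian inference.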
3.Wiggle and Go! System Identification for Zero-Shot Dynamic Rope Manipulation
arXiv:2604.22102v1 – Many robotic tasks are unforgiving; a single mistake in a dynamic throw can lead to unacceptable delays or unrecoverable failure. To mitigate this, we present a novel approach that leverages learned simulation priors to inform goal-conditioned dynamic manipulation of ropes for efficient and accurate task execution. Related methods for dynamic rope manipulation either require large real-world datasets to estimate rope behavior or the use of iterative improvements on attempts at the task for goal completion. We introduce Wiggle and Go!, a system-identification, two-stage framework that enables zero-shot dynamic rope manipulation. The framework consists of a system identification module that observes rope movement to predict descriptive physical parameters, which then informs an optimization metho...
4.Dynamic Coupling and Indirect Control of Jointed Robots Rolling Atop A Moving Platform
arXiv:2604.22104v1 – An asymmetric two-link robot supported atop a flat platform by wheels that roll and pivot freely, but do not slip laterally, will develop forward momentum if the joint between the links is actuated internally. In particular, oscillations in the joint angle will generate undulatory locomotion suggesting fishlike swimming. If two such robots surmount a common platform that's free to translate with its own inertial dynamics, then the individual robots' dynamics will be coupled so that the locomotion of either robot is affected by that of the other. We develop a mathematical model for this system and present simulations demonstrating its behavior. We then consider a single robot with an unactuated joint rolling atop a platform that moves under control, and show that actuation of the platform is ...
5.dWorldEval: Scalable Robotic Policy Evaluation via Discrete Diffusion World Model
arXiv:2604.22152v1 – Evaluating robotics policies across thousands of environments and thousands of tasks is infeasible with existing approaches. This motivates the need for a new methodology for scalable robotics policy evaluation. In this paper, we propose dWorldEval, which uses a discrete diffusion world model as a scalable evaluation proxy for robotics policies. Specifically, dWorldEval maps all modalities, including vision, language, and robotic actions, into a unified token space, modeling them via a single transformer-based denoising network. ...
Financial AI
1.Revealing Geography-Driven Signals in Zone-Level Claim Frequency Models: An Empirical Study using Environmental and Visual Predictors
Geographic context is often considered relevant to motor insurance risk, yet public actuarial datasets provide limited location identifiers, constraining how this information can be incorporated and evaluated in claim-frequency models. This study examines how geographic information from alternative data sources can be incorporated into actuarial models for Motor Third Party Liability (MTPL) claim prediction under such constraints. Using the BeMTPL97 dataset, we adopt a zone-level modeling framework and evaluate predictive performance on unseen postcodes. Geographic information is introduced through two channels: environmental indicators from OpenStreetMap and CORINE Land Cover, and orthoimagery released by the Belgian National Geographic Institute for academic use. We evaluate the predictive contribution of coordinates, environmental feat...
2.Early Detection of Latent Microstructure Regimes in Limit Order Books
Limit order books can transition rapidly from stable to stressed conditions, yet standard early-warning signals such as order flow imbalance and short-term volatility are inherently reactive. We formalise this limitation via a three-regime causal data-generating process (stable $\to$ latent build-up $\to$ stress) in which a latent deterioration phase creates a prediction window prior to observable stress. Under mild assumptions on temporal drift and regime persistence, we establish identifiability of the latent build-up regime and derive guarantees for strictly positive expected lead-time and non-trivial probability of early detection. We propose a trigger-based detector combining MAX aggregation of complementary signal channels, a rising-edge condition, and adaptive thresholding. Across 200 simulations, the method achieves mean lead-time...
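The trigger mechanics described above (MAX aggregation across channels, a rising-edge condition, adaptive thresholding) can be sketched directly. The window length, threshold rule (trailing mean + k*std), and toy channels are assumptions for illustration, not the paper's calibration.

```python
import statistics

def crumble_trigger(channels, window=50, k=3.0):
    # MAX-aggregate the signal channels at each tick, adapt the threshold
    # from a trailing window, and fire only on a rising edge
    # (below-threshold -> above-threshold transition).
    agg = [max(vals) for vals in zip(*channels)]
    fired, above_prev = [], False
    for t in range(len(agg)):
        hist = agg[max(0, t - window):t]
        if len(hist) < 2:
            above_prev = False
            continue
        thr = statistics.mean(hist) + k * statistics.pstdev(hist)
        above = agg[t] > thr
        if above and not above_prev:
            fired.append(t)
        above_prev = above
    return fired

# Two hypothetical signal channels; channel 1 spikes at t=30.
ch1 = [1.0] * 30 + [10.0] + [1.0] * 10
ch2 = [1.0] * 41
crumble_trigger([ch1, ch2])  # fires once, at the rising edge t=30
```

The rising-edge condition is what keeps a sustained stressed regime from re-triggering on every tick, which is the point of pairing it with an adaptive threshold.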
3.The Virtue of Sparsity in Complexity
Sparsity or complexity? In modern high-dimensional asset pricing, these are often viewed as competing principles: richer feature spaces appear to favor complexity, while economic intuition has long favored parsimony. We show that this tension is misplaced. We distinguish capacity sparsity (the dimensionality of the candidate feature space) from factor sparsity (the parsimonious structure of priced risks), and argue that the two are complements: expanding capacity enables the discovery of factor sparsity. Revisiting the benchmark empirical design of Didisheim et al. (2025) and pushing it to higher complexity regimes, we show that nonlinear feature expansions combined with basis pursuit yield portfolios whose out-of-sample performance dominates ridgeless benchmarks beyond a critical complexity threshold. The evidence shows that the gains from co...
4.The CTLNet for Shanghai Composite Index Prediction
Shanghai Composite Index prediction has drawn sustained attention from investors and academic researchers. Deep learning models are widely applied in multivariate time series forecasting, including recurrent neural networks (RNN), convolutional neural networks (CNN), and transformers. Specifically, the Transformer encoder, with its unique attention mechanism and parallel processing capabilities, has become an important tool in time series prediction, and has an advantage in dealing with long sequence dependencies and multivariate data correlations. Drawing on the strengths of various models, we propose the CNN-Transformer-LSTM Network (CTLNet). This paper explores the application of CTLNet for Shanghai Composite Index prediction, and comparative experiments show that the proposed model outperforms state-of-the-art baselines.
5.Spurious Predictability in Financial Machine Learning
Adaptive specification search generates statistically significant backtests even under martingale-difference nulls. We introduce a falsification audit testing complete predictive workflows against synthetic reference classes, including zero-predictability environments and microstructure placebos. Workflows generating significant walk-forward evidence in these environments are falsified. For passing workflows, we quantify selection-induced performance inflation using an absolute magnitude gap linking optimized in-sample evidence to disjoint walk-forward realizations, adjusted for effective multiplicity. Simulations validate extreme-value scaling under correlated searches and demonstrate detection power under genuine structure. Empirical case studies confirm that many apparent findings represent methodological artifacts rather than genuine ...
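The shuffle-placebo flavor of this audit can be sketched simply: score the same workflow on shuffled (zero-predictability) versions of the series, and treat a real score that is not in the extreme tail of the placebo scores as suspect. The paper's audit covers complete workflows and richer reference classes; the toy "momentum" signal and series below are invented.

```python
import random

def placebo_audit(workflow, returns, n_placebos=200, seed=0):
    # Score the workflow on the real series and on shuffled placebos;
    # the empirical p-value is the fraction of placebos scoring at least
    # as high as the real series.
    rng = random.Random(seed)
    real = workflow(returns)
    placebo = []
    for _ in range(n_placebos):
        shuffled = returns[:]
        rng.shuffle(shuffled)
        placebo.append(workflow(shuffled))
    p_value = sum(s >= real for s in placebo) / n_placebos
    return real, p_value

# Toy "signal": lag-1 autocovariance of a strongly trending +/-1 series.
momentum = lambda r: sum(r[i] * r[i + 1] for i in range(len(r) - 1))
series = [1, 1, 1, -1, -1, -1] * 10
real, p = placebo_audit(momentum, series)
```

A workflow that also scores well on the shuffled series is falsified: whatever it found survives the destruction of temporal structure, so it cannot be genuine predictability.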
GSMA Newsroom
1.GSMA Report Urges Japan to Take Bold Action to Convert Technical Excellence into Global Digital Leadership
Summary available at source link.
2.From Rich Text to Video: RCS Universal Profile 4.0 has arrived
Summary available at source link.
3.Mobile Money accounted for $2 trillion in transactions in 2025, doubling since 2021 as active accounts continue to grow
Summary available at source link.
4.Strengthening the Global Fight Against Fraud and Scams – Takeaways from the Global Fraud Summit in Vienna
Summary available at source link.
5.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
Generative AI (arXiv)
1.Rethinking Math Reasoning Evaluation: A Robust LLM-as-a-Judge Framework Beyond Symbolic Rigidity
Recent advancements in large language models have led to significant improvements across various tasks, including mathematical reasoning, which is used to assess models' intelligence in logical reasoning and problem-solving. Models are evaluated on mathematical reasoning benchmarks by verifying the correctness of the final answer against a ground truth answer. A common approach for this verification is based on symbolic mathematics comparison, which fails to generalize across diverse mathematical representations and solution formats. In this work, we offer a robust and flexible alternative to rule-based symbolic mathematics comparison. We propose an LLM-based evaluation framework for evaluating model-generated answers, enabling accurate evaluation across diverse mathematical representations and answer formats. We present failure cases of ...
2.Learning Evidence Highlighting for Frozen LLMs
Large Language Models (LLMs) can reason well, yet often miss decisive evidence when it is buried in long, noisy contexts. We introduce HiLight, an Evidence Emphasis framework that decouples evidence selection from reasoning for frozen LLM solvers. HiLight avoids compressing or rewriting the input, which can discard or distort evidence, by training a lightweight Emphasis Actor to insert minimal highlight tags around pivotal spans in the unaltered context. A frozen Solver then performs downstream reasoning on the emphasized input. We cast highlighting as a weakly supervised decision-making problem and optimize the Actor with reinforcement learning using only the Solver's task reward, requiring no evidence labels and no access to or modification of the Solver. Across sequential recommendation and long-context question answering, HiLight cons...
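The emphasis step itself is mechanically simple: wrap the selected character spans in markers without otherwise touching the context, so the frozen solver sees the original text plus highlights. The tag strings and example below are illustrative assumptions; which spans to highlight is what HiLight's Actor learns.

```python
def emphasize(context, spans, open_tag="<hl>", close_tag="</hl>"):
    # Insert highlight tags around character spans [(start, end), ...]
    # without altering the surrounding text. Spans are assumed
    # non-overlapping.
    out, prev = [], 0
    for start, end in sorted(spans):
        out.append(context[prev:start])
        out.append(open_tag + context[start:end] + close_tag)
        prev = end
    out.append(context[prev:])
    return "".join(out)

emphasize("Paris is the capital of France.", [(0, 5)])
# -> "<hl>Paris</hl> is the capital of France."
```

Because the context is never compressed or rewritten, evidence cannot be discarded or distorted; the solver just gets a hint about where to look.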
3.CGC: Compositional Grounded Contrast for Fine-Grained Multi-Image Understanding
Although Multimodal Large Language Models (MLLMs) have advanced rapidly, they still face notable challenges in fine-grained multi-image understanding, often exhibiting spatial hallucination, attention leakage, and failures in object constancy. In addition, existing approaches typically rely on expensive human annotations or large-scale chain-of-thought (CoT) data generation. We propose Compositional Grounded Contrast (abbr. CGC), a low-cost full framework for boosting fine-grained multi-image understanding of MLLMs. Built on existing single-image grounding annotations, CGC constructs compositional multi-image training instances through Inter-Image Contrast and Intra-Image Contrast, which introduce semantically decoupled distractor contexts for cross-image discrimination and correlated cross-view samples for object constancy, respectively....
4.DM-ASR: Diarization-aware Multi-speaker ASR with Large Language Models
Multi-speaker automatic speech recognition (ASR) aims to transcribe conversational speech involving multiple speakers, requiring the model to capture not only what was said, but also who said it and sometimes when it was spoken. Recent Speech-LLM approaches have shown the potential of unified modeling for this task, but jointly learning speaker attribution, temporal structure, and lexical recognition remains difficult and data-intensive. At the current stage, leveraging reliable speaker diarization as an explicit structural prior provides a practical and efficient way to simplify this task. To effectively exploit such priors, we propose DM-ASR, a diarization-aware multi-speaker ASR framework that reformulates the task as a multi-turn dialogue generation process. Given an audio chunk and diarization results, DM-ASR decomposes transcription...
5.Superminds Test: Actively Evaluating Collective Intelligence of Agent Society via Probing Agents
Collective intelligence refers to the ability of a group to achieve outcomes beyond what any individual member can accomplish alone. As large language model agents scale to populations of millions, a key question arises: Does collective intelligence emerge spontaneously from scale? We present the first empirical evaluation of this question in a large-scale autonomous agent society. Studying MoltBook, a platform hosting over two million agents, we introduce Superminds Test, a hierarchical framework that probes society-level intelligence using controlled Probing Agents across three tiers: joint reasoning, information synthesis, and basic interaction. Our experiments reveal a stark absence of collective intelligence. The society fails to outperform individual frontier models on complex reasoning tasks, rarely synthesizes distributed informat...
Hugging Face Daily Papers
1.Quality-Driven Selective Mutation for Deep Learning
Mutants support testing and debugging in two roles: (i) as test goals and (ii) as substitutes for real faults. Hard-to-kill mutants provide better guidance for test improvement, while realism is essential when mutants are used to simulate real bugs. Building on these roles, selective mutation for deep learning (DL) aims to reduce the cost of mutant generation and execution by choosing operator configurations that yield resistant and realistic mutants. However, the DL literature lacks a unified measure that captures both aspects. This study presents a probabilistic framework to quantify mutant quality along two complementary axes: resistance and realism. Resistance adapts the classical notion of hard-to-kill mutants to the DL setting using statistical killing probabilities, while realism is measured via the generalized Jaccard similarity b...
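The realism measure named above, generalized Jaccard similarity, has a standard definition for non-negative feature vectors: the sum of element-wise minima over the sum of element-wise maxima. Which feature vectors the paper compares is not specified here; this is just the metric itself.

```python
def generalized_jaccard(a, b):
    # Generalized Jaccard similarity of two non-negative weight vectors,
    # given as {feature: weight} dicts:
    #   sum_k min(a_k, b_k) / sum_k max(a_k, b_k).
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 1.0
```

It reduces to ordinary Jaccard similarity when the weights are 0/1, and equals 1.0 only when the two vectors match exactly.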
2.Multi-output Extreme Spatial Model for Complex Aircraft Production Systems
Problem definition: Data-driven models in machine learning have enabled efficient management of production systems. However, a majority of machine learning models are devoted to modeling the mean response or average pattern, which is inappropriate for studying abnormal extreme events that are often of primary interest in aircraft manufacturing. Since extreme events from heavy-tailed distributions give rise to prohibitive expenditures in system management, sophisticated extreme models are urgently needed to analyze complex extreme risks. Engineering applications of extreme models usually focus on individual extreme events, which is insufficient for complex systems with correlations. Methodology/results: We introduce an extreme spatial model for multi-output response control systems that efficiently captures the dynamics using a bilinear fu...
3.Fine-Tuning Regimes Define Distinct Continual Learning Problems
Continual learning (CL) studies how models acquire tasks sequentially while retaining previously learned knowledge. Despite substantial progress in benchmarking CL methods, comparative evaluations typically keep the fine-tuning regime fixed. In this paper, we argue that the fine-tuning regime, defined by the trainable parameter subspace, is itself a key evaluation variable. We formalize adaptation regimes as projected optimization over fixed trainable subspaces, showing that changing the trainable depth alters the effective update signal through which both current task fitting and knowledge preservation operate. This analysis motivates the hypothesis that method comparisons need not be invariant across regimes. We test this hypothesis in task-incremental CL across five trainable-depth regimes and four standard methods: online EWC, LwF, SI, and...

4.Task-specific Subnetwork Discovery in Reinforcement Learning for Autonomous Underwater Navigation
Autonomous underwater vehicles are required to perform multiple tasks adaptively and in an explainable manner under dynamic, uncertain conditions and limited sensing, challenges that classical controllers struggle to address. This demands robust, generalizable, and inherently interpretable control policies for reliable long-term monitoring. Reinforcement learning, particularly multi-task RL, overcomes these limitations by leveraging shared representations to enable efficient adaptation across tasks and environments. However, while such policies show promising results in simulation and controlled experiments, they remain opaque and offer limited insight into the agent's internal decision-making, creating gaps in transparency, trust, and safety that hinder real-world deployment. The internal policy structure and task-specific specializa...
5.To See the Unseen: on the Generalization Ability of Transformers in Symbolic Reasoning
We investigate the ability of decoder-only transformer models to perform abstract symbolic reasoning; specifically solving propositional logic reasoning problems given in-context. Previous work demonstrated that models fail to generalize to problems involving variable names that were not observed during training, and it was shown that one reason behind this is the difficulty of copying (or generating) unseen tokens. We show both theoretically and empirically that a particular representational collapse also has a crucial role: the unembeddings (last-layer weights) of unseen tokens collapse to nearly the same vector during training. The collapse makes distinguishing multiple unseen variables difficult for the model (especially when the embedding and unembedding parameters are shared), and provides a mechanistic explanation for the effective...
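The collapse phenomenon the abstract describes, unseen-token unembedding rows converging to nearly the same vector, can be probed with a simple pairwise-cosine diagnostic. The matrices below are synthetic stand-ins, not the paper's actual model weights:

```python
import numpy as np

def mean_pairwise_cosine(W):
    """Average pairwise cosine similarity among rows of an unembedding
    sub-matrix; values near 1.0 indicate representational collapse."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    S = Wn @ Wn.T
    n = len(W)
    return float(S[~np.eye(n, dtype=bool)].mean())

rng = np.random.default_rng(0)
base = rng.normal(size=64)
# Toy "collapsed" unseen-token rows: one shared direction plus tiny noise.
collapsed = base + 0.01 * rng.normal(size=(5, 64))
# Toy "healthy" rows: independent random directions (near-orthogonal).
healthy = rng.normal(size=(5, 64))
```

On these toy matrices, `mean_pairwise_cosine(collapsed)` sits near 1.0 while `mean_pairwise_cosine(healthy)` sits near 0, mirroring the kind of gap a collapse diagnosis would look for.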
IEEE Xplore AI
1.Engineering Collisions: How NYU Is Remaking Health Research
This sponsored article is brought to you by NYU Tandon School of Engineering. The traditional approach to academic research goes something like this: Assemble experts from a discipline, put them in a building, and hope something useful emerges. Biology departments do biology. Engineering departments do engineering. Medical schools treat patients. NYU is turning that model inside out. At its new Institute for Engineering Health, the organizing principle centers on disease states rather than traditional disciplines. Instead of asking “what can electrical engineers contribute to medicine?,” they’re asking “what would it take to cure allergic asthma?,” and then assembling whoever can answer that question, whether they’re immunologists, computational biologists, materials scientists, AI researchers, or wireless communications engineers. ...
2.Modeling and Simulation Approaches for Modern Power System Studies
This webinar covers power system modeling and simulation across multiple timescales, from quasi-static 8760 analysis through EMT studies, fault classification, and inverter-based resource grid integration. What attendees will learn: Programmatic network construction and multi-fidelity modeling — learn how to build power system networks programmatically from standard data formats, configure models for specific engineering objectives, and work across fidelity levels from quasi-static phasor simulation through switched-linear and nonlinear electromagnetic transient (EMT) analysis. Quasi-static and EMT simulation workflows — explore 8760-hour quasi-static simulation on an IEEE 123-node distribution feeder for annual energy studies, and EMT simulation on transmission system benchmarks including generator trip dynamics and asset relocation witho...
3.What Anthropic’s Mythos Means for the Future of Cybersecurity
Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies. The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its ...
4.AI Designs Thermoelectric Generators 10,000 Times Faster Than We Can
Waste heat is everywhere: car engines, industrial machinery, kitchen appliances—even your own body. Some of that lost energy can be converted into electricity using thermoelectric generators: compact, solid-state devices that produce power directly from temperature differences without the need for spinning turbines or moving parts. But designing materials that make these systems efficient has long been an engineering slog, requiring slow simulations and painstaking experiments to identify combinations that conduct electricity while limiting unwanted heat flow. Now researchers in Japan have built an artificial-intelligence tool that can design thermoelectric generators 10,000 times faster than conventional approaches. Prototypes built based on the tool’s recommendations performed on par with today’s leading thermoelectric devices, the st...
5.AI Agent Designs a RISC-V CPU Core From Scratch
In 2020, researchers fine-tuned a GPT-2 model to design fragments of logic circuits; in 2023, researchers used GPT-4 to help design an 8-bit processor with a novel instruction set; by 2024, a variety of LLMs could design and test chips with basic functionality, like dice rolls (though often these were flawed). Now Verkor.io, an AI chip design startup, claims a bigger milestone: a RISC-V CPU core designed entirely by an agentic AI system. The CPU, dubbed VerCore, has a clock speed of 1.5 gigahertz and performance similar to a 2011-era laptop CPU. Suresh Krishna, cofounder at Verkor.io, says the team’s key claim is that this approach is more effective than using only specialized AI systems for specialized tasks within the overall design process. “What we learned is that the better approach is to let the AI agent solve the whole problem,...
MIT Sloan Management
1.Why Adventure Matters in Long Working Lives
In my ongoing exploration of the patterns and changes in how people approach their working lives, I’ve found myself looking back at my own memories from over five decades of work. What stands out is not simply the steady progression of roles and achievements but the disproportionate impact of recurring moments of […]
2.How to Slay the Chaos Dragon
In my first job out of college, I had a frenetic boss whom we’ll call Don. Don was all over the place in a quite literal sense: running from desk to desk across the office, talking to people here and there, dashing in and out for cigarettes all day. […]
3.Why Business Leaders Need to Champion Democracy
Democracy is in decline across the world. More countries are experiencing erosion of political rights and civil liberties than gains, according to Freedom House. As of 2025, 92 countries, representing 74% of the world’s population, were classified as autocracies by the V-Dem Institute. Democratic backsliding is a primary concern […]
4.Industrial AI for the Physical World: Siemens’s Peter Koerte
In this episode of the Me, Myself, and AI podcast, host Sam Ransbotham talks with Peter Koerte, a member of the managing board and chief strategy and technology officer of Siemens, about how industrial AI is quietly transforming the infrastructure that powers everyday life. While consumer AI grabs headlines, Peter explains how artificial intelligence is […]
5.Beyond the Model — Why Responsible AI Must Address Workforce Impact
For the fifth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In prior years, we examined organizational RAI maturity; third-party, generative, and […]
NBER Working Papers
1.Market Power in Mortgage Pricing: the Role of Referral Lending -- by Dayin Zhang, Panle Jia Barwick, Lu Han, Jonathan Kroah
Despite intense competition among mortgage lenders, borrowers face substantial price dispersion. We argue that realtor–loan officer referral networks are a key source of lender market power: by steering homebuyers toward a small set of loan officers, these networks restrict effective borrower choice even in competitive markets. Using a novel dataset linking 81,306 realtors to 102,860 loan officers across 41 states, we document that such networks are pervasive and highly concentrated — 85% of realtors direct over 40% of their clients to fewer than four loan officers — and that this concentration persists and even increases in markets with more lenders. IV estimates indicate that borrowers using referred loan officers pay 18.6 basis points higher mortgage rates, equivalent to $2,609 in upfront costs on the average loan of $306K. Referral le...
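As a back-of-envelope check on the paper's headline numbers: 18.6 basis points on the average $306K loan is roughly $569 per year, which matches the quoted $2,609 upfront figure if one assumes an effective loan life of about 4.6 years. The duration here is an illustrative assumption; the paper's exact capitalization method may differ.

```python
# Hedged back-of-envelope: convert a rate premium into an upfront-cost
# equivalent, undiscounted, under an ASSUMED effective loan duration.
rate_premium_bp = 18.6            # IV estimate from the paper
balance = 306_000                 # average loan size from the paper
extra_per_year = rate_premium_bp / 10_000 * balance   # ~= $569 per year

assumed_duration_years = 4.6      # hypothetical effective loan life (prepayment-adjusted)
upfront_equivalent = extra_per_year * assumed_duration_years  # ~= $2,600
```

With discounting or a different prepayment assumption the implied duration would shift, but the order of magnitude of the $2,609 figure is consistent with this simple product.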
2.Trends in Health Inequalities among Spanish Retirees -- by Cristina Bellés-Obrero, Manuel Flores Mallo, Pilar García-Gómez, Sergi Jimenez-Martin, Judit Vall Castelló
Spain, with one of the highest life expectancies globally and a rapidly ageing population, faces growing challenges in sustaining its pension, healthcare, and long-term care systems. This study examines trends in health inequalities among retired Spaniards from 2004 to 2022, using eight waves of the Survey of Health, Ageing and Retirement in Europe (SHARE). We analyse five health outcomes—limitations in daily and instrumental activities, number of chronic conditions, a composite health deficiency index, mental health (EURO-D scale), and cognitive performance—and use linear regression to assess income-related gradients, adjusted for age and sex. We also compute a catch-up time measure—the number of years a poorer individual would need to reach the same level of health as a richer individual—and concentration indices of bad health. We then ...
3.China's Global Ownership -- by Jennie Bai, Luc Laeven, Yaojun Ke, Hong Ru
We study the global footprint and real effects of Chinese overseas corporate ownership. By assembling a comprehensive micro-level dataset of 161,773 firms across 159 countries (2012–2021), we independently reconstruct multi-layered ownership chains to trace capital through offshore tax havens to its ultimate origin. This approach reveals a global footprint substantially broader than official FDI statistics. Chinese-controlled foreign assets expanded at 20% annually, reaching $2.1 trillion or roughly 3% of global corporate assets by 2021. Chinese investors—particularly state-owned enterprises (SOEs)—strategically target R&D-intensive and supply-chain-linked firms. Following acquisition, target firms increase capital stock and R&D expenditures, yet these inputs fail to generate higher patent output and are accompanied by a significant decli...
4.The Domestic Political Economy of War: Evidence from Russia -- by Alena Gorbuntsova, Gaurav Khanna, Sultan Mehmood
Wars are often framed as responses to external threats or shifts in the regional balance of power. Yet they can also serve domestic political ends. This paper studies how Russia’s escalations against Ukraine reshaped support for the regime and redistributed the burdens of war across the population. Combining ethnic Russian shares with election and independent polling data, we exploit two sharp geopolitical shocks, the 2014 annexation of Crimea and the 2022 full-scale invasion, in a difference-in-differences event-study design. We find that provinces with larger ethnic Russian populations exhibit sharp increases in support for President Putin following both episodes. At the same time, battlefield casualties fall disproportionately on regions with lower ethnic Russian shares, and attitudes toward the US and EU deteriorate sharply. On the Uk...
5.Effects of Expanding Contraceptive Choice: New Evidence from Virginia's Contraceptive Access Initiative -- by Jessica H. Kiser, Analisa Packham, Janelle Anthony, Evelyn Escobar, Emily Yeatts
In 2018, the Virginia Department of Health implemented the Contraceptive Access Initiative (CAI) to increase access to long-acting reversible contraceptives (LARCs). We use encounter-level data on contraceptive choice in participating CAI clinics and county-level natality data from 2014--2021 to estimate relative changes in LARC take-up and childbearing rates before and after the CAI. Difference-in-differences estimates indicate that the CAI reduced birth rates in participating counties by approximately 3 percent, or less than half of the effect size of other similar, state-level programs. We show that this smaller effect is likely due to existing high LARC take-up and contraceptive substitution.
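The canonical 2×2 comparison behind difference-in-differences estimates like this one can be sketched in a few lines. The numbers below are invented for illustration only, not the paper's data:

```python
def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Canonical 2x2 difference-in-differences: the treated group's
    pre-to-post change minus the control group's pre-to-post change."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Illustrative birth rates per 1,000 women (hypothetical values):
# treated counties fall from 60.0 to 56.5, controls from 58.0 to 56.3.
effect = did_estimate(pre_treat=60.0, post_treat=56.5,
                      pre_ctrl=58.0, post_ctrl=56.3)
# effect = -1.8: treated counties fell 1.8 per 1,000 more than controls.
```

The control group's change nets out common trends, so the remainder is attributed to the program under the usual parallel-trends assumption.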
NY Fed - Liberty Street
1.Bank Failures: The Roles of Solvency and Liquidity
Do banks fail because of runs or because they become insolvent? Answering this question is central to understanding financial crises and designing effective financial stability policies. Long-run historical evidence reveals that the root cause of bank failures is usually insolvency. The importance of bank runs is somewhat overstated. Runs matter, but in most cases they trigger or accelerate failure at already weak banks, rather than cause otherwise sound banks to fail.
2.The R*–Labor Share Nexus
Over the past quarter century, the U.S. economy has experienced significant declines in both the labor share of income and the natural rate of interest, referred to as R*. Existing research has largely analyzed these two developments in isolation. In this post, we provide a simple model that captures the joint evolution of the labor share and R*, which we call the R*–labor share nexus. Our key finding is that structural changes affecting R* also influence the evolution of the labor share, and thereby wages and prices. This highlights a potentially important channel, absent from many macroeconomic models, through which the factors that determine R* also affect the labor share and, in turn, broader macroeconomic developments, with implications for monetary policy.
3.Use of Gen AI in the Workplace and the Value of Access to Training
The rapid spread of generative AI (gen AI) tools is reshaping the workplace at a remarkable rate. Yet relatively little is known about whether workers have access to these tools, how the tools affect workers’ daily productivity, and how much workers value the training needed to use the tools effectively. In this post, we shed light on these issues by drawing on supplemental questions in the November 2025 Survey of Consumer Expectations (SCE), fielded to a representative sample of the U.S. population. We find that adoption of gen AI tools at work is heterogeneous, that a sizable share of workers see AI training as important, and that a significant share of employers are nonetheless not yet providing access to AI tools or training on how to use them.
4.What Millions of Homeowner’s Insurance Contracts Reveal About Risk Sharing
Housing is the largest component of assets held by households in the United States, totaling $48 trillion in 2025. When natural disasters strike, the resulting damage to homes can be large relative to households’ liquid savings. Homeowner’s insurance is the primary financial tool households use to protect themselves against property risk. Despite the economic importance of homeowner’s insurance, we know surprisingly little about how insurance contracts are actually designed with respect to property risk. In this post, which is based on our new paper, “Economics of Property Insurance,” we examine how homeowner’s insurance contracts are structured in practice. Using a new granular dataset covering millions of homeowner’s insurance policies, we document ...
5.A Closer Look at Emerging Market Resilience During Recent Shocks
A succession of shocks to the global economy in recent years has focused attention on the improved economic and financial resilience of emerging market economies. For some of these economies, this assessment is well-founded and highlights the fruits of deep, structural economic reforms since the 1990s. However, for a much larger universe of countries, the ability to weather shocks is still mixed and many remain vulnerable. In this post, we explore the divide between the two sets of countries and focus on the effects of recent economic shocks, including the ongoing conflict in the Middle East.
Project Syndicate
1.The Hidden Chokepoints Threatening the Global Economy
The closure of the Strait of Hormuz offers another stark reminder of the potential for concentrated supply chains to trigger cascading global crises. Despite repeated disruptions, governments have made little progress in mapping these vulnerabilities, leaving major economies dangerously unprepared for future shocks.
2.The Deeper Forces Shaping Global Trade
It would be natural to assume that tariffs are the driving force behind recent major shifts in global trade patterns. But while geopolitically driven trade policies are indeed reconfiguring flows, longer-term shifts in technology and economic development are determining what the world builds and buys.
3.Rules for the Rest of Us
As the Global South knows well, the alternative to international rules is not freedom, but rather the undisguised power of the strongest. But these economies are far from powerless: they have significant leverage, although wielding it requires collective positions, shared frameworks, and coordinated strategies.
4.The Populist vs. the Pope
The war of words between Pope Leo XIV and US President Donald Trump has revived the age-old clash between the sacred and the secular. But Trump has severely misjudged the “soft power” of the world’s preeminent religious leader, and attacking a popular pontiff will likely come at a high political cost.
5.To Save Democracy, Fight Inequality
When liberal democracy fails to deliver material well-being, its legitimacy erodes, and the far right fills the void. The answer to the resurgence of authoritarianism is not to patch up a broken system, but to confront its underlying causes and rebuild the economic foundations of democratic life.
RCR Wireless
1.Slicing the future – how 5G SA is transforming venues and industries (Reader Forum)
For years, Communication Service Providers (CSPs) have poured billions into building out 5G Standalone (SA) infrastructure with plenty of hype about the speed and capacity it would bring, but without a clear path to profitability. Now that 5G SA is…
2.Qualcomm is working to turn 6G ambition into commercial reality
Through ecosystem alignment, early system validation and a focus on an AI-native approach, Qualcomm is pushing 6G toward rollout in 2029. As it has with previous generations of cellular, Qualcomm is working toward 6G commercialization through coordinated progress across spectrum,…
3.Deutsche Telekom’s move for T-Mobile is a valuation play, say analysts
Industry analysts say that, given its 53% stake in the business, Deutsche Telekom’s move for full ownership of T-Mobile US is less about acquiring control and more about simplifying structure and addressing valuation. In sum – what to know:…
4.Enea White Paper: Scalable Database Design for 5G and Beyond
Discover how distributed, cloud-native database design can ensure critical availability and performance in 5G and beyond. In this white paper, Enea explores the architectural choices shaping next-generation telecom infrastructure. Learn why milliseconds matter, how latency impacts subscriber experience at scale,…
5.How managed Wi-Fi is reshaping BTR and MPC communities (Reader Forum)
For years, the managed Wi-Fi conversation in residential real estate has centered on the apartment building: a single structure, a defined footprint, a property manager with a clear mandate to upgrade. That framing has served the industry well, but it…
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.OCC: Physical-Layer Assisted Congestion Control for Real-Time Communications
Real-time communications (RTC) is a core technology for emerging applications in 6G, such as cloud gaming, teleoperation, and extended reality (XR), which require consistently low latency and high bitrates. Existing RTC solutions fundamentally struggle to maintain low latency while supporting high bitrates due to their reliance on trial-and-error-based mechanisms. These mechanisms fail to probe the available bandwidth (ABW) promptly and accurately, leading to a trade-off between latency reliability and bandwidth utilization. This tension becomes even more critical as cellular bandwidth and application demand now fluctuate over a wider range. To address this trade-off, we propose OCC, a novel approach that utilizes physical-layer information to explicitly obtain the ABW in real time, enabling rapid adap...
2.Semantic Error Correction and Decoding for Short Block Channel Codes
This paper presents a semantic-enhanced receiver framework for transmitting natural language sentences over noisy wireless channels using multiple short block codes. After ASCII encoding, the sentence is divided into segments, each independently encoded with a short block code and transmitted over an AWGN channel. At the receiver, segments are decoded in parallel, followed by a semantic error correction (SEC) model, which reconstructs corrupted segments using language model context. We further propose semantic list decoding (SLD), which generates multiple candidate reconstructions and selects the best one via weighted Hamming distance, and a semantic confidence-guided HARQ (SHARQ) mechanism that replaces CRC-based error detection with a confidence score, enabling selective segment retransmission without CRC overhead. All modules are d...
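The SLD selection step, as described, reduces to an argmin over candidate reconstructions under a weighted Hamming distance. This sketch uses characters and uniform weights as stand-ins; the paper's actual symbols, candidates, and weighting scheme may differ:

```python
def weighted_hamming(a, b, weights):
    """Weighted Hamming distance between equal-length symbol sequences."""
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def select_candidate(received, candidates, weights):
    """Semantic list decoding (sketch): pick the candidate reconstruction
    closest to the channel-decoded text under a weighted Hamming distance."""
    return min(candidates, key=lambda c: weighted_hamming(received, c, weights))

# Hypothetical example; in practice the weights could reflect per-segment
# decoder confidence rather than the uniform weights used here.
received   = "the cat sat on the mqt"
candidates = ["the cat sat on the mat", "the cat sat on the mix"]
weights    = [1.0] * len(received)
best = select_candidate(received, candidates, weights)   # "the cat sat on the mat"
```

The first candidate differs from the corrupted text in one position versus two for the second, so it wins under uniform weights.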
3.A General EM-Based Channel Model for Reconfigurable Antenna Systems
Reconfigurable antenna systems (RASs), such as fluid antennas and movable antennas, are poised to play a pivotal role in sixth-generation (6G) systems by dynamically adapting the antenna elements for system performance enhancement. However, unlocking their full potential requires channel models that accurately capture the influence of antenna configurations on the radiation, propagation, and reception of signals. Existing channel models suffer from several limitations, such as neglecting polarization effects, being restricted to specific antenna types, or relying on oversimplified assumptions. In this paper, we propose a general electromagnetic (EM)-based channel model grounded in spherical vector wave expansion (SVWE). The proposed EM-based channel model captures the impact of antenna position and orientation on the channel gain, thereby...
4.Virtualizing the Senses: Enabling High-Precision ISAC on Commercial Cellular Infrastructure
Integrated sensing and communication (ISAC) is poised to be a defining feature of 6G networks, promising to transform cellular base stations (BSs) into ubiquitous radar sensors. However, a significant gap exists between the theoretical promise of ISAC and the commercial reality of legacy cellular communication infrastructure. Existing communication networks are constrained by fragmented spectrum, blockage-prone environments, and cost-prohibitive high-rate analog-to-digital converters (ADCs). These limitations stifle the high-resolution sensing required for emerging applications. This article advocates a shift from dependence on physical resources to computational synthesis and introduces a unified full stack virtualization framework that upgrades legacy networks with minimal hardware changes, spanning signal generation, propagation, and a...
5.The Radon Transform, True Time Delay Beamforming, and Ultra-Wideband Antenna Arrays (Invited Paper)
The FR3 band has emerged as the major focus of 6G wireless research. FR3 cellular operation presents the challenge of extreme bandwidth combined with physically large antenna arrays. In this regime, conventional phase-shift beamforming entails a loss of coherence (beam-squint), and has to be replaced by true time delay beamforming (TTD). It happens that TTD is mathematically equivalent to taking the Radon transform of the space/time measurements. We exploit fifty years of research in the application of the Radon transform to computer tomography and to seismic exploration to elucidate the workings of TTD. We use the Radon transform combined with semblance detection and Radon slowness filtering to remove far-field signals from the measured space/time signals from a linear array, leaving only near-field signals. In turn we partition the arra...
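The claimed TTD–Radon equivalence can be stated compactly. Using illustrative notation (these are not necessarily the paper's symbols), let s(x, t) be the signal received at position x along a linear array and p a slowness, i.e., a delay per unit aperture; the TTD beamformer delays each element in proportion to its position and sums:

```latex
% TTD beamformer output for a linear array (sketch; notation illustrative):
y(\tau, p) \;=\; \int s\big(x,\ \tau + p\,x\big)\, dx
% The right-hand side integrates the space/time data along the line
% t = \tau + p x in the (x, t) plane, i.e., it is the linear Radon
% transform (slant stack) of s(x, t), with each slowness p corresponding
% to one steering direction.
```

Because no per-element phase approximation is involved, the summation stays coherent across the full band, which is why TTD avoids the beam-squint that plagues phase-shift beamforming at FR3 bandwidths.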
arXiv Quantitative Finance
1.Modeling dependency between operational risk losses and macroeconomic variables using Hidden Markov Models
Predicting future operational risk losses poses a significant challenge due to the heterogeneous and time-dependent structures present in real-world data. Furthermore, stress test exercises require examining the relationship between macroeconomic conditions and operational losses. To capture this relationship, we propose to use an extension of Hidden Markov Models to multivariate observations. This model introduces a third auxiliary variable designed to accommodate the economic covariates in the time-series data. We detail the unique aspects of operational risk data and describe how model calibration is achieved via the Expectation-Maximization (EM) algorithm. Additionally, we provide the calibration results for the various risk-event types and analyze the relevance of the inclusion of the macroeconomic covariates.
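The forward algorithm is the core likelihood computation inside EM calibration of any HMM. This is a minimal discrete-emission sketch only; the paper's multivariate extension with an auxiliary variable for macroeconomic covariates is not reproduced here, and the two-regime example is hypothetical:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM via
    the scaled forward algorithm (the building block of the EM E-step).
    pi: (K,) initial state probs; A: (K,K) transitions; B: (K,M) emissions."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)             # accumulate the scaling factors
        alpha /= s
    return loglik

# Toy 2-state model (hypothetical "calm" vs "stressed" loss regimes),
# with 3 discretized loss-severity buckets.
pi = np.array([0.8, 0.2])
A  = np.array([[0.9, 0.1],
               [0.3, 0.7]])
B  = np.array([[0.7, 0.2, 0.1],    # calm: mostly small losses
               [0.1, 0.3, 0.6]])   # stressed: mostly large losses
ll = forward_loglik(pi, A, B, obs=[0, 0, 2, 2, 1])
```

EM alternates this forward pass (plus a backward pass) with closed-form re-estimation of pi, A, and B until the log-likelihood converges.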
2.Research Streams in Biodiversity Finance: A Bibliometric Analysis and Research Agenda
Biodiversity loss is accelerating at an unprecedented pace, threatening ecosystem stability, economic resilience, and human well-being, with billions required to reverse current trends. Against this backdrop, biodiversity finance has emerged as a rapidly expanding but highly fragmented field spanning ecology, economics, finance, accounting, and policy. However, it remains emerging and complex, with the majority of relevant knowledge being produced in non-finance journals. This study employs quantitative bibliometric analysis to examine a corpus of 189,456 references underlying 3,998 articles related to biodiversity and finance. The analysis identifies eight primary research streams within the field that concern (1) strategic and financial approaches in global biodiversity conservation, (2) the impact and implementation of payments for env...
3.ChatGPT as a Time Capsule: The Limits of Price Discovery
Frozen large language model (LLM) checkpoints extract information from pre-cutoff public text that is associated with future fundamentals and equity returns beyond standard contemporaneous valuation measures. Because each frozen checkpoint has a fixed knowledge cutoff, it can be interpreted as a compressed representation of publicly available textual information at a given point in time. We treat twelve OpenAI snapshots spanning 2021-2025 as time-stamped summaries of the public textual record and extract a sector-neutral LLM outlook score for roughly 7,000 U.S. equities per cross-section. The outlook score is positively associated with analyst revisions, target-price changes and one-month cross-sectional returns in both Fama-MacBeth regressions and pooled panels with model fixed effects (t = 6.02), after direct controls for market-implied...
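A Fama-MacBeth test of the kind mentioned runs one cross-sectional regression per period and then t-tests the average slope across periods. The data below are simulated stand-ins for outlook scores and returns, not the paper's panel:

```python
import numpy as np

def fama_macbeth(returns, signal):
    """Fama-MacBeth (sketch): cross-sectional OLS of returns on a signal in
    each period, then average the slopes and t-test them across periods.
    returns, signal: arrays of shape (T periods, N assets)."""
    slopes = []
    for r_t, x_t in zip(returns, signal):
        X = np.column_stack([np.ones_like(x_t), x_t])      # intercept + signal
        beta = np.linalg.lstsq(X, r_t, rcond=None)[0]
        slopes.append(beta[1])
    slopes = np.asarray(slopes)
    t_stat = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(len(slopes)))
    return slopes.mean(), t_stat

rng = np.random.default_rng(1)
sig = rng.normal(size=(24, 200))                              # hypothetical scores
ret = 0.05 * sig + rng.normal(scale=1.0, size=sig.shape)      # returns load on them
avg_slope, t = fama_macbeth(ret, sig)
```

Averaging period-by-period slopes makes the standard error robust to cross-sectional correlation within each period, which is the usual motivation for the procedure.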
4.Tuning in to Frequencies: How Global Assets Align U.S. Put-Call Parity Residuals
Put-call parity holds under risk-neutral pricing, yet enforcement exposes arbitrageurs to path-dependent capital costs. The carry gap, the annualized wedge between option-implied and OIS discount factors, is a Q-measure object, but P-measure investment opportunities may shape its enforcement burden. We document this alignment in SPX and RUT options: low-frequency global asset returns raise in-sample R^2 by 0.093 and 0.082 and lift pooled out-of-sample R^2 from 0.221 to 0.364 (SPX) and 0.171 to 0.309 (RUT). Effective horizons differ by asset (IEFA: 70 days; IGOV: 400 days; IAU: 336 days), and asset terms largely absorb the OIS baseline, providing systematic evidence of a P-Q channel.
5.The Cost of a Free Lunch: Evidence from U.S. Derivatives Markets
Put-call parity is a terminal-payoff identity; quoted residuals against traded futures are near zero. Yet enforcing parity is path-dependent, exposing arbitrageurs to daily settlement, margin, and finite capital. Using minute-level NBBO data on S&P 500 and Russell 2000 options, I extract option-implied discount factors, compare them with the OIS curve, and construct an annualized carry gap. A reduced-form specification centered on a volatility times sqrt(tau) path-risk term links the carry gap to implementation risk, trading frictions, and financial conditions, with coefficient signs stable across leave-one-year-out validation. The carry gap is an implementation wedge invisible in price space but systematic in carry space.
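Under textbook put-call parity against a traded future, C − P = D·(F − K), so each option pair yields an implied discount factor D, and comparing its annualized rate with OIS gives a carry gap. The quotes below are hypothetical and this is a sketch of the general construction, not the paper's exact minute-level methodology:

```python
import math

def implied_discount_factor(call, put, fwd, strike):
    """Invert put-call parity against a future, C - P = D * (F - K),
    for the option-implied discount factor D."""
    return (call - put) / (fwd - strike)

def carry_gap(call, put, fwd, strike, tau_years, r_ois):
    """Annualized wedge between the option-implied rate and the OIS rate
    (continuous compounding assumed for both)."""
    d = implied_discount_factor(call, put, fwd, strike)
    r_implied = -math.log(d) / tau_years
    return r_implied - r_ois

# Hypothetical quotes: F = 5000, K = 4800, 6-month tenor, OIS at 4%.
# D = (310 - 113) / (5000 - 4800) = 0.985, so r_implied ~= 3.02%.
gap = carry_gap(call=310.0, put=113.0, fwd=5000.0, strike=4800.0,
                tau_years=0.5, r_ois=0.04)   # ~= -0.98% annualized
```

A persistently negative (or positive) gap of this kind is the carry-space wedge the abstract argues is invisible in raw price residuals.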
arXiv – 6G & Networking
1.OCC: Physical-Layer Assisted Congestion Control for Real-Time Communications
Real-time communications (RTC) is a core technology for emerging applications in 6G, such as cloud gaming, teleoperation, and extended reality (XR), which require consistently low latency and high bitrates. Existing RTC solutions fundamentally struggle to maintain low latency while supporting high bitrates due to their reliance on trial-and-error-based mechanisms. These mechanisms fail to probe the available bandwidth (ABW) promptly and accurately, leading to a trade-off between latency reliability and bandwidth utilization. This tension becomes even more critical as cellular bandwidth and application demand now fluctuate over a wider range. To address this trade-off, we propose OCC, a novel approach that utilizes physical-layer information to explicitly obtain the ABW in real time, enabling rapid adap...
2.A General EM-Based Channel Model for Reconfigurable Antenna Systems
Reconfigurable antenna systems (RASs), such as fluid antennas and movable antennas, are poised to play a pivotal role in sixth-generation (6G) systems by dynamically adapting the antenna elements for system performance enhancement. However, unlocking their full potential requires channel models that accurately capture the influence of antenna configurations on the radiation, propagation, and reception of signals. Existing channel models suffer from several limitations, such as neglecting polarization effects, being restricted to specific antenna types, or relying on oversimplified assumptions. In this paper, we propose a general electromagnetic (EM)-based channel model grounded in spherical vector wave expansion (SVWE). The proposed EM-based channel model captures the impact of antenna position and orientation on the channel gain, thereby...
3.Virtualizing the Senses: Enabling High-Precision ISAC on Commercial Cellular Infrastructure
Integrated sensing and communication (ISAC) is poised to be a defining feature of 6G networks, promising to transform cellular base stations (BSs) into ubiquitous radar sensors. However, a significant gap exists between the theoretical promise of ISAC and the commercial reality of legacy cellular communication infrastructure. Existing communication networks are constrained by fragmented spectrum, blockage-prone environments, and cost-prohibitive high-rate analog-to-digital converters (ADCs). These limitations stifle the high-resolution sensing required for emerging applications. This article advocates a shift from dependence on physical resources to computational synthesis and introduces a unified full stack virtualization framework that upgrades legacy networks with minimal hardware changes, spanning signal generation, propagation, and a...
4.The Radon Transform, True Time Delay Beamforming, and Ultra-Wideband Antenna Arrays (Invited Paper)
The FR3 band has emerged as the major focus of 6G wireless research. FR3 cellular operation presents the challenge of extreme bandwidth combined with physically large antenna arrays. In this regime, conventional phase-shift beamforming entails a loss of coherence (beam-squint), and has to be replaced by true time delay beamforming (TTD). It happens that TTD is mathematically equivalent to taking the Radon transform of the space/time measurements. We exploit fifty years of research in the application of the Radon transform to computer tomography and to seismic exploration to elucidate the workings of TTD. We use the Radon transform combined with semblance detection and Radon slowness filtering to remove far-field signals from the measured space/time signals from a linear array, leaving only near-field signals. In turn we partition the arra...
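The claimed equivalence is easy to see for a linear array: delaying element e by p·d_e (slowness p times position d_e) and summing is a line integral over the (space, time) measurements, i.e. a discrete Radon (tau-p) transform over slowness. A toy NumPy sketch with integer-sample delays for brevity (not the paper's formulation):

```python
import numpy as np

def ttd_radon(x, element_pos, slowness, fs):
    """True-time-delay beamforming as a discrete Radon transform:
    for each slowness p (s/m), sum the array signals x[element, time]
    along the line t = t0 + p * d. A far-field plane wave arriving
    with slowness p0 focuses into a peak at p = p0 in the tau-p panel.
    x: (n_elem, n_samp) array; returns (n_slowness, n_samp) panel."""
    n_elem, n_samp = x.shape
    out = np.zeros((len(slowness), n_samp))
    for i, p in enumerate(slowness):
        for e in range(n_elem):
            shift = int(round(p * element_pos[e] * fs))  # delay in samples
            out[i] += np.roll(x[e], -shift)              # undo the moveout
        out[i] /= n_elem
    return out
```

Semblance detection and slowness filtering, as described above, then amount to selecting or muting rows of this tau-p panel before transforming back.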
5.Multi-Objective RIS Deployment Optimization for Physical Layer Security in ISAC Networks
Reconfigurable Intelligent Surfaces (RIS) have emerged as a key enabler for programmable wireless environments in future Beyond-5G (B5G) and 6G networks. In the meantime, Integrated Sensing and Communication (ISAC) and Physical-Layer Security (PLS) are becoming essential functionalities for next-generation wireless systems, particularly in safety and mission-critical applications. However, jointly optimizing RIS-assisted systems to support communication, sensing, and security introduces complex and often conflicting design trade-offs. In this work, a multi-objective optimization framework for RIS-assisted networks is proposed, aiming to jointly analyze communication performance, sensing accuracy, and security-related channel properties in a unified system perspective. The proposed model jointly considers RIS deployment location, orientati...
arXiv – Network Architecture (6G/Slicing)
1.Chamelio: A Fast Shared Cloud Network Stack for Isolated Tenant-Defined Protocols
Conventional cloud network virtualization sends packets through multiple guest and host layers, inflating CPU cost and tail latency. Shared host datapaths collapse this layering into one optimized path across tenants, but existing shared stacks are fixed-function: tenants cannot specialize their protocols. eBPF is the natural vehicle for restoring programmability to a shared datapath, but today's extensions are hook-sized, and its verifier provides safety -- not performance isolation: one tenant's per-packet work can inflate every other tenant's tail latency. Chamelio is a programmable shared network stack that lets tenants implement full protocols through a bounded eBPF fast path and a tenant slow path, while approaching the performance and preserving the strong isolation of fixed shared stacks. It combines three ideas: a shared-stack ...
2.OCC: Physical-Layer Assisted Congestion Control for Real-Time Communications
Real-time communications (RTC) is a core technology for emerging applications in 6G, such as cloud gaming, teleoperation, and extended reality (XR), which require consistently low latency and high bitrates. Existing RTC solutions fundamentally struggle to maintain low latency while supporting high bitrates due to their reliance on trial-and-error-based mechanisms. These mechanisms fail to probe the available bandwidth (ABW) promptly and accurately, leading to a trade-off between latency reliability and bandwidth utilization. This tension becomes even more critical as cellular bandwidth and application demand now fluctuate over a wider range. To address this trade-off, we propose OCC, a novel approach that utilizes physical-layer information to explicitly obtain the ABW in real time, enabling rapid adap...
3.Scheduling in Multi-Hop Wireless Networks With Deadlines
We analyze the problem of scheduling in wireless networks to meet end-to-end service guarantees, defined by instantaneous throughput and hard packet deadlines. Using a network slicing model to decouple the queueing dynamics between flows, we show that the network's ability to meet hard deadline guarantees under interference is largely influenced by the link scheduling policy. We characterize throughput- and deadline-optimal policies for a solitary flow operating in isolation, which provide bounds on feasibility in the general case with multiple flows. We prove that packet delays can grow arbitrarily large in the multi-flow setting under a worst-case stabilizing policy, showing that queue stability is not sufficient to guarantee tight deadlines. We derive conditions on end-to-end packet delays in terms of link inter-scheduling times, and s...
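For a solitary flow on a single link, the classic deadline-aware baseline is earliest-deadline-first ordering; the result above shows why such single-flow optimality does not extend to interfering multi-flow settings. A minimal sketch (illustrative only, not the paper's characterized policy):

```python
import heapq

def edf_schedule(packets):
    """Order packets by earliest deadline first. Each packet is a
    (deadline, id) tuple; a heap yields them in deadline order.
    EDF is a standard deadline-aware baseline for one link; the
    paper shows queue stability alone cannot bound delays once
    multiple interfering flows share the network."""
    heap = list(packets)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]
```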
4.Safety-Aware AoI Scheduling for LEO Satellite-Assisted Autonomous Driving
Autonomous platoons traversing infrastructure gaps increasingly depend on LEO satellite backhaul for safety-critical updates, yet no existing framework jointly addresses compound Doppler from simultaneous satellite and vehicle motion, sub-slot handover outages that exceed collision-alert deadlines, and heterogeneous freshness requirements across three vehicular priority classes. The core challenge is a timescale mismatch: coarse control slots hide sub-slot outages, which makes both AoI spike analysis and safety verification ill-posed. Ping-pong handover oscillations further compound AoI cost in a way that purely reactive schedulers cannot mitigate. We address these challenges through a unified framework that couples a two-timescale AoI model with tiered time-average safety constraints enforced by virtual queues. A closed-form ping-...
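Time-average constraints enforced by virtual queues typically follow the standard Lyapunov construction: each constraint E[cost] ≤ budget gets a queue whose stability implies the constraint holds on average. A one-line sketch of that construction (the paper's exact queues and costs are not reproduced here):

```python
def virtual_queue_step(q, cost, budget):
    """Standard Lyapunov virtual-queue update for a time-average
    constraint E[cost] <= budget:
        Q(t+1) = max(Q(t) + cost(t) - budget, 0)
    If the scheduler keeps Q stable (sublinear growth), the
    time-average cost cannot exceed the budget."""
    return max(q + cost - budget, 0.0)
```

The scheduler then trades off the primary objective against the drift of these queues, so slots that overspend the safety budget build queue backlog that later decisions must work off.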
5.Toward EU Sovereignty in Space: A Comparative Simulation Study of IRIS 2 and Starlink
The evolution of 6th generation (6G) networks increasingly relies on satellite-based Non-Terrestrial Networks (NTNs) to extend broadband connectivity to remote and unserved regions, and to support public safety. In this paper we compare two representative and conceptually different satellite constellation architectures, namely Starlink and IRIS 2. Starlink is a commercial private Internet constellation by SpaceX, based on dense Low Earth Orbit (LEO) satellites. It is primarily designed to deliver high-capacity broadband services for civil applications, with performance targets comparable to those of terrestrial networks. In contrast, IRIS 2 is a planned public initiative to be deployed by the European Union, based on a multi-layer combination of LEO, Medium Earth Orbit (MEO), and Geostationary Earth Orbit (GEO) satellites. It is primaril...