Daily Briefing – Apr 18 (86 Articles)
Babak's Daily Briefing
Saturday, April 18, 2026
Sources: 18 | Total Articles: 86
6G World
1. Evaluating 6G PHY Evolution: What the Industry Is Really Trying to Solve
Summary available at source link.
2. Amazon’s Globalstar deal gives Amazon Leo a faster path into D2D
Amazon’s planned acquisition of Globalstar is about far more than satellites. It gives Amazon Leo a faster path into direct-to-device connectivity, combining spectrum, operational assets, and Apple-facing service continuity in a move that could reshape the hybrid terrestrial-NTN landscape.
3. SoftBank’s Physical AI push gives AI-RAN a sharper purpose
SoftBank is starting to give AI-RAN a more concrete job description: not just running AI workloads near the network, but serving as the real-time infrastructure layer for robots and other physical systems. The company’s recent materials suggest it wants to move the AI-RAN conversation from telecom architecture to real-world machine action.
4. South Korea puts 6G inside its national AI push
South Korea has unveiled a three-year national roadmap aimed at becoming one of the world’s top three AI powers by 2028, with 6G commercialization positioned as part of that broader push.
5. b-com’s Open XG Hub targets one of telecom’s biggest gaps: turning experimentation into deployment
In an interview with Peter Pietrzyk, Managing Director of 6GWorld, Patrick Savell, Head of Connectivity at b-com, said platforms such as Open XG Hub are designed to help bridge one of the industry’s most persistent challenges: moving promising ideas from research environments into deployable network systems. The bigger point is that, as telecom becomes more software-driven and AI-native, the bottleneck is increasingly less about invention and more about validation, integration, and operational readiness.
AI Agents
1. Dr. RTL: Autonomous Agentic RTL Optimization through Tool-Grounded Self-Improvement
Recent advances in large language models (LLMs) have sparked growing interest in automatic RTL optimization for better performance, power, and area (PPA). However, existing methods are still far from realistic RTL optimization. Their evaluation settings are often unrealistic: they are tested on manually degraded, small-scale RTL designs and rely on weak open-source tools. Their optimization methods are also limited, relying on coarse design-level feedback and simple pre-defined rewriting rules. To address these limitations, we present Dr. RTL, an agentic framework for RTL timing optimization in a realistic evaluation environment, with continual self-improvement through reusable optimization skills. We establish a realistic evaluation setting with more challenging RTL designs and an industrial EDA workflow. Within this setting, Dr. RTL per...
2. SWE-TRACE: Optimizing Long-Horizon SWE Agents Through Rubric Process Reward Models and Heuristic Test-Time Scaling
Resolving real-world software engineering (SWE) issues with autonomous agents requires complex, long-horizon reasoning. Current pipelines are bottlenecked by unoptimized demonstration data, sparse execution rewards, and computationally prohibitive inference scaling, which collectively exacerbate token bloat, reward hacking, and policy degradation. We present SWE-TRACE (Trajectory Reduction and Agentic Criteria Evaluation), a unified framework optimizing the SWE agent lifecycle across data curation, reinforcement learning (RL), and test-time inference. First, we introduce an LLM multi-task cascading method, utilizing stepwise oracle verification to distill a 60K-instance Supervised Fine-Tuning (SFT) corpus strictly biased toward token-efficient, shortest-path trajectories. Second, to overcome the instability of sparse outcome rewards, we d...
3. Rethinking AI Hardware: A Three-Layer Cognitive Architecture for Autonomous Agents
The next generation of autonomous AI systems will be constrained not only by model capability, but by how intelligence is structured across heterogeneous hardware. Current paradigms -- cloud-centric AI, on-device inference, and edge-cloud pipelines -- treat planning, reasoning, and execution as a monolithic process, leading to unnecessary latency, energy consumption, and fragmented behavioral continuity. We introduce the Tri-Spirit Architecture, a three-layer cognitive framework that decomposes intelligence into planning (Super Layer), reasoning (Agent Layer), and execution (Reflex Layer), each mapped to distinct compute substrates and coordinated via an asynchronous message bus. We formalize the system with a parameterized routing policy, a habit-compilation mechanism that promotes repeated reasoning paths into zero-inference execution p...
4. Agentic Open RAN: A Deterministic and Auditable Framework for Intent-Driven Radio Control
Large language models (LLMs) open new possibilities for agentic control in Open RAN, allowing operators to express intents in natural language while delegating low-level execution to autonomous agents. We present A1gent, an agentic RAN control stack that decouples reasoning from real-time actuation. A non-RT agentic rApp compiles operator goals into typed A1 policy instances, and three task-oriented near-RT agentic xApps enforce them through a deterministic loop with plane-scoped actuation - E2 for mobility and load steering, and O1 for energy orchestration. This agentic reasoning-execution split ensures auditable coordination between RAN intelligent controller (RIC) tiers, supported by encoded guardrails and a fixed-priority action merger for conflict governance. A training-free adaptive policy tuner then refines bounded parameters using...
5. Multi-Agent Object Detection Framework Based on Raspberry Pi YOLO Detector and Slack-Ollama Natural Language Interface
The paper presents the design and prototype implementation of an edge-based object detection system within the new paradigm of AI-agent orchestration. It goes beyond traditional design approaches by leveraging an LLM-based natural language interface for system control and communication, and practically demonstrates integration of all system components on a single resource-constrained hardware platform. The method is based on the proposed multi-agent object detection framework, which tightly integrates different AI agents within the same task of providing object detection and tracking capabilities. The proposed design principles highlight the fast-prototyping approach characteristic of the transformational potential of generative AI systems, applied during both the development and implementation stages. Instead of specialized commu...
AI Machine Learning
1. The Devil Is in Gradient Entanglement: Energy-Aware Gradient Coordinator for Robust Generalized Category Discovery
arXiv:2604.14176v1. Generalized Category Discovery (GCD) leverages labeled data to categorize unlabeled samples from known or unknown classes. Most previous methods jointly optimize supervised and unsupervised objectives and achieve promising results. However, inherent optimization interference still limits their ability to improve further. Through quantitative analysis, we identify a key issue, i.e., gradient entanglement, which 1) distorts supervised gradients and weakens discrimination among known classes, and 2) induces representation-subspace overlap between known and novel classes, reducing the separability of novel categories. To address this issue, we propose the Energy-Aware Gradient Coordinator (EAGC), a plug-and-play gradient-level module that explicitly regulates the optimization process. EAGC compr...
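EAGC's exact mechanics are in the paper; as background for the gradient-entanglement problem it targets, here is a minimal sketch of a common gradient-deconfliction baseline (a PCGrad-style projection, not the paper's method; all names and vectors are illustrative):

```python
# Illustrative only: when the supervised and unsupervised gradients
# conflict (negative dot product), project the conflicting component out.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def deconflict(g_sup, g_unsup):
    """Remove from g_unsup its component opposing g_sup, if they conflict."""
    d = dot(g_sup, g_unsup)
    if d >= 0:
        return g_unsup                       # no conflict: leave untouched
    scale = d / dot(g_sup, g_sup)
    return [u - scale * s for u, s in zip(g_unsup, g_sup)]

g1, g2 = [1.0, 0.0], [-1.0, 1.0]             # conflicting: dot = -1
print(deconflict(g1, g2))                    # -> [0.0, 1.0], orthogonal to g1
```

After projection the unsupervised update no longer pushes directly against the supervised one, which is the kind of interference the abstract calls gradient entanglement.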
2. MixAtlas: Uncertainty-aware Data Mixture Optimization for Multimodal LLM Midtraining
arXiv:2604.14198v1. Domain reweighting can improve sample efficiency and downstream generalization, but data-mixture optimization for multimodal midtraining remains largely unexplored. Current multimodal training recipes tune mixtures along a single dimension, typically data format or task type. We introduce MixAtlas, a method that produces benchmark-targeted data recipes that can be inspected, adapted, and transferred to new corpora. MixAtlas decomposes the training corpus along two axes: image concepts (10 visual-domain clusters discovered via CLIP embeddings) and task supervision (5 objective types including captioning, OCR, grounding, detection, and VQA). Using small proxy models (Qwen2-0.5B) paired with a Gaussian-process surrogate and GP-UCB acquisition, MixAtlas searches the resulting mixture space with ...
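The GP-UCB acquisition step driving a search like this can be sketched minimally. The candidate names and posterior values below are invented placeholders, and the Gaussian-process surrogate itself is abstracted away:

```python
# Sketch of a GP-UCB acquisition rule: among candidate mixtures, pick the
# one with the best optimistic score mean + beta * std. The (mean, std)
# pairs would come from a GP surrogate fit to proxy-model runs; here they
# are hard-coded placeholders.

def gp_ucb_pick(candidates, beta=2.0):
    """candidates: list of (name, posterior_mean, posterior_std)."""
    return max(candidates, key=lambda c: c[1] + beta * c[2])[0]

candidates = [
    ("ocr-heavy",     0.62, 0.01),   # well explored, decent score
    ("vqa-heavy",     0.60, 0.05),   # uncertain, optimistic upside
    ("caption-heavy", 0.58, 0.02),
]
print(gp_ucb_pick(candidates))  # -> vqa-heavy: 0.60 + 2 * 0.05 = 0.70
```

The rule trades off exploitation (high mean) against exploration (high uncertainty), which is why the uncertain mixture wins here despite a lower mean.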
3. Portfolio Optimization Proxies under Label Scarcity and Regime Shifts via Bayesian and Deterministic Students under Semi-Supervised Sandwich Training
arXiv:2604.14206v1. This paper proposes a machine-learning-assisted portfolio optimization framework designed for low-data environments and regime uncertainty. We construct a teacher-student learning pipeline in which a Conditional Value at Risk (CVaR) optimizer generates supervisory labels, and neural models (Bayesian and deterministic) are trained using both real and synthetically augmented data. The synthetic data is generated using a factor-based model with t-copula residuals, enabling training beyond the limited real sample of 104 labeled observations. We evaluate four student models under a structured experimental framework comprising (i) controlled synthetic experiments (3 x 5 seed grid), (ii) in-distribution real-market evaluation (C2A), and (iii) cross-universe generalization (D2A). In real-market setti...
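The CVaR risk measure used to generate the teacher's supervisory labels can be illustrated with a minimal empirical sketch (the function and sample losses are illustrative, not from the paper):

```python
# Minimal empirical CVaR sketch (illustrative; not the paper's optimizer).
# CVaR_alpha is the mean of the worst (1 - alpha) fraction of losses.

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value at Risk of a list of losses."""
    tail = sorted(losses, reverse=True)                 # worst losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))   # tail size
    return sum(tail[:k]) / k

losses = [0.1, 0.2, 0.3, 5.0, 0.4, 0.5, 6.0, 0.6, 0.7, 0.8]
# alpha=0.8 -> worst 20% are {6.0, 5.0}, so CVaR = 5.5
print(cvar(losses, alpha=0.8))
```

Unlike a plain Value at Risk quantile, CVaR averages over the whole tail, which is what makes it a coherent objective for the optimizer to label against.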
4. Towards Verified and Targeted Explanations through Formal Methods
arXiv:2604.14209v1. As deep neural networks are deployed in safety-critical domains such as autonomous driving and medical diagnosis, stakeholders need explanations that are interpretable but also trustworthy with formal guarantees. Existing XAI methods fall short: heuristic attribution techniques (e.g., LIME, Integrated Gradients) highlight influential features but offer no mathematical guarantees about decision boundaries, while formal methods verify robustness yet remain untargeted, analyzing the nearest boundary regardless of whether it represents a critical risk. In safety-critical systems, not all misclassifications carry equal consequences; confusing a "Stop" sign for a "60 kph" sign is far more dangerous than confusing it with a "No Passing" sign. We introduce ViTaX (Verified and Targeted Explanations),...
5. Shapley Value-Guided Adaptive Ensemble Learning for Explainable Financial Fraud Detection with U.S. Regulatory Compliance Validation
arXiv:2604.14231v1. Financial crime costs U.S. institutions over $32 billion each year. Although AI tools for fraud detection have become more advanced, their use in real-world systems still faces a major obstacle: many of these models operate as black boxes that cannot provide the transparent, auditable explanations required by regulations such as OCC Bulletin 2011-12 and Federal Reserve SR 11-7. This study makes three main contributions. First, it offers a thorough evaluation of explanation quality across faithfulness (sufficiency and comprehensiveness at k=5, 10, and 15) and stability (Kendall's W across 30 bootstrap samples). XGBoost paired with TreeExplainer achieves near-perfect stability (W=0.9912), while LSTM with DeepExplainer shows weak results (W=0.4962). Second, the paper introduces the SHAP-Guided ...
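The stability metric quoted above, Kendall's W across bootstrap runs, has a short closed form; a sketch with made-up feature rankings (not the paper's data):

```python
# Kendall's W sketch for rank agreement across bootstrap runs
# (illustrative of the stability metric, not the paper's code).
# W = 12*S / (m^2 * (n^3 - n)) for m rank lists over n items.

def kendalls_w(rankings):
    """rankings: list of m rank lists over the same n items (ranks 1..n, no ties)."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]  # rank sum per item
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three identical feature rankings -> perfect agreement, W = 1.0
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

W near 1 (like the XGBoost/TreeExplainer 0.9912 figure) means the SHAP feature rankings barely change across bootstrap samples; W near 0 means the rankings are essentially unrelated.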
Financial AI
1. The Acoustic Camouflage Phenomenon: Re-evaluating Speech Features for Financial Risk Prediction
In computational paralinguistics, detecting cognitive load and deception from speech signals is a heavily researched domain. Recent efforts have attempted to apply these acoustic frameworks to corporate earnings calls to predict catastrophic stock market volatility. In this study, we empirically investigate the limits of acoustic feature extraction (pitch, jitter, and hesitation) when applied to highly trained speakers in in-the-wild teleconference environments. Utilizing a two-stream late-fusion architecture, we contrast an acoustic-based stream with a baseline Natural Language Processing (NLP) stream. The isolated NLP model achieved a recall of 66.25% for tail-risk downside events. Surprisingly, integrating acoustic features via late fusion significantly degraded performance, reducing recall to 47.08%. We identify this degradation as Ac...
2. PRAGMA: Revolut Foundation Model
Modern financial systems generate vast quantities of transactional and event-level data that encode rich economic signals. This paper presents PRAGMA, a family of foundation models for multi-source banking event sequences. Our approach pre-trains a Transformer-based architecture with masked modelling on a large-scale, heterogeneous banking event corpus using a self-supervised objective tailored to the discrete, variable-length nature of financial records. The resulting model supports a wide range of downstream tasks such as credit scoring, fraud detection, and lifetime value prediction: strong performance can be achieved by training a simple linear model on top of the extracted embeddings and can be further improved with lightweight fine-tuning. Through extensive evaluation on downstream tasks, we demonstrate that PRAGMA achieves superior...
3. Quantum Computing for Financial Transformation: A Review of Optimisation, Pricing, Risk, Machine Learning, and Post-Quantum Security
Quantum computing is becoming strategically relevant to finance because several core financial bottlenecks are already defined by combinatorial search, expectation estimation, rare-event analysis, representation learning, and long-horizon cryptographic resilience. This review examines that landscape across five connected domains: constrained portfolio optimisation, derivative pricing, tail-risk and scenario estimation, quantum machine learning, and post-quantum security. Rather than treating these topics as isolated demonstrations, the article studies them as linked layers of a financial-computation stack. Across all five domains, the review applies a common evaluative logic: identify the financial bottleneck, specify the relevant quantum primitive, compare it with an explicit classical benchmark, and assess the result under realistic imp...
4. SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series
We study the problem of generating synthetic time series that reproduce both marginal distributions and temporal dynamics, a central challenge in financial machine learning. Existing approaches typically fail to jointly model drift and stochastic volatility, as diffusion-based methods fix the volatility while martingale transport models ignore drift. We introduce the Schrödinger-Bass Bridge for Time Series (SBBTS), a unified framework that extends the Schrödinger-Bass formulation to multi-step time series. The method constructs a diffusion process that jointly calibrates drift and volatility and admits a tractable decomposition into conditional transport problems, enabling efficient learning. Numerical experiments on the Heston model demonstrate that SBBTS accurately recovers stochastic volatility and correlation parameters that prior Sch...
5. Sequential Audit Sampling with Statistical Guarantees
Financial statement auditing is conducted under a risk-based evidence approach to obtain reasonable assurance. In practice, auditors often perform additional sampling or related procedures when an initial sample does not provide a sufficient basis for a conclusion. Across jurisdictions, current standards and practice manuals acknowledge such extensions, while the statistical design of sequential audit procedures has not been fully explored. This study formulates audit sampling with additional, sequentially collected items as a sequential testing problem for a finite population under sampling without replacement. We define null and alternative hypotheses in terms of a tolerable deviation rate, specify stopping and decision rules, and formulate exact sequential boundary conditions in terms of finite-population error probabilities. For pract...
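The finite-population, without-replacement setting rests on exact hypergeometric tail probabilities; a minimal sketch (parameters are illustrative, and the paper's actual stopping and decision rules are more involved):

```python
# Sketch of the exact finite-population calculation behind attribute
# sampling: probability of observing at most k deviations in a sample of
# size n drawn without replacement (illustrative; not the paper's rules).
from math import comb

def prob_at_most(k, n, N, D):
    """P(X <= k) for hypergeometric X: N items total, D deviations, sample n."""
    total = comb(N, n)
    return sum(comb(D, x) * comb(N - D, n - x) for x in range(0, k + 1)) / total

# If 10% of a 500-item population deviates, the chance that a sample of
# 60 items shows zero deviations:
print(prob_at_most(0, 60, 500, 50))
```

An auditor-style argument inverts this: if that probability is below the acceptable risk level, observing zero deviations supports concluding the true deviation rate is below the tolerable rate.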
GSMA Newsroom
1. GSMA Report Urges Japan to Take Bold Action to Convert Technical Excellence into Global Digital Leadership
Summary available at source link.
2. From Rich Text to Video: RCS Universal Profile 4.0 has arrived
Summary available at source link.
3. Mobile Money accounted for $2 trillion in transactions in 2025, doubling since 2021 as active accounts continue to grow
Summary available at source link.
4. Strengthening the Global Fight Against Fraud and Scams – Takeaways from the Global Fraud Summit in Vienna
Summary available at source link.
5. GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
Generative AI (arXiv)
1. From Tokens to Steps: Verification-Aware Speculative Decoding for Efficient Multi-Step Reasoning
Speculative decoding (SD) accelerates large language model inference by allowing a lightweight draft model to propose outputs that a stronger target model verifies. However, its token-centric nature allows erroneous steps to propagate. Prior approaches mitigate this using external reward models, but incur additional latency, computational overhead, and limit generalizability. We propose SpecGuard, a verification-aware speculative decoding framework that performs step-level verification using only model-internal signals. At each step, SpecGuard samples multiple draft candidates and selects the most consistent step, which is then validated using an ensemble of two lightweight model-internal signals: (i) an attention-based grounding score that measures attribution to the input and previously accepted steps, and (ii) a log-probability-based s...
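The underlying draft-then-verify loop of speculative decoding can be sketched with toy lookup-table "models" and greedy verification (SpecGuard's attention-grounding and log-probability signals are not reproduced here; everything below is illustrative):

```python
# Toy illustration of the draft-then-verify loop in speculative decoding.
# A cheap draft model proposes several tokens; the target model keeps the
# longest prefix it agrees with. Both "models" are hard-coded tables.
DRAFT  = {"": "the", "the": "cat", "the cat": "sat", "the cat sat": "down"}
TARGET = {"": "the", "the": "cat", "the cat": "ran", "the cat ran": "home"}

def speculate(prefix, k=4):
    """Draft up to k tokens cheaply, then keep the target-agreed prefix."""
    drafted, ctx = [], prefix
    for _ in range(k):                       # cheap draft model proposes
        tok = DRAFT.get(ctx)
        if tok is None:
            break
        drafted.append(tok)
        ctx = (ctx + " " + tok).strip()
    accepted, ctx = [], prefix
    for tok in drafted:                      # target verifies token by token
        if TARGET.get(ctx) != tok:
            break                            # first disagreement: stop here
        accepted.append(tok)
        ctx = (ctx + " " + tok).strip()
    return accepted

print(speculate(""))   # ['the', 'cat']: target disagrees at the third token
```

In real systems the target scores all drafted tokens in one forward pass, which is where the speedup comes from; SpecGuard adds step-level checks on top of this loop.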
2. Compressing Sequences in the Latent Embedding Space: K-Token Merging for Large Language Models
Large Language Models (LLMs) incur significant computational and memory costs when processing long prompts, as full self-attention scales quadratically with input length. Token compression aims to address this challenge by reducing the number of tokens representing inputs. However, existing prompt-compression approaches primarily operate in token space and overlook inefficiencies in the latent embedding space. In this paper, we propose K-Token Merging, a latent-space compression framework that merges each contiguous block of K token embeddings into a single embedding via a lightweight encoder. The compressed sequence is processed by a LoRA-adapted LLM, while generation remains in the original vocabulary. Experiments on structural reasoning (Textualized Tree), sentiment classification (Amazon Reviews), and code editing (CommitPackFT) show ...
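The sequence-length arithmetic of merging each contiguous block of K embeddings can be sketched as follows; mean pooling stands in for the paper's learned lightweight encoder:

```python
# Sketch of merging contiguous blocks of K token embeddings into one each.
# The paper trains a lightweight encoder for this; mean pooling is a
# stand-in to show the length reduction (illustrative values throughout).

def merge_k(embeddings, k):
    """Reduce a list of embedding vectors to ceil(len/k) merged vectors."""
    merged = []
    for i in range(0, len(embeddings), k):
        block = embeddings[i:i + k]
        dim = len(block[0])
        merged.append([sum(v[d] for v in block) / len(block) for d in range(dim)])
    return merged

seq = [[1.0, 0.0], [3.0, 2.0], [0.0, 4.0], [2.0, 2.0], [5.0, 1.0]]
print(merge_k(seq, 2))  # length 5 -> 3 merged embeddings
```

Since self-attention cost is quadratic in sequence length, halving the number of embeddings (K=2) cuts attention work roughly fourfold, which is the efficiency argument the abstract makes.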
3. IG-Search: Step-Level Information Gain Rewards for Search-Augmented Reasoning
Reinforcement learning has emerged as an effective paradigm for training large language models to perform search-augmented reasoning. However, existing approaches rely on trajectory-level rewards that cannot distinguish precise search queries from vague or redundant ones within a rollout group, and collapse to a near-zero gradient signal whenever every sampled trajectory fails. In this paper, we propose IG-Search, a reinforcement learning framework that introduces a step-level reward based on Information Gain (IG). For each search step, IG measures how much the retrieved documents improve the model's confidence in the gold answer relative to a counterfactual baseline of random documents, thereby reflecting the effectiveness of the underlying search query. This signal is fed back to the corresponding search-query tokens via per-token advan...
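The step-level reward can be sketched directly from its definition: the gain in log-confidence in the gold answer from the retrieved documents over a random-document baseline (the probabilities below are stand-ins, not model outputs):

```python
# Sketch of the Information Gain signal for one search step: how much the
# retrieved documents raise the model's log-confidence in the gold answer
# relative to a counterfactual baseline of random documents.
from math import log

def information_gain(p_gold_retrieved, p_gold_random):
    """IG > 0 means the search query's documents genuinely helped."""
    return log(p_gold_retrieved) - log(p_gold_random)

print(information_gain(0.40, 0.10) > 0)   # helpful query -> positive reward
print(information_gain(0.05, 0.10) > 0)   # harmful query -> negative reward
```

Because each step gets its own signed signal, the framework can credit a precise query and penalize a vague one inside the same rollout, which trajectory-level rewards cannot do.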
4. Feedback-Driven Execution for LLM-Based Binary Analysis
Binary analysis increasingly relies on large language models (LLMs) to perform semantic reasoning over complex program behaviors. However, existing approaches largely adopt a one-pass execution paradigm, where reasoning operates over a fixed program representation constructed by static analysis tools. This formulation limits the ability to adapt exploration based on intermediate results and makes it difficult to sustain long-horizon, multi-path analysis under constrained context. We present FORGE, a system that rethinks LLM-based analysis as a feedback-driven execution process. FORGE interleaves reasoning and tool interaction through a reasoning-action-observation loop, enabling incremental exploration and evidence construction. To address the instability of long-horizon reasoning, we introduce a Dynamic Forest of Agents (FoA), a decompos...
5. HintPilot: LLM-based Compiler Hint Synthesis for Code Optimization
Code optimization remains a core objective in software development, yet modern compilers struggle to navigate the enormous optimization spaces. While recent research has looked into employing large language models (LLMs) to optimize source code directly, these techniques can introduce semantic errors and miss fine-grained compiler-level optimization opportunities. We present HintPilot, which bridges LLM-based reasoning with traditional compiler infrastructures by synthesizing compiler hints, annotations that steer compiler behavior. HintPilot employs retrieval-augmented synthesis over compiler documentation and applies profiling-guided iterative refinement to synthesize semantics-preserving and effective hints. On the PolyBench and HumanEval-CPP benchmarks, HintPilot achieves up to a 6.88x geometric-mean speedup over -Ofast while preserving ...
Hugging Face Daily Papers
1. Bidirectional Cross-Modal Prompting for Event-Frame Asymmetric Stereo
Conventional frame-based cameras capture rich contextual information but suffer from limited temporal resolution and motion blur in dynamic scenes. Event cameras offer an alternative visual representation with higher dynamic range free from such limitations. The complementary characteristics of the two modalities make event-frame asymmetric stereo promising for reliable 3D perception under fast motion and challenging illumination. However, the modality gap often leads to marginalization of domain-specific cues essential for cross-modal stereo matching. In this paper, we introduce Bi-CMPStereo, a novel bidirectional cross-modal prompting framework that fully exploits semantic and structural features from both domains for robust matching. Our approach learns finely aligned stereo representations within a target canonical space and integrate...
2. TokenLight: Precise Lighting Control in Images using Attribute Tokens
This paper presents a method for image relighting that enables precise and continuous control over multiple illumination attributes in a photograph. We formulate relighting as a conditional image generation task and introduce attribute tokens to encode distinct lighting factors such as intensity, color, ambient illumination, diffuse level, and 3D light positions. The model is trained on a large-scale synthetic dataset with ground-truth lighting annotations, supplemented by a small set of real captures to enhance realism and generalization. We validate our approach across a variety of relighting tasks, including controlling in-scene lighting fixtures and editing environment illumination using virtual light sources, on synthetic and real images. Our method achieves state-of-the-art quantitative and qualitative performance compared to prior ...
3. LLMs Gaming Verifiers: RLVR Can Lead to Reward Hacking
As Reinforcement Learning with Verifiable Rewards (RLVR) has become the dominant paradigm for scaling reasoning capabilities in LLMs, a new failure mode emerges: LLMs gaming verifiers. We study this phenomenon on inductive reasoning tasks, where models must induce and output logical rules. We find that RLVR-trained models systematically abandon rule induction. Instead of learning generalizable patterns (e.g., "trains carrying red cars go east"), they enumerate instance-level labels, producing outputs that pass verifiers without capturing the relational patterns required by the task. We show that this behavior is not a failure of understanding but a form of reward hacking: imperfect verifiers that check only extensional correctness admit false positives. To detect such shortcuts, we introduce Isomorphic Perturbation Testing (IPT), which ...
4. Hybrid Decision Making via Conformal VLM-generated Guidance
Building on recent advances in AI, hybrid decision making (HDM) holds the promise of improving human decision quality and reducing cognitive load. We work in the context of learning to guide (LtG), a recently proposed HDM framework in which the human is always responsible for the final decision: rather than suggesting decisions, in LtG the AI supplies (textual) guidance useful for facilitating decision making. One limiting factor of existing approaches is that their guidance compounds information about all possible outcomes, and as a result it can be difficult to digest. We address this issue by introducing ConfGuide, a novel LtG approach that generates more succinct and targeted guidance. To this end, it employs conformal risk control to select a set of outcomes, ensuring a cap on the false negative rate. We demonstrate our approach on a...
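Conformal risk control for a false-negative-rate cap reduces to a threshold calibration step; here is a minimal sketch that omits the finite-sample correction terms (the scores and alpha are illustrative):

```python
# Sketch of conformal calibration for a false-negative-rate cap: pick the
# largest score threshold whose empirical miss rate on held-out data stays
# below alpha. At prediction time, the guidance set contains exactly the
# outcomes scoring at or above this threshold.

def calibrate_threshold(true_scores, alpha=0.1):
    """true_scores: model scores assigned to the true outcome on held-out data."""
    best = 0.0
    for lam in sorted(set(true_scores)):
        miss = sum(s < lam for s in true_scores) / len(true_scores)
        if miss <= alpha:
            best = lam                       # higher threshold -> smaller sets
    return best

scores = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6, 0.75, 0.88, 0.92, 0.65]
print(calibrate_threshold(scores, alpha=0.1))
```

A higher threshold means a smaller, more digestible outcome set for the human, while the alpha cap bounds how often the true outcome is left out, which is exactly the succinctness/coverage trade ConfGuide targets.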
5. Unsupervised feature selection using Bayesian Tucker decomposition
In this paper, we propose Bayesian Tucker decomposition (BTuD), in which the residual is assumed to follow a Gaussian distribution, analogous to linear regression. Although we propose a dedicated algorithm for BTuD, conventional higher-order orthogonal iteration can also produce a Tucker decomposition consistent with the present implementation. Using BTuD, we perform unsupervised feature selection, successfully applied to various synthetic datasets, globally coupled maps with randomized coupling strengths, and gene expression profiles. We conclude that the newly proposed unsupervised feature selection method is promising. In addition, BTuD-based unsupervised FE is expected to coincide with the TD-based unsupervised FE previously proposed and successfully applied to a wide range of problems.
IEEE Xplore AI
1. Optical Fiber Networks Can Keep Rail Networks Safe
This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore. Rail networks are vast, which makes it difficult to conduct comprehensive, continuous safety monitoring. Researchers in China have suggested analyzing the vibrations of existing fiber cables buried underground alongside railway tracks to detect problems. In a study published 5 March in the Journal of Optical Communications and Networking, the research group demonstrated through experiments how the technique can successfully identify a number of issues associated with train safety, including faulty train wheels and broken sound barriers alongside the railway tracks. Sasha Dong is a junior chair professor in Southeast University’s School of Transportation, in Nanjing, China. She notes that traditional approaches for monitoring railways—such as ...
2. Boston Dynamics and Google DeepMind Teach Spot to Reason
The amazing and frustrating thing about robots is that they can do almost anything you want them to do, as long as you know how to ask properly. In the not-so-distant past, asking properly meant writing code, and while we’ve thankfully moved beyond that brittle constraint, there’s still an irritatingly inverse correlation between ease of use and complexity of task. AI has promised to change that. The idea is that when AI is embodied within robots—giving AI software a physical presence in the world—those robots will be imbued with reasoning and understanding. This is cutting-edge stuff, though, and while we’ve seen plenty of examples of embodied AI in a research context, finding applications where reasoning robots can provide reliable commercial value has not been easy. Boston Dynamics is one of the few companies to commercially deploy leg...
3. Sarang Gupta Builds AI Systems With Real-World Impact
Like many engineers, Sarang Gupta spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life. When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long. (Profile: Sarang Gupta. Employer: OpenAI, San Francisco. Job: data science staff member. Member grade: senior member. Alma maters: The Hong Kong University of Science and Technology; Columbia.) By age 11, his interest expanded from nuts and bolts to software. He learned programming languages such as Basic and Logo and designed simple programs, including one that helped a local restaurant automate online ordering and billing. Gupta, an IEEE senior member, brings his mix of cu...
4. 12 Graphs That Explain the State of AI in 2026
The capabilities of leading AI models continue to accelerate, and the largest AI companies, including OpenAI and Anthropic, are hurtling toward IPOs later this year. Yet resentment toward AI continues to simmer, and in some cases has boiled over, especially in the United States, where local governments are beginning to embrace restrictions or outright bans on new data center development. It’s a lot to keep track of, but the 2026 edition of the AI Index from Stanford University’s Human-Centered Artificial Intelligence center pulls it off. The report, which comes in at over 400 pages, includes dozens of data points and graphs that approach the topic from multiple angles, from benchmark scores to investment and public perception. As in prior years (see our coverage from 2021, 2022, 2023, 2024, and 2025), we’ve read the report and ident...
5. GoZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale
ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions. ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission-critical environments. Core components, including Saluki secure flight controllers, have reached TRL 8 and are deployed in customer systems. While initially developed for high-consequence mission environments, the same assurance challenges are...
MIT Sloan Management
1. Lessons From Innovation Pioneer Florence Nightingale
Florence Nightingale may be best remembered as the epitome of a kind, caring nurse, but she was also a force for disruptive innovation in health care. Three distinct elements of her work — communicating data compellingly, publicizing clear and simple instructions, and expanding professionalized training — carry timeless lessons […]
2. The Human Side of AI Adoption: Lessons From the Field
Not a day goes by without another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. Many examples of successful early adoption of artificial intelligence […]
3. Managing Up: A Skill Set That Matters Now
Are you skilled at managing up? If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by […]
4. The Trap That Skilled Negotiators Miss
Say you walk into a car dealership determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all […]
5.Rethink Responsibility in the Age of AI
Early one morning in 2018, a self-driving Uber vehicle fatally struck a pedestrian in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a […]
NBER Working Papers
1.The Empathy Channel in Fertility -- by Sebastian Galiani, Raul A. Sosa
Being around babies makes people want babies. We formalize this observation as the empathy channel: exposure to infants in the social environment activates neurobiological mechanisms that increase the desire for parenthood. As children become scarcer, this affective stimulus weakens, further eroding the motivation to have children. We embed the mechanism in a two-group overlapping-generations quantity-quality model. The empathy channel generates a positive externality, since each birth raises others’ desire for children, making the decentralized equilibrium inefficient. We characterize the optimal per-child subsidy and show that the first-order Pigouvian rate substantially overshoots the general-equilibrium optimum. The optimal targeting rule follows a Ramsey-like logic, directing the subsidy at the group with the most externality per fis...
2.Profit Regulation and Strategic Transfer Pricing by Vertically Integrated Firms: Evidence from Health Care -- by Pragya Kakani, Eric Yde, Genevieve P. Kanter, Richard G. Frank, Amelia M. Bond
We provide evidence of strategic transfer pricing by vertically integrated health care firms in response to insurer profit regulations. Insurers increased prices at vertically integrated pharmacies by 9.5% following the introduction of caps on insurer profits in Medicare Part D. We detect larger price increases by insurers that were at greatest risk of exceeding the allowable profit level. More than one-fifth of these higher prices were borne by the federal government. Our analysis illustrates that vertically integrated firms can evade profit regulation by “tunneling” profits to unregulated subsidiaries, undermining regulatory intent and increasing health care spending.
3.Predicted Incrementality by Experimentation (PIE) for Ad Measurement -- by Brett R. Gordon, Robert Moakler, Florian Zettelmeyer
Randomized controlled trials (RCTs) provide the most credible estimates of advertising incrementality but are difficult to scale. We propose Predicted Incrementality by Experimentation (PIE), which reframes ad measurement as a campaign-level prediction problem. PIE uses a sample of RCTs to learn a mapping from campaign features to causal effects, then applies it to campaigns not run as RCTs. Because the RCTs identify the causal effects, PIE can incorporate post-determined features—campaign-level aggregates such as test-group outcomes, exposure rates, and last-click conversions, computed after campaign completion. These metrics reflect the consumer behaviors that generate treatment effects, so they carry predictive information about incrementality even though they would be invalid controls in a causal model. Using 2,226 Meta ad experiments...
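The core PIE mechanic, learning a mapping from campaign features to RCT-identified lift and then applying it to campaigns never run as experiments, can be sketched minimally. Ridge regression on synthetic data is our stand-in here; the paper's actual learner, features, and data are not specified in this summary.

```python
import numpy as np

# Hypothetical setup: each row is one campaign; "lift" plays the role of
# an RCT-identified causal effect. The true coefficients are known only
# because the data is synthetic.
rng = np.random.default_rng(2)
n, k = 2000, 5
X = rng.normal(size=(n, k))                    # campaign-level features
beta = np.array([0.5, -0.2, 0.1, 0.0, 0.3])
lift = X @ beta + 0.1 * rng.normal(size=n)     # experimentally measured lift

# Fit a ridge regression mapping features -> lift (closed form)
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ lift)

# Predict incrementality for campaigns that were never run as RCTs
X_new = rng.normal(size=(3, k))
pred = X_new @ beta_hat
assert np.allclose(beta_hat, beta, atol=0.05)
```

The interesting wrinkle the abstract notes is that PIE may use post-determined features (e.g., last-click conversions) as predictors, which would be invalid controls in a causal regression but are legitimate inputs to a prediction model trained on RCT-identified effects.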
4.Bad News and Policy Views: Expectations, Disappointment, and Opposition to Affirmative Action -- by Louis-Pierre Lepage, Heather Sarsons, Michael Thaler
There is widespread opposition to affirmative action policies. We study whether personal disappointments shape preferences for such policies. Specifically, we test whether individuals' college admissions outcomes, relative to their expectations, influence their attitudes toward affirmative action policies. Using a retrospective survey among recent White and Asian college applicants, we find that disappointed individuals—those who were admitted to fewer schools than anticipated—are relatively more likely to believe that affirmative action played an important role in their admissions outcomes, have the lowest support for affirmative action policies, and are more willing to donate to an anti-affirmative action organization. They also hold more negative views about the academic qualifications of under-represented minorities. To isolate the ca...
5.Forecasting the Economic Effects of AI -- by Ezra Karger, Otto Kuusela, Jason Abaluck, Kevin A. Bryan, Basil Halperin, Todd R. Jones, Connacher Murphy, Philip Trammell, Matt Reynolds, Dan Mayland, Ria Viswanathan, Ananaya Mittal, Rebecca Ceppas de Castro, Josh Rosenberg, Philip Tetlock
We elicit forecasts of how AI will affect the U.S. economy, comparing the beliefs of five groups: academic economists, employees at AI companies, policy researchers focused on AI, highly accurate forecasters, and the general public. The median respondent in each group expects substantial advances in AI capabilities by 2030, small declines in labor force participation consistent with demographic shifts, and an annual GDP growth rate of 2.5%, which exceeds both the typical medium-run (2.0%) and long-run (1.7%) baseline forecasts from government agencies and private-sector forecasters. Conditional on a “rapid” AI progress scenario, in which AI systems surpass human performance on many cognitive and physical tasks, experts forecast substantial, though not historically unprecedented, economic shifts: annualized GDP growth rising to around 4% a...
NY Fed - Liberty Street
1.Bank Failures: The Roles of Solvency and Liquidity
Do banks fail because of runs or because they become insolvent? Answering this question is central to understanding financial crises and designing effective financial stability policies. Long-run historical evidence reveals that the root cause of bank failures is usually insolvency. The importance of bank runs is somewhat overstated. Runs matter, but in most cases they trigger or accelerate failure at already weak banks, rather than cause otherwise sound banks to fail.
2.The R*–Labor Share Nexus
Over the past quarter century, the U.S. economy has experienced significant declines in both the labor share of income and the natural rate of interest, referred to as R*. Existing research has largely analyzed these two developments in isolation. In this post, we provide a simple model that captures the joint evolution of the labor share and R*, which we call the R*–labor share nexus. Our key finding is that structural changes affecting R* also influence the evolution of the labor share, and thereby wages and prices. This highlights a potentially important channel, absent from many macroeconomic models, through which the factors that determine R* also affect the labor share and, in turn, broader macroeconomic developments, with implications for monetary policy.
3.Use of Gen AI in the Workplace and the Value of Access to Training
The rapid spread of generative AI (gen AI) tools is reshaping the workplace at a remarkable rate. Yet relatively little is known about whether workers have access to these tools, how the tools affect workers’ daily productivity, and how much workers value the training needed to use the tools effectively. In this post, we shed light on these issues by drawing on supplemental questions in the November 2025 Survey of Consumer Expectations (SCE), fielded to a representative sample of the U.S. population. We find that adoption of AI tools at work is heterogeneous, that a sizable share of workers see AI training as important, and that a significant share of employers are nonetheless not yet providing access to AI tools or training on how to use them.
4.What Millions of Homeowner’s Insurance Contracts Reveal About Risk Sharing
Housing is the largest component of assets held by households in the United States, totaling $48 trillion in 2025. When natural disasters strike, the resulting damage to homes can be large relative to households’ liquid savings. Homeowner’s insurance is the primary financial tool households use to protect themselves against property risk. Despite the economic importance of homeowner’s insurance, we know surprisingly little about how insurance contracts are actually designed with respect to property risk. In this post, which is based on our new paper, “Economics of Property Insurance,” we examine how homeowner’s insurance contracts are structured in practice. Using a new granular dataset covering millions of homeowner’s insurance policies, we document ...
5.A Closer Look at Emerging Market Resilience During Recent Shocks
A succession of shocks to the global economy in recent years has focused attention on the improved economic and financial resilience of emerging market economies. For some of these economies, this assessment is well-founded and highlights the fruits of deep, structural economic reforms since the 1990s. However, for a much larger universe of countries, the ability to weather shocks is still mixed and many remain vulnerable. In this post, we explore the divide between the two sets of countries and focus on the effects of recent economic shocks, including the ongoing conflict in the Middle East.
Project Syndicate
1.The World Needs an Oil Buyers’ Club
As the world is plunged into another energy crisis, market allocation is leading to grossly unjust outcomes, as the rich outbid the poor. A multilateral oil buyers' club is urgently needed to defend a price ceiling in global oil markets and allocate resources in a way that meets people’s essential needs and minimizes the economic fallout.
2.A New Security Architecture for the Middle East
The tense negotiations between the United States and Iran have exposed the limits of bilateral diplomacy. With the crisis fueled by overlapping, interconnected conflicts, the only viable path forward is a broader regional framework that addresses the Strait of Hormuz, nuclear proliferation, Palestinian statehood, and proxy warfare.
3.Fossil-Fuel Investments Are a Fiduciary Risk
The Iran war has reminded everyone, but especially Africans, of the structural instability of fossil-fuel prices. For African trustees, directors, asset managers, and other fiduciaries, the question is not whether capital should reposition, but whether institutions will act before events compel them to do so.
4.To Strengthen Climate Resilience, Focus on Social Protection
The international community is increasingly trying to distinguish between climate, development, and humanitarian finance—as if they can be neatly compartmentalized. But this siloed approach overlooks how social-protection programs providing cash transfers to vulnerable households can strengthen resilience to climate shocks.
5.Europe’s Digital Decade Is in Disarray
Under the banner of the European Union’s Digital Decade agenda, Europe is investing in digitalization to protect its industries from shocks like the one currently emanating from the Middle East. But if EU leaders think their current program is sufficient, they are in for a rude awakening.
RCR Wireless
1.5G positioning is picking up, but monetization is a problem
Analysys Mason says early adoption of 5G positioning is likely to come from localized private network environments. In sum – what to know: Early adoption – Private networks in logistics, manufacturing, and healthcare are expected to lead uptake of high-precision…
2.How ISPs can win in a saturated US broadband market (Analyst Angle)
US broadband has rapidly transformed, with fiber, fixed wireless and satellite expanding competition. As coverage rises and prices fall, ISPs must shift focus toward retaining subscribers and securing long-term revenue, especially in the MDU opportunity segment. The American broadband landscape has…
3.Ericsson bets on enterprise 5G and APIs for longer-term AI upside – versus DCI game
Ericsson is sticking to what it knows: public and private 5G, plus APIs to expose 5G capabilities to developers and enterprises. This, it implies, offers more coherent longer-term diversification than a Nokia-style switch to ride the AI bandwagon on fiber…
4.Ericsson posts 6% organic growth but misses targets amid FX drag and AI chip costs
Ericsson saw 6% organic growth in Q1 2026, but slumped 10% in real terms amid currency swings, divestment costs, and higher AI chip prices, causing it to miss targets. Network sales in EMEA and APAC made up for a…
5.Fast-charging quantum batteries could make devices run forever
Quantum batteries could enable wireless charging, allowing systems to stay in a state of constant charging, says James Quach, whose team has developed a working prototype of a quantum battery that charges in quadrillionths of a second. In sum —…
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.A Novel 6G Dynamic Channel Map Based on a Hybrid Channel Model
In sixth-generation (6G) wireless communication networks, device density, antenna counts, and the complexity of communication scenarios will increase significantly, which brings great challenges for system design and network optimization. By providing channel information in advance, the channel map has become a promising solution to these challenges in the 6G era. However, conventional channel maps cannot be updated in time as the physical environment changes. To solve this problem, a novel dynamic channel map (DCM) is proposed in this work. For DCM construction, we further present a ray tracing (RT) and geometric stochastic hybrid channel model (RT-GSHCM), which pre-constructs the DCM offline by RT and updates it online by a geometry-based stochastic channel model (GBSM). In this way, the DCM can provide time-varying channel information and cha...
2.An Open-Source Hardware-Aware Sub-THz Radio-Stripe Simulator
Sub-Terahertz radio-stripe and distributed MIMO architectures promise extreme spatial reuse and multi-GHz bandwidths, but the cascaded fiber front-haul and RF hardware impairments strongly shape end-to-end performance. This paper presents an open-source, configuration-driven simulator that models the full waveform-level signal chain from CP-OFDM baseband generation in the central unit, through measurement-parameterized polymer microwave fiber and coupler links, to booster/active Radio Units (RUs) with configurable nonlinearity, noise, in-phase and quadrature imbalance, and oscillator phase noise and carrier frequency offset. Wireless propagation is supported via lightweight deterministic and stochastic per-subcarrier channel models as well as site-specific ray-tracing datasets generated with a companion Sionna ray-tracer module. The simul...
3.Towards Trustworthy 6G Network Digital Twins: A Framework for Validating Counterfactual What-If Analysis in Edge Computing Resources
Network Digital Twins (NDTs) enable safe what-if analysis for 6G cloud-edge infrastructures, but adoption is often limited by fragmented workflows from telemetry to validation. We present a data-driven NDT framework that extends 6G-TWIN with a scalable pipeline for cloud-edge telemetry aggregation and semantic alignment into unified data models. Our contributions include: (i) scalable cloud-edge telemetry collection, (ii) regime-aware feature engineering capturing the network's scaling behavior, and (iii) a validation methodology based on Sign Agreement and Directional Sensitivity. Evaluated on a Kubernetes-managed cluster, the framework extrapolates performance to unseen high-load regimes. Results show both Deep Neural Network (DNN) and XGBoost achieve high regression accuracy (R2 > 0.99), while the XGBoost model delivers superior direct...
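The Sign Agreement metric lends itself to a one-line sanity check: for each what-if scenario, does the twin predict the right direction of change? The sketch below reflects our reading of the metric name; the paper's exact definition may differ.

```python
import numpy as np

def sign_agreement(delta_pred: np.ndarray, delta_obs: np.ndarray) -> float:
    """Fraction of counterfactual scenarios where the predicted direction
    of change matches the observed one (assumed interpretation)."""
    return float(np.mean(np.sign(delta_pred) == np.sign(delta_obs)))

# three what-if scenarios: predicted vs. observed change in a KPI
pred = np.array([1.2, -0.5, 0.3])
obs = np.array([0.9, -0.1, -0.2])
print(sign_agreement(pred, obs))  # → two of three directions agree
```

A directional metric like this is a natural complement to R²: a twin can fit observed regimes well yet still point the wrong way under extrapolated load, which is exactly what counterfactual validation needs to catch.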
4.EdgeDetect: Importance-Aware Gradient Compression with Homomorphic Aggregation for Federated Intrusion Detection
Federated learning (FL) enables collaborative intrusion detection without raw data exchange, but conventional FL incurs high communication overhead from full-precision gradient transmission and remains vulnerable to gradient inference attacks. This paper presents EdgeDetect, a communication-efficient and privacy-aware federated IDS for bandwidth-constrained 6G-IoT environments. EdgeDetect introduces gradient smartification, a median-based statistical binarization that compresses local updates to $\{+1,-1\}$ representations, reducing uplink payload by $32\times$ while preserving convergence. We further integrate Paillier homomorphic encryption over binarized gradients, protecting against honest-but-curious servers without exposing individual updates. Experiments on CIC-IDS2017 (2.8M flows, 7 attack classes) demonstrate $98.0\%$ multi-class...
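The "gradient smartification" step the abstract describes, median-based binarization to {+1, -1}, can be illustrated in a few lines. The strict-greater-than rule and per-tensor median below are assumptions on our part; the paper's exact scheme may differ.

```python
import numpy as np

def binarize_gradient(grad: np.ndarray) -> np.ndarray:
    """Median-based statistical binarization (illustrative sketch).

    Entries above the median map to +1, the rest to -1, so one bit of
    information per entry replaces a 32-bit float -- the source of the
    32x uplink reduction the abstract cites.
    """
    return np.where(grad > np.median(grad), 1, -1).astype(np.int8)

rng = np.random.default_rng(0)
g = rng.normal(size=1024).astype(np.float32)
b = binarize_gradient(g)
assert set(np.unique(b).tolist()) <= {-1, 1}
assert b.size == g.size  # same shape, but 1 bit of payload per entry
```

Binarized updates also pair naturally with the Paillier step the abstract mentions: additively homomorphic aggregation over small integers is far cheaper than over full-precision floats.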
5.Comprehensive Review of Doppler Shift Localization Methods: Advances, Limitations, and Research Opportunities
Reliable geolocation of non-cooperative emitters in environments where Global Navigation Satellite Systems (GNSS) are unavailable or degraded is a key enabler for spectrum regulation, emergency response, autonomous mobility, and Integrated Sensing and Communication (ISAC) services in 5G/6G systems. Doppler-based techniques - from single-receiver Signal Doppler Frequency (SDF) fixes through multi-node Frequency Difference of Arrival (FDOA) and Direct Position Determination (DPD) to derivative-enhanced and learning-assisted hybrids - exploit radial-velocity-induced frequency shifts as a passive, high-resolution localization cue accessible with commodity software-defined radios, millimeter-wave access points, or acoustic sensors. This review consolidates over a decade of research across radio, acoustic, and satellite domains. It introduces a...
arXiv Quantitative Finance
1.The Acoustic Camouflage Phenomenon: Re-evaluating Speech Features for Financial Risk Prediction
In computational paralinguistics, detecting cognitive load and deception from speech signals is a heavily researched domain. Recent efforts have attempted to apply these acoustic frameworks to corporate earnings calls to predict catastrophic stock market volatility. In this study, we empirically investigate the limits of acoustic feature extraction (pitch, jitter, and hesitation) when applied to highly trained speakers in in-the-wild teleconference environments. Utilizing a two-stream late-fusion architecture, we contrast an acoustic-based stream with a baseline Natural Language Processing (NLP) stream. The isolated NLP model achieved a recall of 66.25% for tail-risk downside events. Surprisingly, integrating acoustic features via late fusion significantly degraded performance, reducing recall to 47.08%. We identify this degradation as Ac...
2.Interpretable Systematic Risk around the Clock
In this paper, I present the first comprehensive, around-the-clock analysis of systematic jump risk by combining high-frequency market data with contemporaneous news narratives identified as the underlying causes of market jumps. These narratives are retrieved and classified using a state-of-the-art open-source reasoning LLM. Decomposing market risk into interpretable jump categories reveals significant heterogeneity in risk premia, with macroeconomic news commanding the largest and most persistent premium. Leveraging this insight, I construct an annually rebalanced real-time Fama-MacBeth factor-mimicking portfolio that isolates the most strongly priced jump risk, achieving a high out-of-sample Sharpe ratio and delivering significant alphas relative to standard factor models. The results highlight the value of around-the-clock analysis an...
3.Forecasting Oil Prices Across the Distribution: A Quantile VAR Approach
We develop a Quantile Bayesian Vector Autoregression (QBVAR) to forecast real oil prices across different quantiles of the conditional distribution. The model allows predictor effects to vary across quantiles, capturing asymmetries that standard mean-focused approaches miss. Using monthly data from 1975 to 2025, we document three findings. First, the QBVAR improves median forecasts by 2-5\% relative to Bayesian VARs, demonstrating that quantile-specific dynamics matter even for point prediction. Second, uncertainty and financial condition variables strongly predict downside risk, with left-tail forecast improvements of 10-25\% that intensify during crisis episodes. Third, right-tail forecasting remains difficult; stochastic volatility models dominate for upside risk, though forecast combinations that include the QBVAR recover these losses...
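Quantile-specific dynamics of the kind the QBVAR exploits rest on the pinball (check) loss, whose minimizer is the target quantile rather than the mean. A minimal illustration, not the paper's model:

```python
import numpy as np

def pinball_loss(y: np.ndarray, y_hat: float, tau: float) -> float:
    """Check loss: asymmetric penalty that makes the tau-quantile optimal."""
    e = y - y_hat
    return float(np.mean(np.maximum(tau * e, (tau - 1) * e)))

# The constant prediction minimizing the pinball loss is the empirical
# tau-quantile -- here the left tail (tau = 0.1) of a synthetic sample.
rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
losses = [pinball_loss(y, c, 0.1) for c in grid]
best = grid[int(np.argmin(losses))]
assert abs(best - np.quantile(y, 0.1)) < 0.05
```

Fitting one such regression per quantile is what lets predictor effects differ between the left tail (downside oil-price risk) and the median, which a mean-focused VAR averages away.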
4.A Herding-Based Model of Technological Transfer and Economic Convergence: Evidence from Central and Eastern Europe
The long-run convergence of developing economies toward advanced countries exhibits robust empirical regularities, yet the mechanisms underlying technological diffusion remain insufficiently specified in standard growth models. In this paper, we extend the neoclassical framework by introducing a micro-founded mechanism of technological transfer as a driver of total factor productivity. Rather than treating technological progress as exogenous or purely innovation-driven, we model productivity growth as a process of adopting existing technologies from the global frontier. The diffusion process is described using a herding-type interaction mechanism, in which agents transition from non-adopters to adopters under the combined influence of individual incentives and peer effects. This approach yields a tractable aggregate representation of TFP ...
5.AI Patents in the United States and China: Measurement, Organization, and Knowledge Flows
We develop a high-precision classifier to measure artificial intelligence (AI) patents by fine-tuning PatentSBERTa on manually labeled data from the USPTO's AI Patent Dataset. Our classifier substantially improves the existing USPTO approach, achieving 97.0% precision, 91.3% recall, and a 94.0% F1 score, and it generalizes well to Chinese patents based on citation and lexical validation. Applying it to granted U.S. patents (1976-2023) and Chinese patents (2010-2023), we document rapid growth in AI patenting in both countries and broad convergence in AI patenting intensity and subfield composition, even as China surpasses the United States in recent annual patent counts. The organization of AI innovation nevertheless differs sharply: U.S. AI patenting is concentrated among large private incumbents and established hubs, whereas Chinese AI p...
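The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, which a two-line check confirms.

```python
# harmonic-mean check of the reported classifier metrics
precision, recall = 0.970, 0.913
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # → 0.941, matching the reported 94.0% F1 up to rounding
```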
arXiv – 6G & Networking
1.A Novel 6G Dynamic Channel Map Based on a Hybrid Channel Model
In sixth-generation (6G) wireless communication networks, device density, antenna counts, and the complexity of communication scenarios will increase significantly, which brings great challenges for system design and network optimization. By providing channel information in advance, the channel map has become a promising solution to these challenges in the 6G era. However, conventional channel maps cannot be updated in time as the physical environment changes. To solve this problem, a novel dynamic channel map (DCM) is proposed in this work. For DCM construction, we further present a ray tracing (RT) and geometric stochastic hybrid channel model (RT-GSHCM), which pre-constructs the DCM offline by RT and updates it online by a geometry-based stochastic channel model (GBSM). In this way, the DCM can provide time-varying channel information and cha...
2.An Open-Source Hardware-Aware Sub-THz Radio-Stripe Simulator
Sub-Terahertz radio-stripe and distributed MIMO architectures promise extreme spatial reuse and multi-GHz bandwidths, but the cascaded fiber front-haul and RF hardware impairments strongly shape end-to-end performance. This paper presents an open-source, configuration-driven simulator that models the full waveform-level signal chain from CP-OFDM baseband generation in the central unit, through measurement-parameterized polymer microwave fiber and coupler links, to booster/active Radio Units (RUs) with configurable nonlinearity, noise, in-phase and quadrature imbalance, and oscillator phase noise and carrier frequency offset. Wireless propagation is supported via lightweight deterministic and stochastic per-subcarrier channel models as well as site-specific ray-tracing datasets generated with a companion Sionna ray-tracer module. The simul...
3.Towards Trustworthy 6G Network Digital Twins: A Framework for Validating Counterfactual What-If Analysis in Edge Computing Resources
Network Digital Twins (NDTs) enable safe what-if analysis for 6G cloud-edge infrastructures, but adoption is often limited by fragmented workflows from telemetry to validation. We present a data-driven NDT framework that extends 6G-TWIN with a scalable pipeline for cloud-edge telemetry aggregation and semantic alignment into unified data models. Our contributions include: (i) scalable cloud-edge telemetry collection, (ii) regime-aware feature engineering capturing the network's scaling behavior, and (iii) a validation methodology based on Sign Agreement and Directional Sensitivity. Evaluated on a Kubernetes-managed cluster, the framework extrapolates performance to unseen high-load regimes. Results show both Deep Neural Network (DNN) and XGBoost achieve high regression accuracy (R2 > 0.99), while the XGBoost model delivers superior dir...
4.EdgeDetect: Importance-Aware Gradient Compression with Homomorphic Aggregation for Federated Intrusion Detection
Federated learning (FL) enables collaborative intrusion detection without raw data exchange, but conventional FL incurs high communication overhead from full-precision gradient transmission and remains vulnerable to gradient inference attacks. This paper presents EdgeDetect, a communication-efficient and privacy-aware federated IDS for bandwidth-constrained 6G-IoT environments. EdgeDetect introduces gradient smartification, a median-based statistical binarization that compresses local updates to $\{+1,-1\}$ representations, reducing uplink payload by $32\times$ while preserving convergence. We further integrate Paillier homomorphic encryption over binarized gradients, protecting against honest-but-curious servers without exposing individual updates. Experiments on CIC-IDS2017 (2.8M flows, 7 attack classes) demonstrate $98.0\%$ multi-class...
5.Comprehensive Review of Doppler Shift Localization Methods: Advances, Limitations, and Research Opportunities
Reliable geolocation of non-cooperative emitters in environments where Global Navigation Satellite Systems (GNSS) are unavailable or degraded is a key enabler for spectrum regulation, emergency response, autonomous mobility, and Integrated Sensing and Communication (ISAC) services in 5G/6G systems. Doppler-based techniques - from single-receiver Signal Doppler Frequency (SDF) fixes through multi-node Frequency Difference of Arrival (FDOA) and Direct Position Determination (DPD) to derivative-enhanced and learning-assisted hybrids - exploit radial-velocity-induced frequency shifts as a passive, high-resolution localization cue accessible with commodity software-defined radios, millimeter-wave access points, or acoustic sensors. This review consolidates over a decade of research across radio, acoustic, and satellite domains. It introduces a...
arXiv – Network Architecture (6G/Slicing)
1.Towards Trustworthy 6G Network Digital Twins: A Framework for Validating Counterfactual What-If Analysis in Edge Computing Resources
Network Digital Twins (NDTs) enable safe what-if analysis for 6G cloud-edge infrastructures, but adoption is often limited by fragmented workflows from telemetry to validation. We present a data-driven NDT framework that extends 6G-TWIN with a scalable pipeline for cloud-edge telemetry aggregation and semantic alignment into unified data models. Our contributions include: (i) scalable cloud-edge telemetry collection, (ii) regime-aware feature engineering capturing the network's scaling behavior, and (iii) a validation methodology based on Sign Agreement and Directional Sensitivity. Evaluated on a Kubernetes-managed cluster, the framework extrapolates performance to unseen high-load regimes. Results show both Deep Neural Network (DNN) and XGBoost achieve high regression accuracy (R2 > 0.99), while the XGBoost model delivers superior dir...
2.Cross-Domain Query Translation for Network Troubleshooting: A Multi-Agent LLM Framework with Privacy Preservation and Self-Reflection
This paper presents a hierarchical multi-agent LLM architecture to bridge communication gaps between non-technical end users and telecommunications domain experts in private network environments. We propose a cross-domain query translation framework that leverages specialized language models coordinated through multi-agent reflection-based reasoning. The resulting system addresses three critical challenges: it (1) accurately classifies user queries related to telecommunications network issues using a dual-stage hierarchical approach, (2) preserves user privacy through the anonymization of semantically relevant personally identifiable information (PII) while maintaining diagnostic utility, and (3) translates technical expert responses into user-comprehensible language. Our approach employs ReAct-style agents enhanced with self-reflection mechan...
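The anonymization step, replacing PII with typed placeholders while keeping the query diagnostically useful, can be sketched with a toy regex pass. This is purely illustrative: the paper's anonymizer is agent-based, and the patterns and labels below are our assumptions.

```python
import re

# Hypothetical PII patterns; a real system would cover many more types
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace matched PII spans with typed placeholders, preserving the
    technical content of the query for downstream diagnosis."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"<{label}>", text)
    return text

msg = ("My 5G CPE at site B drops calls; reach me at "
       "jane.doe@example.com or +1 415-555-0100.")
out = anonymize(msg)
assert "jane.doe@example.com" not in out
assert "<EMAIL>" in out and "<PHONE>" in out
```

The typed placeholders are what preserve diagnostic utility: the troubleshooting agents still see that a contact channel exists and where the fault occurs, without ever handling the raw identifiers.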
3.The Missing Pillar in Quantum-Safe 6G: Regulation and Global Compliance
Sixth-generation (6G) mobile networks are expected to operate for multiple decades, supporting mission-critical and globally federated digital services. This long operational horizon coincides with rapid advances in quantum computing that threaten the cryptographic foundations of contemporary mobile systems. While post-quantum cryptography is widely recognized as a necessary technical response, its effective deployment in 6G depends equally on the evolution of regulatory policy and global compliance frameworks. This article argues that quantum-safe 6G represents a regulatory inflection point for mobile networks, as existing compliance models shaped by static cryptographic assumptions, incremental evolution, and point-in-time certification are poorly suited to long-term quantum risk. Building on an analysis of baseline telecom compliance c...
4.Advancing Network Digital Twin Framework for Generating Realistic Datasets
The integration of accurate and reproducible wireless network simulations is a key enabler for research on open, virtualized, and intelligent communication systems. Network Digital Twins (NDTs) provide a scalable alternative to costly and time-consuming measurement campaigns, while enabling controlled experimentation and data generation for data-driven network design. In this paper, we present an open and user-friendly NDT framework that integrates controllable vehicular mobility with the site-specific ray tracer Sionna and the discrete-event ns-3 network simulator, enabling virtualized end-to-end modeling of wireless networks across the radio, network, and application layers. The proposed framework is particularly well-suited for dynamic vehicular networks and urban deployments, supporting realistic mobility, traffic dynamics, and the ex...
5.LightTune: Lightweight Forward-Only Online Fine-Tuning with Applications to Link Adaptation
Deploying machine learning (ML) algorithms on mobile phones is bottlenecked by performance degradation under dynamic, real-world conditions that differ from the offline training conditions. While continual learning and adaptation are essential to mitigate this distributional shift, conventional online learning methods are often computationally prohibitive for resource-constrained devices. In this paper, we propose LightTune, a lightweight, backpropagation-free online fine-tuning framework with provable convergence guarantees. LightTune opportunistically refines ML models using live test-time data only when performance falls below a predefined threshold, ensuring minimal computational overhead and highly efficient responsiveness. As a practical demonstration, we integrate LightTune into a block error rate (BLER) prediction algorithm for ...
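The two ingredients the abstract highlights, threshold-triggered adaptation and backpropagation-free updates, can be sketched together. The SPSA-style two-forward-pass gradient estimate below is our assumption; the paper's estimator, loss, and thresholding policy may differ.

```python
import numpy as np

def forward_only_step(w, loss_fn, threshold, rng, lr=0.05, eps=1e-3):
    """One backprop-free update (illustrative sketch).

    Skip work entirely while the model is good enough; otherwise estimate
    the gradient from two forward evaluations along a random +/-1 direction
    (an SPSA-style zeroth-order estimate).
    """
    if loss_fn(w) <= threshold:
        return w                                  # performance acceptable: no tuning
    d = rng.choice([-1.0, 1.0], size=w.shape)     # random perturbation direction
    g_hat = (loss_fn(w + eps * d) - loss_fn(w - eps * d)) / (2 * eps) * d
    return w - lr * g_hat

# toy stand-in for a BLER-prediction loss, minimized at w = 1
loss = lambda w: float(np.sum((w - 1.0) ** 2))
rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(200):
    w = forward_only_step(w, loss, threshold=0.01, rng=rng)
assert loss(w) < 0.5  # converged well below the starting loss of 4.0
```

Each triggered update costs only two extra forward passes and no stored activations, which is the kind of budget that fits a phone's modem-adjacent compute, at the price of a noisier gradient estimate than backpropagation would give.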