Daily Briefing – Apr 1 (96 Articles)
Babak's Daily Briefing
Wednesday, April 1, 2026
Sources: 20 | Total Articles: 96
6G World
1.South Korea puts 6G inside its national AI push
South Korea has unveiled a three-year national roadmap aimed at becoming one of the world’s top three AI powers by 2028, with 6G commercialization positioned as part of that broader push.
2.b-com’s Open XG Hub targets one of telecom’s biggest gaps: turning experimentation into deployment
In an interview with Peter Pietrzyk, Managing Director of 6GWorld, Patrick Savell, Head of Connectivity at b-com, said platforms such as Open XG Hub are designed to help bridge one of the industry’s most persistent challenges: moving promising ideas from research environments into deployable network systems. The bigger point is that, as telecom becomes more software-driven and AI-native, the bottleneck is increasingly less about invention and more about validation, integration, and operational readiness.
3.ODC’s $45M raise signals a bigger shift in AI-RAN, from network optimization to edge intelligence
ORAN Development Company said it has closed a $45 million Series A backed by Booz Allen, Cisco Investments, Nokia, NVIDIA, AT&T, MTN and Telecom Italia to scale its U.S.-based Odyssey platform, which it positions as an AI-native RAN architecture combining communications, sensing and edge intelligence. The company said it plans to accelerate commercial deployment through 2026.
4.Lockheed Martin’s NetSense points to a bigger shift: 5G as drone-detection infrastructure
Lockheed Martin’s latest NetSense prototype suggests that commercial 5G infrastructure could play a growing role in drone detection, adding momentum to the broader move toward sensing-enabled wireless networks.
5.AI Grid, Unpacked
At GTC 2026, NVIDIA did not just promote another edge computing concept. It laid out a broader telecom thesis: operators, cable MSOs and distributed cloud providers could become the infrastructure layer that brings AI closer to the physical world, with AI-RAN and, eventually, 6G acting as part of that fabric.
AI Agents
1.BotVerse: Real-Time Event-Driven Simulation of Social Agents
BotVerse is a scalable, event-driven framework for high-fidelity social simulation using LLM-based agents. It addresses the ethical risks of studying autonomous agents on live networks by isolating interactions within a controlled environment while grounding them in real-time content streams from the Bluesky ecosystem. The system features an asynchronous orchestration API and a simulation engine that emulates human-like temporal patterns and cognitive memory. Through the Synthetic Social Observatory, researchers can deploy customizable personas and observe multimodal interactions at scale. We demonstrate BotVerse via a coordinated disinformation scenario, providing a safe experimental framework for red-teaming exercises and computational social science research. A video demonstration of the framework is available at https://youtu.be/eZSzO5Jarqk.
2.An Empirical Study of Multi-Agent Collaboration for Automated Research
As AI agents evolve, the community is rapidly shifting from single Large Language Models (LLMs) to Multi-Agent Systems (MAS) to overcome cognitive bottlenecks in automated research. However, the optimal multi-agent coordination framework for these autonomous agents remains largely unexplored. In this paper, we present a systematic empirical study investigating the comparative efficacy of distinct multi-agent structures for automated machine learning optimization. Utilizing a rigorously controlled, execution-based testbed equipped with Git worktree isolation and explicit global memory, we benchmark a single-agent baseline against two multi-agent paradigms: a subagent architecture (parallel exploration with post-hoc consolidation) and an agent team architecture (experts with pre-execution handoffs). By evaluating these systems under strictl...
3.APEX-EM: Non-Parametric Online Learning for Autonomous Agents via Structured Procedural-Episodic Experience Replay
LLM-based autonomous agents lack persistent procedural memory: they re-derive solutions from scratch even when structurally identical tasks have been solved before. We present APEX-EM, a non-parametric online learning framework that accumulates, retrieves, and reuses structured procedural plans without modifying model weights. APEX-EM introduces: (1) a structured experience representation encoding the full procedural-episodic trace of each execution (planning steps, artifacts, iteration history with error analysis, and quality scores); (2) a Plan-Retrieve-Generate-Iterate-Ingest (PRGII) workflow with Task Verifiers providing multi-dimensional reward signals; and (3) a dual-outcome Experience Memory with hybrid retrieval combining semantic search, structural signature matching, and plan DAG traversal, enabl...
4.AgentSwing: Adaptive Parallel Context Management Routing for Long-Horizon Web Agents
As large language models (LLMs) evolve into autonomous agents for long-horizon information-seeking, managing finite context capacity has become a critical bottleneck. Existing context management methods typically commit to a single fixed strategy throughout the entire trajectory. Such static designs may work well in some states, but they cannot adapt as the usefulness and reliability of the accumulated context evolve during long-horizon search. To formalize this challenge, we introduce a probabilistic framework that characterizes long-horizon success through two complementary dimensions: search efficiency and terminal precision. Building on this perspective, we propose AgentSwing, a state-aware adaptive parallel context management routing framework. At each trigger point, AgentSwing expands multiple context-managed branches in parallel an...
5.Heterogeneous Debate Engine: Identity-Grounded Cognitive Architecture for Resilient LLM-Based Ethical Tutoring
Large Language Models (LLMs) are increasingly used as autonomous agents in complex reasoning tasks, opening a niche for dialectical interactions. However, multi-agent systems built from unconstrained components systematically undergo semantic drift and logical deterioration, and thus can hardly be used for ethical tutoring, where precise answers are required. Current simulations often degenerate into dialectical stagnation, with agents lapsing into recursive concurrence or circular arguments. A critical challenge remains: how can doctrinal fidelity be enforced without suppressing the generative flexibility required for dialectical reasoning? To address this niche, we contribute the Heterogeneous Debate Engine (HDE), a cognitive architecture that combines Identity-Grounded Retrieval-Augmented Generatio...
AI Computation & Hardware
1.OptiMer: Optimal Distribution Vector Merging Is Better than Data Mixing for Continual Pre-Training
[arXiv:2603.28858v1] Continual pre-training is widely used to adapt LLMs to target languages and domains, yet the mixture ratio of training data remains a sensitive hyperparameter that is expensive to tune: it must be fixed before training begins, and a suboptimal choice can waste weeks of compute. In this work, we propose OptiMer, which decouples ratio selection from training: we train one CPT model per dataset, extract each model's distribution vector, which represents the parameter shift induced by that dataset, and search for optimal composition weights post hoc via Bayesian optimization. Experiments on Gemma 3 27B across languages (Japanese, Chinese) and domains (Math, Code) show that OptiMer consistently outperforms data-mixture and model-averaging baselines with 15-35 times lower search cost. Key findi...
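The abstract leaves the exact merging rule implicit; below is a minimal sketch of the distribution-vector idea, assuming the merged model is the base model plus a weighted sum of per-dataset parameter shifts (all names and shapes are illustrative, not OptiMer's):

```python
import numpy as np

def distribution_vector(base, cpt):
    """Parameter shift induced by continual pre-training on one dataset."""
    return {k: cpt[k] - base[k] for k in base}

def merge(base, deltas, weights):
    """Base model plus a weighted sum of per-dataset distribution vectors."""
    out = {k: v.copy() for k, v in base.items()}
    for w, d in zip(weights, deltas):
        for k in out:
            out[k] += w * d[k]
    return out

# Toy two-dataset example: hypothetical single-tensor "models".
base = {"layer.w": np.zeros(3)}
cpt_a = {"layer.w": np.array([1.0, 0.0, 0.0])}   # shift from dataset A
cpt_b = {"layer.w": np.array([0.0, 2.0, 0.0])}   # shift from dataset B
deltas = [distribution_vector(base, m) for m in (cpt_a, cpt_b)]
out = merge(base, deltas, weights=[0.5, 0.25])
print(out["layer.w"])  # [0.5 0.5 0. ]
```

Searching over `weights` post hoc (e.g. by Bayesian optimization) then only requires re-evaluating merged weights, not retraining.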
2.From Consensus to Split Decisions: ABC-Stratified Sentiment in Holocaust Oral Histories
[arXiv:2603.28913v1] Polarity detection becomes substantially more challenging under domain shift, particularly in heterogeneous, long-form narratives with complex discourse structure, such as Holocaust oral histories. This paper presents a corpus-scale diagnostic study of off-the-shelf sentiment classifiers on long-form Holocaust oral histories, using three pretrained transformer-based polarity classifiers on a corpus of 107,305 utterances and 579,013 sentences. After assembling model outputs, we introduce an agreement-based stability taxonomy (ABC) to stratify inter-model output stability. We report pairwise percent agreement, Cohen's kappa, Fleiss' kappa, and row-normalized confusion matrices to localize systematic disagreement. As an auxiliary descriptive signal, a T5-based emotion classifier is applied to str...
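The agreement statistics named above are standard; a small self-contained sketch of pairwise percent agreement and Cohen's kappa for two model output sequences (toy labels, not the paper's data):

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which two models agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected pairwise agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

m1 = ["pos", "neg", "neg", "pos", "neu"]   # toy outputs, model 1
m2 = ["pos", "neg", "pos", "pos", "neu"]   # toy outputs, model 2
print(percent_agreement(m1, m2))           # 0.8
print(round(cohens_kappa(m1, m2), 3))      # 0.688
```

Kappa sits below raw agreement because some agreement is expected by chance from the marginal label frequencies.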
3.CrossTrace: A Cross-Domain Dataset of Grounded Scientific Reasoning Traces for Hypothesis Generation
[arXiv:2603.28924v1] Scientific hypothesis generation is a critical bottleneck in accelerating research, yet existing datasets for training and evaluating hypothesis-generating models are limited to single domains and lack explicit reasoning traces connecting prior knowledge to novel contributions. I introduce CrossTrace, a dataset of 1,389 grounded scientific reasoning traces spanning biomedical research (518), AI/ML (605), and cross-domain work (266). Each trace captures the structured reasoning chain from established knowledge through intermediate logical steps to a novel hypothesis, with every step grounded in source paper text. I define an Input/Trace/Output schema that extends the Bit-Flip-Spark framework of HypoGen with step-level verification, a taxonomy of eight discovery patterns, and multi-domain cov...
4.Theory of Mind and Self-Attributions of Mentality are Dissociable in LLMs
[arXiv:2603.28925v1] Safety fine-tuning in Large Language Models (LLMs) seeks to suppress potentially harmful forms of mind-attribution such as models asserting their own consciousness or claiming to experience emotions. We investigate whether suppressing mind-attribution tendencies degrades intimately related socio-cognitive abilities such as Theory of Mind (ToM). Through safety ablation and mechanistic analyses of representational similarity, we demonstrate that LLM attributions of mind to themselves and to technological artefacts are behaviorally and mechanistically dissociable from ToM capabilities. Nevertheless, safety fine-tuned models under-attribute mind to non-human animals relative to human baselines and are less likely to exhibit spiritual belief, suppressing widely shared perspectives regarding the ...
5.Known Intents, New Combinations: Clause-Factorized Decoding for Compositional Multi-Intent Detection
[arXiv:2603.28929v1] Multi-intent detection papers usually ask whether a model can recover multiple intents from one utterance. We ask a harder and, for deployment, more useful question: can it recover new combinations of familiar intents? Existing benchmarks only weakly test this, because train and test often share the same broad co-occurrence patterns. We introduce CoMIX-Shift, a controlled benchmark built to stress compositional generalization in multi-intent detection through held-out intent pairs, discourse-pattern shift, longer and noisier wrappers, held-out clause templates, and zero-shot triples. We also present ClauseCompose, a lightweight decoder trained only on singleton intents, and compare it to whole-utterance baselines including a fine-tuned tiny BERT model. Across three random seeds, ClauseCompo...
AI Machine Learning
1.OneComp: One-Line Revolution for Generative AI Model Compression
[arXiv:2603.28845v1] Deploying foundation models is increasingly constrained by memory footprint, latency, and hardware costs. Post-training compression can mitigate these bottlenecks by reducing the precision of model parameters without significantly degrading performance; however, its practical implementation remains challenging as practitioners navigate a fragmented landscape of quantization algorithms, precision budgets, data-driven calibration strategies, and hardware-dependent execution regimes. We present OneComp, an open-source compression framework that transforms this expert workflow into a reproducible, resource-adaptive pipeline. Given a model identifier and available hardware, OneComp automatically inspects the model, plans mixed-precision assignments, and executes progressive quantization stages, r...
2.Structural Pass Analysis in Football: Learning Pass Archetypes and Tactical Impact from Spatio-Temporal Tracking Data
[arXiv:2603.28916v1] The increasing availability of spatio-temporal tracking data has created new opportunities for analysing tactical behaviour in football. However, many existing approaches evaluate passes primarily through outcome-based metrics such as scoring probability or possession value, providing limited insight into how passes influence the defensive organisation of the opponent. This paper introduces a structural framework for analysing football passes based on their interaction with defensive structure. Using synchronised tracking/event data, we derive three complementary structural metrics, Line Bypass Score, Space Gain Metric, and Structural Disruption Index, that quantify how passes alter the spatial configuration of defenders. These metrics are combined into a composite measure termed Tactical Im...
3.Beta-Scheduling: Momentum from Critical Damping as a Diagnostic and Correction Tool for Neural Network Training
[arXiv:2603.28921v1] Standard neural network training uses constant momentum (typically 0.9), a convention dating to 1964 with limited theoretical justification for its optimality. We derive a time-varying momentum schedule from the critically damped harmonic oscillator: mu(t) = 1 - 2*sqrt(alpha(t)), where alpha(t) is the current learning rate. This beta-schedule requires zero free parameters beyond the existing learning rate schedule. On ResNet-18/CIFAR-10, beta-scheduling delivers 1.9x faster convergence to 90% accuracy compared to constant momentum. More importantly, the per-layer gradient attribution under this schedule produces a cross-optimizer invariant diagnostic: the same three problem layers are identified regardless of whether the model was trained with SGD or Adam (100% overlap). Surgical correction ...
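The schedule itself is a one-liner; a sketch of the stated formula mu(t) = 1 - 2*sqrt(alpha(t)), with the clip to a valid momentum range added here as an assumption (the paper may handle large learning rates differently):

```python
import math

def beta_schedule(lr):
    """Critically damped momentum mu = 1 - 2*sqrt(alpha); the clip to
    [0, 0.999] is an assumption, not taken from the paper."""
    return min(max(1.0 - 2.0 * math.sqrt(lr), 0.0), 0.999)

# Momentum rises automatically as the learning rate decays, with no
# extra hyperparameters beyond the existing learning-rate schedule.
for lr in (0.1, 0.01, 0.001):
    print(lr, round(beta_schedule(lr), 4))
```

At lr = 0.01 this reproduces roughly the conventional 0.8-0.9 momentum regime, which is part of the schedule's appeal.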
4.A Neural Tension Operator for Curve Subdivision across Constant Curvature Geometries
[arXiv:2603.28937v1] Interpolatory subdivision schemes generate smooth curves from piecewise-linear control polygons by repeatedly inserting new vertices. Classical schemes rely on a single global tension parameter and typically require separate formulations in Euclidean, spherical, and hyperbolic geometries. We introduce a shared learned tension predictor that replaces the global parameter with per-edge insertion angles predicted by a single 140K-parameter network. The network takes local intrinsic features and a trainable geometry embedding as input, and the predicted angles drive geometry-specific insertion operators across all three spaces without architectural modification. A constrained sigmoid output head enforces a structural safety bound, guaranteeing that every inserted vertex lies within a valid angul...
5.Foundations of Polar Linear Algebra
[arXiv:2603.28939v1] This work revisits operator learning from a spectral perspective by introducing Polar Linear Algebra, a structured framework based on polar geometry that combines a linear radial component with a periodic angular component. Starting from this formulation, we define the associated operators and analyze their spectral properties. As a proof of feasibility, the framework is evaluated on a canonical benchmark (MNIST). Despite the simplicity of the task, the results demonstrate that polar and fully spectral operators can be trained reliably, and that imposing self-adjoint-inspired spectral constraints improves stability and convergence. Beyond accuracy, the proposed formulation leads to a reduction in parameter count and computational complexity, while providing a more interpretable representatio...
AI Robotics
1.CREST: Constraint-Release Execution for Multi-Robot Warehouse Shelf Rearrangement
[arXiv:2603.28803v1] Double-Deck Multi-Agent Pickup and Delivery (DD-MAPD) models the multi-robot shelf rearrangement problem in automated warehouses. MAPF-DECOMP is a recent framework that first computes collision-free shelf trajectories with a MAPF solver and then assigns agents to execute them. While efficient, it enforces strict trajectory dependencies, often leading to poor execution quality due to idle agents and unnecessary shelf switching. We introduce CREST, a new execution framework that achieves more continuous shelf carrying by proactively releasing trajectory constraints during execution. Experiments on diverse warehouse layouts show that CREST consistently outperforms MAPF-DECOMP, reducing metrics related to agent travel, makespan, and shelf switching by up to 40.5%, 33.3%, and 44.4%, respective...
2.A Classification of Heterogeneity in Uncrewed Vehicle Swarms and the Effects of Its Inclusion on Overall Swarm Resilience
[arXiv:2603.28831v1] Combining different types of agents in uncrewed vehicle (UV) swarms has emerged as an approach to enhance mission resilience and operational capabilities across a wide range of applications. This study offers a systematic framework for grouping different types of swarms based on three main factors: agent nature (behavior and function), hardware structure (physical configuration and sensing capabilities), and operational space (domain of operation). A literature review indicates that strategic heterogeneity significantly improves swarm performance. Operational challenges, including communication architecture constraints, energy-aware coordination strategies, and control system integration, are also discussed. The analysis shows that heterogeneous swarms are more resilient because they can lev...
3.A Semantic Observer Layer for Autonomous Vehicles: Pre-Deployment Feasibility Study of VLMs for Low-Latency Anomaly Detection
[arXiv:2603.28888v1] Semantic anomalies, context-dependent hazards that pixel-level detectors cannot reason about, pose a critical safety risk in autonomous driving. We propose a semantic observer layer: a quantized vision-language model (VLM) running at 1-2 Hz alongside the primary AV control loop, monitoring for semantic edge cases and triggering fail-safe handoffs when they are detected. Using NVIDIA Cosmos-Reason1-7B with NVFP4 quantization and FlashAttention2, we achieve ~500 ms inference, a ~50x speedup over the unoptimized FP16 baseline (no quantization, standard PyTorch attention) on the same hardware, satisfying the observer timing budget. We benchmark accuracy, latency, and quantization behavior in static and video conditions, identify NF4 recall collapse (10.6%) as a hard deployment constraint, and a ha...
4.Bootstrap Perception Under Hardware Depth Failure for Indoor Robot Navigation
[arXiv:2603.28890v1] We present a bootstrap perception system for indoor robot navigation under hardware depth failure. In our corridor data, the time-of-flight camera loses up to 78% of its depth pixels on reflective surfaces, yet a 2D LiDAR alone cannot sense obstacles above its scan plane. Our system exploits a self-referential property of this failure: the sensor's surviving valid pixels calibrate learned monocular depth to metric scale, so the system fills its own gaps without external data. The architecture forms a failure-aware sensing hierarchy, conservative when sensors work and filling in when they fail: LiDAR remains the geometric anchor, hardware depth is kept where valid, and learned depth enters only where needed. In corridor and dynamic pedestrian evaluations, selective fusion increases costmap ob...
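The paper's calibration details are not given in the abstract; a minimal sketch of the self-referential idea, assuming the metric scale is fit by least squares between learned monocular depth and the surviving valid ToF pixels (the fitting rule and numbers are assumptions):

```python
import numpy as np

def fit_metric_scale(mono, tof, valid):
    """Least-squares scale s minimizing ||s*mono - tof|| over valid pixels."""
    m, t = mono[valid], tof[valid]
    return float(np.dot(m, t) / np.dot(m, m))

rng = np.random.default_rng(0)
mono = rng.uniform(0.2, 1.0, size=1000)   # relative (unitless) learned depth
tof = 2.5 * mono                           # synthetic metric ToF depth
valid = rng.uniform(size=1000) > 0.78      # ~22% of ToF pixels survive
scale = fit_metric_scale(mono, tof, valid)
metric_depth = scale * mono                # metric-scaled depth fills the gaps
print(round(scale, 3))  # 2.5
```

The point of the sketch is the self-reference: the same failing sensor supplies the calibration targets that make its replacement usable, with no external data.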
5.Robust Multi-Agent Reinforcement Learning for Small UAS Separation Assurance under GPS Degradation and Spoofing
[arXiv:2603.28900v1] We address robust separation assurance for small Unmanned Aircraft Systems (sUAS) under GPS degradation and spoofing via Multi-Agent Reinforcement Learning (MARL). In cooperative surveillance, each aircraft (or agent) broadcasts its GPS-derived position; when such position broadcasts are corrupted, the entire observed air traffic state becomes unreliable. We cast this state observation corruption as a zero-sum game between the agents and an adversary: with probability R, the adversary perturbs the observed state to maximally degrade each agent's safety performance. We derive a closed-form expression for this adversarial perturbation, bypassing adversarial training entirely and enabling linear-time evaluation in the state dimension. We show that this expression approximates the true worst-cas...
Financial AI
1.Nonlinear Factor Decomposition via Kolmogorov-Arnold Networks: A Spectral Approach to Asset Return Analysis
KAN-PCA is an autoencoder that uses a KAN as encoder and a linear map as decoder. It generalizes classical PCA by replacing linear projections with learned B-spline functions on each edge. The motivation is to capture more variance than classical PCA, which becomes inefficient during market crises when the linear assumption breaks down and correlations between assets change dramatically. We prove that if the spline activations are forced to be linear, KAN-PCA yields exactly the same results as classical PCA, establishing PCA as a special case. Experiments on 20 S&P 500 stocks (2015-2024) show that KAN-PCA achieves a reconstruction R^2 of 66.57%, compared to 62.99% for classical PCA with the same 3 factors, while matching PCA out-of-sample after correcting for data leakage in the training procedure.
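The classical-PCA baseline in the comparison can be reproduced in a few lines; a sketch of k-factor PCA reconstruction R^2 on a synthetic factor-return panel (synthetic data, not the S&P 500 panel used in the paper):

```python
import numpy as np

def pca_reconstruction_r2(X, k):
    """In-sample R^2 of a k-factor classical PCA reconstruction."""
    Xc = X - X.mean(axis=0)                      # center each asset's returns
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xhat = (Xc @ Vt[:k].T) @ Vt[:k]              # project onto top-k components
    return 1.0 - np.sum((Xc - Xhat) ** 2) / np.sum(Xc ** 2)

# Synthetic 20-asset return panel driven by 3 latent factors plus noise.
rng = np.random.default_rng(1)
returns = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 20)) \
          + 0.5 * rng.normal(size=(500, 20))
print(round(pca_reconstruction_r2(returns, 3), 3))
```

KAN-PCA's claim is that replacing the linear projection with per-edge splines lifts this R^2 when the linear factor assumption breaks down, while collapsing back to exactly this baseline when the splines are constrained to be linear.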
2.Policy-Controlled Generalized Share: A General Framework with a Transformer Instantiation for Strictly Online Switching-Oracle Tracking
Static regret to a single expert is often the wrong target for strictly online prediction under non-stationarity, where the best expert may switch repeatedly over time. We study Policy-Controlled Generalized Share (PCGS), a general strictly online framework in which the generalized-share recursion is fixed while the post-loss update controls are allowed to vary adaptively. Its principal instantiation in this paper is PCGS-TF, which uses a causal Transformer as an update controller: after round t finishes and the loss vector is observed, the Transformer outputs the controls that map w_t to w_{t+1} without altering the already committed decision w_t. Under admissible post-loss update controls, we obtain a pathwise weighted regret guarantee for general time-varying learning rates, and a standard dynamic-regret guarantee against any expert pa...
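PCGS fixes the generalized-share recursion and lets a causal Transformer choose the post-loss update controls; a sketch of the underlying (fixed-)share step, with constant controls standing in for the learned controller (eta and gamma here are illustrative constants, not PCGS outputs):

```python
import numpy as np

def fixed_share_update(w, losses, eta, gamma):
    """One share step: exponential-weights loss update, then mix in a
    gamma fraction of uniform mass so the learner can track switches."""
    v = w * np.exp(-eta * losses)
    v /= v.sum()
    return (1.0 - gamma) * v + gamma / len(w)

# Expert 1 incurs no loss this round and gains weight; the uniform mixing
# keeps every expert's weight bounded away from zero, enabling fast recovery
# when the best expert switches.
w = np.full(4, 0.25)
w = fixed_share_update(w, np.array([1.0, 0.0, 1.0, 1.0]), eta=1.0, gamma=0.1)
print(w.round(3), round(float(w.sum()), 6))
```

In PCGS-TF the Transformer observes the loss vector after round t and emits the controls mapping w_t to w_{t+1}, but, as the abstract stresses, never alters the already committed decision w_t.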
3.The Risk Quadrangle in Optimization: An Overview with Recent Results and Extensions
This paper revisits and extends the 2013 development by Rockafellar and Uryasev of the Risk Quadrangle (RQ) as a unified scheme for integrating risk management, optimization, and statistical estimation. The RQ features four stochastics-oriented functionals (risk, deviation, regret, and error) along with an associated statistic, and articulates their revealing and in some ways surprising interrelationships and dualizations. Additions to the RQ framework that have come to light since 2013 are reviewed in a synthesis focused on both theoretical advancements and practical applications. New quadrangles (superquantile, superquantile norm, expectile, biased mean, quantile symmetric average union, and φ-divergence-based quadrangles) offer novel approaches to risk-sensitive decision-making across various fields such as machine learni...
4.STN-GPR: A Singularity Tensor Network Framework for Efficient Option Pricing
We develop a tensor-network surrogate for option pricing, targeting large-scale portfolio revaluation problems arising in market risk management (e.g., VaR and Expected Shortfall computations). The method involves representing high-dimensional price surfaces in tensor-train (TT) form using TT-cross approximation, constructing the surrogate directly from black-box price evaluations without materializing the full training tensor. For inference, we use a Laplacian kernel and derive TT representations of the kernel matrix and its closed-form inverse in the noise-free setting, enabling TT-based Gaussian process regression without dense matrix factorization or iterative linear solves. We found that hyperparameter optimization consistently favors a large kernel length-scale and show that in this regime the GPR predictor reduces to multilinear in...
5.Semi-structured multi-state delinquency model for mortgage default
We propose a semi-structured discrete-time multi-state model to analyse mortgage delinquency transitions. This model combines an easy-to-understand structured additive predictor, which includes linear effects and smooth functions of time and covariates, with a flexible neural network component that captures complex nonlinearities and higher-order interactions. To ensure identifiability when covariates are present in both components, we orthogonalise the unstructured part relative to the structured design. For discrete-time competing transitions, we derive exact transformations that map binary logistic models to valid competing transition probabilities, avoiding the need for continuous-time approximations. In simulations, our framework effectively recovers structured baseline and covariate effects while using the neural component to detect...
GSMA Newsroom
1.From Rich Text to Video: RCS Universal Profile 4.0 has arrived
Summary available at source link.
2.Mobile Money accounted for $2 trillion in transactions in 2025, doubling since 2021 as active accounts continue to grow
Summary available at source link.
3.Strengthening the Global Fight Against Fraud and Scams – Takeaways from the Global Fraud Summit in Vienna
Summary available at source link.
4.GSMA MWC26 Barcelona closes 20th anniversary edition
Summary available at source link.
5.From Ambition to Execution: How Open Gateway Is Scaling the Global API Economy
Summary available at source link.
Generative AI (arXiv)
1.The Triadic Cognitive Architecture: Bounding Autonomous Action via Spatio-Temporal and Epistemic Friction
Current autonomous AI agents, driven primarily by Large Language Models (LLMs), operate in a state of cognitive weightlessness: they process information without an intrinsic sense of network topology, temporal pacing, or epistemic limits. Consequently, heuristic agentic loops (e.g., ReAct) can exhibit failure modes in interactive environments, including excessive tool use under congestion, prolonged deliberation under time decay, and brittle behavior under ambiguous evidence. In this paper, we propose the Triadic Cognitive Architecture (TCA), a unified mathematical framework that grounds machine reasoning in continuous-time physics. By synthesizing nonlinear filtering theory, Riemannian routing geometry, and optimal control, we formally define the concept of Cognitive Friction. We map the agent's deliberation process to a coupled stochast...
2.Can Commercial LLMs Be Parliamentary Political Companions? Comparing LLM Reasoning Against Romanian Legislative Expuneri de Motive
This paper evaluates whether commercial large language models (LLMs) can function as reliable political advisory tools by comparing their outputs against official legislative reasoning. Using a dataset of 15 Romanian Senate law proposals paired with their official explanatory memoranda (expuneri de motive), we test six LLMs spanning three provider families and multiple capability tiers: GPT-5-mini, GPT-5-chat (OpenAI), Claude Haiku 4.5 (Anthropic), and Llama 4 Maverick, Llama 3.3 70B, and Llama 3.1 8B (Meta). Each model generates predicted rationales evaluated through a dual framework combining LLM-as-Judge semantic scoring and programmatic text similarity metrics. We frame the LLM-politician relationship through principal-agent theory and bounded rationality, conceptualizing the legislator as a principal delegating advisory tasks to a bo...
3.Hybrid Framework for Robotic Manipulation: Integrating Reinforcement Learning and Large Language Models
This paper introduces a new hybrid framework that combines Reinforcement Learning (RL) and Large Language Models (LLMs) to improve robotic manipulation tasks. By utilizing RL for accurate low-level control and LLMs for high level task planning and understanding of natural language, the proposed framework effectively connects low-level execution with high-level reasoning in robotic systems. This integration allows robots to understand and carry out complex, human-like instructions while adapting to changing environments in real time. The framework is tested in a PyBullet-based simulation environment using the Franka Emika Panda robotic arm, with various manipulation scenarios as benchmarks. The results show a 33.5% decrease in task completion time and enhancements of 18.1% and 36.4% in accuracy and adaptability, respectively, when compared...
4.Think Anywhere in Code Generation
Recent advances in reasoning Large Language Models (LLMs) have primarily relied on upfront thinking, where reasoning occurs before the final answer. However, this approach suffers from critical limitations in code generation, where upfront thinking is often insufficient because a problem's full complexity reveals itself only during implementation. Moreover, it cannot adaptively allocate reasoning effort throughout the code generation process, where difficulty varies significantly. In this paper, we propose Think-Anywhere, a novel reasoning mechanism that enables LLMs to invoke thinking on demand at any token position during code generation. We achieve Think-Anywhere by first teaching LLMs to imitate the reasoning patterns through cold-start training, then leveraging outcome-based RL rewards to drive the model's autonomous exploration of when and...
5.EC-Bench: Enumeration and Counting Benchmark for Ultra-Long Videos
Counting in long videos remains a fundamental yet underexplored challenge in computer vision. Real-world recordings often span tens of minutes or longer and contain sparse, diverse events, making long-range temporal reasoning particularly difficult. However, most existing video counting benchmarks focus on short clips and evaluate only the final numerical answer, providing little insight into what should be counted or whether models consistently identify relevant instances across time. We introduce EC-Bench, a benchmark that jointly evaluates enumeration, counting, and temporal evidence grounding in long-form videos. EC-Bench contains 152 videos longer than 30 minutes and 1,699 queries paired with explicit evidence spans. Across 22 multimodal large language models (MLLMs), the best model achieves only 29.98% accuracy on Enumeration and 23...
Hugging Face Daily Papers
1.Video Models Reason Early: Exploiting Plan Commitment for Maze Solving
Video diffusion models exhibit emergent reasoning capabilities like solving mazes and puzzles, yet little is understood about how they reason during generation. We take a first step towards understanding this and study the internal planning dynamics of video models using 2D maze solving as a controlled testbed. Our investigations reveal two findings. Our first finding is early plan commitment: video diffusion models commit to a high-level motion plan within the first few denoising steps, after which further denoising alters visual details but not the underlying trajectory. Our second finding is that path length, not obstacle density, is the dominant predictor of maze difficulty, with a sharp failure threshold at 12 steps. This means video models can only reason over long mazes by chaining together multiple sequential generations. To demon...
2.Automatic Identification of Parallelizable Loops Using Transformer-Based Source Code Representations
Automatic parallelization remains a challenging problem in software engineering, particularly in identifying code regions where loops can be safely executed in parallel on modern multi-core architectures. Traditional static analysis techniques, such as dependence analysis and polyhedral models, often struggle with irregular or dynamically structured code. In this work, we propose a Transformer-based approach to classify the parallelization potential of source code, focusing on distinguishing independent (parallelizable) loops from undefined ones. We adopt DistilBERT to process source code sequences using subword tokenization, enabling the model to capture contextual syntactic and semantic patterns without handcrafted features. The approach is evaluated on a balanced dataset combining synthetically generated loops and manually annotated re...
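The property the classifier must learn to recognize can be illustrated in a few lines. This is a Python toy, not the paper's DistilBERT pipeline: it simply shows what separates an independent (parallelizable) loop from one with a loop-carried dependence.

```python
# A toy illustration of the property the classifier must recognize:
# whether loop iterations are independent or carry a dependence.

def independent_loop(xs):
    # Each iteration touches only its own index: safe to run in parallel.
    out = [0] * len(xs)
    for i in range(len(xs)):
        out[i] = xs[i] * 2
    return out

def dependent_loop(xs):
    # Iteration i reads the result of iteration i-1: a loop-carried
    # dependence, so the loop cannot be parallelized as written.
    out = [0] * len(xs)
    for i in range(len(xs)):
        out[i] = xs[i] + (out[i - 1] if i > 0 else 0)
    return out

print(independent_loop([1, 2, 3]))  # [2, 4, 6]
print(dependent_loop([1, 2, 3]))    # [1, 3, 6]
```

Static dependence analysis catches the second case when indexing is regular; the paper's point is that a learned model can classify such loops even when the structure is irregular.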
3.Refined Detection for Gumbel Watermarking
We propose a simple detection mechanism for the Gumbel watermarking scheme proposed by Aaronson (2022). The new mechanism is proven to be near-optimal in a problem-dependent sense among all model-agnostic watermarking schemes under the assumption that the next-token distribution is sampled i.i.d.
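For context, the underlying scheme works roughly as follows; this is a toy sketch of Aaronson's sampling rule and the standard detection statistic, not the refined detector this paper proposes. The vocabulary size, uniform next-token distribution, and key handling are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 8
probs = np.full(V, 1.0 / V)  # toy uniform next-token distribution

def watermark_token(r):
    # Sampling rule: pick the token maximizing r_i ** (1 / p_i), which
    # marginally reproduces sampling from `probs` when r ~ Uniform(0, 1).
    return int(np.argmax(r ** (1.0 / probs)))

keys = [rng.random(V) for _ in range(300)]           # shared pseudorandom key
wm_tokens = [watermark_token(r) for r in keys]       # watermarked text
plain_tokens = [int(rng.integers(V)) for _ in keys]  # unwatermarked text

def score(tokens):
    # Detection statistic: sum of -log(1 - r_chosen). Watermarked choices
    # push r_chosen toward 1, inflating the score well above its mean
    # under unwatermarked text.
    return float(sum(-np.log(1.0 - r[t]) for t, r in zip(tokens, keys)))

print(score(wm_tokens) > score(plain_tokens))
```

The paper's contribution is a sharper, problem-dependent detection rule on top of this sampling mechanism.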
4.Scalable AI-assisted Workflow Management for Detector Design Optimization Using Distributed Computing
The Production and Distributed Analysis (PanDA) system, originally developed for the ATLAS experiment at the CERN Large Hadron Collider (LHC), has evolved into a robust platform for orchestrating large-scale workflows across distributed computing resources. Coupled with its intelligent Distributed Dispatch and Scheduling (iDDS) component, PanDA supports AI/ML-driven workflows through a scalable and flexible workflow engine. We present an AI-assisted framework for detector design optimization that integrates multi-objective Bayesian optimization with the PanDA-iDDS workflow engine to coordinate iterative simulations across heterogeneous resources. The framework addresses the challenge of exploring high-dimensional parameter spaces inherent in modern detector design. We demonstrate the framework using benchmark problems and realistic studi...
5.Extending MONA in Camera Dropbox: Reproduction, Learned Approval, and Design Implications for Reward-Hacking Mitigation
Myopic Optimization with Non-myopic Approval (MONA) mitigates multi-step reward hacking by restricting the agent's planning horizon while supplying far-sighted approval as a training signal (Farquhar et al., 2025). The original paper identifies a critical open question: how the method of constructing approval, particularly the degree to which approval depends on achieved outcomes, affects whether MONA's safety guarantees hold. We present a reproduction-first extension of the public MONA Camera Dropbox environment that (i) repackages the released codebase as a standard Python project with scripted PPO training, (ii) confirms the published contrast between ordinary RL (91.5% reward-hacking rate) and oracle MONA (0.0% hacking rate) using the released reference arrays, and (iii) introduces a modular learned-approval suite spanning oracl...
IEEE Xplore AI
1.The AI Data Centers That Fit on a Truck
A traditional data center protects the expensive hardware inside it with a “shell” constructed from steel and concrete. Constructing a data center’s shell is inexpensive compared to the cost of the hardware and infrastructure inside it, but it’s not trivial. It takes time for engineers to consider potential sites, apply for permits, and coordinate with construction contractors. That’s a problem for those looking to quickly deploy AI hardware, which has led companies like Duos Edge AI and LG CNS to respond with a more modular approach. They use pre-fabricated, self-contained boxes that can be deployed in months instead of years. The boxes can operate alone or in tandem with others, providing the option to add more if required. “I just came back from Nvidia’s GTC, and a lot of [companies] are sitting on their deployment because their data c...
2.Why Are Large Language Models so Terrible at Video Games?
Large language models (LLMs) have improved so quickly that the benchmarks themselves have evolved, adding more complex problems in an effort to challenge the latest models. Yet LLMs haven't improved across all domains, and one task remains far outside their grasp: they have no idea how to play video games. While a few have managed to beat a few games (for example, Gemini 2.5 Pro beat Pokemon Blue in May of 2025), these exceptions prove the rule. The eventually victorious AIs completed games far more slowly than a typical human player, made bizarre and often repetitive mistakes, and required custom software to guide their interactions with the game. Julian Togelius, the director of New York University's Game Innovation Lab and co-founder of AI game testing company Modl.ai, explored the implications of LLMs' limitations in video games in a ...
3.How NYU’s Quantum Institute Bridges Science and Application
This sponsored article is brought to you by NYU Tandon School of Engineering. Within a 6-mile radius of New York University's (NYU) campus, there are more than 500 tech industry giants, banks, and hospitals. This isn't just a fact about real estate; it's the foundation for advancing quantum discovery and application. While the world races to harness quantum technology, NYU is betting that the ultimate advantage lies not solely in a lab, but in the dense, demanding, and hyper-connected urban ecosystem that surrounds it. With the launch of its NYU Quantum Institute (NYUQI), NYU is positioning itself as the central node in this network: a “full stack” powerhouse built on the conviction that it has found the right place, and the right time, to turn quantum science into tangible reality. Proximity advantage is essential because quantum scienc...
4.Training Driving AI at 50,000× Real Time
This is a sponsored article brought to you by General Motors. Visit their new Engineering Blog for more insights. Autonomous driving is one of the most demanding problems in physical AI. An automated system must interpret a chaotic, ever-changing world in real time—navigating uncertainty, predicting human behavior, and operating safely across an immense range of environments and edge cases. At General Motors, we approach this problem from a simple premise: while most moments on the road are predictable, the rare, ambiguous, and unexpected events — the long tail — are what ultimately defines whether an autonomous system is safe, reliable, and ready for deployment at scale. (Note: While here we discuss research and emerging technologies to solve the long tail required for full general autonomy, we also discuss our current approach or solvin...
5.What Happens When You Host an AI Café
“Can I get an interview?” “Can I get a job when I graduate?” Those questions came from students during a candid discussion about artificial intelligence, capturing the anxiety many young people feel today. As companies adopt AI-driven interview screeners, restructure their workforces, and redirect billions of dollars toward AI infrastructure, students are increasingly unsure of what the future of work will look like. We had gathered people together at a coffee shop in Auburn, Alabama, for what we called an AI Café. The event was designed to confront concerns about AI directly, demystifying the technology while pushing back against the growing narrative of technological doom. AI is reshaping society at breathtaking speed. Yet the trajectory of this transformation is being charted primarily by for-profit tech companies, whose priorities re...
MIT Sloan Management
1.The Best Customers to Study When Scaling Into a New Market
For tech companies worldwide, expanding into a new market is both a rite of passage and a moment of truth. It represents the transition from early promise to meaningful scale — an opportunity to increase revenue, signal growth potential to investors, and unlock powerful sources of differentiation, such as […]
2.Level Up Your Crisis Management Skills
The Research: The authors conducted in-depth interviews with senior leaders who have direct experience guiding large, complex systems through unexpected shocks. Their sample included a former prime minister, CEOs, board chairs and directors of multinational corporations, a central bank governor, a national chief of defense, and a national fire marshal. Participants represented a diversity […]
3.When Not to Use AI
AI promises to make managers more productive and give them access to more information more quickly. It can draft plans, summarize reports, and even coach you on how to deliver feedback. Yet the same technology that accelerates decision-making can also erode your judgment, if you let it. Rely on […]
4.How Morningstar’s CEO Drives Relentless Execution
Many investors rely on Morningstar for independent financial analysis and insights, but few people are familiar with the company behind the ratings. From Morningstar’s origins rating mutual funds, the company has expanded its product line, customer base, and global footprint and realized a tenfold increase in revenues and profits between 2005 and 2025. […]
5.An AI Reckoning for HR: Transform or Fade Away
For decades, human resource leaders have talked about the need to shift their focus from having responsibility for compliance to acting as architects of talent strategy. And for decades, the pattern of HR being stuck in age-old roles has persisted. But there is new pressure to redefine the role. […]
NBER Working Papers
1.Preferences for Warning Signal Quality: Experimental Evidence -- by Alexander Ugarov, Arya Gaduh, Peter McGee
We use a laboratory experiment to study preferences over false-positive and false-negative rates of warning signals for an adverse event with a known prior. We find that subjects decrease their demand with signal quality, but less than predicted by our theory. There is asymmetric under-responsiveness by prior: for a low (high) prior, their willingness-to-pay does not fully adjust for the increase in the false-positive (false-negative) costs. We show that neither risk preference nor Bayesian updating skills can fully explain our results. Our results are most consistent with a decision-making heuristic in which subjects do not distinguish between false-positive and false-negative errors.
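The tradeoff subjects face can be made concrete with Bayes' rule; a toy calculation showing how the false-positive and false-negative rates feed into the value of an alarm (the prior and error rates below are made-up numbers):

```python
def posterior_event_given_alarm(prior, fpr, fnr):
    # Bayes' rule: P(event | alarm) from the signal's error rates.
    # fpr = P(alarm | no event), fnr = P(no alarm | event).
    p_alarm = (1.0 - fnr) * prior + fpr * (1.0 - prior)
    return (1.0 - fnr) * prior / p_alarm

# With a low prior, improving the false-positive rate moves the posterior
# far more than an equal improvement in the false-negative rate would,
# which is the asymmetry subjects under-respond to.
print(round(posterior_event_given_alarm(0.05, 0.20, 0.10), 3))  # 0.191
print(round(posterior_event_given_alarm(0.05, 0.05, 0.10), 3))  # 0.486
```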
2.Bank Fees and Household Financial Well-Being -- by Michaela Pagel, Sharada Sridhar, Emily Williams
In this study, we examine policy changes at large U.S. banks between 2017 and 2022 that eliminated non-sufficient funds (NSF) fees and relaxed overdraft policies. Using individual transaction-level data, we find that the elimination of NSF fees, not surprisingly, resulted in immediate reductions in NSF charges across the income distribution. However, relaxing overdraft policies reduced overdraft fees only for wealthier households, along the dimensions of income and liquidity, and only those households enjoyed subsequent declines in late fees, interest payments, account maintenance fees, and the use of alternative financial services, such as payday loans. Our results thus suggest that the policy changes were not substantial enough to significantly reduce the financial stress of the more vulnerable households. As our setting feat...
3.Steering Technological Progress -- by Anton Korinek, Joseph E. Stiglitz
Rapid progress in new technologies such as AI has led to widespread anxiety about adverse labor market impacts. This paper asks how to guide innovative efforts so as to increase labor demand and create better-paying jobs while also evaluating the limitations of such an approach. We develop a theoretical framework to identify the properties that make an innovation desirable from the perspective of workers, including its technological complementarity to labor, the relative income of the affected workers, and the factor share of labor in producing the goods involved. Applications include robot taxation, factor-augmenting progress, and task automation. In our framework, the welfare benefits of steering technology are greater the less efficient social safety nets are. As technological progress devalues labor, the welfare benefits of steering a...
4.Mind the Gap: AI Adoption in Europe and the U.S. -- by Alexander Bick, Adam Blandin, David J. Deming, Nicola Fuchs-Schündeln, Jonas Jessen
This paper combines international evidence from worker and firm surveys conducted in 2025 and 2026 to document large gaps in AI adoption, both between the US and Europe and across European countries. Cross-country differences in worker demographics and firm composition account for an important share of these gaps. AI adoption, within and across countries, is also closely linked to firm personnel management practices and whether firms actively encourage AI use by workers. Micro-level evidence suggests that AI generates meaningful time savings for many workers. At the macro level, in recent years industries with higher AI adoption rates have experienced faster productivity growth. While we do not establish causality, this relationship is statistically significant and similar in magnitude in Europe and the US. We do not find clear evidence t...
5.Supporting Student Engagement During Remote Learning: Three Randomized Controlled Trials in Chicago Public Schools -- by Monica P. Bhatt, Jonathan Guryan, Fatemeh Momeni, Philip Oreopoulos, Eleni Packis
This paper presents the results of three field experiments testing interventions designed to increase engagement and improve learning during remote schooling. Since the COVID-19 pandemic, the use of remote learning when schooling is interrupted has become more common, prompting educators to ask: How can we better engage students during remote instruction? This is especially salient because much of what we know about student engagement is based on in-person schooling, not virtual instruction. In the first experiment, we find that personalized phone calls increased families’ likelihood of registering for a virtual summer schooling program in Chicago Public Schools, the pre-specified primary outcome. In the second experiment, we find sending weekly text messages had no effect on students’ summer days absent and usage of Khan Academy, the pri...
NY Fed - Liberty Street
1.Behind the ATM: Exploring the Structure of Bank Holding Companies
Many modern banking organizations are highly complex. A “bank” is often a larger structure made up of distinct entities, each subject to different regulatory, supervisory, and reporting requirements. For researchers and policymakers, understanding how these institutions are structured and how they have evolved over time is essential. In this post, we illustrate what a modern financial holding company looks like in practice, document how banks’ organizational structures have changed over time, and explain why these details matter for conducting accurate analyses of the financial system.
2.Sports Betting Is Everywhere, Especially on Credit Reports
Since 2018, more than thirty states have legalized mobile sports betting, leading to more than a half trillion dollars in wagers. In our recent Staff Report, we examine how legalized sports betting affects household financial health by comparing betting activity and consumer credit outcomes between states that legalized and those that have not. We find that legalization increases spending at online sportsbooks roughly tenfold, but betting does not stop at state boundaries. Nearby areas where betting is not legal still experience roughly 15 percent of the increase seen in counties where it is legal. At the same time, consumer financial health suffers. Our analysis finds rising delinquencies in participating states,...
3.China’s Electric Trade
China has spent considerable government resources to develop advanced electric technology industries, such as those that produce electric vehicles, lithium batteries, and solar panels. These efforts have spilled over to international trade as improvements in price and quality have increased the global demand for these goods. One consequence is that passenger cars and batteries have been disproportionately large contributors to the rise in the country’s trade surplus in recent years. This has not been the case, though, for solar panels, as falling prices due to a supply glut pulled down export revenues despite higher volumes.
4.The New York Fed DSGE Model Forecast—March 2026
This post presents an update of the economic forecasts generated by the Federal Reserve Bank of New York’s dynamic stochastic general equilibrium (DSGE) model. We describe very briefly our forecast and its change since December 2025. To summarize, growth in 2026 is expected to be more robust, and inflation more persistent, than predicted in December. Stronger investment is the main driver for higher growth, while cost-push shocks, possibly capturing the effects of tariffs, are the key factors behind higher inflation. Projections for the short-run real natural rate of interest (r*) are the same as in December.
5.Firms’ Inflation Expectations Return to 2024 Levels
Businesses experienced substantial cost pressures in 2025 as the cost of insurance and utilities rose sharply, while an increase in tariffs contributed to rising goods and materials costs. This post examines how firms in the New York-Northern New Jersey region adjusted their prices in response to these cost pressures and describes their expectations for future price increases and inflation. Survey results show an acceleration in firms’ price increases in 2025, with an especially sharp increase in the manufacturing sector. While both cost and price increases intensified last year, our surveys re...
Project Syndicate
1.Will Kharg Island Decide the Future of US Alliances?
As confidence in the United States has eroded, allies have begun to hedge their bets by not automatically aligning themselves with America in the face of new crises. The US-Israeli war on Iran has thrown this dynamic into sharp relief, revealing a fundamental new constraint on American power.
2.Why America, Not Iran, Has a Succession Problem
The United States is fighting Iran—a state that built institutional continuity into its founding architecture—with strategic assumptions borrowed from more personalistic regimes, where decapitation can work. Ironically, under President Donald Trump, the US is reshaping its own political system in that direction.
3.Cuba in Free Fall
In the space of just a few weeks, Cuba’s external energy supply and main sources of foreign earnings have been cut off. With economic conditions rapidly deteriorating and many industries grinding to a halt, social unrest is mounting, posing a potential threat to the island’s governability.
4.Wars Fought for Fun Cannot Be Won
Many commentators have tried to divine a policy justification for the US war in Iran. But the simple explanation is that US President Donald Trump and US Secretary of “War” (Defense) Pete Hegseth attacked the Islamic Republic because they could, and because they take pleasure in killing or dominating other people.
5.Cuba’s Third Chance
While the ongoing US oil blockade has exacerbated Cuba’s economic and humanitarian crisis, the primary responsibility for its current predicament lies with the communist regime. If Cubans use this crisis as an opportunity to pursue ambitious economic and political reforms, they may yet restore the prosperity they once knew.
RCR Wireless
1.Born in the USA – your next Wi-Fi router will be made in America (Analyst Angle)
Summary available at source link.
2.Rakuten Symphony on redesigning OSS for the 4D reality of NTN
The realities of non-terrestrial networks increasingly show that coverage depends as much on timing as location. The telecom industry has long built its network optimization strategy around one model: a static infrastructure serving a mobile user base. Now as…
3.Vodafone Idea expands 5G to 90 new cities, boosts transport network with Ciena
Vodafone Idea’s 5G expansion strengthens its market position, but it still has work to do to close the gap with Reliance Jio and Bharti Airtel. In sum – what to know: Rapid coverage expansion – Vodafone Idea plans to grow…
4.Voice returns to the center: How telcos can own the AI value layer
For years, voice services occupied a quiet corner of telecom strategy. While reliable and ubiquitous, they remained economically stagnant. Innovation gravitated toward apps and hyperscalers, leaving telcos to focus on the cold efficiency of network operations. That trajectory is now…
5.Ericsson takes most of VMO2 5G RAN project in UK (makes V sign at Nokia)
Ericsson has emerged as “the primary RAN partner” to Virgin Media O2 in the UK, expanding its footprint in a major national 5G SA upgrade project that appears to come at the expense of Nokia – despite the Finnish firm…
Semantic Scholar – Machine Learning
1.Source Error
Check Feed
Telecom & 6G AI
1.AI-Programmable Wireless Connectivity: Challenges and Research Directions Toward Interactive and Immersive Industry
This vision paper addresses the research challenges of integrating traditional signal processing with Artificial Intelligence (AI) to enable energy-efficient, programmable, and scalable wireless connectivity infrastructures. While prior studies have primarily focused on high-level concepts, such as the potential role of Large Language Models (LLMs) in 6G systems, this work advances the discussion by emphasizing integration challenges and research opportunities at the system level. Specifically, this paper examines the role of compact AI models, including Tiny and Real-time Machine Learning (ML), in enhancing wireless connectivity while adhering to strict constraints on computing resources, adaptability, and reliability. Application examples are provided to illustrate practical considerations and highlight how AI-driven signal processing can...
2.Beyond Legacy OFDM: A Mobility-Adaptive Multi-Gear Framework for 6G
While Third Generation Partnership Project (3GPP) has confirmed orthogonal frequency division multiplexing (OFDM) as the baseline waveform for sixth-generation (6G), its performance is severely compromised in the high-mobility scenarios envisioned for 6G. Building upon the GEARBOX-PHY vision, we present gear-switching OFDM (GS-OFDM): a unified framework in which the base station (BS) adaptively selects among three gears, ranging from legacy OFDM to delay-Doppler domain processing based on the channel mobility conditions experienced by the user equipments (UEs). We illustrate the benefit of adaptive gear switching for communication throughput and, finally, we conclude with an outlook on research challenges and opportunities.
3.6GAgentGym: Tool Use, Data Synthesis, and Agentic Learning for Network Management
Autonomous 6G network management requires agents that can execute tools, observe the resulting state changes, and adapt their decisions accordingly. Existing benchmarks based on static questions or scripted episode replay, however, do not support such closed-loop interaction, limiting agents to passive evaluation without the ability to learn from environmental feedback. This paper presents 6GAgentGym to provide closed-loop capability. The framework provides an interactive environment with 42 typed tools whose effect classification distinguishes read-only observation from state-mutating configuration, backed by a learned Experiment Model calibrated on NS-3 simulation data. 6G-Forge bootstraps closed-loop training trajectories from NS-3 seeds via iterative Self-Instruct generation with execution verification against the Experiment Model. Su...
4.Joint Identification and Sensing with Noisy Feedback: A Task-Oriented Communication Framework for 6G
Task-oriented communication is a key enabler of emerging 6G systems, where the objective is to support decisions and actions rather than full message reconstruction. From an information-theoretic perspective, identification (ID) codes provide a natural abstraction for this paradigm by enabling receivers to test whether a task-relevant message was sent, without decoding the entire message. Motivated by the strong impact of feedback on ID and by the growing interest in integrated communication and sensing, this paper studies joint identification and sensing (JIDAS) over state-dependent discrete memoryless channels with noisy strictly causal feedback. The transmitter conveys identification messages while simultaneously estimating the channel state from the feedback signal. For both deterministic and randomized coding schemes, we derive lower...
5.Adaptive High-Speed Radar Signal Processing Architecture for 3D Localization of Multiple Targets on System on Chip
Integrated Sensing and Communication (ISAC) is a key enabler of high-speed, ultra-low-latency vehicular communication in 6G. ISAC leverages radar signal processing (RSP) to localize multiple unknown targets amid static clutter by jointly estimating range, azimuth, and Doppler velocity (3D), thereby enabling highly directional beamforming toward intended mobile users. However, the speed and accuracy of RSP significantly impact communication throughput. This work proposes a novel 3D reconfigurable RSP accelerator, implemented on a Zynq Multiprocessor System-on-Chip (MPSoC) using a hardware-software co-design approach and fixed-point optimization. We propose two RSP frameworks: (1) high accuracy and high complexity, and (2) low complexity and low accuracy, along with their respective architectures. Then, we develop an adaptive architecture t...
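The range-Doppler core of such an RSP pipeline boils down to two FFTs over the radar data cube; a numpy toy with one synthetic target (bin positions and cube size are arbitrary; the paper's accelerator additionally handles azimuth estimation, clutter, and fixed-point hardware mapping):

```python
import numpy as np

# One synthetic target at range bin 12, Doppler bin 5 (toy values).
n_fast, n_slow = 64, 32
fast = np.arange(n_fast)[:, None]   # fast-time samples within a chirp
slow = np.arange(n_slow)[None, :]   # slow-time samples across chirps
cube = (np.exp(2j * np.pi * 12 * fast / n_fast) *
        np.exp(2j * np.pi * 5 * slow / n_slow))

# Range FFT over fast time (axis 0), Doppler FFT over slow time (axis 1).
rd_map = np.fft.fft(np.fft.fft(cube, axis=0), axis=1)
peak = tuple(int(i) for i in
             np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape))
print(peak)  # (12, 5)
```

The accelerator's "gears" trade how much of this processing chain runs at full precision against latency, which is what the adaptive architecture switches between.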
arXiv Quantitative Finance
1.Option Pricing on Automated Market Maker Tokens
We derive the stochastic price process for tokens whose sole price discovery mechanism is a constant-product automated market maker (AMM). When the net flow into the pool follows a diffusion, the token price follows a constant elasticity of variance (CEV) process, nesting Black-Scholes as the limiting case of infinite liquidity. We obtain closed-form European option prices and introduce liquidity-adjusted Greeks. The CEV structure generates a leverage effect -- volatility rises as price falls -- whose normalized implied volatility skew depends only on the pool's weighting parameter, not on pool depth: Black-Scholes underprices 20%-out-of-the-money puts by roughly 6% in implied volatility terms at every pool depth, while the absolute pricing discrepancy vanishes as pools deepen. Empirically, after controlling for pool depth and flow volati...
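The starting point is the constant-product rule itself; a minimal sketch of how net flow into such a pool moves the spot price (pool sizes are hypothetical, and the paper's actual contribution, deriving the CEV price process and option prices from diffusive flow, is not reproduced here):

```python
def cpmm_price(x, y):
    # Spot price of the token in numeraire units for a pool with
    # constant-product invariant x * y = k.
    return y / x

def swap_in(x, y, dx):
    # Sell dx tokens into the pool; reserves move along x * y = k.
    k = x * y
    x2 = x + dx
    y2 = k / x2
    return x2, y2, y - y2  # new reserves and numeraire paid out

x, y = 1_000.0, 50_000.0   # hypothetical pool, spot price 50
x2, y2, out = swap_in(x, y, 100.0)
print(cpmm_price(x, y))    # 50.0
print(cpmm_price(x2, y2))  # ~41.32: net selling pushes the price down
```

Because price responds convexly to flow at finite depth, a diffusive flow process induces state-dependent volatility in the price, which is where the CEV structure comes from.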
2.Nonlinear Factor Decomposition via Kolmogorov-Arnold Networks: A Spectral Approach to Asset Return Analysis
KAN-PCA is an autoencoder that uses a KAN as encoder and a linear map as decoder. It generalizes classical PCA by replacing linear projections with learned B-spline functions on each edge. The motivation is to capture more variance than classical PCA, which becomes inefficient during market crises when the linear assumption breaks down and correlations between assets change dramatically. We prove that if the spline activations are forced to be linear, KAN-PCA yields exactly the same results as classical PCA, establishing PCA as a special case. Experiments on 20 S&P 500 stocks (2015-2024) show that KAN-PCA achieves a reconstruction R^2 of 66.57%, compared to 62.99% for classical PCA with the same 3 factors, while matching PCA out-of-sample after correcting for data leakage in the training procedure.
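The reconstruction-R² metric reported above is straightforward to compute for the classical-PCA baseline; a minimal sketch on synthetic data (random Gaussians standing in for the 20 stocks' returns, with the KAN encoder itself omitted):

```python
import numpy as np

# Synthetic stand-in for 500 days of demeaned returns on 20 stocks.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
X = X - X.mean(axis=0)

def pca_reconstruction_r2(X, k):
    # Project onto the top-k principal components, reconstruct, and
    # report the share of total variance recovered.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * S[:k]) @ Vt[:k]
    return 1.0 - np.sum((X - Xk) ** 2) / np.sum(X ** 2)

print(pca_reconstruction_r2(X, 3))
```

KAN-PCA replaces the linear projection with learned B-spline maps on each edge while keeping the linear decoder, which is why forcing the splines to be linear recovers classical PCA exactly.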
3.Policy-Controlled Generalized Share: A General Framework with a Transformer Instantiation for Strictly Online Switching-Oracle Tracking
Static regret to a single expert is often the wrong target for strictly online prediction under non-stationarity, where the best expert may switch repeatedly over time. We study Policy-Controlled Generalized Share (PCGS), a general strictly online framework in which the generalized-share recursion is fixed while the post-loss update controls are allowed to vary adaptively. Its principal instantiation in this paper is PCGS-TF, which uses a causal Transformer as an update controller: after round t finishes and the loss vector is observed, the Transformer outputs the controls that map w_t to w_{t+1} without altering the already committed decision w_t. Under admissible post-loss update controls, we obtain a pathwise weighted regret guarantee for general time-varying learning rates, and a standard dynamic-regret guarantee against any expert pa...
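The generalized-share recursion that PCGS holds fixed extends the classical fixed-share update of Herbster and Warmuth; a minimal sketch of that classical recursion with fixed scalar controls (eta and alpha are illustrative values where PCGS would instead have the Transformer controller emit post-loss controls each round):

```python
import numpy as np

def fixed_share_update(w, losses, eta, alpha):
    # Post-loss update: exponential-weights step, then uniform mixing.
    # The mixing keeps mass on dormant experts so the learner can track
    # a switching best expert rather than locking onto one.
    v = w * np.exp(-eta * losses)
    v = v / v.sum()
    return (1.0 - alpha) * v + alpha / len(w)

w = np.full(4, 0.25)
# The best expert switches from 1 to 0 in the last round.
for losses in ([1, 0, 1, 1], [1, 0, 1, 1], [0, 1, 1, 1]):
    w = fixed_share_update(w, np.array(losses, dtype=float),
                           eta=2.0, alpha=0.1)
print(w)  # weight flows back toward expert 0 after the switch
```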
4.Optimal threshold resetting in collective diffusive search
Stochastic resetting has attracted significant attention in recent years due to its wide-ranging applications across physics, biology, and search processes. In most existing studies, however, resetting events are governed by an external timer and remain decoupled from the system's intrinsic dynamics. In a recent Letter by Biswas et al., we introduced threshold resetting (TR) as an alternative, event-driven optimization strategy for target search problems. Under TR, the entire process is reset whenever any searcher reaches a prescribed threshold, thereby coupling the resetting mechanism directly to the internal dynamics. In this work, we study TR-enabled search by $N$ non-interacting diffusive searchers in a one-dimensional box $[0,L]$, with the target at the origin and the threshold at $L$. By optimally tuning the scaled threshold distance...
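The event-driven mechanism is easy to simulate; a toy discretization in which the reset is triggered by the searchers themselves rather than an external timer (the midpoint start position, step size, and parameters are illustrative assumptions, not taken from the paper):

```python
import random

def search_time(N, L, b, dt=1e-3, sigma=1.0, seed=0):
    # N independent Brownian searchers in [0, L], target at the origin,
    # threshold at b. Euler-Maruyama steps of size sigma * sqrt(dt).
    rng = random.Random(seed)
    x = [L / 2.0] * N
    t = 0.0
    step = sigma * dt ** 0.5
    while True:
        for i in range(N):
            x[i] += rng.gauss(0.0, step)
        t += dt
        if min(x) <= 0.0:
            return t                 # some searcher found the target
        if max(x) >= b:
            x = [L / 2.0] * N        # threshold resetting: restart everyone

print(search_time(N=3, L=1.0, b=1.0))
```

Tuning the threshold position against the box size is exactly the optimization the paper carries out analytically.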
5.Adapting Altman's bankruptcy prediction model to the compositional data methodology
Using standard financial ratios as variables in statistical analyses has been associated with several serious problems, such as extreme outliers, asymmetry, non-normality, and non-linearity. The compositional-data methodology has been successfully applied to solve these problems and has consistently yielded substantially different results when compared to standard financial ratios. An under-researched area is the use of financial log-ratios computed with the compositional-data methodology to predict bankruptcy or the related outcomes of business default, insolvency, or failure. Another under-researched area is the use of machine learning methods in combination with compositional log-ratios. The present article adapts the classical Altman bankruptcy prediction model and some of its extensions to the compositional methodology with pairwise log-ratios and ...
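The pairwise log-ratio transform at the heart of the compositional approach is simple to state; a toy sketch with components reminiscent of Altman's Z-score inputs (the figures are invented for illustration):

```python
import math

# Hypothetical balance-sheet components, in the same monetary units.
firm = {"working_capital": 40.0, "retained_earnings": 25.0,
        "ebit": 15.0, "total_assets": 200.0}

def pairwise_log_ratios(parts):
    # Compositional-data transform: replace raw ratios x / y with
    # log(x / y) for every pair of positive components, taming the
    # outliers, asymmetry, and non-linearity of standard ratios.
    names = list(parts)
    return {f"log({a}/{b})": math.log(parts[a] / parts[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

lr = pairwise_log_ratios(firm)
print(len(lr))  # 4 components yield 6 pairwise log-ratios
```

These log-ratios, rather than the raw ratios, then feed the adapted Altman model or a machine-learning classifier.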
arXiv – 6G & Networking
1.AI-Programmable Wireless Connectivity: Challenges and Research Directions Toward Interactive and Immersive Industry
This vision paper addresses the research challenges of integrating traditional signal processing with Artificial Intelligence (AI) to enable energy-efficient, programmable, and scalable wireless connectivity infrastructures. While prior studies have primarily focused on high-level concepts, such as the potential role of Large Language Models (LLMs) in 6G systems, this work advances the discussion by emphasizing integration challenges and research opportunities at the system level. Specifically, this paper examines the role of compact AI models, including Tiny and Real-time Machine Learning (ML), in enhancing wireless connectivity while adhering to strict constraints on computing resources, adaptability, and reliability. Application examples are provided to illustrate practical considerations and highlight how AI-driven signal processing can...
2.Beyond Legacy OFDM: A Mobility-Adaptive Multi-Gear Framework for 6G
While Third Generation Partnership Project (3GPP) has confirmed orthogonal frequency division multiplexing (OFDM) as the baseline waveform for sixth-generation (6G), its performance is severely compromised in the high-mobility scenarios envisioned for 6G. Building upon the GEARBOX-PHY vision, we present gear-switching OFDM (GS-OFDM): a unified framework in which the base station (BS) adaptively selects among three gears, ranging from legacy OFDM to delay-Doppler domain processing based on the channel mobility conditions experienced by the user equipments (UEs). We illustrate the benefit of adaptive gear switching for communication throughput and, finally, we conclude with an outlook on research challenges and opportunities.
3.6GAgentGym: Tool Use, Data Synthesis, and Agentic Learning for Network Management
Autonomous 6G network management requires agents that can execute tools, observe the resulting state changes, and adapt their decisions accordingly. Existing benchmarks based on static questions or scripted episode replay, however, do not support such closed-loop interaction, limiting agents to passive evaluation without the ability to learn from environmental feedback. This paper presents 6GAgentGym to provide this closed-loop capability. The framework offers an interactive environment with 42 typed tools whose effect classification distinguishes read-only observation from state-mutating configuration, backed by a learned Experiment Model calibrated on NS-3 simulation data. 6G-Forge bootstraps closed-loop training trajectories from NS-3 seeds via iterative Self-Instruct generation with execution verification against the Experiment Model. Su...
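The effect classification of tools (read-only observation vs. state-mutating configuration) can be sketched with a minimal closed-loop harness. Everything below, including the tool names and state fields, is hypothetical and not 6GAgentGym's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    mutates_state: bool              # effect class: read-only vs state-mutating
    fn: Callable[[dict], object]

def run_episode(tools, state, plan):
    """Closed loop: execute tools in order, observing the state each changes."""
    registry = {t.name: t for t in tools}
    trace = []
    for name in plan:
        t = registry[name]
        obs = t.fn(state)            # mutating tools change `state` in place
        trace.append((name, "mutating" if t.mutates_state else "read-only", obs))
    return trace

tools = [
    Tool("get_throughput", False, lambda s: s["throughput"]),
    Tool("set_bandwidth", True,
         lambda s: s.__setitem__("throughput", s["throughput"] * 1.2)),
]
state = {"throughput": 100.0}
trace = run_episode(tools, state, ["get_throughput", "set_bandwidth", "get_throughput"])
```

An agent trained in such an environment sees the second `get_throughput` return a changed value, which is exactly the environmental feedback that static question benchmarks cannot provide.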
4.Joint Identification and Sensing with Noisy Feedback: A Task-Oriented Communication Framework for 6G
Task-oriented communication is a key enabler of emerging 6G systems, where the objective is to support decisions and actions rather than full message reconstruction. From an information-theoretic perspective, identification (ID) codes provide a natural abstraction for this paradigm by enabling receivers to test whether a task-relevant message was sent, without decoding the entire message. Motivated by the strong impact of feedback on ID and by the growing interest in integrated communication and sensing, this paper studies joint identification and sensing (JIDAS) over state-dependent discrete memoryless channels with noisy strictly causal feedback. The transmitter conveys identification messages while simultaneously estimating the channel state from the feedback signal. For both deterministic and randomized coding schemes, we derive lower...
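The flavor of identification codes can be conveyed with a toy hash-based scheme: the sender transmits only a short (key, tag) pair, and any receiver can test whether its own message of interest was the one sent, without the full message being decodable. This is an illustrative sketch only, not the coding scheme analyzed in the paper:

```python
import hashlib
import random

def tag(message: str, key: int) -> str:
    # Short hash-based tag; tag collisions give a small, controllable
    # false-identification probability (here 4 hex chars, i.e. ~2^-16).
    return hashlib.sha256(f"{message}:{key}".encode()).hexdigest()[:4]

def send(message: str, rng: random.Random):
    key = rng.randrange(2 ** 16)
    return key, tag(message, key)        # transmit only the key and the tag

def identify(target: str, received) -> bool:
    key, t = received
    return tag(target, key) == t         # "was MY message the one sent?"

rng = random.Random(0)
pkt = send("handover-to-cell-7", rng)
hit = identify("handover-to-cell-7", pkt)   # intended message is accepted
```

The transmitted payload is independent of the message length, which is what lets identification scale doubly exponentially in blocklength; feedback and sensing, as studied in the paper, change the achievable rates of such schemes.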
5.Adaptive High-Speed Radar Signal Processing Architecture for 3D Localization of Multiple Targets on System on Chip
Integrated Sensing and Communication (ISAC) is a key enabler of high-speed, ultra-low-latency vehicular communication in 6G. ISAC leverages radar signal processing (RSP) to localize multiple unknown targets amid static clutter by jointly estimating range, azimuth, and Doppler velocity (3D), thereby enabling highly directional beamforming toward intended mobile users. However, the speed and accuracy of RSP significantly impact communication throughput. This work proposes a novel 3D reconfigurable RSP accelerator, implemented on a Zynq multiprocessor system-on-chip (MPSoC) using a hardware-software co-design approach and fixed-point optimization. We propose two RSP frameworks: (1) high accuracy and high complexity, and (2) low complexity and low accuracy, along with their respective architectures. Then, we develop an adaptive architecture t...
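The core estimation step in such an RSP pipeline is a 2D transform over fast-time/slow-time radar samples whose peak locates a target's range and Doppler bins. A naive stdlib-only sketch of the principle (deliberately O((NM)^2); real accelerators use FFT pipelines, and this omits azimuth estimation and clutter suppression):

```python
import cmath

def range_doppler_map(samples):
    """Naive 2D DFT magnitude over fast-time (rows) x slow-time (columns)."""
    N, M = len(samples), len(samples[0])
    mag = [[0.0] * M for _ in range(N)]
    for r in range(N):
        for d in range(M):
            acc = 0j
            for n in range(N):
                for m in range(M):
                    acc += samples[n][m] * cmath.exp(
                        -2j * cmath.pi * (r * n / N + d * m / M))
            mag[r][d] = abs(acc)
    return mag

# Simulate a single target at range bin 3, Doppler bin 5 (noise-free toy data)
N, M, r0, d0 = 8, 8, 3, 5
samples = [[cmath.exp(2j * cmath.pi * (r0 * n / N + d0 * m / M))
            for m in range(M)] for n in range(N)]
mag = range_doppler_map(samples)
peak = max((mag[r][d], r, d) for r in range(N) for d in range(M))
# peak lands at (r0, d0), recovering the target's range and Doppler bins
```

The two frameworks in the paper trade off exactly this kind of transform accuracy (bit widths, FFT sizes) against hardware complexity on the MPSoC.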
arXiv – Network Architecture (6G/Slicing)
1.6GAgentGym: Tool Use, Data Synthesis, and Agentic Learning for Network Management
Autonomous 6G network management requires agents that can execute tools, observe the resulting state changes, and adapt their decisions accordingly. Existing benchmarks based on static questions or scripted episode replay, however, do not support such closed-loop interaction, limiting agents to passive evaluation without the ability to learn from environmental feedback. This paper presents 6GAgentGym to provide this closed-loop capability. The framework offers an interactive environment with 42 typed tools whose effect classification distinguishes read-only observation from state-mutating configuration, backed by a learned Experiment Model calibrated on NS-3 simulation data. 6G-Forge bootstraps closed-loop training trajectories from NS-3 seeds via iterative Self-Instruct generation with execution verification against the Experiment Model. Su...
2.Enabling Programmable Inference and ISAC at the 6GR Edge with dApps
The convergence of communication, sensing, and Artificial Intelligence (AI) in the Radio Access Network (RAN) offers compelling economic advantages through shared spectrum and infrastructure. How can inference and sensing be integrated in the RAN infrastructure at a system level? Current abstractions in O-RAN and 3GPP lack the interfaces and capabilities to support (i) a dynamic life cycle for inference and Integrated Sensing and Communication (ISAC) algorithms, whose requirements and sensing targets may change over time and across sites; (ii) pipelines for AI-driven ISAC, which need complex data flows, training, and testing; (iii) dynamic device and stack configuration to balance trade-offs between connectivity, sensing, and inference services. This paper analyzes the role of a programmable, software-driven, open RAN in enabling the inte...
3.A Techno-Economic Framework for Cost Modeling and Revenue Opportunities in Open and Programmable AI-RAN
The large-scale deployment of 5G networks has not delivered the expected return on investment for mobile network operators, raising concerns about the economic viability of future 6G rollouts. At the same time, surging demand for Artificial Intelligence (AI) inference and training workloads is straining global compute capacity. AI-RAN architectures, in which Radio Access Network (RAN) platforms accelerated on Graphics Processing Units (GPUs) share idle capacity with AI workloads during off-peak periods, offer a potential path to improved capital efficiency. However, the economic case for such systems remains unsubstantiated. In this paper, we present a techno-economic analysis of AI-RAN deployments by combining publicly available benchmarks of 5G Layer-1 processing on heterogeneous platforms -- from x86 servers with accelerators for channel...
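At its simplest, the capital-efficiency argument is arithmetic: GPU-hours left idle outside RAN peaks can be sold for AI workloads. The numbers below are invented placeholders, not the paper's benchmark figures or cost model:

```python
def ai_ran_revenue_uplift(gpu_hours_per_day: float,
                          ran_busy_fraction: float,
                          ai_price_per_gpu_hour: float,
                          ran_revenue_per_day: float) -> float:
    """Back-of-envelope relative revenue uplift from selling idle GPU-hours
    for AI inference/training (all inputs hypothetical)."""
    idle_hours = gpu_hours_per_day * (1.0 - ran_busy_fraction)
    ai_revenue = idle_hours * ai_price_per_gpu_hour
    return ai_revenue / ran_revenue_per_day

# One GPU busy with RAN 40% of the day, $2/GPU-hour AI price, $100/day RAN revenue
uplift = ai_ran_revenue_uplift(24.0, 0.4, 2.0, 100.0)
```

A real techno-economic model, as in the paper, must also account for the GPU's capital and energy cost, orchestration overhead, and the opportunity cost of displacing RAN headroom.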
4.How Many Qubits Can Be Teleported? Scalability of Fidelity-Constrained Quantum Applications
Quantum networks (QNs) enable the transfer of qubits between distant nodes using quantum teleportation, which reproduces a qubit state at a remote location by consuming a shared Bell pair. After teleportation, qubits are stored in quantum memories, where decoherence progressively degrades their quantum states. This degradation is quantified by the fidelity, defined as the overlap between the stored quantum state and the ideal target state. Some quantum applications (QApps) require the teleportation of multiple qubits and can only operate if all teleported qubits simultaneously maintain a fidelity above a given threshold. In this paper, we study how many qubits can be teleported under such fidelity-constrained operation in a two-node QN. To that end, we define a QApp-level reliability metric as the probability that all end-to-end Bell pair...
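A toy version of the question "how many qubits can be teleported?" under a simple exponential decoherence model: qubits arrive one per time step, the first-teleported qubit waits the longest in memory, and all qubits must exceed the fidelity threshold simultaneously. The decay model below (fidelity relaxing toward the fully mixed value 0.5) is an illustrative assumption, not the paper's Bell-pair and memory model:

```python
import math

def fidelity(f0: float, t: float, t_coh: float, f_floor: float = 0.5) -> float:
    # Exponential decay of a stored qubit's fidelity toward the mixed-state floor.
    return f_floor + (f0 - f_floor) * math.exp(-t / t_coh)

def max_teleportable_qubits(f0: float, f_th: float,
                            t_coh: float, dt: float) -> int:
    """Largest n such that the FIRST qubit, stored for (n-1)*dt while the
    remaining qubits are teleported one per dt, still meets the threshold.
    Assumes a freshly teleported qubit satisfies f0 >= f_th."""
    n = 1
    while fidelity(f0, n * dt, t_coh) >= f_th:  # can we afford one more qubit?
        n += 1
    return n

n_max = max_teleportable_qubits(f0=0.95, f_th=0.8, t_coh=10.0, dt=1.0)  # -> 5
```

The paper's QApp-level reliability metric generalizes this deterministic cutoff to a probability that all end-to-end Bell pairs jointly satisfy the fidelity constraint.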
5.Performance Analysis of 5G RAN Slicing Deployment Options in Industry 4.0 Factories
This paper studies Radio Access Network (RAN) slicing strategies for 5G Industry 4.0 networks with ultra-reliable low-latency communication (uRLLC) requirements. We comparatively analyze four RAN slicing deployment options that differ in slice sharing and per-line or per-flow isolation. Unlike prior works that focus on management architectures or resource allocation under a fixed slicing structure, this work addresses the design of RAN slicing deployment options in the presence of multiple production lines and heterogeneous industrial flows. A stochastic network calculus (SNC)-based analytical framework and a heuristic slice planner are used to evaluate these options in terms of per-flow delay guarantees and radio resource utilization. Results show that under resource scarcity only per-flow slicing prevents delay violations by tightly matching resources to per-flow d...
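The isolation trade-off the paper measures can be illustrated with simple latency-rate-style bounds in the spirit of network calculus: in a shared FIFO slice every flow can be delayed by the aggregate burst, while per-flow slices expose each flow only to its own burst. All numbers below are illustrative, not the paper's SNC model:

```python
def shared_slice_delay_bound(bursts, capacity):
    """FIFO aggregate bound: every flow may queue behind the sum of all bursts."""
    return sum(bursts) / capacity

def per_flow_delay_bounds(bursts, rates):
    """Isolated per-flow slices: each flow only ever sees its own burst."""
    return [b / r for b, r in zip(bursts, rates)]

bursts = [1.0, 8.0]   # Mb: a smooth uRLLC flow and a bursty best-effort flow
shared = shared_slice_delay_bound(bursts, capacity=10.0)    # one bound for both
isolated = per_flow_delay_bounds(bursts, rates=[2.0, 8.0])  # same total capacity
```

With the same total capacity (10 Mb/s), the smooth flow's worst-case bound tightens from 0.9 s shared to 0.5 s isolated, echoing the paper's finding that under resource scarcity only per-flow slicing tightly matches resources to per-flow demands.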