http://arxiv.org/abs/2503.15752v3
Using Language Models to Decipher the Motivation Behind Human Behaviors
2025-03-20T00:07:06+00:00
AI presents a novel tool for deciphering the motivations behind human behaviors. We show that by varying prompts to a large language model, we can elicit a full range of human behaviors in a variety of scenarios based on classic economic games. Then, by analyzing which prompts are needed to elicit which behaviors, we can infer (decipher) the motivations behind the human behaviors. We also show how one can analyze the prompts to reveal relationships between the classic economic games, providing new insight into what different economic scenarios induce people to think about. Finally, we show how this deciphering process can be used to understand differences in the behavioral tendencies of different populations.
http://arxiv.org/abs/2503.15753v1
CATCH: a Cost Analysis Tool for Co-optimization of chiplet-based Heterogeneous systems
2025-03-20T00:07:10+00:00
With the increasing prevalence of chiplet systems in high-performance computing applications, the number of design options has increased dramatically. Instead of chips defaulting to a single die design, now there are options for 2.5D and 3D stacking along with a plethora of choices regarding configurations and processes. For chiplet-based designs, high-impact decisions such as those regarding the number of chiplets, the design partitions, the interconnect types, and other factors must be made early in the development process. In this work, we describe an open-source tool, CATCH, that can be used to guide these early design choices. We also present case studies showing some of the insights we can draw by using this tool. We look at case studies on optimal chip size, defect density, test cost, IO types, assembly processes, and substrates.
http://arxiv.org/abs/2503.15754v1
AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration
2025-03-20T00:13:04+00:00
As large language models (LLMs) become increasingly capable, security and safety evaluation are crucial. While current red teaming approaches have made strides in assessing LLM vulnerabilities, they often rely heavily on human input and lack comprehensive coverage of emerging attack vectors. This paper introduces AutoRedTeamer, a novel framework for fully automated, end-to-end red teaming against LLMs. AutoRedTeamer combines a multi-agent architecture with a memory-guided attack selection mechanism to enable continuous discovery and integration of new attack vectors. The dual-agent framework consists of a red teaming agent that can operate from high-level risk categories alone to generate and execute test cases and a strategy proposer agent that autonomously discovers and implements new attacks by analyzing recent research. This modular design allows AutoRedTeamer to adapt to emerging threats while maintaining strong performance on existing attack vectors. We demonstrate AutoRedTeamer's effectiveness across diverse evaluation settings, achieving 20% higher attack success rates on HarmBench against Llama-3.1-70B while reducing computational costs by 46% compared to existing approaches. AutoRedTeamer also matches the diversity of human-curated benchmarks in generating test cases, providing a comprehensive, scalable, and continuously evolving framework for evaluating the security of AI systems.
http://arxiv.org/abs/2503.15755v1
Collecting Particles in Confined Spaces by Active Filamentous Matter
2025-03-20T00:17:56+00:00
The potential of compliant and adaptable active matter for particle transport presents a promising avenue for the development of efficient, autonomous systems. However, achieving optimal task efficiency often depends on external control mechanisms, which can limit the autonomy of such systems. In this study, we draw inspiration from Tubifex tubifex and Lumbriculus variegatus, centimeter-sized worms that exhibit an extraordinary ability to aggregate dispersed particles within confined environments. By observing their natural behaviors, we identify a simple yet effective particle collection strategy driven by flexibility and activity. Using these biological insights, we develop larger-scale robotic systems and simulations that replicate the particle aggregation dynamics of living worms. Our results reveal that coupling between activity and flexibility governs the efficiency of particle clustering, and this principle applies universally across biological, robotic, and simulated filaments. These results allow us to offer new particle collection strategies by tuning the design elements like topology or bending stiffness of soft active filaments.
http://arxiv.org/abs/2503.15756v1
A theory of quasiballistic spin transport
2025-03-20T00:18:33+00:00
A recent work [Mierzejewski et al., Phys. Rev. B 107, 045134 (2023)] observed "quasiballistic spin transport" - long-lived and transiently ballistic modes of the magnetization density - in numerical simulations of infinite-temperature XXZ chains with power-law exchange interactions. We develop an analytical theory of such quasiballistic spin transport. Previous work found that this effect was maximized along a specific locus in the space of model parameters, which interpolated smoothly between the integrable Haldane-Shastry and XX models and whose shape was estimated from numerics. We obtain an analytical estimate for the lifetime of the spin current and show that it has a unique maximum along a different locus, which interpolates more gradually between the two integrable points. We further rule out the existence of a conserved two-body operator that protects ballistic spin transport away from these integrable points by proving that a corresponding functional equation has no solutions. We discuss connections between our approach and an integrability-transport conjecture for spin.
http://arxiv.org/abs/2503.15757v1
A note on goodness of fit testing for the Poisson distribution
2025-03-20T00:21:46+00:00
Since its introduction in 1950, Fisher's dispersion test has become a standard means of deciding whether or not count data follow the Poisson distribution. The test is based on a characteristic property of the Poisson distribution, and discriminates well between the Poisson and the natural alternative hypotheses of binomial and negative binomial distributions. While the test is commonly used to test for general deviations from Poissonity, its performance against more general alternatives has not been widely investigated. This paper presents realistic alternative hypotheses for which general goodness of fit tests perform much better than the Fisher dispersion test.
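For reference, the Fisher dispersion statistic that this abstract centers on is simple to compute. A minimal sketch (function name ours, not from the paper):

```python
import statistics

def fisher_dispersion(counts):
    """Fisher's index-of-dispersion statistic for count data.

    Under the Poisson null hypothesis, (n - 1) * s^2 / mean is
    approximately chi-square distributed with n - 1 degrees of freedom;
    values far from n - 1 suggest over- or under-dispersion.
    """
    n = len(counts)
    mean = statistics.fmean(counts)
    var = statistics.variance(counts)  # sample variance, divisor n - 1
    return (n - 1) * var / mean
```

In practice the statistic is compared against chi-square quantiles with n - 1 degrees of freedom; large values indicate overdispersion consistent with, e.g., a negative binomial alternative.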
http://arxiv.org/abs/2503.15758v1
ATTENTION2D: Communication Efficient Distributed Self-Attention Mechanism
2025-03-20T00:25:44+00:00
Transformer-based models have emerged as a leading architecture for natural language processing, natural language generation, and image generation tasks. A fundamental element of the transformer architecture is self-attention, which allows the model to capture intricate dependencies within the data. However, the self-attention mechanism also incurs significant computational and memory costs, particularly for long sequences. In this paper, we introduce ATTENTION2D, a novel approach that exploits parallelism along two dimensions - query and key/value - of the self-attention operation. This method enables efficient distribution and parallelization of computations across multiple devices. Our approach facilitates asymptotically faster training and inference phases compared to previous methods, without relying on approximations or incurring additional computational or memory overheads. Furthermore, unlike existing techniques that struggle to scale with an increasing number of processing units, our approach effectively scales with additional processing units. Our experimental results confirm the effectiveness of our method in improving communication efficiency and scalability. Compared to Ring Attention, our approach demonstrated up to a 5x performance boost on a GPT-3-like model using 64 NVIDIA A100 GPUs across 16 nodes, and up to a 9.4x performance boost on 64 NVIDIA H100 GPUs across 64 nodes.
http://arxiv.org/abs/2503.15759v1
Diverse electronic topography in a distorted kagome metal LaTi3Bi4
2025-03-20T00:26:40+00:00
Recent reports on a family of kagome metals of the form LnTi3Bi4 (Ln = lanthanide) have stoked interest due to the combination of highly anisotropic magnetism and a rich electronic structure. The electronic structure near the Fermi level is proposed to exhibit Dirac points and van Hove singularities. In this manuscript, we use angle-resolved photoemission spectroscopy measurements in combination with density functional theory calculations to investigate the electronic structure of a newly discovered kagome metal, LaTi3Bi4. Our results reveal multiple van Hove singularities (VHSs), with one VHS located in the vicinity of the Fermi level. We clearly observe two flat bands, which originate from the destructive interference of wave functions within the Ti kagome motif. These flat bands and VHSs originate from Ti d orbitals and are very responsive to the polarization of the incident beam. We notice a significant anisotropy in the electronic structure, resulting from the breaking of sixfold rotational symmetry in this material. Our findings establish this new family of Ti-based kagome materials as a promising platform to explore novel emerging phenomena in the wider LnTi3Bi4 (Ln = lanthanide) family of materials.
http://arxiv.org/abs/2503.15760v1
New curved Kakeya estimates
2025-03-20T00:38:46+00:00
We give new lower bounds for the Hausdorff dimension of Kakeya sets built from various families of curves in $\mathbb R^3$, going beyond what the polynomial partitioning method has so far achieved. We do this by combining Wolff's classical hairbrush argument with a new incidence bound for 3-parameter families of curves which satisfy conditions we call coniness and twistiness. Our main argument builds off a technique of Katz, Wu, and Zahl used in the study of $\rm{SL}_2$-Kakeya sets.
http://arxiv.org/abs/2503.15761v1
GraPLUS: Graph-based Placement Using Semantics for Image Composition
2025-03-20T00:43:29+00:00
We present GraPLUS (Graph-based Placement Using Semantics), a novel framework for plausible object placement in images that leverages scene graphs and large language models. Our approach uniquely combines graph-structured scene representation with semantic understanding to determine contextually appropriate object positions. The framework employs GPT-2 to transform categorical node and edge labels into rich semantic embeddings that capture both definitional characteristics and typical spatial contexts, enabling nuanced understanding of object relationships and placement patterns. GraPLUS achieves placement accuracy of 92.1% and an FID score of 28.83 on the OPA dataset, outperforming state-of-the-art methods by 8.1% while maintaining competitive visual quality. In human evaluation studies involving 964 samples assessed by 19 participants, our method was preferred in 52.1% of cases, significantly outperforming previous approaches. The framework's key innovations include: (i) leveraging pre-trained scene graph models that transfer knowledge from other domains, (ii) edge-aware graph neural networks that process scene semantics through structured relationships, (iii) a cross-modal attention mechanism that aligns categorical embeddings with enhanced scene features, and (iv) a multiobjective training strategy incorporating semantic consistency constraints.
http://arxiv.org/abs/2503.15762v1
Dialogic Learning in Child-Robot Interaction: A Hybrid Approach to Personalized Educational Content Generation
2025-03-20T00:46:10+00:00
Dialogic learning fosters motivation and deeper understanding in education through purposeful and structured dialogues. Foundational models offer a transformative potential for child-robot interactions, enabling the design of personalized, engaging, and scalable interactions. However, their integration into educational contexts presents challenges in terms of ensuring age-appropriate and safe content and alignment with pedagogical goals. We introduce a hybrid approach to designing personalized educational dialogues in child-robot interactions. By combining rule-based systems with LLMs for selective offline content generation and human validation, the framework ensures educational quality and developmental appropriateness. We illustrate this approach through a project aimed at enhancing reading motivation, in which a robot facilitated book-related dialogues.
http://arxiv.org/abs/2503.15763v1
OffsetOPT: Explicit Surface Reconstruction without Normals
2025-03-20T00:47:27+00:00
Neural surface reconstruction has been dominated by implicit representations with marching cubes for explicit surface extraction. However, those methods typically require high-quality normals for accurate reconstruction. We propose OffsetOPT, a method that reconstructs explicit surfaces directly from 3D point clouds and eliminates the need for point normals. The approach comprises two stages: first, we train a neural network to predict surface triangles based on local point geometry, given uniformly distributed training point clouds. Next, we apply the frozen network to reconstruct surfaces from unseen point clouds by optimizing a per-point offset to maximize the accuracy of triangle predictions. Compared to state-of-the-art methods, OffsetOPT not only excels at reconstructing overall surfaces but also significantly preserves sharp surface features. We demonstrate its accuracy on popular benchmarks, including small-scale shapes and large-scale open surfaces.
http://arxiv.org/abs/2503.15764v1
Towards Agentic AI Networking in 6G: A Generative Foundation Model-as-Agent Approach
2025-03-20T00:48:44+00:00
The promising potential of AI and network convergence in improving networking performance and enabling new service capabilities has recently attracted significant interest. Existing network AI solutions, while powerful, are mainly built on a closed-loop, passive learning framework, resulting in major limitations in autonomous solution finding and dynamic environmental adaptation. Agentic AI has recently been introduced as a promising way to address these limitations and pave the way for generally intelligent and beneficial AI systems. The key idea is to create a networking ecosystem that supports a diverse range of autonomous and embodied AI agents in fulfilling their goals. In this paper, we focus on the novel challenges and requirements of agentic AI networking. We propose AgentNet, a novel framework for supporting interaction, collaborative learning, and knowledge transfer among AI agents. We introduce a general architectural framework of AgentNet and then propose a generative foundation model (GFM)-based implementation in which multiple GFM-as-agents are created as an interactive knowledge base to bootstrap the development of embodied AI agents according to different task requirements and environmental features. We consider two application scenarios, digital-twin-based industrial automation and metaverse-based infotainment systems, to describe how to apply AgentNet to support efficient task-driven collaboration and interaction among AI agents.
http://arxiv.org/abs/2503.15765v1
Computation of whispering gallery modes for spherical symmetric heterogeneous Helmholtz problems with piecewise smooth refractive index
2025-03-20T00:51:13+00:00
In this paper, we develop a numerical method for the computation of (quasi-)resonances in spherically symmetric heterogeneous Helmholtz problems with piecewise smooth refractive index. Our focus lies on resonances very close to the real axis, which characterize the so-called whispering gallery modes. Our method involves a modal equation incorporating fundamental solutions to decoupled problems, extending the known modal equation to the case of piecewise smooth coefficients. We first establish the well-posedness of the fundamental system; then we formulate the problem of resonances as a nonlinear eigenvalue problem, whose determinant serves as the modal equation in the piecewise smooth case. In combination with the numerical approximation of the fundamental solutions using a spectral method, we derive a Newton method to solve the nonlinear modal equation with a proper scaling. We show the local convergence of the algorithm in the piecewise constant case by proving the simplicity of the roots. We confirm our approach through a series of numerical experiments in the piecewise constant and variable cases.
http://arxiv.org/abs/2503.15766v1
Accelerating Transient CFD through Machine Learning-Based Flow Initialization
2025-03-20T00:51:59+00:00
Transient computational fluid dynamics (CFD) simulations are essential for many industrial applications, but a significant portion of their computational cost stems from the time needed to reach statistical steadiness from initial conditions. We present a novel machine learning-based initialization method that reduces the cost of this subsequent transient solve substantially, achieving a 50% reduction in time-to-convergence compared to traditional uniform and potential flow-based initializations. Through a case study in automotive aerodynamics using a 16.7M-cell unsteady RANS simulation, we evaluate three ML-based initialization strategies. Two of these strategies are recommended for general use: (1) a physics-informed hybrid method combining ML predictions with potential flow solutions, and (2) a more versatile approach integrating ML predictions with uniform flow. Both strategies enable CFD solvers to achieve convergence times comparable to computationally expensive steady RANS initializations, while requiring only seconds of computation. We develop a robust statistical convergence metric based on windowed time-averaging for performance comparison between initialization strategies. Notably, these improvements are achieved using an ML model trained on a different dataset of automotive geometries, demonstrating strong generalization capabilities. The proposed methods integrate seamlessly with existing CFD workflows without requiring modifications to the underlying flow solver, providing a practical approach to accelerating industrial CFD simulations through improved ML-based initialization strategies.
http://arxiv.org/abs/2503.15767v1
Angular Interplay of Nematicity, Superconductivity, and Strange Metallicity in a Moiré Flat Band
2025-03-20T00:52:21+00:00
Superconductivity in strongly correlated electron systems frequently exhibits broken rotational symmetry, raising fundamental questions about the underlying order parameter symmetry. In this work, we demonstrate that electronic nematicity, driven by Coulomb-mediated rotational symmetry breaking, serves as a crucial link to understanding the nature of superconductivity. Utilizing a novel framework of angle-resolved measurement, we reveal an intriguing angular interplay among nematicity, superconductivity, and strange metallicity in magic-angle twisted trilayer graphene. By establishing a direct correlation between the preferred superconducting transport direction and the principal axis of the metallic phase, our findings place strong constraints on the symmetry of the superconducting order parameter. This work introduces a new paradigm for probing the microscopic mechanisms governing superconductivity in strongly interacting two-dimensional systems.
http://arxiv.org/abs/2503.15768v1
Can one size fit all?: Measuring Failure in Multi-Document Summarization Domain Transfer
2025-03-20T00:57:38+00:00
Abstractive multi-document summarization (MDS) is the task of automatically summarizing information in multiple documents, from news articles to conversations with multiple speakers. The training approaches for current MDS models can be grouped into four approaches: end-to-end with special pre-training ("direct"), chunk-then-summarize, extract-then-summarize, and inference with GPT-style models. In this work, we evaluate MDS models across training approaches, domains, and dimensions (reference similarity, quality, and factuality), to analyze how and why models trained on one domain can fail to summarize documents from another (News, Science, and Conversation) in the zero-shot domain transfer setting. We define domain-transfer "failure" as a decrease in factuality, higher deviation from the target, and a general decrease in summary quality. In addition to exploring domain transfer for MDS models, we examine potential issues with applying popular summarization metrics out-of-the-box.
http://arxiv.org/abs/2503.15769v1
Prediction of Permissioned Blockchain Performance for Resource Scaling Configurations
2025-03-20T01:03:54+00:00
Blockchain is increasingly offered as blockchain-as-a-service (BaaS) by cloud service providers. However, configuring BaaS appropriately for optimal performance and reliability often comes down to trial and error. A key challenge is that BaaS is often perceived as a ``black-box,'' leading to uncertainties in performance and resource provisioning. Previous studies attempted to address this challenge; however, the impacts of both vertical and horizontal scaling remain elusive. To this end, we present machine learning-based models to predict network reliability and throughput based on scaling configurations. In our evaluation, the models exhibit prediction errors of approximately 1.9%, accurate enough for real-world application.
http://arxiv.org/abs/2503.15770v1
Nano-3D: Metasurface-Based Neural Depth Imaging
2025-03-20T01:06:26+00:00
Depth imaging is a foundational building block for broad applications, such as autonomous driving and virtual/augmented reality. Traditionally, depth cameras have relied on time-of-flight sensors or multi-lens systems to achieve physical depth measurements. However, these systems often face a trade-off between a bulky form factor and imprecise approximations, limiting their suitability for spatially constrained scenarios. Inspired by the emerging advancements of nano-optics, we present Nano-3D, a metasurface-based neural depth imaging solution with an ultra-compact footprint. Nano-3D integrates our custom-fabricated 700 nm thick TiO2 metasurface with a multi-module deep neural network to extract precise metric depth information from monocular metasurface-polarized imagery. We demonstrate the effectiveness of Nano-3D with both simulated and physical experiments. We hope the exhibited success paves the way for the community to bridge future graphics systems with emerging nanomaterial technologies through novel computational approaches.
http://arxiv.org/abs/2503.15771v1
Recognizing and Realizing Temporal Reachability Graphs
2025-03-20T01:07:39+00:00
A temporal graph $\mathcal{G}=(G,\lambda)$ can be represented by an underlying graph $G=(V,E)$ together with a function $\lambda$ that assigns to each edge $e\in E$ the set of time steps during which $e$ is present. The reachability graph of $\mathcal{G}$ is the directed graph $D=(V,A)$ with $(u,v)\in A$ if and only if there is a temporal path from $u$ to $v$. We study the Reachability Graph Realizability (RGR) problem that asks whether a given directed graph $D=(V,A)$ is the reachability graph of some temporal graph. The question can be asked for undirected or directed temporal graphs, for reachability defined via strict or non-strict temporal paths, and with or without restrictions on $\lambda$ (proper, simple, or happy). Answering an open question posed by Casteigts et al. (Theoretical Computer Science 991 (2024)), we show that all variants of the problem are NP-complete, except for two variants that become trivial in the directed case. For undirected temporal graphs, we consider the complexity of the problem with respect to the solid graph, that is, the graph containing all edges that could potentially receive a label in any realization. We show that the RGR problem is polynomial-time solvable if the solid graph is a tree and fixed-parameter tractable with respect to the feedback edge set number of the solid graph. As we show, the latter parameter can presumably not be replaced by smaller parameters like feedback vertex set or treedepth, since the problem is W[2]-hard with respect to these parameters.
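The reachability graph defined in this abstract is easy to compute in the forward direction (the hard problem studied in the paper is the inverse, realizability, direction). A minimal illustrative sketch for the strict-path variant on undirected temporal graphs; names and data representation are ours:

```python
from collections import defaultdict

def reachability_graph(vertices, labels):
    """Arcs (u, v), u != v, such that a strict temporal path runs u -> v.

    labels maps each undirected edge (u, v) to the time steps at which it
    is present; 'strict' means edge times along a path strictly increase.
    """
    by_time = defaultdict(list)
    for (u, v), times in labels.items():
        for t in times:
            by_time[t].append((u, v))

    # earliest[s][v] = earliest arrival time of a strict temporal path s -> v
    earliest = {s: {s: float("-inf")} for s in vertices}
    for t in sorted(by_time):
        updates = []
        for u, v in by_time[t]:
            for s in vertices:
                arr = earliest[s]
                for a, b in ((u, v), (v, u)):
                    if a in arr and arr[a] < t and b not in arr:
                        updates.append((s, b))
        # Defer writes so edges sharing time t cannot be chained (strictness).
        for s, b in updates:
            earliest[s].setdefault(b, t)

    return {(s, v) for s in vertices for v in earliest[s] if v != s}
```

Processing edge appearances in time order and deferring updates within each time step captures exactly the strict-path semantics: two edges present only at the same time step never compose into a path.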
http://arxiv.org/abs/2503.15772v1
Detecting LLM-Written Peer Reviews
2025-03-20T01:11:35+00:00
Editors of academic journals and program chairs of conferences require peer reviewers to write their own reviews. However, there is growing concern about the rise of lazy reviewing practices, where reviewers use large language models (LLMs) to generate reviews instead of writing them independently. Existing tools for detecting LLM-generated content are not designed to differentiate between fully LLM-generated reviews and those merely polished by an LLM. In this work, we employ a straightforward approach to identify LLM-generated reviews: an indirect prompt injection via the paper PDF that asks the LLM to embed a watermark. Our focus is on presenting watermarking schemes and statistical tests that maintain a bounded family-wise error rate when a venue evaluates multiple reviews, with higher power than standard methods like Bonferroni correction. These guarantees hold without relying on any assumptions about human-written reviews. We also consider various methods for prompt injection, including font embedding and jailbreaking. We evaluate the effectiveness and tradeoffs of these methods, including different reviewer defenses. We find a high success rate in embedding our watermarks in LLM-generated reviews across models. We also find that our approach is resilient to common reviewer defenses, and that the error-rate bounds of our statistical tests hold in practice while retaining the power to flag LLM-generated reviews, whereas Bonferroni correction is infeasible.
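As a point of comparison, the Bonferroni baseline this abstract contrasts against is just a per-test threshold of alpha/m. An illustrative sketch (not the paper's proposed test):

```python
def bonferroni_flags(p_values, alpha=0.05):
    """Flag tests whose p-value survives Bonferroni correction.

    Rejecting only when p <= alpha / m controls the family-wise error
    rate at level alpha across m simultaneous tests, at the cost of
    power -- the limitation the watermark-specific tests aim to avoid.
    """
    m = len(p_values)
    return [p <= alpha / m for p in p_values]
```

With many reviews per venue, alpha/m becomes very small, which is why the correction loses power and the paper designs tests with tighter family-wise guarantees instead.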
http://arxiv.org/abs/2503.15773v1
Search for continuous gravitational waves from neutron stars in five globular clusters with a phase-tracking hidden Markov model in the third LIGO observing run
2025-03-20T01:11:39+00:00
A search is performed for continuous gravitational waves emitted by unknown neutron stars in five nearby globular clusters using data from the third Laser Interferometer Gravitational-Wave Observatory (LIGO) observing run, over the frequency range $100$--$800\,\mathrm{Hz}$. The search uses a hidden Markov model to track both the frequency and phase of the continuous wave signal from one coherent segment to the next. It represents the first time that a phase-tracking hidden Markov model has been used in a LIGO search. After applying vetoes to reject candidates consistent with non-Gaussian artifacts, no significant candidates are detected. Estimates of the strain sensitivity at 95\% confidence $h_{0,\mathrm{eff}}^{95\%}$ and corresponding neutron star ellipticity $\epsilon^{95\%}$ are presented. The best strain sensitivity, $h_{0,\mathrm{eff}}^{95\%} = 2.7 \times 10^{-26}$ at $211\,\mathrm{Hz}$, is achieved for the cluster NGC6544.
http://arxiv.org/abs/2503.15774v1
Low-lying Electronic Structure of Rare-Earth Based Topological Nodal Line Semimetal Candidate DySbTe
2025-03-20T01:16:12+00:00
Lanthanide (Ln) based LnSbTe materials have garnered significant attention due to the rich interplay of long-range magnetic ordering and topological properties, driven by unique crystalline symmetry, 4f electron interactions, and pronounced spin-orbit coupling (SOC) effects. DySbTe, as a heavier lanthanide-based member of the LnSbTe family, stands out with its strong SOC and larger on-site interactions on its 4f electrons, which arise from the heavier Dy element. Here, we present a comprehensive study of the low-temperature bulk physical properties and the electronic structure of DySbTe using magnetic susceptibility, heat capacity, and electrical resistivity measurements, along with high-resolution angle-resolved photoemission spectroscopy (ARPES), scanning tunneling microscopy and spectroscopy (STM/S), and density functional theory calculations. Our thermodynamic measurements revealed an antiferromagnetic ordering below TN = 7.45 K and a subsequent magnetic phase transition at TN1 = 7.15 K. Our transport studies indicate a semimetallic behavior with an unusual feature in the ordered state. Our ARPES measurements revealed a diamond-shaped Fermi pocket centered at the Γ point, with band features that evolve distinctly across various binding energies. STM/S results indicate a minimum in the density of states at around 100 meV below the Fermi level, and ARPES measurements reveal a significant gap around the X point, differentiating DySbTe from other LnSbTe compounds. These findings enhance our understanding of the SOC effects on the electronic structure and topological properties in the LnSbTe family, highlighting DySbTe as a promising candidate for exploring the interplay between topology and magnetism.
http://arxiv.org/abs/2503.15775v2
Fast Calculation of Nonuniform Plane Waves at Arbitrarily Oriented and Charged Planar Interfaces of Isotropic Lossy Media
2025-03-20T01:20:13+00:00
A fast method for calculating the reflected and transmitted waves for a given nonuniform plane wave incident on an arbitrarily oriented and charged planar interface between two isotropic and possibly lossy media is proposed, based on the decomposition of the complex wave vector and complex wave numbers with respect to the unit normal vector of the interface. Using complex vector analysis, exact definitions of the complex angles of incidence, reflection, and refraction are presented and applied in the complex forms of Snell's law and the Fresnel equations to quickly and correctly calculate the complex wave vectors and the complex electric fields of the reflected and refracted waves at a charged interface, where the surface charge and current densities are considered. The calculation procedure and two practical examples are also given to demonstrate the validity and power of the proposed methodology.
http://arxiv.org/abs/2503.15776v2
Copenhagen Survey on Black Holes and Fundamental Physics
2025-03-20T01:26:08+00:00
The purpose of this survey is to take a snapshot of the attitudes of physicists working on some of the most pressing questions in modern physics, which may be useful to sociologists and historians of science. For this study, a total of 85 completed surveys were returned out of 151 registered participants of the ``Black holes Inside and out'' conference, held in Copenhagen in 2024. The survey asked questions about some of the most contentious issues in fundamental physics, including the nature of black holes and dark energy. A number of surprising results were found. For example, some of the leading frameworks, such as the cosmological constant, cosmic inflation, or string theory - while most popular - gain less than the majority of votes from the participants. The only statement that gains majority approval (by 68\% of participants) was that the Big Bang meant ``the universe evolved from a hot dense state'', not ``an absolute beginning time''. These results provide reasons for caution in describing ideas as consensus in the scientific community when a more nuanced view may be justified.
http://arxiv.org/abs/2503.15777v1
Line Space Clustering (LSC): Feature-Based Clustering using K-medians and Dynamic Time Warping for Versatility
2025-03-20T01:27:10+00:00
Clustering high-dimensional data is a critical challenge in machine learning due to the curse of dimensionality and the presence of noise. Traditional clustering algorithms often fail to capture the intrinsic structures in such data. This paper explores a combination of clustering methods, which we call Line Space Clustering (LSC), a representation that transforms data points into lines in a newly defined feature space, enabling clustering based on the similarity of feature value patterns, essentially treating features as sequences. LSC employs a combined distance metric that uses Euclidean and Dynamic Time Warping (DTW) distances, weighted by a parameter $\alpha$, allowing flexibility in emphasizing shape or magnitude similarities. We delve deeply into the mechanics of DTW and the Savitzky-Golay filter, explaining their roles in the algorithm. Extensive experiments demonstrate the efficacy of LSC on synthetic and real-world datasets, showing that time-series-optimized methods can sometimes work surprisingly well on complex datasets, particularly in noisy environments. Source code and experiments are available at: https://github.com/JoanikijChulev/LSC.
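The alpha-weighted blend of Euclidean and DTW distances described in this abstract can be sketched as follows. This is an illustrative reconstruction from the description alone; the paper's exact weighting and normalization may differ:

```python
import math

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def lsc_distance(x, y, alpha=0.5):
    """Alpha-weighted blend: Euclidean captures magnitude, DTW captures
    shape of the feature-value sequence (names ours, not the paper's)."""
    euclid = math.sqrt(sum((p - q) ** 2 for p, q in zip(x, y)))
    return alpha * euclid + (1 - alpha) * dtw(x, y)
```

Sweeping alpha from 0 to 1 interpolates between pure shape-based (DTW) and pure magnitude-based (Euclidean) clustering, which is the flexibility the abstract highlights.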
http://arxiv.org/abs/2503.15778v1
AutoDrive-QA- Automated Generation of Multiple-Choice Questions for Autonomous Driving Datasets Using Large Vision-Language Models
2025-03-20T01:32:00+00:00
In autonomous driving, open-ended question answering often suffers from unreliable evaluations because freeform responses require either complex metrics or subjective human judgment. To address this challenge, we introduce AutoDrive-QA, an automatic pipeline that converts existing driving QA datasets (including DriveLM, NuScenes-QA, and LingoQA) into a structured multiple-choice question (MCQ) format. This benchmark systematically assesses perception, prediction, and planning tasks, providing a standardized and objective evaluation framework. AutoDrive-QA employs an automated pipeline that leverages large language models (LLMs) to generate high-quality, contextually relevant distractors based on domain-specific error patterns commonly found in autonomous driving scenarios. To evaluate both general capabilities and generalization performance, we test the benchmark on three public datasets and conduct zero-shot experiments on an unseen dataset. The zero-shot evaluations reveal that GPT-4V leads with 69.57% accuracy -- achieving 74.94% in Perception, 65.33% in Prediction, and 68.45% in Planning -- demonstrating that while all models excel in Perception, they struggle in Prediction. Consequently, AutoDrive-QA establishes a rigorous, unbiased standard for integrating and evaluating different vision-language models across various autonomous driving datasets, thereby improving generalization in this field. We release all the codes in the AutoDrive-QA GitHub Repository.
http://arxiv.org/abs/2503.15779v1
MobiFuse: Learning Universal Human Mobility Patterns through Cross-domain Data Fusion
2025-03-20T01:41:28+00:00
Human mobility modeling is critical for urban planning and transportation management, yet existing datasets often lack the resolution and semantic richness required for comprehensive analysis. To address this, we propose a cross-domain data fusion framework that integrates multi-modal data of distinct nature and spatio-temporal resolution, including geographical, mobility, socio-demographic, and traffic information, to construct a privacy-preserving and semantically enriched human travel trajectory dataset. This framework is demonstrated through two case studies in Los Angeles (LA) and Egypt, where a domain adaptation algorithm ensures its transferability across diverse urban contexts. Quantitative evaluation shows that the generated synthetic dataset accurately reproduces mobility patterns observed in empirical data. Moreover, large-scale traffic simulations for LA County based on the generated synthetic demand align well with observed traffic. On California's I-405 corridor, the simulation yields a Mean Absolute Percentage Error of 5.85% for traffic volume and 4.36% for speed compared to Caltrans PeMS observations.
http://arxiv.org/abs/2503.15780v1
On the Convexity of the Bernardi Integral Operator
2025-03-20T01:43:38+00:00
We prove that the Bernardi Integral Operator maps certain classes of bounded starlike functions into the class of convex functions, improving the result of Oros and Oros. We also present a general unified method for investigating various other integral operators that preserve many of the previously studied subclasses of univalent and p-valent functions.
http://arxiv.org/abs/2503.15781v1
UAS Visual Navigation in Large and Unseen Environments via a Meta Agent
2025-03-20T01:44:59+00:00
The aim of this work is to develop an approach that enables Unmanned Aerial System (UAS) to efficiently learn to navigate in large-scale urban environments and transfer their acquired expertise to novel environments. To achieve this, we propose a meta-curriculum training scheme. First, meta-training allows the agent to learn a master policy to generalize across tasks. The resulting model is then fine-tuned on the downstream tasks. We organize the training curriculum in a hierarchical manner such that the agent is guided from coarse to fine towards the target task. In addition, we introduce Incremental Self-Adaptive Reinforcement learning (ISAR), an algorithm that combines the ideas of incremental learning and meta-reinforcement learning (MRL). In contrast to traditional reinforcement learning (RL), which focuses on acquiring a policy for a specific task, MRL aims to learn a policy with fast transfer ability to novel tasks. However, the MRL training process is time-consuming, whereas our proposed ISAR algorithm achieves faster convergence than the conventional MRL algorithm. We evaluate the proposed methodologies in simulated environments and demonstrate that using this training philosophy in conjunction with the ISAR algorithm significantly improves the convergence speed for navigation in large-scale cities and the adaptation proficiency in novel environments.
http://arxiv.org/abs/2503.15782v1
High-throughput Discovery of Anti-gap Semiconductors
2025-03-20T01:45:53+00:00
Conventional semiconductors typically have bonding states near the valence band maximum (VBM) and antibonding states near the conduction band minimum (CBM). Semiconductors with the opposite electronic configuration, namely an antibonding VBM and a bonding CBM, are here termed ``anti-gap semiconductors''. They have been theoretically proposed to exhibit excellent optoelectronic properties because of their strong tolerance to defects. However, no anti-gap semiconductors have been identified so far, despite a known list of semiconductors with an antibonding VBM. Here, we use high-throughput computation to identify over 100 anti-gap semiconductors. From this group, we analyze the transition metal dichalcogenide MX$_2$ (M=Hf, Zr; X=S, Se) family in detail. In addition to verifying their defect tolerance for both electrons and holes using first-principles simulations, we also discovered that photoexcitation of charge carriers can lead to significant lattice stiffening and increased thermal conductivity in anti-gap semiconductors, which can be potentially used as photo-driven thermal switches. Our work analyzes the formation of the anti-gap electronic structure and showcases their unusual photoinduced lattice dynamics that can have a potential impact on their photophysical applications.
http://arxiv.org/abs/2503.15783v1
Grammar and Gameplay-aligned RL for Game Description Generation with LLMs
2025-03-20T01:47:33+00:00
Game Description Generation (GDG) is the task of generating a game description written in a Game Description Language (GDL) from natural language text. Previous studies have explored generation methods leveraging the contextual understanding capabilities of Large Language Models (LLMs); however, accurately reproducing the game features of the game descriptions remains a challenge. In this paper, we propose reinforcement learning-based fine-tuning of LLMs for GDG (RLGDG). Our training method simultaneously improves grammatical correctness and fidelity to game concepts by introducing both grammar rewards and concept rewards. Furthermore, we adopt a two-stage training strategy where Reinforcement Learning (RL) is applied following Supervised Fine-Tuning (SFT). Experimental results demonstrate that our proposed method significantly outperforms baseline methods using SFT alone.
http://arxiv.org/abs/2503.15784v1
RL4Med-DDPO: Reinforcement Learning for Controlled Guidance Towards Diverse Medical Image Generation using Vision-Language Foundation Models
2025-03-20T01:51:05+00:00
Vision-Language Foundation Models (VLFM) have shown a tremendous increase in performance in terms of generating high-resolution, photorealistic natural images. While VLFMs show a rich understanding of semantic content across modalities, they often struggle with fine-grained alignment tasks that require precise correspondence between image regions and textual descriptions, a limitation in medical imaging, where accurate localization and detection of clinical features are essential for diagnosis and analysis. To address this issue, we propose a multi-stage architecture where a pre-trained VLFM provides a cursory semantic understanding, while a reinforcement learning (RL) algorithm refines the alignment through an iterative process that optimizes for understanding semantic context. The reward signal is designed to align the semantic information of the text with synthesized images. We demonstrate the effectiveness of our method on a medical imaging skin dataset where the generated images exhibit improved generation quality and alignment with the prompt over the fine-tuned Stable Diffusion baseline. We also show that the synthesized samples could be used to improve disease classifier performance for underrepresented subgroups through augmentation.
http://arxiv.org/abs/2503.15785v2
Bridging Retrospective and Prospective Merger Analyses: The Case of US Airline Mergers
2025-03-20T01:57:05+00:00
We begin with a retrospective analysis of three major U.S. airline mergers and document the sensitivity of the findings, particularly questioning whether market conditions evolve similarly for treated and control markets. We then develop a structural model that clarifies this and other assumptions implicit in retrospective analyses and separates efficiency gains from increases in firms' conduct. Using only pre-merger data, we propose a reduced-form approach that leverages exogenous changes in market structure to forecast merger effects. Finally, we use structural prospective merger simulations with our other estimates for a comprehensive evaluation. This bridging of approaches uncovers a fundamental tension: either efficiency gains were limited or, if they were significant, they were accompanied and offset by coordinated effects.
http://arxiv.org/abs/2503.15786v1
Stable quadratic generalized IsoGeometric analysis for elliptic interface problem
2025-03-20T01:57:39+00:00
Unfitted mesh formulations for interface problems generally adopt two distinct methodologies: (i) penalty-based approaches and (ii) explicit enrichment space techniques. While the Stable Generalized Finite Element Method (SGFEM) has been rigorously established for one-dimensional and linear-element cases, the construction of optimal enrichment spaces preserving approximation-theoretic properties within isogeometric analysis (IGA) frameworks remains an open challenge. In this paper, we introduce a stable quadratic generalized isogeometric analysis (SGIGA2) for two-dimensional elliptic interface problems. The method is achieved through two key ideas: a new quasi-interpolation for functions that are C0-continuous along the interface and a new enrichment space with controlled condition number for the stiffness matrix. We mathematically prove that the present method has optimal convergence rates for elliptic interface problems and demonstrate its stability and robustness through numerical verification.
http://arxiv.org/abs/2503.15787v1
Enhancing Physical Layer Security in Cognitive Radio-Enabled NTNs with Beyond Diagonal RIS
2025-03-20T02:00:18+00:00
Beyond diagonal reconfigurable intelligent surfaces (BD-RIS) have emerged as a transformative technology for enhancing wireless communication by intelligently manipulating the propagation environment. This paper explores the potential of BD-RIS in improving cognitive radio enabled multilayer non-terrestrial networks (NTNs). It is assumed that a high-altitude platform station (HAPS) has set up the primary network, while an uncrewed aerial vehicle (UAV) establishes the secondary network in the HAPS footprint. We formulate a joint optimization problem to maximize the secrecy rate by optimizing BD-RIS phase shifts and the secondary transmitter power allocation while controlling the interference temperature from the secondary network to the primary network. To solve this problem efficiently, we decouple the original problem into two sub-problems, which are solved iteratively by relying on alternating optimization. Simulation results demonstrate the effectiveness of BD-RIS in cognitive radio-enabled multilayer NTNs to accommodate the secondary network while satisfying the constraints imposed by the primary network.
http://arxiv.org/abs/2503.15788v1
A two-stage model leveraging friendship network for community evolution prediction in interactive networks
2025-03-20T02:05:36+00:00
Interactive networks representing user participation and interactions in specific "events" are highly dynamic, with communities reflecting collective behaviors that evolve over time. Predicting these community evolutions is crucial for forecasting the trajectory of the related "event". Some models for community evolution prediction have been proposed, but they primarily focus on coarse-grained evolution types (e.g., expand, dissolve, merge, split), often neglecting fine-grained evolution extents (e.g., the extent of community expansion). Furthermore, these models typically utilize only one type of network data (here, interactive network data) for dynamic community featurization, overlooking the more stable friendship network that represents the friendships between people to enrich community representations. To address these limitations, we propose a two-stage model that predicts both the type and extent of community evolution. Our model unifies multi-class classification for evolution type and regression for evolution extent within a single framework and fuses data from both interactive and friendship networks for a comprehensive community featurization. We also introduce a hybrid strategy to differentiate between evolution types that are difficult to distinguish. Experimental results on three datasets show the significant superiority of the proposed model over other models, confirming its efficacy in predicting community evolution in interactive networks.
http://arxiv.org/abs/2503.15789v1
Distribution of $θ-$powers and their sums
2025-03-20T02:07:39+00:00
We refine a remark of Steinerberger (2024), proving that for $\alpha \in \mathbb{R}$, there exist integers $1 \leq b_{1}, \ldots, b_{k} \leq n$ such that \[ \left\| \sum_{j=1}^k \sqrt{b_j} - \alpha \right\| = O(n^{-\gamma_k}), \] where $\gamma_{k} \geq (k-1)/4$, $\gamma_2 = 1$, and $\gamma_k = k/2$ for $k = 2^m - 1$. We extend this to higher-order roots. Building on the Bambah-Chowla theorem, we study gaps in $\{x^{\theta}+y^{\theta}: x,y\in \mathbb{N}\cup\{0\}\}$, yielding a modulo one result with $\gamma_2 = 1$ and bounded gaps for $\theta = 3/2$. Given $\rho(m) \geq 0$ with $\sum_{m=1}^{\infty} \rho(m)/m < \infty$, we show that the number of solutions to \[ \left|\sum_{j=1}^{k} a_j^{\theta} - b\right| \leq \frac{\rho\left(\|(a_1, \dots, a_k)\|_{\infty}\right)}{\|(a_1, \dots, a_k)\|_{\infty}^{k}}, \] in the variables $((a_{j})_{j=1}^{k},b) \in \mathbb{N}^{k+1}$ is finite for almost all $\theta>0$. We also identify exceptional values of $\theta$, resolving a question of Dubickas (2024), by proving the existence of a transcendental $\tau$ for which $\|n^{\tau}\| \leq n^v$ has infinitely many solutions for any $v \in \mathbb{R}$.
http://arxiv.org/abs/2503.15790v1
Experimental demonstration of electric power generation from Earth's rotation through its own magnetic field
2025-03-20T02:11:56+00:00
Earth rotates through the axisymmetric part of its own magnetic field, but a simple proof shows that it is impossible to use this to generate electricity in a conductor rotating with Earth. However, we previously identified implicit assumptions underlying this proof and showed theoretically that these could be violated and the proof circumvented. This requires using a soft magnetic material with a topology satisfying a particular mathematical condition and a composition and scale favoring magnetic diffusion, i.e. having a low magnetic Reynolds number Rm (C.F. Chyba, K.P. Hand, Electric power generation from Earth's rotation through its own magnetic field. Phys. Rev. Applied 6, 014017-1-18 (2016)). Here we realize these requirements with a cylindrical shell of manganese-zinc ferrite. Controlling for thermoelectric and other potentially confounding effects (including 60 Hz and RF background), we show that this small demonstration system generates a continuous DC voltage and current of the (low) predicted magnitude. We test and verify other predictions of the theory: voltage and current peak when the cylindrical shell's long axis is orthogonal to both Earth's rotational velocity v and magnetic field; voltage and current go to zero when the entire apparatus (cylindrical shell together with current leads and multimeters) is rotated 90 degrees to orient the shell parallel to v; voltage and current again reach a maximum but of opposite sign when the apparatus is rotated a further 90 degrees; an otherwise-identical solid MnZn cylinder generates zero voltage at all orientations; and a high-Rm cylindrical shell produces zero voltage. We also reproduce the effect at a second experimental location. The purpose of these experiments was to test the existence of the predicted effect. Ways in which this effect might be scaled to generate higher voltage and current may now be investigated.
http://arxiv.org/abs/2503.15791v1
Canonical torus action on symplectic singularities
2025-03-20T02:13:51+00:00
We show that any symplectic singularity lying on a smoothable projective symplectic variety locally admits a good action of an algebraic torus of dimension $r \geq 1$, which is canonical. In particular, it admits a good $\mathbb{C}^*$-action. This proves Kaledin's conjecture conditionally but in a substantially stronger form. Our key idea is to relate Donaldson-Sun theory on local Kahler metrics in complex differential geometry to the theory of Poisson deformations of symplectic varieties. We also prove results on the local behaviour of (singular) hyperKahler metrics. For instance, we show that the singular hyperKahler metric of any smoothable projective symplectic variety around an isolated singularity is close to a Riemannian cone at a polynomial rate. Most of our results also work for symplectic singularities on hyperKahler quotients under some conditions.
http://arxiv.org/abs/2503.15792v3
Turnstile area as a measure for chaotic transport in magnetic confinement fusion devices
2025-03-20T02:14:16+00:00
We analyze stochasticity in the magnetic fields of magnetic confinement fusion reactors by calculating the lobe areas of turnstiles - a method developed for characterizing transport into and out of resonance zones in Hamiltonian dynamical systems. We develop an efficient algorithm based on an action principle to calculate this quantity directly from the magnetic field, including stellarator magnetic fields which are sourced by a complicated set of three-dimensional coils. In the analyzed devices, the turnstile area on the inboard (plasma-facing) manifolds is much smaller than the turnstile area on the outboard (wall-facing) manifolds. The application of the turnstile area calculation for the design of future reactors will be discussed.
http://arxiv.org/abs/2503.15793v3
DNR Bench: Benchmarking Over-Reasoning in Reasoning LLMs
2025-03-20T02:19:14+00:00
Test-time scaling has significantly improved large language model performance, enabling deeper reasoning to solve complex problems. However, this increased reasoning capability also leads to excessive token generation and unnecessary problem-solving attempts. We introduce Don't Answer Bench (DNA Bench), a new benchmark designed to evaluate LLMs' ability to robustly understand tricky reasoning triggers and avoid unnecessary generation. DNA Bench consists of 150 adversarially designed prompts that are easy for humans to understand and respond to, but surprisingly not for many of the recent prominent LLMs. DNA Bench tests models' abilities across different capabilities, such as instruction adherence, hallucination avoidance, redundancy filtering, and unanswerable question recognition. We evaluate reasoning LLMs (RLMs), including DeepSeek-R1, OpenAI O3-mini, and Claude-3.7-sonnet, and compare them against a powerful non-reasoning model, e.g., GPT-4o. Our experiments reveal that RLMs generate up to 70x more tokens than necessary, often failing at tasks that simpler non-reasoning models handle efficiently with higher accuracy. Our findings underscore the need for more effective training and inference strategies in RLMs.
http://arxiv.org/abs/2503.15794v1
Finite-Horizon Discrete-Time Optimal Control for Nonlinear Systems under State and Control Constraints
2025-03-20T02:19:53+00:00
This paper addresses the optimal control problem of finite-horizon discrete-time nonlinear systems under state and control constraints. A novel numerical algorithm based on optimal control theory is proposed to achieve superior computational efficiency, with the novelty lying in establishing a unified framework that integrates all aspects of algorithm design through the solution of forward and backward difference equations (FBDEs). Firstly, the state and control constraints are transformed using an augmented Lagrangian method (ALM), thereby decomposing the original optimal control problem into several optimization subproblems. These subproblems are then reformulated as new optimal control problems, which are solved through the corresponding FBDEs, resulting in an algorithm with a superlinear convergence rate. Furthermore, the gradient and Hessian matrix are computed by iteratively solving FBDEs, thereby accelerating the optimization process. The gradient is obtained through the standard Hamiltonian, while the Hessian matrix is derived by constructing a novel Hamiltonian specifically designed for second-order optimization, transforming each row into an iterative solution of a new set of FBDEs. Finally, the effectiveness of the algorithm is validated through simulation results in automatic guided vehicle (AGV) trajectory tracking control.
http://arxiv.org/abs/2503.15795v1
Pulsations and Pre-He White Dwarf in the Post-mass Transfer Eclipsing System WASP 1021-28
2025-03-20T02:25:16+00:00
We present results from VLT/UVES spectra and TESS photometric observations of the pulsating EL CVn binary WASP 1021-28, containing a He-core white dwarf precursor (pre-He WD). Double-lined radial velocities were measured with the atmospheric parameters of $T_{\rm eff,A}$ = 7411$\pm40$ K, [M/H] = 0.34$\pm$0.05 dex, and $v_{\rm A}$$\sin i$ = 86.6$\pm$4.0 km s$^{-1}$ for the more massive primary. Combining these measurements and TESS data from four sectors allowed the direct calculation of accurate values for the absolute parameters of each component and the distance to the system. The third-light source of $l_3$ = 0.029 may be the outer tertiary object previously discovered by SPHERE/IRDIS observations. WASP 1021-28 A is located near the blue edge of the $\gamma$ Dor instability strip, and the less massive companion is concurrent with the He-core WD model for metallicity $Z$ = 0.02 and mass $M$ = 0.191 $M_\odot$. The $Z$ value and the Galactic kinematics demonstrate that the program target belongs to the thin-disk population. We iteratively prewhitened the entire TESS residuals and extracted four and nine significant signals in two ranges of 1.12$-$2.25 day$^{-1}$ and 111.25$-$139.24 day$^{-1}$, respectively. A signal of $f_2$ = 1.31865 day$^{-1}$ in the low-frequency region can be attributed to the $\gamma$ Dor pulsation of WASP 1021-28 A, and the high frequencies may be extremely low-mass pre-He WD oscillations. The results presented here provide valuable information on the evolution of short-period EL CVn stars proposed as inner binaries of hierarchical triple systems and the multiperiodic pulsations.
http://arxiv.org/abs/2503.15796v1
Blend the Separated: Mixture of Synergistic Experts for Data-Scarcity Drug-Target Interaction Prediction
2025-03-20T02:27:16+00:00
Drug-target interaction prediction (DTI) is essential in various applications including drug discovery and clinical application. There are two perspectives of input data widely used in DTI prediction: Intrinsic data represents how drugs or targets are constructed, and extrinsic data represents how drugs or targets are related to other biological entities. However, either of the two perspectives of input data can be scarce for some drugs or targets, especially for those unpopular or newly discovered. Furthermore, ground-truth labels for specific interaction types can also be scarce. Therefore, we propose the first method to tackle DTI prediction under input data and/or label scarcity. To make our model functional when only one perspective of input data is available, we design two separate experts to process intrinsic and extrinsic data respectively and fuse them adaptively according to different samples. Furthermore, to make the two perspectives complement each other and remedy label scarcity, the two experts synergize with each other in a mutually supervised way to exploit the enormous unlabeled data. Extensive experiments on 3 real-world datasets under different extents of input data scarcity and/or label scarcity demonstrate our model outperforms the state of the art significantly and steadily, with a maximum improvement of 53.53%. We also test our model without any data scarcity and it still outperforms current methods.
http://arxiv.org/abs/2503.15797v1
Multispectral radiation temperature inversion based on Transformer-LSTM-SVM
2025-03-20T02:30:23+00:00
The key challenge in multispectral radiation thermometry is accurately measuring emissivity. Traditional constrained optimization methods often fail to meet practical requirements in terms of precision, efficiency, and noise resistance. However, the continuous advancement of neural networks in data processing offers a potential solution to this issue. This paper presents a multispectral radiation thermometry algorithm that combines Transformer, LSTM (Long Short-Term Memory), and SVM (Support Vector Machine) to mitigate the impact of emissivity, thereby enhancing accuracy and noise resistance. In simulations, compared to the BP neural network algorithm, GIM-LSTM, and Transformer-LSTM algorithms, the Transformer-LSTM-SVM algorithm demonstrates an improvement in accuracy of 1.23%, 0.46% and 0.13%, respectively, without noise. When 5% random noise is added, the accuracy increases by 1.39%, 0.51%, and 0.38%, respectively. Finally, experiments confirmed that the maximum temperature error using this method is less than 1%, indicating that the algorithm offers high accuracy, fast processing speed, and robust noise resistance. These characteristics make it well-suited for real-time high-temperature measurements with multi-wavelength thermometry equipment.
http://arxiv.org/abs/2503.15798v1
Mixture of Lookup Experts
2025-03-20T02:31:57+00:00
Mixture-of-Experts (MoE) activates only a subset of experts during inference, allowing the model to maintain low inference FLOPs and latency even as the parameter count scales up. However, since MoE dynamically selects the experts, all the experts need to be loaded into VRAM. Their large parameter size still limits deployment, and offloading, which loads experts into VRAM only when needed, significantly increases inference latency. To address this, we propose Mixture of Lookup Experts (MoLE), a new MoE architecture that is efficient in both communication and VRAM usage. In MoLE, the experts are Feed-Forward Networks (FFNs) during training, taking the output of the embedding layer as input. Before inference, these experts can be re-parameterized as lookup tables (LUTs) that retrieve expert outputs based on input ids, and offloaded to storage devices. Therefore, we do not need to perform expert computations during inference. Instead, we directly retrieve the expert's computation results based on input ids and load them into VRAM, and thus the resulting communication overhead is negligible. Experiments show that, with the same FLOPs and VRAM usage, MoLE achieves inference speeds comparable to dense models and significantly faster than MoE with experts offloading, while maintaining performance on par with MoE.
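The re-parameterization idea admits a compact sketch: because each MoLE expert sees only the embedding of the current token, its output is a pure function of the token id and can be tabulated offline. The shapes, single ReLU FFN, and variable names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, d_hidden = 16, 8, 32  # toy sizes for illustration

# Embedding table and one "expert" FFN with random (stand-in) weights.
emb = rng.normal(size=(vocab_size, d_model))
W1 = rng.normal(size=(d_model, d_hidden))
W2 = rng.normal(size=(d_hidden, d_model))

def expert_ffn(x):
    """Expert as a feed-forward network applied to embedding outputs."""
    return np.maximum(x @ W1, 0.0) @ W2  # ReLU FFN

# Re-parameterization: precompute the expert's output for every token id.
# The resulting table can live on disk; inference is a single row fetch.
lut = expert_ffn(emb)                      # shape (vocab_size, d_model)

token_id = 5
assert np.allclose(lut[token_id], expert_ffn(emb[token_id]))
```

The final assertion is the whole point: looking up row `token_id` reproduces the FFN's computation exactly, so no expert FLOPs are needed at inference time.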
http://arxiv.org/abs/2503.15799v1
Photocatalytic Carbon Dioxide Methanation by High-Entropy Oxides: Significance of Work Function
2025-03-20T02:34:22+00:00
Methane (CH4) formation from photocatalytic carbon dioxide (CO2) conversion in water is currently of interest because methane is a fuel, and it can also be transformed into other useful hydrocarbons. However, achieving high selectivity to produce methane remains a challenge because of the large number of contributing electrons (eight) in methanation. High-entropy oxides present a new pathway to tune the catalyst selectivity by arranging various cations in the lattice. This study aims to clarify the selectivity for methane formation in high-entropy photocatalysts containing hybrid d0 + d10 orbital configuration. Several oxides are designed and synthesized which have a base of 3-4 cations with d0 orbital configuration (titanium and zirconium with a valence of 4, and niobium and tantalum with a valence of 5) and incorporate 1-2 elements with d10 orbital configuration (zinc, gallium, indium, bismuth and copper). Results demonstrate that adding elements with a d10 electronic configuration is effective for methane formation, while the selectivity toward methanation is enhanced by increasing the work function of the d10 cations. Selectivity levels over 50% are achieved using these oxides, suggesting a potential strategy for designing new catalysts for methanation.
http://arxiv.org/abs/2503.15800v1
Frequency Enhancement for Image Demosaicking
2025-03-20T02:37:10+00:00
Recovering high-frequency textures in image demosaicking remains a challenging issue. While existing methods introduced elaborate spatial learning methods, they still exhibit limited performance. To address this issue, a frequency enhancement approach is proposed. Based on the frequency analysis of color filter array (CFA)/demosaicked/ground truth images, we propose Dual-path Frequency Enhancement Network (DFENet), which reconstructs RGB images in a divide-and-conquer manner through Fourier-domain frequency selection. In DFENet, two frequency selectors are employed, each selecting a set of frequency components for processing along separate paths. One path focuses on generating missing information through detail refinement in the spatial domain, while the other aims at suppressing undesirable frequencies with the guidance of CFA images in the frequency domain. Multi-level frequency supervision with a stagewise training strategy is employed to further improve the reconstruction performance. With these designs, the proposed DFENet outperforms other state-of-the-art algorithms on different datasets and demonstrates significant advantages on hard cases. Moreover, to better assess algorithms' ability to reconstruct high-frequency textures, a new dataset, LineSet37, is contributed, which consists of 37 artificially designed and generated images. These images feature complex line patterns and are prone to severe visual artifacts like color moir\'e after demosaicking. Experiments on LineSet37 offer a more targeted evaluation of performance on challenging cases. The code and dataset are available at https://github.com/VelvetReverie/DFENet-demosaicking.
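A frequency selector of the kind described can be approximated with an FFT-domain mask. The radial low/high split below is a hypothetical stand-in for DFENet's learned selectors, included only to make the divide-and-conquer idea concrete:

```python
import numpy as np

def frequency_select(img, radius, keep_low=True):
    """Hypothetical frequency selector: keep (or suppress) the frequency
    components within `radius` of the DC term via an FFT-domain mask."""
    F = np.fft.fftshift(np.fft.fft2(img))          # center the spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    mask = dist <= radius if keep_low else dist > radius
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Because the two masks partition the spectrum, the low-pass and high-pass outputs sum back to the input image; each path can then be processed separately, mirroring the two-selector design in the abstract.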
http://arxiv.org/abs/2503.15801v1
Disentangling Uncertainties by Learning Compressed Data Representation
2025-03-20T02:37:48+00:00
We study aleatoric and epistemic uncertainty estimation in a learned regressive system dynamics model. Disentangling aleatoric uncertainty (the inherent randomness of the system) from epistemic uncertainty (the lack of data) is crucial for downstream tasks such as risk-aware control and reinforcement learning, efficient exploration, and robust policy transfer. While existing approaches like Gaussian Processes, Bayesian networks, and model ensembles are widely adopted, they suffer from either high computational complexity or inaccurate uncertainty estimation. To address these limitations, we propose the Compressed Data Representation Model (CDRM), a framework that learns a neural network encoding of the data distribution and enables direct sampling from the output distribution. Our approach incorporates a novel inference procedure based on Langevin dynamics sampling, allowing CDRM to predict arbitrary output distributions rather than being constrained to a Gaussian prior. Theoretical analysis provides the conditions under which CDRM achieves better memory and computational complexity than bin-based compression methods. Empirical evaluations show that CDRM demonstrates a superior capability to identify aleatoric and epistemic uncertainties separately, achieving AUROCs of 0.8876 and 0.9981 on a single test set containing a mixture of both uncertainties. Qualitative results further show that CDRM's capability extends to datasets with multimodal output distributions, a challenging scenario where existing methods consistently fail. Code and supplementary materials are available at https://github.com/ryeii/CDRM.
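The Langevin-based inference step follows the standard unadjusted Langevin update, x_{t+1} = x_t + eta * grad log p(x_t) + sqrt(2 * eta) * noise. The sketch below illustrates that update on a toy Gaussian target and is not CDRM's actual inference code; the score function would come from the learned network:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=1e-2, n_steps=2000, rng=None):
    """Unadjusted Langevin dynamics: gradient ascent on log p(x)
    plus Gaussian noise, yielding approximate samples from p(x)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x

# Toy target: standard Gaussian, where grad log p(x) = -x.
samples = np.array([
    langevin_sample(lambda x: -x, [5.0], rng=np.random.default_rng(s))[0]
    for s in range(200)
])
```

Even starting far from the mode (x0 = 5), the chains concentrate near the target's mean 0 with unit-scale spread, which is the property CDRM exploits to draw samples from an arbitrary learned output distribution.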
http://arxiv.org/abs/2503.15802v1
The Ambiguous Age and Tidal History for the Ultra-Hot Jupiter TOI-1937Ab
2025-03-20T02:38:34+00:00
Ultra-short-period (USP) planets are a rare but dynamically significant subset of the exoplanet sample, and understanding their dynamical histories and migration processes is necessary to build a complete picture of the outcomes of planet formation. In this work, we present an analysis of system age constraints and the impact of tidal evolution in the TOI-1937A system, a component of a large-separation stellar binary with an ambiguous age constraint that hosts a massive (> 2 $M_{Jup}$) USP planetary companion. Through a suite of tidal evolution simulations and analysis of the transit timing variations present in the photometric data, we find that the ultra-hot Jupiter TOI-1937Ab is likely undergoing orbital decay driven by tidal interactions, and we place an observational upper limit on its decay rate of |$\dot{P}$| < 0.09. We consider three different hypotheses for the system age based on three distinct methods of age estimation. These three age limits are complemented by indirect evidence of the age of the star that comes from our dynamical and transit timing analyses. We discuss the possibility that future data will provide more concrete constraints on the tidal parameters of TOI-1937Ab and its host star.
http://arxiv.org/abs/2503.15803v1
Linear-Quadratic Partially Observed Mean Field Stackelberg Stochastic Differential Game
2025-03-20T02:38:54+00:00
This paper is concerned with a linear-quadratic partially observed mean field Stackelberg stochastic differential game, which contains a leader and a large number of followers. Specifically, the followers confront a large-population Nash game subsequent to the leader's initial announcement of his strategy. In turn, the leader optimizes his own cost functional, taking into account the anticipated reactions of the followers. The state equations of both the leader and the followers are general stochastic differential equations, where the drift terms contain both the state average term and the state expectation term. However, the followers' average state terms enter into the drift term of the leader's state equation and the state expectation term of the leader enters into the state equation of the follower, reflecting the mutual influence between the leader and the followers. By utilizing the techniques of state decomposition and backward separation principle, we deduce the open-loop adapted decentralized strategies and feedback decentralized strategies of this leader-followers system, and demonstrate that the decentralized strategies are the corresponding $\varepsilon$-Stackelberg-Nash equilibrium.
http://arxiv.org/abs/2503.15804v1
Communication Efficient Federated Learning with Linear Convergence on Heterogeneous Data
2025-03-20T02:43:02+00:00
By letting local clients perform multiple local updates before communicating with a parameter server, modern federated learning algorithms such as FedAvg tackle the communication bottleneck problem in distributed learning and have found many successful applications. However, this asynchrony between local updates and communication also leads to a ''client-drift'' problem when the data is heterogeneous (not independent and identically distributed), resulting in errors in the final learning result. In this paper, we propose a federated learning algorithm, which is called FedCET, to ensure accurate convergence even under heterogeneous distributions of data across clients. Inspired by the distributed optimization algorithm NIDS, we use learning rates to weight information received from local clients to eliminate the ''client-drift''. We prove that under appropriate learning rates, FedCET can ensure linear convergence to the exact solution. Different from existing algorithms which have to share both gradients and a drift-correction term to ensure accurate convergence under heterogeneous data distributions, FedCET only shares one variable, which significantly reduces communication overhead. Numerical comparison with existing counterpart algorithms confirms the effectiveness of FedCET.
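For background, the FedAvg-style pattern the abstract contrasts FedCET with, local updates followed by a weighted server-side average, can be sketched as below. The least-squares objective, weights, and hyperparameters are illustrative, and this is plain FedAvg rather than the paper's FedCET.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, local_steps=5):
    """Several local gradient steps on the least-squares loss
    ||Xw - y||^2 / n for one client."""
    w = w.copy()
    for _ in range(local_steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w, clients, weights):
    """One communication round: each client runs local updates, then
    the server takes a weighted average of the returned models."""
    updates = [local_update(w, X, y) for X, y in clients]
    return sum(a * u for a, u in zip(weights, updates))

rng = np.random.default_rng(1)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20))
           for _ in range(4)]          # heterogeneous local datasets
w = np.zeros(3)
for _ in range(50):
    w = fedavg_round(w, clients, weights=[0.25] * 4)
```

With heterogeneous clients, this plain average drifts away from the exact global minimizer; the abstract's point is that FedCET corrects this drift while communicating only a single variable.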
http://arxiv.org/abs/2503.15805v1
Multiwavelength Analysis of GRB 250101A: From Gamma-ray Prompt Emission to Optical Afterglow
2025-03-20T02:43:19+00:00
Gamma-ray bursts (GRBs) are the most luminous transients in the universe. The interaction between the relativistic jet and the circumburst medium produces a multiwavelength afterglow through synchrotron radiation. In this work, we present multiwavelength properties of GRB~250101A based on the observations of Swift, Fermi, and Multi-channel Photometric Survey Telescope (Mephisto). The spectral analysis of Swift/BAT and Fermi/GBM reveals a soft prompt spectrum with a low-energy photon index of $-1.18$ and a peak energy of 33 keV, and the isotropic energy is $1.4\times10^{52}~{\rm erg}$. The prompt emission of GRB 250101A aligns with Type II GRBs in the Amati relation. Meanwhile, our analysis indicates that GRB 250101A is an X-ray-rich or X-ray-dominated GRB, with intrinsic properties suggesting that it is relatively softer than most classical GRBs. Optical observation with Mephisto, beginning 197 s post-trigger, shows a single power-law decay in $uvgriz$ bands, with $F_{\nu,\mathrm{obs}} \propto t^{-0.76} \nu^{-1.20}$. The observed spectral index significantly exceeds theoretical predictions under standard afterglow models, suggesting a color excess of $\sim0.21$ mag. However, combining X-ray and optical afterglow, we find that GRB 250101A is more likely a ``normal burst'' rather than an ``optical-dark burst'', and the dust extinction effect plays an important role in the optical blue bands. Furthermore, there is a structural change at $T_0+2924$ s in the optical light curve, indicating a density drop of $\sim50$ \% in the interstellar medium at a distance of $\sim0.05~{\rm pc}$.
http://arxiv.org/abs/2503.15806v1
Kinks of fractional $φ^4$ models: existence, uniqueness, monotonicity, stability, and sharp asymptotics
2025-03-20T02:45:35+00:00
In the present work we construct kink solutions for different (parabolic and wave) variants of the fractional $\phi^4$ model, in both the sub-Laplacian and super-Laplacian setting. We establish existence and monotonicity results (for the sub-Laplacian case), along with sharp asymptotics which are corroborated through numerical computations. Importantly, in the sub-Laplacian regime, we provide an explicit and numerically verifiable spectral condition which guarantees uniqueness for odd kinks. We numerically check the relevant condition to confirm the uniqueness of such solutions. In addition, we show asymptotic stability for the stationary kinks in the parabolic setting, as well as spectral stability for the traveling kinks in the corresponding wave equation.
http://arxiv.org/abs/2503.15807v1
Video-VoT-R1: An efficient video inference model integrating image packing and AoE architecture
2025-03-20T02:50:57+00:00
In the field of video-language pretraining, existing models face numerous challenges in terms of inference efficiency and multimodal data processing. This paper proposes a KunLunBaize-VoT-R1 video inference model based on a long-sequence image encoder, along with its training and application methods. By integrating image packing technology and the Autonomy-of-Experts (AoE) architecture, and by combining Video of Thought (VoT), a large language model (LLM) trained with large-scale reinforcement learning, and multiple training techniques, the efficiency and accuracy of the model in video inference tasks are effectively improved. Experiments show that this model performs outstandingly in multiple tests, providing a new solution for video-language understanding.
http://arxiv.org/abs/2503.15808v1
ChatGPT and U(X): A Rapid Review on Measuring the User Experience
2025-03-20T02:51:11+00:00
ChatGPT, powered by a large language model (LLM), has revolutionized everyday human-computer interaction (HCI) since its 2022 release. While now used by millions around the world, a coherent pathway for evaluating the user experience (UX) ChatGPT offers remains missing. In this rapid review (N = 58), I explored how ChatGPT UX has been approached quantitatively so far. I focused on the independent variables (IVs) manipulated, the dependent variables (DVs) measured, and the methods used for measurement. Findings reveal trends, gaps, and emerging consensus in UX assessments. This work offers a first step towards synthesizing existing approaches to measuring ChatGPT UX, urgent trajectories to advance standardization and breadth, and two preliminary frameworks aimed at guiding future research and tool development. I seek to elevate the field of ChatGPT UX by empowering researchers and practitioners in optimizing user interactions with ChatGPT and similar LLM-based systems.
http://arxiv.org/abs/2503.15809v1
Controlling Avatar Diffusion with Learnable Gaussian Embedding
2025-03-20T02:52:01+00:00
Recent advances in diffusion models have made significant progress in digital human generation. However, most existing models still struggle to maintain 3D consistency, temporal coherence, and motion accuracy. A key reason for these shortcomings is the limited representation ability of commonly used control signals (e.g., landmarks, depth maps, etc.). In addition, the lack of diversity in identity and pose variations in public datasets further hinders progress in this area. In this paper, we analyze the shortcomings of current control signals and introduce a novel control signal representation that is optimizable, dense, expressive, and 3D consistent. Our method embeds a learnable neural Gaussian onto a parametric head surface, which greatly enhances the consistency and expressiveness of diffusion-based head models. Regarding the dataset, we synthesize a large-scale dataset with multiple poses and identities. In addition, we use real/synthetic labels to effectively distinguish real and synthetic data, minimizing the impact of imperfections in synthetic data on the generated head images. Extensive experiments show that our model outperforms existing methods in terms of realism, expressiveness, and 3D consistency. Our code, synthetic datasets, and pre-trained models will be released on our project page: https://ustc3dv.github.io/Learn2Control/
http://arxiv.org/abs/2503.15810v1
Big data comparison of quantum invariants
2025-03-20T02:52:08+00:00
We apply big data techniques, including exploratory and topological data analysis, to investigate quantum invariants. More precisely, our study explores the Jones polynomial's structural properties and contrasts its behavior under four principal methods of enhancement: coloring, rank increase, categorification, and leaving the realm of Lie algebras.
http://arxiv.org/abs/2503.15811v1
Reduced density matrix approach to one-dimensional ultracold bosonic systems
2025-03-20T02:54:22+00:00
The variational determination of the two-boson reduced density matrix is described for a one-dimensional system of $N$ (where $N$ ranges from $2$ to $10^4$) harmonically trapped bosons interacting via contact interaction. The ground-state energies are calculated, and compared to existing methods in the field, including the analytic case (for $N=2$) and mean-field approaches such as the one-dimensional Gross-Pitaevskii equation and its variations. Structural properties including the density and correlation functions are also derived, including the behaviour of the correlation function when boson coordinates coincide, collectively demonstrating the capacity of the reduced density matrix method to accurately calculate ground-state properties of bosonic systems comprising few to many bosons, including the cross-over region between these extremes, across a large range of interaction strengths.
http://arxiv.org/abs/2503.15812v5
Data Spatial Programming
2025-03-20T02:55:40+00:00
We introduce a novel programming model, Data Spatial Programming, which extends the semantics of Object-Oriented Programming (OOP) by introducing new class-like constructs called archetypes. These archetypes encapsulate the topological relationships between data entities and the execution flow in a structured manner, enabling more expressive and semantically rich computations over interconnected data structures or finite states. By formalizing the relationships between data elements in this topological space, our approach allows for more intuitive modeling of complex systems where a topology of connections is formed for the underlying computational model. This paradigm addresses limitations in traditional OOP when representing a wide range of problems in computer science such as agent-based systems, social networks, processing on relational data, neural networks, distributed systems, finite state machines, and other spatially-oriented computational problems.
http://arxiv.org/abs/2503.15813v4
An isoperimetric inequality for lower order Neumann eigenvalues in Gauss space
2025-03-20T02:55:42+00:00
We prove a sharp isoperimetric inequality for the harmonic mean of the first $m-1$ nonzero Neumann eigenvalues for bounded Lipschitz domains symmetric about the origin in Gauss space. Our result generalizes the Szegő-Weinberger type inequality in Gauss space, as proved in [8, Theorem 4.1].
http://arxiv.org/abs/2503.15814v1
TONGS: A Treasury Of Nearby Galaxy Surveys
2025-03-20T02:56:51+00:00
The beginning of the 21st century marked the "modern era of galaxy surveys" in astronomy. Rapid innovation in observing technology, combined with the base built by galaxy catalogs and atlases dating back centuries, sparked an explosion of new observational programs driven by efforts to understand the different processes driving galaxy evolution. This review aims to answer the following science questions: (1) how have galaxy surveys evolved in the past 20 years, and how have traditional observational programs been affected by the rise of large panoramic surveys, (2) can the term "nearby" be quantified in the context of galaxy surveys, and (3) how complete is the coverage of the nearby universe and what areas hold the largest opportunity for future work? We define a galaxy survey as a systematically obtained data set which aims to characterize a set of astronomical objects. Galaxy surveys can further be subdivided based on the methods used to select the objects to observe, the properties of the survey samples (e.g. distance or morphology), or the observing strategies used. We focus on \textit{pointed} nearby galaxy surveys, which we define as surveys which observe a specific sample of target galaxies. Through a study of 43 nearby galaxy surveys, we find no standardized quantitative definition for "nearby" with surveys covering a wide range of distances. We observe that since 2003, traditional targeted galaxy surveys have undergone a dramatic evolution, transitioning from large, statistical surveys to small, ultra-specific projects which complement the rise of large high resolution panoramic surveys. While wavelength regimes observable from the ground (such as radio or optical wavelengths) host numerous surveys, the largest opportunity for future work is within the less covered space-based wavelength regimes (especially ultraviolet and X-ray).
http://arxiv.org/abs/2503.15815v1
Attention Pruning: Automated Fairness Repair of Language Models via Surrogate Simulated Annealing
2025-03-20T03:02:32+00:00
This paper explores pruning attention heads as a post-processing bias mitigation method for large language models (LLMs). Modern AI systems such as LLMs are expanding into sensitive social contexts where fairness concerns become especially crucial. Since LLMs develop decision-making patterns by training on massive datasets of human-generated content, they naturally encode and perpetuate societal biases. While modifying training datasets and algorithms is expensive and requires significant resources, post-processing techniques, such as selectively deactivating neurons and attention heads in pre-trained LLMs, can provide feasible and effective approaches to improve fairness. However, identifying the optimal subset of parameters to prune presents a combinatorial challenge within LLMs' immense parameter space, requiring solutions that efficiently balance competing objectives across the frontiers of model fairness and utility. To address the computational challenges, we explore a search-based program repair approach via randomized simulated annealing. Given the prohibitive evaluation costs in billion-parameter LLMs, we develop surrogate deep neural networks that efficiently model the relationship between attention head states (active/inactive) and their corresponding fairness/utility metrics. This allows us to perform optimization over the surrogate models and efficiently identify optimal subsets of attention heads for selective pruning rather than directly searching through the LLM parameter space. This paper introduces Attention Pruning, a fairness-aware surrogate simulated annealing approach to prune attention heads in LLMs that disproportionately contribute to bias while minimally impacting overall model utility. Our experiments show that Attention Pruning achieves up to $40\%$ reduction in gender bias and outperforms the state-of-the-art bias mitigation strategies.
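The randomized simulated annealing over attention-head masks can be sketched generically as follows. The `score` function below is a toy stand-in for the paper's surrogate networks, and the head count, cooling schedule, and proposal rule are arbitrary illustrative choices.

```python
import numpy as np

def simulated_annealing(score, n_heads, iters=500, t0=1.0, rng=None):
    """Anneal over binary head masks (1 = head active); `score` maps a
    mask to a combined fairness/utility objective (lower is better)."""
    rng = rng or np.random.default_rng(0)
    cur = np.ones(n_heads, dtype=int)
    cur_score = score(cur)
    best, best_score = cur.copy(), cur_score
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-6            # linear cooling
        cand = cur.copy()
        cand[rng.integers(n_heads)] ^= 1           # flip one head
        s = score(cand)
        # Accept improvements always; accept worse moves with
        # probability exp(-(s - cur_score) / t).
        if s < cur_score or rng.random() < np.exp((cur_score - s) / t):
            cur, cur_score = cand, s
        if cur_score < best_score:
            best, best_score = cur.copy(), cur_score
    return best, best_score

# Toy surrogate: pruning heads 0-4 helps, pruning the rest hurts.
score = lambda m: m[:5].sum() + (1 - m[5:]).sum()
best, val = simulated_annealing(score, n_heads=16)
```

In the paper's setting each `score` call would be a cheap surrogate-network forward pass rather than a full LLM evaluation, which is what makes the search tractable.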
http://arxiv.org/abs/2503.15816v2
A Vision Centric Remote Sensing Benchmark
2025-03-20T03:03:46+00:00
Multimodal Large Language Models (MLLMs) have achieved remarkable success in vision-language tasks, but their remote sensing (RS) counterparts are relatively underexplored. Unlike natural images, RS imagery presents unique challenges that current MLLMs struggle to handle, particularly in visual grounding and spatial reasoning. This study investigates the limitations of CLIP-based MLLMs in RS, highlighting their failure to differentiate visually distinct yet semantically similar RS images. To address this, we introduce a remote sensing multimodal visual patterns (RSMMVP) benchmark. It is designed to evaluate MLLMs in RS tasks by identifying the CLIP-blind pairs, where CLIP-based models incorrectly assign high similarity scores to visually distinct RS images. Through a visual question answering (VQA) evaluation, we analyze the performance of state-of-the-art MLLMs, revealing significant limitations in RS-specific representation learning. The results provide valuable insights into the weaknesses of CLIP-based visual encoding and offer a foundation for future research to develop more effective MLLMs tailored for remote sensing applications.
http://arxiv.org/abs/2503.15817v1
Ranking Counterfactual Explanations
2025-03-20T03:04:05+00:00
AI-driven outcomes can be challenging for end-users to understand. Explanations can address two key questions: "Why this outcome?" (factual) and "Why not another?" (counterfactual). While substantial efforts have been made to formalize factual explanations, a precise and comprehensive study of counterfactual explanations is still lacking. This paper proposes a formal definition of counterfactual explanations, proves some properties they satisfy, and examines their relationship with factual explanations. Given that multiple counterfactual explanations generally exist for a specific case, we also introduce a rigorous method to rank these counterfactual explanations, going beyond a simple minimality condition, and to identify the optimal ones. Our experiments with 12 real-world datasets highlight that, in most cases, a single optimal counterfactual explanation emerges. We also demonstrate, via three metrics, that the selected optimal explanation exhibits higher representativeness and can explain a broader range of elements than a random minimal counterfactual. This result highlights the effectiveness of our approach in identifying more robust and comprehensive counterfactual explanations.
http://arxiv.org/abs/2503.15818v2
Computation-Efficient and Recognition-Friendly 3D Point Cloud Privacy Protection
2025-03-20T03:09:44+00:00
3D point clouds have been widely used in applications such as self-driving cars, robotics, CAD models, etc. These applications raise the issue of privacy leakage in 3D point clouds which, to the best of our knowledge, has not been well studied. Different from 2D image privacy, which is related to texture and 2D geometric structure, the 3D point cloud is texture-less and only relevant to 3D geometric structure. In this work, we define the 3D point cloud privacy problem and propose an efficient privacy-preserving framework named PointFlowGMM that can support downstream classification and segmentation tasks without seeing the original data. Using a flow-based generative model, the point cloud is projected into a latent Gaussian mixture distributed subspace. We further design a novel angular similarity loss to obfuscate the original geometric structure and reduce the model size from 767MB to 120MB without a decrease in recognition performance. The projected point cloud in the latent space is randomly rotated by an orthogonal transformation to further protect the original geometric structure; since the class-to-class relationship is preserved after rotation, the protected point cloud can still support recognition tasks. We evaluated our model on multiple datasets and achieved comparable recognition results on encrypted point clouds compared to the original point clouds.
http://arxiv.org/abs/2503.15819v1
Control Pneumatic Soft Bending Actuator with Online Learning Pneumatic Physical Reservoir Computing
2025-03-20T03:09:46+00:00
The intrinsic nonlinearities of soft robots present significant control challenges but simultaneously provide them with rich computational potential. Reservoir computing (RC) has shown effectiveness in online learning systems for controlling nonlinear systems such as soft actuators. Conventional RC can be extended into physical reservoir computing (PRC) by leveraging the nonlinear dynamics of soft actuators for computation. This paper introduces a PRC-based online learning framework to control the motion of a pneumatic soft bending actuator, utilizing another pneumatic soft actuator as the PRC model. Unlike conventional designs requiring two RC models, the proposed control system employs a more compact architecture with a single RC model. Additionally, the framework enables zero-shot online learning, addressing limitations of previous PRC-based control systems reliant on offline training. Simulations and experiments validated the performance of the proposed system. Experimental results indicate that the PRC model achieved superior control performance compared to a linear model, reducing the root-mean-square error (RMSE) by an average of over 37% in bending motion control tasks. The proposed PRC-based online learning control framework provides a novel approach for harnessing physical systems' inherent nonlinearities to enhance the control of soft actuators.
http://arxiv.org/abs/2503.15820v1
The Deligne Complex for the $B_3$ Artin Group
2025-03-20T03:12:24+00:00
We show that the piecewise Euclidean Moussong metric on the Deligne complex of the Artin group of type $B_3$ is $\mathrm{CAT}(0)$. We do this by establishing a criterion for a complex made of $B_3$ simplices to be $\mathrm{CAT}(1)$ in terms of embedded edge paths, which in particular applies to the spherical Deligne complex of type $B_3$. This provides one more step toward showing that the Moussong metric is $\mathrm{CAT}(0)$ for any 3-dimensional Artin group.
http://arxiv.org/abs/2503.15821v1
Temporal Point Process Modeling of Aggressive Behavior Onset in Psychiatric Inpatient Youths with Autism
2025-03-20T03:12:54+00:00
Aggressive behavior, including aggression towards others and self-injury, occurs in up to 80% of children and adolescents with autism, making it a leading cause of behavioral health referrals and a major driver of healthcare costs. Predicting when autistic youth will exhibit aggression is challenging due to their communication difficulties. Many are minimally verbal or have poor emotional insight. Recent advances in Machine Learning and wearable biosensing enable short-term aggression predictions within a limited future window (typically one to three minutes). However, existing models do not estimate aggression probability within longer future windows nor the expected number of aggression onsets over such a period. To address these limitations, we employ Temporal Point Processes (TPPs) to model the generative process of aggressive behavior onsets in inpatient youths with autism. We hypothesize that aggressive behavior onsets follow a self-exciting process driven by short-term history, making them well-suited for Hawkes Point Process modeling. We establish a benchmark and demonstrate through Goodness-of-Fit statistics and predictive metrics that TPPs perform well modeling aggressive behavior onsets in inpatient youths with autism. Additionally, we gain insights into the onset generative process, like the branching factor near criticality, and suggest TPPs may enhance future clinical decision-making and preemptive interventions.
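As background on the Hawkes-process modeling the abstract hypothesizes, a univariate Hawkes process with an exponential kernel can be simulated via Ogata's thinning algorithm. All parameter values below are illustrative, not fit to any clinical data.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.5, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past onset temporarily raises the rate of future onsets
    (the self-exciting property)."""
    past = np.asarray(history)
    past = past[past < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(T, mu=0.2, alpha=0.5, beta=1.0, rng=None):
    """Ogata's thinning: propose candidate times from an upper-bounding
    homogeneous rate, accept each with probability lambda(t)/bound."""
    rng = rng or np.random.default_rng(0)
    t, events = 0.0, []
    while t < T:
        # Intensity decays between events, so the current value plus one
        # extra alpha (covering a jump at t itself) is a valid bound.
        lam_bar = hawkes_intensity(t, events, mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.random() * lam_bar <= \
                hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)
    return events

events = simulate_hawkes(T=200.0)
# Branching factor alpha/beta = 0.5 < 1, so the process is subcritical.
```

Fitting `mu`, `alpha`, and `beta` to observed onset times by maximum likelihood is what lets such a model estimate onset probabilities and expected counts over future windows.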
http://arxiv.org/abs/2503.15822v1
Energy-Efficient Federated Learning and Migration in Digital Twin Edge Networks
2025-03-20T03:14:23+00:00
The digital twin edge network (DITEN) is a significant paradigm in the sixth-generation wireless system (6G) that aims to organize well-developed infrastructures to meet the requirements of evolving application scenarios. However, the impact of the interaction between the long-term DITEN maintenance and detailed digital twin tasks, which often entail privacy considerations, is commonly overlooked in current research. This paper addresses this issue by introducing a problem of digital twin association and historical data allocation for a federated learning (FL) task within DITEN. To achieve this goal, we start by introducing a closed-form function to predict the training accuracy of the FL task, referring to it as the data utility. Subsequently, we carry out comprehensive convergence analyses on the proposed FL methodology. Our objective is to jointly optimize the data utility of the digital twin-empowered FL task and the energy costs incurred by the long-term DITEN maintenance, encompassing FL model training, data synchronization, and twin migration. To tackle the aforementioned challenge, we present an optimization-driven learning algorithm that effectively identifies optimized solutions for the formulated problem. Numerical results demonstrate that our proposed algorithm outperforms various baseline approaches.
http://arxiv.org/abs/2503.15823v2
A Unified Stability Analysis of Safety-Critical Control using Multiple Control Barrier Functions
2025-03-20T03:16:38+00:00
Ensuring liveness and safety of autonomous and cyber-physical systems remains a fundamental challenge, particularly when multiple safety constraints are present. This letter advances the theoretical foundations of safety-filter Quadratic Programs (QP) and Control Lyapunov Function (CLF)-Control Barrier Function (CBF) controllers by establishing a unified analytical framework for studying their stability properties. We derive sufficient feasibility conditions for QPs with multiple CBFs and formally characterize the conditions leading to undesirable equilibrium points at possibly intersecting safe set boundaries. Additionally, we introduce a stability criterion for equilibrium points, providing a systematic approach to identifying conditions under which they can be destabilized or eliminated. Our analysis extends prior theoretical results, deepening the understanding of the conditions of feasibility and stability of CBF-based safety filters and the CLF-CBF QP framework.
http://arxiv.org/abs/2503.15824v1
Robust distortion risk measures with linear penalty under distribution uncertainty
2025-03-20T03:19:02+00:00
The paper investigates the robust distortion risk measure with linear penalty function under distribution uncertainty. The distribution uncertainties are characterized by predetermined moment conditions or constraints on the Wasserstein distance. The optimal quantile distribution and the optimal value function are explicitly characterized. Our results partially extend the results of Bernard, Pesenti and Vanduffel (2024) and Li (2018) to robust distortion risk measures with linear penalty. In addition, we also discuss the influence of the penalty parameter on the optimal solution.
http://arxiv.org/abs/2503.15825v1
Efficient Symbolic Execution of Software under Fault Attacks
2025-03-20T03:19:48+00:00
We propose a symbolic method for analyzing the safety of software under fault attacks both accurately and efficiently. Fault attacks leverage physically injected hardware faults to break the safety of a software program. While there are existing methods for analyzing the impact of faults on software, they suffer from inaccurate fault modeling and inefficient analysis algorithms. We propose two new techniques to overcome these problems. First, we propose a fault modeling technique that leverages program transformation to add symbolic variables to the program, to accurately model the fault-induced program behavior. Second, we propose a redundancy pruning technique that leverages the weakest precondition and fault saturation to mitigate path explosion, which is a performance bottleneck of symbolic execution that is exacerbated by the fault-induced program behavior. We have implemented the method and evaluated it on a variety of benchmark programs. The experimental results show that our method significantly outperforms the state-of-the-art method. Specifically, it not only reveals many previously-missed safety violations but also reduces the running time drastically. Compared to the baseline, our optimized method is 2.0$\times$ faster on average.
http://arxiv.org/abs/2503.15826v1
Fourth-order uniformly accurate integrators with long time near conservations for the nonlinear Dirac equation in the nonrelativistic regime
2025-03-20T03:21:29+00:00
In this paper, we propose two novel fourth-order integrators that exhibit uniformly high accuracy and long-term near conservations for solving the nonlinear Dirac equation (NLDE) in the nonrelativistic regime. In this regime, the solution of the NLDE exhibits highly oscillatory behavior in time, characterized by a wavelength of O($\varepsilon^{2}$) with a small parameter $\varepsilon>0$. To ensure uniform temporal accuracy, we employ a two-scale approach in conjunction with exponential integrators, utilizing operator decomposition techniques for the NLDE. The proposed methods are rigorously proved to achieve fourth-order uniform accuracy in time for all $\varepsilon\in (0,1]$. Furthermore, we successfully incorporate symmetry into the integrator, and the long-term near conservation properties are analyzed through the modulated Fourier expansion. The proposed schemes are readily extendable to linear Dirac equations incorporating magnetic potentials, the dynamics of traveling wave solutions and the two/three-dimensional Dirac equations. The validity of all theoretical findings and extensions is numerically substantiated through a series of numerical experiments.
http://arxiv.org/abs/2503.15827v2
Rapid quantum ground state preparation via dissipative dynamics
2025-03-20T03:27:52+00:00
Inspired by natural cooling processes, dissipation has become a promising approach for preparing low-energy states of quantum systems. However, the potential of dissipative protocols remains unclear beyond certain commuting Hamiltonians. This work provides significant analytical and numerical insights into the power of dissipation for preparing the ground state of non-commuting Hamiltonians. For quasi-free dissipative dynamics, including certain 1D spin systems with boundary dissipation, our results reveal a new connection between the mixing time in trace distance and the spectral properties of a non-Hermitian Hamiltonian, leading to an explicit and sharp bound on the mixing time that scales polynomially with system size. For more general spin systems, we develop a tensor network-based algorithm for constructing the Lindblad jump operator and for simulating the dynamics. Using this algorithm, we demonstrate numerically that dissipative ground state preparation protocols can achieve rapid mixing for certain 1D local Hamiltonians under bulk dissipation, with a mixing time that scales logarithmically with the system size. We then prove the rapid mixing result for certain weakly interacting spin and fermionic systems in arbitrary dimensions, extending recent results for high-temperature quantum Gibbs samplers to the zero-temperature regime. Our theoretical approaches are applicable to systems with singular stationary states, and are thus expected to have applications beyond the specific systems considered in this study.
http://arxiv.org/abs/2503.15828v1
Ergodicity of the viscous scalar conservation laws with a degenerate noise
2025-03-20T03:50:58+00:00
This paper establishes the ergodicity in $H^{\mathfrak n}$, $\mathfrak n=\lfloor\frac{d}{2}+1\rfloor$, of the viscous scalar conservation laws on the torus $\mathbb T^d$ with a general polynomial flux and a degenerate noise. The noise may act in only a few directions. We introduce a localized framework that restricts attention to trajectories with controlled energy growth, circumventing the limitations of traditional contraction-based approaches. This localized method allows us to demonstrate the e-property and consequently prove the uniqueness of the invariant measure under a Hörmander-type condition. Furthermore, we characterize the absolute continuity of the invariant measure's projections onto any finite-dimensional subspace under an algebraic nondegeneracy condition on the flux.
http://arxiv.org/abs/2503.15829v1
Path components of $\mathrm{G}_2$-moduli spaces may be non-aspherical
2025-03-20T03:51:11+00:00
Starting from Joyce's generalised Kummer construction, we exhibit non-trivial families of $\mathrm{G}_2$-manifolds over the two-dimensional sphere by resolving singularities with a twisted family of Eguchi-Hanson spaces. We establish that the comparison map $\mathcal{G}_2^{\mathrm{tf}}(M) /\!\!/ \mathrm{Diff}(M)_0 \rightarrow \mathcal{G}_2^{\mathrm{tf}}(M) / \mathrm{Diff}(M)_0$ is a fibration over each path component with Eilenberg-MacLane spaces as fibres, which allows us to show that these families remain non-trivial in $\mathcal{G}_2^{\mathrm{tf}}(M) / \mathrm{Diff}(M)_0$. In addition, we construct a new invariant based on characteristic classes that allows us to show that different resolutions give rise to different elements in the moduli space.
http://arxiv.org/abs/2503.15830v1
Alignment of Continuous Brain Connectivity
2025-03-20T03:52:20+00:00
Brain networks are typically represented by adjacency matrices, where each node corresponds to a brain region. In traditional brain network analysis, nodes are assumed to be matched across individuals, but the methods used for node matching often overlook the underlying connectivity information. This oversight can result in inaccurate node alignment, leading to inflated edge variability and reduced statistical power in downstream connectivity analyses. To overcome this challenge, we propose a novel framework for registering high-resolution continuous connectivity (ConCon), defined as a continuous function on a product manifold space, specifically the cortical surface, capturing structural connectivity between all pairs of cortical points. Leveraging ConCon, we formulate an optimal diffeomorphism problem to align both connectivity profiles and cortical surfaces simultaneously. We introduce an efficient algorithm to solve this problem and validate our approach using data from the Human Connectome Project (HCP). Results demonstrate that our method substantially improves the accuracy and robustness of connectome-based analyses compared to existing techniques.
http://arxiv.org/abs/2503.15831v1
EDEN: Enhanced Diffusion for High-quality Large-motion Video Frame Interpolation
2025-03-20T03:54:52+00:00
Handling complex or nonlinear motion patterns has long posed challenges for video frame interpolation. Although recent advances in diffusion-based methods offer improvements over traditional optical flow-based approaches, they still struggle to generate sharp, temporally consistent frames in scenarios with large motion. To address this limitation, we introduce EDEN, an Enhanced Diffusion for high-quality large-motion vidEo frame iNterpolation. Our approach first utilizes a transformer-based tokenizer to produce refined latent representations of the intermediate frames for diffusion models. We then enhance the diffusion transformer with temporal attention across the process and incorporate a start-end frame difference embedding to guide the generation of dynamic motion. Extensive experiments demonstrate that EDEN achieves state-of-the-art results across popular benchmarks, including nearly a 10% LPIPS reduction on DAVIS and SNU-FILM, and an 8% improvement on DAIN-HD.
http://arxiv.org/abs/2503.15832v1
The positivity technique and low-lying zeros of Dirichlet $L$-functions
2025-03-20T04:00:31+00:00
Assuming the generalized Riemann hypothesis, we rediscover and sharpen some of the best known results regarding the distribution of low-lying zeros of Dirichlet $L$-functions. This builds upon earlier work of Omar, which relies on the classical positivity technique of explicit formulas. In addition, we generalize some of our results to a larger class of $L$-functions and provide effective conditional estimates for the lowest zeros of Dirichlet $L$-functions.
http://arxiv.org/abs/2503.15833v1
Smyth's conjecture and a non-deterministic Hasse principle
2025-03-20T04:19:04+00:00
In a 1986 paper, Smyth proposed a conjecture about which integer-linear relations were possible among Galois-conjugate algebraic numbers. We prove this conjecture. The main tools (as Smyth already anticipated) are combinatorial rather than number-theoretic in nature. For instance, the question can be reinterpreted as a question about the possible eigenvalues of a specified linear combination of permutation matrices. What's more, we reinterpret Smyth's conjecture as a local-to-global principle for a "non-deterministic system of equations" where variables are interpreted as compactly supported K-valued random variables (for K a local or global field) rather than as elements of K.
http://arxiv.org/abs/2503.15834v1
From Paramagnet to Dipolar Topological Order via Duality and Dipolar SPT
2025-03-20T04:19:46+00:00
A scheme for the adaptive preparation of a topological state with dipole symmetry, dubbed the dipolar topological state (dTS), which serves as an example of translation symmetry-enriched topological phase, is proposed. The midcircuit state emerging during the preparation process is identified as a two-dimensional symmetry-protected topological (SPT) state protected by dipole bundle symmetry alongside charge and 1-form symmetries. The non-trivial boundary modes of the dipolar SPT state exhibiting the spontaneous breaking of charge and dipole bundle symmetries are analyzed. The duality map between the paramagnetic state and the dipolar topological state is established in the framework of the {\it simultaneous gauging} of two charge symmetries and one dipole symmetry that cannot be reduced as sequential gauging of the individual symmetry. Leveraging this duality, we work out the phase diagram of the dipolar topological state under perturbations by various transverse fields.
http://arxiv.org/abs/2503.15835v1
BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting
2025-03-20T04:23:52+00:00
3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene reconstruction, and recent advancements have extended its application to dynamic scenes. However, the quality of reconstructions depends heavily on high-quality input images and precise camera poses, which are difficult to satisfy in real-world scenarios. Capturing dynamic scenes with handheld monocular cameras, for instance, typically involves simultaneous movement of both the camera and objects within a single exposure. This combined motion frequently results in image blur that existing methods cannot adequately handle. To address these challenges, we introduce BARD-GS, a novel approach for robust dynamic scene reconstruction that effectively handles blurry inputs and imprecise camera poses. Our method comprises two main components: 1) camera motion deblurring and 2) object motion deblurring. By explicitly decomposing motion blur into camera motion blur and object motion blur and modeling them separately, we achieve significantly improved rendering results in dynamic regions. In addition, we collect a real-world motion blur dataset of dynamic scenes to evaluate our approach. Extensive experiments demonstrate that BARD-GS effectively reconstructs high-quality dynamic scenes under realistic conditions, significantly outperforming existing methods.
http://arxiv.org/abs/2503.15836v1
APEX-MR: Multi-Robot Asynchronous Planning and Execution for Cooperative Assembly
2025-03-20T04:25:38+00:00
Compared to a single-robot workstation, a multi-robot system offers several advantages: 1) it expands the system's workspace, 2) improves task efficiency, and more importantly, 3) enables robots to achieve significantly more complex and dexterous tasks, such as cooperative assembly. However, coordinating the tasks and motions of multiple robots is challenging due to issues such as system uncertainty, task efficiency, algorithm scalability, and safety concerns. To address these challenges, this paper studies multi-robot coordination and proposes APEX-MR, an asynchronous planning and execution framework designed to safely and efficiently coordinate multiple robots to achieve cooperative assembly, such as LEGO assembly. In particular, APEX-MR provides a systematic approach to post-process multi-robot tasks and motion plans to enable robust asynchronous execution under uncertainty. Experimental results demonstrate that APEX-MR can significantly speed up the execution time of many long-horizon LEGO assembly tasks by 48% compared to sequential planning and 36% compared to synchronous planning on average. To further demonstrate the performance, we deploy APEX-MR to a dual-arm system to perform physical LEGO assembly. To our knowledge, this is the first robotic system capable of performing customized LEGO assembly using commercial LEGO bricks. The experiment results demonstrate that the dual-arm system, with APEX-MR, can safely coordinate robot motions, efficiently collaborate, and construct complex LEGO structures. Our project website is available at https://intelligent-control-lab.github.io/APEX-MR/
http://arxiv.org/abs/2503.15837v1
Fùxì: A Benchmark for Evaluating Language Models on Ancient Chinese Text Understanding and Generation
2025-03-20T04:26:40+00:00
Ancient Chinese text processing presents unique challenges for large language models (LLMs) due to its distinct linguistic features, complex structural constraints, and rich cultural context. While existing benchmarks have primarily focused on evaluating comprehension through multiple-choice questions, there remains a critical gap in assessing models' generative capabilities in classical Chinese. We introduce Fùxì, a comprehensive benchmark that evaluates both understanding and generation capabilities across 21 diverse tasks. Our benchmark distinguishes itself through three key contributions: (1) balanced coverage of both comprehension and generation tasks, including novel tasks like poetry composition and couplet completion, (2) specialized evaluation metrics designed specifically for classical Chinese text generation, combining rule-based verification with fine-tuned LLM evaluators, and (3) a systematic assessment framework that considers both linguistic accuracy and cultural authenticity. Through extensive evaluation of state-of-the-art LLMs, we reveal significant performance gaps between understanding and generation tasks, with models achieving promising results in comprehension but struggling considerably in generation tasks, particularly those requiring deep cultural knowledge and adherence to classical formats. Our findings highlight the current limitations in ancient Chinese text processing and provide insights for future model development. The benchmark, evaluation toolkit, and baseline results are publicly available to facilitate research in this domain.
http://arxiv.org/abs/2503.15838v1
Enhancing LLM Code Generation with Ensembles: A Similarity-Based Selection Approach
2025-03-20T04:38:56+00:00
Ensemble learning has been widely used in machine learning to improve model robustness, accuracy, and generalization, but has not yet been applied to code generation tasks with large language models (LLMs). We propose an ensemble approach for LLMs in code generation. Instead of relying on the output of a single model, we generate multiple candidate programs from different LLMs and apply a structured voting mechanism to select the most reliable solution. For voting, we compute syntactic and semantic similarity using CodeBLEU and behavioral equivalence using CrossHair's differential behavior analysis. By aggregating these similarity scores, we select the program that best aligns with the consensus among the candidates. We show through experiments that our ensemble approach consistently outperforms standalone LLMs on the well-known HumanEval and the more challenging LiveCodeBench datasets, achieving an accuracy of 90.2% and 50.2%, respectively, on the two datasets. In comparison, the best-performing LLM (GPT-4o) has an accuracy of 83.5% and 43.4%, respectively. Furthermore, even when restricted to free open-source models, our method achieves an accuracy of 80.5% and 41.6%, respectively, demonstrating the viability of our approach in resource-constrained settings.
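The similarity-based voting step can be illustrated with a minimal sketch: each candidate program is scored by its total similarity to all other candidates, and the program closest to the consensus wins. Here plain string similarity from Python's standard library stands in for the CodeBLEU and CrossHair scores used in the paper, and the candidate programs are invented.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for CodeBLEU: generic string similarity in [0, 1].
    return SequenceMatcher(None, a, b).ratio()

def select_by_consensus(candidates: list[str]) -> str:
    # Score each candidate by its aggregate similarity to the others;
    # the most "agreed-upon" program is selected.
    def consensus_score(i: int) -> float:
        return sum(similarity(candidates[i], candidates[j])
                   for j in range(len(candidates)) if j != i)
    best = max(range(len(candidates)), key=consensus_score)
    return candidates[best]

# Invented outputs from three hypothetical LLMs; two are correct, one is buggy.
progs = [
    "def add(a, b):\n    return a + b",
    "def add(x, y):\n    return x + y",
    "def add(a, b):\n    return a - b",
]
print(select_by_consensus(progs))
```

Because the two correct candidates agree with each other (up to renaming), the buggy outlier receives the lowest consensus score and is voted out.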
http://arxiv.org/abs/2503.15839v1
Structural stability of cylindrical supersonic solutions to the steady Euler-Poisson system
2025-03-20T04:39:52+00:00
This paper concerns the structural stability of smooth cylindrically symmetric supersonic Euler-Poisson flows in nozzles. Both three-dimensional and axisymmetric perturbations are considered. On one hand, we establish the existence and uniqueness of three-dimensional smooth supersonic solutions to the potential flow model of the steady Euler-Poisson system. On the other hand, the existence and uniqueness of smooth supersonic flows with nonzero vorticity to the steady axisymmetric Euler-Poisson system are proved. The problem is reduced to solve a nonlinear boundary value problem for a hyperbolic-elliptic mixed system. One of the key ingredients in the analysis of three-dimensional supersonic irrotational flows is the well-posedness theory for a linear second order hyperbolic-elliptic coupled system, which is achieved by using the multiplier method and the reflection technique to derive the energy estimates. For smooth axisymmetric supersonic flows with nonzero vorticity, the deformation-curl-Poisson decomposition is utilized to reformulate the steady axisymmetric Euler-Poisson system as a deformation-curl-Poisson system together with several transport equations, so that one can design a two-layer iteration scheme to establish the nonlinear structural stability of the background supersonic flow within the class of axisymmetric rotational flows.
http://arxiv.org/abs/2503.15840v1
Automatic Generation of Safety-compliant Linear Temporal Logic via Large Language Model: A Self-supervised Framework
2025-03-20T04:40:29+00:00
Ensuring safety in cyber-physical systems (CPS) poses a significant challenge, especially when converting high-level tasks described by natural language into formal specifications like Linear Temporal Logic (LTL). In particular, the compliance of formal languages with respect to safety restrictions imposed on CPS is crucial for system safety. In this paper, we introduce AutoSafeLTL, a self-supervised framework that utilizes large language models (LLMs) to automate the generation of safety-compliant LTL. Our approach integrates a Language Inclusion check with an automated counterexample-guided feedback and modification mechanism, establishing a pipeline that verifies the safety-compliance of the resulting LTL while preserving its logical consistency and semantic accuracy. To enhance the framework's understanding and correction capabilities, we incorporate two additional Agent LLMs. Experimental results demonstrate that AutoSafeLTL effectively guarantees safety-compliance for generated LTL, achieving a 0% violation rate against imposed safety constraints.
http://arxiv.org/abs/2503.15841v1
On the maximal displacement of critical branching random walk in random environment
2025-03-20T04:45:27+00:00
In this article, we study the maximal displacement of critical branching random walk in random environment. Let $M_n$ be the maximal displacement of a particle in generation $n$, $Z_n$ be the total population in generation $n$, and $M$ be the rightmost point ever reached by the branching random walk. Under some reasonable conditions, we prove a conditional limit theorem, \begin{equation*} \mathcal{L}\left( \dfrac{M_n}{\sqrt{\sigma} n^{\frac{3}{4}}} \,\Big|\, Z_n>0\right) \xrightarrow{d} \mathcal{L}\left(A_\Lambda\right), \end{equation*} where the random variable $A_\Lambda$ is related to the standard Brownian meander. Moreover, there exist positive constants $C_1$ and $C_2$ such that \begin{equation*} C_1\leqslant\liminf\limits_{x\rightarrow\infty}x^{\frac{2}{3}}\mathbb{P}(M>x) \leqslant \limsup\limits_{x\rightarrow\infty} x^{\frac{2}{3}}\mathbb{P}(M>x) \leqslant C_2. \end{equation*} Compared with the constant environment case (Lalley and Shao (2015)), this reveals that the conditional limit speed for $M_n$ in random environment (i.e., $n^{\frac{3}{4}}$) is significantly greater than in the constant environment case (i.e., $n^{\frac{1}{2}}$), and so is the tail probability for $M$ (i.e., $x^{-\frac{2}{3}}$ vs $x^{-2}$). Our method is based on the path large deviation for the reduced critical branching random walk in random environment.
http://arxiv.org/abs/2503.15842v1
FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors
2025-03-20T04:49:40+00:00
Federated Learning (FL) has emerged as a promising framework for distributed machine learning, enabling collaborative model training without sharing local data, thereby preserving privacy and enhancing security. However, data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning. Most previous works overlook the adjustment of aggregation weights, relying solely on dataset size for weight assignment, which often leads to unstable convergence and reduced model performance. Recently, several studies have sought to refine aggregation strategies by incorporating dataset characteristics and model alignment. However, adaptively adjusting aggregation weights while ensuring data security-without requiring additional proxy data-remains a significant challenge. In this work, we propose Federated learning with Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process. The client vector captures the direction of model updates, reflecting local data variations, and is used to optimize the aggregation weight without requiring additional datasets or violating privacy. By assigning higher aggregation weights to local models whose updates align closely with the global optimization direction, FedAWA enhances the stability and generalization of the global model. Extensive experiments under diverse scenarios demonstrate the superiority of our method, providing a promising solution to the challenges of data heterogeneity in federated learning.
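The core weighting idea, rewarding clients whose update vectors align with the global optimization direction, can be sketched as follows. The softmax-over-cosine rule and the toy two-dimensional updates below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_weights(client_updates: list[np.ndarray]) -> np.ndarray:
    """Toy client-vector weighting: clients whose update direction aligns
    with the average update direction receive larger aggregation weights."""
    U = np.stack(client_updates)                 # (n_clients, dim)
    global_dir = U.mean(axis=0)
    global_dir /= np.linalg.norm(global_dir)     # unit global direction
    norms = np.linalg.norm(U, axis=1)
    cos = (U @ global_dir) / np.clip(norms, 1e-12, None)  # alignment in [-1, 1]
    w = np.exp(cos)                              # softmax over alignment
    return w / w.sum()

# Two clients push in roughly the same direction; the third is an outlier.
updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
w = adaptive_weights(updates)
print(w)  # the outlier client receives the smallest weight
```

Unlike dataset-size weighting, this rule needs only the update vectors themselves, so no proxy data or raw client data is exchanged.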
http://arxiv.org/abs/2503.15843v1
Reducing T Gates with Unitary Synthesis
2025-03-20T04:53:54+00:00
Quantum error correction is essential for achieving practical quantum computing but has a significant computational overhead. Among fault-tolerant (FT) gate operations, non-Clifford gates, such as $T$, are particularly expensive due to their reliance on magic state distillation. These costly $T$ gates appear frequently in FT circuits as many quantum algorithms require arbitrary single-qubit rotations, such as $R_x$ and $R_z$ gates, which must be decomposed into a sequence of $T$ and Clifford gates. In many quantum circuits, $R_x$ and $R_z$ gates can be fused to form a single $U3$ unitary. However, existing synthesis methods, such as gridsynth, rely on indirect decompositions, requiring separate $R_z$ decompositions that result in a threefold increase in $T$ count. This work presents a novel FT synthesis algorithm that directly synthesizes arbitrary single-qubit unitaries, avoiding the overhead of separate $R_z$ decompositions. By leveraging tensor network-based search, our approach enables native $U3$ synthesis, reducing the $T$ count, Clifford gate count, and approximation error. Compared to gridsynth-based circuit synthesis, for 187 representative benchmarks, our design reduces the $T$ count by up to $3.5\times$, and Clifford gates by $7\times$, resulting in up to $4\times$ improvement in overall circuit infidelity.
http://arxiv.org/abs/2503.15844v3
Characterising the Atmosphere of 55 Cancri e: 1D Forward Model Grid for Current and Future JWST Observations
2025-03-20T04:54:41+00:00
Recent JWST observations with NIRCam and MIRI of the ultra-short-period super-Earth 55 Cancri e indicate a possible volatile atmosphere surrounding the planet. Previous analysis of the NIRCam spectra suggested potential absorption features from CO2 or CO and significant sub-weekly variability. The MIRI low-resolution spectrum does not contain substantial features but was found to be consistent with effective heat redistribution models. In this work, we computed a grid of over 25000 self-consistent 1D forward models incorporating H-N-O-C-S-P-Si-Ti equilibrium chemistry and assessed plausible atmospheric compositions based on the current JWST data. Despite exhaustive analysis, the composition and properties of the atmosphere remain elusive. While our results statistically favour a global, hydrogen-free, nitrogen-dominated atmosphere enriched in PO and CO2, various alternative compositions, including H2O-, CO-, PH3-, or Si-bearing ones, remain viable explanations. Unconstrained heat redistribution efficiency and absolute NIRCam flux are among the largest sources of uncertainty in our analysis. We also find that the heat redistribution factor and surface pressure are highly degenerate with atmospheric composition, and that these parameters cannot be independently constrained using current JWST observations. Furthermore, we show that the observed variability may arise from dynamic interactions between the atmosphere and an underlying magma ocean, driving rapid shifts in atmospheric chemistry and thermal emission. Our results highlight the importance of using self-consistent forward models when analysing novel JWST spectra with limited signal-to-noise ratios -- such as those of 55 Cancri e -- as it allows for a more comprehensive evaluation of potential atmospheric scenarios while also being less sensitive to subtle spectral differences than retrievals...
http://arxiv.org/abs/2503.15845v1
Network-wide Freeway Traffic Estimation Using Sparse Sensor Data: A Dirichlet Graph Auto-Encoder Approach
2025-03-20T04:58:50+00:00
Network-wide Traffic State Estimation (TSE), which aims to infer a complete image of network traffic states with sparsely deployed sensors, plays a vital role in intelligent transportation systems. With the development of data-driven methods, traffic dynamics modeling has advanced significantly. However, TSE poses fundamental challenges for data-driven approaches, since historical patterns cannot be learned locally at sensor-free segments. Although inductive graph learning shows promise in estimating states at locations without sensors, existing methods typically handle unobserved locations by filling them with zeros, introducing bias to the sensitive graph message propagation. The recently proposed Dirichlet Energy-based Feature Propagation (DEFP) method achieves State-Of-The-Art (SOTA) performance in unobserved node classification by eliminating the need for zero-filling. However, applying it to TSE faces three key challenges: inability to handle directed traffic networks, strong assumptions in traffic spatial correlation modeling, and neglect of the distinct propagation rules of different patterns (e.g., congestion and free flow). We propose DGAE, a novel inductive graph representation model that addresses these challenges through theoretically derived DEFP for Directed graphs (DEFP4D), enhanced spatial representation learning via DEFP4D-guided latent space encoding, and physics-guided propagation mechanisms that separately handle congested and free-flow patterns. Experiments on three traffic datasets demonstrate that DGAE outperforms existing SOTA methods and exhibits strong cross-city transferability. Furthermore, DEFP4D can serve as a standalone lightweight solution, showing superior performance under extremely sparse sensor conditions.
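The contrast between zero-filling and Dirichlet-energy-style feature propagation can be illustrated on a toy undirected graph: unobserved node features are repeatedly replaced by neighbour averages while sensor nodes are clamped to their measurements. This sketch shows only the basic clamped-propagation idea; the directed (DEFP4D) and physics-guided extensions are omitted.

```python
import numpy as np

def propagate_features(A, x, observed, n_iters=50):
    """Fill features at unobserved nodes by iterated neighbour averaging,
    clamping observed nodes to their measured values (the basic idea behind
    Dirichlet-energy-minimising feature propagation)."""
    deg = A.sum(axis=1)
    x = x.copy()
    for _ in range(n_iters):
        x_new = (A @ x) / np.clip(deg, 1, None)  # neighbour average
        x_new[observed] = x[observed]            # keep sensor readings fixed
        x = x_new
    return x

# Path graph 0-1-2 with sensors at both ends; the middle node is zero-filled.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([10.0, 0.0, 30.0])
filled = propagate_features(A, x, observed=np.array([True, False, True]))
print(filled)  # middle node converges to the neighbour average, 20.0
```

With zero-filling, the middle node's spurious 0.0 would propagate into every message; clamped averaging instead interpolates it from its observed neighbours.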
http://arxiv.org/abs/2503.15846v1
What can Off-the-Shelves Large Multi-Modal Models do for Dynamic Scene Graph Generation?
2025-03-20T04:58:53+00:00
Dynamic Scene Graph Generation (DSGG) for videos is a challenging task in computer vision. While existing approaches often focus on sophisticated architectural design and solely use recall during evaluation, we take a closer look at their predicted scene graphs and discover three critical issues with existing DSGG methods: severe precision-recall trade-off, lack of awareness on triplet importance, and inappropriate evaluation protocols. On the other hand, recent advances of Large Multimodal Models (LMMs) have shown great capabilities in video understanding, yet they have not been tested on fine-grained, frame-wise understanding tasks like DSGG. In this work, we conduct the first systematic analysis of Video LMMs for performing DSGG. Without relying on sophisticated architectural design, we show that LMMs with simple decoder-only structure can be turned into State-of-the-Art scene graph generators that effectively overcome the aforementioned issues, while requiring little finetuning (5-10% training data).
http://arxiv.org/abs/2503.15847v1
Beyond Local Selection: Global Cut Selection for Enhanced Mixed-Integer Programming
2025-03-20T04:59:18+00:00
In mixed-integer programming (MIP) solvers, cutting planes are essential for Branch-and-Cut (B&C) algorithms as they reduce the search space and accelerate the solving process. Traditional methods rely on hard-coded heuristics for cut plane selection but fail to leverage problem-specific structural features. Recent machine learning approaches use neural networks for cut selection but focus narrowly on the efficiency of single nodes within the B&C algorithm, without considering the broader contextual information. To address this, we propose Global Cut Selection (GCS), which uses a bipartite graph to represent the search tree and combines graph neural networks with reinforcement learning to develop cut selection strategies. Unlike prior methods, GCS applies cutting planes across all nodes, incorporating richer contextual information. Experiments show GCS significantly improves solving efficiency for synthetic and large-scale real-world MIPs compared to traditional and learning-based methods.
http://arxiv.org/abs/2503.15848v1
Entropy-based Exploration Conduction for Multi-step Reasoning
2025-03-20T05:03:26+00:00
In large language model (LLM) reasoning, multi-step processes have proven effective for solving complex tasks. However, the depth of exploration can significantly affect the reasoning performance. Existing methods to automatically decide the depth often bring high costs and lack flexibility, and thus undermine the model's reasoning accuracy. To address these issues, we propose Entropy-based Exploration Depth Conduction (Entro-duction), a novel method that dynamically adjusts the exploration depth during multi-step reasoning by monitoring the LLM's output entropy and variance entropy. We employ these two metrics to capture the model's current uncertainty and the fluctuation of uncertainty across consecutive reasoning steps. Based on the observed changes, the LLM probabilistically decides whether to deepen, expand, or stop exploration. In this way, we balance the reasoning accuracy and exploration effectiveness. Experimental results across four benchmark datasets demonstrate the efficacy of Entro-duction. We further conduct experiments and analysis on the components of Entro-duction to discuss their contributions to reasoning performance.
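The entropy-and-variance decision rule can be sketched as follows. The thresholds and the deterministic three-way rule below are invented for illustration and do not reproduce the paper's actual conduction mechanism, which is probabilistic.

```python
import math

def token_entropy(probs):
    """Shannon entropy of a single next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decide_next_action(step_entropies, low=0.5, var_cap=0.2):
    """Toy conduction rule: stop when the model is confident on average,
    expand when uncertainty fluctuates strongly across steps, otherwise
    deepen the current line of reasoning. Thresholds are invented."""
    n = len(step_entropies)
    mean_h = sum(step_entropies) / n
    var_h = sum((h - mean_h) ** 2 for h in step_entropies) / n
    if mean_h < low:
        return "stop"
    if var_h > var_cap:
        return "expand"
    return "deepen"

h = token_entropy([0.9, 0.05, 0.05])          # a confident step, h ≈ 0.39
print(decide_next_action([h, h, h]))          # low, steady uncertainty
print(decide_next_action([1.6, 1.7, 1.65]))   # high, steady uncertainty
print(decide_next_action([0.2, 3.0, 0.2]))    # strongly fluctuating
```

The mean entropy plays the role of the paper's output entropy signal, and the variance across steps plays the role of its variance-entropy signal.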
http://arxiv.org/abs/2503.15849v1
Impact of tiny Fermi pockets with extremely high mobility on the Hall anomaly in the kagome metal CsV$_3$Sb$_5$
2025-03-20T05:03:45+00:00
The kagome metal CsV$_3$Sb$_5$ exhibits an unusual charge-density-wave (CDW) order, where the emergence of loop current order that breaks time-reversal symmetry (TRS) has been proposed. A key feature of this CDW phase is a non-monotonic Hall effect at low fields, often attributed to TRS breaking. However, its origin remains unclear. Here, we conduct comprehensive magnetotransport measurements on CsV$_3$Sb$_5$ and, through mobility spectrum analysis, identify the formation of tiny Fermi pockets with extremely high mobility below the CDW transition. Furthermore, electron irradiation experiments reveal that the non-monotonic Hall effect is significantly suppressed in samples with reduced mobility, despite no substantial change in the electronic structure. These results indicate that the non-monotonic Hall effect originates from these tiny Fermi pockets with high mobility carriers rather than anomalous Hall mechanisms, providing new insights into understanding the Hall anomaly in this kagome system.
http://arxiv.org/abs/2503.15850v1
Uncertainty Quantification and Confidence Calibration in Large Language Models: A Survey
2025-03-20T05:04:29+00:00
Large Language Models (LLMs) excel in text generation, reasoning, and decision-making, enabling their adoption in high-stakes domains such as healthcare, law, and transportation. However, their reliability is a major concern, as they often produce plausible but incorrect responses. Uncertainty quantification (UQ) enhances trustworthiness by estimating confidence in outputs, enabling risk mitigation and selective prediction. However, traditional UQ methods struggle with LLMs due to computational constraints and decoding inconsistencies. Moreover, LLMs introduce unique uncertainty sources, such as input ambiguity, reasoning path divergence, and decoding stochasticity, that extend beyond classical aleatoric and epistemic uncertainty. To address this, we introduce a new taxonomy that categorizes UQ methods based on computational efficiency and uncertainty dimensions (input, reasoning, parameter, and prediction uncertainty). We evaluate existing techniques, assess their real-world applicability, and identify open challenges, emphasizing the need for scalable, interpretable, and robust UQ approaches to enhance LLM reliability.
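As one concrete example of the black-box methods such a taxonomy covers, predictive uncertainty can be proxied by the entropy of repeatedly sampled answers to the same prompt. The sampled strings below are invented for illustration.

```python
from collections import Counter
import math

def answer_entropy(samples):
    """Empirical entropy of repeated sampled answers: a simple black-box
    proxy for an LLM's predictive uncertainty (low entropy means the model
    keeps producing the same answer)."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

confident = ["42", "42", "42", "42"]   # model always agrees with itself
uncertain = ["42", "41", "43", "44"]   # four distinct answers
print(answer_entropy(confident))       # 0.0
print(answer_entropy(uncertain))       # log(4) ≈ 1.386
```

This captures only decoding-stochasticity uncertainty; input ambiguity and reasoning-path divergence, as the survey notes, require richer estimators.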
http://arxiv.org/abs/2503.15851v2
Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion
2025-03-20T05:07:46+00:00
Animatable head avatar generation typically requires extensive data for training. To reduce the data requirements, a natural solution is to leverage existing data-free static avatar generation methods, such as pre-trained diffusion models with score distillation sampling (SDS), which align avatars with pseudo ground-truth outputs from the diffusion model. However, directly distilling 4D avatars from video diffusion often leads to over-smooth results due to spatial and temporal inconsistencies in the generated video. To address this issue, we propose Zero-1-to-A, a robust method that synthesizes a spatial and temporal consistency dataset for 4D avatar reconstruction using the video diffusion model. Specifically, Zero-1-to-A iteratively constructs video datasets and optimizes animatable avatars in a progressive manner, ensuring that avatar quality increases smoothly and consistently throughout the learning process. This progressive learning involves two stages: (1) Spatial Consistency Learning fixes expressions and learns from front-to-side views, and (2) Temporal Consistency Learning fixes views and learns from relaxed to exaggerated expressions, generating 4D avatars in a simple-to-complex manner. Extensive experiments demonstrate that Zero-1-to-A improves fidelity, animation quality, and rendering speed compared to existing diffusion-based methods, providing a solution for lifelike avatar creation. Code is publicly available at: https://github.com/ZhenglinZhou/Zero-1-to-A.