Columns: url (string, length 33), title (string, length 18-214), date_published (2025-03-20 00:07:06 to 2025-04-17 04:46:57), abstract (string, length 114-1.92k)
http://arxiv.org/abs/2504.09762v1
(How) Do reasoning models reason?
2025-04-14T00:03:34+00:00
We will provide a broad unifying perspective on the recent breed of Large Reasoning Models (LRMs) such as OpenAI o1 and DeepSeek R1, including their promise, sources of power, misconceptions and limitations.
http://arxiv.org/abs/2504.09763v1
Executable Functional Abstractions: Inferring Generative Programs for Advanced Math Problems
2025-04-14T00:06:48+00:00
Scientists often infer abstract procedures from specific instances of problems and use the abstractions to generate new, related instances. For example, programs encoding the formal rules and properties of a system have been useful in fields ranging from RL (procedural environments) to physics (simulation engines). These programs can be seen as functions which execute to different outputs based on their parameterizations (e.g., gridworld configuration or initial physical conditions). We introduce the term EFA (Executable Functional Abstraction) to denote such programs for math problems. EFA-like constructs have been shown to be useful for math reasoning as problem generators for stress-testing models. However, prior work has been limited to abstractions for grade-school math (whose simple rules are easy to encode in programs), while generating EFAs for advanced math has thus far required human engineering. We explore the automatic construction of EFAs for advanced math problems. We operationalize the task of automatically constructing EFAs as a program synthesis task, and develop EFAGen, which conditions an LLM on a seed math problem and its step-by-step solution to generate candidate EFA programs that are faithful to the generalized problem and solution class underlying the seed problem. Furthermore, we formalize properties any valid EFA must possess in terms of executable unit tests, and show how the tests can be used as verifiable rewards to train LLMs to become better writers of EFAs. We demonstrate that EFAs constructed by EFAGen behave rationally by remaining faithful to seed problems, produce learnable problem variations, and that EFAGen can infer EFAs across multiple diverse sources of competition-level math problems. Finally, we show downstream uses of model-written EFAs e.g. finding problem variations that are harder or easier for a learner to solve, as well as data generation.
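The abstract's notion of an EFA, a program whose different parameterizations execute to related problem instances, can be illustrated with a minimal sketch. Everything below (the quadratic-equation problem family, the function names, the unit-test-style check) is a hypothetical toy for illustration, not EFAGen's actual representation:

```python
import random

def quadratic_efa(a, r1, r2):
    """Hypothetical EFA: a parameterized generator for 'solve the quadratic'
    problems with integer roots r1, r2 and leading coefficient a. Executing it
    with different parameters yields related problem instances."""
    b = -a * (r1 + r2)          # expand a*(x - r1)*(x - r2)
    c = a * r1 * r2
    problem = f"Solve {a}x^2 + {b}x + {c} = 0."
    solution = sorted({r1, r2})
    return problem, solution

def check_efa(efa, trials=100):
    """Unit-test-style check (the 'executable tests as verifiable rewards'
    idea): every sampled instance's stated solution must actually satisfy
    the generated equation."""
    for _ in range(trials):
        a = random.choice([1, 2, 3])
        r1, r2 = random.randint(-5, 5), random.randint(-5, 5)
        _, sol = efa(a, r1, r2)
        b, c = -a * (r1 + r2), a * r1 * r2
        for x in sol:
            if a * x * x + b * x + c != 0:   # plug the root back in
                return False
    return True
```

Sampling `quadratic_efa` with new parameters produces fresh problem variations, and `check_efa` plays the role of the executable validity tests.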
http://arxiv.org/abs/2504.09764v1
Socratic Chart: Cooperating Multiple Agents for Robust SVG Chart Understanding
2025-04-14T00:07:39+00:00
Multimodal Large Language Models (MLLMs) have shown remarkable versatility but face challenges in demonstrating true visual understanding, particularly in chart reasoning tasks. Existing benchmarks like ChartQA reveal significant reliance on text-based shortcuts and probabilistic pattern-matching rather than genuine visual reasoning. To rigorously evaluate visual reasoning, we introduce a more challenging test scenario by removing textual labels and introducing chart perturbations in the ChartQA dataset. Under these conditions, models like GPT-4o and Gemini-2.0 Pro experience up to a 30% performance drop, underscoring their limitations. To address these challenges, we propose Socratic Chart, a new framework that transforms chart images into Scalable Vector Graphics (SVG) representations, enabling MLLMs to integrate textual and visual modalities for enhanced chart understanding. Socratic Chart employs a multi-agent pipeline with specialized agent-generators to extract primitive chart attributes (e.g., bar heights, line coordinates) and an agent-critic to validate results, ensuring high-fidelity symbolic representations. Our framework surpasses state-of-the-art models in accurately capturing chart primitives and improving reasoning performance, establishing a robust pathway for advancing MLLM visual understanding.
http://arxiv.org/abs/2504.09765v1
The linear relations between the complex moduli and using the linear relations reduce the stabilization equations for supersymmetric black holes in N=2 theory
2025-04-14T00:13:32+00:00
In [Phys. Rev. D 54, 6293 (1996)], the black hole entropy was derived by solving the matrix equation obtained from the stabilization equations for the solution of frozen moduli. In this paper, by directly solving the stabilization equations for the solution of frozen moduli without using the matrix equation, we find linear relations between any two of the three complex moduli at the black hole horizon. To the best of our knowledge, these linear relations have not been discussed before. Via the linear relations, we derive the unique solution of frozen moduli and reduce the stabilization equations in three different ways. For example, the eight stabilization equations in [Phys. Rev. D 54, 6293 (1996)] can be replaced equivalently with three equations: the solution of the modulus z1 and two linear relations, which are much simpler and more intuitive.
http://arxiv.org/abs/2504.09766v1
On the representation of stack operators by mathematical morphology
2025-04-14T00:14:52+00:00
This paper introduces the class of grey-scale image stack operators as those that (a) map binary images into binary images and (b) commute in average with cross-sectioning. We show that stack operators are 1-Lipschitz extensions of set operators which can be represented by applying a characteristic set operator to the cross-sections of the image and summing. In particular, they are a generalisation of stack filters, for which the characteristic set operators are increasing. Our main result is that stack operators inherit lattice properties of the characteristic set operators. We focus on the case of translation-invariant and locally defined stack operators and show the main result by deducing the characteristic function, kernel, and basis representation of stack operators. The results of this paper have implications for the design of image operators, since they imply that to solve some grey-scale image processing problems it is enough to design an operator performing the desired transformation on binary images, and then to consider its extension given by a stack operator. We leave many topics for future research regarding the machine learning of stack operators and the characterisation of the image processing problems that can be solved by them.
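The representation described here, apply a characteristic set operator to each cross-section and sum, can be made concrete for the best-known stack filter, the median. A minimal 1-D sketch (our illustration, not the paper's general construction; the window-3 median and border replication are arbitrary choices):

```python
def cross_section(signal, t):
    # binary image: 1 where the grey value reaches threshold t
    return [1 if v >= t else 0 for v in signal]

def binary_median3(bits):
    # characteristic set operator: window-3 binary median (majority vote),
    # replicating the border samples
    padded = [bits[0]] + bits + [bits[-1]]
    return [1 if padded[i] + padded[i + 1] + padded[i + 2] >= 2 else 0
            for i in range(len(bits))]

def stack_median3(signal, levels):
    # stack operator: threshold, apply the set operator per level, then sum
    out = [0] * len(signal)
    for t in range(1, levels + 1):
        for i, b in enumerate(binary_median3(cross_section(signal, t))):
            out[i] += b
    return out

def direct_median3(signal):
    # grey-scale window-3 median computed directly, for comparison
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(signal))]
```

Because the binary median is increasing, the two computations agree on any signal with values in 0..levels, which is exactly the stack-filter special case mentioned in the abstract.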
http://arxiv.org/abs/2504.09767v1
Implementing and benchmarking dynamically corrected gates on superconducting devices using space curve quantum control
2025-04-14T00:19:23+00:00
We use Space Curve Quantum Control (SCQC) to design, experimentally demonstrate, and benchmark dynamically corrected single-qubit gates on IBM hardware, comparing their performance to that of the standard gates provided by IBM. Our gates are designed to dynamically suppress both detuning and pulse-amplitude noise, with gate times as short as 88 ns. We compare our gates against those of IBM on two separate IBM devices and across sets of up to 18 qubits. Randomized benchmarking is performed using our detuning- and amplitude-robust gates in randomized Clifford circuits containing up to 4000 gates. Our gates achieve error-per-Clifford rates that reach as low as 7$\times10^{-5}$ ($\pm10^{-6}$) and which remain nearly constant as the compound noise is increased up to 4% amplitude noise and up to a detuning noise of 342 kHz; this is in contrast to the IBM gates, whose error rates degrade to order $10^{-3}$ across this range. This range is consistent with the commonly reported frequency fluctuations and with the upper bound of the statistical uncertainty in gate calibration. In addition, we investigate the performance across larger noise ranges of up to 20% amplitude and 3.5 MHz detuning noise using quantum process tomography. Finally, we experimentally demonstrate how SCQC can be tailored to different practical use cases by trading off amplitude-robustness for ultrafast 60 ns dephasing-only robust pulses. Our work establishes experimental guidelines for implementing SCQC-designed dynamically corrected gates on a broad range of qubit hardware to limit the effect of noise-induced errors and decoherence.
http://arxiv.org/abs/2504.09768v1
Robust Output-Feedback MPC for Nonlinear Systems with Applications to Robotic Exploration
2025-04-14T00:21:23+00:00
This paper introduces a novel method for robust output-feedback model predictive control (MPC) for a class of nonlinear discrete-time systems. We propose a novel interval-valued predictor which, given an initial estimate of the state, produces intervals which are guaranteed to contain the future trajectory of the system. By parameterizing the control input with an initial stabilizing feedback term, we are able to reduce the width of the predicted state intervals compared to existing methods. We demonstrate this through a numerical comparison where we show that our controller performs better in the presence of large amounts of noise. Finally, we present a simulation study of a robot navigation scenario, where we incorporate a time-varying entropy term into the cost function in order to autonomously explore an uncertain area.
http://arxiv.org/abs/2504.09769v1
Identification of Community Structures in Networks Employing a Modified Divisive Algorithm
2025-04-14T00:22:03+00:00
In numerous networks, it is vital to identify communities consisting of closely joined groups of individuals. Such communities often reveal the role of the networks or primary properties of the individuals. In this perspective, Newman and Girvan proposed a modularity score (Q) for quantifying the strength of community structure and measuring the appropriateness of a division. The Q function has recently become a significant standard. In this paper, the strengths of the Q score and another technique known as the divisive algorithm are combined to enhance the efficiency of the identification of communities in a network. To achieve that goal, we have developed a new algorithm. The simulation results indicate that our algorithm achieves a division with a slightly higher Q score than some conventional methods.
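The Newman-Girvan modularity Q referenced above can be computed directly from an edge list and a candidate division; a minimal sketch for undirected, unweighted graphs (function and variable names are ours):

```python
def modularity(edges, community):
    """Q = sum over communities c of (e_c - a_c^2), where e_c is the fraction
    of edges with both endpoints in c and a_c is the fraction of edge
    endpoints in c (the degree-preserving null model)."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # observed fraction of intra-community edges
    e_in = sum(1 for u, v in edges if community[u] == community[v]) / m
    # expected fraction under the null model
    a = {}
    for node, k in degree.items():
        a[community[node]] = a.get(community[node], 0) + k / (2 * m)
    return e_in - sum(x * x for x in a.values())
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q = 6/7 - 1/2 ≈ 0.357, while lumping all nodes into one community gives Q = 0, matching the intuition that Q rewards divisions with many intra-community edges beyond chance.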
http://arxiv.org/abs/2504.09770v1
Quantum Phase diagrams and transitions for Chern topological insulators
2025-04-14T00:24:36+00:00
Topological invariants such as Chern classes are by now a standard way to classify topological phases. Varying systems in a family leads to phase diagrams, where the Chern classes may jump when crossing a critical locus. These systems appear naturally when considering slicings of higher dimensional systems or when considering systems with parameters. As the Chern classes are topological invariants, they can only change if the ``topology breaks down''. We give a precise mathematical formulation of this phenomenon and show that synthetically any phase diagram of Chern topological phases can be designed and realized by a physical system, using covering, aka winding, maps. Here we provide explicit families realizing arbitrary Chern jumps. The critical locus of these maps is described by the classical rose curves. These give a lower bound on the number of Dirac points in general that is sharp for 2-level systems. In the process, we treat several concrete models. In particular, we treat lattices and tight-binding models, and show that effective winding maps can be achieved using $k$-th nearest neighbors. We give explicit formulas for a family of 2D lattices using imaginary quadratic field extensions and their norms. This includes the square, triangular, honeycomb and Kagome lattices.
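The winding-map mechanism can be illustrated numerically: composing a closed curve with the covering map $z \mapsto z^n$ multiplies its winding number about the origin by $n$, which is the kind of controlled jump engineering the abstract describes. A toy computation (our illustration, not the paper's construction):

```python
import cmath

def winding_number(f, samples=2000):
    """Total change of arg(f(t)) over t in [0, 1], divided by 2*pi.
    f must be a closed curve avoiding 0; summing principal-branch phase
    increments is exact once each increment stays below pi."""
    total = 0.0
    prev = f(0.0)
    for s in range(1, samples + 1):
        cur = f(s / samples)
        total += cmath.phase(cur / prev)  # phase increment in (-pi, pi]
        prev = cur
    return round(total / (2 * cmath.pi))
```

A unit circle offset by 0.3 winds once around the origin; pushing it through $z \mapsto z^3$ yields winding number 3 without ever touching the critical locus at 0.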
http://arxiv.org/abs/2504.09771v1
Generalization analysis of quantum neural networks using dynamical Lie algebras
2025-04-14T00:27:30+00:00
The paper presents a generalization bound for quantum neural networks based on a dynamical Lie algebra. Using covering numbers derived from the dynamical Lie algebra, the Rademacher complexity is derived to calculate the generalization bound. The obtained result indicates that the generalization bound scales as O(sqrt(dim(g))), where g denotes the dynamical Lie algebra of the generators. Additionally, an upper bound on the number of trainable parameters in a quantum neural network is presented. Numerical simulations are conducted to confirm the validity of the obtained results.
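The O(sqrt(dim(g))) scaling follows the standard covering-number-to-Rademacher route; schematically (our paraphrase with constants suppressed, not the paper's exact statement; $n$ is the number of training samples):

```latex
% Standard uniform-convergence template: with probability at least 1 - \delta,
\mathrm{gen\text{-}gap}(f)
  \;\le\; 2\,\widehat{\mathfrak{R}}_n(\mathcal{F})
  \;+\; 3\sqrt{\frac{\log(2/\delta)}{2n}},
\qquad
\widehat{\mathfrak{R}}_n(\mathcal{F})
  \;=\; \mathcal{O}\!\left(\sqrt{\frac{\dim(\mathfrak{g})}{n}}\right),
```

so at fixed sample size the bound scales as $O(\sqrt{\dim(\mathfrak{g})})$, matching the abstract's claim.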
http://arxiv.org/abs/2504.09772v1
Two Heads are Better Than One: Test-time Scaling of Multi-agent Collaborative Reasoning
2025-04-14T00:27:45+00:00
Multi-agent systems (MAS) built on large language models (LLMs) offer a promising path toward solving complex, real-world tasks that single-agent systems often struggle to manage. While recent advancements in test-time scaling (TTS) have significantly improved single-agent performance on challenging reasoning tasks, how to effectively scale collaboration and reasoning in MAS remains an open question. In this work, we introduce an adaptive multi-agent framework designed to enhance collaborative reasoning through both model-level training and system-level coordination. We construct M500, a high-quality dataset containing 500 multi-agent collaborative reasoning traces, and fine-tune Qwen2.5-32B-Instruct on this dataset to produce M1-32B, a model optimized for multi-agent collaboration. To further enable adaptive reasoning, we propose a novel CEO agent that dynamically manages the discussion process, guiding agent collaboration and adjusting reasoning depth for more effective problem-solving. Evaluated in an open-source MAS across a range of tasks-including general understanding, mathematical reasoning, and coding-our system significantly outperforms strong baselines. For instance, M1-32B achieves 12% improvement on GPQA-Diamond, 41% on AIME2024, and 10% on MBPP-Sanitized, matching the performance of state-of-the-art models like DeepSeek-R1 on some tasks. These results highlight the importance of both learned collaboration and adaptive coordination in scaling multi-agent reasoning. Code is available at https://github.com/jincan333/MAS-TTS
http://arxiv.org/abs/2504.09773v1
Coupling Selection Rules in Heterotic Calabi-Yau Compactifications
2025-04-14T00:28:48+00:00
We study coupling selection rules of chiral matter fields in heterotic string theory with standard embedding. These selection rules are determined by topological properties of Calabi-Yau threefolds. We classify coupling selection rules on complete intersection Calabi-Yau threefolds for $h^{1,1}\leq 5$. It is found that all of these selection rules for $h^{1,1}\leq 5$ are understood by combinations of only five types of fusion rules.
http://arxiv.org/abs/2504.09774v1
Links between the integrable systems of CMC surfaces, isothermic surfaces and constrained Willmore surfaces
2025-04-14T00:29:24+00:00
Since constant mean curvature surfaces in 3-space are special cases of isothermic and constrained Willmore surfaces, they give rise to three, a priori distinct, integrable systems. We provide a comprehensive and unified view of these integrable systems in terms of the associated families of flat connections and their parallel sections: in the case of a CMC surface, parallel sections of all three associated families of flat connections are given algebraically by parallel sections of either one of the families. As a consequence, we provide a complete description of the links between the simple factor dressing given by the conformal Gauss map, the simple factor dressing given by isothermicity, the simple factor dressing given by the harmonic Gauss map, as well as the relationship to the classical, the $\mu$- and the $\varrho$-Darboux transforms of a CMC surface. Moreover, we establish the associated family of the CMC surfaces as limits of the associated family of isothermic surfaces and constrained Willmore surfaces.
http://arxiv.org/abs/2504.09775v2
Understanding and Optimizing Multi-Stage AI Inference Pipelines
2025-04-14T00:29:49+00:00
The rapid evolution of Large Language Models (LLMs) has driven the need for increasingly sophisticated inference pipelines and hardware platforms. Modern LLM serving extends beyond traditional prefill-decode workflows, incorporating multi-stage processes such as Retrieval Augmented Generation (RAG), key-value (KV) cache retrieval, dynamic model routing, and multi-step reasoning. These stages exhibit diverse computational demands, requiring distributed systems that integrate GPUs, ASICs, CPUs, and memory-centric architectures. However, existing simulators lack the fidelity to model these heterogeneous, multi-engine workflows, limiting their ability to inform architectural decisions. To address this gap, we introduce HERMES, a Heterogeneous Multi-stage LLM inference Execution Simulator. HERMES models diverse request stages, including RAG, KV retrieval, reasoning, prefill, and decode, across complex hardware hierarchies. Unlike prior frameworks, HERMES supports heterogeneous clients executing multiple models concurrently while incorporating advanced batching strategies and multi-level memory hierarchies. By integrating real hardware traces with analytical modeling, HERMES captures critical trade-offs such as memory bandwidth contention, inter-cluster communication latency, and batching efficiency in hybrid CPU-accelerator deployments. Through case studies, we explore the impact of reasoning stages on end-to-end latency, optimal batching strategies for hybrid pipelines, and the architectural implications of remote KV cache retrieval. HERMES empowers system designers to navigate the evolving landscape of LLM inference, providing actionable insights into optimizing hardware-software co-design for next-generation AI workloads.
http://arxiv.org/abs/2504.09776v1
An Investigation of Large Language Models and Their Vulnerabilities in Spam Detection
2025-04-14T00:30:27+00:00
Spam messages continue to present significant challenges to digital users, cluttering inboxes and posing security risks. Traditional spam detection methods, including rules-based, collaborative, and machine learning approaches, struggle to keep up with the rapidly evolving tactics employed by spammers. This project studies new spam detection systems that leverage Large Language Models (LLMs) fine-tuned with spam datasets. More importantly, we want to understand how LLM-based spam detection systems perform under adversarial attacks that purposefully modify spam emails, and under data poisoning attacks that exploit the differences between the training data and the messages seen at detection time, to which traditional machine learning models are shown to be vulnerable. This experimentation employs two LLMs, GPT-2 and BERT, and three spam datasets, Enron, LingSpam, and SMSspamCollection, for extensive training and testing tasks. The results show that, while they can function as effective spam filters, the LLM models are susceptible to adversarial and data poisoning attacks. This research provides useful insights for future applications of LLMs in information security.
http://arxiv.org/abs/2504.09777v1
Reasoning without Regret
2025-04-14T00:34:20+00:00
Chain-of-thought reasoning enables large language models to solve multi-step tasks by framing problem solving as sequential decision problems. Outcome-based rewards, which provide feedback only on final answers, show impressive success, but face challenges with credit assignment and slow convergence. In contrast, procedure-based rewards offer efficient step-level feedback, but typically require costly human supervision. We introduce \emph{Backwards Adaptive Reward Shaping} (BARS), a no-regret framework that converts sparse outcome-based rewards into effective procedure-based signals. BARS uses sparse rewards generated from terminal-state priors and cover trees to scale rewards while preventing exploitation. With Bellman contraction and $(\Delta, \epsilon)$-gap rewards, our backward Euler solver achieves $\epsilon$-accuracy in $O\left((R_{\max}/\Delta)\log(1/\epsilon)\right)$ iterations with $O(\log T)$ dynamic regret over $T$ rounds. Our analysis, based on generic chaining, continuous scaling limits, and non-linear Feynman-Kac bounds, connects recent outcome-based methods' empirical successes with the benefits of intermediate supervision. Combined, this provides the first rigorous no-regret algorithm for outcome reward shaping, providing a theoretical foundation for the empirical success of DeepSeek's R1.
http://arxiv.org/abs/2504.09778v1
RoboCup Rescue 2025 Team Description Paper UruBots
2025-04-14T00:37:50+00:00
This paper describes the approach used by Team UruBots for participation in the 2025 RoboCup Rescue Robot League competition. Our team aims to participate for the first time in this competition at RoboCup, using experience learned from previous competitions and research. We present our vehicle and our approach to tackle the task of detecting and finding victims in search and rescue environments. Our approach builds on well-known topics in robotics, such as ROS, SLAM, human-robot interaction, and segmentation and perception. Our proposed approach is open source and available to the RoboCup Rescue community, where we aim to learn and contribute to the league.
http://arxiv.org/abs/2504.09779v1
"All Roads Lead to ChatGPT": How Generative AI is Eroding Social Interactions and Student Learning Communities
2025-04-14T00:40:58+00:00
The widespread adoption of generative AI is already impacting learning and help-seeking. While the benefits of generative AI are well-understood, recent studies have also raised concerns about increased potential for cheating and negative impacts on students' metacognition and critical thinking. However, the potential impacts on social interactions, peer learning, and classroom dynamics are not yet well understood. To investigate these aspects, we conducted 17 semi-structured interviews with undergraduate computing students across seven R1 universities in North America. Our findings suggest that help-seeking requests are now often mediated by generative AI. For example, students often redirected questions from their peers to generative AI instead of providing assistance themselves, undermining peer interaction. Students also reported feeling increasingly isolated and demotivated as the social support systems they rely on begin to break down. These findings are concerning given the important role that social interactions play in students' learning and sense of belonging.
http://arxiv.org/abs/2504.09780v1
An Interoperable Syntax for Gas Scattering Reaction Definition
2025-04-14T00:46:13+00:00
We propose a novel unified, human-readable, machine-processable syntax/notation designed to comprehensively describe reactions, molecules, and excitation states. Our notation resolves inconsistencies in existing data representations and facilitates seamless integration with computational tools. We define a structured syntax for molecular species, excitation states, and reaction mechanisms, ensuring compatibility with a wide range of scientific applications. We provide a reference implementation based on Parsing Expression Grammar syntax, enabling automated parsing and interpretation of the proposed notation. This work is available as an open-source project, enabling validation and fostering its adoption and further improvement by the scientific community. Our standardized framework provides gas scattering models with increased interoperability and accuracy.
http://arxiv.org/abs/2504.09781v1
Reasoning Court: Combining Reasoning, Action, and Judgment for Multi-Hop Reasoning
2025-04-14T00:56:08+00:00
While large language models (LLMs) have demonstrated strong capabilities in tasks like question answering and fact verification, they continue to suffer from hallucinations and reasoning errors, especially in multi-hop tasks that require integration of multiple information sources. Current methods address these issues through retrieval-based techniques (grounding reasoning in external evidence), reasoning-based approaches (enhancing coherence via improved prompting), or hybrid strategies combining both elements. One prominent hybrid method, ReAct, has outperformed purely retrieval-based or reasoning-based approaches; however, it lacks internal verification of intermediate reasoning steps, allowing potential errors to propagate through complex reasoning tasks. In this paper, we introduce Reasoning Court (RC), a novel framework that extends iterative reasoning-and-retrieval methods, such as ReAct, with a dedicated LLM judge. Unlike ReAct, RC employs this judge to independently evaluate multiple candidate answers and their associated reasoning generated by separate LLM agents. The judge is asked to select the answer that it considers the most factually grounded and logically coherent based on the presented reasoning and evidence, or synthesizes a new answer using available evidence and its pre-trained knowledge if all candidates are inadequate, flawed, or invalid. Evaluations on multi-hop benchmarks (HotpotQA, MuSiQue) and fact-verification (FEVER) demonstrate that RC consistently outperforms state-of-the-art few-shot prompting methods without task-specific fine-tuning.
http://arxiv.org/abs/2504.09782v1
Stark-induced tunable phase transition in the two-photon Dicke-Stark model
2025-04-14T00:58:31+00:00
We theoretically investigate the superradiant phase transition (SPT) in the two-photon Dicke-Stark model, which incorporates both Rabi and Stark coupling. By introducing a Stark coupling term, we significantly reduce the critical Rabi coupling strength required to achieve the SPT, enabling it to occur even in strong coupling regimes. Using mean-field theory, we derive the conditions for the SPT and show that it exhibits a second-order phase transition. Surprisingly, we demonstrate that the transition point can be widely tuned by the Stark coupling strength. The signatures of these Stark-tunable SPT points are manifested through atomic averages. When quantum fluctuations are included, the spin-squeezing distributions also reveal the effects of Stark-tunable SPT points. In addition, we propose an experimentally feasible realization using an ion trap system driven by three lasers. Our scheme enables optical switching between normal and superradiant phases through pump field intensity modulation, where the Stark coupling coefficient serves as the optically tunable parameter. Our results offer a new approach to engineer the SPT, extending superradiance-based quantum technologies beyond the ultrastrong coupling regime.
http://arxiv.org/abs/2504.09783v1
BLAST: Bayesian online change-point detection with structured image data
2025-04-14T00:59:21+00:00
The prompt online detection of abrupt changes in image data is essential for timely decision-making in broad applications, from video surveillance to manufacturing quality control. Existing methods, however, face three key challenges. First, the high-dimensional nature of image data introduces computational bottlenecks for efficient real-time monitoring. Second, changes often involve structural image features, e.g., edges, blurs and/or shapes, and ignoring such structure can lead to delayed change detection. Third, existing methods are largely non-Bayesian and thus do not provide a quantification of monitoring uncertainty for confident detection. We address this via a novel Bayesian onLine Structure-Aware change deTection (BLAST) method. BLAST first leverages a deep Gaussian Markov random field prior to elicit desirable image structure from offline reference data. With this prior elicited, BLAST employs a new Bayesian online change-point procedure for image monitoring via its so-called posterior run length distribution. This posterior run length distribution can be computed in an online fashion using $\mathcal{O}(p^2)$ work at each time-step, where $p$ is the number of image pixels; this facilitates scalable Bayesian online monitoring of large images. We demonstrate the effectiveness of BLAST over existing methods in a suite of numerical experiments and in two applications, the first on street scene monitoring and the second on real-time process monitoring for metal additive manufacturing.
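The posterior run-length recursion driving such Bayesian online change-point methods can be sketched in its textbook form, an Adams-MacKay-style recursion for a Bernoulli stream with a Beta prior. This is a generic univariate sketch, not BLAST's structure-aware $\mathcal{O}(p^2)$ procedure:

```python
def bocpd_bernoulli(data, hazard=0.05, a0=1.0, b0=1.0):
    """Return the run-length posterior after each 0/1 observation.
    State: p[r] = P(run length == r | data so far); Beta(a, b) sufficient
    statistics are carried along per run-length hypothesis."""
    p, a, b = [1.0], [a0], [b0]
    history = []
    for x in data:
        # predictive probability of x under each run-length hypothesis
        pred = [(ai if x else bi) / (ai + bi) for ai, bi in zip(a, b)]
        # runs either grow by one (no change) or collapse to length 0 (change)
        growth = [pi * pr * (1 - hazard) for pi, pr in zip(p, pred)]
        change = hazard * sum(pi * pr for pi, pr in zip(p, pred))
        p = [change] + growth
        # new run restarts from the prior; surviving runs absorb x
        a = [a0] + [ai + x for ai in a]
        b = [b0] + [bi + (1 - x) for bi in b]
        z = sum(p)
        p = [pi / z for pi in p]
        history.append(p)
    return history
```

On a stream of 30 ones followed by 30 zeros, the final run-length posterior concentrates near r = 30, i.e., the recursion correctly localizes the change to the rate flip.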
http://arxiv.org/abs/2504.09784v1
Computationally Efficient State and Model Estimation via Interval Observers for Partially Unknown Systems
2025-04-14T01:02:33+00:00
This paper addresses the synthesis of interval observers for partially unknown nonlinear systems subject to bounded noise, aiming to simultaneously estimate system states and learn a model of the unknown dynamics. Our approach leverages Jacobian sign-stable (JSS) decompositions, tight decomposition functions for nonlinear systems, and a data-driven over-approximation framework to construct interval estimates that provably enclose the true augmented states. By recursively computing tight and tractable bounds for the unknown dynamics based on current and past interval framers, we systematically integrate these bounds into the observer design. Additionally, we formulate semi-definite programs (SDP) for observer gain synthesis, ensuring input-to-state stability and optimality of the proposed framework. Finally, simulation results demonstrate the computational efficiency of our approach compared to a method previously proposed by the authors.
http://arxiv.org/abs/2504.09785v1
Theory of zonal flow growth and propagation in toroidal geometry
2025-04-14T01:10:18+00:00
The toroidal geometry of tokamaks and stellarators is known to play a crucial role in the linear physics of zonal flows, leading to e.g. the Rosenbluth-Hinton residual and geodesic acoustic modes. However, descriptions of the nonlinear zonal flow growth from a turbulent background typically resort to simplified models of the geometry. We present a generalised theory of the secondary instability to model the zonal flow growth from turbulent fluctuations in toroidal geometry, demonstrating that the radial magnetic drift substantially affects the nonlinear zonal flow dynamics. In particular, the toroidicity gives rise to a new branch of propagating zonal flows, the toroidal secondary mode, which is nonlinearly supported by the turbulence. We present a theory of this mode and compare the theory against gyrokinetic simulations of the secondary mode. The connection with other secondary modes - the ion-temperature-gradient and Rogers-Dorland-Kotschenreuther secondary modes - is also examined.
http://arxiv.org/abs/2504.09786v1
Stabilization of Poincaré duality complexes and homotopy gyrations
2025-04-14T01:10:52+00:00
Stabilization of manifolds by a product of spheres or a projective space is important in geometry. There has been considerable recent work that studies the homotopy theory of stabilization for connected manifolds. This paper generalizes that work by developing new methods that allow for a generalization to stabilization of Poincar\'{e} Duality complexes. This includes the systematic study of a homotopy theoretic generalization of a gyration, obtained from a type of surgery in the manifold case. In particular, for a fixed Poincar\'{e} Duality complex, a criterion is given for the possible homotopy types of gyrations, and it is shown that there are only finitely many.
http://arxiv.org/abs/2504.09787v1
Local hyperbolicity, inert maps and Moore's conjecture
2025-04-14T01:14:06+00:00
We show that the base space of a homotopy cofibration is locally hyperbolic under various conditions. In particular, if these manifolds admit a rationally elliptic closure, then almost all punctured manifolds and almost all manifolds with rationally spherical boundary are $\mathbb{Z}/p^r$-hyperbolic for almost all primes $p$ and all integers $r \geq 1$, and satisfy Moore's conjecture at sufficiently large primes.
http://arxiv.org/abs/2504.09788v1
Using Process Calculus for Optimizing Data and Computation Sharing in Complex Stateful Parallel Computations
2025-04-14T01:16:58+00:00
We propose novel techniques that exploit data and computation sharing to improve the performance of complex stateful parallel computations, like agent-based simulations. Parallel computations are translated into behavioral equations, a novel formalism layered on top of the foundational process calculus $\pi$-calculus. Behavioral equations blend code and data, allowing a system to easily compose and transform parallel programs into specialized programs. We show how optimizations like merging programs, synthesizing efficient message data structures, eliminating local messaging, rewriting communication instructions into local computations, and aggregation pushdown can be expressed as transformations of behavioral equations. We have also built a system called OptiFusion that implements behavioral equations and the aforementioned optimizations. Our experiments showed that OptiFusion is over 10$\times$ faster than state-of-the-art stateful systems benchmarked via complex stateful workloads. Generating specialized instructions that are impractical to write by hand allows OptiFusion to outperform even the hand-optimized implementations by up to 2$\times$.
http://arxiv.org/abs/2504.09789v1
EquiVDM: Equivariant Video Diffusion Models with Temporally Consistent Noise
2025-04-14T01:26:29+00:00
Temporally consistent video-to-video generation is essential for applications of video diffusion models in areas such as sim-to-real, style-transfer, video upsampling, etc. In this paper, we propose a video diffusion framework that leverages temporally consistent noise to generate coherent video frames without specialized modules or additional constraints. We show that the standard training objective of diffusion models, when applied with temporally consistent noise, encourages the model to be equivariant to spatial transformations in input video and noise. This enables our model to better follow motion patterns from the input video, producing aligned motion and high-fidelity frames. Furthermore, we extend our approach to 3D-consistent video generation by attaching noise as textures on 3D meshes, ensuring 3D consistency in sim-to-real applications. Experimental results demonstrate that our method surpasses state-of-the-art baselines in motion alignment, 3D consistency, and video quality while requiring only a few sampling steps in practice.
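The core mechanism described in this abstract, sampling noise once and transporting it along the video's motion so that corresponding pixels in every frame receive corresponding noise, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the integer flow field and nearest-neighbor warping are simplifying assumptions.

```python
import numpy as np

def warp_noise(base_noise, flow):
    """Transport a base noise field along integer pixel offsets (nearest-neighbor).

    base_noise: (H, W) noise sampled once for the whole clip.
    flow: (H, W, 2) integer displacement of each pixel relative to the base frame.
    """
    H, W = base_noise.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys - flow[..., 0], 0, H - 1)
    src_x = np.clip(xs - flow[..., 1], 0, W - 1)
    return base_noise[src_y, src_x]

rng = np.random.default_rng(0)
base = rng.standard_normal((8, 8))
# A constant shift by one pixel: the frame sees the same noise, translated,
# so the noise "moves with" the content instead of flickering independently.
shift = np.ones((8, 8, 2), dtype=int)
frame1 = warp_noise(base, shift)
# Interior pixels match the shifted base noise exactly.
assert np.allclose(frame1[1:, 1:], base[:-1, :-1])
```

Each frame's noise is then a rigid transport of one shared sample, which is what makes an equivariance-encouraging training objective possible.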
http://arxiv.org/abs/2504.09790v1
A SageMath Package for Analytic Combinatorics in Several Variables: Beyond the Smooth Case
2025-04-14T01:30:44+00:00
The field of analytic combinatorics in several variables (ACSV) develops techniques to compute the asymptotic behaviour of multivariate sequences from analytic properties of their generating functions. When the generating function under consideration is rational, its set of singularities forms an algebraic variety -- called the singular variety -- and asymptotic behaviour depends heavily on the geometry of the singular variety. By combining a recent algorithm for the Whitney stratification of algebraic varieties with methods from ACSV, we present the first software that rigorously computes asymptotics of sequences whose generating functions have non-smooth singular varieties (under certain assumptions on the local geometry). Our work is built on the existing sage_acsv package for the SageMath computer algebra system, which previously gave asymptotics under a smoothness assumption. We also report on other improvements to the package, such as an efficient technique for determining higher order asymptotic expansions using Newton iteration, the ability to use more efficient backends for algebraic computations, and a method to compute so-called critical points for any multivariate rational function through Whitney stratification.
http://arxiv.org/abs/2504.09791v1
Practical Advantage of Classical Communication in Entanglement Detection
2025-04-14T01:33:20+00:00
Entanglement is the cornerstone of quantum communication, yet conventional detection relies solely on local measurements. In this work, we present a unified theoretical and experimental framework demonstrating that one-way local operations and classical communication (1-LOCC) can significantly outperform purely local measurements in detecting high-dimensional quantum entanglement. By casting the entanglement detection problem as a semidefinite program (SDP), we derive protocols that minimize false negatives at fixed false-positive rates. A variational generative machine-learning algorithm efficiently searches over high-dimensional parameter spaces, identifying states and measurement strategies that exhibit a clear 1-LOCC advantage. Experimentally, we realize a genuine event-ready protocol on a three-dimensional photonic entanglement source, employing fiber delays as short-lived quantum memories. We implement rapid, FPGA-based sampling of the optimized probabilistic instructions, allowing Bob's measurement settings to adapt to Alice's outcomes in real time. Our results validate the predicted 1-LOCC advantage in a realistic noisy setting and reduce the experimental trials needed to certify entanglement. These findings mark a step toward scalable, adaptive entanglement detection methods crucial for quantum networks and computing, paving the way for more efficient generation and verification of high-dimensional entangled states.
http://arxiv.org/abs/2504.09792v1
A Tale of Two Learning Algorithms: Multiple Stream Random Walk and Asynchronous Gossip
2025-04-14T01:34:22+00:00
Although gossip and random walk-based learning algorithms are widely known for decentralized learning, there has been limited theoretical and experimental analysis to understand their relative performance for different graph topologies and data heterogeneity. We first design and analyze a random walk-based learning algorithm with multiple streams (walks), which we name asynchronous "Multi-Walk (MW)". We provide a convergence analysis for MW w.r.t. iterations (computation), wall-clock time, and communication. We also present a convergence analysis for "Asynchronous Gossip", noting the lack of a comprehensive analysis of its convergence, along with the computation and communication overhead, in the literature. Our results show that MW has better convergence in terms of iterations as compared to Asynchronous Gossip in graphs with large diameters (e.g., cycles), while its relative performance, as compared to Asynchronous Gossip, depends on the number of walks and the data heterogeneity in graphs with small diameters (e.g., complete graphs). In wall-clock time analysis, we observe a linear speed-up with the number of walks and nodes in MW and Asynchronous Gossip, respectively. Finally, we show that MW outperforms Asynchronous Gossip in communication overhead, except in small-diameter topologies with extreme data heterogeneity. These results highlight the effectiveness of each algorithm in different graph topologies and data heterogeneity. Our code is available for reproducibility.
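A minimal sketch of the multiple-walk idea follows: each walk carries its own model copy, updates it on the data of the node it visits, and then moves to a random neighbor. The scalar model, synchronous loop, and decaying step size here are simplifying assumptions for illustration, not the paper's asynchronous MW algorithm.

```python
import numpy as np

def multi_walk_mean(graph, node_data, num_walks=2, steps=200, seed=0):
    """Toy sketch: each walk takes a gradient step on (m - x)^2 / 2 at the
    node it visits, then moves to a uniform random neighbor. A 1/t step size
    turns each model into a running average of the data it has seen."""
    rng = np.random.default_rng(seed)
    models = np.zeros(num_walks)   # scalar model: estimate the mean of node_data
    pos = rng.integers(0, len(graph), size=num_walks)
    for t in range(steps):
        lr = 1.0 / (t + 1)
        for w in range(num_walks):
            grad = models[w] - node_data[pos[w]]   # d/dm of (m - x)^2 / 2
            models[w] -= lr * grad
            pos[w] = rng.choice(graph[pos[w]])     # move along an edge
    return models.mean()

# Cycle graph on 6 nodes (a large-diameter topology); node v holds the value v.
cycle = [[(v - 1) % 6, (v + 1) % 6] for v in range(6)]
est = multi_walk_mean(cycle, np.arange(6.0))
assert abs(est - 2.5) < 1.0   # the walks approach the average over all nodes
```

The large-diameter case is exactly where the abstract reports MW's iteration-complexity advantage over gossip, since a single walk visits nodes one at a time instead of flooding all edges.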
http://arxiv.org/abs/2504.09793v1
Toward Effective PBFT Consensus Service under Software Aging in Dynamic Scenarios
2025-04-14T01:41:53+00:00
The increasing application and deployment of blockchain in various services necessitates the assurance of the effectiveness of PBFT (Practical Byzantine Fault Tolerance) consensus service. However, the performance of PBFT consensus service is challenged in dynamic scenarios. The paper explores how to reduce the consensus processing time and maintenance cost of PBFT consensus service under software aging in dynamic scenarios. We first propose a PBFT system, consisting of three subsystems: an active-node subsystem, a standby-node subsystem and a repair subsystem. All the active nodes participate in the consensus, and all the standby nodes are reserved for fault-tolerance. Each aging/crashed node becomes a standby node after completing its repair in the repair subsystem. The nodes migrate between the active-node and standby-node subsystems in order to support the continuity of the PBFT consensus service while reducing maintenance cost. Then, we develop a Markov-chain-based analytical model for capturing the behaviors of the system and also derive the formulas for calculating the metrics, including consensus processing time, PBFT service availability, and the mean number of nodes in each subsystem. Finally, we design a Multi-Objective Evolutionary Algorithm-based method for minimizing both the PBFT service response time and the PBFT system maintenance cost. We also conduct experiments for evaluation.
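The Markov-chain flavor of this kind of analysis can be illustrated with a toy birth-death chain for the number of nodes under repair. The rates, the single repair facility, and the availability criterion below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def stationary_failed(n, lam, mu):
    """Birth-death chain on k = number of nodes under repair (0..n).
    Each healthy node ages at rate lam; one repair facility works at rate mu.
    Illustrative toy, not the paper's full analytical model."""
    # Detailed balance: pi[k+1] / pi[k] = (n - k) * lam / mu
    pi = np.ones(n + 1)
    for k in range(n):
        pi[k + 1] = pi[k] * (n - k) * lam / mu
    return pi / pi.sum()

n, f = 7, 2              # PBFT liveness with n = 3f + 1 = 7 nodes
pi = stationary_failed(n, lam=0.01, mu=1.0)
# Consensus stays live while at most f = 2 nodes are aging/under repair.
availability = pi[: f + 1].sum()
assert availability > 0.99
```

Metrics such as service availability then follow directly from the stationary distribution, which is the general pattern behind the paper's closed-form formulas.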
http://arxiv.org/abs/2504.09794v1
Arbitrary orientations of cycles in oriented graphs
2025-04-14T01:45:30+00:00
We show that every sufficiently large oriented graph $G$ with both minimum indegree and outdegree at least $(3|V(G)|-1)/8$ contains every possible orientation of a Hamilton cycle. This improves on an approximate result by Kelly and solves a problem of H\"aggkvist and Thomason from 1995. Moreover, the bound is best possible. We also obtain a pancyclicity result for arbitrary orientations. More precisely, we show that the above degree condition is sufficient to guarantee a cycle of every possible orientation and of every possible length unless $G$ is isomorphic to one of the exceptional oriented graphs.
http://arxiv.org/abs/2504.09795v1
VDocRAG: Retrieval-Augmented Generation over Visually-Rich Documents
2025-04-14T01:50:33+00:00
We aim to develop a retrieval-augmented generation (RAG) framework that answers questions over a corpus of visually-rich documents presented in mixed modalities (e.g., charts, tables) and diverse formats (e.g., PDF, PPTX). In this paper, we introduce a new RAG framework, VDocRAG, which can directly understand varied documents and modalities in a unified image format to prevent missing information that occurs by parsing documents to obtain text. To improve the performance, we propose novel self-supervised pre-training tasks that adapt large vision-language models for retrieval by compressing visual information into dense token representations while aligning them with textual content in documents. Furthermore, we introduce OpenDocVQA, the first unified collection of open-domain document visual question answering datasets, encompassing diverse document types and formats. OpenDocVQA provides a comprehensive resource for training and evaluating retrieval and question answering models on visually-rich documents in an open-domain setting. Experiments show that VDocRAG substantially outperforms conventional text-based RAG and has strong generalization capability, highlighting the potential of an effective RAG paradigm for real-world documents.
http://arxiv.org/abs/2504.09796v1
Advancing RFI-Detection in Radio Astronomy with Liquid State Machines
2025-04-14T01:51:01+00:00
Radio Frequency Interference (RFI) from anthropogenic radio sources poses significant challenges to current and future radio telescopes. Contemporary approaches to detecting RFI treat the task as a semantic segmentation problem on radio telescope spectrograms. Typically, complex heuristic algorithms handle this task of `flagging' in combination with manual labeling (in the most difficult cases). While recent machine-learning approaches have demonstrated high accuracy, they often fail to meet the stringent operational requirements of modern radio observatories. Owing to their inherently time-varying dynamics, spiking neural networks (SNNs) are a promising alternative for RFI-detection, as they can exploit the time-varying nature of the spectrographic source data. In this work, we apply Liquid State Machines (LSMs), a class of spiking neural networks, to RFI-detection. We employ second-order Leaky Integrate-and-Fire (LIF) neurons, marking the first use of this architecture and neuron type for RFI-detection. We test three encoding methods and three increasingly complex readout layers, including a transformer decoder head, providing a hybrid of SNN and ANN techniques. Our methods extend LSMs beyond conventional classification tasks to fine-grained spatio-temporal segmentation. We train LSMs on simulated data derived from the Hydrogen Epoch of Reionization Array (HERA), a known benchmark for RFI-detection. Our model achieves a per-pixel accuracy of 98% and an F1-score of 0.743, demonstrating competitive performance on this highly challenging task. This work expands the sophistication of SNN techniques and architectures applied to RFI-detection, and highlights the effectiveness of LSMs in handling fine-grained, complex, spatio-temporal signal-processing tasks.
http://arxiv.org/abs/2504.09797v1
IGL-DT: Iterative Global-Local Feature Learning with Dual-Teacher Semantic Segmentation Framework under Limited Annotation Scheme
2025-04-14T01:51:29+00:00
Semi-Supervised Semantic Segmentation (SSSS) aims to improve segmentation accuracy by leveraging a small set of labeled images alongside a larger pool of unlabeled data. Recent advances primarily focus on pseudo-labeling, consistency regularization, and co-training strategies. However, existing methods struggle to balance global semantic representation with fine-grained local feature extraction. To address this challenge, we propose a novel tri-branch semi-supervised segmentation framework incorporating a dual-teacher strategy, named IGL-DT. Our approach employs SwinUnet for high-level semantic guidance through Global Context Learning and ResUnet for detailed feature refinement via Local Regional Learning. Additionally, a Discrepancy Learning mechanism mitigates over-reliance on a single teacher, promoting adaptive feature learning. Extensive experiments on benchmark datasets demonstrate that our method outperforms state-of-the-art approaches, achieving superior segmentation performance across various data regimes.
http://arxiv.org/abs/2504.09798v2
ReadMe.LLM: A Framework to Help LLMs Understand Your Library
2025-04-14T01:57:43+00:00
Large Language Models (LLMs) often struggle with code generation tasks involving niche software libraries. Existing code generation techniques with only human-oriented documentation can fail -- even when the LLM has access to web search and the library is documented online. To address this challenge, we propose ReadMe.LLM, LLM-oriented documentation for software libraries. By attaching the contents of ReadMe.LLM to a query, performance consistently improves to near-perfect accuracy, with one case study demonstrating up to 100% success across all tested models. We propose a software development lifecycle where LLM-specific documentation is maintained alongside traditional software updates. In this study, we present two practical applications of the ReadMe.LLM idea with diverse software libraries, highlighting that our proposed approach could generalize across programming domains.
http://arxiv.org/abs/2504.09799v1
Research and Experimental Validation for 3GPP ISAC Channel Modeling Standardization
2025-04-14T01:59:35+00:00
Integrated Sensing and Communication (ISAC) is considered a key technology in 6G networks. An accurate sensing channel model is crucial for the design and sensing performance evaluation of ISAC systems. The widely used Geometry-Based Stochastic Model (GBSM), typically applied in standardized channel modeling, mainly focuses on the statistical fading characteristics of the channel. However, it fails to capture the characteristics of targets in ISAC systems, such as their positions and velocities, as well as the impact of the targets on the background. To address this issue, this paper proposes an extended GBSM (E-GBSM) sensing channel model that incorporates newly discovered channel characteristics into a unified modeling framework. In this framework, the sensing channel is divided into target and background channels. For the target channel, the model introduces a concatenated modeling approach, while for the background channel, a parameter called the power control factor is introduced to assess the impact of the target on the background channel, making the modeling framework applicable to both mono-static and bi-static sensing modes. To validate the proposed model's effectiveness, measurements of target and background channels are conducted in both indoor and outdoor scenarios, covering various sensing targets such as metal plates, reconfigurable intelligent surfaces, human bodies, UAVs, and vehicles. The experimental results provide important theoretical support and empirical data for the standardization of ISAC channel modeling.
http://arxiv.org/abs/2504.09800v1
Multi-task Federated Learning with Encoder-Decoder Structure: Enabling Collaborative Learning Across Different Tasks
2025-04-14T02:01:39+00:00
Federated learning has been extensively studied and applied due to its ability to ensure data security in distributed environments while building better models. However, clients participating in federated learning still face limitations, as clients with different structures or tasks cannot participate in learning together. In view of this, constructing a federated learning framework that allows collaboration among clients with different model structures performing different tasks, enabling them to share valuable knowledge to enhance model efficiency, holds significant practical implications for the widespread application of federated learning. To achieve this goal, we propose a multi-task federated learning with encoder-decoder structure (M-Fed). Specifically, given the widespread adoption of the encoder-decoder architecture in current models, we leverage this structure to share intra-task knowledge through traditional federated learning methods and extract general knowledge from the encoder to achieve cross-task knowledge sharing. The training process is similar to traditional federated learning, and we incorporate local decoder and global decoder information into the loss function. The local decoder iteratively updates and gradually approaches the global decoder until sufficient cross-task knowledge sharing is achieved. Our method is lightweight and modular, demonstrating innovation compared to previous research. It enables clients performing different tasks to share general knowledge while maintaining the efficiency of traditional federated learning systems. We conducted experiments on two widely used benchmark datasets to verify the feasibility of M-Fed and compared it with traditional methods. The experimental results demonstrate the effectiveness of M-Fed in multi-task federated learning.
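The core sharing rule, federated averaging restricted to the encoders while decoders stay task-specific, can be sketched as follows. This is a toy round with array-valued "encoders"; the paper additionally couples local and global decoders through the loss function, which this sketch omits.

```python
import numpy as np

def m_fed_round(clients):
    """One illustrative M-Fed-style round: average only the shared encoder
    parameters across clients with different tasks; each client's decoder
    stays local and task-specific."""
    enc_avg = np.mean([c["encoder"] for c in clients], axis=0)
    for c in clients:
        c["encoder"] = enc_avg.copy()   # cross-task knowledge sharing
    return clients

clients = [
    {"task": "segmentation", "encoder": np.array([1.0, 2.0]), "decoder": "segA"},
    {"task": "depth",        "encoder": np.array([3.0, 4.0]), "decoder": "depB"},
]
clients = m_fed_round(clients)
assert np.allclose(clients[0]["encoder"], [2.0, 3.0])
assert clients[1]["decoder"] == "depB"   # decoders are untouched
```

Averaging only the encoder is what lets structurally different, differently-tasked clients participate in the same federation.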
http://arxiv.org/abs/2504.09801v1
SDSS J134313.15+364457.5: Forming Compact Elliptical through the Merger
2025-04-14T02:03:45+00:00
Scaling relations are fundamental tools for exploring the morphological properties of galaxies and understanding their formation and evolution. Typically, galaxies follow a scaling relation between mass and size, measured by effective radius. However, a compact class of galaxies exists as outliers from this relation, and the origin of these compact galaxies in the local universe remains unclear. In this study, we investigate the compact dwarf galaxy SDSS J134313.15+364457.5 (J1343+3644), which is the result of a merger. Our analysis reveals that J1343+3644 has a half-light radius of 482~pc, significantly smaller than typical galaxies with the same brightness ($M_\text{r} = -19.17$ mag). With a high star-formation rate (SFR) of 0.87~M$_{\sun}$ year$^{-1}$, J1343+3644 is expected to evolve into a compact elliptical galaxy in a few million years. J1343+3644 could, therefore, be a progenitor of a compact elliptical galaxy. This phenomenon occurred in the early universe, where compact galaxies were common.
http://arxiv.org/abs/2504.09802v1
Training Small Reasoning LLMs with Cognitive Preference Alignment
2025-04-14T02:03:54+00:00
The reasoning capabilities of large language models (LLMs), such as OpenAI's o1 and DeepSeek-R1, have seen substantial advancements through deep thinking. However, these enhancements come with significant resource demands, underscoring the need to explore strategies to train effective reasoning LLMs with far fewer parameters. A critical challenge is that smaller models have different capacities and cognitive trajectories than their larger counterparts. Hence, direct distillation of chain-of-thought (CoT) results from large LLMs to smaller ones can sometimes be ineffective and require a huge amount of annotated data. In this paper, we introduce a novel framework called Critique-Rethink-Verify (CRV), designed for training smaller yet powerful reasoning LLMs. Our CRV framework consists of multiple LLM agents, each specializing in unique abilities: (i) critiquing the CoTs according to the cognitive capabilities of smaller models, (ii) rethinking and refining these CoTs based on the critiques, and (iii) verifying the correctness of the refined results. We further propose the cognitive preference optimization (CogPO) algorithm to enhance the reasoning abilities of smaller models by aligning thoughts of these models with their cognitive capacities. Comprehensive evaluations on challenging reasoning benchmarks demonstrate the efficacy of CRV and CogPO, which outperform other training methods by a large margin.
http://arxiv.org/abs/2504.09803v1
CUT: Pruning Pre-Trained Multi-Task Models into Compact Models for Edge Devices
2025-04-14T02:04:48+00:00
Multi-task learning has garnered widespread attention in the industry due to its efficient data utilization and strong generalization capabilities, making it particularly suitable for providing high-quality intelligent services to users. Edge devices, as the primary platforms directly serving users, play a crucial role in delivering multi-task services. However, current multi-task models are often large, and user task demands are increasingly diverse. Deploying such models directly on edge devices not only increases the burden on these devices but also leads to task redundancy. To address this issue, this paper innovatively proposes a pre-trained multi-task model pruning method specifically designed for edge computing. The goal is to utilize existing pre-trained multi-task models to construct a compact multi-task model that meets the needs of edge devices. The specific implementation steps are as follows: First, decompose the tasks within the pre-trained multi-task model and select tasks based on actual user needs. Next, while retaining the knowledge of the original pre-trained model, evaluate parameter importance and use a parameter fusion method to effectively integrate shared parameters among tasks. Finally, obtain a compact multi-task model suitable for edge devices. To validate the effectiveness of the proposed method, we conducted experiments on three public image datasets. The experimental results fully demonstrate the superiority and efficiency of this method, providing a new solution for multi-task learning on edge devices.
http://arxiv.org/abs/2504.09804v1
BO-SA-PINNs: Self-adaptive physics-informed neural networks based on Bayesian optimization for automatically designing PDE solvers
2025-04-14T02:07:45+00:00
Physics-informed neural networks (PINNs) are becoming a popular alternative method for solving partial differential equations (PDEs). However, they require dedicated manual modifications to the hyperparameters of the network, the sampling methods and loss function weights for different PDEs, which reduces the efficiency of the solvers. In this paper, we propose a general multi-stage framework, i.e. BO-SA-PINNs, to alleviate this issue. In the first stage, Bayesian optimization (BO) is used to select hyperparameters for the training process, and based on the results of the pre-training, the network architecture, learning rate, sampling points distribution and loss function weights suitable for the PDEs are automatically determined. The proposed hyperparameter search space, based on experimental results, can enhance the efficiency of BO in identifying optimal hyperparameters. After selecting the appropriate hyperparameters, we incorporate a global self-adaptive (SA) mechanism in the second stage. Using the pre-trained model and loss information in the second-stage training, the exponential moving average (EMA) method is employed to optimize the loss function weights, and residual-based adaptive refinement with distribution (RAR-D) is used to optimize the sampling points distribution. In the third stage, L-BFGS is used for stable training. In addition, we introduce a new activation function that enables BO-SA-PINNs to achieve higher accuracy. In numerical experiments, we conduct comparative and ablation experiments to verify the performance of the model on Helmholtz, Maxwell, Burgers and high-dimensional Poisson equations. The comparative experiment results show that our model can achieve higher accuracy and fewer iterations in test cases, and the ablation experiments demonstrate the positive impact of every improvement.
http://arxiv.org/abs/2504.09805v1
You can lie but not deny: SWMR registers with signature properties in systems with Byzantine processes
2025-04-14T02:09:13+00:00
We define and show how to implement SWMR registers that provide properties of unforgeable digital signatures - without actually using such signatures - in systems with Byzantine processes. More precisely, we first define SWMR verifiable registers. Intuitively, processes can use these registers to write values as if they are ``signed'', such that these ``signed values'' can be ``verified'' by any process and ``relayed'' to any process. We give a signature-free implementation of such registers from plain SWMR registers in systems with $n > 3f$ processes, $f$ of which can be Byzantine. We also give a signature-free implementation of SWMR sticky registers from SWMR registers in systems with $n > 3f$ processes. Once the writer $p$ writes a value $v$ into a SWMR sticky register $R$, the register never changes its value. Note that the value $v$ can be considered ``signed'' by $p$: once $p$ writes $v$ in $R$, $p$ cannot change the value in $R$ or deny that it wrote $v$ in $R$, and every reader can verify that $p$ wrote $v$ just by reading $R$. This holds even if the writer $p$ of $R$ is Byzantine. We prove that our implementations are optimal in the number of Byzantine processes they can tolerate. Since SWMR registers can be implemented in message-passing systems with Byzantine processes and $n > 3f$ [9], the results in this paper also show that one can implement verifiable registers and sticky registers in such systems.
http://arxiv.org/abs/2504.09806v1
Quantum theory from classical mechanics near equilibrium
2025-04-14T02:14:41+00:00
We consider classical theories described by Hamiltonians $H(p,q)$ that have a non-degenerate minimum at the point where generalized momenta $p$ and generalized coordinates $q$ vanish. We assume that the sum of squares of generalized momenta and generalized coordinates is an integral of motion. In this situation, in the neighborhood of the point $p=0, q=0$ the quadratic part of the Hamiltonian plays a dominant role. We suppose that a classical observer can observe only physical quantities corresponding to quadratic Hamiltonians and show that in this case, he should conclude that the laws of quantum theory govern his world.
http://arxiv.org/abs/2504.09807v1
Virtual domain extension for imposing boundary conditions in flow simulation using pre-trained local neural operator
2025-04-14T02:18:12+00:00
This paper builds up a virtual domain extension (VDE) framework for imposing boundary conditions (BCs) in flow simulation using a pre-trained local neural operator (LNO). It attaches extended virtual domains to the input function to compensate for the corrosion nature of computational domains during LNO inference, thus turning the implementation of BCs into the determination of field values on the extended domain. Several strategies to calculate the field values are proposed and validated in solving numerical examples, including padding operation, direct imposition, pressure symmetry, and optimization by backpropagation, and compared with boundary imposition in traditional solvers. It is found that the large time interval of LNO induces a relatively wide near-boundary domain to be processed, thus imposing BCs on only a few nodes near the boundary following the immersed boundary conception in traditional solvers can hardly achieve high accuracy. With appropriate values assigned on the extended virtual domains, VDE can accurately impose BCs and lead to reasonable flow field predictions. This work provides guidance for imposing BCs reliably in LNO prediction, which could facilitate the reuse of pre-trained LNOs in more applications.
http://arxiv.org/abs/2504.09808v2
Optimizing disorder with machine learning to harness synchronization
2025-04-14T02:18:15+00:00
Disorder is often considered detrimental to coherence. However, under specific conditions, it can enhance synchronization. We develop a machine-learning framework to design optimal disorder configurations that maximize phase synchronization. In particular, utilizing the system of coupled nonlinear pendulums with disorder and noise, we train a feedforward neural network (FNN), with the disorder parameters as input, to predict the Shannon entropy index that quantifies the phase synchronization strength. The trained FNN model is then deployed to search for the optimal disorder configurations in the high-dimensional space of the disorder parameters, providing a computationally efficient replacement of the stochastic differential equation solvers. Our results demonstrate that the FNN is capable of accurately predicting synchronization and facilitates an efficient inverse design solution to optimizing and enhancing synchronization.
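The surrogate-then-search loop described in this abstract can be sketched with a toy objective. For a dependency-free illustration, a quadratic least-squares fit stands in for the FNN and a simple closed-form function stands in for the SDE-based synchronization index; both substitutions are assumptions, not the paper's setup.

```python
import numpy as np

def expensive_sync_index(d):
    """Stand-in for the costly SDE simulation: a synchronization score
    peaked at a nonzero disorder level (toy ground truth)."""
    return -np.sum((d - 0.3) ** 2)

rng = np.random.default_rng(1)
# 1) Collect training data from a limited number of expensive evaluations.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.array([expensive_sync_index(x) for x in X])
# 2) Fit a cheap surrogate of the synchronization index
#    (quadratic features + least squares stand in for the trained FNN).
feats = lambda Z: np.c_[np.ones(len(Z)), Z, Z ** 2]
w, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
# 3) Search the surrogate over many candidate disorder configurations,
#    replacing repeated calls to the stochastic differential equation solver.
cand = rng.uniform(-1, 1, size=(20000, 2))
best = cand[np.argmax(feats(cand) @ w)]
assert np.all(np.abs(best - 0.3) < 0.1)   # recovers the optimal disorder level
```

The computational win is the same as in the paper's pipeline: the expensive simulator is queried only to build training data, and the high-dimensional search runs entirely against the cheap surrogate.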
http://arxiv.org/abs/2504.09809v1
See or Recall: A Sanity Check for the Role of Vision in Solving Visualization Question Answer Tasks with Multimodal LLMs
2025-04-14T02:19:28+00:00
Recent developments in multimodal large language models (MLLM) have equipped language models to reason about vision and language jointly. This permits MLLMs to both perceive and answer questions about data visualization across a variety of designs and tasks. Applying MLLMs to a broad range of visualization tasks requires us to properly evaluate their capabilities, and the most common way to conduct evaluation is through measuring a model's visualization reasoning capability, analogous to how we would evaluate human understanding of visualizations (e.g., visualization literacy). However, we found that in the context of visualization question answering (VisQA), how an MLLM perceives and reasons about visualizations can be fundamentally different from how humans approach the same problem. During the evaluation, even without visualization, the model could correctly answer a substantial portion of the visualization test questions, regardless of whether any selection options were provided. We hypothesize that the vast amount of knowledge encoded in the language model permits factual recall that supersedes the need to seek information from the visual signal. This raises concerns that the current VisQA evaluation may not fully capture the models' visualization reasoning capabilities. To address this, we propose a comprehensive sanity check framework that integrates a rule-based decision tree and a sanity check table to disentangle the effects of "seeing" (visual processing) and "recall" (reliance on prior knowledge). This validates VisQA datasets for evaluation, highlighting where models are truly "seeing", positively or negatively affected by the factual recall, or relying on inductive biases for question answering. Our study underscores the need for careful consideration in designing future visualization understanding studies when utilizing MLLMs.
http://arxiv.org/abs/2504.09810v1
High-Order Interior Penalty Finite Element Methods for Fourth-Order Phase-Field Models in Fracture Analysis
2025-04-14T02:20:37+00:00
This paper presents a novel approach for solving fourth-order phase-field models in brittle fracture mechanics using the Interior Penalty Finite Element Method (IP-FEM). The fourth-order model improves numerical stability and accuracy compared to traditional second-order phase-field models, particularly when simulating complex crack paths. The IP-FEM provides an efficient framework for discretizing these models, effectively handling nonconforming trial functions and complex boundary conditions. In this study, we leverage the FEALPy framework to implement a flexible computational tool that supports high-order IP-FEM discretizations. Our results show that as the polynomial order increases, the mesh dependence of the phase-field model decreases, offering improved accuracy and faster convergence. Additionally, we explore the trade-offs between computational cost and accuracy with varying polynomial orders and mesh sizes. The findings offer valuable insights for optimizing numerical simulations of brittle fracture in practical engineering applications.
http://arxiv.org/abs/2504.09811v1
Volume estimates for the singular sets of mean curvature flows
2025-04-14T02:21:30+00:00
In this paper, we establish uniform and sharp volume estimates for the singular set and the quantitative singular strata of mean curvature flows starting from a smooth, closed, mean-convex hypersurface in $\mathbb R^{n+1}$.
http://arxiv.org/abs/2504.09812v1
Efficient Multi-Task Modeling through Automated Fusion of Trained Models
2025-04-14T02:21:45+00:00
Although multi-task learning is widely applied in intelligent services, traditional multi-task modeling methods often require customized designs based on specific task combinations, resulting in a cumbersome modeling process. Inspired by the rapid development and excellent performance of single-task models, this paper proposes an efficient multi-task modeling method that can automatically fuse trained single-task models with different structures and tasks to form a multi-task model. As a general framework, this method allows modelers to simply prepare trained models for the required tasks, simplifying the modeling process while fully utilizing the knowledge contained in the trained models. This eliminates the need for excessive focus on task relationships and model structure design. To achieve this goal, we consider the structural differences among various trained models and employ model decomposition techniques to hierarchically decompose them into multiple operable model components. Furthermore, we have designed an Adaptive Knowledge Fusion (AKF) module based on Transformer, which adaptively integrates intra-task and inter-task knowledge based on model components. Through the proposed method, we achieve efficient and automated construction of multi-task models, and its effectiveness is verified through extensive experiments on three datasets.
http://arxiv.org/abs/2504.09813v1
A Practical Framework for Assessing the Performance of Observable Estimation in Quantum Simulation
2025-04-14T02:23:01+00:00
Simulating dynamics of physical systems is a key application of quantum computing, with potential impact in fields such as condensed matter physics and quantum chemistry. However, current quantum algorithms for Hamiltonian simulation yield results that are inadequate for real use cases and suffer from lengthy execution times when implemented on near-term quantum hardware. In this work, we introduce a framework for evaluating the performance of quantum simulation algorithms, focusing on the computation of observables, such as energy expectation values. Our framework provides end-to-end demonstrations of algorithmic optimizations that utilize Pauli term groups based on k-commutativity, generate customized Clifford measurement circuits, and implement weighted shot distribution strategies across these groups. These demonstrations span multiple quantum execution environments, allowing us to identify critical factors influencing runtime and solution accuracy. We integrate enhancements into the QED-C Application-Oriented Benchmark suite, utilizing problem instances from the open-source HamLib collection. Our results demonstrate a 27.1% error reduction through Pauli grouping methods, with an additional 37.6% improvement from the optimized shot distribution strategy. Our framework provides an essential tool for advancing quantum simulation performance using algorithmic optimization techniques, enabling systematic evaluation of improvements that could maximize near-term quantum computers' capabilities and advance practical quantum utility as hardware evolves.
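The weighted shot distribution idea can be sketched as a proportional allocation of a measurement budget across Pauli groups. This is an illustrative scheme only (the paper's actual weighting strategy may differ); `group_weights` stands in for, e.g., the L1 norm of each group's Hamiltonian coefficients.

```python
def distribute_shots(group_weights, total_shots):
    """Allocate measurement shots to Pauli groups proportionally to their
    weights, using largest-remainder rounding so shots sum exactly."""
    total_w = sum(group_weights)
    raw = [total_shots * w / total_w for w in group_weights]
    shots = [int(r) for r in raw]
    remainder = total_shots - sum(shots)
    # hand leftover shots to the groups with the largest fractional parts
    order = sorted(range(len(raw)), key=lambda i: raw[i] - shots[i], reverse=True)
    for i in order[:remainder]:
        shots[i] += 1
    return shots

# a heavy group gets proportionally more of the 1000-shot budget
print(distribute_shots([3.0, 1.0, 1.0], 1000))  # [600, 200, 200]
```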
http://arxiv.org/abs/2504.09814v1
DUDA: Distilled Unsupervised Domain Adaptation for Lightweight Semantic Segmentation
2025-04-14T02:30:18+00:00
Unsupervised Domain Adaptation (UDA) is essential for enabling semantic segmentation in new domains without requiring costly pixel-wise annotations. State-of-the-art (SOTA) UDA methods primarily use self-training with architecturally identical teacher and student networks, relying on Exponential Moving Average (EMA) updates. However, these approaches face substantial performance degradation with lightweight models due to inherent architectural inflexibility leading to low-quality pseudo-labels. To address this, we propose Distilled Unsupervised Domain Adaptation (DUDA), a novel framework that combines EMA-based self-training with knowledge distillation (KD). Our method employs an auxiliary student network to bridge the architectural gap between heavyweight and lightweight models for EMA-based updates, resulting in improved pseudo-label quality. DUDA employs a strategic fusion of UDA and KD, incorporating innovative elements such as gradual distillation from large to small networks, inconsistency loss prioritizing poorly adapted classes, and learning with multiple teachers. Extensive experiments across four UDA benchmarks demonstrate DUDA's superiority in achieving SOTA performance with lightweight models, often surpassing the performance of heavyweight models from other approaches.
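The EMA-based teacher update at the heart of such self-training frameworks can be sketched as follows; parameter names and scalar weights are illustrative stand-ins for network tensors, not DUDA's actual code.

```python
def ema_update(teacher, student, alpha=0.999):
    """Exponential Moving Average update: teacher <- alpha*teacher + (1-alpha)*student.

    `teacher` and `student` are dicts mapping parameter names to floats
    (stand-ins for weight tensors); the teacher drifts slowly toward the
    student, yielding more stable pseudo-labels than the raw student."""
    for name, w_student in student.items():
        teacher[name] = alpha * teacher[name] + (1.0 - alpha) * w_student
    return teacher

teacher = {"conv1": 0.0}
student = {"conv1": 1.0}
for _ in range(3):
    ema_update(teacher, student, alpha=0.5)
print(teacher["conv1"])  # 0.5 -> 0.75 -> 0.875 across the three updates
```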
http://arxiv.org/abs/2504.09815v1
Harvesting entanglement from the cylindrical gravitational wave spacetime
2025-04-14T02:32:14+00:00
We investigate the entanglement harvesting protocol within the context of cylindrical gravitational waves given first by Einstein and Rosen, focusing on the interactions between non-relativistic quantum systems and linearized quantum gravity. We study how two spatially separated detectors can extract entanglement from this spacetime in the presence of gravitational waves, which provides a precise quantification of the entanglement that can be harvested using these detectors. In particular, we obtain the relation between the harvested entanglement and the distance to the wave sources that emit gravitational waves, and analyze the detectability using quantum Fisher information. The enhanced detectability demonstrates the advantages of cylindrically symmetric gravitational waves.
http://arxiv.org/abs/2504.09816v1
Augmented Relevance Datasets with Fine-Tuned Small LLMs
2025-04-14T02:35:00+00:00
Building high-quality datasets and labeling query-document relevance are essential yet resource-intensive tasks, requiring detailed guidelines and substantial effort from human annotators. This paper explores the use of small, fine-tuned large language models (LLMs) to automate relevance assessment, with a focus on improving ranking models' performance by augmenting their training dataset. We fine-tuned small LLMs to enhance relevance assessments, thereby improving dataset creation quality for downstream ranking model training. Our experiments demonstrate that these fine-tuned small LLMs not only outperform certain closed source models on our dataset but also lead to substantial improvements in ranking model performance. These results highlight the potential of leveraging small LLMs for efficient and scalable dataset augmentation, providing a practical solution for search engine optimization.
http://arxiv.org/abs/2504.09817v1
Stiffness, strength, energy dissipation and reusability in heterogeneous architected polycrystals
2025-04-14T02:35:51+00:00
We design, fabricate and test heterogeneous architected polycrystals, composed of hard plastomers and soft elastomers, which simultaneously show outstanding mechanical resilience and energy dissipation. Grain boundaries that separate randomly oriented single crystalline grains are carefully designed, first enabling coherent connectivity and strength in the grain boundary regions throughout the polycrystalline network. By combining experiments and numerical simulations on 3D-printed prototypes, we show that the interplay between grain interiors and grain boundaries is responsible for the grain-size effects emerging in these architected materials, analogous to those in their atomic or metallic counterparts. Furthermore, direct visualization of inter- and intra-grain deformation and failure mechanisms at the macroscopic scale reveals that crystallographic texture throughout the polycrystalline aggregates plays a fundamental role in the key mechanical features of our new heterogeneous polycrystals. Our results show that the engineered grain boundaries and crystallographic texture not only modify the highly resilient yet dissipative global responses but also critically influence reusability in this new class of architected materials.
http://arxiv.org/abs/2504.09818v1
Transferable text data distillation by trajectory matching
2025-04-14T02:39:26+00:00
In the realm of large language models (LLMs), as model sizes increase, so do training costs, creating an urgent need to minimize the amount of data used in LLM training. Compared with data selection methods, data distillation aims to synthesize a small number of data samples that achieve the training effect of the full dataset, offering better flexibility. Despite its successes in computer vision, the discreteness of text data has hitherto stymied its exploration in natural language processing (NLP). In this work, we propose a method that learns pseudo prompt data based on trajectory matching and finds its nearest-neighbor ID to achieve cross-architecture transfer. During the distillation process, we introduce a regularization loss to improve the robustness of our distilled data. To the best of our knowledge, this is the first data distillation work suitable for text generation tasks such as instruction tuning. Evaluations on two benchmarks, including the ARC-Easy and MMLU instruction tuning datasets, establish the superiority of our distillation approach over the SOTA data selection method LESS. Furthermore, our method demonstrates good transferability across LLM architectures (e.g., OPT to Llama).
http://arxiv.org/abs/2504.09819v1
Density-based Object Detection in Crowded Scenes
2025-04-14T02:41:49+00:00
Compared with generic scenes, crowded scenes contain highly overlapped instances, which result in: 1) more ambiguous anchors during training of object detectors, and 2) more predictions being mistakenly suppressed in post-processing during inference. To address these problems, we propose two new strategies, density-guided anchors (DGA) and density-guided NMS (DG-NMS), which use object density maps to jointly compute optimal anchor assignments and re-weighting, as well as an adaptive NMS. Concretely, based on an unbalanced optimal transport (UOT) problem, the density owned by each ground-truth object is transported to each anchor position at a minimal transport cost. The density on anchors then comprises an instance-specific density distribution, from which DGA decodes the optimal anchor assignment and re-weighting strategy. Meanwhile, DG-NMS utilizes the predicted density map to adaptively adjust the NMS threshold and reduce mistaken suppressions. In the UOT, a novel overlap-aware transport cost is specifically designed for ambiguous anchors caused by overlapped neighboring objects. Extensive experiments on the challenging CrowdHuman and Citypersons datasets demonstrate that our proposed density-guided detector is effective and robust to crowdedness. The code and pre-trained models will be made available later.
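A density-adaptive NMS can be sketched as greedy suppression whose IoU threshold rises with the kept box's predicted local density, so crowded regions suppress less aggressively. This is an illustrative simplification, not the paper's exact DG-NMS.

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def adaptive_nms(boxes, scores, densities, base_thr=0.5):
    """Greedy NMS whose suppression threshold grows with the kept box's
    predicted local density (density-adaptive thresholding)."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep, suppressed = [], set()
    for i in order:
        if i in suppressed:
            continue
        keep.append(i)
        thr = max(base_thr, densities[i])  # raise threshold in dense regions
        for j in order:
            if j != i and j not in suppressed and iou(boxes[i], boxes[j]) > thr:
                suppressed.add(j)
    return keep

boxes = [(0, 0, 10, 10), (1, 0, 11, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
# low density: the two heavily overlapping boxes collapse into one detection
print(adaptive_nms(boxes, scores, densities=[0.0, 0.0, 0.0]))  # [0, 2]
# high density: the raised threshold keeps both overlapping boxes
print(adaptive_nms(boxes, scores, densities=[0.9, 0.9, 0.0]))  # [0, 1, 2]
```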
http://arxiv.org/abs/2504.09820v1
Finite-Precision Conjugate Gradient Method for Massive MIMO Detection
2025-04-14T02:46:05+00:00
The implementation of the conjugate gradient (CG) method for massive MIMO detection is computationally challenging, especially for a large number of users and correlated channels. In this paper, we propose a low computational complexity CG detection from a finite-precision perspective. First, we develop a finite-precision CG (FP-CG) detection to mitigate the computational bottleneck of each CG iteration and provide the attainable accuracy, convergence, and computational complexity analysis to reveal the impact of finite-precision arithmetic. A practical heuristic is presented to select suitable precisions. Then, to further reduce the number of iterations, we propose a joint finite-precision and block-Jacobi preconditioned CG (FP-BJ-CG) detection. The corresponding performance analysis is also provided. Finally, simulation results validate the theoretical insights and demonstrate the superiority of the proposed detection.
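For reference, the standard CG iteration being quantized looks like the sketch below; this uses ordinary Python floats throughout, i.e., it illustrates the baseline algorithm, not the paper's finite-precision arithmetic.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite A (list of rows)."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]              # residual r = b - A x (x starts at 0)
    p = r[:]              # search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# 2x2 SPD system [[4, 1], [1, 3]] x = [1, 2] has solution x = [1/11, 7/11]
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(x)  # approximately [0.0909, 0.6364]
```

Each iteration is dominated by the matrix-vector product `A p`, which is exactly where lowering the arithmetic precision pays off in a massive MIMO detector.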
http://arxiv.org/abs/2504.09821v1
Spontaneous Vectorization in the Einstein-Born-Infeld-Vector Model
2025-04-14T02:46:27+00:00
We investigate spontaneous vectorization in the Einstein-Born-Infeld-Vector (EBIV) model, where a massless vector field is nonminimally coupled to a nonlinear Born-Infeld (BI) electromagnetic field. This coupling results in an effective mass for the vector field in a Born-Infeld black hole (BIBH) background, triggering tachyonic instability. We numerically construct and analyze such vectorized Born-Infeld black holes (VBIBHs), focusing on their domain of existence, thermodynamic properties, and energy distributions in both Reissner-Nordstr\"om (RN)-like and Schwarzschild-like backgrounds. In RN-like BI backgrounds, vectorized solutions emerge from the perturbative instability threshold and persist down to extremality, exhibiting higher entropy and lower free energy compared to their unvectorized counterparts. Conversely, in Schwarzschild-like backgrounds, VBIBHs show bifurcation behavior with two coexisting solution branches, only one of which is thermodynamically favored. We reveal a contrasting energy redistribution pattern between the internal and external fields in the two regimes, governed by the competition between the vector field and the nonlinear BI field. Our findings highlight the rich structure of spontaneous vectorization in nonlinear electrodynamics and provide novel insights into black hole physics beyond linear Maxwell theory.
http://arxiv.org/abs/2504.09822v1
On the existence of parameterized noetherian rings
2025-04-14T02:46:43+00:00
A ring $R$ is called left strictly $(<\aleph_{\alpha})$-noetherian if $\aleph_{\alpha}$ is the minimum cardinal such that every ideal of $R$ is $(<\aleph_{\alpha})$-generated. In this note, we show that for every singular (resp., regular) cardinal $\aleph_{\alpha}$, there is a valuation domain $D$, which is strictly $(<\aleph_{\alpha})$-noetherian (resp., strictly $(<\aleph_{\alpha}^+)$-noetherian), positively answering a problem proposed in \cite{Marcos25} under some set theory assumption.
http://arxiv.org/abs/2504.09823v1
RAKG:Document-level Retrieval Augmented Knowledge Graph Construction
2025-04-14T02:47:23+00:00
With the rise of knowledge graph based retrieval-augmented generation (RAG) techniques such as GraphRAG and Pike-RAG, the role of knowledge graphs in enhancing the reasoning capabilities of large language models (LLMs) has become increasingly prominent. However, traditional Knowledge Graph Construction (KGC) methods face challenges like complex entity disambiguation, rigid schema definition, and insufficient cross-document knowledge integration. This paper focuses on the task of automatic document-level knowledge graph construction. It proposes the Document-level Retrieval Augmented Knowledge Graph Construction (RAKG) framework. RAKG extracts pre-entities from text chunks and utilizes these pre-entities as queries for RAG, effectively addressing the issue of long-context forgetting in LLMs and reducing the complexity of coreference resolution. In contrast to conventional KGC methods, RAKG more effectively captures global information and the interconnections among disparate nodes, thereby enhancing the overall performance of the model. Additionally, we transfer the RAG evaluation framework to the KGC field and filter and evaluate the generated knowledge graphs, thereby avoiding incorrectly generated entities and relationships caused by hallucinations in LLMs. We further developed the MINE dataset by constructing standard knowledge graphs for each article and experimentally validated the performance of RAKG. The results show that RAKG achieves an accuracy of 95.91% on the MINE dataset, a 6.2 percentage-point improvement over the current best baseline, GraphRAG (89.71%). The code is available at https://github.com/LMMApplication/RAKG.
http://arxiv.org/abs/2504.09824v1
Abacus-SQL: A Text-to-SQL System Empowering Cross-Domain and Open-Domain Database Retrieval
2025-04-14T02:49:54+00:00
Existing text-to-SQL systems have made significant progress in SQL query generation, but they still face numerous challenges. They often lack retrieval capabilities for open-domain databases, requiring users to manually filter relevant databases. Additionally, their cross-domain transferability is limited, making it challenging to accommodate diverse query requirements. To address these issues, we propose Abacus-SQL. Abacus-SQL utilizes database retrieval technology to accurately locate the required databases in an open-domain database environment. It also enhances the system's cross-domain transferability through data augmentation methods. Moreover, Abacus-SQL employs Pre-SQL and Self-debug methods, thereby enhancing the accuracy of SQL queries. Experimental results demonstrate that Abacus-SQL performs excellently in multi-turn text-to-SQL tasks, validating the effectiveness of the approach. Abacus-SQL is publicly accessible at https://huozi.8wss.com/abacus-sql/.
http://arxiv.org/abs/2504.09825v1
On relative fields of definition for log pairs, Vojta's height inequalities and asymptotic coordinate size dynamics
2025-04-14T02:50:23+00:00
We build on the perspective of the works \cite{Grieve:Noytaptim:fwd:orbits}, \cite{Matsuzawa:2023}, \cite{Grieve:qualitative:subspace}, \cite{Grieve:chow:approx}, \cite{Grieve:Divisorial:Instab:Vojta} (and others) and study the dynamical arithmetic complexity of rational points in projective varieties. Our main results make progress towards the attractive problem of asymptotic complexity of coordinate size dynamics in the sense formulated by Matsuzawa, in \cite[Question 1.1.2]{Matsuzawa:2023}, and building on earlier work of Silverman \cite{Silverman:1993}. A key tool to our approach here is a novel formulation of conjectural Vojta type inequalities for log canonical pairs and with respect to finite extensions of number fields. Among other features, these conjectured Diophantine arithmetic height inequalities raise the question of existence of log resolutions with respect to finite extensions of number fields which is another novel concept which we formulate in precise terms here and also which is of an independent interest.
http://arxiv.org/abs/2504.09826v1
Understanding the Baryon Stopping at the Relativistic Heavy Ion Collider
2025-04-14T02:51:29+00:00
The nucleon exhibits a rich internal structure governed by Quantum Chromodynamics (QCD), where its electric charge arises from valence quarks, while its spin and mass emerge from complex interactions among valence quarks, sea (anti-)quarks, and gluons. At the advent of QCD, an alternative hypothesis emerged suggesting that, at high energies, the transport of a nucleon's baryon number could be traced by a non-perturbative configuration of gluon fields connecting its three valence quarks, forming a $Y$-shaped topology known as the gluon junction. Recent measurements by the STAR experiment are compatible with this scenario. In light of these measurements, this study aims to explore the mechanisms of baryon transport in high-energy nuclear collisions using the PYTHIA-8 framework, which incorporates a state-of-the-art hadronization model with advanced Color Flow (CF) and Color Reconnection (CR) mechanisms that mimic signatures of a baryon junction. Within this model setup, we investigate (i) the rapidity slope of the net-baryon distributions in photon-induced processes ($\gamma$+p) and (ii) baryon over charge transport in the isobaric (Ru+Ru and Zr+Zr) collisions. Our study highlights the importance of the CF and CR mechanisms in PYTHIA-8, which play a crucial role in baryon transport. The results show that the CF and CR schemes significantly affect the isobaric baryon-to-charge ratio, leading to different predictions for baryon stopping and underscoring the need to account for CF and CR effects in comparisons with experimental measurements.
http://arxiv.org/abs/2504.09827v2
Redesign of Online Design Communities: Facilitating Personalized Visual Design Learning with Structured Comments
2025-04-14T02:53:08+00:00
Online Design Communities (ODCs) offer various artworks with members' comments for beginners to learn visual design. However, as identified by our Formative Study (N = 10), current ODCs lack features customized for personal learning purposes, e.g., searching artworks and digesting useful comments to learn design principles about buttons. In this paper, we present DesignLearner, a redesigned interface of ODCs to facilitate personalized visual design learning with comments structured based on UI components (e.g., button, text) and visual elements (e.g., color, contrast). In DesignLearner, learners can specify the UI components and visual elements that they wish to learn to filter artworks and associated comments. They can interactively read comments on an artwork, take notes, and get suggestions for the next artworks to explore. Our between-subjects study (N = 24) indicates that compared to a traditional ODC interface, DesignLearner can improve the user learning outcome and is deemed significantly more useful. We conclude with design considerations for customizing the interface of online communities to satisfy users' learning needs.
http://arxiv.org/abs/2504.09828v1
FATE: A Prompt-Tuning-Based Semi-Supervised Learning Framework for Extremely Limited Labeled Data
2025-04-14T02:54:28+00:00
Semi-supervised learning (SSL) has achieved significant progress by leveraging both labeled data and unlabeled data. Existing SSL methods overlook a common real-world scenario in which labeled data is extremely scarce, potentially as limited as a single labeled sample in the dataset. General SSL approaches struggle to train effectively from scratch under such constraints, while methods utilizing pre-trained models often fail to find an optimal balance between leveraging limited labeled data and abundant unlabeled data. To address this challenge, we propose Firstly Adapt, Then catEgorize (FATE), a novel SSL framework tailored for scenarios with extremely limited labeled data. At its core, FATE employs a two-stage prompt-tuning paradigm that exploits unlabeled data to compensate for scarce supervision signals before transferring to downstream tasks. Concretely, FATE first adapts a pre-trained model to the feature distribution of downstream data using volumes of unlabeled samples in an unsupervised manner. It then applies an SSL method specifically designed for pre-trained models to complete the final classification task. FATE is designed to be compatible with both vision and vision-language pre-trained models. Extensive experiments demonstrate that FATE effectively mitigates challenges arising from the scarcity of labeled samples in SSL, achieving an average performance improvement of 33.74% across seven benchmarks compared to state-of-the-art SSL methods. Code is available at https://anonymous.4open.science/r/Semi-supervised-learning-BA72.
http://arxiv.org/abs/2504.09829v1
$q$-Deformed Heisenberg Picture Equation
2025-04-14T02:56:11+00:00
In this paper we introduce the $q$-deformed Heisenberg picture equation. We consider some examples, such as the spinless particle, the electron interacting with a magnetic field, and the $q$-deformed harmonic oscillator. The $q$-deformed Heisenberg picture equation for an arbitrary dynamical function is given at the end of the paper.
http://arxiv.org/abs/2504.09830v1
Many-body localization properties of one-dimensional anisotropic spin-1/2 chains
2025-04-14T02:56:42+00:00
In this paper, we theoretically investigate the many-body localization (MBL) properties of one-dimensional anisotropic spin-1/2 chains by using the exact matrix diagonalization method. Starting from the Ising spin-1/2 chain, we introduce different forms of external fields and spin coupling interactions, and construct three distinct anisotropic spin-1/2 chain models. The influence of these interactions on the MBL phase transition is systematically explored. We first analyze the eigenstate properties by computing the excited-state fidelity. The results show that MBL phase transitions occur in all three models, and that both the anisotropy parameter and the finite system size significantly affect the critical disorder strength of the transition. Moreover, we calculated the bipartite entanglement entropy of the system, and the critical points determined by the intersection of curves for different system sizes are basically consistent with those obtained from the excited-state fidelity. Then, the dynamical characteristics of the systems are studied through the time evolution of diagonal entropy (DE), local magnetization, and fidelity. These observations further confirm the occurrence of the MBL phase transition and allow for a clear distinction between the ergodic (thermal) phase and the many-body localized phase. Finally, to examine the effect of additional interactions on the transition, we incorporate Dzyaloshinskii-Moriya (DM) interactions into the three models. The results demonstrate that the MBL phase transition still occurs in the presence of DM interactions. However, the anisotropy parameter and finite system size significantly affect the critical disorder strength. Moreover, the critical behavior is somewhat suppressed, indicating that DM interactions tend to inhibit the onset of localization.
http://arxiv.org/abs/2504.09831v1
Offline Dynamic Inventory and Pricing Strategy: Addressing Censored and Dependent Demand
2025-04-14T02:57:51+00:00
In this paper, we study the offline sequential feature-based pricing and inventory control problem where the current demand depends on past demand levels and any demand exceeding the available inventory is lost. Our goal is to leverage the offline dataset, consisting of past prices, ordering quantities, inventory levels, covariates, and censored sales levels, to estimate the optimal pricing and inventory control policy that maximizes long-term profit. While the underlying dynamic without censoring can be modeled by a Markov decision process (MDP), the primary obstacle arises from the observed process where demand censoring is present, resulting in missing profit information, the failure of the Markov property, and a non-stationary optimal policy. To overcome these challenges, we first approximate the optimal policy by solving a high-order MDP characterized by the number of consecutive censoring instances, which ultimately boils down to solving a specialized Bellman equation tailored for this problem. Inspired by offline reinforcement learning and survival analysis, we propose two novel data-driven algorithms to solve these Bellman equations and, thus, estimate the optimal policy. Furthermore, we establish finite sample regret bounds to validate the effectiveness of these algorithms. Finally, we conduct numerical experiments to demonstrate the efficacy of our algorithms in estimating the optimal policy. To the best of our knowledge, this is the first data-driven approach to learning optimal pricing and inventory control policies in a sequential decision-making environment characterized by censored and dependent demand. The implementations of the proposed algorithms are available at https://github.com/gundemkorel/Inventory_Pricing_Control
http://arxiv.org/abs/2504.09832v1
Quantum Entanglement between gauge boson pairs at a Muon Collider
2025-04-14T03:00:41+00:00
Quantum entanglement is one of the significant physics phenomena that can be examined at a particle collider. A muon collider can provide a stage for studying substantial physics, from precision measurements of the Standard Model and beyond to undiscovered areas of physics. In this work, we present a thorough study of quantum entanglement in $\mu^+\mu^-\to ZZ$ events at a future muon collider. By fixing the spin density matrix, observables quantifying entanglement between $Z$ boson pairs can be measured. After systematic Monte Carlo simulation and background analysis, we measure the values of the entanglement variables and perform hypothesis testing against the non-entangled hypothesis, finally observing the entanglement of the $ZZ$ system at the $2\sigma$ significance level.
http://arxiv.org/abs/2504.09833v1
PreCi: Pretraining and Continual Improvement of Humanoid Locomotion via Model-Assumption-Based Regularization
2025-04-14T03:02:02+00:00
Humanoid locomotion is a challenging task due to its inherent complexity and high-dimensional dynamics, as well as the need to adapt to diverse and unpredictable environments. In this work, we introduce a novel learning framework for effectively training a humanoid locomotion policy that imitates the behavior of a model-based controller while extending its capabilities to handle more complex locomotion tasks, such as more challenging terrain and higher velocity commands. Our framework consists of three key components: pre-training through imitation of the model-based controller, fine-tuning via reinforcement learning, and model-assumption-based regularization (MAR) during fine-tuning. In particular, MAR aligns the policy with actions from the model-based controller only in states where the model assumption holds to prevent catastrophic forgetting. We evaluate the proposed framework through comprehensive simulation tests and hardware experiments on a full-size humanoid robot, Digit, demonstrating a forward speed of 1.5 m/s and robust locomotion across diverse terrains, including slippery, sloped, uneven, and sandy terrains.
http://arxiv.org/abs/2504.09834v1
Mining for Lags in Updating Critical Security Threats: A Case Study of Log4j Library
2025-04-14T03:02:16+00:00
The Log4j-Core vulnerability, known as Log4Shell, exposed significant challenges to dependency management in software ecosystems. When a critical vulnerability is disclosed, it is imperative that dependent packages quickly adopt patched versions to mitigate risks. However, delays in applying these updates can leave client systems exposed to exploitation. Previous research has primarily focused on NPM, but there is a need for similar analysis in other ecosystems, such as Maven. Leveraging the 2025 mining challenge dataset of Java dependencies, we identify factors influencing update lags and categorize them based on version classification (major, minor, patch release cycles). Results show that lags exist, but projects with higher release cycle rates tend to address severe security issues more swiftly. In addition, over half of vulnerability fixes are implemented through patch updates, highlighting the critical role of incremental changes in maintaining software security. Our findings confirm that these lags also appear in the Maven ecosystem, even when migrating away from severe threats.
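The two quantities mined here, version-bump classification and adoption lag, can be sketched as below. The Log4j-Core 2.15.0 release date is real; the client adoption date is a hypothetical example.

```python
from datetime import date

def bump_type(old, new):
    """Classify a semantic-version change as major, minor, or patch."""
    o, n = [list(map(int, v.split("."))) for v in (old, new)]
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    return "patch"

def update_lag_days(fix_released, client_adopted):
    """Days between a patched release and a client actually adopting it."""
    return (client_adopted - fix_released).days

# Log4j-Core 2.15.0 (the first Log4Shell fix) was released on 2021-12-10
log4j_fix = date(2021, 12, 10)
print(bump_type("2.14.1", "2.15.0"))                  # minor
print(update_lag_days(log4j_fix, date(2022, 1, 24)))  # 45 days of exposure
```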
http://arxiv.org/abs/2504.09835v1
Laugh at Your Own Pace: Basic Performance Evaluation of Language Learning Assistance by Adjustment of Video Playback Speeds Based on Laughter Detection
2025-04-14T03:03:42+00:00
Among various methods to learn a second language (L2), such as listening and shadowing, Extensive Viewing involves learning L2 by watching many videos. However, it is difficult for many L2 learners to smoothly and effortlessly comprehend video contents made for native speakers at the original speed. Therefore, we developed a language learning assistance system that automatically adjusts the playback speed according to the learner's comprehension. Our system judges that learners understand the contents if they laugh at the punchlines of comedy dramas, and vice versa. Experimental results show that this system supports learners with relatively low L2 ability (under 700 in TOEIC Score in the experimental condition) to understand video contents. Our system can widen learners' possible options of native speakers' videos as Extensive Viewing material.
http://arxiv.org/abs/2504.09836v1
Score Matching Diffusion Based Feedback Control and Planning of Nonlinear Systems
2025-04-14T03:04:48+00:00
We propose a novel control-theoretic framework that leverages principles from generative modeling -- specifically, Denoising Diffusion Probabilistic Models (DDPMs) -- to stabilize control-affine systems with nonholonomic constraints. Unlike traditional stochastic approaches, which rely on noise-driven dynamics in both forward and reverse processes, our method crucially eliminates the need for noise in the reverse phase, making it particularly relevant for control applications. We introduce two formulations: one where noise perturbs all state dimensions during the forward phase while the control system enforces time reversal deterministically, and another where noise is restricted to the control channels, embedding system constraints directly into the forward process. For controllable nonlinear drift-free systems, we prove that deterministic feedback laws can exactly reverse the forward process, ensuring that the system's probability density evolves correctly without requiring artificial diffusion in the reverse phase. Furthermore, for linear time-invariant systems, we establish a time-reversal result under the second formulation. By eliminating noise in the backward process, our approach provides a more practical alternative to machine learning-based denoising methods, which are unsuitable for control applications due to the presence of stochasticity. We validate our results through numerical simulations on benchmark systems, including a unicycle model in a domain with obstacles, a driftless five-dimensional system, and a four-dimensional linear system, demonstrating the potential of diffusion-inspired techniques in linear and nonlinear settings, including those with state-space constraints.
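The standard DDPM forward process that this framework builds on admits a closed-form marginal: $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ with $\epsilon \sim \mathcal{N}(0,1)$. The scalar sketch below illustrates that standard forward noising, not the paper's control-specific formulations.

```python
import math, random

def ddpm_forward_sample(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) in closed form for a scalar state:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = 1.0
    for s in range(t):
        alpha_bar *= 1.0 - betas[s]   # alpha_bar_t = prod_s (1 - beta_s)
    eps = random.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps, alpha_bar

random.seed(0)
betas = [0.02] * 100
xt, abar = ddpm_forward_sample(2.0, 100, betas)
# after 100 steps the signal coefficient sqrt(alpha_bar) has decayed sharply,
# so x_t is dominated by noise; the reverse phase must undo this
print(round(abar, 4))
```

The paper's contribution is replacing the usual *stochastic* reversal of this process with deterministic feedback laws.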
http://arxiv.org/abs/2504.09837v1
Schoenberg type inequalities
2025-04-14T03:13:45+00:00
In the geometry of polynomials, Schoenberg's conjecture, now a theorem, is a quadratic inequality between the zeros and critical points of a polynomial whose centroid is at the origin. We call its higher-order extensions and generalizations Schoenberg type inequalities. While an inequality of order four has been previously established, little is known about other orders. In this paper, we present a Schoenberg type inequality of order six, as well as a novel inequality of order one, marking the first discovery in the odd-order case. These results partially answer an open problem posed by Kushel and Tyaglov. We also make a connection to Sendov's conjecture.
http://arxiv.org/abs/2504.09838v1
Broadband source-surrounded cloak for on-chip antenna radiation pattern protection
2025-04-14T03:17:30+00:00
As the frequency range of electromagnetic wave communication continues to expand and the integration of integrated circuits increases, electromagnetic waves emitted by on-chip antennas are prone to scattering from electronic components, which limits further improvements in integration and the protection of radiation patterns. Cloaks can be used to reduce electromagnetic scattering; however, they cannot achieve both broadband and omnidirectional effectiveness simultaneously. Moreover, their operating modes are typically designed for scenarios where the source is located outside the cloak, making it difficult to address this problem. In this work, we propose a dispersionless air-impedance-matched metamaterial over the 2-8 GHz bandwidth that achieves an adjustable effective refractive index ranging from 1.1 to 1.5, with transmittance maintained above 93%. Based on this metamaterial, we introduce a broadband source-surrounded cloak that can guide electromagnetic waves from a broadband source surrounded by the cloak in any propagation direction to bypass obstacles and reproduce the original wavefronts outside the cloak, thereby protecting the radiation pattern from distortion due to scattering caused by obstacles. Our work demonstrates significant potential for enhancing the integration density of integrated circuits and improving the operational stability of communication systems.
http://arxiv.org/abs/2504.09839v1
SafeSpeech: Robust and Universal Voice Protection Against Malicious Speech Synthesis
2025-04-14T03:21:23+00:00
Speech synthesis technology has brought great convenience, while the widespread usage of realistic deepfake audio has triggered hazards. Malicious adversaries may collect victims' speech without authorization and clone a similar voice for illegal exploitation (\textit{e.g.}, telecom fraud). However, the existing defense methods cannot effectively prevent deepfake exploitation and are vulnerable to robust training techniques. Therefore, a more effective and robust data protection method is urgently needed. In response, we propose a defensive framework, \textit{\textbf{SafeSpeech}}, which protects users' audio before uploading by embedding imperceptible perturbations on original speeches to prevent high-quality synthetic speech. In SafeSpeech, we devise a robust and universal proactive protection technique, \textbf{S}peech \textbf{PE}rturbative \textbf{C}oncealment (\textbf{SPEC}), that leverages a surrogate model to generate universally applicable perturbations for generative synthetic models. Moreover, we optimize the human perception of the embedded perturbation in the time and frequency domains. To evaluate our method comprehensively, we conduct extensive experiments across advanced models and datasets, both subjectively and objectively. Our experimental results demonstrate that SafeSpeech achieves state-of-the-art (SOTA) voice protection effectiveness and transferability and is highly robust against advanced adaptive adversaries. Moreover, SafeSpeech has real-time capability in real-world tests. The source code is available at \href{https://github.com/wxzyd123/SafeSpeech}{https://github.com/wxzyd123/SafeSpeech}.
http://arxiv.org/abs/2504.09840v1
Minimizing Eigenvalues of the Fractional Laplacian
2025-04-14T03:21:30+00:00
We study the minimizers of \begin{equation} \lambda_k^s(A) + |A| \end{equation} where $\lambda^s_k(A)$ is the $k$-th Dirichlet eigenvalue of the fractional Laplacian on $A$. Unlike in the case of the Laplacian, the free boundary of minimizers exhibit distinct global behavior. Our main results include: the existence of minimizers, optimal H\"older regularity for the corresponding eigenfunctions, and in the case where $\lambda_k$ is simple, non-degeneracy, density estimates, separation of the free boundary, and free boundary regularity. We propose a combinatorial toy problem related to the global configuration of such minimizers.
http://arxiv.org/abs/2504.09841v1
StruPhantom: Evolutionary Injection Attacks on Black-Box Tabular Agents Powered by Large Language Models
2025-04-14T03:22:04+00:00
The proliferation of autonomous agents powered by large language models (LLMs) has revolutionized popular business applications dealing with tabular data, i.e., tabular agents. Although LLMs are known to be vulnerable to prompt injection attacks from external data sources, tabular agents impose strict data formats and predefined rules on the attacker's payload, rendering such attacks ineffective unless the payload is carried through multiple layers of structured data into the agent. To address this challenge, we present a novel attack termed StruPhantom which specifically targets black-box LLM-powered tabular agents. Our attack designs an evolutionary optimization procedure which continually refines attack payloads via the proposed constrained Monte Carlo Tree Search augmented by an off-topic evaluator. StruPhantom helps systematically explore and exploit the weaknesses of target applications to achieve goal hijacking. Our evaluation validates the effectiveness of StruPhantom across various LLM-based agents and attack scenarios, including agents deployed on real-world platforms. Our attack achieves over 50% higher success rates than baselines in forcing the application's response to contain phishing links or malicious code.
http://arxiv.org/abs/2504.09842v1
Symphony of Symmetry Selective Resonances in Fe-MgO-ZnO-MgO-Fe
2025-04-14T03:27:22+00:00
We propose the perspective of symmetry-selective resonance of the $\Delta_1$ states in the Fe/MgO/ZnO/MgO/Fe heterostructures, offering a broad landscape to design magnetic tunnel junctions (MTJs) that yield a towering tunnel magnetoresistance (TMR) up to $3.5\times10^4\%$ with the resistance area (RA) product dipping down to a minimum of $0.05~\Omega\cdot\mu \text{m}^2$, while maintaining a nearly perfect (99\%) spin polarization. Our predictions are based on the self-consistent coupling of the non-equilibrium Green's function with density functional theory. We also present the charge current, spin current, and TMR with applied voltage of the Fe/MgO(3-layer)/ZnO(3-layer)/MgO(3-layer)/Fe MTJ, which offers a superior performance triad of TMR ($1.3\times10^4\%$), RA ($0.45~\Omega\cdot\mu \text{m}^2$), and spin polarization (99\%) over a regular Fe/MgO(6-layer)/Fe based MTJ (TMR $\approx 3.4\times10^3\%$, RA $\approx 22~\Omega\cdot\mu \text{m}^2$). We provide a comprehensive insight integrating the transmission eigenchannel, spectral density, and the band structure of the Fe contacts to establish the role of symmetry-selective resonance in the Fe/MgO/ZnO/MgO/Fe MTJ.
http://arxiv.org/abs/2504.09843v1
ST-Booster: An Iterative SpatioTemporal Perception Booster for Vision-and-Language Navigation in Continuous Environments
2025-04-14T03:29:08+00:00
Vision-and-Language Navigation in Continuous Environments (VLN-CE) requires agents to navigate unknown, continuous spaces based on natural language instructions. Compared to discrete settings, VLN-CE poses two core perception challenges. First, the absence of predefined observation points leads to heterogeneous visual memories and weakened global spatial correlations. Second, cumulative reconstruction errors in three-dimensional scenes introduce structural noise, impairing local feature perception. To address these challenges, this paper proposes ST-Booster, an iterative spatiotemporal booster that enhances navigation performance through multi-granularity perception and instruction-aware reasoning. ST-Booster consists of three key modules -- Hierarchical SpatioTemporal Encoding (HSTE), Multi-Granularity Aligned Fusion (MGAF), and Value-Guided Waypoint Generation (VGWG). HSTE encodes long-term global memory using topological graphs and captures short-term local details via grid maps. MGAF aligns these dual-map representations with instructions through geometry-aware knowledge fusion. The resulting representations are iteratively refined through pretraining tasks. During reasoning, VGWG generates Guided Attention Heatmaps (GAHs) to explicitly model environment-instruction relevance and optimize waypoint selection. Extensive comparative experiments and performance analyses are conducted, demonstrating that ST-Booster outperforms existing state-of-the-art methods, particularly in complex, disturbance-prone environments.
http://arxiv.org/abs/2504.09844v1
OVERLORD: Ultimate Scaling of DataLoader for Multi-Source Large Foundation Model Training
2025-04-14T03:31:22+00:00
Modern frameworks for training large foundation models (LFMs) employ data loaders in a data parallel paradigm. While this design offers implementation simplicity, it introduces two fundamental challenges. First, due to the quadratic computational complexity of the attention operator, the non-uniform sample distribution over data-parallel ranks leads to a significant workload imbalance among loaders, which degrades the training efficiency. This paradigm also impedes the implementation of data mixing algorithms (e.g., curriculum learning) over different datasets. Second, to acquire a broad range of capabilities, LFM training ingests data from diverse sources, each with distinct file access states. Colocating massive datasets within loader instances can easily exceed local pod memory capacity. Additionally, heavy sources with higher transformation latency require larger worker pools, further exacerbating memory consumption. We present OVERLORD, an industrial-grade distributed data loading architecture with three innovations: (1) A centralized and declarative data plane, which facilitates elastic data orchestration strategies, such as long-short context, multimodal, and curriculum learning; (2) Disaggregated multisource preprocessing through role-specific actors, i.e., Source Loaders and Data Constructors, leveraging autoscaling for Source Loaders towards heterogeneous and evolving source preprocessing costs; (3) Shadow Loaders with differential checkpointing for uninterrupted fault recovery. Deployed on production clusters scaling to multi-thousand GPUs, OVERLORD achieves: (1) 4.5x end-to-end training throughput improvement, (2) a minimum 3.6x reduction in CPU memory usage, with further improvements to be added in later experiments.
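The first challenge above, workload imbalance from quadratic attention cost, can be illustrated with a toy calculation. This is our own sketch, not OVERLORD code; the cost model is the simplest possible (cost proportional to sequence length squared):

```python
# Illustrative sketch: why non-uniform sequence lengths cause workload
# imbalance under naive data parallelism. Attention cost is modeled as
# quadratic in sequence length; numbers are made up for illustration.
def rank_cost(sample_lengths):
    return sum(n * n for n in sample_lengths)

rank0 = [512, 512, 512, 512]   # uniform samples, 2048 tokens total
rank1 = [64, 64, 64, 1856]     # same token budget, one long sample
# Both ranks hold 2048 tokens, yet rank1's quadratic cost dominates:
imbalance = rank_cost(rank1) / rank_cost(rank0)  # roughly 3.3x
```

Since every rank waits for the slowest one at each synchronization point, such skew directly throttles training throughput, which motivates centralized, length-aware data orchestration.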
http://arxiv.org/abs/2504.09845v1
Simultaneous Multiphoton-Multiatom Processes in Atomic Gases and Their Application in Enhancing Ultraweak Atomic Absorption Transitions
2025-04-14T03:32:34+00:00
We investigate simultaneous multiphoton-multiatom (MPMA) processes in atomic gases subjected to laser fields. Our study reveals that the composite factor governing the transition rate of these processes can reach extraordinarily high magnitudes, with an intrinsic regulation mechanism causing the rate to exhibit near-saturation behavior. By integrating an MPMA process into an ultraweak atomic absorption transition, a substantial enhancement of the overall transition rate can be achieved. This enhancement enables the detection of transitions that would otherwise remain undetectable, thereby opening new avenues for exploring ultraweak quantum phenomena in atomic systems.
http://arxiv.org/abs/2504.09846v1
GlyTwin: Digital Twin for Glucose Control in Type 1 Diabetes Through Optimal Behavioral Modifications Using Patient-Centric Counterfactuals
2025-04-14T03:32:39+00:00
Frequent and long-term exposure to hyperglycemia (i.e., high blood glucose) increases the risk of chronic complications such as neuropathy, nephropathy, and cardiovascular disease. Current technologies like continuous subcutaneous insulin infusion (CSII) and continuous glucose monitoring (CGM) primarily model specific aspects of glycemic control, such as hypoglycemia prediction or insulin delivery. Similarly, most digital twin approaches in diabetes management simulate only physiological processes. These systems lack the ability to offer alternative treatment scenarios that support proactive behavioral interventions. To address this, we propose GlyTwin, a novel digital twin framework that uses counterfactual explanations to simulate optimal treatments for glucose regulation. Our approach helps patients and caregivers modify behaviors like carbohydrate intake and insulin dosing to avoid abnormal glucose events. GlyTwin generates behavioral treatment suggestions that proactively prevent hyperglycemia by recommending small adjustments to daily choices, reducing both the frequency and duration of these events. Additionally, it incorporates stakeholder preferences into the intervention design, making recommendations patient-centric and tailored. We evaluate GlyTwin on AZT1D, a newly constructed dataset with longitudinal data from 21 type 1 diabetes (T1D) patients on automated insulin delivery systems over 26 days. Results show GlyTwin outperforms state-of-the-art counterfactual methods, generating 76.6% valid and 86% effective interventions. These findings demonstrate the promise of counterfactual-driven digital twins in delivering personalized healthcare.
http://arxiv.org/abs/2504.09847v1
$\mathbb{Z}_N$ generalizations of three-dimensional stabilizer codes
2025-04-14T03:37:52+00:00
In this work, we generalize several three-dimensional $\mathbb{Z}_2$ stabilizer models--including the X-cube model, the three-dimensional toric code, and Haah's code--to their $\mathbb{Z}_N$ counterparts. Under periodic boundary conditions, we analyze their ground state degeneracies and topological excitations, and uncover behaviors that strongly depend on system size. For the X-cube model, we identify excitations with mobility restricted under local operations but relaxed under nonlocal ones derived from global topology. These excitations, previously confined to open boundaries in the $\mathbb{Z}_2$ model, now appear even under periodic boundaries. In the toric code, we observe nontrivial braiding between string and point excitations despite the absence of ground state degeneracy, indicating long-range entanglement independent of topological degeneracy. Again, this effect extends from open to periodic boundaries in the generalized models. For Haah's code, we find new excitations--fracton tripoles and monopoles--that remain globally constrained, along with a relaxation of immobility giving rise to lineons and planons. These results reveal new forms of topological order and suggest a broader framework for understanding fracton phases beyond the conventional $\mathbb{Z}_2$ setting.
http://arxiv.org/abs/2504.09848v1
A Survey of Large Language Model-Powered Spatial Intelligence Across Scales: Advances in Embodied Agents, Smart Cities, and Earth Science
2025-04-14T03:38:31+00:00
Over the past year, the development of large language models (LLMs) has brought spatial intelligence into focus, with much attention on vision-based embodied intelligence. However, spatial intelligence spans a broader range of disciplines and scales, from navigation and urban planning to remote sensing and earth science. What are the differences and connections between spatial intelligence across these fields? In this paper, we first review human spatial cognition and its implications for spatial intelligence in LLMs. We then examine spatial memory, knowledge representations, and abstract reasoning in LLMs, highlighting their roles and connections. Finally, we analyze spatial intelligence across scales -- from embodied to urban and global levels -- following a framework that progresses from spatial memory and understanding to spatial reasoning and intelligence. Through this survey, we aim to provide insights into interdisciplinary spatial intelligence research and inspire future studies.
http://arxiv.org/abs/2504.09849v1
CKMImageNet: A Dataset for AI-Based Channel Knowledge Map Towards Environment-Aware Communication and Sensing
2025-04-14T03:40:35+00:00
With the increasing demand for real-time channel state information (CSI) in sixth-generation (6G) mobile communication networks, channel knowledge map (CKM) emerges as a promising technique, offering a site-specific database that enables environment-awareness and significantly enhances communication and sensing performance by leveraging a priori wireless channel knowledge. However, efficient construction and utilization of CKMs require high-quality, massive, and location-specific channel knowledge data that accurately reflects the real-world environments. Inspired by the great success of ImageNet dataset in advancing computer vision and image understanding in artificial intelligence (AI) community, we introduce CKMImageNet, a dataset developed to bridge AI and environment-aware wireless communications and sensing by integrating location-specific channel knowledge data, high-fidelity environmental maps, and their visual representations. CKMImageNet supports a wide range of AI-driven approaches for CKM construction with spatially consistent and location-specific channel knowledge data, including both supervised and unsupervised, as well as discriminative and generative AI methods.
http://arxiv.org/abs/2504.09850v1
Accelerating Differentially Private Federated Learning via Adaptive Extrapolation
2025-04-14T03:43:27+00:00
The federated learning (FL) framework enables multiple clients to collaboratively train machine learning models without sharing their raw data, but it remains vulnerable to privacy attacks. One promising approach is to incorporate differential privacy (DP)-a formal notion of privacy-into the FL framework. DP-FedAvg is one of the most popular algorithms for DP-FL, but it is known to suffer from the slow convergence in the presence of heterogeneity among clients' data. Most of the existing methods to accelerate DP-FL require 1) additional hyperparameters or 2) additional computational cost for clients, which is not desirable since 1) hyperparameter tuning is computationally expensive and data-dependent choice of hyperparameters raises the risk of privacy leakage, and 2) clients are often resource-constrained. To address this issue, we propose DP-FedEXP, which adaptively selects the global step size based on the diversity of the local updates without requiring any additional hyperparameters or client computational cost. We show that DP-FedEXP provably accelerates the convergence of DP-FedAvg and it empirically outperforms existing methods tailored for DP-FL.
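The abstract's key idea, choosing the global step size from the diversity of local updates, can be sketched in the spirit of the FedExP-style server step-size rule. The exact DP-FedEXP rule may differ; the function below is our hedged illustration of the general mechanism, with updates represented as plain lists:

```python
# Hedged sketch of a FedExP-style adaptive global step size: when local
# updates disagree (high diversity), their average shrinks, and the
# server extrapolates with a larger step. Not the paper's exact rule.
def adaptive_global_step(local_updates, eps=1e-8):
    m = len(local_updates)
    sq_norms = [sum(x * x for x in u) for u in local_updates]
    avg = [sum(col) / m for col in zip(*local_updates)]
    avg_sq = sum(x * x for x in avg)
    return max(1.0, sum(sq_norms) / (2 * m * (avg_sq + eps)))
```

With identical local updates the rule falls back to a step size of 1 (plain FedAvg), while strongly conflicting updates yield a step size well above 1, all without introducing a tunable hyperparameter or extra client computation.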
http://arxiv.org/abs/2504.09851v1
Carbon-Efficient 3D DNN Acceleration: Optimizing Performance and Sustainability
2025-04-14T03:48:37+00:00
As Deep Neural Networks (DNNs) continue to drive advancements in artificial intelligence, the design of hardware accelerators faces growing concerns over embodied carbon footprint due to complex fabrication processes. 3D integration improves performance but introduces sustainability challenges, making carbon-aware optimization essential. In this work, we propose a carbon-efficient design methodology for 3D DNN accelerators, leveraging approximate computing and genetic algorithm-based design space exploration to optimize Carbon Delay Product (CDP). By integrating area-efficient approximate multipliers into Multiply-Accumulate (MAC) units, our approach effectively reduces silicon area and fabrication overhead while maintaining high computational accuracy. Experimental evaluations across three technology nodes (45nm, 14nm, and 7nm) show that our method reduces embodied carbon by up to 30% with negligible accuracy drop.
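The Carbon Delay Product (CDP) objective named in the abstract is a straightforward product metric; the toy numbers below are placeholders, not measurements from the paper:

```python
# Toy illustration of the Carbon Delay Product (CDP) objective.
# Carbon and delay values are invented placeholders.
def carbon_delay_product(embodied_carbon_kgco2e, delay_s):
    return embodied_carbon_kgco2e * delay_s

# An approximate-multiplier design that cuts silicon area (hence
# embodied carbon) by 30% at unchanged delay reduces CDP by 30%:
baseline = carbon_delay_product(100.0, 2.0)  # 200.0
approx   = carbon_delay_product(70.0, 2.0)   # 140.0
```

Optimizing the product rather than carbon alone prevents the search from accepting designs whose area savings come at a disproportionate latency cost.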
http://arxiv.org/abs/2504.09852v1
GFT: Gradient Focal Transformer
2025-04-14T03:49:06+00:00
Fine-Grained Image Classification (FGIC) remains a complex task in computer vision, as it requires models to distinguish between categories with subtle localized visual differences. Well-studied CNN-based models, while strong in local feature extraction, often fail to capture the global context required for fine-grained recognition, while more recent ViT-backboned models address FGIC with attention-driven mechanisms but lack the ability to adaptively focus on truly discriminative regions. TransFG and other ViT-based extensions introduced part-aware token selection to enhance attention localization, yet they still struggle with computational efficiency, attention region selection flexibility, and detail-focus narrative in complex environments. This paper introduces GFT (Gradient Focal Transformer), a new ViT-derived framework created for FGIC tasks. GFT integrates the Gradient Attention Learning Alignment (GALA) mechanism to dynamically prioritize class-discriminative features by analyzing attention gradient flow. Coupled with a Progressive Patch Selection (PPS) strategy, the model progressively filters out less informative regions, reducing computational overhead while enhancing sensitivity to fine details. GFT achieves SOTA accuracy on FGVC Aircraft, Food-101, and COCO datasets with 93M parameters, outperforming ViT-based advanced FGIC models in efficiency. By bridging global context and localized detail extraction, GFT sets a new benchmark in fine-grained recognition, offering interpretable solutions for real-world deployment scenarios.
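The progressive filtering idea behind PPS can be illustrated at a high level: score each patch, keep only the top fraction, and repeat across stages. This is our simplified sketch; GFT's actual mechanism operates on ViT attention-gradient flow, not on precomputed scalar scores:

```python
# Hypothetical sketch of gradient-guided progressive patch selection.
# In GFT, scores would come from attention gradients (GALA); here they
# are given scalars purely for illustration.
def progressive_select(patch_scores, keep_ratio=0.5):
    """Return indices of the highest-scoring fraction of patches."""
    k = max(1, int(len(patch_scores) * keep_ratio))
    ranked = sorted(range(len(patch_scores)),
                    key=lambda i: patch_scores[i], reverse=True)
    return sorted(ranked[:k])

scores = [0.1, 0.9, 0.3, 0.7]      # e.g. per-patch gradient magnitudes
kept = progressive_select(scores)  # the two most informative patches
```

Applying the selection stage-by-stage shrinks the token set fed to later layers, which is how such schemes trade a small amount of context for lower computational overhead.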
http://arxiv.org/abs/2504.09853v1
Principal Subsimplex Analysis
2025-04-14T03:50:05+00:00
Compositional data, also referred to as simplicial data, naturally arise in many scientific domains such as geochemistry, microbiology, and economics. In such domains, obtaining sensible lower-dimensional representations and modes of variation plays an important role. A typical approach to the problem is applying a log-ratio transformation followed by principal component analysis (PCA). However, this approach has several well-known weaknesses: it amplifies variation in minor variables; it can obscure important variation within major elements; it is not directly applicable to data sets containing zeros and zero imputation methods give highly variable results; it has limited ability to capture linear patterns present in compositional data. In this paper, we propose novel methods that produce nested sequences of simplices of decreasing dimensions analogous to backwards principal component analysis. These nested sequences offer both interpretable lower dimensional representations and linear modes of variation. In addition, our methods are applicable to data sets containing zeros without any modification. We demonstrate our methods on simulated data and on relative abundances of diatom species during the late Pliocene. Supplementary materials and R implementations for this article are available online.
http://arxiv.org/abs/2504.09854v1
To Buy an Electric Vehicle or Not? A Bayesian Analysis of Consumer Intent in the United States
2025-04-14T03:53:05+00:00
The adoption of electric vehicles (EVs) is considered critical to achieving climate goals, yet it hinges on consumer interest. This study explores how public intent to purchase EVs relates to four unexamined factors: exposure to EV information, perceptions of EVs' environmental benefits, views on government climate policy, and confidence in future EV infrastructure; while controlling for prior EV ownership, political affiliation, and demographic characteristics (e.g., age, gender, education, and geographic location). We utilize data from three nationally representative opinion polls conducted by the Pew Research Center between 2021 and 2023, and employ Bayesian techniques to estimate the ordinal probit and ordinal quantile models. Results from ordinal probit show that respondents who are well-informed about EVs, perceive them as environmentally beneficial, or are confident in the development of charging stations are more likely to express strong interest in buying an EV, with covariate effects--a metric rarely reported in EV research--of 10.2, 15.5, and 19.1 percentage points, respectively. In contrast, those skeptical of government climate initiatives are more likely to express no interest, by more than 10 percentage points. Prior EV ownership exhibits the highest covariate effect (ranging from 19.0 to 23.1 percentage points), and the impact of most demographic variables is consistent with existing studies. The ordinal quantile models demonstrate significant variation in covariate effects across the distribution of EV purchase intent, offering insights beyond the ordinal probit model. This article is the first to use quantile modeling to reveal how covariate effects differ significantly throughout the spectrum of EV purchase intent.
http://arxiv.org/abs/2504.09855v1
PestMA: LLM-based Multi-Agent System for Informed Pest Management
2025-04-14T03:53:59+00:00
Effective pest management is complex due to the need for accurate, context-specific decisions. Recent advancements in large language models (LLMs) open new possibilities for addressing these challenges by providing sophisticated, adaptive knowledge acquisition and reasoning. However, existing LLM-based pest management approaches often rely on a single-agent paradigm, which can limit their capacity to incorporate diverse external information, engage in systematic validation, and address complex, threshold-driven decisions. To overcome these limitations, we introduce PestMA, an LLM-based multi-agent system (MAS) designed to generate reliable and evidence-based pest management advice. Building on an editorial paradigm, PestMA features three specialized agents, an Editor for synthesizing pest management recommendations, a Retriever for gathering relevant external data, and a Validator for ensuring correctness. Evaluations on real-world pest scenarios demonstrate that PestMA achieves an initial accuracy of 86.8% for pest management decisions, which increases to 92.6% after validation. These results underscore the value of collaborative agent-based workflows in refining and validating decisions, highlighting the potential of LLM-based multi-agent systems to automate and enhance pest management processes.
http://arxiv.org/abs/2504.09856v1
Estimate for the first Dirichlet eigenvalue of $p-$Laplacian on non-compact manifolds
2025-04-14T03:55:56+00:00
In this paper, we establish a sharp lower bound for the first Dirichlet eigenvalue of the $p$-Laplacian on bounded domains of a complete, non-compact Riemannian manifold with non-negative Ricci curvature.
http://arxiv.org/abs/2504.09857v1
Working with Large Language Models to Enhance Messaging Effectiveness for Vaccine Confidence
2025-04-14T04:06:46+00:00
Vaccine hesitancy and misinformation are significant barriers to achieving widespread vaccination coverage. Smaller public health departments may lack the expertise or resources to craft effective vaccine messaging. This paper explores the potential of ChatGPT-augmented messaging to promote confidence in vaccination uptake. We conducted a survey in which participants chose between pairs of vaccination messages and assessed which was more persuasive and to what extent. In each pair, one message was the original, and the other was augmented by ChatGPT. At the end of the survey, participants were informed that half of the messages had been generated by ChatGPT. They were then asked to provide both quantitative and qualitative responses regarding how knowledge of a message's ChatGPT origin affected their impressions. Overall, ChatGPT-augmented messages were rated slightly higher than the original messages. These messages generally scored better when they were longer. Respondents did not express major concerns about ChatGPT-generated content, nor was there a significant relationship between participants' views on ChatGPT and their message ratings. Notably, there was a correlation between whether a message appeared first or second in a pair and its score. These results point to the potential of ChatGPT to enhance vaccine messaging, suggesting a promising direction for future research on human-AI collaboration in public health communication.
http://arxiv.org/abs/2504.09858v1
Reasoning Models Can Be Effective Without Thinking
2025-04-14T04:08:16+00:00
Recent LLMs have significantly improved reasoning capabilities, primarily by including an explicit, lengthy Thinking process as part of generation. In this paper, we question whether this explicit thinking is necessary. Using the state-of-the-art DeepSeek-R1-Distill-Qwen, we find that bypassing the thinking process via simple prompting, denoted as NoThinking, can be surprisingly effective. When controlling for the number of tokens, NoThinking outperforms Thinking across a diverse set of seven challenging reasoning datasets--including mathematical problem solving, formal theorem proving, and coding--especially in low-budget settings, e.g., 51.3 vs. 28.9 on AMC 23 with 700 tokens. Notably, the performance of NoThinking becomes more competitive with pass@k as k increases. Building on this observation, we demonstrate that a parallel scaling approach that uses NoThinking to generate N outputs independently and aggregates them is highly effective. For aggregation, we use task-specific verifiers when available, or we apply simple best-of-N strategies such as confidence-based selection. Our method outperforms a range of baselines with similar latency using Thinking, and is comparable to Thinking with significantly longer latency (up to 9x). Together, our research encourages a reconsideration of the necessity of lengthy thinking processes, while also establishing a competitive reference for achieving strong reasoning performance in low-budget settings or at low latency using parallel scaling.
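The confidence-based best-of-N aggregation mentioned above reduces to a one-liner once candidates carry a confidence score. The sketch below is our own illustration (the paper also uses task-specific verifiers when available); the data format is an assumption:

```python
# Minimal sketch of confidence-based best-of-N selection over N
# independent NoThinking samples. The (answer, confidence) format
# is assumed for illustration.
def best_of_n(candidates):
    """Return the answer from the most confident sample."""
    return max(candidates, key=lambda c: c[1])[0]

samples = [("42", 0.61), ("41", 0.22), ("42", 0.83)]
answer = best_of_n(samples)
```

Because the N samples are generated independently, this aggregation parallelizes trivially, which is what keeps the latency close to a single short generation.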
http://arxiv.org/abs/2504.09859v1
Can VLMs Assess Similarity Between Graph Visualizations?
2025-04-14T04:08:27+00:00
Graph visualizations have been studied for tasks such as clustering and temporal analysis, but how these visual similarities relate to established graph similarity measures remains unclear. In this paper, we explore the potential of Vision Language Models (VLMs) to approximate human-like perception of graph similarity. We generate graph datasets of various sizes and densities and compare VLM-derived visual similarity scores with feature-based measures. Our findings indicate VLMs can assess graph similarity in a manner similar to feature-based measures, even though differences among the measures exist. In future work, we plan to extend our research by conducting experiments on human visual graph perception.
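A feature-based similarity measure of the kind the VLM scores are compared against can be as simple as a cosine over degree histograms. This is our illustrative stand-in; the paper's actual feature measures may differ:

```python
# Illustrative feature-based graph similarity: cosine similarity of
# degree histograms. A simple stand-in for the feature-based measures
# mentioned in the abstract, not necessarily those the paper uses.
import math

def degree_histogram(edges, n_nodes):
    deg = [0] * n_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0] * n_nodes  # hist[d] = number of nodes with degree d
    for d in deg:
        hist[d] += 1
    return hist

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two 4-node graphs: a path and a star.
path = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
sim = cosine(degree_histogram(path, 4), degree_histogram(star, 4))
```

Comparing such scores against VLM-derived visual similarity is precisely the kind of correlation study the abstract describes.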
http://arxiv.org/abs/2504.09860v1
SUMART: SUMmARizing Translation from Wordy to Concise Expression
2025-04-14T04:13:09+00:00
We propose SUMART, a method for summarizing and compressing the volume of verbose subtitle translations. SUMART is designed for understanding translated captions (e.g., interlingual conversations via subtitle translation, or watching movies with foreign-language audio and translated captions). SUMART is intended for users who want a big-picture and fast understanding of conversation, audio, video content, and speech in a foreign language. During training data collection, when a speaker makes a verbose statement, SUMART employs a large language model on-site to compress the volume of subtitles. This compressed data is then stored in a database for fine-tuning purposes. Later, SUMART uses pairs of the non-compressed ASR results and the compressed translated results to fine-tune the translation model to generate more concise translations for practical use. In practical applications, SUMART utilizes this trained model to produce concise translation results. Furthermore, as a practical application, we developed an application that allows conversations using subtitle translation in augmented reality spaces. As a pilot study, we conducted qualitative surveys using a SUMART prototype and a survey on the summarization model for SUMART. We envision the most effective use case of this system is where users need to consume a lot of information quickly (e.g., speeches, lectures, podcasts, Q&A sessions at conferences).
http://arxiv.org/abs/2504.09861v1
EthosGPT: Mapping Human Value Diversity to Advance Sustainable Development Goals (SDGs)
2025-04-14T04:14:13+00:00
Large language models (LLMs) are transforming global decision-making and societal systems by processing diverse data at unprecedented scales. However, their potential to homogenize human values poses critical risks, similar to biodiversity loss undermining ecological resilience. Rooted in the ancient Greek concept of ethos, meaning both individual character and the shared moral fabric of communities, EthosGPT draws on a tradition that spans from Aristotle's virtue ethics to Adam Smith's moral sentiments as the ethical foundation of economic cooperation. These traditions underscore the vital role of value diversity in fostering social trust, institutional legitimacy, and long-term prosperity. EthosGPT addresses the challenge of value homogenization by introducing an open-source framework for mapping and evaluating LLMs within a global scale of human values. Using international survey data on cultural indices, prompt-based assessments, and comparative statistical analyses, EthosGPT reveals both the adaptability and biases of LLMs across regions and cultures. It offers actionable insights for developing inclusive LLMs, such as diversifying training data and preserving endangered cultural heritage to ensure representation in AI systems. These contributions align with the United Nations Sustainable Development Goals (SDGs), especially SDG 10 (Reduced Inequalities), SDG 11.4 (Cultural Heritage Preservation), and SDG 16 (Peace, Justice and Strong Institutions). Through interdisciplinary collaboration, EthosGPT promotes AI systems that are both technically robust and ethically inclusive, advancing value plurality as a cornerstone for sustainable and equitable futures.