Fields: url (string), title (string), date_published (2025-03-20 00:07:06 to 2025-04-17 04:46:57), abstract (string)
http://arxiv.org/abs/2504.11671v1
Steering Prosocial AI Agents: Computational Basis of LLM's Decision Making in Social Simulation
2025-04-16T00:02:28+00:00
Large language models (LLMs) increasingly serve as human-like decision-making agents in social science and applied settings. These LLM-agents are typically assigned human-like characters and placed in real-life contexts. However, how these characters and contexts shape an LLM's behavior remains underexplored. This study proposes and tests methods for probing, quantifying, and modifying an LLM's internal representations in a Dictator Game -- a classic behavioral experiment on fairness and prosocial behavior. We extract ``vectors of variable variations'' (e.g., ``male'' to ``female'') from the LLM's internal state. Manipulating these vectors during the model's inference can substantially alter how those variables relate to the model's decision-making. This approach offers a principled way to study and regulate how social concepts can be encoded and engineered within transformer-based models, with implications for alignment, debiasing, and designing AI agents for social simulations in both academic and commercial applications.
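A minimal sketch of the core idea, under assumptions: a "vector of variable variation" is estimated as the difference of mean hidden states for two sets of character prompts and then added to the hidden state during inference. `get_hidden_states` is a hypothetical helper standing in for extracting a layer's activations from the LLM; the prompts, dimensions, and scale below are illustrative only.

```python
import numpy as np

def variation_vector(get_hidden_states, prompts_a, prompts_b):
    """Difference of mean hidden states for two character descriptions,
    e.g. a "male" -> "female" direction."""
    ha = np.mean([get_hidden_states(p) for p in prompts_a], axis=0)
    hb = np.mean([get_hidden_states(p) for p in prompts_b], axis=0)
    return hb - ha

def steer(hidden, direction, alpha=1.0):
    """Add the variation vector to a hidden state during inference."""
    return hidden + alpha * direction

# usage with stand-in activations (a real setup would hook a transformer layer)
rng = np.random.default_rng(0)
mock_states = lambda prompt: rng.standard_normal(16)
v = variation_vector(mock_states, ["I am a man.", "He is male."],
                     ["I am a woman.", "She is female."])
print(steer(np.zeros(16), v, alpha=2.0)[:4])
```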
http://arxiv.org/abs/2504.11672v1
Extended scenarios for solar radio emissions with downshifted electron beam plasma excitations
2025-04-16T00:09:40+00:00
First-principles studies of radiative processes aimed at explaining the origin of type II and type III solar radio bursts raise questions on the implications of downshifted electron beam plasma excitations with frequency (slightly) below the plasma frequency ($\omega\lesssim\omega_{pe}$) in the generation of radio emissions. Unlike the beam-induced Langmuir waves ($\omega \gtrsim \omega_{pe}$) in the standard radio emission plasma model, the primary wave excitations of cooler and/or denser beams have predominantly downshifted frequencies. Broadbands of such downshifted excitations are also confirmed by in situ observations in association with terrestrial foreshock and electron beams (in contrast to narrowband Langmuir waves), but their involvement in radiative processes has not been examined so far. We revisit three radiative scenarios specific to downshifted primary excitations, and the results demonstrate their direct or indirect involvement in plasma radio emission. Downshifted excitations of an electron beam primarily play an indirect role, contributing to the relaxation to a plateau-on-tail still able to induce Langmuir beam waves that satisfy conditions for nonlinear wave-wave interactions leading to free radio waves. At longer time scales, the primary excitations can become predominantly downshifted, and then directly couple with the secondary (backscattered) Langmuir waves to generate the second harmonic of radio emissions. Two counterbeams are more efficient and lead to faster radiative mechanisms, involving counterpropagating downshifted excitations, which couple to each other and generate intense, broadband and isotropic radio spectra of downshifted second harmonics. Such a long-lasting (second) radio harmonic can thus be invoked to distinguish regimes with downshifted ($\omega \lesssim \omega_{pe}$) primary excitations.
http://arxiv.org/abs/2504.11673v1
Higher-Order Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions
2025-04-16T00:10:34+00:00
Large language models (LLMs) are increasingly capable of simulating human behavior, offering cost-effective ways to estimate user responses during the early phases of survey design. While previous studies have examined whether models can reflect individual opinions or attitudes, we argue that a \emph{higher-order} binding of virtual personas requires successfully approximating not only the opinions of a user as an identified member of a group, but also the nuanced ways in which that user perceives and evaluates those outside the group. In particular, faithfully simulating how humans perceive different social groups is critical for applying LLMs to various political science studies, including timely topics on polarization dynamics, inter-group conflict, and democratic backsliding. To this end, we propose a novel methodology for constructing virtual personas with synthetic user ``backstories'' generated as extended, multi-turn interview transcripts. Our generated backstories are longer, rich in detail, and consistent in authentically describing a singular individual, compared to previous methods. We show that virtual personas conditioned on our backstories closely replicate human response distributions (up to an 87\% improvement as measured by Wasserstein Distance) and produce effect sizes that closely match those observed in the original studies. Altogether, our work extends the applicability of LLMs beyond estimating individual self-opinions, enabling their use in a broader range of human studies.
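As a toy illustration of the evaluation metric mentioned above (not the paper's data), the 1-D Wasserstein distance between a simulated response distribution and human survey responses can be computed directly; all numbers below are made up for the example.

```python
from scipy.stats import wasserstein_distance

# illustrative responses on a 1-7 survey scale
human    = [1, 2, 2, 3, 4, 4, 5, 6, 7, 7]
personas = [1, 2, 3, 3, 4, 5, 5, 6, 6, 7]   # backstory-conditioned personas
baseline = [4, 4, 4, 4, 5, 5, 5, 5, 4, 5]   # a flatter baseline simulation

d_new  = wasserstein_distance(human, personas)
d_base = wasserstein_distance(human, baseline)
print(d_new, d_base, f"relative improvement: {100 * (1 - d_new / d_base):.0f}%")
```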
http://arxiv.org/abs/2504.11674v1
DM-OSVP++: One-Shot View Planning Using 3D Diffusion Models for Active RGB-Based Object Reconstruction
2025-04-16T00:14:52+00:00
Active object reconstruction is crucial for many robotic applications. A key aspect in these scenarios is generating object-specific view configurations to obtain informative measurements for reconstruction. One-shot view planning enables efficient data collection by predicting all views at once, eliminating the need for time-consuming online replanning. Our primary insight is to leverage the generative power of 3D diffusion models as valuable prior information. By conditioning on initial multi-view images, we exploit the priors from the 3D diffusion model to generate an approximate object model, serving as the foundation for our view planning. Our novel approach integrates the geometric and textural distributions of the object model into the view planning process, generating views that focus on the complex parts of the object to be reconstructed. We validate the proposed active object reconstruction system through both simulation and real-world experiments, demonstrating the effectiveness of using 3D diffusion priors for one-shot view planning.
http://arxiv.org/abs/2504.11675v1
VLM-Fuzz: Vision Language Model Assisted Recursive Depth-first Search Exploration for Effective UI Testing of Android Apps
2025-04-16T00:19:31+00:00
Testing Android apps effectively requires a systematic exploration of the app's possible states by simulating user interactions and system events. While existing approaches have proposed several fuzzing techniques to generate various text inputs and trigger user and system events for UI state exploration, achieving high code coverage remains a significant challenge in Android app testing. The main challenges are (1) reasoning about the complex and dynamic layout of UI screens; (2) generating required inputs/events to deal with certain widgets like pop-ups; and (3) coordinating current test inputs with previous inputs to avoid getting stuck in the same UI screen without improving test coverage. To address these problems, we propose a novel, automated fuzzing approach called VLM-Fuzz for effective UI testing of Android apps. We present a novel heuristic-based depth-first search (DFS) exploration algorithm, assisted by a vision language model (VLM), to effectively explore the UI states of the app. We use static analysis to analyze the Android Manifest file and the runtime UI hierarchy XML to extract the list of components, intent-filters and interactive UI widgets. The VLM is used to reason about complex UI layouts and widgets on demand. Based on the inputs from static analysis, the VLM, and the current UI state, we apply heuristics to address the above-mentioned challenges. We evaluated VLM-Fuzz on a benchmark containing 59 apps obtained from a recent work and compared it against two state-of-the-art approaches: APE and DeepGUI. VLM-Fuzz outperforms the best baseline by 9.0%, 3.7%, and 2.1% in terms of class coverage, method coverage, and line coverage, respectively. We also ran VLM-Fuzz on 80 recent Google Play apps (i.e., updated in 2024). VLM-Fuzz detected 208 unique crashes in 24 apps, which have been reported to the respective developers.
http://arxiv.org/abs/2504.11676v1
Maximum bound principle for Q-tensor gradient flow with low regularity integrators
2025-04-16T00:22:05+00:00
We investigate low-regularity integrator (LRI) methods for the Q-tensor gradient flow model of nematic liquid crystals, a semilinear parabolic equation. First- and second-order temporal discretizations are developed using Duhamel's formula, and we rigorously prove that both schemes preserve the maximum bound principle (MBP) and energy dissipation under minimal regularity requirements. Optimal convergence rates are established for the proposed methods. Numerical experiments validate the theoretical findings, demonstrating that the eigenvalues of Q remain strictly confined within the physical range $(-1/3, 2/3)$.
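For readers unfamiliar with Duhamel-based time stepping, here is a generic first-order exponential-integrator sketch for a semilinear parabolic problem u_t = Lu + N(u); it only illustrates the structural idea of treating the linear part exactly and is not the authors' MBP-preserving LRI scheme. The grid, step size, and reaction term are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, solve

def exp_euler_step(u, L, N, tau):
    """One first-order step u_{n+1} = e^{tau L} u_n + tau * phi_1(tau L) N(u_n),
    where phi_1(A) = A^{-1}(e^A - I)."""
    E = expm(tau * L)
    phi1 = solve(tau * L, E - np.eye(len(u)))
    return E @ u + tau * (phi1 @ N(u))

# illustrative setup: 1D heat equation with a cubic reaction term on a small grid
n, h, tau = 20, 1.0 / 21, 1e-3
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
N = lambda u: u - u**3
u = np.sin(np.pi * h * np.arange(1, n + 1))
for _ in range(100):
    u = exp_euler_step(u, L, N, tau)
print(u.max())
```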
http://arxiv.org/abs/2504.11677v1
An Ideal Correspondence Result for Crossed Products by Quantum Groups
2025-04-16T00:27:25+00:00
Given a weak Kac system with duality $(\mathcal{H},V,U)$ arising from a regular $\mathrm{C}^{*}$-algebraic locally compact quantum group $(\mathcal{G},\Delta)$, a $\mathrm{C}^{*}$-algebra $A$, and a sufficiently well-behaved coaction $\alpha$, we construct natural lattice isomorphisms from the coaction invariant ideals of $A$ to the dual coaction invariant ideals of full and reduced crossed products associated to $(\mathcal{H},V,U)$. In particular, these lattice isomorphisms are determined by either the maximality or normality of the coaction $\alpha$. This result directly generalizes the main theorem of Gillespie, Kaliszewski, Quigg, and Williams in arXiv:2406.06780, which in turn generalized an older ideal correspondence result of Gootman and Lazar for locally compact amenable groups. Throughout, we also develop basic conventions and motivate through elementary examples how crossed product $\mathrm{C}^{*}$-algebras by quantum groups generalize the classical crossed product theory.
http://arxiv.org/abs/2504.11678v1
Towards High-Voltage Cathodes for Zinc-Ion Batteries: Discovery Pipeline and Material Design Rules
2025-04-16T00:28:04+00:00
Efficient energy storage systems are crucial to address the intermittency of renewable energy sources. As multivalent batteries, Zn-ion batteries (ZIBs), while inherently low voltage, offer a promising low-cost alternative to Li-ion batteries due to the viable use of zinc as the anode. However, to maximize the potential impact of ZIBs, rechargeable cathodes with improved Zn diffusion are needed. To better understand the chemical and structural factors influencing Zn-ion mobility within battery electrode materials, we employ a high-throughput computational screening approach to systematically evaluate candidate intercalation hosts for ZIB cathodes, expanding the chemical search space to empty intercalation hosts that do not contain Zn. We leverage a high-throughput screening funnel to identify promising cathodes in ZIBs, integrating screening criteria with DFT-based calculations of Zn$^{2+}$ intercalation and diffusion inside the host materials. Using this data, we identify the design principles that favor Zn-ion mobility in candidate cathode materials. Building on previous work on divalent ion cathodes, this study broadens the chemical space for next-generation multivalent energy storage systems.
http://arxiv.org/abs/2504.11679v1
Radiative Flux from a High-Resolution Atmospheric Dynamics Simulation of a Hot-Jupiter for JWST and Ariel
2025-04-16T00:33:26+00:00
We present medium-wave ($\sim$0.5~$\mu$m to $\sim$13~$\mu$m) radiative flux distributions and spectra derived from high-resolution atmospheric dynamics simulations of an exoplanet \WASPP. This planet serves to illustrate several important features. Assuming different chemical compositions for its atmosphere (e.g., H$_2$/He only and $Z \in \{1, 12\}$ times solar metallicity), the outgoing radiative flux is computed using full radiative transfer that folds in the James Webb Space Telescope (JWST) and Ariel instrument characteristics. We find that the observed variability depends strongly on the assumed chemistry and the instrument wavelength range, hence the probed altitude of the atmosphere. With H$_2$/He only, the flux and variability originate near the 10$^5$~Pa level; with solar and higher metallicity, the $\sim$10$^3$~Pa level is probed, and the variability is distinguishably reduced. Our calculations show that JWST and Ariel have the sensitivity to capture the atmospheric variability of exoplanets like \WASPP, depending on the metallicity -- both in repeated eclipse and phase-curve observations.
http://arxiv.org/abs/2504.11680v1
FEM-DtN-SIM Method for Computing Resonances of Schrödinger Operators
2025-04-16T00:34:11+00:00
The study of resonances of the Schr\"{o}dinger operator has a long-standing tradition in mathematical physics. Extensive theoretical investigations have explored the proximity of resonances to the real axis, their distribution, and bounds on the counting functions. However, computational results beyond one dimension remain scarce due to the nonlinearity of the problem and the unbounded nature of the domain. We propose a novel approach that integrates finite elements, Dirichlet-to-Neumann (DtN) mapping, and the spectral indicator method. The DtN mapping, imposed on the boundary of a truncated computational domain, enforces the outgoing condition. Finite elements allow for the efficient handling of complicated potential functions. The spectral indicator method effectively computes (complex) eigenvalues of the resulting nonlinear algebraic system without introducing spectral pollution. The viability of this approach is demonstrated through a range of numerical examples.
http://arxiv.org/abs/2504.11681v1
TurboFNO: High-Performance Fourier Neural Operator with Fused FFT-GEMM-iFFT on GPU
2025-04-16T00:41:18+00:00
Fourier Neural Operators (FNO) are widely used for learning partial differential equation solution operators. However, FNO lacks architecture-aware optimizations, with its Fourier layers executing FFT, filtering, GEMM, zero padding, and iFFT as separate stages, incurring multiple kernel launches and significant global memory traffic. We propose TurboFNO, the first fully fused FFT-GEMM-iFFT GPU kernel with built-in FFT optimizations. We first develop FFT and GEMM kernels from scratch, achieving performance comparable to or faster than the closed-source SOTA cuBLAS and cuFFT. Additionally, our FFT kernel integrates a built-in high-frequency truncation, input zero-padding, and pruning feature to avoid additional memory copy kernels. To fuse the FFT and GEMM workloads, we propose an FFT variant in which a single thread block iterates over the hidden dimension, aligning with the $k$-loop in GEMM. Additionally, we design two shared memory swizzling patterns to achieve 100\% memory bank utilization when forwarding FFT output to GEMM and to enable the iFFT to retrieve GEMM results directly from shared memory. Experimental results on an NVIDIA A100 GPU show that TurboFNO outperforms PyTorch, cuBLAS, and cuFFT by up to 150\%.
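For context, a sketch of the unfused baseline pipeline the abstract refers to, as it is typically written in PyTorch: FFT, high-frequency truncation, a per-mode GEMM against complex spectral weights, zero padding, and an inverse FFT, each a separate kernel launch. Shapes and names are illustrative, not TurboFNO's implementation.

```python
import torch

def fno_spectral_layer(x, weights, modes):
    """Unfused FNO Fourier layer: FFT -> truncation -> per-mode GEMM -> zero pad -> iFFT.

    x: (batch, in_channels, n) real signal; weights: (in_channels, out_channels, modes) complex.
    """
    X = torch.fft.rfft(x, dim=-1)                       # FFT stage
    Xt = X[..., :modes]                                 # high-frequency truncation (filtering)
    Y = torch.einsum("bim,iom->bom", Xt, weights)       # per-mode GEMM
    out = torch.zeros(x.shape[0], weights.shape[1], X.shape[-1],
                      dtype=torch.cfloat, device=x.device)
    out[..., :modes] = Y                                # zero padding back to the full spectrum
    return torch.fft.irfft(out, n=x.shape[-1], dim=-1)  # iFFT stage

# illustrative shapes
batch, cin, cout, n, modes = 2, 8, 8, 64, 16
x = torch.randn(batch, cin, n)
w = torch.randn(cin, cout, modes, dtype=torch.cfloat) / cin
print(fno_spectral_layer(x, w, modes).shape)            # torch.Size([2, 8, 64])
```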
http://arxiv.org/abs/2504.11682v1
Predominant Electronic Order Parameter for Structural Chirality -- Role of Spinless Electronic Toroidal Multipoles
2025-04-16T00:42:03+00:00
We discuss predominant order parameters for structural chirality, and demonstrate that the time-reversal-even axial quadrupole plays a key role in stabilizing a chiral structure. Using the symmetry-adapted closest Wannier model of trigonal Te and Se, we quantify the evolution of the spin-independent (spinless) and spin-dependent (spinful) electric toroidal (ET) (axial) multipole moments across the transition from an achiral to a chiral structure. Our results clearly identify that a spin-independent off-diagonal real hopping between $p$ orbitals, which corresponds to the bond-cluster spinless ET quadrupole of $(3z^{2}-r^{2})$ type $G_{u}$, is the predominant order parameter in stabilizing helical structures. We further elucidate that the above itinerant spinless ET quadrupole induces a monopole-like orbital angular momentum texture in momentum space, which can be observed via circular dichroism in soft x-ray photoemission spectroscopy measurements. Our findings highlight a critical role of the orbital angular momentum in chiral materials rather than the less dominant spin angular momentum arising from the relativistic spin-orbit coupling.
http://arxiv.org/abs/2504.11683v1
Velocity Distribution and Diffusion of an Athermal Inertial Run-and-Tumble Particle in a Shear-Thinning Medium
2025-04-16T00:52:27+00:00
We study the dynamics of an athermal inertial active particle moving in a shear-thinning medium in $d=1$. The viscosity of the medium is modeled using a Coulomb-tanh function, while the activity is represented by an asymmetric dichotomous noise with strengths $-\Delta$ and $\mu\Delta$, transitioning between these states at a rate $\lambda$. Starting from the Fokker-Planck~(FP) equation for the time-dependent probability distributions $P(v,-\Delta,t)$ and $P(v,\mu\Delta,t)$ of the particle's velocity $v$ at time $t$, moving under the influence of active forces $-\Delta$ and $\mu\Delta$ respectively, we analytically derive the steady-state velocity distribution function $P_s(v)$, explicitly dependent on $\mu$. Also, we obtain a quadrature expression for the effective diffusion coefficient $D_e$ for the symmetric active force case~($\mu=1$). For a given $\Delta$ and $\mu$, we show that $P_s(v)$ exhibits multiple transitions as $\lambda$ is varied. Subsequently, we numerically compute $P_s(v)$, the mean-squared velocity $\langle v^2\rangle(t)$, and the diffusion coefficient $D_e$ by solving the particle's equation of motion, all of which show excellent agreement with the analytical results in the steady state. Finally, we examine the universal nature of the transitions in $P_s(v)$ by considering an alternative functional form of the medium's viscosity that also captures the shear-thinning behavior.
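A direct simulation of the described dynamics is straightforward; the sketch below integrates the inertial equation of motion with an asymmetric dichotomous force switching at rate lambda and a Coulomb-tanh friction. The specific friction parameters, time step, and run length are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 200_000
delta, mu, lam = 1.0, 1.0, 0.5   # force strengths and switching rate (mu = 1: symmetric case)
a, b = 2.0, 5.0                  # assumed parameters of the Coulomb-tanh friction

def friction(v):
    """Coulomb-tanh model of the shear-thinning medium (assumed parameter values)."""
    return a * np.tanh(b * v)

v, f = 0.0, -delta               # start in the -Delta force state
vs = np.empty(n_steps)
for k in range(n_steps):
    if rng.random() < lam * dt:  # dichotomous switching at rate lambda
        f = mu * delta if f == -delta else -delta
    v += dt * (-friction(v) + f) # athermal, inertial dynamics with unit mass
    vs[k] = v

print("mean squared velocity:", (vs**2).mean())
```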
http://arxiv.org/abs/2504.11684v1
Chasing finite shadows of infinite groups through geometry
2025-04-16T01:01:05+00:00
There are many situations in geometry and group theory where it is natural, convenient or necessary to explore infinite groups via their actions on finite objects, i.e. via the finite quotients of the group. But how much understanding can one really gain about an infinite group by examining its finite images? Which properties of the group can one recognise, and when does the set of finite images determine the group completely? How hard is it to decide what the finite images of a given infinite group are? These notes follow my plenary lecture at the ECM in Sevilla, July 2024. The goal of the lecture was to sketch some of the rich history of the preceding problems and to present results that illustrate how the field surrounding these questions has been transformed in recent years by input from low-dimensional topology and the study of non-positively curved spaces.
http://arxiv.org/abs/2504.11685v1
Quantum simulations of nuclear resonances with variational methods
2025-04-16T01:01:56+00:00
The many-body nature of nuclear physics problems poses significant computational challenges. These challenges become even more pronounced when studying the resonance states of nuclear systems, which are governed by the non-Hermitian Hamiltonian. Quantum computing, particularly for quantum many-body systems, offers a promising alternative, especially within the constraints of current noisy intermediate-scale quantum (NISQ) devices. This work aims to simulate nuclear resonances using quantum algorithms by developing a variational framework compatible with non-Hermitian Hamiltonians and implementing it fully on a quantum simulator. We employ the complex scaling technique to extract resonance positions classically and adapt it for quantum simulations using a two-step algorithm. First, we transform the non-Hermitian Hamiltonian into a Hermitian form by using the energy variance as a cost function within a variational framework. Second, we perform theta-trajectory calculations to determine optimal resonance positions in the complex energy plane. To address resource constraints on NISQ devices, we utilize Gray Code (GC) encoding to reduce qubit requirements. We first validate our approach using a schematic potential model that mimics a nuclear potential, successfully reproducing known resonance energies with high fidelity. We then extend the method to a more realistic alpha-alpha nuclear potential and compute the resonance energies with a basis size of 16, using only four qubits. This study demonstrates, for the first time, that the complete theta-trajectory method can be implemented on a quantum computer without relying on any classical input beyond the Hamiltonian. The results establish a scalable and efficient quantum framework for simulating resonance phenomena in nuclear systems. This work represents a significant step toward quantum simulations of open quantum systems.
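The Gray Code encoding mentioned here maps basis-state indices to bit strings in which successive states differ by a single bit; a 16-state basis then fits in four qubits, as in the alpha-alpha calculation above. A minimal sketch of the mapping:

```python
def gray_code(i: int) -> int:
    """Reflected-binary Gray code of a basis-state index."""
    return i ^ (i >> 1)

# 16 basis states mapped onto 4 qubits; successive codes differ in exactly one bit
for i in range(16):
    print(i, format(gray_code(i), "04b"))
```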
http://arxiv.org/abs/2504.11686v1
Can GPT tell us why these images are synthesized? Empowering Multimodal Large Language Models for Forensics
2025-04-16T01:02:46+00:00
The rapid development of generative AI facilitates content creation and makes image manipulation both easier to perform and more difficult to detect. While multimodal Large Language Models (LLMs) have encoded rich world knowledge, they are not inherently tailored for combating AI-generated Content (AIGC) and struggle to comprehend local forgery details. In this work, we investigate the application of multimodal LLMs in forgery detection. We propose a framework capable of evaluating image authenticity, localizing tampered regions, providing evidence, and tracing generation methods based on semantic tampering clues. Our method demonstrates that the potential of LLMs in forgery analysis can be effectively unlocked through meticulous prompt engineering and the application of few-shot learning techniques. We conduct qualitative and quantitative experiments and show that GPT4V can achieve an accuracy of 92.1% on Autosplice and 86.3% on LaMa, which is competitive with state-of-the-art AIGC detection methods. We further discuss the limitations of multimodal LLMs in such tasks and propose potential improvements.
http://arxiv.org/abs/2504.11687v1
The Cocytos Stream: A Disrupted Globular Cluster from our Last Major Merger?
2025-04-16T01:05:10+00:00
The census of stellar streams and dwarf galaxies in the Milky Way provides direct constraints on galaxy formation models and the nature of dark matter. The DESI Milky Way survey (with a footprint of 14,000~deg$^{2}$ and a depth of $r<19$ mag) delivers the largest sample of distant metal-poor stars compared to previous optical fiber-fed spectroscopic surveys. This makes DESI an ideal survey to search for previously undetected streams and dwarf galaxies. We present a detailed characterization of the Cocytos stream, which was re-discovered using a clustering analysis with a catalog of giants in the DESI year 3 data, supplemented with Magellan/MagE spectroscopy. Our analysis reveals a relatively metal-rich ([Fe/H]$=-1.3$) and thick stream (width$=1.5^\circ$) at a heliocentric distance of $\approx 25$ kpc, with an internal velocity dispersion of 6.5-9 km s$^{-1}$. The stream's metallicity, radial orbit, and proximity to the Virgo stellar overdensities suggest that it is most likely a disrupted globular cluster that came in with the Gaia-Enceladus merger. We also confirm its association with the Pyxis globular cluster. Our result showcases the ability of wide-field spectroscopic surveys to kinematically discover faint disrupted dwarfs and clusters, enabling constraints on the dark matter distribution in the Milky Way.
http://arxiv.org/abs/2504.11688v1
A method for bounding high-order finite element functions: Applications to mesh validity and bounds-preserving limiters
2025-04-16T01:06:48+00:00
We introduce a novel method for bounding high-order multi-dimensional polynomials in finite element approximations. The method involves precomputing optimal piecewise-linear bounding boxes for polynomial basis functions, which can then be used to locally bound any combination of these basis functions. This approach can be applied to any element/basis type at any approximation order, can provide local (i.e., subcell) extremum bounds to a desired level of accuracy, and can be evaluated efficiently on-the-fly in simulations. Furthermore, we show that this approach generally yields more accurate bounds in comparison to traditional methods based on convex hull properties (e.g., Bernstein polynomials). The efficacy of this technique is shown in applications such as mesh validity checks and optimization for high-order curved meshes, where positivity of the element Jacobian determinant can be ensured throughout the entire element, and continuously bounds-preserving limiters for hyperbolic systems, which can enforce maximum principle bounds across the entire solution polynomial.
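For contrast with the proposed piecewise-linear bounds, the classical convex-hull approach the authors compare against can be sketched in one dimension: converting monomial coefficients to Bernstein coefficients on [0, 1] gives guaranteed, but often loose, bounds on the polynomial. This is only the baseline technique, not the paper's method.

```python
import numpy as np
from math import comb

def bernstein_bounds(a):
    """Bound p(x) = sum_j a[j] * x**j on [0, 1] via its Bernstein coefficients.

    The convex-hull property guarantees min(b) <= p(x) <= max(b), where b are
    the Bernstein coefficients (the classical baseline mentioned above).
    """
    n = len(a) - 1
    b = np.array([sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
                  for i in range(n + 1)])
    return b.min(), b.max()

# p(x) = 1 - 3x + 2x^2 has exact range [-1/8, 1] on [0, 1];
# the Bernstein bound below is the looser interval [-1/2, 1].
print(bernstein_bounds([1.0, -3.0, 2.0]))
```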
http://arxiv.org/abs/2504.11689v1
Advancing quantum simulations of nuclear shell model with noise-resilient protocols
2025-04-16T01:13:39+00:00
Some of the computational limitations in solving the nuclear many-body problem could be overcome by utilizing quantum computers. Nuclear shell-model calculations, which provide deeper insights into the properties of atomic nuclei, are one such case with a high demand for resources, as the size of the Hilbert space grows exponentially with the number of particles involved. Quantum algorithms are being developed to overcome these challenges and advance such calculations. Our goal is to develop quantum circuits for the nuclear shell model that leverage the capabilities of noisy intermediate-scale quantum (NISQ) devices. We aim to minimize resource requirements (specifically in terms of qubits and gates) and strive to reduce the impact of noise by employing relevant mitigation techniques. We achieve noise resilience by designing an optimized ansatz for the variational quantum eigensolver (VQE) based on Givens rotations and incorporating qubit-ADAPT-VQE in combination with variational quantum deflation (VQD) to compute ground and excited states, incorporating the zero-noise extrapolation mitigation technique. Furthermore, the qubit requirements are significantly reduced by mapping the basis states to qubits using Gray code encoding and generalizing transformations of fermionic operators to efficiently represent many-body states. By employing the noise-resilient protocols, we obtain the ground- and excited-state energy levels of 38Ar and 6Li with improved accuracy. These energy levels are presented for noiseless simulations, noisy conditions, and after applying noise mitigation techniques. Results are compared for Jordan-Wigner and Gray code encoding using VQE, qubit-ADAPT-VQE, and VQD. Our work highlights the potential of noise-resilient protocols to leverage the full capabilities of NISQ devices in scaling up nuclear shell-model calculations.
http://arxiv.org/abs/2504.11690v1
Infrared Imaging of Photochromic Contrast in Thiazolothiazole-Embedded Polymer Films
2025-04-16T01:16:43+00:00
The increasing demand for optical technologies with dynamic spectral control has driven interest in chromogenic materials, particularly for applications in tunable infrared metasurfaces. Phase-change materials such as vanadium dioxide and germanium-antimony-tellurium, for instance, have been widely used in the infrared regime. However, their reliance on thermal and electrical tuning introduces challenges such as high power consumption, limited emissivity tuning, and slow modulation speeds. Photochromic materials may offer an alternative approach to dynamic infrared metasurfaces, potentially overcoming these limitations through rapid, light-induced changes in optical properties. This manuscript explores the potential of thiazolothiazole-embedded polymers, known for their reversible photochromic transitions and strong infrared absorption changes, for tunable infrared metasurfaces. The material exhibits low absorption and a strong photochromic contrast in the spectral range from 1500 cm$^{-1}$ to 1700 cm$^{-1}$, making it suitable for dynamic infrared light control. This manuscript reports on infrared imaging experiments demonstrating photochromic contrast in thiazolothiazole-embedded polymer films and thereby provides compelling evidence for their potential applications in dynamic infrared metasurfaces.
http://arxiv.org/abs/2504.11691v1
Measuring Global Migration Flows using Online Data
2025-04-16T01:19:26+00:00
Existing estimates of human migration are limited in their scope, reliability, and timeliness, prompting the United Nations and the Global Compact on Migration to call for improved data collection. Using privacy-protected records from three billion Facebook users, we estimate country-to-country migration flows at monthly granularity for 181 countries, accounting for selection into Facebook usage. Our estimates closely match high-quality measures of migration where available but can be produced nearly worldwide and with less delay than alternative methods. We estimate that 39.1 million people migrated internationally in 2022 (0.63% of the population of the countries in our sample). Migration flows significantly changed during the COVID-19 pandemic, decreasing by 64% before rebounding in 2022 to a pace 24% above the pre-crisis rate. We also find that migration from Ukraine increased tenfold in the wake of the Russian invasion. To support research and policy interventions, we will release these estimates publicly through the Humanitarian Data Exchange.
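A quick arithmetic check of the headline figures: 39.1 million migrants at 0.63% of the sampled population implies a population base of roughly 6.2 billion people across the 181 countries in the sample.

```python
migrants_2022 = 39.1e6      # estimated international migrants in 2022
share = 0.0063              # 0.63% of the sampled countries' population
print(f"implied population base: {migrants_2022 / share / 1e9:.1f} billion")
```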
http://arxiv.org/abs/2504.11692v1
Beyond ISAC: Toward Integrated Heterogeneous Service Provisioning via Elastic Multi-Dimensional Multiple Access
2025-04-16T01:21:56+00:00
Integrated heterogeneous service provisioning (IHSP) is a promising paradigm that is designed to concurrently support a variety of heterogeneous services, extending beyond sensing and communication to meet the diverse needs of emerging applications. However, a primary challenge of IHSP is addressing the conflicts between multiple competing service demands under constrained resources. In this paper, we overcome this challenge by the joint use of two novel elastic design strategies: compromised service value assessment and flexible multi-dimensional resource multiplexing. Consequently, we propose a value-prioritized elastic multi-dimensional multiple access (MDMA) mechanism for IHSP systems. First, we modify the Value-of-Service (VoS) metric by incorporating elastic parameters to characterize user-specific tolerance and compromise in response to various performance degradations under constrained resources. This VoS metric serves as the foundation for prioritizing services and enabling effective fairness-aware service scheduling among concurrent competing demands. Next, we adapt the MDMA to elastically multiplex services using appropriate multiple access schemes across different resource domains. This protocol leverages user-specific interference tolerances and cancellation capabilities across different domains to reduce resource-demanding conflicts and co-channel interference within the same domain. Then, we maximize the system's VoS by jointly optimizing MDMA design and power allocation. Since this problem is non-convex, we propose a monotonic optimization-assisted dynamic programming (MODP) algorithm to obtain its optimal solution. Additionally, we develop the VoS-prioritized successive convex approximation (SCA) algorithm to efficiently find its suboptimal solution. Finally, simulations are presented to validate the effectiveness of the proposed designs.
http://arxiv.org/abs/2504.11693v1
Epitaxial formation of ultrathin HfO2 on graphene by sequential oxidation
2025-04-16T01:24:48+00:00
We demonstrate the formation of epitaxial, ultrathin hafnia (HfO2) on graphene. Monoclinic hafnia (m-HfO2) forms as the end of a series of sequential oxidation reactions. Starting from Hf metal grown epitaxially on graphene, oxidation leads first to an amorphous suboxide (a-HfOx), then to a crystalline, hexagonal suboxide (h-HfOx) in epitaxial relationship with the substrate, and finally to m-HfO2 that is also epitaxial. We use scanning transmission electron microscopy to characterize the epitaxial relationships and to investigate the structure of h-HfOx. We propose a series of displacive transformations that relate the different crystalline phases and are consistent with the observed epitaxial relationships with the graphene substrate. ReaxFF based reactive molecular dynamics simulations confirm our model of the oxide phase sequencing, and illustrate the role of graphene in promoting oxide crystallization. Our results suggest a way to achieve heteroepitaxial integration of high-performance, crystalline dielectrics with two dimensional (2D) semiconductors with an atomically sharp interface, which is also relevant to hafnia phase engineering.
http://arxiv.org/abs/2504.11694v1
$\ell^p$-Stability of Weighted Persistence Diagrams
2025-04-16T01:34:29+00:00
We introduce the concept of weighted persistence diagrams and develop a functorial pipeline for constructing them from finite metric measure spaces. This builds upon an existing functorial framework for generating classical persistence diagrams from finite pseudo-metric spaces. To quantify differences between weighted persistence diagrams, we define the $p$-edit distance for $p\in [1,\infty]$ and, focusing on the weighted Vietoris-Rips filtration, we establish that these diagrams are stable with respect to the $p$-Gromov-Wasserstein distance as a direct consequence of functoriality. In addition, we present an Optimal Transport-inspired formulation of the $p$-edit distance, enhancing its conceptual clarity. Finally, we explore the discriminative power of weighted persistence diagrams, demonstrating advantages over their unweighted counterparts.
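For orientation, the classical (unweighted) p-Wasserstein matching distance between persistence diagrams, which the paper's weighted p-edit distance is related to, can be sketched as an assignment problem in which points may also be matched to the diagonal. The construction below is the standard unweighted one, not the paper's p-edit distance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_pd(D1, D2, p=2):
    """Classical p-Wasserstein matching distance between two (unweighted)
    persistence diagrams, given as arrays of (birth, death) pairs; points may
    also be matched to the diagonal."""
    D1, D2 = np.asarray(D1, float), np.asarray(D2, float)
    n, m = len(D1), len(D2)
    to_diag = lambda P: (P[:, 1] - P[:, 0]) / 2.0   # l_inf distance to the diagonal
    C = np.zeros((n + m, n + m))
    for i in range(n):
        for j in range(m):
            C[i, j] = np.max(np.abs(D1[i] - D2[j])) ** p
    for i in range(n):
        C[i, m:] = to_diag(D1)[i] ** p              # D1 point matched to the diagonal
    for j in range(m):
        C[n:, j] = to_diag(D2)[j] ** p              # D2 point matched to the diagonal
    row, col = linear_sum_assignment(C)             # diagonal-to-diagonal entries cost 0
    return C[row, col].sum() ** (1.0 / p)

print(wasserstein_pd([(0.0, 1.0), (0.2, 0.5)], [(0.0, 0.9)]))
```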
http://arxiv.org/abs/2504.11695v1
Interpreting the Linear Structure of Vision-language Model Embedding Spaces
2025-04-16T01:40:06+00:00
Vision-language models encode images and text in a joint space, minimizing the distance between corresponding image and text pairs. How are language and images organized in this joint space, and how do the models encode meaning and modality? To investigate this, we train and release sparse autoencoders (SAEs) on the embedding spaces of four vision-language models (CLIP, SigLIP, SigLIP2, and AIMv2). SAEs approximate model embeddings as sparse linear combinations of learned directions, or "concepts". We find that, compared to other methods of linear feature learning, SAEs are better at reconstructing the real embeddings, while also able to retain the most sparsity. Retraining SAEs with different seeds or a different data diet leads to two findings: the rare, specific concepts captured by the SAEs are liable to change drastically, but we also show that the key commonly-activating concepts extracted by SAEs are remarkably stable across runs. Interestingly, while most concepts are strongly unimodal in activation, we find they are not merely encoding modality per se. Many lie close to, but not entirely within, the subspace defining modality, suggesting that they encode cross-modal semantics despite their unimodal usage. To quantify this bridging behavior, we introduce the Bridge Score, a metric that identifies concept pairs which are both co-activated across aligned image-text inputs and geometrically aligned in the shared space. This reveals that even unimodal concepts can collaborate to support cross-modal integration. We release interactive demos of the SAEs for all models, allowing researchers to explore the organization of the concept spaces. Overall, our findings uncover a sparse linear structure within VLM embedding spaces that is shaped by modality, yet stitched together through latent bridges, offering new insight into how multimodal meaning is constructed.
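A minimal sparse-autoencoder sketch of the kind described, assuming pre-extracted joint-space embeddings are available as plain tensors; the dimensions, L1 weight, and stand-in data are illustrative only and do not reproduce the released SAEs.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: embeddings are approximated as sparse, non-negative
    combinations of learned directions ("concepts")."""
    def __init__(self, d_model, n_concepts):
        super().__init__()
        self.enc = nn.Linear(d_model, n_concepts)
        self.dec = nn.Linear(n_concepts, d_model, bias=False)

    def forward(self, x):
        acts = torch.relu(self.enc(x))      # sparse concept activations
        return self.dec(acts), acts

# stand-in for pre-extracted joint-space embeddings
d_model, n_concepts = 512, 4096
sae = SparseAutoencoder(d_model, n_concepts)
x = torch.randn(64, d_model)
x_hat, acts = sae(x)
loss = ((x_hat - x) ** 2).mean() + 1e-3 * acts.abs().mean()   # reconstruction + L1 sparsity
loss.backward()
```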
http://arxiv.org/abs/2504.11696v1
A New Paradigm of User-Centric Wireless Communication Driven by Large Language Models
2025-04-16T01:43:36+00:00
The next generation of wireless communications seeks to deeply integrate artificial intelligence (AI) with user-centric communication networks, with the goal of developing AI-native networks that more accurately address user requirements. The rapid development of large language models (LLMs) offers significant potential in realizing these goals. However, existing efforts that leverage LLMs for wireless communication often overlook the considerable gap between human natural language and the intricacies of real-world communication systems, thus failing to fully exploit the capabilities of LLMs. To address this gap, we propose a novel LLM-driven paradigm for wireless communication that innovatively incorporates a natural language to structured query language (NL2SQL) tool. Specifically, in this paradigm, the user's personal requirements are the primary focus. Upon receiving a user request, LLMs first analyze the user intent in terms of relevant communication metrics and system parameters. Subsequently, a structured query language (SQL) statement is generated to retrieve the specific parameter values from a high-performance real-time database. We further utilize LLMs to formulate and solve an optimization problem based on the user request and the retrieved parameters. The solution to this optimization problem then drives adjustments in the communication system to fulfill the user's requirements. To validate the feasibility of the proposed paradigm, we present a prototype system. In this prototype, we consider a user-request-centric semantic communication (URC-SC) system in which a dynamic semantic representation network at the physical layer adapts its encoding depth to meet user requirements. Additionally, two LLMs are employed to analyze user requests and generate SQL statements, respectively. Simulation results demonstrate the effectiveness of the proposed paradigm.
http://arxiv.org/abs/2504.11697v1
Fractional spatiotemporal optical vortices
2025-04-16T01:47:39+00:00
Spatiotemporal optical vortices (STOVs) with spiral phase in the space-time domain, which carry intrinsic transverse orbital angular momentum (OAM), introduce a new degree of freedom to light beams and exhibit unique properties. While integer and fractional spatial vortices have been extensively studied and widely applied, and research on integer STOVs has grown rapidly, fractional STOVs (FSTOVs), classified as STOVs with fractional spiral phases, are rarely explored due to the challenges in characterizing rapidly varying spatiotemporal phases. Furthermore, approaches for the rapid recognition of FSTOVs are lacking. Herein, we experimentally and theoretically demonstrate the generation of FSTOVs in the far field. The generation, evolution, and diffraction rules of FSTOVs are revealed. Furthermore, a self-referential method for the rapid recognition of FSTOVs based on the energy ratio between the two end lobes of their diffraction patterns is proposed. This work will promote the development of the theory of light with transverse OAM, and open new opportunities for the applications of STOVs, such as STOV-based optical communication and quantum information.
http://arxiv.org/abs/2504.11698v1
An Online Adaptation Method for Robust Depth Estimation and Visual Odometry in the Open World
2025-04-16T01:48:10+00:00
Recently, learning-based robotic navigation systems have gained extensive research attention and made significant progress. However, the diversity of open-world scenarios poses a major challenge for the generalization of such systems to practical scenarios. Specifically, learned systems for scene measurement and state estimation tend to degrade when the application scenarios deviate from the training data, resulting in unreliable depth and pose estimation. Toward addressing this problem, this work aims to develop a visual odometry system that can quickly adapt to diverse novel environments in an online manner. To this end, we construct a self-supervised online adaptation framework for monocular visual odometry aided by an online-updated depth estimation module. Firstly, we design a monocular depth estimation network with lightweight refiner modules, which enables efficient online adaptation. Then, we construct an objective for self-supervised learning of the depth estimation module based on the output of the visual odometry system and the contextual semantic information of the scene. Specifically, a sparse depth densification module and a dynamic consistency enhancement module are proposed to leverage camera poses and contextual semantics to generate pseudo-depths and valid masks for the online adaptation. Finally, we demonstrate the robustness and generalization capability of the proposed method in comparison with state-of-the-art learning-based approaches on urban, in-house datasets and a robot platform. Code is publicly available at: https://github.com/jixingwu/SOL-SLAM.
http://arxiv.org/abs/2504.11699v1
H$^3$GNNs: Harmonizing Heterophily and Homophily in GNNs via Joint Structural Node Encoding and Self-Supervised Learning
2025-04-16T01:51:25+00:00
Graph Neural Networks (GNNs) struggle to balance heterophily and homophily in representation learning, a challenge further amplified in self-supervised settings. We propose H$^3$GNNs, an end-to-end self-supervised learning framework that harmonizes both structural properties through two key innovations: (i) Joint Structural Node Encoding. We embed nodes into a unified space combining linear and non-linear feature projections with K-hop structural representations via a Weighted Graph Convolution Network (WGCN). A cross-attention mechanism enhances awareness and adaptability to heterophily and homophily. (ii) Self-Supervised Learning Using Teacher-Student Predictive Architectures with Node-Difficulty Driven Dynamic Masking Strategies. We use a teacher-student model in which the student sees the masked input graph and predicts node features inferred by the teacher, which sees the full input graph, in the joint encoding space. To enhance learning difficulty, we introduce two novel node-predictive-difficulty-based masking strategies. Experiments on seven benchmarks (four heterophily datasets and three homophily datasets) confirm the effectiveness and efficiency of H$^3$GNNs across diverse graph types. Our H$^3$GNNs achieves overall state-of-the-art performance on the four heterophily datasets, while retaining on-par performance with previous state-of-the-art methods on the three homophily datasets.
http://arxiv.org/abs/2504.11700v1
The Gevrey Gelfand-Shilov regularizing effect of the Landau equation with soft potential
2025-04-16T01:53:13+00:00
This paper studies the Cauchy problem for the spatially inhomogeneous Landau equation with soft potential in the perturbative framework around the Maxwellian distribution. Under a smallness assumption on the initial datum with exponential decay in the velocity variable, we establish the optimal Gevrey Gelfand-Shilov regularizing effect for the solution to the Cauchy problem.
http://arxiv.org/abs/2504.11701v1
Non-uniform Point Cloud Upsampling via Local Manifold Distribution
2025-04-16T01:54:33+00:00
Existing learning-based point cloud upsampling methods often overlook the intrinsic data distribution characteristics of point clouds, leading to suboptimal results when handling sparse and non-uniform point clouds. We propose a novel approach to point cloud upsampling by imposing constraints from the perspective of manifold distributions. Leveraging the strong fitting capability of Gaussian functions, our method employs a network to iteratively optimize Gaussian components and their weights, accurately representing local manifolds. By utilizing the probabilistic distribution properties of Gaussian functions, we construct a unified statistical manifold to impose distribution constraints on the point cloud. Experimental results on multiple datasets demonstrate that our method generates higher-quality and more uniformly distributed dense point clouds when processing sparse and non-uniform inputs, outperforming state-of-the-art point cloud upsampling techniques.
http://arxiv.org/abs/2504.11702v1
Clustering and analysis of user behaviour in blockchain: A case study of Planet IX
2025-04-16T01:57:33+00:00
Decentralised applications (dApps) that run on public blockchains have the benefit of trustworthiness and transparency as every activity that happens on the blockchain can be publicly traced through the transaction data. However, this introduces a potential privacy problem as this data can be tracked and analysed, which can reveal user-behaviour information. A user behaviour analysis pipeline was proposed to present how this type of information can be extracted and analysed to identify separate behavioural clusters that can describe how users behave in the game. The pipeline starts with the collection of transaction data, involving smart contracts, from a blockchain-based game called Planet IX. Both the raw transaction information and the transaction events are considered in the data collection. From this data, separate game actions can be formed and those are leveraged to present how and when the users conducted their in-game activities in the form of user flows. An extended version of these user flows also presents how the Non-Fungible Tokens (NFTs) are being leveraged in the user actions. The latter is given as input for a Graph Neural Network (GNN) model to provide graph embeddings for these flows which then can be leveraged by clustering algorithms to cluster user behaviours into separate behavioural clusters. We benchmark and compare well-known clustering algorithms as a part of the proposed method. The user behaviour clusters were analysed and visualised in a graph format. It was found that behavioural information can be extracted regarding the users that belong to these clusters. Such information can be exploited by malicious users to their advantage. To demonstrate this, a privacy threat model was also presented based on the results that correspond to multiple potentially affected areas.
http://arxiv.org/abs/2504.11703v1
Progent: Programmable Privilege Control for LLM Agents
2025-04-16T01:58:40+00:00
LLM agents are an emerging form of AI systems where large language models (LLMs) serve as the central component, utilizing a diverse set of tools to complete user-assigned tasks. Despite their great potential, LLM agents pose significant security risks. When interacting with the external world, they may encounter malicious commands from attackers, leading to the execution of dangerous actions. A promising way to address this is by enforcing the principle of least privilege: allowing only essential actions for task completion while blocking unnecessary ones. However, achieving this is challenging, as it requires covering diverse agent scenarios while preserving both security and utility. We introduce Progent, the first privilege control mechanism for LLM agents. At its core is a domain-specific language for flexibly expressing privilege control policies applied during agent execution. These policies provide fine-grained constraints over tool calls, deciding when tool calls are permissible and specifying fallbacks if they are not. This enables agent developers and users to craft suitable policies for their specific use cases and enforce them deterministically to guarantee security. Thanks to its modular design, integrating Progent does not alter agent internals and requires only minimal changes to agent implementation, enhancing its practicality and potential for widespread adoption. To automate policy writing, we leverage LLMs to generate policies based on user queries, which are then updated dynamically for improved security and utility. Our extensive evaluation shows that it enables strong security while preserving high utility across three distinct scenarios or benchmarks: AgentDojo, ASB, and AgentPoison. Furthermore, we perform an in-depth analysis, showcasing the effectiveness of its core components and the resilience of its automated policy generation against adaptive attacks.
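To make the idea of deterministic privilege control concrete, here is a deliberately simplified, hypothetical policy checker; it is not Progent's actual DSL, only an illustration of whitelisting tools, constraining their arguments, and specifying fallbacks when a call is blocked.

```python
# Hypothetical, simplified policy table; not Progent's actual DSL, only an
# illustration of whitelisting tools, constraining arguments, and fallbacks.
policy = {
    "read_file":  {"allow": True,  "constraint": lambda a: a["path"].startswith("/workspace/")},
    "send_email": {"allow": False, "fallback": "ask_user"},
}

def check_tool_call(name, args):
    """Deterministically decide whether a tool call is permissible."""
    rule = policy.get(name)
    if rule is None:
        return "deny"                                   # unknown tools are blocked by default
    if not rule["allow"]:
        return rule.get("fallback", "deny")             # blocked, with an optional fallback
    ok = rule.get("constraint", lambda a: True)(args)   # argument-level constraint
    return "allow" if ok else "deny"

print(check_tool_call("read_file", {"path": "/workspace/notes.txt"}))  # allow
print(check_tool_call("send_email", {"to": "someone@example.com"}))    # ask_user
```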
http://arxiv.org/abs/2504.11704v1
A Library of LLM Intrinsics for Retrieval-Augmented Generation
2025-04-16T02:02:22+00:00
In the developer community for large language models (LLMs), there is not yet a clean pattern analogous to a software library, to support very large scale collaboration. Even for the commonplace use case of Retrieval-Augmented Generation (RAG), it is not currently possible to write a RAG application against a well-defined set of APIs that are agreed upon by different LLM providers. Inspired by the idea of compiler intrinsics, we propose some elements of such a concept through introducing a library of LLM Intrinsics for RAG. An LLM intrinsic is defined as a capability that can be invoked through a well-defined API that is reasonably stable and independent of how the LLM intrinsic itself is implemented. The intrinsics in our library are released as LoRA adapters on HuggingFace, and through a software interface with clear structured input/output characteristics on top of vLLM as an inference platform, accompanied in both places with documentation and code. This article describes the intended usage, training details, and evaluations for each intrinsic, as well as compositions of multiple intrinsics.
http://arxiv.org/abs/2504.11705v1
Learning What NOT to Count
2025-04-16T02:05:47+00:00
Few/zero-shot object counting methods reduce the need for extensive annotations but often struggle to distinguish between fine-grained categories, especially when multiple similar objects appear in the same scene. To address this limitation, we propose an annotation-free approach that enables the seamless integration of new fine-grained categories into existing few/zero-shot counting models. By leveraging latent generative models, we synthesize high-quality, category-specific crowded scenes, providing a rich training source for adapting to new categories without manual labeling. Our approach introduces an attention prediction network that identifies fine-grained category boundaries trained using only synthetic pseudo-annotated data. At inference, these fine-grained attention estimates refine the output of existing few/zero-shot counting networks. To benchmark our method, we further introduce the FGTC dataset, a taxonomy-specific fine-grained object counting dataset for natural images. Our method substantially enhances pre-trained state-of-the-art models on fine-grained taxon counting tasks, while using only synthetic data. Code and data to be released upon acceptance.
http://arxiv.org/abs/2504.11706v1
The characterization of graphs with two trivial distance ideals
2025-04-16T02:09:44+00:00
The distance ideals of graphs are algebraic invariants that generalize the Smith normal form (SNF) and the spectrum of several distance matrices associated with a graph. In general, distance ideals are not monotone under taking induced subgraphs. However, in [7] the characterizations of connected graphs with one trivial distance ideal over $\mathbb{Z}[X]$ and over $\mathbb{Q}[X]$ were obtained in terms of induced subgraphs, where $X$ is a set of variables indexed by the vertices. Later, in [3], the first attempt was made to characterize the family of connected graphs with at most two trivial distance ideals over $\mathbb{Z}[X]$. There, it was proven that these graphs are $\{\mathcal{F},\textsf{odd-holes}_{7}\}$-free, where $\textsf{odd-holes}_{7}$ consists of the odd cycles of length at least seven and $\mathcal{F}$ is a set of sixteen graphs. Here, we give a characterization of the $\{\mathcal{F},\textsf{odd-holes}_{7}\}$-free graphs and prove that the $\{\mathcal{F},\textsf{odd-holes}_{7}\}$-free graphs are precisely the graphs with at most two trivial distance ideals over $\mathbb{Z}[X]$. As a byproduct, we also find that the determinant of the distance matrix of a connected bipartite graph is even; this suggests that it is possible to extend, to connected bipartite graphs, the celebrated Graham-Pollak-Lov\'asz formula $\det(D(T_{n+1}))=(-1)^nn2^{n-1}$, and the Hou-Woo result stating that $\text{SNF}(D(T_{n+1}))=\textsf{I}_2\oplus 2\textsf{I}_{n-2}\oplus (2n)$, for any tree $T_{n+1}$ with $n+1$ vertices. Finally, we also give the characterizations of graphs with at most two trivial distance ideals over $\mathbb{Q}[X]$, and the graphs with at most two trivial distance univariate ideals.
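The Graham-Pollak-Lovász formula quoted above is easy to check numerically on small trees; the sketch below verifies det(D(T_{n+1})) = (-1)^n n 2^{n-1} for path trees.

```python
import numpy as np

def path_distance_matrix(n_vertices):
    """Distance matrix of the path tree on n_vertices vertices."""
    idx = np.arange(n_vertices)
    return np.abs(idx[:, None] - idx[None, :])

# Graham-Pollak-Lovasz: det(D(T_{n+1})) = (-1)^n * n * 2^(n-1) for any tree on n+1 vertices
for n in range(1, 6):
    D = path_distance_matrix(n + 1)
    print(n + 1, round(np.linalg.det(D)), (-1) ** n * n * 2 ** (n - 1))
```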
http://arxiv.org/abs/2504.11707v1
Towards Safe Synthetic Image Generation On the Web: A Multimodal Robust NSFW Defense and Million Scale Dataset
2025-04-16T02:10:42+00:00
In the past years, we have witnessed the remarkable success of Text-to-Image (T2I) models and their widespread use on the web. Extensive research in making T2I models produce hyper-realistic images has led to new concerns, such as generating Not-Safe-For-Work (NSFW) web content and polluting the web society. To help prevent misuse of T2I models and create a safer web environment for users, features like NSFW filters and post-hoc security checks are used in these models. However, recent work unveiled how these methods can easily fail to prevent misuse. In particular, adversarial attacks on text and image modalities can easily outplay defensive measures. Moreover, there is currently no robust multimodal NSFW dataset that includes both prompt and image pairs and adversarial examples. This work first proposes a million-scale prompt and image dataset generated using open-source diffusion models. Second, we develop a multimodal defense to distinguish safe and NSFW text and images, which is robust against adversarial attacks and directly alleviates current challenges. Our extensive experiments show that our model performs well against existing SOTA NSFW detection methods in terms of accuracy and recall, drastically reducing the Attack Success Rate (ASR) in multimodal adversarial attack scenarios. Code: https://github.com/shahidmuneer/multimodal-nsfw-defense.
http://arxiv.org/abs/2504.11708v1
Fast Mixed-Precision Real Evaluation
2025-04-16T02:12:20+00:00
Evaluating real-valued expressions to high precision is a key building block in computational mathematics, physics, and numerics. A typical implementation evaluates the whole expression in a uniform precision, doubling that precision until a sufficiently-accurate result is achieved. This is wasteful: usually only a few operations really need to be performed at high precision, and the bulk of the expression could be computed much faster. However, such non-uniform precision assignments have, to date, been impractical to compute. We propose a fast new algorithm for deriving such precision assignments. The algorithm leverages results computed at lower precisions to analytically determine a mixed-precision assignment that will result in a sufficiently-accurate result. Our implementation, Reval, achieves an average speed-up of 1.72x compared to the state-of-the-art Sollya tool, with the speed-up increasing to 5.21x on the most difficult input points. An examination of the precisions used with and without precision tuning shows that the speed-up results from assigning lower precisions for the majority of operations, though additional optimizations enabled by the non-uniform precision assignments also play a role.
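The "typical implementation" the abstract contrasts against can be sketched with mpmath: evaluate the whole expression at a uniform precision and double it until successive results agree to the target accuracy. This is the uniform-precision baseline strategy, not Reval's mixed-precision assignment; the example expression and thresholds are illustrative.

```python
import mpmath

def eval_to_accuracy(expr, target_digits=30, max_prec_bits=2**14):
    """Uniform-precision baseline: re-evaluate the whole expression at doubling
    precision until two successive results agree to the target accuracy."""
    prec, prev = 64, None
    while prec <= max_prec_bits:
        with mpmath.workprec(prec):
            val = expr()
        if prev is not None and mpmath.almosteq(val, prev,
                                                rel_eps=mpmath.mpf(10) ** (-target_digits)):
            return val, prec
        prev, prec = val, prec * 2
    return prev, prec

# example expression with mild cancellation near zero
val, bits = eval_to_accuracy(lambda: mpmath.exp(mpmath.mpf(1) / 7) - 1 - mpmath.mpf(1) / 7)
print(val, "accepted at", bits, "bits")
```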
http://arxiv.org/abs/2504.11709v1
ESC-MVQ: End-to-End Semantic Communication With Multi-Codebook Vector Quantization
2025-04-16T02:12:57+00:00
This paper proposes a novel end-to-end digital semantic communication framework based on multi-codebook vector quantization (VQ), referred to as ESC-MVQ. Unlike prior approaches that rely on end-to-end training with a specific power or modulation scheme, often under a particular channel condition, ESC-MVQ models a channel transfer function as parallel binary symmetric channels (BSCs) with trainable bit-flip probabilities. Building on this model, ESC-MVQ jointly trains multiple VQ codebooks and their associated bit-flip probabilities with a single encoder-decoder pair. To maximize inference performance when deploying ESC-MVQ in digital communication systems, we devise an optimal communication strategy that jointly optimizes codebook assignment, adaptive modulation, and power allocation. To this end, we develop an iterative algorithm that selects the most suitable VQ codebook for semantic features and flexibly allocates power and modulation schemes across the transmitted symbols. Simulation results demonstrate that ESC-MVQ, using a single encoder-decoder pair, outperforms existing digital semantic communication methods in both performance and memory efficiency, offering a scalable and adaptive solution for realizing digital semantic communication in diverse channel conditions.
http://arxiv.org/abs/2504.11710v1
Tilings from Tops of Overlapping Iterated Function Systems
2025-04-16T02:16:39+00:00
The top of the attractor $A$ of a hyperbolic iterated function system $\left\{ f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}|i=1,2,\dots,M\right\} $ is defined and used to extend self-similar tilings to overlapping systems. The theory interprets expressions of the form $\lim_{k\rightarrow\infty}f_{j_{1}}^{-1}f_{j_{2}}^{-1}\dots f_{j_{k}} ^{-1}(\left\{ top(f_{i_{1}}f_{i_{2}}\dots f_{i_{k+1}}(A))|i_{1}i_{2}\dots i_{k+1}\in\{1,2,\dots,M\}^{k+1}\right\} )$ to yield tilings of $\mathbb{R}^{n}$. Examples include systems of finite type, tilings related to aperiodic monotiles, and ones where there are infinitely many distinct but related prototiles.
http://arxiv.org/abs/2504.11711v2
The Hitchhiker's Guide to Program Analysis, Part II: Deep Thoughts by LLMs
2025-04-16T02:17:06+00:00
Static analysis is a cornerstone for software vulnerability detection, yet it often struggles with the classic precision-scalability trade-off. In practice, such tools often produce high false positive rates, particularly in large codebases like the Linux kernel. This imprecision can arise from simplified vulnerability modeling and over-approximation of path and data constraints. While large language models (LLMs) show promise in code understanding, their naive application to program analysis yields unreliable results due to inherent reasoning limitations. We introduce BugLens, a post-refinement framework that significantly improves static analysis precision. BugLens guides an LLM to follow traditional analysis steps by assessing buggy code patterns for security impact and validating the constraints associated with static warnings. Evaluated on real-world Linux kernel bugs, BugLens raises precision from 0.10 (raw) and 0.50 (semi-automated refinement) to 0.72, substantially reducing false positives and revealing four previously unreported vulnerabilities. Our results suggest that a structured LLM-based workflow can meaningfully enhance the effectiveness of static analysis tools.
http://arxiv.org/abs/2504.11712v1
Ultra-high energy cosmic rays with UFA-15 source model in Bumblebee gravity theory
2025-04-16T02:17:30+00:00
We explore the effects of Bumblebee gravity on the propagation of ultra-high energy cosmic rays (UHECRs) using astrophysical sources modeled in the Unger-Farrar-Anchordoqui (UFA) framework (2015), which includes the star formation rate (SFR), gamma-ray bursts (GRBs), and active galactic nuclei (AGN). We compute the density enhancement factor for various source separation distances ($d_\text{s}$) up to 100 Mpc within the Bumblebee gravity scenario. Additionally, we calculate the CR flux and its suppression, comparing the results with observational data from the Pierre Auger Observatory (PAO) and the Telescope Array through $\chi^2$ and $\chi_\text{red}^2$ analyses for the flux and the Levenberg-Marquardt algorithm for the suppression. The anisotropy in CR arrival directions is examined, with corresponding $\chi^2$ and $\chi_\text{red}^2$ values obtained from the PAO surface detector data (SD 750 and SD 1500). Finally, we present skymaps of flux and anisotropy under different model assumptions, providing insights into the observational signatures of UHECRs in Bumblebee gravity. Our results show that increasing the Bumblebee gravity parameter $l$ enhances the density factor $\xi$, particularly at low energies, highlighting the impact of Lorentz violation on CR propagation. Larger $d_\text{s}$ values amplify deviations from the $\Lambda$CDM model, with AGN sources dominating at high energies and GRB/SFR sources at lower energies. The skymaps indicate structured flux patterns at large $d_\text{s}$ and structured anisotropy at higher energies.
http://arxiv.org/abs/2504.11713v1
Adjoint Sampling: Highly Scalable Diffusion Samplers via Adjoint Matching
2025-04-16T02:20:06+00:00
We introduce Adjoint Sampling, a highly scalable and efficient algorithm for learning diffusion processes that sample from unnormalized densities, or energy functions. It is the first on-policy approach that allows significantly more gradient updates than the number of energy evaluations and model samples, allowing us to scale to much larger problem settings than previously explored by similar methods. Our framework is theoretically grounded in stochastic optimal control and shares the same theoretical guarantees as Adjoint Matching, being able to train without the need for corrective measures that push samples towards the target distribution. We show how to incorporate key symmetries, as well as periodic boundary conditions, for modeling molecules in both cartesian and torsional coordinates. We demonstrate the effectiveness of our approach through extensive experiments on classical energy functions, and further scale up to neural network-based energy models where we perform amortized conformer generation across many molecular systems. To encourage further research in developing highly scalable sampling methods, we plan to open source these challenging benchmarks, where successful methods can directly impact progress in computational chemistry.
http://arxiv.org/abs/2504.11714v1
Unravelling Technical debt topics through Time, Programming Languages and Repository
2025-04-16T02:20:56+00:00
This study explores the dynamic landscape of Technical Debt (TD) topics in software engineering by examining its evolution across time, programming languages, and repositories. Despite the extensive research on identifying and quantifying TD, there remains a significant gap in understanding the diversity of TD topics and their temporal development. To address this, we have conducted an explorative analysis of TD data extracted from GitHub issues spanning from 2015 to September 2023. We employed BERTopic for sophisticated topic modelling. This study categorises the TD topics and tracks their progression over time. Furthermore, we have incorporated sentiment analysis for each identified topic, providing a deeper insight into the perceptions and attitudes associated with these topics. This offers a more nuanced understanding of the trends and shifts in TD topics through time, programming language, and repository.
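As a rough illustration of the kind of pipeline the abstract describes, the sketch below runs BERTopic over a placeholder corpus standing in for GitHub issue texts, tracks topics over time, and attaches a sentiment label per issue; the toy documents, year field, and model choices are assumptions, not the authors' configuration.

```python
# Illustrative topic-modelling + sentiment sketch over placeholder "issue" texts.
from bertopic import BERTopic
from transformers import pipeline

base = [
    "Refactor this module, duplicated code is accumulating as technical debt",
    "TODO: replace the deprecated API before the next release",
    "Workaround added for the flaky test, needs a proper fix later",
]
# Placeholder corpus standing in for GitHub issue texts collected 2015-2023.
docs = [f"{text} (issue {i})" for i, text in enumerate(base * 50)]
years = [2015 + (i % 9) for i in range(len(docs))]   # fake creation years

topic_model = BERTopic(verbose=False)
topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())

# Temporal evolution of each topic, binned by year
topics_over_time = topic_model.topics_over_time(docs, years, nr_bins=9)
print(topics_over_time.head())

# Per-issue sentiment, which can then be aggregated per topic
sentiment = pipeline("sentiment-analysis")
for doc, topic in zip(base, topics[:len(base)]):
    print(topic, sentiment(doc)[0]["label"])
```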
http://arxiv.org/abs/2504.11715v1
Continuity for the spectral propinquity of the Dirac operators associated with an analytic path of Riemannian metrics
2025-04-16T02:34:00+00:00
We prove that a polynomial path of Riemannian metrics on a closed spin manifold induces a continuous field in the spectral propinquity of metric spectral triples.
http://arxiv.org/abs/2504.11716v1
A Technical Survey of Sparse Linear Solvers in Electronic Design Automation
2025-04-16T02:34:21+00:00
Sparse linear system solvers ($Ax=b$) are critical computational kernels in Electronic Design Automation (EDA), underpinning vital simulations for modern IC and system design. Applications like power integrity verification and electrothermal analysis fundamentally solve large-scale, sparse algebraic systems from Modified Nodal Analysis (MNA) or Finite Element/Volume Method (FEM/FVM) discretizations of PDEs. Problem dimensions routinely reach $10^6-10^9$ unknowns, escalating towards $10^{10}$+ for full-chip power grids \cite{Tsinghua21}, demanding stringent solver scalability, low memory footprint, and efficiency. This paper surveys predominant sparse solver paradigms in EDA: direct factorization methods (LU, Cholesky), iterative Krylov subspace methods (CG, GMRES, BiCGSTAB), and multilevel multigrid techniques. We examine their mathematical foundations, convergence, conditioning sensitivity, implementation aspects (storage formats CSR/CSC, fill-in mitigation via reordering), the critical role of preconditioning for ill-conditioned systems \cite{SaadIterative, ComparisonSolversArxiv}, and multigrid's potential optimal $O(N)$ complexity \cite{TrottenbergMG}. Solver choice critically depends on the performance impact of frequent matrix updates (e.g., transient/non-linear), where iterative/multigrid methods often amortize costs better than direct methods needing repeated factorization \cite{SaadIterative}. We analyze trade-offs in runtime complexity, memory needs, numerical robustness, parallel scalability (MPI, OpenMP, GPU), and precision (FP32/FP64). Integration into EDA tools for system-level multiphysics is discussed, with pseudocode illustrations. The survey concludes by emphasizing the indispensable nature and ongoing evolution of sparse solvers for designing and verifying complex electronic systems.
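A small, self-contained illustration of the trade-off discussed above (direct factorization versus preconditioned Krylov iteration) on a toy SPD system, assuming SciPy's sparse solvers; the matrix is a one-dimensional Laplacian stand-in, not a real MNA power-grid system.

```python
# Toy SPD "power-grid-like" system: direct LU versus ILU-preconditioned CG.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000                      # toy size; real EDA systems reach 1e6-1e9 unknowns
main = 2.0 * np.ones(n) + 1e-3  # diagonal with a small shunt term to keep A SPD
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.random.default_rng(0).standard_normal(n)

# Direct factorization (LU): fast to re-solve, costly to re-factorize on updates
x_direct = spla.splu(A).solve(b)

# Iterative CG with an incomplete-LU preconditioner wrapped as a LinearOperator
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)
x_cg, info = spla.cg(A, b, M=M)

print("CG converged:", info == 0,
      "| max diff vs direct:", np.abs(x_cg - x_direct).max())
```

For transient or nonlinear analyses with frequent matrix updates, the iterative path above amortizes better because only the preconditioner, not a full factorization, needs refreshing.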
http://arxiv.org/abs/2504.11717v2
Safety with Agency: Human-Centered Safety Filter with Application to AI-Assisted Motorsports
2025-04-16T02:42:08+00:00
We propose a human-centered safety filter (HCSF) for shared autonomy that significantly enhances system safety without compromising human agency. Our HCSF is built on a neural safety value function, which we first learn scalably through black-box interactions and then use at deployment to enforce a novel state-action control barrier function (Q-CBF) safety constraint. Since this Q-CBF safety filter does not require any knowledge of the system dynamics for either synthesis or runtime safety monitoring and intervention, our method applies readily to complex, black-box shared autonomy systems. Notably, our HCSF's CBF-based interventions modify the human's actions minimally and smoothly, avoiding the abrupt, last-moment corrections delivered by many conventional safety filters. We validate our approach in a comprehensive in-person user study using Assetto Corsa, a high-fidelity car racing simulator with black-box dynamics, to assess robustness in "driving on the edge" scenarios. We compare both trajectory data and drivers' perceptions of our HCSF assistance against unassisted driving and a conventional safety filter. Experimental results show that 1) compared to having no assistance, our HCSF improves both safety and user satisfaction without compromising human agency or comfort, and 2) relative to a conventional safety filter, our proposed HCSF boosts human agency, comfort, and satisfaction while maintaining robustness.
http://arxiv.org/abs/2504.11718v1
Some Remarks On Krein--von Neumann Extensions
2025-04-16T02:42:58+00:00
We survey various properties of Krein--von Neumann extensions $S_K$ and the reduced Krein--von Neumann operator $\hat{S}_K$ in connection with a strictly positive (symmetric) operator $S$ with nonzero deficiency indices. In particular, we focus on the resolvents of $S_K$ and $\hat{S}_K$ and on the trace ideal properties of the resolvent of $\hat{S}_K$, and make some comparisons with the corresponding properties of the resolvent of the Friedrichs extension $S_F$. We also recall a parametrization of all nonnegative self-adjoint extensions of $S$ and various Krein-type resolvent formulas for any two relatively prime self-adjoint extensions of $S$, utilizing a Donoghue-type $M$-operator (i.e., an energy-parameter-dependent Dirichlet-to-Neumann-type map).
http://arxiv.org/abs/2504.11719v1
Equivalence between Superharmonic functions and renormalized solutions for the equations with $(p, q)$-growth
2025-04-16T02:44:48+00:00
We establish the equivalence between superharmonic functions and locally renormalized solutions for the elliptic measure data problems with $(p, q)$-growth. By showing that locally renormalized solutions are essentially bounded below and using Wolff potential estimates, we extend the results of [T. Kilpel\"{a}inen, T. Kuusi, A. Tuhola-Kujanp\"{a}\"{a}, Superharmonic functions are locally renormalized solutions, Ann. Inst. H. Poincar\'{e} C Anal. Non Lin\'{e}aire, 2011] to a broader class of problems. Our work provides the first equivalence result between locally renormalized solutions and superharmonic functions for the nonstandard growth equations.
http://arxiv.org/abs/2504.11720v1
Polarisation-Inclusive Spiking Neural Networks for Real-Time RFI Detection in Modern Radio Telescopes
2025-04-16T02:45:00+00:00
Radio Frequency Interference (RFI) is a known growing challenge for radio astronomy, intensified by increasing observatory sensitivity and prevalence of orbital RFI sources. Spiking Neural Networks (SNNs) offer a promising solution for real-time RFI detection by exploiting the time-varying nature of radio observation and neuron dynamics together. This work explores the inclusion of polarisation information in SNN-based RFI detection, using simulated data from the Hydrogen Epoch of Reionisation Array (HERA) instrument and provides power usage estimates for deploying SNN-based RFI detection on existing neuromorphic hardware. Preliminary results demonstrate state-of-the-art detection accuracy and highlight possible extensive energy-efficiency gains.
http://arxiv.org/abs/2504.11721v1
Climate-economy projections under shared socioeconomic pathways and net-zero scenarios
2025-04-16T02:48:18+00:00
We examine future trajectories of the social cost of carbon, global temperatures, and carbon concentrations using the cost-benefit Dynamic Integrated Climate-Economy (DICE) model calibrated to the five Shared Socioeconomic Pathways (SSPs) under two mitigation scenarios: achieving net-zero carbon emissions by 2050 and by 2100. The DICE model is calibrated to align industrial and land-use carbon emissions with projections from six leading process-based integrated assessment models (IAMs): IMAGE, MESSAGE--GLOBIOM, AIM/CGE, GCAM, REMIND--MAgPIE and WITCH--GLOBIOM. We find that even with aggressive mitigation (net-zero by 2050), global temperatures are projected to exceed $2^\circ\text{C}$ above preindustrial levels by 2100, with estimates ranging from $2.5^\circ\text{C}$ to $2.7^\circ\text{C}$ across all SSPs and IAMs considered. Under the more lenient mitigation scenario (net-zero by 2100), global temperatures are projected to rise to between $3^\circ\text{C}$ and $3.7^\circ\text{C}$ by 2100. Additionally, the social cost of carbon is estimated to increase from approximately USD 30--50 in 2025 to USD 250--400 in 2100.
http://arxiv.org/abs/2504.11722v1
Inversion of biological strategies in engineering technology: in case underwater soft robot
2025-04-16T02:48:28+00:00
This paper proposes a biomimetic design framework based on biological strategy inversion, aiming to systematically map solutions evolved in nature to the engineering field. By constructing a "Function-Behavior-Feature-Environment" (F-B-Cs in E) knowledge model, combined with natural language processing (NLP) and multi-criteria decision-making methods, it achieves efficient conversion from biological strategies to engineering solutions. Using underwater soft robot design as a case study, the effectiveness of the framework in optimizing drive mechanisms, power distribution, and motion pattern design is verified. This research provides scalable methodological support for interdisciplinary biomimetic innovation.
http://arxiv.org/abs/2504.11723v1
Probing the Unknown: Exploring Student Interactions with Probeable Problems at Scale in Introductory Programming
2025-04-16T02:50:00+00:00
Introductory programming courses often rely on small code-writing exercises that have clearly specified problem statements. This limits opportunities for students to practice how to clarify ambiguous requirements -- a critical skill in real-world programming. In addition, the emerging capabilities of large language models (LLMs) to produce code from well-defined specifications may harm student engagement with traditional programming exercises. This study explores the use of ``Probeable Problems'', automatically gradable tasks that have deliberately vague or incomplete specifications. Such problems require students to submit test inputs, or `probes', to clarify requirements before implementation. Through analysis of over 40,000 probes in an introductory course, we identify patterns linking probing behaviors to task success. Systematic strategies, such as thoroughly exploring expected behavior before coding, resulted in fewer incorrect code submissions and correlated with course success. Feedback from nearly 1,000 participants highlighted the challenges and real-world relevance of these tasks, as well as benefits to critical thinking and metacognitive skills. Probeable Problems are easy to set up and deploy at scale, and help students recognize and resolve uncertainties in programming problems.
http://arxiv.org/abs/2504.11724v1
Ideal antiferroelectricity with large digital electrostrain in PbZrO3 epitaxial thin films
2025-04-16T02:55:31+00:00
Antiferroelectrics exhibit reversible antipolar-polar phase transitions under electric fields, yielding large electrostrain suitable for electromechanical devices. Nevertheless, in thin-film form, the antiferroelectric behavior is often obscured by competing ferroic orders, resulting in slanted hysteresis loops with undesired remnant polarization, subsequently posing challenges in obtaining ideal antiferroelectricity and understanding their intrinsic electrical behavior. Here, atomistic models for controllable antiferroelectric-ferroelectric phase transition pathways are unveiled along specific crystallographic directions. Guided by the anisotropic phase transition and orientation design, we achieved ideal antiferroelectricity with a square double hysteresis loop, large saturated polarization (~60 $\mu$C/cm$^2$), near-zero remnant polarization, fast response time (~75 ns), and near-fatigue-free performance (~$10^{10}$ cycles) in (111)P-oriented PbZrO3 epitaxial thin films. Moreover, a bipolar and frequency-independent digital electrostrain (~0.83%) was demonstrated in this archetype antiferroelectric system. In-situ X-ray diffraction studies further reveal that the large digital electrostrain results from an intrinsic field-induced antiferroelectric-ferroelectric structural transition. This work demonstrates the anisotropic phase transition mechanism and ideal antiferroelectricity with large digital electrostrain in antiferroelectric thin films, offering a new avenue for applications of antiferroelectricity in nanoelectromechanical systems.
http://arxiv.org/abs/2504.11725v1
Sparsity-promoting methods for isolating dominant linear amplification mechanisms in wall-bounded flows
2025-04-16T02:59:01+00:00
This work proposes a method to identify and isolate the physical mechanisms that are responsible for linear energy amplification in fluid flows. This is achieved by applying a sparsity-promoting methodology to the resolvent form of the governing equations, solving an optimization problem that balances retaining the amplification properties of the original operator with minimizing the number of terms retained in the simplified sparse model. This results in simplified operators that often have very similar pseudospectral properties as the original equations. The method is demonstrated on both incompressible and compressible wall-bounded parallel shear flows, where the results obtained from the proposed method appear to be consistent with known mechanisms and simplifying assumptions, such as the lift-up mechanism, and (for the compressible case) Morkovin's hypothesis and the strong Reynolds analogy. This provides a framework for the application of this method to problems for which knowledge of pertinent amplification mechanisms is less established.
http://arxiv.org/abs/2504.11726v1
Saga: Capturing Multi-granularity Semantics from Massive Unlabelled IMU Data for User Perception
2025-04-16T03:03:42+00:00
Inertial measurement units (IMUs) have been prevalently used in a wide range of mobile perception applications such as activity recognition and user authentication, where a large amount of labelled data is normally required to train a satisfactory model. However, it is difficult to label micro-activities in massive IMU data due to the difficulty of understanding raw IMU data and the lack of ground truth. In this paper, we propose a novel fine-grained user perception approach, called Saga, which only needs a small amount of labelled IMU data to achieve stunning user perception accuracy. The core idea of Saga is to first pre-train a backbone feature extraction model, utilizing the rich semantic information of different levels embedded in the massive unlabelled IMU data. Meanwhile, for a specific downstream user perception application, Bayesian Optimization is employed to determine the optimal weights for pre-training tasks involving different semantic levels. We implement Saga on five typical mobile phones and evaluate Saga on three typical tasks on three IMU datasets. Results show that, when using only about 100 training samples per class, Saga can achieve over 90% of the accuracy of a full-fledged model trained on over ten thousand training samples, with no additional system overhead.
http://arxiv.org/abs/2504.11727v1
Contrast Enhancement of Barely Visible Impact Damage using Speckle-Based Dark-Field Radiography
2025-04-16T03:04:16+00:00
Barely visible impact damage (BVID) can cause serious issues for composite structures, since sub-surface damage can severely reduce the strength of the material without showing easily detectable surface signs. Dark-field imaging measures ultra-small-angle scattering caused by microscopic features within samples. It is sensitive to damage in composite materials which would otherwise be invisible in conventional radiography. Here we demonstrate BVID detection with speckle-based dark-field imaging, a technique requiring only sandpaper (to create the speckle pattern) in addition to a conventional X-ray imaging setup to extract the dark-field signal. We demonstrate that the technique is capable of detecting both matrix cracking and delaminations by imaging materials susceptible to these failure mechanisms.
http://arxiv.org/abs/2504.11728v1
Enumeration of Bases in Matroid with Exponentially Large Ground Set
2025-04-16T03:05:39+00:00
When we deal with a matroid ${\mathcal M}=(U,{\mathcal I})$, we usually assume that it is implicitly given by means of the membership (MEM) oracle. Many existing efficient algorithms run in polynomial time with respect to $|U|$ and the running time of the MEM-oracle. However, they are no longer efficient when $U$ is exponentially large, as happens in some contexts. In this paper, we study two problems of enumerating bases in such matroids. First, we present an incremental-polynomial algorithm that enumerates all minimum-weighted bases, where the bounding polynomial does not depend on $|U|$. To design the algorithm, we assume two oracles other than the MEM-oracle: the MinB-oracle that returns a minimum basis and the REL-oracle that returns a relevant element one by one in non-decreasing order of weight. The proposed algorithm is applicable to enumeration of minimum bases of binary matroids from cycle space, path space and cut space, all of which have exponentially large $U$ with respect to a given graph. The highlight in this context is that, to design the REL-oracle for cut space, we develop the first polynomial-delay algorithm that enumerates all relevant cuts of a given graph in non-decreasing order of weight. Finally, we present a polynomial-delay algorithm that enumerates all sets of $r$ linearly independent $r$-dimensional vectors over $\mathit{GF}(2)$. Using the algorithm, we can enumerate all unweighted bases of a binary matroid whose elements are closed under addition, with polynomial delay with respect to the matroid rank $r$.
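For the GF(2) setting mentioned at the end of the abstract, the helper below extracts one basis from a list of binary vectors by Gaussian elimination over GF(2) using integer bitmasks; it is only the elementary independence test underlying such enumeration, not the paper's polynomial-delay enumeration algorithm or its oracles.

```python
# Gaussian elimination over GF(2) with Python-int bitmasks: find one basis of the
# binary matroid spanned by a list of bit-vectors.
def gf2_basis(vectors):
    """Return indices of one independent spanning subset (a basis) over GF(2)."""
    pivots = {}                                    # leading-bit position -> reduced vector
    basis_idx = []
    for idx, v in enumerate(vectors):
        cur = v
        for bit in sorted(pivots, reverse=True):   # reduce from high bits down
            if cur >> bit & 1:
                cur ^= pivots[bit]
        if cur:                                    # still nonzero => independent
            pivots[cur.bit_length() - 1] = cur
            basis_idx.append(idx)
    return basis_idx

# Example: cycle-space-like vectors over 4 "edges", encoded as bitmasks
vecs = [0b0011, 0b0110, 0b0101, 0b1100]
print(gf2_basis(vecs))   # [0, 1, 3]; 0b0101 = 0b0011 ^ 0b0110 is dependent
```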
http://arxiv.org/abs/2504.11729v1
EdgePrompt: A Distributed Key-Value Inference Framework for LLMs in 6G Networks
2025-04-16T03:07:07+00:00
As sixth-generation (6G) networks advance, large language models (LLMs) are increasingly integrated into 6G infrastructure to enhance network management and intelligence. However, traditional LLM architectures struggle to meet the stringent latency and security requirements of 6G, especially as increasing sequence lengths lead to greater task complexity. This paper proposes EdgePrompt, a cloud-edge collaborative framework based on a hierarchical attention splicing mechanism. EdgePrompt employs distributed key-value (KV) pair optimization techniques to accelerate inference and adapt to network conditions. Additionally, to reduce the risk of data leakage, EdgePrompt incorporates a privacy-preserving strategy by isolating sensitive information during processing. Experiments on a public dataset show that EdgePrompt effectively improves inference throughput and reduces latency, providing a reliable solution for LLM deployment in 6G environments.
http://arxiv.org/abs/2504.11730v1
Blockchain Application in Metaverse: A Review
2025-04-16T03:07:35+00:00
In recent years, the term Metaverse has emerged as one of the most compelling concepts, captivating the interest of international companies such as Tencent, ByteDance, Microsoft, and Facebook. These companies recognized the Metaverse as a pivotal element for future success and have since made significant investments in this area. The Metaverse is still in its developmental stages, requiring the integration and advancement of various technologies to bring its vision to life. One of the key technologies associated with the Metaverse is blockchain, known for its decentralization, security, trustworthiness, and ability to manage time-series data. These characteristics align well with the Metaverse ecosystem, making blockchain foundational for its security and infrastructure. This paper introduces both blockchain and the Metaverse ecosystem while exploring the application of blockchain within the Metaverse, including decentralization, consensus mechanisms, hash algorithms, timestamping, smart contracts, distributed storage, distributed ledgers, and non-fungible tokens (NFTs), to provide insights for researchers investigating these topics.
http://arxiv.org/abs/2504.11731v1
Bayesian Optimization for Ion Beam Centroid Correction
2025-04-16T03:10:54+00:00
As part of the TRIUMF automatic beam tuning program, the Bayesian Optimization for Ion Steering (BOIS) method has been developed to perform corrective centroid steering of beams at the TRIUMF ISAC facility. BOIS exclusively controls the steerers for centroid correction after the transverse optics have been set according to theory. The method is fully online, easy to deploy, and has been tested on low-energy and post-accelerated beams at ISAC, achieving results comparable to those of human operators. scaleBOIS and boundBOIS are naive proof-of-concept solutions that preferentially select beam paths with minimal steering. Repeatable and robust automated steering reduces reliance on operator expertise and operational overhead, ensuring reliable beam delivery to the experiments and thereby supporting TRIUMF's scientific mission.
http://arxiv.org/abs/2504.11732v1
EgoExo-Gen: Ego-centric Video Prediction by Watching Exo-centric Videos
2025-04-16T03:12:39+00:00
Generating videos in the first-person perspective has broad application prospects in the field of augmented reality and embodied intelligence. In this work, we explore the cross-view video prediction task, where given an exo-centric video, the first frame of the corresponding ego-centric video, and textual instructions, the goal is to generate future frames of the ego-centric video. Inspired by the notion that hand-object interactions (HOI) in ego-centric videos represent the primary intentions and actions of the current actor, we present EgoExo-Gen, which explicitly models the hand-object dynamics for cross-view video prediction. EgoExo-Gen consists of two stages. First, we design a cross-view HOI mask prediction model that anticipates the HOI masks in future ego-frames by modeling the spatio-temporal ego-exo correspondence. Next, we employ a video diffusion model to predict future ego-frames using the first ego-frame and textual instructions, while incorporating the HOI masks as structural guidance to enhance prediction quality. To facilitate training, we develop an automated pipeline to generate pseudo HOI masks for both ego- and exo-videos by exploiting vision foundation models. Extensive experiments demonstrate that our proposed EgoExo-Gen achieves better prediction performance compared to previous video prediction models on the Ego-Exo4D and H2O benchmark datasets, with the HOI masks significantly improving the generation of hands and interactive objects in the ego-centric videos.
http://arxiv.org/abs/2504.11733v2
DVLTA-VQA: Decoupled Vision-Language Modeling with Text-Guided Adaptation for Blind Video Quality Assessment
2025-04-16T03:20:28+00:00
Inspired by the dual-stream theory of the human visual system (HVS), in which the ventral stream is responsible for object recognition and detail analysis while the dorsal stream focuses on spatial relationships and motion perception, an increasing number of video quality assessment (VQA) works built upon this framework have been proposed. Recent advancements in large multi-modal models, notably Contrastive Language-Image Pretraining (CLIP), have motivated researchers to incorporate CLIP into dual-stream-based VQA methods. This integration aims to harness the model's superior semantic understanding capabilities to replicate the object recognition and detail analysis of the ventral stream, as well as the spatial relationship analysis of the dorsal stream. However, CLIP is originally designed for images and lacks the ability to capture the temporal and motion information inherent in videos. To address this limitation, this paper proposes a Decoupled Vision-Language Modeling with Text-Guided Adaptation for Blind Video Quality Assessment (DVLTA-VQA), which decouples CLIP's visual and textual components and integrates them into different stages of the NR-VQA pipeline. Specifically, a Video-Based Temporal CLIP module is proposed to explicitly model temporal dynamics and enhance motion perception, aligning with the dorsal stream. Additionally, a Temporal Context Module is developed to refine inter-frame dependencies, further improving motion modeling. On the ventral stream side, a Basic Visual Feature Extraction Module is employed to strengthen detail analysis. Finally, a text-guided adaptive fusion strategy is proposed to enable dynamic weighting of features, facilitating more effective integration of spatial and temporal information.
http://arxiv.org/abs/2504.11734v1
Recent Advance in 3D Object and Scene Generation: A Survey
2025-04-16T03:22:06+00:00
In recent years, the demand for 3D content has grown exponentially with the intelligent upgrading of interactive media, extended reality (XR), and Metaverse industries. In order to overcome the limitations of traditional manual modeling approaches, such as labor-intensive workflows and prolonged production cycles, revolutionary advances have been achieved through the convergence of novel 3D representation paradigms and artificial intelligence generative technologies. In this survey, we conduct a systematic review of the cutting-edge achievements in static 3D object and scene generation and establish a comprehensive technical framework through systematic categorization. Specifically, we initiate our analysis with mainstream 3D object representations, followed by an in-depth exploration of two principal technical pathways in object generation: data-driven supervised learning methods and deep generative model-based approaches. Regarding scene generation, we focus on three dominant paradigms: layout-guided compositional synthesis, 2D prior-based scene generation, and rule-driven modeling. Finally, we critically examine persistent challenges in 3D generation and propose potential research directions for future investigation. This survey aims to provide readers with a structured understanding of state-of-the-art 3D generation technologies while inspiring researchers to undertake further exploration in this domain.
http://arxiv.org/abs/2504.11735v1
WalletProbe: A Testing Framework for Browser-based Cryptocurrency Wallet Extensions
2025-04-16T03:24:30+00:00
Serving as the first touch point for users to the cryptocurrency world, cryptocurrency wallets allow users to manage, receive, and transmit digital assets on blockchain networks and interact with emerging decentralized finance (DeFi) applications. Unfortunately, cryptocurrency wallets have always been prime targets for attackers, and incidents of wallet breaches have been reported from time to time. Although some recent studies have characterized the vulnerabilities and scams related to wallets, these have generally been described at a coarse granularity, overlooking potential risks inherent in the detailed designs of cryptocurrency wallets, especially from perspectives such as user interaction and advanced features. To fill the void, in this paper, we present a fine-grained security analysis of browser-based cryptocurrency wallets. To pinpoint security issues of components in wallets, we design WalletProbe, a mutation-based testing framework based on visual-level oracles. We have identified 13 attack vectors that can be abused by attackers to exploit cryptocurrency wallets and exposed 21 concrete attack strategies. By applying WalletProbe to 39 widely adopted browser-based wallet extensions, we find, strikingly, that all of them can be abused to steal crypto assets from innocent users. The identified attack vectors were promptly reported to wallet developers, and 26 issues have already been patched. It is, hence, urgent for our community to take action to mitigate threats related to cryptocurrency wallets. We promise to release all code and data to promote the development of the community.
http://arxiv.org/abs/2504.11736v1
Low-energy neutrino responses for 71Ga by electron capture rates, charge exchange reactions and shell model calculations
2025-04-16T03:30:19+00:00
Weak Gamow-Teller (GT) responses for low-lying states in ${}^{71}\mathrm{Ga}$ are crucial for studying low-energy solar neutrinos and the Ga anomaly, i.e., the possible transition to the sterile state. The responses for the ground state, the first excited state, and the second excited state are evaluated for the first time using the experimental electron capture rates, the experimental charge exchange reaction (CER) rates corrected for the tensor-interaction effect, and theoretical interacting shell model (ISM) calculations. The contributions from the two excited states to the solar and ${}^{51}\mathrm{Cr}$ neutrinos are found to be $4.2 \pm 1.2\%$ of that for the ground state. This is slightly larger than the ISM values but slightly smaller than the CER values without corrections for the tensor-interaction effect. The Ga anomaly is far beyond the uncertainty of the obtained nuclear responses.
http://arxiv.org/abs/2504.11737v1
Hardware Co-Designed Optimal Control for Programmable Atomic Quantum Processors via Reinforcement Learning
2025-04-16T03:30:40+00:00
Developing scalable, fault-tolerant atomic quantum processors requires precise control over large arrays of optical beams. This remains a major challenge due to inherent imperfections in classical control hardware, such as inter-channel crosstalk and beam leakage. In this work, we introduce a hardware co-designed intelligent quantum control framework to address these limitations. We construct a mathematical model of the photonic control hardware, integrate it into the quantum optimal control (QOC) framework, and apply reinforcement learning (RL) techniques to discover optimal control strategies. We demonstrate that the proposed framework enables robust, high-fidelity parallel single-qubit gate operations under realistic control conditions, where each atom is individually addressed by an optical beam. Specifically, we implement and benchmark three optimization strategies: a classical hybrid Self-Adaptive Differential Evolution-Adam (SADE-Adam) optimizer, a conventional RL approach based on Proximal Policy Optimization (PPO), and a novel end-to-end differentiable RL method. Using SADE-Adam as a baseline, we find that while PPO performance degrades as system complexity increases, the end-to-end differentiable RL consistently achieves gate fidelities above 99.9$\%$, exhibits faster convergence, and maintains robustness under varied channel crosstalk strength and randomized dynamic control imperfections.
http://arxiv.org/abs/2504.11738v1
Infinitely many solutions for an instantaneous and non-instantaneous fourth-order differential system with local assumptions
2025-04-16T03:32:13+00:00
We investigate a class of fourth-order differential systems with instantaneous and non-instantaneous impulses. Our technical approach is mainly based on a variant of Clark's theorem without the global assumptions. Under locally subquadratic growth conditions imposed on the nonlinear terms $f_i(t,u)$ and impulsive terms $I_i$, combined with perturbations governed by arbitrary continuous functions of small coefficient $\varepsilon$, we establish the existence of multiple small solutions. Specifically, the system exhibits infinitely many solutions in the case where $\varepsilon=0$.
http://arxiv.org/abs/2504.11739v1
The Devil is in the Prompts: Retrieval-Augmented Prompt Optimization for Text-to-Video Generation
2025-04-16T03:33:25+00:00
The evolution of Text-to-video (T2V) generative models, trained on large-scale datasets, has been marked by significant progress. However, the sensitivity of T2V generative models to input prompts highlights the critical role of prompt design in influencing generative outcomes. Prior research has predominantly relied on Large Language Models (LLMs) to align user-provided prompts with the distribution of training prompts, albeit without tailored guidance encompassing prompt vocabulary and sentence structure nuances. To this end, we introduce \textbf{RAPO}, a novel \textbf{R}etrieval-\textbf{A}ugmented \textbf{P}rompt \textbf{O}ptimization framework. To address potential inaccuracies and ambiguous details in LLM-generated prompts, RAPO refines the naive prompts through dual optimization branches, selecting the superior prompt for T2V generation. The first branch augments user prompts with diverse modifiers extracted from a learned relational graph, refining them to align with the format of training prompts via a fine-tuned LLM. Conversely, the second branch rewrites the naive prompt using a pre-trained LLM following a well-defined instruction set. Extensive experiments demonstrate that RAPO can effectively enhance both the static and dynamic dimensions of generated videos, demonstrating the significance of prompt optimization for user-provided prompts. Project website: \href{https://whynothaha.github.io/Prompt_optimizer/RAPO.html}{GitHub}.
http://arxiv.org/abs/2504.11740v1
A cautionary note for plasmode simulation studies in the setting of causal inference
2025-04-16T03:36:27+00:00
Plasmode simulation has become an important tool for evaluating the operating characteristics of different statistical methods in complex settings, such as pharmacoepidemiological studies of treatment effectiveness using electronic health records (EHR) data. These studies provide insight into how estimator performance is impacted by challenges including rare events, small sample size, etc., that can indicate which among a set of methods performs best in a real-world dataset. Plasmode simulation combines data resampled from a real-world dataset with synthetic data to generate a known truth for an estimand in realistic data. There are different potential plasmode strategies currently in use. We compare two popular plasmode simulation frameworks. We provide numerical evidence and a theoretical result, which shows that one of these frameworks can cause certain estimators to incorrectly appear overly biased with lower than nominal confidence interval coverage. Detailed simulation studies using both synthetic and real-world EHR data demonstrate that these pitfalls remain at large sample sizes and when analyzing data from a randomized controlled trial. We conclude with guidance for the choice of a plasmode simulation approach that maintains good theoretical properties to allow a fair evaluation of statistical methods while also maintaining the desired similarity to real data.
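A stylized sketch of the plasmode idea described above: covariates and treatment are resampled from a "real" dataset while the outcome is regenerated synthetically with a known treatment effect, so estimator bias can be measured against a known truth. The data-generating model, effect size, and OLS estimator are illustrative assumptions, not either of the frameworks compared in the paper.

```python
# Plasmode-style replicate loop: resample real covariates, simulate outcomes.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a real-world EHR covariate matrix and treatment assignment
n_real = 5_000
X_real = rng.standard_normal((n_real, 3))
T_real = rng.binomial(1, 1 / (1 + np.exp(-X_real[:, 0])))   # confounded treatment

TRUE_EFFECT = 0.5
est = []
for _ in range(200):                          # plasmode replicates
    idx = rng.integers(0, n_real, size=1_000) # resample real covariates/treatment
    X, T = X_real[idx], T_real[idx]
    Y = TRUE_EFFECT * T + X @ np.array([1.0, -0.5, 0.2]) + rng.standard_normal(len(T))
    # Covariate-adjusted OLS estimate of the treatment effect
    design = np.column_stack([np.ones_like(T), T, X])
    beta = np.linalg.lstsq(design, Y, rcond=None)[0]
    est.append(beta[1])

print("mean estimate:", np.mean(est), "| bias:", np.mean(est) - TRUE_EFFECT)
```

The paper's caution concerns exactly how the resampling and outcome-generation steps are combined; the loop above only fixes one of the possible choices.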
http://arxiv.org/abs/2504.11741v1
Climbing the Ladder of Reasoning: What LLMs Can-and Still Can't-Solve after SFT?
2025-04-16T03:39:38+00:00
Recent supervised fine-tuning (SFT) approaches have significantly improved language models' performance on mathematical reasoning tasks, even when models are trained at a small scale. However, the specific capabilities enhanced through such fine-tuning remain poorly understood. In this paper, we conduct a detailed analysis of model performance on the AIME24 dataset to understand how reasoning capabilities evolve. We discover a ladder-like structure in problem difficulty, categorize questions into four tiers (Easy, Medium, Hard, and Extremely Hard (Exh)), and identify the specific requirements for advancing between tiers. We find that progression from the Easy to the Medium tier requires adopting an R1 reasoning style with minimal SFT (500-1K instances), while Hard-level questions suffer from frequent model errors at each step of the reasoning chain, with accuracy plateauing at around 65% despite logarithmic scaling. Exh-level questions present a fundamentally different challenge; they require unconventional problem-solving skills that current models uniformly struggle with. Additional findings reveal that carefully curated small-scale datasets offer limited advantage; scaling dataset size proves far more effective. Our analysis provides a clearer roadmap for advancing language model capabilities in mathematical reasoning.
http://arxiv.org/abs/2504.11742v2
Multi-channel Single-Pixel Imaging for a Composite Motion Target
2025-04-16T03:46:02+00:00
Single-pixel imaging (SPI) offers cost-effectiveness, broad spectral coverage, and stable sub-Nyquist sampling reconstruction, enabling applications across diverse imaging fields. However, due to its inherent reconstruction mechanism, SPI is not well-suited for high-speed moving targets. To address these challenges, we propose a novel, universal SPI configuration for tracking and imaging moving objects. Unlike traditional motion compensation methods, our approach enables the recovery of targets undergoing arbitrary motion, including translation, rotation, and periodic or non-periodic movements, within a two-dimensional plane without increasing the number of modulation frames. By leveraging the centroid positions from multiple wavelength channels, we determine the target's motion state from a kinematic perspective. Moreover, we developed an adapted reconstruction method, the pseudo-inverse transformation (P-IT) method, which allows for the efficient reconstruction of objects with composite motion. With a maximum flip rate of 20 kHz for the digital micromirror device (DMD), the theoretical perception frame rate can reach up to 2222 Hz, comparable to that of conventional motion-compensated SPI for purely translational objects.
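As a minimal illustration of the reconstruction mechanism behind SPI, the sketch below recovers a small static scene from bucket-detector measurements via a pseudo-inverse solve; the pattern matrix, scene, and sample count are assumptions, and the paper's motion-compensating P-IT method and multi-channel centroid tracking are not reproduced.

```python
# Basic SPI recovery: measurements y = A x, least-squares/pseudo-inverse solve.
import numpy as np

rng = np.random.default_rng(0)
h = w = 16
scene = np.zeros((h, w)); scene[5:11, 6:10] = 1.0        # simple static target
x_true = scene.ravel()

m = 200                                                   # sub-Nyquist: m < h*w
A = rng.choice([0.0, 1.0], size=(m, h * w))               # DMD-style binary patterns
y = A @ x_true                                            # bucket-detector readings

x_rec = np.linalg.pinv(A) @ y                             # pseudo-inverse recovery
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

With fewer measurements than pixels the pseudo-inverse returns the minimum-norm solution, so the reconstruction is approximate; handling a moving target on top of this is the part the paper addresses.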
http://arxiv.org/abs/2504.11743v1
Constraining the initial Lorentz factor of gamma-ray bursts under different circumburst mediums
2025-04-16T03:46:39+00:00
The initial Lorentz factor ($\Gamma_{0}$) plays a crucial role in uncovering the physical characteristics of gamma-ray bursts (GRBs). Previous studies have indicated that the ambient medium density index $k$ for GRBs falls in the range 0 - 2, rather than being exactly equal to 0 (homogeneous interstellar medium) or 2 (typical stellar wind). In this work, we aim to constrain the $\Gamma_0$ of GRBs considering their distinct circumburst media. We select a total of 33 GRBs for our analysis, comprising 7 X-ray GRBs and 26 optical GRBs. Subsequently, by utilizing the deceleration time of the fireball $t_{\rm p}$, we derive $\Gamma_0$ for the 33 GRBs assuming a radiation efficiency of $\eta = 0.2$. The inferred initial Lorentz factors range from 50 to 500, consistent with previous studies. We then investigate the correlation between $\Gamma_0$ and the isotropic energy $E_{\rm \gamma,iso}$ (as well as the mean isotropic luminosity $L_{\rm \gamma,iso}$), finding very tight correlations between them, i.e., $\Gamma_0$ $\propto$ $E^{0.24}_{\rm \gamma,iso,52}$ ($\Gamma_0$ $\propto$ $L^{0.20}_{\rm \gamma,iso,49}$) with $\eta$=0.2. Additionally, we verify the correlation among $\Gamma_0$, the isotropic energy $E_{\rm \gamma,iso}$ (or $L_{\rm \gamma,iso}$) and the peak energy $E_{\rm{p,z}}$, i.e., $E_{\rm \gamma,iso,52}$ $\propto$ $\Gamma^{1.36}_0$$E^{0.82}_{\rm{p,z}}$ ($L_{\rm \gamma,iso,49}$ $\propto$ $\Gamma^{1.05}_0$$E^{0.66}_{\rm{p,z}}$) under the same radiation efficiency ($\eta$=0.2).
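For orientation, the snippet below evaluates a commonly used thin-shell estimate of $\Gamma_0$ from the rest-frame deceleration (afterglow peak) time for a homogeneous medium ($k = 0$); the exact normalization, the factor of two between the deceleration-time Lorentz factor and $\Gamma_0$, and the example numbers are assumptions, and the paper's treatment of a general density index $k$ is not reproduced.

```python
# Hedged illustration: thin-shell, homogeneous-medium (k = 0) estimate of Gamma_0
# from the afterglow onset (deceleration) time; conventions vary in the literature.
import math

m_p = 1.6726e-24      # proton mass [g]
c = 2.998e10          # speed of light [cm/s]

def gamma0_homogeneous(E_iso_erg, t_pz_s, n_cm3=1.0, eta=0.2):
    """Gamma_0 ~ 2 * Gamma(t_p) for a thin shell decelerating in a uniform medium.

    E_iso_erg : isotropic-equivalent gamma-ray energy [erg]
    t_pz_s    : rest-frame afterglow peak (deceleration) time [s]
    """
    gamma_dec = (3.0 * E_iso_erg /
                 (32.0 * math.pi * n_cm3 * m_p * c**5 * eta * t_pz_s**3)) ** 0.125
    return 2.0 * gamma_dec

print(gamma0_homogeneous(1e53, 100.0))   # ~3e2, within the 50-500 range quoted above
```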
http://arxiv.org/abs/2504.11744v1
From Cyber Threat to Data Shield: Constructing Provably Secure File Erasure with Repurposed Ransomware Cryptography
2025-04-16T03:47:17+00:00
Ransomware has emerged as a persistent cybersecurity threat, leveraging robust encryption schemes that often remain unbroken even after public disclosure of source code. Motivated by the technical resilience of such mechanisms, this paper presents SEER (Secure and Efficient Encryption-based Erasure via Ransomware), a provably secure file destruction system that repurposes ransomware encryption for legitimate data erasure tasks. SEER integrates the triple-encryption design of the Babuk ransomware family, including Curve25519-based key exchange, SHA-256-based key derivation, and the Sosemanuk stream cipher, to construct a layered key management architecture. It tightly couples encryption and key destruction by securely erasing session keys immediately after use. Experimental results on an ESXi platform demonstrate that SEER achieves a four-orders-of-magnitude performance improvement over the DoD 5220.22 standard. The proposed system further ensures provable security through both theoretical foundations and practical validation, offering an efficient and resilient solution for the secure destruction of sensitive data.
http://arxiv.org/abs/2504.11745v1
$D^0-\bar{D}^0$ mixing in the Dyson-Schwinger approach
2025-04-16T03:49:44+00:00
In view of difficulty to reproduce observables in the $D^0-\bar{D}^0$ mixing via the operator product expansion, we discuss the Dyson-Schwinger approach to this process. Formulated by the parameterization of quark propagators, SU(3) breaking relevant to charm mixing is evaluated in such a way that properly takes account of dynamical chiral symmetry breaking. The $\bar{D}^0\to D^0$ transition is discussed in the vacuum-insertion approximation with locality of the light valence-quark field, represented by the decay constant of $D^0$ meson as well as relevant momentum integrals. It is found that dimensionless mass-difference observable in this approach leads to $|x|=(1.3-2.9)\times 10^{-3}$, the order of magnitude comparable to the HFLAV data, and thereby offering a certain improvement as a theoretical framework.
http://arxiv.org/abs/2504.11746v1
A New Radio Continuum Study of the Large Magellanic Cloud Supernova Remnant MC SNR J0519-6902
2025-04-16T03:51:27+00:00
We present a new radio continuum study of the Large Magellanic Cloud supernova remnant (SNR) MC SNR J0519-6902. With a diameter of ~8 pc, this SNR shows a radio ring-like morphology with three bright regions toward the north, east, and south. Its linear polarisation is prominent, with average values of $5 \pm 1\%$ and $6 \pm 1\%$ at 5500 and 9000 MHz, and we find a spectral index of $-0.62 \pm 0.02$, typical of a young SNR. The average rotation measure is estimated at $-124 \pm 83$ rad m$^{-2}$ and the magnetic field strength at $\sim$11 $\mu$G. We also estimate an equipartition magnetic field of $72 \pm 5$ $\mu$G and a minimum explosion energy of $E_{\rm min} = 2.6 \times 10^{48}$ erg. Finally, we identified an H I cloud that may be associated with MC SNR J0519-6902, located in the southeastern part of the remnant, along with a potential wind-bubble cavity.
http://arxiv.org/abs/2504.11747v1
Detectors for local discrimination of sets of generalized Bell states
2025-04-16T03:55:53+00:00
A fundamental problem in quantum information processing is the discrimination among a set of orthogonal quantum states of a composite system under local operations and classical communication (LOCC). Corresponding to the LOCC indistinguishable sets of four ququad-ququad orthogonal maximally entangled states (MESs) constructed by Yu et al. [Phys. Rev. Lett. 109, 020506 (2012)], the maximum commutative sets (MCSs) were introduced as detectors for the local distinguishability of the set of generalized Bell states (GBSs), for which the detectors are sufficient to determine the LOCC distinguishability. In this work, we show how to determine all the detectors for a given GBS set. We construct also several 4-GBS sets without detectors, most of which are one-way LOCC indistinguishable and only one is one-way LOCC distinguishable, indicating that the detectors are not necessary for LOCC distinguishability. Furthermore, we show that for 4-GBS sets in quantum system $\mathbb{C}^{6}\otimes\mathbb{C}^{6}$, the detectors are almost necessary for one-way LOCC distinguishability, except for one set in the sense of local unitary equivalence. The problem of one-way LOCC discrimination of 4-GBS sets in $\mathbb{C}^{6}\otimes\mathbb{C}^{6}$ is completely resolved.
http://arxiv.org/abs/2504.11748v1
Steerable rolling of a 1-DoF robot using an internal pendulum
2025-04-16T03:59:30+00:00
We present ROCK (Rolling One-motor Controlled rocK), a 1 degree-of-freedom robot consisting of a round shell and an internal pendulum. An uneven shell surface enables steering by using only the movement of the pendulum, allowing for mechanically simple designs that may be feasible to scale to large quantities or small sizes. We train a control policy using reinforcement learning in simulation and deploy it onto the robot to complete a rectangular trajectory.
http://arxiv.org/abs/2504.11749v1
SkeletonX: Data-Efficient Skeleton-based Action Recognition via Cross-sample Feature Aggregation
2025-04-16T04:01:42+00:00
While current skeleton action recognition models demonstrate impressive performance on large-scale datasets, their adaptation to new application scenarios remains challenging. These challenges are particularly pronounced when facing new action categories, diverse performers, and varied skeleton layouts, leading to significant performance degeneration. Additionally, the high cost and difficulty of collecting skeleton data make large-scale data collection impractical. This paper studies one-shot and limited-scale learning settings to enable efficient adaptation with minimal data. Existing approaches often overlook the rich mutual information between labeled samples, resulting in sub-optimal performance in low-data scenarios. To boost the utility of labeled data, we identify the variability among performers and the commonality within each action as two key attributes. We present SkeletonX, a lightweight training pipeline that integrates seamlessly with existing GCN-based skeleton action recognizers, promoting effective training under limited labeled data. First, we propose a tailored sample pair construction strategy on two key attributes to form and aggregate sample pairs. Next, we develop a concise and effective feature aggregation module to process these pairs. Extensive experiments are conducted on NTU RGB+D, NTU RGB+D 120, and PKU-MMD with various GCN backbones, demonstrating that the pipeline effectively improves performance when trained from scratch with limited data. Moreover, it surpasses previous state-of-the-art methods in the one-shot setting, with only 1/10 of the parameters and much fewer FLOPs. The code and data are available at: https://github.com/zzysteve/SkeletonX
http://arxiv.org/abs/2504.11750v1
Characterizing and Optimizing LLM Inference Workloads on CPU-GPU Coupled Architectures
2025-04-16T04:02:39+00:00
Large language model (LLM)-based inference workloads increasingly dominate data center costs and resource utilization. Therefore, understanding the inference workload characteristics on evolving CPU-GPU coupled architectures is crucial for optimization. This paper presents an in-depth analysis of LLM inference behavior on loosely-coupled (PCIe A100/H100) and closely-coupled (GH200) systems. We analyze performance dynamics using fine-grained operator-to-kernel trace analysis, facilitated by our novel profiler SKIP and metrics like Total Kernel Launch and Queuing Time (TKLQT). Results show that closely-coupled (CC) GH200 significantly outperforms loosely-coupled (LC) systems at large batch sizes, achieving 1.9x-2.7x faster prefill latency for Llama 3.2-1B. However, our analysis also reveals that GH200 remains CPU-bound up to 4x larger batch sizes than LC systems. In this extended CPU-bound region, we identify the performance characteristics of the Grace CPU as a key factor contributing to higher inference latency at low batch sizes on GH200. We demonstrate that TKLQT accurately identifies this CPU/GPU-bound transition point. Based on this analysis, we further show that kernel fusion offers significant potential to mitigate GH200's low-batch latency bottleneck by reducing kernel launch overhead. This detailed kernel-level characterization provides critical insights for optimizing diverse CPU-GPU coupling strategies. This work is an initial effort, and we plan to explore other major AI/DL workloads that demand different degrees of CPU-GPU heterogeneous architectures.
http://arxiv.org/abs/2504.11751v2
Wandering Flows on the Plane
2025-04-16T04:06:41+00:00
We study planar flows without non-wandering points and prove several properties of these flows in relation with their prolongational relation. The main results of this article are that a planar (regular) wandering flow has no generalized recurrence and has only two topological invariants: the space of its orbits and its prolongational relation (or, equivalently, its smallest stream). As a byproduct, our results show that, even in absence of any type of recurrence, the stream of a flow contains fundamental information on its behavior.
http://arxiv.org/abs/2504.11752v2
Real-Time Reconstruction of Ground Motion During Small Magnitude Earthquakes: A Pilot Study
2025-04-16T04:06:50+00:00
This study presents a pilot investigation into a novel method for reconstructing real-time ground motion during small magnitude earthquakes (M < 4.5), removing the need for computationally expensive source characterization and simulation processes to assess ground shaking. Small magnitude earthquakes, which occur frequently and can be modeled as point sources, provide ideal conditions for evaluating real-time reconstruction methods. Utilizing sparse observation data, the method applies the Gappy Auto-Encoder (Gappy AE) algorithm for efficient field data reconstruction. This is the first study to apply the Gappy AE algorithm to earthquake ground motion reconstruction. Numerical experiments conducted with SW4 simulations demonstrate the method's accuracy and speed across varying seismic scenarios. The reconstruction performance is further validated using real seismic data from the Berkeley area in California, USA, demonstrating the potential for practical application of real-time earthquake data reconstruction using Gappy AE. As a pilot investigation, it lays the groundwork for future applications to larger and more complex seismic events.
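A compact sketch of the gappy-reconstruction idea the abstract describes: an autoencoder is trained offline on full synthetic fields, and at "deployment" the latent code is optimized so the decoder matches a handful of sensor readings. The network sizes, synthetic fields, and optimizer settings are assumptions; this is not the authors' Gappy AE implementation or their SW4-based data.

```python
# Gappy-style reconstruction: train an AE on full fields, then fit only the
# latent code to sparse "sensor" observations of a new field.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_grid, latent = 64, 4
x_axis = torch.linspace(0, 1, n_grid)

def make_fields(n):
    a = torch.rand(n, 1) * 2 + 0.5
    ph = torch.rand(n, 1) * 3.14
    return torch.sin(2 * 3.14159 * a * x_axis + ph)    # smooth 1-D "ground motion"

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_grid, 32), nn.Tanh(), nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.Tanh(), nn.Linear(32, n_grid))
    def forward(self, x):
        return self.dec(self.enc(x))

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
train = make_fields(2000)
for _ in range(500):                                    # offline training on full fields
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(train), train)
    loss.backward(); opt.step()

# Online "gappy" step: only a few sensor locations of a new field are observed
target = make_fields(1)
sensors = torch.tensor([3, 17, 30, 45, 60])
z = torch.zeros(1, latent, requires_grad=True)
z_opt = torch.optim.Adam([z], lr=5e-2)
for _ in range(300):
    z_opt.zero_grad()
    err = nn.functional.mse_loss(ae.dec(z)[:, sensors], target[:, sensors])
    err.backward(); z_opt.step()

full_rec = ae.dec(z).detach()
print("full-field relative error:",
      (torch.norm(full_rec - target) / torch.norm(target)).item())
```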
http://arxiv.org/abs/2504.11753v1
The $L^p$-boundedness of wave operators for 4-th order Schrödinger operators on $\mathbb{R}^2$, I
2025-04-16T04:10:47+00:00
We prove that high energy parts of wave operators for fourth order Schr\"odinger operators $H=\Delta^2 + V(x)$ in $\mathbb{R}^2$ are bounded in $L^p(\mathbb{R}^2)$ for $p\in(1,\infty)$.
http://arxiv.org/abs/2504.11754v1
GrabS: Generative Embodied Agent for 3D Object Segmentation without Scene Supervision
2025-04-16T04:13:53+00:00
We study the hard problem of 3D object segmentation in complex point clouds without requiring human labels of 3D scenes for supervision. By relying on the similarity of pretrained 2D features or external signals such as motion to group 3D points as objects, existing unsupervised methods are usually limited to identifying simple objects like cars or their segmented objects are often inferior due to the lack of objectness in pretrained features. In this paper, we propose a new two-stage pipeline called GrabS. The core concept of our method is to learn generative and discriminative object-centric priors as a foundation from object datasets in the first stage, and then design an embodied agent to learn to discover multiple objects by querying against the pretrained generative priors in the second stage. We extensively evaluate our method on two real-world datasets and a newly created synthetic dataset, demonstrating remarkable segmentation performance, clearly surpassing all existing unsupervised methods.
http://arxiv.org/abs/2504.11755v1
Asymmetric Cross-Correlation functions with delays in Sco X-1: Evidence of possible Jet triggering
2025-04-16T04:18:21+00:00
The formation and origin of jets from Z sources are not well understood, although an X-ray-radio correlation has been observed. We analyzed a set of observations of Sco X-1 made with the Rossi X-ray Timing Explorer (RXTE) satellite. Out of the 17 observations, 5 showed lags of a few tens of seconds in their cross-correlation function (CCF) analysis, with an asymmetry in the CCF between the soft and hard bands. Interestingly, during these observations a ballistic-type radio jet of ultra-relativistic (UR) nature was reported. The observed lags and associated cross-correlation coefficients were validated using simulations. The CCFs of the remaining 12 observations were symmetric, and their associated power density spectra (PDS) displayed normal branch oscillations (NBO) or both normal and horizontal branch oscillations (NBO+HBO). An X-ray spectral study of two observations in which radio core emission was seen, together with abrupt variations in both the PDS and CCF, showed a black-body flux variation of 10-20%, but no spectral parameter varied. We suggest that the ballistic jet disturbed the inner accretion region, viz. the boundary layer, plausibly along with the corona, which caused the lags observed in the CCFs and the absence of any oscillatory features in the PDS, which then trace only flat-topped noise. In contrast, the observations with no lags showed persistent NBO/NBO+HBO features, suggesting a steady accretion flow. Although the UR jet cannot be related to the NBO or HBO, we suggest it could be related to the phenomena that cause the NBO, since the majority of the PDSs displayed NBO. We also constrain the size of the inner accretion region responsible for the accretion ejecta in Sco X-1 to 20-30 km.
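A minimal sketch of the kind of soft-versus-hard band cross-correlation analysis described, run on synthetic light curves rather than RXTE data; the lag-sign convention is noted in the comments.

```python
# Illustrative CCF lag estimate between two X-ray bands (synthetic data).
import numpy as np

def ccf(a, b):
    """Normalized cross-correlation coefficients over all lags.
    Value at lag L is sum_m a[m] * b[m - L] (divided by N)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    return lags, corr

dt = 1.0                              # time bin in seconds
n = 2048
rng = np.random.default_rng(1)
driver = rng.normal(size=n).cumsum()  # red-noise-like variability
soft = driver + rng.normal(scale=0.5, size=n)
hard = np.roll(driver, 30) + rng.normal(scale=0.5, size=n)  # hard is a delayed copy

# With this ordering, a positive peak lag means the hard band trails the soft band.
lags, corr = ccf(hard, soft)
peak_lag = lags[np.argmax(corr)] * dt
print(f"peak correlation at lag = {peak_lag:.0f} s")
```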
http://arxiv.org/abs/2504.11756v1
AQETuner: Reliable Query-level Configuration Tuning for Analytical Query Engines
2025-04-16T04:18:25+00:00
Modern analytical query engines (AQEs) are essential for large-scale data analysis and processing. These systems usually provide numerous query-level tunable knobs that significantly affect individual query performance. While several studies have explored automatic DBMS configuration tuning, they have several limitations when handling query-level tuning. Firstly, they fail to capture how knobs influence query plans, which directly affect query performance. Secondly, they overlook query failures during the tuning process, resulting in low tuning efficiency. Thirdly, they struggle with cold-start problems for new queries, leading to prolonged tuning time. To address these challenges, we propose AQETuner, a novel Bayesian Optimization-based system tailored for reliable query-level knob tuning in AQEs. AQETuner first applies attention mechanisms to jointly encode the knobs and the query plan, effectively identifying the impact of knobs on plan nodes. Then, AQETuner employs a dual-task Neural Process to predict both query performance and failures, leveraging their interactions to guide the tuning process. Furthermore, AQETuner utilizes Particle Swarm Optimization to efficiently generate high-quality samples in parallel during the initial tuning stage for new queries. Experimental results show that AQETuner significantly outperforms existing methods, reducing query latency by up to 23.7% and query failures by up to 51.2%.
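The sketch below shows only the generic Bayesian-optimization loop underlying such tuners, applied to a single hypothetical knob with a synthetic latency function; it deliberately omits AQETuner's plan-aware encoding, dual-task Neural Process, and PSO warm start.

```python
# Generic Bayesian optimization over one query-level knob (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def query_latency(mem_fraction):
    """Hypothetical black box: latency of one query vs. a memory knob."""
    return (mem_fraction - 0.63) ** 2 * 40 + 5 + np.random.normal(scale=0.2)

def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma          # we are minimizing latency
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(7)
X = rng.uniform(0.1, 0.9, size=(4, 1))           # initial random knob settings
y = np.array([query_latency(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    cand = np.linspace(0.1, 0.9, 200).reshape(-1, 1)
    ei = expected_improvement(gp, cand, y.min())
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, query_latency(x_next[0]))

print(f"best knob setting ~ {X[np.argmin(y)][0]:.2f}, latency ~ {y.min():.2f}")
```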
http://arxiv.org/abs/2504.11757v1
Dynamics and Computational Principles of Echo State Networks: A Mathematical Perspective
2025-04-16T04:28:05+00:00
Reservoir computing (RC) represents a class of state-space models (SSMs) characterized by a fixed state transition mechanism (the reservoir) and a flexible readout layer that maps from the state space. It is a paradigm of computational dynamical systems that harnesses the transient dynamics of high-dimensional state spaces for efficient processing of temporal data. Rooted in concepts from recurrent neural networks, RC achieves exceptional computational power by decoupling the training of the dynamic reservoir from the linear readout layer, thereby circumventing the complexities of gradient-based optimization. This work presents a systematic exploration of RC, addressing its foundational properties such as the echo state property, fading memory, and reservoir capacity through the lens of dynamical systems theory. We formalize the interplay between input signals and reservoir states, demonstrating the conditions under which reservoirs exhibit stability and expressive power. Further, we delve into the computational trade-offs and robustness characteristics of RC architectures, extending the discussion to their applications in signal processing, time-series prediction, and control systems. The analysis is complemented by theoretical insights into optimization, training methodologies, and scalability, highlighting open challenges and potential directions for advancing the theoretical underpinnings of RC.
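A minimal NumPy echo state network along the lines discussed, using the common spectral-radius rescaling heuristic for the echo state property and a ridge-regression readout; the task and hyperparameters are illustrative.

```python
# Minimal echo state network: fixed random reservoir + ridge readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, washout, ridge = 1, 300, 100, 1e-6

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

# Task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(2000)
u = np.sin(0.07 * t)[:, None] + 0.01 * rng.normal(size=(2000, 1))
target = u[1:, 0]                                  # next value

states = run_reservoir(u[:-1])
S, y = states[washout:], target[washout:]          # discard transient states
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = S @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Only `W_out` is trained; the reservoir weights stay fixed, which is the decoupling the abstract refers to.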
http://arxiv.org/abs/2504.11758v1
Hardy spaces, Campanato spaces and higher order Riesz transforms associated with Bessel operators
2025-04-16T04:34:41+00:00
Let $\nu = (\nu_1, \ldots, \nu_n) \in (-1/2, \infty)^n$, with $n \ge 1$, and let $\Delta_\nu$ be the multivariate Bessel operator defined by \[ \Delta_{\nu} = -\sum_{j=1}^n\left( \frac{\partial^2}{\partial x_j^2} - \frac{\nu_j^2 - 1/4}{x_j^2} \right). \] In this paper, we develop the theory of Hardy spaces and BMO-type spaces associated with the Bessel operator $\Delta_\nu$. We then study the higher-order Riesz transforms associated with $\Delta_\nu$. First, we show that these transforms are Calder\'on-Zygmund operators. We further prove that they are bounded on the Hardy spaces and BMO-type spaces associated with $\Delta_\nu$.
http://arxiv.org/abs/2504.11759v1
Bringing closure to FDR control: beating the e-Benjamini-Hochberg procedure
2025-04-16T04:36:12+00:00
False discovery rate (FDR) has been a key metric for error control in multiple hypothesis testing, and many methods have been developed for FDR control across a diverse cross-section of settings and applications. We develop a closure principle for all FDR controlling procedures, i.e., we provide a characterization based on e-values for all admissible FDR controlling procedures. We leverage this idea to formulate the closed eBH procedure, a (usually strict) improvement over the eBH procedure for FDR control when provided with e-values. We demonstrate the practical performance of closed eBH in simulations.
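For reference, a sketch of the base eBH procedure that the closed eBH procedure improves upon, assuming the standard rule: reject the hypotheses with the k* largest e-values, where k* = max{k : k-th largest e-value >= n / (alpha * k)}.

```python
# Base e-Benjamini-Hochberg procedure on a toy set of e-values.
import numpy as np

def ebh(e_values, alpha=0.1):
    e = np.asarray(e_values, dtype=float)
    n = len(e)
    order = np.argsort(-e)                    # indices sorted by decreasing e-value
    sorted_e = e[order]
    ks = np.arange(1, n + 1)
    ok = sorted_e >= n / (alpha * ks)
    if not ok.any():
        return np.array([], dtype=int)        # no rejections
    k_star = ks[ok].max()
    return np.sort(order[:k_star])            # indices of rejected hypotheses

# Toy example: three strong signals among ten hypotheses.
e_vals = [250.0, 1.2, 0.8, 90.0, 3.0, 0.5, 40.0, 1.0, 2.0, 0.7]
print("rejected hypotheses:", ebh(e_vals, alpha=0.1))   # -> [0 3 6]
```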
http://arxiv.org/abs/2504.11760v1
The Topological Structures of the Orders of Hypergraphs
2025-04-16T04:40:12+00:00
We provide first a categorical exploration of, and then completion of the mapping of the relationships among, three fundamental perspectives on binary relations: as the incidence matrices of hypergraphs, as the formal contexts of concept lattices, and as specifying topological cosheaves of simplicial (Dowker) complexes on simplicial (Dowker) complexes. We provide an integrative, functorial framework combining previously known with three new results: 1) given a binary relation, there are order isomorphisms among the bounded edge order of the intersection complexes of its dual hypergraphs and its concept lattice; 2) the concept lattice of a context is an isomorphism invariant of the Dowker cosheaf (of abstract simplicial complexes) of that context; and 3) a novel Dowker cosheaf (of chain complexes) of a relation is an isomorphism invariant of the concept lattice of the context that generalizes Dowker's original homological result. We illustrate these concepts throughout with a running example, and demonstrate relationships to past results.
http://arxiv.org/abs/2504.11761v1
Delayed Acceptance Markov Chain Monte Carlo for Robust Bayesian Analysis
2025-04-16T04:40:17+00:00
This study introduces a computationally efficient algorithm, delayed acceptance Markov chain Monte Carlo (DA-MCMC), designed to improve posterior simulation in quasi-Bayesian inference. Quasi-Bayesian methods, which do not require fully specifying a probabilistic model, are often computationally expensive owing to the need to evaluate the inverse and determinant of large covariance matrices. DA-MCMC addresses this challenge by employing a two-stage process: In the first stage, proposals are screened using an approximate posterior, whereas a final acceptance or rejection decision is made in the second stage based on the exact target posterior. This reduces the need for costly matrix computations, thereby improving efficiency without sacrificing accuracy. We demonstrate the effectiveness of DA-MCMC through applications to both synthetic and real data. The results demonstrate that, although DA-MCMC slightly reduces the effective sample size per iteration compared with the standard MCMC, it achieves substantial improvement in terms of effective sample size per second, approximately doubling the efficiency. This makes DA-MCMC particularly useful for cases where posterior simulation is computationally intensive. Thus, the DA-MCMC algorithm offers a significant advancement in computational efficiency for quasi-Bayesian inference, making it a valuable tool for robust Bayesian analysis.
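A toy sketch of the two-stage delayed acceptance Metropolis-Hastings step on a synthetic target; the exact and approximate log posteriors here are stand-ins, not the quasi-Bayesian objectives used in the study.

```python
# Delayed acceptance Metropolis-Hastings with a cheap screening stage.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """'Expensive' exact log posterior -- a standard normal here."""
    return -0.5 * x ** 2

def log_approx(x):
    """Cheap approximation used for first-stage screening."""
    return -0.5 * (x / 1.2) ** 2

def da_mh(n_iter=20000, step=1.5):
    x, chain = 0.0, []
    for _ in range(n_iter):
        y = x + step * rng.normal()
        # Stage 1: screen the proposal with the cheap approximation.
        if np.log(rng.uniform()) < log_approx(y) - log_approx(x):
            # Stage 2: correct with the exact target so the chain still
            # leaves the true posterior invariant.
            log_ratio = (log_target(y) - log_target(x)) \
                      + (log_approx(x) - log_approx(y))
            if np.log(rng.uniform()) < log_ratio:
                x = y
        chain.append(x)
    return np.array(chain)

samples = da_mh()
print(f"mean = {samples.mean():.3f}, var = {samples.var():.3f}")  # ~0 and ~1
```

Proposals rejected in stage 1 never reach the expensive evaluation, which is where the per-second efficiency gain comes from.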
http://arxiv.org/abs/2504.11762v1
Gas-solid Reaction Dynamics on Li$_6$PS$_5$Cl Surfaces: A Case Study of the Influence of CO$_2$ and CO$_2$/O$_2$ Atmospheres Using AIMD and MLFF Simulations
2025-04-16T04:45:40+00:00
In recent years, rapid progress has been made in solid-state lithium batteries. Among various technologies, coating the surface of electrodes or electrolytes has proven to be an effective method to enhance interfacial stability and improve battery cycling performance. Recent experimental studies showed that gas-solid reactions offer a convenient approach to form modified coating layers on the solid electrolyte. Here, we performed computational simulations to investigate this surface reaction process. Specifically, we simulated the gas-solid reactions of Li$_6$PS$_5$Cl (LPSC) solid-state electrolytes in pure CO$_2$ and in mixed CO$_2$/O$_2$ atmospheres using ab-initio molecular dynamics (AIMD) and machine-learning force field (MLFF)-accelerated molecular dynamics (MD) approaches. In the former case, LPSC surfaces primarily form Li$_2$CO$_2$S because it is difficult to dissociate another oxygen atom from the second CO$_2$ molecule. In the mixed CO$_2$/O$_2$ atmosphere, by contrast, O$_2$ molecules preferentially adsorb onto LPSC, which supplies oxygen sites for subsequent CO$_2$ adsorption to form carbonate -CO$_3$ units. This reaction pathway ultimately generates an interfacial product dominated by Li$_2$CO$_3$. These coatings exhibit distinct electronic and ionic conductivity characteristics, making it possible to control coating compositions and configurations by adjusting the gas-solid reactions. Key criteria for applying this strategy are extracted from the current research.
http://arxiv.org/abs/2504.11763v1
Extended Short- and Long-Range Mesh Learning for Fast and Generalized Garment Simulation
2025-04-16T04:56:01+00:00
3D garment simulation is a critical component for producing cloth-based graphics. Recent advancements in graph neural networks (GNNs) offer a promising approach for efficient garment simulation. However, GNNs require extensive message-passing to propagate information such as physical forces and maintain contact awareness across the entire garment mesh, which becomes computationally inefficient at higher resolutions. To address this, we devise a novel GNN-based mesh learning framework with two key components to extend the message-passing range with minimal overhead, namely the Laplacian-Smoothed Dual Message-Passing (LSDMP) and the Geodesic Self-Attention (GSA) modules. LSDMP enhances message-passing with a Laplacian features smoothing process, which efficiently propagates the impact of each vertex to nearby vertices. Concurrently, GSA introduces geodesic distance embeddings to represent the spatial relationship between vertices and utilises attention mechanisms to capture global mesh information. The two modules operate in parallel to ensure both short- and long-range mesh modelling. Extensive experiments demonstrate the state-of-the-art performance of our method, requiring fewer layers and lower inference latency.
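To illustrate the basic operation the LSDMP module builds on, the sketch below applies plain Laplacian smoothing to per-vertex features on a tiny mesh; the dual message passing and geodesic self-attention themselves are not reproduced here.

```python
# Plain Laplacian smoothing of per-vertex features on a triangle mesh.
import numpy as np

def smooth_features(features, faces, n_vertices, alpha=0.5, steps=3):
    """features: (V, C) vertex features; faces: (F, 3) vertex indices."""
    # Build a row-normalized adjacency matrix (uniform weights).
    A = np.zeros((n_vertices, n_vertices))
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    deg = A.sum(axis=1, keepdims=True)
    A = A / np.maximum(deg, 1.0)
    # Each step blends a vertex with the mean of its neighbors, spreading
    # information one ring further per step.
    for _ in range(steps):
        features = (1 - alpha) * features + alpha * (A @ features)
    return features

# Tiny example: a strip of two triangles with a feature spike at vertex 0.
faces = np.array([[0, 1, 2], [1, 3, 2]])
feat = np.array([[10.0], [0.0], [0.0], [0.0]])
print(smooth_features(feat, faces, n_vertices=4).ravel())
```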
http://arxiv.org/abs/2504.11764v1
Probing the Abyss of the Quantum Vacuum: A Quest for Fluctuation-Free Domains
2025-04-16T04:59:14+00:00
The modification of electromagnetic vacuum fluctuations by boundary conditions is a fundamental prediction of quantum electrodynamics (QED). However, direct experimental verification in the optical regime is hindered by the need for sub-wavelength spatial resolution. Here, we present a novel approach to indirectly probe the spatial distribution of vacuum fluctuations by leveraging radio-frequency (RF) measurements of thermal noise. At RF frequencies, thermal noise, which occupies the same electromagnetic modes as vacuum fluctuations and is similarly shaped by boundary conditions, dominates the single-photon energy. By precisely characterizing the spatial distribution of thermal noise near a conducting boundary, we infer the corresponding modification of vacuum modes and, consequently, the vacuum fluctuations themselves. Our experimental setup, employing coaxial cables and RF splitters to mimic optical mirrors and beam splitters, enables controlled manipulation of boundary conditions and precise thermal noise measurements. We observe a reduction in thermal noise near the conducting boundary, providing indirect evidence for the theoretically predicted suppression of vacuum fluctuations. This work establishes a new experimental framework for investigating QED effects in constrained environments, with potential implications for quantum-limited precision measurements, such as gravitational wave detection and intensity-stabilized light sources. This RF approach circumvents the limitations of optical techniques and opens new avenues for exploring fundamental quantum phenomena.
http://arxiv.org/abs/2504.11765v1
Shared Disk KV Cache Management for Efficient Multi-Instance Inference in RAG-Powered LLMs
2025-04-16T04:59:18+00:00
Recent large language models (LLMs) face increasing inference latency as input context length and model size continue to grow. In particular, the retrieval-augmented generation (RAG) technique, which enhances LLM responses by incorporating external knowledge, exacerbates this issue by significantly increasing the number of input tokens. This expansion in token length leads to a substantial rise in computational overhead, particularly during the prefill stage, resulting in prolonged time-to-first-token (TTFT). To address this issue, this paper proposes a method to reduce TTFT by leveraging a disk-based key-value (KV) cache to lessen the computational burden during the prefill stage. We also introduce a disk-based shared KV cache management system, called Shared RAG-DCache, for multi-instance LLM RAG service environments. This system, together with an optimal system configuration, improves both throughput and latency under given resource constraints. Shared RAG-DCache exploits the locality of documents related to user queries in RAG, as well as the queueing delay in LLM inference services. It proactively generates and stores disk KV caches for query-related documents and shares them across multiple LLM instances to enhance inference performance. In experiments on a single host equipped with 2 GPUs and 1 CPU, Shared RAG-DCache achieved a 15-71% increase in throughput and a 12-65% reduction in latency, depending on the resource configuration.
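The sketch below illustrates only the document-keyed disk cache idea, not Shared RAG-DCache's actual storage format or scheduling; the cache path, key scheme, and serialization are hypothetical.

```python
# Disk-backed KV cache keyed by document hash, shareable via the filesystem.
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("/tmp/rag_kv_cache")      # hypothetical shared location
CACHE_DIR.mkdir(parents=True, exist_ok=True)

def _cache_path(document_text: str) -> Path:
    digest = hashlib.sha256(document_text.encode()).hexdigest()
    return CACHE_DIR / f"{digest}.pkl"

def get_or_build_kv(document_text: str, build_kv_fn):
    """Return the prefill KV data for a document, computing and persisting
    it only on a cache miss so other instances can reuse it."""
    path = _cache_path(document_text)
    if path.exists():
        with path.open("rb") as f:
            return pickle.load(f)          # cache hit: skip prefill for this doc
    kv = build_kv_fn(document_text)        # cache miss: run the prefill once
    with path.open("wb") as f:
        pickle.dump(kv, f)
    return kv

# Usage with a stand-in prefill function.
kv = get_or_build_kv("retrieved passage ...", build_kv_fn=lambda text: {"tokens": len(text)})
print(kv)
```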
http://arxiv.org/abs/2504.11766v1
On cohomogeneity one hyperpolar actions related to $G_{2}$
2025-04-16T05:03:19+00:00
Cohomogeneity one actions on irreducible Riemannian symmetric spaces of compact type are classified into three cases: Hermann actions, actions induced by the linear isotropy representation of a Riemannian symmetric space of rank 2, and exceptional actions. In this paper, we consider exceptional actions related to the exceptional compact Lie group $G_{2}$ and investigate some properties of their orbits as Riemannian submanifolds. In particular, we examine the principal curvatures of principal orbits and classify principal orbits that are minimal, austere, weakly reflective, and proper biharmonic.
http://arxiv.org/abs/2504.11767v1
Post-selection Inference in Regression Models for Group Testing Data
2025-04-16T05:08:57+00:00
We develop methodology for valid inference after variable selection in logistic regression when the responses are partially observed, that is, when one observes a set of error-prone testing outcomes instead of the true values of the responses. Aiming at selecting important covariates while accounting for missing information in the response data, we apply the expectation-maximization algorithm to compute maximum likelihood estimators subject to LASSO penalization. Subsequent to variable selection, we make inferences on the selected covariate effects by extending post-selection inference methodology based on the polyhedral lemma. Empirical evidence from our extensive simulation study suggests that our post-selection inference results are more reliable than those from naive inference methods that use the same data to perform variable selection and inference without adjusting for variable selection.
http://arxiv.org/abs/2504.11768v1
Representability theorems via metric techniques
2025-04-16T05:11:38+00:00
We prove new Brown representability theorems for triangulated categories using metric techniques as introduced in the work of Neeman. In the setting of algebraic geometry, this gives us new representability theorems for homological and cohomological functors on the bounded derived category of coherent sheaves. To prove this result, we introduce a generalisation of the notion of an approximable triangulated category.
http://arxiv.org/abs/2504.11769v1
Sliding Block Martingale based Multi-hop Delay QoS Analysis
2025-04-16T05:13:53+00:00
With the growing density of wireless networks and demand for multi-hop transmissions, precise delay Quality of Service (QoS) analysis has become a critical challenge. This paper introduces a multi-hop delay QoS analysis framework based on the sliding block martingale, addressing the loose boundary issue of prior methods that rely on service process martingales and min-plus transformations. By constructing a sliding block martingale with a window, we capture both long-term trends and short-term fluctuations in the backlog, eliminating the reliance on the generalized incremental property. The framework redefines delay unreliability events using cascading attributes, deriving a more compact Delay Unreliability Probability Boundary (DUPB). To improve the efficiency of solving the key parameter $\theta$, we propose a Micrometric Intervals based Supermartingale Upcrossing Estimate Theorem, quantifying the upper bound of event occurrence frequency to constrain the solution space of $\theta$. Simulations based on the 3GPP UMa/UMi channel model validate the framework's effectiveness. Results show that in 2-7 hop scenarios, the maximum deviation between theoretical boundaries and Monte Carlo simulations is $4.116 \times 10^{-5}$, with a lower RMSE than existing methods. Iteration count and CPU time for solving $\theta$ are reduced by $59\%-72\%$ and $60.6\%-70.5\%$, respectively, improving analysis efficiency. Furthermore, the derived minimum service rate for multi-hop queues offers a valuable reference for resource allocation. The framework demonstrates high accuracy, scalability, and practicality in complex multi-hop networks.
http://arxiv.org/abs/2504.11770v1
Unsupervised Classification of English Words Based on Phonological Information: Discovery of Germanic and Latinate Clusters
2025-04-16T05:20:08+00:00
Cross-linguistically, native words and loanwords follow different phonological rules. In English, for example, words of Germanic and Latinate origin exhibit different stress patterns, and a certain syntactic structure is exclusive to Germanic verbs. When viewed as a cognitive model, however, such etymology-based generalizations face challenges in terms of learnability, since the historical origins of words are presumably inaccessible to general language learners. In this study, we present computational evidence indicating that the Germanic-Latinate distinction in the English lexicon is learnable from the phonotactic information of individual words. Specifically, we performed unsupervised clustering on corpus-extracted words, and the resulting word clusters largely aligned with the etymological distinction. The model-discovered clusters also recovered various linguistic generalizations documented in the previous literature regarding the corresponding etymological classes. Moreover, our findings uncovered previously unrecognized features of the quasi-etymological clusters, offering novel hypotheses for future experimental studies.
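A small sketch of this kind of clustering setup, using character n-gram counts as a crude stand-in for richer phonotactic features; with such a tiny illustrative word list, the two clusters need not recover the etymological split the way a corpus-scale experiment would.

```python
# Unsupervised clustering of words from surface (character n-gram) features.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

words = [
    # historically Germanic
    "begin", "forgive", "understand", "withstand", "behold", "forget",
    # historically Latinate
    "describe", "conclusion", "permission", "attention", "construct", "reception",
]

# Character bigrams/trigrams as a rough proxy for phonotactic structure.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vectorizer.fit_transform(words)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for word, label in zip(words, labels):
    print(f"{word:>12s} -> cluster {label}")
```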