Dataset columns: url (string, 33 characters), title (string, 18 to 214 characters), date_published (2025-03-20 00:07:06 to 2025-04-17 04:46:57), abstract (string, 114 to 1.92k characters).
http://arxiv.org/abs/2503.17584v2
Enzyme as Maxwell's Demon: Steady-state Deviation from Chemical Equilibrium by Enhanced Enzyme Diffusion
2025-03-22T00:01:46+00:00
Enhanced enzyme diffusion (EED), in which the diffusion coefficient of an enzyme transiently increases during catalysis, has been extensively reported experimentally. We numerically and analytically demonstrate that such enzymes can act as Maxwell's demons. They use their enhanced diffusion as a memory of the previous catalytic reaction, to gain information and drive steady-state chemical concentrations away from chemical equilibrium. Our theoretical analysis identifies the conditions for this process, highlighting the functional role of EED and its relevance to cellular systems.
http://arxiv.org/abs/2503.17585v1
How do Massive Primordial Black Holes Impact the Formation of the First Stars and Galaxies?
2025-03-22T00:06:11+00:00
We investigate the impact of massive primordial black holes (PBHs; $m_{\rm BH}\sim 10^6~M_{\odot}$) on the star formation and first galaxy assembly process using high-resolution hydrodynamical simulations from $z = 1100$ to $z \sim 9$. We find that PBH accretion is self-regulated by feedback, suppressing mass growth unless feedback is weak. PBHs accelerate structure formation by seeding dark matter halos and gravitationally attracting gas, but strong feedback can delay cooling and suppress star formation. In addition, the presence of baryon-dark matter streaming creates an offset between the PBH location and the peaks induced in gas density, promoting earlier and more efficient star formation compared to standard $\Lambda$CDM. By $z \sim 10$, PBH-seeded galaxies form dense star clusters, with PBH-to-stellar mass ratios comparable to observed high-$z$ AGN like UHZ-1. Our results support PBHs as viable SMBH seeds but do not exclude alternative scenarios. We emphasize that PBH-seeding provides a natural explanation for some of the newly-discovered overmassive SMBHs at high redshift, in particular those with extreme ratios of BH-to-dynamical (virial) mass that challenge standard formation channels. Future studies with ultra-deep JWST surveys, the Roman Space Telescope, and radio surveys with facilities such as SKA and HERA will be critical in distinguishing PBH-driven SMBH growth from other pathways.
http://arxiv.org/abs/2503.17586v1
A note on the long time behavior of the elephant random walk with stops
2025-03-22T00:07:01+00:00
We study the long time behavior of the elephant random walk with stops, introduced by Kumar, Harbola and Lindenberg (2010), and establish the phase transition of the number of visited points up to time $n$, and the correlation between the position at time $n$ and the number of moves up to time $n$.
http://arxiv.org/abs/2503.17587v1
ConSol: Sequential Probability Ratio Testing to Find Consistent LLM Reasoning Paths Efficiently
2025-03-22T00:07:28+00:00
Recent advancements in large language models (LLMs) integrating explicit reasoning, such as OpenAI's o3-mini, DeepSeek-R1, and QWQ-32B, enable smaller models to solve complex tasks by generating intermediate reasoning steps prior to providing answers. However, this approach significantly increases computational costs, both monetarily and environmentally. The widely-used self-consistency method further exacerbates these costs by aggregating multiple reasoning paths to improve accuracy, often requiring between 40 and 64 samples per task. Although aggregation effectively reduces variance and bias, additional sampling can lead to diminishing returns when early samples yield consistent results. To address these inefficiencies, we propose leveraging Sequential Probability Ratio Testing (SPRT) to dynamically terminate sampling once sufficient consistency is achieved. We calibrate SPRT parameters specifically for LLM applications, accounting for sensitivity to detect the mode of the distribution. Our experiments demonstrate that incorporating SPRT significantly enhances token efficiency, achieving comparable accuracy to self-consistency methods but at a substantially reduced computational cost. To promote transparency and facilitate reproducibility, we have made the source code and datasets used in our experiments publicly available at our GitHub repository: https://github.com/LiuzLab/consol, or available as a PyPI package: pip install consol. We hope that this resource will support further research and encourage the development of new methods building upon our work.
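ConSol's exact SPRT calibration is not specified in the abstract; the sketch below is a minimal illustration of the general idea, using a plain Bernoulli SPRT in which agreement with the current modal answer counts as a success. The rates p0 and p1, the error levels, the sample cap, and the `sample_answer` callable are all illustrative assumptions, not values or interfaces from the paper or the consol package.

```python
import math
from collections import Counter

def sprt_self_consistency(sample_answer, p0=0.4, p1=0.7, alpha=0.05, beta=0.05, max_samples=64):
    """Draw answers one at a time; stop early once a Bernoulli SPRT decides the
    modal answer is consistent enough (H1: agreement rate >= p1 vs H0: <= p0).
    `sample_answer` is a zero-argument callable returning one model answer."""
    accept_h1 = math.log((1 - beta) / alpha)   # upper decision boundary
    accept_h0 = math.log(beta / (1 - alpha))   # lower decision boundary
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_answer()] += 1
        mode, k = counts.most_common(1)[0]     # current modal answer and its count
        llr = k * math.log(p1 / p0) + (n - k) * math.log((1 - p1) / (1 - p0))
        if llr >= accept_h1:                   # consistent enough: terminate sampling
            return mode, n
        if llr <= accept_h0:                   # too scattered: give up on early stopping
            break
    return counts.most_common(1)[0][0], sum(counts.values())
```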
http://arxiv.org/abs/2503.17588v1
LEMIX: Enabling Testing of Embedded Applications as Linux Applications
2025-03-22T00:14:47+00:00
Dynamic analysis, through rehosting, is an important capability for security assessment in embedded systems software. Existing rehosting techniques aim to provide high-fidelity execution by accurately emulating hardware and peripheral interactions. However, these techniques face challenges in adoption due to the increasing number of available peripherals and the complexities involved in designing emulation models for diverse hardware. Additionally, contrary to the prevailing belief that guides existing works, our analysis of reported bugs shows that high-fidelity execution is not required to expose most bugs in embedded software. Our key hypothesis is that security vulnerabilities are more likely to arise at higher abstraction levels. To substantiate our hypothesis, we introduce LEMIX, a framework enabling dynamic analysis of embedded applications by rehosting them as x86 Linux applications decoupled from hardware dependencies. Enabling embedded applications to run natively on Linux facilitates security analysis using available techniques and takes advantage of the powerful hardware available on the Linux platform for higher testing throughput. We develop various techniques to address the challenges involved in converting embedded applications to Linux applications. We evaluated LEMIX on 18 real-world embedded applications across four RTOSes and found 21 new bugs in 12 of the applications and all 4 of the RTOS kernels. We report that LEMIX is superior to existing state-of-the-art techniques both in terms of code coverage (~2x more coverage) and bug detection (18 more bugs).
http://arxiv.org/abs/2503.17589v1
Extending First-order Motion Planners to Second-order Dynamics
2025-03-22T00:15:34+00:00
This paper extends first-order motion planners to robots governed by second-order dynamics. Two control schemes are proposed based on the knowledge of a scalar function whose negative gradient aligns with a given first-order motion planner. When such a function is known, the first-order motion planner is combined with a damping velocity vector with a dynamic gain to extend the safety and convergence guarantees of the first-order motion planner to second-order systems. If no such function is available, we propose an alternative control scheme ensuring that the error between the robot's velocity and the first-order motion planner converges to zero. The theoretical developments are supported by simulation results demonstrating the effectiveness of the proposed approaches.
http://arxiv.org/abs/2503.17590v1
Dual Block Gradient Ascent for Entropically Regularised Quantum Optimal Transport
2025-03-22T00:17:14+00:00
We present a block gradient ascent method for solving the quantum optimal transport problem with entropic regularisation similar to the algorithm proposed in [D. Feliciangeli, A. Gerolin, L. Portinale: J. Funct. Anal. 285 (2023), no. 4, 109963] and [E. Caputo, A. Gerolin, N. Monina, L. Portinale: arXiv:2409.03698]. We prove a linear convergence rate based on strong concavity of the dual functional and present some results of numerical experiments of an implementation.
http://arxiv.org/abs/2503.17591v1
Quantum Computation based on Open Quantum Walks
2025-03-22T00:26:50+00:00
Open Quantum Walks (OQWs) are a type of quantum walk governed by the system's interaction with its environment. We explore the time evolution and the limit behavior of the OQW framework for Quantum Computation and show how we can represent random unitary quantum channels, such as the dephasing and depolarizing channels, in this model. We also develop a simulation protocol with circuit representation for this model, which is heavily inspired by the fact that OQWs are represented by graphs and are therefore local (in the graph sense). We obtain asymptotic advantages in system dimension, depth, and CNOT count compared to other simulation methods.
http://arxiv.org/abs/2503.17592v1
Benchmark Dataset for Pore-Scale CO2-Water Interaction
2025-03-22T00:42:42+00:00
Accurately capturing the complex interaction between CO2 and water in porous media at the pore scale is essential for various geoscience applications, including carbon capture and storage (CCS). We introduce a comprehensive dataset generated from high-fidelity numerical simulations to capture the intricate interaction between CO2 and water at the pore scale. The dataset consists of 624 2D samples, each of size 512x512 with a resolution of 35 {\mu}m, covering 100 time steps under a constant CO2 injection rate. It includes various levels of heterogeneity, represented by different grain sizes with random variation in spacing, offering a robust testbed for developing predictive models. This dataset provides high-resolution temporal and spatial information crucial for benchmarking machine learning models.
http://arxiv.org/abs/2503.17593v1
Guidance Free Image Editing via Explicit Conditioning
2025-03-22T00:44:23+00:00
Current sampling mechanisms for conditional diffusion models rely mainly on Classifier Free Guidance (CFG) to generate high-quality images. However, CFG requires several denoising passes in each time step, e.g., up to three passes in image editing tasks, resulting in excessive computational costs. This paper introduces a novel conditioning technique to ease the computational burden of the well-established guidance techniques, thereby significantly improving the inference time of diffusion models. We present Explicit Conditioning (EC) of the noise distribution on the input modalities to achieve this. Intuitively, we model the noise to guide the conditional diffusion model during the diffusion process. We present evaluations on image editing tasks and demonstrate that EC outperforms CFG in generating diverse high-quality images with significantly reduced computations.
http://arxiv.org/abs/2503.17594v1
A new tail bound for the sum of bounded independent random variables
2025-03-22T00:44:51+00:00
We construct a new tail bound for the sum of independent random variables for situations in which the expected value of the sum is known and each random variable lies within a specified interval, which may be different for each variable. This new bound can be computed by solving a two-dimensional convex optimization problem. Simulations demonstrate that the new bound is often substantially tighter than Hoeffding's inequality for cases in which both bounds are applicable.
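For reference, the Hoeffding bound used as the baseline in the comparison: for independent $X_i \in [a_i, b_i]$ with sum $S_n = \sum_{i=1}^n X_i$ and any $t > 0$,

$$\Pr\big(S_n - \mathbb{E}[S_n] \ge t\big) \le \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right).$$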
http://arxiv.org/abs/2503.17595v1
On the higher rational topological complexity of certain elliptic spaces
2025-03-22T00:50:09+00:00
In this paper, we show that $\text{TC}_r(Z)\leq r\cdot \text{cat}(Z)+\chi_{\pi}(Z)$, for any simply-connected elliptic space $Z$ admitting a pure minimal Sullivan model with a differential of constant length. Here $\chi_{\pi}(Z)$ denotes the homotopy characteristic and $r$ is an integer greater than or equal to $2$. We also give a lower bound for $\text{TC}_r$ in the framework of coformal spaces and we compute the exact value of $\text{TC}_r$ for certain families of spaces.
http://arxiv.org/abs/2503.17596v1
Individual and cooperative superexchange enhancement in cuprates
2025-03-22T00:50:40+00:00
It is now widely accepted that the antiferromagnetic coupling within high-temperature superconductors exhibits a profound correlation with the upper limit of the superconducting transition temperature these materials can reach. Thus, accurately calculating the positive and negative mechanisms that influence magnetic coupling in specific materials is crucial for the exploration of superconductivity at higher temperatures. Nevertheless, it is notoriously difficult to establish a complete description of electron correlations employing ab initio theories because of the large number of orbitals involved. In this study, we tackle the challenge of achieving high-level ab initio wave function theory calculations, which allow an explicit treatment of electron correlations associated with a large number of high-energy orbitals. We elucidate the atomic-shell-wise contributions to the superexchange coupling in the lanthanum cuprate, including individual effects of high-energy orbitals (Cu 4d, 5d, 4f, 5p) and cooperative effects between the core and these high-energy orbitals. Specifically, the prominent contributions from Cu 4d, 5d, 4f and 5p give rise to a rich collection of previously unexamined superexchange channels. We propose a p-d-f model to universally account for the contributions of high-energy orbitals at copper sites. Our calculations and physical rationalizations offer a more robust theoretical foundation for investigating cuprate-type high-temperature superconductors.
http://arxiv.org/abs/2503.17597v1
Non-Hermitian non-Abelian topological transition in the S=1 electron spin system of a nitrogen vacancy centre in diamond
2025-03-22T00:54:28+00:00
Topological phases and transitions are of fundamental importance in physics, providing deep insight into the understanding of materials. Recently, non-Abelian topological transitions have been investigated in Hermitian systems, revealing important topological features. With non-Hermiticity introduced, non-Hermitian non-Abelian topological transitions bring about more intriguing topological features, yet they have not been experimentally explored. In this work, we report the observation of the non-Hermitian non-Abelian topological transition at the atomic scale utilizing a nitrogen-vacancy center in diamond. While well-established topological numbers fail to recognize this transition, we successfully characterized such a transition with the measurement of the complex eigenvalue braids. We obtained the braid invariants from the measured relative phases between eigenvalues. The observed change in braid invariants provides a clear signature of the non-Abelian topological transition. Furthermore, we experimentally revealed an intriguing consequence of this transition, which is the creation of a third-order exceptional point through the collision of two second-order exceptional points with opposite charges. Our experimental findings shed light on the abundant non-Abelian topological phenomena involving non-Hermiticity, and provide insights into manipulating the spectral topology in atomic scale systems to achieve exotic functionalities arising from non-Abelian band braiding.
http://arxiv.org/abs/2503.17598v3
Coarse-Grained Games: A Framework for Bounded Perception in Game Theory
2025-03-22T00:59:22+00:00
In everyday life, we frequently make coarse-grained judgments. When we say that Olivia and Noah excel in mathematics, we disregard the specific differences in their mathematical abilities. Similarly, when we claim that a particular automobile manufacturer produces high-quality cars, we overlook the minor variations among individual vehicles. These coarse-grained assessments are distinct from erroneous or deceptive judgments, such as those resulting from student cheating or false advertising by corporations. Despite the prevalence of such judgments, little attention has been given to their underlying mathematical structure. In this paper, we introduce the concept of coarse-graining into game theory, analyzing games where players may perceive different payoffs as identical while preserving the underlying order structure. We call it a Coarse-Grained Game (CGG). This framework allows us to examine the rational inference processes that arise when players equate distinct micro-level payoffs at a macro level, and to explore how Nash equilibria are preserved or altered as a result. Our key findings suggest that CGGs possess several desirable properties that make them suitable for modeling phenomena in the social sciences. This paper demonstrates two such applications: first, in cases of overly minor product updates, consumers may encounter an equilibrium selection problem, resulting in market behavior that is not driven by objective quality differences; second, the lemon market can be analyzed not only through objective information asymmetry but also through asymmetries in perceptual resolution or recognition ability.
http://arxiv.org/abs/2503.17599v1
GPBench: A Comprehensive and Fine-Grained Benchmark for Evaluating Large Language Models as General Practitioners
2025-03-22T01:02:44+00:00
General practitioners (GPs) serve as the cornerstone of primary healthcare systems by providing continuous and comprehensive medical services. However, due to the community-oriented nature of their practice, uneven training, and resource gaps, the clinical proficiency among GPs can vary significantly across regions and healthcare settings. Currently, Large Language Models (LLMs) have demonstrated great potential in clinical and medical applications, making them a promising tool for supporting general practice. However, most existing benchmarks and evaluation frameworks focus on exam-style assessments, typically multiple-choice questions, and lack comprehensive assessment sets that accurately mirror the real-world scenarios encountered by GPs. To evaluate how effectively LLMs can make decisions in the daily work of GPs, we designed GPBench, which consists of both test questions from clinical practice and a novel evaluation framework. The test set includes multiple-choice questions that assess fundamental knowledge of general practice, as well as realistic, scenario-based problems. All questions are meticulously annotated by experts, incorporating rich fine-grained information related to clinical management. The proposed LLM evaluation framework is based on the competency model for general practice, providing a comprehensive methodology for assessing LLM performance in real-world settings. As the first large-model evaluation set targeting GP decision-making scenarios, GPBench allows us to evaluate current mainstream LLMs. Expert assessment and evaluation reveal that in areas such as disease staging, complication recognition, treatment detail, and medication usage, these models exhibit at least ten major shortcomings. Overall, existing LLMs are not yet suitable for independent use in real-world GP working scenarios without human oversight.
http://arxiv.org/abs/2503.17600v2
Imaging Intravoxel Vessel Size Distribution in the Brain Using Susceptibility Contrast Enhanced MRI
2025-03-22T01:05:28+00:00
Vascular remodelling is inherent to the pathogenesis of many diseases including cancer, neurodegeneration, fibrosis, hypertension, and diabetes. In this paper, a new susceptibility-contrast-based MRI approach is established to non-invasively image intravoxel vessel size distribution (VSD), enabling a more comprehensive and quantitative assessment of vascular remodelling. The approach utilizes high-resolution light-sheet fluorescence microscopy images of rodent brain vasculature, simulates gradient echo sampling of free induction decay and spin echo (GESFIDE) MRI signals for the three-dimensional vascular networks, and trains a deep learning model to predict cerebral blood volume (CBV) and VSD from GESFIDE signals. The results from ex vivo experiments demonstrated strong correlation (r = 0.96) between the true and predicted CBV. High similarity between true and predicted VSDs was observed (mean Bhattacharya Coefficient = 0.92). With further in vivo validation, intravoxel VSD imaging could become a transformative preclinical and clinical tool for interrogating disease- and treatment-induced vascular remodelling.
http://arxiv.org/abs/2503.17601v1
Wideband Cognitive Radio for Joint Communication and Sensing: Optimization of Subcarrier Allocation and beamforming
2025-03-22T01:14:11+00:00
As data traffic grows, wireless systems shift to higher frequency bands (6 GHz and above), where radar systems also operate. This coexistence demands effective interference management and efficient wideband utilization. Cognitive Radio (CR) offers a solution but remains limited to single-node or narrowband systems. This paper introduces a generalized wideband CR-enabled communication and sensing system with multiple users and targets. We propose a communication and sensing sub-carrier allocation framework, followed by transmit beamforming for the primary communication BS and sensing signal design for the secondary radar BS. The goal is to maximize the communication sum rate while ensuring sensing requirements, minimizing interference, and adhering to power constraints. To solve the resulting non-convex problem, we develop a manifold optimization algorithm for communication-only sub-carriers and an alternating optimization approach using the generalized Rayleigh quotient and semidefinite relaxation for communication-sensing sub-carriers. Compared to a non-cooperative benchmark, the proposed system achieves a 10% gain in communication sum rate and a 32% gain in sensing sum rate with 12 BS antennas.
http://arxiv.org/abs/2503.17602v1
Multiport Support for Vortex OpenGPU Memory Hierarchy
2025-03-22T01:16:24+00:00
Modern-day applications have grown in size and require more computational power. The rise of machine learning and AI has increased the need for parallel computation, which in turn has increased the need for GPGPUs. The SIMT architecture of GPGPUs has met this demand by increasing the number of threads and cores per GPU, raising the throughput of these processors to match the demands of applications. However, this created a larger demand on memory, making memory bandwidth a bottleneck. The introduction of High-Bandwidth Memory (HBM) with its increased number of memory ports offers a potential solution for the GPU to exploit its memory parallelism to increase the memory bandwidth. However, effectively leveraging HBM's memory parallelism to maximize bandwidth presents a unique and complex challenge for GPU architectures: how to distribute those ports among the streaming multiprocessors in the GPGPU. In this work, we extend the Vortex OpenGPU microarchitecture to incorporate a multiport memory hierarchy, spanning from the L1 cache to the last-level cache (LLC). In addition, we propose various arbitration strategies to optimize memory transfers across the cache hierarchy. The results show that an increase in memory ports increases IPC, achieving an average speedup of 2.34x with 8 memory ports in the tested configuration while showing relatively small area overhead.
http://arxiv.org/abs/2503.17603v1
A Generative Caching System for Large Language Models
2025-03-22T01:17:56+00:00
Caching has the potential to be of significant benefit for accessing large language models (LLMs) due to their high latencies which typically range from a small number of seconds to well over a minute. Furthermore, many LLMs charge money for queries; caching thus has a clear monetary benefit. This paper presents a new caching system for improving user experiences with LLMs. In addition to reducing both latencies and monetary costs for accessing LLMs, our system also provides important features that go beyond the performance benefits typically associated with caches. A key feature we provide is generative caching, wherein multiple cached responses can be synthesized to provide answers to queries which have never been seen before. Our generative caches function as repositories of valuable information which can be mined and analyzed. We also improve upon past semantic caching techniques by tailoring the caching algorithms to optimally balance cost and latency reduction with the quality of responses provided. Performance tests indicate that our caches are considerably faster than GPTcache.
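The paper's caching algorithms are not detailed in the abstract; the sketch below only illustrates the generic semantic-cache lookup that such systems extend: store (embedding, response) pairs and answer from the cache when a new query's embedding is close enough to a stored one. The `embed` callable and the 0.9 cosine-similarity threshold are assumptions for the example, not parts of the described system.

```python
import numpy as np

class SemanticCache:
    """Minimal embedding-based cache: a hit occurs when a stored query is similar enough."""

    def __init__(self, embed, threshold=0.9):
        self.embed = embed            # callable: str -> 1-D numpy array (supplied by the caller)
        self.threshold = threshold    # minimum cosine similarity for a cache hit
        self.keys, self.values = [], []

    def get(self, query):
        if not self.keys:
            return None
        q = self.embed(query)
        mat = np.stack(self.keys)
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-12)
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= self.threshold else None

    def put(self, query, response):
        self.keys.append(self.embed(query))
        self.values.append(response)
```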
http://arxiv.org/abs/2503.17604v3
OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery
2025-03-22T01:18:59+00:00
Large Language Models (LLMs) have demonstrated remarkable potential in advancing scientific knowledge and addressing complex challenges. In this work, we introduce OmniScience, a specialized large reasoning model for general science, developed through three key components: (1) domain adaptive pretraining on a carefully curated corpus of scientific literature, (2) instruction tuning on a specialized dataset to guide the model in following domain-specific tasks, and (3) reasoning-based knowledge distillation through fine-tuning to significantly enhance its ability to generate contextually relevant and logically sound responses. We demonstrate the versatility of OmniScience by developing a battery agent that efficiently ranks molecules as potential electrolyte solvents or additives. Comprehensive evaluations reveal that OmniScience is competitive with state-of-the-art large reasoning models on the GPQA Diamond and domain-specific battery benchmarks, while outperforming all public reasoning and non-reasoning models with similar parameter counts. We further demonstrate via ablation experiments that domain adaptive pretraining and reasoning-based knowledge distillation are critical to attain our performance levels, across benchmarks.
http://arxiv.org/abs/2503.17605v1
Explainable identification of similarities between entities for discovery in large text
2025-03-22T01:20:43+00:00
With the availability of a virtually infinite number of text documents in digital format, automatic comparison of textual data is essential for extracting meaningful insights that are difficult to identify manually. Many existing tools, including AI and large language models, struggle to provide precise and explainable insights into textual similarities. In many cases they determine the similarity between documents as reflected by the text, rather than the similarities between the subjects being discussed in these documents. This study addresses these limitations by developing an n-gram analysis framework designed to compare documents automatically and uncover explainable similarities. A scoring formula assigns each n-gram a weight, where the weight is higher when the n-grams are more frequent in both documents, but is penalized when the n-grams are more frequent in the English language. Visualization tools like word clouds enhance the representation of these patterns, providing clearer insights. The findings demonstrate that this framework effectively uncovers similarities between text documents, offering explainable insights that are often difficult to identify manually. This non-parametric approach provides a deterministic solution for identifying similarities across various fields, including biographies, scientific literature, historical texts, and more. Code for the method is publicly available.
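The abstract states the shape of the scoring formula but not its exact form; the following is one plausible instantiation of the stated idea, with the weight growing with an n-gram's frequency in both documents and shrinking with its background frequency in English. The background-frequency table and the smoothing constant are assumptions of this sketch.

```python
from collections import Counter

def ngram_counts(text, n=2):
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def shared_ngram_weights(doc_a, doc_b, english_freq, n=2, smoothing=1e-6):
    """Weight n-grams shared by both documents: higher when frequent in both,
    penalized by their relative frequency in general English (english_freq is
    a dict from n-gram tuples to relative frequencies, supplied by the caller)."""
    a, b = ngram_counts(doc_a, n), ngram_counts(doc_b, n)
    weights = {
        gram: (a[gram] * b[gram]) / (english_freq.get(gram, 0.0) + smoothing)
        for gram in a.keys() & b.keys()
    }
    return dict(sorted(weights.items(), key=lambda kv: -kv[1]))
```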
http://arxiv.org/abs/2503.17606v1
Combining longitudinal cohort studies to examine cardiovascular risk factor trajectories across the adult lifespan
2025-03-22T01:21:13+00:00
We introduce a statistical framework for combining data from multiple large longitudinal cardiovascular cohorts to enable the study of long-term cardiovascular health starting in early adulthood. Using data from seven cohorts belonging to the Lifetime Risk Pooling Project (LRPP), we present a Bayesian hierarchical multivariate approach that jointly models multiple longitudinal risk factors over time and across cohorts. Because few cohorts in our project cover the entire adult lifespan, our strategy uses information from all risk factors to increase precision for each risk factor trajectory and borrows information across cohorts to fill in unobserved risk factors. We develop novel diagnostic testing and model validation methods to ensure that our model robustly captures and maintains critical relationships over time and across risk factors.
http://arxiv.org/abs/2503.17607v1
Phonon-mediated relaxation in nanomaterials from combining Density Functional Theory based non-adiabatic molecular dynamics with Kadanoff-Baym-Keldysh technique
2025-03-22T01:30:54+00:00
The Boltzmann transport equation (BE) is a potent approach to the dynamics of a photoexcited (nano)material. BE collision integrals for different relaxation channels can be systematically computed using the Kadanoff-Baym-Keldysh (KBK) formalism (also called NEGF) utilizing the Density Functional Theory (DFT) simulation output. However, an accurate description of phonon-mediated relaxation in a general class of (nano)materials that includes exciton effects is still an outstanding problem. The approach proposed here is based on the observation that the non-adiabatic couplings of the DFT-based non-adiabatic molecular dynamics (NAMD) play the role of a time-dependent external potential coupled to the electrons. This allows application of the Keldysh approach resulting in the exciton-phonon BE collision integral, which incorporates exciton wave functions and energies obtained from the Bethe-Salpeter equation. As an application, we augment the BE with radiative recombination and photon-mediated exciton-exciton transition terms and then use it to calculate the photoluminescence (PL) spectrum for several 1.5-$nm$ semiconductor chalcogenide nanocrystals, such as $Cd_{37}Pb_{31}Se_{68},~Cd_{31}Pb_{37}Se_{68},$ which are Janus-type, and for $Pb_{68}Se_{68}.$
http://arxiv.org/abs/2503.17608v1
Accelerating detector simulations with Celeritas: profiling and performance optimizations
2025-03-22T01:36:42+00:00
Celeritas is a GPU-optimized MC particle transport code designed to meet the growing computational demands of next-generation HEP experiments. It provides efficient simulation of EM physics processes in complex geometries with magnetic fields, detector hit scoring, and seamless integration into Geant4-driven applications to offload EM physics to GPUs. Recent efforts have focused on performance optimizations and expanding profiling capabilities. This paper presents some key advancements, including the integration of the Perfetto system profiling tool for detailed performance analysis and the development of track-sorting methods to improve computational efficiency.
http://arxiv.org/abs/2503.17609v2
Embers of Active Galactic Nuclei: Tidal Disruption Events and Quasiperiodic Eruptions
2025-03-22T01:42:55+00:00
Recent observations have confirmed the direct association between tidal disruption events (TDEs) and quasiperiodic eruptions (QPEs). In addition, TDE hosts and QPE hosts are statistically found to be similar in their morphological properties and in the strong overrepresentation of poststarburst galaxies. Particularly, both of them show an intriguing preference for extended emission line regions (EELRs), which are indicative of recently faded active galactic nuclei (AGNs). This further suggests that QPEs might be produced following TDEs involving supermassive black holes at a particular stage, when the AGN activity has recently ceased. Moreover, in the framework of the "QPEs = extreme mass ratio inspiral (EMRI) + accretion disk" model, a large fraction of QPE EMRIs are inferred to be quasi-circular from the QPE timing, indicating that they are wet EMRIs that were formed in the AGN disk during a previous AGN phase. Based on these facts, we propose a unified scenario that connects these three phenomena: AGN activities boost both the TDE rate and the formation rate of low-eccentricity EMRIs, consequently TDEs are preferentially found in recently faded AGNs instead of in ongoing AGNs due to selection effects, and QPEs are also preferentially found in recently faded AGNs where TDEs frequently feed a misaligned accretion disk to the EMRI.
http://arxiv.org/abs/2503.17610v1
Photoluminescent colour centres on a mainstream silicon photonic foundry platform
2025-03-22T01:58:12+00:00
The fabrication of silicon photonic components in commercial CMOS-compatible foundries has revolutionized the impact of silicon photonics on advancing communication, quantum computing and artificial intelligence, due to their benefits of mass production, high throughput, low cost, and high performance. The indirect bandgap of silicon introduces a fundamental challenge; thus, the mainstream silicon-on-insulator (SOI) platform does not have efficient light sources. Recently, luminescent colour centres in SOI have emerged as one promising approach for developing efficient on-chip classical and quantum light sources, although past work has relied on custom fabrication that is not foundry-compatible. In this work, we demonstrate W-centre photoluminescence (PL) on a mainstream silicon photonics platform through development of a straightforward back-end-of-line (BEOL) treatment. At an optimal implant energy of 7~MeV, we observed W-centre photoluminescence with a brightness comparable to prior in-house processes. We performed a series of experiments on Circular Bragg Grating (CBG) devices with varying pitches, duty cycles, and implant energies to confirm the PL emission from the encapsulated SOI device layer rather than the handle wafer. Our novel approach to fabricating silicon colour centres in commercial silicon photonic foundry processes opens up new opportunities for integrating classical and quantum light sources directly onto silicon photonic circuits, unlocking large-scale integration of advanced photonic architectures on chip.
http://arxiv.org/abs/2503.17611v1
On continuous polynomials of the Macías space
2025-03-22T02:06:11+00:00
Let $\mathbb{N}$ be the set of natural numbers. The Mac\'ias space $M(\mathbb{N})$ is the topological space $(\mathbb{N},\tau_M)$ where $\tau_M$ is generated by the collection of sets $\sigma_n := \{ m \in \mathbb{N} : \gcd(n, m) = 1 \}$. In this paper, we characterize the continuity of polynomials over $M(\mathbb{N})$ and prove that the only continuous polynomials are monomials.
http://arxiv.org/abs/2503.17612v1
Evolution of Photospheric Magnetic Field and Electric Currents during the X1.6 Flare in Active Region NOAA 12192
2025-03-22T02:13:19+00:00
The dynamics of magnetic fields in the Sun's active regions plays a key role in triggering solar eruptions. Studies have shown that changes in the photosphere's magnetic field can destabilize the large-scale structure of the corona, leading to explosive events such as flares and coronal mass ejections (CMEs). This paper delves into the magnetic field evolution associated with a powerful X1.6 class flare that erupted on October 22nd, 2014, from the flare-rich active region NOAA 12192. We utilized high-resolution vector magnetograms from the Helioseismic and Magnetic Imager (HMI) on NASA's Solar Dynamics Observatory (SDO) to track these changes. Our analysis reveals that a brightening, a precursor to the flare, began near the newly emerged, small-scale bipolar flux regions. During the X1.6 flare, the magnetic flux in both polarities displayed emergence and cancellation. The total current within the active region peaked during the flare. However, this is a non-CME event, and the ratio of direct to return current remained close to 1. The large flare in this active region occurred when the net current in both polarities attained the same sign. This implies that the Lorentz force, a consequence of the interaction between currents and magnetic fields, would have pushed the field lines together in this scenario. This reconnection of opposing magnetic fields is believed to be the driving force behind the major flare that occurred in this active region.
http://arxiv.org/abs/2503.17613v1
Coordinated Shirking in Technology Adoption
2025-03-22T02:15:24+00:00
This paper studies a model of technology adoption: a principal tries to induce a group of agents to exert costly effort to vet a new production technology before they choose whether to use it. The principal finds it too costly to simultaneously punish large groups of unproductive agents, so they shirk when coordination is possible. Widely applicable technology expands productive possibilities but also provides an opportunity for coordinated shirking, and can thus lead to widespread production failure. Furthermore, even agents who learn that they are using flawed technology may continue to do so. Applications include mortgage securitization in the financial crisis of 2008, and the adoption of generative artificial intelligence.
http://arxiv.org/abs/2503.17614v1
Cross section Measurements for $^{12}$C$(K^-, K^+Ξ^-)$ and $^{12}$C$(K^-, K^+ΛΛ)$ Reactions at 1.8 GeV$/c$
2025-03-22T02:20:23+00:00
We present a measurement of the production of $\Xi^-$ and $\Lambda\Lambda$ in the $^{12}$C$(K^-, K^+)$ reaction at an incident beam momentum of 1.8 GeV/$\mathit{c}$, based on high-statistics data from J-PARC E42. The cross section for the $^{12}$C$(K^-, K^+\Xi^-)$ reaction, compared to the inclusive $^{12}$C$(K^-, K^+)$ reaction cross section, indicates that the $\Xi^-$ escaping probability peaks at 65\% in the energy region of $E_\Xi=100$ to 150 MeV above the $\Xi^-$ emission threshold. A classical approach using the eikonal approximation shows that the total cross section for $\Xi^-$ inelastic scattering ranges between 43 mb and 23 mb in the $\Xi^-$ momentum range from 0.4 to 0.6 GeV/c. Furthermore, based on the relative cross section for the $^{12}$C$(K^-, K^+\Lambda\Lambda)$ reaction, the total cross section for $\Xi^-p\to\Lambda\Lambda$ is estimated in the same approach to vary between 2.6 mb and 1.1 mb in the momentum range of 0.40 to 0.65 GeV/c. Specifically, a cross section of 1.1 mb in the momentum range of 0.5 to 0.6 GeV/c imposes a constraint on the upper bound of the decay width of the $\Xi^-$ particle in infinite nuclear matter, revealing $\Gamma_\Xi \lesssim 0.7$ MeV.
http://arxiv.org/abs/2503.17615v1
Feature Selection Based on Reinforcement Learning and Hazard State Classification for Magnetic Adhesion Wall-Climbing Robots
2025-03-22T02:21:11+00:00
Magnetic adhesion tracked wall-climbing robots face potential risks of overturning during high-altitude operations, making their stability crucial for ensuring safety. This study presents a dynamic feature selection method based on Proximal Policy Optimization (PPO) reinforcement learning, combined with typical machine learning models, aimed at improving the classification accuracy of hazardous states under complex operating conditions. Firstly, this work innovatively employs a fiber rod-based MEMS attitude sensor to collect vibration data from the robot and extract high-dimensional feature vectors in both time and frequency domains. Then, a reinforcement learning model is used to dynamically select the optimal feature subset, reducing feature redundancy and enhancing classification accuracy. Finally, a CNN-LSTM deep learning model is employed for classification and recognition. Experimental results demonstrate that the proposed method significantly improves the robot's ability to assess hazardous states across various operational scenarios, providing reliable technical support for robotic safety monitoring.
http://arxiv.org/abs/2503.17616v1
Generalized Scattering Matrix Synthesis for Hybrid Systems with Multiple Scatterers and Antennas Using Independent Structure Simulations
2025-03-22T02:23:41+00:00
This paper presents a unified formulation for calculating the generalized scattering matrix (GS-matrix) of hybrid systems involving multiple scatterers and antennas. The GS-matrix of the entire system is synthesized through the scattering matrices and GS-matrices of each independent component, using the addition theorem of vector spherical wavefunctions and fully matrix-based operations. Since our formulation is applicable to general antenna-scatterer hybrid systems, previous formulas for multiple scattering and antenna arrays become special cases of our approach. This also establishes our formulation as a universal domain decomposition method for analyzing the electromagnetic performance of hybrid systems. We provide numerous numerical examples to comprehensively demonstrate the capabilities and compatibility of the proposed formulation, including its potential application in studying the effects of structural rotation.
http://arxiv.org/abs/2503.17617v1
Flocking Beyond One Species: Novel Phase Coexistence in a Generalized Two-Species Vicsek Model
2025-03-22T02:26:53+00:00
A hallmark in natural systems, self-organization often stems from very simple interaction rules between individual agents. While single-species self-propelled particle (SPP) systems are well understood, the behavior of binary mixtures with general alignment interactions remains largely unexplored with few scattered results hinting at the existence of a rich emergent phase behavior. Here, we investigate systematically a generalization of the two-species Vicsek model with reciprocal intra- and interspecies (anti)alignment couplings, uncovering a rich phenomenology of emergent states. Notably, we show that rather than destroying polar order, anti-aligning interactions can promote phase separation and the emergence of global polar order. In doing so, we uncover a novel phase separation mechanism. We further find these coexistence patterns can be generalized to multi-species systems with cyclic alignment interactions.
http://arxiv.org/abs/2503.17618v1
A Spherical Crank-Nicolson Integrator Based on the Exponential Map and the Spherical Linear Interpolation
2025-03-22T02:28:36+00:00
We propose implicit integrators for solving stiff differential equations on unit spheres. Our approach extends the standard backward Euler and Crank-Nicolson methods in Cartesian space by incorporating the geometric constraint inherent to the unit sphere without additional projection steps to enforce the unit length constraint on the solution. We construct these algorithms using the exponential map and spherical linear interpolation (SLERP) formula on the unit sphere. Specifically, we introduce a spherical backward Euler method, a projected backward Euler method, and a second-order symplectic spherical Crank-Nicolson method. While all methods require solving a system of nonlinear equations to advance the solution to the next time step, these nonlinear systems can be efficiently solved using Newton's iterations. We will present several numerical examples to demonstrate the effectiveness and convergence of these numerical schemes. These examples will illustrate the advantages of our proposed methods in accurately capturing the dynamics of stiff systems on unit spheres.
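For reference, the two standard ingredients named in the abstract, written for unit vectors $p, q$ on the sphere and a tangent vector $v$ with $v \cdot p = 0$:

$$\exp_p(v) = \cos(\|v\|)\,p + \sin(\|v\|)\,\frac{v}{\|v\|}, \qquad \mathrm{slerp}(p,q;t) = \frac{\sin\big((1-t)\theta\big)}{\sin\theta}\,p + \frac{\sin(t\theta)}{\sin\theta}\,q, \quad \theta = \arccos(p\cdot q).$$

These are the textbook formulas the proposed integrators build on; the paper's specific schemes are not reproduced here.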
http://arxiv.org/abs/2503.17619v1
The Birch and Swinnerton-Dyer conjecture implies Goldfeld's conjecture
2025-03-22T02:30:45+00:00
Given an elliptic curve E/Q, we show that 50% of the quadratic twists of E have $2^{\infty}$-Selmer corank 0 and 50% have $2^{\infty}$-Selmer corank 1. As one consequence, we prove that the Birch and Swinnerton-Dyer conjecture implies Goldfeld's conjecture. Previously, this result was known by work of the author for elliptic curves over Q satisfying certain technical conditions. As part of this work, we determine the distribution of 2-Selmer ranks in the quadratic twist family of E. In the cases where this distribution was not already known, it is distinct from the model for distributions of 2-Selmer groups constructed by Poonen and Rains.
http://arxiv.org/abs/2503.17620v2
A Case Study of Scalable Content Annotation Using Multi-LLM Consensus and Human Review
2025-03-22T02:32:09+00:00
Content annotation at scale remains challenging, requiring substantial human expertise and effort. This paper presents a case study in code documentation analysis, where we explore the balance between automation efficiency and annotation accuracy. We present MCHR (Multi-LLM Consensus with Human Review), a novel semi-automated framework that enhances annotation scalability through the systematic integration of multiple LLMs and targeted human review. Our framework introduces a structured consensus-building mechanism among LLMs and an adaptive review protocol that strategically engages human expertise. Through our case study, we demonstrate that MCHR reduces annotation time by 32% to 100% compared to manual annotation while maintaining high accuracy (85.5% to 98%) across different difficulty levels, from basic binary classification to challenging open-set scenarios.
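The consensus rule and review protocol of MCHR are not given in the abstract; the sketch below only shows the generic pattern such frameworks build on: collect one label per model, accept a sufficiently agreed-upon answer automatically, and route the rest to a human reviewer. The annotator callables and the two-thirds agreement threshold are illustrative assumptions.

```python
from collections import Counter

def consensus_annotate(item, annotators, human_review, agreement=2 / 3):
    """annotators: list of callables mapping an item to a label;
    human_review: fallback callable used when the models disagree too much."""
    labels = [annotate(item) for annotate in annotators]
    top_label, top_count = Counter(labels).most_common(1)[0]
    if top_count / len(labels) >= agreement:
        return top_label, "auto"                    # consensus reached: no human needed
    return human_review(item, labels), "human"      # escalate to targeted human review
```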
http://arxiv.org/abs/2503.17621v1
Higgs-portal vector dark matter at a low reheating temperature
2025-03-22T02:34:51+00:00
We study vector dark matter (DM) production with Higgs-portal type interactions in the scenarios with a low reheating temperature which can be realized by a prolonged decay of the inflaton after inflation. We take the reheating temperature to be large enough to match the observations in Standard Cosmology such as Big Bang Nucleosynthesis but small enough below the DM mass for the DM production. We analyze the impact of the model parameters including the extra gauge coupling and the reheating temperature on the DM relic density, collider bounds and DM direct and indirect detection experiments. Our results reveal a strong correlation between the DM mass ($M_{W_D}$) and the reheating temperature ($T_R$) with ratio of around $T_R/M_{W_D} \sim 0.1$ to obtain correct DM density for detectable interaction strength. The decay processes are generally subdominant for the DM production but they can be important when kinematically allowed and the DM mass is close to half of the Higgses mass. The DM production with DM masses below 100 GeV is driven primarily by the scatterings of the SM fermions and Higgses decay whereas the case with higher DM masses is achieved mainly due to the Higgses scatterings. The enhanced coupling for the strong freeze-in in our framework enables potential detection prospects in direct and indirect detections and collider experiments. The parameter space of the model has already been explored partly by the current direct detection experiments and it can be explored further by future experiments such as Darwin. On the other hand, the indirect detection experiments in the current and near future are not sensitive enough to test our model.
http://arxiv.org/abs/2503.17622v1
Infinite Horizon Mean-Field Linear-Quadratic Optimal Control Problems with Switching and Indefinite-Weighted Costs
2025-03-22T02:35:10+00:00
This paper is concerned with an infinite horizon stochastic linear quadratic (LQ, for short) optimal control problem with conditional mean-field terms in a switching environment. Different from [17], the cost functionals do not have positive-definite weights here. When the problems are merely finite, we construct a sequence of asymptotically optimal controls and derive their closed-loop representations. For the solvability, an equivalence result between the open-loop and closed-loop cases is established through algebraic Riccati equations and infinite horizon backward stochastic differential equations. It can be seen that the research in [17] with positive-definite weights is a special case of the current paper.
http://arxiv.org/abs/2503.17623v1
Unraveling Pedestrian Fatality Patterns: A Comparative Study with Explainable AI
2025-03-22T02:44:41+00:00
Road fatalities pose significant public safety and health challenges worldwide, with pedestrians being particularly vulnerable in vehicle-pedestrian crashes due to disparities in physical and performance characteristics. This study employs explainable artificial intelligence (XAI) to identify key factors contributing to pedestrian fatalities across the five U.S. states with the highest crash rates (2018-2022). It compares them to the five states with the lowest fatality rates. Using data from the Fatality Analysis Reporting System (FARS), the study applies machine learning techniques-including Decision Trees, Gradient Boosting Trees, Random Forests, and XGBoost-to predict contributing factors to pedestrian fatalities. To address data imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) is utilized, while SHapley Additive Explanations (SHAP) values enhance model interpretability. The results indicate that age, alcohol and drug use, location, and environmental conditions are significant predictors of pedestrian fatalities. The XGBoost model outperformed others, achieving a balanced accuracy of 98 %, accuracy of 90 %, precision of 92 %, recall of 90 %, and an F1 score of 91 %. Findings reveal that pedestrian fatalities are more common in mid-block locations and areas with poor visibility, with older adults and substance-impaired individuals at higher risk. These insights can inform policymakers and urban planners in implementing targeted safety measures, such as improved lighting, enhanced pedestrian infrastructure, and stricter traffic law enforcement, to reduce fatalities and improve public safety.
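A minimal sketch of the kind of pipeline the abstract describes (SMOTE oversampling, an XGBoost classifier, SHAP attributions), assuming the standard imbalanced-learn, xgboost, and shap packages; the feature matrix, hyperparameters, and train/test split below are placeholders rather than the study's settings.

```python
import shap
import xgboost as xgb
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

def fit_and_explain(X, y, random_state=42):
    """X: crash feature matrix, y: binary fatality labels (placeholder inputs)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, stratify=y, test_size=0.2, random_state=random_state)
    # Rebalance the minority class on the training split only.
    X_bal, y_bal = SMOTE(random_state=random_state).fit_resample(X_tr, y_tr)
    model = xgb.XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
    model.fit(X_bal, y_bal)
    # SHAP values give per-feature contributions for each held-out prediction.
    shap_values = shap.TreeExplainer(model).shap_values(X_te)
    return model, model.score(X_te, y_te), shap_values
```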
http://arxiv.org/abs/2503.17624v2
How Users Employ Workarounds in Software Forms
2025-03-22T02:46:50+00:00
Workarounds enable users to achieve goals despite system limitations but expose design flaws, reduce productivity, risk compromising data quality, and cause inconsistencies. This study investigates how users employ workarounds when the data they want to enter does not align with software form constraints. Through a descriptive user study, we analyzed how workarounds originate and impact system design and data integrity. Understanding workarounds is essential for software designers to identify unmet user needs.
http://arxiv.org/abs/2503.17625v1
AI-Based Screening for Depression and Social Anxiety Through Eye Tracking: An Exploratory Study
2025-03-22T02:53:02+00:00
Well-being is a dynamic construct that evolves over time and fluctuates within individuals, presenting challenges for accurate quantification. Reduced well-being is often linked to depression or anxiety disorders, which are characterised by biases in visual attention towards specific stimuli, such as human faces. This paper introduces a novel approach to AI-assisted screening of affective disorders by analysing visual attention scan paths using convolutional neural networks (CNNs). Data were collected from two studies examining (1) attentional tendencies in individuals diagnosed with major depression and (2) social anxiety. These data were processed using residual CNNs through images generated from eye-gaze patterns. Experimental results, obtained with ResNet architectures, demonstrated an average accuracy of 48% for a three-class system and 62% for a two-class system. Based on these exploratory findings, we propose that this method could be employed in rapid, ecological, and effective mental health screening systems to assess well-being through eye-tracking.
http://arxiv.org/abs/2503.17626v1
Transferable Latent-to-Latent Locomotion Policy for Efficient and Versatile Motion Control of Diverse Legged Robots
2025-03-22T03:01:25+00:00
Reinforcement learning (RL) has demonstrated remarkable capability in acquiring robot skills, but learning each new skill still requires substantial data collection for training. The pretrain-and-finetune paradigm offers a promising approach for efficiently adapting to new robot entities and tasks. Inspired by the idea that acquired knowledge can accelerate learning new tasks with the same robot and help a new robot master a trained task, we propose a latent training framework where a transferable latent-to-latent locomotion policy is pretrained alongside diverse task-specific observation encoders and action decoders. This policy in latent space processes encoded latent observations to generate latent actions to be decoded, with the potential to learn general abstract motion skills. To retain essential information for decision-making and control, we introduce a diffusion recovery module that minimizes information reconstruction loss during pretrain stage. During fine-tune stage, the pretrained latent-to-latent locomotion policy remains fixed, while only the lightweight task-specific encoder and decoder are optimized for efficient adaptation. Our method allows a robot to leverage its own prior experience across different tasks as well as the experience of other morphologically diverse robots to accelerate adaptation. We validate our approach through extensive simulations and real-world experiments, demonstrating that the pretrained latent-to-latent locomotion policy effectively generalizes to new robot entities and tasks with improved efficiency.
http://arxiv.org/abs/2503.17627v1
Automated Methods for Abundance Determination
2025-03-22T03:01:25+00:00
As the multiplexing power of spectroscopic instruments increases, so does the need for automated analysis. In practice, the bottleneck for speed is the calculation of model spectra to evaluate the likelihood of candidate parameters. This presentation gives an overview of the steps required for automating spectroscopic analyses, focusing on the speedups achievable by precomputing regular grids of synthetic spectra for on-the-fly interpolation, and a new technique based on precomputed irregular grids capable of tackling problems with much higher dimensionality, as in the case when we are interested in deriving the abundances of multiple elements. Accuracy, ease of use and portability will be discussed.
http://arxiv.org/abs/2503.17628v2
Electron transport in disordered insulating lattice under nonlinear electric field
2025-03-22T03:17:01+00:00
Transport in disordered systems often occurs via variable range hopping (VRH) in the dilute carrier density limit, where electrons hop between randomly distributed localized levels. We study nonequilibrium transport driven by a uniform DC electric field on a one-dimensional insulating tight-binding chain with on-site disorder, using a finite-lattice calculation and the coherent potential approximation. We develop a theory of electric-field-assisted variable range hopping as a mechanism for nonlinear transport in a disordered chain. Our finite-lattice calculations of the electron propagation distance and the electron mobility determine the range of the variable range hopping as $\Delta < W \lesssim 2\Delta$ in the gap $\Delta$. We further propose a nonlinear scaling of the conductivity with the electric field that is similar to Mott's variable range hopping in equilibrium. The nonlinear conductivity of an electronic lattice model follows the scaling law $\sigma(E) \propto \exp[-(E_0/E)^{\nu}]$ with the exponent $\nu = 1/3$ in one dimension for the VRH. We also discuss the experimental relevance of the temperature-dependent nonlinear current-voltage relation.
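For comparison, Mott's equilibrium variable range hopping law in $d$ dimensions, whose form the field-driven scaling above parallels (in one dimension the equilibrium exponent is $1/2$, versus the field-driven $\nu = 1/3$ reported here):

$$\sigma(T) \propto \exp\!\left[-\left(\frac{T_0}{T}\right)^{1/(d+1)}\right].$$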
http://arxiv.org/abs/2503.17629v1
Planning and Learning in Average Risk-aware MDPs
2025-03-22T03:18:09+00:00
For continuing tasks, average cost Markov decision processes have well-documented value and can be solved using efficient algorithms. However, this framework explicitly assumes that the agent is risk-neutral. In this work, we extend risk-neutral algorithms to accommodate the more general class of dynamic risk measures. Specifically, we propose a relative value iteration (RVI) algorithm for planning and design two model-free Q-learning algorithms, namely a generic algorithm based on the multi-level Monte Carlo (MLMC) method, and an off-policy algorithm dedicated to utility-based shortfall risk measures. Both the RVI and MLMC-based Q-learning algorithms are proven to converge to optimality. Numerical experiments validate our analysis, confirm empirically the convergence of the off-policy algorithm, and demonstrate that our approach enables the identification of policies that are finely tuned to the intricate risk-awareness of the agent that they serve.
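For context, the standard risk-neutral relative value iteration that the proposed algorithm generalizes, written here in generic notation (cost $c$, transition kernel $P$, reference state $s_{\mathrm{ref}}$) rather than the paper's:

$$h_{k+1}(s) = (\mathcal{T}h_k)(s) - (\mathcal{T}h_k)(s_{\mathrm{ref}}), \qquad (\mathcal{T}h)(s) = \min_{a}\Big[c(s,a) + \sum_{s'} P(s'\mid s,a)\,h(s')\Big],$$

where $(\mathcal{T}h_k)(s_{\mathrm{ref}})$ converges to the optimal average cost.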
http://arxiv.org/abs/2503.17630v1
Generating Realistic, Diverse, and Fault-Revealing Inputs with Latent Space Interpolation for Testing Deep Neural Networks
2025-03-22T03:19:55+00:00
Deep Neural Networks (DNNs) have been widely employed across various domains, including safety-critical systems, necessitating comprehensive testing to ensure their reliability. Although numerous DNN model testing methods have been proposed to generate adversarial samples that are capable of revealing faults, existing methods typically perturb samples in the input space and then mutate these based on feedback from the DNN model. These methods often result in test samples that are not realistic and reveal faults only with low probability. To address these limitations, we propose a black-box DNN test input generation method, ARGUS, to generate realistic, diverse, and fault-revealing test inputs. ARGUS first compresses samples into a continuous latent space and then perturbs the original samples by interpolating these with samples of different classes. Subsequently, we employ a vector quantizer and decoder to reconstruct adversarial samples back into the input space. Additionally, we employ discriminators both in the latent space and in the input space to ensure the realism of the generated samples. Evaluation of ARGUS in comparison with state-of-the-art black-box testing and white-box testing methods shows that ARGUS excels in generating realistic and diverse adversarial samples relative to the target dataset, and ARGUS successfully perturbs all original samples and achieves up to 4 times higher error rate than the best baseline method. Furthermore, using these adversarial samples for model retraining can improve model classification accuracy.
http://arxiv.org/abs/2503.17631v1
A numerical framework for studying asymptotic quantities
2025-03-22T03:24:02+00:00
In this contribution we present an overview of our work on the numerical simulation of the perturbation of a black hole space-time by incoming gravitational waves. The formulation we use is based on Friedrich's general conformal equations which have the unique property that they allow access to the asymptotic region of an asymptotically regular space-time. In our approach we set up an initial boundary value problem on a finite boundary, which cleanly separates the initial conditions, a static black hole, from the perturbation, an incoming gravitational wave specified by a spin-2 function on the time-like boundary. The main advantage of this approach is that the finite boundary expands fast enough to reach null-infinity where the asymptotic properties can be studied. This provides, for the first time, a direct relationship between finite initial and boundary data and asymptotic quantities within one simulation. We discuss the possibilities and limitations of this approach.
http://arxiv.org/abs/2503.17632v1
FairFlow: Mitigating Dataset Biases through Undecided Learning
2025-03-22T03:35:51+00:00
Language models are prone to dataset biases, known as shortcuts and spurious correlations in data, which often result in a performance drop on new data. We present a new debiasing framework called ``FairFlow'' that mitigates dataset biases by learning to be undecided in its predictions for data samples or representations associated with known or unknown biases. The framework introduces two key components: a suite of data and model perturbation operations that generate different biased views of input samples, and a contrastive objective that learns debiased and robust representations from the resulting biased views of samples. Experiments show that FairFlow outperforms existing debiasing methods, particularly against out-of-domain and hard test samples, without compromising the in-domain performance.
http://arxiv.org/abs/2503.17633v1
Enhancing Martian Terrain Recognition with Deep Constrained Clustering
2025-03-22T03:38:16+00:00
Martian terrain recognition is pivotal for advancing our understanding of topography, geomorphology, paleoclimate, and habitability. While deep clustering methods have shown promise in learning semantically homogeneous feature embeddings from Martian rover imagery, the natural variations in intensity, scale, and rotation pose significant challenges for accurate terrain classification. To address these limitations, we propose Deep Constrained Clustering with Metric Learning (DCCML), a novel algorithm that leverages multiple constraint types to guide the clustering process. DCCML incorporates soft must-link constraints derived from spatial and depth similarities between neighboring patches, alongside hard constraints from stereo camera pairs and temporally adjacent images. Experimental evaluation on the Curiosity rover dataset (with 150 clusters) demonstrates that DCCML increases homogeneous clusters by 16.7 percent while reducing the Davies-Bouldin Index from 3.86 to 1.82 and boosting retrieval accuracy from 86.71 percent to 89.86 percent. This improvement enables more precise classification of Martian geological features, advancing our capacity to analyze and understand the planet's landscape.
http://arxiv.org/abs/2503.17634v1
Mixed-gradients Distributed Filtered Reference Least Mean Square Algorithm -- A Robust Distributed Multichannel Active Noise Control Algorithm
2025-03-22T03:42:09+00:00
Distributed multichannel active noise control (DMCANC), which utilizes multiple individual processors to achieve a global noise reduction performance comparable to conventional centralized multichannel active noise control (MCANC), has become increasingly attractive due to its high computational efficiency. However, the majority of current DMCANC algorithms disregard the impact of crosstalk across nodes and impose the unrealistic assumption of an ideal network devoid of communication limitations. Therefore, this work presents a robust DMCANC algorithm that employs a compensating filter to mitigate the impact of crosstalk. The proposed solution enhances the DMCANC system's flexibility and security by utilizing local gradients instead of local control filters to convey enhanced information, resulting in a mixed-gradients distributed filtered reference least mean square (MGDFxLMS) algorithm. The performance investigation demonstrates that the proposed approach compares well with the centralized method. Furthermore, to address the issue of communication delay in the distributed network, a practical strategy that auto-shrinks the step size value in response to the delayed samples is implemented to improve the system's resilience. The numerical simulation results demonstrate the efficacy of the proposed auto-shrink step size MGDFxLMS (ASSS-MGDFxLMS) algorithm across various communication delays, highlighting its practical value.
http://arxiv.org/abs/2503.17635v1
Stochastic origin of primordial fluctuations in the Sky
2025-03-22T03:42:51+00:00
We provide a study of the effects of the Effective Field Theory (EFT) generalisation of stochastic inflation on the production of primordial black holes (PBHs) in a model-independent single-field context. We demonstrate how the scalar perturbations' Infra-Red (IR) contributions and the emerging Fokker-Planck equation driving the probability distribution characterise the Langevin equations for the ``soft" modes in the quasi-de Sitter background. Both the classical-drift and quantum-diffusion-dominated regimes undergo a specific analysis of the distribution function using the stochastic-$\delta N$ formalism, which helps us to evade a no-go theorem on the PBH mass. Using the EFT-induced alterations, we evaluate the local non-Gaussian parameters in the drift-dominated limit.
http://arxiv.org/abs/2503.17636v1
Random cluster models on random graphs
2025-03-22T03:46:39+00:00
On locally tree-like random graphs, we relate the random cluster model with external magnetic fields and $q\geq 2$ to Ising models with vertex-dependent external fields. The fact that one can formulate general random cluster models in terms of two-spin ferromagnetic Ising models is quite interesting in its own right. However, in the general setting, the external fields are both positive and negative, which is mathematically unexplored territory. Interestingly, due to the reformulation as a two-spin model, we can show that the Bethe partition function, which is believed to have the same pressure per particle, is always a {\em lower bound} on the graph pressure per particle. We further investigate special cases in which the external fields do always have the same sign. The first example is the Potts model with general external fields on random $d$-regular graphs. In this case, we show that the pressure per particle in the quenched setting agrees with that of the annealed setting, and verify \cite[Assumption 1.4]{BasDemSly23}. We show that there is a line of values for the external fields where the model displays a first-order phase transition. This completes the identification of the phase diagram of the Potts model on the random $d$-regular graph. As a second example, we consider the high external field and low temperature phases of the system on locally tree-like graphs with general degree distribution.
http://arxiv.org/abs/2503.17637v1
Asymptotic Behaviour of Solutions to the Fokker-Planck Equation: Naval Dynamics Under Stochastic Influence
2025-03-22T03:53:14+00:00
This study investigates the asymptotic dynamics of solutions to the Fokker-Planck-Kolmogorov (FPK) equation, with a specific focus on ship roll stability in dynamic sea conditions. Utilizing a fourth-order filter, we conduct a thorough analysis of the time evolution of the probability distributions for roll angles, roll speeds, and roll excitations. Our theoretical framework provides new insights into the long-term behavior of these systems, emphasizing the role of stochastic perturbations. Key findings reveal that the probability of capsizing remains constant over time, offering significant contributions to the stability assessment of maritime vessels under uncertain environmental conditions. This work paves the way for more robust models in maritime engineering and dynamic stability analysis.
http://arxiv.org/abs/2503.17638v1
Collective Wisdom: Policy Averaging with an Application to the Newsvendor Problem
2025-03-22T03:56:03+00:00
We propose a Policy Averaging Approach (PAA) that synthesizes the strengths of existing approaches to create more reliable, flexible and justifiable policies for stochastic optimization problems. An important component of the PAA is risk diversification to reduce the randomness of policies. A second component emulates model averaging from statistics. A third component involves using cross-validation to diversify and optimize weights among candidate policies. We demonstrate the use of the PAA for the newsvendor problem. For that problem, model-based approaches typically use specific and potentially unreliable assumptions of either independently and identically distributed (i.i.d.) demand or feature-dependent demand with covariates or autoregressive functions. Data-driven approaches, including sample averaging and the use of functions of covariates to set order quantities, typically suffer from overfitting and provide limited insights to justify recommended policies. By integrating concepts from statistics and finance, the PAA avoids these problems. We show, using theoretical analysis, a simulation study, and an empirical study, that the PAA outperforms all of those earlier approaches. The demonstrated benefits of the PAA include reduced expected cost, more stable performance, and improved insights to justify recommendations. Extensions to consider tail risk and the use of stratified sampling are discussed. Beyond the newsvendor problem, the PAA is applicable to a wide variety of decision-making problems under uncertainty.
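The weighting idea described above can be sketched in a few lines; the code below is an illustrative guess at the flavour of the approach (inverse cross-validated-cost weights over hypothetical candidate policies), not the paper's exact procedure or cost parameters.

```python
import numpy as np

def newsvendor_cost(q, demand, cu=4.0, co=1.0):
    # Classical newsvendor loss: underage cost cu, overage cost co (assumed values).
    return np.mean(cu * np.maximum(demand - q, 0) + co * np.maximum(q - demand, 0))

def policy_averaging(demand, policies, n_folds=5, cu=4.0, co=1.0, seed=0):
    # policies: functions mapping a training demand sample to an order quantity.
    folds = np.array_split(np.random.default_rng(seed).permutation(len(demand)), n_folds)
    cv_cost = np.zeros(len(policies))
    for k in range(n_folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        for i, pol in enumerate(policies):
            cv_cost[i] += newsvendor_cost(pol(demand[train]), demand[folds[k]], cu, co)
    w = 1.0 / (cv_cost + 1e-12)          # simple inverse-cost weighting (an assumption)
    w /= w.sum()
    return sum(wi * pol(demand) for wi, pol in zip(w, policies)), w

# Two hypothetical candidates: the empirical critical-ratio quantile and the sample mean.
quantile_policy = lambda d: float(np.quantile(d, 4.0 / (4.0 + 1.0)))
mean_policy = lambda d: float(np.mean(d))
```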
http://arxiv.org/abs/2503.17639v1
Kintsugi-Inspired Design: Communicatively Reconstructing Identities Online After Trauma
2025-03-22T03:57:49+00:00
Trauma can disrupt one's sense of self and mental well-being, leading survivors to reconstruct their identities in online communities. Drawing from 30 in-depth interviews, we present a sociotechnical process model that illustrates the mechanisms of online identity reconstruction and the pathways to integration. We introduce the concept of fractured identities, reflecting the enduring impact of trauma on one's self-concept.
http://arxiv.org/abs/2503.17640v1
On the Hopf-Cole Transform for Control-affine Schrödinger Bridge
2025-03-22T04:08:10+00:00
The purpose of this note is to clarify the importance of the relation $\boldsymbol{gg}^{\top}\propto \boldsymbol{\sigma\sigma}^{\top}$ in solving control-affine Schr\"{o}dinger bridge problems via the Hopf-Cole transform, where $\boldsymbol{g},\boldsymbol{\sigma}$ are the control and noise coefficients, respectively. We show that the Hopf-Cole transform applied to the conditions of optimality for generic control-affine Schr\"{o}dinger bridge problems, i.e., without the assumption $\boldsymbol{gg}^{\top}\propto\boldsymbol{\sigma\sigma}^{\top}$, gives a pair of forward-backward PDEs that are neither linear nor equation-level decoupled. We explain how the resulting PDEs can be interpreted as nonlinear forward-backward advection-diffusion-reaction equations, where the nonlinearity stems from additional drift and reaction terms involving the gradient of the log-likelihood, a.k.a. the score. These additional drift and reaction terms vanish when $\boldsymbol{gg}^{\top}\propto\boldsymbol{\sigma\sigma}^{\top}$, and the resulting boundary-coupled system of linear PDEs can then be solved by dynamic Sinkhorn recursions. A key takeaway of our work is that the numerical solution of the generic control-affine Schr\"{o}dinger bridge requires further algorithmic development, possibly generalizing the dynamic Sinkhorn recursion or otherwise.
http://arxiv.org/abs/2503.17641v1
InstructVEdit: A Holistic Approach for Instructional Video Editing
2025-03-22T04:12:20+00:00
Video editing according to instructions is a highly challenging task due to the difficulty in collecting large-scale, high-quality edited video pair data. This scarcity not only limits the availability of training data but also hinders the systematic exploration of model architectures and training strategies. While prior work has improved specific aspects of video editing (e.g., synthesizing a video dataset using image editing techniques or decomposed video editing training), a holistic framework addressing the above challenges remains underexplored. In this study, we introduce InstructVEdit, a full-cycle instructional video editing approach that: (1) establishes a reliable dataset curation workflow to initialize training, (2) incorporates two model architectural improvements to enhance edit quality while preserving temporal consistency, and (3) proposes an iterative refinement strategy leveraging real-world data to enhance generalization and minimize train-test discrepancies. Extensive experiments show that InstructVEdit achieves state-of-the-art performance in instruction-based video editing, demonstrating robust adaptability to diverse real-world scenarios. Project page: https://o937-blip.github.io/InstructVEdit.
http://arxiv.org/abs/2503.17642v1
High-precise determination of critical exponents in holographic QCD
2025-03-22T04:14:57+00:00
The precise determination of critical exponents is crucial for understanding the properties of strongly interacting matter under extreme conditions. These exponents are fundamentally linked to the system's behavior near the critical end point (CEP), making precise localization of the CEP essential. However, precisely identifying the CEP within the framework of AdS/CFT correspondence presents considerable challenges. In this study, we explore critical phenomena and critical exponents within a holographic QCD model. We achieve high-precision calculations of the CEP's position and dynamically analyze the behavior of the critical exponent as it approaches the CEP using limiting methods. Our results indicate that linear fitting is only appropriate in regions very close to the CEP. Furthermore, we find that although the values of the critical exponents vary when approaching the CEP from different directions, they ultimately converge to a single fixed value, revealing a universal behavior among these exponents. Our research underscores the importance of precisely determining the CEP, selecting the fitting region, and considering the direction of the approach when predicting critical exponents. These findings establish a vital benchmark for identifying critical phenomena and their associated exponents.
http://arxiv.org/abs/2503.17643v1
Measurements of the branching fractions of $Ξ_{c}^{+}\to Σ^{+}K_{S}^{0}$, $Ξ_{c}^{+}\to Ξ^{0}π^{+}$, and $Ξ_{c}^{+}\to Ξ^{0}K^{+}$ at Belle and Belle II
2025-03-22T04:19:23+00:00
Using 983.0 $\rm{fb}^{-1}$ and 427.9 $\rm{fb}^{-1}$ data samples collected with the Belle and Belle II detectors at the KEKB and SuperKEKB asymmetric energy $e^+e^-$ colliders, respectively, we present studies of the Cabibbo-favored $\Xi_c^+$ decays ${\Xi_{c}^{+}\to \Sigma^{+}K_{S}^{0}}$ and $\Xi_{c}^{+}\to \Xi^{0}\pi^{+}$, and the singly Cabibbo-suppressed decay $\Xi_{c}^{+}\to \Xi^{0}K^{+}$. The ratios of branching fractions of ${\Xi_{c}^{+}\to \Sigma^{+}K_{S}^{0}}$ and $\Xi_{c}^{+}\to \Xi^{0}K^{+}$ relative to that of $\Xi_{c}^{+}\to\Xi^{-}\pi^{+}\pi^{+}$ are measured for the first time, while the ratio ${\cal B}(\Xi_{c}^{+}\to\Xi^{0}\pi^{+})/{\cal B}(\Xi_{c}^{+}\to\Xi^{-}\pi^{+}\pi^{+})$ is also determined and improved by an order of magnitude in precision. The measured branching fraction ratios are $\frac{\cal{B}(\Xi_{c}^{+} \to \Sigma^{+}K_{S}^{0})}{\cal{B}(\Xi_{c}^{+}\to \Xi^{-}\pi^{+}\pi^+)}= 0.067 \pm 0.007 \pm 0.003$, $\frac{\cal{B}(\Xi_c^{+} \to \Xi^{0}\pi^{+})}{\cal{B}(\Xi_{c}^{+}\to \Xi^{-}\pi^{+}\pi^+)} = 0.248 \pm 0.005 \pm 0.009$, $\frac{\cal{B}(\Xi_c^{+} \to \Xi^{0}K^{+})}{\cal{B}(\Xi_{c}^{+}\to \Xi^{-}\pi^{+}\pi^+)} = 0.017 \pm 0.003 \pm 0.001$. Additionally, the ratio ${\cal B}(\Xi_{c}^{+}\to\Xi^{0}K^{+})/{\cal B}(\Xi_{c}^{+}\to\Xi^{0}\pi^{+})$ is measured to be $0.068 \pm 0.010 \pm 0.004$. Here, the first and second uncertainties are statistical and systematic, respectively. Multiplying the ratios by the branching fraction of the normalization mode, ${\mathcal B}(\Xi_{c}^{+}\to\Xi^{-}\pi^{+}\pi^+)= (2.9\pm 1.3)\%$, we obtain the following absolute branching fractions: ${\cal B}(\Xi_{c}^{+}\to\Sigma^{+}K^{0}_{S}) = (0.194 \pm 0.021 \pm 0.009 \pm 0.087)\%$, ${\cal B}(\Xi_{c}^{+}\to\Xi^{0}\pi^{+}) = (0.719 \pm 0.014 \pm 0.024 \pm 0.322)\%$, ${\cal B}(\Xi_{c}^{+}\to\Xi^{0}K^{+}) = (0.049 \pm 0.007 \pm 0.002 \pm 0.022)\%$.
http://arxiv.org/abs/2503.17644v1
On The Sample Complexity Bounds In Bilevel Reinforcement Learning
2025-03-22T04:22:04+00:00
Bilevel reinforcement learning (BRL) has emerged as a powerful mathematical framework for studying generative AI alignment and related problems. While several principled algorithmic frameworks have been proposed, key theoretical foundations, particularly those related to sample complexity, remain underexplored. Understanding and deriving tight sample complexity bounds are crucial for bridging the gap between theory and practice, guiding the development of more efficient algorithms. In this work, we present the first sample complexity result for BRL, achieving a bound of $\epsilon^{-4}$. This result extends to standard bilevel optimization problems, providing an interesting theoretical contribution with practical implications. To address the computational challenges associated with hypergradient estimation in bilevel optimization, we develop a first-order Hessian-free algorithm that does not rely on costly hypergradient computations. By leveraging matrix-free techniques and constrained optimization methods, our approach ensures scalability and practicality. Our findings pave the way for improved methods in AI alignment and other fields reliant on bilevel optimization.
http://arxiv.org/abs/2503.17645v1
A Modular Dataset to Demonstrate LLM Abstraction Capability
2025-03-22T04:25:30+00:00
Large language models (LLMs) exhibit impressive capabilities but struggle with reasoning errors due to hallucinations and flawed logic. To investigate their internal representations of reasoning, we introduce ArrangementPuzzle, a novel puzzle dataset with structured solutions and automated stepwise correctness verification. We trained a classifier model on LLM activations on this dataset and found that it achieved over 80% accuracy in predicting reasoning correctness, implying that LLMs internally distinguish between correct and incorrect reasoning steps, with the strongest representations in middle-late Transformer layers. Further analysis reveals that LLMs encode abstract reasoning concepts within the middle activation layers of the transformer architecture, distinguishing logical from semantic equivalence. These findings provide insights into LLM reasoning mechanisms and contribute to improving AI reliability and interpretability, thereby offering the possibility to manipulate and refine LLM reasoning.
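As a rough illustration of the probing setup described here, the sketch below fits a linear probe on hidden-state activations to predict step correctness; the arrays `activations` and `labels` are assumed to have been extracted separately (e.g., from a chosen middle-late layer), and nothing about the ArrangementPuzzle pipeline itself is reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_correctness_probe(activations: np.ndarray, labels: np.ndarray):
    # activations: one row of hidden-state features per reasoning step;
    # labels: 1 if the step is correct, 0 otherwise (both assumed precomputed).
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, labels, test_size=0.2, random_state=0, stratify=labels
    )
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_tr, y_tr)
    return probe, probe.score(X_te, y_te)   # probe and its held-out accuracy
```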
http://arxiv.org/abs/2503.17646v1
Leveraging Audio Representations for Vibration-Based Crowd Monitoring in Stadiums
2025-03-22T04:27:30+00:00
Crowd monitoring in sports stadiums is important to enhance public safety and improve the audience experience. Existing approaches mainly rely on cameras and microphones, which can cause significant disturbances and often raise privacy concerns. In this paper, we sense floor vibration, which provides a less disruptive and more non-intrusive way of crowd sensing, to predict crowd behavior. However, since the vibration-based crowd monitoring approach is newly developed, one main challenge is the lack of training data due to sports stadiums being large public spaces with complex physical activities. In this paper, we present ViLA (Vibration Leverage Audio), a vibration-based method that reduces the dependency on labeled data by pre-training with unlabeled cross-modality data. ViLA is first pre-trained on audio data in an unsupervised manner and then fine-tuned with a minimal amount of in-domain vibration data. By leveraging publicly available audio datasets, ViLA learns the wave behaviors from audio and then adapts the representation to vibration, reducing the reliance on domain-specific vibration data. Our real-world experiments demonstrate that pre-training the vibration model using publicly available audio data (YouTube8M) achieved up to a 5.8x error reduction compared to the model without audio pre-training.
http://arxiv.org/abs/2503.17647v2
A note on the state occupancy distribution for Markov chains
2025-03-22T04:29:39+00:00
In a recent paper, Shah [arXiv:2502.03073] derived an explicit expression for the distribution of occupancy times for a two-state Markov chain, using a method based on enumerating sample paths. We consider here the more general problem of finding the distribution of occupancy times for countable-state Markov chains in discrete time. Our approach, which employs generating functions, leads to arguably simpler formulae for the occupancy distribution for the two-state chain.
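A convenient way to sanity-check closed-form occupancy formulae like the ones discussed here is brute-force dynamic programming on a small chain; the sketch below counts visits to a target state over the first $n$ observed states and makes no use of the paper's generating-function machinery.

```python
import numpy as np

def occupancy_distribution(P, init, target, n):
    """Distribution of the number of visits to `target` among the first n states
    X_0, ..., X_{n-1} of a finite Markov chain with transition matrix P."""
    m = P.shape[0]
    # dp[s, k] = P(current state is s and `target` has been visited k times so far)
    dp = np.zeros((m, n + 1))
    dp[init, 1 if init == target else 0] = 1.0
    for _ in range(n - 1):                     # n - 1 transitions after the initial state
        new = np.zeros_like(dp)
        for s in range(m):
            for k in range(n + 1):
                if dp[s, k] == 0.0:
                    continue
                for t in range(m):
                    new[t, k + 1 if t == target else k] += dp[s, k] * P[s, t]
        dp = new
    return dp.sum(axis=0)                      # index k holds P(occupancy time = k)
```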
http://arxiv.org/abs/2503.17648v1
Graph-based Change Point Detection for Functional Data
2025-03-22T04:43:02+00:00
Modeling functions that are sequentially observed as functional time series is becoming increasingly common. In such models, it is often crucial to ensure data homogeneity. We investigate the sensitivity of graph-based change point detection for changes in the distribution of functional data that demarcate homogeneous regions. Related test statistics and thresholds for detection are given. A key factor in the efficacy of such tests is the graph construction. Practical considerations for constructing a graph on arbitrary data are explored. Simulation experiments investigate tuning parameters for graph construction and evaluate the graph-based methods in comparison to existing functional methods. In addition to sensitivity to lower- and higher-order changes, robustness to the tuning parameter choices and practical recommendations are shown. Applications to multi-year pedestrian counts, high-frequency asset returns, and continuous electricity prices corroborate the simulation results.
http://arxiv.org/abs/2503.17649v1
Quantized Analog Beamforming Enabled Multi-task Federated Learning Over-the-air
2025-03-22T04:46:16+00:00
Over-the-air computation (AirComp) has recently emerged as a pivotal technique for communication-efficient federated learning (FL) in resource-constrained wireless networks. Though AirComp leverages the superposition property of multiple-access channels for computation, this same property inherently limits its ability to manage inter-task interference in multi-task computing. In this paper, we propose a quantized analog beamforming scheme at the receiver to enable simultaneous multi-task FL. Specifically, inspired by the favorable propagation and channel hardening properties of large-scale antenna arrays, a targeted analog beamforming method in closed form is proposed for statistical interference elimination. Analytical results reveal that the interference power vanishes at a rate of $\mathcal{O}\left(1/N_r\right)$ with the number of analog phase shifters, $N_r$, irrespective of their quantization precision. Numerical results demonstrate the effectiveness of the proposed analog beamforming method and show that the performance upper bound of ideal learning without errors can be achieved by increasing the number of low-precision analog phase shifters.
http://arxiv.org/abs/2503.17650v1
Visual Variational Autoencoder Prompt Tuning
2025-03-22T04:59:51+00:00
Parameter-efficient fine-tuning (PEFT) has emerged as a crucial approach for adapting large vision transformers to downstream tasks without the prohibitive computational costs of full fine-tuning. While existing visual prompt tuning (VPT) methods have made significant strides, they predominantly rely on static, domain-specific prompts that fail to capture the rich visual diversity within individual instances. This paper introduces V$^2$APT (Visual Variational Autoencoder Prompt Tuning), a novel framework that generates dynamic, input-dependent prompts using a variational autoencoder architecture. By learning a latent representation of image-specific features and decoding them into customized prompts, V$^2$APT adapts to the unique visual characteristics of each input. Extensive experiments on FGVC, HTA, and VTAB-1k benchmarks demonstrate that our approach consistently outperforms state-of-the-art PEFT methods. Notably, V$^2$APT achieves +3.2\% improvement over VPT-Deep on HTA, with an average performance gain of +2.0\% across all three datasets.
http://arxiv.org/abs/2503.17651v1
Collaborative Temporal Consistency Learning for Point-supervised Natural Language Video Localization
2025-03-22T05:04:12+00:00
Natural language video localization (NLVL) is a crucial task in video understanding that aims to localize the target moment in videos specified by a given language description. Recently, a point-supervised paradigm has been presented to address this task, requiring only a single annotated frame within the target moment rather than complete temporal boundaries. Compared with the fully-supervised paradigm, it offers a balance between localization accuracy and annotation cost. However, due to the absence of complete annotation, it is challenging to align the video content with language descriptions, consequently hindering accurate moment prediction. To address this problem, we propose a new COllaborative Temporal consistEncy Learning (COTEL) framework that leverages the synergy between saliency detection and moment localization to strengthen the video-language alignment. Specifically, we first design a frame- and a segment-level Temporal Consistency Learning (TCL) module that models semantic alignment across frame saliencies and sentence-moment pairs. Then, we design a cross-consistency guidance scheme, including a Frame-level Consistency Guidance (FCG) and a Segment-level Consistency Guidance (SCG), that enables the two temporal consistency learning paths to reinforce each other mutually. Further, we introduce a Hierarchical Contrastive Alignment Loss (HCAL) to comprehensively align the video and text query. Extensive experiments on two benchmarks demonstrate that our method performs favorably against SoTA approaches. We will release all the source codes.
http://arxiv.org/abs/2503.17652v1
Time- and Space-Optimal Silent Self-Stabilizing Exact Majority in Population Protocols
2025-03-22T05:04:44+00:00
We address the self-stabilizing exact majority problem in the population protocol model, introduced by Angluin, Aspnes, Diamadi, Fischer, and Peralta (2004). In this model, there are $n$ state machines, called agents, which form a network. At each time step, only two agents interact with each other, and update their states. In the self-stabilizing exact majority problem, each agent has a fixed opinion, $\mathtt{A}$ or $\mathtt{B}$, and stabilizes to a safe configuration in which all agents output the majority opinion from any initial configuration. In this paper, we show the impossibility of solving the self-stabilizing exact majority problem without knowledge of $n$ in any protocol. We propose a silent self-stabilizing exact majority protocol, which stabilizes within $O(n)$ parallel time in expectation and within $O(n \log n)$ parallel time with high probability, using $O(n)$ states, with knowledge of $n$. Here, a silent protocol means that, after stabilization, the state of each agent does not change. We establish lower bounds, proving that any silent protocol requires $\Omega(n)$ states, $\Omega(n)$ parallel time in expectation, and $\Omega(n \log n)$ parallel time with high probability to reach a safe configuration. Thus, the proposed protocol is time- and space-optimal.
http://arxiv.org/abs/2503.17653v1
Probing thermonuclear bursts and X-ray reflection features in Aql X-1 during 2024 outburst
2025-03-22T05:06:14+00:00
We report the broadband timing and spectral properties of the neutron star low-mass X-ray binary Aql X-1 during the 2024 outburst with the NICER, NuSTAR, and Swift observatories. We detected six thermonuclear X-ray bursts during the NICER and NuSTAR observations, with the observed X-ray burst profiles exhibiting a strong energy dependence. The time-resolved burst spectra indicate the presence of a soft excess during the burst, which can be modeled using a variable persistent emission method ($f_a$ method) or the relxillNS reflection model. We found that the reflection model can contribute $\sim$20% of the total emission as observed during the NICER burst. The reflection and blackbody component fluxes are strongly correlated as observed during a burst. The excess emission is possibly due to an enhanced mass accretion rate onto the neutron star caused by Poynting-Robertson drag, and a fraction of the burst emission may be reflected from the disk. The bursts did not show photospheric radius expansion during the peak. Moreover, we examined the burst-free accretion emission in the broadband range with NuSTAR, NICER, and Swift at two epochs of the outburst. The persistent emission showed an X-ray reflection feature, which can be well modeled with the relativistic reflection model relxillCp. The inner disk radius (R$_{in}$) is found to be nearly 22 and 10 times $\rm R_{g}$ for the two observations, respectively. Assuming that the inner disk is truncated at the magnetospheric radius, the magnetic field strength at the poles of the neutron star is estimated to be $(0.6-1.9) \times 10^9$ G.
http://arxiv.org/abs/2503.17654v1
LZMidi: Compression-Based Symbolic Music Generation
2025-03-22T05:14:17+00:00
Recent advances in symbolic music generation primarily rely on deep learning models such as Transformers, GANs, and diffusion models. While these approaches achieve high-quality results, they require substantial computational resources, limiting their scalability. We introduce LZMidi, a lightweight symbolic music generation framework based on a Lempel-Ziv (LZ78)-induced sequential probability assignment (SPA). By leveraging the discrete and sequential structure of MIDI data, our approach enables efficient music generation on standard CPUs with minimal training and inference costs. Theoretically, we establish universal convergence guarantees for our approach, underscoring its reliability and robustness. Compared to state-of-the-art diffusion models, LZMidi achieves competitive Frechet Audio Distance (FAD), Wasserstein Distance (WD), and Kullback-Leibler (KL) scores, while significantly reducing computational overhead - up to 30x faster training and 300x faster generation. Our results position LZMidi as a significant advancement in compression-based learning, highlighting how universal compression techniques can efficiently model and generate structured sequential data, such as symbolic music, with practical scalability and theoretical rigor.
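To convey what an LZ78-induced sequential probability assignment looks like in practice, here is a compact sketch over a finite alphabet using a Krichevsky-Trofimov-style add-1/2 estimator at each parse-tree node; the exact estimator, tokenization, and sampling loop used by LZMidi are assumptions not taken from the abstract.

```python
from collections import defaultdict

class LZ78SPA:
    """Toy LZ78 sequential probability assignment over symbols 0..alphabet_size-1."""
    def __init__(self, alphabet_size):
        self.A = alphabet_size
        self.children = [dict()]                # parse-tree edges: node -> {symbol: child node}
        self.counts = [defaultdict(float)]      # per-node symbol counts
        self.node = 0                           # current node (root = 0)

    def prob(self, symbol):
        # Add-1/2 (KT-style) estimate of the next-symbol probability at the current node.
        c = self.counts[self.node]
        return (c[symbol] + 0.5) / (sum(c.values()) + 0.5 * self.A)

    def update(self, symbol):
        self.counts[self.node][symbol] += 1.0
        if symbol in self.children[self.node]:
            self.node = self.children[self.node][symbol]   # continue the current phrase
        else:
            # End of an LZ78 phrase: grow the tree and restart at the root.
            self.children[self.node][symbol] = len(self.children)
            self.children.append(dict())
            self.counts.append(defaultdict(float))
            self.node = 0
```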
http://arxiv.org/abs/2503.17655v1
Stabilizer codes of less than two dimensions have constant distance
2025-03-22T05:30:08+00:00
The surface code is a two-dimensional stabiliser code with parameters $[[n,1,\Theta(\sqrt{n})]]$. To this day, no stabiliser code with growing distance is known to live in less than two dimensions. In this note we show that no such code can exist.
http://arxiv.org/abs/2503.17656v1
NaFM: Pre-training a Foundation Model for Small-Molecule Natural Products
2025-03-22T05:32:03+00:00
Natural products, as metabolites from microorganisms, animals, or plants, exhibit diverse biological activities, making them crucial for drug discovery. Existing deep learning methods for natural products research primarily rely on supervised learning approaches designed for specific downstream tasks. However, such a one-model-for-a-task paradigm often lacks generalizability and leaves significant room for performance improvement. Additionally, existing molecular characterization methods are not well-suited for the unique tasks associated with natural products. To address these limitations, we have pre-trained a foundation model for natural products based on their unique properties. Our approach employs a novel pretraining strategy that is especially tailored to natural products. By incorporating contrastive learning and masked graph learning objectives, we emphasize evolutionary information from molecular scaffolds while capturing side-chain information. Our framework achieves state-of-the-art (SOTA) results in various downstream tasks related to natural product mining and drug discovery. We first compare taxonomy classification with synthesized molecule-focused baselines to demonstrate that current models are inadequate for understanding natural synthesis. Furthermore, by diving into a fine-grained analysis at both the gene and microbial levels, NaFM demonstrates the ability to capture evolutionary information. Finally, our method is evaluated on virtual screening, illustrating informative natural product representations that can lead to more effective identification of potential drug candidates.
http://arxiv.org/abs/2503.17657v1
Efficient Diffusion Training through Parallelization with Truncated Karhunen-Loève Expansion
2025-03-22T05:34:02+00:00
Diffusion denoising models have become a popular approach for image generation, but they often suffer from slow convergence during training. In this paper, we identify that this slow convergence is partly due to the complexity of the Brownian motion driving the forward-time process. To address this, we represent the Brownian motion using the Karhunen-Lo\`eve expansion, truncating it to a limited number of eigenfunctions. We propose a novel ordinary differential equation with augmented random initials, termed KL diffusion, as a new forward-time process for training and sampling. By developing an appropriate denoising loss function, we facilitate the integration of our KL diffusion into existing denoising-based models. Using the widely adopted DDIM framework as our baseline ensures a fair comparison, as our modifications focus solely on the forward process and loss function, leaving the network architecture and sampling methods unchanged. Our method significantly outperforms baseline diffusion models, reaching the baseline's best FID score twice as fast and ultimately yielding much lower FID scores. Notably, our approach allows for highly parallelized computation, requires no additional learnable parameters, and can be flexibly integrated into existing diffusion methods. The code will be made publicly available.
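The truncation mentioned here refers to the classical Karhunen-Loève expansion of Brownian motion on $[0,1]$; the snippet below samples an approximate path from the first $K$ modes, and says nothing about how such paths are wired into the diffusion training objective.

```python
import numpy as np

def truncated_kl_brownian(t, K, rng=None):
    """Sample W(t) ~ sum_{k=1..K} Z_k * sqrt(2) * sin((k-1/2)*pi*t) / ((k-1/2)*pi)
    with Z_k i.i.d. standard normal, on a grid t in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(K)                                  # random KL coefficients
    freqs = (np.arange(1, K + 1) - 0.5) * np.pi
    basis = np.sqrt(2.0) * np.sin(np.outer(t, freqs)) / freqs   # scaled eigenfunctions
    return basis @ z                                            # one approximate Brownian path

# Example: an 8-mode approximation on a uniform grid.
path = truncated_kl_brownian(np.linspace(0.0, 1.0, 256), K=8)
```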
http://arxiv.org/abs/2503.17658v1
Sentinel: Multi-Patch Transformer with Temporal and Channel Attention for Time Series Forecasting
2025-03-22T06:01:50+00:00
Transformer-based time series forecasting has recently gained strong interest due to the ability of transformers to model sequential data. Most of the state-of-the-art architectures exploit either temporal or inter-channel dependencies, limiting their effectiveness in multivariate time-series forecasting where both types of dependencies are crucial. We propose Sentinel, a full transformer-based architecture composed of an encoder able to extract contextual information from the channel dimension, and a decoder designed to capture causal relations and dependencies across the temporal dimension. Additionally, we introduce a multi-patch attention mechanism, which leverages the patching process to structure the input sequence in a way that can be naturally integrated into the transformer architecture, replacing the multi-head splitting process. Extensive experiments on standard benchmarks demonstrate that Sentinel, because of its ability to "monitor" both the temporal and the inter-channel dimension, achieves better or comparable performance with respect to state-of-the-art approaches.
http://arxiv.org/abs/2503.17659v1
Why the DESI Results Should Not Be A Surprise
2025-03-22T06:03:34+00:00
The recent DESI results provide increasing evidence that the density of dark energy is time-dependent. I will recall why, from the point of view of fundamental theory, this result should not be surprising.
http://arxiv.org/abs/2503.17660v2
OMR-Diffusion:Optimizing Multi-Round Enhanced Training in Diffusion Models for Improved Intent Understanding
2025-03-22T06:10:57+00:00
Generative AI has significantly advanced text-driven image generation, but it still faces challenges in producing outputs that consistently align with evolving user preferences and intents, particularly in multi-turn dialogue scenarios. In this research, we present a Visual Co-Adaptation (VCA) framework that incorporates human-in-the-loop feedback, utilizing a well-trained reward model specifically designed to closely align with human preferences. Using a diverse multi-turn dialogue dataset, the framework applies multiple reward functions (such as diversity, consistency, and preference feedback) to refine the diffusion model through LoRA, effectively optimizing image generation based on user input. We also constructed multi-round dialogue datasets with prompts and image pairs that fit user intent well. Experiments show the model achieves 508 wins in human evaluation, outperforming DALL-E 3 (463 wins) and others. It also achieves 3.4 rounds in dialogue efficiency (vs. 13.7 for DALL-E 3) and excels in metrics like LPIPS (0.15) and BLIP (0.59). Various experiments demonstrate the effectiveness of the proposed method over state-of-the-art baselines, with significant improvements in image consistency and alignment with user intent.
http://arxiv.org/abs/2503.17661v2
A Qualitative Study of User Perception of M365 AI Copilot
2025-03-22T06:11:10+00:00
Adopting AI copilots in professional workflows presents opportunities for enhanced productivity, efficiency, and decision making. In this paper, we present results from a six month trial of M365 Copilot conducted at our organisation in 2024. A qualitative interview study was carried out with 27 participants. The study explored user perceptions of M365 Copilot's effectiveness, productivity impact, evolving expectations, ethical concerns, and overall satisfaction. Initial enthusiasm for the tool was met with mixed post trial experiences. While some users found M365 Copilot beneficial for tasks such as email coaching, meeting summaries, and content retrieval, others reported unmet expectations in areas requiring deeper contextual understanding, reasoning, and integration with existing workflows. Ethical concerns were a recurring theme, with users highlighting issues related to data privacy, transparency, and AI bias. While M365 Copilot demonstrated value in specific operational areas, its broader impact remained constrained by usability limitations and the need for human oversight to validate AI generated outputs.
http://arxiv.org/abs/2503.17662v2
Enhancing Persona Consistency for LLMs' Role-Playing using Persona-Aware Contrastive Learning
2025-03-22T06:12:34+00:00
In recent years, large language models (LLMs) have achieved breakthrough progress in many dialogue generation tasks. However, their lack of emotion and fine-grained role awareness further limits the models' ability to provide personalized and diverse interactions. Current methods face high costs in collecting high-quality annotated data for scenarios such as role-playing, and traditional human alignment methods are difficult to deploy due to the inherent diversity of model behavior in role-playing scenarios. Inspired by the alignment of models for safety behaviors through RLHF (Reinforcement Learning from Human Feedback), in this paper, we revisit model role-playing behavior from the perspective of persona alignment and propose a novel annotation-free framework named \textbf{\underline{P}}ersona-Aware \textbf{\underline{C}}ontrastive \textbf{\underline{L}}earning (PCL) to align LLMs' behavior during role-playing, enhancing the model's role consistency. Specifically, we first design a role chain method to encourage the model to self-question based on the role characteristics and dialogue context to adjust personality consistency. Then, we further enhance the model's role-playing strategy through iterative contrastive learning between the use of role characteristics and not. Experiments on both black-box and white-box LLMs show that LLMs equipped with PCL significantly outperform vanilla LLMs under automatic evaluation methods (CharEval \& GPT-4) and human expert evaluation.
http://arxiv.org/abs/2503.17663v1
Lam-Tung relation breaking effects and weak dipole moments at lepton colliders
2025-03-22T06:16:03+00:00
The breaking of the Lam-Tung relation in the Drell-Yan process at the LHC exhibits a long-standing tension with the Standard Model (SM) prediction at $\mathcal{O}(\alpha_s^3)$ accuracy. This tension could be explained by weak dipole interactions of leptons and quarks, associated with the $Z$-boson within the framework of the Standard Model Effective Field Theory (SMEFT). In this paper, we propose to cross-check these weak dipole interactions by measuring the violation effects of the Lam-Tung relation at future lepton colliders through the processes $e^+e^- \to Z\gamma \to \ell\bar{\ell}\gamma$ and $e^+e^- \to Z\gamma \to q\bar{q}\gamma$. By considering different decay modes of the $Z$-boson, these channels exhibit distinct sensitivities to various dipole operators, providing a way to disentangle their individual effects. Additionally, the high flavor-tagging efficiencies at lepton colliders could provide strong constraints on the dipole interactions of heavy quarks, such as $b$ and $c$ quarks, which are challenging to probe in the Drell-Yan process at the LHC due to the suppression of parton distribution functions.
http://arxiv.org/abs/2503.17664v1
CardioTabNet: A Novel Hybrid Transformer Model for Heart Disease Prediction using Tabular Medical Data
2025-03-22T06:17:08+00:00
The early detection and prediction of cardiovascular diseases are crucial for reducing the severe morbidity and mortality associated with these conditions worldwide. The multi-headed self-attention mechanism, widely used in natural language processing (NLP), is employed by Transformers to model feature interactions in feature spaces. However, the relationships between various features within biological systems remain ambiguous in these spaces. We address this issue with CardioTabNet, which exploits the strength of the tab transformer to extract a feature space that carries a strong understanding of clinical cardiovascular data, together with a feature ranking. As a result, downstream classical models show significantly improved performance. Our study utilizes an open-source heart disease prediction dataset with 1190 instances and 11 features. The 11 features are divided into numerical (age, resting blood pressure, cholesterol, maximum heart rate, old peak, weight, and fasting blood sugar) and categorical (resting ECG, exercise angina, and ST slope) features. The tab transformer was used to extract important features, which were then ranked using a random forest (RF) feature ranking algorithm. Ten machine-learning models were used to predict heart disease from the selected features. After extracting high-quality features, the top downstream model (a hyper-tuned ExtraTree classifier) achieved an average accuracy of 94.1% and an average Area Under the Curve (AUC) of 95.0%. Furthermore, a nomogram analysis was conducted to evaluate the model's effectiveness in cardiovascular risk assessment. A benchmarking study against state-of-the-art models was conducted to evaluate our transformer-driven framework.
http://arxiv.org/abs/2503.17665v1
Indication of the electron-to-proton mass ratio variation within the Galaxy
2025-03-22T06:18:52+00:00
Near (~100 pc) and far (~8.7 kpc) relative to the Galactic center, the molecular clouds SgrB2(N) and Orion-KL exhibit different values of the fundamental physical constant mu=m_e/m_p - the electron-to-proton mass ratio. Measured frequency difference between the emission lines of methanol (CH3OH), - J_K_u - J_K_l = 6_3 - 5_2 A+ 542000.981 MHz, 6_3 - 5_2 A- 542081.936 MHz, and 8_0 - 7_-1 E 543076.194 MHz, - observed with the space observatory Herschel toward SgrB2(N) and Orion-KL corresponds to (Sgr-Ori): Delta mu/mu = (-3.7 +/- 0.5)*10^(-7) (1 sigma C.L.). At the same time, comparison of the same methanol lines in Orion-KL with laboratory frequencies shows no significant changes in mu (Ori-lab): Delta mu/mu = (-0.5 +/- 0.6)*10^(-7), while a comparison between SgrB2(N) and laboratory lines indicates a lower value of mu near the Galactic center (Sgr-lab): Delta mu/mu = (-4.2 +/- 0.7)*10^(-7). The reduced value of mu in SgrB2(N) is not explained by known systematic effects and requires further investigation.
http://arxiv.org/abs/2503.17666v1
Multi-Modality Representation Learning for Antibody-Antigen Interactions Prediction
2025-03-22T06:23:51+00:00
While deep learning models play a crucial role in predicting antibody-antigen interactions (AAI), the scarcity of publicly available sequence-structure pairings constrains their generalization. Current AAI methods often focus on residue-level static details, overlooking fine-grained structural representations of antibodies and their inter-antibody similarities. To tackle this challenge, we introduce a multi-modality representation approach that integrates 3D structural and 1D sequence data to unravel intricate intra-antibody hierarchical relationships. By harnessing these representations, we present MuLAAIP, an AAI prediction framework that utilizes graph attention networks to illuminate graph-level structural features and normalized adaptive graph convolution networks to capture inter-antibody sequence associations. Furthermore, we have curated an AAI benchmark dataset comprising both structural and sequence information along with interaction labels. Through extensive experiments on this benchmark, our results demonstrate that MuLAAIP outperforms current state-of-the-art methods in terms of predictive performance. The implementation code and dataset are publicly available at https://github.com/trashTian/MuLAAIP for reproducibility.
http://arxiv.org/abs/2503.17667v1
DGAR: A Unified Domain Generalization Framework for RF-Enabled Human Activity Recognition
2025-03-22T06:27:30+00:00
Radio-frequency (RF)-based human activity recognition (HAR) is a non-intrusive and privacy-preserving technology with applications in smart homes, healthcare, and security systems. However, real-world deployments face challenges from domain shifts caused by user behavior, physical attributes, and environmental conditions, leading to performance degradation. To address this, we propose DGAR, a domain-generalized activity recognition framework that learns domain-invariant and domain-specific representations without requiring target domain data. DGAR leverages correlation alignment to reduce inter-domain discrepancies and integrates a squeeze-and-excitation (SE) block to enhance the extraction of salient spatial and temporal features from RF data. Extensive experiments on multiple public datasets, including HUST-HAR, Lab-LFM, and Office-LFM, validate DGAR's effectiveness, achieving F1-score improvements ranging from 2.09% to 5.81% over state-of-the-art methods. These results demonstrate DGAR's ability to address domain shift challenges, paving the way for robust, real-world HAR applications in diverse and dynamic scenarios.
http://arxiv.org/abs/2503.17668v1
3D Modeling: Camera Movement Estimation and path Correction for SFM Model using the Combination of Modified A-SIFT and Stereo System
2025-03-22T06:37:54+00:00
Creating accurate and efficient 3D models poses significant challenges, particularly in addressing large viewpoint variations, computational complexity, and alignment discrepancies. Efficient camera path generation can help resolve these issues. In this context, a modified version of the Affine Scale-Invariant Feature Transform (ASIFT) is proposed to extract more matching points with reduced computational overhead, ensuring an adequate number of inliers for precise camera rotation angle estimation. Additionally, a novel two-camera-based rotation correction model is introduced to mitigate small rotational errors, further enhancing accuracy. Furthermore, a stereo camera-based translation estimation and correction model is implemented to determine camera movement in 3D space by altering the Structure From Motion (SFM) model. Finally, the novel combination of ASIFT and two camera-based SFM models provides an accurate camera movement trajectory in 3D space. Experimental results show that the proposed camera movement approach achieves 99.9% accuracy compared to the actual camera movement path and outperforms state-of-the-art camera path estimation methods. By leveraging this accurate camera path, the system facilitates the creation of precise 3D models, making it a robust solution for applications requiring high fidelity and efficiency in 3D reconstruction.
http://arxiv.org/abs/2503.17669v2
TDRI: Two-Phase Dialogue Refinement and Co-Adaptation for Interactive Image Generation
2025-03-22T06:40:21+00:00
Although text-to-image generation technologies have made significant advancements, they still face challenges when dealing with ambiguous prompts and aligning outputs with user intent. Our proposed framework, TDRI (Two-Phase Dialogue Refinement and Co-Adaptation), addresses these issues by enhancing image generation through iterative user interaction. It consists of two phases: the Initial Generation Phase, which creates base images based on user prompts, and the Interactive Refinement Phase, which integrates user feedback through three key modules. The Dialogue-to-Prompt (D2P) module ensures that user feedback is effectively transformed into actionable prompts, which improves the alignment between user intent and model input. By evaluating generated outputs against user expectations, the Feedback-Reflection (FR) module identifies discrepancies and facilitates improvements. In an effort to ensure consistently high-quality results, the Adaptive Optimization (AO) module fine-tunes the generation process by balancing user preferences and maintaining prompt fidelity. Experimental results show that TDRI outperforms existing methods by achieving 33.6% human preference, compared to 6.2% for GPT-4 augmentation, and the highest CLIP and BLIP alignment scores (0.338 and 0.336, respectively). In iterative feedback tasks, user satisfaction increased to 88% after 8 rounds, with diminishing returns beyond 6 rounds. Furthermore, TDRI has been found to reduce the number of iterations and improve personalization in the creation of fashion products. TDRI exhibits a strong potential for a wide range of applications in the creative and industrial domains, as it streamlines the creative process and improves alignment with user preferences.
http://arxiv.org/abs/2503.17670v1
Do You "Trust" This Visualization? An Inventory to Measure Trust in Visualizations
2025-03-22T06:43:10+00:00
Trust plays a critical role in visual data communication and decision-making, yet existing visualization research employs varied trust measures, making it challenging to compare and synthesize findings across studies. In this work, we first took a bottom-up, data-driven approach to understand what visualization readers mean when they say they "trust" a visualization. We compiled and adapted a broad set of trust-related statements from existing inventories and collected responses on visualizations with varying degrees of trustworthiness. Through exploratory factor analysis, we derived an operational definition of trust in visualizations. Our findings indicate that people perceive a trustworthy visualization as one that presents credible information and is comprehensible and usable. Additionally, we found that general trust disposition influences how individuals assess visualization trustworthiness. Building on these insights, we developed a compact inventory consisting of statements that not only effectively represent each trust factor but also exhibit high item discrimination. We further validated our inventory through two trust games with real-world stakes, demonstrating that our measures reliably predict behavioral trust. Finally, we illustrate how this standardized inventory can be applied across diverse visualization research contexts. Utilizing our inventory, future research can examine how design choices, tasks, and domains influence trust, and how to foster appropriate trusting behavior in human-data interactions.
http://arxiv.org/abs/2503.17671v1
ComfyGPT: A Self-Optimizing Multi-Agent System for Comprehensive ComfyUI Workflow Generation
2025-03-22T06:48:50+00:00
ComfyUI provides a widely-adopted, workflow-based interface that enables users to customize various image generation tasks through an intuitive node-based architecture. However, the intricate connections between nodes and diverse modules often present a steep learning curve for users. In this paper, we introduce ComfyGPT, the first self-optimizing multi-agent system designed to generate ComfyUI workflows based on task descriptions automatically. ComfyGPT comprises four specialized agents: ReformatAgent, FlowAgent, RefineAgent, and ExecuteAgent. The core innovation of ComfyGPT lies in two key aspects. First, it focuses on generating individual node links rather than entire workflows, significantly improving generation precision. Second, we propose FlowAgent, an LLM-based workflow generation agent that uses both supervised fine-tuning (SFT) and reinforcement learning (RL) to improve workflow generation accuracy. Moreover, we introduce FlowDataset, a large-scale dataset containing 13,571 workflow-description pairs, and FlowBench, a comprehensive benchmark for evaluating workflow generation systems. We also propose four novel evaluation metrics: Format Validation (FV), Pass Accuracy (PA), Pass Instruct Alignment (PIA), and Pass Node Diversity (PND). Experimental results demonstrate that ComfyGPT significantly outperforms existing LLM-based methods in workflow generation.
http://arxiv.org/abs/2503.17672v1
A Temporal Modeling Framework for Video Pre-Training on Video Instance Segmentation
2025-03-22T07:01:25+00:00
Contemporary Video Instance Segmentation (VIS) methods typically adhere to a pre-train then fine-tune regime, where a segmentation model trained on images is fine-tuned on videos. However, the lack of temporal knowledge in the pre-trained model introduces a domain gap which may adversely affect the VIS performance. To effectively bridge this gap, we present a novel video pre-training approach to enhance VIS models, especially for videos with intricate instance relationships. Our crucial innovation focuses on reducing disparities between the pre-training and fine-tuning stages. Specifically, we first introduce consistent pseudo-video augmentations to create diverse pseudo-video samples for pre-training while maintaining the instance consistency across frames. Then, we incorporate a multi-scale temporal module to enhance the model's ability to model temporal relations through self- and cross-attention at short- and long-term temporal spans. Our approach does not set constraints on model architecture and can integrate seamlessly with various VIS methods. Experiment results on commonly adopted VIS benchmarks show that our method consistently outperforms state-of-the-art methods. Our approach achieves a notable 4.0% increase in average precision on the challenging OVIS dataset.
http://arxiv.org/abs/2503.17673v1
DCEvo: Discriminative Cross-Dimensional Evolutionary Learning for Infrared and Visible Image Fusion
2025-03-22T07:01:58+00:00
Infrared and visible image fusion integrates information from distinct spectral bands to enhance image quality by leveraging the strengths and mitigating the limitations of each modality. Existing approaches typically treat image fusion and subsequent high-level tasks as separate processes, resulting in fused images that offer only marginal gains in task performance and fail to provide constructive feedback for optimizing the fusion process. To overcome these limitations, we propose a Discriminative Cross-Dimension Evolutionary Learning Framework, termed DCEvo, which simultaneously enhances visual quality and perception accuracy. Leveraging the robust search capabilities of Evolutionary Learning, our approach formulates the optimization of dual tasks as a multi-objective problem by employing an Evolutionary Algorithm (EA) to dynamically balance loss function parameters. Inspired by visual neuroscience, we integrate a Discriminative Enhancer (DE) within both the encoder and decoder, enabling the effective learning of complementary features from different modalities. Additionally, our Cross-Dimensional Embedding (CDE) block facilitates mutual enhancement between high-dimensional task features and low-dimensional fusion features, ensuring a cohesive and efficient feature integration process. Experimental results on three benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches, achieving an average improvement of 9.32% in visual quality while also enhancing subsequent high-level tasks. The code is available at https://github.com/Beate-Suy-Zhang/DCEvo.
http://arxiv.org/abs/2503.17674v1
MultiScale Contextual Bandits for Long Term Objectives
2025-03-22T07:03:45+00:00
The feedback that AI systems (e.g., recommender systems, chatbots) collect from user interactions is a crucial source of training data. While short-term feedback (e.g., clicks, engagement) is widely used for training, there is ample evidence that optimizing short-term feedback does not necessarily achieve the desired long-term objectives. Unfortunately, directly optimizing for long-term objectives is challenging, and we identify the disconnect in the timescales of short-term interventions (e.g., rankings) and the long-term feedback (e.g., user retention) as one of the key obstacles. To overcome this disconnect, we introduce the framework of MultiScale Policy Learning, which contextually reconciles the need for AI systems to act and to optimize feedback at multiple interdependent timescales. For any two levels, our formulation selects the shorter-term objective at the next lower scale to optimize the longer-term objective at the next higher scale. As a result, the policies at all levels effectively optimize for the long-term. We instantiate the framework with MultiScale Off-Policy Bandit Learning (MSBL) and demonstrate its effectiveness on three tasks relating to recommender systems and text generation.
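The following toy sketch is only schematic and not the MSBL algorithm: a slow, higher-level bandit chooses which short-term proxy the fast, lower-level policy should optimize, and it is updated only from the long-term feedback observed afterwards.

```python
# Schematic two-timescale sketch (not MSBL itself): the higher-level bandit picks the
# short-term proxy objective, the lower-level policy runs for a period, and the bandit
# is updated from simulated long-term feedback. The simulator is a made-up assumption.
import numpy as np

rng = np.random.default_rng(0)
proxies = ["clicks", "dwell_time"]          # candidate short-term objectives
counts, values = np.zeros(2), np.zeros(2)   # epsilon-greedy statistics per proxy

def run_short_term_policy(proxy):
    """Stand-in for the lower-level policy acting for many steps to maximize `proxy`;
    returns simulated long-term feedback (e.g., retention). Here we simply assume
    optimizing dwell_time yields better retention on average."""
    base = 0.4 if proxy == "clicks" else 0.6
    return rng.normal(base, 0.05)

for period in range(200):                   # each period = one long-timescale decision
    arm = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(values))
    long_term = run_short_term_policy(proxies[arm])
    counts[arm] += 1
    values[arm] += (long_term - values[arm]) / counts[arm]   # incremental mean update

print(dict(zip(proxies, values.round(3))))
```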
http://arxiv.org/abs/2503.17675v1
Towards Transformer-Based Aligned Generation with Self-Coherence Guidance
2025-03-22T07:03:57+00:00
We introduce a novel, training-free approach for enhancing alignment in Transformer-based Text-Guided Diffusion Models (TGDMs). Existing TGDMs often struggle to generate semantically aligned images, particularly when dealing with complex text prompts or multi-concept attribute binding challenges. Previous U-Net-based methods primarily optimized the latent space, but their direct application to Transformer-based architectures has shown limited effectiveness. Our method addresses these challenges by directly optimizing cross-attention maps during the generation process. Specifically, we introduce Self-Coherence Guidance, a method that dynamically refines attention maps using masks derived from previous denoising steps, ensuring precise alignment without additional training. To validate our approach, we constructed more challenging benchmarks for evaluating coarse-grained attribute binding, fine-grained attribute binding, and style binding. Experimental results demonstrate the superior performance of our method, significantly surpassing other state-of-the-art methods across all evaluated tasks. Our code is available at https://scg-diffusion.github.io/scg-diffusion.
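As a rough sketch of the mechanism (the threshold, boosting factor, and renormalization are assumptions rather than the paper's exact procedure), one can mask the current cross-attention maps with the regions highlighted at the previous denoising step.

```python
# Rough sketch only: refine the current step's cross-attention maps using binary masks
# derived from the previous denoising step, boosting regions already associated with
# each token. Quantile threshold and strength are illustrative assumptions.
import torch

def self_coherence_refine(attn_curr, attn_prev, quantile=0.8, strength=0.5):
    """attn_*: (num_tokens, H, W) cross-attention maps with non-negative entries."""
    # token-wise threshold computed from the previous step's map
    thresh = torch.quantile(attn_prev.flatten(1), quantile, dim=1).view(-1, 1, 1)
    mask = (attn_prev >= thresh).float()
    refined = attn_curr * (1.0 + strength * mask)          # boost coherent regions
    # renormalize each token's map so it still sums to 1 over spatial positions
    return refined / refined.flatten(1).sum(dim=1).clamp_min(1e-8).view(-1, 1, 1)

if __name__ == "__main__":
    prev, curr = torch.rand(4, 16, 16), torch.rand(4, 16, 16)
    print(self_coherence_refine(curr, prev).shape)
```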
http://arxiv.org/abs/2503.17676v1
Odd spanning trees of a graph
2025-03-22T07:04:48+00:00
A graph $G=(V,E)$ is said to be odd (or even, resp.) if $d_G(v)$ is odd (or even, resp.) for any $v\in V$. Trivially, the order of an odd graph must be even. In this paper, we show that every 4-edge connected graph of even order has a connected odd factor. A spanning tree $T$ of $G$ is called a homeomorphically irreducible spanning tree (HIST for short) if $T$ contains no vertex of degree two. Trivially, an odd spanning tree must be a HIST. In 1990, Albertson, Berman, Hutchinson, and Thomassen showed that every connected graph of order $n$ with $\delta(G)\geq \min\{\frac n 2, 4\sqrt{2n}\}$ contains a HIST. We show that every complete bipartite graph with both parts of even size has no odd spanning tree; hence, for any integer $n$ divisible by 4, there exists a graph of order $n$ with minimum degree $\frac n 2$ that has no odd spanning tree. Furthermore, we show that every graph of order $n$ with $\delta(G)\geq \frac n 2 +1$ has an odd spanning tree. We also characterize all split graphs having an odd spanning tree. As an application, for any graph $G$ with diameter at least 4, $\overline{G}$ has a spanning odd double star. Finally, we give a necessary and sufficient condition for the complement of a triangle-free graph $G$ to contain an odd spanning tree. A number of related open problems are proposed.
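For intuition, here is a short parity argument (ours, not quoted from the paper) for the claim that a complete bipartite graph with both parts of even size has no odd spanning tree.

```latex
\paragraph{Parity sketch.}
A spanning tree $T$ of $K_{m,n}$ has $m+n-1$ edges, and each edge has exactly one
endpoint in the part $A$ with $|A| = m$, so
\[
  \sum_{v \in A} d_T(v) \;=\; m + n - 1 ,
\]
which is odd because $m+n$ is even. If every $d_T(v)$ were odd, then a sum of $m$
odd numbers would be even (since $m$ is even), a contradiction. Hence $T$ cannot
be an odd spanning tree.
```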
http://arxiv.org/abs/2503.17677v1
Reducing Class-wise Confusion for Incremental Learning with Disentangled Manifolds
2025-03-22T07:07:15+00:00
Class incremental learning (CIL) aims to enable models to continuously learn new classes without catastrophically forgetting old ones. A promising direction is to learn and use prototypes of classes during incremental updates. Despite simplicity and intuition, we find that such methods suffer from inadequate representation capability and unsatisfactory feature overlap. These two factors cause class-wise confusion and limited performance. In this paper, we develop a Confusion-REduced AuTo-Encoder classifier (CREATE) for CIL. Specifically, our method employs a lightweight auto-encoder module to learn a compact manifold for each class in the latent subspace, constraining samples to be well reconstructed only by the semantically correct auto-encoder. Thus, the representation stability and capability of class distributions are enhanced, alleviating the potential class-wise confusion problem. To further distinguish the overlapped features, we propose a confusion-aware latent space separation loss that ensures samples are closely distributed in their corresponding low-dimensional manifold while keeping away from the distributions of features from other classes. Our method demonstrates stronger representational capacity and discrimination ability by learning disentangled manifolds and reduces class confusion. Extensive experiments on multiple datasets and settings show that CREATE outperforms other state-of-the-art methods by up to 5.41%.
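A stripped-down sketch of classification by per-class auto-encoder reconstruction error, the core mechanism the abstract describes, is given below; the tiny architecture and the omission of the confusion-aware separation loss are simplifications.

```python
# Illustrative sketch: assign each sample to the class whose auto-encoder reconstructs
# it best. Layer sizes are arbitrary and training is omitted; this is not CREATE itself.
import torch
import torch.nn as nn

class ClassAE(nn.Module):
    def __init__(self, dim=64, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def predict(feats, autoencoders):
    """feats: (B, dim) features; autoencoders: one ClassAE per class.
    Returns the index of the class with the smallest reconstruction error."""
    errors = torch.stack(
        [((ae(feats) - feats) ** 2).mean(dim=1) for ae in autoencoders], dim=1)
    return errors.argmin(dim=1)

if __name__ == "__main__":
    aes = [ClassAE() for _ in range(3)]
    x = torch.randn(5, 64)
    print(predict(x, aes))
```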
http://arxiv.org/abs/2503.17678v1
Computationally and Sample Efficient Safe Reinforcement Learning Using Adaptive Conformal Prediction
2025-03-22T07:16:54+00:00
Safety is a critical concern in learning-enabled autonomous systems, especially when deploying these systems in real-world scenarios. An important challenge is accurately quantifying the uncertainty of unknown models to generate provably safe control policies that facilitate the gathering of informative data, thereby achieving both safe and optimal policies. Additionally, the selection of the data-driven model can significantly impact both the real-time implementation and the uncertainty quantification process. In this paper, we propose a provably sample efficient episodic safe learning framework that remains robust across various model choices with quantified uncertainty for online control tasks. Specifically, we first employ Quadrature Fourier Features (QFF) for kernel function approximation of Gaussian Processes (GPs) to enable efficient approximation of unknown dynamics. Then, Adaptive Conformal Prediction (ACP) is used to quantify the uncertainty from online observations and is combined with Control Barrier Functions (CBFs) to characterize the uncertainty-aware safe control constraints under learned dynamics. Finally, an optimism-based exploration strategy is integrated with ACP-based CBFs for safe exploration and near-optimal safe nonlinear control. Theoretical proofs and simulation results are provided to demonstrate the effectiveness and efficiency of the proposed framework.
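As a minimal, self-contained sketch of the adaptive conformal prediction step in an online loop (constants and the toy dynamics are assumptions; the QFF/GP model and the CBF constraint are omitted), the miscoverage level can be nudged after each observation and the resulting residual quantile used as an online model-error bound.

```python
# Minimal ACP sketch: adapt the miscoverage level online from whether the previous
# residual quantile covered the new observation. The toy "dynamics", noise level, and
# step size are assumptions; the learned-model and CBF parts of the paper are omitted.
import numpy as np

rng = np.random.default_rng(0)
alpha_target, gamma = 0.1, 0.05
alpha_t = alpha_target
residuals = []

def predict(x):            # stand-in for the learned dynamics model (e.g., GP mean)
    return 0.9 * x

for t in range(300):
    x = rng.normal()
    y = 0.9 * x + rng.normal(0, 0.1)               # true next state with noise
    resid = abs(y - predict(x))
    if len(residuals) > 10:
        q = np.quantile(residuals, 1 - np.clip(alpha_t, 0.01, 0.99))
        err_t = float(resid > q)                    # 1 if the interval missed
        alpha_t += gamma * (alpha_target - err_t)   # adaptive update of coverage level
    residuals.append(resid)

print(f"final adapted miscoverage level: {alpha_t:.3f}")
```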
http://arxiv.org/abs/2503.17679v2
Polymer: Development Workflows as Software
2025-03-22T07:18:44+00:00
Software development builds digital tools to automate processes, yet its initial phases, up to deployment, remain largely manual. There are two reasons: Development tasks are often under-specified and transitions between tasks usually require a translator. These reasons are mutually reinforcing: it makes little sense to specify tasks when you cannot connect them and writing a translator requires a specification. LLMs change this cost equation: they can handle under-specified systems and they excel at translation. Thus, they can act as skeleton keys that unlock the automation of tasks and transitions that were previously too expensive to interlink. We introduce a recipe for writing development workflows as software (polymer) to further automate the initial phases of development. We show how adopting polymer to automate testing at Volvo, a large automotive manufacturer, saved 2--3 FTEs at the cost of two months of development and deployment effort. We close with open challenges when polymerizing development workflows.
http://arxiv.org/abs/2503.17680v1
Bounded-METANET: A new discrete-time second-order macroscopic traffic flow model for bounded speed
2025-03-22T07:37:26+00:00
Macroscopic traffic flow models are essential for analysing traffic dynamics in highways and urban roads. While second-order models like METANET capture non-equilibrium traffic states, they often produce unrealistic speed predictions, such as negative values or speeds above the free-flow limit, which limits their reliability in traffic management. To overcome these limitations, we introduce Bounded-METANET, a new discrete-time second-order model that refines METANET's speed update equation by removing the convection term and adding a virtual density mechanism to reflect anticipation and merging effects. This ensures that speeds stay bounded between zero and the free-flow speed, simplifying calibration and boosting usability. Validated with SUMO simulations and real-world German highway data, Bounded-METANET accurately captures non-equilibrium flow and the capacity drop phenomenon, and outperforms METANET in estimating the fundamental diagram under congestion. It achieves lower RMSE for speed and density in noise-free simulation data and better flow estimation in real-world data, though METANET edges out in speed RMSE. Unlike METANET, which can produce erratic shockwave speeds and flow errors, Bounded-METANET delivers consistent, realistic predictions. This makes it a promising tool for traffic modelling and control in various scenarios.
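The sketch below is not Bounded-METANET's actual update equation; it only illustrates, with made-up parameter values, how a relaxation toward an equilibrium speed evaluated at a "virtual" density can be kept within the physical range [0, v_free], the boundedness property the abstract emphasizes.

```python
# Highly simplified, hypothetical speed update: relax toward an equilibrium speed at a
# virtual density (mixing local and downstream density to mimic anticipation/merging)
# and keep the result in [0, v_free]. This is NOT the paper's exact equation.
import numpy as np

v_free, rho_crit, a = 120.0, 33.5, 1.8        # km/h, veh/km/lane, exponent (assumed)
T, tau = 10 / 3600.0, 18 / 3600.0             # step and relaxation time in hours

def V_eq(rho):
    """METANET-style equilibrium speed (fundamental diagram)."""
    return v_free * np.exp(-(1 / a) * (rho / rho_crit) ** a)

def speed_update(v, rho, rho_downstream, beta=0.5):
    rho_virtual = (1 - beta) * rho + beta * rho_downstream   # assumed virtual density
    v_next = v + (T / tau) * (V_eq(rho_virtual) - v)
    return float(np.clip(v_next, 0.0, v_free))               # enforce the bounds

if __name__ == "__main__":
    print(speed_update(v=90.0, rho=30.0, rho_downstream=60.0))
```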
http://arxiv.org/abs/2503.17681v1
Staying Alive: Online Neural Network Maintenance and Systemic Drift
2025-03-22T07:38:44+00:00
We present the Subset Extended Kalman Filter (SEKF) as a method to update previously trained model weights online rather than retraining or finetuning them when the system a model represents drifts away from the conditions under which it was trained. We identify the parameters to be updated using the gradient of the loss function and use the SEKF to update only these parameters. We compare finetuning and SEKF for online model maintenance in the presence of systemic drift through four dynamic regression case studies and find that the SEKF is able to maintain model accuracy as well as, if not better than, finetuning while requiring significantly less time per iteration and less hyperparameter tuning.
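A conceptual sketch (not the authors' implementation) of a subset EKF update for a linear model follows: the parameters with the largest loss-gradient magnitude are selected and only that subset is corrected as drifted data streams in. Reusing one small covariance matrix across changing subsets, and the noise constants, are simplifying assumptions.

```python
# Conceptual SEKF sketch for a linear model y = w @ x: pick the k parameters with the
# largest gradient magnitude, then apply an EKF measurement update to that subset only.
import numpy as np

rng = np.random.default_rng(0)
dim, k, R = 10, 3, 0.05                  # parameter count, subset size, obs. noise var.
w_true = rng.normal(size=dim)
w = w_true.copy()                        # "pre-trained" weights
P = 0.1 * np.eye(k)                      # covariance over the selected subset
                                         # (reused across subsets as a simplification)
for t in range(500):
    if t == 100:                         # systemic drift: part of the system changes
        w_true[:3] += 1.0
    x = rng.normal(size=dim)
    y = w_true @ x + rng.normal(0, np.sqrt(R))
    y_hat = w @ x
    grad = -2 * (y - y_hat) * x          # gradient of squared error w.r.t. w
    idx = np.argsort(np.abs(grad))[-k:]  # subset with the largest gradient magnitudes
    H = x[idx].reshape(1, k)             # Jacobian of y_hat w.r.t. the selected subset
    S = H @ P @ H.T + R                  # innovation variance
    K = (P @ H.T) / S                    # Kalman gain (k x 1)
    w[idx] += (K * (y - y_hat)).ravel()  # update only the selected parameters
    P = (np.eye(k) - K @ H) @ P + 1e-4 * np.eye(k)   # small process noise keeps P alive

print("error on drifted weights:", np.abs(w[:3] - w_true[:3]).round(3))
```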
http://arxiv.org/abs/2503.17682v1
Safe RLHF-V: Safe Reinforcement Learning from Human Feedback in Multimodal Large Language Models
2025-03-22T07:40:20+00:00
Multimodal large language models (MLLMs) are critical for developing general-purpose AI assistants, yet they face growing safety risks. How can we ensure that MLLMs are safely aligned to prevent undesired behaviors such as discrimination, misinformation, or violations of ethical standards? Going a step further, we need to explore how to fine-tune MLLMs to enhance reasoning performance while ensuring they satisfy safety constraints. Fundamentally, this can be formulated as a min-max optimization problem. In this study, we propose Safe RLHF-V, the first multimodal safety alignment framework that jointly optimizes helpfulness and safety using separate multimodal reward and cost models within a Lagrangian-based constrained optimization framework. Given that there is a lack of preference datasets that separate helpfulness and safety in multimodal scenarios, we introduce BeaverTails-V, the first open-source dataset with dual preference annotations for helpfulness and safety, along with multi-level safety labels (minor, moderate, severe). Additionally, we design a Multi-level Guardrail System to proactively defend against unsafe queries and adversarial attacks. By applying the Beaver-Guard-V moderation for 5 rounds of filtering and re-generation on the precursor model, the overall safety of the upstream model is significantly improved by an average of 40.9%. Experimental results demonstrate that fine-tuning different MLLMs with Safe RLHF can effectively enhance model helpfulness while ensuring improved safety. Specifically, Safe RLHF-V improves model safety by 34.2% and helpfulness by 34.3%. All datasets, models, and code can be found at https://github.com/SafeRLHF-V to support the safety development of MLLMs and reduce potential societal risks.
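The Lagrangian-based constrained optimization the abstract mentions can be illustrated with a toy scalar "policy": maximize a reward proxy subject to a cost budget, with dual ascent on the multiplier. Everything below is a stand-in for illustration, not Safe RLHF-V's actual objective or models.

```python
# Toy Lagrangian sketch: primal gradient steps on L = reward - lambda * (cost - budget),
# dual ascent on lambda when the cost constraint is violated. Functions are made up.
def reward(theta):  return -(theta - 2.0) ** 2 + 4.0   # helpfulness proxy (assumed)
def cost(theta):    return 0.5 * theta                  # safety cost proxy (assumed)

budget, lr_theta, lr_lambda = 0.6, 0.05, 0.1
theta, lam = 0.0, 0.0

for _ in range(500):
    # primal step: ascend the Lagrangian with respect to the policy parameter
    grad_theta = -2 * (theta - 2.0) - lam * 0.5
    theta += lr_theta * grad_theta
    # dual step: increase lambda whenever the expected cost exceeds the budget
    lam = max(0.0, lam + lr_lambda * (cost(theta) - budget))

print(f"theta={theta:.2f}, cost={cost(theta):.2f} (budget {budget}), lambda={lam:.2f}")
```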
http://arxiv.org/abs/2503.17683v1
Decentralized Federated Dataset Dictionary Learning for Multi-Source Domain Adaptation
2025-03-22T07:48:48+00:00
Decentralized Multi-Source Domain Adaptation (DMSDA) is a challenging task that aims to transfer knowledge from multiple related and heterogeneous source domains to an unlabeled target domain within a decentralized framework. Our work tackles DMSDA through a fully decentralized federated approach. In particular, we extend the Federated Dataset Dictionary Learning (FedDaDiL) framework by eliminating the necessity for a central server. FedDaDiL leverages Wasserstein barycenters to model the distributional shift across multiple clients, enabling effective adaptation while preserving data privacy. By decentralizing this framework, we enhance its robustness, scalability, and privacy, removing the risk of a single point of failure. We compare our method to its federated counterpart and other benchmark algorithms, showing that our approach effectively adapts source domains to an unlabeled target domain in a fully decentralized manner.
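To illustrate only the decentralization aspect (local dictionary updates and the Wasserstein-barycenter machinery are omitted), the sketch below replaces server aggregation with gossip averaging of dictionary atoms over a ring of clients.

```python
# Sketch of serverless aggregation only (not the FedDaDiL algorithm): each client
# averages its dictionary atoms with its ring neighbors, so all clients converge to a
# shared dictionary without a central coordinator. Sizes and mixing weight are assumed.
import numpy as np

rng = np.random.default_rng(0)
num_clients, num_atoms, dim = 4, 5, 8
# ring communication topology: each client talks only to its two neighbors
neighbors = {i: [(i - 1) % num_clients, (i + 1) % num_clients] for i in range(num_clients)}
dicts = [rng.normal(size=(num_atoms, dim)) for _ in range(num_clients)]

def gossip_round(dicts, mix=0.5):
    new = []
    for i in range(num_clients):
        neigh_avg = np.mean([dicts[j] for j in neighbors[i]], axis=0)
        new.append((1 - mix) * dicts[i] + mix * neigh_avg)   # consensus step
    return new

for _ in range(50):
    dicts = gossip_round(dicts)

spread = max(np.linalg.norm(d - dicts[0]) for d in dicts)
print(f"max disagreement between clients after gossip: {spread:.2e}")
```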