Dataset Viewer (auto-converted to Parquet)
Columns: url (string), title (string), date_published (string), abstract (string)
http://arxiv.org/abs/2504.12538v1
Non-invasive mid-circuit measurement and reset on atomic qubits
2025-04-17T00:02:04+00:00
Mid-circuit measurement and reset of subsets of qubits is a crucial ingredient of quantum error correction and many quantum information applications. Measurement of atomic qubits is accomplished through resonant fluorescence, which typically disturbs neighboring atoms due to photon scattering. We propose and prototype a new scheme for measurement that provides both spatial and spectral isolation by using tightly-focused individual laser beams and narrow atomic transitions. The unique advantage of this scheme is that all operations are applied exclusively to the read-out qubit, with negligible disturbance to the other qubits of the same species and little overhead. In this letter, we pave the way for non-invasive and high-fidelity mid-circuit measurement and demonstrate all key building blocks on a single trapped barium ion.
http://arxiv.org/abs/2504.12539v1
The Intergalactic Medium
2025-04-17T00:03:37+00:00
The intergalactic medium (IGM) comprises all the matter that lies between galaxies. Hosting the vast majority ($\gtrsim 90\%$) of the baryons in the Universe, the IGM is a critical reservoir and probe for cosmology and astrophysics, providing insights into large-scale structure formation and galaxy evolution. In this Chapter, we present an overview of the general properties of the IGM, focusing on their dependence on cosmic environment and cosmic time. Emphasis is given to the basic physical principles that allow us to model the density, temperature, and ionization state of the IGM, supported by results from cosmological hydrodynamical simulations. We also cover the foundational principles of quasar spectroscopy used to probe the IGM in absorption, with a particular focus on HI absorption lines. Finally, we briefly discuss future prospects and complementary observational techniques to enhance our understanding of the IGM.
http://arxiv.org/abs/2504.12540v1
UniPhys: Unified Planner and Controller with Diffusion for Flexible Physics-Based Character Control
2025-04-17T00:04:31+00:00
Generating natural and physically plausible character motion remains challenging, particularly for long-horizon control with diverse guidance signals. While prior work combines high-level diffusion-based motion planners with low-level physics controllers, these systems suffer from domain gaps that degrade motion quality and require task-specific fine-tuning. To tackle this problem, we introduce UniPhys, a diffusion-based behavior cloning framework that unifies motion planning and control into a single model. UniPhys enables flexible, expressive character motion conditioned on multi-modal inputs such as text, trajectories, and goals. To address accumulated prediction errors over long sequences, UniPhys is trained with the Diffusion Forcing paradigm, learning to denoise noisy motion histories and handle discrepancies introduced by the physics simulator. This design allows UniPhys to robustly generate physically plausible, long-horizon motions. Through guided sampling, UniPhys generalizes to a wide range of control signals, including unseen ones, without requiring task-specific fine-tuning. Experiments show that UniPhys outperforms prior methods in motion naturalness, generalization, and robustness across diverse control tasks.
http://arxiv.org/abs/2504.12541v1
Evolving Atmospheric Ion Escape from Kepler-1649 b and c: Power-Law Trends in Atmospheric Loss
2025-04-17T00:07:53+00:00
Rocky planets orbiting M-dwarf stars are prime targets for characterizing terrestrial atmospheres, yet their long-term evolution under intense stellar winds and high-energy radiation remains poorly understood. The Kepler-1649 system, which hosts two terrestrial exoplanets orbiting an M5V star, presents a valuable opportunity to explore atmospheric evolution in the extreme environments characteristic of M-dwarf stellar systems. In this Letter we show that both planets could have retained atmospheres over gigayear timescales. Using a multi-species magnetohydrodynamic model, we simulate atmospheric ion escape driven by stellar winds and extreme ultraviolet radiation from 0.7 to 4.8 Gyr. The results show that total ion escape rates follow a power-law decline ($\propto \tau^{-1.6}$ for Kepler-1649 b, $\propto \tau^{-1.5}$ for Kepler-1649 c$\,$), with O$^{+}$ dominating atmospheric loss (76.8%-98.7%). The escape rates at 4.8 Gyr are two orders of magnitude lower than those during the early epochs ($1.9\times10^{27}$ s$^{-1}$ at 0.7 Gyr vs. $3.0\times10^{25}$ s$^{-1}$ at 4.8 Gyr for planet b$\,$), while planet b consistently exhibits 1.1-1.9$\times$ higher O$^{+}$ escape rates than planet c due to its closer orbit (0.051 AU vs. 0.088 AU). Despite substantial early atmospheric erosion, both planets may still retain significant atmospheres, suggesting the potential for long-term habitability. These findings offer predictive insight into atmospheric retention in M-dwarf systems and inform future JWST observations aimed at refining habitability assessments.
http://arxiv.org/abs/2504.12542v1
Post-Hurricane Debris Segmentation Using Fine-Tuned Foundational Vision Models
2025-04-17T00:08:50+00:00
Timely and accurate detection of hurricane debris is critical for effective disaster response and community resilience. While post-disaster aerial imagery is readily available, robust debris segmentation solutions applicable across multiple disaster regions remain limited. Developing a generalized solution is challenging due to varying environmental and imaging conditions that alter debris' visual signatures across different regions, further compounded by the scarcity of training data. This study addresses these challenges by fine-tuning pre-trained foundational vision models, achieving robust performance with a relatively small, high-quality dataset. Specifically, this work introduces an open-source dataset comprising approximately 1,200 manually annotated aerial RGB images from Hurricanes Ian, Ida, and Ike. To mitigate human biases and enhance data quality, labels from multiple annotators are strategically aggregated and visual prompt engineering is employed. The resulting fine-tuned model, named fCLIPSeg, achieves a Dice score of 0.70 on data from Hurricane Ida -- a disaster event entirely excluded during training -- with virtually no false positives in debris-free areas. This work presents the first event-agnostic debris segmentation model requiring only standard RGB imagery during deployment, making it well-suited for rapid, large-scale post-disaster impact assessments and recovery planning.
http://arxiv.org/abs/2504.12543v1
Ruled zero mean curvature surfaces in the three-dimensional light cone
2025-04-17T00:09:36+00:00
We obtain a complete classification of ruled zero mean curvature surfaces in the three-dimensional light cone. En route, we examine geodesics and screw motions in the space form, allowing us to discover helicoids. We also consider their relationship to catenoids using Weierstrass representations of zero mean curvature surfaces in the three-dimensional light cone.
http://arxiv.org/abs/2504.12544v1
In-situ mid-circuit qubit measurement and reset in a single-species trapped-ion quantum computing system
2025-04-17T00:10:35+00:00
We implement in-situ mid-circuit measurement and reset (MCMR) operations on a trapped-ion quantum computing system by using metastable qubit states in $^{171}\textrm{Yb}^+$ ions. We introduce and compare two methods for isolating data qubits from measured qubits: one shelves the data qubit into the metastable state and the other drives the measured qubit to the metastable state without disturbing the other qubits. We experimentally demonstrate both methods on a crystal of two $^{171}\textrm{Yb}^+$ ions using both the $S_{1/2}$ ground state hyperfine clock qubit and the $S_{1/2}$-$D_{3/2}$ optical qubit. These MCMR methods result in errors on the data qubit of about $2\%$ without degrading the measurement fidelity. With straightforward reductions in laser noise, these errors can be suppressed to less than $0.1\%$. The demonstrated method allows MCMR to be performed in a single-species ion chain without shuttling or additional qubit-addressing optics, greatly simplifying the architecture.
http://arxiv.org/abs/2504.12545v1
Knowledge Acquisition on Mass-shooting Events via LLMs for AI-Driven Justice
2025-04-17T00:13:04+00:00
Mass-shooting events pose a significant challenge to public safety, generating large volumes of unstructured textual data that hinder effective investigations and the formulation of public policy. Despite the urgency, few prior studies have effectively automated the extraction of key information from these events to support legal and investigative efforts. This paper presents the first dataset designed for knowledge acquisition on mass-shooting events through the application of named entity recognition (NER) techniques. It focuses on identifying key entities such as offenders, victims, locations, and criminal instruments that are vital for legal and investigative purposes. The NER process is powered by Large Language Models (LLMs) using few-shot prompting, facilitating the efficient extraction and organization of critical information from diverse sources, including news articles, police reports, and social media. Experimental results on real-world mass-shooting corpora demonstrate that GPT-4o is the most effective model for mass-shooting NER, achieving the highest Micro Precision, Micro Recall, and Micro F1-scores. Meanwhile, o1-mini delivers competitive performance, making it a resource-efficient alternative for less complex NER tasks. It is also observed that increasing the shot count enhances the performance of all models, but the gains are more substantial for GPT-4o and o1-mini, highlighting their superior adaptability to few-shot learning scenarios.
http://arxiv.org/abs/2504.12546v1
Anonymous Public Announcements
2025-04-17T00:14:37+00:00
We formalise the notion of an \emph{anonymous public announcement} in the tradition of public announcement logic. Such announcements can be seen as in-between a public announcement from ``the outside" (an announcement of $\phi$) and a public announcement by one of the agents (an announcement of $K_a\phi$): we get more information than just $\phi$, but not (necessarily) about exactly who made it. Even if such an announcement is prima facie anonymous, depending on the background knowledge of the agents it might reveal the identity of the announcer: if I post something on a message board, the information might reveal who I am even if I don't sign my name. Furthermore, like in the Russian Cards puzzle, if we assume that the announcer's intention was to stay anonymous, that in fact might reveal more information. In this paper we first look at the case when no assumptions about intentions are made, in which case the logic with an anonymous public announcement operator is reducible to epistemic logic. We then look at the case when we assume common knowledge of the intention to stay anonymous, which is both more complex and more interesting: in several ways it boils down to the notion of a ``safe" announcement (again, similarly to Russian Cards). Main results include formal expressivity results and axiomatic completeness for key logical languages.
http://arxiv.org/abs/2504.12547v1
Magnetoresistance in ZrSi$X$ ($X=$ S, Se, Te) nodal-line semimetals
2025-04-17T00:15:51+00:00
We present a comprehensive first-principles study of the magnetoresistance in ZrSi$X$ ($X=$ S, Se, Te) topological nodal-line semimetals. Our study demonstrates that all primary features of the experimentally measured magnetoresistance in these materials are captured by our calculations, including the unusual butterfly-shaped anisotropic magnetoresistance. This anisotropic magnetoresistance can be accurately reproduced using the semiclassical Boltzmann transport theory without introducing any information on the topological nature of bands or the concepts of topological phase transition. Considering the complex structure of the Fermi surface in these topological materials, we develop a theoretical description explaining the features observed in magnetoresistance measurements. Additionally, the atypical Hall resistance can be interpreted by the same semiclassical approach. Our findings establish magnetotransport as a powerful tool for analyzing the geometry of the Fermi surface, complementing angle-resolved photoemission spectroscopy and quantum oscillation measurements. This approach is demonstrated to be particularly useful for determining the role of non-trivial topology in transport properties.
http://arxiv.org/abs/2504.12548v1
On the Grad-Mercier equation and Semilinear Free Boundary Problems
2025-04-17T00:17:46+00:00
In this paper, we establish regularity and uniqueness results for Grad-Mercier type equations that arise in the context of plasma physics. We show that solutions of this problem naturally develop a dead core, which corresponds to the set where the solutions become identically equal to their maximum. We prove uniqueness, sharp regularity, and non-degeneracy bounds for solutions under suitable assumptions on the reaction term. Of independent interest, our methods allow us to prove that the free boundaries of a broad class of semilinear equations have locally finite $H^{n-1}$ measure.
http://arxiv.org/abs/2504.12549v1
Memorization: A Close Look at Books
2025-04-17T00:20:18+00:00
To what extent can entire books be extracted from LLMs? Using the Llama 3 70B family of models, and the "prefix-prompting" extraction technique, we were able to auto-regressively reconstruct, with a very high level of similarity, one entire book (Alice's Adventures in Wonderland) from just the first 500 tokens. We were also able to obtain high extraction rates on several other books, piece-wise. However, these successes do not extend uniformly to all books. We show that extraction rates of books correlate with book popularity and thus, likely duplication in the training data. We also confirm the undoing of mitigations in the instruction-tuned Llama 3.1, following recent work (Nasr et al., 2025). We further find that this undoing comes from changes to only a tiny fraction of weights concentrated primarily in the lower transformer blocks. Our results provide evidence of the limits of current regurgitation mitigation strategies and introduce a framework for studying how fine-tuning affects the retrieval of verbatim memorization in aligned LLMs.
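As a rough illustration of the prefix-prompting setup described above, the sketch below conditions a causal LM on a book's opening tokens and greedily decodes a continuation. The checkpoint is an illustrative assumption (the paper used the Llama 3 70B family), and "alice.txt" is a hypothetical local copy of the public-domain text used for comparison.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch of prefix-prompting extraction. The checkpoint is an
# assumption, not the paper's exact configuration; "alice.txt" is a
# hypothetical local copy of the book for comparison.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

book_text = open("alice.txt").read()
prefix_ids = tok(book_text, return_tensors="pt").input_ids[:, :500]  # first 500 tokens
out = model.generate(prefix_ids, max_new_tokens=256, do_sample=False)  # greedy decoding
continuation = tok.decode(out[0, prefix_ids.shape[1]:], skip_special_tokens=True)
print(continuation[:300])  # compare against the corresponding passage of the book
```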
http://arxiv.org/abs/2504.12550v1
The Hard Lefschetz Theorem on Kähler Lie Algebroids
2025-04-17T00:26:05+00:00
Compact K\"ahler manifolds classically satisfy the Hard Lefschetz Theorem, which gives strong control on the underlying topology of the manifold. One expects a similar theorem to be true for K\"ahler Lie Algebroids, and we show for a certain class of them that this is indeed true, with an added ellipticity requirement. We provide examples of Lie Algebroids satisfying this, as well as an example of a K\"ahler Lie Algebroid that does not meet this Ellipticity requirement, and consequently fails to satisfy the Hard Lefschetz condition.
http://arxiv.org/abs/2504.12551v1
Fast Computation of the Discrete Fourier Transform Rectangular Index Coefficients
2025-04-17T00:44:22+00:00
In~\cite{sic-magazine-2025}, the authors show that the square index coefficients (SICs) of the \(N\)-point discrete Fourier transform (DFT) -- that is, the coefficients \(X_{k\sqrt{N}}\) for \(k = 0, 1, \ldots, \sqrt{N} - 1\) -- can be losslessly compressed from \(N\) to \(\sqrt{N}\) points, thereby accelerating the computation of these specific DFT coefficients accordingly. Following up on that, in this article we generalize SICs into what we refer to as rectangular index coefficients (RICs) of the DFT, formalized as $X_{kL}, k=0,1,\cdots,C-1$, in which the integers $C$ and $L$ are generic roots of $N$ such that $N=LC$. We present an algorithm to compress the $N$-point input signal $\mathbf{x}$ into a $C$-point signal $\mathbf{\hat{x}}$ at the expense of $\mathcal{O}(N)$ complex sums and no complex multiplication. We show that a DFT on $\mathbf{\hat{x}}$ is equivalent to a DFT on the RICs of $\mathbf{x}$. In cases where specific frequencies of \(\mathbf{x}\) are of interest -- as in harmonic analysis -- one can conveniently adjust the signal parameters (e.g., frequency resolution) to align the RICs with those frequencies, and use the proposed algorithm to compute them significantly faster. If $N$ is a power of two -- as required by the fast Fourier transform (FFT) algorithm -- then $C$ can be any power of two in the range $[2, N/2]$ and one can use our algorithm along with FFT to compute all RICs in $\mathcal{O}(C\log C)$ time complexity.
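The folding identity behind this compression is easy to verify numerically. A minimal numpy sketch (variable names are ours): with $N = LC$, summing the input over strides of $C$ costs $\mathcal{O}(N)$ complex additions and no multiplications, and a $C$-point DFT of the folded signal reproduces the RICs $X_{kL}$ of the full $N$-point DFT.

```python
import numpy as np

# Sketch of the folding step (names are ours): with N = L*C, the strided
# sums below cost O(N) complex additions, and a C-point DFT of the folded
# signal equals the RICs X[kL] of the full N-point DFT.
N, C = 1024, 16
L = N // C
x = np.random.randn(N) + 1j * np.random.randn(N)

x_hat = x.reshape(L, C).sum(axis=0)        # x_hat[m] = sum_j x[m + j*C]

rics_fast = np.fft.fft(x_hat)              # O(C log C) via the FFT
rics_ref = np.fft.fft(x)[::L]              # reference: every L-th DFT coefficient
assert np.allclose(rics_fast, rics_ref)
```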
http://arxiv.org/abs/2504.12552v1
Privacy-Preserving Operating Room Workflow Analysis using Digital Twins
2025-04-17T00:46:06+00:00
Purpose: The operating room (OR) is a complex environment where optimizing workflows is critical to reduce costs and improve patient outcomes. The use of computer vision approaches for the automatic recognition of perioperative events enables identification of bottlenecks for OR optimization. However, privacy concerns limit the use of computer vision for automated event detection from OR videos, which makes privacy-preserving approaches necessary for OR workflow analysis. Methods: We propose a two-stage pipeline for privacy-preserving OR video analysis and event detection. In the first stage, we leverage vision foundation models for depth estimation and semantic segmentation to generate de-identified Digital Twins (DT) of the OR from conventional RGB videos. In the second stage, we employ the SafeOR model, a fused two-stream approach that processes segmentation masks and depth maps for OR event detection. We evaluate this method on an internal dataset of 38 simulated surgical trials with five event classes. Results: Our results indicate that this DT-based approach to OR event detection achieves performance on par with, and sometimes better than, raw RGB video-based models. Conclusion: DTs enable privacy-preserving OR workflow analysis, facilitating the sharing of de-identified data across institutions, and can potentially enhance model generalizability by mitigating domain-specific appearance differences.
http://arxiv.org/abs/2504.12553v1
ELAB: Extensive LLM Alignment Benchmark in Persian Language
2025-04-17T00:50:41+00:00
This paper presents a comprehensive evaluation framework for aligning Persian Large Language Models (LLMs) with critical ethical dimensions, including safety, fairness, and social norms. It addresses the gaps in existing LLM evaluation frameworks by adapting them to Persian linguistic and cultural contexts. This benchmark creates three types of Persian-language benchmarks: (i) translated data, (ii) new data generated synthetically, and (iii) new naturally collected data. We translate Anthropic Red Teaming data, AdvBench, HarmBench, and DecodingTrust into Persian. Furthermore, we create ProhibiBench-fa, SafeBench-fa, FairBench-fa, and SocialBench-fa as new datasets to address harmful and prohibited content in indigenous culture. Moreover, we collect an extensive dataset, GuardBench-fa, to capture Persian cultural norms. By combining these datasets, our work establishes a unified framework for evaluating Persian LLMs, offering a new approach to culturally grounded alignment evaluation. A systematic evaluation of Persian LLMs is performed across the three alignment aspects: safety (avoiding harmful content), fairness (mitigating biases), and social norms (adhering to culturally accepted behaviors). We present a publicly available leaderboard that benchmarks Persian LLMs with respect to safety, fairness, and social norms at: https://huggingface.co/spaces/MCILAB/LLM_Alignment_Evaluation.
http://arxiv.org/abs/2504.12554v1
Acoustic Analysis of Uneven Blade Spacing and Toroidal Geometry for Reducing Propeller Annoyance
2025-04-17T00:52:19+00:00
Unmanned aerial vehicles (UAVs) are becoming more commonly used in populated areas, raising concerns about noise pollution generated from their propellers. This study investigates the acoustic performance of unconventional propeller designs, specifically toroidal and uneven-blade spaced propellers, for their potential in reducing psychoacoustic annoyance. Our experimental results show that these designs noticeably reduced acoustic characteristics associated with noise annoyance.
http://arxiv.org/abs/2504.12555v1
Generalized Neumann's Principle as a Unified Framework for Fractional Quantum and Conventional Ferroelectricity
2025-04-17T01:00:18+00:00
Monolayer In$_2$Se$3$ exhibits unexpected in-plane polarization, despite having $C_{3v}$ symmetry, a feature that was traditionally considered forbidden by symmetry. To explain this remarkable behavior, Ji et al. proposed the concept of fractional quantum ferroelectricity (FQFE), in which polarization occurs in fractional multiples of a quantum, and argued that this phenomenon violates Neumann's principle. However, we introduce a generalized form of Neumann's principle and demonstrate that both FQFE and conventional ferroelectricity can be consistently described within this unified theoretical framework. We propose a method, based on the generalized Neumann's principle, for the systematic identification of FQFE materials. This approach is not only more straightforward to apply but also offers a clearer conceptual understanding and deeper physical insight compared to previous methods. Using this method, we determine all symmetry-allowed FQFE cases across the 32 crystallographic point groups. Since practical applications rely on the ability to control polarization, we further show that FQFE can be effectively switched via coupling with conventional polarization. Using HfZnN$_2$ as an illustrative example, we reveal the underlying mechanism of this coupling and outline a strategy to identify other materials with similar switching behavior.
http://arxiv.org/abs/2504.12556v1
Contour Field based Elliptical Shape Prior for the Segment Anything Model
2025-04-17T01:08:24+00:00
The elliptical shape prior information plays a vital role in improving the accuracy of image segmentation for specific tasks in medical and natural images. Existing deep learning-based segmentation methods, including the Segment Anything Model (SAM), often struggle to produce segmentation results with elliptical shapes efficiently. This paper proposes a new approach to integrate the prior of elliptical shapes into the deep learning-based SAM image segmentation techniques using variational methods. The proposed method establishes a parameterized elliptical contour field, which constrains the segmentation results to align with predefined elliptical contours. Utilizing the dual algorithm, the model seamlessly integrates image features with elliptical priors and spatial regularization priors, thereby greatly enhancing segmentation accuracy. By decomposing SAM into four mathematical sub-problems, we integrate the variational ellipse prior into the design of a new SAM network structure, ensuring that the segmentation output of SAM consists of elliptical regions. Experimental results on some specific image datasets demonstrate an improvement over the original SAM.
http://arxiv.org/abs/2504.12557v1
TraCeS: Trajectory Based Credit Assignment From Sparse Safety Feedback
2025-04-17T01:11:08+00:00
In safe reinforcement learning (RL), auxiliary safety costs are used to align the agent to safe decision making. In practice, safety constraints, including cost functions and budgets, are unknown or hard to specify, as it requires anticipation of all possible unsafe behaviors. We therefore address a general setting where the true safety definition is unknown, and has to be learned from sparsely labeled data. Our key contributions are: First, we design a safety model that performs credit assignment to estimate each decision step's impact on the overall safety using a dataset of diverse trajectories and their corresponding binary safety labels (i.e., whether the corresponding trajectory is safe/unsafe). Second, we illustrate the architecture of our safety model to demonstrate its ability to learn a separate safety score for each timestep. Third, we reformulate the safe RL problem using the proposed safety model and derive an effective algorithm to optimize a safe yet rewarding policy. Finally, our empirical results corroborate our findings and show that this approach is effective in satisfying the unknown safety definition, and scalable to various continuous control tasks.
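A hedged sketch of the credit-assignment idea (the MLP scorer and mean aggregation are our illustrative assumptions, not the paper's exact design): a per-step scorer is trained so that its aggregated output matches trajectory-level binary safety labels, yielding a separate safety score for each timestep.

```python
import torch
import torch.nn as nn

# Hedged sketch: per-step safety scores trained against a trajectory-level
# binary label; the MLP and mean aggregation are illustrative assumptions.
class StepSafetyScorer(nn.Module):
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, traj):                  # traj: (T, obs_dim)
        return self.net(traj).squeeze(-1)     # per-timestep scores, shape (T,)

scorer = StepSafetyScorer(obs_dim=8)
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

traj = torch.randn(50, 8)                     # one trajectory of 50 steps
label = torch.tensor(1.0)                     # 1 = the whole trajectory is safe
for _ in range(100):
    loss = bce(scorer(traj).mean(), label)    # aggregate steps -> trajectory logit
    opt.zero_grad()
    loss.backward()
    opt.step()
print("first five step scores:", scorer(traj).detach()[:5])
```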
http://arxiv.org/abs/2504.12558v1
Benchmarking LLM-based Relevance Judgment Methods
2025-04-17T01:13:21+00:00
Large Language Models (LLMs) are increasingly deployed in both academic and industry settings to automate the evaluation of information seeking systems, particularly by generating graded relevance judgments. Previous work on LLM-based relevance assessment has primarily focused on replicating graded human relevance judgments through various prompting strategies. However, there has been limited exploration of alternative assessment methods or comprehensive comparative studies. In this paper, we systematically compare multiple LLM-based relevance assessment methods, including binary relevance judgments, graded relevance assessments, pairwise preference-based methods, and two nugget-based evaluation methods~--~document-agnostic and document-dependent. In addition to a traditional comparison based on system rankings using Kendall correlations, we also examine how well LLM judgments align with human preferences, as inferred from relevance grades. We conduct extensive experiments on datasets from the TREC Deep Learning tracks of 2019, 2020, and 2021, as well as the ANTIQUE dataset, which focuses on non-factoid open-domain question answering. As part of our data release, we include relevance judgments generated by both an open-source (Llama3.2b) and a commercial (gpt-4o) model. Our goal is to \textit{reproduce} various LLM-based relevance judgment methods to provide a comprehensive comparison. All code, data, and resources are publicly available in our GitHub Repository at https://github.com/Narabzad/llm-relevance-judgement-comparison.
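For the ranking-based comparison mentioned above, Kendall's tau between two system orderings can be computed directly; the scores below are made-up placeholders, not values from the paper.

```python
from scipy.stats import kendalltau

# Hypothetical example: Kendall's tau between system rankings induced by
# human judgments and by LLM-generated judgments (scores are made up).
human_scores = [0.61, 0.58, 0.49, 0.44, 0.40]   # e.g., one metric per system
llm_scores   = [0.63, 0.55, 0.51, 0.42, 0.41]
tau, p_value = kendalltau(human_scores, llm_scores)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```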
http://arxiv.org/abs/2504.12559v1
Fine Flood Forecasts: Incorporating local data into global models through fine-tuning
2025-04-17T01:14:21+00:00
Floods are the most common form of natural disaster and accurate flood forecasting is essential for early warning systems. Previous work has shown that machine learning (ML) models are a promising way to improve flood predictions when trained on large, geographically-diverse datasets. This requirement of global training can result in a loss of ownership for national forecasters who cannot easily adapt the models to improve performance in their region, preventing ML models from being operationally deployed. Furthermore, traditional hydrology research with physics-based models suggests that local data -- which in many cases is only accessible to local agencies -- is valuable for improving model performance. To address these concerns, we demonstrate a methodology of pre-training a model on a large, global dataset and then fine-tuning that model on data from individual basins. This results in performance increases, validating our hypothesis that there is extra information to be captured in local data. In particular, we show that performance increases are most significant in watersheds that underperform during global training. We provide a roadmap for national forecasters who wish to take ownership of global models using their own data, aiming to lower the barrier to operational deployment of ML-based hydrological forecast systems.
http://arxiv.org/abs/2504.12560v1
CDF-RAG: Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation
2025-04-17T01:15:13+00:00
Retrieval-Augmented Generation (RAG) has significantly enhanced large language models (LLMs) in knowledge-intensive tasks by incorporating external knowledge retrieval. However, existing RAG frameworks primarily rely on semantic similarity and correlation-driven retrieval, limiting their ability to distinguish true causal relationships from spurious associations. This results in responses that may be factually grounded but fail to establish cause-and-effect mechanisms, leading to incomplete or misleading insights. To address this issue, we introduce Causal Dynamic Feedback for Adaptive Retrieval-Augmented Generation (CDF-RAG), a framework designed to improve causal consistency, factual accuracy, and explainability in generative reasoning. CDF-RAG iteratively refines queries, retrieves structured causal graphs, and enables multi-hop causal reasoning across interconnected knowledge sources. Additionally, it validates responses against causal pathways, ensuring logically coherent and factually grounded outputs. We evaluate CDF-RAG on four diverse datasets, demonstrating its ability to improve response accuracy and causal correctness over existing RAG-based methods. Our code is publicly available at https://github.com/elakhatibi/CDF-RAG.
http://arxiv.org/abs/2504.12561v1
Kernel Ridge Regression for Efficient Learning of High-Capacity Hopfield Networks
2025-04-17T01:17:28+00:00
Hebbian learning limits Hopfield network capacity. While kernel methods like Kernel Logistic Regression (KLR) improve performance via iterative learning, we propose Kernel Ridge Regression (KRR) as an alternative. KRR learns dual variables non-iteratively via a closed-form solution, offering significant learning speed advantages. We show that KRR matches KLR's high storage capacity (a ratio of 1.5 is demonstrated) and noise robustness (recalling patterns from around 80% corruption), while drastically reducing training time, establishing KRR as an efficient method for building high-performance associative memories.
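A minimal numpy sketch of the KRR recipe (the RBF kernel, gamma, and ridge strength are our assumptions): the dual variables come from a single linear solve rather than iterative optimization, and recall iterates the learned kernel map.

```python
import numpy as np

# Hedged sketch: RBF kernel, gamma, and ridge strength lam are our assumptions.
rng = np.random.default_rng(0)
n_neurons, n_patterns = 100, 120                  # load ratio 1.2 > Hebbian limit
P = rng.choice([-1.0, 1.0], size=(n_patterns, n_neurons))

def rbf(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Closed-form dual solution -- one linear solve, no iterative training:
lam = 1e-3
alpha = np.linalg.solve(rbf(P, P) + lam * np.eye(n_patterns), P)

def recall(x, steps=20):
    for _ in range(steps):
        x = np.sign(rbf(x[None, :], P) @ alpha)[0]  # kernelized update rule
    return x

noisy = P[0] * np.where(rng.random(n_neurons) < 0.2, -1.0, 1.0)  # flip 20% of bits
print("pattern recovered:", np.array_equal(recall(noisy), P[0]))
```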
http://arxiv.org/abs/2504.12562v1
ZeroSumEval: Scaling LLM Evaluation with Inter-Model Competition
2025-04-17T01:23:50+00:00
Evaluating the capabilities of Large Language Models (LLMs) has traditionally relied on static benchmark datasets, human assessments, or model-based evaluations - methods that often suffer from overfitting, high costs, and biases. ZeroSumEval is a novel competition-based evaluation protocol that leverages zero-sum games to assess LLMs with dynamic benchmarks that resist saturation. ZeroSumEval encompasses a diverse suite of games, including security challenges (PyJail), classic games (Chess, Liar's Dice, Poker), knowledge tests (MathQuiz), and persuasion challenges (Gandalf, Debate). These games are designed to evaluate a range of AI capabilities such as strategic reasoning, planning, knowledge application, and creativity. Building upon recent studies that highlight the effectiveness of game-based evaluations for LLMs, ZeroSumEval enhances these approaches by providing a standardized and extensible framework. To demonstrate this, we conduct extensive experiments with >7000 simulations across 7 games and 13 models. Our results show that while frontier models from the GPT and Claude families can play common games and answer questions, they struggle to play games that require creating novel and challenging questions. We also observe that models cannot reliably jailbreak each other and fail generally at tasks requiring creativity. We release our code at https://github.com/facebookresearch/ZeroSumEval.
http://arxiv.org/abs/2504.12563v1
MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation
2025-04-17T01:25:15+00:00
Recent smaller language models such as Phi-3.5 and Phi-4 rely on synthetic data generated using larger language models. Questions remain about leveraging synthetic data for other use cases, such as adapting LLMs to specific domains. A key limitation of synthetic data is low diversity, which negatively impacts its downstream applicability for improving other models. To address this, we propose MetaSynth, a method for generating synthetic data that enhances diversity through meta-prompting, where a language model orchestrates multiple "expert" LLM agents to collaboratively generate data. Using only 25 million tokens of synthetic data generated with MetaSynth, we successfully adapt a well-trained LLM (Mistral-7B-v0.3) to two specialized domains-Finance and Biomedicine-without compromising the capabilities of the resulting model in general tasks. In addition, we evaluate the diversity of our synthetic data using seven automated metrics, and find that it approaches the diversity of LLM pre-training corpora. Continually pre-training Mistral-7B-v0.3 with MetaSynth notably outperforms the base LLM, showing improvements of up to 4.08% in Finance and 13.75% in Biomedicine. The same model shows degraded performance when trained on data generated using a template prompt, even when the template includes prior generations and varying In-Context exemplars of real data. Our findings suggest that a few million tokens of diverse synthetic data, without mixing in any real data, are sufficient for effective domain adaptation when using MetaSynth.
http://arxiv.org/abs/2504.12564v1
The rational cuspidal subgroup of J_0(N)
2025-04-17T01:28:13+00:00
For a positive integer $N$, let $J_0(N)$ be the Jacobian of the modular curve $X_0(N)$. In this paper we completely determine the structure of the rational cuspidal subgroup of $J_0(N)$ when the largest perfect square dividing $N$ is either an odd prime power or a product of two odd prime powers. Indeed, we prove that the rational cuspidal divisor class group of $X_0(N)$ is the whole rational cuspidal subgroup of $J_0(N)$ for such an $N$, and the structure of the former group has already been determined by the first author in [14].
http://arxiv.org/abs/2504.12565v1
Enhancing Quantum Dense Coding Robustness Using Information Entropy-Based Metrics
2025-04-17T01:29:40+00:00
Superdense Coding is a cornerstone in secure quantum communication, exploiting pre-shared entanglement to encode two classical bits within a single qubit. However, noise and decoherence deteriorate entanglement quality, restricting both fidelity and channel capacity in practical settings. Traditional methods, such as error correcting codes or entanglement distillation, are generally inadequate for dynamically varying noise conditions. Moreover, reliance on fidelity alone may fail to capture more subtle noise effects. This work introduces an adaptive protocol that integrates the five-qubit perfect code with a novel global adaptive purification that avoids discarding entangled pairs. By monitoring two information entropy-based metrics, quantum discord (QD) and entanglement of formation (EoF) from pilot pairs, we dynamically tune a global unitary to counteract noise. Our simulations, under both amplitude and phase damping, indicate that this integrated strategy could significantly enhance superdense coding robustness while preserving high throughput, thereby offering a scalable pathway toward a high-capacity quantum internet.
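One of the monitored metrics, entanglement of formation, is computable in closed form for two qubits via Wootters' concurrence; the sketch below evaluates it for a Bell pair mixed with white noise (the noise model is our illustrative choice, not the paper's channel).

```python
import numpy as np

# Two-qubit entanglement of formation via Wootters' concurrence formula.
sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def eof(rho):
    C = concurrence(rho)
    if C == 0.0:
        return 0.0
    x = (1 + np.sqrt(1 - C**2)) / 2
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

bell = np.zeros((4, 4))
bell[[0, 0, 3, 3], [0, 3, 0, 3]] = 0.5   # |Phi+><Phi+|
p = 0.2                                  # white-noise mixing (illustrative channel)
rho = (1 - p) * bell + p * np.eye(4) / 4
print("EoF of the noisy pair:", eof(rho))
```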
http://arxiv.org/abs/2504.12566v1
The Automorphism Group of the Finitary Power Monoid of the Integers under Addition
2025-04-17T01:32:34+00:00
Endowed with the binary operation of set addition carried over from the integers, the family $\mathcal P_{\mathrm{fin}}(\mathbb Z) $ of all non-empty finite subsets of $\mathbb Z$ forms a monoid whose neutral element is the singleton $\{0\}$. Building upon recent work by Tringali and Yan, we determine the automorphisms of $\mathcal P_{\mathrm{fin}}(\mathbb Z)$. In particular, we find that the automorphism group of $\mathcal P_{\mathrm{fin}}(\mathbb Z)$ is isomorphic to the direct product of a cyclic group of order two by the infinite dihedral group.
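For concreteness, the monoid operation in question is Minkowski-style set addition; a tiny sketch:

```python
# The monoid operation on P_fin(Z): set addition A + B = {a + b}, with
# neutral element {0}.
def set_add(A, B):
    return {a + b for a in A for b in B}

A, B = {0, 1, 3}, {-2, 5}
print(set_add(A, B))             # {-2, -1, 1, 5, 6, 8}
print(set_add(A, {0}) == A)      # True: {0} is the neutral element
```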
http://arxiv.org/abs/2504.12567v1
The existence of explicit symplectic integrators for general nonseparable Hamiltonian systems
2025-04-17T01:32:43+00:00
The existence of explicit symplectic integrators for general nonseparable Hamiltonian systems is an open and important problem in both numerical analysis and computing in science and engineering, as explicit integrators are usually more efficient than the implicit integrators of the same order of accuracy. Up to now, all responses to this problem have been negative. That is, there exist explicit symplectic integrators only for some special nonseparable Hamiltonian systems, whereas a universal construction of explicit symplectic integrators for general nonseparable Hamiltonian systems has not yet been sufficiently studied. In this paper, we present a constructive proof for the existence of explicit symplectic integrators for general nonseparable Hamiltonian systems via finding explicit symplectic mappings under which the special submanifold of the extended phase space is invariant. It turns out that the proposed explicit integrators are symplectic in both the extended phase space and the original phase space. Moreover, on the basis of the global modified Hamiltonians of the proposed integrators, the backward error analysis is made via a parameter relaxation and restriction technique to show the linear growth of global errors and the near-preservation of first integrals. In particular, the effective estimated time interval is nearly the same as classical implicit symplectic integrators when applied to (near-) integrable Hamiltonian systems. Numerical experiments with a completely integrable nonseparable Hamiltonian and a nonintegrable nonseparable Hamiltonian illustrate the good long-term behavior and high efficiency of the explicit symplectic integrators proposed and analyzed in this paper.
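For background, the best-known prior construction in this direction is Tao's (2016) extended phase-space method, which is explicit and symplectic in the extended space; the sketch below implements one first-order step of that construction (not this paper's new integrators) for the illustrative nonseparable Hamiltonian $H(q,p) = (q^2+1)(p^2+1)/2$, which is our choice of test case.

```python
import numpy as np

# One first-order step of Tao's (2016) extended phase-space construction,
# shown for the illustrative nonseparable H(q,p) = (q^2+1)(p^2+1)/2.
# This is background for the abstract above, not the paper's new method.
def Hq(q, p):                      # dH/dq
    return q * (p**2 + 1)

def Hp(q, p):                      # dH/dp
    return p * (q**2 + 1)

def step(q, p, x, y, dt, omega=20.0):
    # exact flow of H(q,y): q, y frozen; p and the copy x move
    p, x = p - dt * Hq(q, y), x + dt * Hp(q, y)
    # exact flow of H(x,p): x, p frozen; q and the copy y move
    q, y = q + dt * Hp(x, p), y - dt * Hq(x, p)
    # exact flow of the binding term omega*(|q-x|^2 + |p-y|^2)/2:
    # (q-x, p-y) rotates at rate 2*omega; (q+x, p+y) is unchanged
    c, s = np.cos(2 * omega * dt), np.sin(2 * omega * dt)
    dq, dp = q - x, p - y
    dq, dp = c * dq + s * dp, -s * dq + c * dp
    q, x = (q + x + dq) / 2, (q + x - dq) / 2
    p, y = (p + y + dp) / 2, (p + y - dp) / 2
    return q, p, x, y

q = p = 0.5
x, y = q, p                        # start on the submanifold q = x, p = y
H0 = (q**2 + 1) * (p**2 + 1) / 2
for _ in range(10_000):
    q, p, x, y = step(q, p, x, y, dt=1e-3)
print("energy drift after t = 10:", (q**2 + 1) * (p**2 + 1) / 2 - H0)
```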
http://arxiv.org/abs/2504.12568v1
Evolutionary Policy Optimization
2025-04-17T01:33:06+00:00
A key challenge in reinforcement learning (RL) is managing the exploration-exploitation trade-off without sacrificing sample efficiency. Policy gradient (PG) methods excel in exploitation through fine-grained, gradient-based optimization but often struggle with exploration due to their focus on local search. In contrast, evolutionary computation (EC) methods excel in global exploration, but lack mechanisms for exploitation. To address these limitations, this paper proposes Evolutionary Policy Optimization (EPO), a hybrid algorithm that integrates neuroevolution with policy gradient methods for policy optimization. EPO leverages the exploration capabilities of EC and the exploitation strengths of PG, offering an efficient solution to the exploration-exploitation dilemma in RL. EPO is evaluated on the Atari Pong and Breakout benchmarks. Experimental results show that EPO improves both policy quality and sample efficiency compared to standard PG and EC methods, making it effective for tasks that require both exploration and local optimization.
http://arxiv.org/abs/2504.12569v1
The Others: Naturally Isolating Out-of-Distribution Samples for Robust Open-Set Semi-Supervised Learning
2025-04-17T01:37:53+00:00
Open-Set Semi-Supervised Learning (OSSL) tackles the practical challenge of learning from unlabeled data that may include both in-distribution (ID) and unknown out-of-distribution (OOD) classes. However, existing OSSL methods form suboptimal feature spaces by either excluding OOD samples, interfering with them, or overtrusting their information during training. In this work, we introduce MagMatch, a novel framework that naturally isolates OOD samples through a prototype-based contrastive learning paradigm. Unlike conventional methods, MagMatch does not assign any prototypes to OOD samples; instead, it selectively aligns ID samples with class prototypes using an ID-Selective Magnetic (ISM) module, while allowing OOD samples - the "others" - to remain unaligned in the feature space. To support this process, we propose Selective Magnetic Alignment (SMA) loss for unlabeled data, which dynamically adjusts alignment based on sample confidence. Extensive experiments on diverse datasets demonstrate that MagMatch significantly outperforms existing methods in both closed-set classification accuracy and OOD detection AUROC, especially in generalizing to unseen OOD data.
http://arxiv.org/abs/2504.12570v1
Long-Lived Quasinormal Modes of Brane-Localized Reissner-Nordström--de Sitter Black Holes
2025-04-17T01:38:18+00:00
We study the quasinormal modes of a massive scalar field propagating on the Reissner-Nordstr\"om--de Sitter (RNdS) black hole background on a 3+1-dimensional brane embedded in a higher-dimensional world. Using the WKB method supplemented with Pad\'e approximants and validated by time-domain integration via the Prony method, we compute the dominant quasinormal frequencies for a wide range of black hole and field parameters. We show that the presence of the cosmological constant, black hole charge, and bulk dimensionality significantly affect the oscillation frequencies and damping rates of the scalar perturbations. In particular, we observe the emergence of long-lived modes and slowly decaying oscillatory tails in the regime of large field mass. The results demonstrate good agreement between the frequency- and time-domain methods, reinforcing the reliability of the semi-analytic approach in this context.
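The Prony step used for validation can be illustrated on synthetic data: fit a linear recurrence to the time series and read complex frequencies off its characteristic roots. The signal, noise-free setting, and prediction order below are our choices, not the paper's.

```python
import numpy as np

# Prony-type extraction of a complex frequency from a synthetic damped
# sinusoid (signal and order p are illustrative assumptions).
dt = 0.1
t = np.arange(0, 60, dt)
omega_true = 0.7 - 0.05j                      # Re: oscillation, Im: damping
x = np.real(np.exp(-1j * omega_true * t))

p = 4                                         # linear-prediction order
A = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
a, *_ = np.linalg.lstsq(A, x[p:], rcond=None) # fit x[n] = sum_k a_k x[n-k]
roots = np.roots(np.r_[1.0, -a])              # characteristic roots z = e^{-i w dt}
omegas = 1j * np.log(roots) / dt
print(omegas[np.argmin(np.abs(omegas - omega_true))])   # ~ 0.7 - 0.05j
```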
http://arxiv.org/abs/2504.12571v1
AI for CSI Prediction in 5G-Advanced and Beyond
2025-04-17T01:39:19+00:00
Artificial intelligence (AI) is pivotal in advancing fifth-generation (5G)-Advanced and sixth-generation systems, capturing substantial research interest. Both the 3rd Generation Partnership Project (3GPP) and leading corporations champion AI's standardization in wireless communication. This piece delves into AI's role in channel state information (CSI) prediction, a sub-use case acknowledged in 5G-Advanced by the 3GPP. We offer an exhaustive survey of AI-driven CSI prediction, highlighting crucial elements like accuracy, generalization, and complexity. Further, we touch on the practical side of model management, encompassing training, monitoring, and data gathering. Moreover, we explore prospects for CSI prediction in future wireless communication systems, entailing integrated design with feedback, multitasking synergy, and predictions in rapid scenarios. This article seeks to be a touchstone for subsequent research in this burgeoning domain.
http://arxiv.org/abs/2504.12572v1
Observation of the Axion quasiparticle in 2D MnBi$_2$Te$_4$
2025-04-17T01:39:53+00:00
In 1978, Wilczek and Weinberg theoretically discovered a new boson, the Axion, which is the coherent oscillation of the $\theta$ field in QCD. Its existence can solve multiple fundamental questions including the strong CP problem of QCD and the dark matter. However, its detection is challenging because it has almost no interaction with existing particles. A similar $\theta$ has been introduced in condensed matter and so far studied as a static, quantized value to characterize topology of materials. But the coherent oscillation of $\theta$ in condensed matter is proposed to lead to new physics directly analogous to the high-energy Axion particle, the dynamical Axion quasiparticle (DAQ). In this paper, we present the direct observation of the DAQ. By combining 2D electronic device with ultrafast pump-probe optics, we manage to measure the magnetoelectric coupling $\theta$ ($\theta\propto\alpha$) of 2D MnBi$_2$Te$_4$ with sub-picosecond time-resolution. This allows us to directly observe the DAQ by seeing a coherent oscillation of $\theta$ at ~44 GHz in real time, which is uniquely induced by the out-of-phase antiferromagnetic magnon. Interestingly, in 2D MnBi$_2$Te$_4$, the DAQ arises from the magnon-induced coherent modulation of Berry curvature. Such ultrafast control of quantum wavefunction can be generalized to manipulate Berry curvature and quantum metric of other materials on ultrafast timescales. Moreover, the DAQ enables novel quantum physics such as Axion polariton and electric control of ultrafast spin polarization, implying applications in unconventional light-matter interaction and coherent antiferromagnetic spintronics. Beyond condensed matter, the DAQ can serve as a detector of the dark matter Axion particle. We estimate the detection frequency range and sensitivity in the critically-lacking meV regime, contributing to one of the most challenging questions in fundamental physics.
http://arxiv.org/abs/2504.12573v1
Parsimonious Dataset Construction for Laparoscopic Cholecystectomy Structure Segmentation
2025-04-17T01:40:30+00:00
Labeling has always been expensive in the medical context, which has hindered related deep learning applications. Our work introduces active learning into surgical video frame selection to construct a high-quality, affordable Laparoscopic Cholecystectomy dataset for semantic segmentation. Active learning allows the Deep Neural Networks (DNNs) learning pipeline to include the dataset construction workflow: DNNs trained on the existing dataset identify the most informative data from the newly collected data. At the same time, DNNs' performance and generalization ability improve over time as the newly selected and annotated data are included in the training data. We assessed different data informativeness measurements and found that deep feature distances select the most informative data in this task. Our experiments show that with half of the data selected by active learning, the DNNs achieve almost the same performance, with 0.4349 mean Intersection over Union (mIoU), compared to the same DNNs trained on the full dataset (0.4374 mIoU) on the critical anatomies and surgical instruments.
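A hedged sketch of the deep-feature-distance selection criterion (the greedy k-center-style rule and the random stand-in embeddings are our illustrative assumptions, not the paper's exact measurement):

```python
import numpy as np

# Greedily pick unlabeled frames whose deep features are farthest from
# everything already labeled (a k-center-style heuristic; our assumption).
def select_informative(labeled_feats, unlabeled_feats, budget):
    chosen = []
    # distance from each unlabeled sample to its nearest labeled sample
    d = np.linalg.norm(unlabeled_feats[:, None] - labeled_feats[None], axis=-1).min(1)
    for _ in range(budget):
        i = int(d.argmax())                     # most "novel" sample
        chosen.append(i)
        d_new = np.linalg.norm(unlabeled_feats - unlabeled_feats[i], axis=-1)
        d = np.minimum(d, d_new)                # update nearest-selected distance
    return chosen

rng = np.random.default_rng(1)
labeled = rng.normal(size=(50, 128))            # stand-ins for DNN embeddings
unlabeled = rng.normal(size=(500, 128))
print(select_informative(labeled, unlabeled, budget=5))
```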
http://arxiv.org/abs/2504.12574v1
Prompt-Driven and Training-Free Forgetting Approach and Dataset for Large Language Models
2025-04-17T01:44:57+00:00
The widespread adoption of diffusion models in image generation has increased the demand for privacy-compliant unlearning. However, due to the high-dimensional nature and complex feature representations of diffusion models, achieving selective unlearning remains challenging, as existing methods struggle to remove sensitive information while preserving the consistency of non-sensitive regions. To address this, we propose an Automatic Dataset Creation Framework based on prompt-based layered editing and training-free local feature removal, constructing the ForgetMe dataset and introducing the Entangled evaluation metric. The Entangled metric quantifies unlearning effectiveness by assessing the similarity and consistency between the target and background regions and supports both paired (Entangled-D) and unpaired (Entangled-S) image data, enabling unsupervised evaluation. The ForgetMe dataset encompasses a diverse set of real and synthetic scenarios, including CUB-200-2011 (Birds), Stanford-Dogs, ImageNet, and a synthetic cat dataset. We apply LoRA fine-tuning on Stable Diffusion to achieve selective unlearning on this dataset and validate the effectiveness of both the ForgetMe dataset and the Entangled metric, establishing them as benchmarks for selective unlearning. Our work provides a scalable and adaptable solution for advancing privacy-preserving generative AI.
http://arxiv.org/abs/2504.12575v1
Featuremetric benchmarking: Quantum computer benchmarks based on circuit features
2025-04-17T01:49:02+00:00
Benchmarks that concisely summarize the performance of many-qubit quantum computers are essential for measuring progress towards the goal of useful quantum computation. In this work, we present a benchmarking framework that is based on quantifying how a quantum computer's performance on quantum circuits varies as a function of features of those circuits, such as circuit depth, width, two-qubit gate density, problem input size, or algorithmic depth. Our featuremetric benchmarking framework generalizes volumetric benchmarking -- a widely-used methodology that quantifies performance versus circuit width and depth -- and we show that it enables richer and more faithful models of quantum computer performance. We demonstrate featuremetric benchmarking with example benchmarks run on IBM Q and IonQ systems of up to 27 qubits, and we show how to produce performance summaries from the data using Gaussian process regression. Our data analysis methods are also of interest in the special case of volumetric benchmarking, as they enable the creation of intuitive two-dimensional capability regions using data from few circuits.
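A minimal sketch of the Gaussian-process summary step (the feature pair, kernel, and synthetic success-rate surface are our assumptions): fit success probability as a smooth function of circuit features and query it at unseen feature combinations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hedged sketch: success probability as a smooth function of (width, depth).
rng = np.random.default_rng(0)
X = rng.uniform([2, 2], [27, 60], size=(80, 2))        # (width, depth) pairs
y = np.exp(-0.002 * X[:, 0] * X[:, 1])                  # synthetic success rates
y += rng.normal(scale=0.02, size=len(y))

gp = GaussianProcessRegressor(kernel=RBF([5.0, 10.0]) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict([[10, 30]], return_std=True)    # capability estimate
print(f"predicted success at width=10, depth=30: {mean[0]:.3f} +/- {std[0]:.3f}")
```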
http://arxiv.org/abs/2504.12576v1
CM3AE: A Unified RGB Frame and Event-Voxel/-Frame Pre-training Framework
2025-04-17T01:49:46+00:00
Event cameras have attracted increasing attention in recent years due to their advantages in high dynamic range, high temporal resolution, low power consumption, and low latency. Some researchers have begun exploring pre-training directly on event data. Nevertheless, these efforts often fail to establish strong connections with RGB frames, limiting their applicability in multi-modal fusion scenarios. To address these issues, we propose a novel CM3AE pre-training framework for the RGB-Event perception. This framework accepts multi-modalities/views of data as input, including RGB images, event images, and event voxels, providing robust support for both event-based and RGB-event fusion based downstream tasks. Specifically, we design a multi-modal fusion reconstruction module that reconstructs the original image from fused multi-modal features, explicitly enhancing the model's ability to aggregate cross-modal complementary information. Additionally, we employ a multi-modal contrastive learning strategy to align cross-modal feature representations in a shared latent space, which effectively enhances the model's capability for multi-modal understanding and capturing global dependencies. We construct a large-scale dataset containing 2,535,759 RGB-Event data pairs for the pre-training. Extensive experiments on five downstream tasks fully demonstrated the effectiveness of CM3AE. Source code and pre-trained models will be released on https://github.com/Event-AHU/CM3AE.
http://arxiv.org/abs/2504.12577v1
Local Data Quantity-Aware Weighted Averaging for Federated Learning with Dishonest Clients
2025-04-17T01:50:24+00:00
Federated learning (FL) enables collaborative training of deep learning models without requiring data to leave local clients, thereby preserving client privacy. The aggregation process on the server plays a critical role in the performance of the resulting FL model. The most commonly used aggregation method is weighted averaging based on the amount of data from each client, which is thought to reflect each client's contribution. However, this method is prone to model bias, as dishonest clients might report inaccurate training data volumes to the server, which is hard to verify. To address this issue, we propose a novel secure \underline{Fed}erated \underline{D}ata q\underline{u}antity-\underline{a}ware weighted averaging method (FedDua). It enables FL servers to accurately predict the amount of training data from each client based on the local model gradients they upload. Furthermore, it can be seamlessly integrated into any FL algorithms that involve server-side model aggregation. Extensive experiments on three benchmarking datasets demonstrate that FedDua improves the global model performance by an average of 3.17% compared to four popular FL aggregation methods in the presence of inaccurate client data volume declarations.
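For reference, the baseline aggregation rule being critiqued, weighted averaging by self-reported data volume, looks as follows; FedDua's gradient-based volume prediction is paper-specific and not reproduced here.

```python
import numpy as np

# Baseline FedAvg-style aggregation: weight each client's model by its
# self-reported data volume. FedDua instead predicts true volumes from
# uploaded gradients; that step is not reproduced in this sketch.
def weighted_average(client_weights, reported_sizes):
    sizes = np.asarray(reported_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
print(weighted_average(clients, reported_sizes=[100, 50, 50]))
# A dishonest client inflating its reported size skews the global model:
print(weighted_average(clients, reported_sizes=[100, 500, 50]))
```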
http://arxiv.org/abs/2504.12578v1
Sub-Scalp Brain-Computer Interface Device Design and Fabrication
2025-04-17T01:51:16+00:00
Current brain-computer interfaces (BCI) face limitations in signal acquisition. While sub-scalp EEG offers a potential solution, existing devices prioritize chronic seizure monitoring and lack features suited for BCI applications. This work addresses this gap by outlining key specifications for sub-scalp BCI devices, focusing on channel count, sampling rate, power efficiency, and form factor. We present the Set-And-Forget EEG (SAFE) system, a custom-built amplifier and wireless transmitter meeting these criteria. This compact (12x12 mm), six-channel device offers 1024 Hz sampling and Bluetooth Low Energy data transmission. Validation using generated sinusoids and electrocorticography recordings of visual evoked potentials in sheep models demonstrated low noise recording. Future animal studies will assess sub-scalp EEG signal quality for BCI applications. This data lays the groundwork for human trials, ultimately paving the way for chronic, in-home BCIs that empower individuals with physical disabilities.
http://arxiv.org/abs/2504.12579v1
Provable Secure Steganography Based on Adaptive Dynamic Sampling
2025-04-17T01:52:09+00:00
The security of private communication is increasingly at risk due to widespread surveillance. Steganography, a technique for embedding secret messages within innocuous carriers, enables covert communication over monitored channels. Provably Secure Steganography (PSS) is the state of the art for making stego carriers indistinguishable from normal ones by ensuring computational indistinguishability between stego and cover distributions. However, current PSS methods often require explicit access to the generative model's distribution for both sender and receiver, limiting their practicality in black-box scenarios. In this paper, we propose a provably secure steganography scheme that does not require access to explicit model distributions for both sender and receiver. Our method incorporates a dynamic sampling strategy, enabling generative models to embed secret messages within multiple sampling choices without disrupting the normal generation process of the model. Extensive evaluations on three real-world datasets and three LLMs demonstrate that our black-box method is comparable to existing white-box steganography methods in terms of efficiency and capacity, while eliminating the degradation of steganography in model-generated outputs.
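A toy illustration of embedding bits in sampling choices (emphatically not the paper's provably secure scheme; the threshold rule and stand-in distribution are our assumptions): when two tokens are nearly equiprobable, a message bit picks between them; otherwise generation samples normally.

```python
import numpy as np

# Toy bit-embedding in sampling choices -- NOT the paper's provably secure
# scheme; the threshold and stand-in distribution are our assumptions.
rng = np.random.default_rng(0)

def embed_step(probs, bits, threshold=0.9):
    order = np.argsort(probs)[::-1]
    if bits and probs[order[1]] / probs[order[0]] > threshold:
        return order[bits.pop(0)]             # the secret bit picks the token
    return rng.choice(len(probs), p=probs)    # otherwise, ordinary sampling

message = [1, 0, 1, 1, 0]
vocab_probs = np.array([0.35, 0.34, 0.21, 0.10])  # stand-in model distribution
tokens = [int(embed_step(vocab_probs, message)) for _ in range(12)]
print(tokens, "| bits left:", len(message))
```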
http://arxiv.org/abs/2504.12580v1
ChemKANs for Combustion Chemistry Modeling and Acceleration
2025-04-17T01:53:28+00:00
Efficient chemical kinetic model inference and application for combustion problems is challenging due to large ODE systems and widely separated time scales. Machine learning techniques have been proposed to streamline these models, though strong nonlinearity and numerical stiffness combined with noisy data sources make their application challenging. The recently developed Kolmogorov-Arnold Networks (KANs) and KAN ordinary differential equations (KAN-ODEs) have been demonstrated as powerful tools for scientific applications thanks to their rapid neural scaling, improved interpretability, and smooth activation functions. Here, we develop ChemKANs by augmenting the KAN-ODE framework with physical knowledge of the flow of information through the relevant kinetic and thermodynamic laws, as well as an elemental conservation loss term. This novel framework encodes strong inductive bias that enables streamlined training and higher accuracy predictions, while facilitating parameter sparsity through full sharing of information across all inputs and outputs. In a model inference investigation, we find that ChemKANs exhibit no overfitting or model degradation when tasked with extracting predictive models from data that is both sparse and noisy, a task that a standard DeepONet struggles to accomplish. Next, we find that a remarkably parameter-lean ChemKAN (only 344 parameters) can accurately represent hydrogen combustion chemistry, providing a 2x acceleration over the detailed chemistry in a solver that is generalizable to larger-scale turbulent flow simulations. These demonstrations indicate potential for ChemKANs in combustion physics and chemical kinetics, and demonstrate the scalability of generic KAN-ODEs in significantly larger and more numerically challenging problems than previously studied.
http://arxiv.org/abs/2504.12581v1
Modeling Coupled Epidemic-Information Dynamics via Reaction-Diffusion Processes on Multiplex Networks with Media and Mobility Effects
2025-04-17T01:53:57+00:00
While most existing epidemic models focus on the influence of isolated factors, infectious disease transmission is inherently shaped by the complex interplay of multiple interacting elements. To better capture real-world dynamics, it is essential to develop epidemic models that incorporate diverse, realistic factors. In this study, we propose a coupled disease-information spreading model on multiplex networks that simultaneously accounts for three critical dimensions: media influence, higher-order interactions, and population mobility. This integrated framework enables a systematic analysis of synergistic spreading mechanisms under practical constraints and facilitates the exploration of effective epidemic containment strategies. We employ a microscopic Markov chain approach (MMCA) to derive the coupled dynamical equations and identify epidemic thresholds, which are then validated through extensive Monte Carlo (MC) simulations. Our results show that both mass media dissemination and higher-order network structures contribute to suppressing disease transmission by enhancing public awareness. However, the containment effect of higher-order interactions weakens as the order of simplices increases. We also explore the influence of subpopulation characteristics, revealing that increasing inter-subpopulation connectivity in a connected metapopulation network leads to lower disease prevalence. Furthermore, guiding individuals to migrate toward less accessible or more isolated subpopulations is shown to effectively mitigate epidemic spread. These findings offer valuable insights for designing targeted and adaptive intervention strategies in complex epidemic settings.
http://arxiv.org/abs/2504.12582v1
Fair Conformal Prediction for Incomplete Covariate Data
2025-04-17T01:54:42+00:00
Conformal prediction provides a distribution-free framework for uncertainty quantification. This study explores the application of conformal prediction in scenarios where covariates are missing, which introduces significant challenges for uncertainty quantification. We establish that marginal validity holds for imputed datasets across various mechanisms of missing data and most imputation methods. Building on the framework of nonexchangeable conformal prediction, we demonstrate that coverage guarantees depend on the mask. To address this, we propose a nonexchangeable conformal prediction method for missing covariates that satisfies both marginal and mask-conditional validity. However, as this method does not ensure asymptotic conditional validity, we further introduce a localized conformal prediction approach that employs a novel score function based on kernel smoothing. This method achieves marginal, mask-conditional, and asymptotic conditional validity under certain assumptions. Extensive simulation studies and real-data analysis demonstrate the advantages of these proposed methods.
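A toy split-conformal baseline under simple mean imputation is sketched below; the paper's nonexchangeable, mask-conditional, and kernel-localized constructions go well beyond this, and `fit` is any user-supplied regression routine (e.g. a scikit-learn estimator factory):

```python
import numpy as np

# Split conformal prediction with NaN-marked missing covariates filled by
# column means learned on the training split (illustrative only).
def conformal_interval(X_tr, y_tr, X_cal, y_cal, x_new, fit, alpha=0.1):
    col_mean = np.nanmean(X_tr, axis=0)
    imp = lambda X: np.where(np.isnan(X), col_mean, X)
    model = fit(imp(X_tr), y_tr)
    scores = np.abs(y_cal - model.predict(imp(X_cal)))   # conformity scores
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = model.predict(imp(x_new[None, :]))[0]
    return pred - q, pred + q                            # marginal coverage
```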
http://arxiv.org/abs/2504.12583v1
Total positivity of Hadamard product of dual Jacobi--Trudi matrices
2025-04-17T01:54:50+00:00
In 1992, Wagner proved that the Hadamard product of two totally positive lower triangular Toeplitz matrices is totally positive. In this work, we strengthen this result by establishing total monomial positivity for the Hadamard product of Jacobi--Trudi matrices. In particular, we resolve a conjecture of Sokal concerning the Hadamard square of Jacobi--Trudi matrices. Moreover, we provide a manifestly positive Schur expansion for the Hadamard square of Jacobi--Trudi matrices indexed by ribbons. In addition, we construct a corresponding representation, offering a representation-theoretic proof of the Schur positivity.
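The triangular-Toeplitz core of Wagner's statement is easy to spot-check numerically; below, total nonnegativity (all minors $\ge 0$) is verified by brute force for the Hadamard product of two small matrices built from polynomials with negative real roots (toy stand-ins, not actual Jacobi--Trudi matrices):

```python
import numpy as np
from itertools import combinations

def is_totally_nonneg(M, tol=1e-9):
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(r, c)]) >= -tol
               for k in range(1, n + 1)
               for r in combinations(range(n), k)
               for c in combinations(range(n), k))

def toeplitz_lower(h, n):                       # lower-triangular Toeplitz
    return np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
                      for j in range(n)] for i in range(n)])

A = toeplitz_lower([1, 3, 3, 1], 4)             # coefficients of (1+x)^3
B = toeplitz_lower([1, 3, 2], 4)                # coefficients of (1+x)(1+2x)
print(is_totally_nonneg(A), is_totally_nonneg(B), is_totally_nonneg(A * B))
```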
http://arxiv.org/abs/2504.12584v1
JADES NIRSpec Spectroscopy of GN-z11: Evidence for Wolf-Rayet contribution to stellar populations at 430 Myr after Big Bang?
2025-04-17T01:58:01+00:00
We investigate the unusual emission line luminosity ratios observed in the JADES NIRSpec spectroscopy of GN-z11, which reveal exceptionally strong emission lines and a significant detection of the rarely observed N III] $\lambda1748-1753$\r{A} multiplet. These features suggest an elevated N/O abundance, challenging existing models of stellar populations and nebular emission. To assess whether Wolf-Rayet (WR) stars can account for the observed line ratios, we construct a suite of stellar and nebular models incorporating high-resolution stellar spectral libraries, enabling a more accurate treatment of WR evolution and its influence on the ionising radiation field. We find that the inclusion of WR stars is essential for reproducing the observed position of GN-z11 in the C III]/He II versus C III]/C IV diagnostic plane, resolving discrepancies from previous studies. The model-derived metallicity (0.07$\lesssim$Z/Z$_{\odot}\lesssim$0.15), ionisation parameter ($\log U \approx -2$) and stellar ages are consistent with the literature estimates. However, our models under-predict the N III]/O III] ratio, suggesting that WR stars alone cannot fully explain the nitrogen enrichment; additional mechanisms, such as rapid chemical enrichment in a young, metal-poor environment, may be necessary to explain the nitrogen excess. While our models successfully reproduce most observed line ratios, further refinements to the models are needed to fully characterise the stellar populations and the enrichment processes of high-redshift galaxies like GN-z11.
http://arxiv.org/abs/2504.12585v1
Identifying and Mitigating the Influence of the Prior Distribution in Large Language Models
2025-04-17T02:00:53+00:00
Large language models (LLMs) sometimes fail to respond appropriately to deterministic tasks -- such as counting or forming acronyms -- because the implicit prior distribution they have learned over sequences of tokens influences their responses. In this work, we show that, in at least some cases, LLMs actually compute the information needed to perform these tasks correctly, and we identify some interventions that can allow them to access this information to improve their performance. First, we show that simply prompting the language model to not rely on its prior knowledge leads to dramatic improvements in prior-dominated tasks. We then use mechanistic interpretability techniques to localize the prior within the LLM and manipulate the extent to which that prior influences its responses. Specifically, we show that it is possible to identify layers of the underlying neural network that correlate with the prior probability of a response and that lightweight finetuning of these layers with basic prompts on prior-dominated tasks achieves high performance on held-out answers. These results suggest that the information required to produce a correct response is contained within the representations of the problems formed by the models. Furthermore, we show that this finetuning is significantly more effective for prior-dominated tasks, and that the error after finetuning is no longer correlated with the prior. Our results suggest that it may be possible to define effective methods for manipulating the extent to which LLMs rely upon their priors in solving problems, potentially increasing their performance in settings where LLMs hallucinate for reasons related to the prior probability of token sequences.
http://arxiv.org/abs/2504.12586v1
Quantum Search on Bipartite Multigraphs
2025-04-17T02:05:16+00:00
Quantum walks provide a powerful framework for achieving algorithmic speedup in quantum computing. This paper presents a quantum search algorithm for 2-tessellable graphs, a generalization of bipartite graphs, achieving a quadratic speedup over classical Markov chain-based search methods. Our approach employs an adapted version of the Szegedy quantum walk model (adapted SzQW), which takes place on bipartite graphs, and an adapted version of Staggered Quantum Walks (Adapted StQW), which takes place on 2-tessellable graphs, with the goal of efficiently finding a marked vertex by querying an oracle. The Ambainis, Gily\'en, Jeffery, and Kokainis' algorithm (AGJK), which provides a quadratic speedup on balanced bipartite graphs, is used as a subroutine in our algorithm. Our approach generalizes existing quantum walk techniques and offers a quadratic speedup in the number of queries needed, demonstrating the utility of our adapted quantum walk models in a broader class of graphs.
http://arxiv.org/abs/2504.12587v1
Software Engineering Principles for Fairer Systems: Experiments with GroupCART
2025-04-17T02:06:05+00:00
Discrimination-aware classification aims to make accurate predictions while satisfying fairness constraints. Traditional decision tree learners typically optimize for information gain in the target attribute alone, which can result in models that unfairly discriminate against protected social groups (e.g., gender, ethnicity). Motivated by these shortcomings, we propose GroupCART, a tree-based ensemble optimizer that avoids bias during model construction by optimizing not only for decreased entropy in the target attribute but also for increased entropy in protected attributes. Our experiments show that GroupCART achieves fairer models without data transformation and with minimal performance degradation. Furthermore, the method supports customizable weighting, offering a smooth and flexible trade-off between predictive performance and fairness based on user requirements. These results demonstrate that algorithmic bias in decision tree models can be mitigated through multi-task, fairness-aware learning. All code and datasets used in this study are available at: https://github.com/anonymous12138/groupCART.
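A sketch of the kind of multi-objective split criterion the abstract describes, assuming binary splits; `w` is the user-set fairness weight, and the exact GroupCART scoring rule may differ:

```python
import numpy as np

def entropy(labels):
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# GroupCART-style split score (a sketch of the idea in the abstract):
# reward entropy *decrease* in the target y while rewarding entropy
# *increase* (i.e. mixing) in the protected attribute a; the weight w
# gives the smooth accuracy/fairness trade-off the abstract mentions.
def split_score(y, a, mask, w=0.5):
    n = len(y)
    def child_entropy(labels):
        return sum(m.sum() / n * entropy(labels[m]) for m in (mask, ~mask))
    gain_y = entropy(y) - child_entropy(y)        # classic information gain
    gain_a = entropy(a) - child_entropy(a)        # separation of groups
    return (1 - w) * gain_y - w * gain_a          # penalize group separation
```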
http://arxiv.org/abs/2504.12588v1
Simplifying Graph Transformers
2025-04-17T02:06:50+00:00
Transformers have attained outstanding performance across various modalities, employing scaled-dot-product (SDP) attention mechanisms. Researchers have attempted to migrate Transformers to graph learning, but most advanced Graph Transformers are designed with major architectural differences, either integrating message-passing or incorporating sophisticated attention mechanisms. These complexities prevent the easy adoption of Transformer training advances. We propose three simple modifications to the plain Transformer to render it applicable to graphs without introducing major architectural distortions. Specifically, we advocate for the use of (1) simplified $L_2$ attention to measure the magnitude closeness of tokens; (2) adaptive root-mean-square normalization to preserve token magnitude information; and (3) a relative positional encoding bias with a shared encoder. Significant performance gains across a variety of graph datasets justify the effectiveness of our proposed modifications. Furthermore, empirical evaluation on the expressiveness benchmark reveals noteworthy realized expressiveness in the graph isomorphism.
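A sketch of modification (1), simplified $L_2$ attention, under the stated assumptions (single head, no masking): scores come from negative squared Euclidean distances, so attention reflects the magnitude closeness of tokens rather than their dot products:

```python
import torch

def l2_attention(q, k, v):
    # q, k: (n, d); pairwise squared distances via ||q||^2 + ||k||^2 - 2 q.k
    d2 = (q * q).sum(-1, keepdim=True) + (k * k).sum(-1) - 2 * q @ k.T
    attn = torch.softmax(-d2 / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

x = torch.randn(5, 8)
print(l2_attention(x, x, x).shape)   # torch.Size([5, 8])
```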
http://arxiv.org/abs/2504.12589v1
Efficient MAP Estimation of LLM Judgment Performance with Prior Transfer
2025-04-17T02:08:51+00:00
LLM ensembles are widely used as LLM judges. However, how to estimate their accuracy, especially in an efficient way, remains an open question. In this paper, we present a principled maximum a posteriori (MAP) framework for an economical and precise estimation of the performance of LLM ensemble judgment. We first propose a mixture of Beta-Binomial distributions to model the judgment distribution, revising from the vanilla Binomial distribution. Next, we introduce a conformal prediction-driven approach that enables adaptive stopping during iterative sampling to balance accuracy with efficiency. Furthermore, we design a prior transfer mechanism that utilizes learned distributions on open-source datasets to improve estimation on a target dataset when only scarce annotations are available. Finally, we present BetaConform, a framework that integrates our distribution assumption, adaptive stopping, and the prior transfer mechanism to deliver a theoretically guaranteed distribution estimation of LLM ensemble judgment with minimum labeled samples. BetaConform is also validated empirically. For instance, with only 10 samples from the TruthfulQA dataset, BetaConform gauges the performance of an ensembled Llama judge with an error margin as small as 3.37%.
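The Beta-Binomial backbone of the framework can be sketched in a few lines; the mixture components, the conformal stopping rule, and the transfer mechanism itself are omitted, and the pseudo-counts below simply stand in for a prior learned on a source dataset:

```python
# MAP estimate of a judge's accuracy under a transferred Beta prior.
def map_accuracy(correct, total, alpha0=8.0, beta0=2.0):
    a, b = alpha0 + correct, beta0 + (total - correct)   # posterior counts
    return (a - 1) / (a + b - 2)                         # mode of Beta(a, b)

print(map_accuracy(7, 10))   # e.g. 7 of 10 labeled judgments correct
```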
http://arxiv.org/abs/2504.12590v1
Laser flash analysis using the Cattaneo heat equation
2025-04-17T02:11:40+00:00
Thermal diffusivity of solid materials is commonly measured using laser flash analysis. This technique involves applying a heat pulse to the front surface of a small sample of the material and calculating the thermal diffusivity from the resulting increase in temperature on the back surface. Current formulas for the thermal diffusivity are based on the assumption that heat is transported within the sample according to the standard heat equation. While this assumption is valid in most practical cases, it admits the non-physical property of infinite propagation speed, that is, the heat pulse applied at the front surface is instantaneously perceived at the back surface. This paper carries out a mathematical analysis to determine the effect of replacing the standard heat equation in laser flash analysis by the Cattaneo heat equation, which exhibits finite propagation speed through the inclusion of a relaxation time in the Fourier law. The main results of the paper include (i) analytical insights into the spatiotemporal behaviour of temperature within the sample and (ii) analytical formulas for determining the thermal diffusivity and relaxation time of the sample. Numerical experiments exploring and verifying the analytical results are presented with supporting MATLAB code made publicly available.
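For intuition, a dimensionless explicit finite-difference sketch of the 1D Cattaneo (telegraph) equation $\tau T_{tt} + T_t = T_{xx}$ with insulated ends is given below; the pulse, grid, and relaxation time are illustrative, and the paper's accompanying MATLAB code is the authoritative implementation:

```python
import numpy as np

# Flash-experiment toy: a pulse deposited at the front face, temperature
# read at the back face; centred differences in space and time.
nx, dx, dt, tau = 101, 0.01, 5e-4, 0.02
T_old = np.zeros(nx); T_old[:3] = 1.0          # initial heat pulse
T = T_old.copy()
c1 = tau / dt**2 + 0.5 / dt                    # from the centred scheme
for step in range(1000):                       # integrate to t = 0.5
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T_new = (lap + 2 * tau / dt**2 * T - (tau / dt**2 - 0.5 / dt) * T_old) / c1
    T_new[0], T_new[-1] = T_new[1], T_new[-2]  # insulated boundaries
    T_old, T = T, T_new
print(T[-1])                                   # back-surface temperature rise
```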
http://arxiv.org/abs/2504.12591v1
Nonreciprocal and temperature-tunable light absorption in AlAs/ITO/GaAs Hybrid Metasurfaces
2025-04-17T02:15:04+00:00
The single-band high-efficiency light absorption of nanostructures finds extensive applications in various fields such as photothermal conversion, optical sensing, and biomedicine. In this paper, a vertically stacked nanohybrid structure is designed with aluminum arsenide (AlAs), indium tin oxide (ITO) and gallium arsenide (GaAs) stacked, and the photon absorption characteristics of this structure under near-infrared light at a single wavelength of 1240 nm are explored based on the finite difference time domain (FDTD) method. When AlAs, ITO, and GaAs are stacked and incident light enters from the GaAs side, a local light enhancement phenomenon occurs. The absorption rate can reach 91.67%, and the temperature change rate reaches 55.53%, allowing for wide-range regulation of the absorption rate by temperature. In addition, the AlAs/ITO/GaAs sandwich-type hybrid structure also exhibits obvious nonreciprocity. With the change in temperature, the absorption rate of different structural sizes varies differently. The structure can be optimized and designed according to the requirements, providing new ideas for the design of multifunctional optoelectronic devices.
http://arxiv.org/abs/2504.12592v1
Coarse-Grained Force Fields via Rotational Entropy Corrections to Free Energy Landscapes of Diffusing Molecules
2025-04-17T02:29:11+00:00
The construction of accurate interatomic potentials, and related fields of forces, from equilibrium conformational distributions of molecules is a crucial step in coarse-grained modeling. In this work we show that in order to develop accurate lab-frame force fields that preserve translational and rotational diffusion of a molecule, the observed body-fixed free energy landscape must be corrected for conformation-dependent rotational entropy to isolate the potential energy surface. We further demonstrate that even when the instantaneous effects of the correction are small, the resulting lagged correlations of the modeled force can be greatly altered and hence the correction is especially vital when parameterizing friction coefficients using modeled interatomic potentials.
http://arxiv.org/abs/2504.12593v1
Leveraging Agency in Virtual Reality to Enable Situated Learning
2025-04-17T02:38:19+00:00
Learning is an active process that is deeply tied to physical and social contexts. Yet schools traditionally place learners in a passive role and focus on decontextualizing knowledge. Situating learning in more authentic tasks and contexts typically requires taking it outside the classroom via field trips and apprenticeships, but virtual reality (VR) is a promising tool to bring more authentically situated learning experiences into classrooms. In this position paper, I discuss how one of VR's primary affordances for learning is heightening agency, and how such heightened agency can facilitate more authentically situated learning by allowing learners legitimate peripheral participation.
http://arxiv.org/abs/2504.12594v1
Meta-Dependence in Conditional Independence Testing
2025-04-17T02:41:22+00:00
Constraint-based causal discovery algorithms utilize many statistical tests for conditional independence to uncover networks of causal dependencies. These approaches to causal discovery rely on an assumed correspondence between the graphical properties of a causal structure and the conditional independence properties of observed variables, known as the causal Markov condition and faithfulness. Finite data yields an empirical distribution that is "close" to the actual distribution. Across these many possible empirical distributions, the correspondence to the graphical properties can break down for different conditional independencies, and multiple violations can occur at the same time. We study this "meta-dependence" between conditional independence properties using the following geometric intuition: each conditional independence property constrains the space of possible joint distributions to a manifold. The "meta-dependence" between conditional independences is informed by the position of these manifolds relative to the true probability distribution. We provide a simple-to-compute measure of this meta-dependence using information projections and consolidate our findings empirically using both synthetic and real-world data.
http://arxiv.org/abs/2504.12595v1
Reentrant phase transition in quasiperiodic photonic waveguides
2025-04-17T02:42:21+00:00
Anderson transition in quasiperiodic potentials and the associated mobility edges have been a central focus in quantum simulation across multidisciplinary physical platforms. While these transitions have been experimentally observed in ultracold atoms, acoustic systems, optical waveguides, and superconducting junctions, their interplay between quasiperiodic potential and long-range hopping remains unexplored experimentally. In this work, we report the observation of localization-delocalization transition induced by the hopping between the next-nearest neighboring sites using quasiperiodic photonic waveguides. Our findings demonstrate that increasing the next-nearest hopping strength induces a reentrant phase transition, where the system transitions from an initially extended phase into a localized phase before eventually returning to an extended phase. This remarkable interplay between hopping and quasiperiodic potential in the lattice models provides crucial insights into the mechanism of Anderson transition. Furthermore, our numerical simulation reveals that this phase transition exhibits a critical exponent of $\nu \simeq 1/3$, which is experimentally observable for system sizes $L\sim10^3$ - $10^4$. These results establish a framework for direct observation of the Anderson transition and precise determination of its critical exponents, which can significantly advance our understanding of localization physics in quasiperiodic systems.
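A toy diagnostic in the same spirit: an Aubry-Andre chain with next-nearest-neighbour (NNN) hopping, where the mean inverse participation ratio (IPR) of eigenstates tracks localization (order one when localized, roughly 1/L when extended). The Hamiltonian below is an assumed minimal lattice model, not the paper's waveguide system:

```python
import numpy as np

L, t1, lam = 610, 1.0, 2.5                 # lam > 2 localizes the pure AA chain
phi = (np.sqrt(5) - 1) / 2                 # incommensurate modulation
onsite = lam * np.cos(2 * np.pi * phi * np.arange(L))
for t2 in (0.0, 0.5, 1.5):                 # increasing NNN hopping strength
    H = np.diag(onsite)
    H += np.diag([t1] * (L - 1), 1) + np.diag([t1] * (L - 1), -1)
    H += np.diag([t2] * (L - 2), 2) + np.diag([t2] * (L - 2), -2)
    vals, vecs = np.linalg.eigh(H)
    print(f"t2={t2}: mean IPR = {(vecs ** 4).sum(axis=0).mean():.4f}")
```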
http://arxiv.org/abs/2504.12596v1
Higher-Order Mean-Motion Resonances Can Form in Type-I Disk Migration
2025-04-17T02:46:26+00:00
Type-I disk migration can form a chain of planets engaged in first-order mean-motion resonances (MMRs) parked at the disk inner edge. However, while second- or even third-order resonances were deemed unlikely due to their weaker strength, they have been observed in some planetary systems (e.g. TOI-178 bc: 5:3, TOI-1136 ef: 7:5, TRAPPIST-1 bcd: 8:5-5:3). We performed $>6,000$ Type-I simulations of multi-planet systems that mimic the observed {\it Kepler} sample in terms of stellar mass, planet size, multiplicity, and intra-system uniformity over a parameter space encompassing transitional and truncated disks. We found that Type-I migration coupled with a disk inner edge can indeed produce second- and third-order resonances (in a state of libration) in $\sim 10\%$ and 2\% of resonant-chain systems, respectively. Moreover, the fraction of individual resonances in our simulations reproduced that of the observed sample (notably, 5:3 is the most common second-order MMR). The formation of higher-order MMRs favors slower disk migration and a smaller outer planet mass. Higher-order resonances do not have to form with the help of a Laplace-like three-body resonance as was proposed for TRAPPIST-1. Instead, the formation of higher-order resonance is assisted by breaking a pre-existing first-order resonance, which generates small but non-zero initial eccentricities ($e\approx10^{-3}$ to 10$^{-2}$). We predict that librating higher-order resonances 1) have higher equilibrium $e$ ($\sim 0.1$); 2) are more likely to be found as an isolated pair in an otherwise first-order chain; and 3) are more likely to emerge in the inner pairs of a chain.
http://arxiv.org/abs/2504.12597v1
GeoSense: Evaluating Identification and Application of Geometric Principles in Multimodal Reasoning
2025-04-17T02:46:27+00:00
Geometry problem-solving (GPS), a challenging task requiring both visual comprehension and symbolic reasoning, effectively measures the reasoning capabilities of multimodal large language models (MLLMs). Humans exhibit strong reasoning ability in this task through accurate identification and adaptive application of geometric principles within visual contexts. However, existing benchmarks fail to jointly assess both dimensions of the human-like geometric reasoning mechanism in MLLMs, leaving a critical gap in assessing their ability to tackle GPS. To this end, we introduce GeoSense, the first comprehensive bilingual benchmark designed to systematically evaluate the geometric reasoning abilities of MLLMs through the lens of geometric principles. GeoSense features a five-level hierarchical framework of geometric principles spanning plane and solid geometry, an intricately annotated dataset of 1,789 problems, and an innovative evaluation strategy. Through extensive experiments on GeoSense with various open-source and closed-source MLLMs, we observe that Gemini-2.0-pro-flash performs best, achieving an overall score of $65.3$. Our in-depth analysis reveals that the identification and application of geometric principles remain a bottleneck for leading MLLMs, jointly hindering their reasoning abilities. These findings underscore GeoSense's potential to guide future advancements in MLLMs' geometric reasoning capabilities, paving the way for more robust and human-like reasoning in artificial intelligence.
http://arxiv.org/abs/2504.12598v1
Discrepancy of Arithmetic Progressions in Boxes and Convex Bodies
2025-04-17T02:47:49+00:00
The combinatorial discrepancy of arithmetic progressions inside $[N] := \{1, \ldots, N\}$ is the smallest integer $D$ for which $[N]$ can be colored with two colors so that any arithmetic progression in $[N]$ contains at most $D$ more elements from one color class than the other. Bounding the discrepancy of such set systems is a classical problem in discrepancy theory. More recently, this problem was generalized to arithmetic progressions in grids like $[N]^d$ (Valk{\'o}) and $[N_1]\times \ldots \times [N_d]$ (Fox, Xu, and Zhou). In the latter setting, Fox, Xu, and Zhou gave upper and lower bounds on the discrepancy that match within a $\frac{\log |\Omega|}{\log \log |\Omega|}$ factor, where $\Omega := [N_1]\times \ldots \times [N_d]$ is the ground set. In this work, we use the connection between factorization norms and discrepancy to improve their upper bound to be within a $\sqrt{\log|\Omega|}$ factor from the lower bound. We also generalize Fox, Xu, and Zhou's lower bound, and our upper bounds to arithmetic progressions in arbitrary convex bodies.
http://arxiv.org/abs/2504.12599v1
3DResT: A Strong Baseline for Semi-Supervised 3D Referring Expression Segmentation
2025-04-17T02:50:52+00:00
3D Referring Expression Segmentation (3D-RES) typically requires extensive instance-level annotations, which are time-consuming and costly. Semi-supervised learning (SSL) mitigates this by using limited labeled data alongside abundant unlabeled data, improving performance while reducing annotation costs. SSL uses a teacher-student paradigm where the teacher generates high-confidence-filtered pseudo-labels to guide the student. However, in the context of 3D-RES, where each label corresponds to a single mask and labeled data is scarce, existing SSL methods treat high-quality pseudo-labels merely as auxiliary supervision, which limits the model's learning potential. The reliance on high-confidence thresholds for filtering often results in potentially valuable pseudo-labels being discarded, restricting the model's ability to leverage the abundant unlabeled data. Therefore, we identify two critical challenges in semi-supervised 3D-RES, namely, inefficient utilization of high-quality pseudo-labels and wastage of useful information from low-quality pseudo-labels. In this paper, we introduce the first semi-supervised learning framework for 3D-RES, presenting a robust baseline method named 3DResT. To address these challenges, we propose two novel designs called Teacher-Student Consistency-Based Sampling (TSCS) and Quality-Driven Dynamic Weighting (QDW). TSCS aids in the selection of high-quality pseudo-labels, integrating them into the labeled dataset to strengthen the labeled supervision signals. QDW preserves low-quality pseudo-labels by dynamically assigning them lower weights, allowing for the effective extraction of useful information rather than discarding them. Extensive experiments conducted on the widely used benchmark demonstrate the effectiveness of our method. Notably, with only 1% labeled data, 3DResT achieves an mIoU improvement of 8.34 points compared to the fully supervised method.
http://arxiv.org/abs/2504.12600v1
Boundary criticality in two-dimensional interacting topological insulators
2025-04-17T02:55:58+00:00
We study the boundary criticality in 2D interacting topological insulators. Using the determinant quantum Monte Carlo method, we present the first nonperturbative study of the boundary quantum phase diagram in the Kane-Mele-Hubbard-Rashba model. Our results reveal rich boundary critical phenomena at the quantum phase transition between a topological insulator and an antiferromagnetic insulator, encompassing ordinary, special, and extraordinary transitions. Combining analytical derivation of the boundary theory with unbiased numerically-exact quantum Monte Carlo simulations, we demonstrate that the presence of topological edge states enriches the ordinary transition that renders a continuous boundary scaling dimension and, more intriguingly, leads to a special transition of the Berezinskii-Kosterlitz-Thouless type. Our work establishes a novel framework for the nonperturbative study of boundary criticality in two-dimensional topological systems with strong electron correlations.
http://arxiv.org/abs/2504.12601v1
Stochastic Gradient Descent in Non-Convex Problems: Asymptotic Convergence with Relaxed Step-Size via Stopping Time Methods
2025-04-17T02:56:20+00:00
Stochastic Gradient Descent (SGD) is widely used in machine learning research. Previous convergence analyses of SGD under the vanishing step-size setting typically require Robbins-Monro conditions. However, in practice, a wider variety of step-size schemes are frequently employed, yet existing convergence results remain limited and often rely on strong assumptions. This paper bridges this gap by introducing a novel analytical framework based on a stopping-time method, enabling asymptotic convergence analysis of SGD under more relaxed step-size conditions and weaker assumptions. In the non-convex setting, we prove the almost sure convergence of SGD iterates for step-sizes $ \{ \epsilon_t \}_{t \geq 1} $ satisfying $\sum_{t=1}^{+\infty} \epsilon_t = +\infty$ and $\sum_{t=1}^{+\infty} \epsilon_t^p < +\infty$ for some $p > 2$. Compared with previous studies, our analysis eliminates the global Lipschitz continuity assumption on the loss function and relaxes the boundedness requirements for higher-order moments of stochastic gradients. Building upon the almost sure convergence results, we further establish $L_2$ convergence. These significantly relaxed assumptions make our theoretical results more general, thereby enhancing their applicability in practical scenarios.
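A concrete instance of the relaxed schedule: $\epsilon_t = 0.1\, t^{-0.6}$ gives $\sum_t \epsilon_t = +\infty$ while $\sum_t \epsilon_t^p < +\infty$ for $p = 2.5 > 2$ (but not for $p = 2$, so the classical Robbins-Monro condition fails; constant prefactors do not affect either condition). A toy run on a nonconvex objective:

```python
import numpy as np

rng = np.random.default_rng(0)
grad = lambda x: 4 * x ** 3 - 4 * x        # f(x) = x^4 - 2x^2, nonconvex
x = 1.5
for t in range(1, 100001):
    g = grad(x) + rng.normal(scale=0.5)    # noisy gradient oracle
    x -= 0.1 * t ** -0.6 * g               # relaxed step-size schedule
print(x)   # settles near a stationary point (x ~ +/-1)
```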
http://arxiv.org/abs/2504.12602v1
Boolean-valued second-order logic revisited
2025-04-17T02:56:54+00:00
Following the paper~[3] by V\"{a}\"{a}n\"{a}nen and the author, we continue to investigate the difference between Boolean-valued second-order logic and full second-order logic. We show that the compactness number of Boolean-valued second-order logic is equal to $\omega_1$ if there are proper class many Woodin cardinals. This contrasts the result by Magidor~[10] that the compactness number of full second-order logic is the least extendible cardinal. We also introduce the inner model $C^{2b}$ constructed from Boolean-valued second-order logic using the construction of G\"{o}del's Constructible Universe L. We show that $C^{2b}$ is the least inner model of $\mathsf{ZFC}$ closed under $\mathrm{M}_n^{\#}$ operators for all $n < \omega$, and that $C^{2b}$ enjoys various nice properties as G\"{o}del's L does, assuming that Projective Determinacy holds in any set generic extension. This contrasts the result by Myhill and Scott~[14] that the inner model constructed from full second-order logic is equal to HOD, the class of all hereditarily ordinal definable sets.
http://arxiv.org/abs/2504.12603v1
Mazurkiewicz Sets and Containment of Sierpiński-Zygmund Functions under Rotations
2025-04-17T03:04:18+00:00
A Mazurkiewicz set is a plane subset that intersects every straight line at exactly two points, and a Sierpi\'{n}ski-Zygmund function is a function from $\mathbb{R}$ into $\mathbb{R}$ that has as little of the standard continuity as possible. Building on the recent work of Kharazishvili, we construct a Mazurkiewicz set that contains a Sierpi\'{n}ski-Zygmund function in every direction and another one that contains none in any direction. Furthermore, we show that whether a Mazurkiewicz set can be expressed as a union of two Sierpi\'{n}ski-Zygmund functions is independent of Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC). Some open problems related to the containment of Hamel functions are stated.
http://arxiv.org/abs/2504.12604v1
Codes over Finite Ring $\mathbb{Z}_k$, MacWilliams Identity and Theta Function
2025-04-17T03:07:48+00:00
In this paper, we study linear codes over $\mathbb{Z}_k$ based on lattices and theta functions. We obtain the MacWilliams identity for complete weight enumerators and for symmetrized weight enumerators based on the theory of theta functions. We extend the main work by Bannai, Dougherty, Harada and Oura to the finite ring $\mathbb{Z}_k$ for any positive integer $k$ and present the complete weight enumerators MacWilliams identity in genus $g$. When $k=p$ is a prime number, we establish the relationship between the theta function of associated lattices over a cyclotomic field and the complete weight enumerators with Hamming weight of codes, which is analogous to the results of G. Van der Geer and F. Hirzebruch, who showed the identity for the Lee weight enumerators.
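For orientation, the simplest member of this family of identities, the Hamming-weight MacWilliams identity over $\mathbb{Z}_k$, $W_{C^\perp}(x, y) = |C|^{-1} W_C(x + (k-1)y,\, x - y)$, can be checked by brute force on a toy $\mathbb{Z}_4$ code:

```python
from itertools import product

k, n = 4, 3
g = (1, 1, 2)                               # generator of a toy Z_4 code
C = {tuple((a * gi) % k for gi in g) for a in range(k)}
dual = {v for v in product(range(k), repeat=n)
        if all(sum(vi * ci for vi, ci in zip(v, c)) % k == 0 for c in C)}

def W(code, x, y):                          # Hamming weight enumerator
    wt = lambda c: sum(ci != 0 for ci in c)
    return sum(x ** (n - wt(c)) * y ** wt(c) for c in code)

x, y = 1.3, 0.7                             # numeric spot-check
print(abs(W(dual, x, y) - W(C, x + (k - 1) * y, x - y) / len(C)) < 1e-9)
```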
http://arxiv.org/abs/2504.12605v1
AdaQual-Diff: Diffusion-Based Image Restoration via Adaptive Quality Prompting
2025-04-17T03:08:27+00:00
Restoring images afflicted by complex real-world degradations remains challenging, as conventional methods often fail to adapt to the unique mixture and severity of artifacts present. This stems from a reliance on indirect cues which poorly capture the true perceptual quality deficit. To address this fundamental limitation, we introduce AdaQual-Diff, a diffusion-based framework that integrates perceptual quality assessment directly into the generative restoration process. Our approach establishes a mathematical relationship between regional quality scores from DeQAScore and optimal guidance complexity, implemented through an Adaptive Quality Prompting mechanism. This mechanism systematically modulates prompt structure according to measured degradation severity: regions with lower perceptual quality receive computationally intensive, structurally complex prompts with precise restoration directives, while higher quality regions receive minimal prompts focused on preservation rather than intervention. The technical core of our method lies in the dynamic allocation of computational resources proportional to degradation severity, creating a spatially-varying guidance field that directs the diffusion process with mathematical precision. By combining this quality-guided approach with content-specific conditioning, our framework achieves fine-grained control over regional restoration intensity without requiring additional parameters or inference iterations. Experimental results demonstrate that AdaQual-Diff achieves visually superior restorations across diverse synthetic and real-world datasets.
http://arxiv.org/abs/2504.12606v1
Robo-SGG: Exploiting Layout-Oriented Normalization and Restitution for Robust Scene Graph Generation
2025-04-17T03:09:22+00:00
In this paper, we introduce a novel method named Robo-SGG, i.e., Layout-Oriented Normalization and Restitution for Robust Scene Graph Generation. Compared to the existing SGG setting, the robust scene graph generation aims to perform inference on a diverse range of corrupted images, with the core challenge being the domain shift between the clean and corrupted images. Existing SGG methods suffer from degraded performance due to compromised visual features, e.g., corruption interference or occlusions. To obtain robust visual features, we exploit the layout information, which is domain-invariant, to enhance the efficacy of existing SGG methods on corrupted images. Specifically, we employ Instance Normalization (IN) to filter out domain-specific features and recover the unchangeable structural features, i.e., the positional and semantic relationships among objects by the proposed Layout-Oriented Restitution. Additionally, we propose a Layout-Embedded Encoder (LEE) that augments the existing object and predicate encoders within the SGG framework, enriching the robust positional and semantic features of objects and predicates. Note that our proposed Robo-SGG module is designed as a plug-and-play component, which can be easily integrated into any baseline SGG model. Extensive experiments demonstrate that by integrating the state-of-the-art method into our proposed Robo-SGG, we achieve relative improvements of 5.6%, 8.0%, and 6.5% in mR@50 for PredCls, SGCls, and SGDet tasks on the VG-C dataset, respectively, and achieve new state-of-the-art performance in corruption scene graph generation benchmark (VG-C and GQA-C). We will release our source code and model.
http://arxiv.org/abs/2504.12607v1
Solving Constrained Combinatorial Optimization Problems with Variational Quantum Imaginary Time Evolution
2025-04-17T03:09:37+00:00
Solving combinatorial optimization problems using variational quantum algorithms (VQAs) has emerged as a promising research direction. Since the introduction of the Quantum Approximate Optimization Algorithm (QAOA), numerous variants have been proposed to enhance its performance. QAOA was later extended to the Quantum Alternating Operator Ansatz (QAOA+), which generalizes the initial state, phase-separation operator, and mixer to address constrained problems without relying on the standard Quadratic Unconstrained Binary Optimization (QUBO) formulation. However, QAOA+ often requires additional ancilla qubits and a large number of multi-controlled Toffoli gates to prepare the superposition of feasible states, resulting in deep circuits that are challenging for near-term quantum devices. Furthermore, VQAs are generally hindered by issues such as barren plateaus and suboptimal local minima. Recently, Quantum Imaginary Time Evolution (QITE), a ground-state preparation algorithm, has been explored as an alternative to QAOA and its variants. QITE has demonstrated improved performance in quantum chemistry problems and has been applied to unconstrained combinatorial problems such as Max-Cut. In this work, we apply the variational form of QITE (VarQITE) to solve the Multiple Knapsack Problem (MKP), a constrained problem, using a Max-Cut-tailored ansatz. To the best of our knowledge, this is the first attempt to address constrained optimization using VarQITE. We show that VarQITE achieves significantly lower mean optimality gaps compared to QAOA and other conventional methods. Moreover, we demonstrate that scaling the Hamiltonian coefficients can further reduce optimization costs and accelerate convergence.
http://arxiv.org/abs/2504.12608v1
Code Copycat Conundrum: Demystifying Repetition in LLM-based Code Generation
2025-04-17T03:13:39+00:00
Despite recent advances in Large Language Models (LLMs) for code generation, the quality of LLM-generated code still faces significant challenges. One significant issue is code repetition, which refers to the model's tendency to generate structurally redundant code, resulting in inefficiencies and reduced readability. To address this, we conduct the first empirical study to investigate the prevalence and nature of repetition across 19 state-of-the-art code LLMs using three widely-used benchmarks. Our study includes both quantitative and qualitative analyses, revealing that repetition is pervasive and manifests at various granularities and extents, including character, statement, and block levels. We further summarize a taxonomy of 20 repetition patterns. Building on our findings, we propose DeRep, a rule-based technique designed to detect and mitigate repetition in generated code. We evaluate DeRep using both open-source benchmarks and in an industrial setting. Our results demonstrate that DeRep significantly outperforms baselines in reducing repetition (with average improvements of 91.3%, 93.5%, and 79.9% in rep-3, rep-line, and sim-line metrics) and enhancing code quality (with a Pass@1 increase of 208.3% over greedy search). Furthermore, integrating DeRep improves the performance of existing repetition mitigation methods, with Pass@1 improvements ranging from 53.7% to 215.7%.
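The repetition metrics named here are variants of duplicate-n-gram rates; a common formulation of a rep-3 style score (the paper's exact definition may differ) is:

```python
# Fraction of token 3-grams that duplicate an earlier 3-gram.
def rep_n(tokens, n=3):
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1 - len(set(grams)) / len(grams) if grams else 0.0

code = "x = 1\ny = 2\nx = 1\ny = 2\n".split()
print(rep_n(code))   # 0.4 for this doubled snippet
```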
http://arxiv.org/abs/2504.12609v1
Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration
2025-04-17T03:15:20+00:00
Teaching robots dexterous manipulation skills often requires collecting hundreds of demonstrations using wearables or teleoperation, a process that is challenging to scale. Videos of human-object interactions are easier to collect and scale, but leveraging them directly for robot learning is difficult due to the lack of explicit action labels from videos and morphological differences between robot and human hands. We propose Human2Sim2Robot, a novel real-to-sim-to-real framework for training dexterous manipulation policies using only one RGB-D video of a human demonstrating a task. Our method utilizes reinforcement learning (RL) in simulation to cross the human-robot embodiment gap without relying on wearables, teleoperation, or large-scale data collection typically necessary for imitation learning methods. From the demonstration, we extract two task-specific components: (1) the object pose trajectory to define an object-centric, embodiment-agnostic reward function, and (2) the pre-manipulation hand pose to initialize and guide exploration during RL training. We found that these two components are highly effective for learning the desired task, eliminating the need for task-specific reward shaping and tuning. We demonstrate that Human2Sim2Robot outperforms object-aware open-loop trajectory replay by 55% and imitation learning with data augmentation by 68% across grasping, non-prehensile manipulation, and multi-step tasks. Project Site: https://human2sim2robot.github.io
http://arxiv.org/abs/2504.12610v1
Machine Learning Methods for Gene Regulatory Network Inference
2025-04-17T03:19:49+00:00
Gene Regulatory Networks (GRNs) are intricate biological systems that control gene expression and regulation in response to environmental and developmental cues. Advances in computational biology, coupled with high throughput sequencing technologies, have significantly improved the accuracy of GRN inference and modeling. Modern approaches increasingly leverage artificial intelligence (AI), particularly machine learning techniques including supervised, unsupervised, semi-supervised, and contrastive learning to analyze large scale omics data and uncover regulatory gene interactions. To support both the application of GRN inference in studying gene regulation and the development of novel machine learning methods, we present a comprehensive review of machine learning based GRN inference methodologies, along with the datasets and evaluation metrics commonly used. Special emphasis is placed on the emerging role of cutting edge deep learning techniques in enhancing inference performance. The potential future directions for improving GRN inference are also discussed.
http://arxiv.org/abs/2504.12611v1
Implementing Slack-Free Custom Penalty Function for QUBO on Gate-Based Quantum Computers
2025-04-17T03:20:02+00:00
Solving NP-hard constrained combinatorial optimization problems using quantum algorithms remains a challenging yet promising avenue toward quantum advantage. Variational Quantum Algorithms (VQAs), such as the Variational Quantum Eigensolver (VQE), typically require constrained problems to be reformulated as unconstrained ones using penalty methods. A common approach introduces slack variables and quadratic penalties in the QUBO formulation to handle inequality constraints. However, this leads to increased qubit requirements and often distorts the optimization landscape, making it harder to find high-quality feasible solutions. To address these issues, we explore a slack-free formulation that directly encodes inequality constraints using custom penalty functions, specifically the exponential function and the Heaviside step function. These step-like penalties suppress infeasible solutions without introducing additional qubits or requiring finely tuned weights. Inspired by recent developments in quantum annealing and threshold-based constraint handling in gate-based algorithms, we implement and evaluate our approach on the Multiple Knapsack Problem (MKP). Experimental results show that the step-based formulation significantly improves feasibility and optimality rates compared to unbalanced penalization, while reducing overall qubit overhead.
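An illustrative classical evaluation of such slack-free penalties on a toy knapsack instance (weights, values, and penalty constants are assumptions; in the quantum setting these terms would enter the problem Hamiltonian rather than a Python objective):

```python
import numpy as np
from itertools import product

A = 10.0                                      # penalty strength (assumed)
def heaviside_penalty(g, b):
    return A * (g > b)                        # step: flat cost on any violation
def exponential_penalty(g, b, s=1.0):
    return A * np.exp(s * (g - b)) * (g > b)  # cost grows with violation size

weights, values, cap = np.array([3, 4, 5]), np.array([4, 5, 6]), 8
def objective(x, penalty=heaviside_penalty):  # minimize: -value + penalty
    return -values @ x + penalty(weights @ x, cap)

best = min((objective(np.array(b)), b) for b in product((0, 1), repeat=3))
print(best)   # (-10.0, (1, 0, 1)): feasible optimum, no slack qubits needed
```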
http://arxiv.org/abs/2504.12612v1
The Chronicles of Foundation AI for Forensics of Multi-Agent Provenance
2025-04-17T03:23:17+00:00
Provenance is the chronology of things, resonating with the fundamental pursuit to uncover origins, trace connections, and situate entities within the flow of space and time. As artificial intelligence advances towards autonomous agents capable of interactive collaboration on complex tasks, the provenance of generated content becomes entangled in the interplay of collective creation, where contributions are continuously revised, extended or overwritten. In a multi-agent generative chain, content undergoes successive transformations, often leaving little, if any, trace of prior contributions. In this study, we investigate the problem of tracking multi-agent provenance across the temporal dimension of generation. We propose a chronological system for post hoc attribution of generative history from content alone, without reliance on internal memory states or external meta-information. At its core lies the notion of symbolic chronicles, representing signed and time-stamped records, in a form analogous to the chain of custody in forensic science. The system operates through a feedback loop, whereby each generative timestep updates the chronicle of prior interactions and synchronises it with the synthetic content in the very act of generation. This research seeks to develop an accountable form of collaborative artificial intelligence within evolving cyber ecosystems.
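A minimal sketch of such a symbolic chronicle as a keyed hash chain (the paper's actual record format and synchronization loop are not specified in the abstract; key handling here is purely illustrative):

```python
import hashlib, hmac, json, time

# Each generative step appends a signed, time-stamped record whose MAC
# binds the content digest, the agent, the timestamp, and the previous
# record's MAC -- analogous to a chain of custody.
def append_record(chain, agent, content, key):
    prev = chain[-1]["mac"] if chain else "genesis"
    body = {"agent": agent, "t": time.time(), "prev": prev,
            "digest": hashlib.sha256(content.encode()).hexdigest()}
    mac = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    chain.append({**body, "mac": mac})

chain, key = [], b"shared-secret"
append_record(chain, "agent-A", "draft v1", key)
append_record(chain, "agent-B", "revised draft", key)
print(chain[-1]["prev"] == chain[0]["mac"])   # records are linked
```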
http://arxiv.org/abs/2504.12613v1
Fast and Accurate Prediction of Antenna Reflection Coefficients in Planar Layered Media Environment via Generalized Scattering Matrix
2025-04-17T03:28:23+00:00
The numerical algorithm for evaluating the reflection coefficient of an antenna in the presence of the planar layered medium is reformulated using the antenna's generalized scattering matrix (GSM). The interaction between the antenna and the layered medium is modeled through spherical-to-planar vector wave transformations, ensuring no approximations that could compromise computational accuracy. This theoretical framework significantly reduces algebraic complexity, resulting in a marked increase in the speed of antenna performance evaluation. Excluding the one-time preprocessing cost of obtaining the antenna's GSM in free space, the numerical evaluation speed of this method exceeds that of the commercial software FEKO by several orders of magnitude, while maintaining nearly identical accuracy.
http://arxiv.org/abs/2504.12614v1
From Regulation to Support: Centering Humans in Technology-Mediated Emotion Intervention in Care Contexts
2025-04-17T03:35:01+00:00
Enhancing emotional well-being has become a significant focus in HCI and CSCW, with technologies increasingly designed to track, visualize, and manage emotions. However, these approaches have faced criticism for potentially suppressing certain emotional experiences. Through a scoping review of 53 empirical studies from ACM proceedings implementing Technology-Mediated Emotion Intervention (TMEI), we critically examine current practices through lenses drawn from HCI critical theories. Our analysis reveals emotion intervention mechanisms that extend beyond traditional emotion regulation paradigms, identifying care-centered goals that prioritize non-judgmental emotional support and preserve users' identities. The findings demonstrate how researchers design technologies for generating artificial care, intervening in power dynamics, and nudging behavioral changes. We contribute the concept of "emotion support" as an alternative approach to "emotion regulation," emphasizing human-centered approaches to emotional well-being. This work advances the understanding of diverse human emotional needs beyond individual and cognitive perspectives, offering design implications that critically reimagine how technologies can honor emotional complexity, preserve human agency, and transform power dynamics in care contexts.
http://arxiv.org/abs/2504.12615v1
Shrinkage priors for circulant correlation structure models
2025-04-17T03:39:52+00:00
We consider a new statistical model called the circulant correlation structure model, which is a multivariate Gaussian model with unknown covariance matrix and has a scale-invariance property. We construct shrinkage priors for the circulant correlation structure models and show that Bayesian predictive densities based on those priors asymptotically dominate Bayesian predictive densities based on Jeffreys priors under the Kullback-Leibler (KL) risk function. While shrinkage of eigenvalues of covariance matrices of Gaussian models has been successful, the proposed priors shrink a non-eigenvalue part of covariance matrices.
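For context, the computational convenience of circulant structure: a circulant matrix is diagonalized by the discrete Fourier transform, so its eigenvalues are the FFT of its first row. A small check on a toy correlation matrix (the proposed priors notably shrink a non-eigenvalue part of such matrices):

```python
import numpy as np

row = np.array([1.0, 0.5, 0.2, 0.5])          # symmetric first row (toy)
C = np.array([np.roll(row, i) for i in range(4)])
eig_fft = np.fft.fft(row).real                # eigenvalues via FFT
print(np.allclose(np.sort(eig_fft), np.linalg.eigvalsh(C)))   # True
```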
http://arxiv.org/abs/2504.12616v1
Graph-based Path Planning with Dynamic Obstacle Avoidance for Autonomous Parking
2025-04-17T03:43:20+00:00
Safe and efficient path planning in parking scenarios presents a significant challenge due to the presence of cluttered environments filled with static and dynamic obstacles. To address this, we propose a novel and computationally efficient planning strategy that seamlessly integrates the predictions of dynamic obstacles into the planning process, ensuring the generation of collision-free paths. Our approach builds upon the conventional Hybrid A star algorithm by introducing a time-indexed variant that explicitly accounts for the predictions of dynamic obstacles during node exploration in the graph, thus enabling dynamic obstacle avoidance. We integrate the time-indexed Hybrid A star algorithm within an online planning framework to compute local paths at each planning step, guided by an adaptively chosen intermediate goal. The proposed method is validated in diverse parking scenarios, including perpendicular, angled, and parallel parking. Through simulations, we showcase our approach's potential to greatly improve efficiency and safety compared to the state-of-the-art spline-based planning method for parking situations.
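The key mechanism, checking dynamic-obstacle predictions during node expansion, can be sketched with a plain time-indexed A* on a grid (the paper's Hybrid A star variant additionally handles vehicle kinematics; names and costs here are assumptions):

```python
import heapq

# States are (x, y, t) cells; `predicted` maps a timestep to the set of
# cells a dynamic obstacle is forecast to occupy at that time.
def time_indexed_astar(start, goal, free, predicted, t_max=200):
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan
    open_set = [(h(start), start + (0,))]
    seen = set()
    while open_set:
        f, (x, y, t) = heapq.heappop(open_set)
        if (x, y) == goal:
            return t                                    # arrival time
        if (x, y, t) in seen or t >= t_max:
            continue
        seen.add((x, y, t))
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # may wait
            nxt = (x + dx, y + dy)
            if nxt in free and nxt not in predicted.get(t + 1, set()):
                g = f - h((x, y)) + 1                   # recover g; unit step cost
                heapq.heappush(open_set, (g + h(nxt), nxt + (t + 1,)))
    return None

free = {(x, y) for x in range(5) for y in range(5)}
predicted = {t: {(2, t % 5)} for t in range(50)}        # one moving obstacle
print(time_indexed_astar((0, 0), (4, 4), free, predicted))
```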
http://arxiv.org/abs/2504.12617v1
Bayesian Density-Density Regression with Application to Cell-Cell Communications
2025-04-17T03:46:03+00:00
We introduce a scalable framework for regressing multivariate distributions onto multivariate distributions, motivated by the application of inferring cell-cell communication from population-scale single-cell data. The observed data consist of pairs of multivariate distributions for ligands from one cell type and corresponding receptors from another. For each ordered pair $e=(l,r)$ of cell types $(l \neq r)$ and each sample $i = 1, \ldots, n$, we observe a pair of distributions $(F_{ei}, G_{ei})$ of gene expressions for ligands and receptors of cell types $l$ and $r$, respectively. The aim is to set up a regression of receptor distributions $G_{ei}$ given ligand distributions $F_{ei}$. A key challenge is that these distributions reside in distinct spaces of differing dimensions. We formulate the regression of multivariate densities on multivariate densities using a generalized Bayes framework with the sliced Wasserstein distance between fitted and observed distributions. Finally, we use inference under such regressions to define a directed graph for cell-cell communications.
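The sliced Wasserstein distance at the heart of this generalized Bayes loss can be computed by projecting both samples onto random directions and averaging 1D Wasserstein distances (equal sample sizes are assumed here so the 1D coupling is a sorted matching):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)                 # random unit direction
        x, y = np.sort(X @ theta), np.sort(Y @ theta)  # 1D optimal coupling
        total += np.mean((x - y) ** 2)
    return np.sqrt(total / n_proj)

X = np.random.default_rng(1).normal(size=(200, 3))
print(sliced_wasserstein(X, X + 0.5))                  # ~0.5 for this shift
```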
http://arxiv.org/abs/2504.12618v1
Simultaneous Superoscillations in Space and Time in Nonseparable Light Pulses
2025-04-17T03:47:11+00:00
A remarkable phenomenon of superoscillations implies that electromagnetic waves can locally oscillate in space or time faster than the fastest spatial and temporal Fourier component of the entire function. This phenomenon allows light to be focused into an arbitrarily small hotspot, enabling superresolution imaging and optical metrology with accuracy far beyond the Abbe-Rayleigh diffraction limit. Here we show that, in band-limited supertoroidal light pulses, the temporal and spatial superoscillations can be observed simultaneously at a specific region in space and at a specific interval in time.
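A textbook numerical illustration of the phenomenon (not the paper's supertoroidal pulses): the band-limited function $f(x) = (\cos x + i a \sin x)^N$ contains no Fourier component faster than $N$, yet near $x = 0$ its phase advances at the rate $aN$:

```python
import numpy as np

a, N = 2.0, 10
x = np.linspace(-0.05, 0.05, 2001)
f = (np.cos(x) + 1j * a * np.sin(x)) ** N
local_freq = np.gradient(np.unwrap(np.angle(f)), x)   # instantaneous frequency
print(local_freq[1000])                               # ~ a*N = 20 > band limit N
```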
http://arxiv.org/abs/2504.12619v1
SAM-Based Building Change Detection with Distribution-Aware Fourier Adaptation and Edge-Constrained Warping
2025-04-17T03:47:43+00:00
Building change detection remains challenging for urban development, disaster assessment, and military reconnaissance. While foundation models like Segment Anything Model (SAM) show strong segmentation capabilities, SAM is limited in the task of building change detection due to domain gap issues. Existing adapter-based fine-tuning approaches face challenges with imbalanced building distribution, resulting in poor detection of subtle changes and inaccurate edge extraction. Additionally, bi-temporal misalignment in change detection, typically addressed by optical flow, remains vulnerable to background noises. This affects the detection of building changes and compromises both detection accuracy and edge recognition. To tackle these challenges, we propose a new SAM-Based Network with Distribution-Aware Fourier Adaptation and Edge-Constrained Warping (FAEWNet) for building change detection. FAEWNet utilizes the SAM encoder to extract rich visual features from remote sensing images. To guide SAM in focusing on specific ground objects in remote sensing scenes, we propose a Distribution-Aware Fourier Aggregated Adapter to aggregate task-oriented changed information. This adapter not only effectively addresses the domain gap issue, but also pays attention to the distribution of changed buildings. Furthermore, to mitigate noise interference and misalignment in height offset estimation, we design a novel flow module that refines building edge extraction and enhances the perception of changed buildings. Our state-of-the-art results on the LEVIR-CD, S2Looking and WHU-CD datasets highlight the effectiveness of FAEWNet. The code is available at https://github.com/SUPERMAN123000/FAEWNet.
http://arxiv.org/abs/2504.12620v1
Fractional balanced chromatic number of signed subcubic graphs
2025-04-17T03:51:29+00:00
A signed graph is a pair $(G,\sigma)$, where $G$ is a graph and $\sigma: E(G)\rightarrow \{-, +\}$, called signature, is an assignment of signs to the edges. Given a signed graph $(G,\sigma)$ with no negative loops, a balanced $(p,q)$-coloring of $(G,\sigma)$ is an assignment $f$ of $q$ colors to each vertex from a pool of $p$ colors such that each color class induces a balanced subgraph, i.e., no negative cycles. Let $(K_4,-)$ be the signed graph on $K_4$ with all edges being negative. In this work, we show that every signed (simple) subcubic graph admits a balanced $(5,3)$-coloring except for $(K_4,-)$ and signed graphs switching equivalent to it. For this particular signed graph the best balanced colorings are $(2p,p)$-colorings.
http://arxiv.org/abs/2504.12621v1
Revisiting multifunctionality in reservoir computing
2025-04-17T03:54:08+00:00
Multifunctionality is ubiquitous in biological neurons. Several studies have translated the concept to artificial neural networks as well. Recently, multifunctionality in reservoir computing (RC) has gained widespread attention from researchers. Multistable dynamics of the reservoir can be configured to capture multiple tasks, each by one of the co-existing attractors. However, there are several limitations in the applicability of this approach. So far, multifunctional RC has been shown to be able to reconstruct different attractor climates only when the attractors are well separated in the phase space. We propose a more flexible reservoir computing scheme capable of multifunctioning beyond the earlier limitations. The proposed architecture holds striking similarity with the multifunctional biological neural networks and showcases superior performance. It is capable of learning multiple chaotic attractors with overlapping phase space. We successfully train the RC to achieve multifunctionality with a wide range of tasks.
http://arxiv.org/abs/2504.12622v1
Can metric radio bursts be used as a diagnostics tool for interplanetary coronal mass ejections?
2025-04-17T03:55:30+00:00
Metric radio bursts are often said to be valuable diagnostic tools for studying the near-sun kinematics and energetics of the Interplanetary Coronal Mass Ejections (ICMEs). Radio observations also serve as an indirect tool to estimate the coronal magnetic fields. However, how these estimated coronal magnetic fields are related to the magnetic field strength in the ICME at 1 AU has rarely been explored. We aim to establish a relation between the coronal magnetic fields obtained from the radio observations very close to the Sun and the magnetic field measured at 1 AU when the ICME arrives at the Earth. We performed statistical analysis of all metric type II radio bursts in solar cycles 23 and 24, which were found to be associated with ICMEs. We estimated the coronal magnetic field associated with the corresponding CME near the Sun (middle corona) using a split-band radio technique and compared those with the magnetic fields recorded at 1 AU with in-situ observations. We found that the estimated magnetic fields near the Sun using radio techniques are not well correlated with the magnetic fields measured at 1 AU using in-situ observations. This could be due to the complex evolution of the magnetic field as it propagates through the heliosphere. Our results suggest that while metric radio observations can serve as effective proxies for estimating magnetic fields near the Sun, they may not be as effective close to the Earth. At least, no linear relation could be established using metric radio emissions to estimate the magnetic fields at 1 AU with acceptable error margins.
http://arxiv.org/abs/2504.12623v1
Privacy-Preserving CNN Training with Transfer Learning: Two Hidden Layers
2025-04-17T03:58:23+00:00
In this paper, we present the demonstration of training a four-layer neural network entirely using fully homomorphic encryption (FHE), supporting both single-output and multi-output classification tasks in a non-interactive setting. A key contribution of our work is identifying that replacing \textit{Softmax} with \textit{Sigmoid}, in conjunction with the Binary Cross-Entropy (BCE) loss function, provides an effective and scalable solution for homomorphic classification. Moreover, we show that the BCE loss function, originally designed for multi-output tasks, naturally extends to the multi-class setting, thereby enabling broader applicability. We also highlight the limitations of prior loss functions such as the SLE loss and the one proposed in the 2019 CVPR Workshop, both of which suffer from vanishing gradients as network depth increases. To address the challenges posed by large-scale encrypted data, we further introduce an improved version of the previously proposed data encoding scheme, \textit{Double Volley Revolver}, which achieves a better trade-off between computational and memory efficiency, making FHE-based neural network training more practical. The complete, runnable C++ code to implement our work can be found at: \href{https://github.com/petitioner/ML.NNtraining}{$\texttt{https://github.com/petitioner/ML.NNtraining}$}.
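A plaintext sketch of the Sigmoid-plus-BCE recipe for multi-class (one-vs-all) training; under FHE, the sigmoid would be replaced by a low-degree polynomial approximation, which is precisely why avoiding Softmax matters:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def bce_step(W, X, Y):                      # Y: (n, classes) one-hot targets
    P = sigmoid(X @ W)                      # independent per-class probabilities
    loss = -np.mean(Y * np.log(P) + (1 - Y) * np.log(1 - P))
    return loss, X.T @ (P - Y) / len(X)     # gradient of BCE through sigmoid

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)); y = rng.integers(0, 3, 100)
Y, W = np.eye(3)[y], np.zeros((5, 3))
for _ in range(200):
    loss, g = bce_step(W, X, Y)
    W -= 0.5 * g
print(loss)                                 # decreases; argmax of P gives the class
```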
http://arxiv.org/abs/2504.12624v1
The (3+1)-dimensional dispersionless integrable hierarchy and nonlinear Riemann-Hilbert problem associated with the Doubrov-Ferapontov modified heavenly equation
2025-04-17T03:58:33+00:00
According to the classification of integrable complex Monge-Ampère equations by Doubrov and Ferapontov, the modified heavenly equation is a typical (3+1)-dimensional dispersionless and canonical integrable equation. In this paper, we use the eigenfunctions of the Doubrov-Ferapontov modified heavenly equation to obtain a related hierarchy. Next, we construct the Lax-Sato equations with Hamiltonian vector fields and Zakharov-Shabat-type equations which are equivalent to the hierarchy. The nonlinear Riemann-Hilbert problem is also applied to study the solution of the Doubrov-Ferapontov modified heavenly equation.
http://arxiv.org/abs/2504.12625v1
Spectral Algorithms under Covariate Shift
2025-04-17T04:02:06+00:00
Spectral algorithms leverage spectral regularization techniques to analyze and process data, providing a flexible framework for addressing supervised learning problems. To deepen our understanding of their performance in real-world scenarios where the distributions of training and test data may differ, we conduct a rigorous investigation into the convergence behavior of spectral algorithms under distribution shifts, specifically within the framework of reproducing kernel Hilbert spaces. Our study focuses on the case of covariate shift. In this scenario, the marginal distributions of the input data differ between the training and test datasets, while the conditional distribution of the output given the input remains unchanged. Under this setting, we analyze the generalization error of spectral algorithms and show that they achieve minimax optimality when the density ratios between the training and test distributions are uniformly bounded. However, we also identify a critical limitation: when the density ratios are unbounded, the spectral algorithms may become suboptimal. To address this limitation, we propose a weighted spectral algorithm that incorporates density ratio information into the learning process. Our theoretical analysis shows that this weighted approach achieves optimal capacity-independent convergence rates. Furthermore, by introducing a weight clipping technique, we demonstrate that the convergence rates of the weighted spectral algorithm can approach the optimal capacity-dependent convergence rates arbitrarily closely. This improvement resolves the suboptimality issue in unbounded density ratio scenarios and advances the state-of-the-art by refining existing theoretical results.
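The weighting-plus-clipping idea generalizes beyond the paper's setting. Below is a generic importance-weighted kernel ridge regression sketch with clipped density ratios; the kernel, clipping level, and toy distributions are our assumptions, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr = rng.normal(0.0, 1.0, 200)               # training inputs ~ p_train = N(0, 1)
f = lambda x: np.sin(2 * x)
ytr = f(Xtr) + 0.1 * rng.normal(size=200)

# density ratio p_test/p_train for a shifted test distribution N(1, 1)
ratio = np.exp(-(Xtr - 1.0) ** 2 / 2) / np.exp(-(Xtr ** 2) / 2)
w = np.minimum(ratio, 5.0)                    # weight clipping at an assumed level

# weighted KRR: minimize sum_i w_i (f(x_i)-y_i)^2 + lam ||f||^2
K = np.exp(-(Xtr[:, None] - Xtr[None, :]) ** 2)       # Gaussian kernel
Wd = np.diag(w)
alpha = np.linalg.solve(Wd @ K + 1e-2 * np.eye(200), Wd @ ytr)

Xte = rng.normal(1.0, 1.0, 500)               # test inputs ~ p_test
pred = np.exp(-(Xte[:, None] - Xtr[None, :]) ** 2) @ alpha
print("test MSE:", np.mean((pred - f(Xte)) ** 2))
```

Clipping trades a small bias for bounded variance, which is the intuition behind the near-optimal rates in the unbounded-ratio regime.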
http://arxiv.org/abs/2504.12626v1
Packing Input Frame Context in Next-Frame Prediction Models for Video Generation
2025-04-17T04:02:31+00:00
We present a neural network structure, FramePack, for training next-frame (or next-frame-section) prediction models for video generation. FramePack compresses input frames so that the transformer context length is a fixed number regardless of the video length. As a result, we are able to process a large number of frames using video diffusion with a computation bottleneck similar to that of image diffusion. This also allows significantly higher training video batch sizes (batch sizes become comparable to image diffusion training). We also propose an anti-drifting sampling method that generates frames in inverted temporal order with early-established endpoints to avoid exposure bias (error accumulation over iterations). Finally, we show that existing video diffusion models can be finetuned with FramePack, and their visual quality may be improved because the next-frame prediction supports more balanced diffusion schedulers with less extreme flow-shift timesteps.
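The fixed-context compression idea can be illustrated with a toy packing function. This is our own guess at the flavor of the mechanism, not FramePack's actual kernel; the geometric compression schedule and token budget are assumptions:

```python
import numpy as np

def pack_frames(frames, budget=1024):
    """frames: list of (tokens, dim) arrays, most recent last."""
    packed = []
    for age, f in enumerate(reversed(frames)):
        stride = 2 ** age                  # assumed: older frames compressed harder
        packed.append(f[::stride])         # keep every `stride`-th token
    out = np.concatenate(packed[::-1])     # restore chronological order
    return out[-budget:]                   # hard cap on total context length

frames = [np.random.randn(256, 64) for _ in range(12)]
ctx = pack_frames(frames)
print(ctx.shape)   # token count stays bounded no matter how long the video is
```

Because the per-frame token counts form a geometric series, the total context converges to a constant even as the number of input frames grows.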
http://arxiv.org/abs/2504.12627v1
Uncertainty Quantification in Graph Neural Networks with Shallow Ensembles
2025-04-17T04:02:53+00:00
Machine-learned potentials (MLPs) have revolutionized materials discovery by providing accurate and efficient predictions of molecular and material properties. Graph Neural Networks (GNNs) have emerged as a state-of-the-art approach due to their ability to capture complex atomic interactions. However, GNNs often produce unreliable predictions when encountering out-of-domain data, and it is difficult to identify when that happens. To address this challenge, we explore Uncertainty Quantification (UQ) techniques, focusing on Direct Propagation of Shallow Ensembles (DPOSE) as a computationally efficient alternative to deep ensembles. By integrating DPOSE into the SchNet model, we assess its ability to provide reliable uncertainty estimates across diverse Density Functional Theory datasets, including QM9, OC20, and Gold Molecular Dynamics. Our findings demonstrate that DPOSE often successfully distinguishes between in-domain and out-of-domain samples, exhibiting higher uncertainty for unobserved molecule and material classes. This work highlights the potential of lightweight UQ methods in improving the robustness of GNN-based materials modeling and lays the foundation for future integration with active learning strategies.
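Shallow ensembles in general replace many full networks with many cheap output heads over shared features. A minimal sketch of that generic idea (not the DPOSE code; sizes and the bootstrap scheme are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # frozen penultimate-layer features
y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=200)

heads = []
for _ in range(8):                         # one cheap least-squares fit per head
    idx = rng.integers(0, 200, 200)        # bootstrap resample for diversity
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    heads.append(w)

preds = np.stack([X @ w for w in heads])   # (n_heads, n_samples), single pass
print("mean:", preds.mean(0)[:3])          # point prediction
print("std: ", preds.std(0)[:3])           # per-sample uncertainty estimate
```

The appeal is cost: one forward pass through the shared trunk yields both a prediction and a spread, instead of training and evaluating several full networks.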
http://arxiv.org/abs/2504.12628v1
Enhancing NDAR with Delay-Gate-Induced Amplitude Damping
2025-04-17T04:09:11+00:00
The Noise-Directed Adaptive Remapping (NDAR) method utilizes amplitude damping noise to enhance the performance of quantum optimization algorithms. NDAR alternates between exploration by sampling solutions from the quantum circuit and exploitation by transforming the cost Hamiltonian by changing the signs of its terms. Both exploration and exploitation are important components in classical heuristic algorithm design. In this study, we examine how NDAR performance improves by adjusting the balance between these components. We control the degree of exploitation by varying the delay time to 0, 50, and $100~\mu\text{s}$, and investigate exploration strategies using two quantum circuits, QAOA and a random circuit, on IBM's Heron processor. Our results show that increasing delay time in NDAR improves the best objective value found in each iteration. In single-layer QAOA and random circuits applied to unweighted Max-Cut problem with low edge density, both exploration strategies yield similar objective value trajectories and provide competitive solution quality to simulated annealing for the 80-node problem. Their similar performance indicates that, in most cases, increasing amplitude damping noise via additional delay time results in information loss. On the other hand, QAOA outperforms random circuits in specific cases, such as positive-negative weighted Max-Cut on a fully connected graph. This suggests potential advantages of QAOA in more complex settings. We further develop a classical NDAR to better understand exploration strategies, demonstrating that controlling the Hamming weight distribution of sampled bitstrings yields higher quality solutions. This suggests that identifying suitable quantum circuits for exploration could enhance NDAR performance.
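The alternation the abstract describes (sample, then change signs so the best bitstring aligns with the damping-favored state) can be caricatured classically. An assumption-laden sketch, with uniform random sampling standing in for the quantum circuit and all sizes invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
J = np.triu(rng.normal(size=(n, n)), 1)       # random Ising couplings, upper triangular

def energy(z, J):                             # z in {+1, -1}^n
    return z @ J @ z                          # sum_{i<j} J_ij z_i z_j

best = None
for _ in range(5):                            # alternate explore / remap
    samples = rng.choice([-1, 1], size=(200, n))   # stand-in for circuit sampling
    e = np.array([energy(z, J) for z in samples])
    z_star = samples[np.argmin(e)]
    J = J * np.outer(z_star, z_star)          # gauge transform: z_star maps to all-ones
    best = e.min() if best is None else min(e.min(), best)
print("best energy found:", best)
```

The gauge transform preserves the energy spectrum, so the remap only relabels which configuration the amplitude-damping noise pulls samples toward.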
http://arxiv.org/abs/2504.12629v1
Sampling-based Quantum Optimization Algorithm with Quantum Relaxation
2025-04-17T04:13:51+00:00
Variational Quantum Algorithms (VQAs) are hybrid algorithms for noisy quantum devices. However, statistical fluctuations and physical noise degrade the solution quality, making it difficult to maintain applicability for large-scale problems. In contrast, sampling-based quantum algorithms have recently been successfully applied to large-scale quantum chemistry problems: the quantum device is used only for sampling, and the ground state and its energy are estimated on the classical device. In this study, we propose the Sampling-based Quantum Optimization Algorithm (SQOA). Two challenges exist in constructing a sampling-based quantum algorithm for combinatorial optimization. The first is that we need to encode the optimization problem in a non-diagonal Hamiltonian, even though many VQAs encode it into the Ising Hamiltonian, which is diagonal. The second is that we need a method to efficiently prepare the input state to be sampled. For the first challenge, we employ the Quantum Relaxation (QR) method, which encodes multiple classical variables in one qubit and thus reduces the number of required qubits compared to the Ising Hamiltonian approach. For the second challenge, we investigate parameter transferability in the Quantum Alternating Operator Ansatz for QR Hamiltonians. We show that restricting parameters to a linear form exhibits moderate transferability for 3-regular MaxCut problems, similar to the transferability observed in the Quantum Approximate Optimization Algorithm. This property allows us to efficiently prepare the input state for a large instance using the parameters from a small instance. We leveraged this transferability to create input states and applied SQOA with QR to MaxCut instances. Transferring parameters from a 20-node problem demonstrates that SQOA with QR provides high-quality solutions for 40-node problems without variational parameter optimization.
http://arxiv.org/abs/2504.12630v1
Crystal growth, structure and physical properties of quasi-one-dimensional tellurides Fe$_{4-x}$VTe$_{4-y}$ ($x=1.01$, $y=0.74$) and V$_{4.64}$Te$_4$
2025-04-17T04:13:56+00:00
A new ternary compound Fe$_{4-x}$VTe$_{4-y}$ ($x=1.01$, $y=0.74$) with the Ti$_5$Te$_4$-type structure is identified. Fe and V atoms tend to occupy different crystallographic positions and form quasi-one-dimensional (quasi-1D) Fe-V chains along the c-axis. Millimeter-sized single crystals of Fe$_{2.99}$VTe$_{3.26}$ (FVT) with a slender-stick shape, reflecting the quasi-1D crystal structure, could be grown by the chemical vapor transport method. Magnetization measurements reveal that FVT orders antiferromagnetically below T$_N$=93 K with strong easy ab-plane magnetic anisotropy. Although weak glassy-like behavior appears below 10 K, FVT is dominated by long-range antiferromagnetic order, in contrast to the spin-glass state in the previously reported isostructural Fe$_{5}$Te$_{4}$. We also synthesize V$_{4.64}$Te$_4$ with similar quasi-1D V chains and find that it shows weak anomalies at 144 K in both the resistivity and susceptibility curves. However, no clear evidence is found for the development of magnetic or charge order. X-ray photoelectron spectroscopy and a Curie-Weiss fit reveal that the effective moments of Fe$^{2+}$ and V$^{4+}$ in both compounds deviate strongly from the conventional local-moment model, possibly resulting from the formation of Fe/V metal-metal bonds. Furthermore, the resistivity of both FVT and V$_{4.64}$Te$_4$ exhibits semiconducting-like temperature dependence but with average values close to those of typical bad metals, resembling the transport behavior in the normal state of Fe-based superconductors. These quasi-1D compounds exhibit interesting physical properties for future condensed matter physics research.
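The Curie-Weiss analysis mentioned above is standard; a generic fitting sketch with synthetic numbers (not the paper's data), using mu_eff = sqrt(8C) mu_B for molar CGS units:

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, chi0, C, theta):
    # chi(T) = chi0 + C / (T - theta), valid in the paramagnetic regime
    return chi0 + C / (T - theta)

rng = np.random.default_rng(0)
T = np.linspace(150, 350, 100)                 # K, well above T_N = 93 K
chi = curie_weiss(T, 5e-4, 1.8, -60) + 1e-5 * rng.normal(size=100)  # synthetic data

popt, _ = curve_fit(curie_weiss, T, chi, p0=(1e-4, 1.0, -10))
chi0, C, theta = popt
print(f"C = {C:.2f} emu K/mol, theta = {theta:.0f} K, "
      f"mu_eff = {np.sqrt(8 * C):.2f} mu_B per formula unit")
```

Comparing the fitted mu_eff against the spin-only values for Fe$^{2+}$ and V$^{4+}$ is how such deviations from the local-moment picture are quantified.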
http://arxiv.org/abs/2504.12631v1
Geometry-preserving Numerical Scheme for Riemannian Stochastic Differential Equations
2025-04-17T04:14:00+00:00
Stochastic differential equations (SDEs) on Riemannian manifolds have numerous applications in system identification and control. However, geometry-preserving numerical methods for simulating Riemannian SDEs remain relatively underdeveloped. In this paper, we propose the Exponential Euler-Maruyama (Exp-EM) scheme for approximating solutions of SDEs on Riemannian manifolds. The Exp-EM scheme is both geometry-preserving and computationally tractable. We establish a strong convergence rate of $\mathcal{O}(\delta^{\frac{1 - \epsilon}{2}})$ for the Exp-EM scheme, which extends previous results obtained for specific manifolds to a more general setting. Numerical simulations are provided to illustrate our theoretical findings.
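To make the scheme concrete, here is a toy Exp-EM step on the unit sphere: Euclidean drift and noise are projected onto the tangent space and the update is taken through the exponential map, so iterates never leave the manifold. The drift, step size, and noise scale are our own choices, not the paper's examples:

```python
import numpy as np

def exp_map(x, v):
    """Exponential map on the unit sphere at x, tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def project(x, v):
    """Project an ambient vector onto the tangent space at x."""
    return v - (v @ x) * x

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0, 1.0])
dt, sigma = 1e-3, 0.5
drift = lambda x: np.array([1.0, 0.0, 0.0])   # toy ambient drift field

for _ in range(10_000):
    dw = rng.normal(size=3) * np.sqrt(dt)     # Brownian increment
    v = project(x, drift(x) * dt + sigma * dw)
    x = exp_map(x, v)                         # geometry-preserving step

print("final point:", x, " |x| =", np.linalg.norm(x))   # norm stays exactly 1
```

An extrinsic Euler-Maruyama step would drift off the sphere and require ad hoc renormalization; the exponential map removes that failure mode by construction.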
http://arxiv.org/abs/2504.12632v1
Transferring linearly fixed QAOA angles: performance and real device results
2025-04-17T04:17:51+00:00
Quantum Approximate Optimization Algorithm (QAOA) enables solving combinatorial optimization problems on quantum computers by optimizing variational parameters for quantum circuits. We investigate a simplified approach that combines linear parameterization with parameter transferring, reducing the parameter space to just 4 dimensions regardless of the number of layers. This simplification draws inspiration from quantum annealing schedules providing both theoretical grounding and practical advantages. We compare this combined approach with standard QAOA and other parameter setting strategies such as INTERP and FOURIER, which require computationally demanding incremental layer-by-layer optimization. Notably, previously known methods like INTERP and FOURIER yield parameters that can be well fitted by linear functions, which supports our linearization strategy. Our analysis reveals that for the random Ising model, cost landscapes in this reduced parameter space demonstrate consistent structural patterns across different problem instances. Our experiments extend from classical simulation to actual quantum hardware implementation on IBM's Eagle processor, demonstrating the approach's viability on current NISQ devices. Furthermore, the numerical results indicate that parameter transferability primarily depends on the energy scale of problem instances, with normalization techniques improving transfer quality. Most of our numerical experiments are conducted on the random Ising model, while problem-dependence is also investigated across other models. A key advantage of parameter transferring is the complete elimination of instance-specific classical optimization overhead, as pre-trained parameters can be directly applied to other problem instances, reducing classical optimization costs by orders of magnitude for deeper circuits.
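A 4-parameter linear schedule can be written down directly. This sketch reflects our reading of "linearly fixed angles" (endpoints parameterizing per-layer gamma and beta), not necessarily the authors' exact parameterization:

```python
import numpy as np

def linear_angles(p, g0, g1, b0, b1):
    """Four numbers define all 2p QAOA angles, for any depth p."""
    layers = (np.arange(p) + 0.5) / p          # layer midpoints in (0, 1)
    gammas = g0 + (g1 - g0) * layers           # ramps up, like an annealing schedule
    betas = b0 + (b1 - b0) * layers            # typically ramps down
    return gammas, betas

# Endpoints tuned once on a small instance, reused verbatim at any depth:
g, b = linear_angles(p=20, g0=0.1, g1=0.8, b0=0.6, b1=0.05)
print(np.round(g, 3))
print(np.round(b, 3))
```

The annealing analogy is why this works: as depth grows, QAOA approaches a Trotterized adiabatic sweep, whose schedule is naturally smooth in the layer index.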
http://arxiv.org/abs/2504.12633v1
Towards Characterizing Subjectivity of Individuals through Modeling Value Conflicts and Trade-offs
2025-04-17T04:20:05+00:00
Large Language Models (LLMs) have not only solved complex reasoning problems but also exhibit remarkable performance in tasks that require subjective decision making. Existing studies suggest that LLM generations can be subjectively grounded to some extent, yet whether LLMs can account for individual-level subjectivity has not been sufficiently studied. In this paper, we characterize the subjectivity of individuals on social media and infer their moral judgments using LLMs. We propose a framework, SOLAR (Subjective Ground with Value Abstraction), that observes value conflicts and trade-offs in user-generated texts to better represent the subjective ground of individuals. Empirical results show that our framework improves overall inference results as well as performance on controversial situations. Additionally, we qualitatively show that SOLAR provides explanations of individuals' value preferences, which can further account for their judgments.
http://arxiv.org/abs/2504.12634v1
Toponium: Implementation of a toponium model in FeynRules
2025-04-17T04:31:54+00:00
Toponium -- a bound state of the top-antitop pair ($t\bar{t}$) -- emerges as the smallest and simplest hadronic system in QCD, with an ultrashort lifetime ($\tau_t \sim 2.5\times 10^{-25}$~s) and a femtometer-scale Bohr radius ($r_{\text{Bohr}} \sim 7\times 10^{-18}$~m). We present a computational framework extending the Standard Model (SM) with two S-wave toponium states: a spin-singlet $\eta_t$ ($J^{PC}=0^{-+}$) and a spin-triplet $J_t$ ($J^{PC}=1^{--}$). Using nonrelativistic QCD (NRQCD) and a Coulomb potential, we derived couplings to SM particles (gluons, electroweak bosons, Higgs boson, and fermion pairs) and implemented the Lagrangian in FeynRules, generating FeynArts, MadGraph, and WHIZARD models for collider simulations. Key results include dominant decay channels ($\eta_t \to gg/ZH$, $J_t \to W^+W^-/b\bar{b}$) and leading order (LO) cross sections for $pp \to \eta_t(nS) \to {\rm non-}t\bar{t}$ (66 fb at 13 TeV). The model avoids double-counting artifacts by excluding direct $t\bar{t}$ couplings, thereby ensuring consistency with perturbative QCD. This work establishes a complete pipeline for precision toponium studies, bridging NRQCD, collider phenomenology, and tests of SM validity at future lepton colliders (e.g., CEPC, FCC-ee, muon colliders) and the LHC. It provides the first publicly available UFO model for toponium, enabling direct integration with MadGraph and WHIZARD for simulations.
http://arxiv.org/abs/2504.12635v1
On Equivalence Between Decentralized Policy-Profile Mixtures and Behavioral Coordination Policies in Multi-Agent Systems
2025-04-17T04:34:14+00:00
Constrained decentralized team problem formulations are good models for many cooperative multi-agent systems. Constraints necessitate randomization when solving for optimal solutions -- our past results show that joint randomization amongst the team is necessary for (strong) Lagrangian duality to hold -- but a better understanding of randomization is still needed. For a partially observed multi-agent system with a Borel hidden state and finite observations and actions, we prove the equivalence between joint mixtures of decentralized policy-profiles (both pure and behavioral) and common-information based behavioral coordination policies (as well as mixtures of them). This generalizes past work that shows equivalence between pure decentralized policy-profiles and pure coordination policies. The equivalence can be exploited to develop results on strong duality and the number of randomizations.
http://arxiv.org/abs/2504.12636v1
A0: An Affordance-Aware Hierarchical Model for General Robotic Manipulation
2025-04-17T04:45:15+00:00
Robotic manipulation faces critical challenges in understanding spatial affordances--the "where" and "how" of object interactions--essential for complex manipulation tasks like wiping a board or stacking objects. Existing methods, including modular and end-to-end approaches, often lack robust spatial reasoning capabilities. Unlike recent point-based and flow-based affordance methods that focus on dense spatial representations or trajectory modeling, we propose A0, a hierarchical affordance-aware diffusion model that decomposes manipulation tasks into high-level spatial affordance understanding and low-level action execution. A0 leverages the Embodiment-Agnostic Affordance Representation, which captures object-centric spatial affordances by predicting contact points and post-contact trajectories. A0 is pre-trained on a dataset of 1 million contact points and fine-tuned on annotated trajectories, enabling generalization across platforms. Key components include Position Offset Attention for motion-aware feature extraction and a Spatial Information Aggregation Layer for precise coordinate mapping. The model's output is executed by the action execution module. Experiments on multiple robotic systems (Franka, Kinova, Realman, and Dobot) demonstrate A0's superior performance in complex tasks, showcasing its efficiency, flexibility, and real-world applicability.
http://arxiv.org/abs/2504.12637v1
Scaling Instruction-Tuned LLMs to Million-Token Contexts via Hierarchical Synthetic Data Generation
2025-04-17T04:46:57+00:00
Large Language Models (LLMs) struggle with long-context reasoning, not only due to the quadratic scaling of computational complexity with sequence length but also because of the scarcity and expense of annotating long-context data. There has been barely any open-source work that systematically ablates long-context data, nor is there any openly available instruction tuning dataset with contexts surpassing 100K tokens. To bridge this gap, we introduce a novel post-training synthetic data generation strategy designed to efficiently extend the context window of LLMs while preserving their general task performance. Our approach scalably extends to arbitrarily long context lengths, unconstrained by the length of available real-world data, which effectively addresses the scarcity of raw long-context data. Through a step-by-step rotary position embedding (RoPE) scaling training strategy, we demonstrate that our model, with a context length of up to 1M tokens, performs well on the RULER benchmark and InfiniteBench and maintains robust performance on general language tasks.
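The RoPE scaling the abstract mentions builds on a standard trick: enlarging the rotary base stretches positional wavelengths so the model can address far longer contexts. A generic sketch (the specific base values are our own illustrative steps, not the paper's schedule):

```python
import numpy as np

def rope_freqs(dim, base):
    """Inverse frequencies of rotary position embedding: base^(-2i/dim)."""
    return base ** (-np.arange(0, dim, 2) / dim)

dim = 128
for base in [10_000, 500_000, 5_000_000]:      # hypothetical step-by-step scaling
    inv_freq = rope_freqs(dim, base)
    max_wavelength = 2 * np.pi / inv_freq.min()
    print(f"base={base:>9,}  longest wavelength ~ {max_wavelength:,.0f} tokens")
```

Raising the base in stages, with fine-tuning at each stage, is the usual way to let attention remain discriminative at positions far beyond the pre-training window.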