url | title | date_published | abstract
---|---|---|---|
http://arxiv.org/abs/2504.10768v1 | The Art of Audience Engagement: LLM-Based Thin-Slicing of Scientific Talks | 2025-04-15T00:08:13+00:00 | This paper examines the thin-slicing approach - the ability to make accurate judgments based on minimal information - in the context of scientific presentations. Drawing on research from nonverbal communication and personality psychology, we show that brief excerpts (thin slices) reliably predict overall presentation quality. Using a novel corpus of over one hundred real-life science talks, we employ Large Language Models (LLMs) to evaluate transcripts of full presentations and their thin slices. By correlating LLM-based evaluations of short excerpts with full-talk assessments, we determine how much information is needed for accurate predictions. Our results demonstrate that LLM-based evaluations align closely with human ratings, proving their validity, reliability, and efficiency. Critically, even very short excerpts (less than 10 percent of a talk) strongly predict overall evaluations. This suggests that the first moments of a presentation convey relevant information that is used in quality evaluations and can shape lasting impressions. The findings are robust across different LLMs and prompting strategies. This work extends thin-slicing research to public speaking and connects theories of impression formation to LLMs and current research on AI communication. We discuss implications for communication and social cognition research on message reception. Lastly, we suggest an LLM-based thin-slicing framework as a scalable feedback tool to enhance human communication. |
http://arxiv.org/abs/2504.10769v1 | Three-dimensional neural network driving self-interference digital holography enables high-fidelity, non-scanning volumetric fluorescence microscopy | 2025-04-15T00:14:23+00:00 | We present a deep-learning-driven computational approach to overcome the limitations of self-interference digital holography imposed by its inferior axial imaging performance. We demonstrate that a 3D deep neural network model can simultaneously suppress the defocus noise and improve the spatial resolution and signal-to-noise ratio of holographic reconstructions obtained by conventional numerical back-propagation. Compared with existing 2D deep neural networks used for hologram reconstruction, our 3D model exhibits superior performance in enhancing resolution along all three spatial dimensions. As a result, non-scanning 3D volumetric fluorescence microscopy can be achieved using a 2D self-interference hologram as input, without any mechanical or opto-electronic scanning and without complicated system calibration. Our method offers a high-spatiotemporal-resolution 3D imaging approach that can potentially benefit, for example, the visualization of cellular structural dynamics and the measurement of the 3D behavior of high-speed flow fields. |
http://arxiv.org/abs/2504.10770v1 | Collaborative Bayesian Optimization via Wasserstein Barycenters | 2025-04-15T00:15:09+00:00 | Motivated by the growing need for black-box optimization and data privacy, we introduce a collaborative Bayesian optimization (BO) framework that addresses both of these challenges. In this framework, agents work collaboratively to optimize a function they only have oracle access to. To mitigate communication and privacy constraints, agents are not allowed to share their data but can share their Gaussian process (GP) surrogate models. To enable collaboration under these constraints, we construct a central model to approximate the objective function by leveraging the concept of Wasserstein barycenters of GPs. This central model integrates the shared models without accessing the underlying data. A key aspect of our approach is a collaborative acquisition function that balances exploration and exploitation, allowing for the optimization of decision variables collaboratively in each iteration. We prove that our proposed algorithm is asymptotically consistent and that its implementation via Monte Carlo methods is numerically accurate. Through numerical experiments, we demonstrate that our approach outperforms other baseline collaborative frameworks and is competitive with centralized approaches that do not consider data privacy. (See the worked sketch after this table.) |
http://arxiv.org/abs/2504.10771v1 | Simon's Period Finding on a Quantum Annealer | 2025-04-15T00:17:13+00:00 | Dating to 1994, Simon's period-finding algorithm is among the earliest and most fragile of quantum algorithms. The algorithm's fragility arises from the requirement that, to solve an n qubit problem, one must fault-tolerantly sample O(n) linearly independent values from a solution space. In this paper, we study an adiabatic implementation of Simon's algorithm that requires a constant number of successful samples regardless of problem size. We implement this algorithm on D-Wave hardware and solve problems with up to 298 qubits. We compare the runtime of classical algorithms to the D-Wave solution to analyze any potential advantage. |
http://arxiv.org/abs/2504.10772v1 | Scanning-free three-dimensional fluorescent dipoles imaging by polarization self-interference digital holography (pSIDH) | 2025-04-15T00:19:51+00:00 | Polarization microscopy provides insights into the structure and orientational organization of biomolecules and their architectures in cells. These key functional signatures, which are natively 3D, can only be detected in 2D in a single measurement with conventional polarization microscopy. It remains a challenging task to capture the 3D structure and molecular orientation simultaneously in a single frame of far-field intensity distribution, within the timescale of rapidly occurring spatial organization events of bio-complexes. We report an optical imaging method, pSIDH, that encodes multidimensional sample information, including 3D structures and dipole orientations, in the far-field fluorescence self-interference pattern. Computational reconstruction from the holographically extracted complex-valued light field provides optical-aberration-corrected 3D polarization images of the sample. In a pSIDH microscope incorporating a planar liquid crystal lens and a high-numerical-aperture objective, we demonstrate scanning-free 3D volumetric polarization imaging of fluorescently labelled samples, with computationally improved measurement accuracy in both the 3D spatial and polarization dimensions. pSIDH imaging of phalloidin-fluorophore-labelled U2OS cells provides a rapid tool for simultaneously capturing the 3D structural details and spatially averaged molecular orientation distributions of complex biological architectures such as actin filaments. |
http://arxiv.org/abs/2504.10773v1 | The origin of X-ray intra-day variability in HBL PKS 2155-304 | 2025-04-15T00:23:03+00:00 | The origin and physics of X-ray intra-day variability (IDV) in blazars, a long-standing issue, are studied by modelling the broad-band X-ray spectrum, the light curves (LCs), and the Fourier time lags. We present the timing analysis of three archived XMM-Newton observations with a total exposure of $>80$ ks of PKS 2155-304, which is one of the brightest and most studied HBLs in the X-ray band. For each observation, we constructed averaged X-ray spectra in the 0.5-10 keV band, as well as 100 s binned LCs in various sub-bands. We performed Bayesian power spectral density (PSD) analysis and Fourier time-lag analyses of the variable LCs. The results are carefully modelled in the context of a multi-zone jet model. The PSD analysis reveals that the X-ray variability can be characterised by red noise. The lag-frequency spectra measured in two observations show only soft or negative lags, with the magnitude of the lags increasing as the frequency decreases. For the remaining observation, the lag-frequency spectra are characterised by small positive or zero time lags at the lowest frequencies, which drop to negative values at higher frequencies. The magnitude of the soft lags ranges from $\sim5$ to $\sim40$ minutes and increases with the energy difference of the two compared LCs. The observed X-ray spectra and lag-frequency spectra can both be successfully described by our proposed two-zone model, with the physical parameters constrained in a fully acceptable space. Moreover, the LC profiles in different energy bands can be satisfactorily reproduced by varying only the injection rate of the energetic electrons. The IDV of PKS 2155-304 is thus likely caused by the injection of energetic electrons accelerated by shocks formed in a weakly magnetised jet. |
http://arxiv.org/abs/2504.10774v1 | Ground-State-Based Model Reduction with Unitary Circuits | 2025-04-15T00:23:13+00:00 | We present a method to numerically obtain low-energy effective models based on a unitary transformation of the ground state. The algorithm finds a unitary circuit that transforms the ground state of the original model to a projected wavefunction with only the low-energy degrees of freedom. The effective model can then be derived using the unitary transformation encoded in the circuit. We test our method on the one-dimensional and two-dimensional square-lattice Hubbard model at half-filling, and obtain more accurate effective spin models than the standard perturbative approach. (See the worked sketch after this table.) |
http://arxiv.org/abs/2504.10775v1 | Generative and Explainable AI for High-Dimensional Channel Estimation | 2025-04-15T00:29:40+00:00 | In this paper, we propose a new adversarial training framework to address high-dimensional instantaneous channel estimation in wireless communications. Specifically, we train a generative adversarial network to predict a channel realization in the time-frequency-space domain, in which the generator exploits the third-order moment of the input in its loss function and applies a new reparameterization method for latent distribution learning to minimize the Wasserstein distance between the true and estimated channel distributions. Next, we propose an explainable artificial intelligence mechanism to examine how the critic discriminates the generated channel. We demonstrate that our proposed framework is superior to existing methods in terms of minimizing estimation errors. Additionally, we find that the critic's attention focuses on the high-power portion of the channel's time-frequency representation. |
http://arxiv.org/abs/2504.10776v1 | Rainy: Unlocking Satellite Calibration for Deep Learning in Precipitation | 2025-04-15T00:30:46+00:00 | Precipitation plays a critical role in the Earth's hydrological cycle, directly affecting ecosystems, agriculture, and water resource management. Accurate precipitation estimation and prediction are crucial for understanding climate dynamics, disaster preparedness, and environmental monitoring. In recent years, artificial intelligence (AI) has gained increasing attention in quantitative remote sensing (QRS), enabling more advanced data analysis and improving precipitation estimation accuracy. Although traditional methods have been widely used for precipitation estimation, they face limitations due to the difficulty of data acquisition and the challenge of capturing complex feature relationships. Furthermore, the lack of standardized multi-source satellite datasets, and in most cases, the exclusive reliance on station data, significantly hinders the effective application of advanced AI models. To address these challenges, we propose the Rainy dataset, a multi-source spatio-temporal dataset that integrates pure satellite data with station data, and propose Taper Loss, designed to fill the gap in tasks where only in-situ data is available without area-wide support. The Rainy dataset supports five main tasks: (1) satellite calibration, (2) precipitation event prediction, (3) precipitation level prediction, (4) spatiotemporal prediction, and (5) precipitation downscaling. For each task, we selected benchmark models and evaluation metrics to provide valuable references for researchers. Using precipitation as an example, the Rainy dataset and Taper Loss demonstrate the seamless collaboration between QRS and computer vision, offering data support for AI for Science in the field of QRS and providing valuable insights for interdisciplinary collaboration and integration. |
http://arxiv.org/abs/2504.10777v1 | AtlasD: Automatic Local Symmetry Discovery | 2025-04-15T00:41:55+00:00 | Existing symmetry discovery methods predominantly focus on global transformations across the entire system or space, but they fail to consider the symmetries in local neighborhoods. This may result in the reported symmetry group being a misrepresentation of the true symmetry. In this paper, we formalize the notion of local symmetry as atlas equivariance. Our proposed pipeline, automatic local symmetry discovery (AtlasD), recovers the local symmetries of a function by training local predictor networks and then learning a Lie group basis to which the predictors are equivariant. We demonstrate AtlasD is capable of discovering local symmetry groups with multiple connected components in top-quark tagging and partial differential equation experiments. The discovered local symmetry is shown to be a useful inductive bias that improves the performance of downstream tasks in climate segmentation and vision tasks. |
http://arxiv.org/abs/2504.10778v1 | Deciphering Spin-Parity Assignments of Nuclear Levels | 2025-04-15T00:45:25+00:00 | Spin-parity assignments of nuclear levels are critical for understanding nuclear structure and reactions. However, inconsistent notation conventions and ambiguous reporting in research papers often lead to confusion and misinterpretations. This paper examines the policies of the Evaluated Nuclear Structure Data File (ENSDF) and the evaluations by Endt and collaborators, highlighting key differences in their approaches to spin-parity notation. Sources of confusion are identified, including ambiguous use of strong and weak arguments and the conflation of new experimental results with prior constraints. Recommendations are provided to improve clarity and consistency in reporting spin-parity assignments, emphasizing the need for explicit notation conventions, clear differentiation of argument strengths, community education, and separate reporting of new findings. These steps aim to enhance the accuracy and utility of nuclear data for both researchers and evaluators. |
http://arxiv.org/abs/2504.10779v1 | Conformally Invariant Dirac Equation with Non-Local Nonlinearity | 2025-04-15T00:45:47+00:00 | We study a conformally invariant equation involving the Dirac operator and a non-linearity of convolution type. This non-linearity is inspired by the conformal Einstein-Dirac problem in dimension 4. We first investigate the compactness, bubbling, and energy quantization of the associated energy functional, and then we characterize the ground state solutions of the problem on the standard sphere. As a consequence, we prove an Aubin-type inequality that assures the existence of solutions to our problem and in particular to the conformal Einstein-Dirac problem in dimension 4. Moreover, we investigate the effect of a linear perturbation of our problem, leading us to a Brezis-Nirenberg type result. |
http://arxiv.org/abs/2504.10780v1 | Energy shifts in predissociating levels of diatomic molecules: The case of N$_2$ (C$''^5Π_u$) and N$_2$(1$^7Σ^+_u$) interacting states | 2025-04-15T00:47:37+00:00 | This work presents a perturbative calculation methodology for evaluating the energy shifts and broadening of vibrational energy levels caused by interactions between bound and unbound dissociative electronic states. The method is validated against cases previously analyzed semiclassically, demonstrating remarkable consistency. We successfully applied this approach to the N$_2$ molecule, which exhibits a strong spin-orbit interaction, of around 36 cm$^{-1}$, between the bound C$''^5\Pi_u$ and the repulsive 1$^7\Sigma^+_u$ electronic states. This interaction constitutes a major pathway for N($^{2}$D) production, important in both excitation and quenching in plasma afterglows. As a result, a maximum absolute shift of 0.15 cm$^{-1}$ was found for C$''^5\Pi_u$ ($v$ = 7) and a maximum broadening of 0.45 cm$^{-1}$ was calculated for $v$ = 8, demonstrating significant perturbation of the C$''^5\Pi_u$ state by the 1$^7\Sigma^+_u$ state. The results obtained were compared with direct calculations of the predissociation rates of the C$''^5\Pi_u$ bound state, showing very good agreement. |
http://arxiv.org/abs/2504.10781v1 | Neural Network Emulation of the Classical Limit in Quantum Systems via Learned Observable Mappings | 2025-04-15T00:48:36+00:00 | The classical limit of quantum mechanics, formally investigated through frameworks like strict deformation quantization, remains a profound area of inquiry in the philosophy of physics. This paper explores a computational approach employing a neural network to emulate the emergence of classical behavior from the quantum harmonic oscillator as Planck's constant $\hbar$ approaches zero. We develop and train a neural network architecture to learn the mapping from initial expectation values and $\hbar$ to the time evolution of the expectation value of position. By analyzing the network's predictions across different regimes of $\hbar$, we aim to provide computational insights into the nature of the quantum-classical transition. This work demonstrates the potential of machine learning as a complementary tool for exploring foundational questions in quantum mechanics and its classical limit. (See the worked sketch after this table.) |
http://arxiv.org/abs/2504.10782v1 | Deep Audio Watermarks are Shallow: Limitations of Post-Hoc Watermarking Techniques for Speech | 2025-04-15T00:52:01+00:00 | In the audio modality, state-of-the-art watermarking methods leverage deep neural networks to allow the embedding of human-imperceptible signatures in generated audio. The ideal is to embed signatures that can be detected with high accuracy when the watermarked audio is altered via compression, filtering, or other transformations. Existing audio watermarking techniques operate in a post-hoc manner, manipulating "low-level" features of audio recordings after generation (e.g. through the addition of a low-magnitude watermark signal). We show that this post-hoc formulation makes existing audio watermarks vulnerable to transformation-based removal attacks. Focusing on speech audio, we (1) unify and extend existing evaluations of the effect of audio transformations on watermark detectability, and (2) demonstrate that state-of-the-art post-hoc audio watermarks can be removed with no knowledge of the watermarking scheme and minimal degradation in audio quality. |
http://arxiv.org/abs/2504.10783v1 | Superfast Configuration-Space Convex Set Computation on GPUs for Online Motion Planning | 2025-04-15T00:54:55+00:00 | In this work, we leverage GPUs to construct probabilistically collision-free convex sets in robot configuration space on the fly. This extends the use of modern motion planning algorithms that leverage such representations to changing environments. These planners rapidly and reliably optimize high-quality trajectories, without the burden of challenging nonconvex collision-avoidance constraints. We present an algorithm that inflates collision-free piecewise linear paths into sequences of convex sets (SCS) that are probabilistically collision-free using massive parallelism. We then integrate this algorithm into a motion planning pipeline, which leverages dynamic roadmaps to rapidly find one or multiple collision-free paths, and inflates them. We then optimize the trajectory through the probabilistically collision-free sets, simultaneously using the candidate trajectory to detect and remove collisions from the sets. We demonstrate the efficacy of our approach on a simulation benchmark and a KUKA iiwa 7 robot manipulator with perception in the loop. On our benchmark, our approach runs 17.1 times faster and yields a 27.9% increase in reliability over the nonlinear trajectory optimization baseline, while still producing high-quality motion plans. |
http://arxiv.org/abs/2504.10784v1 | ATLASv2: LLM-Guided Adaptive Landmark Acquisition and Navigation on the Edge | 2025-04-15T00:55:57+00:00 | Autonomous systems deployed on edge devices face significant challenges, including resource constraints, real-time processing demands, and adapting to dynamic environments. This work introduces ATLASv2, a novel system that integrates a fine-tuned TinyLLM, real-time object detection, and efficient path planning to enable hierarchical, multi-task navigation and manipulation, all on an edge device, the Jetson Nano. ATLASv2 dynamically expands its set of navigable landmarks by detecting and localizing objects in the environment, which are saved to its internal knowledge base for future task execution. We evaluate ATLASv2 in real-world environments, including a handcrafted home and office setting constructed with diverse objects and landmarks. Results show that ATLASv2 effectively interprets natural language instructions, decomposes them into low-level actions, and executes tasks with high success rates. By leveraging generative AI in a fully on-board framework, ATLASv2 achieves optimized resource utilization with minimal prompting latency and power consumption, bridging the gap between simulated environments and real-world applications. |
http://arxiv.org/abs/2504.10785v1 | Non-resonant two-photon x-ray absorption in Cu | 2025-04-15T00:59:35+00:00 | We present a real-space Green's function theory and calculations of two-photon x-ray absorption (TPA). Our focus is on non-resonant K-shell TPA in metallic Cu, which has been observed experimentally at intense x-ray free electron laser (XFEL) sources. The theory is based on an independent-particle Green's function treatment of the Kramers-Heisenberg equation and an approximation for the sum over non-resonant intermediate states in terms of a static quadrupole transition operator. XFEL effects are modeled by a partially depleted d-band. This approach is shown to give results for K-shell TPA in quantitative agreement with XFEL experiment and with a Bethe-Salpeter Equation approach. We also briefly discuss many-body corrections and TPA sum-rules. |
http://arxiv.org/abs/2504.10786v2 | Visual Language Models show widespread visual deficits on neuropsychological tests | 2025-04-15T01:04:56+00:00 | Visual Language Models (VLMs) show remarkable performance in visual reasoning tasks, successfully tackling college-level challenges that require high-level understanding of images. However, some recent reports of VLMs struggling to reason about elemental visual concepts like orientation, position, continuity, and occlusion suggest a potential gulf between human and VLM vision. Here we use the toolkit of neuropsychology to systematically assess the capabilities of three state-of-the-art VLMs across visual domains. Using 51 tests drawn from six clinical and experimental batteries, we characterise the visual abilities of leading VLMs relative to normative performance in healthy adults. While the models excel in straightforward object recognition tasks, we find widespread deficits in low- and mid-level visual abilities that would be considered clinically significant in humans. These selective deficits, profiled through validated test batteries, suggest that an artificial system can achieve complex object recognition without developing foundational visual concepts that in humans require no explicit training. |
http://arxiv.org/abs/2504.10787v1 | Greedy Beta-expansions for families of Salem numbers | 2025-04-15T01:09:26+00:00 | We give criteria for finding the greedy $\beta$-expansion for $1$ for families of Salem numbers that approach a given Pisot number. We show that these expansions are related to the greedy expansion under the Pisot base. This expands on the work of Hare and Tweedle. |
http://arxiv.org/abs/2504.10788v1 | A novel heuristic algorithm: adaptive and various learning-based algorithm | 2025-04-15T01:16:00+00:00 | In this paper, a novel population-based heuristic algorithm, the adaptive and various learning-based algorithm (AVLA), is proposed for solving general optimization problems. The main idea of AVLA is inspired by the learning behaviors of individuals in a group, e.g. a school class. The algorithm formulates the following learning behaviors: a. Elite members will learn from each other; b. A common member will learn from some elite member and other common members; c. Members with unsatisfactory performance will reflect on their behavior after performance estimation; d. The whole group will reflect on its behavior and try to improve if the performance of the group as a whole has not improved for a long time. AVLA adopts success-history-based parameter adaptation to lighten the burden of parameter adjustment. To verify the efficiency of AVLA, we apply it and its non-adaptive version, together with eight other well-known heuristics, to 100 benchmark problems. The comparison clearly shows that AVLA performs as well as SHADE, and the non-adaptive version of AVLA outperforms all others except AVLA and SHADE. |
http://arxiv.org/abs/2504.10789v1 | Can Large Language Models Trade? Testing Financial Theories with LLM Agents in Market Simulations | 2025-04-15T01:18:36+00:00 | This paper presents a realistic simulated stock market where large language models (LLMs) act as heterogeneous competing trading agents. The open-source framework incorporates a persistent order book with market and limit orders, partial fills, dividends, and equilibrium clearing alongside agents with varied strategies, information sets, and endowments. Agents submit standardized decisions using structured outputs and function calls while expressing their reasoning in natural language. Three findings emerge: First, LLMs demonstrate consistent strategy adherence and can function as value investors, momentum traders, or market makers per their instructions. Second, market dynamics exhibit features of real financial markets, including price discovery, bubbles, underreaction, and strategic liquidity provision. Third, the framework enables analysis of LLMs' responses to varying market conditions, similar to partial dependence plots in machine-learning interpretability. The framework allows simulating financial theories without closed-form solutions, creating experimental designs that would be costly with human participants, and establishing how prompts can generate correlated behaviors affecting market stability. |
http://arxiv.org/abs/2504.10790v1 | Symplectic Non-hyperbolicity | 2025-04-15T01:24:49+00:00 | Complex (affine) lines are a major object of study in complex geometry, but their symplectic aspects are not well understood. We perform a systematic study based on their associated Ahlfors currents. In particular, we generalize (by a different method) a result of Bangert on the existence of complex lines. We show that Ahlfors currents control the asymptotic behavior of families of pseudoholomorphic curves, refining a result of Demailly. Lastly, we show that the space of Ahlfors currents is convex. |
http://arxiv.org/abs/2504.10791v1 | Proposal of a generating function of partition sequences | 2025-04-15T01:25:31+00:00 | In this paper, we introduce the generating functions of partition sequences. Partition sequences have a one-to-one correspondence with partitions. Therefore, the generating function has no multiplicity and appears meaningless initially. However, we show that using a matrix can give meaning to the coefficients and preserve valuable information about partitions. We also introduce some restrictions on partitions suitable for these generating functions. |
http://arxiv.org/abs/2504.10792v1 | GUM-SAGE: A Novel Dataset and Approach for Graded Entity Salience Prediction | 2025-04-15T01:26:14+00:00 | Determining and ranking the most salient entities in a text is critical for user-facing systems, especially as users increasingly rely on models to interpret long documents they only partially read. Graded entity salience addresses this need by assigning entities scores that reflect their relative importance in a text. Existing approaches fall into two main categories: subjective judgments of salience, which allow for gradient scoring but lack consistency, and summarization-based methods, which define salience as mention-worthiness in a summary, promoting explainability but limiting outputs to binary labels (entities are either summary-worthy or not). In this paper, we introduce a novel approach for graded entity salience that combines the strengths of both approaches. Using an English dataset spanning 12 spoken and written genres, we collect 5 summaries per document and calculate each entity's salience score based on its presence across these summaries. Our approach shows stronger correlation with scores based on human summaries and alignments, and outperforms existing techniques, including LLMs. We release our data and code at https://github.com/jl908069/gum_sum_salience to support further research on graded salient entity extraction. (See the worked sketch after this table.) |
http://arxiv.org/abs/2504.10793v1 | SonicSieve: Bringing Directional Speech Extraction to Smartphones Using Acoustic Microstructures | 2025-04-15T01:30:48+00:00 | Imagine placing your smartphone on a table in a noisy restaurant and clearly capturing the voices of friends seated around you, or recording a lecturer's voice with clarity in a reverberant auditorium. We introduce SonicSieve, the first intelligent directional speech extraction system for smartphones using a bio-inspired acoustic microstructure. Our passive design embeds directional cues onto incoming speech without any additional electronics. It attaches to the in-line mic of low-cost wired earphones which can be attached to smartphones. We present an end-to-end neural network that processes the raw audio mixtures in real-time on mobile devices. Our results show that SonicSieve achieves a signal quality improvement of 5.0 dB when focusing on a 30° angular region. Additionally, the performance of our system based on only two microphones exceeds that of conventional 5-microphone arrays. |
http://arxiv.org/abs/2504.10794v2 | Cataclysmic Variable Candidates Identified in eROSITA-DE DR1, XMM-Newton, Swift, and ROSAT Catalogs | 2025-04-15T01:33:42+00:00 | Cataclysmic variables (CVs) are binary systems with a white dwarf accreting matter from a low-mass star, making them significant sources of X-ray emission in the Galaxy. We present a systematic search for X-ray emitting CV candidates by cross-matching four X-ray catalogs (eROSITA, XMM-Newton, Swift, and ROSAT) with Gaia sources located in the bridge region between the main sequence and white dwarf cooling sequence in the Hertzsprung-Russell diagram. From 444 candidates (267 confirmed CVs and 177 new candidates), we detect orbital modulation in 56 sources using ZTF/TESS light curves. The eROSITA catalog contributes 51% of candidates, outperforming other surveys due to its wider sky coverage and higher sensitivity (~10^{-14} erg cm^{-2} s^{-1}). Our method demonstrates the efficiency of combining X-ray data with time-domain analysis for CV identification, with future eROSITA observations expected to expand the population of X-ray emitting CVs. |
http://arxiv.org/abs/2504.10795v1 | 3D Wavelet Convolutions with Extended Receptive Fields for Hyperspectral Image Classification | 2025-04-15T01:39:42+00:00 | Deep neural networks face numerous challenges in hyperspectral image classification, including high-dimensional data, sparse ground object distributions, and spectral redundancy, which often lead to classification overfitting and limited generalization capability. To better adapt to ground object distributions while expanding receptive fields without introducing excessive parameters and skipping redundant information, this paper proposes WCNet, an improved 3D-DenseNet model integrated with wavelet transforms. We introduce wavelet transforms to effectively extend convolutional receptive fields and guide CNNs to better respond to low frequencies through cascading, termed wavelet convolution. Each convolution focuses on different frequency bands of the input signal with gradually increasing effective ranges. This process enables greater emphasis on low-frequency components while adding only a small number of trainable parameters. This dynamic approach allows the model to flexibly focus on critical spatial structures when processing different regions, rather than relying on fixed receptive fields of single static kernels. The Wavelet Conv module enhances model representation capability by expanding receptive fields through 3D wavelet transforms without increasing network depth or width. Experimental results demonstrate superior performance on the IN, UP, and KSC datasets, outperforming mainstream hyperspectral image classification methods. |
http://arxiv.org/abs/2504.10796v2 | Wasserstein Distributionally Robust Regret Optimization | 2025-04-15T01:47:11+00:00 | Distributionally Robust Optimization (DRO) is a popular framework for decision-making under uncertainty, but its adversarial nature can lead to overly conservative solutions. To address this, we study ex-ante Distributionally Robust Regret Optimization (DRRO), focusing on Wasserstein-based ambiguity sets which are popular due to their links to regularization and machine learning. We provide a systematic analysis of Wasserstein DRRO, paralleling known results for Wasserstein DRO. Under smoothness and regularity conditions, we show that Wasserstein DRRO coincides with Empirical Risk Minimization (ERM) up to first-order terms, and exactly so in convex quadratic settings. We revisit the Wasserstein DRRO newsvendor problem, where the loss is the maximum of two linear functions of demand and decision. Extending [25], we show that the regret can be computed by maximizing two one-dimensional concave functions. For more general loss functions involving the maximum of multiple linear terms in multivariate random variables and decision vectors, we prove that computing the regret and thus also the DRRO policy is NP-hard. We then propose a convex relaxation for these more general Wasserstein DRRO problems and demonstrate its strong empirical performance. Finally, we provide an upper bound on the optimality gap of our relaxation and show it improves over recent alternatives. |
http://arxiv.org/abs/2504.10797v1 | Name of Thrones: Evaluating How LLMs Rank Student Names, Race, and Gender in Status Hierarchies | 2025-04-15T01:47:39+00:00 | Across cultures, names tell a lot about their bearers as they carry deep personal and cultural significance. Names also serve as powerful signals of gender, race, and status in the social hierarchy - a pecking order in which individual positions shape others' expectations on their perceived competence and worth. With the widespread adoption of LLMs and as names are often an input for LLMs, it is crucial to evaluate whether LLMs may sort people into status positions based on first and last names and, if so, whether it is in an unfair, biased fashion. While prior work has primarily investigated biases in first names, little attention has been paid to last names and even less to the combined effects of first and last names. In this study, we conduct a large-scale analysis of name variations across 5 ethnicities to examine how AI exhibits name biases. Our study investigates three key characteristics of inequality and finds that LLMs reflect and reinforce status hierarchies based on names that signal gender and ethnicity as they encode differential expectations of competence, leadership, and economic potential. Contrary to the common assumption that AI tends to favor Whites, we show that East and, in some contexts, South Asian names receive higher rankings. We also disaggregate Asians, a population projected to be the largest immigrant group in the U.S. by 2055. Our results challenge the monolithic Asian model minority assumption, illustrating a more complex and stratified model of bias. Gender moderates biases, with girls facing unfair disadvantages in certain racial groups. Additionally, spanning cultural categories by adopting Western first names improves AI-perceived status for East and Southeast Asian students, particularly for girls. Our findings underscore the importance of intersectional and more nuanced understandings of race, gender, and mixed identities in the evaluation of LLMs. |
http://arxiv.org/abs/2504.10798v1 | AdapCsiNet: Environment-Adaptive CSI Feedback via Scene Graph-Aided Deep Learning | 2025-04-15T01:51:15+00:00 | Accurate channel state information (CSI) is critical for realizing the full potential of multiple-antenna wireless communication systems. While deep learning (DL)-based CSI feedback methods have shown promise in reducing feedback overhead, their generalization capability across varying propagation environments remains limited due to their data-driven nature. Existing solutions based on online training improve adaptability but impose significant overhead in terms of data collection and computational resources. In this work, we propose AdapCsiNet, an environment-adaptive DL-based CSI feedback framework that eliminates the need for online training. By integrating environmental information -- represented as a scene graph -- into a hypernetwork-guided CSI reconstruction process, AdapCsiNet dynamically adapts to diverse channel conditions. A two-step training strategy is introduced to ensure baseline reconstruction performance and effective environment-aware adaptation. Simulation results demonstrate that AdapCsiNet achieves up to 46.4% improvement in CSI reconstruction accuracy and matches the performance of online learning methods without incurring additional runtime overhead. |
http://arxiv.org/abs/2504.10799v1 | Double-optical phase-transition in a three-level Rydberg state in thermal Rubidium vapor | 2025-04-15T01:52:28+00:00 | We report on the observation of electromagnetically induced transparency (EIT) with intrinsic phase transitions in a three-level ladder system within rubidium atomic vapor. The observed abrupt transitions between low and high Rydberg occupancy states manifest in the probe beam transmission, depending on the principal quantum number, the Rabi frequency of the coupling field, atomic density, and probe beam detuning. Our study elucidates the underlying interaction mechanisms governing the EIT phase transition and enriches the existing body of experiments on multi-parameter regulation of phase transitions. These findings establish a robust platform for investigating nonequilibrium phase transitions in atomic ensembles, bridging the gap between classical mean-field theories and microscopic quantum dynamics. |
http://arxiv.org/abs/2504.10800v1 | Products of Recursive Programs for Hypersafety Verification | 2025-04-15T01:52:50+00:00 | We study the problem of automated hypersafety verification of infinite-state recursive programs. We propose an infinite class of product programs, specifically designed with recursion in mind, that reduce the hypersafety verification of a recursive program to standard safety verification. For this, we combine insights from language theory and concurrency theory to propose an algorithmic solution for constructing an infinite class of recursive product programs. One key insight is that, using the simple theory of visibly pushdown languages, one can maintain the recursive structure of syntactic program alignments which is vital to constructing a new product program that can be viewed as a classic recursive program -- that is, one that can be executed on a single stack. Another key insight is that techniques from concurrency theory can be generalized to help define product programs based on the view that the parallel composition of individual recursive programs includes all possible alignments from which a sound set of alignments that faithfully preserve the satisfaction of the hypersafety property can be selected. On the practical side, we formulate a family of parametric canonical product constructions that are intuitive to programmers and can be used as building blocks to specify recursive product programs for the purpose of relational and hypersafety verification, with the idea that the right product program can be verified automatically using existing techniques. We demonstrate the effectiveness of these techniques through an implementation and highly promising experimental results. |
http://arxiv.org/abs/2504.10801v1 | Q-Cluster: Quantum Error Mitigation Through Noise-Aware Unsupervised Learning | 2025-04-15T01:53:39+00:00 | Quantum error mitigation (QEM) is critical in reducing the impact of noise in the pre-fault-tolerant era, and is expected to complement error correction in fault-tolerant quantum computing (FTQC). In this paper, we propose a novel QEM approach, Q-Cluster, that uses unsupervised learning (clustering) to reshape the measured bit-string distribution. Our approach starts with a simplified bit-flip noise model. It first performs clustering on noisy measurement results, i.e., bit-strings, based on the Hamming distance. The centroid of each cluster is calculated using a qubit-wise majority vote. Next, the noisy distribution is adjusted with the clustering outcomes and the bit-flip error rates using Bayesian inference. Our simulation results show that Q-Cluster can mitigate high noise rates (up to 40% per qubit) with the simple bit-flip noise model. However, real quantum computers do not fit such a simple noise model. To address the problem, we (a) apply Pauli twirling to tailor the complex noise channels to Pauli errors, and (b) employ a machine learning model, ExtraTrees regressor, to estimate an effective bit-flip error rate using a feature vector consisting of machine calibration data (gate & measurement error rates), circuit features (number of qubits, numbers of different types of gates, etc.) and the shape of the noisy distribution (entropy). Our experimental results show that our proposed Q-Cluster scheme improves the fidelity by a factor of 1.46x, on average, compared to the unmitigated output distribution, for a set of low-entropy benchmarks on five different IBM quantum machines. Our approach outperforms the state-of-the-art QEM approaches M3 [24], Hammer [35], and QBEEP [33] by 1.29x, 1.47x, and 2.65x, respectively. (See the worked sketch after this table.) |
http://arxiv.org/abs/2504.10802v1 | Quantum Geometry of the Light Cone: Fock representation and Spectrum of Radiated Power | 2025-04-15T01:58:17+00:00 | Starting from the symplectic potential for the $\gamma$-Palatini--Holst action on a null hypersurface, we identify an auxiliary conformal field theory (CFT), which carries a representation of the constraint algebra of general relativity on a null surface. The radiative data, which is encoded into the shear of each null generator, is mapped into an $SU(1,1)$ current algebra on each light ray. We study the resulting quantum theory for both bosonic and fermionic representations. In the fermionic representation, the central charge on each null ray is positive; for bosons it is negative. A negative central charge implies a non-unitary CFT, which has negative norm states. In the model, there is a natural $SU(1,1)$ Casimir. For the bosonic representations, the $SU(1,1)$ Casimir can have either sign. For the fermionic representations, the $SU(1,1)$ Casimir is always greater than or equal to zero. To exclude negative norm states, we restrict ourselves to the fermionic case. To understand the physical implications of this restriction, we express the $SU(1,1)$ Casimir in terms of the geometric data. In this way, the positivity bound on the $SU(1,1)$ Casimir translates into an upper bound for the shear of each null generator. In the model, this bound must be satisfied for all three-dimensional null hypersurfaces. This in turn suggests applying it to an entire null foliation in an asymptotically flat spacetime. In this way, we obtain a bound on the radiated power of gravitational waves in the model. |
http://arxiv.org/abs/2504.10803v1 | Control-driven critical fluctuations across quantum trajectories | 2025-04-15T01:59:09+00:00 | Monitored quantum circuits in which entangling unitary dynamics compete with projective local measurements can host measurement-induced phase transitions witnessed by entanglement measures at late times. Adding feedback conditioned on the measurement outcomes gives rise to another type of phase transition witnessed by local order parameters and correlation functions. These transitions, known as control or absorbing-state transitions, generically occur within the area-law entanglement phase and are thought to be governed by classical physics in that their critical exponents match those of the classical limit of the model. In this work, we examine quantum features of these transitions, focusing on a Bernoulli circuit model with a well-defined classical limit. First we demonstrate that, in the local basis defined by the absorbing state, the steady-state quantum coherence undergoes a phase transition at the control transition, where its logarithm changes discontinuously from volume- to area-law scaling. Second, we analyze the control transition from the perspective of fluctuations in observables, which carry two contributions: classical fluctuations over circuit realizations (present in the classical limit), and quantum fluctuations over trajectories and states (both absent in the classical limit). Both contributions can be estimated in experiments without post-selection. The circuit-to-circuit fluctuations, the dominant contribution, carry the critical behavior of the classical limit. However, the subleading quantum fluctuations that represent fluctuations between different quantum "worlds" also go critical at the control transition. These critical quantum fluctuations at the control transition also occur in other models, and we discuss how they can be measured experimentally without post-selection. |
http://arxiv.org/abs/2504.10804v1 | The Sword of Damocles in ViTs: Computational Redundancy Amplifies Adversarial Transferability | 2025-04-15T01:59:47+00:00 | Vision Transformers (ViTs) have demonstrated impressive performance across a range of applications, including many safety-critical tasks. However, their unique architectural properties raise new challenges and opportunities in adversarial robustness. In particular, we observe that adversarial examples crafted on ViTs exhibit higher transferability compared to those crafted on CNNs, suggesting that ViTs contain structural characteristics favorable for transferable attacks. In this work, we investigate the role of computational redundancy in ViTs and its impact on adversarial transferability. Unlike prior studies that aim to reduce computation for efficiency, we propose to exploit this redundancy to improve the quality and transferability of adversarial examples. Through a detailed analysis, we identify two forms of redundancy, including the data-level and model-level, that can be harnessed to amplify attack effectiveness. Building on this insight, we design a suite of techniques, including attention sparsity manipulation, attention head permutation, clean token regularization, ghost MoE diversification, and test-time adversarial training. Extensive experiments on the ImageNet-1k dataset validate the effectiveness of our approach, showing that our methods significantly outperform existing baselines in both transferability and generality across diverse model architectures. |
http://arxiv.org/abs/2504.10805v1 | The Internal Logic and Finite Colimits | 2025-04-15T01:59:57+00:00 | We describe how finite colimits can be expressed using the internal language, also known as the Mitchell-Benabou language, of a topos, provided the topos admits countably infinite colimits. This description is based on the set-theoretic definitions of colimits and coequalisers; however, the translation is not direct due to the differences between set theory and the internal language, differences which are described as internal versus external. Solutions to the hurdles which thus arise are given. |
http://arxiv.org/abs/2504.10806v1 | ACSNet: A Deep Neural Network for Compound GNSS Jamming Signal Classification | 2025-04-15T02:05:30+00:00 | In the global navigation satellite system (GNSS), identifying not only single but also compound jamming signals is crucial for ensuring reliable navigation and positioning, particularly in future wireless communication scenarios such as the space-air-ground integrated network (SAGIN). However, conventional techniques often struggle with low recognition accuracy and high computational complexity, especially under low jamming-to-noise ratio (JNR) conditions. To overcome the challenge of accurately identifying compound jamming signals embedded within GNSS signals, we propose ACSNet, a novel convolutional neural network designed specifically for this purpose. Unlike traditional methods that tend to exhibit lower accuracy and higher computational demands, particularly in low JNR environments, ACSNet addresses these issues by integrating asymmetric convolution blocks, which enhance its sensitivity to subtle signal variations. Simulations demonstrate that ACSNet significantly improves accuracy in low JNR regions and shows robust resilience to power ratio (PR) variations, confirming its effectiveness and efficiency for practical GNSS interference management applications. |
http://arxiv.org/abs/2504.10807v1 | Power-scaled Bayesian Inference with Score-based Generative Models | 2025-04-15T02:06:04+00:00 | We propose a score-based generative algorithm for sampling from power-scaled priors and likelihoods within the Bayesian inference framework. Our algorithm enables flexible control over prior-likelihood influence without requiring retraining for different power-scaling configurations. Specifically, we focus on synthesizing seismic velocity models conditioned on imaged seismic data. Our method enables sensitivity analysis by sampling from intermediate power posteriors, allowing us to assess the relative influence of the prior and likelihood on samples of the posterior distribution. Through a comprehensive set of experiments, we evaluate the effects of varying the power parameter in different settings: applying it solely to the prior, to the likelihood of a Bayesian formulation, and to both simultaneously. The results show that increasing the power of the likelihood up to a certain threshold improves the fidelity of posterior samples to the conditioning data (e.g., seismic images), while decreasing the prior power promotes greater structural diversity among samples. Moreover, we find that moderate scaling of the likelihood leads to a reduced shot data residual, confirming its utility in posterior refinement. (See the worked sketch after this table.) |
http://arxiv.org/abs/2504.10808v1 | Tabular foundation model to detect empathy from visual cues | 2025-04-15T02:06:05+00:00 | Detecting empathy from video interactions is an emerging area of research. Video datasets, however, are often released as extracted features (i.e., tabular data) rather than raw footage due to privacy and ethical concerns. Prior research on such tabular datasets established tree-based classical machine learning approaches as the best-performing models. Motivated by the recent success of textual foundation models (i.e., large language models), we explore the use of tabular foundation models in empathy detection from tabular visual features. We experiment with two recent tabular foundation models $-$ TabPFN v2 and TabICL $-$ through in-context learning and fine-tuning setups. Our experiments on a public human-robot interaction benchmark demonstrate a significant boost in cross-subject empathy detection accuracy over several strong baselines (accuracy: $0.590 \rightarrow 0.730$; AUC: $0.564 \rightarrow 0.669$). In addition to performance improvement, we contribute novel insights and an evaluation setup to ensure generalisation on unseen subjects in this public benchmark. As the practice of releasing video features as tabular datasets is likely to persist due to privacy constraints, our findings will be widely applicable to future empathy detection video datasets as well. |
http://arxiv.org/abs/2504.10809v1 | GaSLight: Gaussian Splats for Spatially-Varying Lighting in HDR | 2025-04-15T02:08:42+00:00 | We present GaSLight, a method that generates spatially-varying lighting from regular images. Our method proposes using HDR Gaussian Splats as light source representation, marking the first time regular images can serve as light sources in a 3D renderer. Our two-stage process first enhances the dynamic range of images plausibly and accurately by leveraging the priors embedded in diffusion models. Next, we employ Gaussian Splats to model 3D lighting, achieving spatially variant lighting. Our approach yields state-of-the-art results on HDR estimations and their applications in illuminating virtual objects and scenes. To facilitate the benchmarking of images as light sources, we introduce a novel dataset of calibrated and unsaturated HDR images. We assess our method using a combination of this novel dataset and an existing dataset from the literature. The code to reproduce our method will be available upon acceptance. |
http://arxiv.org/abs/2504.10810v1 | PatrolVision: Automated License Plate Recognition in the wild | 2025-04-15T02:10:43+00:00 | Adoption of AI-driven techniques in public services remains low due to challenges related to the accuracy and speed of information at population scale. Computer vision techniques for traffic monitoring have not gained much popularity despite their relative strength in areas such as autonomous driving. Despite the large number of academic methods for Automatic License Plate Recognition (ALPR) systems, very few provide an end-to-end solution for patrolling in the city. This paper presents a novel prototype for a low-power GPU-based patrolling system to be deployed in an urban environment on surveillance vehicles for automated vehicle detection, recognition, and tracking. In this work, we propose a complete ALPR system for Singapore license plates, which have both single- and double-line formats, built on our own YOLO-based network. We focus on unconstrained capture scenarios, as would be the case in real-world applications, where the license plate (LP) might be considerably distorted due to oblique views. We first detect the license plate from the full image using RFB-Net and rectify multiple distorted license plates in a single image. After that, the detected license plate image is fed to our network for character recognition. We evaluate the performance of our proposed system on a newly built dataset covering more than 16,000 images. The system was able to correctly detect license plates with 86% precision and recognize the characters of a license plate exactly in 67% of the test set, and with 89% accuracy allowing one incorrect character (partial match). We also test the latency of our system and achieve 64 FPS on a Tesla P4 GPU. |
http://arxiv.org/abs/2504.10811v1 | FlexiContracts: A Novel and Efficient Scheme for Upgrading Smart Contracts in Ethereum Blockchain | 2025-04-15T02:20:42+00:00 | Blockchain technology has revolutionized contractual processes, enhancing efficiency and trust through smart contracts. Ethereum, as a pioneer in this domain, offers a platform for decentralized applications but is challenged by the immutability of smart contracts, which makes upgrades cumbersome. Existing design patterns, while addressing upgradability, introduce complexity, increased development effort, and higher gas costs, thus limiting their effectiveness. In response, we introduce FlexiContracts, an innovative scheme that reimagines the evolution of smart contracts on Ethereum. By enabling secure, in-place upgrades without losing historical data, FlexiContracts surpasses existing approaches, introducing a previously unexplored path in smart contract evolution. Its streamlined design transcends the limitations of current design patterns by simplifying smart contract development, eliminating the need for extensive upfront planning, and significantly reducing the complexity of the design process. This advancement fosters an environment for continuous improvement and adaptation to new requirements, redefining the possibilities for dynamic, upgradable smart contracts. |
http://arxiv.org/abs/2504.10812v1 | E2E Parking Dataset: An Open Benchmark for End-to-End Autonomous Parking | 2025-04-15T02:21:09+00:00 | End-to-end learning has shown great potential in autonomous parking, yet the lack of publicly available datasets limits reproducibility and benchmarking. While prior work introduced a vision-based parking model and a pipeline for data generation, training, and closed-loop testing, the dataset itself was not released. To bridge this gap, we create and open-source a high-quality dataset for end-to-end autonomous parking. Using the original model, we achieve an overall success rate of 85.16% with lower average position and orientation errors (0.24 meters and 0.34 degrees). |
http://arxiv.org/abs/2504.10813v1 | Enhanced Data Race Prediction Through Modular Reasoning | 2025-04-15T02:22:58+00:00 | There are two orthogonal methodologies for efficient prediction of data races from concurrent program runs: commutativity and prefix reasoning. There are several instances of each methodology in the literature, with the goal of predicting data races using a streaming algorithm whose required memory does not grow in proportion to the length of the observed run, but these instances were mostly created in an ad hoc manner, without much attention to their unifying underlying principles. In this paper, we identify and formalize these principles for each category, with the ultimate goal of paving the way for combining them into a new algorithm that shares their efficiency characteristics but offers strictly more prediction power. In particular, we formalize three distinct classes of races predictable using commutativity reasoning, and compare them. We identify three different styles of prefix reasoning, and prove that they predict the same class of races, which provably contains all races predictable by any commutativity reasoning technique. Our key contribution is combining prefix reasoning and commutativity reasoning in a modular way to introduce a new class of races, granular prefix races, that are predictable in constant space and linear time, in a streaming fashion. This class of races includes all races predictable using commutativity and prefix reasoning techniques. We present an improved constant-space algorithm for prefix reasoning alone based on the idea of antichains (from language theory). This improved algorithm is the stepping stone required to devise an efficient algorithm for prediction of granular prefix races. We present experimental results to demonstrate the expressive power and performance of our new algorithm.
http://arxiv.org/abs/2504.10814v1 | An Operator Splitting Method for Large-Scale CVaR-Constrained Quadratic Programs | 2025-04-15T02:28:55+00:00 | We introduce a fast and scalable method for solving quadratic programs with conditional value-at-risk (CVaR) constraints. While these problems can be formulated as standard quadratic programs, the number of variables and constraints grows linearly with the number of scenarios, making general-purpose solvers impractical for large-scale problems. Our method combines operator splitting with a specialized $O(m\log m)$ algorithm for projecting onto CVaR constraints, where $m$ is the number of scenarios. The method alternates between solving a linear system and performing parallel projections: onto CVaR constraints using our specialized algorithm and onto box constraints with a closed-form solution. Numerical examples from several application domains demonstrate that our method outperforms general-purpose solvers by several orders of magnitude on problems with up to millions of scenarios. Our method is implemented in an open-source package called CVQP. |
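For context on the constraint class, the standard Rockafellar-Uryasev form for $m$ equiprobable loss scenarios $z_1,\dots,z_m$ is the textbook definition below (shown to make the structure explicit, not as a claim about CVQP's internal representation); a CVaR constraint then reads $\mathrm{CVaR}_\alpha(z) \le d$:

```latex
\mathrm{CVaR}_{\alpha}(z) \;=\; \min_{t \in \mathbb{R}} \left\{\, t + \frac{1}{(1-\alpha)\,m} \sum_{i=1}^{m} \max\{z_i - t,\; 0\} \right\}
```

For equiprobable scenarios this is, up to rounding, the average of the largest $(1-\alpha)m$ losses, which is consistent with a sort-based $O(m\log m)$ projection.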
http://arxiv.org/abs/2504.10815v1 | Room-Temperature Hybrid 2D-3D Quantum Spin System for Enhanced Magnetic Sensing and Many-Body Dynamics | 2025-04-15T02:30:07+00:00 | Advances in hybrid quantum systems and their precise control are pivotal for developing advanced quantum technologies. Two-dimensional (2D) materials with optically accessible spin defects have emerged as a promising platform for building integrated quantum spin systems due to their exceptional flexibility and scalability. However, experimentally realizing such systems and demonstrating their superiority remains challenging. Here, we present a hybrid spin system operating under ambient conditions, integrating boron vacancy (VB) spins in 2D hexagonal boron nitride flakes with a single nitrogen vacancy (NV) center in 3D single-crystal diamonds. This combined system achieves full controllability and exhibits enhanced performance for nanoscale magnetic sensing, including an improved dynamic range. Moreover, we investigate the rich many-body spin dynamics within the hybrid system, enabling the first-time quantification of the fluorescence intensity of a single VB defect at $10^4$ counts per second. This result represents a critical step toward the direct optical observation of single VB defects.
http://arxiv.org/abs/2504.10816v2 | CSPLADE: Learned Sparse Retrieval with Causal Language Models | 2025-04-15T02:31:34+00:00 | In recent years, dense retrieval has been the focus of information retrieval (IR) research. While effective, dense retrieval produces uninterpretable dense vectors and suffers from large index sizes. Learned sparse retrieval (LSR) has emerged as a promising alternative, achieving competitive retrieval performance while being able to leverage the classical inverted index data structure for efficient retrieval. However, few works have explored scaling LSR beyond BERT scale. In this work, we identify two challenges in training large language models (LLMs) for LSR: (1) training instability during the early stage of contrastive training; (2) suboptimal performance due to the pre-trained LLM's unidirectional attention. To address these challenges, we propose two corresponding techniques: (1) a lightweight adaptation training phase to eliminate training instability; (2) two model variants to enable bidirectional information. With these techniques, we are able to train LSR models with an 8B-scale LLM and achieve competitive retrieval performance with reduced index size. Furthermore, we are among the first to analyze the performance-efficiency tradeoff of LLM-based LSR models through the lens of model quantization. Our findings provide insights into adapting LLMs for efficient retrieval modeling.
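As background on the model class (SPLADE-style learned sparse retrieval in general, not necessarily CSPLADE's exact scoring head), queries and documents are mapped to vocabulary-sized sparse vectors and scored by an inner product; with $h_{i,t}$ the language-model logit for vocabulary term $t$ at input position $i$:

```latex
s(q, d) \;=\; \sum_{t \in V} w_t(q)\, w_t(d), \qquad w_t(d) \;=\; \max_{i \in d} \log\!\bigl(1 + \mathrm{ReLU}(h_{i,t})\bigr)
```

The log-saturation together with sparsity regularization keeps most weights at zero, which is what lets LSR reuse a classical inverted index.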
http://arxiv.org/abs/2504.10817v1 | FHBench: Towards Efficient and Personalized Federated Learning for Multimodal Healthcare | 2025-04-15T02:38:00+00:00 | Federated Learning (FL) has emerged as an effective solution for multi-institutional collaborations without sharing patient data, offering a range of methods tailored for diverse applications. However, real-world medical datasets are often multimodal, and computational resources are limited, posing significant challenges for existing FL approaches. Recognizing these limitations, we developed the Federated Healthcare Benchmark (FHBench), a benchmark built from datasets derived from real-world healthcare applications. FHBench encompasses critical diagnostic tasks across domains such as the nervous, cardiovascular, and respiratory systems and general pathology, providing comprehensive support for multimodal healthcare evaluations and filling a significant gap in existing benchmarks. Building on FHBench, we introduced Efficient Personalized Federated Learning with Adaptive LoRA (EPFL), a personalized FL framework that demonstrates superior efficiency and effectiveness across various healthcare modalities. Our results highlight the robustness of FHBench as a benchmarking tool and the potential of EPFL as an innovative approach to advancing healthcare-focused FL, addressing key limitations of existing methods.
http://arxiv.org/abs/2504.10818v1 | A Sample of Extreme Eclipsing Binaries with Accretion Disks from LAMOST and ZTF | 2025-04-15T02:38:18+00:00 | Extreme eclipsing binaries may harbor peculiar physical properties. In this work, we aim to identify a sample of such systems by selecting binaries with pronounced eclipsing light curves, characterized by large variability ($\Delta \mathrm{mag} > 0.3$ in ZTF $g$ band) and significant differences between primary and secondary eclipses (eclipse depth ratio $>$ 20 in ZTF $g$ band). We identified 23 candidates by combining the photometric data with the LAMOST spectroscopic survey. Spectroscopic analysis revealed that all of these systems are dominated by A-type stars in the optical band. Further investigation confirmed that all 23 candidates are Algol-type binaries, 22 of them newly discovered. Their orbital periods range from 2.57 to 19.21 days. These systems consist of low-luminosity, highly stripped subgiant donors and accreting A-type stars. The donor stars, with radii of $2.5-8.9~R_\odot$ and effective temperatures around 4000 K, have typical masses of $M_2 \sim 0.3~M_\odot$, indicating substantial mass loss through Roche-lobe overflow. The presence of ellipsoidal variability and H$\alpha$ emission provides strong evidence for ongoing mass transfer. By fitting the spectral energy distributions, spectra, and light curves, we found that most of the accretors have luminosities lower than expected from the mass-luminosity relation, aligning with the predicted faint phase for mass-gaining stars. Three objects in our sample exhibit pulsations with periods from 18 minutes to 8 hours, providing opportunities for asteroseismic studies. The low mass transfer rates and stability make these excellent systems for studying mass accretion, advancing our understanding of Algol-type binary evolution.
http://arxiv.org/abs/2504.10819v1 | Generalized Audio Deepfake Detection Using Frame-level Latent Information Entropy | 2025-04-15T02:39:46+00:00 | Generalizability, the capacity of a robust model to perform effectively on unseen data, is crucial for audio deepfake detection due to the rapid evolution of text-to-speech (TTS) and voice conversion (VC) technologies. A promising approach to differentiating between bonafide and spoof samples lies in identifying intrinsic disparities to enhance model generalizability. From an information-theoretic perspective, we hypothesize that information content is one of these intrinsic differences: a bonafide sample represents a dense, information-rich sampling of the real world, whereas a spoof sample is typically derived from lower-dimensional, less informative representations. To implement this, we introduce the frame-level latent information entropy detector (f-InfoED), a framework that extracts distinctive information entropy from latent representations at the frame level to identify audio deepfakes. Furthermore, we present AdaLAM, which extends large pre-trained audio models with trainable adapters for enhanced feature extraction. To facilitate comprehensive evaluation, the audio deepfake forensics 2024 (ADFF 2024) dataset was built using the latest TTS and VC methods. Extensive experiments demonstrate that our proposed approach achieves state-of-the-art performance and exhibits remarkable generalization capabilities. Further analytical studies confirm the efficacy of AdaLAM in extracting discriminative audio features and of f-InfoED in leveraging latent entropy information for more generalized deepfake detection.
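A minimal stand-in for the frame-level entropy idea (an illustrative simplification, not the released f-InfoED code): treat each frame's latent vector as a probability distribution and compute one Shannon entropy per frame.

```python
import numpy as np

def frame_level_entropy(latents):
    """latents: (T, D) array of frame-level latent features.

    Returns a length-T vector of Shannon entropies. The per-frame softmax
    normalization is an illustrative choice, not taken from the paper."""
    z = latents - latents.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)                 # distribution per frame
    return -(p * np.log(p + 1e-12)).sum(axis=1)
```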
http://arxiv.org/abs/2504.10820v1 | Efficient and Robust Remote Sensing Image Denoising Using Randomized Approximation of Geodesics' Gramian on the Manifold Underlying the Patch Space | 2025-04-15T02:46:05+00:00 | Remote sensing images are widely utilized in many disciplines such as feature recognition and scene semantic segmentation. However, due to environmental factors and issues with the imaging system, image quality is often degraded, which may impair subsequent visual tasks. Even though denoising remote sensing images plays an essential role before applications, current denoising algorithms fail to attain optimal performance since these images possess complex textural features. Denoising frameworks based on artificial neural networks have shown better performance; however, they require exhaustive training with heterogeneous samples that extensively consume resources such as power, memory, computation, and latency. Thus, here we present a computationally efficient and robust remote sensing image denoising method that does not require additional training samples. This method partitions a remote sensing image into patches, whose patch space is underlain by a low-rank manifold representing the noise-free version of the image. An efficient and robust approach to revealing this manifold is a randomized approximation of the singular value spectrum of the geodesics' Gramian matrix of the patch space. The method places a separate emphasis on each color channel during denoising, and the three denoised channels are merged to produce the final image.
http://arxiv.org/abs/2504.10821v1 | Progressive Rock Music Classification | 2025-04-15T02:48:52+00:00 | This study investigates the classification of progressive rock music, a genre characterized by complex compositions and diverse instrumentation, distinct from other musical styles. Addressing this Music Information Retrieval (MIR) task, we extracted comprehensive audio features, including spectrograms, Mel-Frequency Cepstral Coefficients (MFCCs), chromagrams, and beat positions from song snippets using the Librosa library. A winner-take-all voting strategy was employed to aggregate snippet-level predictions into final song classifications. We conducted a comparative analysis of various machine learning techniques. Ensemble methods, encompassing Bagging (Random Forest, ExtraTrees, Bagging Classifier) and Boosting (XGBoost, Gradient Boosting), were explored, utilizing Principal Component Analysis (PCA) for dimensionality reduction to manage computational constraints with high-dimensional feature sets. Additionally, deep learning approaches were investigated, including the development of custom 1D Convolutional Neural Network (1D CNN) architectures (named "Zuck" and "Satya") featuring specific layer configurations, normalization, and activation functions. Furthermore, we fine-tuned a state-of-the-art Audio Spectrogram Transformer (AST) model, leveraging its attention-based mechanisms for audio classification. Performance evaluation on validation and test sets revealed varying effectiveness across models, with ensemble methods like Extra Trees achieving test accuracies up to 76.38%. This research provides insights into the application and relative performance of diverse machine learning paradigms for the nuanced task of progressive rock genre classification. |
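The feature-extraction and voting steps are straightforward to reproduce with Librosa; a minimal sketch, with the sample rate, MFCC count, and time-averaging chosen for illustration rather than taken from the paper:

```python
import collections
import numpy as np
import librosa

def snippet_features(path):
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # harmony
    # Summarize each feature track by its mean over time.
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1)])

def song_label(snippet_predictions):
    # Winner-take-all: the most frequent snippet-level class labels the song.
    return collections.Counter(snippet_predictions).most_common(1)[0][0]
```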
http://arxiv.org/abs/2504.10822v1 | IlluSign: Illustrating Sign Language Videos by Leveraging the Attention Mechanism | 2025-04-15T02:53:32+00:00 | Sign languages are dynamic visual languages that involve hand gestures in combination with non-manual elements such as facial expressions. While video recordings of sign language are commonly used for education and documentation, the dynamic nature of signs can make it challenging to study them in detail, especially for new learners and educators. This work aims to convert sign language video footage into static illustrations, which serve as an additional educational resource to complement video content. This process is usually done by an artist and is therefore quite costly. We propose a method that illustrates sign language videos by leveraging generative models' ability to understand both the semantic and geometric aspects of images. Our approach focuses on transferring a sketch-like illustration style to video footage of sign language, combining the start and end frames of a sign into a single illustration, and using arrows to highlight the hand's direction and motion. While many style transfer methods address domain adaptation at varying levels of abstraction, applying a sketch-like style to sign languages, especially for hand gestures and facial expressions, poses a significant challenge. To tackle this, we intervene in the denoising process of a diffusion model, injecting style as keys and values into high-resolution attention layers, and fusing geometric information from the image and edges as queries. For the final illustration, we use the attention mechanism to combine the attention weights from both the start and end illustrations, resulting in a soft combination. Our method offers a cost-effective solution for generating sign language illustrations at inference time, addressing the lack of such resources in educational materials.
http://arxiv.org/abs/2504.10823v1 | CLASH: Evaluating Language Models on Judging High-Stakes Dilemmas from Multiple Perspectives | 2025-04-15T02:54:16+00:00 | Navigating high-stakes dilemmas involving conflicting values is challenging even for humans, let alone for AI. Yet prior work in evaluating the reasoning capabilities of large language models (LLMs) in such situations has been limited to everyday scenarios. To close this gap, this work first introduces CLASH (Character perspective-based LLM Assessments in Situations with High-stakes), a meticulously curated dataset consisting of 345 high-impact dilemmas along with 3,795 individual perspectives of diverse values. In particular, we design CLASH to support the study of critical aspects of value-based decision-making processes which are missing from prior work, including understanding decision ambivalence and psychological discomfort as well as capturing the temporal shifts of values in characters' perspectives. By benchmarking 10 open and closed frontier models, we uncover several key findings. (1) Even the strongest models, such as GPT-4o and Claude-Sonnet, achieve less than 50% accuracy in identifying situations where the decision should be ambivalent, while they perform significantly better in clear-cut scenarios. (2) While LLMs reasonably predict psychological discomfort as marked by humans, they inadequately comprehend perspectives involving value shifts, indicating a need for LLMs to reason over complex values. (3) Our experiments also reveal a significant correlation between LLMs' value preferences and their steerability towards a given value. (4) Finally, LLMs exhibit greater steerability when engaged in value reasoning from a third-party perspective, compared to a first-person setup, though certain value pairs benefit uniquely from the first-person framing.
http://arxiv.org/abs/2504.10824v1 | A simple proof of the Atkin-O'Brien partition congruence conjecture for powers of 13 | 2025-04-15T02:58:36+00:00 | In 1967, Atkin and O'Brien conjectured congruences for the partition function involving Hecke operators modulo powers of 13. In this paper, we provide a simple proof of this conjecture. |
http://arxiv.org/abs/2504.10825v1 | OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding | 2025-04-15T03:05:46+00:00 | In this paper, we propose a novel framework for controllable video diffusion, OmniVDiff, aiming to synthesize and comprehend multiple forms of video visual content in a single diffusion model. To achieve this, OmniVDiff treats all video visual modalities in the color space to learn a joint distribution, while employing an adaptive control strategy that dynamically adjusts the role of each visual modality during the diffusion process, either as a generation modality or a conditioning modality. This allows flexible manipulation of each modality's role, enabling support for a wide range of tasks. Consequently, our model supports three key functionalities: (1) Text-conditioned video generation: multi-modal visual video sequences (i.e., rgb, depth, canny, segmentation) are generated based on text conditions in one diffusion process; (2) Video understanding: OmniVDiff can estimate the depth, canny map, and semantic segmentation across the input rgb frames while ensuring coherence with the rgb input; and (3) X-conditioned video generation: OmniVDiff generates videos conditioned on fine-grained attributes (e.g., depth maps or segmentation maps). By integrating these diverse tasks into a unified video diffusion framework, OmniVDiff enhances the flexibility and scalability of controllable video diffusion, making it an effective tool for a variety of downstream applications, such as video-to-video translation. Extensive experiments demonstrate the effectiveness of our approach, highlighting its potential for various video-related applications.
http://arxiv.org/abs/2504.10826v1 | SteerMusic: Enhanced Musical Consistency for Zero-shot Text-Guided and Personalized Music Editing | 2025-04-15T03:08:09+00:00 | Music editing is an important step in music production, which has broad applications, including game development and film production. Most existing zero-shot text-guided methods rely on pretrained diffusion models by involving forward-backward diffusion processes for editing. However, these methods often struggle to maintain the music content consistency. Additionally, text instructions alone usually fail to accurately describe the desired music. In this paper, we propose two music editing methods that enhance the consistency between the original and edited music by leveraging score distillation. The first method, SteerMusic, is a coarse-grained zero-shot editing approach using delta denoising score. The second method, SteerMusic+, enables fine-grained personalized music editing by manipulating a concept token that represents a user-defined musical style. SteerMusic+ allows for the editing of music into any user-defined musical styles that cannot be achieved by the text instructions alone. Experimental results show that our methods outperform existing approaches in preserving both music content consistency and editing fidelity. User studies further validate that our methods achieve superior music editing quality. Audio examples are available on https://steermusic.pages.dev/. |
http://arxiv.org/abs/2504.10827v1 | Large-time behavior of solutions to the Boussinesq equations with partial dissipation and influence of rotation | 2025-04-15T03:10:27+00:00 | This paper investigates the stability and large-time behavior of solutions to the rotating Boussinesq system under the influence of a general gravitational potential $\Psi$, which is widely used to model the dynamics of stratified geophysical fluids on the $f$-plane. Our main results are threefold: First, by imposing physically realistic boundary conditions and viscosity constraints, we prove that the solutions of the system must necessarily take the following steady-state form $(\rho,u,v,w,p)=(\rho_s,0,v_s,0,p_s)$. These solutions are characterized by both geostrophic balance, given by $fv_s-\frac{\partial p_s}{\partial x}=\rho_s\frac{\partial \Psi}{\partial x}$, and hydrostatic balance, expressed as $-\frac{\partial p_s}{\partial z}=\rho_s\frac{\partial \Psi}{\partial z}$. Second, we establish that any steady-state solution satisfying the conditions $\nabla \rho_s=\delta (x,z)\nabla \Psi$ with $v_s(x,z)=a_0x+a_1$ is linearly unstable when the conditions $\delta(x,z)|_{(x_0,z_0)}>0$ and $(f+a_0)\leq 0$ are simultaneously satisfied. This instability under the condition $\delta(x,z)|_{(x_0,z_0)}>0$ corresponds to the well-known Rayleigh-Taylor instability. Third, although the inherent Rayleigh-Taylor instability could potentially amplify the velocity around unstable steady-state solutions (a heavier density over a lighter one), we rigorously demonstrate that for any sufficiently smooth initial data, the solutions of the system asymptotically converge to a neighborhood of a steady-state solution in which both the zonal and vertical velocity components vanish. Finally, under a moderate additional assumption, we demonstrate that the system converges to a specific steady-state solution. In this state, the density profile is given by $\rho=-\gamma \Psi+\beta$, where $\gamma$ and $\beta$ are positive constants, and the meridional velocity $v$ depends solely and linearly on the $x$ variable.
http://arxiv.org/abs/2504.10828v1 | Following Is All You Need: Robot Crowd Navigation Using People As Planners | 2025-04-15T03:11:10+00:00 | Navigating in crowded environments requires the robot to be equipped with high-level reasoning and planning techniques. Existing works focus on developing complex and heavyweight planners while ignoring the role of human intelligence. Since humans are highly capable agents who are also widely available in a crowd navigation setting, we propose an alternative scheme where the robot utilises people as planners to benefit from their effective planning decisions and social behaviours. Through a set of rule-based evaluations, we identify suitable human leaders who exhibit the potential to guide the robot towards its goal. Using a simple base planner, the robot follows the selected leader through short-horizon subgoals that are designed to be straightforward to achieve. We demonstrate through both simulated and real-world experiments that our novel framework generates safe and efficient robot plans compared to existing planners, even without predictive or data-driven modules. Our method also brings human-like robot behaviours without explicitly defining traffic rules and social norms. Code will be available at https://github.com/centiLinda/PeopleAsPlanner.git.
http://arxiv.org/abs/2504.10829v1 | LayoutCoT: Unleashing the Deep Reasoning Potential of Large Language Models for Layout Generation | 2025-04-15T03:12:01+00:00 | Conditional layout generation aims to automatically generate visually appealing and semantically coherent layouts from user-defined constraints. While recent methods based on generative models have shown promising results, they typically require substantial amounts of training data or extensive fine-tuning, limiting their versatility and practical applicability. Alternatively, some training-free approaches leveraging in-context learning with Large Language Models (LLMs) have emerged, but they often suffer from limited reasoning capabilities and overly simplistic ranking mechanisms, which restrict their ability to generate consistently high-quality layouts. To this end, we propose LayoutCoT, a novel approach that leverages the reasoning capabilities of LLMs through a combination of Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT) techniques. Specifically, LayoutCoT transforms layout representations into a standardized serialized format suitable for processing by LLMs. A Layout-aware RAG is used to facilitate effective retrieval and generate a coarse layout by LLMs. This preliminary layout, together with the selected exemplars, is then fed into a specially designed CoT reasoning module for iterative refinement, significantly enhancing both semantic coherence and visual quality. We conduct extensive experiments on five public datasets spanning three conditional layout generation tasks. Experimental results demonstrate that LayoutCoT achieves state-of-the-art performance without requiring training or fine-tuning. Notably, our CoT reasoning module enables standard LLMs, even those without explicit deep reasoning abilities, to outperform specialized deep-reasoning models such as deepseek-R1, highlighting the potential of our approach in unleashing the deep reasoning capabilities of LLMs for layout generation tasks. |
http://arxiv.org/abs/2504.10830v1 | Radiation Footprint Control in Cell-Free Cooperative ISAC: Optimal Joint BS Activation and Beamforming Coordination | 2025-04-15T03:14:05+00:00 | Coordinated beamforming across distributed base stations (BSs) in cell-free architectures can efficiently support integrated sensing and communication (ISAC) users by improving resource sharing and reducing conflicts in the spatial domain. However, coordinating numerous BSs within the ISAC network poses risks of generating substantial interference for other networks sharing the spectrum, while also increasing operational costs from power consumption and signaling overhead. Therefore, in this paper, we propose an interference-suppressed and cost-optimized cell-free ISAC network by opportunistically and cooperatively orchestrating distributed radio resources to address competing sensing and communication (S&C) demands. Specifically, we conceive a radiation footprint control mechanism that autonomously suppresses interference across the entire signal propagation space to safeguard other networks without exchanging signaling. Then, we propose joint BS activation and beamforming coordination to dynamically activate appropriate BSs and orchestrate their spatial beams for service provisioning. Building upon this framework, we formulate a cost-efficient utility maximization problem that considers individual S&C demands and location-dependent radiation footprint constraints. Since this results in a non-convex optimization problem, we develop a monotonic optimization embedded branch-and-bound (MO-BRB) algorithm to find the optimal solution. Additionally, we apply a low-complexity iterative method to obtain near-optimal solutions. Finally, simulation results validate the effectiveness of the proposed algorithms.
http://arxiv.org/abs/2504.10831v1 | Hallucination-Aware Generative Pretrained Transformer for Cooperative Aerial Mobility Control | 2025-04-15T03:21:08+00:00 | This paper proposes SafeGPT, a two-tiered framework that integrates generative pretrained transformers (GPTs) with reinforcement learning (RL) for efficient and reliable unmanned aerial vehicle (UAV) last-mile deliveries. In the proposed design, a Global GPT module assigns high-level tasks such as sector allocation, while an On-Device GPT manages real-time local route planning. An RL-based safety filter monitors each GPT decision and overrides unsafe actions that could lead to battery depletion or duplicate visits, effectively mitigating hallucinations. Furthermore, a dual replay buffer mechanism helps both the GPT modules and the RL agent refine their strategies over time. Simulation results demonstrate that SafeGPT achieves higher delivery success rates compared to a GPT-only baseline, while substantially reducing battery consumption and travel distance. These findings validate the efficacy of combining GPT-based semantic reasoning with formal safety guarantees, contributing a viable solution for robust and energy-efficient UAV logistics. |
http://arxiv.org/abs/2504.10832v1 | Unlimited Vector Processing for Wireless Baseband Based on RISC-V Extension | 2025-04-15T03:23:02+00:00 | Wireless baseband processing (WBP) serves as an ideal scenario for utilizing vector processing, which excels in managing data-parallel operations due to its parallel structure. However, conventional vector architectures face certain constraints such as limited vector register sizes, reliance on power-of-two vector length multipliers, and vector permutation capabilities tied to specific architectures. To address these challenges, we have introduced an instruction set extension (ISE) based on RISC-V known as unlimited vector processing (UVP). This extension enhances both the flexibility and efficiency of vector computations. UVP employs a novel programming model that supports non-power-of-two register groupings and hardware strip-mining, thus enabling smooth handling of vectors of varying lengths while reducing the software strip-mining burden. Vector instructions are categorized into symmetric and asymmetric classes, complemented by specialized load/store strategies to optimize execution. Moreover, we present a hardware implementation of UVP featuring sophisticated hazard detection mechanisms, optimized pipelines for symmetric tasks such as fixed-point multiplication and division, and a robust permutation engine for effective asymmetric operations. Comprehensive evaluations demonstrate that UVP significantly enhances performance, achieving up to 3.0$\times$ and 2.1$\times$ speedups in matrix multiplication and fast Fourier transform (FFT) tasks, respectively, when measured against lane-based vector architectures. Our synthesized RTL for a 16-lane configuration using SMIC 40nm technology spans 0.94 mm$^2$ and achieves an area efficiency of 21.2 GOPS/mm$^2$. |
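Hardware strip-mining, which UVP moves out of software, can be pictured with a small loop: each iteration the "hardware" grants some vector length (here deliberately not a power of two) and the loop advances by whatever was granted, handling the tail automatically. A conceptual Python model, not RISC-V code:

```python
def vector_add(a, b, hw_max_vl=5):
    """Conceptual strip-mined vector addition.

    Each while-iteration stands for one vector instruction executed with
    the granted vector length vl; hw_max_vl = 5 mimics UVP's support for
    non-power-of-two groupings."""
    out = [0.0] * len(a)
    i = 0
    while i < len(a):
        vl = min(hw_max_vl, len(a) - i)  # hardware sets vl per strip
        out[i:i + vl] = [x + y for x, y in zip(a[i:i + vl], b[i:i + vl])]
        i += vl
    return out
```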
http://arxiv.org/abs/2504.10833v1 | Towards Spatially-Aware and Optimally Faithful Concept-Based Explanations | 2025-04-15T03:24:13+00:00 | Post-hoc, unsupervised concept-based explanation methods (U-CBEMs) are a promising tool for generating semantic explanations of the decision-making processes in deep neural networks, with applications in both model improvement and understanding. It is vital that the explanation is accurate, or faithful, to the model, yet we identify several limitations of prior faithfulness metrics that inhibit accurate evaluation; most notably, prior metrics consider only the set of concepts present, ignoring how they may be spatially distributed. We address these limitations with Surrogate Faithfulness (SF), an evaluation method that introduces a spatially-aware surrogate and two novel faithfulness metrics. Using SF, we produce Optimally Faithful (OF) explanations, where concepts are found that maximize faithfulness. Our experiments show that (1) adding spatial-awareness to prior U-CBEMs increases faithfulness in all cases; (2) OF produces significantly more faithful explanations than prior U-CBEMs (a 30% or greater reduction in error); (3) OF's learned concepts generalize well to out-of-domain data and are more robust to adversarial examples, where prior U-CBEMs struggle.
http://arxiv.org/abs/2504.10834v1 | LightFormer: A lightweight and efficient decoder for remote sensing image segmentation | 2025-04-15T03:25:39+00:00 | Deep learning techniques have achieved remarkable success in the semantic segmentation of remote sensing images and in land-use change detection. Nevertheless, their real-time deployment on edge platforms remains constrained by decoder complexity. Herein, we introduce LightFormer, a lightweight decoder for time-critical tasks that involve unstructured targets, such as disaster assessment, unmanned aerial vehicle search-and-rescue, and cultural heritage monitoring. LightFormer employs a feature-fusion and refinement module built on channel processing and a learnable gating mechanism to aggregate multi-scale, multi-range information efficiently, which drastically curtails model complexity. Furthermore, we propose a spatial information selection module (SISM) that integrates long-range attention with a detail preservation branch to capture spatial dependencies across multiple scales, thereby substantially improving the recognition of unstructured targets in complex scenes. On the ISPRS Vaihingen benchmark, LightFormer attains 99.9% of GLFFNet's mIoU (83.9% vs. 84.0%) while requiring only 14.7% of its FLOPs and 15.9% of its parameters, thus achieving an excellent accuracy-efficiency trade-off. Consistent results on LoveDA, ISPRS Potsdam, RescueNet, and FloodNet further demonstrate its robustness and superior perception of unstructured objects. These findings highlight LightFormer as a practical solution for remote sensing applications where both computational economy and high-precision segmentation are imperative. |
http://arxiv.org/abs/2504.10835v1 | Gapless Foliated-Exotic Duality | 2025-04-15T03:26:33+00:00 | In this work, we construct a new foliated quantum field theory equivalent to the exotic $\phi$-theory -- a fractonic gapless scalar field theory described by tensor gauge fields and exhibiting $U(1) \times U(1)$ subsystem global symmetry. This subsystem symmetry has an 't Hooft anomaly, which is captured by a subsystem symmetry-protected topological (SSPT) phase in one dimension higher via the anomaly inflow mechanism. By analyzing both the anomaly inflow structure and the foliated-exotic duality in the SSPT phases, we establish the foliated-exotic duality in the $\phi$-theories. Furthermore, we also investigate the foliated-exotic duality in the $\hat\phi$-theory, which is dual to the $\phi$-theory, and construct the foliated $\hat\phi$-theory. These are the first examples of the foliated-exotic duality in gapless theories. |
http://arxiv.org/abs/2504.10836v1 | Uplink Assisted Joint Channel Estimation and CSI Feedback: An Approach Based on Deep Joint Source-Channel Coding | 2025-04-15T03:29:24+00:00 | In frequency division duplex (FDD) multiple-input multiple-output (MIMO) wireless communication systems, the acquisition of downlink channel state information (CSI) is essential for maximizing spatial resource utilization and improving system spectral efficiency. The separate design of modules in AI-based CSI feedback architectures under traditional modular communication frameworks, including channel estimation (CE), CSI compression, and feedback, leads to sub-optimal performance. In this paper, we propose an uplink-assisted joint CE and CSI feedback approach via deep learning for downlink CSI acquisition, which mitigates the performance degradation caused by distribution bias across separately trained modules in traditional modular communication frameworks. The proposed network adopts a deep joint source-channel coding (DJSCC) architecture to mitigate the cliff effect encountered in conventional separate source-channel coding. Furthermore, we exploit the uplink CSI as auxiliary information to enhance CSI reconstruction accuracy by leveraging the partial reciprocity between the uplink and downlink channels in FDD systems, without introducing additional overhead. The effectiveness of uplink CSI as assisted information and the necessity of an end-to-end multi-module joint training architecture are validated through comprehensive ablation and scalability experiments.
http://arxiv.org/abs/2504.10837v1 | Elastocaloric signature of the excitonic instability in Ta$_2$NiSe$_5$ | 2025-04-15T03:38:21+00:00 | On cooling through a temperature $T_S$ of around 324 K, Ta$_2$NiSe$_5$ undergoes a transition from a semimetallic state to one with a gapped electronic spectrum which is suspected to be an excitonic insulator. However, at this transition the structure also changes, from orthorhombic to monoclinic, leaving open the question of whether it is driven primarily by excitonic ordering or by a lattice instability. A lattice instability of this symmetry would correspond to softening of a B$_{2g}$ optical or acoustic phonon mode. Here, we report that elastocaloric measurements of Ta$_2$NiSe$_5$ with induced B$_{2g}$ strain reveal a thermodynamic susceptibility described by a Curie-Weiss law with a Curie temperature $T^*$ of 298 K. The fact that $T^*$ is close to $T_S$ rules out the possibility that the B$_{2g}$ acoustic mode is responsible for the transition. Since prior Raman measurements have shown minimal softening of the B$_{2g}$ optical mode as well, our finding strengthens the case that the transition is largely excitonic in nature. Our work underscores the potential of using strain as a tool for separating electronic and lattice contributions in phase transitions. |
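For reference, the Curie-Weiss form referred to above is the standard one, with $C$ a constant; the reported Curie temperature sits just below the structural transition:

```latex
\chi_{B_{2g}}(T) \;=\; \frac{C}{T - T^{*}}, \qquad T^{*} \approx 298\ \mathrm{K} \;<\; T_S \approx 324\ \mathrm{K}
```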
http://arxiv.org/abs/2504.10838v1 | Directional Expansiveness for $\mathbb{R}^d$-Actions and for Penrose Tilings | 2025-04-15T03:43:06+00:00 | We define and study two kinds of directional expansiveness, weak and strong, for an action $T$ of $\mathbb{R}^d$ on a compact metric space $X$. We show that for $\mathbb{R}^2$ finite local complexity (FLC) tiling dynamical systems, weak and strong expansiveness are the same, and are both equivalent to a simple coding property. Then we show for the Penrose tiling dynamical system, which is FLC, there are exactly five non-expansive directions, the directions perpendicular to the 5th roots of unity. We also study Raphael Robinson's set of 24 Penrose Wang tiles and show the corresponding Penrose Wang tile dynamical system is strictly ergodic. Finally, we study two deformations of the Penrose Wang tile system, one where the square Wang tiles are all deformed into a $2\pi/5$ rhombus, and another where they are deformed into a set of eleven tetragon tiles. We show both of these are topologically conjugate to the Penrose tiling dynamical system.
http://arxiv.org/abs/2504.10839v1 | Rethinking Theory of Mind Benchmarks for LLMs: Towards A User-Centered Perspective | 2025-04-15T03:44:43+00:00 | The last couple of years have witnessed emerging research that appropriates Theory-of-Mind (ToM) tasks designed for humans to benchmark LLMs' ToM capabilities as an indication of their social intelligence. However, this approach has a number of limitations. Drawing on existing psychology and AI literature, we summarize the theoretical, methodological, and evaluation limitations by pointing out that certain issues are inherently present in the original ToM tasks used to evaluate humans' ToM, and that these issues persist and are exacerbated when the tasks are appropriated to benchmark LLMs' ToM. Taking a human-computer interaction (HCI) perspective, these limitations prompt us to rethink the definition and criteria of ToM in ToM benchmarks through a more dynamic, interactional approach that accounts for user preferences, needs, and experiences with LLMs in such evaluations. We conclude by outlining potential opportunities and challenges towards this direction.
http://arxiv.org/abs/2504.10840v1 | XRD study of the magnetization plateau above 40 T in the frustrated helimagnet CuGaCr$_{4}$S$_{8}$ | 2025-04-15T03:55:12+00:00 | CuGaCr$_{4}$S$_{8}$, which contains a chromium breathing pyrochlore network, exhibits diverse magnetic phases, including an incommensurate helical state below 31 K and a 1/2-magnetization plateau above 40 T, owing to the interplay between magnetic frustration and spin-lattice coupling. Here, we perform a single-shot powder x-ray diffraction experiment on CuGaCr$_{4}$S$_{8}$ in a pulsed high magnetic field of 55 T, revealing an orthorhombic-to-cubic (or pseudocubic) structural transition upon entering the 1/2-magnetization plateau phase at low temperatures. This observation suggests the emergence of a commensurate ferrimagnetic order, where a 3-up-1-down spin configuration is realized in each small tetrahedron and an all-up or all-down configuration in each large tetrahedron. We propose two types of 16-sublattice magnetic structures, which are degenerate within exchange interactions between the first, second, and third nearest neighbors.
http://arxiv.org/abs/2504.10841v1 | Some four-dimensional orthogonal invariants | 2025-04-15T03:59:00+00:00 | Let $p$ be an odd prime and $\mathbb{F}_p$ the prime field of order $p$. Consider a $2$-dimensional orthogonal group $G$ over $\mathbb{F}_p$ acting on the standard representation $V$ and the dual space $V^*$. We compute the invariant ring $\mathbb{F}_p[V\oplus V^*]^G$ by explicitly exhibiting a minimal generating set. Our method applies the $s$-invariants that appear in the covariant theory of finite groups.
http://arxiv.org/abs/2504.10842v1 | A comprehensive review of remote sensing in wetland classification and mapping | 2025-04-15T03:59:36+00:00 | Wetlands constitute critical ecosystems that support both biodiversity and human well-being; however, they have experienced a significant decline since the 20th century. Back in the 1970s, researchers began to employ remote sensing technologies for wetland classification and mapping to elucidate the extent and variations of wetlands. Although some review articles summarized the development of this field, there is a lack of a thorough and in-depth understanding of wetland classification and mapping: (1) the scientific importance of wetlands, (2) major data, methods used in wetland classification and mapping, (3) driving factors of wetland changes, (4) current research paradigm and limitations, (5) challenges and opportunities in wetland classification and mapping under the context of technological innovation and global environmental change. In this review, we aim to provide a comprehensive perspective and new insights into wetland classification and mapping for readers to answer these questions. First, we conduct a meta-analysis of over 1,200 papers, encompassing wetland types, methods, sensor types, and study sites, examining prevailing trends in wetland classification and mapping. Next, we review and synthesize the wetland features and existing data and methods in wetland classification and mapping. We also summarize typical wetland mapping products and explore the intrinsic driving factors of wetland changes across multiple spatial and temporal scales. Finally, we discuss current limitations and propose future directions in response to global environmental change and technological innovation. This review consolidates our understanding of wetland remote sensing and offers scientific recommendations that foster transformative progress in wetland science. |
http://arxiv.org/abs/2504.10843v1 | Stable and High-Precision 3D Positioning via Tunable Composite-Dimensional Hong-Ou-Mandel Interference | 2025-04-15T04:01:22+00:00 | We propose a stable and high-precision three-dimensional (3D) quantum positioning scheme based on Hong-Ou-Mandel (HOM) interference. While previous studies have explored HOM interference in quantum metrology, they were mostly limited to one-dimensional scenarios, whereas real-world applications require full 3D spatial resolution. Our approach not only generalizes HOM positioning to 3D, achieving the ultimate sensitivity defined by the quantum Cramér-Rao bound, but also stabilizes estimation accuracy through simple polarization tuning, ensuring that the Fisher information remains independent of the estimated parameters. Theoretical analysis and simulations demonstrate that our method achieves ultra-precise and reliable 3D positioning, even with a limited number of detected photons.
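The "ultimate sensitivity" referenced above is the quantum Cramér-Rao bound; in its standard single-parameter form, for $N$ independent detection events with quantum Fisher information $F_Q(\theta)$ per event,

```latex
\mathrm{Var}(\hat{\theta}) \;\ge\; \frac{1}{N\, F_Q(\theta)}
```

so the role of the polarization tuning, per the abstract, is to keep the Fisher information flat in the estimated parameters, stabilizing this bound across the measurement volume.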
http://arxiv.org/abs/2504.10844v1 | Nonlinear Diffusion Equations on Graphs: Global Well-Posedness, Blow-Up Analysis and Applications | 2025-04-15T04:06:12+00:00 | For a nonlinear diffusion equation on graphs whose nonlinearity violates the Lipschitz condition, we prove short-time solution existence and characterize global well-posedness by establishing sufficient criteria for blow-up phenomena and quantifying blow-up rates. These theoretical results are then applied to model complex dynamical networks, with supporting numerical experiments. This work mainly makes two contributions: (i) generalization of existing results for diffusion equations on graphs to cases with nontrivial potentials, producing richer analytical results; (ii) a new PDE approach to model complex dynamical networks, with preliminary numerical experiments confirming its validity. |
http://arxiv.org/abs/2504.10845v1 | Moving Beyond Next-Token Prediction: Transformers are Context-Sensitive Language Generators | 2025-04-15T04:06:27+00:00 | Large Language Models (LLMs), powered by Transformers, have demonstrated human-like intelligence capabilities, yet their underlying mechanisms remain poorly understood. This paper presents a novel framework for interpreting LLMs as probabilistic generators of left context-sensitive languages (CSLs). We hypothesize that Transformers can be effectively decomposed into three fundamental components: context windows, attention mechanisms, and autoregressive generation frameworks. This decomposition allows for the development of more flexible and interpretable computational models, moving beyond the traditional view of attention and autoregression as inseparable processes. We argue that next-token predictions can be understood as probabilistic, dynamic approximations of left CSL production rules, providing an intuitive explanation for how simple token predictions can yield human-like intelligence outputs. Given that all CSLs are left context-sensitive (Penttonen, 1974), we conclude that Transformers stochastically approximate CSLs, which are widely recognized as models of human-like intelligence. This interpretation bridges the gap between Formal Language Theory and the observed generative power of Transformers, laying a foundation for future advancements in generative AI theory and applications. Our novel perspective on Transformer architectures will foster a deeper understanding of LLMs and their future potential.
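The paper's reading of next-token prediction as a stochastic production rule is easy to make concrete: each step rewrites the entire left context into itself plus one sampled symbol. A minimal sketch, where `model` is a hypothetical callable returning next-token probabilities:

```python
import numpy as np

def generate(model, context, max_new=50, seed=0):
    """Autoregression as repeated application of a probabilistic rule:
    context -> context + [next], with next ~ P(. | entire left context)."""
    rng = np.random.default_rng(seed)
    for _ in range(max_new):
        probs = model(context)                      # hypothetical callable
        nxt = int(rng.choice(len(probs), p=probs))  # sample one token id
        context = context + [nxt]
    return context
```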
http://arxiv.org/abs/2504.10846v1 | Mosaic: Client-driven Account Allocation Framework in Sharded Blockchains | 2025-04-15T04:07:09+00:00 | Recent account allocation studies in sharded blockchains are typically miner-driven, requiring miners to perform global optimizations over all accounts to enhance system-wide performance. This forces each miner to maintain a complete copy of the entire ledger, resulting in significant storage, communication, and computation overhead. In this work, we explore an alternative research direction by proposing Mosaic, the first client-driven framework for distributed, lightweight local optimization. Rather than relying on miners to allocate all accounts, Mosaic enables clients to independently execute a local algorithm to determine their residing shards. Clients can submit migration requests to a beacon chain when relocation is necessary. Mosaic naturally addresses key limitations of miner-driven approaches, including the lack of miner incentives and the significant overhead. While clients are free to adopt any algorithm for shard allocation, we design and implement a reference algorithm, Pilot, to guide them. Clients execute Pilot to maximize their own benefits, such as reduced transaction fees and confirmation latency. On a real-world Ethereum dataset, we implement and evaluate Pilot against state-of-the-art miner-driven global optimization solutions. The results demonstrate that Mosaic significantly enhances computational efficiency, achieving a four-order-of-magnitude reduction in computation time and reducing the input data size from 1.44 GB to an average of 228.66 bytes per account. Despite these efficiency gains, Pilot introduces only about a 5% increase in the cross-shard ratio and maintains approximately 98% of the system throughput, demonstrating a minimal trade-off in overall effectiveness.
http://arxiv.org/abs/2504.10847v1 | Cosmic-Ray Constraints on the Flux of Ultra-High-Energy Neutrino Event KM3-230213A | 2025-04-15T04:07:13+00:00 | The detection of a $\simeq220$~PeV muon neutrino by the KM3NeT neutrino telescope offers an unprecedented opportunity to probe the Universe at extreme energies. We analyze the origin of this event under three scenarios, viz., a transient point source, a diffuse astrophysical emission, and line-of-sight interaction of ultrahigh-energy cosmic rays (UHECR; $E \gtrsim 0.1$~EeV). Our analysis includes the flux from both a KM3NeT-only fit and a joint fit, incorporating data from KM3NeT, IceCube, and Pierre Auger Observatory. If the neutrino event originates from transients, it requires a new population of transient that is energetic, gamma-ray dark, and more abundant than known ones. In the framework of diffuse astrophysical emission, we compare the required local UHECR energy injection rate at $\gtrsim4$ EeV, assuming a proton primary, with the rate derived from the flux measurements by Auger. This disfavors the KM3NeT-only fit at all redshifts, while the joint fit remains viable for $z\gtrsim 1$, based on redshift evolution models of known source populations. For cosmogenic origin from point sources, our results suggest that the luminosity obtained at redshifts $z \lesssim 1$ from the joint fit is compatible with the Eddington luminosity of supermassive black holes in active galactic nuclei. |
http://arxiv.org/abs/2504.10848v1 | Ichiyo: Fragile and Transient Interaction in Neighborhood | 2025-04-15T04:16:48+00:00 | As the Internet develops, social networking and other communication tools have transformed people's relationships into something fast, visible, and geographically vast. However, these communication tools have not expanded opportunities for acquainting oneself with neighbors outside one's social network; rather, they have comparatively diminished occasions for interacting with unfamiliar neighbors by prioritizing communication with existing friends. Therefore, we invented the medium Ichiyo to increase opportunities to think of neighbors walking along the same street or in the same neighborhood and to expand the imagination of those who pass by and those who used to be there. Thus, users can engage in indirect interaction. We used commercially available laser cutters to engrave QR codes on leaves naturally found in our living space, avoiding the introduction of foreign objects into the environment. The QR codes lead to a communal space on the web where users can freely leave messages. Engraving QR codes lets the presented information be expanded virtually. To gather feedback on Ichiyo, we let several thousand people experience this new way of communication as part of the exhibition ''iii Exhibition 2022'', an art exhibition at the University of Tokyo. More than 1,000 leaves engraved with QR codes were prepared and scattered at the exhibition site and along the road from the nearest station to the venue.
http://arxiv.org/abs/2504.10849v1 | Real-Time Word-Level Temporal Segmentation in Streaming Speech Recognition | 2025-04-15T04:17:08+00:00 | Rich-text captions are essential to help communication for Deaf and hard-of-hearing (DHH) people, second-language learners, and those with autism spectrum disorder (ASD). They also preserve nuances when converting speech to text, enhancing the realism of presentation scripts and conversation or speech logs. However, current real-time captioning systems lack the capability to alter text attributes (e.g., capitalization, size, and font) at the word level, hindering the accurate conveyance of speaker intent expressed in the tones or intonations of speech. For example, ''YOU should do this'' tends to be read as marking ''You'' as the focus of the sentence, whereas ''You should do THIS'' tends to mark ''This'' as the focus. This paper proposes a solution that changes text decorations at the word level in real time. As a prototype, we developed an application that adjusts word size based on the loudness of each spoken word. Feedback from users implies that this system helped to convey the speaker's intent, offering a more engaging and accessible captioning experience.
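The prototype's word-size rule can be sketched directly: estimate each word's loudness from its audio samples and map a clamped dBFS value to a font size. The constants are illustrative, not the application's actual settings.

```python
import numpy as np

def word_font_size(samples, base_pt=16.0, extra_pt=20.0):
    """Map a word's RMS loudness to a caption font size in points."""
    rms = np.sqrt(np.mean(np.square(samples)))                # samples in [-1, 1]
    db = 20.0 * np.log10(max(rms, 1e-6))                      # dBFS, about -120..0
    loudness = float(np.clip((db + 60.0) / 60.0, 0.0, 1.0))   # -60..0 dB -> 0..1
    return base_pt + extra_pt * loudness
```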
http://arxiv.org/abs/2504.10850v1 | How to Enhance Downstream Adversarial Robustness (almost) without Touching the Pre-Trained Foundation Model? | 2025-04-15T04:17:37+00:00 | With the rise of powerful foundation models, a pre-training/fine-tuning paradigm has become increasingly popular: a foundation model is pre-trained using a huge amount of data from various sources, and downstream users only need to fine-tune and adapt it to specific downstream tasks. However, due to the high computational complexity of adversarial training, it is not feasible to fine-tune the foundation model to improve its robustness on the downstream task. Observing the above challenge, we aim to improve the downstream robustness without updating or accessing the weights in the foundation model. Inspired by existing literature on robustness inheritance (Kim et al., 2020), through theoretical investigation we identify a close relationship between robust contrastive learning and the adversarial robustness of supervised learning. To further validate and utilize this theoretical insight, we design a simple-yet-effective robust auto-encoder as a data pre-processing method before feeding the data into the foundation model. The proposed approach has zero access to the foundation model when training the robust auto-encoder. Extensive experiments demonstrate the effectiveness of the proposed method in improving the robustness of downstream tasks, verifying the connection between feature robustness (implied by a small adversarial contrastive loss) and the robustness of the downstream task.
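The plug-in nature of the defense is the key point: the auto-encoder sits in front of the frozen foundation model and is trained on its own. A minimal PyTorch sketch of that wiring, with the architecture and sizes chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class RobustAE(nn.Module):
    """Illustrative pre-processing auto-encoder; the foundation model's
    weights are never accessed or updated while training this module."""
    def __init__(self, dim=3 * 224 * 224, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view_as(x)

# At inference: features = frozen_foundation_model(ae(x)), where x may be
# adversarially perturbed; only `ae` was trained, e.g. with a robust
# contrastive objective as the paper's theory suggests.
```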
http://arxiv.org/abs/2504.10851v1 | ICAFS: Inter-Client-Aware Feature Selection for Vertical Federated Learning | 2025-04-15T04:19:04+00:00 | Vertical federated learning (VFL) enables a paradigm for vertically partitioned data across clients to collaboratively train machine learning models. Feature selection (FS) plays a crucial role in VFL due to the unique way data are distributed across multiple clients. In VFL, different clients possess distinct subsets of features for overlapping data samples, making the process of identifying and selecting the most relevant features a complex yet essential task. Previous FS efforts have primarily revolved around intra-client feature selection, overlooking vital feature interactions across clients, leading to subpar model outcomes. We introduce ICAFS, a novel multi-stage ensemble approach for effective FS in VFL that considers inter-client interactions. By employing conditional feature synthesis alongside multiple learnable feature selectors, ICAFS facilitates ensemble FS over these selectors using synthetic embeddings. This method bypasses the limitations of private gradient sharing and allows for model training using real data with refined embeddings. Experiments on multiple real-world datasets demonstrate that ICAFS surpasses current state-of-the-art methods in prediction accuracy.
http://arxiv.org/abs/2504.10852v1 | Enhancing Features in Long-tailed Data Using Large Vision Model | 2025-04-15T04:21:50+00:00 | Language-based foundation models, such as large language models (LLMs) or large vision-language models (LVLMs), have been widely studied in long-tailed recognition. However, the need for linguistic data is not applicable to all practical tasks. In this study, we aim to explore using large vision models (LVMs) or visual foundation models (VFMs) to enhance long-tailed data features without any language information. Specifically, we extract features from the LVM and fuse them with features in the baseline network's feature map and latent space to obtain the augmented features. Moreover, we design several prototype-based losses in the latent space to further exploit the potential of the augmented features. In the experimental section, we validate our approach on two benchmark datasets: ImageNet-LT and iNaturalist2018.
http://arxiv.org/abs/2504.10853v1 | PT-Mark: Invisible Watermarking for Text-to-image Diffusion Models via Semantic-aware Pivotal Tuning | 2025-04-15T04:25:57+00:00 | Watermarking for diffusion images has drawn considerable attention due to the widespread use of text-to-image diffusion models and the increasing need for their copyright protection. Recently, advanced watermarking techniques, such as Tree Ring, integrate watermarks by embedding traceable patterns (e.g., rings) into the latent distribution during the diffusion process. Such methods disrupt the original semantics of the generated images due to the inevitable distribution shift caused by the watermarks, thereby limiting their practicality, particularly in digital art creation. In this work, we present Semantic-aware Pivotal Tuning Watermarks (PT-Mark), a novel invisible watermarking method that preserves both the semantics of diffusion images and the traceability of the watermark. PT-Mark preserves the original semantics of the watermarked image by gradually aligning the generation trajectory with the original (pivotal) trajectory while maintaining the traceable watermarks throughout the diffusion denoising process. To achieve this, we first compute the salient regions of the watermark at each diffusion denoising step as a spatial prior to identify areas that can be aligned without disrupting the watermark pattern. Guided by these regions, we then introduce an additional pivotal tuning branch that optimizes the text embedding to align the semantics while preserving the watermarks. Extensive evaluations demonstrate that PT-Mark can preserve the original semantics of diffusion images while integrating robust watermarks. It achieves a 10% improvement in semantic preservation (i.e., SSIM, PSNR, and LPIPS) compared to state-of-the-art watermarking methods, while showing comparable robustness against real-world perturbations and four times greater efficiency. |
http://arxiv.org/abs/2504.10854v1 | LVLM_CSP: Accelerating Large Vision Language Models via Clustering, Scattering, and Pruning for Reasoning Segmentation | 2025-04-15T04:27:15+00:00 | Large Vision Language Models (LVLMs) have been widely adopted to guide vision foundation models in performing reasoning segmentation tasks, achieving impressive performance. However, the substantial computational overhead associated with LVLMs presents a new challenge. The primary source of this cost is the processing of hundreds of image tokens; an effective strategy to mitigate the overhead is therefore to reduce the number of image tokens, a process known as image token pruning. Previous studies on image token pruning for LVLMs have primarily focused on high-level visual understanding tasks, such as visual question answering and image captioning. In contrast, guiding vision foundation models to generate accurate visual masks based on textual queries demands precise semantic and spatial reasoning capabilities. Consequently, pruning methods must carefully control individual image tokens throughout the LVLM reasoning process. Our empirical analysis reveals that existing methods struggle to balance reductions in computational overhead with the need to maintain high segmentation accuracy. In this work, we propose LVLM_CSP, a novel training-free visual token pruning method specifically designed for LVLM-based reasoning segmentation tasks. LVLM_CSP consists of three stages: clustering, scattering, and pruning. Initially, the LVLM performs coarse-grained visual reasoning using a subset of selected image tokens. Next, fine-grained reasoning is conducted, and finally, most visual tokens are pruned in the last stage. Extensive experiments demonstrate that LVLM_CSP achieves a 65% reduction in image token inference FLOPs with virtually no accuracy degradation, and a 70% reduction with only a minor 1% drop in accuracy on the 7B LVLM. |
http://arxiv.org/abs/2504.10855v1 | Virtual Contraction Approach to Decentralized Adaptive Stabilization of Nonlinear Time-Delayed Networks | 2025-04-15T04:34:24+00:00 | In this paper, we utilize a diagonally dominant structure for the decentralized stabilization of unknown nonlinear time-delayed networks. Generalizing the idea of virtual contraction analysis to time-delayed systems, we demonstrate that nonlinear time-delayed networks can be stabilized by diagonal high-gains if the input matrices possess certain generalized (column/row) diagonally dominant properties. To achieve stabilization of unknown networks, we further propose a distributed adaptive tuning rule for each individual gain function, ensuring that all closed-loop trajectories converge to the origin. The effectiveness of the proposed decentralized adaptive control is verified in a case study on epidemic spreading control in SIS networks with transmission delays. |
http://arxiv.org/abs/2504.10856v1 | On five-dimensional curvature squared supergravity and holography | 2025-04-15T04:35:09+00:00 | In this work, we report recent progress in obtaining new curvature-squared invariants in 5D, N=1 gauged minimal supergravity. We exhibit the structure of various composite multiplets that are pivotal in the construction. We also present the form of the gauged Riemann-squared and Gauss-Bonnet superinvariants in a dilaton-Weyl multiplet. As a first application of the new curvature-squared invariants, we compute their corrections to holographic central charges and the Euclidean action of supersymmetric charged rotating black holes, exhibiting exact matching between the gravity and CFT results. |
http://arxiv.org/abs/2504.10857v1 | ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping | 2025-04-15T04:37:39+00:00 | Robotic grasping is a cornerstone capability of embodied systems. Many methods directly output grasps from partial information without modeling the geometry of the scene, leading to suboptimal motion and even collisions. To address these issues, we introduce ZeroGrasp, a novel framework that simultaneously performs 3D reconstruction and grasp pose prediction in near real-time. A key insight of our method is that occlusion reasoning and modeling the spatial relationships between objects is beneficial for both accurate reconstruction and grasping. We couple our method with a novel large-scale synthetic dataset, which comprises 1M photo-realistic images, high-resolution 3D reconstructions and 11.3B physically-valid grasp pose annotations for 12K objects from the Objaverse-LVIS dataset. We evaluate ZeroGrasp on the GraspNet-1B benchmark as well as through real-world robot experiments. ZeroGrasp achieves state-of-the-art performance and generalizes to novel real-world objects by leveraging synthetic data. |
http://arxiv.org/abs/2504.10858v1 | Universal thermodynamic topological classes of three-dimensional BTZ black holes | 2025-04-15T04:39:37+00:00 | We establish a universal thermodynamic topological classification for three-dimensional static neutral Ba\~{n}ados-Teitelboim-Zanelli (BTZ), charged BTZ, and rotating BTZ black holes. We demonstrate that in all three cases (static neutral BTZ, charged BTZ, and rotating BTZ black holes), both the innermost small black hole states and the outermost large black hole states exhibit stable thermodynamic behavior. In the low-temperature limit, all three cases exhibit a thermodynamically stable small black hole state. Conversely, in the high-temperature limit, each system admits a thermodynamically stable large black hole state. Through this analysis, we have rigorously shown that static neutral, charged, and rotating BTZ black holes are consistently classified within the $W^{1+}$ category. Our results demonstrate that neither the charge parameter nor the rotation parameter exerts significant influence on the universal thermodynamic topological classification of three-dimensional static neutral BTZ black holes. This reveals a fundamental dichotomy: while angular momentum and electric charge dominate the thermodynamic topology of four-dimensional static black holes, their effects become negligible in the three-dimensional static BTZ case, highlighting a dimension-driven divergence in black hole thermodynamic behavior. |
http://arxiv.org/abs/2504.10859v1 | A Sublinear Algorithm for Path Feasibility Among Rectangular Obstacles | 2025-04-15T04:40:25+00:00 | The problem of finding a path between two points while avoiding obstacles is critical in robotic path planning. We focus on the feasibility problem: determining whether such a path exists. We model the robot as a query-specific rectangular object capable of moving parallel to its sides. The obstacles are axis-aligned, rectangular, and may overlap. Most previous works consider only nondisjoint rectangular objects and point-sized or statically sized robots. Our approach introduces a novel technique leveraging generalized Gabriel graphs and constructs a data structure that supports online queries on path feasibility with varying robot sizes in sublinear time. To efficiently handle feasibility queries, we propose an online algorithm that uses a sweep line to construct a generalized Gabriel graph under the $L_\infty$ norm, capturing key gap constraints between obstacles. We utilize a persistent disjoint-set union data structure to answer feasibility queries in $\mathcal{O}(\log n)$ time using $\mathcal{O}(n)$ total space. |
http://arxiv.org/abs/2504.10860v1 | Bell-Mermin-Klyshko Inequalities and One-way Information Deficit of Dirac Fields in Noninertial Frames | 2025-04-15T04:48:18+00:00 | We investigate the Bell-Mermin-Klyshko inequalities and the one-way information deficit of Dirac fields in noninertial frames, where the quantum correlations are shared between inertial and accelerated observers due to the Unruh effect. We derive partial analytical results for specific quantum states using the one-way information deficit. Additionally, we present numerical results for the Bell-Mermin-Klyshko inequalities. The study reveals the presence of Bell nonlocality and the significance of the one-way information deficit in relativistic quantum information. |
http://arxiv.org/abs/2504.10861v1 | Ai2 Scholar QA: Organized Literature Synthesis with Attribution | 2025-04-15T04:48:18+00:00 | Retrieval-augmented generation is increasingly effective in answering scientific questions from literature, but many state-of-the-art systems are expensive and closed-source. We introduce Ai2 Scholar QA, a free online scientific question answering application. To facilitate research, we make our entire pipeline public: as a customizable open-source Python package and interactive web app, along with paper indexes accessible through public APIs and downloadable datasets. We describe our system in detail and present experiments analyzing its key design decisions. In an evaluation on a recent scientific QA benchmark, we find that Ai2 Scholar QA outperforms competing systems. |
http://arxiv.org/abs/2504.10862v1 | Testing redshift variation of the X-ray and ultraviolet luminosity relations of quasars | 2025-04-15T04:48:23+00:00 | Quasars serve as important cosmological probes, and constructing accurate luminosity relations for them is essential for their use in cosmology. If the coefficients of a quasar luminosity relation vary with redshift, this could introduce biases into cosmological constraints derived from quasars. In this paper, we conduct a detailed analysis of the redshift variation in the X-ray and ultraviolet (UV) luminosity ($L_\mathrm{X}$-$L_\mathrm{UV}$) relations of quasars. For the standard $L_\mathrm{X}$-$L_\mathrm{UV}$ relation, we find that the relation coefficients exhibit a strong, linear correlation with redshift that is not attributable to selection effects. Additionally, we examine two three-dimensional, redshift-evolving $L_\mathrm{X}$-$L_\mathrm{UV}$ relations and find that the inclusion of a redshift-dependent term does not eliminate the impact of redshift evolution, as the relation coefficients continue to evolve with redshift. Finally, we construct a new $L_\mathrm{X}$-$L_\mathrm{UV}$ relation in which the redshift evolution of the relation coefficients is nearly eliminated. Calibrating the luminosity relations with Hubble parameter measurements, we demonstrate that quasars under our new relation yield effective constraints on cosmological parameters that are consistent with results from Planck CMB data, unlike constraints derived from the standard relation. |
http://arxiv.org/abs/2504.10863v1 | Intertwined fluctuations and isotope effects in the Hubbard-Holstein model on the square lattice from functional renormalization | 2025-04-15T04:49:50+00:00 | Electron-electron and electron-phonon interactions are responsible for the formation of spin, charge, and superconducting correlations in layered quantum materials. A paradigmatic model for such materials that captures both kinds of interactions is the two-dimensional Hubbard-Holstein model with a dispersionless Einstein phonon. In this work, we provide a detailed analysis of the magnetic, density, and superconducting fluctuations at and away from half-filling. To that end, we employ the functional renormalization group using the recently introduced extension of the single-boson exchange formulation. More precisely, we go beyond previous approaches to the model by resolving the full frequency dependence of the two-particle vertex and taking into account the feedback from the electronic self-energy. We perform broad parameter scans in the space of Hubbard repulsion, electron-phonon coupling strength, and phonon frequency to explore the leading magnetic, density, and superconducting susceptibilities from the adiabatic to the anti-adiabatic regime. Our numerical data reveal that self-energy effects lead to an enhancement of the $d$-wave superconducting susceptibility towards larger phonon frequencies, in contrast to earlier isotope-effect studies. At small phonon frequencies, large density contributions to the $s$-wave superconducting susceptibility change sign and eventually lead to a reduction of $s$-wave superconductivity with increasing electron-phonon coupling, signaling the breakdown of Migdal-Eliashberg theory. We analyze our findings systematically, employing detailed diagnostics of the intertwined fluctuations and pinning down the various positive and negative isotope effects of the physical susceptibilities. |
http://arxiv.org/abs/2504.10864v1 | Automata for the commutative closure of regular sets | 2025-04-15T04:54:02+00:00 | Consider $ A^* $, the free monoid generated by the finite alphabet $A$ with the concatenation operation. Two words have the same commutative image when one is a permutation of the symbols of the other. The commutative closure of a set $ L \subseteq A^* $ is the set $ {C}(L) \subseteq A^* $ of words whose commutative image coincides with that of some word in $ L $. We provide an algorithm that, given a regular set $ L $, produces a finite state automaton that accepts the commutative closure $ {C}(L) $, provided that this closure set is regular. The problem of deciding whether $ {C}(L) $ is regular was solved by Ginsburg and Spanier in 1966 using the decidability of Presburger sentences, and by Gohon in 1985 via formal power series. The problem of constructing an automaton that accepts $ {C}(L) $ has already been studied in the literature. We give a simpler algorithm using an algebraic approach. |
http://arxiv.org/abs/2504.10865v1 | Understanding the theoretical properties of projected Bellman equation, linear Q-learning, and approximate value iteration | 2025-04-15T04:56:33+00:00 | In this paper, we study the theoretical properties of the projected Bellman equation (PBE) and two algorithms that solve this equation: linear Q-learning and approximate value iteration (AVI). We consider two sufficient conditions for the existence of a solution to the PBE: the strictly negatively row dominating diagonal (SNRDD) assumption and a condition motivated by the convergence of AVI. The SNRDD assumption also ensures the convergence of linear Q-learning, and its relationship with the convergence of AVI is examined. Lastly, several interesting observations on the solution of the PBE are provided when using an $\epsilon$-greedy policy. |
http://arxiv.org/abs/2504.10866v1 | Gaussian Approximation for High-Dimensional $U$-statistics with Size-Dependent Kernels | 2025-04-15T04:58:58+00:00 | Motivated by small bandwidth asymptotics for kernel-based semiparametric estimators in econometrics, this paper establishes Gaussian approximation results for high-dimensional fixed-order $U$-statistics whose kernels depend on the sample size. Our results allow for situations where the dominant component of the Hoeffding decomposition is absent or unknown, including cases with known degrees of degeneracy as special forms. The obtained error bounds for the Gaussian approximations are sharp enough to almost recover the weakest bandwidth condition of small bandwidth asymptotics in the fixed-dimensional setting when applied to a canonical semiparametric estimation problem. We also present an application to adaptive goodness-of-fit testing, along with discussions of several potential applications. |
http://arxiv.org/abs/2504.10867v1 | Precise measurement of the form factors in $D^0\rightarrow K^*(892)^-μ^+ν_μ$ and test of lepton universality with $D^0\rightarrow K^*(892)^-\ell^+ν_{\ell}$ decays | 2025-04-15T04:59:18+00:00 | We report a study of the semileptonic decay $D^0 \rightarrow \bar{K}^0\pi^-\mu^+\nu_{\mu}$ based on a sample of $7.9~\mathrm{fb}^{-1}$ of $e^+e^-$ annihilation data collected at a center-of-mass energy of 3.773~GeV with the BESIII detector at the BEPCII collider. The branching fraction of the decay is measured for the first time to be $\mathcal{B}(D^0\rightarrow \bar{K}^0\pi^-\mu^+\nu_{\mu}) = (1.373 \pm 0.020_{\rm stat} \pm 0.023_{\rm syst})\%$, where the first uncertainty is statistical and the second is systematic. Based on the investigation of the decay dynamics, we find that the decay is dominated by the $K^{*}(892)^-$ resonance, with the branching fraction measured to be $\mathcal{B}(D^0\rightarrow K^{*}(892)^-\mu^+\nu_{\mu}) = (1.948 \pm 0.033_{\rm stat} \pm 0.036_{\rm syst})\%$. We also determine the hadronic form factors for the $D^0\rightarrow K^{*}(892)^-\mu^+\nu_{\mu}$ decay to be $r_{V} = V(0)/A_1(0) = 1.46 \pm 0.11_{\rm stat} \pm 0.04_{\rm syst}$, $r_{2} = A_2(0)/A_1(0) = 0.71 \pm 0.08_{\rm stat} \pm 0.03_{\rm syst}$, and $A_1(0)=0.609 \pm 0.008_{\rm stat} \pm 0.008_{\rm syst}$, where $V(0)$ is the vector form factor and $A_{1,2}(0)$ are the axial form factors evaluated at $q^2=0$. $A_1(0)$ is measured for the first time in the $D^0\rightarrow K^{*}(892)^-\mu^+\nu_{\mu}$ decay. Averaging the form-factor parameters that we reported previously in $D^0\rightarrow K^*(892)^-(\rightarrow \bar{K}^0\pi^-)e^+\nu_{e}$ and $D^0\rightarrow K^*(892)^-(\rightarrow K^-\pi^0)\mu^+\nu_{\mu}$ decays, we obtain $r_{V}=1.456\pm0.040_{\rm stat}\pm0.016_{\rm syst}$, $r_{2}=0.715\pm0.031_{\rm stat}\pm0.014_{\rm syst}$, and $A_1(0)=0.614\pm0.005_{\rm stat}\pm0.004_{\rm syst}$. This is the most precise determination of the form-factor parameters measured to date in the $D\rightarrow K^*(892)$ transition, providing the most stringent test of various theoretical models. |