url | title | date_published | abstract
---|---|---|---|
http://arxiv.org/abs/2503.16754v1 | Distributed Consensus Optimization with Consensus ALADIN | 2025-03-21T00:11:14+00:00 | The paper proposes the Consensus Augmented Lagrange Alternating Direction Inexact Newton (Consensus ALADIN) algorithm, a novel approach for solving distributed consensus optimization problems (DC). Consensus ALADIN allows each agent to independently solve its own nonlinear programming problem while coordinating with other agents by solving a consensus quadratic programming (QP) problem. Building on this, we propose Broyden-Fletcher-Goldfarb-Shanno (BFGS) Consensus ALADIN, a communication- and computation-efficient variant of Consensus ALADIN. BFGS Consensus ALADIN improves communication efficiency through BFGS approximation techniques and enhances computational efficiency by deriving a closed form for the consensus QP problem. Additionally, by replacing the BFGS approximation with a scaled identity matrix, we develop Reduced Consensus ALADIN, a more computationally efficient variant. We establish the convergence theory for Consensus ALADIN and demonstrate its effectiveness through application to a non-convex sensor allocation problem. |
http://arxiv.org/abs/2503.16755v1 | Fast online node labeling with graph subsampling | 2025-03-21T00:13:16+00:00 | Large data applications rely on storing data in massive, sparse graphs with millions to trillions of nodes. Graph-based methods, such as node prediction, aim for computational efficiency regardless of graph size. Techniques like localized approximate personalized PageRank (APPR) solve sparse linear systems with complexity independent of graph size, but dependent on the maximum node degree, which in practice can be much larger than the average node degree for real-world large graphs. In this paper, we consider an \emph{online subsampled APPR method}, where messages are intentionally dropped at random. We use tools from graph sparsifiers and matrix linear algebra to give approximation bounds on the graph's spectral properties ($O(1/\epsilon^2)$ edges), and node classification performance (added $O(n\epsilon)$ overhead). |
http://arxiv.org/abs/2503.16756v1 | Stabilizing Linear Systems under Partial Observability: Sample Complexity and Fundamental Limits | 2025-03-21T00:14:55+00:00 | We study the problem of stabilizing an unknown partially observable linear time-invariant (LTI) system. For fully observable systems, leveraging an unstable/stable subspace decomposition approach, the state-of-the-art sample complexity is independent of the system dimension $n$ and only scales with the dimension of the unstable subspace. However, it remains open whether such sample complexity can be achieved for partially observable systems, because such systems do not admit a uniquely identifiable unstable subspace. In this paper, we propose LTS-P, a novel technique that leverages compressed singular value decomposition (SVD) on the "lifted" Hankel matrix to estimate the unstable subsystem up to an unknown transformation. Then, we design a stabilizing controller that integrates a robust stabilizing controller for the unstable mode and a small-gain-type assumption on the stable subspace. We show that LTS-P stabilizes unknown partially observable LTI systems with state-of-the-art sample complexity that is dimension-free and only scales with the number of unstable modes, which significantly reduces data requirements for high-dimensional systems with many stable modes. |
http://arxiv.org/abs/2503.16757v1 | Measure-expansive systems | 2025-03-21T00:15:36+00:00 | We call a dynamical system on a measurable metric space {\em measure-expansive} if the probability that two orbits remain close to each other for all time is negligible (i.e., zero). We extend results on expansive systems on compact metric spaces to the measure-expansive context. For instance, the measure-expansive homeomorphisms are characterized as those homeomorphisms $f$ for which the diagonal is almost invariant for $f\times f$ with respect to the product measure. In addition, the set of points with converging semi-orbits for such homeomorphisms has measure zero. In particular, the set of periodic orbits for these homeomorphisms is also of measure zero. We also prove that there are no measure-expansive homeomorphisms on the interval and that, on the circle, they are the Denjoy ones. As an application we obtain probabilistic proofs of some results on expansive systems. We also present some analogous results for continuous maps. |
http://arxiv.org/abs/2503.16758v1 | Nonlinear stability of compressible vortex sheets in three-dimensional elastodynamics | 2025-03-21T00:20:25+00:00 | We investigate the nonlinear stability of compressible vortex sheet solutions for three-dimensional (3D) isentropic elastic flows. Building upon previous results on the weakly linear stability of elastic vortex sheets [19], we perform a detailed study of the roots of the Lopatinskii determinant and identify a geometric stability condition associated with the deformation gradient. We employ an upper triangularization technique that isolates the outgoing modes into a closed system, where they appear only at the leading order. This enables us to derive energy estimates despite derivative loss. The major novelty of our approach includes the following two key aspects: (1) For the 3D compressible Euler vortex sheets, the front symbol exhibits degenerate ellipticity in certain frequency directions, which makes it challenging to ensure the front's regularity using standard energy estimates. Our analysis reveals that the non-parallel structure of the deformation gradient tensor plays a crucial role in recovering ellipticity in the front symbol, thereby enhancing the regularity of the free interface. (2) Another significant challenge in 3D arises from the strong degeneracy caused by the collision of repeated roots and poles. Unlike in 2D, where such interactions are absent, we encounter a co-dimension one set in frequency space where a double root coincides with a double pole. To resolve this, we refine Coulombel's diagonalization framework [21] and construct a suitable transformation that reduces the degeneracy order of the Lopatinskii matrix, enabling the use of localized Gårding-type estimates to control the characteristic components. Finally, we employ a Nash-Moser iteration scheme to establish the local existence and nonlinear stability of vortex sheets under small initial perturbations, showing stability within a subsonic regime. |
http://arxiv.org/abs/2503.16759v1 | elaTCSF: A Temporal Contrast Sensitivity Function for Flicker Detection and Modeling Variable Refresh Rate Flicker | 2025-03-21T00:23:10+00:00 | The perception of flicker has been a prominent concern in illumination and electronic display fields for over a century. Traditional approaches often rely on Critical Flicker Frequency (CFF), primarily suited for high-contrast (full-on, full-off) flicker. To tackle varying contrast flicker, the International Committee for Display Metrology (ICDM) introduced a Temporal Contrast Sensitivity Function TCSF$_{IDMS}$ within the Information Display Measurements Standard (IDMS). Nevertheless, this standard overlooks crucial parameters: luminance, eccentricity, and area. Existing models incorporating these parameters are inadequate for flicker detection, especially at low spatial frequencies. To address these limitations, we extend the TCSF$_{IDMS}$ and combine it with a new spatial probability summation model to incorporate the effects of luminance, eccentricity, and area (elaTCSF). We train the elaTCSF on various flicker detection datasets and establish the first variable refresh rate flicker detection dataset for further verification. Additionally, we contribute to resolving a longstanding debate on whether the flicker is more visible in peripheral vision. We demonstrate how elaTCSF can be used to predict flicker due to low-persistence in VR headsets, identify flicker-free VRR operational ranges, and determine flicker sensitivity in lighting design. |
http://arxiv.org/abs/2503.16760v1 | Rethinking the Role of Spatial Mixing | 2025-03-21T00:28:30+00:00 | Until quite recently, the backbone of nearly every state-of-the-art computer vision model has been the 2D convolution. At its core, a 2D convolution simultaneously mixes information across both the spatial and channel dimensions of a representation. Many recent computer vision architectures consist of sequences of isotropic blocks that disentangle the spatial and channel-mixing components. This separation of the operations allows us to more closely juxtapose the effects of spatial and channel mixing in deep learning. In this paper, we take an initial step towards garnering a deeper understanding of the roles of these mixing operations. Through our experiments and analysis, we discover that on both classical (ResNet) and cutting-edge (ConvMixer) models, we can reach nearly the same level of classification performance by leaving the spatial mixers at their random initializations. Furthermore, we show that models with random, fixed spatial mixing are naturally more robust to adversarial perturbations. Lastly, we show that this phenomenon extends past the classification regime, as such models can also decode pixel-shuffled images. |
http://arxiv.org/abs/2503.16761v1 | Valley-dependent giant orbital moments and transport feature in rhombohedral graphene multilayers | 2025-03-21T00:31:39+00:00 | Recent years have witnessed a great interest in orbital-related electronics (also termed orbitronics). In the current work, we present a first-principles density functional theory calculation on the orbital magnetic moments, intrinsic orbital Hall effect, and ordinary magnetoconductivity effects in rhombohedral graphene multilayers. Our calculations suggest a giant orbital moment that arises from inter-atomic cycloid motion, reaching over 30 $\mu_B$ under an intermediate gate voltage. This leads to a valley polarization under an external magnetic field, as observed in recent experiments [Nature 623, 41-47 (2023)]. In addition, the orbital-related transport features exhibit significant responses that could potentially be observed in experiments. We also suggest that under periodic driving (such as a high-frequency light field), the ungated graphene multilayers could host strong quantum anomalous and orbital Hall effects, engineered by the layer number. As the graphene multilayers are intrinsically nonmagnetic with negligible spin-orbit coupling, the orbital moments would not be entangled with spin-related signals. Thus, they serve as an ideal platform for orbitronic measurements and utilization in next-generation information read/write nanodevices. |
http://arxiv.org/abs/2503.16762v1 | Unraveling phase transformation with phononic hyperbolicity using off-resonant terahertz light | 2025-03-21T00:34:53+00:00 | Noncontacting and nondestructive control of geometric phase in conventional semiconductors plays a pivotal role in various applications. In the current work, we present a theoretical and computational investigation on terahertz (THz) light-induced phase transformation of conventional binary semiconducting compounds among different structures including rock-salt, zinc-blende, wurtzite, and hexagonal phases. Using MgS and MgSe as prototypical examples, we perform anharmonic phonon mediated calculations and reveal large contrasting lattice contributed dielectric susceptibility in the THz regime. We then construct a THz-induced phase diagram under intermediate temperature and reveal rock-salt to hexagonal and then wurtzite structure transformations with increasing light intensity. This does not require a high temperature environment as observed in traditional experiments. The low energy barrier suggests that the phase transition kinetics can be fast, and the stable room temperature phonon dispersions guarantee their non-volatile nature. Furthermore, we disclose the phononic hyperbolicity with strong anisotropic THz susceptibility components, which serves as a natural hyperbolic material with negative refractive index. Our work suggests the potential to realize metastable hidden phases using noninvasive THz irradiation, which expands the conventional pressure-temperature ($P-T$) phase diagram by adding light as an additional control factor. |
http://arxiv.org/abs/2503.16763v1 | On uniqueness of free boundary minimal annuli in geodesic balls of $\mathbb{S}^3_+$ and $\mathbb{H}^3$ | 2025-03-21T00:38:02+00:00 | We consider $\Sigma$ an embedded free boundary minimal annulus in a geodesic ball in the round hemisphere $\mathbb{S}^3_+$ or in the hyperbolic space $\mathbb{H}^3$. Under the hypothesis of invariance due to an antipodal map on the geodesic ball and using the fact that this surface satisfies the Steklov problem with frequency, we prove that $\Sigma$ is congruent to a critical rotational annulus. |
http://arxiv.org/abs/2503.16764v1 | Improving mmWave based Hand Hygiene Monitoring through Beam Steering and Combining Techniques | 2025-03-21T00:39:37+00:00 | We introduce BeaMsteerX (BMX), a novel mmWave hand hygiene gesture recognition technique that improves accuracy at longer ranges (1.5m). BMX steers a mmWave beam towards multiple directions around the subject, generating multiple views of the gesture that are then intelligently combined using deep learning to enhance gesture classification. We evaluated BMX using off-the-shelf mmWave radars and collected a total of 7,200 hand hygiene gesture samples from 10 subjects performing a six-step hand-rubbing procedure, as recommended by the World Health Organization, using sanitizer, at 1.5m -- over five times longer than in prior works. BMX outperforms state-of-the-art approaches by 31--43% and achieves 91% accuracy at boresight by combining only two beams, demonstrating superior gesture classification in low SNR scenarios. BMX maintained its effectiveness even when the subject was positioned 30 degrees away from the boresight, exhibiting a modest 5% drop in accuracy. |
http://arxiv.org/abs/2503.16765v1 | A thermodynamically consistent phase-field model for mass transport with interfacial reaction and deformation | 2025-03-21T00:40:24+00:00 | In this paper, a thermodynamically consistent phase-field model is proposed to describe the mass transport and reaction processes of multiple species in a fluid. A key feature of this model is that reactions between different species occur only at the interface, and may induce deformation of the interface. For the governing equations derived based on the energy variational method, we propose a structure-preserving numerical scheme that satisfies the mass conservation and energy dissipation laws at the discrete level. Furthermore, we carry out a rigorous error analysis of the time-discrete scheme for a simplified case. A series of numerical experiments are conducted to validate the effectiveness of the model as well as the accuracy and stability of the scheme. In particular, we simulate microvessels with straight and bifurcated structures to illustrate the risk of microaneurysm formation. |
http://arxiv.org/abs/2503.16766v3 | Quantized volume comparison for Fano manifolds | 2025-03-21T00:40:24+00:00 | A result of Kento Fujita says that the volume of a K\"ahler-Einstein Fano manifold is bounded from above by the volume of the projective space. In this short note we establish quantized versions of Fujita's result. |
http://arxiv.org/abs/2503.16767v1 | Production, Characteristics and Biological effects of Protonated Small Water Clusters | 2025-03-21T00:48:12+00:00 | The production and characteristics of protonated small water clusters (PSWCs) are reported in this work, where in electrospray ionization (ESI) of pure water, the species obtained were singly charged molecular ions consisting of 2, 3, 4 or 5 water molecules attached to a hydrogen ion, [(H2O)n+H]+, where n = 2, 3, 4 or 5. We propose a new type of PSWC structure: 2, 3, 4 or 5 water molecules wrapped around a hydrogen ion located at the electrical and geometric center, forming a very stable molecular structure. Furthermore, biological tests of the PSWCs on mitochondrial function of intestinal epithelial cells and liver cells in mice showed a better therapeutic effect on inflammatory bowel diseases than that of the biologic agent Infliximab. |
http://arxiv.org/abs/2503.16768v1 | Dynamic Attention Mechanism in Spatiotemporal Memory Networks for Object Tracking | 2025-03-21T00:48:31+00:00 | Mainstream visual object tracking frameworks predominantly rely on template matching paradigms. Their performance heavily depends on the quality of template features, which becomes increasingly challenging to maintain in complex scenarios involving target deformation, occlusion, and background clutter. While existing spatiotemporal memory-based trackers emphasize memory capacity expansion, they lack effective mechanisms for dynamic feature selection and adaptive fusion. To address this gap, we propose a Dynamic Attention Mechanism in Spatiotemporal Memory Network (DASTM) with two key innovations: 1) A differentiable dynamic attention mechanism that adaptively adjusts channel-spatial attention weights by analyzing spatiotemporal correlations between the templates and memory features; 2) A lightweight gating network that autonomously allocates computational resources based on target motion states, prioritizing high-discriminability features in challenging scenarios. Extensive evaluations on OTB-2015, VOT 2018, LaSOT, and GOT-10K benchmarks demonstrate our DASTM's superiority, achieving state-of-the-art performance in success rate, robustness, and real-time efficiency, thereby offering a novel solution for real-time tracking in complex environments. |
http://arxiv.org/abs/2503.16769v2 | Shear and bulk viscous coefficients of a hot and chirally imbalanced quark matter using NJL model | 2025-03-21T00:53:52+00:00 | The shear $\eta$ and bulk $\zeta$ viscous coefficients have been calculated in a hot and chirally asymmetric quark matter quantified in terms of a chiral chemical potential (CCP) using the two-flavor Nambu--Jona-Lasinio (NJL) model. This is done by employing the one-loop Green-Kubo formalism, where the viscous coefficients have been extracted from the long-wavelength limit of the in-medium spectral function corresponding to the energy-momentum tensor (EMT) current correlator calculated using the real time formalism of finite temperature field theory. The momentum-dependent thermal width of the quark/antiquark, which enters the expression for the viscosities as a dynamical input containing interactions, has been obtained from the $2\to2$ scattering processes mediated via the collective mesonic modes in scalar and pseudoscalar channels encoded in the respective in-medium polarization functions having explicit temperature and CCP dependence. Several thermodynamic quantities such as pressure, energy density, entropy density $(s)$, specific heat and isentropic speed of sound have also been calculated at finite CCP. The temperature and CCP dependence of the viscosity to entropy density ratios $\eta/s$ and $\zeta/s$ have also been studied. |
http://arxiv.org/abs/2503.16770v1 | An Improved Upper Bound on the Threshold Bias of the Oriented-cycle game | 2025-03-21T00:59:26+00:00 | We study the $b$-biased Oriented-cycle game where two players, OMaker and OBreaker, take turns directing the edges of $K_n$ (the complete graph on $n$ vertices). In each round, OMaker directs one previously undirected edge followed by OBreaker directing between one and $b$ previously undirected edges. The game ends once all edges have been directed, and OMaker wins if and only if the resulting tournament contains a directed cycle. Bollob\'as and Szab\'o asked the following question: what is the largest value of the bias $b$ for which OMaker has a winning strategy? Ben-Eliezer, Krivelevich and Sudakov proved that OMaker has a winning strategy for $b \leq n/2 - 2$. In the other direction, Clemens and Liebenau proved that OBreaker has a winning strategy for $b \geq 5n/6+2$. Inspired by their approach, we propose a significantly stronger strategy for OBreaker which we prove to be winning for $b \geq 0.7845n + O(1)$. |
http://arxiv.org/abs/2503.16771v1 | On Explaining (Large) Language Models For Code Using Global Code-Based Explanations | 2025-03-21T01:00:45+00:00 | In recent years, Language Models for Code (LLM4Code) have significantly changed the landscape of software engineering (SE) on downstream tasks, such as code generation, by making software development more efficient. Therefore, a growing interest has emerged in further evaluating these Language Models to homogenize the quality assessment of generated code. As the current evaluation process can over-rely on accuracy-based metrics, practitioners often seek methods to interpret LLM4Code outputs beyond canonical benchmarks. While the majority of research reports on code generation effectiveness in terms of expected ground truth, scant attention has been paid to LLMs' explanations. In essence, the decision-making process to generate code is hard to interpret. To bridge this evaluation gap, we introduce code rationales (Code$Q$), a technique with rigorous mathematical underpinning, to identify subsets of tokens that can explain individual code predictions. We conducted a thorough Exploratory Analysis to demonstrate the method's applicability and a User Study to understand the usability of code-based explanations. Our evaluation demonstrates that Code$Q$ is a powerful interpretability method to explain how (less) meaningful input concepts (i.e., the natural language particle `at') highly impact output generation. Moreover, participants of this study highlighted Code$Q$'s ability to show a causal relationship between the input and output of the model with readable and informative explanations on code completion and test generation tasks. Additionally, Code$Q$ also helps to uncover model rationale, facilitating comparison with a human rationale to promote a fair level of trust and distrust in the model. |
http://arxiv.org/abs/2503.16772v1 | Two-Photon Resonance Fluorescence in a Three-Level Ladder-Type Atom | 2025-03-21T01:01:00+00:00 | In this work, we consider a three-level ladder-type atom driven by a coherent field, inspired by the experimental work of Gasparinetti et al. [Phys. Rev. A 100, 033802 (2019)]. When driven on two-photon resonance, the atom is excited into its highest energy state $| f \rangle$ by absorbing two photons simultaneously. The atom then de-excites via a cascaded decay $| f \rangle \rightarrow | e \rangle \rightarrow | g \rangle$. Here we present a theoretical study of the atomic fluorescence spectrum where, upon strong coherent driving, the spectrum exhibits seven distinct frequencies corresponding to transitions amongst the atomic dressed states. We characterize the quantum statistics of the emitted photons by investigating the second-order correlation functions of the emitted field. We do so by considering the total field emitted by the atom and focusing on each of the dressed-state components, taking in particular a secular-approximation and deriving straightforward, transparent analytic expressions for the second-order auto- and cross-correlations. |
http://arxiv.org/abs/2503.16773v1 | Hydrodynamics of ultralight complex scalar field dark matter and its impact on the growth of structure | 2025-03-21T01:04:07+00:00 | The mass window of ultralight axion dark matter motivated by suppressing the growth of structure on subgalactic scales, $m\sim 10^{-22}\,\mathrm{eV}$, is now severely constrained by various observation data (e.g. Lyman-$\alpha$ forest). As an attempt to reopen this mass window, we investigate an alternative ultralight dark matter candidate, the complex scalar field dark matter (SFDM). We derive the relativistic hydrodynamics of the complex SFDM in the framework of cosmological perturbation theory. Our formalism contains two novel ingredients uniquely associated with the complex SFDM model: the Eckart frame defined by the conserved Noether current, and the stiff gauge condition, $c_s^2\equiv (\delta P/\delta\rho)|_s=1$. In the Eckart frame, the complex SFDM is effectively an imperfect fluid with a dissipative energy flux, distinguishing itself from axion dark matter. The energy flux can affect the growth of density fluctuations dynamically. Meanwhile, we apply the stiff gauge condition to find new constitutive equations for the complex SFDM. We revisit the homogeneous evolution of the complex SFDM and present illustrative early-stage solutions for perturbations of the complex SFDM in a simplified setting. We demonstrate the effects of varying the model parameters on the evolution of the perturbation variables. |
http://arxiv.org/abs/2503.16774v1 | Current and Future Use of Large Language Models for Knowledge Work | 2025-03-21T01:07:21+00:00 | Large Language Models (LLMs) have introduced a paradigm shift in interaction with AI technology, enabling knowledge workers to complete tasks by specifying their desired outcome in natural language. LLMs have the potential to increase productivity and reduce tedious tasks in an unprecedented way. A systematic study of LLM adoption for work can provide insight into how LLMs can best support these workers. To explore knowledge workers' current and desired usage of LLMs, we ran a survey (n=216). Workers described tasks they already used LLMs for, like generating code or improving text, but imagined a future with LLMs integrated into their workflows and data. We ran a second survey (n=107) a year later that validated our initial findings and provides insight into up-to-date LLM use by knowledge workers. We discuss implications for adoption and design of generative AI technologies for knowledge work. |
http://arxiv.org/abs/2503.16775v1 | Region Masking to Accelerate Video Processing on Neuromorphic Hardware | 2025-03-21T01:07:53+00:00 | The rapidly growing demand for on-chip edge intelligence on resource-constrained devices has motivated approaches to reduce energy and latency of deep learning models. Spiking neural networks (SNNs) have gained particular interest due to their promise to reduce energy consumption using event-based processing. We assert that while sigma-delta encoding in SNNs can take advantage of the temporal redundancy across video frames, it still involves a significant amount of redundant computation due to processing insignificant events. In this paper, we propose a region masking strategy that identifies regions of interest at the input of the SNN, thereby eliminating computation and data movement for events arising from unimportant regions. Our approach demonstrates that masking regions at the input not only significantly reduces the overall spiking activity of the network, but also provides significant improvement in throughput and latency. We apply region masking during video object detection on Loihi 2, demonstrating that masking approximately 60% of input regions can reduce the energy-delay product by 1.65x over a baseline sigma-delta network, with a degradation in mAP@0.5 of only 1.09%. |
http://arxiv.org/abs/2503.16776v1 | OpenCity3D: What do Vision-Language Models know about Urban Environments? | 2025-03-21T01:11:21+00:00 | Vision-language models (VLMs) show great promise for 3D scene understanding but are mainly applied to indoor spaces or autonomous driving, focusing on low-level tasks like segmentation. This work expands their use to urban-scale environments by leveraging 3D reconstructions from multi-view aerial imagery. We propose OpenCity3D, an approach that addresses high-level tasks, such as population density estimation, building age classification, property price prediction, crime rate assessment, and noise pollution evaluation. Our findings highlight OpenCity3D's impressive zero-shot and few-shot capabilities, showcasing adaptability to new contexts. This research establishes a new paradigm for language-driven urban analytics, enabling applications in planning, policy, and environmental monitoring. See our project page: opencity3d.github.io |
http://arxiv.org/abs/2503.16777v1 | Physics-Informed Deep B-Spline Networks for Dynamical Systems | 2025-03-21T01:15:40+00:00 | Physics-informed machine learning provides an approach to combining data and governing physics laws for solving complex partial differential equations (PDEs). However, efficiently solving PDEs with varying parameters and changing initial conditions and boundary conditions (ICBCs) with theoretical guarantees remains an open challenge. We propose a hybrid framework that uses a neural network to learn B-spline control points to approximate solutions to PDEs with varying system and ICBC parameters. The proposed network can be trained efficiently as one can directly specify ICBCs without imposing losses, calculate physics-informed loss functions through analytical formulas, and requires only learning the weights of B-spline functions as opposed to both weights and basis as in traditional neural operator learning methods. We provide theoretical guarantees that the proposed B-spline networks serve as universal approximators for the set of solutions of PDEs with varying ICBCs under mild conditions and establish bounds on the generalization errors in physics-informed learning. We also demonstrate in experiments that the proposed B-spline network can solve problems with discontinuous ICBCs and outperforms existing methods, and is able to learn solutions of 3D dynamics with diverse initial conditions. |
http://arxiv.org/abs/2503.16778v1 | Displacement-Actuated Continuum Robots: A Joint Space Abstraction | 2025-03-21T01:16:27+00:00 | The displacement-actuated continuum robot has been shown to be a key abstraction that significantly simplifies and improves approaches due to its relation to the Clarke transform. To highlight further potentials, we revisit and extend this abstraction to feature an increasingly popular length extension and an underutilized twisting. For each extension, the corresponding mapping from the joint values to the local coordinates of the manifold embedded in the joint space is provided. Each mapping is characterized by its compactness and linearity. |
http://arxiv.org/abs/2503.16779v1 | Chain-of-Tools: Utilizing Massive Unseen Tools in the CoT Reasoning of Frozen Language Models | 2025-03-21T01:26:12+00:00 | Tool learning can further broaden the usage scenarios of large language models (LLMs). However, most of the existing methods either require finetuning, so that the model can only use tools seen in the training data, or add tool demonstrations to the prompt, which is less efficient. In this paper, we present a new tool learning method, Chain-of-Tools. It makes full use of the powerful semantic representation capability of frozen LLMs to accomplish tool calling in CoT reasoning with a huge and flexible tool pool that may contain unseen tools. In particular, to validate the effectiveness of our approach in the massive unseen tool scenario, we construct a new dataset, SimpleToolQuestions. We conduct experiments on two numerical reasoning benchmarks (GSM8K-XL and FuncQA) and two knowledge-based question answering benchmarks (KAMEL and SimpleToolQuestions). Experimental results show that our approach performs better than the baseline. We also identify dimensions of the model output that are critical in tool selection, enhancing the model's interpretability. Our code and data are available at: https://github.com/fairyshine/Chain-of-Tools . |
http://arxiv.org/abs/2503.16780v1 | A-IDE : Agent-Integrated Denoising Experts | 2025-03-21T01:26:54+00:00 | Recent advances in deep-learning based denoising methods have improved Low-Dose CT image quality. However, due to distinct HU distributions and diverse anatomical characteristics, a single model often struggles to generalize across multiple anatomies. To address this limitation, we introduce the \textbf{Agent-Integrated Denoising Experts (A-IDE)} framework, which integrates three anatomical-region-specialized RED-CNN models under the management of a decision-making LLM agent. The agent analyzes semantic cues from BiomedCLIP to dynamically route incoming LDCT scans to the most appropriate expert model. We highlight three major advantages of our approach. A-IDE excels in heterogeneous, data-scarce environments. The framework automatically prevents overfitting by distributing tasks among multiple experts. Finally, our LLM-driven agentic pipeline eliminates the need for manual interventions. Experimental evaluations on the Mayo-2016 dataset confirm that A-IDE achieves superior performance in RMSE, PSNR, and SSIM compared to a single unified denoiser. |
http://arxiv.org/abs/2503.16781v2 | StrNim: a variant of Nim played on strings | 2025-03-21T01:33:32+00:00 | We propose a variant of Nim, named StrNim. Whereas a position in Nim is a tuple of non-negative integers, that in StrNim is a string, a sequence of characters. In every turn, each player shrinks the string, by removing a substring repeating the same character. As a first study on this new game, we present some sufficient conditions for the positions to be P-positions. |
http://arxiv.org/abs/2503.16782v1 | Learning Part Knowledge to Facilitate Category Understanding for Fine-Grained Generalized Category Discovery | 2025-03-21T01:37:51+00:00 | Generalized Category Discovery (GCD) aims to classify unlabeled data containing both seen and novel categories. Although existing methods perform well on generic datasets, they struggle in fine-grained scenarios. We attribute this difficulty to their reliance on contrastive learning over global image features to automatically capture discriminative cues, which fails to capture the subtle local differences essential for distinguishing fine-grained categories. Therefore, in this paper, we propose incorporating part knowledge to address fine-grained GCD, which introduces two key challenges: the absence of annotations for novel classes complicates the extraction of the part features, and global contrastive learning prioritizes holistic feature invariance, inadvertently suppressing discriminative local part patterns. To address these challenges, we propose PartGCD, including 1) Adaptive Part Decomposition, which automatically extracts class-specific semantic parts via Gaussian Mixture Models, and 2) Part Discrepancy Regularization, enforcing explicit separation between part features to amplify fine-grained local part distinctions. Experiments demonstrate state-of-the-art performance across multiple fine-grained benchmarks while maintaining competitiveness on generic datasets, validating the effectiveness and robustness of our approach. |
http://arxiv.org/abs/2503.16783v1 | CoBRA: A Universal Strategyproof Confirmation Protocol for Quorum-based Proof-of-Stake Blockchains | 2025-03-21T01:39:29+00:00 | We present a formal analysis of quorum-based State Machine Replication (SMR) protocols in Proof-of-Stake (PoS) systems under a hybrid threat model comprising honest, Byzantine, and rational validators. Our analysis of traditional quorum-based protocols establishes two fundamental impossibility results: (1) in partially synchronous networks, no quorum-based protocol can achieve SMR when rational and Byzantine validators comprise more than $1/3$ of participants, and (2) in synchronous networks, SMR remains impossible when rational and Byzantine validators comprise $2/3$ or more of participants. To overcome these limitations, we propose two complementary solutions in our hybrid model. First, we introduce a protocol that enforces a bound on the volume of the total transacted amount that is finalized within any time window $\Delta$ and prove that this bound is necessary for secure SMR protocols in our model. Second, we present the \emph{strongest chain rule}, which enables efficient finalization of transactions when the majority of honest participants provably support the SMR execution. Through empirical analysis of Ethereum and Cosmos networks, we demonstrate that validator participation consistently exceeds the required ${5}/{6}$ threshold, establishing the practical feasibility of our solution in production PoS systems. |
http://arxiv.org/abs/2503.16784v1 | Multi-property directed generative design of inorganic materials through Wyckoff-augmented transfer learning | 2025-03-21T01:41:25+00:00 | Accelerated materials discovery is an urgent demand to drive advancements in fields such as energy conversion, storage, and catalysis. Property-directed generative design has emerged as a transformative approach for rapidly discovering new functional inorganic materials with multiple desired properties within vast and complex search spaces. However, this approach faces two primary challenges: data scarcity for functional properties and the multi-objective optimization required to balance competing tasks. Here, we present a multi-property-directed generative framework designed to overcome these limitations and enhance site symmetry-compliant crystal generation beyond P1 (translational) symmetry. By incorporating Wyckoff-position-based data augmentation and transfer learning, our framework effectively handles sparse and small functional datasets, enabling the generation of new stable materials simultaneously conditioned on targeted space group, band gap, and formation energy. Using this approach, we identified previously unknown thermodynamically and lattice-dynamically stable semiconductors in tetragonal, trigonal, and cubic systems, with bandgaps ranging from 0.13 to 2.20 eV, as validated by density functional theory (DFT) calculations. Additionally, we assessed their thermoelectric descriptors using DFT, indicating their potential suitability for thermoelectric applications. We believe our integrated framework represents a significant step forward in generative design of inorganic materials. |
http://arxiv.org/abs/2503.16785v1 | Milliwatt-level UV generation using sidewall poled lithium niobate | 2025-03-21T01:44:34+00:00 | Integrated coherent sources of ultra-violet (UV) light are essential for a wide range of applications, from ion-based quantum computing and optical clocks to gas sensing and microscopy. Conventional approaches that rely on UV gain materials face limitations in terms of wavelength versatility; in response frequency upconversion approaches that leverage various optical nonlinearities have received considerable attention. Among these, the integrated thin-film lithium niobate (TFLN) photonic platform shows particular promise owing to lithium niobate's transparency into the UV range, its strong second order nonlinearity, and high optical confinement. However, to date, the high propagation losses and lack of reliable techniques for consistent poling of cm-long waveguides with small poling periods have severely limited the utility of this platform. Here we present a sidewall poled lithium niobate (SPLN) waveguide approach that overcomes these obstacles and results in a more than two orders of magnitude increase in generated UV power compared to the state-of-the-art. Our UV SPLN waveguides feature record-low propagation losses of 2.3 dB/cm, complete domain inversion of the waveguide cross-section, and an optimum 50% duty cycle, resulting in a record-high normalized conversion efficiency of 5050 %W$^{-1}$cm$^{-2}$, and 4.2 mW of generated on-chip power at 390 nm wavelength. This advancement makes the TFLN photonic platform a viable option for high-quality on-chip UV generation, benefiting emerging applications. |
http://arxiv.org/abs/2503.16786v1 | Average Nikolskii factors for random trigonometric polynomials | 2025-03-21T01:45:02+00:00 | For $1\le p,q\le \infty$, the Nikolskii factor for a trigonometric polynomial $T_{\bf a}$ is defined by $$\mathcal N_{p,q}(T_{\bf a})=\frac{\|T_{\bf a}\|_{q}}{\|T_{\bf a}\|_{p}},\ \ T_{\bf a}(x)=a_{1}+\sum\limits^{n}_{k=1}(a_{2k}\sqrt{2}\cos kx+a_{2k+1}\sqrt{2}\sin kx).$$ We study this average Nikolskii factor for random trigonometric polynomials with independent $N(0,\sigma^{2})$ coefficients and obtain the exact order. For $1\leq p<q<\infty$, the average Nikolskii factor is of order degree to the $0$ (i.e., bounded by a constant), as compared to the degree to the $1/p-1/q$ worst-case bound. We also give the generalization to random multivariate trigonometric polynomials.
http://arxiv.org/abs/2503.16787v1 | Photoinduced phase transitions and lattice deformation in 2D NbOX$_{2}$ (X=Cl, Br, I) | 2025-03-21T01:50:15+00:00 | We present a comprehensive investigation of light-induced phase transitions and strain in two-dimensional NbOX$_{2}$ (X = Cl, Br, I) using first-principles calculations. In particular, we identify a light-induced ferroelectric-to-paraelectric phase transition in these 2D systems. Furthermore, we demonstrate the possibility of inducing an antiferroelectric-to-paraelectric transition under illumination. Additionally, we find that these 2D systems exhibit significant photostrictive behavior, adding a new functionality to their already notable optical properties. The ability to control and manipulate ferroelectric order in these nanoscale materials through external stimuli, such as light, holds considerable promise for the development of next-generation electronic and optoelectronic devices. |
http://arxiv.org/abs/2503.16788v1 | Does Chain-of-Thought Reasoning Help Mobile GUI Agent? An Empirical Study | 2025-03-21T01:52:43+00:00 | Reasoning capabilities have significantly improved the performance of vision-language models (VLMs) in domains such as mathematical problem-solving, coding, and visual question-answering. However, their impact on real-world applications remains unclear. This paper presents the first empirical study on the effectiveness of reasoning-enabled VLMs in mobile GUI agents, a domain that requires interpreting complex screen layouts, understanding user instructions, and executing multi-turn interactions. We evaluate two pairs of commercial models--Gemini 2.0 Flash and Claude 3.7 Sonnet--comparing their base and reasoning-enhanced versions across two static benchmarks (ScreenSpot and AndroidControl) and one interactive environment (AndroidWorld). We surprisingly find the Claude 3.7 Sonnet reasoning model achieves state-of-the-art performance on AndroidWorld. However, reasoning VLMs generally offer marginal improvements over non-reasoning models on static benchmarks and even degrade performance in some agent setups. Notably, reasoning and non-reasoning VLMs fail on different sets of tasks, suggesting that reasoning does have an impact, but its benefits and drawbacks counterbalance each other. We attribute these inconsistencies to the limitations of benchmarks and VLMs. Based on the findings, we provide insights for further enhancing mobile GUI agents in terms of benchmarks, VLMs, and their adaptability in dynamically invoking reasoning VLMs. The experimental data are publicly available at https://github.com/LlamaTouch/VLM-Reasoning-Traces. |
http://arxiv.org/abs/2503.16789v1 | Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation | 2025-03-21T02:01:02+00:00 | Human-LLM conversations are increasingly becoming more pervasive in peoples' professional and personal lives, yet many users still struggle to elicit helpful responses from LLM Chatbots. One of the reasons for this issue is users' lack of understanding in crafting effective prompts that accurately convey their information needs. Meanwhile, the existence of real-world conversational datasets on the one hand, and the text understanding faculties of LLMs on the other, present a unique opportunity to study this problem, and its potential solutions at scale. Thus, in this paper we present the first LLM-centric study of real human-AI chatbot conversations, focused on investigating aspects in which user queries fall short of expressing information needs, and the potential of using LLMs to rewrite suboptimal user prompts. Our findings demonstrate that rephrasing ineffective prompts can elicit better responses from a conversational system, while preserving the user's original intent. Notably, the performance of rewrites improves in longer conversations, where contextual inferences about user needs can be made more accurately. Additionally, we observe that LLMs often need to -- and inherently do -- make \emph{plausible} assumptions about a user's intentions and goals when interpreting prompts. Our findings largely hold true across conversational domains, user intents, and LLMs of varying sizes and families, indicating the promise of using prompt rewriting as a solution for better human-AI interactions. |
http://arxiv.org/abs/2503.16790v1 | Fractal tiles induced by tent maps | 2025-03-21T02:01:21+00:00 | In the present article we introduce geometrical objects induced by the tent maps associated with special Pisot numbers that we call tent-tiles. They are compact subsets of the one-, two-, or three-dimensional Euclidean space, depending on the particular special Pisot number. Most of the tent-tiles have a fractal shape and we study the Hausdorff dimension of their boundary. Furthermore, we are concerned with tilings induced by tent-tiles. It turns out that tent-tiles give rise to two types of lattice tilings. In order to obtain these results we establish and exploit connections between tent-tiles and Rauzy fractals induced by substitutions and automorphisms of the free group. |
http://arxiv.org/abs/2503.16791v1 | "The Diagram is like Guardrails": Structuring GenAI-assisted Hypotheses Exploration with an Interactive Shared Representation | 2025-03-21T02:01:37+00:00 | Data analysis encompasses a spectrum of tasks, from high-level conceptual reasoning to lower-level execution. While AI-powered tools increasingly support execution tasks, there remains a need for intelligent assistance in conceptual tasks. This paper investigates the design of an ordered node-link tree interface augmented with AI-generated information hints and visualizations, as a potential shared representation for hypothesis exploration. Through a design probe (n=22), participants generated diagrams averaging 21.82 hypotheses. Our findings showed that the node-link diagram acts as "guardrails" for hypothesis exploration, facilitating structured workflows, providing comprehensive overviews, and enabling efficient backtracking. The AI-generated information hints, particularly visualizations, aided users in transforming abstract ideas into data-backed concepts while reducing cognitive load. We further discuss how node-link diagrams can support both parallel exploration and iterative refinement in hypothesis formulation, potentially enhancing the breadth and depth of human-AI collaborative data analysis. |
http://arxiv.org/abs/2503.16792v1 | Numerical simulation of wormhole propagation with the mixed hybridized discontinuous Galerkin finite element method | 2025-03-21T02:02:33+00:00 | The acid treatment of carbonate reservoirs is a widely employed technique for enhancing the productivity of oil and gas reservoirs. In this paper, we present a novel combined hybridized mixed discontinuous Galerkin (HMDG) finite element method to simulate the dissolution process near the wellbore, commonly referred to as the wormhole phenomenon. The primary contribution of this work lies in the application of hybridization techniques to both the pressure and concentration equations. Additionally, an upwind scheme is utilized to address convection-dominant scenarios, and a ``cut-off" operator is introduced to maintain the boundedness of porosity. Compared to traditional discontinuous Galerkin methods, the proposed approach results in a global system with fewer unknowns and sparser stencils, thereby significantly reducing computational costs. We analyze the existence and uniqueness of the new combined method and derive optimal error estimates using the developed technique. Numerical examples are provided to validate the theoretical analysis. |
http://arxiv.org/abs/2503.16793v1 | Restoring Forgotten Knowledge in Non-Exemplar Class Incremental Learning through Test-Time Semantic Evolution | 2025-03-21T02:02:35+00:00 | Continual learning aims to accumulate knowledge over a data stream while mitigating catastrophic forgetting. In Non-exemplar Class Incremental Learning (NECIL), forgetting arises during incremental optimization because old classes are inaccessible, hindering the retention of prior knowledge. To solve this, previous methods struggle in achieving the stability-plasticity balance in the training stages. However, we note that the testing stage is rarely considered among them, but is promising to be a solution to forgetting. Therefore, we propose RoSE, which is a simple yet effective method that \textbf{R}est\textbf{o}res forgotten knowledge through test-time \textbf{S}emantic \textbf{E}volution. Specifically designed for minimizing forgetting, RoSE is a test-time semantic drift compensation framework that enables more accurate drift estimation in a self-supervised manner. Moreover, to avoid incomplete optimization during online testing, we derive an analytical solution as an alternative to gradient descent. We evaluate RoSE on CIFAR-100, TinyImageNet, and ImageNet100 datasets, under both cold-start and warm-start settings. Our method consistently outperforms most state-of-the-art (SOTA) methods across various scenarios, validating the potential and feasibility of test-time evolution in NECIL. |
http://arxiv.org/abs/2503.16794v1 | Local Ratio based Real-time Job Offloading and Resource Allocation in Mobile Edge Computing | 2025-03-21T02:06:25+00:00 | Mobile Edge Computing (MEC) has emerged as a promising paradigm enabling vehicles to handle computation-intensive and time-sensitive applications for intelligent transportation. Due to the limited resources in MEC, effective resource management is crucial for improving system performance. While existing studies mostly focus on the job offloading problem and assume that job resource demands are fixed and given apriori, the joint consideration of job offloading (selecting the edge server for each job) and resource allocation (determining the bandwidth and computation resources for offloading and processing) remains underexplored. This paper addresses the joint problem for deadline-constrained jobs in MEC with both communication and computation resource constraints, aiming to maximize the total utility gained from jobs. To tackle this problem, we propose an approximation algorithm, $\mathtt{IDAssign}$, with an approximation bound of $\frac{1}{6}$, and experimentally evaluate the performance of $\mathtt{IDAssign}$ by comparing it to state-of-the-art heuristics using a real-world taxi trace and object detection applications. |
http://arxiv.org/abs/2503.16795v1 | DCEdit: Dual-Level Controlled Image Editing via Precisely Localized Semantics | 2025-03-21T02:14:03+00:00 | This paper presents a novel approach to improving text-guided image editing using diffusion-based models. The text-guided image editing task poses the key challenge of precisely locating and editing the target semantic, and previous methods fall short in this aspect. Our method introduces a Precise Semantic Localization strategy that leverages visual and textual self-attention to enhance the cross-attention map, which can serve as regional cues to improve editing performance. Then we propose a Dual-Level Control mechanism for incorporating regional cues at both feature and latent levels, offering fine-grained control for more precise edits. To fully compare our method with other DiT-based approaches, we construct the RW-800 benchmark, featuring high resolution images, long descriptive texts, real-world images, and a new text editing task. Experimental results on the popular PIE-Bench and RW-800 benchmarks demonstrate the superior performance of our approach in preserving background and providing accurate edits.
http://arxiv.org/abs/2503.16796v1 | Finite-time scaling with two characteristic time scales: Driven critical dynamics with emergent symmetry | 2025-03-21T02:14:14+00:00 | Critical points with emergent symmetry exhibit intriguing scaling properties induced by two divergent length scales, attracting extensive investigations recently. We study the driven critical dynamics in a three-dimensional $q$-state clock model, in which the ordered phase breaks the $Z_q$ discrete symmetry, while an emergent $U(1)$ symmetry appears at the critical point. By increasing the temperature at a finite velocity $v$ to traverse the critical point from the ordered phase, we uncover rich dynamic scaling properties beyond the celebrated Kibble-Zurek mechanism. Our findings reveal the existence of two finite-time scaling (FTS) regions, characterized by two driving-induced time scales $\zeta_d\propto v^{-z/r}$ and $\zeta_d'\propto v^{-z/r'}$, respectively. Here $z$ is the dynamic exponent, $r$ is the usual critical exponent of $v$, and $r'$ represents an additional critical exponent of $v$ associated with the dangerously irrelevant scaling variable. While the square of the order parameter $M^2$ obeys the usual FTS form, the angular order parameter $\phi_q$ shows remarkably distinct scaling behaviors controlled by both FTS regions. For small $v$, $\phi_q$ is dominated by the time scale $\zeta_d$, whereas for large $v$, $\phi_q$ is governed by the second time scale $\zeta_d'$. We verify the universality of these scaling properties in models with both isotropic and anisotropic couplings. Our theoretical insights provide a promising foundation for further experimental investigations in the hexagonal RMnO$_3$ (R=rare earth) materials. |
http://arxiv.org/abs/2503.16797v1 | A Learnability Analysis on Neuro-Symbolic Learning | 2025-03-21T02:16:11+00:00 | This paper analyzes the learnability of neuro-symbolic (NeSy) tasks within hybrid systems. We show that the learnability of NeSy tasks can be characterized by their derived constraint satisfaction problems (DCSPs). Specifically, a task is learnable if the corresponding DCSP has a unique solution; otherwise, it is unlearnable. For learnable tasks, we establish error bounds by exploiting the clustering property of the hypothesis space. Additionally, we analyze the asymptotic error for general NeSy tasks, showing that the expected error scales with the disagreement among solutions. Our results offer a principled approach to determining learnability and provide insights into the design of new algorithms. |
http://arxiv.org/abs/2503.16798v1 | A Pathway to Near Tissue Computing through Processing-in-CTIA Pixels for Biomedical Applications | 2025-03-21T02:19:57+00:00 | Near-tissue computing requires sensor-level processing of high-resolution images, essential for real-time biomedical diagnostics and surgical guidance. To address this need, we introduce a novel Capacitive Transimpedance Amplifier-based In-Pixel Computing (CTIA-IPC) architecture. Our design leverages CTIA pixels that are widely used for biomedical imaging owing to the inherent advantages of excellent linearity, low noise, and robust operation under low-light conditions. We augment CTIA pixels with IPC to enable precise deep learning computations including multi-channel, multi-bit convolution operations along with integrated batch normalization (BN) and Rectified Linear Unit (ReLU) functionalities in the peripheral ADCs (Analog to Digital Converters). This design improves the linearity of Multiply and Accumulate (MAC) operations while enhancing computational efficiency. Leveraging 3D integration to embed pixel circuitry and weight storage, CTIA-IPC maintains pixel density comparable to standard CTIA designs. Moreover, our algorithm-circuit co-design approach enables efficient real-time diagnostics and AI-driven medical analysis. Evaluated on the EndoVis tissue dataset (1280x1024), CTIA-IPC achieves approximately 12x reduction in data bandwidth, yielding segmentation IoUs of 75.91% (parts) and 28.58% (instrument), a minimal accuracy reduction (1.3%-2.5%) compared to baseline methods. Achieving 1.98 GOPS throughput and 3.39 GOPS/W efficiency, our CTIA-IPC architecture offers a promising computational framework tailored specifically for biomedical near-tissue computing.
http://arxiv.org/abs/2503.16799v1 | Causally Aligned Curriculum Learning | 2025-03-21T02:20:38+00:00 | A pervasive challenge in Reinforcement Learning (RL) is the "curse of dimensionality" which is the exponential growth in the state-action space when optimizing a high-dimensional target task. The framework of curriculum learning trains the agent in a curriculum composed of a sequence of related and more manageable source tasks. The expectation is that when some optimal decision rules are shared across source tasks and the target task, the agent could more quickly pick up the necessary skills to behave optimally in the environment, thus accelerating the learning process. However, this critical assumption of invariant optimal decision rules does not necessarily hold in many practical applications, specifically when the underlying environment contains unobserved confounders. This paper studies the problem of curriculum RL through causal lenses. We derive a sufficient graphical condition characterizing causally aligned source tasks, i.e., the invariance of optimal decision rules holds. We further develop an efficient algorithm to generate a causally aligned curriculum, provided with qualitative causal knowledge of the target task. Finally, we validate our proposed methodology through experiments in discrete and continuous confounded tasks with pixel observations. |
http://arxiv.org/abs/2503.16800v2 | Charged Black Holes in the Kalb-Ramond Background with Lorentz Violation: Null Geodesics and Optical Appearance of a Thin Accretion Disk | 2025-03-21T02:22:57+00:00 | In this paper, we investigate the optical appearance of a charged black hole in the Kalb-Ramond background, incorporating a Lorentz-violating parameter $l=0.01$. By analyzing the null geodesics, we derive the photon sphere, event horizon, effective potential, and critical impact parameters. We then employ a ray-tracing technique to study the trajectories of photons surrounding a thin accretion disk. Three different emission models are considered to explore the observed intensity profiles of direct rings, lensing rings, and photon sphere. By comparing these results with those of the standard Reissner-Nordstr\"om black hole ($l=0$) and the Kalb-Ramond black hole with different values of Lorentz-violating parameter (specifically, $l=0.05$ and $l=0.1$ respectively), we find that the Lorentz symmetry breaking will lead to a decrease in the radii of the photon sphere, the event horizon, and the innermost stable circular orbit. Consequently, this makes the detection of these black holes more challenging. |
http://arxiv.org/abs/2503.16801v1 | Auto-Regressive Diffusion for Generating 3D Human-Object Interactions | 2025-03-21T02:25:59+00:00 | Text-driven Human-Object Interaction (Text-to-HOI) generation is an emerging field with applications in animation, video games, virtual reality, and robotics. A key challenge in HOI generation is maintaining interaction consistency in long sequences. Existing Text-to-Motion-based approaches, such as discrete motion tokenization, cannot be directly applied to HOI generation due to limited data in this domain and the complexity of the modality. To address the problem of interaction consistency in long sequences, we propose an autoregressive diffusion model (ARDHOI) that predicts the next continuous token. Specifically, we introduce a Contrastive Variational Autoencoder (cVAE) to learn a physically plausible space of continuous HOI tokens, thereby ensuring that generated human-object motions are realistic and natural. For generating sequences autoregressively, we develop a Mamba-based context encoder to capture and maintain consistent sequential actions. Additionally, we implement an MLP-based denoiser to generate the subsequent token conditioned on the encoded context. Our model has been evaluated on the OMOMO and BEHAVE datasets, where it outperforms existing state-of-the-art methods in terms of both performance and inference speed. This makes ARDHOI a robust and efficient solution for text-driven HOI tasks.
http://arxiv.org/abs/2503.16802v1 | Opening and closing a bandgap via alternating softening and hardening nonlinearities | 2025-03-21T02:26:09+00:00 | Recent studies have shown some unusual nonlinear dispersion behaviors that are disconnected from the linear regime. However, existing analytical techniques, such as perturbation methods, fail to correctly capture these behaviors. Here we propose a general theoretical approach that converts the nonlinear wave equation to an equivalent linear eigenvalue problem, which directly gives the nonlinear dispersion relation and modal vectors. The theoretical approach is employed to 1D phononic chains and 2D hexagonal lattices with alternating softening and hardening nonlinearities, revealing amplitude-induced bandgap opening and closing phenomena. The theoretical results are validated via full-scale simulations with periodic boundary conditions, in which steady-state nonlinear plane wave responses are numerically obtained. Moreover, we leverage these nonlinear phenomena to achieve tunable frequency splitting and focusing effects. Thus, our work opens new paradigms for understanding nonlinear wave physics and for achieving novel wave control capabilities. |
http://arxiv.org/abs/2503.16803v1 | BEAC: Imitating Complex Exploration and Task-oriented Behaviors for Invisible Object Nonprehensile Manipulation | 2025-03-21T02:26:14+00:00 | Applying imitation learning (IL) is challenging to nonprehensile manipulation tasks of invisible objects with partial observations, such as excavating buried rocks. The demonstrator must make such complex action decisions as exploring to find the object and task-oriented actions to complete the task while estimating its hidden state, perhaps causing inconsistent action demonstration and high cognitive load problems. For these problems, work in human cognitive science suggests that promoting the use of pre-designed, simple exploration rules for the demonstrator may alleviate the problems of action inconsistency and high cognitive load. Therefore, when performing imitation learning from demonstrations using such exploration rules, it is important to accurately imitate not only the demonstrator's task-oriented behavior but also his/her mode-switching behavior (exploratory or task-oriented behavior) under partial observation. Based on the above considerations, this paper proposes a novel imitation learning framework called Belief Exploration-Action Cloning (BEAC), which has a switching policy structure between a pre-designed exploration policy and a task-oriented action policy trained on the estimated belief states based on past history. In simulation and real robot experiments, we confirmed that our proposed method achieved the best task performance, higher mode and action prediction accuracies, while reducing the cognitive load in the demonstration indicated by a user study. |
http://arxiv.org/abs/2503.16804v2 | Anisotropic flows of identified hadrons in the equal-velocity quark combination model at RHIC energy | 2025-03-21T02:28:30+00:00 | We employ an equal-velocity quark combination model to study anisotropic flows $v_{2}$, $v_{3}$ and $v_{4}$ of identified hadrons at mid-rapidity in heavy-ion collisions at RHIC energies. Under the equal-velocity combination mechanism of constituent quarks at hadronization, we build analytical formulas of anisotropic flows of hadrons in terms of those of quarks just before hadronization. We systematically analyze the contribution of higher order flows of quarks, and show how simple formulas of $v_{2}$, $v_{3}$ and $v_{4}$ of identified hadrons with the desired precision can be obtained by neglecting the small contribution of higher order flows of quarks. We systematically test these simple formulas of hadronic flows by the experimental data of $v_{2}$, $v_{3}$ and $v_{4}$ of identified hadrons $\phi$, $\Lambda$, $\Xi^{-}$, $\Omega^{-}$, $\bar{\Lambda}$, $\bar{\Xi}^{+}$, $\bar{\Omega}^{+}$, $p$ and $\bar{p}$ in Au+Au collisions at $\sqrt{s_{NN}}=$ 19.6, 54.4 and 200 GeV, and we find that the equal-velocity quark combination model can well describe the measured $v_{2}$, $v_{3}$ and $v_{4}$ of identified hadrons in Au+Au collisions at those collision energies. We further study the obtained anisotropic flows of quarks and find two scaling properties which can be qualitatively understood by the hydrodynamic evolution of thermal quark medium produced in relativistic heavy-ion collisions.
http://arxiv.org/abs/2503.16805v1 | Nuclear magnetic resonance investigation of strain-tuned iron-based superconductors (Druckabhängige Untersuchung eisenbasierter Supraleiter mittels Kernspinresonanz) | 2025-03-21T02:29:00+00:00 | Final report for a Deutsche Forschungsgemeinschaft Eigene Stelle grant, summarizing work mainly on uniaxial-pressure-dependent nuclear magnetic resonance (NMR) investigations of BaFe$_2$As$_2$. We have conducted systematic $^{75}$As NMR experiments in BaFe$_2$As$_2$ under in-situ controlled conditions of uniaxial pressure. We find that the electric field gradient (EFG), spin--lattice relaxation rate T$_1^{-1}$, spin--spin relaxation rate T$_2^{-1}$, and Knight shift $K$ at the As site are sensitive to applied uniaxial pressure. These properties allow us to locally probe the nematic susceptibility, as well as orbital and spin degrees of freedom. Our spectral measurements in the magnetic state provide no evidence for spin reorientation below T$_N$ for both positive and negative applied uniaxial pressure up to the point of sample failure. |
http://arxiv.org/abs/2503.16806v1 | DyWA: Dynamics-adaptive World Action Model for Generalizable Non-prehensile Manipulation | 2025-03-21T02:29:52+00:00 | Nonprehensile manipulation is crucial for handling objects that are too thin, large, or otherwise ungraspable in unstructured environments. While conventional planning-based approaches struggle with complex contact modeling, learning-based methods have recently emerged as a promising alternative. However, existing learning-based approaches face two major limitations: they heavily rely on multi-view cameras and precise pose tracking, and they fail to generalize across varying physical conditions, such as changes in object mass and table friction. To address these challenges, we propose the Dynamics-Adaptive World Action Model (DyWA), a novel framework that enhances action learning by jointly predicting future states while adapting to dynamics variations based on historical trajectories. By unifying the modeling of geometry, state, physics, and robot actions, DyWA enables more robust policy learning under partial observability. Compared to baselines, our method improves the success rate by 31.5% using only single-view point cloud observations in the simulation. Furthermore, DyWA achieves an average success rate of 68% in real-world experiments, demonstrating its ability to generalize across diverse object geometries, adapt to varying table friction, and robustness in challenging scenarios such as half-filled water bottles and slippery surfaces. |
http://arxiv.org/abs/2503.16807v1 | Multi-View Orthogonal Projection Regression with Application in Multi-omics integration | 2025-03-21T02:31:32+00:00 | Multi-omics integration offers novel insights into complex biological mechanisms by utilizing the fused information from various omics datasets. However, the inherent within- and inter-modality correlations in multi-omics data present significant challenges for traditional variable selection methods, such as Lasso regression. These correlations can lead to multicollinearity, compromising the stability and interpretability of selected variables. To address these problems, we introduce the Multi-View Orthogonal Projection Regression (MVOPR), a novel approach for variable selection in multi-omics analysis. MVOPR leverages the unidirectional associations among omics layers, inspired by the Central Dogma of Molecular Biology, to transform predictors into an uncorrelated feature space. This orthogonal projection framework effectively mitigates the correlations, allowing penalized regression models to operate on independent components. Through simulations under both well-specified and misspecified scenarios, MVOPR demonstrates superior performance in variable selection, outperforming traditional Lasso-based methods and factor-based models. In real-data analysis on the CAARS dataset, MVOPR consistently identifies biologically relevant features, including the Bacteroidaceae family and key metabolites which align well with known asthma biomarkers. These findings illustrate MVOPR's ability to enhance variable selection while offering biologically interpretable insights, offering a robust tool for integrative multi-omics research. |
http://arxiv.org/abs/2503.16808v1 | Gradient continuity for the parabolic $(1,\,p)$-Laplace system | 2025-03-21T02:36:49+00:00 | This paper deals with the parabolic $(1,\,p)$-Laplace system, a parabolic system that involves the one-Laplace and $p$-Laplace operators with $p\in(1,\,\infty)$. We aim to prove that a spatial gradient is continuous in space and time. An external force term is treated under the optimal regularity assumption in the parabolic Lebesgue spaces. We also discuss a generalized parabolic system with the Uhlenbeck structure. A main difficulty is that the uniform ellipticity of the $(1,\,p)$-Laplace operator is violated on a facet, or the degenerate region of a spatial gradient. The gradient continuity is proved by showing local H\"{o}lder continuity of a truncated gradient, whose support is far from the facet. This is rigorously demonstrated by considering approximate parabolic systems and deducing various regularity estimates for approximate solutions by classical methods such as De Giorgi's truncation, Moser's iteration, and freezing coefficient arguments. A weak maximum principle is also utilized when $p$ is not in the supercritical range. |
http://arxiv.org/abs/2503.16809v1 | Online Selective Conformal Prediction: Errors and Solutions | 2025-03-21T02:37:28+00:00 | In online selective conformal inference, data arrives sequentially, and prediction intervals are constructed only when an online selection rule is met. Since online selections may break the exchangeability between the selected test datum and the rest of the data, one must correct for this by suitably selecting the calibration data. In this paper, we evaluate existing calibration selection strategies and pinpoint some fundamental errors in the associated claims that guarantee selection-conditional coverage and control of the false coverage rate (FCR). To address these shortcomings, we propose novel calibration selection strategies that provably preserve the exchangeability of the calibration data and the selected test datum. Consequently, we demonstrate that online selective conformal inference with these strategies guarantees both selection-conditional coverage and FCR control. Our theoretical findings are supported by experimental evidence examining tradeoffs between valid methods. |
http://arxiv.org/abs/2503.16810v1 | Glueballonia as Hopfions | 2025-03-21T02:39:15+00:00 | We work out the Hopfion description of glueballs by inclusively comparing the energy spectra obtained by quantizing Hopfions with experimental data and lattice QCD. Identifying a Hopfion carrying a unit topological charge as $f_0(1500)$, the Hopfions with the topological charge two are classified as glueballonia, i.e., two glueballs are bound together. We find a tightly bound and a loosely bound glueballonium, corresponding to $f_0(2470)$ and a novel scalar particle with a mass around 2814 MeV, respectively, and calculate their binding energies. By the rigid body quantization of Hopfions, we predict a characteristic multiplet structure of tensor glueball states. Some of them are missing in the current experimental data and can be verified in future measurements. |
http://arxiv.org/abs/2503.16811v1 | Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision | 2025-03-21T02:39:32+00:00 | LiDAR-based 3D object detection and semantic segmentation are critical tasks in 3D scene understanding. Traditional detection and segmentation methods supervise their models through bounding box labels and semantic mask labels. However, these two independent labels inherently contain significant redundancy. This paper aims to eliminate the redundancy by supervising 3D object detection using only semantic labels. However, the challenge arises due to the incomplete geometry structure and boundary ambiguity of point-cloud instances, leading to inaccurate pseudo labels and poor detection results. To address these challenges, we propose a novel method, named Seg2Box. We first introduce a Multi-Frame Multi-Scale Clustering (MFMS-C) module, which leverages the spatio-temporal consistency of point clouds to generate accurate box-level pseudo-labels. Additionally, the Semantic-Guiding Iterative-Mining Self-Training (SGIM-ST) module is proposed to enhance the performance by progressively refining the pseudo-labels and mining the instances without generating pseudo-labels. Experiments on the Waymo Open Dataset and nuScenes Dataset show that our method significantly outperforms other competitive methods by 23.7\% and 10.3\% in mAP, respectively. The results demonstrate the great label-efficient potential and advancement of our method. |
http://arxiv.org/abs/2503.16812v1 | Development of High-Quality $α$-Ta Film at Room Temperature via Seed Layer Engineering | 2025-03-21T02:43:14+00:00 | The growth of high-quality superconducting thin film on silicon substrates is essential for quantum computing, and low signal interconnects with industrial compatibility. Recently, the growth of $\alpha$-Ta (alpha-phase tantalum) thin films has gained attention over conventional superconductors like Nb and Al due to their high-density native oxide ($Ta_2O_5$), which offers excellent chemical resistance, superior dielectric properties, and mechanical robustness. The growth of $\alpha$-Ta thin films can be achieved through high-temperature/cryogenic growth, ultra-thin seed layers, or thick films (>300 nm). While high-temperature deposition produces high-quality films, it can cause thermal stress, silicide formation at the interface, and defects due to substrate-film mismatch. Room-temperature deposition minimizes these issues, benefiting heat-sensitive substrates and device fabrication. Low-temperature growth using amorphous (defective) seed layers such as TaN and TiN has shown promise for phase stabilization. However, nitrogen gas, used as a source of metallic nitride, can introduce defects and lead to the formation of amorphous seed layers. This study explores using crystalline seed layers to optimize $\alpha$-Ta thin films, demonstrating improved film quality, including reduced surface roughness, enhanced phase orientation, and higher transition temperatures compared to amorphous seed layers like metal nitrides. These advancements could interest the superconducting materials community for fabricating high-quality quantum devices. |
http://arxiv.org/abs/2503.16813v1 | A note on the existence of self-similar profiles of the hydrodynamic formulation of the focusing nonlinear Schrödinger equation | 2025-03-21T02:44:19+00:00 | After performing the Madelung transformation, the nonlinear Schr\"odinger equation is transformed into a hydrodynamic equation akin to the compressible Euler equations with a certain dissipation. In this short note, we construct self-similar solutions of such system in the focusing case for any mass supercritical exponent. To the best of our knowledge these solutions are new, and may formally arise as potential blow-up profiles of the focusing NLS equation. |
http://arxiv.org/abs/2503.16814v1 | When Debate Fails: Bias Reinforcement in Large Language Models | 2025-03-21T02:51:30+00:00 | Large Language Models (LLMs) solve complex problems using training-free methods like prompt engineering and in-context learning, yet ensuring reasoning correctness remains challenging. While self-correction methods such as self-consistency and self-refinement aim to improve reliability, they often reinforce biases due to the lack of effective feedback mechanisms. Multi-Agent Debate (MAD) has emerged as an alternative, but we identify two key limitations: bias reinforcement, where debate amplifies model biases instead of correcting them, and lack of perspective diversity, as all agents share the same model and reasoning patterns, limiting true debate effectiveness. To systematically evaluate these issues, we introduce $\textit{MetaNIM Arena}$, a benchmark designed to assess LLMs in adversarial strategic decision-making, where dynamic interactions influence optimal decisions. To overcome MAD's limitations, we propose $\textbf{DReaMAD}$ ($\textbf{D}$iverse $\textbf{Rea}$soning via $\textbf{M}$ulti-$\textbf{A}$gent $\textbf{D}$ebate with Refined Prompt), a novel framework that (1) refines LLM's strategic prior knowledge to improve reasoning quality and (2) promotes diverse viewpoints within a single model by systematically modifying prompts, reducing bias. Empirical results show that $\textbf{DReaMAD}$ significantly improves decision accuracy, reasoning diversity, and bias mitigation across multiple strategic tasks, establishing it as a more effective approach for LLM-based decision-making. |
http://arxiv.org/abs/2503.16815v1 | DeFT: Mitigating Data Dependencies for Flexible Communication Scheduling in Distributed Training | 2025-03-21T02:59:25+00:00 | Communication scheduling aims to reduce communication bottlenecks in data parallel training (DP) by maximizing the overlap between computation and communication. However, existing schemes fall short due to three main issues: (1) hard data dependencies break some overlapping between communication and computation; (2) high coverage rates impair further improvement on performance; (3) imbalanced communication/computation times of tensors caused by partitioning/fusion strategies cause more bubbles. To address these drawbacks, we propose a new communication scheduling scheme DeFT, whose key insight is to mitigate data dependencies and support flexible scheduling in distributed training. DeFT uncovers new overlapping chances in training by transforming the scheduling problem into multiple knapsack problems. Specifically, DeFT eliminates hard dependencies with delayed updates, reducing the coverage rate by adjusting update frequency and utilizing heterogeneous communication links, merging the computation times of backward or forward as the knapsack capacity to avoid the negative impact of unbalanced tensors. Additionally, DeFT preserves training accuracy by adjusting its scheduling strategy via convergence loss quantification. Extensive experiments with 16 A100 GPUs showed that DeFT achieved speedups of 29% to 115% on three representative benchmarks compared to US-Byte and Bytescheduler with no loss of accuracy. |
http://arxiv.org/abs/2503.16816v1 | ST-Prompt Guided Histological Hypergraph Learning for Spatial Gene Expression Prediction | 2025-03-21T03:10:43+00:00 | Spatial Transcriptomics (ST) reveals the spatial distribution of gene expression in tissues, offering critical insights into biological processes and disease mechanisms. However, predicting ST from H\&E-stained histology images is challenging due to the heterogeneous relationship between histomorphology and gene expression, which arises from substantial variability across different patients and tissue sections. A more practical and valuable approach is to utilize ST data from a few local regions to predict the spatial transcriptomic landscape across the remaining regions in H&E slides. In response, we propose PHG2ST, an ST-prompt guided histological hypergraph learning framework, which leverages sparse ST signals as prompts to guide histological hypergraph learning for global spatial gene expression prediction. Our framework fuses histological hypergraph representations at multiple scales through a masked ST-prompt encoding mechanism, improving robustness and generalizability. Benchmark evaluations on two public ST datasets demonstrate that PHG2ST outperforms the existing state-of-the-art methods and closely aligns with the ground truth. These results underscore the potential of leveraging sparse local ST data for scalable and cost-effective spatial gene expression mapping in real-world biomedical applications. |
http://arxiv.org/abs/2503.16817v1 | System Identification Under Bounded Noise: Optimal Rates Beyond Least Squares | 2025-03-21T03:13:32+00:00 | System identification is a fundamental problem in control and learning, particularly in high-stakes applications where data efficiency is critical. Classical approaches, such as the ordinary least squares estimator (OLS), achieve an $O(1/\sqrt{T})$ convergence rate under Gaussian noise assumptions, where $T$ is the number of samples. This rate has been shown to match the lower bound. However, in many practical scenarios, noise is known to be bounded, opening the possibility of improving sample complexity. In this work, we establish the minimax lower bound for system identification under bounded noise, proving that the $O(1/T)$ convergence rate is indeed optimal. We further demonstrate that OLS remains limited to an {$\Omega(1/\sqrt{T})$} convergence rate, making it fundamentally suboptimal in the presence of bounded noise. Finally, we instantiate two natural variations of OLS that obtain the optimal sample complexity. |
http://arxiv.org/abs/2503.16818v1 | Depth-Aided Color Image Inpainting in Quaternion Domain | 2025-03-21T03:18:41+00:00 | In this paper, we propose a depth-aided color image inpainting method in the quaternion domain, called depth-aided low-rank quaternion matrix completion (D-LRQMC). In conventional quaternion-based inpainting techniques, the color image is expressed as a quaternion matrix by using the three imaginary parts as the color channels, whereas the real part is set to zero and has no information. Our approach incorporates depth information as the real part of the quaternion representations, leveraging the correlation between color and depth to improve the result of inpainting. In the proposed method, we first restore the observed image with the conventional LRQMC and estimate the depth of the restored result. We then incorporate the estimated depth into the real part of the observed image and perform LRQMC again. Simulation results demonstrate that the proposed D-LRQMC can improve restoration accuracy and visual quality for various images compared to the conventional LRQMC. These results suggest the effectiveness of the depth information for color image processing in quaternion domain. |
http://arxiv.org/abs/2503.16819v1 | Topological blocking at the Bi(111) surface due to surface relaxation | 2025-03-21T03:20:09+00:00 | The topological characteristics of Bi and its alloys with Sb have fueled intense debate since the prediction of three-dimensional topological insulators. However, a definitive resolution has not been reached to date. Here, we provide theoretical evidence that surface relaxation conceals the underlying bulk topology of pure Bi. Using density functional theory calculations for thin Bi(111) films (up to 17 bilayers), we first demonstrate a substantial inter-bilayer expansion near the surface. Motivated by this finding, we extend our analysis to thick Bi(111) films (up to 250 bilayers) incorporating relaxation layers, within the framework of a relativistic empirical tight-binding model. Our results reveal that these relaxation layers topologically block the emergence of surface state and significantly suppress the one-particle spectrum of surface states, thereby obscuring the experimental identification of Bi's topological properties. This phenomenon, which we term "topological blocking", provides crucial insights into the long-standing difficulty of observing surface states of Bi(111) at the $\bar{M}$ point. Furthermore, it establishes a framework for understanding and predicting the topological behavior in systems where surface relaxation disrupts the bulk-edge correspondence. |
http://arxiv.org/abs/2503.16820v1 | Giant Self Spin-Valve Effect in the Kagome Helimagnet | 2025-03-21T03:20:44+00:00 | Kagome magnets can combine non-trivial band topology and electron correlations, offering a versatile playground for various quantum phenomena. In this work we propose that kagome magnets with frustrated interlayer interactions can intrinsically support a self spin-valve effect, and experimentally confirm this in the kagome helimagnet TmMn$_6$Sn$_6$. Under a magnetic field perpendicular to the helical axis, using magnetic force microscopy we observed stripe domains that stack strictly along the helical axis, which we attribute to the stability loss of the kagome helimagnetic state. Such a domain pattern spontaneously mimics the artificial multilayered structure in traditional spin valves, which, combined with the high spin polarization, leads to a giant magnetoresistance (GMR) ratio over 160%. This discovery opens an avenue to realize inherent spin valves in a variety of quantum magnets, and can hold promise in future spintronics. |
http://arxiv.org/abs/2503.16821v1 | The graph zeta functions with respect to the group matrix of a finite group | 2025-03-21T03:26:09+00:00 | In this paper, we present formulas for the edge zeta function and the second weighted zeta function with respect to the group matrix of a finite abelian group $\Gamma $. Furthermore, we give another proof of Dedekind Theorem for the group determinant of $\Gamma $ by the decomposition formula for a matrix of a group covering of a digraph. Finally, we treat the weighted complexity of the complete graph with entries of the group matrix of $\Gamma $ as arc weights. |
http://arxiv.org/abs/2503.16822v1 | RigGS: Rigging of 3D Gaussians for Modeling Articulated Objects in Videos | 2025-03-21T03:27:07+00:00 | This paper considers the problem of modeling articulated objects captured in 2D videos to enable novel view synthesis, while also being easily editable, drivable, and re-posable. To tackle this challenging problem, we propose RigGS, a new paradigm that leverages 3D Gaussian representation and skeleton-based motion representation to model dynamic objects without utilizing additional template priors. Specifically, we first propose skeleton-aware node-controlled deformation, which deforms a canonical 3D Gaussian representation over time to initialize the modeling process, producing candidate skeleton nodes that are further simplified into a sparse 3D skeleton according to their motion and semantic information. Subsequently, based on the resulting skeleton, we design learnable skin deformations and pose-dependent detailed deformations, thereby easily deforming the 3D Gaussian representation to generate new actions and render further high-quality images from novel views. Extensive experiments demonstrate that our method can generate realistic new actions easily for objects and achieve high-quality rendering. |
http://arxiv.org/abs/2503.16823v1 | Federated Digital Twin Construction via Distributed Sensing: A Game-Theoretic Online Optimization with Overlapping Coalitions | 2025-03-21T03:32:56+00:00 | In this paper, we propose a novel federated framework for constructing the digital twin (DT) model, referring to a living and self-evolving visualization model empowered by artificial intelligence, enabled by distributed sensing under edge-cloud collaboration. In this framework, the DT model to be built at the cloud is regarded as a global one being split into and integrating from multiple functional components, i.e., partial-DTs, created at various edge servers (ESs) using feature data collected by associated sensors. Considering time-varying DT evolutions and heterogeneities among partial-DTs, we formulate an online problem that jointly and dynamically optimizes partial-DT assignments from the cloud to ESs, ES-sensor associations for partial-DT creation, and as well as computation and communication resource allocations for global-DT integration. The problem aims to maximize the constructed DT's model quality while minimizing all induced costs, including energy consumption and configuration costs, in long runs. To this end, we first transform the original problem into an equivalent hierarchical game with an upper-layer two-sided matching game and a lower-layer overlapping coalition formation game. After analyzing these games in detail, we apply the Gale-Shapley algorithm and particularly develop a switch rules-based overlapping coalition formation algorithm to obtain short-term equilibria of upper-layer and lower-layer subgames, respectively. Then, we design a deep reinforcement learning-based solution, called DMO, to extend the result into a long-term equilibrium of the hierarchical game, thereby producing the solution to the original problem. Simulations show the effectiveness of the introduced framework, and demonstrate the superiority of the proposed solution over counterparts. |
http://arxiv.org/abs/2503.16824v1 | Toward AI-driven Multimodal Interfaces for Industrial CAD Modeling | 2025-03-21T03:34:23+00:00 | AI-driven multimodal interfaces have the potential to revolutionize industrial 3D CAD modeling by improving workflow efficiency and user experience. However, the integration of these technologies remains challenging due to software constraints, user adoption barriers, and limitations in AI model adaptability. This paper explores the role of multimodal AI in CAD environments, examining its current applications, key challenges, and future research directions. We analyze Bayesian workflow inference, multimodal input strategies, and collaborative AI-driven interfaces to identify areas where AI can enhance CAD design processes while addressing usability concerns in industrial manufacturing settings. |
http://arxiv.org/abs/2503.16825v2 | SGFormer: Satellite-Ground Fusion for 3D Semantic Scene Completion | 2025-03-21T03:37:08+00:00 | Recently, camera-based solutions have been extensively explored for semantic scene completion (SSC). Despite their success in visible areas, existing methods struggle to capture complete scene semantics due to frequent visual occlusions. To address this limitation, this paper presents the first satellite-ground cooperative SSC framework, i.e., SGFormer, exploring the potential of satellite-ground image pairs in the SSC task. Specifically, we propose a dual-branch architecture that encodes orthogonal satellite and ground views in parallel, unifying them into a common domain. Additionally, we design a ground-view guidance strategy that corrects satellite image biases during feature encoding, addressing misalignment between satellite and ground views. Moreover, we develop an adaptive weighting strategy that balances contributions from satellite and ground views. Experiments demonstrate that SGFormer outperforms the state of the art on SemanticKITTI and SSCBench-KITTI-360 datasets. Our code is available on https://github.com/gxytcrc/SGFormer. |
http://arxiv.org/abs/2503.16826v1 | When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large Language Models in Cultural Mixture Contexts | 2025-03-21T03:50:05+00:00 | In a highly globalized world, it is important for multi-modal large language models (MLLMs) to recognize and respond correctly to mixed-cultural inputs. For example, a model should correctly identify kimchi (Korean food) in an image both when an Asian woman is eating it, as well as an African man is eating it. However, current MLLMs show an over-reliance on the visual features of the person, leading to misclassification of the entities. To examine the robustness of MLLMs to different ethnicity, we introduce MixCuBe, a cross-cultural bias benchmark, and study elements from five countries and four ethnicities. Our findings reveal that MLLMs achieve both higher accuracy and lower sensitivity to such perturbation for high-resource cultures, but not for low-resource cultures. GPT-4o, the best-performing model overall, shows up to 58% difference in accuracy between the original and perturbed cultural settings in low-resource cultures. Our dataset is publicly available at: https://huggingface.co/datasets/kyawyethu/MixCuBe. |
http://arxiv.org/abs/2503.16827v1 | Discontinuous Galerkin Representation of the Maxwell-Jüttner Distribution | 2025-03-21T03:50:12+00:00 | Kinetic simulations of relativistic gases and plasmas are critical for understanding diverse astrophysical and terrestrial systems, but the accurate construction of the relativistic Maxwellian, the Maxwell-J\"uttner (MJ) distribution, on a discrete simulation grid is challenging. Difficulties arise from the finite velocity bounds of the domain, which may not capture the entire distribution function, as well as errors introduced by projecting the function onto a discrete grid. Here we present a novel scheme for iteratively correcting the moments of the projected distribution applicable to all grid-based discretizations of the relativistic kinetic equation. In addition, we describe how to compute the needed nonlinear quantities, such as Lorentz boost factors, in a discontinuous Galerkin (DG) scheme through a combination of numerical quadrature and weak operations. The resulting method accurately captures the distribution function and ensures that the moments match the desired values to machine precision. |
http://arxiv.org/abs/2503.16828v1 | Efficient and Expressive Public Key Authenticated Encryption with Keyword Search in Multi-user Scenarios | 2025-03-21T03:51:43+00:00 | Public key authenticated encryption with keyword search (PAEKS) represents a significant advancement of secure and searchable data sharing in public network systems, such as medical systems. It can effectively mitigate the risk of keyword guessing attacks (KGA), which is a critical issue in public key encryption with keyword search (PEKS). However, in scenarios with a large number of users, the enforced point-to-point access control necessitates that the data sender encrypt the same keyword using the public keys of multiple receivers to create indexes, while the data receiver also must generate trapdoors of size linear to senders in the system. The burden on users aiming for efficient data sharing is considerable, as the overheads increase linearly with the number of users. Furthermore, the majority of current PAEKS schemes lack expressive search functions, including conjunctions, disjunctions, or any monotone boolean formulas, which are prevalent in practical applications. To tackle the abovementioned challenges, we propose an efficient and expressive PAEKS scheme. In efficiency, one auxiliary server is integrated to assist users in generating indexes and trapdoors. Users encrypt with their respective private keys along with the public keys of the servers, facilitating secure and searchable data sharing while significantly minimizing overhead. Additionally, the LSSS is employed to implement expressive search, including monotone boolean queries. We also obfuscate the mapping relationship associated with the LSSS matrix to the keywords, thereby enhancing the privacy protection. Security analysis alongside theoretical and experimental evaluations of our scheme illustrates its practicality and efficiency in multi-user data sharing scenarios. |
http://arxiv.org/abs/2503.16829v1 | Quantitative stratification for the fractional Allen-Cahn equation and stationary nonlocal minimal surface | 2025-03-21T03:54:41+00:00 | We study properties of solutions to the fractional Allen-Cahn equation when $s\in (0, 1/2)$ and dimension $n\geq 2$. By applying the quantitative stratification principle developed by Naber and Valtorta, we obtain an optimal quantitative estimate on the transition set. As an application of this estimate, we improve the potential energy estimates of Cabre, Cinti, and Serra (2021), providing sharp versions for the fractional Allen-Cahn equation. Similarly, we obtain optimal perimeter estimates for stationary nonlocal minimal surfaces, extending previous results of Cinti, Serra, and Valdinoci (2019) from the stable case. |
http://arxiv.org/abs/2503.16830v1 | Artin-Schreier-Witt extensions and ramification breaks | 2025-03-21T03:56:12+00:00 | Let $K=k((t))$ be a local field of characteristic $p>0$, with perfect residue field $k$. Let $\vec{a}=(a_0,a_1,\dots,a_{n-1})\in W_n(K)$ be a Witt vector of length $n$. Artin-Schreier-Witt theory associates to $\vec{a}$ a cyclic extension $L/K$ of degree $p^i$ for some $i\le n$. Assume that the vector $\vec{a}$ is ``reduced'', and that $v_K(a_0)<0$; then $L/K$ is a totally ramified extension of degree $p^n$. In the case where $k$ is finite, Kanesaka-Sekiguchi and Thomas used class field theory to explicitly compute the upper ramification breaks of $L/K$ in terms of the valuations of the components of $\vec{a}$. In this note we use a direct method to show that these formulas remain valid when $k$ is an arbitrary perfect field. |
http://arxiv.org/abs/2503.16831v1 | Non-Lorentzian model for strong exciton-plasmon coupling | 2025-03-21T03:56:58+00:00 | We develop a non-Lorentzian approach for quantum emitters (QE) resonantly coupled to localized surface plasmons (LSP) in metal-dielectric structures. Using the exact LSP Green function, we derive non-Lorentzian version of Maxwell-Bloch equations which describe LSP in terms of metal complex dielectric function rather than via Lorentzian resonances. For a single QE coupled to the LSP, we obtain an explicit expression for the system effective optical polarizability which, in the Lorentzian approximation, recovers the classical coupled oscillator (CO) model. We demonstrate that non-Lorentzian effects originating from the temporal dispersion of metal dielectric function affect dramatically the optical spectra as the system transitions to the strong coupling regime. Specifically, in contrast to Lorentzian models, the main spectral weight is shifted towards the lower energy polaritonic band, consistent with the experiment. |
http://arxiv.org/abs/2503.16832v1 | Joint Self-Supervised Video Alignment and Action Segmentation | 2025-03-21T04:02:00+00:00 | We introduce a novel approach for simultaneous self-supervised video alignment and action segmentation based on a unified optimal transport framework. In particular, we first tackle self-supervised video alignment by developing a fused Gromov-Wasserstein optimal transport formulation with a structural prior, which trains efficiently on GPUs and needs only a few iterations for solving the optimal transport problem. Our single-task method achieves the state-of-the-art performance on multiple video alignment benchmarks and outperforms VAVA, which relies on a traditional Kantorovich optimal transport formulation with an optimality prior. Furthermore, we extend our approach by proposing a unified optimal transport framework for joint self-supervised video alignment and action segmentation, which requires training and storing a single model and saves both time and memory consumption as compared to two different single-task models. Extensive evaluations on several video alignment and action segmentation datasets demonstrate that our multi-task method achieves video alignment results comparable to, and action segmentation results superior to, previous methods in video alignment and action segmentation respectively. Finally, to the best of our knowledge, this is the first work to unify video alignment and action segmentation into a single model. |
http://arxiv.org/abs/2503.16833v1 | The Deployment of End-to-End Audio Language Models Should Take into Account the Principle of Least Privilege | 2025-03-21T04:03:59+00:00 | We are at a turning point for language models that accept audio input. The latest end-to-end audio language models (Audio LMs) process speech directly instead of relying on a separate transcription step. This shift preserves detailed information, such as intonation or the presence of multiple speakers, that would otherwise be lost in transcription. However, it also introduces new safety risks, including the potential misuse of speaker identity cues and other sensitive vocal attributes, which could have legal implications. In this position paper, we urge a closer examination of how these models are built and deployed. We argue that the principle of least privilege should guide decisions on whether to deploy cascaded or end-to-end models. Specifically, evaluations should assess (1) whether end-to-end modeling is necessary for a given application; and (2) the appropriate scope of information access. Finally, we highlight related gaps in current audio LM benchmarks and identify key open research questions, both technical and policy-related, that must be addressed to enable the responsible deployment of end-to-end Audio LMs. |
http://arxiv.org/abs/2503.16834v1 | Betweenness Centrality Based Dynamic Source Routing for Flying Ad Hoc Networks in Marching Formation | 2025-03-21T04:08:47+00:00 | Designing high-performance routing protocols for flying ad hoc networks (FANETs) is challenging due to the diversity of applications and the dynamics of network topology. The existing general-purpose routing protocols for ad hoc networks often oversimplify mobility patterns and disregard the unequal importance of nodes, resulting in suboptimal routing decisions that are unsuitable for task-oriented FANETs. To break the bottleneck, in this paper we propose a betweenness centrality based dynamic source routing (BC-DSR) protocol for a flying ad hoc network (FANET) in marching formation. Firstly, we introduce a Gauss-Markov group (GMG) mobility model based on the leader-follower pattern, which accurately captures the temporal and spatial correlations of node movements in the realistic marching formation. Besides, we exploit the concept of BC defined in graph theory to measure the structural unequal importance of relay nodes, i.e., to determine link weights, in the particular marching formation topology. The path of least cost is calculated over the constructed weighted directed graph. The ns-3 based simulation results demonstrate that our BC-DSR protocol achieves higher packet-delivery ratio and lower average end-to-end latency and routing overhead ratio than representative benchmark protocols used in FANETs, while maintaining a reasonably small network jitter. |
http://arxiv.org/abs/2503.16835v1 | Safe and Reliable Diffusion Models via Subspace Projection | 2025-03-21T04:09:25+00:00 | Large-scale text-to-image (T2I) diffusion models have revolutionized image generation, enabling the synthesis of highly detailed visuals from textual descriptions. However, these models may inadvertently generate inappropriate content, such as copyrighted works or offensive images. While existing methods attempt to eliminate specific unwanted concepts, they often fail to ensure complete removal, allowing the concept to reappear in subtle forms. For instance, a model may successfully avoid generating images in Van Gogh's style when explicitly prompted with 'Van Gogh', yet still reproduce his signature artwork when given the prompt 'Starry Night'. In this paper, we propose SAFER, a novel and efficient approach for thoroughly removing target concepts from diffusion models. At a high level, SAFER is inspired by the observed low-dimensional structure of the text embedding space. The method first identifies a concept-specific subspace $S_c$ associated with the target concept c. It then projects the prompt embeddings onto the complementary subspace of $S_c$, effectively erasing the concept from the generated images. Since concepts can be abstract and difficult to fully capture using natural language alone, we employ textual inversion to learn an optimized embedding of the target concept from a reference image. This enables more precise subspace estimation and enhances removal performance. Furthermore, we introduce a subspace expansion strategy to ensure comprehensive and robust concept erasure. Extensive experiments demonstrate that SAFER consistently and effectively erases unwanted concepts from diffusion models while preserving generation quality. |
http://arxiv.org/abs/2503.16836v1 | A Flexible Fairness Framework with Surrogate Loss Reweighting for Addressing Sociodemographic Disparities | 2025-03-21T04:10:14+00:00 | This paper presents a new algorithmic fairness framework called $\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ Fair Machine Learning ($\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ FML), designed to optimize fairness levels across sociodemographic attributes. Our framework employs a new family of surrogate loss functions, paired with loss reweighting techniques, allowing precise control over fairness-accuracy trade-offs through tunable hyperparameters $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. To efficiently solve the learning objective, we propose Parallel Stochastic Gradient Descent with Surrogate Loss (P-SGD-S) and establish convergence guarantees for both convex and nonconvex loss functions. Experimental results demonstrate that our framework improves overall accuracy while reducing fairness violations, offering a smooth trade-off between standard empirical risk minimization and strict minimax fairness. Results across multiple datasets confirm its adaptability, ensuring fairness improvements without excessive performance degradation. |
http://arxiv.org/abs/2503.16837v2 | Recoil-induced errors and their correction in photon-mediated entanglement between atom qubits | 2025-03-21T04:13:10+00:00 | Photonically-interconnected matter qubit systems have wide-ranging applications across quantum science and technology, with entanglement between distant qubits serving as a universal resource. While state-of-the-art heralded entanglement generation performance thus far has been achieved in trapped atomic systems modelled as stationary emitters, the improvements to fidelities and generation rates demanded by large-scale applications require taking into account their motional degrees of freedom. Here, we derive the effects of atomic motion on spontaneous emission coupled into arbitrary optical modes, and study the implications for commonly-used atom-atom entanglement protocols. We arrive at a coherent physical picture in the form of "kick operators" associated with each instant in the photonic wavepackets, which also suggests a method to mitigate motional errors by disentangling qubit and motion post-herald. This proposed correction technique removes overheads associated with the thermal motion of atoms, and may greatly increase entanglement rates in long-distance quantum network links by allowing single-photon-based protocols to be used in the high-fidelity regime. |
http://arxiv.org/abs/2503.16838v1 | Investigation of $Δ(1232)$ resonance substructure in $pγ^*\to Δ(1232)$ process through helicity amplitudes | 2025-03-21T04:21:52+00:00 | This work investigates the substructure of the $\Delta(1232)$ resonance in the $p\gamma^*\to \Delta(1232)$ process through helicity transition amplitudes within the quark model framework. We consider the involved baryons composed of three quarks, and both the quark core and meson cloud contribute to the transition amplitudes. The comparison of theoretical results with experimental data reveals that, rather than the $L=0$ component of the $\Delta(1232)$ resonance, it is the $L=2$ component that significantly affects its $S_{1/2}$ amplitude. These findings indicate that the $\Delta(1232)$ resonance likely contains a substantial $L=2$ component, challenging the conventional view of the $\Delta(1232)$ resonance as an $L=0$ baryon. |
http://arxiv.org/abs/2503.16839v1 | Minimum saturated graphs without $4$-cycles and $5$-cycles | 2025-03-21T04:21:53+00:00 | Given a family of graphs $\mathcal{F}$, a graph $G$ is said to be $\mathcal{F}$-saturated if $G$ does not contain a copy of $F$ as a subgraph for any $F\in\mathcal{F}$, but the addition of any edge $e\notin E(G)$ creates at least one copy of some $F\in\mathcal{F}$ within $G$. The minimum size of an $\mathcal{F}$-saturated graph on $n$ vertices is called the saturation number, denoted by $\mbox{sat}(n, \mathcal{F})$. Let $C_r$ be the cycle of length $r$. In this paper, we study on $\mbox{sat}(n, \mathcal{F})$ when $\mathcal{F}$ is a family of cycles. In particular, we determine that $\mbox{sat}(n, \{C_4,C_5\})=\lceil\frac{5n}{4}-\frac{3}{2}\rceil$ for any positive integer $n$. |
http://arxiv.org/abs/2503.16840v1 | Extreme Ultraviolet Time-Resolved Photoelectron Spectrometer with an Ultrathin Liquid Flat Jet | 2025-03-21T04:25:37+00:00 | A setup for extreme-ultraviolet time-resolved photoelectron spectroscopy (XUV-TRPES) of liquids is described based on a gas-dynamic flat jet formed by a microfluidic chip device. In comparison to a cylindrical jet that has a typical diameter of 10-30 micrometers, the larger surface area of the flat jet with a width of ca. 300 micrometers allows for full overlap of the target with the pump and probe light beams. This results in an enhancement of photoelectrons emitted from the liquid, while simultaneously allowing smaller sample consumption compared with other flat jet techniques utilizing liquid collisions or converging slits. Femtosecond pulses of XUV light at a photon energy of 21.7 eV are prepared by high harmonic generation and a multilayer mirror that selects a single harmonic; the He gas used to form the gas-dynamic flat jet is transparent at this energy. Compared to a cylindrical jet, the photoelectron signal from the liquid is enhanced relative to that from the surrounding vapor jacket. Pump-probe spectra for aqueous thymine show notably higher signals for the flat vs cylindrical jet. Moreover, the time-dependent space-charge shift in UV pump/XUV probe experiments is smaller for the gas dynamic flat jet than for a cylindrical jet with the same flow rate, an effect that is accentuated at higher He backing pressures that yield a thinner jet. This reflects reduced multiphoton ionization of the solute by the UV pump pulse, the primary cause of the space charge shift, as the jet becomes thinner and reaches the thickness of a few tens of nm. |
http://arxiv.org/abs/2503.16841v1 | Preferential Multi-Objective Bayesian Optimization for Drug Discovery | 2025-03-21T04:27:06+00:00 | Despite decades of advancements in automated ligand screening, large-scale drug discovery remains resource-intensive and requires post-processing hit selection, a step where chemists manually select a few promising molecules based on their chemical intuition. This creates a major bottleneck in the virtual screening process for drug discovery, demanding experts to repeatedly balance complex trade-offs among drug properties across a vast pool of candidates. To improve the efficiency and reliability of this process, we propose a novel human-centered framework named CheapVS that allows chemists to guide the ligand selection process by providing preferences regarding the trade-offs between drug properties via pairwise comparison. Our framework combines preferential multi-objective Bayesian optimization with a docking model for measuring binding affinity to capture human chemical intuition for improving hit identification. Specifically, on a library of 100K chemical candidates targeting EGFR and DRD2, CheapVS outperforms state-of-the-art screening methods in identifying drugs within a limited computational budget. Notably, our method can recover up to 16/37 EGFR and 37/58 DRD2 known drugs while screening only 6% of the library, showcasing its potential to significantly advance drug discovery. |
http://arxiv.org/abs/2503.16842v1 | Downstream Analysis of Foundational Medical Vision Models for Disease Progression | 2025-03-21T04:27:49+00:00 | Medical vision foundational models are used for a wide variety of tasks, including medical image segmentation and registration. This work evaluates the ability of these models to predict disease progression using a simple linear probe. We hypothesize that intermediate layer features of segmentation models capture structural information, while those of registration models encode knowledge of change over time. Beyond demonstrating that these features are useful for disease progression prediction, we also show that registration model features do not require spatially aligned input images. However, for segmentation models, spatial alignment is essential for optimal performance. Our findings highlight the importance of spatial alignment and the utility of foundation model features for image registration. |
http://arxiv.org/abs/2503.16843v1 | LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models | 2025-03-21T04:31:09+00:00 | While Multimodal Large Language Models (MLLMs) excel at generalizing across modalities and tasks, effectively adapting them to specific downstream tasks while simultaneously retaining both general and specialized knowledge remains challenging. Although Low-Rank Adaptation (LoRA) is widely used to efficiently acquire specialized knowledge in MLLMs, it introduces substantial harmful redundancy during visual instruction tuning, which exacerbates the forgetting of general knowledge and degrades downstream task performance. To address this issue, we propose LoRASculpt to eliminate harmful redundant parameters, thereby harmonizing general and specialized knowledge. Specifically, under theoretical guarantees, we introduce sparse updates into LoRA to discard redundant parameters effectively. Furthermore, we propose a Conflict Mitigation Regularizer to refine the update trajectory of LoRA, mitigating knowledge conflicts with the pretrained weights. Extensive experimental results demonstrate that even at a very high degree of sparsity ($\le$ 5%), our method simultaneously enhances generalization and downstream task performance. This confirms that our approach effectively mitigates the catastrophic forgetting issue and further promotes knowledge harmonization in MLLMs. |
http://arxiv.org/abs/2503.16844v1 | Upper limits on the gamma-ray emission from the microquasar V4641 Sgr | 2025-03-21T04:31:16+00:00 | Following a recent detection of TeV radiation by the Large High Altitude Air Shower Observatory (LHAASO) and the High-Altitude Water Cherenkov Observatory (HAWC), coincident with the direction of the microquasar V4641 Sgr, we search for possible GeV emission from this source. We explored the morphology and temporal features of the source as well as two nearby unassociated point sources which could be a part of extended structure of V4641 Sgr, and compared results with corresponding X-ray and TeV emissions. The 95% confidence level upper limits for the flux from the source, assuming both point and extended source models were 5.38$\times$ 10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ and 1.12$\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$, respectively. Additionally, no correlation between gamma-ray light curve and X-ray outbursts was observed. |
http://arxiv.org/abs/2503.16845v1 | One-Point Residual Feedback Algorithms for Distributed Online Convex and Non-convex Optimization | 2025-03-21T04:32:51+00:00 | This paper mainly addresses the distributed online optimization problem where the local objective functions are assumed to be convex or non-convex. First, distributed algorithms are proposed for the convex and non-convex situations, where the one-point residual feedback technique is introduced to estimate the gradients of the local objective functions. Then the regret bounds of the proposed algorithms are derived respectively under the assumption that the local objective functions are Lipschitz or smooth, which implies that the regrets are sublinear. Finally, we give two numerical examples, a distributed convex optimization problem and a distributed resource allocation problem, to illustrate the effectiveness of the proposed algorithms. |
http://arxiv.org/abs/2503.16846v1 | An Accelerated Bregman Algorithm for ReLU-based Symmetric Matrix Decomposition | 2025-03-21T04:32:53+00:00 | Symmetric matrix decomposition is an active research area in machine learning. This paper focuses on exploiting the low-rank structure of non-negative and sparse symmetric matrices via the rectified linear unit (ReLU) activation function. We propose the ReLU-based nonlinear symmetric matrix decomposition (ReLU-NSMD) model, introduce an accelerated alternating partial Bregman (AAPB) method for its solution, and present the algorithm's convergence results. Our algorithm leverages the Bregman proximal gradient framework to overcome the challenge of estimating the global $L$-smooth constant in the classic proximal gradient algorithm. Numerical experiments on synthetic and real datasets validate the effectiveness of our model and algorithm. |
http://arxiv.org/abs/2503.16847v1 | Early-MFC: Enhanced Flow Correlation Attacks on Tor via Multi-view Triplet Networks with Early Network Traffic | 2025-03-21T04:36:51+00:00 | Flow correlation attacks are efficient network attacks aiming to expose those who use anonymous network services, such as Tor. Conducting such attacks during the early stages of network communication is particularly critical for scenarios demanding rapid decision-making, such as cybercrime detection or financial fraud prevention. Although recent studies have made progress in flow correlation attack techniques, research specifically addressing flow correlation with early network traffic flow remains limited. Moreover, due to factors such as model complexity, training costs, and real-time requirements, existing technologies cannot be directly applied to flow correlation with early network traffic flow. In this paper, we propose a flow correlation attack with early network traffic, named Early-MFC, based on multi-view triplet networks. The proposed approach extracts multi-view traffic features from the payload at the transport layer and the Inter-Packet Delay. It then integrates multi-view flow information, converting the extracted features into shared embeddings. By leveraging techniques such as metric learning and contrastive learning, the method optimizes the embedding space by ensuring that similar flows are mapped closer together while dissimilar flows are positioned farther apart. Finally, Bayesian decision theory is applied to determine flow correlation, enabling high-accuracy flow correlation with early network traffic flow. Furthermore, we investigate flow correlation attacks under extra-early network traffic flow conditions. To address this challenge, we propose Early-MFC+, which utilizes payload data to construct embedded feature representations, ensuring robust performance even with minimal packet availability. |
http://arxiv.org/abs/2503.16848v1 | HSM: Hierarchical Scene Motifs for Multi-Scale Indoor Scene Generation | 2025-03-21T04:36:57+00:00 | Despite advances in indoor 3D scene layout generation, synthesizing scenes with dense object arrangements remains challenging. Existing methods primarily focus on large furniture while neglecting smaller objects, resulting in unrealistically empty scenes. Those that place small objects typically do not honor arrangement specifications, resulting in largely random placement not following the text description. We present HSM, a hierarchical framework for indoor scene generation with dense object arrangements across spatial scales. Indoor scenes are inherently hierarchical, with surfaces supporting objects at different scales, from large furniture on floors to smaller objects on tables and shelves. HSM embraces this hierarchy and exploits recurring cross-scale spatial patterns to generate complex and realistic indoor scenes in a unified manner. Our experiments show that HSM outperforms existing methods by generating scenes that are more realistic and better conform to user input across room types and spatial configurations. |
http://arxiv.org/abs/2503.16849v1 | Safe On-Orbit Dislodging of Deployable Structures via Robust Adaptive MPC | 2025-03-21T04:40:04+00:00 | This paper proposes a novel robust adaptive model predictive controller for on-orbit dislodging. We consider the scenario where a servicer, equipped with a robot arm, must dislodge a client, a time-varying system composed of an underpowered jammed solar panel with a hybrid hinge system on a space station. Our approach leverages online set-membership identification to reduce the uncertainty to provide robust safety guarantees during dislodging despite bounded disturbances while balancing exploration and exploitation effectively in the parameter space. The feasibility of the developed robust adaptive MPC method is also examined through dislodging simulations and hardware experiments in zero-gravity and gravity environments, respectively. In addition, the advantages of our method are shown through comparison experiments with several state-of-the-art control schemes for both accuracy of parameter estimation and control performance. |
http://arxiv.org/abs/2503.16850v1 | Physics-Informed Neural Network Surrogate Models for River Stage Prediction | 2025-03-21T04:48:22+00:00 | This work investigates the feasibility of using Physics-Informed Neural Networks (PINNs) as surrogate models for river stage prediction, aiming to reduce computational cost while maintaining predictive accuracy. Our primary contribution demonstrates that PINNs can successfully approximate HEC-RAS numerical solutions when trained on a single river, achieving strong predictive accuracy with generally low relative errors, though some river segments exhibit higher deviations. By integrating the governing Saint-Venant equations into the learning process, the proposed PINN-based surrogate model enforces physical consistency and significantly improves computational efficiency compared to HEC-RAS. We evaluate the model's performance in terms of accuracy and computational speed, demonstrating that it closely approximates HEC-RAS predictions while enabling real-time inference. These results highlight the potential of PINNs as effective surrogate models for single-river hydrodynamics, offering a promising alternative for computationally efficient river stage forecasting. Future work will explore techniques to enhance PINN training stability and robustness across a more generalized multi-river model. |
http://arxiv.org/abs/2503.16851v1 | Towards LLM Guardrails via Sparse Representation Steering | 2025-03-21T04:50:25+00:00 | Large Language Models (LLMs) have demonstrated remarkable performance in natural language generation tasks, yet their uncontrolled outputs pose significant ethical and safety risks. Recently, representation engineering methods have shown promising results in steering model behavior by modifying the rich semantic information encoded in activation vectors. However, due to the difficulty of precisely disentangling semantic directions within high-dimensional representation space, existing approaches suffer from three major limitations: lack of fine-grained control, quality degradation of generated content, and poor interpretability. To address these challenges, we propose a sparse encoding-based representation engineering method, named SRE, which decomposes polysemantic activations into a structured, monosemantic feature space. By leveraging sparse autoencoding, our approach isolates and adjusts only task-specific sparse feature dimensions, enabling precise and interpretable steering of model behavior while preserving content quality. We validate our method on three critical domains, i.e., safety, fairness, and truthfulness using the open-source LLM Gemma-2-2B-it. Experimental results show that SRE achieves superior controllability while maintaining the overall quality of generated content (i.e., controllability and quality), demonstrating its effectiveness as a fine-grained and interpretable activation steering framework. |
http://arxiv.org/abs/2503.16852v1 | Causal Inference via Style Bias Deconfounding for Domain Generalization | 2025-03-21T04:52:31+00:00 | Deep neural networks (DNNs) often struggle with out-of-distribution data, limiting their reliability in diverse real-world applications. To address this issue, domain generalization methods have been developed to learn domain-invariant features from single or multiple training domains, enabling generalization to unseen testing domains. However, existing approaches usually overlook the impact of style frequency within the training set. This oversight predisposes models to capture spurious visual correlations caused by style confounding factors, rather than learning truly causal representations, thereby undermining inference reliability. In this work, we introduce Style Deconfounding Causal Learning (SDCL), a novel causal inference-based framework designed to explicitly address style as a confounding factor. Our approach begins with constructing a structural causal model (SCM) tailored to the domain generalization problem and applies a backdoor adjustment strategy to account for style influence. Building on this foundation, we design a style-guided expert module (SGEM) to adaptively cluster style distributions during training, capturing the global confounding style. Additionally, a back-door causal learning module (BDCL) performs causal interventions during feature extraction, ensuring fair integration of global confounding styles into sample predictions, effectively reducing style bias. The SDCL framework is highly versatile and can be seamlessly integrated with state-of-the-art data augmentation techniques. Extensive experiments across diverse natural and medical image recognition tasks validate its efficacy, demonstrating superior performance in both multi-domain and the more challenging single-domain generalization scenarios. |
http://arxiv.org/abs/2503.16853v1 | Imagine to Hear: Auditory Knowledge Generation can be an Effective Assistant for Language Models | 2025-03-21T04:56:22+00:00 | Language models pretrained on text-only corpora often struggle with tasks that require auditory commonsense knowledge. Previous work addresses this problem by augmenting the language model to retrieve knowledge from external audio databases. This approach has several limitations, such as the potential lack of relevant audio in databases and the high costs associated with constructing and querying the databases. To address these issues, we propose Imagine to Hear, a novel approach that dynamically generates auditory knowledge using generative models. Our framework detects multiple audio-related textual spans from the given prompt and generates corresponding auditory knowledge. We develop several mechanisms to efficiently process multiple auditory knowledge, including a CLAP-based rejection sampler and a language-audio fusion module. Our experiments show that our method achieves state-of-the-art performance on AuditoryBench without relying on external databases, highlighting the effectiveness of our generation-based approach. |