text | source
---|---
Matrix factorization is an inference problem that has acquired importance due
to its vast range of applications that go from dictionary learning to
recommendation systems and machine learning with deep networks. The study of
its fundamental statistical limits represents a true challenge, and despite a
decade-long history of efforts in the community, there is still no closed
formula able to describe its optimal performance in the case where the rank of
the matrix scales linearly with its size. In the present paper, we study this
extensive rank problem, extending the alternative 'decimation' procedure that
we recently introduced, and carry out a thorough study of its performance.
Decimation aims at recovering one column/line of the factors at a time, by
mapping the problem into a sequence of neural network models of associative
memory at a tunable temperature. Though sub-optimal, decimation has the
advantage of being theoretically analyzable. We extend its scope and analysis
to two families of matrices. For a large class of compactly supported priors,
we show that the replica symmetric free entropy of the neural network models
takes a universal form in the low-temperature limit. For a sparse Ising prior, we
show that the storage capacity of the neural network models diverges as
sparsity in the patterns increases, and we introduce a simple algorithm based
on a ground state search that implements decimation and performs matrix
factorization, with no need for an informative initialization.
|
http://arxiv.org/abs/2307.16564v1
|
We prove the Hardy--Stein identity for vector functions in $L^p(\mathbb
R^d;\mathbb R^n)$ with $1<p<\infty$ and for the canonical pairing of two real
functions in $L^p(\mathbb R^d)$ with $2\le p<\infty$. To this end we propose a
notion of Bregman co-divergence and study the corresponding integral forms.
|
http://arxiv.org/abs/2309.09856v1
|
The exponential growth of question answering (QA) has made it an
indispensable topic in any Natural Language Processing (NLP) course.
Additionally, the breadth of QA derived from this exponential growth makes it
an ideal scenario for teaching related NLP topics such as information
retrieval, explainability, and adversarial attacks among others. In this paper,
we introduce UKP-SQuARE as a platform for QA education. This platform provides
an interactive environment where students can run, compare, and analyze various
QA models from different perspectives, such as general behavior,
explainability, and robustness. Therefore, students can get first-hand
experience with different QA techniques during the class. Building on this, we
propose a learner-centered approach for QA education in which students
proactively learn theoretical concepts and acquire problem-solving skills
through interactive exploration, experimentation, and practical assignments,
rather than solely relying on traditional lectures. To evaluate the
effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a
postgraduate NLP course and surveyed the students after the course. Their
positive feedback shows the platform's effectiveness in their course and
invites wider adoption.
|
http://arxiv.org/abs/2305.19748v2
|
Large Language Models have an emergent ability to use a small number of
examples to learn to perform in novel domains and tasks, a capability known as
in-context learning (ICL). In this work, we show that a much smaller model can be trained
to perform ICL by fine-tuning towards a specialized training objective,
exemplified on the task of domain adaptation for neural machine translation.
With this capacity for ICL, the model can take advantage of relevant few-shot
examples to adapt its output towards the domain. We compare the quality of this
domain adaptation to traditional supervised techniques and ICL with a
40B-parameter Large Language Model. Our approach allows efficient batch
inference on a mix of domains and outperforms state-of-the-art baselines in
terms of both translation quality and immediate adaptation rate, i.e. the
ability to reproduce a specific term after being shown a single example.
|
http://arxiv.org/abs/2309.08590v1
|
Transformer models have achieved remarkable success in various machine
learning tasks but suffer from high computational complexity and resource
requirements. The quadratic complexity of the self-attention mechanism further
exacerbates these challenges when dealing with long sequences and large
datasets. Specialized AI hardware accelerators, such as the Habana GAUDI
architecture, offer a promising solution to tackle these issues. GAUDI features
a Matrix Multiplication Engine (MME) and a cluster of fully programmable Tensor
Processing Cores (TPC). This paper explores the untapped potential of using
GAUDI processors to accelerate Transformer-based models, addressing key
challenges in the process. Firstly, we provide a comprehensive performance
comparison between the MME and TPC components, illuminating their relative
strengths and weaknesses. Secondly, we explore strategies to optimize MME and
TPC utilization, offering practical insights to enhance computational
efficiency. Thirdly, we evaluate the performance of Transformers on GAUDI,
particularly in handling long sequences and uncovering performance bottlenecks.
Lastly, we evaluate the end-to-end performance of two Transformer-based large
language models (LLM) on GAUDI. The contributions of this work encompass
practical insights for practitioners and researchers alike. We delve into
GAUDI's capabilities for Transformers through systematic profiling, analysis,
and optimization exploration. Our study bridges a research gap and offers a
roadmap for optimizing Transformer-based model training on the GAUDI
architecture.
|
http://arxiv.org/abs/2309.16976v1
|
Integrated quantum photonics, with potential applications in quantum
information processing, relies on the integration of quantum emitters into
on-chip photonic circuits. Hexagonal boron nitride (hBN) is recognized as a
material that is compatible with such implementations, owing to its relatively
high refractive index and low losses in the visible range, together with
advantageous fabrication techniques. Here, we combine hBN waveguide
nanofabrication with the recently demonstrated local generation of quantum
emitters using electron irradiation to realize a fully top-down elementary
quantum photonic circuit in this material, operating at room temperature. This
proof of principle constitutes a first step towards deterministic quantum
photonic circuits in hBN.
|
http://arxiv.org/abs/2304.00130v2
|
Could there be a quantum superposition of consciousness, as in the Wigner's
friend thought experiment? The integrated information theory (IIT) of
consciousness has turned this into a well-defined question. According to IIT,
consciousness is a measurable physical quantity given by integrated information
($\Phi$), such that the amount of consciousness in a system corresponds to its
amount of $\Phi$. We use the most recent IIT formalism (IIT4.0) to analyze the
simplest non-zero $\Phi$ system known as a feedback dyad. We then propose a
circuit that puts the dyad into a superposition of states which, according to
IIT, would correspond to a superposition of conscious states. We refer to this
as "Schr\"odinger's dyad". We therefore show that either IIT is false or the
simple dyad is conscious and can easily be put into a superposition of
conscious states. We then identify the simplest possible consciousness-collapse
model, which predicts that this superposition is unstable and collapses at a
rate determined by a measure of difference between the superposed conscious
states. Our analysis will enable us to make a number of key observations about
the general structure of integrated information theory (IIT2.0, IIT3.0, IIT4.0,
and QIIT) and the general structure of consciousness-collapse models.
|
http://arxiv.org/abs/2309.13826v1
|
We report on the use of an optically-trapped microsphere as an acoustic
transducer. A model for the hydrodynamic coupling between the microsphere and
the surrounding acoustic fluid flow is combined with thermo-mechanical
calibration of the microsphere's position detection to enable quantitative
acoustic measurements. We describe our technique in detail, including the
self-noise, sensitivity, and minimum detectable signals, using a model
appropriate for both liquid and gas environments. We then test our approach in
an air-based experiment and compare our measurements with two state-of-the-art
commercially-available acoustic sensors. Piezoelectrically-driven bursts of
pure tones and laser ablation provide two classes of test sounds. We find
accurate measurements with a bandwidth of 1 MHz are possible using our
technique, improving by several orders of magnitude the bandwidth of previous
flow measurements based on optically-trapped microspheres.
|
http://arxiv.org/abs/2310.00087v1
|
By training linear physical networks to learn linear transformations, we
discern how their physical properties evolve due to weight update rules. Our
findings highlight a striking similarity between the learning behaviors of such
networks and the processes of aging and memory formation in disordered and
glassy systems. We show that the learning dynamics resembles an aging process,
where the system relaxes in response to repeated application of the feedback
boundary forces in the presence of an input force, thus encoding a memory of the
input-output relationship. With this relaxation comes an increase in the
correlation length, which is indicated by the two-point correlation function
for the components of the network. We also observe that the square root of the
mean-squared error as a function of epoch takes on a non-exponential form,
which is a typical feature of glassy systems. This physical interpretation
suggests that by encoding more detailed information into input and feedback
boundary forces, the process of emergent learning can be rather ubiquitous and,
thus, serve as a very early physical mechanism, from an evolutionary
standpoint, for learning in biological systems.
|
http://arxiv.org/abs/2309.04382v2
|
Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs.
|
http://arxiv.org/abs/2309.05922v1
|
A coupled oscillator network may be able to perform an energy-efficient
associative memory operation. However, its realization has been difficult
because inhomogeneities unavoidably arise among the oscillators during
fabrication and lead to unreliable operation. This issue could be resolved
if the oscillator network were able to be formed from a single oscillator.
Here, we performed numerical simulations and theoretical analyses on an
associative memory operation that uses a virtual oscillator network based on a
spin-torque oscillator. The virtual network combines the concept of coupled
oscillators with that of feedforward neural networks. Numerical experiments
demonstrate successful associations of $60$-pixel patterns with various
memorized patterns. Moreover, the origin of the associative memory is shown to
be forced synchronization driven by feedforward input, where phase differences
among oscillators are fixed and correspond to the colors of the pixels in the
pattern.
|
http://arxiv.org/abs/2309.13198v3
|
Recent cosmological tensions, in particular over the inferred local value of the
Hubble constant $H_0$, have motivated new independent techniques to constrain
cosmological parameters in several cosmologies. Moreover, even when the
concordance Cosmological Constant Cold Dark Matter ($\Lambda$CDM) model has
been well constrained with local observables, its physics has shown deviations
from a flat background. Therefore, to explore a possible deviation from a flat
$\Lambda$CDM model that could explain the $H_0$ value in tension with other
techniques, in this paper we study new cosmological constraints in spatial
curvature dark energy models. In addition to standard current Type Ia
Supernovae (SNIa) catalogs, we extend the empirical distance ladder method through an
SNIa sample using the capabilities of the James Webb Space Telescope (JWST) to
forecast SNIa up to $z \sim 6$, with information on the star formation rates at
high redshift. Furthermore, we find that our constraints provide an
improvement in the statistics associated with $\Omega_{m}$ when combining SNIa
Pantheon and SNIa Pantheon+ catalogs with JW forecasting data.
|
http://arxiv.org/abs/2309.12292v2
|
We define a notion of grading of a monoid T in a monoidal category C,
relative to a class of morphisms M (which provide a notion of M-subobject). We
show that, under reasonable conditions (including that M forms a factorization
system), there is a canonical grading of T. Our application is to graded monads
and models of computational effects. We demonstrate our results by
characterizing the canonical gradings of a number of monads, for which C is
endofunctors with composition. We also show that we can obtain canonical grades
for algebraic operations.
|
http://arxiv.org/abs/2307.16558v1
|
Neural network-based decisions tend to be overconfident, where their raw
outcome probabilities do not align with the true decision probabilities.
Calibration of neural networks is an essential step towards more reliable deep
learning frameworks. Prior metrics of calibration error primarily utilize crisp
bin membership-based measures. This exacerbates skew in model probabilities and
portrays an incomplete picture of calibration error. In this work, we propose a
Fuzzy Calibration Error metric (FCE) that utilizes a fuzzy binning approach to
calculate calibration error. This approach alleviates the impact of probability
skew and provides a tighter estimate while measuring calibration error. We
compare our metric with the Expected Calibration Error (ECE) across different data populations and class
memberships. Our results show that FCE offers better calibration error
estimation, especially in multi-class settings, alleviating the effects of skew
in model confidence scores on calibration error estimation. We make our code
and supplementary materials available at: https://github.com/bihani-g/fce
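For intuition, here is a minimal sketch contrasting crisp-binned ECE with a fuzzy-binned calibration error. The triangular membership function, bin count, and weighting below are illustrative assumptions, not necessarily the exact FCE formulation.

```python
import numpy as np

def crisp_ece(conf, correct, n_bins=10):
    """Standard ECE: hard bin membership on confidence scores."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def fuzzy_ce(conf, correct, n_bins=10):
    """Illustrative fuzzy-binning calibration error: each sample contributes to
    neighboring bins with triangular membership weights (an assumption)."""
    centers = (np.arange(n_bins) + 0.5) / n_bins
    width = 1.0 / n_bins
    # membership decays linearly with distance to the bin center
    membership = np.clip(1.0 - np.abs(conf[:, None] - centers[None, :]) / width, 0.0, 1.0)
    membership /= membership.sum(axis=1, keepdims=True)
    fce = 0.0
    for b in range(n_bins):
        w = membership[:, b]
        if w.sum() > 0:
            avg_conf = np.average(conf, weights=w)
            avg_acc = np.average(correct, weights=w)
            fce += (w.sum() / len(conf)) * abs(avg_acc - avg_conf)
    return fce

# toy usage on synthetic, skewed confidence scores
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = (rng.uniform(size=1000) < conf).astype(float)
print(crisp_ece(conf, correct), fuzzy_ce(conf, correct))
```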
|
http://arxiv.org/abs/2305.00543v2
|
In recent years, large convolutional neural networks have been widely used as
tools for image deblurring, because of their ability in restoring images very
precisely. It is well known that image deblurring is mathematically modeled as
an ill-posed inverse problem and its solution is difficult to approximate when
noise affects the data. Indeed, one limitation of neural networks for
deblurring is their sensitivity to noise and other perturbations, which can
lead to instability and produce poor reconstructions. In addition, networks do
not necessarily take into account the numerical formulation of the underlying
imaging problem, when trained end-to-end. In this paper, we propose some
strategies to improve stability, without losing too much accuracy, for deblurring
images with deep-learning-based methods. First, we suggest a very small neural
architecture, which reduces the execution time for training, satisfying a green
AI need, and does not extremely amplify noise in the computed image. Second, we
introduce a unified framework where a pre-processing step balances the lack of
stability of the following, neural network-based, step. Two different
pre-processors are presented: the former implements a strong parameter-free
denoiser, and the latter is a variational model-based regularized formulation
of the latent imaging problem. This framework is also formally characterized by
mathematical analysis. Numerical experiments are performed to verify the
accuracy and stability of the proposed approaches for image deblurring when
unknown or unquantified noise is present; the results confirm that they
improve the network stability with respect to noise. In particular, the
model-based framework represents the most reliable trade-off between visual
precision and robustness.
|
http://arxiv.org/abs/2305.19774v1
|
Pretrained vision-language models, such as CLIP, show promising zero-shot
performance across a wide variety of datasets. For closed-set classification
tasks, however, there is an inherent limitation: CLIP image encoders are
typically designed to extract generic image-level features that summarize
superfluous or confounding information for the target tasks. This results in
degradation of classification performance, especially when objects of interest
cover small areas of input images. In this work, we propose CLIP with Guided
Cropping (GC-CLIP), where we use an off-the-shelf zero-shot object detection
model in a preprocessing step to increase the focus of the zero-shot classifier on the
object of interest and minimize the influence of extraneous image regions. We
empirically show that our approach improves zero-shot classification results
across architectures and datasets, especially for small objects.
|
http://arxiv.org/abs/2309.06581v1
|
This paper addresses the question of thermodynamic entropy production in the
context of the dynamical Casimir effect. Specifically, we study a scalar
quantum field confined within a one-dimensional ideal cavity subject to
time-varying boundary conditions dictated by an externally prescribed
trajectory of one of the cavity mirrors. The central question is how the
thermodynamic entropy of the field evolves over time. Utilizing an effective
Hamiltonian approach, we compute the entropy production and reveal that it
exhibits scaling behavior concerning the number of particles created in the
short-time limit. Furthermore, this approach elucidates the direct connection
between this entropy and the emergence of quantum coherence within the mode
basis of the field. In addition, by considering a distinct approach based on
the time evolution of Gaussian states we examine the long-time limit of entropy
production within a single mode of the field. This approach results in
establishing a connection between the thermodynamic entropy production in a
single field mode and the entanglement between that particular mode and all
other modes. Consequently, by employing two distinct approaches, we
comprehensively address both the short-term and long-term dynamics of the
system. Our results thus link the irreversible dynamics of the field, as
measured by entropy production and induced by the dynamical Casimir effect, to
two fundamental aspects of quantum mechanics: coherence and entanglement.
|
http://arxiv.org/abs/2309.07847v2
|
A network of spatially distributed data centers can provide operational
flexibility to power systems by shifting computing tasks among electrically
remote locations. However, harnessing this flexibility in real-time through the
standard optimization techniques is challenged by the need for sensitive
operational datasets and substantial computational resources. To alleviate the
data and computational requirements, this paper introduces a coordination
mechanism based on contextual regression. This mechanism, abbreviated as
AgentCONCUR, associates cost-optimal task shifts with public and trusted
contextual data (e.g., real-time prices) and uses regression on this data as a
coordination policy. Notably, regression-based coordination does not learn the
optimal coordination actions from a labeled dataset. Instead, it exploits the
optimization structure of the coordination problem to ensure feasible and
cost-effective actions. A NYISO-based study reveals large coordination gains
and the optimal features for the successful regression-based coordination.
|
http://arxiv.org/abs/2309.16792v2
|
We present the analytical solutions for the trajectories of particles that
spiral and plunge inward toward the event horizon along timelike geodesics
following general non-equatorial paths within Kerr-Newman spacetimes. Our
studies encompass both bound and unbound motions. The solutions can be written
in terms of elliptic integrals and the Jacobian elliptic functions of
manifestly real functions of the Mino time. They reduce to those of the
Kerr, Reissner-Nordstr\"{o}m, and Schwarzschild black holes in certain
limits of the spin and charge of the black holes, and can be compared with the
known ones restricted in equatorial motion. These explicit solutions may have
some implications for the gravitational wave emission from extreme mass-ratio
inspirals.
|
http://arxiv.org/abs/2309.13832v3
|
Automated image caption generation is essential for improving the
accessibility and understanding of visual content. In this study, we introduce
FaceGemma, a model that accurately describes facial attributes such as
emotions, expressions, and features. Using FaceAttDB data, we generated
descriptions for 2000 faces with the Llama 3 - 70B model and fine-tuned the
PaliGemma model with these descriptions. Based on the attributes and captions
supplied in FaceAttDB, we created a new description dataset where each
description perfectly depicts the human-annotated attributes, including key
features like attractiveness, full lips, big nose, blond hair, brown hair,
bushy eyebrows, eyeglasses, male, smile, and youth. This detailed approach
ensures that the generated descriptions are closely aligned with the nuanced
visual details present in the images. Our FaceGemma model leverages an
innovative approach to image captioning by using annotated attributes,
human-annotated captions, and prompt engineering to produce high-quality facial
descriptions. Our method significantly improved caption quality, achieving an
average BLEU-1 score of 0.364 and a METEOR score of 0.355. These metrics
demonstrate the effectiveness of incorporating facial attributes into image
captioning, providing more accurate and descriptive captions for portrait
images.
|
http://arxiv.org/abs/2309.13601v2
|
Instruction tuning is essential for large language models (LLMs) to become
interactive. While many instruction tuning datasets exist in English, there is
a noticeable lack in other languages. Also, their effectiveness has not been
well verified in non-English languages. We construct a Japanese instruction
dataset by expanding and filtering existing datasets and apply the dataset to a
Japanese pre-trained base model. We performed Low-Rank Adaptation (LoRA) tuning
on both Japanese and English existing models using our instruction dataset. We
evaluated these models from both quantitative and qualitative perspectives. As
a result, the effectiveness of Japanese instruction datasets is confirmed. The
results also indicate that even with relatively small LLMs, performance in
downstream tasks can be improved through instruction tuning. Our instruction
dataset, tuned models, and implementation are publicly available online.
|
http://arxiv.org/abs/2309.03412v2
|
We study the data complexity of consistent query answering (CQA) on databases
that may violate the primary key constraints. A repair is a maximal consistent
subset of the database. For a Boolean query $q$, the problem
$\mathsf{CERTAINTY}(q)$ takes a database as input, and asks whether or not each
repair satisfies $q$. It is known that for any self-join-free Boolean
conjunctive query $q$, $\mathsf{CERTAINTY}(q)$ is in $\mathbf{FO}$,
$\mathbf{LSPACE}$-complete, or $\mathbf{coNP}$-complete. In particular,
$\mathsf{CERTAINTY}(q)$ is in $\mathbf{FO}$ for any self-join-free Boolean path
query $q$. In this paper, we show that if self-joins are allowed, the
complexity of $\mathsf{CERTAINTY}(q)$ for Boolean path queries $q$ exhibits a
tetrachotomy between $\mathbf{FO}$, $\mathbf{NL}$-complete,
$\mathbf{PTIME}$-complete, and $\mathbf{coNP}$-complete. Moreover, it is
decidable, in polynomial time in the size of the query~$q$, which of the four
cases applies.
|
http://arxiv.org/abs/2309.15270v1
|
Recent years have witnessed significant progress in developing effective
training and fast sampling techniques for diffusion models. A remarkable
advancement is the use of stochastic differential equations (SDEs) and their
marginal-preserving ordinary differential equations (ODEs) to describe data
perturbation and generative modeling in a unified framework. In this paper, we
carefully inspect the ODE-based sampling of a popular variance-exploding SDE
and reveal several intriguing structures of its sampling dynamics. We discover
that the data distribution and the noise distribution are smoothly connected
with a quasi-linear sampling trajectory and another implicit denoising
trajectory that even converges faster. Meanwhile, the denoising trajectory
governs the curvature of the corresponding sampling trajectory and its finite
differences yield various second-order samplers used in practice. Furthermore,
we establish a theoretical relationship between the optimal ODE-based sampling
and the classic mean-shift (mode-seeking) algorithm, with which we can
characterize the asymptotic behavior of diffusion models and identify the
empirical score deviation. Code is available at
\url{https://github.com/zju-pi/diff-sampler}.
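As a toy illustration of ODE-based sampling for a variance-exploding diffusion, the sketch below integrates the probability-flow ODE $dx/d\sigma = (x - D(x,\sigma))/\sigma$ with Euler steps. The denoiser here is the exact posterior mean for a 1-D Gaussian data distribution (a stand-in for a learned network), and the noise schedule and sizes are arbitrary choices, not the paper's setup.

```python
import numpy as np

mu, s = 2.0, 0.5  # toy 1-D Gaussian data distribution N(mu, s^2)

def denoiser(x, sigma):
    # exact posterior-mean denoiser for Gaussian data; a real model is learned
    return (s**2 * x + sigma**2 * mu) / (s**2 + sigma**2)

sigmas = np.geomspace(80.0, 1e-3, 200)            # decreasing noise levels
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigmas[0], size=10000)        # start from the noise distribution

for sig, sig_next in zip(sigmas[:-1], sigmas[1:]):
    d = (x - denoiser(x, sig)) / sig              # ODE slope along the denoising direction
    x = x + (sig_next - sig) * d                  # Euler step toward lower noise

print(x.mean(), x.std())                          # should approach mu = 2.0 and s = 0.5
```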
|
http://arxiv.org/abs/2305.19947v3
|
We perform physical and numerical experiments to study the stick-slip
response of a stack of slabs in contact through dry frictional interfaces
driven in quasistatic shear. The ratio between the drive's stiffness and the
slab's shear stiffness controls the presence or absence of slip
synchronization. A sufficiently high stiffness ratio leads to synchronization,
comprising periodic slip events in which all interfaces slip simultaneously. A
lower stiffness ratio leads to asynchronous slips and, experimentally, to the
stick-slip amplitude becoming broadly distributed as the number of layers in
the stack increases. We interpret this broadening in light of the combined
effect of complex loading paths due to the asynchronous slips and creep.
Consequently, the aging rate of the interfaces can be readily extracted from
the stick-slip cycles, and it is found to be of the same order of magnitude as
existing experimental results on a similar material. Finally, we discuss the
emergence of slow slips and an increase in aging-rate variations when more
slabs are added to the stack.
|
http://arxiv.org/abs/2301.13745v3
|
Code review is an essential activity for ensuring the quality and
maintainability of software projects. However, it is a time-consuming and often
error-prone task that can significantly impact the development process.
Recently, ChatGPT, a cutting-edge language model, has demonstrated impressive
performance in various natural language processing tasks, suggesting its
potential to automate code review processes. However, it is still unclear how
well ChatGPT performs in code review tasks. To fill this gap, in this paper, we
conduct the first empirical study to understand the capabilities of ChatGPT in
code review tasks, specifically focusing on automated code refinement based on
given code reviews. To conduct the study, we select the existing benchmark
CodeReview and construct a new code review dataset with high quality. We use
CodeReviewer, a state-of-the-art code review tool, as a baseline for comparison
with ChatGPT. Our results show that ChatGPT outperforms CodeReviewer in code
refinement tasks. Specifically, our results show that ChatGPT achieves higher
EM and BLEU scores of 22.78 and 76.44 respectively, while the state-of-the-art
method achieves only 15.50 and 62.88 on a high-quality code review dataset. We
further identify the root causes of the cases where ChatGPT underperforms and propose
several strategies to mitigate these challenges. Our study provides insights
into the potential of ChatGPT in automating the code review process, and
highlights the potential research directions.
|
http://arxiv.org/abs/2309.08221v1
|
Far-from-equilibrium phenomena are critical to all natural and engineered
systems, and essential to biological processes responsible for life. For over a
century and a half, since Carnot, Clausius, Maxwell, Boltzmann, and Gibbs,
among many others, laid the foundation for our understanding of equilibrium
processes, scientists and engineers have dreamed of an analogous treatment of
non-equilibrium systems. But despite tremendous efforts, a universal theory of
non-equilibrium behavior akin to equilibrium statistical mechanics and
thermodynamics has evaded description. Several methodologies have proved their
ability to accurately describe complex non-equilibrium systems at the
macroscopic scale, but their accuracy and predictive capacity are predicated on
either phenomenological kinetic equations fit to microscopic data, or on
running concurrent simulations at the particle level. Instead, we provide a
framework for deriving stand-alone macroscopic thermodynamics models directly
from microscopic physics, without fitting, for overdamped Langevin systems. The
only necessary ingredient is a functional form for a parameterized, approximate
density of states, in analogy to the assumption of a uniform density of states
in the equilibrium microcanonical ensemble. We highlight this framework's
effectiveness by deriving analytical approximations for evolving mechanical and
thermodynamic quantities in a model of coiled-coil proteins and double-stranded
DNA, thus producing, to the authors' knowledge, the first derivation of the
governing equations for a phase propagating system under general loading
conditions without appeal to phenomenology. The generality of our treatment
allows for application to any system described by Langevin dynamics with
arbitrary interaction energies and external driving, including colloidal
macromolecules, hydrogels, and biopolymers.
|
http://arxiv.org/abs/2309.07112v1
|
The Gaussian graphical model (GGM) incorporates an undirected graph to
represent the conditional dependence between variables, with the precision
matrix encoding the partial correlation between pairs of variables given the others.
To achieve flexible and accurate estimation and inference of GGM, we propose
the novel method FLAG, which utilizes the random effects model for pairwise
conditional regression to estimate the precision matrix and applies statistical
tests to recover the graph. Compared with existing methods, FLAG has several
unique advantages: (i) it provides accurate estimation without sparsity
assumptions on the precision matrix, (ii) it allows for element-wise inference
of the precision matrix, (iii) it achieves computational efficiency by
developing an efficient PX-EM algorithm and an MM algorithm accelerated with
low-rank updates, and (iv) it enables joint estimation of multiple graphs using
FLAG-Meta or FLAG-CA. The proposed methods are evaluated using various
simulation settings and real data applications, including gene expression in
the human brain, term association in university websites, and stock prices in
the U.S. financial market. The results demonstrate that FLAG and its extensions
provide accurate precision estimation and graph recovery.
|
http://arxiv.org/abs/2306.17584v1
|
Initialization of neural network weights plays a pivotal role in determining
their performance. Feature Imitating Networks (FINs) offer a novel strategy by
initializing weights to approximate specific closed-form statistical features,
setting a promising foundation for deep learning architectures. While the
applicability of FINs has been chiefly tested in biomedical domains, this study
extends their exploration to other time series datasets. Three different
experiments are conducted in this study to test the applicability of imitating
Tsallis entropy for performance enhancement: Bitcoin price prediction, speech
emotion recognition, and chronic neck pain (CNP) detection. For the Bitcoin price
prediction, models embedded with FINs reduced the root mean square error by
around 1000 compared to the baseline. In the speech emotion recognition task,
the FIN-augmented model increased classification accuracy by over 3 percent.
Lastly, in the CNP detection experiment, an improvement of about 7 percent was
observed compared to established classifiers. These findings validate the broad
utility and potency of FINs in diverse applications.
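A minimal sketch of the feature-imitating idea applied to Tsallis entropy is given below: a small block is pre-trained to reproduce the closed-form feature, and its weights are then reused as an initialization. The layer sizes, entropic index q, and downstream wiring are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def tsallis_entropy(p, q=1.5):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1)."""
    return (1.0 - (p ** q).sum(dim=-1)) / (q - 1.0)

# Train a small block to imitate the closed-form feature (sizes are illustrative).
fin = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(fin.parameters(), lr=1e-3)

for _ in range(2000):
    p = torch.softmax(torch.randn(256, 16), dim=-1)   # random probability vectors
    target = tsallis_entropy(p).unsqueeze(-1)          # feature to imitate
    loss = nn.functional.mse_loss(fin(p), target)
    opt.zero_grad(); loss.backward(); opt.step()

# Initialize the first layer of a downstream model from the imitating block,
# then continue training on the actual task (omitted here).
task_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
task_model[0].load_state_dict(fin[0].state_dict())
```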
|
http://arxiv.org/abs/2309.12279v1
|
We study subsets of countable recursively saturated models of $\mathsf{PA}$
which can be defined using pathologies in satisfaction classes. More precisely,
we characterize those subsets $X$ such that there is a satisfaction class $S$
where $S$ behaves correctly on an idempotent disjunction of length $c$ if and
only if $c \in X$. We generalize this result to characterize several types of
pathologies including double negations, blocks of extraneous quantifiers, and
binary disjunctions and conjunctions. We find a surprising relationship between
the cuts which can be defined in this way and arithmetic saturation: namely, a
countable nonstandard model is arithmetically saturated if and only if every
cut can be the "idempotent disjunctively correct cut" in some satisfaction
class. We describe the relationship between types of pathologies and the
closure properties of the cuts defined by these pathologies.
|
http://arxiv.org/abs/2303.18069v1
|
Successfully training Physics Informed Neural Networks (PINNs) for highly
nonlinear PDEs on complex 3D domains remains a challenging task. In this paper,
PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations
at moderate to high Reynolds numbers for complex geometries. The presented
method utilizes very sparsely distributed solution data in the domain. A
detailed investigation on the effect of the amount of supplied data and the
PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach
is used to generate a surrogate model of a realistic flow-thermal electronics
design problem. This surrogate model provides near real-time sampling and was
found to outperform standard data-driven neural networks when tested on unseen
query points. The findings of the paper show how PINNs can be effective when
used in conjunction with sparse data for solving 3D nonlinear PDEs or for
surrogate modeling of design spaces governed by them.
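The data-plus-residual loss that PINNs optimize can be sketched on a toy problem; the example below uses a 1-D Poisson equation rather than the 3-D Navier-Stokes equations of the paper, with illustrative network size and loss weighting.

```python
import torch

# Toy PINN: fit u with sparse data plus the PDE residual u''(x) = f(x),
# where f is manufactured so that u(x) = sin(pi x) is the exact solution.
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

x_data = torch.tensor([[0.0], [0.5], [1.0]])      # very sparse solution data
u_data = torch.sin(torch.pi * x_data)

for _ in range(2000):
    # PDE residual on random collocation points via automatic differentiation
    x_col = torch.rand(64, 1, requires_grad=True)
    u = net(x_col)
    du = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_col, torch.ones_like(du), create_graph=True)[0]
    loss_pde = ((d2u - f(x_col)) ** 2).mean()

    # supervised loss on the sparse data points
    loss_data = ((net(x_data) - u_data) ** 2).mean()

    loss = loss_data + loss_pde    # PDE term acts as a physics-based regularizer
    opt.zero_grad(); loss.backward(); opt.step()
```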
|
http://arxiv.org/abs/2309.03374v3
|
A prominent goal of representation learning research is to achieve
representations which are factorized in a useful manner with respect to the
ground truth factors of variation. The fields of disentangled and equivariant
representation learning have approached this ideal from a range of
complimentary perspectives; however, to date, most approaches have proven to
either be ill-specified or insufficiently flexible to effectively separate all
realistic factors of interest in a learned latent space. In this work, we
propose an alternative viewpoint on such structured representation learning
which we call Flow Factorized Representation Learning, and demonstrate it to
learn both more efficient and more usefully structured representations than
existing frameworks. Specifically, we introduce a generative model which
specifies a distinct set of latent probability paths that define different
input transformations. Each latent flow is generated by the gradient field of a
learned potential following dynamic optimal transport. Our novel setup brings
new understandings to both \textit{disentanglement} and \textit{equivariance}.
We show that our model achieves higher likelihoods on standard representation
learning benchmarks while simultaneously being closer to approximately
equivariant models. Furthermore, we demonstrate that the transformations
learned by our model are flexibly composable and can also extrapolate to new
data, implying a degree of robustness and generalizability approaching the
ultimate goal of usefully factorized representation learning.
|
http://arxiv.org/abs/2309.13167v1
|
Large Language Models (LLMs) have emerged as one of the most important
breakthroughs in NLP for their impressive skills in language generation and
other language-specific tasks. Though LLMs have been evaluated in various
tasks, mostly in English, they have not yet undergone thorough evaluation in
under-resourced languages such as Bengali (Bangla). To this end, this paper
introduces BenLLM-Eval, which consists of a comprehensive evaluation of LLMs to
benchmark their performance in the Bengali language that has modest resources.
In this regard, we select various important and diverse Bengali NLP tasks, such
as text summarization, question answering, paraphrasing, natural language
inference, transliteration, text classification, and sentiment analysis for
zero-shot evaluation of popular LLMs, namely, GPT-3.5, LLaMA-2-13b-chat, and
Claude-2. Our experimental results demonstrate that while in some Bengali NLP
tasks, zero-shot LLMs could achieve performance on par with, or even better
than, current SOTA fine-tuned models; in most tasks, their performance is quite poor
(with open-source LLMs like LLaMA-2-13b-chat performing particularly poorly)
in comparison to the current SOTA results. This calls for further
efforts to develop a better understanding of LLMs in
modest-resourced languages like Bengali.
|
http://arxiv.org/abs/2309.13173v2
|
Efficient navigation in unknown and dynamic environments is crucial for
expanding the application domain of mobile robots. The core challenge stems
from the nonavailability of a feasible global path for guiding
optimization-based local planners. As a result, existing local planners often
get trapped in poor local minima. In this paper, we present a novel optimizer
that can explore multiple homotopies to plan high-quality trajectories over
long horizons while still being fast enough for real-time applications. We
build on the gradient-free paradigm by augmenting the trajectory sampling
strategy with a projection optimization that guides the samples toward a
feasible region. As a result, our approach can recover from the frequently
encountered pathological cases wherein all the sampled trajectories lie in the
high-cost region. Furthermore, we also show that our projection optimization
has a highly parallelizable structure that can be easily accelerated over GPUs.
We push the state-of-the-art in the following respects. Over the navigation
stack of the Robot Operating System (ROS), we show an improvement of 7-13% in
success rate and up to a two-fold improvement in the total travel time metric. On the same
benchmarks and metrics, our approach achieves up to 44% improvement over MPPI
and its recent variants. On simple point-to-point navigation tasks, our
optimizer is up to two times more reliable than SOTA gradient-based solvers, as
well as sampling-based approaches such as the Cross-Entropy Method (CEM) and
VPSTO. Code: https://github.com/fatemeh-rastgar/PRIEST
|
http://arxiv.org/abs/2309.08235v1
|
The Newton-Raphson controller is a powerful prediction-based variable-gain
integral controller. Basically, the classical model-based Newton-Raphson
controller requires two elements: the prediction of the system output and the
derivative of the predicted output with respect to the control input. In real
applications, the model may not be known, and it is infeasible to predict the
system some time ahead and calculate the derivative by the finite difference method
as done in simulation. To solve these problems, in this work, we utilize the
Koopman operator framework to reconstruct a linear model of the original
nonlinear dynamical system and then utilize the output of the new linear system
as the predictor of the Newton-Raphson controller. This method is based only on
data collected over a short time window and is thus more practical. Three examples
related to highly nonlinear systems are provided to verify the effectiveness of
our proposed method.
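The following sketch illustrates the overall idea on a scalar toy system: a linear predictor is fitted from collected input-state data (a stand-in for a Koopman/EDMD model, which would regress on a dictionary of lifted observables) and then drives a Newton-Raphson-style update of the control input. The plant, gains, and horizon are hypothetical choices, not the paper's examples.

```python
import numpy as np

# Toy nonlinear plant (unknown to the controller)
def plant_step(x, u):
    return 0.9 * x + 0.5 * np.tanh(u)

# 1) Fit a linear predictor x+ ~= a*x + b*u from collected data.
rng = np.random.default_rng(0)
X, U, Xn, x = [], [], [], 0.0
for _ in range(500):
    u = rng.uniform(-0.5, 0.5)
    xn = plant_step(x, u)
    X.append(x); U.append(u); Xn.append(xn)
    x = xn
a, b = np.linalg.lstsq(np.column_stack([X, U]), np.array(Xn), rcond=None)[0]

# 2) Newton-Raphson controller: drive the N-step-ahead predicted output to r.
def predict(x, u, N=20):
    for _ in range(N):
        x = a * x + b * u          # constant input over the horizon
    return x

def dpredict_du(x, u, N=20, h=1e-4):
    # derivative taken on the fitted predictor, not on the real plant
    return (predict(x, u + h, N) - predict(x, u - h, N)) / (2 * h)

r, k_gain, dt = 1.0, 0.5, 0.1
x, u = 0.0, 0.0
for _ in range(300):
    y_hat = predict(x, u)
    u += dt * k_gain * (r - y_hat) / dpredict_du(x, u)
    x = plant_step(x, u)

print(round(x, 3))  # settles near r = 1.0, up to mismatch between predictor and plant
```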
|
http://arxiv.org/abs/2309.17315v1
|
Outcome-dependent sampling designs are extensively utilized in various
scientific disciplines, including epidemiology, ecology, and economics, with
retrospective case-control studies being specific examples of such designs.
Additionally, if the outcome used for sample selection is also mismeasured,
then it is even more challenging to estimate the average treatment effect (ATE)
accurately. To our knowledge, no existing method can address these two issues
simultaneously. In this paper, we establish the identifiability of ATE and
propose a novel method for estimating the ATE in the context of a generalized linear
model. The estimator is shown to be consistent under some regularity
conditions. To relax the model assumption, we also consider a generalized
additive model. We propose to estimate the ATE using penalized B-splines and
establish asymptotic properties for the proposed estimator. Our methods are
evaluated through extensive simulation studies and the application to a dataset
from the UK Biobank, with alcohol intake as the treatment and gout as the
outcome.
|
http://arxiv.org/abs/2309.11764v1
|
Pedestrian detection under valet parking scenarios is fundamental for
autonomous driving. However, the presence of pedestrians can be manifested in a
variety of ways and postures under imperfect ambient conditions, which can
adversely affect detection performance. Furthermore, models trained on
public datasets that include pedestrians generally provide suboptimal outcomes
for these valet parking scenarios. In this paper, we present the Parking
Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research
dealing with real-world pedestrians, especially with occlusions and diverse
postures. PPD consists of several distinctive types of pedestrians captured
with fisheye cameras. Additionally, we present a pedestrian detection baseline
on the PPD dataset, and introduce two data augmentation techniques to improve the
baseline by enhancing the diversity of the original dataset. Extensive
experiments validate the effectiveness of our novel data augmentation
approaches over baselines and the dataset's exceptional generalizability.
|
http://arxiv.org/abs/2309.11002v2
|
Audio anti-spoofing for automatic speaker verification aims to safeguard
users' identities from spoofing attacks. Although state-of-the-art spoofing
countermeasure (CM) models perform well on specific datasets, they lack
generalization when evaluated with different datasets. To address this
limitation, previous studies have explored large pre-trained models, which
require significant resources and time. We aim to develop a compact but
well-generalizing CM model that can compete with large pre-trained models. Our
approach involves multi-dataset co-training and sharpness-aware minimization,
which has not been investigated in this domain. Extensive experiments reveal
that the proposed method yields competitive results across various datasets while
utilizing 4,000 times fewer parameters than the large pre-trained models.
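For reference, a generic sharpness-aware minimization (SAM) training step is sketched below in the standard two-pass form; the model, loss, and perturbation radius rho are placeholders rather than the authors' exact recipe, and the multi-dataset co-training loop is omitted.

```python
import torch

def sam_step(model, loss_fn, batch, targets, base_opt, rho=0.05):
    """One generic SAM step: ascend to a nearby weight perturbation, then
    descend using the gradient computed there."""
    # First pass: gradient at the current weights
    loss = loss_fn(model(batch), targets)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None])) + 1e-12

    # Perturb weights toward higher loss: epsilon = rho * g / ||g||
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None); continue
            e = rho * p.grad / grad_norm
            p.add_(e); eps.append(e)
    model.zero_grad()

    # Second pass: gradient at the perturbed weights, then restore and update
    loss_fn(model(batch), targets).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step(); base_opt.zero_grad()
    return loss.item()

# toy usage with a placeholder model and random data
model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
print(sam_step(model, torch.nn.functional.cross_entropy, x, y, opt))
```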
|
http://arxiv.org/abs/2305.19953v2
|
The quasisymmetric generating function of the set of permutations whose
inverses have a fixed descent set is known to be symmetric and Schur-positive.
The corresponding representation of the symmetric group is called the descent
representation. In this paper, we provide an extension of this result to
colored permutation groups, where Gessel's fundamental quasisymmetric functions
are replaced by Poirier's colored quasisymmetric functions. For this purpose,
we introduce a colored analogue of zigzag shapes and prove that the
representations associated with these shapes coincide with colored descent
representations studied by Adin, Brenti and Roichman in the case of two colors
and Bagno and Biagioli in the general case. Additionally, we provide a colored
analogue of MacMahon's alternating formula, which expresses ribbon Schur
functions in the basis of complete homogeneous symmetric functions.
|
http://arxiv.org/abs/2309.13615v1
|
We consider the problem of uplink power control for distributed massive
multiple-input multiple-output systems where the base stations (BSs) are
equipped with 1-bit analog-to-digital converters (ADCs). The scenario with a
single-user equipment (UE) is first considered to provide insights into the
signal-tonoise-and-distortion ratio (SNDR). With a single BS, the SNDR is a
unimodal function of the UE transmit power. With multiple BSs, the SNDR at the
output of the joint combiner can be made unimodal by adding properly tuned
dithering at each BS. As a result, the UE can be effectively served by multiple
BSs with 1-bit ADCs. Considering the
signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE
scenario, we aim at optimizing the UE transmit powers and the dithering at each
BS based on the min-power and max-min-SINDR criteria. To this end, we propose
three algorithms with different convergence and complexity properties.
Numerical results show that, if the desired SINDR can only be achieved via
joint combining across multiple BSs with properly tuned dithering, the optimal
UE transmit power is imposed by the distance to the farthest serving BS (unlike
in the unquantized case). In this context, dithering plays a crucial role in
enhancing the SINDR, especially for UEs with significant path loss disparity
among the serving BSs.
|
http://arxiv.org/abs/2309.09665v1
|
This study examines the use of a highly effective training method to conduct
one-class classification. The existence of both positive and negative examples
in the training data is necessary to develop an effective classifier in common
binary classification scenarios. Unfortunately, this criterion is not met in
many domains. Here, there is just one class of examples. Classification
algorithms that learn from solely positive input have been created to deal with
this setting. In this paper, an effective algorithm for dual soft-margin
one-class SVM training is presented. Our approach makes use of the Augmented
Lagrangian Fast Projected Gradient Method (AL-FPGM), a variant of the Fast Projected Gradient Method. The FPGM
requires only first derivatives, which for the dual soft margin OCC-SVM means
computing mainly a matrix-vector product. Therefore, AL-FPGM, being
computationally inexpensive, may complement existing quadratic programming
solvers for training large SVMs. We extensively validate our approach over
real-world datasets and demonstrate that our strategy obtains statistically
significant results.
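A plain (unaccelerated) sketch of the augmented-Lagrangian projected-gradient idea on the one-class SVM dual is shown below; the kernel, penalty parameter, and iteration counts are illustrative, and the Nesterov acceleration that makes FPGM "fast" is omitted for brevity.

```python
import numpy as np

def ocsvm_dual_al_pg(K, nu=0.1, rho=10.0, outer=20, inner=200):
    """Illustrative solver for the one-class SVM dual
        min_a 0.5 a^T K a   s.t.  0 <= a_i <= 1/(nu*n),  sum(a) = 1,
    handling the equality constraint with an augmented Lagrangian and the box
    constraint by projected gradient steps (a sketch, not the paper's code)."""
    n = K.shape[0]
    ub = 1.0 / (nu * n)
    a = np.full(n, 1.0 / n)                           # feasible start
    lam = 0.0
    step = 1.0 / (np.linalg.norm(K, 2) + rho * n)     # conservative 1/L step size
    for _ in range(outer):
        for _ in range(inner):
            viol = a.sum() - 1.0
            grad = K @ a + (lam + rho * viol)         # gradient of the augmented Lagrangian
            a = np.clip(a - step * grad, 0.0, ub)     # project onto the box
        lam += rho * (a.sum() - 1.0)                  # multiplier update
    return a

# toy usage with an RBF kernel on random 2-D points
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
alpha = ocsvm_dual_al_pg(np.exp(-0.5 * sq))
print(alpha.sum(), alpha.max())   # sum close to 1, entries within [0, 1/(nu*n)]
```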
|
http://arxiv.org/abs/2309.16745v1
|
We investigate the dynamics of chemical reaction networks (CRNs) with the
goal of deriving an upper bound on their reaction rates. This task is
challenging due to the nonlinear nature and discrete structure inherent in
CRNs. To address this, we employ an information geometric approach, using the
natural gradient, to develop a nonlinear system that yields an upper bound for
CRN dynamics. We validate our approach through numerical simulations,
demonstrating faster convergence in a specific class of CRNs. This class is
characterized by the number of chemicals, the maximum value of stoichiometric
coefficients of the chemical reactions, and the number of reactions. We also
compare our method to a conventional approach, showing that the latter cannot
provide an upper bound on reaction rates of CRNs. While our study focuses on
CRNs, the ubiquity of hypergraphs in fields from natural sciences to
engineering suggests that our method may find broader applications, including
in information science.
|
http://arxiv.org/abs/2309.10334v1
|
Reconfigurable intelligent surface (RIS) is considered a prospective
technology for beyond fifth-generation (5G) networks to improve the spectral
and energy efficiency at a low cost. Prior works on the RIS mainly rely on
perfect channel state information (CSI), which imposes a huge computational
complexity. This work considers a single-user RIS-assisted communication
system, where the second-order statistical knowledge of the channels is
exploited to reduce the training overhead. We present algorithms that do not
require estimation of the CSI and reconfiguration of the RIS in every channel
coherence interval, which constitutes one of the most critical practical issues
in an RIS-aided system.
|
http://arxiv.org/abs/2309.04341v1
|
Photometric characteristics for all models of Starlink satellites launched to
date are reviewed. The Original design that lacked brightness mitigation is the
most luminous. SpaceX installed a sunshade on the VisorSat model which reduced
its luminosity by a factor of 3. The visor was omitted on Post-VisorSat
spacecraft with laser communication which followed, but the company added a
reflective layer which resulted in an intermediate brightness between Original
and VisorSat. SpaceX is applying advanced brightness mitigation techniques to
their Generation 2 Starlink satellites which are larger. The first of these,
called Minis, are dimmer than Gen 1 Starlinks despite their greater size.
Photometric observations verify that brightness mitigation efforts employed by
SpaceX reduce spacecraft luminosity substantially. However, the satellites
still have some negative impact on astronomical observations and the very large
satellites planned for later in Gen 2 may interfere more seriously.
|
http://arxiv.org/abs/2309.14152v3
|
Data visualization can be defined as the visual communication of information.
One important barometer for the success of a visualization is whether the
intents of the communicator(s) are faithfully conveyed. The processes of
constructing and displaying visualizations have been widely studied by our
community. However, due to the lack of consistency in this literature, there is
a growing acknowledgment of a need for frameworks and methodologies for
classifying and formalizing the communicative component of visualization. This
work focuses on intent and introduces how this concept in communicative
visualization mirrors concepts in linguistics. We construct a mapping between
the two spaces that enables us to leverage relevant frameworks to apply to
visualization. We describe this translation as using the philosophy of language
as a base for explaining communication in visualization. Furthermore, we
illustrate the benefits and point out several prospective research directions.
|
http://arxiv.org/abs/2309.05739v1
|
Full-body avatars are suggested to be beneficial for communication in virtual
environments, and consistency between users' voices and gestures is considered
essential to ensure communication quality. This paper proposes extending the
functionality of a web-based VR platform to support the use of full-body
avatars and delegating avatar transform synchronization to the WebRTC DataChannel
to enhance the consistency between voices and gestures. Finally, we conducted a
preliminary validation to confirm the consistency.
|
http://arxiv.org/abs/2309.14634v1
|
Spreadsheets are a vital tool for end-user data management. Using large
language models for formula authoring assistance in these environments can be
difficult, as these models are expensive to train and challenging to deploy due
to their size (up to billions of parameters). We present FLAME, a
transformer-based model trained exclusively on Excel formulas that leverages
domain insights to achieve competitive performance while being substantially
smaller (60M parameters) and training on two orders of magnitude less data. We
curate a training dataset using sketch deduplication, introduce an
Excel-specific formula tokenizer, and use domain-specific versions of masked
span prediction and noisy auto-encoding as pre-training objectives. We evaluate
FLAME on formula repair, formula completion, and similarity-based formula
retrieval. FLAME can outperform much larger models, such as the Davinci (175B)
and Cushman (12B) variants of Codex and CodeT5 (220M), in 10 of 14 evaluation
settings for the repair and completion tasks. For formula retrieval, FLAME
outperforms CodeT5, CodeBERT, and GraphCodeBERT.
|
http://arxiv.org/abs/2301.13779v2
|
In this article, we give a generalization to injective modules by using
$e$-exact sequences introduced by Akray in [1] and name it $e$-injective
modules and investigate their properties. We reprove both Baer criterion and
comparison theorem of homology using $e$-injective modules and $e$-injective
resolutions. Furthermore, we apply the notion of $e$-injective modules to local
cohomology to construct a new form of cohomology modules, which we call essential
cohomology modules (briefly, $e$-cohomology modules). We show that the torsion
functor $\Gamma_a(-)$ is an $e$-exact functor on torsion-free modules. We
investigate the relationship between $e$-cohomology and classical cohomology.
Finally, we conclude that they differ on the vanishing of their $i$-th
cohomology modules.
|
http://arxiv.org/abs/2309.10452v1
|
Surrogate-assisted evolutionary algorithms (SAEAs) hold significant
importance in resolving expensive optimization problems (EOPs). Extensive
efforts have been devoted to improving the efficacy of SAEAs through the
development of proficient model-assisted selection methods. However, generating
high-quality solutions is a prerequisite for selection. The fundamental
paradigm of evaluating a limited number of solutions in each generation within
SAEAs reduces the variance of adjacent populations, thus impacting the quality
of offspring solutions. This is a frequently encountered issue, yet it has not
gained widespread attention. This paper presents a framework using unevaluated
solutions to enhance the efficiency of SAEAs. The surrogate model is employed
to identify high-quality solutions for direct generation of new solutions
without evaluation. To ensure dependable selection, we have introduced two
tailored relation models for the selection of the optimal solution and the
unevaluated population. A comprehensive experimental analysis is performed on
two test suites, which showcases the superiority of the relation model over
regression and classification models in the selection phase. Furthermore, the
surrogate-selected unevaluated solutions with high potential have been shown to
significantly enhance the efficiency of the algorithm.
|
http://arxiv.org/abs/2309.11994v2
|
Point cloud registration has seen recent success with several learning-based
methods that focus on correspondence matching and, as such, optimize only for
this objective. Following the learning step of correspondence matching, they
evaluate the estimated rigid transformation with a RANSAC-like framework. While
it is an indispensable component of these methods, it prevents fully
end-to-end training, leaving the objective of minimizing the pose error
unaddressed. We present a novel solution, Q-REG, which utilizes rich geometric
information to estimate the rigid pose from a single correspondence. Q-REG
allows us to formalize the robust estimation as an exhaustive search, hence
enabling end-to-end training that optimizes over both objectives of
correspondence matching and rigid pose estimation. We demonstrate in the
experiments that Q-REG is agnostic to the correspondence matching method and
provides consistent improvement both when used only in inference and in
end-to-end training. It sets a new state-of-the-art on the 3DMatch, KITTI, and
ModelNet benchmarks.
|
http://arxiv.org/abs/2309.16023v1
|
The performance of a binary classifier is described by a confusion matrix
with four entries: the number of true positives (TP), true negatives (TN),
false positives (FP), and false negatives (FN).
The Matthews Correlation Coefficient (MCC), F1, and Fowlkes--Mallows (FM)
scores are scalars that summarize a confusion matrix. Both the F1 and FM scores
are based on only three of the four entries in the confusion matrix (they
ignore TN). In contrast, the MCC takes into account all four entries of the
confusion matrix and thus can be seen as providing a more representative
picture.
However, in object detection problems, the number of true negatives is so
large that measuring it is often intractable. Thus we ask: what happens to the MCC as
the number of true negatives approaches infinity? This paper provides insight
into the relationship between the MCC and FM score by proving that the
FM-measure is equal to the limit of the MCC as the number of true negatives
approaches infinity.
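The limiting relationship can be read off directly from the standard definitions of the two scores; a short worked derivation follows.

\[
\mathrm{MCC} = \frac{TP\cdot TN - FP\cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}},
\qquad
\mathrm{FM} = \frac{TP}{\sqrt{(TP+FP)(TP+FN)}}.
\]
Dividing numerator and denominator by $TN$ and letting $TN \to \infty$ with $TP$, $FP$, $FN$ fixed,
\[
\lim_{TN\to\infty} \mathrm{MCC}
= \lim_{TN\to\infty} \frac{TP - FP\cdot FN/TN}{\sqrt{(TP+FP)(TP+FN)\,(1+FP/TN)(1+FN/TN)}}
= \frac{TP}{\sqrt{(TP+FP)(TP+FN)}} = \mathrm{FM}.
\]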
|
http://arxiv.org/abs/2305.00594v2
|
This paper presents a control system for the attitude of a Bicopter along with a
low-cost design. The control system uses a PID controller that receives feedback from
an IMU to calculate control inputs that adjust the Bicopter's attitude (roll,
pitch, and yaw angles) and is resistant to disturbances (wind noise) on a test
bed. The control system is implemented on a hardware platform consisting of a
Bicopter, an IMU sensor, and a microcontroller with low cost design. In
mechanical design, the Bicopter is shaped to resemble the letter "V" so that
the centre of mass (CoM) is distributed such that the servomotor torque
reaction is parallel to the Bicopter's axis of rotation during pitch-angle
attitude movements. In electronic
design, the Bicopter was developed using the ATmega328P microcontroller.
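For illustration, a discrete PID loop of the kind described can be sketched on a toy single-axis (pitch) model; the gains, the simplified plant, and the disturbance are assumptions, not the paper's tuned values or firmware.

```python
# Minimal sketch of a discrete PID attitude loop for a single axis (pitch).
# The plant model, gains, and disturbance are illustrative stand-ins,
# not the paper's values or ATmega328P firmware.

dt = 0.01                      # control period [s]
Kp, Ki, Kd = 4.0, 0.8, 0.6     # PID gains (hypothetical)

angle, rate = 0.2, 0.0         # initial pitch angle [rad] and rate [rad/s]
setpoint = 0.0                 # level attitude
integral, prev_err = 0.0, 0.0

for step in range(500):
    err = setpoint - angle                 # IMU feedback would supply `angle`
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * derivative   # control torque command
    prev_err = err

    # Toy second-order plant: torque changes rate, rate changes angle,
    # with a small disturbance standing in for wind noise.
    disturbance = 0.01 if step == 250 else 0.0
    rate += (u + disturbance) * dt
    angle += rate * dt

print(f"final pitch error: {abs(setpoint - angle):.4f} rad")
```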
|
http://arxiv.org/abs/2309.08209v1
|
Pulmonary diseases rank prominently among the principal causes of death
worldwide. Curing them will require, among other things, a better understanding
of the many complex 3D tree-shaped structures within the pulmonary system, such
as airways, arteries, and veins. In theory, they can be modeled using
high-resolution image stacks. Unfortunately, standard CNN approaches operating
on dense voxel grids are prohibitively expensive. To remedy this, we introduce
a point-based approach that preserves the graph connectivity of the tree skeleton and
incorporates an implicit surface representation. It delivers SOTA accuracy at a
low computational cost and the resulting models have usable surfaces. Due to
the scarcity of publicly accessible data, we have also curated an extensive
dataset to evaluate our approach and will make it public.
|
http://arxiv.org/abs/2309.17329v2
|
We demonstrate that Contrastive Decoding -- a simple, computationally light,
and training-free text generation method proposed by Li et al. (2022) -- achieves
large out-of-the-box improvements over greedy decoding on a variety of
reasoning tasks. Originally shown to improve the perceived quality of long-form
text generation, Contrastive Decoding searches for strings that maximize a
weighted difference in likelihood between strong and weak models. We show that
Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM
2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA
2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in
addition to improvements on a collection of other tasks. Analysis suggests that
Contrastive Decoding improves over existing methods by preventing some abstract
reasoning errors, as well as by avoiding simpler modes such as copying sections
of the input during chain-of-thought. Overall, Contrastive Decoding outperforms
nucleus sampling for long-form generation and greedy decoding for reasoning
tasks, making it a powerful general purpose method for generating text from
language models.
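One decoding step can be sketched as follows; the toy next-token distributions stand in for a strong (expert) and weak (amateur) model, and the plausibility cutoff follows the spirit of the method, though the exact constraint used by Li et al. may differ.

```python
import math

# Sketch of one Contrastive Decoding step with toy next-token distributions
# standing in for a strong (expert) and a weak (amateur) language model.
p_expert  = {"Paris": 0.60, "the": 0.25, "London": 0.10, "banana": 0.05}
p_amateur = {"Paris": 0.20, "the": 0.55, "London": 0.15, "banana": 0.10}

alpha = 0.1  # plausibility cutoff relative to the expert's best token (assumed)

def contrastive_step(pe, pa, alpha):
    cutoff = alpha * max(pe.values())
    # Only tokens the expert itself finds plausible are considered ...
    candidates = [t for t, p in pe.items() if p >= cutoff]
    # ... and among them the largest expert-vs-amateur log-likelihood gap wins.
    return max(candidates, key=lambda t: math.log(pe[t]) - math.log(pa[t]))

print(contrastive_step(p_expert, p_amateur, alpha))   # -> "Paris"
```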
|
http://arxiv.org/abs/2309.09117v2
|
We introduce the notion of a wall-connected twin building and show that the
local-to-global principle holds for these twin buildings. As each twin building
satisfying Condition (co) (introduced in [7]) is wall-connected, we obtain a
strengthening of the main result of [7] that also covers the thick irreducible
affine twin buildings of rank at least 3.
|
http://arxiv.org/abs/2303.18041v1
|
Principal component analysis is a long-standing go-to method for exploring
multivariate data. The principal components are linear combinations of the
original variables, ordered by descending variance. The first few components
typically provide a good visual summary of the data. Tours also make linear
projections of the original variables but offer many different views, like
examining the data from different directions. The grand tour shows a smooth
sequence of projections as an animation following interpolations between random
target bases. The manual radial tour rotates the selected variable's
contribution into and out of a projection. This allows the importance of the
variable to the structure in the projection to be assessed. This work describes a
mixed-design user study evaluating the radial tour's efficacy compared with
principal component analysis and the grand tour. A supervised classification
task is assigned to participants who evaluate variable attribution of the
separation between two classes. Their accuracy in assigning the variable
importance is measured across various factors. Data were collected from 108
crowdsourced participants, who performed two trials with each visual for 648
trials in total. Mixed model regression finds strong evidence that the radial
tour results in a large increase in accuracy over the alternatives.
Participants also reported a preference for the radial tour in comparison to
the other two methods.
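A small sketch of the two ingredients (a principal-component projection and a crude step that removes one variable's contribution from the projection basis) is given below; it is an illustration under stated assumptions, not the radial tour's proper geodesic interpolation or the study's software.

```python
import numpy as np

# Sketch: a 2D linear projection of 4D toy data and a crude radial-tour-like
# step that scales one variable's contribution out of the projection basis.
# Illustrative only; the actual radial tour uses geodesic interpolation.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4)) @ np.diag([3.0, 2.0, 1.0, 0.5])   # toy data
X -= X.mean(axis=0)

# Principal components via SVD: rows of Vt ordered by descending variance.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:2].T                      # 4x2 projection basis (PC1, PC2)

def zero_out_variable(basis, var_idx, t):
    """Scale variable `var_idx`'s contribution by (1-t) and re-orthonormalize."""
    B = basis.copy()
    B[var_idx, :] *= (1.0 - t)
    Q, _ = np.linalg.qr(B)            # keep the projection frame orthonormal
    return Q

for t in (0.0, 0.5, 1.0):
    B = zero_out_variable(basis, var_idx=0, t=t)
    print(f"t={t}: contribution of variable 0 = {np.linalg.norm(B[0]):.3f}")
```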
|
http://arxiv.org/abs/2301.00077v1
|
Normalizing Flows (NFs) describe a class of models that express a complex
target distribution as the composition of a series of bijective transformations
over a simpler base distribution. By limiting the space of candidate
transformations to diffeomorphisms, NFs enjoy efficient, exact sampling and
density evaluation, enabling NFs to flexibly behave as both discriminative and
generative models. Their restriction to diffeomorphisms, however, enforces that
input, output and all intermediary spaces share the same dimension, limiting
their ability to effectively represent target distributions with complex
topologies. Additionally, in cases where the prior and target distributions are
not homeomorphic, Normalizing Flows can leak mass outside of the support of the
target. This survey covers a selection of recent works that combine aspects of
other generative model classes, such as VAEs and score-based diffusion, and in
doing so loosen the strict bijectivity constraints of NFs to achieve a balance
of expressivity, training speed, sample efficiency and likelihood tractability.
|
http://arxiv.org/abs/2309.04433v1
|
Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression
formats for sparse matrices. However, both CSC and COO are general purpose and
cannot take advantage of any of the properties of the data other than sparsity,
such as data redundancy. Highly redundant sparse data is common in many machine
learning applications, such as genomics, and is often too large for in-core
computation using conventional sparse storage formats. In this paper, we
present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and
(2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of
high redundancy within a column to further compress data up to 3-fold over COO
and 2.25-fold over CSC, without significant negative impact to performance
characteristics. IVCSC extends VCSC by compressing index arrays through delta
encoding and byte-packing, achieving a 10-fold decrease in memory usage over
COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data
show that VCSC and IVCSC can be read in compressed form with little added
computational cost. These two novel compression formats offer a broadly useful
solution to encoding and reading redundant sparse data.
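The two underlying ideas can be sketched on a single sparse column: group repeated values so each unique value is stored once, and delta-encode the row indices so they fit in fewer bytes. This illustrates the concepts only and is not the papers' exact byte-level layout.

```python
import numpy as np

# Sketch of the two ideas behind VCSC/IVCSC on one sparse column:
# (1) group repeated values so each unique value is stored once, and
# (2) delta-encode row indices so they can be byte-packed.
# Conceptual illustration, not the papers' exact storage format.

rows   = np.array([3, 7, 8, 15, 200, 201, 460], dtype=np.int64)
values = np.array([1.0, 1.0, 2.5, 1.0, 2.5, 2.5, 1.0])   # highly redundant

# (1) Value compression: unique values + per-value lists of row indices.
value_compressed = {}
for r, v in zip(rows, values):
    value_compressed.setdefault(v, []).append(int(r))

# (2) Index compression: store deltas between consecutive row indices,
#     which are small integers amenable to byte-packing.
def delta_encode(idx):
    idx = sorted(idx)
    return [idx[0]] + [b - a for a, b in zip(idx, idx[1:])]

index_compressed = {v: delta_encode(idx) for v, idx in value_compressed.items()}
print(index_compressed)   # {1.0: [3, 4, 8, 445], 2.5: [8, 192, 1]}
```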
|
http://arxiv.org/abs/2309.04355v1
|
The use of abusive language online has become an increasingly pervasive
problem that damages both individuals and society, with effects ranging from
psychological harm right through to escalation to real-life violence and even
death. Machine learning models have been developed to automatically detect
abusive language, but these models can suffer from temporal bias, the
phenomenon in which topics, language use or social norms change over time. This
study aims to investigate the nature and impact of temporal bias in abusive
language detection across various languages and explore mitigation methods. We
evaluate the performance of models on abusive data sets from different time
periods. Our results demonstrate that temporal bias is a significant challenge
for abusive language detection, with models trained on historical data showing
a significant drop in performance over time. We also present an extensive
linguistic analysis of these abusive data sets from a diachronic perspective,
aiming to explore the reasons for language evolution and performance decline.
This study sheds light on the pervasive issue of temporal bias in abusive
language detection across languages, offering crucial insights into language
evolution and temporal bias mitigation.
|
http://arxiv.org/abs/2309.14146v1
|
Calculations of excited states in Green's function formalism often invoke the
diagonal approximation, in which the quasiparticle states are taken from a
mean-field calculation. Here, we extend the stochastic approaches applied in
the many-body perturbation theory and overcome this limitation for large
systems in which we are interested in a small subset of states. We separate the
problem into a core subspace, whose coupling to the remainder of the system
(the environment) is stochastically sampled. This method is exemplified on computing
hole injection energies into CO$_2$ on an extended gold surface with nearly
3000 electrons. We find that in the extended system, the size of the problem
can be compressed up to $95\%$ using stochastic sampling. This result provides
a way forward for self-consistent stochastic methods and determining Dyson
orbitals in large systems.
|
http://arxiv.org/abs/2309.15258v1
|
We investigate the differential emission rate of neutral scalar bosons from a
highly magnetized relativistic plasma. We show that three processes contribute
at the leading order: particle splitting ($\psi\rightarrow \psi+\phi $),
antiparticle splitting ($\bar{\psi} \rightarrow \bar{\psi}+\phi $), and
particle-antiparticle annihilation ($\psi + \bar{\psi}\rightarrow \phi $). This
is in contrast to the scenario with zero magnetic field, where only the
annihilation processes contribute to boson production. We examine the impact of
Landau-level quantization on the energy dependence of the rate and investigate
the angular distribution of emitted scalar bosons. The differential rates
resulting from both (anti)particle splitting and annihilation processes are
typically suppressed in the direction of the magnetic field and enhanced in
perpendicular directions. Overall, the background magnetic field significantly
amplifies the total emission rate. We speculate that our model calculations
provide valuable theoretical insights with potentially important applications.
|
http://arxiv.org/abs/2310.00050v2
|
When it comes to active particles, even an ideal-gas model in a harmonic
potential poses a mathematical challenge. An exception is the run-and-tumble
particle (RTP) model in one dimension, for which a stationary distribution is known
exactly. The case of two dimensions is more complex, but a solution is
possible. Incidentally, in both dimensions the stationary distributions
correspond to a beta function. In three dimensions, a stationary distribution
is not known but simulations indicate that it does not have a beta function
form. The current work focuses on the three-dimensional RTP model in a harmonic
trap. The main result of this study is the derivation of the recurrence
relation for generating moments of a stationary distribution. These moments are
then used to recover a stationary distribution using the Fourier-Lagrange
expansion.
|
http://arxiv.org/abs/2309.12537v1
|
The electric double layer (EDL) has a pivotal role in screening charges on
surfaces as in supercapacitor electrodes or colloidal and polymer solutions.
Its structure is determined by correlations between the finite-sized ionic
charge carriers of the underlying electrolyte and, this way, these correlations
affect the properties of the EDL and of applications utilizing EDLs. We study
the structure of EDLs within classical density functional theory (DFT) in order
to uncover whether a structural transition in the first layer of the EDL that
is driven by changes in the surface potential depends on specific particle
interactions or has a general footing. This transition has been found in
full-atom simulations. Thus far, investigating the in-plane structure of the
EDL for the primitive model (PM) using DFT proved a challenge. We show here
that the use of an appropriate functional predicts the in-plane structure of
EDLs in excellent agreement with molecular dynamics (MD) simulations. This
provides the playground to investigate how the structure factor within a layer
parallel to a charged surface changes as a function of both the applied surface
potential and its separation from the surface. We discuss pitfalls in properly
defining an in-plane structure factor and fully map out the structure of the
EDL within the PM for a wide range of electrostatic electrode potentials.
However, we do not find any signature of a structural crossover and conclude
that the previously reported effect is not fundamental but rather occurs due to
the specific force field of ions used in the simulations.
|
http://arxiv.org/abs/2309.06542v2
|
We compute the three-loop correction to the universal single-soft emission
current for the case of scattering amplitudes with two additional color-charged
partons. We present results valid for QCD and $\mathcal{N}=4$ super-symmetric
Yang-Mills theory. To achieve our results we develop a new integrand expansion
technique for scattering amplitudes in the presence of soft emissions.
Furthermore, we obtain contributions from single final-state parton matrix
elements to the Higgs boson and Drell-Yan production cross section at
next-to-next-to-next-to-next-to leading order (N$^4$LO) in perturbative QCD in
the threshold limit.
|
http://arxiv.org/abs/2309.07884v1
|
Electron cyclotron waves (whistlers) are commonly observed in plasmas near
Earth and the solar wind. In the presence of nonlinear mirror modes, bursts of
whistlers, usually called lion roars, have been observed within low magnetic
field regions associated with these modes. In the intracluster medium (ICM) of
galaxy clusters, the excitation of the mirror instability is expected, but it
is not yet clear whether electron and ion cyclotron waves can also be present
under conditions where gas pressure dominates over magnetic pressure (high
$\beta$). In this work, we perform fully kinetic particle-in-cell (PIC)
simulations of a plasma subject to a continuous amplification of the mean
magnetic field $\textbf{B}(t)$ to study the nonlinear stages of the mirror
instability and the ensuing excitation of whistler and ion cyclotron (IC) waves
under ICM conditions. Once mirror modes reach nonlinear amplitudes, both
whistler and IC waves start to emerge simultaneously, with sub-dominant
amplitudes, propagating in low-$\textbf{B}$ regions, and quasi-parallel to
$\textbf{B}(t)$. We show that the underlying source of excitation is the
pressure anisotropy of electrons and ions trapped in mirror modes with
loss-cone type distributions. We also observe that IC waves play an essential
role in regulating the ion pressure anisotropy at nonlinear stages. We argue
that whistler and IC waves are a concomitant feature at late stages of the
mirror instability even at high-$\beta$, and therefore expected to be present
in astrophysical environments like the ICM. We discuss the implications of our
results for collisionless heating and dissipation of turbulence in the ICM.
|
http://arxiv.org/abs/2309.16751v1
|
For arbitrary varieties of universal algebras, we develop the theory around
the first and second-cohomology groups characterizing extensions realizing
affine datum. Restricted to varieties with a weak-difference term, extensions
realizing affine datum are exactly extensions with abelian kernels. This
recovers many classic examples of extensions with abelian coefficients since
varieties with a weak-difference term give a far-reaching generalization of
algebras like groups with multiple operators; indeed, any variety of algebras
whose congruences form modular lattices. We introduce a notion of action and
its model relation with a set of equations. In varieties with a difference
term, central extensions are characterized by a property of their actions.
Restricting further to a subclass of varieties with a difference term which
still includes groups with multiple operators, we recover a special case of the
representation of extensions with abelian kernels.
|
http://arxiv.org/abs/2309.16989v2
|
We consider two distinct $q$-analogues of the bipartite distance matrix,
namely the $q$-bipartite distance matrix and the exponential distance matrix.
We provide formulae of the inverse for these matrices, which extend the
existing results for the bipartite distance matrix. These investigations lead
us to introduce a $q$-analogue version of the bipartite Laplacian matrix.
|
http://arxiv.org/abs/2309.10320v1
|
In this paper, the nonlinear (orbital) stability of static $180^\circ$ N\'eel
walls in ferromagnetic films, under the reduced wave-type dynamics for the
in-plane magnetization proposed by Capella, Melcher and Otto [CMO07], is
established. It is proved that the spectrum of the linearized operator around
the static N\'eel wall lies in the stable complex half plane with non-positive
real part. This information is used to show that small perturbations of the
static N\'eel wall converge to a translated orbit belonging to the manifold
generated by the static wall.
|
http://arxiv.org/abs/2309.04432v2
|
In Natural Language Processing (NLP), binary classification algorithms are
often evaluated using the F1 score. Because the sample F1 score is an estimate
of the population F1 score, it is not sufficient to report the sample F1 score
without an indication of how accurate it is. Confidence intervals are an
indication of how accurate the sample F1 score is. However, most studies either
do not report them or report them using methods that demonstrate poor
statistical properties. In the present study, I review current analytical
methods (i.e., Clopper-Pearson method and Wald method) to construct confidence
intervals for the population F1 score, propose two new analytical methods
(i.e., Wilson direct method and Wilson indirect method) to do so, and compare
these methods based on their coverage probabilities and interval lengths, as
well as whether these methods suffer from overshoot and degeneracy. Theoretical
results demonstrate that both proposed methods do not suffer from overshoot and
degeneracy. Experimental results suggest that both proposed methods perform
better, as compared to current methods, in terms of coverage probabilities and
interval lengths. I illustrate both current and proposed methods on two
suggestion mining tasks. I discuss the practical implications of these results,
and suggest areas for future research.
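The standard Wilson score interval for a binomial proportion, which underlies the proposed methods, is sketched below; the paper's Wilson direct and indirect variants adapt it to the F1 score and are not reproduced here.

```python
import math

# Standard Wilson score interval for a binomial proportion. The paper's
# "Wilson direct/indirect" methods build on this to cover the F1 score;
# only the underlying proportion interval is sketched here.
def wilson_interval(successes, n, z=1.96):
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Example: 85 correct predictions out of 100.
lo, hi = wilson_interval(85, 100)
print(f"95% Wilson interval: ({lo:.3f}, {hi:.3f})")   # stays inside [0, 1]
```

Unlike the Wald interval, this interval never overshoots the [0, 1] range, which is one of the properties the paper evaluates.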
|
http://arxiv.org/abs/2309.14621v2
|
The swift advancement and widespread availability of foundational Large
Language Models (LLMs), complemented by robust fine-tuning methodologies, have
catalyzed their adaptation for innovative and industrial applications.
Enabling LLMs to recognize and interpret geospatial data, while offering a
linguistic access to vast cartographic datasets, is of significant importance.
OpenStreetMap (OSM) is the most ambitious open-source global initiative
offering detailed urban and rural geographic data, curated by a community of
over 10 million contributors, which constitutes a great potential for LLM
applications. In this study, we demonstrate the proof of concept and details of
the process of fine-tuning a relatively small scale (1B parameters) LLM with a
relatively small artificial dataset curated by a more capable teacher model, in
order to provide a linguistic interface to the OSM data of an arbitrary urban
region. Through this interface, users can inquire about a location's
attributes, covering a wide spectrum of concepts, such as its touristic appeal
or the potential profitability of various businesses in that vicinity. The
study aims to provide an initial guideline for such generative artificial
intelligence (AI) adaptations and demonstrate early signs of useful emerging
abilities in this context even in minimal computational settings. The
embeddings of artificially curated prompts including OSM data are also
investigated in detail, which might be instrumental for potential geospatially
aware urban Retrieval Augmented Generation (RAG) applications.
|
http://arxiv.org/abs/2310.01429v1
|
The search for new physics signals in Higgs precision measurements plays a
pivotal role in the High-Luminosity Large Hadron Collider (HL-LHC) and future
colliders programs. The Higgs properties are expected to be measured with great
experimental precision, implying higher-order perturbative computations of the
electroweak parameters from the theoretical side. In particular, the
renormalized Higgs boson mass parameter in the Standard Model shows significant
variation around the electroweak scale, resulting in a lower-bound theoretical
uncertainty that exceeds future collider expectations. A more stable result
under the renormalization group can be computed from a non-zero external
momentum Higgs self-energy, for which available calculations include 3-loop
corrections in the QCD sector. In this work, we present an additional
contribution by estimating the leading non-QCD 3-loop corrections to the mass
of the Higgs boson in the top-Yukawa sector of order $y_t^6$. The
momentum-dependent Higgs self-energy is computed in the tadpole-free scheme for
the Higgs vacuum expectation value in the Landau gauge, and the explicit
dependence upon the Higgs boson and top quark masses is shown. The obtained
result is expressed in dimensional regularization as a superposition of a set
of master integrals with coefficients that are free of poles in four space-time
dimensions, and the corrections are evaluated numerically by the sector
decomposition method.
|
http://arxiv.org/abs/2301.00076v3
|
Context: With the data releases from the astrometric space mission Gaia, the
exploration of the structure of the Milky Way has developed in unprecedented
detail and unveiled many previously unknown structures in the Galactic disc and
halo. One such feature is the phase spiral where the stars in the Galactic disc
form a spiral density pattern in the $Z-V_Z$ plane. Aims: We aim to
characterize the shape, rotation, amplitude, and metallicity of the phase
spiral in the outer disc of the Milky Way. This will allow us to better
understand which physical processes caused the phase spiral and can give
further clues to the Milky Way's past and the events that contributed to its
current state. Methods: We use Gaia data release 3 (DR3) to get full position
and velocity data on approximately 31.5 million stars, and metallicity for a
subset of them. We then compute the angular momenta of the stars and develop a
model to characterise the phase spiral in terms of amplitude and rotation at
different locations in the disc. Results: We find that the rotation angle of
the phase spiral changes with Galactic azimuth and Galactocentric radius,
making the phase spiral appear to rotate about $3^\circ$ per degree in Galactic
azimuth. Furthermore, we find that the phase spiral in the $2200 - 2400$ kpc km
s$^{-1}$ range of angular momentum is particularly strong compared to the phase
spiral that can be observed in the solar neighbourhood. The metallicity of the
phase spiral appears to match that of the Milky Way disc field stars.
Conclusions: We created a new model capable of fitting several key parameters
of the phase spiral. We have been able to determine the rotation rate of the
phase spiral and found a peak in the phase spiral amplitude which manifests as
a very clear phase spiral when using only stars with similar angular momentum.
|
http://arxiv.org/abs/2303.18040v3
|
Evaluation of QA systems is very challenging and expensive, with the most
reliable approach being human annotations of correctness of answers for
questions. Recent works (AVA, BEM) have shown that transformer LM encoder based
similarity metrics transfer well for QA evaluation, but they are limited by the
usage of a single correct reference answer. We propose a new evaluation metric:
SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference
answers (combining multiple correct and incorrect references) for sentence-form
QA. We evaluate SQuArE on both sentence-level extractive (Answer Selection) and
generative (GenQA) QA systems, across multiple academic and industrial
datasets, and show that it outperforms previous baselines and obtains the
highest correlation with human annotations.
|
http://arxiv.org/abs/2309.12250v1
|
Despite the success of Transformer models in vision and language tasks, they
often learn knowledge from enormous data implicitly and cannot utilize
structured input data directly. On the other hand, structured learning
approaches such as graph neural networks (GNNs) that integrate prior
information can barely compete with Transformer models. In this work, we aim to
benefit from both worlds and propose a novel Multimodal Graph Transformer for
question answering tasks that require performing reasoning across multiple
modalities. We introduce a graph-involved plug-and-play quasi-attention
mechanism to incorporate multimodal graph information, acquired from text and
visual data, to the vanilla self-attention as effective prior. In particular,
we construct the text graph, dense region graph, and semantic graph to generate
adjacency matrices, and then compose them with input vision and language
features to perform downstream reasoning. Such a way of regularizing
self-attention with graph information significantly improves the inferring
ability and helps align features from different modalities. We validate the
effectiveness of Multimodal Graph Transformer over its Transformer baselines on
GQA, VQAv2, and MultiModalQA datasets.
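The masking idea can be illustrated with a minimal numpy sketch of self-attention whose scores are biased by a graph adjacency matrix used as a prior; the paper's quasi-attention mechanism and graph construction are more involved, and all dimensions below are assumptions.

```python
import numpy as np

# Minimal sketch of self-attention modulated by a graph adjacency matrix used
# as a prior. Illustrative only; the paper's quasi-attention is more involved.
rng = np.random.default_rng(0)
n_tokens, d = 5, 8
X = rng.normal(size=(n_tokens, d))                     # token features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

A = np.array([[1, 1, 0, 0, 0],                          # adjacency from, e.g.,
              [1, 1, 1, 0, 0],                          # a text or scene graph
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]], dtype=float)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
bias = np.where(A > 0, 0.0, -1e9)       # graph prior: suppress non-edges
attn = softmax(scores + bias)
out = attn @ (X @ Wv)
print(out.shape)                        # (5, 8)
```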
|
http://arxiv.org/abs/2305.00581v1
|
This paper explicitly models a coarse and noisy quantization in a
communication system empowered by orthogonal time frequency space (OTFS) for
cost and power efficiency. We first point out that, with coarse quantization, the
effective channel is imbalanced and thus no longer able to circularly shift the
transmitted symbols along the delay-Doppler domain. Meanwhile, the effective
channel is non-isotropic, which imposes a significant loss to symbol detection
algorithms like the original approximate message passing (AMP). Although the
algorithm of generalized expectation consistent for signal recovery (GEC-SR)
can mitigate this loss, the complexity in computation is prohibitively high,
mainly due to a dramatic increase in the matrix size of OTFS. In this context,
we propose a low-complexity algorithm that incorporates into the GEC-SR a quick
inversion of quasi-banded matrices, reducing the complexity from a cubic order
to a linear order while keeping the performance at the same level.
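The complexity argument can be illustrated with a toy banded system: solving with a banded solver costs linear time, whereas a generic dense solve is cubic. The tridiagonal matrix below is only a stand-in for the quasi-banded matrices arising in OTFS.

```python
import numpy as np
from scipy.linalg import solve_banded

# Why banded structure matters: solving A x = b for a (quasi-)banded matrix
# costs O(n) instead of the O(n^3) of a generic dense solve. Toy tridiagonal
# example standing in for the quasi-banded matrices in the OTFS setting.
n = 8
main = 4.0 * np.ones(n)
off  = 1.0 * np.ones(n - 1)

A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # dense reference
b = np.arange(1.0, n + 1.0)

# Banded storage: row 0 = superdiagonal, row 1 = main, row 2 = subdiagonal.
ab = np.zeros((3, n))
ab[0, 1:]  = off
ab[1, :]   = main
ab[2, :-1] = off

x_banded = solve_banded((1, 1), ab, b)      # O(n) solve
x_dense  = np.linalg.solve(A, b)            # O(n^3) solve
print(np.allclose(x_banded, x_dense))       # True
```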
|
http://arxiv.org/abs/2309.11759v3
|
Audio deepfake detection (ADD) is the task of detecting spoofing attacks
generated by text-to-speech or voice conversion systems. Spoofing evidence,
which helps to distinguish between spoofed and bona-fide utterances, might
exist either locally or globally in the input features. To capture these, the
Conformer, which consists of Transformers and CNN, possesses a suitable
structure. However, since the Conformer was designed for sequence-to-sequence
tasks, its direct application to ADD tasks may be sub-optimal. To tackle this
limitation, we propose HM-Conformer by adopting two components: (1) a
hierarchical pooling method that progressively reduces the sequence length to
eliminate duplicated information, and (2) a multi-level classification token
aggregation method that utilizes classification tokens to gather information from
different blocks. Owing to these components, HM-Conformer can efficiently
detect spoofing evidence by processing various sequence lengths and aggregating
them. In experimental results on the ASVspoof 2021 Deepfake dataset,
HM-Conformer achieved a 15.71% EER, showing competitive performance compared to
recent systems.
|
http://arxiv.org/abs/2309.08208v1
|
Controllable text generation is a fundamental aspect of natural language
generation, with numerous methods proposed for different constraint types.
However, these approaches often require significant architectural or decoding
modifications, making them challenging to apply to additional constraints or
resolve different constraint combinations. To address this, our paper
introduces Regular Expression Instruction (REI), which utilizes an
instruction-based mechanism to fully exploit regular expressions' advantages to
uniformly model diverse constraints. Specifically, our REI supports all popular
fine-grained controllable generation constraints, i.e., lexical, positional,
and length, as well as their complex combinations, via regular expression-style
instructions. Our method only requires fine-tuning on medium-scale language
models or few-shot, in-context learning on large language models, and requires
no further adjustment when applied to various constraint combinations.
Experiments demonstrate that our straightforward approach yields high success
rates and adaptability to various constraints while maintaining competitiveness
in automatic metrics and outperforming most previous baselines.
|
http://arxiv.org/abs/2309.10447v2
|
We apply Markov Chain Monte Carlo (MCMC) to the problem of parametric galaxy
modeling, estimating posterior distributions of galaxy properties such as
ellipticity and brightness for more than 100,000 images of galaxies taken from
DC2, a simulated telescope survey resembling the upcoming Rubin Observatory
Legacy Survey of Space and Time (LSST). We use a physically informed prior and
apply selection corrections to the likelihood. The resulting posterior samples
enable rigorous probabilistic inference of galaxy model parameters and their
uncertainties. These posteriors are one key ingredient in a fully probabilistic
description of galaxy catalogs, which can ultimately enable a refined Bayesian
estimate of cosmological parameters. We systematically examine the reliability
of the posterior mean as a point estimator of galaxy parameters, and of the
posterior width as a measure of uncertainty, under some common modeling
approximations. We implement the probabilistic modeling and MCMC inference
using the JIF (Joint Image Framework) tool, which we make freely available
online.
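A toy Metropolis-Hastings sampler for a single brightness-like parameter, with a Gaussian likelihood and prior, illustrates the kind of inference described; the JIF tool's galaxy model, priors, and selection corrections are far richer than this sketch.

```python
import numpy as np

# Toy Metropolis-Hastings sampler for one "brightness"-like parameter with a
# Gaussian likelihood and prior. Illustrative only; the JIF tool's galaxy
# model, priors, and selection corrections are far richer than this sketch.
rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=0.5, size=50)      # simulated pixel-like data

def log_post(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2           # broad Gaussian prior
    log_like = -0.5 * np.sum((data - theta) ** 2 / 0.5 ** 2)
    return log_prior + log_like

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.2)             # symmetric proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                  # accept
    samples.append(theta)

samples = np.array(samples[1000:])                    # drop burn-in
print(f"posterior mean={samples.mean():.3f}, std={samples.std():.3f}")
```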
|
http://arxiv.org/abs/2309.10321v1
|
In the realm of biological flow networks, the ability to dynamically adjust
to varying demands is paramount. Drawing inspiration from the remarkable
adaptability of Physarum polycephalum, we present a novel physical mechanism
tailored to optimize flow networks. Central to our approach is the principle
that each network component -- specifically, the tubes -- harnesses locally
available information to collectively minimize a global cost function. Our
findings underscore the scalability of this mechanism, making it feasible for
larger, more complex networks. We construct a comprehensive phase diagram,
pinpointing the specific network parameters under which successful adaptation,
or tuning, is realized. There exists a phase boundary in the phase diagram,
revealing a distinct satisfiability-unsatisfiability (SAT-UNSAT) phase
transition delineating successful and unsuccessful adaptation.
|
http://arxiv.org/abs/2309.16988v2
|
We present a complementarity that addresses relationships among the
parameters in the neutrino and the quark mixing matrix, use it to estimate the
size of the uncertainty among the elements in the matrix and address its
implications for the unitarity of the quark mixing matrix, the Wolfenstein
parameterization, and the tension in the first row. First, we describe how a
complementarity with a phase being introduced as an extra parameter can be held
in the nine independent schemes of parameterizing the matrix introducing a
discrete parameter symmetry within a certain size of uncertainty and how it can
be related to a combination of sine functions. With that, for the first time,
we describe a method that we can use to constrain the size of the uncertainty
associated with the parameters, not the central values, complementing that
among the diagonal elements in the neutrino mixing matrix. Then we do the same
for the quark sector and discuss its implications in relation to the size of
the uncertainty among the elements. Seeing that our estimate is larger than
the one reported by running the global fit in the quark sector, our result
could be an indication that we may need to be cautious when addressing the
tension in the first row of the matrix in the quark sector and when running a
global fit to constrain the size of the uncertainty, where the Wolfenstein
parameterization, which does not guarantee unitarity, is used, as opposed to
the combination of the three rotation matrices. Given the size of the
uncertainty for the individual diagonal elements in the second and the third
rows, our result could also be an indication that we may need to wait until the
size of the uncertainty for the second and the third rows goes down further
before addressing the tension. This could also open up the possibility
of a mixing between the neutrino and the quark sectors.
|
http://arxiv.org/abs/2309.00132v3
|
Making the large data sets collected at the Large Hadron Collider (LHC)
accessible to the world is a considerable challenge because of both the
complexity and the volume of data. This paper presents the Ntuple Wizard, an
application that leverages the existing computing infrastructure available to
the LHCb collaboration in order to enable third-party users to request specific
data. An intuitive web interface allows the discovery of accessible data sets
and guides the user through the process of specifying a configuration-based
request. The application allows for fine-grained control of the level of access
granted to the public.
|
http://arxiv.org/abs/2302.14235v2
|
The generative process of Diffusion Models (DMs) has recently set
state-of-the-art on many AI generation benchmarks. Though the generative
process is traditionally understood as an "iterative denoiser", there is no
universally accepted language to describe it. We introduce a novel perspective
to describe DMs using the mathematical language of memory retrieval from the
field of energy-based Associative Memories (AMs), making efforts to keep our
presentation approachable to newcomers to both of these fields. Unifying these
two fields provides insight that DMs can be seen as a particular kind of AM
where Lyapunov stability guarantees are bypassed by intelligently engineering
the dynamics (i.e., the noise and step size schedules) of the denoising
process. Finally, we present a growing body of evidence that records DMs
exhibiting empirical behavior we would expect from AMs, and conclude by
discussing research opportunities that are revealed by understanding DMs as a
form of energy-based memory.
|
http://arxiv.org/abs/2309.16750v2
|
Despite the recent remarkable improvements in scene text recognition (STR),
the majority of the studies focused mainly on the English language, which only
includes a small number of characters. However, STR models show a large performance
degradation on languages with a large number of characters (e.g., Chinese
and Korean), especially on characters that rarely appear due to the long-tailed
distribution of characters in such languages. To address such an issue, we
conducted an empirical analysis using synthetic datasets with different
character-level distributions (e.g., balanced and long-tailed distributions).
While increasing a substantial number of tail classes without considering the
context helps the model to correctly recognize characters individually,
training with such a synthetic dataset interferes with the model's learning of the
contextual information (i.e., relations among characters), which is also
important for predicting the whole word. Based on this motivation, we propose a
novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1)
context-aware expert learns the contextual representation trained with a
long-tailed dataset composed of common words used in everyday life and 2)
context-free expert focuses on correctly predicting individual characters by
utilizing a dataset with a balanced number of characters. By training two
experts to focus on learning contextual and visual representations,
respectively, we propose a novel confidence ensemble method to compensate for the
limitation of each expert. Through the experiments, we demonstrate that
CAFE-Net improves the STR performance on languages containing a large number
of characters. Moreover, we show that CAFE-Net is easily applicable to various
STR models.
|
http://arxiv.org/abs/2304.08592v1
|
We completely classify the locally finite, infinite graphs with pure mapping
class groups admitting a coarsely bounded generating set. We also study
algebraic properties of the pure mapping class group: We establish a semidirect
product decomposition, compute first integral cohomology, and classify when
they satisfy residual finiteness and the Tits alternative. These results
provide a framework and some initial steps towards quasi-isometric and
algebraic rigidity of these groups.
|
http://arxiv.org/abs/2309.07885v1
|
The domain shift between training and testing data presents a significant
challenge for training generalizable deep learning models. As a consequence,
the performance of models trained with the independent and identically
distributed (i.i.d) assumption deteriorates when deployed in the real world.
This problem is exacerbated in the medical imaging context due to variations in
data acquisition across clinical centers, medical apparatus, and patients.
Domain generalization (DG) aims to address this problem by learning a model
that generalizes well to any unseen target domain. Many domain generalization
techniques were unsuccessful in learning domain-invariant representations due
to the large domain shift. Furthermore, multiple tasks in medical imaging have
not yet been extensively studied in the existing literature from a DG point of
view. In this paper, we introduce a DG method that re-establishes the model
objective function as a maximization of mutual information with a large
pretrained model to the medical imaging field. We revisit the problem of DG in
Diabetic Retinopathy (DR) classification to establish a clear benchmark with a
correct model selection strategy and to achieve robust domain-invariant
representation for an improved generalization. Moreover, we conduct extensive
experiments on public datasets to show that our proposed method consistently
outperforms the previous state-of-the-art by a margin of 5.25% in average
accuracy and a lower standard deviation. Source code available at
https://github.com/BioMedIA-MBZUAI/DGM-DR
|
http://arxiv.org/abs/2309.09670v1
|
Exceptional points (EPs) in open optical systems are rigorously studied using
the resonant-state expansion (RSE). A spherical resonator, specifically a
homogeneous dielectric sphere in a vacuum, perturbed by two point-like defects
which break the spherical symmetry and bring the optical modes to EPs, is used
as a worked example. The RSE is a non-perturbative approach encoding the
information about an open optical system in matrix form in a rigorous way, and
thus offering a suitable tool for studying its EPs. These are simultaneous
degeneracies of the eigenvalues and corresponding eigenfunctions of the system,
which are rigorously described by the RSE and illustrated for perturbed
whispering-gallery modes (WGMs). An exceptional arc, which is a line of
adjacent EPs, is obtained analytically for perturbed dipolar WGMs. Perturbation
of high-quality WGMs with large angular momentum and their EPs are found by
reducing the RSE equation to a two-state problem by means of an orthogonal
transformation of a large RSE matrix. WGM pairs have opposite chirality in
spherically symmetric systems and equal chirality at EPs. This chirality at EPs
can be observed in circular dichroism measurements, as it manifests itself in
a squared-Lorentzian part of the optical spectra, which we demonstrate here
analytically and numerically in the Purcell enhancement factor for the
perturbed dipolar WGMs.
|
http://arxiv.org/abs/2309.12536v3
|
We build a minimal model of dissipative vortex dynamics in two spatial
dimensions, subject to a kinematic constraint: dipole conservation. The
additional conservation law implies anomalously slow decay rates for vortices.
We argue that this model of vortex dynamics is relevant for a broad range of
time scales during a quench into a uniaxial charge density wave state. Our
predictions are consistent with recent experiments on uniaxial charge density
wave formation in $\mathrm{LaTe}_3$.
|
http://arxiv.org/abs/2310.00051v1
|
Exploration into quantum machine learning has grown tremendously in recent
years due to the ability of quantum computers to speed up classical programs.
However, these efforts have yet to solve unsupervised similarity detection
tasks due to the challenge of porting them to run on quantum computers. To
overcome this challenge, we propose SLIQ, the first open-sourced work for
resource-efficient quantum similarity detection networks, built with practical
and effective quantum learning and variance-reducing algorithms.
|
http://arxiv.org/abs/2309.15259v1
|
This work conducts an evaluation of GPT-4V's multimodal capability for
medical image analysis, with a focus on three representative tasks of radiology
report generation, medical visual question answering, and medical visual
grounding. For the evaluation, a set of prompts is designed for each task to
induce the corresponding capability of GPT-4V to produce sufficiently good
outputs. Three evaluation approaches, including quantitative analysis, human
evaluation, and case study, are employed to achieve an in-depth and extensive
evaluation. Our evaluation shows that GPT-4V excels in understanding medical
images and is able to generate high-quality radiology reports and effectively
answer questions about medical images. Meanwhile, it is found that its
performance for medical visual grounding needs to be substantially improved. In
addition, we observe the discrepancy between the evaluation outcome from
quantitative analysis and that from human evaluation. This discrepancy suggests
the limitations of conventional metrics in assessing the performance of large
language models like GPT-4V and the necessity of developing new metrics for
automatic quantitative analysis.
|
http://arxiv.org/abs/2310.20381v5
|
Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due
to their ability to decouple model size from inference efficiency by only
activating a small subset of the model parameters for any given input token. As
such, sparse MoEs have enabled unprecedented scalability, resulting in
tremendous successes across domains such as natural language processing and
computer vision. In this work, we instead explore the use of sparse MoEs to
scale-down Vision Transformers (ViTs) to make them more attractive for
resource-constrained vision applications. To this end, we propose a simplified
and mobile-friendly MoE design where entire images rather than individual
patches are routed to the experts. We also propose a stable MoE training
procedure that uses super-class information to guide the router. We empirically
show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off
between performance and efficiency than the corresponding dense ViTs. For
example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense
counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only
54M FLOPs inference cost, our MoE achieves an improvement of 4.66%.
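The per-image routing idea can be sketched in a few lines: a router picks one expert per image embedding and only that expert is executed. The dimensions, random weights, and tiny MLP experts below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

# Sketch of per-image (rather than per-patch) top-1 expert routing, the core
# idea behind a mobile-friendly MoE. Experts are tiny random MLPs here;
# dimensions and routing details are illustrative, not the paper's design.
rng = np.random.default_rng(0)
n_images, d, n_experts, d_hidden = 4, 16, 3, 32

images = rng.normal(size=(n_images, d))              # per-image embeddings
W_router = rng.normal(size=(d, n_experts))
experts = [(rng.normal(size=(d, d_hidden)), rng.normal(size=(d_hidden, d)))
           for _ in range(n_experts)]

logits = images @ W_router                           # router scores per image
chosen = logits.argmax(axis=1)                       # top-1 expert per image

out = np.empty_like(images)
for i, e in enumerate(chosen):
    W1, W2 = experts[e]
    out[i] = np.maximum(images[i] @ W1, 0.0) @ W2    # ReLU MLP of chosen expert

print("expert assignment per image:", chosen)
```

Because only one expert runs per image, inference cost stays close to that of a single small model while total capacity grows with the number of experts.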
|
http://arxiv.org/abs/2309.04354v1
|
This paper is concerned with identifying linear system dynamics without the
knowledge of individual system trajectories, but from the knowledge of the
system's reachable sets observed at different times. Motivated by a scenario
where the reachable sets are known from partially transparent manufacturer
specifications or observations of the collective behavior of adversarial
agents, we aim to utilize such sets to determine the unknown system's dynamics.
This paper has two contributions. Firstly, we show that the sequence of the
system's reachable sets can be used to uniquely determine the system's dynamics
for asymmetric input sets under some generic assumptions, regardless of the
system's dimensions. We also prove the same property holds up to a sign change
for two-dimensional systems where the input set is symmetric around zero.
Secondly, we present an algorithm to determine these dynamics. We apply and
verify the developed theory and algorithms on an unknown band-pass filter
circuit, provided solely with the unknown system's reachable sets over a finite
observation period.
|
http://arxiv.org/abs/2309.04340v1
|
This is the second of a series of papers in which we investigate the decay
estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform
magnetic field. In our first paper \cite{WZZ}, we studied the
Strichartz estimates for Schr\"odinger equation with one Aharonov-Bohm solenoid
in a uniform magnetic field. The wave equation in this setting becomes more
delicate since a difficulty arises from the square root of the eigenvalue of
the Schr\"odinger operator $H_{\alpha, B_0}$ so that we cannot directly
construct the half-wave propagator. An independent interesting result
concerning the Gaussian upper bounds of the heat kernel is proved by using two
different methods. The first one is based on establishing the Davies-Gaffney
inequality in this setting, and the second one directly constructs
the heat kernel (which efficiently captures the magnetic effects) based on the
Schulman-Sunada formula. As byproducts, we prove optimal bounds for the heat
kernel and show the Bernstein inequality and the square function inequality for
Schr\"odinger operator with one Aharonov-Bohm solenoid in a uniform magnetic
field.
|
http://arxiv.org/abs/2309.07649v1
|
This paper presents a method to learn hand-object interaction prior for
reconstructing a 3D hand-object scene from a single RGB image. The inference as
well as training-data generation for 3D hand-object scene reconstruction is
challenging due to the depth ambiguity of a single image and occlusions by the
hand and object. We turn this challenge into an opportunity by utilizing the
hand shape to constrain the possible relative configuration of the hand and
object geometry. We design a generalizable implicit function, HandNeRF, that
explicitly encodes the correlation of the 3D hand shape features and 2D object
features to predict the hand and object scene geometry. With experiments on
real-world datasets, we show that HandNeRF is able to reconstruct hand-object
scenes of novel grasp configurations more accurately than comparable methods.
Moreover, we demonstrate that object reconstruction from HandNeRF ensures more
accurate execution of downstream tasks, such as grasping and motion planning
for robotic hand-over and manipulation. Homepage:
https://samsunglabs.github.io/HandNeRF-project-page/
|
http://arxiv.org/abs/2309.07891v5
|
Much recent attention has been attracted to operated algebras since they
unify various notions such as the differential algebra and the Rota-Baxter
algebra. An $\Omega$-operated algebra is an (associative) algebra equipped
with a set $\Omega$ of linear operators which might satisfy certain operator
identities such as the Leibniz rule. A free $\Omega$-operated algebra $B$ can
be generated on an algebra $A$ similar to a free algebra generated on a set. If
$A$ has a Gr\"{o}bner-Shirshov basis $G$ and if the linear operators $\Omega$
satisfy a set $\Phi$ of operator identities, it is natural to ask when the
union $G\cup \Phi$ is a Gr\"{o}bner-Shirshov basis of $B$. A previous work
answers this question affirmatively under a mild condition, and thereby obtains
a canonical linear basis of $B$.
In this paper, we answer this question in the general case of multiple linear
operators. As applications we get operated Gr\"{o}bner-Shirshov bases for free
differential Rota-Baxter algebras and free integro-differential algebras over
algebras as well as their linear bases. One of the key technical difficulties
is to introduce new monomial orders for the case of two operators, which might
be of independent interest.
|
http://arxiv.org/abs/2302.14221v3
|
We introduce a general method to determine the large scale non-equilibrium
steady-state properties of one-dimensional multi-species driven diffusive
systems with open boundaries, generalizing thus the max-min current principle
known for systems with a single type of particles. This method is based on the
solution of the Riemann problem of the associated system of conservation laws.
We demonstrate that the effective density of a reservoir depends not only on
the corresponding boundary hopping rates but also on the dynamics of the entire
system, emphasizing the interplay between bulk and reservoirs. We highlight the
role of Riemann variables in establishing the phase diagram of such systems. We
apply our method to three models of multi-species interacting particle systems
and compare the theoretical predictions with numerical simulations.
|
http://arxiv.org/abs/2309.06231v1
|
We propose a novel Bayesian inference framework for distributed
differentially private linear regression. We consider a distributed setting
where multiple parties hold parts of the data and share certain summary
statistics of their portions in privacy-preserving noise. We develop a novel
generative statistical model for privately shared statistics, which exploits a
useful distributional relation between the summary statistics of linear
regression. Bayesian estimation of the regression coefficients is conducted
mainly using Markov chain Monte Carlo algorithms, while we also provide a fast
version to perform Bayesian estimation in one iteration. The proposed methods
have computational advantages over their competitors. We provide numerical
results on both real and simulated data, which demonstrate that the proposed
algorithms provide well-rounded estimation and prediction.
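The distributed setting can be sketched as follows: each party perturbs its own summary statistics (X^T X and X^T y) with Gaussian noise before sharing, and the aggregator combines them into a simple ridge-like point estimate. This is only an illustration under stated assumptions; the paper's generative model for the noisy statistics and its MCMC-based Bayesian estimation are not reproduced.

```python
import numpy as np

# Sketch: parties share Gaussian-noise-perturbed summary statistics
# (X^T X and X^T y); the aggregator forms a simple ridge-like estimate.
# The paper's generative model and MCMC inference are not reproduced here.
rng = np.random.default_rng(0)
d, beta_true, sigma_noise = 3, np.array([1.0, -2.0, 0.5]), 5.0

def party_summaries(n):
    X = rng.normal(size=(n, d))
    y = X @ beta_true + rng.normal(scale=0.3, size=n)
    # Each party perturbs its own summary statistics before sharing.
    S = X.T @ X + rng.normal(scale=sigma_noise, size=(d, d))
    z = X.T @ y + rng.normal(scale=sigma_noise, size=d)
    return S, z

# Aggregate noisy summaries from three parties of different sizes.
S_tot, z_tot = np.zeros((d, d)), np.zeros(d)
for n in (200, 300, 250):
    S, z = party_summaries(n)
    S_tot += S
    z_tot += z

beta_hat = np.linalg.solve(S_tot + 1.0 * np.eye(d), z_tot)   # ridge-like estimate
print("estimate:", np.round(beta_hat, 3), " true:", beta_true)
```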
|
http://arxiv.org/abs/2301.13778v2
|
In this work, we use general relativistic magnetohydrodynamics simulations to
explore the effect of spin orientation on the dynamics of gas in the vicinity
of merging black holes. We present a suite of eight simulations of
unequal-mass, spinning black hole binaries embedded in magnetized clouds of
matter. Each binary evolution covers approximately 15 orbits before the
coalescence. The geometry of the accretion flows in the vicinity of the black
holes is significantly altered by the orientation of the individual spins with
respect to the orbital angular momentum, with the primary black hole dominating
the mass accretion rate $\dot{M}$. We observe quasiperiodic modulations of
$\dot{M}$ in most of the configurations, whose amplitude is dependent on the
orientation of the black hole spins. We find the presence of a relation between
the average amplitude of $\dot{M}$ and the spin precession parameter
$\chi_{\mathrm{p}}$ showing that spin misalignment systematically leads to
stronger modulation, whereas configurations with spins aligned to the orbital
angular momentum damp out the quasiperiodicity. This finding suggests a
possible signature imprinted in the accretion luminosity of precessing binaries
approaching merger and has possible consequences on future multimessenger
observations of massive binary black hole systems.
|
http://arxiv.org/abs/2309.05738v1
|
We introduce the task of automatic human action co-occurrence identification,
i.e., determining whether two human actions can co-occur in the same interval of
time. We create and make publicly available the ACE (Action Co-occurrencE)
dataset, consisting of a large graph of ~12k co-occurring pairs of visual
actions and their corresponding video clips. We describe graph link prediction
models that leverage visual and textual information to automatically infer if
two actions are co-occurring. We show that graphs are particularly well suited
to capture relations between human actions, and the learned graph
representations are effective for our task and capture novel and relevant
information across different data domains. The ACE dataset and the code
introduced in this paper are publicly available at
https://github.com/MichiganNLP/vlog_action_co-occurrence.
|
http://arxiv.org/abs/2309.06219v3
|
Quantum decoherence effects in neutrinos, described by the open quantum
systems formalism, serve as a gateway to explore potential new physics,
including quantum gravity. Previous research extensively investigated these
effects across various neutrino sources, imposing stringent constraints on the
spontaneous loss of coherence. In this study, we demonstrate that even within
the Supernovae environment, where neutrinos are released as incoherent states,
quantum decoherence could influence the flavor equipartition of $3\nu$ mixing.
Additionally, we examine the potential energy dependence of quantum decoherence
parameters ($\Gamma = \Gamma_0 (E/E_0)^n$) with different power laws ($n = 0,
2, 5/2$). Our findings indicate that future-generation detectors (DUNE,
Hyper-K, and JUNO) can significantly constrain quantum decoherence effects
under different scenarios. For a Supernova located 10 kpc away from Earth, DUNE
could potentially establish $3\sigma$ bounds of $\Gamma \leq 6.2 \times
10^{-14}$ eV in the normal mass hierarchy (NH) scenario, while Hyper-K could
impose a $2\sigma$ limit of $\Gamma \leq 3.6 \times 10^{-14}$ eV for the
inverted mass hierarchy (IH) scenario with $n=0$ - assuming no energy exchange
between the neutrino subsystem and non-standard environment ($[H,V_p] = 0$).
These limits become even more restrictive for a closer Supernova. When we relax
the assumption of energy exchange ($[H,V_p] \neq 0$), for a 10 kpc SN, DUNE can
establish a $3\sigma$ limit of $\Gamma_8 \leq 4.2 \times 10^{-28}$ eV for NH,
while Hyper-K could constrain $\Gamma_8 \leq 1.3 \times 10^{-27}$ eV for IH
($n=0$) with $2\sigma$, representing the most stringent bounds reported to
date. Furthermore, we examine the impact of neutrino loss during propagation
for future Supernova detection.
|
http://arxiv.org/abs/2306.17591v2
|
This study proposes a novel planning framework based on a model predictive
control formulation that incorporates signal temporal logic (STL)
specifications for task completion guarantees and robustness quantification.
This marks the first-ever study to apply STL-guided trajectory optimization for
bipedal locomotion push recovery, where the robot experiences unexpected
disturbances. Existing recovery strategies often struggle with complex task
logic reasoning and locomotion robustness evaluation, making them susceptible
to failures caused by inappropriate recovery strategies or insufficient
robustness. To address this issue, the STL-guided framework generates optimal
and safe recovery trajectories that simultaneously satisfy the task
specification and maximize the locomotion robustness. Our framework outperforms
a state-of-the-art locomotion controller in a high-fidelity dynamic simulation,
especially in scenarios involving crossed-leg maneuvers. Furthermore, it
demonstrates versatility in tasks such as locomotion on stepping stones, where
the robot must select from a set of disjointed footholds to maneuver
successfully.
|
http://arxiv.org/abs/2309.13172v1
|
This article focuses on numerical efficiency of projection algorithms for
solving linear optimization problems. The theoretical foundation for this
approach is provided by the basic result that a bounded finite-dimensional linear
optimization problem can be solved by a single projection operation onto the
feasible polyhedron. A further simplification transforms this problem into the
projection of a special point onto a convex polyhedral cone generated
by the inequalities of the original linear optimization problem.
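The single-projection idea can be illustrated by projecting a point onto a polyhedron {x : Ax <= b}, which amounts to a small quadratic program; the generic solver and the toy simplex below are chosen for illustration only and are not the paper's projection algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Illustration of the single-projection idea: projecting a point p onto the
# feasible polyhedron {x : A x <= b} is a small quadratic program. A generic
# solver is used purely for illustration; it is not the paper's algorithm.
A = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])          # the simplex x, y >= 0, x + y <= 1
p = np.array([1.5, 1.5])               # point to be projected

res = minimize(
    fun=lambda x: 0.5 * np.sum((x - p) ** 2),
    x0=np.zeros(2),
    jac=lambda x: x - p,
    constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
    method="SLSQP",
)
print("projection:", np.round(res.x, 4))   # expected near (0.5, 0.5)
```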
|
http://arxiv.org/abs/2309.03361v1
|