Ring Polymer Surface-Hopping (RPSH) has been recently introduced as a
well-tailored method for incorporating nuclear quantum effects (NQEs), such as
zero-point energy and tunneling, into non-adiabatic molecular dynamics
simulations. The practical widespread usage of RPSH demands a comprehensive
benchmarking of different reaction regimes and conditions with equal emphasis
on demonstrating both the pros and cons of the method. Here, we investigate the
fundamental questions related to the conservation of energy and detailed
balance in the context of RPSH. Using Tully's avoided crossing model as well as
a 2-level system coupled to a classical bath undergoing Langevin dynamics, we
probe the critical problem of the proper treatment of the classically forbidden
transitions stemming from the surface hopping algorithm. We show that proper
treatment of these frustrated hops is key to the accurate description of
real-time dynamics as well as reproducing the exact quantum Boltzmann
population.
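The detailed-balance requirement discussed above can be illustrated, independently of RPSH itself, with a minimal sketch: a discrete-time two-level system whose up/down hopping probabilities obey detailed balance relaxes to the Boltzmann population ratio. The rates, energy gap, and temperature below are arbitrary illustrative choices, not values from the paper.

```python
import math
import random

def simulate_two_level(delta_e, kT, steps=200_000, seed=0):
    """Discrete-time Markov chain for a two-level system whose up/down
    hopping probabilities obey detailed balance:
    p_up / p_down = exp(-delta_e / kT)."""
    rng = random.Random(seed)
    p_down = 1.0                          # arbitrary base hop probability
    p_up = math.exp(-delta_e / kT)        # fixed by the detailed-balance ratio
    state, counts = 0, [0, 0]             # 0 = ground, 1 = excited
    for _ in range(steps):
        p_hop = p_up if state == 0 else p_down
        if rng.random() < p_hop:
            state = 1 - state
        counts[state] += 1
    return counts[1] / counts[0]          # long-time population ratio P1/P0

ratio = simulate_two_level(delta_e=1.0, kT=1.0)
```

With delta_e = kT, the long-time population ratio approaches exp(-1) ≈ 0.368; any treatment of hops that breaks the ratio between the two transition probabilities, as an improper handling of frustrated hops would, shifts the equilibrium away from the Boltzmann result.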
|
http://arxiv.org/abs/2305.13320v1
|
Recent advancements in Natural Language Processing (NLP), particularly in
Large Language Models (LLMs), associated with deep learning-based computer
vision techniques, have shown substantial potential for automating a variety of
tasks. One notable model is Visual ChatGPT, which combines ChatGPT's LLM
capabilities with visual computation to enable effective image analysis. The
model's ability to process images based on textual inputs can revolutionize
diverse fields. However, its application in the remote sensing domain remains
unexplored. This is the first paper to examine the potential of Visual ChatGPT,
a cutting-edge LLM founded on the GPT architecture, to tackle the aspects of
image processing related to the remote sensing domain. Among its current
capabilities, Visual ChatGPT can generate textual descriptions of images,
perform canny edge and straight line detection, and conduct image segmentation.
These offer valuable insights into image content and facilitate the
interpretation and extraction of information. By exploring the applicability of
these techniques within publicly available datasets of satellite images, we
demonstrate the current model's limitations in dealing with remote sensing
images, highlighting its challenges and future prospects. Although still in
early development, we believe that the combination of LLMs and visual models
holds a significant potential to transform remote sensing image processing,
creating accessible and practical application opportunities in the field.
|
http://arxiv.org/abs/2304.13009v2
|
This paper presents a novel Sequence-to-Sequence (Seq2Seq) model based on a
transformer-based attention mechanism and temporal pooling for Non-Intrusive
Load Monitoring (NILM) of smart buildings. The paper aims to improve the
accuracy of NILM by using a deep learning-based method. The proposed method
uses a Seq2Seq model with a transformer-based attention mechanism to capture
the long-term dependencies of NILM data. Additionally, temporal pooling is used
to improve the model's accuracy by capturing both the steady-state and
transient behavior of appliances. The paper evaluates the proposed method on a
publicly available dataset and compares the results with other state-of-the-art
NILM techniques. The results demonstrate that the proposed method outperforms
the existing methods in terms of both accuracy and computational efficiency.
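The role of temporal pooling can be sketched independently of the paper's architecture: pooling a power signal at several window sizes yields features in which long windows emphasise steady-state consumption and short windows preserve transients. The following is a toy stand-in, not the proposed Seq2Seq model.

```python
def temporal_pool(signal, window):
    """Average-pool a 1-D signal over non-overlapping windows of the given size."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal) - window + 1, window)]

def multi_scale_features(signal, windows=(2, 4, 8)):
    """Concatenate pooled views at several time scales: short windows keep
    transient detail, long windows emphasise steady-state behavior."""
    feats = []
    for w in windows:
        feats.extend(temporal_pool(signal, w))
    return feats

# A step change (an appliance switching on) survives at every scale:
power = [0.0] * 8 + [100.0] * 8
features = multi_scale_features(power)
```

For this 16-sample signal the three pooled views have lengths 8, 4, and 2, so the concatenated feature list has 14 entries.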
|
http://arxiv.org/abs/2306.05012v1
|
We show that continuous group homomorphisms between unitary groups of unital
C*-algebras induce maps between spaces of continuous real-valued affine
functions on the trace simplices. Under certain $K$-theoretic regularity
conditions, these maps can be seen to commute with the pairing between $K_0$
and traces. If the homomorphism is contractive and sends the unit circle to the
unit circle, the map between spaces of continuous real-valued affine functions
can further be shown to be unital and positive (up to a minus sign).
|
http://arxiv.org/abs/2305.15989v2
|
We theoretically describe the phenomenon of non-adiabatic spin dynamics,
which occurs in a gas cell filled with alkali vapor in the presence of a
strong alternating magnetic field and pump light. A steep increase in spin
polarization occurs when the frequency of the magnetic field equals a certain
value. However, the observed effect relies on a periodic field that consists
of two perpendicular components defined by harmonics with equal amplitudes
and different frequencies. The considered spin effect cannot be explained by
a resonance, because there is no intrinsic Larmor frequency of spin
precession in the absence of a constant magnetic field component. Moreover,
there are clearly visible peaks in the excitation spectrum of spin
polarization, and they are extremely narrow compared to the relaxation rate.
A detailed analysis based on the proposed quantum model explains the effect
through qualitative properties of the non-adiabatic dynamics of atomic spin.
|
http://arxiv.org/abs/2307.12647v2
|
Well-tuned hyperparameters are crucial for obtaining good generalization
behavior in neural networks. They can enforce appropriate inductive biases,
regularize the model and improve performance -- especially in the presence of
limited data. In this work, we propose a simple and efficient way for
optimizing hyperparameters inspired by the marginal likelihood, an optimization
objective that requires no validation data. Our method partitions the training
data and a neural network model into $K$ data shards and parameter partitions,
respectively. Each partition is associated with and optimized only on specific
data shards. Combining these partitions into subnetworks allows us to define
the ``out-of-training-sample" loss of a subnetwork, i.e., the loss on data
shards unseen by the subnetwork, as the objective for hyperparameter
optimization. We demonstrate that we can apply this objective to optimize a
variety of different hyperparameters in a single training run while being
significantly computationally cheaper than alternative methods aiming to
optimize the marginal likelihood for neural networks. Lastly, we also focus on
optimizing hyperparameters in federated learning, where retraining and
cross-validation are particularly challenging.
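The partitioning idea can be made concrete with a toy sketch in which each "parameter partition" is just a per-shard mean predictor — an illustrative stand-in for the paper's subnetworks: the objective is the loss of each partition on the shards it was never trained on.

```python
def oots_loss(data, K):
    """Toy version of the out-of-training-sample objective: split `data`
    into K shards, fit one trivial 'partition' (here, a mean predictor)
    per shard, and score each partition only on the shards it never saw."""
    shards = [data[i::K] for i in range(K)]
    means = [sum(s) / len(s) for s in shards]           # per-shard "training"
    total, count = 0.0, 0
    for k, mu in enumerate(means):
        for j, shard in enumerate(shards):
            if j == k:                                  # skip the shard it was fit on
                continue
            total += sum((x - mu) ** 2 for x in shard)  # squared error on unseen shards
            count += len(shard)
    return total / count

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
loss = oots_loss(data, K=3)
```

Because the score uses only held-out shards, it can drive hyperparameter choices in a single training run without a separate validation set, which is the property the method exploits.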
|
http://arxiv.org/abs/2304.14766v1
|
The recent advances in machine learning in various fields of applications can
be largely attributed to the rise of deep learning (DL) methods and
architectures. Despite being a key technology behind autonomous cars, image
processing, speech recognition, etc., a notorious problem remains the lack of
theoretical understanding of DL and related interpretability and (adversarial)
robustness issues. Understanding the specifics of DL, as compared to, say,
other forms of nonlinear regression methods or statistical learning, is
interesting from a mathematical perspective, but at the same time it is of
crucial importance in practice: treating neural networks as mere black boxes
might be sufficient in certain cases, but many applications require waterproof
performance guarantees and a deeper understanding of what could go wrong and
why it could go wrong. It is probably fair to say that, despite being
mathematically well founded as a method to approximate complicated functions,
DL is mostly still more like modern alchemy that is firmly in the hands of
engineers and computer scientists. Nevertheless, it is evident that certain
specifics of DL that could explain its success in applications demand
systematic mathematical approaches. In this work, we review robustness issues
of DL and particularly bridge concerns and attempts from approximation theory
to statistical learning theory. Further, we review Bayesian Deep Learning as a
means for uncertainty quantification and rigorous explainability.
|
http://arxiv.org/abs/2307.02454v1
|
The velocity Slice Imaging technique has revolutionised electron molecule
interaction studies. Multiple electrostatic lens assemblies are often used in
spectrometers for resolving low kinetic energy fragments. However, in a
crossed-beam experiment with an effusive molecular beam, the extended source of
ion generation due to the presence of the background gas creates artefacts on
the momentum images as we try to magnify them beyond a certain size. Here, we
present a systematic study of this effect on momentum imaging and the solutions
to address this issue by background subtraction with suitable magnification.
Additionally, we demonstrate that a supersonic molecular beam target helps
minimise these artefacts in the image magnification by reducing the background
signal. These systematic findings may bring valuable insight into the
investigation of low kinetic energy release processes involving electron
impact, ion impact, and merged-beam experiments with large interaction volumes
where high magnification is needed.
|
http://arxiv.org/abs/2306.16708v1
|
Let $A$ be the $n$-th Weyl algebra over a field of characteristic zero, and
$\varphi:A\rightarrow A$ an endomorphism with $S = \varphi(A)$. We prove that
if $A$ is finitely generated as a left or right $S$-module, then $S = A$. The
proof involves reduction to large positive characteristics. By holonomicity,
$A$ is always finitely generated as an $S$-bimodule. Moreover, if this bimodule
property could be transferred into a similar property in large positive
characteristics, then we could again conclude that $A=S$. The latter would
imply the Dixmier Conjecture.
|
http://arxiv.org/abs/2308.09384v2
|
We propose two methods to make unsupervised domain adaptation (UDA) more
parameter efficient using adapters, small bottleneck layers interspersed with
every layer of the large-scale pre-trained language model (PLM). The first
method deconstructs UDA into a two-step process: first by adding a domain
adapter to learn domain-invariant information and then by adding a task adapter
that uses domain-invariant information to learn task representations in the
source domain. The second method jointly learns a supervised classifier while
reducing the divergence measure. Compared to strong baselines, our simple
methods perform well in natural language inference (MNLI) and the cross-domain
sentiment classification task. We even outperform unsupervised domain
adaptation methods such as DANN and DSN in sentiment classification, and we are
within 0.85% F1 for the natural language inference task, by fine-tuning only a
fraction of the full model parameters. We release our code at
https://github.com/declare-lab/domadapter
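A bottleneck adapter of the kind described above can be sketched in a few lines: project the hidden state down, apply a nonlinearity, project back up, and add a residual connection so the module starts near the identity. The dimensions and weights below are toy values chosen for illustration, not trained parameters from the paper.

```python
def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def adapter(x, W_down, W_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    h = relu(matvec(W_down, x))                 # d -> bottleneck
    up = matvec(W_up, h)                        # bottleneck -> d
    return [xi + ui for xi, ui in zip(x, up)]   # residual connection

# Toy 4-dim hidden state with a 2-dim bottleneck (illustrative weights).
W_down = [[0.1, 0.0, 0.0, 0.0],
          [0.0, 0.1, 0.0, 0.0]]
W_up = [[0.5, 0.0],
        [0.0, 0.5],
        [0.0, 0.0],
        [0.0, 0.0]]
x = [1.0, -2.0, 3.0, 4.0]
y = adapter(x, W_down, W_up)
```

Only the small down/up matrices would be trained, which is what makes adapter-based adaptation parameter efficient: the large pre-trained model stays frozen.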
|
http://arxiv.org/abs/2302.03194v2
|
This work deals with the non-cutoff Boltzmann equation for all types of
potentials, in both the torus $\mathbf{T}^3$ and in the whole space
$\mathbf{R}^3$, under the incompressible Navier-Stokes scaling. We first
establish the well-posedness and decay of global mild solutions to this
rescaled Boltzmann equation in a perturbative framework, that is for solutions
close to the Maxwellian, obtaining in particular integrated-in-time
regularization estimates. We then combine these estimates with spectral-type
estimates in order to obtain the strong convergence of solutions to the
non-cutoff Boltzmann equation towards the incompressible Navier-Stokes-Fourier
system.
|
http://arxiv.org/abs/2304.06362v3
|
This paper explores variants of the subspace iteration algorithm for
computing approximate invariant subspaces. The standard subspace iteration
approach is revisited and new variants that exploit gradient-type techniques
combined with a Grassmann manifold viewpoint are developed. A gradient method
as well as a nonlinear conjugate gradient technique are described. Convergence
of the gradient-based algorithm is analyzed and a few numerical experiments are
reported, indicating that the proposed algorithms are sometimes superior to
standard algorithms. This includes the Chebyshev-based subspace iteration and
the locally optimal block conjugate gradient method, when compared in terms of
the number of matrix-vector products and computational time, respectively. The new methods,
on the other hand, do not require estimating optimal parameters. An important
contribution of this paper to achieve this good performance is the accurate and
efficient implementation of an exact line search. In addition, new convergence
proofs are presented for the non-accelerated gradient method, including
locally exponential convergence when started in a $\mathcal{O}(\sqrt{\delta})$
neighbourhood of the dominant subspace with spectral gap $\delta$.
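The baseline the paper revisits can be illustrated in its simplest, one-dimensional form: power iteration, i.e., subspace iteration on a single vector. The matrix below is a toy example; convergence is governed by the gap between the two largest eigenvalues.

```python
def power_iteration(A, iters=100):
    """Subspace iteration with a one-dimensional subspace (power iteration):
    repeatedly apply A and renormalise to approach the dominant eigenvector."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # The Rayleigh quotient v^T A v estimates the dominant eigenvalue.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(vi * ai for vi, ai in zip(v, Av))
    return lam, v

A = [[2.0, 1.0],
     [1.0, 2.0]]              # symmetric, eigenvalues 3 and 1
lam, v = power_iteration(A)
```

Block versions iterate on several vectors at once with an orthonormalisation step; the gradient-based variants in the paper instead optimise over the Grassmann manifold of subspaces.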
|
http://arxiv.org/abs/2306.10379v2
|
Federated Learning (FL) has emerged as a significant advancement in the field
of Artificial Intelligence (AI), enabling collaborative model training across
distributed devices while maintaining data privacy. As the importance of FL
increases, addressing trustworthiness issues in its various aspects becomes
crucial. In this survey, we provide an extensive overview of the current state
of Trustworthy FL, exploring existing solutions and well-defined pillars
relevant to Trustworthy FL. Despite the growth in the literature on trustworthy
centralized Machine Learning (ML)/Deep Learning (DL), further efforts are
necessary to identify trustworthiness pillars and evaluation metrics specific
to FL models, as well as to develop solutions for computing trustworthiness
levels. We propose a taxonomy that encompasses three main pillars:
Interpretability, Fairness, and Security & Privacy. Each pillar represents a
dimension of trust, further broken down into different notions. Our survey
covers trustworthiness challenges at every level in FL settings. We present a
comprehensive architecture of Trustworthy FL, addressing the fundamental
principles underlying the concept, and offer an in-depth analysis of trust
assessment mechanisms. In conclusion, we identify key research challenges
related to every aspect of Trustworthy FL and suggest future research
directions. This comprehensive survey serves as a valuable resource for
researchers and practitioners working on the development and implementation of
Trustworthy FL systems, contributing to a more secure and reliable AI
landscape.
|
http://arxiv.org/abs/2305.11537v1
|
In the light-front quark model (LFQM) amenable to the simultaneous study of
both the mass spectroscopy and the wave function related observables, we
examine the decay constants and distribution amplitudes (DAs) up to the
twist-4. The analysis of the heavy pseudoscalar mesons is carried out both in
the $1S$ and $2S$ states. This investigation involves calculating the local and
nonlocal matrix elements $\langle 0 |{\bar q}{\Gamma} q|P \rangle$ using three
distinct current operators ${\Gamma}=(\gamma^\mu\gamma_5,
i\gamma_5,\sigma^{\mu\nu}\gamma_5)$. Considering a general reference frame
where ${\bf P}_\perp\neq 0$ and investigating all available current components,
we examine not only the frame-independence but also the component-independence
of the decay constants. The explicit findings from our study provide
evidence for the equality of the three pseudoscalar meson decay constants
obtained from the three distinct current operators $\Gamma$. The notable
agreement in decay constants is attained by imposing the Bakamjian-Thomas
construction of the LFQM, namely the meson state is constructed by the
noninteracting quark and antiquark representations while the interaction is
added to the mass operator, which provides the self-consistency condition
replacing the physical mass $M$ with the invariant mass $M_0$ for the
noninteracting quark-antiquark representation of the meson state. In addition
to obtaining the process-independent pseudoscalar meson decay constant,
regardless of the choice of current operators $\Gamma$, we further demonstrate
its explicit Lorentz and rotation invariance. In particular, we present the
analysis conducted on the twist-4 DA derived from the minus component of the
axial-vector current. Finally, we discuss the various twist DAs and their
$\xi$-moments associated with the $1S$ and $2S$ heavy pseudoscalar mesons.
|
http://arxiv.org/abs/2306.08536v2
|
The environment-dependent dilaton field is a well-motivated candidate for
dark energy and naturally arises in the strong coupling limit of string theory.
In this article, we present the very first experimental constraints on the
parameters of this model. For this, we employ data obtained from the qBounce
collaboration and the Lunar Laser Ranging (LLR) experiment. Furthermore, we
forecast expected exclusion plots for the Casimir And Non Newtonian force
EXperiment (Cannex) soon to be realised in an improved setup. Finally, we
provide a detailed analysis of the screening mechanism and additional
symmetries of the dilaton field theory.
|
http://arxiv.org/abs/2307.00243v1
|
This work aims at investigating the optical transmission system needed for
such a lightweight sail, taking into account the physical constraints of such
an unprecedented link and focusing on the optimal scheme for optical signal
emission. In particular, the optical signal is distributed to several emitters
on the sail. The light diffraction resulting from the pattern of the emitters
acting coherently determines the characteristics of the whole beam transmitted
by the sail and of the received signal on the Earth. The performance of the
digital communication system using pulse position modulation (PPM) can be
assessed and channel coding schemes are proposed. We are using the paradigm for
which the entire sail communication system is described as a Tree-of-light: the
detectors, CPU, memory and laser transmitter are the central unit, representing
the trunk of the tree. The branches of the tree are the waveguides, directed to
the sail surface. By means of multimode splitters, the signal is further
distributed via the petioles to the emitters, the leaves, realized by grating
couplers (GCs), on which this work is more focused.
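The PPM scheme mentioned above is standard: each k-bit symbol places a single pulse in one of 2^k time slots of a frame. The sketch below shows encoding and decoding over an ideal, noise-free channel; the symbol size is an illustrative choice.

```python
def ppm_encode(bits, k):
    """Pulse position modulation: each k-bit symbol selects one of 2**k slots,
    and the frame carries a single pulse in that slot."""
    assert len(bits) % k == 0
    frames = []
    for i in range(0, len(bits), k):
        slot = int("".join(map(str, bits[i:i + k])), 2)
        frame = [0] * (2 ** k)
        frame[slot] = 1
        frames.append(frame)
    return frames

def ppm_decode(frames, k):
    """Recover the bit stream by locating the pulse in each frame."""
    bits = []
    for frame in frames:
        slot = frame.index(1)
        bits.extend(int(b) for b in format(slot, f"0{k}b"))
    return bits

msg = [1, 0, 1, 1, 0, 0]
frames = ppm_encode(msg, k=2)      # three 4-slot frames, one pulse each
```

PPM suits deep-space optical links because the receiver only needs to detect pulse timing, not amplitude, at the cost of frame length growing exponentially in k; channel coding is then layered on top to protect against missed or misplaced pulses.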
|
http://arxiv.org/abs/2308.01900v1
|
Nakajima's graded quiver varieties naturally appear in the study of bases of
cluster algebras. One particular family of these varieties, namely the
bipartite determinantal varieties, can be defined for any bipartite quiver and
gives a vast generalization of classical determinantal varieties with broad
applications to algebra, geometry, combinatorics, and statistics. The ideals
that define bipartite determinantal varieties are called bipartite
determinantal ideals.
We provide an elementary method of proof showing that the natural generators
of a bipartite determinantal ideal form a Gr\"obner basis, using an
S-polynomial construction method that relies on the Leibniz formula for
determinants. This method is developed from an idea by Narasimhan and
Caniglia--Guccione--Guccione.
As applications, we study the connection between double determinantal ideals
(which are bipartite determinantal ideals of a quiver with two vertices) and
tensors, and provide an interpretation of these ideals within the context of
algebraic statistics.
|
http://arxiv.org/abs/2305.01724v3
|
Micro- and nano-swimmers moving in a fluid solvent confined by structures
that produce entropic barriers are often described by overdamped active
Brownian particle dynamics, where viscous effects are large and inertia plays
no role. However, inertial effects should be considered for confined swimmers
moving in media where viscous effects are no longer dominant. Here, we study
how inertia affects the rectification and diffusion of self-propelled particles
in a two-dimensional asymmetric channel. We show that most of the particles
accumulate at the channel walls as the masses of the particles increase.
Furthermore, the average particle velocity has a maximum as a function of the
mass, indicating that particles with an optimal mass $M^{*}_{\rm op}$ can be
sorted from a mixture with particles of other masses. In particular, we find
that the effective diffusion coefficient exhibits an enhanced diffusion peak as
a function of the mass, which is a signature of the accumulation of most of the
particles at the channel walls. The dependence of $M^{*}_{\rm op}$ on the
rotational diffusion rate, self-propulsion force, aspect ratio of the channel,
and active torque is also determined. The results of this study could stimulate
the development of strategies for controlling the diffusion of self-propelled
particles in entropic ratchet systems.
|
http://arxiv.org/abs/2301.02902v2
|
Estimates suggest that while FRII jets appear to have lifetimes constrained
to hundreds of millions of years, radio galaxies with FRI jets appear to be
longer lived. We illustrate the nature of this time constraint from a model
perspective, showing how theory and data match in a way that suggests
a key difference between active galaxies whose engines are
characterized by accretion onto co-rotating versus counter-rotating black
holes. We calculate a range of timescales for counter-rotating black holes for
a range of accretion rates compatible with theory which we then compare to
data. The validity of these timescales constitutes the most powerful recent
piece of evidence for considering counter-rotation between black holes and
accretion disks in high energy astrophysics.
|
http://arxiv.org/abs/2305.01042v1
|
Information Extraction (IE) is an essential task in Natural Language
Processing. Traditional methods have relied on coarse-grained extraction with
simple instructions. However, with the emergence of Large Language Models
(LLMs), there is a need to adapt IE techniques to leverage the capabilities of
these models. This paper introduces a fine-grained IE benchmark dataset
tailored for LLMs, employing augmented instructions for each information type,
which includes task descriptions, extraction rules, output formats, and
examples. Through extensive evaluations, we observe that encoder-decoder
models, particularly T5 and FLAN-T5, perform well in generalizing to unseen
information types, while ChatGPT exhibits greater adaptability to new task
forms. Our results also indicate that performance is not solely dictated by
model scale, and highlight the significance of architecture, data diversity,
and learning techniques. This work paves the way for a more refined and
versatile utilization of LLMs in Information Extraction.
|
http://arxiv.org/abs/2310.05092v1
|
We introduce a transformation framework that can be utilized to develop
online algorithms with low $\epsilon$-approximate regret in the random-order
model from offline approximation algorithms. We first give a general reduction
theorem that transforms an offline approximation algorithm with low average
sensitivity to an online algorithm with low $\epsilon$-approximate regret. We
then demonstrate that offline approximation algorithms can be transformed into
a low-sensitivity version using a coreset construction method. To showcase the
versatility of our approach, we apply it to various problems, including online
$(k,z)$-clustering, online matrix approximation, and online regression, and
successfully achieve polylogarithmic $\epsilon$-approximate regret for each
problem. Moreover, we show that in all three cases, our algorithm also enjoys
low inconsistency, which may be desired in some online applications.
|
http://arxiv.org/abs/2306.07163v2
|
We suggest a simple Gaussian mixture model for data generation that complies
with Feldman's long tail theory (2020). We demonstrate that a linear classifier
cannot decrease the generalization error below a certain level in the proposed
model, whereas a nonlinear classifier with a memorization capacity can. This
confirms that for long-tailed distributions, rare training examples must be
considered for optimal generalization to new data. Finally, we show that the
performance gap between linear and nonlinear models can be lessened as the tail
becomes shorter in the subpopulation frequency distribution, as confirmed by
experiments on synthetic and real data.
|
http://arxiv.org/abs/2307.10736v2
|
Data poisoning attacks manipulate training data to introduce unexpected
behaviors into machine learning models at training time. For text-to-image
generative models with massive training datasets, current understanding of
poisoning attacks suggests that a successful attack would require injecting
millions of poison samples into their training pipeline. In this paper, we show
that poisoning attacks can be successful on generative models. We observe that
training data per concept can be quite limited in these models, making them
vulnerable to prompt-specific poisoning attacks, which target a model's ability
to respond to individual prompts.
We introduce Nightshade, an optimized prompt-specific poisoning attack where
poison samples look visually identical to benign images with matching text
prompts. Nightshade poison samples are also optimized for potency and can
corrupt a Stable Diffusion SDXL prompt with fewer than 100 poison samples.
Nightshade poison effects "bleed through" to related concepts, and multiple
attacks can be composed together in a single prompt. Surprisingly, we show that a moderate
number of Nightshade attacks can destabilize general features in a
text-to-image generative model, effectively disabling its ability to generate
meaningful images. Finally, we propose the use of Nightshade and similar tools
as a last defense for content creators against web scrapers that ignore
opt-out/do-not-crawl directives, and discuss possible implications for model
trainers and content creators.
|
http://arxiv.org/abs/2310.13828v3
|
We consider the sets of negatively associated (NA) and negatively correlated
(NC) distributions as subsets of the space $\mathcal{M}$ of all probability
distributions on $\mathbb{R}^n$, in terms of their relative topological
structures within the topological space of all measures on a given measurable
space. We prove that the class of NA distributions has a non-empty interior
with respect to the topology of the total variation metric on $\mathcal{M}$. We
show however that this is not the case in the weak topology (i.e. the topology
of convergence in distribution), unless the underlying probability space is
finite. We consider both the convexity and the connectedness of these classes
of probability measures, and also consider the two classes on their (widely
studied) restrictions to the Boolean cube in $\mathbb{R}^n$.
|
http://arxiv.org/abs/2304.09737v1
|
This paper introduces UncertaintyPlayground, a Python library built on
PyTorch and GPyTorch for uncertainty estimation in supervised learning tasks.
The library offers fast training for Gaussian and multi-modal outcome
distributions through Sparse and Variational Gaussian Process Regressions
(SVGPRs) for normally distributed outcomes and Mixed Density Networks (MDN) for
mixed distributions. In addition to model training with various
hyperparameters, UncertaintyPlayground can visualize the prediction intervals
of one or more instances. Because it uses tensor operations, the library can be
trained both on CPU and GPU and offers various PyTorch-specific techniques for
speed optimization. The library contains unit tests for each module and ensures
multi-platform continuous integration with GitHub Workflows (online
integration) and Tox (local integration). Finally, the code is documented with
Google-style docstrings and offers a documentation website created with MkDocs
and MkDocStrings.
|
http://arxiv.org/abs/2310.15281v1
|
What are the best methods of capturing thematic similarity between literary
texts? Knowing the answer to this question would be useful for automatic
clustering of book genres, or any other thematic grouping. This paper compares
a variety of algorithms for unsupervised learning of thematic similarities
between texts, which we call "computational thematics". These algorithms belong
to three steps of analysis: text preprocessing, extraction of text features,
and measuring distances between the lists of features. Each of these steps
includes a variety of options. We test all the possible combinations of these
options: every combination of algorithms is given a task to cluster a corpus of
books belonging to four pre-tagged genres of fiction. This clustering is then
validated against the "ground truth" genre labels. Such comparison of
algorithms allows us to learn the best and the worst combinations for
computational thematic analysis. To illustrate the sharp difference between the
best and the worst methods, we then cluster 5000 random novels from the
HathiTrust corpus of fiction.
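The three-step pipeline — preprocessing, feature extraction, and distance measurement — can be sketched with deliberately minimal choices at each step (lowercased tokens, bag-of-words counts, cosine distance); the paper's contribution is precisely to compare many alternatives for each of these steps.

```python
import math
from collections import Counter

def preprocess(text):
    """Step 1: lowercase and tokenise (a minimal stand-in for lemmatisation,
    stop-word removal, and the other options the paper compares)."""
    return [w for w in text.lower().split() if w.isalpha()]

def features(tokens):
    """Step 2: bag-of-words counts as the feature representation."""
    return Counter(tokens)

def cosine_distance(f1, f2):
    """Step 3: one possible distance between two feature vectors."""
    dot = sum(f1[w] * f2[w] for w in f1)
    n1 = math.sqrt(sum(v * v for v in f1.values()))
    n2 = math.sqrt(sum(v * v for v in f2.values()))
    return 1.0 - dot / (n1 * n2)

a = features(preprocess("the dragon guards the gold"))
b = features(preprocess("the dragon guards the castle"))
c = features(preprocess("stock markets fell sharply today"))
```

Thematically close texts (a, b) end up nearer than unrelated ones (a, c); a clustering algorithm over such pairwise distances then groups books, and the resulting clusters can be validated against ground-truth genre labels.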
|
http://arxiv.org/abs/2305.11251v1
|
Located in Southern Europe, the Drina River Basin is shared between Bosnia
and Herzegovina, Montenegro, and Serbia. The power sectors of the three
countries have an exceptionally high dependence on coal for power generation.
In this paper, we analyse different development pathways for achieving climate
neutrality in these countries and explore the potential of variable renewable
energy (VRE) and its role in power sector decarbonization. We investigate
whether hydro and non-hydro renewables can enable a net-zero transition by 2050
and how VRE might affect the hydropower cascade shared by the three countries.
The Open-Source Energy Modelling System (OSeMOSYS) was used to develop a model
representation of the countries' power sectors. Findings show that the
renewable potential of the countries is a significant 94.4 GW. This potential
is 68% higher than previous assessments have shown. Under an Emission Limit
scenario assuming net zero by 2050, 17% of this VRE potential is utilized to
support the decarbonization of the power sectors. Additional findings show a
limited impact of VRE technologies on total power generation output from the
hydropower cascade. However, increased solar deployment shifts the operation of
the cascade to increased short-term balancing, moving from baseload to more
responsive power generation patterns. Prolonged use of thermal power plants is
observed under scenarios assuming high wholesale electricity prices, leading to
increased emissions. Results from scenarios with low cost of electricity trade
suggest power sector developments that lead to decreased energy security.
|
http://arxiv.org/abs/2305.07433v2
|
Camouflaged object detection (COD) aims to accurately detect objects hidden
in the surrounding environment. However, existing COD methods mainly locate
camouflaged objects in the RGB domain, so their performance has not been fully
exploited in many challenging scenarios. Considering that the features of the
camouflaged object and the background are more discriminative in the frequency
domain, we propose a novel learnable and separable frequency perception
mechanism driven by the semantic hierarchy in the frequency domain. Our entire
network adopts a two-stage model, including a frequency-guided coarse
localization stage and a detail-preserving fine localization stage. With the
multi-level features extracted by the backbone, we design a flexible frequency
perception module based on octave convolution for coarse positioning. Then, we
design a correction fusion module that integrates the high-level features
step by step through prior-guided correction and cross-layer feature channel
association, and finally combine them with the shallow features to achieve the
detailed correction of the camouflaged objects. Compared with the currently
existing models, our proposed method achieves competitive performance in three
popular benchmark datasets both qualitatively and quantitatively.
|
http://arxiv.org/abs/2308.08924v1
|
For decades, Simultaneous Ascending Auction (SAA) has been the most popular
mechanism used for spectrum auctions. It has recently been employed by many
countries for the allocation of 5G licences. Although SAA presents relatively
simple rules, it induces a complex strategic game for which the optimal bidding
strategy is unknown. Considering the fact that sometimes billions of euros are
at stake in an SAA, establishing an efficient bidding strategy is crucial. In
this work, we model the auction as a $n$-player simultaneous move game with
complete information and propose the first efficient bidding algorithm that
tackles simultaneously its four main strategic issues: the $\textit{exposure
problem}$, the $\textit{own price effect}$, $\textit{budget constraints}$ and
the $\textit{eligibility management problem}$. Our solution, called
$SMS^\alpha$, is based on Simultaneous Move Monte Carlo Tree Search (SM-MCTS)
and relies on a new method for the prediction of closing prices. By introducing
a new reward function in $SMS^\alpha$, we give bidders the possibility to
define their own level of risk aversion. Through extensive numerical
experiments on instances of realistic size, we show that $SMS^\alpha$ largely
outperforms state-of-the-art algorithms, notably by achieving higher expected
utility while taking fewer risks.
|
http://arxiv.org/abs/2307.11428v2
|
3D visual grounding involves finding a target object in a 3D scene that
corresponds to a given sentence query. Although many approaches have been
proposed and achieved impressive performance, they all require dense
object-sentence pair annotations in 3D point clouds, which are both
time-consuming and expensive. To address the problem that fine-grained
annotated data is difficult to obtain, we propose to leverage weakly supervised
annotations to learn the 3D visual grounding model, i.e., only coarse
scene-sentence correspondences are used to learn object-sentence links. To
accomplish this, we design a novel semantic matching model that analyzes the
semantic similarity between object proposals and sentences in a coarse-to-fine
manner. Specifically, we first extract object proposals and coarsely select the
top-K candidates based on feature and class similarity matrices. Next, we
reconstruct the masked keywords of the sentence using each candidate one by
one, and the reconstruction accuracy finely reflects the semantic similarity of
each candidate to the query. Additionally, we distill the coarse-to-fine
semantic matching knowledge into a typical two-stage 3D visual grounding model,
which reduces inference costs and improves performance by taking full advantage
of the well-studied structure of the existing architectures. We conduct
extensive experiments on ScanRefer, Nr3D, and Sr3D, which demonstrate the
effectiveness of our proposed method.
|
http://arxiv.org/abs/2307.09267v1
|
In this study, we consider a variant of unlabelled sensing where the
measurements are sparsely permuted, and additionally, a few correspondences are
known. We present an estimator to solve for the unknown vector. We derive a
theoretical upper bound on the $\ell_2$ reconstruction error of the unknown
vector. Through numerical experiments, we demonstrate that the additional known
correspondences result in a significant improvement in the reconstruction
error. Additionally, we compare our estimator with the classical robust
regression estimator and we find that our method outperforms it on the
normalized reconstruction error metric by up to $20\%$ in the high permutation
regimes $(>30\%)$. Lastly, we showcase the practical utility of our framework
on a non-rigid motion estimation problem. We show that using a few manually
annotated points, together with key-point (SIFT-based) descriptor pairs whose
correspondences are unknown or incorrectly known, can improve motion
estimation.
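The setting in this abstract can be made concrete with a toy sketch. This is not the paper's estimator (its form is not given here); it only illustrates why a few known correspondences help: a plain least-squares fit restricted to measurements whose correspondences are known recovers the unknown vector exactly in the noiseless case, regardless of how the remaining measurements are permuted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
y = A @ x_true

# Sparsely permute 30% of the measurements (unknown correspondences).
perm_idx = rng.choice(n, size=30, replace=False)
y_perm = y.copy()
y_perm[perm_idx] = y[rng.permutation(perm_idx)]

# Suppose 20 correspondences outside the permuted set are known to be correct.
known = [i for i in range(n) if i not in set(perm_idx)][:20]

# Naive baseline: least squares on the known correspondences alone.
x_hat, *_ = np.linalg.lstsq(A[known], y_perm[known], rcond=None)
print(np.linalg.norm(x_hat - x_true))
```

The paper's estimator additionally exploits the permuted measurements, which is what drives the reported improvement over robust regression.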
|
http://arxiv.org/abs/2309.01397v1
|
We present a detailed scheme for the analog quantum simulation of Z2 gauge
theories in crystals of trapped ions, which exploits a more efficient hybrid
encoding of the gauge and matter fields using the native internal and motional
degrees of freedom. We introduce a versatile toolbox based on parametric
excitations corresponding to different spin-motion-coupling schemes that induce
a tunneling of the ions' vibrational excitations conditioned on their internal
qubit state. This building block, when implemented with a single trapped ion,
corresponds to a minimal Z2 gauge theory, where the qubit plays the role of the
gauge field on a synthetic link, and the vibrational excitations along
different trap axes mimic the dynamical matter fields on two synthetic sites,
each carrying a Z2 charge. To evaluate their feasibility, we perform numerical
simulations of the state-dependent tunneling using realistic parameters, and
identify the leading sources of error in future experiments. We discuss how to
generalise this minimal case to more complex settings by increasing the number
of ions, moving from a single link to a Z2 plaquette, and to an entire Z2
chain. We present analytical expressions for the gauge-invariant dynamics and
the corresponding confinement, which are benchmarked using matrix product state
simulations.
|
http://arxiv.org/abs/2305.08700v2
|
The Vernier effect has seen extensive application in optical structures,
serving to augment the free spectral range (FSR). A substantial FSR is vital in
a myriad of applications including multiplexers, enabling a broad, clear band
comparable to the C-band to accommodate a maximum number of channels.
Nevertheless, a large FSR often conflicts with bending loss, as it necessitates
a smaller resonator radius, thus increasing the insertion loss in the bending
portion. To facilitate FSR expansion without amplifying bending loss, we
employed cascaded and parallel racetrack resonators and ring resonators of
varying radii that demonstrate the Vernier effect. In this study, we designed,
fabricated, and tested multiple types of racetrack resonators to validate the
Vernier effect and its FSR extension capabilities. Our investigations
substantiate that the Vernier effect, based on cascaded and series-coupled
micro-ring resonator (MRR) sensors, can efficiently mitigate intra-channel
cross-talk at higher data rates. This is achieved by providing larger
input-to-through suppression, thus paving the way for future applications.
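For context, the FSR extension described here follows the standard textbook Vernier formula (not quoted in the abstract): two resonators with slightly detuned free spectral ranges yield an effective FSR of FSR1·FSR2/|FSR1 − FSR2|. A minimal sketch:

```python
def vernier_fsr(fsr1: float, fsr2: float) -> float:
    """Effective free spectral range of two cascaded resonators
    (standard Vernier-effect formula, not taken from the paper)."""
    if fsr1 == fsr2:
        raise ValueError("Identical FSRs give no Vernier extension.")
    return fsr1 * fsr2 / abs(fsr1 - fsr2)

# Two slightly detuned rings, e.g. 0.8 nm and 1.0 nm FSRs, extend to ~4 nm.
print(vernier_fsr(0.8, 1.0))
```

The closer the two FSRs, the larger the extension, which is why small detunings between the cascaded racetrack and ring resonators suffice.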
|
http://arxiv.org/abs/2305.17620v1
|
Transformer-based models, such as BERT and ViT, have achieved
state-of-the-art results across different natural language processing (NLP) and
computer vision (CV) tasks. However, these models are extremely memory
intensive during their fine-tuning process, making them difficult to deploy on
GPUs with limited memory resources. To address this issue, we introduce a new
tool called SlimFit that reduces the memory requirements of these models by
dynamically analyzing their training dynamics and freezing less-contributory
layers during fine-tuning. The layers to freeze are chosen using a runtime
inter-layer scheduling algorithm. SlimFit adopts quantization and pruning for
particular layers to balance the load of dynamic activations and to minimize
the memory footprint of static activations, where static activations refer to
those that cannot be discarded regardless of freezing. This allows SlimFit to
freeze up to 95% of layers and reduce the overall on-device GPU memory usage of
transformer-based models such as ViT and BERT by an average of 2.2x, across
different NLP and CV benchmarks/datasets such as GLUE, SQuAD 2.0, CIFAR-10,
CIFAR-100 and ImageNet with an average degradation of 0.2% in accuracy. For
such NLP and CV tasks, SlimFit can reduce the total on-device memory usage by
up to 3.1x with an accuracy degradation of only up to 0.4%. As a result, while
fine-tuning of ViT on ImageNet and BERT on SQuAD 2.0 with a batch size of 128
requires three and two 32GB GPUs respectively, SlimFit enables their
fine-tuning on a single 32GB GPU without any significant accuracy degradation.
|
http://arxiv.org/abs/2305.18513v1
|
In this paper, we find a natural four dimensional analog of the moderate
deviation results for the capacity of the random walk, which corresponds to
Bass, Chen and Rosen \cite{BCR} concerning the volume of the random walk range
for $d=2$. We find that the deviation statistics of the capacity of the random
walk can be related to the following constant of generalized
Gagliardo-Nirenberg inequalities, \begin{equation*} \label{eq:maxineq} \inf_{f:
\|\nabla f\|_{L^2}<\infty} \frac{\|f\|^{1/2}_{L^2} \|\nabla f\|^{1/2}_{L^2}}{
[\int_{(\mathbb{R}^4)^2} f^2(x) G(x-y) f^2(y) \text{d}x \text{d}y]^{1/4}}.
\end{equation*}
|
http://arxiv.org/abs/2310.07685v3
|
We present the production cross section of heavy quarks $\sigma^{cc}$,
$\sigma^{bb}$ and $\sigma^{tt}$ at the next-to-leading order in the
electron-proton interaction by using the quarks and gluon distribution
functions at the initial scale $Q^{2}_{0}$. To do this, we use a fitted form of
the heavy quark coefficient functions for deep-inelastic lepton-hadron
scattering to obtain the structure functions of heavy quarks. Then, we
calculate the reduced cross section of heavy quarks by using the structure
functions and subsequently present the single differential and the integrated
cross section of heavy quarks at the center-of-mass energies of 319 GeV, 1.3
TeV and 3.5 TeV in the electron-proton collision. The obtained numerical
results of the cross section of the charm and beauty quarks are compared with
the HERA data, which is a combination from the results of the H1 and ZEUS
detectors, and with the predictions from H1PDF, MSTW2008 and MSRT03.
Furthermore, we present the production cross section of the top quark as a direct
prediction from our calculations.
|
http://arxiv.org/abs/2301.00873v2
|
Let $X$ be a cubic threefold, quartic double solid or Gushel--Mukai
threefold, and $\mathcal{K}u(X)\subset \mathrm{D}^b(X)$ be its Kuznetsov
component. We show that a stability condition $\sigma$ on $\mathcal{K}u(X)$ is
Serre-invariant if and only if its homological dimension is at most $2$. As a
corollary, we prove that all Serre-invariant stability conditions on
$\mathcal{K}u(X)$ form a contractible connected component of the stability
manifold.
|
http://arxiv.org/abs/2310.16950v1
|
The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al.,
2023) indicate that computing Nash equilibria in multi-player Markov games is a
computationally hard task. This fact raises the question of whether or not
computational intractability can be circumvented if one focuses on specific
classes of Markov games. One such example is two-player zero-sum Markov games,
in which efficient ways to compute a Nash equilibrium are known. Inspired by
zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of
zero-sum multi-agent Markov games in which there are only pairwise interactions
described by a graph that changes per state. For this class of Markov games, we
show that an $\epsilon$-approximate Nash equilibrium can be found efficiently.
To do so, we generalize the techniques of (Cai et al., 2016), by showing that
the set of coarse-correlated equilibria collapses to the set of Nash
equilibria. Afterwards, it is possible to use any algorithm in the literature
that computes Markovian policies forming approximate coarse-correlated
equilibria to get an approximate Nash equilibrium.
|
http://arxiv.org/abs/2305.14329v2
|
As the frontier of machine learning applications moves further into human
interaction, multiple concerns arise regarding automated decision-making. Two
of the most critical issues are fairness and data privacy. On the one hand, one
must guarantee that automated decisions are not biased against certain groups,
especially those unprotected or marginalized. On the other hand, one must
ensure that the use of personal information fully abides by privacy regulations
and that user identities are kept safe. The balance between privacy, fairness,
and predictive performance is complex. However, despite their potential
societal impact, we still have a poor understanding of the dynamics
between these optimization vectors. In this paper, we study this three-way
tension and how the optimization of each vector impacts others, aiming to
inform the future development of safe applications. In light of claims that
predictive performance and fairness can be jointly optimized, we find this is
only possible at the expense of data privacy. Overall, experimental results
show that one of the vectors will be penalized regardless of which of the three
we optimize. Nonetheless, we find promising avenues for future work in joint
optimization solutions, where smaller trade-offs are observed between the three
vectors.
|
http://arxiv.org/abs/2306.15567v1
|
In modern recommendation systems, the standard pipeline involves training
machine learning models on historical data to predict user behaviors and
improve recommendations continuously. However, these data training loops can
introduce interference in A/B tests, where data generated by control and
treatment algorithms, potentially with different distributions, are combined.
To address these challenges, we introduce a novel approach called weighted
training. This approach entails training a model to predict the probability of
each data point appearing in either the treatment or control data and
subsequently applying weighted losses during model training. We demonstrate
that this approach achieves the least variance among all estimators that do not
cause shifts in the training distributions. Through simulation studies, we
demonstrate the lower bias and variance of our approach compared to other
methods.
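The weighted-training idea sketched in this abstract can be illustrated as follows. The classifier form and the exact inverse-propensity-style weighting below are assumptions for illustration only; the abstract specifies neither. The two steps are: fit a model for the probability that a data point came from the treatment arm, then use those probabilities to weight each point's loss so the effective training distribution is not shifted by the A/B split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features x and an arm indicator z (1 = treatment, 0 = control),
# where the two arms generate slightly different feature distributions.
n = 2000
z = rng.integers(0, 2, size=n)
x = rng.normal(loc=0.5 * z, scale=1.0, size=n)

# Step 1: fit a logistic model for p(z = 1 | x) by gradient descent.
X = np.stack([np.ones(n), x], axis=1)
beta = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta -= X.T @ (p - z.astype(float)) / n

p_hat = 1.0 / (1.0 + np.exp(-X @ beta))

# Step 2 (assumed weighting): up-weight points that look like they belong to
# the other arm, counteracting the distribution shift between arms.
eps = 1e-6
weights = np.where(z == 1,
                   (1.0 - p_hat + eps) / (p_hat + eps),
                   (p_hat + eps) / (1.0 - p_hat + eps))

# Step 3: apply the weights in a weighted loss; here a weighted least-squares
# fit of a toy outcome y = 2x + noise (slope should stay near 2).
y = 2.0 * x + rng.normal(scale=0.1, size=n)
slope = np.sum(weights * x * y) / np.sum(weights * x * x)
print(slope)
```

The paper's contribution is the variance-optimality of this estimator class, which the toy example does not attempt to show.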
|
http://arxiv.org/abs/2310.17496v5
|
The mechanism by which galaxies stop forming stars and get rid of their
interstellar medium (ISM) remains elusive. Here, we study a sample of more than
two thousand elliptical galaxies in which dust emission has been detected. This
is the largest sample of such galaxies ever analysed. We infer the timescale
for removal of dust in these galaxies and investigate its dependency on
physical and environmental properties. We obtain a dust removal timescale in
elliptical galaxies of $\tau$ = 2.26 $\pm$ 0.18 Gyr, corresponding to a
half-life time of 1.57 $\pm$ 0.12 Gyr. This timescale does not depend on
environment, stellar mass or redshift. We observe a departure of dusty
elliptical galaxies from the star formation rate vs. dust mass relation. This
is caused by the star-formation rates declining faster than the dust masses and
indicates that there exists an internal mechanism, which affects star
formation, but leaves the ISM intact. Morphological quenching together with
ionisation or outflows caused by older stellar populations (supernova type Ia
or planetary nebulae) are consistent with these observations.
|
http://arxiv.org/abs/2306.05774v1
|
The L-shell fluorescence yields and the Coster-Kronig factors of ruthenium
(and the corresponding uncertainty) were determined for the first time
experimentally by applying radiometrically calibrated instrumentation of the
Physikalisch-Technische Bundesanstalt. The resulting fluorescence yields
($\omega_{L_3}=0.0459(20)$, $\omega_{L_2}=0.0415(26)$,
$\omega_{L_1}=0.0109(9)$) and the Coster-Kronig factors ($f_{23}=0.177(32)$,
$f_{13}=0.528(90)$, $f_{12}=0.173(73)$) agree reasonably well with parts of the
data from the literature.
|
http://arxiv.org/abs/2303.07965v1
|
In this paper we investigate tensor fluctuations of the metric at the end of
a Higgs inflationary period in the context of a recently introduced complex
geometrical scalar-tensor theory of gravity. In our model the Higgs field has a
geometrical origin and the affine connection is determined by the Palatini's
principle. Additionally, we consider an extra contribution to the
tensor-fluctuations equation coming from the vacuum term in the energy momentum
tensor associated to the Higgs field. The Higgs potential is rescaled by the
non-canonicity function of the kinetic term of the field which is modified by
the symmetry group of the background geometry. We obtain a nearly scale
invariant spectrum and a tensor-to-scalar ratio in agreement with PLANCK 2018
cosmological results.
|
http://arxiv.org/abs/2306.03305v2
|
We use fitness graphs, or directed cube graphs, for analyzing evolutionary
reversibility. The main application is antimicrobial drug resistance.
Reversible drug resistance has been observed both clinically and
experimentally. If drug resistance depends on a single point mutation, then a
possible scenario is that the mutation reverts back to the wild-type codon
after the drug has been discontinued, so that susceptibility is fully restored.
In general, a drug pause does not automatically imply fast elimination of drug
resistance. Moreover, if drug resistance is reversible, the threshold concentration
for reverse evolution may be lower than for forward evolution. For a
theoretical understanding of evolutionary reversibility, including threshold
asymmetries, it is necessary to analyze obstacles in fitness landscapes. We
compare local and global obstacles, obstacles for forward and reverse
evolution, and conjecture that favorable landscapes for forward evolution
correlate with evolution being reversible. Both suboptimal peaks and plateaus
are analyzed with some observations on the impact of redundancy and
dimensionality. Our findings are compared with laboratory studies on
irreversible malarial drug resistance.
|
http://arxiv.org/abs/2307.14550v1
|
Multimodality eye disease screening is crucial in ophthalmology as it
integrates information from diverse sources to complement their respective
performances. However, the existing methods are weak in assessing the
reliability of each unimodality, and directly fusing an unreliable modality may
cause screening errors. To address this issue, we introduce a novel
multimodality evidential fusion pipeline for eye disease screening, EyeMoSt,
which provides a measure of confidence for unimodality and elegantly integrates
the multimodality information from a multi-distribution fusion perspective.
Specifically, our model estimates both local uncertainty for unimodality and
global uncertainty for the fusion modality to produce reliable classification
results. More importantly, the proposed mixture of Student's $t$ distributions
adaptively integrates different modalities to endow the model with heavy-tailed
properties, increasing robustness and reliability. Our experimental findings on
both public and in-house datasets show that our model is more reliable than
current methods. Additionally, EyeMoSt has the potential to serve as a
data quality discriminator, enabling reliable decision-making for multimodality
eye disease screening.
|
http://arxiv.org/abs/2303.09790v4
|
In the framework of collinear factorization and next-to-leading order (NLO)
perturbative QCD, we make predictions for inclusive and diffractive dijet
photoproduction in electron-proton and electron-nucleus scattering in the EIC
kinematics. We establish kinematic ranges in the ${\bar p}_T$, ${\bar \eta}$,
$x_A^{\rm obs}$ and $x_{\gamma}^{\rm obs}$ variables, quantify sensitivity to
small-$x$ nuclear PDFs, and analyze various scenarios of factorization breaking
in the case of diffractive scattering.
|
http://arxiv.org/abs/2303.05182v2
|
In this paper we consider ordinal sums of combinatorial games where each
summand is a number, not necessarily in canonical form. In doing so we give
formulas for the value of an ordinal sum of numbers where the literal form of
the base has certain properties. These formulas include a closed form of the
value of any ordinal sum of numbers where the base is in canonical form. Our
work employs a recent result of Clow which gives a criterion for an ordinal sum
G : K = H : K when G and H do not have the same literal form, as well as
expanding this theory with the introduction of new notation, a novel ruleset,
Teetering Towers, and a novel construction of the canonical forms of numbers in
Teetering Towers. In doing so, we resolve the problem of determining the value
of an ordinal sum of numbers in all but a few cases appearing in Conway's On
Numbers and Games; thus generalizing a number of existing results and
techniques including Berlekamp's sign rule, van Roode's signed binary number
method, and recent work by Carvalho, Huggan, Nowakowski, and Pereira dos
Santos. We conclude with a list of open problems related to our results.
|
http://arxiv.org/abs/2305.16516v1
|
Tokenization is a critical part of modern NLP pipelines. However,
contemporary tokenizers for Large Language Models are based on statistical
analysis of text corpora, without much consideration to the linguistic
features. I propose a linguistically motivated tokenization scheme, MorphPiece,
which is based partly on morphological segmentation of the underlying text. A
GPT-style causal language model trained on this tokenizer (called MorphGPT)
shows comparable or superior performance on a variety of supervised and
unsupervised NLP tasks, compared to the OpenAI GPT-2 model. Specifically, I
evaluated MorphGPT on language modeling tasks, zero-shot performance on the
GLUE Benchmark with various prompt templates, the massive text embedding
benchmark (MTEB) for supervised and unsupervised performance, and lastly
against another morphological tokenization scheme (FLOTA, Hoffmann et al.,
2022), and found that the model trained on MorphPiece outperforms GPT-2 on
most evaluations, at times
with considerable margin, despite being trained for about half the training
iterations.
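To give a flavour of what morphological pre-segmentation means here, the toy below splits off a suffix from a hand-written inventory. This is illustrative only: MorphPiece itself derives morphemes from a proper morphological analysis of the text with a statistical fallback, and the `##` continuation marker is an assumption borrowed from WordPiece-style tokenizers.

```python
# Toy suffix inventory, ordered longest-first so the most specific match wins.
SUFFIXES = ["ization", "ation", "ness", "ing", "ed", "s"]

def morph_split(word):
    """Split off one known suffix; '##' marks a continuation piece."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return [word[:-len(suf)], "##" + suf]
    return [word]

print(morph_split("tokenization"))  # stem + suffix
print(morph_split("cat"))           # no known suffix: kept whole
```

Segmenting along morpheme boundaries keeps linguistically meaningful units together, which is the motivation the abstract gives for departing from purely statistical tokenizers.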
|
http://arxiv.org/abs/2307.07262v2
|
Inspired by recent observations of $T_{c\bar{s}0}(2900)^0$ in the $D_s^+
\pi^-$ invariant mass distribution of $B^0 \to \bar{D}^0 D_s^+ \pi^-$ decay and
$T_{c\bar{s}0}(2900)^{++}$ in the $D_s^+ \pi^+$ invariant mass distribution of
$B^+ \to D^- D_s^+ \pi^+$ decay, we investigate the $T_{c\bar{s}0}(2900)^{++}$
contribution to the $B^+ \to K^+ D^+ D^-$ decay in a molecular scenario, where
we consider $T_{c\bar{s}0}(2900)^{++}$ as a $D^{\ast +} K^{\ast+}$ molecular
state. Our estimations indicate that the fit fraction of
$T_{c\bar{s}0}(2900)^{++}$ in the $B^+ \to K^+ D^+ D^-$ is about $12.5\%$, and
its signal is visible in the $D^+ K^+$ invariant mass distribution. With the
involvement of $T_{c\bar{s}0}(2900)^{++}$, the fit fractions of
$\chi_{c0}(3915)$ and $\chi_{c2}(3930)$ may differ substantially from the ones
obtained by the present amplitude analysis [Phys. Rev. D \textbf{102}, 112003
(2020)], which may shed light on the long-standing puzzle of $\chi_{c0}(3915)$
as a conventional charmonium.
|
http://arxiv.org/abs/2305.09436v1
|
Quantum coherence is a fundamental feature of quantum physics and plays a
significant role in quantum information processing. By generalizing the
resource theory of coherence from von Neumann measurements to positive
operator-valued measures (POVMs), POVM-based coherence measures have been
proposed with respect to the relative entropy of coherence, the $l_1$ norm of
coherence, the robustness of coherence and the Tsallis relative entropy of
coherence. We derive analytically the lower and upper bounds on these
POVM-based coherence measures of an arbitrary superposed pure state in terms of
the POVM-based coherence of the states in superposition. Our results can be
used to estimate the range of quantum coherence of superposed states. Detailed
examples are presented to verify our analytical bounds.
|
http://arxiv.org/abs/2305.06705v1
|
This study compares the National Cybersecurity Strategies (NCSSs) of publicly
available documents of ten nations across Europe (United Kingdom, France,
Lithuania, Estonia, Spain, and Norway), Asia-Pacific (Singapore and Australia),
and the American region (the United States of America and Canada). The study
observed that there is not a unified understanding of the term "Cybersecurity";
however, a common trajectory of the NCSSs shows that the fight against
cybercrime is a joint effort among various stakeholders, hence the need for
strong international cooperation. Using a comparative structure and an NCSS
framework, the research finds similarities in protecting critical assets,
commitment to research and development, and improved national and international
collaboration. The study finds that the lack of a unified underlying
cybersecurity framework leads to a disparity in the structure and contents of
the strategies. The strengths and weaknesses of the NCSSs from the research can
benefit countries planning to develop or update their cybersecurity strategies.
The study gives recommendations that strategy developers can consider when
developing an NCSS.
|
http://arxiv.org/abs/2303.13938v1
|
Given a rational polytope $P \subset \mathbb R^d$, the numerical function
counting lattice points in the integral dilations of $P$ is known to become a
quasi-polynomial, called the Ehrhart quasi-polynomial $\mathrm{ehr}_P$ of $P$.
In this paper we study the following problem: Given a rational $d$-polytope $P
\subset \mathbb R^d$, is there a nice way to know Ehrhart quasi-polynomials of
translated polytopes $P+ \mathbf v$ for all $\mathbf v \in \mathbb Q^d$? We
provide a way to compute such Ehrhart quasi-polynomials using a certain toric
arrangement and lattice point counting functions of translated cones of $P$.
This method allows us to visualize how constituent polynomials of
$\mathrm{ehr}_{P+\mathbf v}$ change in the torus $\mathbb R^d/\mathbb Z^d$. We
also prove that information of $\mathrm{ehr}_{P+\mathbf v}$ for all $\mathbf v
\in \mathbb Q^d$ determines the rational $d$-polytope $P \subset \mathbb R^d$
up to translations by integer vectors, and characterize all rational
$d$-polytopes $P \subset \mathbb R^d$ such that $\mathrm{ehr}_{P+\mathbf v}$ is
symmetric for all $\mathbf v \in \mathbb Q^d$.
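A one-dimensional toy example (not from the paper) makes these objects concrete. For the rational polytope $P = [0, 1/2]$, the lattice-point count of the dilation $tP$ is the quasi-polynomial $\lfloor t/2 \rfloor + 1$ with period 2, and translating $P$ by a rational vector $\mathbf v$ changes the constituent polynomials, which is exactly the dependence the paper studies:

```python
from fractions import Fraction
from math import ceil, floor

def ehr(t, v=Fraction(0)):
    """Lattice points in the dilation t*(P + v) for P = [0, 1/2]."""
    lo, hi = t * v, t * (v + Fraction(1, 2))
    return floor(hi) - ceil(lo) + 1

# ehr_P(t) = floor(t/2) + 1: a quasi-polynomial of period 2.
print([ehr(t) for t in range(1, 7)])
# Translating by v = 1/4 gives a different quasi-polynomial.
print([ehr(t, Fraction(1, 4)) for t in range(1, 7)])
```

Watching how the constituents vary as $v$ moves around the torus $\mathbb R^d/\mathbb Z^d$ is the visualization the paper's toric-arrangement method provides in general dimension.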
|
http://arxiv.org/abs/2307.08151v1
|
We numerically investigate and develop analytic models for both the DC and
pulsed spin-orbit-torque (SOT)-driven response of order parameter in
single-domain Mn$_3$Sn, which is a metallic antiferromagnet with an anti-chiral
120$^\circ$ spin structure. We show that DC currents above a critical threshold
can excite oscillatory dynamics of the order parameter in the gigahertz to
terahertz frequency spectrum. Detailed models of the oscillation frequency
versus input current are developed and found to be in excellent agreement with
the numerical simulations of the dynamics. In the case of pulsed excitation,
the magnetization can be switched from one stable state to any of the other
five stable states in the Kagome plane by tuning the duration or the amplitude
of the current pulse. Precise functional forms of the final switched state
versus the input current are derived, offering crucial insights into the
switching dynamics of Mn$_3$Sn. The readout of the magnetic state can be
carried out via either the anomalous Hall effect, or the recently demonstrated
tunneling magnetoresistance in an all-Mn$_3$Sn junction. We also discuss
possible disturbance of the magnetic order due to heating that may occur if the
sample is subject to large currents. Operating the device in pulsed mode or
using low DC currents reduces the peak temperature rise in the sample due to
Joule heating. Our predictive modeling and simulation results can be used by
both theorists and experimentalists to explore the interplay of SOT and the
order dynamics in Mn$_3$Sn, and to further benchmark the device performance.
|
http://arxiv.org/abs/2305.08728v2
|
NSV 14264 and NSV 14172 are suspected to be variable stars of RR Lyr type
(Brun, 1964). They were observed during three nights in October 2018 with a
25cm diameter telescope. These observations, complemented by ASAS-SN survey
data, lead to the conclusion that these two stars are not RR Lyraes but
constant stars within the precision of the present photometry. The analysis of
GAIA data shows that NSV 14264 is a main-sequence dwarf similar to the Sun,
whereas NSV 14172 is a yellow giant star located in the HR diagram at the
limit between RR Lyraes and CW cepheids; however, it does not pulsate with
significant amplitude.
|
http://arxiv.org/abs/2306.09166v1
|
In this work we introduce a structured signaling game, an extension of the
classical signaling game with a similarity structure between meanings in the
context, along with a variant of the Rational Speech Act (RSA) framework which
we call structured-RSA (sRSA) for pragmatic reasoning in structured domains. We
explore the behavior of the sRSA in the domain of color and show that pragmatic
agents using sRSA on top of semantic representations, derived from the World
Color Survey, attain efficiency very close to the information theoretic limit
after only 1 or 2 levels of recursion. We also explore the interaction between
pragmatic reasoning and learning in a multi-agent reinforcement learning
framework. Our results illustrate that artificial agents using sRSA develop
communication closer to the information theoretic frontier compared to agents
using RSA and just reinforcement learning. We also find that the ambiguity of
the semantic representation increases as the pragmatic agents are allowed to
perform deeper reasoning about each other during learning.
|
http://arxiv.org/abs/2305.10167v1
|
In this paper, we propose a framework for early-stage malware detection and
mitigation by leveraging natural language processing (NLP) techniques and
machine learning algorithms. Our primary contribution is presenting an approach
for predicting the upcoming actions of malware by treating application
programming interface (API) call sequences as natural language inputs and
employing text classification methods, specifically a Bi-LSTM neural network,
to predict the next API call. This enables proactive threat identification and
mitigation, demonstrating the effectiveness of applying NLP principles to API
call sequences. The Bi-LSTM model is evaluated using two datasets.
Additionally, by modeling consecutive API calls as 2-gram and
3-gram strings, we extract new features to be further processed using a
Bagging-XGBoost algorithm, effectively predicting malware presence at its early
stages. The accuracy of the proposed framework is evaluated by simulations.
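The n-gram feature-extraction step described above can be sketched as follows. The trace contents, labels, and the `->` joiner are illustrative assumptions; in the paper these counts feed a Bagging-XGBoost classifier, and a separate Bi-LSTM predicts the next API call.

```python
from collections import Counter
from itertools import chain

# Hypothetical API-call traces; names and labels are illustrative only.
traces = {
    "benign":  ["CreateFile", "ReadFile", "ReadFile", "CloseHandle"],
    "malware": ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory",
                "CreateRemoteThread"],
}

def ngrams(trace, n):
    """Model n consecutive API calls as a single string feature."""
    return ["->".join(trace[i:i + n]) for i in range(len(trace) - n + 1)]

# Combined 2-gram and 3-gram counts per trace, ready for a classifier.
features = {name: Counter(chain(ngrams(t, 2), ngrams(t, 3)))
            for name, t in traces.items()}
print(features["malware"])
```

Treating call sequences as strings is what lets standard text-classification machinery operate on them, which is the NLP framing the paper argues for.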
|
http://arxiv.org/abs/2306.06255v1
|
We determine the contribution of long-range pion interactions to the
$X(3872)$ dynamics, assuming it is a loosely bound $D^0 \bar{D}^{*0}$ molecule.
Our result is based on the distorted wave Born approximation in
non-relativistic quantum mechanics. Despite their long-range nature, we find
that pion interactions cannot produce a large and negative effective range.
Nonetheless, they introduce imaginary parts. In particular, they contribute to
the total decay width of the $X(3872)$ with a term associated with, but not
precisely corresponding to, the $D^*$ width. Our approach can also be applied
to the recently discovered $T_{cc}^+$ states.
|
http://arxiv.org/abs/2307.11400v2
|
The extraordinary capabilities of large language models (LLMs) such as
ChatGPT and GPT-4 are in part unleashed by aligning them with reward models
that are trained on human preferences, which are often represented as rankings
of responses to prompts. In this paper, we document the phenomenon of
\textit{reward collapse}, an empirical observation where the prevailing
ranking-based approach results in an \textit{identical} reward distribution
\textit{regardless} of the prompts during the terminal phase of training. This
outcome is undesirable as open-ended prompts like ``write a short story about
your best friend'' should yield a continuous range of rewards for their
completions, while specific prompts like ``what is the capital of New Zealand''
should generate either high or low rewards. Our theoretical investigation
reveals that reward collapse is primarily due to the insufficiency of the
ranking-based objective function to incorporate prompt-related information
during optimization. This insight allows us to derive closed-form expressions
for the reward distribution associated with a set of utility functions in an
asymptotic regime. To overcome reward collapse, we introduce a prompt-aware
optimization scheme that provably admits a prompt-dependent reward distribution
within the interpolating regime. Our experimental results suggest that our
proposed prompt-aware utility functions significantly alleviate reward collapse
during the training of reward models.
|
http://arxiv.org/abs/2305.17608v1
|
Traffic scene perception in computer vision is a critically important task to
achieve intelligent cities. To date, most existing datasets focus on autonomous
driving scenes. We observe that the models trained on those driving datasets
often yield unsatisfactory results on traffic monitoring scenes. However,
little effort has been put into improving the traffic monitoring scene
understanding, mainly due to the lack of specific datasets. To fill this gap,
we introduce a specialized traffic monitoring dataset, termed TSP6K, containing
images from the traffic monitoring scenario, with high-quality pixel-level and
instance-level annotations. The TSP6K dataset captures more crowded traffic
scenes with several times more traffic participants than the existing driving
scenes. We perform a detailed analysis of the dataset and comprehensively
evaluate previous popular scene parsing methods, instance segmentation methods
and unsupervised domain adaption methods. Furthermore, considering the vast
difference in instance sizes, we propose a detail refining decoder for scene
parsing, which recovers the details of different semantic regions in traffic
scenes owing to the proposed TSP6K dataset. Experiments show its effectiveness
in parsing the traffic monitoring scenes. Code and dataset are available at
https://github.com/PengtaoJiang/TSP6K.
|
http://arxiv.org/abs/2303.02835v2
|
Path planning is a basic capability of autonomous mobile robots. Former
approaches in path planning exploit only the given geometric information from
the environment without leveraging the inherent semantics within the
environment. The recently presented S-Graphs approach constructs 3D situational
graphs incorporating geometric, semantic, and relational aspects between the
elements to improve the overall scene understanding and localization of the
robot. However, these works do not exploit the underlying semantic graphs to
improve path planning for mobile robots. To that end, in this paper, we present
S-Nav, a novel semantic-geometric path planner for mobile robots. It leverages S-Graphs
to enable fast and robust hierarchical high-level planning in complex indoor
environments. The hierarchical architecture of S-Nav adds a novel semantic
search on top of a traditional geometric planner as well as precise map
reconstruction from S-Graphs to improve planning speed, robustness, and path
quality. We demonstrate improved results of S-Nav in a synthetic environment.
|
http://arxiv.org/abs/2307.01613v1
|
Hematoxylin and Eosin (H&E) staining is a widely used sample preparation
procedure for enhancing the saturation of tissue sections and the contrast
between nuclei and cytoplasm in histology images for medical diagnostics.
However, various factors, such as the differences in the reagents used, result
in high variability in the colors of the stains actually recorded. This
variability poses a challenge in achieving generalization for machine-learning
based computer-aided diagnostic tools. To desensitize the learned models to
stain variations, we propose the Generative Stain Augmentation Network (G-SAN)
-- a GAN-based framework that augments a collection of cell images with
simulated yet realistic stain variations. At its core, G-SAN uses a novel and
highly computationally efficient Laplacian Pyramid (LP) based generator
architecture, that is capable of disentangling stain from cell morphology.
Through the tasks of patch classification and nucleus segmentation, we show that
using G-SAN-augmented training data provides on average 15.7% improvement in F1
score and 7.3% improvement in panoptic quality, respectively. Our code is
available at https://github.com/lifangda01/GSAN-Demo.
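The Laplacian pyramid at the core of the G-SAN generator can be illustrated with a generic decomposition. The sketch below is not the paper's learned architecture: it uses simple nearest-neighbor resampling as a stand-in for blur-and-subsample, but it shows how an image splits into band-pass residuals plus a coarse low-pass level and reconstructs exactly.

```python
import numpy as np

def down(img):
    # Coarsen by dropping every other row/column (stand-in for blur+subsample).
    return img[::2, ::2]

def up(img, shape):
    # Nearest-neighbor upsampling back to a target shape.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def lp_decompose(img, levels=3):
    # Each level stores the detail lost by downsampling; the final entry is
    # the coarse low-pass image.
    pyramid, cur = [], img
    for _ in range(levels - 1):
        small = down(cur)
        pyramid.append(cur - up(small, cur.shape))
        cur = small
    pyramid.append(cur)
    return pyramid

def lp_reconstruct(pyramid):
    # Invert the decomposition: upsample the coarse level and add back details.
    cur = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        cur = up(cur, lap.shape) + lap
    return cur

rng = np.random.default_rng(0)
img = rng.random((16, 16))
pyr = lp_decompose(img)
print(np.allclose(lp_reconstruct(pyr), img))  # True: the pyramid is invertible
```

The appeal of this representation for stain augmentation is that coarse, low-frequency levels (where stain color dominates) can be modified while fine structural detail is carried by the residual levels.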
|
http://arxiv.org/abs/2305.14301v2
|
The global need for effective disease diagnosis remains substantial, given
the complexities of various disease mechanisms and diverse patient symptoms. To
tackle these challenges, researchers, physicians, and patients are turning to
machine learning (ML), an artificial intelligence (AI) discipline, to develop
solutions. By leveraging sophisticated ML and AI methods, healthcare
stakeholders gain enhanced diagnostic and treatment capabilities. However,
there is a scarcity of research focused on ML algorithms for enhancing
accuracy and computational efficiency. This research investigates the capacity
of machine learning algorithms to improve the transmission of heart rate data
in time series healthcare metrics, concentrating particularly on optimizing
accuracy and efficiency. By exploring various ML algorithms used in healthcare
applications, the review presents the latest trends and approaches in ML-based
disease diagnosis (MLBDD). The factors under consideration include the
algorithm utilized, the types of diseases targeted, the data types employed,
the applications, and the evaluation metrics. This review aims to shed light on
the prospects of ML in healthcare, particularly in disease diagnosis. By
analyzing the current literature, the study provides insights into
state-of-the-art methodologies and their performance metrics.
|
http://arxiv.org/abs/2310.16978v1
|
Under the assumption that jets explode all core-collapse supernovae (CCSNe), I
classify 14 CCSN remnants (CCSNRs) into five groups according to their
morphology as shaped by jets, and attribute the classes to the specific angular
momentum of the pre-collapse core. Point-symmetry (1 CCSNR): According to the
jittering jets explosion mechanism (JJEM), when the pre-collapse core rotates
very slowly, the newly born neutron star (NS) launches tens of jet-pairs in all
directions. The last several jet-pairs might leave an imprint of several pairs
of ears, i.e., a point-symmetric morphology. One pair of ears (8 CCSNRs): More
rapidly rotating cores might force the last pair of jets to be long-lived and
shape one pair of jet-inflated ears that dominate the morphology. S-shaped (1
CCSNR): The accretion disk might precess, leading to an S-shaped morphology.
Barrel-shaped (3 CCSNRs): Even more rapidly rotating pre-collapse cores might
result in a final energetic pair of jets that clear the region along the axis
of the pre-collapse core rotation and form a barrel-shaped morphology.
Elongated (1 CCSNR): Very rapidly rotating pre-collapse cores force all jets to
be along the same axis such that the jets are inefficient in expelling mass
from the equatorial plane and the long-lasting accretion process turns the NS
into a black hole (BH). The two new results of this study are the
classification of CCSNRs into five classes based on jet-shaped morphological
features, and the attribution of the morphological classes mainly to the
pre-collapse core rotation in the frame of the JJEM.
|
http://arxiv.org/abs/2307.15666v3
|
Quantum coherence is a crucial prerequisite for quantum technologies.
Therefore, the robust generation, as autonomous as possible, of quantum
coherence remains the essential problem for developing this field. We consider
a method of synthesizing and multiplexing quantum coherence from spin systems
that are coupled only to bosonic baths, without any direct drives. Previous
studies in this field have demonstrated that a back-action of the bath on the
spin subsystem is important for generating coherence; however, it
simultaneously places significant limits on the coherence that can be
generated. We propose a viable approach with the bosonic bath that overcomes
these limits by avoiding the destructive effect of the back-action processes.
Using this approach, we
suggest an advanced synthesis of the quantum coherence non-perturbatively in
the spin-boson coupling parameters of multiple bosonic baths to increase and
multiplex it for upcoming proof-of-principle experiments.
|
http://arxiv.org/abs/2303.07795v3
|
A number of arguments at the interplay of general relativity and quantum
theory suggest an operational limit to spatial resolution, conventionally
modelled as a generalized uncertainty principle (GUP). Recently, it has been
demonstrated that the dynamics postulated as a part of these models are only
loosely related to the existence of the minimal-length scale. In this paper, we
intend to make a more informed choice on the Hamiltonian by demanding, among
other properties, that the model be invariant under (possibly) deformed
Galilean transformations in one dimension. In this vein, we study a
two-particle system with general interaction potential under the condition that
the composition as well as the action of Galilean boosts on wave numbers be
deformed so as to comply with the cut-off. We find that the customary
GUP-Hamiltonian does not allow for invariance under (any kind of) generalised
Galilean transformations. Those Hamiltonians which allow for a deformed
relativity principle have to be related to the ordinary Galilean ones by virtue
of a momentum-space diffeomorphism, i.e. a canonical transformation. Far from
being trivial, the resulting dynamics is deformed, as we demonstrate with the
example of the harmonic interaction.
|
http://arxiv.org/abs/2307.12109v1
|
We describe a first measurement of the radiation from a $^{\bf 178m}$Hf
sample to search for dark matter. The $\gamma$ flux from this sample, possessed
by Los Alamos National Laboratory nuclear chemistry, was measured with a Ge
detector at a distance of 4 ft due to its high activity. We search for
$\gamma$s that cannot arise from the radioactive decay of $^{\bf 178m}$Hf, but
might arise from the production of a nuclear state due to the inelastic
scattering with dark matter. The limits obtained on this $\gamma$ flux are then
translated into constraints on the parameter space of inelastic dark matter.
Finally, we describe the potential reach of future studies with $^{\bf
178m}$Hf.
|
http://arxiv.org/abs/2306.04442v1
|
We describe the Gerstenhaber bracket structure on Hochschild cohomology of
Koszul quiver algebras in terms of homotopy lifting maps. There is a projective
bimodule resolution of Koszul quiver algebras that admits a comultiplicative
structure. Introducing new scalars, we describe homotopy lifting maps
associated to Hochschild cocycles using the comultiplicative structure. We show
that the scalars can be described by some recurrence relations and we give
several examples where these scalars appear in the literature. In particular,
for a member of a family of quiver algebras, we describe Hochschild 2-cocycles
and their associated homotopy lifting maps and determine the Maurer-Cartan
elements of the quiver algebra in two ways: (i) by the use of homotopy lifting
maps and (ii) by the use of a combinatorial star product that arises from the
deformation of algebras using reduction systems.
|
http://arxiv.org/abs/2308.12954v1
|
We present analytic expressions for the density of states and its consistent
derivation for the two-dimensional Qi-Wu-Zhang (QWZ) Hamiltonian, a generic
model for the Chern topological insulators of class A. This density of states
is expressed in terms of elliptical integrals. We discuss and plot special
cases of the dispersion relations and the corresponding densities of states.
Spectral moments are also presented. The exact formulae ought to be useful in
determining physical properties of the non-interacting Chern insulators and
within the dynamical mean-field theory for interacting fermions with the QWZ
Hamiltonian in the non-interacting limit.
|
http://arxiv.org/abs/2308.03681v2
|
Quantum private information retrieval (QPIR) for quantum messages is a
quantum communication task, in which a user retrieves one of the multiple
quantum states from the server without revealing which state is retrieved. In
the one-server setting, we find an exponential gap in the communication
complexities between the presence and absence of prior entanglement in this
problem. To establish this result, as the first step, we
prove that the trivial solution of downloading all messages is optimal under
QPIR for quantum messages, which is a similar result to that of classical PIR
but different from QPIR for classical messages. As the second step, we propose
an efficient one-server one-round QPIR protocol with prior entanglement by
constructing a reduction from a QPIR protocol for classical messages to a QPIR
protocol for quantum messages in the presence of prior entanglement.
|
http://arxiv.org/abs/2304.05125v1
|
Few-shot text classification systems have impressive capabilities but are
infeasible to deploy and use reliably due to their dependence on prompting and
billion-parameter language models. SetFit (Tunstall et al., 2022) is a recent,
practical approach that fine-tunes a Sentence Transformer under a contrastive
learning paradigm and achieves similar results to more unwieldy systems.
Inexpensive text classification is important for addressing the problem of
domain drift in all classification tasks, and especially in detecting harmful
content, which plagues social media platforms. Here, we propose Like a Good
Nearest Neighbor (LaGoNN), a modification to SetFit that introduces no
learnable parameters but augments the input text with information from its
nearest neighbor in the training data (for example, that neighbor's label and
text), making novel data appear similar to an instance on which the model was
optimized. LaGoNN is
effective at flagging undesirable content and text classification, and improves
the performance of SetFit. To demonstrate the value of LaGoNN, we conduct a
thorough study of text classification systems in the context of content
moderation under four label distributions, and in general and multilingual
classification settings.
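The nearest-neighbor augmentation idea can be sketched as follows. This is my simplification: it uses TF-IDF features in place of the Sentence Transformer embeddings SetFit and LaGoNN actually use, and the `[SEP]`-style formatting and tiny training set are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy labeled training data (placeholder for a real few-shot dataset).
train_texts = ["great movie", "terrible film", "loved it", "awful acting"]
train_labels = ["pos", "neg", "pos", "neg"]

vec = TfidfVectorizer().fit(train_texts)
nn = NearestNeighbors(n_neighbors=1).fit(vec.transform(train_texts))

def lagonn_augment(text: str) -> str:
    # Append the nearest training neighbor's label and text to the input,
    # so novel data resembles an instance the model was optimized on.
    _, idx = nn.kneighbors(vec.transform([text]))
    i = int(idx[0, 0])
    return f"{text} [SEP] {train_labels[i]}: {train_texts[i]}"

print(lagonn_augment("fantastic movie"))
```

The augmented string, rather than the raw input, would then be fed to the fine-tuned SetFit classifier; no new learnable parameters are introduced.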
|
http://arxiv.org/abs/2302.08957v3
|
Inverse optimal control can be used to characterize behavior in sequential
decision-making tasks. Most existing work, however, is limited to fully
observable or linear systems, or requires the action signals to be known. Here,
we introduce a probabilistic approach to inverse optimal control for partially
observable stochastic non-linear systems with unobserved action signals, which
unifies previous approaches to inverse optimal control with maximum causal
entropy formulations. Using an explicit model of the noise characteristics of
the sensory and motor systems of the agent in conjunction with local
linearization techniques, we derive an approximate likelihood function for the
model parameters, which can be computed within a single forward pass. We
present quantitative evaluations on stochastic and partially observable
versions of two classic control tasks and two human behavioral tasks.
Importantly, we show that our method can disentangle perceptual factors and
behavioral costs despite the fact that epistemic and pragmatic actions are
intertwined in sequential decision-making under uncertainty, such as in active
sensing and active learning. The proposed method has broad applicability,
ranging from imitation learning to sensorimotor neuroscience.
|
http://arxiv.org/abs/2303.16698v2
|
Graph Transformer (GT) recently has emerged as a new paradigm of graph
learning algorithms, outperforming the previously popular Message Passing
Neural Network (MPNN) on multiple benchmarks. Previous work (Kim et al., 2022)
shows that with proper position embedding, GT can approximate MPNN arbitrarily
well, implying that GT is at least as powerful as MPNN. In this paper, we study
the inverse connection and show that MPNN with virtual node (VN), a commonly
used heuristic with little theoretical understanding, is powerful enough to
arbitrarily approximate the self-attention layer of GT.
In particular, we first show that if we consider one type of linear
transformer, the so-called Performer/Linear Transformer (Choromanski et al.,
2020; Katharopoulos et al., 2020), then MPNN + VN with only O(1) depth and O(1)
width can approximate a self-attention layer in Performer/Linear Transformer.
Next, via a connection between MPNN + VN and DeepSets, we prove the MPNN + VN
with O(n^d) width and O(1) depth can approximate the self-attention layer
arbitrarily well, where d is the input feature dimension. Lastly, under some
assumptions, we provide an explicit construction of MPNN + VN with O(1) width
and O(n) depth approximating the self-attention layer in GT arbitrarily well.
On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly
strong baseline, outperforming GT on the recently proposed Long Range Graph
Benchmark (LRGB) dataset, 2) our MPNN + VN improves over early implementation
on a wide range of OGB datasets and 3) MPNN + VN outperforms Linear Transformer
and MPNN on the climate modeling task.
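The virtual-node mechanism can be sketched with a minimal numpy layer (my construction, not the paper's exact parametrization): the VN pools all node features and broadcasts a global message back, so every node receives global context in O(n) messages, much like a linear attention readout.

```python
import numpy as np

def mpnn_vn_layer(X, A, W_local, W_global):
    # X: (n, d) node features; A: (n, n) adjacency matrix.
    # Local message passing over graph edges.
    local = A @ X @ W_local
    # Virtual node: mean-pool all node features, then broadcast the global
    # message back to every node.
    vn = X.mean(axis=0, keepdims=True) @ W_global
    return np.tanh(local + vn)

rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.4).astype(float)
X = rng.normal(size=(n, d))
out = mpnn_vn_layer(X, A, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(out.shape)  # (5, 4): one updated feature vector per node
```

The paper's approximation results concern how wide or deep such VN-augmented layers must be to emulate a self-attention layer; this sketch only shows the information flow.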
|
http://arxiv.org/abs/2301.11956v4
|
Automatic Speech Recognition (ASR) has attracted profound research interest.
Recent breakthroughs have given ASR systems new prospects, such as faithfully
transcribing spoken language, a pivotal advancement in building conversational
agents. However, accurately discerning context-dependent words and phrases
remains a pressing challenge. In this work, we
propose a novel approach for enhancing contextual recognition within ASR
systems via semantic lattice processing, leveraging the power of deep learning
models to deliver accurate transcriptions across a wide variety of
vocabularies and speaking styles. Our solution uses Hidden Markov
Models and Gaussian Mixture Models (HMM-GMM) along with Deep Neural Networks
(DNN) models integrating both language and acoustic modeling for better
accuracy. We further incorporate a transformer-based model to rescore the word
lattice, achieving a palpable reduction in Word Error Rate (WER). We demonstrate the effectiveness
of our proposed framework on the LibriSpeech dataset with empirical analyses.
|
http://arxiv.org/abs/2310.09680v4
|
We propose and study a one-dimensional (1D) model consisting of two lanes
with open boundaries. One lane executes symmetric exclusion (SEP) dynamics,
i.e., diffusion, while the other executes driven, unidirectional, totally
asymmetric exclusion (TASEP) dynamics; the two lanes are mutually coupled
through particle exchanges in the bulk. We elucidate the generic nonuniform
steady states in this model. We show that in a parameter regime where hopping
along the TASEP lane, diffusion along the SEP lane, and the
exchange of particles between the TASEP and SEP lanes compete, the SEP
diffusivity $D$ appears as a tuning parameter for both the SEP and TASEP
densities for a given exchange rate in the nonequilibrium steady states of this
model. Indeed, $D$ can be tuned to achieve phase coexistence in the asymmetric
exclusion dynamics together with spatially smoothly varying density in the
diffusive dynamics in the steady state. We obtain phase diagrams of the model
by using mean field theories, and corroborate and complement the results by
stochastic Monte Carlo simulations. This model reduces to an isolated open
totally asymmetric exclusion process (TASEP) and an open TASEP with bulk
particle nonconserving Langmuir kinetics (LK), respectively, in the limits of
vanishing and diverging particle diffusivity in the lane executing diffusive
dynamics. Thus this model works as an overarching general model, connecting
both pure TASEPs and TASEPs with LK in different asymptotic limits. We further
define phases in the SEP and obtain phase diagrams, and show their
correspondence with the TASEP phases. In addition to its significance as a 1D
driven, diffusive model, this model also serves as a simple reduced model for
cell biological transport by molecular motors undergoing diffusive and directed
motion inside eukaryotic cells.
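As an illustration of the driven lane's dynamics in isolation, here is a toy random-sequential-update sweep for an open-boundary TASEP with injection rate alpha and extraction rate beta; the SEP lane and the bulk exchange coupling of the actual two-lane model are omitted in this sketch.

```python
import numpy as np

def tasep_sweep(lattice, alpha, beta, rng):
    # Open-boundary TASEP, random sequential update: pick a site at random;
    # inject at the left edge with rate alpha, extract at the right edge with
    # rate beta, and in the bulk hop right only into an empty site (exclusion).
    L = lattice.size
    for _ in range(L + 1):
        i = int(rng.integers(-1, L))
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:
            if lattice[i] == 1 and rng.random() < beta:
                lattice[i] = 0
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1
    return lattice

rng = np.random.default_rng(0)
lat = np.zeros(50, dtype=int)
for _ in range(2000):
    tasep_sweep(lat, alpha=0.3, beta=0.7, rng=rng)
print(lat.mean())  # bulk density; alpha < 1/2 < beta is the low-density phase
```

In the full model, each Monte Carlo step would additionally attempt SEP hops (symmetric, rate set by the diffusivity D) and particle exchanges between the two lanes.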
|
http://arxiv.org/abs/2306.14651v2
|
We use the Random Forest (RF) algorithm to develop a tool for automated
activity classification of galaxies into 5 different classes: Star-forming
(SF), AGN, LINER, Composite, and Passive. We train the algorithm on a
combination of mid-IR (WISE) and optical photometric data while the true labels
(activity classes) are based on emission line ratios. Our classifier is built
to be redshift-agnostic and it is applicable to objects up to z $\sim$0.1. It
reaches a completeness $>$80 % for SF and Passive galaxies, and $\sim$60 % for
AGN. Applying it to an all-sky galaxy catalog (HECATE) reveals a large
population of low-luminosity AGNs outside the AGN locus in the standard mid-IR
diagnostics.
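A minimal sketch of such a classification pipeline is shown below. The photometric features and labels here are synthetic stand-ins: the actual classifier is trained on WISE mid-IR and optical photometry with true activity classes derived from emission-line ratios.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

classes = ["SF", "AGN", "LINER", "Composite", "Passive"]
rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for photometric colors (e.g., WISE W1-W2, W2-W3 and an
# optical color); real features come from the WISE + optical catalogs.
X = rng.normal(size=(n, 3))
y = rng.integers(0, len(classes), size=n)  # placeholder activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print([classes[int(p)] for p in pred[:5]])
```

Using colors rather than apparent magnitudes is one way such a classifier can be made redshift-agnostic over the low-redshift range the paper targets.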
|
http://arxiv.org/abs/2303.11691v1
|
Text-to-image diffusion models can generate diverse, high-fidelity images
based on user-provided text prompts. Recent research has extended these models
to support text-guided image editing. While text guidance is an intuitive
editing interface for users, it often fails to capture the precise concept
intended by users. To address this issue, we propose Custom-Edit, in which we
(i) customize a diffusion model with a few reference images and then (ii)
perform text-guided editing. Our key discovery is that customizing only
language-relevant parameters with augmented prompts improves reference
similarity significantly while maintaining source similarity. Moreover, we
provide our recipe for each customization and editing process. We compare
popular customization methods and validate our findings on two editing methods
using various datasets.
|
http://arxiv.org/abs/2305.15779v1
|
This paper introduces a new transformer-based model for the problem of travel
time estimation. The key feature of the proposed GCT-TTE architecture is the
utilization of different data modalities capturing different properties of an
input path. Along with an extensive study of the model configuration, we
implemented and evaluated a substantial number of baselines for both
path-aware and path-blind settings. The computational experiments
have confirmed the viability of our pipeline, which outperformed
state-of-the-art models on both considered datasets. Additionally, GCT-TTE was
deployed as a web service accessible for further experiments with user-defined
routes.
|
http://arxiv.org/abs/2306.04324v2
|
In this paper, modulation instability and nonlinear supratransmission are
investigated in a one-dimensional chain of atoms using cubic-quartic
nonlinearity coefficients. We establish the discrete nonlinear evolution
equation by using the multi-scale scheme. To calculate the modulation
instability gain, we use a linearization scheme. Particular attention is given
to the impact of the higher nonlinear term on the modulation instability.
Following that, full numerical integration was performed to identify modulated
wave patterns, as well as the appearance of a rogue wave. Through the nonlinear
supratransmission phenomenon, one end of the discrete model is driven into the
forbidden bandgap. For driving amplitudes above the supratransmission
threshold, bright solitons and modulated wave patterns are generated. An
important behavior is observed in the transient
range of time of propagation when the bright solitonic wave turns into a
chaotic solitonic wave. These results corroborate our analytical investigations
on the modulation instability and show that the one-dimensional chain of atoms
is a fruitful medium to generate long-lived modulated waves.
|
http://arxiv.org/abs/2303.01482v1
|
Optimal transport and its related problems, including optimal partial
transport, have proven to be valuable tools in machine learning for computing
meaningful distances between probability or positive measures. This success has
led to a growing interest in defining transport-based distances that allow for
comparing signed measures and, more generally, multi-channeled signals.
Transport $\mathrm{L}^{p}$ distances are notable extensions of the optimal
transport framework to signed and possibly multi-channeled signals. In this
paper, we introduce partial transport $\mathrm{L}^{p}$ distances as a new
family of metrics for comparing generic signals, benefiting from the robustness
of partial transport distances. We provide theoretical background such as the
existence of optimal plans and the behavior of the distance in various limits.
Furthermore, we introduce the sliced variation of these distances, which allows
for rapid comparison of generic signals. Finally, we demonstrate the
application of the proposed distances in signal class separability and nearest
neighbor classification.
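The slicing construction can be illustrated with the ordinary sliced Wasserstein distance. Note this is a generic sketch of the projection-and-sort pattern only: the paper's sliced partial transport $\mathrm{L}^p$ distances replace the 1D solver below with a partial-transport one that also handles unbalanced mass.

```python
import numpy as np

def sliced_w2(X, Y, n_proj=100, rng=None):
    # Project both point clouds onto random unit directions; in 1D, optimal
    # transport between equal-size empirical measures reduces to sorting.
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = rng.normal(size=(200, 2)) + 3.0  # shifted point cloud
print(sliced_w2(X, X) <= sliced_w2(X, Y))  # True: the shift is detected
```

Because each slice costs only a sort, the sliced variant scales far better than solving the full d-dimensional transport problem, which is the rapid-comparison benefit the abstract refers to.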
|
http://arxiv.org/abs/2307.13571v1
|
"Flying focus" techniques produce laser pulses with dynamic focal points that
travels distances much greater than a Rayleigh length. The implementation of
these techniques in laser-based applications requires the design of optical
configurations that can both extend the focal range and structure the radial
group delay. This article describes a method for designing optical
configurations that produce ultrashort flying focus pulses with
arbitrary-trajectory focal points. The method is illustrated by several
examples that employ an axiparabola for extending the focal range and either a
reflective echelon or a deformable mirror-spatial light modulator pair for
structuring the radial group delay. The latter configuration enables rapid
exploration and optimization of flying foci, which could be ideal for
experiments.
|
http://arxiv.org/abs/2307.05313v1
|
Water and ammonia vapors are known to be the major sources of spectral
absorption at pressure levels observed by the microwave radiometer (MWR) on
Juno. However, the brightness temperatures and limb darkening observed by the
MWR at its longest wavelength channel of 50 cm (600 MHz) in the first 9
perijove passes indicate the existence of an additional source of opacity in
the deep atmosphere of Jupiter (pressures beyond 100 bar). The absorption
properties of ammonia and water vapor, and their relative abundances in
Jupiter's atmosphere do not provide sufficient opacity in deep atmosphere to
explain the 600 MHz channel observation. Here we show that free electrons due
to the ionization of alkali metals, i.e., sodium and potassium, with sub-solar
metallicity [M/H] (log base 10 concentration relative to solar) in the range
of [M/H] = -2 to [M/H] = -5 can provide the missing source of opacity in the
deep atmosphere. If the alkali metals are not the source of additional opacity
in the MWR data, then their metallicity at 1000 bars can only be even lower.
The upper bound of -2 on the metallicity of the alkali metals contrasts with
the other heavy elements -- C, N, S, Ar, Kr, and Xe -- which are all enriched
relative to their solar abundances having a metallicity of approximately +0.5.
|
http://arxiv.org/abs/2306.12546v1
|
This paper investigates links between the eigenvalues and eigenfunctions of
the Laplace-Beltrami operator, and the higher Cheeger constants of smooth
Riemannian manifolds, possibly weighted and/or with boundary. The higher
Cheeger constants give a loose description of the major geometric features of a
manifold. We give a constructive upper bound on the higher Cheeger constants,
in terms of the eigenvalue of any eigenfunction with the corresponding number
of nodal domains. Specifically, we show that for each such eigenfunction, a
positive-measure collection of its superlevel sets have their Cheeger ratios
bounded above in terms of the corresponding eigenvalue.
Some manifolds have their major features entwined across several
eigenfunctions, and no single eigenfunction contains all the major features. In
this case, there may exist carefully chosen linear combinations of the
eigenfunctions, each with large values on a single feature, and small values
elsewhere. We can then apply a soft-thresholding operator to these linear
combinations to obtain new functions, each supported on a single feature. We
show that the Cheeger ratios of the level sets of these functions also give an
upper bound on the Laplace-Beltrami eigenvalues. We extend these level set
results to nonautonomous dynamical systems, and show that the dynamic Laplacian
eigenfunctions reveal sets with small dynamic Cheeger ratios.
|
http://arxiv.org/abs/2308.04850v1
|
Software visualizations are usually realized as standalone and isolated tools
that use embedded code viewers within the visualization. In the context of
program comprehension, only few approaches integrate visualizations into code
editors, such as integrated development environments. This is surprising since
professional developers consider reading source code one of the most important
ways to understand software and therefore spend a lot of time with code
editors. In this paper, we introduce the design and proof-of-concept
implementation for a software visualization approach that can be embedded into
code editors. Our contribution differs from related work in that we use dynamic
analysis of a software system's runtime behavior. Additionally, we incorporate
distributed tracing. This enables developers to understand how, for example,
the currently handled source code behaves as a fully deployed, distributed
software system. Our visualization approach enhances common remote pair
programming tools and is collaboratively usable by employing shared code
cities. As a result, user interactions are synchronized between code editor and
visualization, as well as broadcasted to collaborators. To the best of our
knowledge, this is the first approach that combines code editors with
collaboratively usable code cities. Therefore, we conducted a user study to
collect first-time feedback regarding the perceived usefulness and perceived
usability of our approach. We additionally collected logging information to
provide more data regarding time spent in code cities that are embedded in code
editors. Seven teams with two students each participated in that study. The
results show that the majority of participants find our approach useful and
would employ it for their own use. We provide each participant's video
recording, raw results, and all steps to reproduce our experiment as
supplementary package.
|
http://arxiv.org/abs/2308.15785v1
|
The FLAIR #2 dataset presented here includes two very distinct types of
data, which are exploited for a semantic segmentation task aimed at mapping
land cover. The data fusion workflow proposes the exploitation of the fine
spatial and textural information of very high spatial resolution (VHR)
mono-temporal aerial imagery and the temporal and spectral richness of high
spatial resolution (HR) time series of Copernicus Sentinel-2 satellite images.
The French National Institute of Geographical and Forest Information (IGN), in
response to the growing availability of high-quality Earth Observation (EO)
data, is actively exploring innovative strategies to integrate these data with
heterogeneous characteristics. IGN is therefore offering this dataset to
promote innovation and improve our knowledge of our territories.
|
http://arxiv.org/abs/2305.14467v1
|
The spread of toxic content online is an important problem that has adverse
effects on user experience online and in our society at large. Motivated by the
importance and impact of the problem, research focuses on developing solutions
to detect toxic content, usually leveraging machine learning (ML) models
trained on human-annotated datasets. While these efforts are important, these
models usually do not generalize well and they can not cope with new trends
(e.g., the emergence of new toxic terms). Currently, we are witnessing a shift
in the approach to tackling societal issues online, particularly leveraging
large language models (LLMs) like GPT-3 or T5 that are trained on vast corpora
and have strong generalizability. In this work, we investigate how we can use
LLMs and prompt learning to tackle the problem of toxic content, particularly
focusing on three tasks: 1) Toxicity Classification, 2) Toxic Span Detection,
and 3) Detoxification. We perform an extensive evaluation over five model
architectures and eight datasets, demonstrating that LLMs with prompt learning
can achieve similar or even better performance compared to models trained on
these specific tasks. We find that prompt learning achieves around 10\%
improvement in the toxicity classification task compared to the baselines,
while for the toxic span detection task we find better performance than the best
baseline (0.643 vs. 0.640 in terms of $F_1$-score). Finally, for the
detoxification task, we find that prompt learning can successfully reduce the
average toxicity score (from 0.775 to 0.213) while preserving semantic meaning.
|
http://arxiv.org/abs/2308.05596v1
|
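The prompt-learning setup described in the abstract above can be pictured as a template plus a label parser. The paper's actual prompts and models are not shown in the abstract, so the wording and function names below are purely illustrative; a real pipeline would send the prompt to an LLM and parse its completion.

```python
def build_toxicity_prompt(text: str) -> str:
    # Zero-shot prompt template for toxicity classification.
    # The exact prompts used in the paper are not given in the abstract;
    # this wording is a hypothetical stand-in.
    return (
        "Classify the following message as 'toxic' or 'non-toxic'.\n"
        f"Message: {text}\n"
        "Answer:"
    )


def parse_label(completion: str) -> str:
    # Map the model's free-text completion onto one of the two labels.
    # Checks 'non-toxic' first so it is not shadowed by the 'toxic' substring.
    lowered = completion.lower()
    if "non-toxic" in lowered:
        return "non-toxic"
    return "toxic" if "toxic" in lowered else "non-toxic"
```

The same template-and-parse pattern extends to the other two tasks: span detection would ask the model to quote the offending span, and detoxification would ask for a rewritten message.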
We perform the linear analysis of causality and stability for a minimal
extended spin hydrodynamics up to second order of the gradient expansion. The
first-order spin hydrodynamics, with a rank-3 spin tensor antisymmetric in
only its last two indices, is shown to be acausal and unstable. We then
consider the minimal causal spin hydrodynamics up to second order of the
gradient expansion. We derive the necessary causality and stability conditions
for this minimal causal spin hydrodynamics. Interestingly, the satisfaction of
the stability conditions relies on the equations of state for the spin density
and chemical potentials. Moreover, unlike conventional relativistic
dissipative hydrodynamics, the stability of the theory appears to be broken at
finite wave-vectors even when the stability conditions are fulfilled in the
small and large wave-vector limits. This implies that the behavior in the small
and large wave-vector limits may be insufficient to determine the stability
conditions for spin hydrodynamics in linear mode analysis.
|
http://arxiv.org/abs/2306.13880v3
|
Current benchmarks for evaluating neural code models focus on only a small
subset of programming languages, excluding many popular languages such as Go or
Rust. To ameliorate this issue, we present the BabelCode framework for
execution-based evaluation of any benchmark in any language. BabelCode enables
new investigations into the qualitative performance of models' memory, runtime,
and individual test case results. Additionally, we present a new code
translation dataset called Translating Python Programming Puzzles (TP3) from
the Python Programming Puzzles (Schuster et al. 2021) benchmark that involves
translating expert-level Python functions to any language. With both BabelCode
and the TP3 benchmark, we investigate if balancing the distributions of 14
languages in a training dataset improves a large language model's performance
on low-resource languages. Training a model on a balanced corpus results in, on
average, 12.34% higher $pass@k$ across all tasks and languages compared to the
baseline. We find that this strategy achieves 66.48% better $pass@k$ on
low-resource languages at the cost of only a 12.94% decrease on high-resource
languages. In our three translation tasks, this strategy yields, on average,
30.77% better low-resource $pass@k$ while having 19.58% worse high-resource
$pass@k$.
|
http://arxiv.org/abs/2302.01973v3
|
In the quantum theory, it has been shown that one can see if a process has
the time reversal symmetry by applying the matrix transposition and examining
if it remains physical. However, recent discoveries regarding the indefinite
causal order of quantum processes suggest that there may be other, more general
symmetry transformations of time besides the complete reversal. In this work,
we introduce an expanded concept of matrix transposition, the generalized
transposition, that takes into account general bipartite unitary
transformations of a quantum operation's future and past Hilbert spaces,
allowing the time axis to lie along a superposed direction, which generalizes
the previously studied `indefinite direction of time', i.e., the superposition
of forward and backward time evolution. This framework
may have applications in approaches that treat time and space on an equal
footing, such as quantum gravity, where the spatio-temporal structure is
expected to emerge from quantum mechanics. We apply this generalized
transposition to investigate
a continuous generalization of perfect tensors, a dynamic version of tracing
out a subsystem, and the compatibility of multiple time axes in bipartite
quantum interactions. Notably, we demonstrate that when a bipartite interaction
is consistent with more distinct local temporal axes, there is a reduced
allowance for information exchange between the two parties in order to prevent
causality violations.
|
http://arxiv.org/abs/2306.02755v3
|
The exponential growth in the digitisation of services implies the handling
and storage of large volumes of data. Businesses and services see data sharing
and crossing as an opportunity to improve and produce new business
opportunities. The health sector is one area where this proves to be true,
enabling better and more innovative treatments. Notwithstanding, this raises
concerns regarding personal data being treated and processed. In this paper, we
present a patient-centric platform for the secure sharing of health records by
shifting the control over the data to the patient, therefore, providing a step
further towards data sovereignty. Data sharing is performed only with the
consent of the patient, who can revoke access at any given time.
Furthermore, we also provide a break-glass approach, resorting to Proxy
Re-encryption (PRE) and the concept of a centralised trusted entity that
possesses instant access to patients' medical records. Lastly, an analysis is
made to assess the performance of the platform's key operations, and the impact
that a PRE scheme has on those operations.
|
http://arxiv.org/abs/2307.01175v1
|
Language models are increasingly being deployed for general problem solving
across a wide range of tasks, but are still confined to token-level,
left-to-right decision-making processes during inference. This means they can
fall short in tasks that require exploration, strategic lookahead, or where
initial decisions play a pivotal role. To surmount these challenges, we
introduce a new framework for language model inference, Tree of Thoughts (ToT),
which generalizes over the popular Chain of Thought approach to prompting
language models, and enables exploration over coherent units of text (thoughts)
that serve as intermediate steps toward problem solving. ToT allows LMs to
perform deliberate decision making by considering multiple different reasoning
paths and self-evaluating choices to decide the next course of action, as well
as looking ahead or backtracking when necessary to make global choices. Our
experiments show that ToT significantly enhances language models'
problem-solving abilities on three novel tasks requiring non-trivial planning
or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in
Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of
tasks, our method achieved a success rate of 74%. Code repo with all prompts:
https://github.com/princeton-nlp/tree-of-thought-llm.
|
http://arxiv.org/abs/2305.10601v2
|
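The search procedure in the Tree of Thoughts abstract above can be sketched as beam search over partial solutions: expand every candidate, score the results, keep the best few, repeat. This is a minimal stand-in for the BFS variant the paper describes; in the real system an LM both proposes the next thoughts (`expand`) and evaluates them (`score`), whereas here both are caller-supplied functions.

```python
def tree_of_thoughts(root, expand, score, beam_width=3, depth=3):
    """Beam search over 'thoughts': at each level, expand every frontier
    state into candidate continuations, score them, and keep only the
    top beam_width. Returns the best state found at the final level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [child for state in frontier for child in expand(state)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)
```

As a toy analogue of Game of 24, one can let states be lists of digits, `expand` append a digit, and `score` measure closeness of the running sum to a target; the search then finds a digit sequence hitting the target exactly.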
Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection
and landing tasks are challenging for novice pilots due to the difficulties
associated with depth perception and the control interface. We propose a shared
autonomy system, alongside supplementary information displays, to assist pilots
to successfully complete multi-task missions without any pilot training. Our
approach comprises three modules: (1) a perception module that encodes
visual information onto a latent representation, (2) a policy module that
augments the pilot's actions, and (3) an information augmentation module that
provides additional information to the pilot. The policy module is trained in
simulation with simulated users and transferred to the real world without
modification in a user study (n=29), alongside supplementary information
schemes including learnt red/green light feedback cues and an augmented reality
display. The pilot's intent is unknown to the policy module and is inferred
from the pilot's input and UAV's states. The assistant increased task success
rate for the landing and inspection tasks from [16.67% & 54.29%] respectively
to [95.59% & 96.22%]. With the assistant, inexperienced pilots achieved similar
performance to experienced pilots. Red/green light feedback cues reduced the
required time by 19.53% and trajectory length by 17.86% for the inspection
task, where participants rated it as their preferred condition due to the
intuitive interface and providing reassurance. This work demonstrates that
simple user models can train shared autonomy systems in simulation, and
transfer to physical tasks to estimate user intent and provide effective
assistance and information to the pilot.
|
http://arxiv.org/abs/2306.09600v1
|
Quantum neural networks (QNNs) succeed in object recognition, natural
language processing, and financial analysis. To maximize the accuracy of a QNN
on a Noisy Intermediate Scale Quantum (NISQ) computer, approximate synthesis
modifies the QNN circuit by reducing error-prone 2-qubit quantum gates. The
success of QNNs motivates adversaries to attack QNNs via backdoors. However,
na\"ively transplanting backdoors designed for classical neural networks to
QNNs yields only a low attack success rate, due to noise and approximate
synthesis on NISQ computers. Prior quantum circuit-based backdoors cannot
selectively attack some inputs or work with all types of encoding layers of a
QNN circuit. Moreover, it is easy to detect both transplanted and circuit-based
backdoors in a QNN.
In this paper, we propose a novel and stealthy backdoor attack, QDoor, to
achieve high attack success rate in approximately-synthesized QNN circuits by
weaponizing unitary differences between uncompiled QNNs and their synthesized
counterparts. QDoor trains a QNN behaving normally for all inputs with and
without a trigger. However, after approximate synthesis, the QNN circuit
always predicts any input carrying a trigger as a predefined class while still
acting normally for benign inputs. Compared to prior backdoor attacks, QDoor improves
the attack success rate by $13\times$ and the clean data accuracy by $65\%$ on
average. Furthermore, prior backdoor detection techniques cannot find QDoor
attacks in uncompiled QNN circuits.
|
http://arxiv.org/abs/2307.09529v2
|
We examine the routing problem for self-interested vehicles using stochastic
decision strategies. By approximating the road latency functions and a
non-linear variable transformation, we frame the problem as an aggregative
game. We characterize the approximation error and we derive a new monotonicity
condition for a broad category of games that encompasses the problem under
consideration. Next, we propose a semi-decentralized algorithm to calculate the
routing as a variational generalized Nash equilibrium and demonstrate the
solution's benefits with numerical simulations. In the particular case of
potential games, which emerges for linear latency functions, we explore a
receding-horizon formulation of the routing problem, showing asymptotic
convergence to destinations and analysing closed-loop performance dependence on
horizon length through numerical simulations.
|
http://arxiv.org/abs/2303.03295v2
|
We study gravitational absorption effects using effective on-shell scattering
amplitudes. We develop an in-in probability-based framework involving plane-
and partial-wave coherent states for the incoming wave to describe the
interaction of the wave with a black hole or another compact object. We connect
this framework to a simplified single-quantum analysis. The basic ingredients
are mass-changing three-point amplitudes, which model the leading absorption
effects, and a spectral-density function of the black hole. As an application,
we consider a non-spinning black hole that may start spinning as a consequence
of the dynamics. The corresponding amplitudes are found to correspond to
covariant spin-weighted spherical harmonics, the properties of which we
formulate and make use of. We perform a matching calculation to
general-relativity results at the cross-section level and derive the effective
absorptive three-point couplings. They are found to behave as ${\cal
O}(G_\text{Newton}^{s+1})$, where $s$ is the spin of the outgoing massive
state.
|
http://arxiv.org/abs/2307.07504v3
|
We study random velocity effects on a two-species reaction-diffusion system
consisting of three reaction processes $A + A \rightarrow (\varnothing, A),A+B
\rightarrow A$. Using the field-theoretic perturbative renormalization group we
analyze this system in the vicinity of its upper critical dimension $d_c = 2$.
The velocity ensemble is generated by means of stochastic Navier-Stokes equations.
In particular, we investigate the effect of thermal fluctuations on reaction
kinetics. The overall analysis is performed to the one-loop approximation and
possible macroscopic regimes are identified.
|
http://arxiv.org/abs/2305.09350v1
|
In this paper, we characterize Probabilistic Principal Component Analysis in
Hilbert spaces and demonstrate how the optimal solution admits a representation
in dual space. This allows us to develop a generative framework for kernel
methods. Furthermore, we show how it encompasses Kernel Principal Component
Analysis, and we illustrate how it works on a toy dataset and a real dataset.
|
http://arxiv.org/abs/2307.10078v1
|
Quantum machine learning (QML) has witnessed immense progress recently, with
quantum support vector machines (QSVMs) emerging as a promising model. This
paper focuses on the two existing QSVM methods: quantum kernel SVM (QK-SVM) and
quantum variational SVM (QV-SVM). While both have yielded impressive results,
we present a novel approach that synergizes the strengths of QK-SVM and QV-SVM
to enhance accuracy. Our proposed model, quantum variational kernel SVM
(QVK-SVM), leverages the quantum kernel and quantum variational algorithm. We
conducted extensive experiments on the Iris dataset and observed that QVK-SVM
outperforms both existing models in terms of accuracy, loss, and confusion
matrix indicators. Our results demonstrate that QVK-SVM holds tremendous
potential as a reliable and transformative tool for QML applications. Hence, we
recommend its adoption in future QML research endeavors.
|
http://arxiv.org/abs/2305.06063v2
|
This work analyzes and parallelizes LearnedSort, the novel algorithm that
sorts using machine learning models based on the cumulative distribution
function. LearnedSort is analyzed under the lens of algorithms with
predictions, and it is argued that LearnedSort is a learning-augmented
SampleSort. A parallel LearnedSort algorithm is developed combining LearnedSort
with the state-of-the-art SampleSort implementation, IPS4o. Benchmarks on
synthetic and real-world datasets demonstrate improved parallel performance for
parallel LearnedSort compared to IPS4o and other sorting algorithms.
|
http://arxiv.org/abs/2307.08637v1
|
The bright, blue, rapidly evolving AT2018cow is a well-studied peculiar
extragalactic transient. Despite an abundance of multi-wavelength data, there
still is no consensus on the nature of the event. We present our analysis of
three epochs of Hubble Space Telescope (HST) observations spanning the period
from 713-1474 days post burst, paying particular attention to uncertainties of
the transient photometry introduced by the complex background in which
AT2018cow resides. Photometric measurements show evident fading in the UV and
more subtle but significant fading in the optical. During the last HST
observation, the transient's optical/UV colours were still bluer than those of
the substantial population of compact, young, star-forming regions in the host
of AT2018cow, suggesting some continued transient contribution to the light.
However, a compact source underlying the transient would substantially modify
the resulting spectral energy distribution, depending on its contribution in
the various bands. In particular, in the optical filters, the complex, diffuse
background poses a problem for precise photometry. An underlying cluster is
expected for a supernova occurring within a young stellar environment or a
tidal-disruption event (TDE) within a dense older one. While many recent works
have focused on the supernova interpretation, we note the substantial
similarity in UV light-curve morphology between AT2018cow and several tidal
disruption events around supermassive black holes. Assuming AT2018cow arises
from a TDE-like event, we fit the late-time emission with a disc model and find
$M_{BH} = 10^{3.2{\pm}0.8}$ M$_{\odot}$. Further observations are necessary to
determine the late-time evolution of the transient and its immediate
environment.
|
http://arxiv.org/abs/2308.07381v1
|
Neural Cellular Automata (NCA) are a powerful combination of machine learning
and mechanistic modelling. We train NCA to learn complex dynamics from time
series of images and PDE trajectories. Our method is designed to identify
underlying local rules that govern large scale dynamic emergent behaviours.
Previous work on NCA focuses on learning rules that give stationary emergent
structures. We extend NCA to capture both transient and stable structures
within the same system, as well as learning rules that capture the dynamics of
Turing pattern formation in nonlinear Partial Differential Equations (PDEs). We
demonstrate that NCA can generalise very well beyond their PDE training data,
we show how to constrain NCA to respect given symmetries, and we explore the
effects of associated hyperparameters on model performance and stability. Being
able to learn arbitrary dynamics gives NCA great potential as a data-driven
modelling framework, especially for modelling biological pattern formation.
|
http://arxiv.org/abs/2310.14809v2
|
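The Neural Cellular Automata abstract above rests on one structural idea: every cell updates synchronously from a local rule applied to its neighbourhood. A minimal 1-D sketch is below; in an NCA the `rule` would be a small neural network trained so that repeated updates reproduce an observed image or PDE trajectory, whereas here a hand-written rule stands in for it.

```python
def ca_step(grid, rule):
    """One synchronous update of a 1-D cellular automaton with periodic
    boundaries: each cell's next state is a function of its left
    neighbour, itself, and its right neighbour."""
    n = len(grid)
    return [rule(grid[(i - 1) % n], grid[i], grid[(i + 1) % n])
            for i in range(n)]
```

For example, a majority rule makes isolated cells die out and dense runs persist; iterating `ca_step` then exhibits exactly the kind of emergent large-scale behaviour that the trained NCA rules are meant to capture.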