We present a variational Monte Carlo algorithm for estimating the lowest
excited states of a quantum system, which is a natural generalization of the
estimation of ground states. The method has no free parameters and requires no
explicit orthogonalization of the different states, instead transforming the
problem of finding excited states of a given system into that of finding the
ground state of an expanded system. Expected values of arbitrary observables
can be calculated, including off-diagonal expectations between different states
such as the transition dipole moment. Although the method is entirely general,
it works particularly well in conjunction with recent work on using neural
networks as variational Ans\"atze for many-electron systems, and we show that
by combining this method with the FermiNet and Psiformer Ans\"atze we can
accurately recover vertical excitation energies and oscillator strengths on a
range of molecules. Our method is the first deep learning approach to achieve
accurate vertical excitation energies, including challenging double
excitations, on benzene-scale molecules. Beyond the chemistry examples here, we
expect this technique will be of great interest for applications to atomic,
nuclear and condensed matter physics.
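As a hedged illustration of the expanded-system construction (a minimal sketch consistent with the abstract, not a full specification of the method): given Ans\"atze $\psi_1,\dots,\psi_K$ for the $K$ lowest states, the expanded system of $K$ copies of the original can be assigned the totally antisymmetric combination

$$ \Psi(x_1,\dots,x_K) \propto \det\begin{pmatrix} \psi_1(x_1) & \cdots & \psi_1(x_K) \\ \vdots & \ddots & \vdots \\ \psi_K(x_1) & \cdots & \psi_K(x_K) \end{pmatrix}, $$

so that variationally minimizing the expanded system's energy recovers the span of the $K$ lowest eigenstates without explicit orthogonalization.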
|
http://arxiv.org/abs/2308.16848v3
|
La-doped SrFeO$_{3}$, La$_{1/3}$Sr$_{2/3}$FeO$_{3}$, exhibits a
metal-to-insulator transition accompanied by both antiferromagnetic and charge
ordering states, along with Fe-O bond disproportionation, below a critical
temperature near 200 K. The unconventionally slow charge dynamics measured in
this material near the critical temperature indicates that its excited charge
ordering states can exhibit novel electronic structures with nontrivial energy
profiles. Here, we reveal possible metastable states of the charge ordering
structures in La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ using first-principles and
climbing-image nudged elastic band methods. In the strong correlation regime,
La$_{1/3}$Sr$_{2/3}$FeO$_{3}$ is an antiferromagnetic insulator with a charge
ordering state of the big-small-big pattern, consistent with experimental
measurements of this material at low temperature. As the correlation effect
weakens, we find at least two possible metastable charge ordering states with
distinct Fe-O bond disproportionation. Remarkably, a ferroelectric metallic
state emerges, with a small energy barrier of $\sim$7 meV, driven by a
metastable charge ordering state of the small-medium-big pattern. The
electronic structures of these metastable charge ordering states are noticeably
different from those of the ground state. Our results provide an insightful
explanation of the multiple metastable charge ordering states and the slow
charge dynamics of this and related oxide materials.
|
http://arxiv.org/abs/2309.03995v1
|
Using the two-component model, we analyze Bose-Einstein correlations in pp
collisions at the center-of-mass energy of 13 TeV, measured by the CMS
Collaboration at the LHC, and compare results with the $\tau$-model. We utilize
data described by the double ratios with an average pair transverse momentum
$0\le k_T\le 1.0$ GeV and six intervals of the reconstructed charged-particle
multiplicity $N_{\rm trk}^{\rm offline}$. The estimated source extensions are
1-4 fm when described by the exponential function $\exp(-RQ)$ and 0.4-0.5 fm
when described by the Gaussian form $\exp(-(RQ)^2)$, respectively. Moreover, we
estimate the upper limits of the 3-pion BEC to test the two-component model and
investigate the role of the long-range correlation.
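As a schematic illustration only (the exact two-component parameterization is defined in the paper), source terms of the two quoted forms typically enter a BEC double-ratio fit as

$$ C_2(Q) \simeq C\left[1+\lambda\,e^{-RQ}\right](1+\varepsilon Q) \quad\text{or}\quad C_2(Q) \simeq C\left[1+\lambda\,e^{-(RQ)^2}\right](1+\varepsilon Q), $$

where $\lambda$ is the correlation strength, $R$ the source extension, and $(1+\varepsilon Q)$ a generic long-range correlation factor.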
|
http://arxiv.org/abs/2303.17763v2
|
We propose a method for learning topology-preserving data representations
(dimensionality reduction). The method aims to provide topological similarity
between the data manifold and its latent representation via enforcing the
similarity in topological features (clusters, loops, 2D voids, etc.) and their
localization. The core of the method is the minimization of the Representation
Topology Divergence (RTD) between original high-dimensional data and
low-dimensional representation in latent space. RTD minimization provides
closeness in topological features with strong theoretical guarantees. We
develop a scheme for RTD differentiation and apply it as a loss term for the
autoencoder. The proposed method "RTD-AE" better preserves the global structure
and topology of the data manifold than state-of-the-art competitors as measured
by linear correlation, triplet distance ranking accuracy, and Wasserstein
distance between persistence barcodes.
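A minimal sketch of how an RTD term can enter an autoencoder objective, assuming a differentiable RTD implementation as developed in the paper (the rtd function below is a hypothetical placeholder, not the authors' released code):

```python
import torch
import torch.nn as nn

def rtd(x, z):
    """Hypothetical placeholder for the differentiable Representation
    Topology Divergence between a data batch x and its latent codes z."""
    raise NotImplementedError

class RTDAutoencoder(nn.Module):
    def __init__(self, dim_in, dim_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                     nn.Linear(128, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim_in))

    def loss(self, x, rtd_weight=1.0):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        # Reconstruction term plus the topology-preserving RTD term.
        return nn.functional.mse_loss(x_hat, x) + rtd_weight * rtd(x, z)
```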
|
http://arxiv.org/abs/2302.00136v2
|
We study the motion of a domain wall on an ultrathin magnetic film using the
magneto-optical Kerr effect (MOKE). At tiny magnetic fields, the wall creeps
only via thermal activation over the pinning centers present in the sample. Our
results show that this creep dynamics is highly intermittent and correlated. A
localized instability triggers a cascade, akin to aftershocks following a large
earthquake, where the pinned wall undergoes large reorganizations in a compact
active region for a few seconds. Surprisingly, the size and shape of these
reorganizations display the same scale-free statistics as the depinning
avalanches, in agreement with the quenched Kardar-Parisi-Zhang universality
class.
|
http://arxiv.org/abs/2309.12898v1
|
Information extraction systems often produce hundreds to thousands of strings
on a specific topic. We present a method that facilitates better consumption of
these strings, in an exploratory setting in which a user wants to both get a
broad overview of what's available, and a chance to dive deeper on some
aspects. The system works by grouping similar items together and arranging the
remaining items into a hierarchical navigable DAG structure. We apply the
method to medical information extraction.
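One generic way to realize the grouping-then-DAG idea is sketched below (an illustration under assumptions, not the paper's exact algorithm; the sentence embedder and the token-subset heuristic for edges are arbitrary choices):

```python
import networkx as nx
from sklearn.cluster import AgglomerativeClustering
from sentence_transformers import SentenceTransformer  # assumed embedder

def build_navigable_dag(strings, distance_threshold=0.4):
    """Group near-duplicate strings, then arrange group representatives
    into a DAG from generic to specific."""
    emb = SentenceTransformer("all-MiniLM-L6-v2").encode(
        strings, normalize_embeddings=True)
    labels = AgglomerativeClustering(
        n_clusters=None, metric="cosine", linkage="average",
        distance_threshold=distance_threshold).fit_predict(emb)
    groups = {}
    for s, label in zip(strings, labels):
        groups.setdefault(label, []).append(s)
    reps = {label: min(ss, key=len) for label, ss in groups.items()}
    dag = nx.DiGraph()
    for a in reps.values():
        for b in reps.values():
            # Heuristic: a is an ancestor of b if a's tokens subsume b's.
            if a != b and set(a.lower().split()) < set(b.lower().split()):
                dag.add_edge(a, b)
    return groups, dag
```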
|
http://arxiv.org/abs/2309.10057v1
|
Graphic layout generation, a growing research field, plays a significant role
in user engagement and information perception. Existing methods primarily treat
layout generation as a numerical optimization task, focusing on quantitative
aspects while overlooking the semantic information of layout, such as the
relationship between each layout element. In this paper, we propose LayoutNUWA,
the first model that treats layout generation as a code generation task to
enhance semantic information and harness the hidden layout expertise of large
language models~(LLMs). More concretely, we develop a Code Instruct Tuning
(CIT) approach comprising three interconnected modules: 1) the Code
Initialization (CI) module quantifies the numerical conditions and initializes
them as HTML code with strategically placed masks; 2) the Code Completion (CC)
module employs the formatting knowledge of LLMs to fill in the masked portions
within the HTML code; 3) the Code Rendering (CR) module transforms the
completed code into the final layout output, ensuring a highly interpretable
and transparent layout generation procedure that directly maps code to a
visualized layout. We attain significant state-of-the-art performance (even
over 50\% improvements) on multiple datasets, showcasing the strong
capabilities of LayoutNUWA. Our code is available at
https://github.com/ProjectNUWA/LayoutNUWA.
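A hedged illustration of what a Code Instruct Tuning input might look like (the actual HTML schema and mask tokens are defined in the LayoutNUWA repository; the template and the <FILL_ME> token below are assumptions for illustration):

```python
# Numerical layout conditions rendered as HTML, with masked attribute
# values for the LLM to complete in the Code Completion step.
MASK = "<FILL_ME>"  # mask token is an assumption

masked_layout = f"""
<html><body>
  <div class="canvas" style="width:102px; height:150px">
    <rect class="title"  style="left:{MASK}; top:{MASK}; width:{MASK}; height:{MASK}"></rect>
    <rect class="image"  style="left:{MASK}; top:{MASK}; width:{MASK}; height:{MASK}"></rect>
    <rect class="button" style="left:{MASK}; top:{MASK}; width:{MASK}; height:{MASK}"></rect>
  </div>
</body></html>
"""
```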
|
http://arxiv.org/abs/2309.09506v2
|
In the search for stable lead (Pb)-free perovskites, vacancy-ordered double
perovskites (VODP), A$_2$BX$_6$, have emerged as a promising class of materials
for solar harvesting owing to their nontoxicity, better stability, and unique
optoelectronic properties. Here, we present the stability and the key physical
attributes of a few selected compounds in a systematic manner using
state-of-the-art first-principles calculations. A careful structural and
stability analysis, via simulated convex hull and compositional phase diagrams
for different structural prototypes, discloses 14 stable and 1 metastable
compounds in this class. Electronic structure calculations using a hybrid
functional reveal that six compounds acquire band gaps in the ideal visible
region. These six compounds, namely Cs$_2$SnI$_6$, Cs$_2$PdI$_6$,
Cs$_2$TeI$_6$, Cs$_2$TiI$_6$, Cs$_2$PtI$_6$, and Cs$_2$PdBr$_6$, show high
optical absorption ($\approx$ 10$^{5}$ cm$^{-1}$), giving rise to a high
spectroscopic limited maximum efficiency, SLME (15-23\%), in the thin-film
thickness range. Close inspection of transport properties reveals polar optical
phonon scattering to be the dominant mechanism limiting the overall mobility.
Further analysis of the polaron excitations discloses the possibility of large
polaron formation at low to moderate defect concentrations. At high defect
concentrations, ionized impurity scattering takes over. This suggests that
simulation-guided control of defect concentrations during synthesis can yield
desired candidates for promising device applications. Additionally, a few
selected compounds show moderate to high electron mobility values ($\sim$13-63
cm$^2$V$^{-1}$s$^{-1}$) at room temperature. Overall, the present study paves
an important path toward designing VODP as Pb-free potential candidates for
future optoelectronic applications.
|
http://arxiv.org/abs/2309.06153v1
|
The parent compound of cuprates is a charge-transfer-type Mott insulator with
strong hybridization between the Cu $3d_{\mathrm x^2-y^2}$ and O $2p$ orbitals.
A key question concerning the pairing mechanism is the behavior of doped holes
in the antiferromagnetic (AF) Mott insulator background, which is a
prototypical quantum many-body problem. It was proposed that a doped hole on
the O site tends to form a singlet, known as the Zhang-Rice singlet (ZRS), with
the unpaired Cu spin. Experimentally, however, little is known about the
properties of single doped holes and the interplay between them that leads to
superconductivity.
Here we use scanning tunneling microscopy to visualize the electronic states in
hole-doped $\mathrm{Ca_2CuO_2Cl_2}$, aiming to establish the atomic-scale local
basis for pair formation. A single doped hole is shown to have an in-gap state
and a clover-shaped spatial distribution that can be attributed to a localized
ZRS. When the dopants are close enough, they develop delocalized molecular
orbitals with characteristic stripe- and ladder-shaped patterns, accompanied by
the opening of a small gap around the Fermi level ($E_{\mathrm F}$). With
increasing doping, the molecular orbitals proliferate in space and gradually
form densely packed plaquettes, but the stripe and ladder patterns remain
nearly the same. The low-energy electronic states of the molecular orbitals are
intimately related to the local pairing properties and thus play a vitally
important role in the emergence of superconductivity. We propose that the
Cooper pair is formed by two holes occupying the stripe-like molecular orbital,
while the attractive interaction is mediated by the AF spin background.
|
http://arxiv.org/abs/2309.09260v1
|
For widespread adoption, public security and surveillance systems must be
accurate, portable, compact, and real-time, without impeding the privacy of the
individuals being observed. Current systems broadly fall into two categories --
image-based which are accurate, but lack privacy, and RF signal-based, which
preserve privacy but lack portability, compactness and accuracy. Our paper
proposes mmSense, an end-to-end portable miniaturised real-time system that can
accurately detect the presence of concealed metallic objects on persons in a
discreet, privacy-preserving modality. mmSense features millimeter wave radar
technology, provided by Google's Soli sensor for its data acquisition, and
TransDope, our real-time neural network, capable of processing a single radar
data frame in 19 ms. mmSense achieves high recognition rates on a diverse set
of challenging scenes while running on standard laptop hardware, demonstrating
a significant advancement towards creating portable, cost-effective, real-time
radar-based surveillance systems.
|
http://arxiv.org/abs/2302.14625v1
|
Future wireless systems are likely to adopt extremely large aperture arrays
to achieve higher throughput, wider coverage, and higher spatial resolution.
Conventional wireless systems predominantly operate in the far field (FF) of
the radiation source. However, as the array size increases and the carrier
wavelength decreases, the near field (NF) becomes nonnegligible. Since the NF
and FF differ in many aspects, it is critical to identify their corresponding
regions. In this article, we first provide a comprehensive overview of the
existing NF-FF boundaries, then introduce a novel NF-FF demarcation method
based on effective degrees of freedom (EDoF) of the channel. Since EDoF is
intimately related to channel capacity, the EDoF-based border is able to
characterize key channel performance more accurately than the classic Rayleigh
distance and other representative benchmarks. Furthermore, we analyze the main
features of the EDoF-based NF-FF boundary, provide insights into system design,
and outline the associated challenges and research opportunities.
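For reference, the classic Rayleigh distance mentioned above is

$$ d_{\mathrm{R}} = \frac{2D^{2}}{\lambda}, $$

where $D$ is the array aperture and $\lambda$ the carrier wavelength; the EDoF-based boundary replaces this purely geometric criterion with one tied to channel performance.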
|
http://arxiv.org/abs/2309.13238v2
|
Purpose: Recent disruptive events, such as COVID-19 and the Russia-Ukraine
conflict, have had a significant impact on global supply chains. Digital supply
chain twins have been proposed in order to provide decision makers with an
effective and efficient tool to mitigate disruption impact. Methods: This paper
introduces a hybrid deep learning approach for disruption detection within a
cognitive digital supply chain twin framework to enhance supply chain
resilience. The proposed disruption detection module utilises a deep
autoencoder neural network combined with a one-class support vector machine
algorithm. In addition, long short-term memory neural network models are
developed to identify the disrupted echelon and predict time-to-recovery from
the disruption effect. Results: The obtained information from the proposed
approach will help decision-makers and supply chain practitioners make
appropriate decisions aiming at minimizing negative impact of disruptive events
based on real-time disruption detection data. The results demonstrate the
trade-off between disruption detection model sensitivity, encountered delay in
disruption detection, and false alarms. This approach has seldom been used in
recent literature addressing this issue.
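A minimal sketch of the detection stage described above (a deep autoencoder feeding a one-class SVM), assuming fixed-length windows of supply chain performance indicators as input; this illustrates the combination, not the authors' implementation:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

class AE(nn.Module):
    def __init__(self, dim_in, dim_code=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 32), nn.ReLU(),
                                 nn.Linear(32, dim_code))
        self.dec = nn.Sequential(nn.Linear(dim_code, 32), nn.ReLU(),
                                 nn.Linear(32, dim_in))

    def forward(self, x):
        return self.dec(self.enc(x))

def fit_detector(x_normal: np.ndarray, epochs=200):
    """Train the AE on disruption-free windows, then fit a one-class SVM
    on the learned codes; windows later scored -1 are flagged as disruptions."""
    x = torch.tensor(x_normal, dtype=torch.float32)
    ae = AE(x.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(ae(x), x).backward()
        opt.step()
    codes = ae.enc(x).detach().numpy()
    svm = OneClassSVM(nu=0.01, kernel="rbf").fit(codes)
    return ae, svm
```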
|
http://arxiv.org/abs/2309.14557v1
|
We explore the ability of large language models (LLMs) to act as speech
recognition post-processors that perform rescoring and error correction. Our
first focus is on instruction prompting to let LLMs perform these tasks without
fine-tuning, for which we evaluate different prompting schemes, both zero- and
few-shot in-context learning, and a novel task activation prompting method that
combines causal instructions and demonstrations to increase its context window.
Next, we show that rescoring only by in-context learning with frozen LLMs
achieves results that are competitive with rescoring by domain-tuned LMs, using
a pretrained first-pass recognition system and rescoring output on two
out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with
fine-tuning we achieve error rates below the N-best oracle level, showcasing
the generalization power of the LLMs.
|
http://arxiv.org/abs/2309.15649v2
|
Virtual Reality (VR) has the potential of becoming the next ubiquitous
computing platform. Continued progress in the burgeoning field of VR depends
critically on an efficient computing substrate. In particular, DRAM access
energy is known to contribute to a significant portion of system energy.
Today's framebuffer compression system alleviates the DRAM traffic by using a
numerically lossless compression algorithm. Being numerically lossless,
however, is unnecessary to preserve perceptual quality for humans. This paper
proposes a perceptually lossless, but numerically lossy, system to compress
DRAM traffic. Our idea builds on top of long-established psychophysical studies
that show that humans cannot discriminate colors that are close to each other.
The discrimination ability becomes even weaker (i.e., more colors are
perceptually indistinguishable) in our peripheral vision. Leveraging the color
discrimination (in)ability, we propose an algorithm that adjusts pixel colors
to minimize the bit encoding cost without introducing visible artifacts. The
algorithm is coupled with lightweight architectural support that, in real-time,
reduces the DRAM traffic by 66.9\% and outperforms existing framebuffer
compression mechanisms by up to 20.4\%. Psychophysical studies on human
participants show that our system introduces little to no perceptual fidelity
degradation.
|
http://arxiv.org/abs/2310.00441v1
|
BRC-20 (short for Bitcoin Request for Comment 20) token mania was a key
storyline in the middle of 2023. Setting it apart from conventional ERC-20
token standards on Ethereum, BRC-20 introduces non-fungibility to Bitcoin
through an editable field in each satoshi (0.00000001 Bitcoin, the smallest
unit), making them unique. In this paper, we pioneer the exploration of this
concept, covering its intricate mechanisms, features, and state-of-the-art
applications. By analyzing the multi-dimensional data spanning over months with
factual investigations, we conservatively comment that while BRC-20 expands
Bitcoin's functionality and applicability, it may still not match Ethereum's
abundance of decentralized applications and similar ecosystems.
|
http://arxiv.org/abs/2310.10652v1
|
Human motion capture either requires multi-camera systems or is unreliable
when using single-view input due to depth ambiguities. Meanwhile, mirrors are
readily available in urban environments and form an affordable alternative by
recording two views with only a single camera. However, the mirror setting
poses the additional challenge of handling occlusions of the real and mirrored images.
Going beyond existing mirror approaches for 3D human pose estimation, we
utilize mirrors for learning a complete body model, including shape and dense
appearance. Our main contribution is to extend articulated neural radiance
fields to include a notion of a mirror, making them sample-efficient over
potential occlusion regions. Together, our contributions realize a
consumer-level 3D motion capture system that starts from off-the-shelf 2D poses
by automatically calibrating the camera, estimating mirror orientation, and
subsequently lifting 2D keypoint detections to 3D skeleton pose that is used to
condition the mirror-aware NeRF. We empirically demonstrate the benefit of
learning a body model and accounting for occlusion in challenging mirror
scenes.
|
http://arxiv.org/abs/2309.04750v2
|
In this paper we give an explicit solution of Dzherbashyan-Caputo-fractional
Cauchy problems related to equations with derivatives of order $\nu k$, for
non-negative integer $k$ and $\nu>0$. The solution is obtained by connecting the
differential equation with the roots of the characteristic polynomial and it is
expressed in terms of Mittag-Leffler-type functions. Under some stricter
hypotheses the solution can be expressed as a linear combination of
Mittag-Leffler functions with common fractional order $\nu$. We establish a
probabilistic relationship between the solutions of differential problems with
order $\nu/m$ and $\nu$, for natural $m$. Finally, we use the described method
to solve fractional differential equations arising in the fractionalization of
partial differential equations related to the probability law of planar random
motions with finite velocities.
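For reference, the one-parameter Mittag-Leffler function appearing above is

$$ E_{\nu}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\nu k+1)},\qquad \nu>0, $$

which reduces to $e^{z}$ for $\nu=1$.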
|
http://arxiv.org/abs/2309.04988v1
|
3D perceptual representations are well suited for robot manipulation as they
easily encode occlusions and simplify spatial reasoning. Many manipulation
tasks require high spatial precision in end-effector pose prediction, which
typically demands high-resolution 3D feature grids that are computationally
expensive to process. As a result, most manipulation policies operate directly
in 2D, foregoing 3D inductive biases. In this paper, we introduce Act3D, a
manipulation policy transformer that represents the robot's workspace using a
3D feature field with adaptive resolutions dependent on the task at hand. The
model lifts 2D pre-trained features to 3D using sensed depth, and attends to
them to compute features for sampled 3D points. It samples 3D point grids in a
coarse-to-fine manner, featurizes them using relative-position attention, and
selects where to focus the next round of point sampling. In this way, it
efficiently computes 3D action maps of high spatial resolution. Act3D sets a
new state-of-the-art in RLBench, an established manipulation benchmark, where
it achieves a 10% absolute improvement over the previous SOTA 2D multi-view
policy on 74 RLBench tasks and a 22% absolute improvement with 3x less compute
over the previous SOTA 3D policy. We quantify the importance of relative
spatial attention, large-scale vision-language pre-trained 2D backbones, and
weight tying across coarse-to-fine attentions in ablative experiments. Code and
videos are available on our project website: https://act3d.github.io/.
|
http://arxiv.org/abs/2306.17817v2
|
Earthquakes are produced by the propagation of rapid slip along tectonic
faults. The propagation dynamics is governed by a balance between elastic
stored energy in the surrounding rock, and dissipated energy at the propagating
tip of the slipping patch. Energy dissipation is dictated by the mechanical
behaviour of the fault, which is itself the result of feedbacks between
thermo-hydro-mechanical processes acting at the mm to sub-mm scale. Here, we
numerically simulate shear ruptures using a dual scale approach, allowing us to
couple a sub-mm description of inner fault processes and km-scale
elastodynamics, and show that the sudden localisation of shear strain within a
shear zone leads to the emergence of classical cracks driven by a constant
fracture energy. The fracture energy associated with strain localisation is
substantially smaller than that predicted assuming uniform shearing. We show
the existence of a unique scaling law between the localised shearing width and
the rupture speed. Our results indicate that earthquakes are likely to be
systematically associated with extreme strain localisation.
|
http://arxiv.org/abs/2307.16820v1
|
In this paper, we present a novel method to automatically classify medical
images that learns and leverages weak causal signals in the image. Our
framework consists of a convolutional neural network backbone and a
causality-extractor module that extracts cause-effect relationships between
feature maps, informing the model about how the appearance of a feature in one
part of the image relates to the presence of another feature elsewhere. To
evaluate the effectiveness of our approach in low-data
scenarios, we train our causality-driven architecture in a One-shot learning
scheme, where we propose a new meta-learning procedure entailing meta-training
and meta-testing tasks that are designed using related classes but at different
levels of granularity. We conduct binary and multi-class classification
experiments on a publicly available dataset of prostate MRI images. To validate
the effectiveness of the proposed causality-driven module, we perform an
ablation study and conduct qualitative assessments using class activation maps
to highlight regions strongly influencing the network's decision-making
process. Our findings show that causal relationships among features play a
crucial role in enhancing the model's ability to discern relevant information
and yielding more reliable and interpretable predictions. This would make it a
promising approach for medical image classification tasks.
|
http://arxiv.org/abs/2309.10725v1
|
Simulation-based inference techniques are indispensable for parameter
estimation of mechanistic and simulable models with intractable likelihoods.
While traditional statistical approaches like approximate Bayesian computation
and Bayesian synthetic likelihood have been studied under well-specified and
misspecified settings, they often suffer from inefficiencies due to wasted
model simulations. Neural approaches, such as sequential neural likelihood
(SNL), avoid this wastage by utilising all model simulations to train a neural
surrogate for the likelihood function. However, the performance of SNL under
model misspecification is unreliable and can result in overconfident posteriors
centred around an inaccurate parameter estimate. In this paper, we propose a
novel SNL method, which through the incorporation of additional adjustment
parameters, is robust to model misspecification and capable of identifying
features of the data that the model is not able to recover. We demonstrate the
efficacy of our approach through several illustrative examples, where our
method gives more accurate point estimates and uncertainty quantification than
SNL.
|
http://arxiv.org/abs/2301.13368v2
|
In this work, we focus on a robotic unloading problem from visual
observations, where robots are required to autonomously unload stacks of
parcels using RGB-D images as their primary input source. While supervised and
imitation learning have accomplished good results in these types of tasks, they
heavily rely on labeled data, which are challenging to obtain in realistic
scenarios. Our study aims to develop a sample-efficient controller framework
that can learn unloading tasks without the need for labeled data during the
learning process. To tackle this challenge, we propose a hierarchical
controller structure that combines a high-level decision-making module with
classical motion control. The high-level module is trained using Deep
Reinforcement Learning (DRL), wherein we incorporate a safety bias mechanism
and design a reward function tailored to this task. Our experiments demonstrate
that both these elements play a crucial role in achieving improved learning
performance. Furthermore, to ensure reproducibility and establish a benchmark
for future research, we provide free access to our code and simulation.
|
http://arxiv.org/abs/2309.06621v1
|
Topological properties in quantum materials are often governed by symmetry
and tuned by crystal structure and external fields, and hence
symmetry-sensitive nonlinear optical measurements in a magnetic field are a
valuable probe. Here we report nonlinear magneto-optical second harmonic
generation (SHG) studies of non-magnetic topological materials including
bilayer WTe2, monolayer WSe2 and bulk TaAs. The polarization-resolved patterns
of optical SHG under magnetic field show nonlinear Kerr rotation in these
time-reversal symmetric materials. For materials with a three-fold rotationally
symmetric lattice structure, the SHG polarization pattern rotates only slightly
in a magnetic field, whereas in those with mirror or two-fold rotational
symmetry the SHG polarization pattern rotates greatly and distorts. These
different magneto-SHG characters can be understood by considering the
superposition of the magnetic field-induced time-noninvariant nonlinear optical
tensor and the crystal-structure-based time-invariant counterpart. The
situation is further clarified by scrutinizing the Faraday rotation, whose
subtle interplay with crystal symmetry accounts for the diverse behavior of the
extrinsic nonlinear Kerr rotation in different materials. Our work illustrates
the application of magneto-SHG techniques to directly probe nontrivial
topological properties, and underlines the importance of minimizing extrinsic
nonlinear Kerr rotation in polarization-resolved magneto-optical studies.
|
http://arxiv.org/abs/2309.09512v1
|
The main aim of this paper is to develop extreme value theory for
$\theta$-expansions. We get the limit distribution of the largest value of
$\theta$-continued fraction mixing stationary stochastic process and some
related results. These are analogous to the theorems of J. Galambos and
W. Philipp for the regular continued fractions. We also note that a
Borel-Bernstein type theorem plays an important role.
|
http://arxiv.org/abs/2309.12654v1
|
In the evaluation of the half-life of the neutrinoless double-$\beta$ decay
($0\nu\beta\beta$) of a doubly closed-subshell nucleus $^{96}$Zr, the structure
of the nucleus $^{96}$Mo is essentially important. The $\alpha$-clustering
aspects of $^{96}$Mo are investigated for the first time. By studying the
nuclear rainbows in $\alpha$ scattering from $^{92}$Zr at high energies and the
characteristic structure of the excitation functions at the extreme backward
angle at the low-energy region, the interaction potential between the $\alpha$
particle and the $^{92}$Zr nucleus is determined well in the double folding
model. The validity of the double folding model was reinforced by studying
$\alpha$ scattering from neighboring nuclei $^{90}$Zr, $^{91}$Zr, and
$^{94}$Zr. The double-folding-model calculations reproduced well all the
observed angular distributions over a wide range of incident energies and the
characteristic excitation functions. By using the obtained potential the
$\alpha$ +$^{92}$Zr cluster structure of $^{96}$Mo is investigated in the
spirit of a unified description of scattering and structure. The existence of
the second-higher nodal band states with the $\alpha$+ $^{92}$Zr cluster
structure, in which two more nodes are excited in the relative motion compared
with the ground band, is demonstrated. The calculation reproduces well the
ground-band states of $^{96}$Mo in agreement with experiment. The experimental
$B(E2)$ value of the transition in the ground band is also reproduced well. The
effect of $\alpha$ clustering in $^{96}$Mo on the half-life of the
$0\nu\beta\beta$ double-$\beta$ decay of $^{96}$Zr is discussed.
|
http://arxiv.org/abs/2303.17777v1
|
Early detection of cardiac dysfunction through routine screening is vital for
diagnosing cardiovascular diseases. An important metric of cardiac function is
the left ventricular ejection fraction (EF), where lower EF is associated with
cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology,
with ultrasound being a low-cost, real-time, and non-ionizing technology.
However, human assessment of echocardiograms for calculating EF is
time-consuming and expertise-demanding, raising the need for an automated
approach. In this work, we propose using the M(otion)-mode of echocardiograms
for estimating the EF and classifying cardiomyopathy. We generate multiple
artificial M-mode images from a single echocardiogram and combine them using
off-the-shelf model architectures. Additionally, we extend contrastive learning
(CL) to cardiac imaging to learn meaningful representations from exploiting
structures in unlabeled data allowing the model to achieve high accuracy, even
with limited annotations. Our experiments show that the supervised setting
converges with only ten modes and is comparable to the baseline method while
bypassing its cumbersome training process and being computationally much more
efficient. Furthermore, CL using M-mode images is helpful for limited data
scenarios, such as having labels for only 200 patients, which is common in
medical applications.
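A hedged sketch of the M-mode generation step described above, assuming the echocardiogram is available as a (frames, H, W) array; the fixed-scan-line choice is an arbitrary illustration, not the paper's exact sampling scheme:

```python
import numpy as np

def artificial_m_mode(video: np.ndarray, x0, y0, x1, y1, n=256):
    """Sample intensities along one scan line in every frame and stack
    them over time, yielding a (n, frames) M-mode image."""
    t = np.linspace(0.0, 1.0, n)
    ys = np.round(y0 + t * (y1 - y0)).astype(int)
    xs = np.round(x0 + t * (x1 - x0)).astype(int)
    return video[:, ys, xs].T  # rows: position along line, cols: time

# Multiple M-mode images can be generated by varying the line (e.g.
# rotating it about the image center) and combined by a classifier.
```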
|
http://arxiv.org/abs/2309.03759v1
|
Image camouflage has been utilized to create clean-label poisoned images for
implanting a backdoor into a DL model. But there exists a crucial limitation:
one attack/poisoned image can only fit a single input size of the DL model,
which greatly increases the attack budget when attacking multiple commonly
adopted input sizes of DL models. This work proposes to constructively craft an
attack image through camouflaging that can fit multiple DL models' input sizes
simultaneously, namely OmClic. Thus, through OmClic, we are able to always
implant a backdoor regardless of which common input size is chosen by the user
to train the DL model, given the same attack budget (i.e., a fraction of the
poisoning rate). With our camouflaging algorithm formulated as a
multi-objective optimization, M=5 input sizes can be concurrently targeted with
one attack image, whose artifact remains almost visually imperceptible.
Extensive evaluations validate that the proposed OmClic can reliably succeed in
various settings using diverse types of images. Further experiments on
OmClic-based backdoor insertion into DL models show that high backdoor
performance (i.e., attack success rate and clean data accuracy) is achievable
no matter which common input size is randomly chosen by the user to train the
model. The OmClic-based backdoor attack budget is thus reduced by M$\times$
compared to the state-of-the-art camouflage-based backdoor attack as a
baseline. Significantly, the same set of OmClic-based poisonous attack images
is transferable to different model architectures for backdoor implantation.
http://arxiv.org/abs/2309.04036v2
|
The lattice Boltzmann method, after close to thirty years of presence in
computational fluid dynamics, has turned into a versatile, efficient and quite
popular numerical tool for fluid flow simulations. The lattice Boltzmann method
owes its popularity in the past decade to its efficiency, low numerical
dissipation and the simplicity of its algorithm. Progress in recent years has
opened the door for yet another very challenging area of application:
combustion simulations. Combustion is known to be a challenge for numerical
tools due to, among many other factors, the large number of variables and
scales in both time and space, leading to a stiff multi-scale problem. In the
present work we give a comprehensive overview of models and strategies
developed in the past years to model combustion with the lattice Boltzmann
method and discuss some of the most recent applications, remaining challenges
and prospects.
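For orientation, a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann update looks as follows; this is a generic textbook sketch with periodic boundaries, not one of the combustion models surveyed in the paper:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium truncated to second order in u."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream update of the populations f[q, x, y]."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau        # BGK collision
    for q in range(9):                             # periodic streaming
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
    return f
```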
|
http://arxiv.org/abs/2309.07517v2
|
We show that the dynamics of a quantum system can be represented by the
dynamics of an underlying classical system obeying the Hamilton equations of
motion. This is achieved by transforming the phase space of dimension $2n$ into
a Hilbert space of dimension $n$, which is obtained by a peculiar canonical
transformation that changes a pair of real canonical variables into a pair of
complex canonical variables which are complex conjugates of each other. The
probabilistic character of quantum mechanics is recovered by treating the wave
function as a stochastic variable. The dynamics of the underlying system is
chosen so as to preserve the norm of the state vector.
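A canonical transformation of the kind described is, as a sketch (normalization conventions vary),

$$ a=\frac{q+\mathrm{i}p}{\sqrt{2}},\qquad a^{*}=\frac{q-\mathrm{i}p}{\sqrt{2}}, $$

under which Hamilton's equations $\dot q=\partial H/\partial p$ and $\dot p=-\partial H/\partial q$ take the Schr\"odinger-like form $\mathrm{i}\dot a=\partial H/\partial a^{*}$.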
|
http://arxiv.org/abs/2308.00151v1
|
The Cherenkov Telescope Array (CTA) will be the next-generation observatory
in the field of very-high-energy (20 GeV to 300 TeV) gamma-ray astroparticle
physics. The traditional approach to data analysis in this field is to apply
quality cuts, optimized using Monte Carlo simulations, on the data acquired to
maximize sensitivity. Subsequent steps of the analysis typically use the
surviving events to calculate one set of instrument response functions (IRFs)
to physically interpret the results. However, an alternative approach is the
use of event types, as implemented in experiments such as the Fermi-LAT. This
approach divides events into sub-samples based on their reconstruction quality,
and a set of IRFs is calculated for each sub-sample. The sub-samples are then
combined in a joint analysis, treating them as independent observations. In
previous works we demonstrated that event types, classified using Machine
Learning methods according to their expected angular reconstruction quality,
have the potential to significantly improve the CTA angular and energy
resolution of a point-like source analysis. Now, we have validated the
production of event-type-wise full-enclosure IRFs, ready to be used with
science tools (such as Gammapy and ctools). We will report on the impact of
using such an event-type classification on CTA high-level performance,
compared to the traditional procedure.
|
http://arxiv.org/abs/2309.11375v1
|
Many magnetic materials are predicted to exhibit bosonic topological edge
modes in their excitation spectra, because of the nontrivial topology of their
magnon, triplon or other quasi-particle band structures. However, there is a
discrepancy between theory prediction and experimental observation, which
suggests some underlying mechanism that intrinsically suppresses the expected
experimental signatures, like the thermal Hall current. Many-body interactions
that are not accounted for in the non-interacting quasi-particle picture are
most often identified as the reason for the absence of the topological edge
modes. Here we report stable bosonic edge modes at the boundaries of a ladder
quantum paramagnet with gapped triplon excitations in the presence of the full
many-body interaction. For the first time, we use tensor network methods to
resolve topological edge modes in the time-dependent spin-spin correlations and
the dynamical structure factor, which is directly accessible experimentally. We
further show that these edge modes have anomalously long time coherence,
discuss the topological phase diagram of the model, demonstrate the
fractionalization of its low-lying excitations, and propose potential material
candidates.
|
http://arxiv.org/abs/2309.15113v1
|
This paper presents a state-of-the-art solution to the LongEval CLEF 2023 Lab
Task 2: LongEval-Classification. The goal of this task is to improve and
preserve the performance of sentiment analysis models across shorter and longer
time periods. Our framework feeds date-prefixed textual inputs to a pre-trained
language model, where the timestamp is included in the text. We show that
date-prefixed samples better condition model outputs on the temporal context
of the respective texts. Moreover, we further boost performance by performing
self-labeling on unlabeled data to train a student model. We augment the
self-labeling process using a novel augmentation strategy leveraging the
date-prefixed formatting of our samples. We demonstrate concrete performance
gains on the LongEval-Classification evaluation set over non-augmented
self-labeling. Our framework achieves a 2nd place ranking with an overall score
of 0.6923 and reports the best Relative Performance Drop (RPD) of -0.0656 over
the short evaluation set.
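A minimal sketch of the date-prefixed input format described above (the exact prefix template is an assumption for illustration):

```python
def date_prefix(text: str, date: str) -> str:
    """Prepend the sample's timestamp so the model can condition on it."""
    return f"date: {date} text: {text}"

print(date_prefix("the new phone's battery life is incredible", "2021-02"))
# -> date: 2021-02 text: the new phone's battery life is incredible
```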
|
http://arxiv.org/abs/2309.13562v1
|
The intricacies of black hole ringdown analysis are amplified by the absence
of a complete set of orthogonal basis functions for quasinormal modes. Although
damped sinusoids effectively fit the ringdown signals from binary black hole
mergers, the risk of overfitting remains, due to initial transients and
nonlinear effects. In light of this challenge, we introduce two methods for
extracting quasinormal modes in numerical simulations and qualitatively study
how the transient might affect quasinormal mode fitting. In one method, we
accurately fit quasinormal modes by using their spatial functional form at
constant time hypersurfaces, while in the other method, we exploit both spatial
and temporal aspects of the quasinormal modes. Both fitting methods leverage
the spatial behavior of quasinormal eigenfunctions to enhance accuracy,
outperforming conventional time-only fitting techniques at null infinity. We
also show that we can construct an inner product for which the quasinormal
eigenfunctions form an orthonormal (but not complete) set. We then conduct
numerical experiments involving linearly perturbed Kerr black holes in horizon
penetrating, hyperboloidally compactified coordinates, as this setup enables a
more precise isolation and examination of the ringdown phenomenon. From
solutions to the Teukolsky equation, describing scattering of an ingoing
gravitational wave pulse, we find that the contributions from early-time
transients can lead to large uncertainties in the fit to the amplitudes of
higher overtones ($n\geq 3$). While the methods we discuss here cannot be
applied directly to data from merger observations, our findings underscore the
persistence of ambiguities in interpreting ringdown signals, even with access
to both temporal and spatial information.
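For context, the damped-sinusoid ringdown model referred to above takes the standard form

$$ h(t)=\sum_{\ell m n} A_{\ell m n}\, e^{-t/\tau_{\ell m n}}\cos\left(\omega_{\ell m n}t+\phi_{\ell m n}\right), $$

where $n$ labels the overtones whose fitted amplitudes become uncertain in the presence of early-time transients ($n\geq 3$ above).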
|
http://arxiv.org/abs/2309.13204v3
|
Software frameworks for behaviour are critical in robotics as they enable the
correct and efficient execution of functions. While modern behaviour systems
have improved their composability, they do not focus on smooth transitions and
often lack functionality. In this work, we present the Director, a novel
behaviour framework that addresses these problems. It has functionality for
soft transitions, multiple implementations of the same action chosen based on
conditionals, and strict resource control. The system was successfully used in
the 2022/2023 Virtual Season and RoboCup 2023 Bordeaux, in the Humanoid Kid
Size League. It is implemented at https://github.com/NUbots/DirectorSoccer,
which also contains over thirty automated tests and technical documentation on
its implementation in NUClear.
|
http://arxiv.org/abs/2309.09248v2
|
Radiofrequency capacitively coupled plasma is studied theoretically using a
Particle-in-Cell code. For a He discharge, the time-averaged sheaths are in the
range of a few centimeters. The sheath potential, the ion and electron energy
and angular distributions, the discharge current, and the dissipated power
depend on the driving potentials and frequencies. Increasing the amplitude of
the high radio frequency increases the bulk density and the sheath potential
and, consequently, increases the plasma processing rate. Increasing the
intermediate radio frequency amplitude allows a wider sheath with a broad ion
energy distribution and a narrower ion angular distribution. Changing the
amplitude and the phase shift between the driving frequencies provides
different energy and angular distributions, allowing various processes to be
performed. The interplay between the sheath and bulk dynamics in the
intermediate radio frequency regime and the high-frequency regime may excite
harmonics in the discharge current.
|
http://arxiv.org/abs/2309.16368v1
|
Optical nanofiber cavity research has mainly focused on the fundamental mode.
Here, a Fabry-P\'erot fiber cavity with an optical nanofiber supporting the
higher-order modes, TE01, TM01, HE21o, and HE21e, is demonstrated. Using cavity
spectroscopy, with mode imaging and analysis, we observe cavity resonances that
exhibit complex, inhomogeneous states of polarization with topological features
containing Stokes singularities such as C-points, Poincar\'e vortices, and
L-lines. In situ tuning of the intracavity birefringence enables the desired
profile and polarization of the cavity mode to be obtained. These findings open
new research possibilities for cold atom manipulation and multimode cavity
quantum electrodynamics using the evanescent fields of higher-order mode
optical nanofibers.
|
http://arxiv.org/abs/2301.13432v1
|
In recent years, the excitation of surface phonon polaritons (SPhPs) in van der
Waals materials has received wide attention from the nanophotonics community.
Alpha-phase molybdenum trioxide ($\alpha$-MoO3), a naturally occurring biaxial
hyperbolic crystal, emerged as a promising polaritonic material due to its
ability to support SPhPs for three orthogonal directions at different
wavelength bands (range 10-20 $\mu$m). Here, we report on the fabrication and
IR characterization of large-area (over 1 cm$^2$ size) $\alpha$-MoO3
polycrystalline films deposited on fused silica substrates by pulsed laser
deposition. Single alpha-phase MoO3 films exhibiting a polarization-dependent
reflection peak at 1006 cm$^{-1}$ with a resonance Q-factor as high as 53 were
achieved. Reflection can be tuned via changing incident polarization with a
dynamic range of $\Delta$R=0.3 at 45 deg. incidence angle. We also report a
polarization-independent almost perfect absorption condition (R<0.01) at 972
cm$^{-1}$ which is preserved for a broad angle of incidence. The development of
a low-cost polaritonic platform with high-Q resonances in the mid-infrared
(mid-IR) range is crucial for a wide number of functionalities including
sensors, filters, thermal emitters, and label-free biochemical sensing devices.
In this framework, our findings appear extremely promising for the further
development of lithography-free, scalable films for efficient and large-scale
devices operating in free space, using far-field detection setups.
|
http://arxiv.org/abs/2309.13210v1
|
Visualization of dynamic processes in scientific high-performance computing
is an immensely data intensive endeavor. Application codes have recently
demonstrated scaling to full-size Exascale machines, and generating
high-quality data for visualization is consequently a machine-scale task,
easily spanning 100s of TBytes of input to generate a single video frame. In
situ visualization, the technique to consume the many-node decomposed data
in-memory, as exposed by applications, is the dominant workflow. Although in
situ visualization has achieved tremendous progress in the last decade, scaling
to system-size together with the application codes that produce its data, there
is one important question that we cannot skip: is what we produce insightful
and inspiring?
|
http://arxiv.org/abs/2310.00469v1
|
Self-supervised learning (SSL) is at the origin of unprecedented improvements
in many different domains including computer vision and natural language
processing. Speech processing drastically benefitted from SSL as most of the
current domain-related tasks are now being approached with pre-trained models.
This work introduces LeBenchmark 2.0, an open-source framework for assessing and
building SSL-equipped French speech technologies. It includes documented,
large-scale and heterogeneous corpora with up to 14,000 hours of heterogeneous
speech, ten pre-trained SSL wav2vec 2.0 models containing from 26 million to
one billion learnable parameters shared with the community, and an evaluation
protocol made of six downstream tasks to complement existing benchmarks.
LeBenchmark 2.0 also presents unique perspectives on pre-trained SSL models for
speech with the investigation of frozen versus fine-tuned downstream models,
task-agnostic versus task-specific pre-trained models as well as a discussion
on the carbon footprint of large-scale model training. Overall, the newly
introduced models trained on 14,000 hours of French speech outperform
multilingual and previous LeBenchmark SSL models across the benchmark but also
required up to four times more energy for pre-training.
|
http://arxiv.org/abs/2309.05472v2
|
In this paper we consider stationary states of the SSH model for infinite
polyacetylene chains that are homoclinic or heteroclinic connections between
two-periodic dimerized states. We prove that such connections converge
exponentially fast to the corresponding asymptotic periodic states.
|
http://arxiv.org/abs/2308.00145v1
|
This paper proposes a multi-spectral random forest classifier with suitable
feature selection and masking for tree cover estimation in urban areas. The key
feature of the proposed classifier is filtering out the built-up region using
spectral indices followed by random forest classification on the remaining mask
with carefully selected features. Using Sentinel-2 satellite imagery, we
evaluate the performance of the proposed technique on a specified area
(approximately 82 acres) of Lahore University of Management Sciences (LUMS) and
demonstrate that our method outperforms a conventional random forest classifier
as well as state-of-the-art methods such as the European Space Agency (ESA)
WorldCover 10m 2020 product and a DeepLabv3 deep learning architecture.
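A hedged sketch of the pipeline described above, assuming Sentinel-2 bands as co-registered 2D arrays; the index thresholds and feature choices are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def ndbi(swir, nir):
    # Normalized difference built-up index: high over built-up areas.
    return (swir - nir) / (swir + nir + 1e-9)

def tree_cover_map(bands, labels, train_mask, ndbi_thresh=0.0):
    """Mask out built-up pixels with a spectral index, then run a random
    forest on selected features over the remaining mask."""
    veg_mask = ndbi(bands["swir"], bands["nir"]) < ndbi_thresh
    features = np.stack([bands["red"], bands["green"], bands["blue"],
                         bands["nir"],
                         ndvi(bands["nir"], bands["red"])], axis=-1)
    X = features[train_mask & veg_mask]
    y = labels[train_mask & veg_mask]
    rf = RandomForestClassifier(n_estimators=200).fit(X, y)
    out = np.zeros(veg_mask.shape, dtype=int)  # 0 = built-up / non-tree
    out[veg_mask] = rf.predict(features[veg_mask])
    return out
```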
|
http://arxiv.org/abs/2306.06073v1
|
With most modern visualization tools, authors need to transform their data
into tidy formats to create visualizations they want. Because this requires
experience with programming or separate data processing tools, data
transformation remains a barrier in visualization authoring. To address this
challenge, we present a new visualization paradigm, concept binding, that
separates high-level visualization intents and low-level data transformation
steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an
interactive visualization authoring tool. With Data Formulator, authors first
define data concepts they plan to visualize using natural language or
examples, and then bind them to visual channels. Data Formulator then
dispatches its AI-agent to automatically transform the input data to surface
these concepts and generate desired visualizations. When presenting the results
(transformed table and output visualizations) from the AI agent, Data
Formulator provides feedback to help authors inspect and understand them. A
user study with 10 participants shows that participants could learn and use
Data Formulator to create visualizations that involve challenging data
transformations, and presents interesting future research directions.
|
http://arxiv.org/abs/2309.10094v2
|
Mixed Reality (MR) allows users to interact with digital objects in a
physical environment, but several limitations have hampered widespread
adoption. Physiologically adaptive systems detecting users' states can drive
interaction and address these limitations. Here, we highlight potential
usability and interaction limitations in MR and how physiologically adaptive
systems can benefit MR experiences and applications. We specifically address
potential applications for human factors and operational settings such as
healthcare, education, and entertainment. We further discuss benefits and
applications in light of ethical and privacy concerns. The use of
physiologically adaptive systems in MR has the potential to revolutionize
human-computer interactions and provide users with a more personalized and
engaging experience.
|
http://arxiv.org/abs/2303.17978v1
|
By ultrafast x-ray diffraction we show that the laser-induced
magnetostructural phase transition in FeRh nanoislands proceeds faster and more
completely than in continuous films. We observe an intrinsic 8 ps timescale for
nucleation of ferromagnetic (FM) domains in both types of samples. For the
continuous film, the substrate-near regions, which are not directly exposed to
light, are only slowly transformed to the FM state by domain wall motion
following heat transport. In contrast, numerical modeling of the plasmonic
absorption in the investigated nanostructure reveals a strong contribution near
the FeRh/MgO interface. On average, the absorption is larger and more
homogeneous in the nanoislands, enabling the phase transition throughout the
entire volume at the intrinsic nucleation timescale.
|
http://arxiv.org/abs/2309.12683v2
|
Co-salient Object Detection (CoSOD) endeavors to replicate the human visual
system's capacity to recognize common and salient objects within a collection
of images. Despite recent advancements in deep learning models, these models
still rely on training with well-annotated CoSOD datasets. The exploration of
training-free zero-shot CoSOD frameworks has been limited. In this paper,
taking inspiration from the zero-shot transfer capabilities of foundational
computer vision models, we introduce the first zero-shot CoSOD framework that
harnesses these models without any training process. To achieve this, we
introduce two novel components in our proposed framework: the group prompt
generation (GPG) module and the co-saliency map generation (CMP) module. We
evaluate the framework's performance on widely-used datasets and observe
impressive results. Our approach surpasses existing unsupervised methods and
even outperforms fully supervised methods developed before 2020, while
remaining competitive with some fully supervised methods developed before 2022.
|
http://arxiv.org/abs/2309.05499v3
|
We obtain an analytical solution for the time-optimal control problem in the
induction phase of anesthesia. Our solution is shown to align numerically with
the results obtained from the conventional shooting method. The induction phase
of anesthesia relies on a pharmacokinetic/pharmacodynamic (PK/PD) model
proposed by Bailey and Haddad in 2005 to regulate the infusion of propofol. In
order to evaluate our approach and compare it with existing results in the
literature, we examine a minimum-time problem for anesthetizing a patient. By
applying the Pontryagin minimum principle, we introduce the shooting method as
a means to solve the problem at hand. Additionally, we conducted numerical
simulations using the MATLAB computing environment. We solve the time-optimal
control problem using our newly proposed analytical method and discover that
the optimal continuous infusion rate of the anesthetic and the minimum required
time for transition from the awake state to an anesthetized state exhibit
similarity between the two methods. However, the advantage of our new analytic
method lies in its independence from unknown initial conditions for the adjoint
variables.
|
http://arxiv.org/abs/2309.04787v1
|
Reinforcement learning (RL) for bipedal locomotion has recently demonstrated
robust gaits over moderate terrains using only proprioceptive sensing. However,
such blind controllers will fail in environments where robots must anticipate
and adapt to local terrain, which requires visual perception. In this paper, we
propose a fully-learned system that allows bipedal robots to react to local
terrain while maintaining commanded travel speed and direction. Our approach
first trains a controller in simulation using a heightmap expressed in the
robot's local frame. Next, data is collected in simulation to train a heightmap
predictor, whose input is the history of depth images and robot states. We
demonstrate that with appropriate domain randomization, this approach allows
for successful sim-to-real transfer with no explicit pose estimation and no
fine-tuning using real-world data. To the best of our knowledge, this is the
first example of sim-to-real learning for vision-based bipedal locomotion over
challenging terrains.
|
http://arxiv.org/abs/2309.14594v2
|
In recent years new deep learning approaches to solve combinatorial
optimization problems, in particular NP-hard Vehicle Routing Problems (VRP),
have been proposed. The most impactful of these methods are sequential neural
construction approaches which are usually trained via reinforcement learning.
Due to the high training costs of these models, they usually are trained on
limited instance sizes (e.g. serving 100 customers) and later applied to vastly
larger instance sizes (e.g. 2000 customers). By means of a systematic scale-up
study we show that even state-of-the-art neural construction methods are
outperformed by simple heuristics, failing to generalize to larger problem
instances. We propose to use the ruin recreate principle that alternates
between completely destroying a localized part of the solution and then
recreating an improved variant. In this way, neural construction methods like
POMO are never applied to the global problem but just in the reconstruction
step, which only involves partial problems much closer in size to their
original training instances. In thorough experiments on four datasets of
varying distributions and modalities we show that our neural ruin recreate
approach outperforms alternative forms of improving construction methods such
as sampling and beam search and in several experiments also advanced local
search approaches.
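A minimal sketch of the ruin-recreate loop described above; neural_recreate stands in for a trained construction policy such as POMO applied to the partial problem (a hypothetical placeholder, not the authors' code):

```python
import random

def ruin(solution, frac=0.1):
    """Destroy a localized part of the solution: empty a few randomly
    chosen routes (simplified illustration of the ruin step)."""
    routes = [list(r) for r in solution]
    k = max(1, int(frac * len(routes)))
    removed = []
    for r in random.sample(routes, k):
        removed.extend(r)
        r.clear()
    return [r for r in routes if r], removed

def neural_recreate(partial_solution, removed_customers):
    """Hypothetical: query a trained construction model (e.g. POMO) to
    reinsert the removed customers into the partial solution."""
    raise NotImplementedError

def ruin_recreate(solution, cost, n_iters=1000):
    best = solution
    for _ in range(n_iters):
        partial, removed = ruin(best)
        candidate = neural_recreate(partial, removed)
        if cost(candidate) < cost(best):
            best = candidate
    return best
```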
|
http://arxiv.org/abs/2309.17089v1
|
Xenes, two-dimensional (2D) monolayers composed of a single element, with
graphene as a typical representative, have attracted widespread attention. Most
of the previously studied Xenes, with X from group-IIIA to group-VIA elements,
have the bonding characteristics of covalent bonds. In this work, we for the
first time unveil
the pivotal role of a halogen bond, which is a distinctive type of bonding with
interaction strength between that of a covalent bond and a van der Waals
interaction, in 2D group-VIIA monolayers. Combining the ingenious
non-edge-to-edge tiling theory and a state-of-the-art ab initio method with the
refined local density functional M06-L, we provide a precise and effective
bottom-up
construction of 2D iodine monolayer sheets, iodinenes, primarily governed by
halogen bonds, and successfully design a category of stable iodinenes,
encompassing herringbone, Pythagorean, gyrated truncated hexagonal, i.e.
diatomic-kagome, and gyrated hexagonal tiling pattern. These iodinene
structures exhibit a wealth of properties, such as flat bands, nontrivial
topology, and fascinating optical characteristics, offering valuable insights
and guidance for future experimental investigations. Our work not only unveils
the unexplored halogen bonding mechanism in 2D materials but also opens a new
avenue for designing other non-covalent bonding 2D materials.
|
http://arxiv.org/abs/2309.06184v2
|
Offline Reinforcement Learning (RL) enables policy learning without active
interactions, making it especially appealing for self-driving tasks. Recent
successes of Transformers inspire casting offline RL as sequence modeling,
which, however, fails in stochastic environments with incorrect assumptions
that identical actions can consistently achieve the same goal. In this paper,
we introduce an UNcertainty-awaRE deciSion Transformer (UNREST) for planning in
stochastic driving environments without introducing additional transition or
complex generative models. Specifically, UNREST estimates uncertainties by
conditional mutual information between transitions and returns. Discovering
'uncertainty accumulation' and 'temporal locality' properties of driving
environments, we replace the global returns in decision transformers with
truncated returns less affected by environments to learn from actual outcomes
of actions rather than environment transitions. We also dynamically evaluate
uncertainty at inference for cautious planning. Extensive experiments
demonstrate UNREST's superior performance in various driving scenarios and the
power of our uncertainty estimation strategy.
|
http://arxiv.org/abs/2309.16397v3
|
We introduce the Sparsity Roofline, a visual performance model for evaluating
sparsity in neural networks. The Sparsity Roofline jointly models network
accuracy, sparsity, and theoretical inference speedup. Our approach does not
require implementing and benchmarking optimized kernels, and the theoretical
speedup becomes equal to the actual speedup when the corresponding dense and
sparse kernels are well-optimized. We achieve this through a novel analytical
model for predicting sparse network performance, and validate the predicted
speedup using several real-world computer vision architectures pruned across a
range of sparsity patterns and degrees. We demonstrate the utility and
ease-of-use of our model through two case studies: (1) we show how machine
learning researchers can predict the performance of unimplemented or
unoptimized block-structured sparsity patterns, and (2) we show how hardware
designers can predict the performance implications of new sparsity patterns and
sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps
performance experts identify sparsity regimes with the highest performance
potential.
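
A roofline-style estimate of the theoretical sparse speedup can be sketched in a few lines: each kernel is bound by the slower of compute and memory traffic, and pruning shrinks both. This is a minimal sketch assuming an idealized format with no index overhead; the hardware and layer numbers are placeholders, not figures from the paper.

    def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
        # A kernel is bound by whichever is slower: compute or memory traffic.
        return max(flops / peak_flops, bytes_moved / peak_bw)

    def sparsity_speedup(flops, bytes_moved, sparsity, peak_flops, peak_bw):
        # Idealized: FLOPs and traffic shrink with the fraction of nonzeros
        # kept; a real sparse format adds index overhead ignored here.
        dense = roofline_time(flops, bytes_moved, peak_flops, peak_bw)
        keep = 1.0 - sparsity
        sparse = roofline_time(flops * keep, bytes_moved * keep,
                               peak_flops, peak_bw)
        return dense / sparse

    # Hypothetical layer: 2 GFLOPs, 40 MB of traffic, on a 10 TFLOP/s,
    # 1 TB/s accelerator (placeholder numbers).
    print(sparsity_speedup(2e9, 4e7, sparsity=0.9,
                           peak_flops=1e13, peak_bw=1e12))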
|
http://arxiv.org/abs/2310.00496v2
|
In this paper we present the first investigation into the effectiveness of
Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the
task of automatically labelling an observation with a corresponding failure
mode code, is a critical task in the maintenance domain as it reduces the need
for reliability engineers to spend their time manually analysing work orders.
We detail our approach to prompt engineering to enable an LLM to predict the
failure mode of a given observation using a restricted code list. We
demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on
annotated data is a significant improvement over a currently available text
classification model (F1=0.60) trained on the same annotated data set. The
fine-tuned model also outperforms the out-of-the-box GPT-3.5 (F1=0.46). This
investigation reinforces the need for high quality fine-tuning data sets for
domain-specific tasks using LLMs.
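
A restricted-code-list prompt of the kind described might look as follows; the failure-mode list and wording are hypothetical, as the paper's exact prompt is not reproduced here.

    # Hypothetical prompt construction; the paper's exact wording and code
    # list are not reproduced here.
    FAILURE_MODES = ["leak", "vibration", "overheating", "electrical fault",
                     "blockage", "corrosion"]

    def build_fmc_prompt(observation: str) -> str:
        codes = ", ".join(FAILURE_MODES)
        return (
            "You are a reliability engineer. Classify the maintenance work\n"
            f"order below with exactly one failure mode from this list: {codes}.\n"
            "Answer with the failure mode only.\n\n"
            f"Work order: {observation}"
        )

    print(build_fmc_prompt("Pump seal dripping oil at drive end"))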
|
http://arxiv.org/abs/2309.08181v1
|
The Weyl-Wigner representation of quantum mechanics allows one to map the
density operator to a function in phase space - the Wigner function - which
acts like a probability distribution. In the context of statistical mechanics,
this mapping makes the transition from the classical to the quantum regimes
very clear, because the thermal Wigner function tends to the Boltzmann
distribution in the high temperature limit. We approximate this quantum phase
space representation of the canonical density operator for general temperatures
in terms of classical trajectories, which are obtained through a Wick rotation
of the semiclassical approximation for the Weyl propagator. A numerical scheme
which allows us to apply the approximation for a broad class of systems is also
developed. The approximation is assessed by testing it against systems with one
and two degrees of freedom, which shows that, for a considerable range of
parameters, the thermodynamic averages are well reproduced.
|
http://arxiv.org/abs/2307.16613v2
|
The $2$-packing number $\rho_2(G)$ of a graph $G$ is the cardinality of a
largest $2$-packing of $G$ and the open packing number $\rho^{\rm o}(G)$ is the
cardinality of a largest open packing of $G$, where an open packing (resp.
$2$-packing) is a set of vertices in $G$ whose open (resp. closed)
neighborhoods are pairwise disjoint. It is proved that if $G$ is bipartite, then $\rho^{\rm o}(G\Box K_2)
= 2\rho_2(G)$. For hypercubes, the lower bounds $\rho_2(Q_n) \ge 2^{n - \lfloor
\log n\rfloor -1}$ and $\rho^{\rm o}(Q_n) \ge 2^{n - \lfloor \log (n-1)\rfloor
-1}$ are established. These findings are applied to injective colorings of
hypercubes. In particular, it is demonstrated that $Q_9$ is the smallest
hypercube which is not perfect injectively colorable. It is also proved that
$\gamma_t(Q_{2^k}\times H) = 2^{2^k-k}\gamma_t(H)$, where $H$ is an arbitrary
graph with no isolated vertices.
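
The open packing bound can be checked by brute force on a small case. The sketch below, assuming networkx is available, confirms $\rho^{\rm o}(Q_3) = 2$, matching the stated lower bound $2^{3 - \lfloor \log 2\rfloor - 1} = 2$ with equality.

    import itertools
    import networkx as nx

    def is_open_packing(G, S):
        # Open neighborhoods of distinct vertices in S must be disjoint.
        nbrs = [set(G[v]) for v in S]
        return all(nbrs[i].isdisjoint(nbrs[j])
                   for i in range(len(S)) for j in range(i + 1, len(S)))

    def open_packing_number(G):
        nodes = list(G)
        for k in range(len(nodes), 0, -1):
            if any(is_open_packing(G, S)
                   for S in itertools.combinations(nodes, k)):
                return k
        return 0

    Q3 = nx.hypercube_graph(3)
    print(open_packing_number(Q3))  # -> 2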
|
http://arxiv.org/abs/2309.04963v1
|
Although syntactic information is beneficial for many NLP tasks, combining it
with contextual information between words to solve the coreference resolution
problem needs to be further explored. In this paper, we propose an end-to-end
parser that combines pre-trained BERT with a Syntactic Relation Graph Attention
Network (RGAT) to take a deeper look into the role of syntactic dependency
information for the coreference resolution task. In particular, the RGAT model
is first proposed, then used to understand the syntactic dependency graph and
learn better task-specific syntactic embeddings. An integrated architecture
incorporating BERT embeddings and syntactic embeddings is constructed to
generate blending representations for the downstream task. Our experiments on a
public Gendered Ambiguous Pronouns (GAP) dataset show that with the supervision
learning of the syntactic dependency graph and without fine-tuning the entire
BERT, we increased the F1-score of the previous best model (RGCN-with-BERT)
from 80.3% to 82.5%, compared to the F1-score by single BERT embeddings from
78.5% to 82.5%. Experimental results on another public dataset - OntoNotes 5.0
demonstrate that the performance of the model is also improved by incorporating
syntactic dependency information learned from RGAT.
|
http://arxiv.org/abs/2309.04977v1
|
The sharing-economy-based business model has recently seen success in the
transportation and accommodation sectors with companies like Uber and Airbnb.
There is growing interest in applying this model to energy systems, with
modalities like peer-to-peer (P2P) Energy Trading, Electric Vehicles (EV)-based
Vehicle-to-Grid (V2G), Vehicle-to-Home (V2H), Vehicle-to-Vehicle (V2V), and
Battery Swapping Technology (BST). In this work, we exploit the increasing
diffusion of EVs to realize a crowdsourcing platform called e-Uber that jointly
enables ride-sharing and energy-sharing through V2G and BST. e-Uber exploits
spatial crowdsourcing, reinforcement learning, and reverse auction theory.
Specifically, the platform uses reinforcement learning to understand the
drivers' preferences towards different ride-sharing and energy-sharing tasks.
Based on these preferences, a personalized list is recommended to each driver
through CMAB-based Algorithm for task Recommendation System (CARS). Drivers bid
on their preferred tasks in their list in a reverse auction fashion. Then
e-Uber solves the task assignment optimization problem that minimizes cost and
guarantees V2G energy requirement. We prove that this problem is NP-hard and
introduce a bipartite matching-inspired heuristic, Bipartite Matching-based
Winner selection (BMW), that has polynomial time complexity. Results from
experiments using real data from NYC taxi trips and energy consumption show
that e-Uber performs close to the optimum and finds better solutions compared
to a state-of-the-art approach.
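
The winner-selection step can be illustrated with an assignment solver on a small driver-task cost matrix; this sketch uses SciPy's Hungarian method as a stand-in for the BMW heuristic, with entirely hypothetical costs and bids.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    # Hypothetical cost of assigning each of 5 drivers (rows) to 6 tasks
    # (columns); np.inf marks tasks a driver did not bid on.
    cost = rng.uniform(1.0, 10.0, size=(5, 6))
    cost[0, 3] = np.inf  # driver 0 placed no bid on task 3

    # Replace non-bids with a large finite penalty so the solver avoids them.
    finite = np.where(np.isinf(cost), 1e6, cost)
    rows, cols = linear_sum_assignment(finite)
    for d, t in zip(rows, cols):
        if finite[d, t] < 1e6:
            print(f"driver {d} -> task {t} at cost {cost[d, t]:.2f}")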
|
http://arxiv.org/abs/2304.04753v1
|
We formulate monocular depth estimation using denoising diffusion models,
inspired by their recent successes in high fidelity image generation. To that
end, we introduce innovations to address problems arising due to noisy,
incomplete depth maps in training data, including step-unrolled denoising
diffusion, an $L_1$ loss, and depth infilling during training. To cope with the
limited availability of data for supervised training, we leverage pre-training
on self-supervised image-to-image translation tasks. Despite the simplicity of
the approach, with a generic loss and architecture, our DepthGen model achieves
SOTA performance on the indoor NYU dataset, and near SOTA results on the
outdoor KITTI dataset. Further, with a multimodal posterior, DepthGen naturally
represents depth ambiguity (e.g., from transparent surfaces), and its zero-shot
performance combined with depth imputation, enable a simple but effective
text-to-3D pipeline. Project page: https://depth-gen.github.io
|
http://arxiv.org/abs/2302.14816v1
|
Deploying service robots in our daily life, whether in restaurants,
warehouses or hospitals, calls for the need to reason on the interactions
happening in dense and dynamic scenes. In this paper, we present and benchmark
three new approaches to model and predict multi-agent interactions in dense
scenes, including the use of an intuitive qualitative representation. The
proposed solutions take into account static and dynamic context to predict
individual interactions. They exploit an input- and a temporal-attention
mechanism, and are tested on medium and long-term time horizons. The first two
approaches integrate different relations from the so-called Qualitative
Trajectory Calculus (QTC) within a state-of-the-art deep neural network to
create a symbol-driven neural architecture for predicting spatial interactions.
The third approach implements a purely data-driven network for motion
prediction, the output of which is post-processed to predict QTC spatial
interactions. Experimental results on a popular robot dataset of challenging
crowded scenarios show that the purely data-driven prediction approach
generally outperforms the other two. The three approaches were further
evaluated on different but related human scenarios to assess their
generalisation capability.
|
http://arxiv.org/abs/2307.00065v1
|
OpenID Connect (OIDC) is a widely used authentication standard for the Web.
In this work, we define a new Identity Certification Token (ICT) for OIDC. An
ICT can be thought of as a JSON-based, short-lived user certificate for
end-to-end user authentication without the need for cumbersome key management.
A user can request an ICT from his OpenID Provider (OP) and use it to prove his
identity to other users or services that trust the OP. We call this approach
$OIDC^2$ and compare it to other well-known end-to-end authentication methods.
Unlike certificates, $OIDC^2$ does not require installation and can be easily
used on multiple devices, making it more user-friendly. We outline protocols
for implementing $OIDC^2$ based on existing standards. We discuss the trust
relationship between entities involved in $OIDC^2$, propose a classification of
OPs' trust level, and propose authentication with multiple ICTs from different
OPs. We explain how different applications such as videoconferencing, instant
messaging, and email can benefit from ICTs for end-to-end authentication and
recommend validity periods for ICTs. To test $OIDC^2$, we provide a simple
extension to existing OIDC server software and evaluate its performance.
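
An ICT can be pictured as a short-lived signed token whose payload binds the user's identity to an ephemeral public key. The field names below are illustrative assumptions modeled on common JWT claims, not the $OIDC^2$ specification.

    import json, time

    # Illustrative ICT payload: standard JWT claims plus a claim binding the
    # user's ephemeral public key. Field names are assumptions, not the spec.
    now = int(time.time())
    ict_payload = {
        "iss": "https://op.example.com",   # the OpenID Provider
        "sub": "user-1234",                # the authenticated user
        "iat": now,
        "exp": now + 300,                  # short-lived: 5 minutes
        "cnf": {"jwk": {"kty": "OKP", "crv": "Ed25519",
                        "x": "base64url-key"}},
    }
    print(json.dumps(ict_payload, indent=2))
    # The OP would sign this payload as a JWT; a relying user verifies the
    # OP's signature and challenges the holder to prove key possession.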
|
http://arxiv.org/abs/2307.16607v2
|
As consumer adoption of immersive technologies grows, virtual avatars will
play a prominent role in the future of social computing. However, as people
begin to interact more frequently through virtual avatars, it is important to
ensure that the research community has validated tools to evaluate the effects
and consequences of such technologies. We present the first iteration of a new,
freely available 3D avatar library called the Virtual Avatar Library for
Inclusion and Diversity (VALID), which includes 210 fully rigged avatars with a
focus on advancing racial diversity and inclusion. We present a detailed
process for creating, iterating, and validating avatars of diversity. Through a
large online study (n=132) with participants from 33 countries, we provide
statistically validated labels for each avatar's perceived race and gender.
Through our validation study, we also advance knowledge pertaining to the
perception of an avatar's race. In particular, we found that avatars of some
races were more accurately identified by participants of the same race.
|
http://arxiv.org/abs/2309.10902v2
|
We propose a Dynamical System (DS) approach to learn complex, possibly
periodic motion plans from kinesthetic demonstrations using Neural Ordinary
Differential Equations (NODE). To ensure reactivity and robustness to
disturbances, we propose a novel approach that selects a target point at each
time step for the robot to follow, by combining tools from control theory and
the target trajectory generated by the learned NODE. A correction term to the
NODE model is computed online by solving a quadratic program that guarantees
stability and safety using control Lyapunov functions and control barrier
functions, respectively. Our approach outperforms baseline DS learning
techniques on the LASA handwriting dataset and complex periodic trajectories.
It is also validated on the Franka Emika robot arm to produce stable motions
for wiping and stirring tasks that do not have a single attractor, while being
robust to perturbations and safe around humans and obstacles.
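
The online correction step can be sketched as a small quadratic program. The toy below, written with cvxpy for a single-integrator system, minimally deviates from a nominal NODE velocity subject to one CLF (stability) and one CBF (safety) constraint; the dynamics, gains, and obstacle are placeholder assumptions, not the paper's robot model.

    import numpy as np
    import cvxpy as cp

    # Single-integrator sketch: x_dot = u. Track a nominal NODE velocity
    # f_nom while a CLF enforces convergence to the target point x_t and a
    # CBF keeps distance from an obstacle. Gains are placeholder choices.
    x = np.array([1.0, 0.5])          # current state
    x_t = np.array([0.0, 0.0])        # target point from the learned NODE
    f_nom = -(x - x_t)                # nominal NODE velocity
    x_obs, r = np.array([0.6, 0.1]), 0.2

    u = cp.Variable(2)
    delta = cp.Variable(nonneg=True)  # CLF slack keeps the QP feasible

    V = 0.5 * float((x - x_t) @ (x - x_t))          # CLF value
    h = float((x - x_obs) @ (x - x_obs) - r**2)     # CBF value
    clf = (x - x_t) @ u <= -2.0 * V + delta         # V_dot <= -gamma V + slack
    cbf = 2 * (x - x_obs) @ u >= -5.0 * h           # h_dot >= -alpha h

    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - f_nom) + 10 * delta),
                      [clf, cbf])
    prob.solve()
    print(u.value)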
|
http://arxiv.org/abs/2308.00186v3
|
Forecasting motion and spatial positions of objects is of fundamental
importance, especially in safety-critical settings such as autonomous driving.
In this work, we address the issue by forecasting two different modalities that
carry complementary information, namely optical flow and depth. To this end we
propose FLODCAST, a flow and depth forecasting model that leverages a multitask
recurrent architecture, trained to jointly forecast both modalities at once. We
stress the importance of training using flows and depth maps together,
demonstrating that both tasks improve when the model is informed of the other
modality. We train the proposed model to also perform predictions for several
timesteps in the future. This provides better supervision and leads to more
precise predictions, retaining the capability of the model to yield outputs
autoregressively for any future time horizon. We test our model on the
challenging Cityscapes dataset, obtaining state-of-the-art results for both
flow and depth forecasting. Thanks to the high quality of the generated flows,
we also report benefits on the downstream task of segmentation forecasting,
injecting our predictions in a flow-based mask-warping framework.
|
http://arxiv.org/abs/2310.20593v1
|
Many real-time applications (e.g., Augmented/Virtual Reality, cognitive
assistance) rely on Deep Neural Networks (DNNs) to process inference tasks.
Edge computing is considered a key infrastructure to deploy such applications,
as moving computation close to the data sources enables us to meet stringent
latency and throughput requirements. However, the constrained nature of edge
networks poses several additional challenges to the management of inference
workloads: edge clusters cannot provide unlimited processing power to DNN
models, and often a trade-off between network and processing time should be
considered when it comes to end-to-end delay requirements. In this paper, we
focus on the problem of scheduling inference queries on DNN models in edge
networks at short timescales (i.e., a few milliseconds). By means of simulations,
we analyze several policies in the realistic network settings and workloads of
a large ISP, highlighting the need for a dynamic scheduling policy that can
adapt to network conditions and workloads. We therefore design ASET, a
Reinforcement Learning based scheduling algorithm able to adapt its decisions
according to the system conditions. Our results show that ASET effectively
provides the best performance compared to static policies when scheduling over
a distributed pool of edge resources.
|
http://arxiv.org/abs/2301.13618v1
|
The $\mathbb{R}$-motivic cohomology of an $\mathbb{R}$-motivic spectrum is a
module over the $\mathbb{R}$-motivic Steenrod algebra
$\mathcal{A}^{\mathbb{R}}$. In this paper, we describe how to recover the
$\mathbb{R}$-motivic cohomology of the Spanier-Whitehead dual $\mathrm{DX}$ of
an $\mathbb{R}$-motivic finite complex $\mathrm{X}$, as an
$\mathcal{A}^{\mathbb{R}}$-module, given the $\mathcal{A}^{\mathbb{R}}$-module
structure on the cohomology of $\mathrm{X}$. As an application, we show that 16
out of 128 different $\mathcal{A}^{\mathbb{R}}$-module structures on
$\mathcal{A}^{\mathbb{R}}(1):= \langle \mathrm{Sq}^1, \mathrm{Sq}^2 \rangle$
are self-dual.
|
http://arxiv.org/abs/2309.16142v2
|
High-order structures have been recognised as suitable models for systems
going beyond the binary relationships for which graph models are appropriate.
Despite their importance and the surge in research on these structures, their
random counterparts have only recently become subjects of interest. One of these
high-order structures is the oriented hypergraph, which relates couples of
subsets of an arbitrary number of vertices. Here we develop the
Erd\H{o}s-R\'enyi model for oriented hypergraphs, which corresponds to the
random realisation of oriented hyperedges of the complete oriented hypergraph.
A particular feature of random oriented hypergraphs is that the ratio between
their expected number of oriented hyperedges and their expected degree or size
is 3/2 for a large number of vertices. We highlight the suitability of oriented
hypergraphs for modelling large collections of chemical reactions and the
importance of random oriented hypergraphs to analyse the unfolding of
chemistry.
|
http://arxiv.org/abs/2309.06351v1
|
Interfacial instabilities are common phenomena observed during adhesion
measurements involving viscoelastic polymers or fluids. Typical probe-tack
adhesion measurements with soft adhesives are conducted with rigid probes.
However, in many settings, such as for medical applications, adhesives make and
break contact from soft surfaces such as skin. Here we study how detachment
from soft probes alters the debonding mechanism of a model viscoelastic polymer
film. We demonstrate that detachment from a soft probe suppresses
Saffman-Taylor instabilities commonly encountered in adhesion. We suggest the
mechanism for interface stabilization is elastohydrodynamic deformation of the
probe and propose a scaling for the onset of stabilization.
|
http://arxiv.org/abs/2309.09704v1
|
The disordered ferromagnet is a disordered version of the ferromagnetic Ising
model in which the coupling constants are non-negative quenched random. A
ground configuration is an infinite-volume configuration whose energy cannot be
reduced by finite modifications. It is a long-standing challenge to ascertain
whether the disordered ferromagnet on the $\mathbb{Z}^D$ lattice admits
non-constant ground configurations. We answer this affirmatively in dimensions
$D\ge 4$, when the coupling constants are sampled independently from a
sufficiently concentrated distribution. The obtained ground configurations are
further shown to be translation-covariant with respect to $\mathbb{Z}^{D-1}$
translations of the disorder.
Our result is proved by showing that the finite-volume interface formed by
Dobrushin boundary conditions is localized, and converges to an infinite-volume
interface. This may be expressed in purely combinatorial terms, as a result on
the fluctuations of certain minimal cutsets in the lattice $\mathbb{Z}^D$
endowed with independent edge capacities.
|
http://arxiv.org/abs/2309.06437v2
|
We construct a nontrivial generalization of the paradigmatic Kuramoto model
by using an additional coupling term that explicitly breaks its rotational
symmetry, resulting in a variant of the Winfree model. Consequently, we observe
the characteristic features of the phase diagrams of both the Kuramoto model
and the Winfree model depending on the degree of the symmetry breaking coupling
strength for unimodal frequency distribution. The phase diagrams of both the
Kuramoto and the Winfree models resemble each other for symmetric bimodal
frequency distribution for a range of the symmetry breaking coupling strength
except for region shift and difference in the degree of spread of the
macroscopic dynamical states and bistable regions. The dynamical transitions in
the bistable states are characterized by an abrupt (first-order) transition in
both the forward and reverse traces. For asymmetric bimodal frequency
distribution, the onset of bistable regions depends on the degree of the
asymmetry. A large degree of the symmetry breaking coupling strength promotes the
synchronized stationary state, while a large degree of heterogeneity,
proportional to the separation between the two central frequencies, facilitates
the spread of the incoherent and standing wave states in the phase diagram for
a low strength of the symmetry breaking coupling. We deduce the low-dimensional
equations of motion for the complex order parameters using the Ott-Antonsen
ansatz for both unimodal and bimodal frequency distributions. We also deduce
the Hopf, pitchfork, and saddle-node bifurcation curves from the evolution
equations for the complex order parameters mediating the dynamical transitions.
Simulation results of the original discrete set of equations of the generalized
Kuramoto model agree well with the analytical bifurcation curves.
|
http://arxiv.org/abs/2302.14341v1
|
Radio Relics are typically found to be arc-like regions of synchrotron
emission in the outskirts of merging galaxy clusters, bowing out from the
cluster center. In most cases they show synchrotron spectra that steepen
towards the cluster center, indicating that they are caused by relativistic
electrons being accelerated at outwards traveling merger shocks. A number of
radio relics break with this ideal picture, showing morphologies that are bent
the opposite way and spectral index distributions which do not follow the
expected trend. We propose that these `Wrong Way' Relics
can form when an outwards travelling shock wave is bent inwards by an
in-falling galaxy cluster or group. We test this in an ultra-high resolution
zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral
Cosmic Ray model. This allows us to study not only the synchrotron emission at
colliding shocks, but also their synchrotron spectra to address the open
question of relics with strongly varying spectral indices over the relic
surface.
|
http://arxiv.org/abs/2309.00046v2
|
In everyday collaboration tasks between human operators and robots, the
former need simple ways of programming new skills, while the latter have to
show adaptive capabilities to cope with environmental changes. The joint use of
visual servoing and imitation learning allows us to pursue the objective of
realizing friendly robotic interfaces that (i) are able to adapt to the
environment thanks to the use of visual perception and (ii) avoid explicit
programming thanks to the emulation of previous demonstrations. This work aims
to exploit imitation learning for the visual servoing paradigm to address the
specific problem of tracking moving objects. In particular, we show that it is
possible to infer from data the compensation term required for realizing the
tracking controller, avoiding the explicit implementation of estimators or
observers. The effectiveness of the proposed method has been validated through
simulations with a robotic manipulator.
|
http://arxiv.org/abs/2309.07729v1
|
Results of astrometric very long baseline interferometry (VLBI) observations
towards an extreme OH/IR star candidate NSV17351 are presented. We used the
VERA (VLBI Exploration of Radio Astrometry) VLBI array to observe 22\,GHz
H$_2$O masers of NSV17351. We derived an annual parallax of 0.247$\pm$0.035 mas
which corresponds to a distance of 4.05$\pm$0.59 kpc. By averaging the proper
motions of 15 maser spots, we obtained the systemic proper motion of NSV17351
to be ($\mu_{\alpha}\cos{\delta}, \mu_{\delta}$)$^{\mathrm{avg}}$ $=$ ($-$1.19
$\pm$ 0.11, 1.30 $\pm$ 0.19) mas\,yr$^{-1}$. The maser spots spread out over a
region of 20 mas $\times$ 30 mas, which can be converted to a spatial
distribution of $\sim$80 au $\times$ $\sim$120 au at the source distance.
Internal motions of the maser spots suggest an outward moving maser region with
respect to the estimated position of the central star. From single dish
monitoring of the H$_2$O maser emission, we estimate the pulsation period of
NSV17351 to be 1122$\pm$24 days. This is the first report of the periodic
activity of NSV17351, indicating that NSV17351 could have a mass of
$\sim$4\,M$_{\odot}$. We confirmed that the time variation of H$_2$O masers can
be used as a period estimator of variable OH/IR stars. Furthermore, by
inspecting dozens of double-peaked H$_2$O maser spectra from the last 40 years,
we detected a long-term acceleration in the radial velocity of the
circumstellar matter to be $0.17\pm0.03$ km\,s$^{-1}$\,yr$^{-1}$. Finally, we
determined the position and kinematics of NSV17351 in the Milky Way Galaxy and
found that NSV17351 is located in an interarm region between the Outer and
Perseus arms. We note that astrometric VLBI observations towards extreme OH/IR
stars are useful samples for studies of the Galactic dynamics.
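
The quoted distance follows directly from inverting the annual parallax, as the short check below shows; first-order error propagation gives 0.57 kpc, close to the quoted 0.59 kpc, which may include additional error terms.

    # Distance from annual parallax: d [kpc] = 1 / p [mas], with first-order
    # error propagation sigma_d = sigma_p / p**2.
    p, sigma_p = 0.247, 0.035     # mas, from the VERA measurement
    d = 1.0 / p                   # -> 4.05 kpc
    sigma_d = sigma_p / p**2      # -> 0.57 kpc
    print(f"d = {d:.2f} +/- {sigma_d:.2f} kpc")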
|
http://arxiv.org/abs/2309.04234v1
|
Figurative language is commonplace in natural language, and while making
communication memorable and creative, can be difficult to understand. In this
work, we investigate the robustness of Question Answering (QA) models on
figurative text. Yes/no questions, in particular, are a useful probe of
figurative language understanding capabilities of large language models. We
propose FigurativeQA, a set of 1000 yes/no questions with figurative and
non-figurative contexts, extracted from the domains of restaurant and product
reviews. We show that state-of-the-art BERT-based QA models exhibit an average
performance drop of up to 15\% points when answering questions from figurative
contexts, as compared to non-figurative ones. While models like GPT-3 and
ChatGPT are better at handling figurative texts, we show that further
performance gains can be achieved by automatically simplifying the figurative
contexts into their non-figurative (literal) counterparts. We find that the
best overall model is ChatGPT with chain-of-thought prompting to generate
non-figurative contexts. Our work provides a promising direction for building
more robust QA models with figurative language understanding capabilities.
|
http://arxiv.org/abs/2309.13748v1
|
We introduce a dynamic event-triggering mechanism for regulating the axonal
growth of a neuron. We apply boundary actuation at the soma (the part of a
neuron that contains the nucleus) and regulate the dynamics of tubulin
concentration and axon length. The control law is formulated by applying a
Zero-Order Hold (ZOH) to a continuous-time controller which guides the axon to
reach the desired length. The proposed dynamic event-triggering mechanism
determines the specific time instants at which control inputs are sampled from
the continuous-time control law. We establish the existence of a minimum
dwell-time between two triggering times that ensures avoidance of Zeno
behavior. Through employing the Lyapunov analysis with PDE backstepping, we
prove the local stability of the closed-loop system in $L_2$-norm, initially
for the target system, and subsequently for the original system. The
effectiveness of the proposed method is showcased through numerical
simulations.
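
The sample-and-hold logic can be illustrated on a scalar toy system. The sketch below uses a simple static trigger with an offset rather than the paper's dynamic trigger (which adds an internal filter variable), and all constants are placeholder choices rather than the axon model's parameters.

    # Scalar stand-in: x_dot = a*x + b*u with u = -k*x_held. The input is
    # held (ZOH) and re-sampled only when the gap between the held and
    # current state exceeds a threshold.
    a, b, k, dt = 1.0, 1.0, 3.0, 1e-3
    x, x_held, events = 1.0, 1.0, 0
    for _ in range(int(5.0 / dt)):
        if abs(x - x_held) > 0.1 * abs(x) + 1e-4:   # trigger condition
            x_held, events = x, events + 1          # event: sample the state
        u = -k * x_held                             # zero-order-hold input
        x += dt * (a * x + b * u)
    print(f"x(5) = {x:.4f} after {events} events (vs. 5000 time steps)")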
|
http://arxiv.org/abs/2310.00131v2
|
The classification of phases and the detection of phase transitions are
central and challenging tasks in diverse fields. Within physics, it relies on
the identification of order parameters and the analysis of singularities in the
free energy and its derivatives. Here, we propose an alternative framework to
identify quantum phase transitions. Using the axial next-nearest neighbor Ising
(ANNNI) model as a benchmark, we show how machine learning can detect three
phases (ferromagnetic, paramagnetic, and a cluster of the antiphase with the
floating phase). Employing supervised learning, we demonstrate the feasibility
of transfer learning. Specifically, a machine trained only with
nearest-neighbor interactions can learn to identify a new type of phase
occurring when next-nearest-neighbor interactions are introduced. We also
compare the performance of common classical machine learning methods with a
version of the quantum nearest neighbors (QNN) algorithm.
|
http://arxiv.org/abs/2309.15339v1
|
The problem of the optimal recovery of high-order mixed derivatives of bivariate
functions with finite smoothness is studied. On the basis of the truncation
method, an algorithm for numerical differentiation is constructed, which is
order-optimal both in the sense of accuracy and in terms of the amount of
involved Galerkin information.
|
http://arxiv.org/abs/2309.09710v1
|
Here, we develop a framework for the prediction and screening of native
defects and functional impurities in a chemical space of Group IV, III-V, and
II-VI zinc blende (ZB) semiconductors, powered by crystal Graph-based Neural
Networks (GNNs) trained on high-throughput density functional theory (DFT)
data. Using an innovative approach of sampling partially optimized defect
configurations from DFT calculations, we generate one of the largest
computational defect datasets to date, containing many types of vacancies,
self-interstitials, anti-site substitutions, impurity interstitials and
substitutions, as well as some defect complexes. We applied three types of
established GNN techniques, namely Crystal Graph Convolutional Neural Network
(CGCNN), Materials Graph Network (MEGNET), and Atomistic Line Graph Neural
Network (ALIGNN), to rigorously train models for predicting defect formation
energy (DFE) in multiple charge states and chemical potential conditions. We
find that ALIGNN yields the best DFE predictions with root mean square errors
around 0.3 eV, which represents a prediction accuracy of 98% given the range
of values within the dataset, improving significantly on the state-of-the-art.
Models are tested for different defect types as well as for defect charge
transition levels. We further show that GNN-based defective structure
optimization can take us close to DFT-optimized geometries at a fraction of the
cost of full DFT. DFT-GNN models enable prediction and screening across
thousands of hypothetical defects based on both unoptimized and
partially-optimized defective structures, helping identify electronically
active defects in technologically-important semiconductors.
|
http://arxiv.org/abs/2309.06423v2
|
Multi-objective learning (MOL) problems often arise in emerging machine
learning problems when there are multiple learning criteria, data modalities,
or learning tasks. Different from single-objective learning, one of the
critical challenges in MOL is the potential conflict among different objectives
during the iterative optimization process. Recent works have developed various
dynamic weighting algorithms for MOL such as MGDA and its variants, where the
central idea is to find an update direction that avoids conflicts among
objectives. Despite its appealing intuition, empirical studies show that dynamic
weighting methods may not always outperform static ones. To understand this
theory-practice gap, we focus on a new stochastic variant of MGDA - the
Multi-objective gradient with Double sampling (MoDo) algorithm, and study the
generalization performance of the dynamic weighting-based MoDo and its
interplay with optimization through the lens of algorithm stability. Perhaps
surprisingly, we find that the key rationale behind MGDA -- updating along the
conflict-avoidant direction -- may hinder dynamic weighting algorithms from
achieving the optimal ${\cal O}(1/\sqrt{n})$ population risk, where $n$ is the
number of training samples. We further demonstrate the impact of the
variability of dynamic weights on the three-way trade-off among optimization,
generalization, and conflict avoidance that is unique in MOL. We showcase the
generality of our theoretical framework by analyzing other existing stochastic
MOL algorithms under the framework. Experiments on various multi-task learning
benchmarks are performed to demonstrate the practical applicability. Code is
available at https://github.com/heshandevaka/Trade-Off-MOL.
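
A simplified version of the double-sampling update can be sketched on two toy quadratic objectives: two independent stochastic gradient matrices are drawn, the weights are updated and projected back onto the simplex, and the model moves along the weighted gradient. Step sizes, noise level, and objectives are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    def proj_simplex(v):
        # Euclidean projection onto the probability simplex (sort-based).
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    rng = np.random.default_rng(0)
    # Two toy objectives f_k(x) = 0.5*||x - c_k||^2 with noisy gradients.
    C = np.array([[1.0, 0.0], [0.0, 1.0]])
    x = np.zeros(2)
    lam = np.array([0.5, 0.5])
    alpha, gamma = 0.05, 0.05
    for t in range(500):
        # Double sampling: two independent stochastic gradient matrices.
        G1 = (x - C) + 0.1 * rng.standard_normal(C.shape)
        G2 = (x - C) + 0.1 * rng.standard_normal(C.shape)
        lam = proj_simplex(lam - gamma * (G1 @ G2.T) @ lam)  # weight update
        x = x - alpha * G2.T @ lam                           # model update
    print(x, lam)  # x drifts toward a point between the two minima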
|
http://arxiv.org/abs/2305.20057v3
|
The logical analysis of data, LAD, is a technique that yields two-class
classifiers based on Boolean functions having disjunctive normal form (DNF)
representation. Although LAD algorithms employ optimization techniques, the
resulting binary classifiers or binary rules do not lead to overfitting. We
propose a theoretical justification for the absence of overfitting by
estimating the Vapnik-Chervonenkis dimension (VC dimension) for LAD models
where hypothesis sets consist of DNFs with a small number of cubic monomials.
We illustrate and confirm our observations empirically.
|
http://arxiv.org/abs/2309.16630v1
|
Steinerberger proposed a notion of curvature on graphs (J. Graph Theory,
2023). We show that nonnegative curvature is almost preserved under three graph
operations. We characterize the distance matrix and its null space after adding
an edge between two graphs. Let $D$ be the graph distance matrix and
$\mathbf{1}$ be the all-one vector. We provide a way to construct graphs so
that the linear system $Dx = \mathbf{1}$ does not have a solution. Let $\eta$
be the Perron eigenvector of $D.$ We provide a lower bound to
$\langle\eta,\mathbf{1}\rangle$ when the graph is a tree.
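
For a concrete instance, the linear system $Dx = \mathbf{1}$ and the Perron eigenvector can be computed numerically for the path $P_5$ (a tree, whose distance matrix is nonsingular by the Graham-Pollak determinant formula):

    import numpy as np

    # Distance matrix of the path P_5 (a tree): D[i, j] = |i - j|.
    n = 5
    D = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)

    x = np.linalg.solve(D, np.ones(n))      # solve D x = 1
    print("x:", np.round(x, 4))

    vals, vecs = np.linalg.eigh(D)          # D is symmetric
    eta = vecs[:, np.argmax(vals)]          # Perron eigenvector
    eta = eta / np.linalg.norm(eta) * np.sign(eta.sum())  # make it positive
    print("<eta, 1> =", round(float(eta.sum()), 4))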
|
http://arxiv.org/abs/2309.16156v2
|
In this research, we aim to compare the performance of different classical
machine learning models and neural networks in identifying the frequency of
occurrence of each digit in a given number. It has various applications in
machine learning and computer vision, e.g. for obtaining the frequency of a
target object in a visual scene. We considered this problem as a hybrid of
classification and regression tasks. We carefully create our own datasets to
observe systematic differences between different methods. We evaluate each of
the methods using different metrics across multiple datasets. The metrics of
performance used were the root mean squared error and mean absolute error for
regression evaluation, and accuracy for classification performance evaluation.
We observe that decision trees and random forests overfit to the dataset, due
to their inherent bias, and are not able to generalize well. We also observe
that the neural networks significantly outperform the classical machine
learning models in terms of both the regression and classification metrics for
both the 6-digit and 10-digit number datasets. Dataset and code are available
on GitHub.
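
The underlying target is easy to state precisely; a label vector for a single number can be produced as follows.

    from collections import Counter

    def digit_frequencies(number: int) -> list[int]:
        # Frequency of each digit 0-9 in the decimal representation; this
        # vector is the regression/classification target discussed above.
        counts = Counter(str(number))
        return [counts.get(str(d), 0) for d in range(10)]

    print(digit_frequencies(1223334444))  # -> [0, 1, 2, 3, 4, 0, 0, 0, 0, 0]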
|
http://arxiv.org/abs/2310.04431v1
|
Ultrafast electron-phonon relaxation dynamics in graphene hides many distinct
phenomena, such as hot phonon generation, dynamical Kohn anomalies, and phonon
decoupling, yet still remains largely unexplored. Here, we unravel intricate
mechanisms governing the vibrational relaxation and phonon dressing in graphene
at a highly non-equilibrium state by means of first-principles techniques. We
calculate dynamical phonon spectral functions and momentum-resolved linewidths
for various stages of electron relaxation and find photo-induced phonon
hardening, overall increase of relaxation rate and nonadiabaticity as well as
phonon gain. Namely, the initial stage of photo-excitation is found to be
governed by strong phonon anomalies of finite-momentum optical modes along with
incoherent phonon production. The population inversion state, on the other hand,
allows the production of coherent and strongly coupled phonon modes. Our research
provides vital insights into the electron-phonon coupling phenomena in
graphene, and serves as a foundation for exploring non-equilibrium phonon
dressing in materials where ordered states and phase transitions can be induced
by photo-excitation.
|
http://arxiv.org/abs/2309.09076v1
|
We study states arising from fluctuations in the disorder potential in
systems with long-range hopping. Here, contrary to systems with short-range
hopping, the optimal fluctuations of disorder responsible for the formation of
the states in the gap, are not rendered shallow and long-range when $E$
approaches the band edge ($E\to 0$). Instead, they remain deep and short-range.
The corresponding electronic wave functions also remain short-range-localized
for all $E<0$. This behavior has striking implications for the structure of the
wave functions slightly above $E=0$. By a study of finite systems, we
demonstrate that the wave functions $\Psi_E$ transform from a localized to a
quasi-localized type upon crossing the $E=0$ level, forming resonances embedded
in the $E>0$ continuum. The quasi-localized $\Psi_{E>0}$ consists of a
short-range core that is essentially the same as $\Psi_{E=0}$ and a delocalized
tail extending to the boundaries of the system. The amplitude of the tail is
small, but it decreases with $r$ slowly. Its contribution to the norm of the
wave function dominates for sufficiently large system sizes, $L\gg L_c(E)$;
such states behave as delocalized ones. In contrast, in small systems, $L\ll
L_c(E)$, quasi-localized states are overwhelmingly dominated by the localized
cores and are effectively localized.
|
http://arxiv.org/abs/2309.06345v3
|
The power system Unit Commitment (UC) problem determines the generator
commitment schedule and dispatch decisions for power networks based on
forecasted electricity demand. However, with the increasing penetration of
renewables and stochastic demand behaviors, it becomes challenging to solve the
large-scale, multi-interval UC problem in an efficient manner. The main
objective of this paper is to propose a fast and reliable scheme to eliminate a
set of redundant or inactive physical constraints in the high-dimensional,
multi-interval, mixed-integer UC problem, while the reduced problem is
equivalent to the original full problem in terms of commitment decisions. Our
key insights lie in pre-screening the constraints based on the load
distribution and considering the physical feasibility regions of the
multi-interval UC problem. For the multistep UC formulation, we overcome screening
conservativeness by utilizing the multi-step ramping relationships, and can
reliably screen out more constraints compared to current practice. Extensive
simulations on both specific load samples and load regions validate that the
proposed technique can screen out more than 80% of the constraints while
preserving the feasibility of the multi-interval UC problem.
|
http://arxiv.org/abs/2309.05894v1
|
In the field of speaker verification, session or channel variability poses a
significant challenge. While many contemporary methods aim to disentangle
session information from speaker embeddings, we introduce a novel approach
using an additional embedding to represent the session information. This is
achieved by training an auxiliary network appended to the speaker embedding
extractor which remains fixed in this training process. This results in two
similarity scores: one for the speaker information and one for the session
information. The latter score acts as a compensator for the former that might
be skewed due to session variations. Our extensive experiments demonstrate that
session information can be effectively compensated without retraining of the
embedding extractor.
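
The compensation idea can be sketched as a simple score-level correction: subtract a scaled session-similarity score from the speaker-similarity score. The combination rule and weight below are assumptions for illustration; the paper's exact fusion may differ.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def compensated_score(spk1, spk2, sess1, sess2, alpha=0.3):
        # Subtract session similarity so two recordings from the same
        # session do not get an inflated speaker score. alpha is a tunable
        # assumption, not a value from the paper.
        return cosine(spk1, spk2) - alpha * cosine(sess1, sess2)

    rng = np.random.default_rng(0)
    spk1, spk2 = rng.standard_normal(192), rng.standard_normal(192)
    sess = rng.standard_normal(64)
    print(compensated_score(spk1, spk2, sess, sess))  # same-session pair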
|
http://arxiv.org/abs/2309.14741v1
|
We propose an unsupervised deep learning algorithm for the motion-compensated
reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated
free-breathing 5D MRI simplifies the scan planning, improves patient comfort,
and offers several clinical benefits over breath-held 2D exams, including
isotropic spatial resolution and the ability to reslice the data to arbitrary
views. However, the current reconstruction algorithms for 5D MRI require very
long computation times, and their outcome is greatly dependent on the uniformity of
the binning of the acquired data into different physiological phases. The
proposed algorithm is a more data-efficient alternative to current
motion-resolved reconstructions. This motion-compensated approach models the
data in each cardiac/respiratory bin as Fourier samples of the deformed version
of a 3D image template. The deformation maps are modeled by a convolutional
neural network driven by the physiological phase information. The deformation
maps and the template are then jointly estimated from the measured data. The
cardiac and respiratory phases are estimated from 1D navigators using an
auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired
from two subjects.
|
http://arxiv.org/abs/2309.04552v1
|
The influence of the gravitational fields of pulsars and magnetars on the
arion emission during the propagation of magnetodipole waves in a constant
magnetic field has been evaluated.
The solution of the equation was obtained and the flux of arions emitted by
magnetodipole waves during their propagation in a constant magnetic field was
found. It is shown that the amplitude of the generated arion wave at a distance from
the source of magnetodipole radiation of a pulsar or magnetar $(r\to\infty)$ in
the considered case tends to a constant value. The intensity of the arion
emission in the solid angle element and the amount of arion energy
$\overline{I}$, emitted in all directions per unit time, grow quadratically with
increasing distance traveled by the magnetodipole radiation of a pulsar or
magnetar in a constant magnetic field.
Such growth of the energy of the generated arion wave is due to the fact that in
the considered problem the constant magnetic field is defined over the whole space.
In reality, the galactic and intergalactic magnetic fields can be represented
in this form only in regions of space of finite dimensions, outside of which
the force lines of their induction vector are curved. Therefore, it is possible
to apply these results only in a region of space for which $r\leq
L_{coh}<\infty$, where $L_{coh}$ is the coherence length, the distance at which
the force lines of the induction vector can be considered as straight lines. An
estimate for the value of the coupling constant of photons with arions is
obtained.
|
http://arxiv.org/abs/2309.07073v1
|
In recent years, cloud and edge architectures have gained tremendous focus
for offloading computationally heavy applications. From machine learning and
the Internet of Things (IoT) to industrial procedures and robotics, cloud
computing has been used extensively for data processing and storage purposes, thanks to
its "infinite" resources. On the other hand, cloud computing is characterized
by long time delays due to the long distance between the cloud servers and the
machine requesting the resources. In contrast, edge computing provides almost
real-time services since edge servers are located significantly closer to the
source of data. This capability sets edge computing as an ideal option for
real-time applications, like high level control, for resource-constrained
platforms. In order to utilize the edge resources, several technologies, with
basic ones as containers and orchestrators like Kubernetes, have been developed
to provide an environment with many features, based on each application's
requirements. In this context, this work presents the implementation and
evaluation of a novel edge architecture based on Kubernetes orchestration for
controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle
(UAV) by enabling Model Predictive Control (MPC).
|
http://arxiv.org/abs/2301.13624v1
|
Formalized $1$-category theory forms a core component of various libraries of
mathematical proofs. However, more sophisticated results in fields from
algebraic topology to theoretical physics, where objects have "higher
structure," rely on infinite-dimensional categories in place of $1$-dimensional
categories, and $\infty$-category theory has thus far proved unamenable to
computer formalization.
Using a new proof assistant called Rzk, which is designed to support
Riehl-Shulman's simplicial extension of homotopy type theory for synthetic
$\infty$-category theory, we provide the first formalizations of results from
$\infty$-category theory. This includes in particular a formalization of the
Yoneda lemma, often regarded as the fundamental theorem of category theory, a
theorem which roughly states that an object of a given category is determined
by its relationship to all of the other objects of the category. A key feature
of our framework is that, thanks to the synthetic theory, many constructions
are automatically natural or functorial. We plan to use Rzk to formalize
further results from $\infty$-category theory, such as the theory of limits and
colimits and adjunctions.
|
http://arxiv.org/abs/2309.08340v3
|
Recent results suggest that splitting topological navigation into
robot-independent and robot-specific components improves navigation performance
by enabling the robot-independent part to be trained with data collected by
robots of different types. However, the navigation methods' performance is
still limited by the scarcity of suitable training data, and they suffer from
poor computational scaling.
In this work, we present PlaceNav, subdividing the robot-independent part
into navigation-specific and generic computer vision components. We utilize
visual place recognition for the subgoal selection of the topological
navigation pipeline. This makes subgoal selection more efficient and enables
leveraging large-scale datasets from non-robotics sources, increasing training
data availability. Bayesian filtering, enabled by place recognition, further
improves navigation performance by increasing the temporal consistency of
subgoals. Our experimental results verify the design, and the new method obtains
a 76% higher success rate in indoor and a 23% higher success rate in outdoor
navigation tasks, with higher computational efficiency.
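
The Bayesian filtering step can be sketched as a discrete filter over route images: a transition kernel favoring forward motion is applied to the current belief, which is then reweighted by place-recognition similarity scores. Everything below (kernel shape, scores, sizes) is an illustrative assumption, not PlaceNav's exact implementation.

    import numpy as np

    def bayes_filter_step(belief, similarities, step_sigma=1.0):
        # Predict: the robot advances roughly one node along the route, so
        # probability mass shifts to nearby map indices (Gaussian kernel).
        n = len(belief)
        idx = np.arange(n)
        trans = np.exp(-0.5 * ((idx[:, None] - idx[None, :] - 1)
                               / step_sigma) ** 2)
        trans /= trans.sum(axis=0, keepdims=True)
        predicted = trans @ belief
        # Update: weight by place-recognition similarities (likelihood).
        posterior = predicted * similarities
        return posterior / posterior.sum()

    rng = np.random.default_rng(0)
    belief = np.full(20, 1 / 20)               # uniform over 20 route images
    for true_pos in range(5, 10):              # robot advances along route
        sims = rng.uniform(0.1, 0.3, 20)
        sims[true_pos] = 1.0                   # retrieval peaks at the truth
        sims[12] = 0.9                         # perceptually aliased distractor
        belief = bayes_filter_step(belief, sims)
    # Temporal consistency favors the moving peak over the static distractor.
    print("subgoal:", int(np.argmax(belief)))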
|
http://arxiv.org/abs/2309.17260v4
|
With the proliferation of hate speech on social networks in different forms,
such as abusive language, cyberbullying, and violence, people have experienced
a significant increase in threats and uncomfortable situations. Plenty of
effort has been dedicated in the last few years to overcoming this phenomenon
by detecting hate speech in different structured languages like English,
French, Arabic, and others.
However, a reduced number of works deal with Arabic dialects like Tunisian,
Egyptian, and Gulf, mainly the Algerian ones. To fill in the gap, we propose in
this work a complete approach for detecting hate speech on online Algerian
messages. Many deep learning architectures have been evaluated on the corpus we
created from some Algerian social networks (Facebook, YouTube, and Twitter).
This corpus contains more than 13.5K documents in Algerian dialect written in
Arabic, labeled as hateful or non-hateful. Promising results are obtained,
which show the efficiency of our approach.
|
http://arxiv.org/abs/2309.11611v1
|
The study of improper phases in the context of multiferroic materials has a
long history, but superconductivity has yet to be connected to the network of
ferroic orders. In this work, we highlight an overlooked mechanism that couples
superconducting order parameters to odd-parity orders in the charge or spin
sectors such that the latter emerge as improper orders. For that, we explore a
novel perspective of nonsymmorphic symmetries based on extended symmetry groups
in real space. We highlight how nonsymmorphic symmetries can generate rather
nonintuitive couplings between order parameters. In particular, we find that a
bilinear in the superconducting order parameter can couple linearly to
odd-parity orders in centrosymmetric systems. Our findings can account for the
unusual phenomenology of CeRh$_2$As$_2$, a recently discovered heavy fermion
superconductor, and open the door for exploring nonsymmorphic symmetries in the
broader context of improper orders with potential applications to functional
materials.
|
http://arxiv.org/abs/2309.05664v1
|
We present an apparatus that applies Ramsey's method of separated oscillatory
fields to proton spins in water molecules. The setup consists of a water
circuit, a spin polarizer, a magnetically shielded interaction region with
various radio frequency elements, and a nuclear magnetic resonance system to
measure the spin polarization. We show that this apparatus can be used for Rabi
resonance measurements and to investigate magnetic and pseudomagnetic field
effects in Ramsey-type precision measurements with a sensitivity below 100 pT.
|
http://arxiv.org/abs/2303.18108v2
|
By studying the distribution of calcium-aluminium-rich inclusions (CAIs) that
are embedded within meteorites, we can learn about the dynamical history of the
protoplanetary disk from which our Solar System formed. A long-standing problem
concerning CAIs is the CAI storage problem. CAIs are thought to have formed at
high temperatures near the Sun, but they are primarily found in carbonaceous
chondrites, which formed much further out, beyond the orbit of Jupiter.
Additionally, radial drift of CAI particles should have removed them from the
solar protoplanetary disk several million years before the parent bodies of
meteorites in which they are encountered would have accreted. We revisit a
previously suggested solution to the CAI storage problem by Desch, Kalyaan, and
Alexander which proposed that CAIs were mixed radially outward through the disk
and subsequently got trapped in a pressure maximum created by Jupiter's growing
core opening a planet gap. Our aim is to investigate whether their solution
still works when we take into account the infall phase during which the disk
builds up from the collapse of a molecular cloud core. We build a 1D numerical
code in Python using the DISKLAB package to simulate the evolution of the solar
protoplanetary disk, starting with a collapsing molecular cloud. We find that
outward transport of CAIs during the infall phase is very efficient, possibly
mixing them all the way into the far outer disk. Subsequent inward radial drift
collects CAIs in the pressure maximum beyond Jupiter's orbit while draining the
inner disk, roughly reproducing parts of the result by Desch et al. Because CAI
formation is introduced so early, abundances out to 100 AU remain
significant, possibly not consistent with some meteoritic data. It is possible
to create a disk that does not expand as far out and also does not push CAIs as
far out by using a very slowly rotating cloud.
|
http://arxiv.org/abs/2309.13760v1
|
In this work we pretrain a CLIP/ViT based model using three different
modalities of satellite imagery across five AOIs covering over ~10\% of Earth's
total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar
amplitude and interferometric coherence. This model uses $\sim 250$ M
parameters. Then, we use the embeddings produced for each modality with a
classical machine learning method to attempt different downstream tasks for
earth observation related to vegetation, built up surface, croplands and
permanent water. We consistently show how we reduce the need for labeled data
by 99\%, so that with ~200-500 randomly selected labeled examples (around
4K-10K km$^2$) we reach performance levels analogous to those achieved with the
full labeled datasets (about 150K image chips or 3M km$^2$ in each area of
interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to
think that the model has captured significant earth features useful in a wide
variety of scenarios. To enhance our model's usability in practice, its
architecture allows inference in contexts with missing modalities and even
missing channels within each modality. Additionally, we visually show that this
embedding space, obtained with no labels, is sensitive to the different earth
features represented by the labelled datasets we selected.
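
The label-efficiency experiment follows a standard pattern: fit a classical model on frozen embeddings using a few hundred labels and compare against the full labeled pool. The sketch below substitutes synthetic features for the actual embeddings, so only the experimental pattern, not the reported numbers, is reproduced.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # Stand-in for precomputed, frozen multimodal embeddings with binary
    # labels (e.g. built-up surface vs not); sizes shrunk to keep this light.
    d = 256
    w_true = rng.standard_normal(d)
    X = rng.standard_normal((20_000, d))
    y = (X @ w_true + 0.5 * rng.standard_normal(20_000) > 0).astype(int)

    X_pool, y_pool = X[:10_000], y[:10_000]
    X_test, y_test = X[10_000:], y[10_000:]
    for n_labels in (200, 500, 10_000):       # few labels vs the full pool
        idx = rng.choice(10_000, size=n_labels, replace=False)
        clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
        print(n_labels, "labels -> accuracy",
              round(accuracy_score(y_test, clf.predict(X_test)), 3))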
|
http://arxiv.org/abs/2310.00119v2
|
We discuss our recent study of local quantum mechanical uncertainty relations
in quantum many body systems. These lead to fundamental bounds for quantities
such as the speed, acceleration, relaxation times, spatial gradients and the
Lyapunov exponents. We additionally obtain bounds on various transport
coefficients like the viscosity, the diffusion constant, and the thermal
conductivity. Some of these bounds are related to earlier conjectures, such as
the bound on chaos by Maldacena, Shenker and Stanford while others are new. Our
approach is a direct way of obtaining exact bounds in fairly general settings.
We employ uncertainty relations for local quantities from which we strip off
irrelevant terms as much as possible, thereby removing non-local terms. To
gauge the utility of our bounds, we briefly compare their numerical values with
typical values available from experimental data. In various cases, approximate
simplified variants of the bounds that we obtain can become fairly tight, i.e.,
comparable to experimental values. These considerations lead to a minimal time
for thermal equilibrium to be achieved. Building on a conjectured relation
between quantum measurements and equilibration, our bounds, far more
speculatively, suggest a minimal time scale for measurements to stabilize to
equilibrium values.
|
http://arxiv.org/abs/2303.00021v1
|
This article provides a curated review of selected papers published in
prominent economics journals that use machine learning (ML) tools for research
and policy analysis. The review focuses on three key questions: (1) when ML is
used in economics, (2) what ML models are commonly preferred, and (3) how they
are used for economic applications. The review highlights that ML is
particularly used to process nontraditional and unstructured data, capture
strong nonlinearity, and improve prediction accuracy. Deep learning models are
suitable for nontraditional data, whereas ensemble learning models are
preferred for traditional datasets. While traditional econometric models may
suffice for analyzing low-complexity data, the increasing complexity of
economic data due to rapid digitalization and the growing literature suggests
that ML is becoming an essential addition to the econometrician's toolbox.
|
http://arxiv.org/abs/2304.00086v2
|
Testing with randomly generated inputs (fuzzing) has gained significant
traction due to its capacity to expose program vulnerabilities automatically.
Fuzz testing campaigns generate large amounts of data, making them ideal for
the application of machine learning (ML). Neural program smoothing (NPS), a
specific family of ML-guided fuzzers, aims to use a neural network as a smooth
approximation of the program target for new test case generation.
In this paper, we conduct the most extensive evaluation of NPS fuzzers
against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make
the following contributions: (1) We find that the original performance claims
for NPS fuzzers do not hold; a gap we relate to fundamental, implementation,
and experimental limitations of prior works. (2) We contribute the first
in-depth analysis of the contribution of machine learning and gradient-based
mutations in NPS. (3) We implement Neuzz++, which shows that addressing the
practical limitations of NPS fuzzers improves performance, but that standard
gray-box fuzzers almost always surpass NPS-based fuzzers. (4) As a consequence,
we propose new guidelines targeted at benchmarking fuzzing based on machine
learning, and present MLFuzz, a platform with GPU access for easy and
reproducible evaluation of ML-based fuzzers. Neuzz++, MLFuzz, and all our data
are public.
|
http://arxiv.org/abs/2309.16618v1
|
The density functional plus dynamical mean-field theory is used to study the
spin excitation spectra of SrRu$_2$O$_6$. A good quantitative agreement with
experimental spin excitation spectra is found. Depending on the size of the
Hund's coupling $J_H$ the system chooses either a Mott insulator or a covalent
insulator state when magnetic ordering is not allowed. We find that the nature
of the paramagnetic state has negligible influence on the charge and spin
excitation spectra. We find that antiferromagnetic correlations hide the
covalent insulator state for realistic choices of the interaction parameters.
|
http://arxiv.org/abs/2305.19826v2
|
In psychiatric diagnosis, a contemporary data-driven, manual-based method for
mental disorders classification is the most popular technique; however, it has
several inevitable flaws. Using the three-way decision as a framework, we
propose a unified model that stands for clinicians' subjective approach (CSA)
analysis consisting of three parts: qualitative analysis, quantitative
analysis, and evaluation-based analysis. A ranking list and a set of numerical
weights based on illness magnitude levels according to the clinician's greatest
degree of assumptions are the findings of the qualitative and quantitative
investigation. We further create a comparative classification of illnesses into
three groups with varying important levels; a three-way evaluation-based model
is utilized in this study with the aim of understanding and portraying these
results in a clearer way. This proposed method might be integrated with the
manual-based process as a complementary tool to improve precision while
diagnosing mental disorders.
|
http://arxiv.org/abs/2301.03351v4
|
The recent synthesis of MoSi2N4 material, along with theoretical predictions
encompassing the entire family of chemical analogs, has opened up a new array
of low-dimensional materials for a diverse range of optoelectronics and
photovoltaics applications. In this study, we conducted state-of-the-art
many-body first-principles calculations to analyze the quasi-particle
electronic structure of the material class MSi2Z4 (where M = Mo, W, and Z = N,
P, As, Sb). All monolayers display a direct band gap at the K point, with the
exception of MoSi2N4. In tungsten-based compounds, the fundamental gap can be
adjusted over a significantly broader energy range compared to their
molybdenum-based counterparts. Additionally, with increasing atomic weight of Z,
both the band gap and the exciton binding energies decrease. A noteworthy feature
is the absence of a lateral valley ({\Lambda} or Q) near the conduction band
minimum, indicating potentially higher photoluminescence efficiencies compared to
conventional transition-metal dichalcogenide monolayers. The optical spectra of
these materials are predominantly characterized by tightly bound excitons,
leading to an absorption onset in the visible range (for N-based) and in the
infrared region (for others). This diversity offers promising opportunities to
incorporate these materials and their heterostructures into optoelectronic
devices, with tandem solar cells being particularly promising.
|
http://arxiv.org/abs/2309.11163v1
|