text | source
In this letter we derive new expressions for tree-level graviton amplitudes
in $\mathcal{N}=8$ supergravity from BCFW recursion relations combined with new
types of bonus relations. These bonus relations go beyond the famous $1/z^2$
behavior under a large BCFW shift, and use knowledge about certain zeroes of
graviton amplitudes in collinear kinematics. This extra knowledge can be used
in the context of global residue theorems by writing the amplitude in a special
form using canonical building blocks. In the NMHV case these building blocks
are dressed one-loop leading singularities, the same objects that appear in the
expansion of Yang-Mills amplitudes, where each term corresponds to an
$R$-invariant. Unlike other approaches, our formula is not an expansion in
terms of cyclic objects and does not manifest color-kinematics duality, but
rather preserves the permutational symmetry of its building blocks. We also
comment on the possible connection to Grassmannian geometry and give some
non-trivial evidence of such structure for graviton amplitudes.
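For orientation, the textbook ingredients behind the $1/z^2$ statement above (the specific shifts, collinear zeroes, and global residue theorems used in the letter are not reproduced here): under an $[i,j\rangle$ BCFW shift
$$\tilde\lambda_i \to \tilde\lambda_i + z\,\tilde\lambda_j, \qquad \lambda_j \to \lambda_j - z\,\lambda_i,$$
tree-level graviton amplitudes fall off as $M_n(z)\sim 1/z^{2}$ for $z\to\infty$, so that not only $\oint \tfrac{dz}{z}\,M_n(z)=0$ (the usual recursion) but also $\oint dz\,M_n(z)=0$ holds without boundary terms; the latter gives the familiar bonus relations $\sum_{\text{poles}} \mathrm{Res}\, M_n(z) = 0$.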
|
http://arxiv.org/abs/2309.05710v1
|
The development of believable, natural, and interactive digital artificial
agents is a field of growing interest. Theoretical uncertainties and technical
barriers present considerable challenges to the field, particularly with
regard to developing agents that effectively simulate human emotions. Large
language models (LLMs) might address these issues by tapping common patterns in
situational appraisal. In three empirical experiments, this study tests the
capabilities of LLMs to solve emotional intelligence tasks and to simulate
emotions. It presents and evaluates a new chain-of-emotion architecture for
emotion simulation within video games, based on psychological appraisal
research. Results show that it outperforms standard LLM architectures on a
range of user experience and content analysis metrics. This study therefore
provides early evidence of how to construct and test affective agents based on
cognitive processes represented in language models.
|
http://arxiv.org/abs/2309.05076v1
|
Learning-based vehicle planning is receiving increasing attention with the
emergence of diverse driving simulators and large-scale driving datasets. While
offline reinforcement learning (RL) is well suited for these safety-critical
tasks, it still struggles to plan over extended periods. In this work, we
present a skill-based framework that enhances offline RL to overcome the
long-horizon vehicle planning challenge. Specifically, we design a variational
autoencoder (VAE) to learn skills from offline demonstrations. To mitigate
posterior collapse of common VAEs, we introduce a two-branch sequence encoder
to capture both discrete options and continuous variations of the complex
driving skills. The final policy treats learned skills as actions and can be
trained by any off-the-shelf offline RL algorithms. This facilitates a shift in
focus from per-step actions to temporally extended skills, thereby enabling
long-term reasoning into the future. Extensive results on CARLA prove that our
model consistently outperforms strong baselines at both training and new
scenarios. Additional visualizations and experiments demonstrate the
interpretability and transferability of extracted skills.
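A minimal PyTorch sketch of what a two-branch sequence encoder could look like: one branch produces a discrete option (via Gumbel-softmax) and one a continuous latent. All dimensions, module names, and the use of a GRU are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSkillEncoder(nn.Module):
    """Illustrative two-branch encoder: a discrete 'option' branch and a
    continuous 'variation' branch over a demonstration sub-trajectory."""
    def __init__(self, obs_act_dim=38, hidden=128, n_options=8, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(obs_act_dim, hidden, batch_first=True)
        self.option_head = nn.Linear(hidden, n_options)        # discrete branch
        self.mu_head = nn.Linear(hidden + n_options, z_dim)    # continuous branch
        self.logvar_head = nn.Linear(hidden + n_options, z_dim)

    def forward(self, traj, tau=1.0):
        # traj: (batch, T, obs_act_dim) sub-trajectory of states and actions
        _, h = self.rnn(traj)
        h = h.squeeze(0)
        option = F.gumbel_softmax(self.option_head(h), tau=tau, hard=False)
        hc = torch.cat([h, option], dim=-1)
        mu, logvar = self.mu_head(hc), self.logvar_head(hc)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        return option, z, mu, logvar

# usage with random data
enc = TwoBranchSkillEncoder()
option, z, mu, logvar = enc(torch.randn(4, 10, 38))
```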
|
http://arxiv.org/abs/2309.13614v2
|
We study entanglement transitions in a periodically driven Ising chain in the
presence of an imaginary transverse field $\gamma$ as a function of drive
frequency $\omega_D$. In the high drive amplitude and frequency regime, we find
a critical value $\gamma=\gamma_c$ below which the steady state half-chain
entanglement entropy, $S_{L/2}$, scales with chain length $L$ as $S_{L/2} \sim
\ln L$; in contrast, for $\gamma>\gamma_c$, it becomes independent of $L$. In
the small $\gamma$ limit, we compute the coefficient, $\alpha$, of the $\ln L$
term analytically using a Floquet perturbation theory and trace its origin to
the presence of Fisher-Hartwig jump singularities in the correlation function
of the driven chain. We also study the frequency dependence of $\gamma_c$ and
show that $\gamma_c \to 0$ at special drive frequencies; at these frequencies,
which we analytically compute, $S_{L/2}$ remains independent of $L$ for all
$\gamma$. This behavior can be traced to an approximate emergent symmetry of
the Floquet Hamiltonian at these drive frequencies which we identify. Finally,
we discuss the behavior of the driven system at low and intermediate drive
frequencies. Our analysis shows the presence of volume law behavior of the
entanglement in this regime $S_{\ell} \sim \ell$ for small subsystem length
$\ell \le \ell^{\ast}(\omega_D)$. We identify $\ell^{\ast}(\omega_D)$ and tie
its existence to the effective long-range nature of the Floquet Hamiltonian of
the driven chain for small subsystem size. We discuss the applicability of our
results to other integrable non-Hermitian models.
|
http://arxiv.org/abs/2309.07661v2
|
We report the first observation of ferroelectric gating in AlScN barrier
wide-bandgap nitride transistors. These FerroHEMT devices realized by direct
epitaxial growth represent a new class of ferroelectric transistors in which
the semiconductor is itself polar, and the crystalline ferroelectric barrier is
lattice-matched to the substrate. The FerroHEMTs reported here use the thinnest
nitride high-K and ferroelectric barriers to date to deliver the highest
on-currents, at 4 A/mm, and the highest-speed AlScN transistors, with fmax
larger than 150 GHz, observed in any ferroelectric transistor. The FerroHEMTs
exhibit hysteretic Id-Vgs loops with subthreshold slopes below the Boltzmann
limit. A control AlN barrier HEMT exhibits neither hysteretic nor sub-Boltzmann
behavior. While these results introduce the first epitaxial high-K and
ferroelectric barrier technology to RF and mm-wave electronics, they are also
of interest as a new material platform for combining memory and logic
functionalities in digital electronics.
|
http://arxiv.org/abs/2302.14209v1
|
The pursuit of long-term autonomy mandates that machine learning models must
continuously adapt to their changing environments and learn to solve new tasks.
Continual learning seeks to overcome the challenge of catastrophic forgetting,
where learning to solve new tasks causes a model to forget previously learnt
information. Prior-based continual learning methods are appealing as they are
computationally efficient and do not require auxiliary models or data storage.
However, prior-based approaches typically fail on important benchmarks and are
thus limited in their potential applications compared to their memory-based
counterparts. We introduce Bayesian adaptive moment regularization (BAdam), a
novel prior-based method that better constrains parameter growth, reducing
catastrophic forgetting. Our method boasts a range of desirable properties such
as being lightweight and task label-free, converging quickly, and offering
calibrated uncertainty that is important for safe real-world deployment.
Results show that BAdam achieves state-of-the-art performance for prior-based
methods on challenging single-headed class-incremental experiments such as
Split MNIST and Split FashionMNIST, and does so without relying on task labels
or discrete task boundaries.
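The exact BAdam update is not given in the abstract; the NumPy sketch below only illustrates the general idea of a prior-based, Adam-style step in which movement away from a previous-task posterior mean is penalized in proportion to an importance/precision estimate. The names `prior_mean`, `precision`, and `lam`, and the specific quadratic penalty, are assumptions for illustration.

```python
import numpy as np

def prior_regularized_adam_step(theta, grad, m, v, prior_mean, precision,
                                t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, lam=1.0):
    """One Adam-style step with an additional quadratic prior penalty
    lam * precision * (theta - prior_mean) that discourages drifting away
    from parameters important for earlier tasks (EWC-like; illustrative only)."""
    grad = grad + lam * precision * (theta - prior_mean)   # prior-based term
    m = b1 * m + (1 - b1) * grad                           # first moment
    v = b2 * v + (1 - b2) * grad ** 2                      # second moment
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage on the loss ||theta - 1||^2
d = 5
theta, m, v = np.zeros(d), np.zeros(d), np.zeros(d)
prior_mean, precision = np.zeros(d), np.ones(d)
for t in range(1, 101):
    grad = 2 * (theta - 1.0)
    theta, m, v = prior_regularized_adam_step(theta, grad, m, v,
                                              prior_mean, precision, t)
```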
|
http://arxiv.org/abs/2309.08546v3
|
Stacking regressions is an ensemble technique that forms linear combinations
of different regression estimators to enhance predictive accuracy. The
conventional approach uses cross-validation data to generate predictions from
the constituent estimators, and least-squares with nonnegativity constraints to
learn the combination weights. In this paper, we learn these weights
analogously by minimizing a regularized version of the empirical risk subject
to a nonnegativity constraint. When the constituent estimators are linear
least-squares projections onto nested subspaces separated by at least three
dimensions, we show that thanks to an adaptive shrinkage effect, the resulting
stacked estimator has strictly smaller population risk than the best single
estimator among them, with more significant gains when the signal-to-noise
ratio is small. Here "best" refers to an estimator that minimizes a model
selection criterion such as AIC or BIC. In other words, in this setting, the
best single estimator is inadmissible. Because the optimization problem can be
reformulated as isotonic regression, the stacked estimator requires the same
order of computation as the best single estimator, making it an attractive
alternative in terms of both performance and implementation.
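As a point of reference, a small sketch of the conventional stacking step described above: nonnegative least squares on cross-validation predictions. The paper's regularized variant and its isotonic-regression reformulation are not reproduced here; the data are toy placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# P[:, k] holds cross-validation predictions of the k-th constituent estimator,
# y is the response; toy data for illustration.
rng = np.random.default_rng(0)
n, K = 200, 4
y = rng.normal(size=n)
P = np.column_stack([y + rng.normal(scale=0.5 * (k + 1), size=n) for k in range(K)])

# Conventional stacking: least-squares combination weights under w >= 0.
w, _ = nnls(P, y)
stacked_prediction = P @ w
print("nonnegative stacking weights:", np.round(w, 3))
```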
|
http://arxiv.org/abs/2309.09880v3
|
This paper develops a new vascular respiratory motion compensation algorithm,
Motion-Related Compensation (MRC), to conduct vascular respiratory motion
compensation by extrapolating the correlation between invisible vascular and
visible non-vascular motion. Robot-assisted vascular intervention can significantly
reduce the radiation exposure of surgeons. In robot-assisted image-guided
intervention, blood vessels are constantly moving/deforming due to respiration,
and they are invisible in the X-ray images unless contrast agents are injected.
The vascular respiratory motion compensation technique predicts 2D vascular
roadmaps in live X-ray images. When blood vessels are visible after contrast
agent injection, vascular respiratory motion compensation is conducted based
on the sparse Lucas-Kanade feature tracker. An MRC model is trained to learn
the correlation between vascular and non-vascular motions. During the
intervention, the invisible blood vessels are predicted with visible tissues
and the trained MRC model. Moreover, a Gaussian-based outlier filter is adopted
for refinement. Experiments on in-vivo data sets show that the proposed method
can yield vascular respiratory motion compensation in 0.032 sec, with an
average error of 1.086 mm. Our real-time and accurate vascular respiratory motion
compensation approach contributes to modern vascular intervention and surgical
robots.
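A minimal OpenCV sketch of the sparse Lucas-Kanade tracking step mentioned above, applied to two consecutive frames; the MRC correlation model and the Gaussian outlier filter are not shown, and the frame arrays here are synthetic placeholders rather than X-ray images.

```python
import numpy as np
import cv2

# Two consecutive frames (placeholders for live X-ray images), 8-bit grayscale.
prev_frame = (np.random.rand(256, 256) * 255).astype(np.uint8)
next_frame = np.roll(prev_frame, shift=2, axis=0)   # fake respiratory shift

# Detect visible non-vascular features in the previous frame ...
pts_prev = cv2.goodFeaturesToTrack(prev_frame, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

# ... and track them into the next frame with pyramidal Lucas-Kanade.
pts_next, status, err = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame,
                                                 pts_prev, None,
                                                 winSize=(21, 21), maxLevel=3)

good_prev = pts_prev[status.ravel() == 1]
good_next = pts_next[status.ravel() == 1]
motion = good_next - good_prev   # per-feature displacement (input to an MRC-style model)
```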
|
http://arxiv.org/abs/2308.16451v1
|
Topic modeling is admittedly a convenient way to monitor market trends.
Conventionally, Latent Dirichlet Allocation (LDA) is considered the go-to model
for obtaining this type of information. Given LDA's ability to infer keywords
from token conditional probabilities, we can identify the most probable or
essential topics. However, the results are often not intuitive because the
inferred topics do not fully align with human knowledge. LDA offers the most
likely relevant keywords, which raises the further question of whether these
statistically derived connections are reliable. It is also hard to decide the
number of topics manually in advance. Following the growing trend of using
fuzzy membership for clustering and transformers for word embedding, this work
presents fuzzy topic modeling based on soft clustering and document embeddings
from a state-of-the-art transformer-based model. In our practical application
to press release monitoring, fuzzy topic modeling gives more natural results
than the traditional LDA output.
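A compact sketch of the two ingredients named above: transformer-based document embeddings (here assumed to come from the sentence-transformers package; any encoder would do) followed by a plain fuzzy c-means soft-clustering loop in NumPy. The number of topics, the fuzzifier m, and the model name are illustrative choices.

```python
import numpy as np
# from sentence_transformers import SentenceTransformer    # assumed encoder
# X = SentenceTransformer("all-MiniLM-L6-v2").encode(docs) # (n_docs, dim)
X = np.random.rand(100, 384)                               # placeholder embeddings

def fuzzy_cmeans(X, n_clusters=5, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: returns soft membership matrix U and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))     # (n, c) memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                   # normalize memberships
    return U, centers

U, centers = fuzzy_cmeans(X)
# Each document now has a degree of membership in every "fuzzy topic".
print(U[0].round(3), U[0].sum())
```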
|
http://arxiv.org/abs/2309.09658v1
|
In this paper, a kinematically modular approach to robot control is
presented. The method involves structures called Elementary Dynamic Actions and
a network model combining these elements. With this control framework, a rich
repertoire of movements can be generated by combination of basic modules. The
problems of solving inverse kinematics, managing kinematic singularity and
kinematic redundancy are avoided. The modular approach is robust against
contact and physical interaction, which makes it particularly effective for
contact-rich manipulation. Each kinematic module can be learned by Imitation
Learning, thereby resulting in a modular learning strategy for robot control.
The theoretical foundations and their real robot implementation are presented.
Using a KUKA LBR iiwa14 robot, three tasks were considered: (1) generating a
sequence of discrete movements, (2) generating a combination of discrete and
rhythmic movements, and (3) a drawing and erasing task. The results obtained
indicate that this modular approach has the potential to simplify the
generation of a diverse range of robot actions.
|
http://arxiv.org/abs/2309.15271v2
|
Diffusion models have gained prominence in the image domain for their
capabilities in data generation and transformation, achieving state-of-the-art
performance in various tasks in both image and audio domains. In the rapidly
evolving field of audio-based machine learning, safeguarding model integrity
and establishing data copyright are of paramount importance. This paper
presents the first watermarking technique applied to audio diffusion models
trained on mel-spectrograms. This offers a novel approach to the aforementioned
challenges. Our model excels not only in benign audio generation, but also
incorporates an invisible watermarking trigger mechanism for model
verification. This watermark trigger serves as a protective layer, enabling the
identification of model ownership and ensuring its integrity. Through extensive
experiments, we demonstrate that invisible watermark triggers can effectively
protect against unauthorized modifications while maintaining high utility in
benign audio generation tasks.
|
http://arxiv.org/abs/2309.13166v2
|
Ferroelectricity can exist in elemental phases as a result of charge
transfers between atoms occupying inequivalent Wyckoff positions. We
investigate the emergence of ferroelectricity in two-dimensional elemental
materials with buckled honeycomb lattices. Various multi-bilayer structures
hosting ferroelectricity are designed by stacking-engineering. Ferroelectric
materials candidates formed by group IV and V elements are predicted
theoretically. Ultrathin Bi films show layer-stacking-dependent physical
properties of ferroelectricity, topology, and metallicity. The two-bilayer Bi
film with a polar stacking sequence is found to be an elemental topological
ferroelectric material. Three- and four-bilayer Bi films with polar structures
are ferroelectric-like elemental polar metals with topologically nontrivial edge
states. For Ge and Sn, trivial elemental polar metals are predicted. Our work
reveals the possibility of designing two-dimensional elemental topological
ferroelectrics and polar metals by stacking-engineering.
|
http://arxiv.org/abs/2309.14609v1
|
The notion of shortcut partition, introduced recently by Chang, Conroy, Le,
Milenkovi\'c, Solomon, and Than [CCLMST23], is a new type of graph partition
into low-diameter clusters. Roughly speaking, the shortcut partition guarantees
that for every two vertices $u$ and $v$ in the graph, there exists a path
between $u$ and $v$ that intersects only a few clusters. They proved that any
planar graph admits a shortcut partition and gave several applications,
including a construction of tree cover for arbitrary planar graphs with stretch
$1+\varepsilon$ and $O(1)$ many trees for any fixed $\varepsilon \in (0,1)$.
However, the construction heavily exploits planarity in multiple steps, and is
thus inherently limited to planar graphs.
In this work, we breach the "planarity barrier" to construct a shortcut
partition for $K_r$-minor-free graphs for any $r$. To this end, we take a
completely different approach -- our key contribution is a novel deterministic
variant of the cop decomposition in minor-free graphs [And86, AGG14]. Our
shortcut partition for $K_r$-minor-free graphs yields several direct
applications. Most notably, we construct the first optimal distance oracle for
$K_r$-minor-free graphs, with $1+\varepsilon$ stretch, linear space, and
constant query time for any fixed $\varepsilon \in (0,1)$. The previous best
distance oracle [AG06] uses $O(n\log n)$ space and $O(\log n)$ query time, and
its construction relies on Robertson-Seymour structural theorem and other
sophisticated tools. We also obtain the first tree cover of $O(1)$ size for
minor-free graphs with stretch $1+\varepsilon$, while the previous best
$(1+\varepsilon)$-tree cover has size $O(\log^2 n)$ [BFN19].
|
http://arxiv.org/abs/2308.00555v1
|
Deep neural network models can learn clinically relevant features from
millions of histopathology images. However, generating high-quality annotations
to train such models for each hospital, each cancer type, and each diagnostic
task is prohibitively laborious. On the other hand, terabytes of training data
-- while lacking reliable annotations -- are readily available in the public
domain in some cases. In this work, we explore how these large datasets can be
consciously utilized to pre-train deep networks to encode informative
representations. We then fine-tune our pre-trained models on a fraction of
annotated training data to perform specific downstream tasks. We show that our
approach can reach the state-of-the-art (SOTA) for patch-level classification
with only 1-10% randomly selected annotations compared to other SOTA
approaches. Moreover, we propose an uncertainty-aware loss function, to
quantify the model confidence during inference. Quantified uncertainty helps
experts select the best instances to label for further training. Our
uncertainty-aware labeling reaches the SOTA with significantly fewer
annotations compared to random labeling. Last, we demonstrate how our
pre-trained encoders can surpass current SOTA for whole-slide image
classification with weak supervision. Our work lays the foundation for data and
task-agnostic pre-trained deep networks with quantified uncertainty.
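The abstract does not specify the form of the uncertainty-aware loss; the sketch below shows one common construction (a learned per-sample log-variance that attenuates the loss and adds a penalty, in the spirit of heteroscedastic-uncertainty losses), purely to illustrate how a confidence score can be produced at inference. All dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyAwareHead(nn.Module):
    """Classifier head that also predicts a per-sample log-variance s.
    Loss ~ exp(-s) * CE + s/2: confident samples (small s) keep full CE,
    uncertain samples are attenuated but pay a penalty (illustrative form)."""
    def __init__(self, feat_dim=512, n_classes=4):
        super().__init__()
        self.logits = nn.Linear(feat_dim, n_classes)
        self.log_var = nn.Linear(feat_dim, 1)

    def forward(self, feats, targets=None):
        logits = self.logits(feats)
        s = self.log_var(feats).squeeze(-1)
        if targets is None:
            return logits, s          # s serves as an uncertainty score at inference
        ce = F.cross_entropy(logits, targets, reduction="none")
        return (torch.exp(-s) * ce + 0.5 * s).mean()

head = UncertaintyAwareHead()
loss = head(torch.randn(8, 512), torch.randint(0, 4, (8,)))
```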
|
http://arxiv.org/abs/2309.07113v1
|
Recent works have shown that in contrast to classical linear elastic fracture
mechanics, endowing crack fronts in a brittle Green-elastic solid with
Steigmann-Ogden surface elasticity yields a model that predicts bounded
stresses and strains at the crack tips for plane-strain problems. However,
singularities persist for anti-plane shear (mode-III fracture) under far-field
loading, even when Steigmann-Ogden surface elasticity is incorporated.
This work is motivated by obtaining a model of brittle fracture capable of
predicting bounded stresses and strains for all modes of loading. We formulate
an exact general theory of a three-dimensional solid containing a boundary
surface with strain-gradient surface elasticity. For planar reference surfaces
parameterized by flat coordinates, the form of surface elasticity reduces to
that introduced by Hilgers and Pipkin, and when the surface energy is
independent of the surface covariant derivative of the stretching, the theory
reduces to that of Steigmann and Ogden. We discuss material symmetry using
Murdoch and Cohen's extension of Noll's theory. We present a model small-strain
surface energy that incorporates resistance to geodesic distortion, satisfies
strong ellipticity, and requires the same material constants found in the
Steigmann-Ogden theory.
Finally, we derive and apply the linearized theory to mode-III fracture in an
infinite plate under far-field loading. We prove that there always exists a
unique classical solution to the governing integro-differential equation, and
in contrast to using Steigmann-Ogden surface elasticity, our model is
consistent with the linearization assumption in predicting finite stresses and
strains at the crack tips.
|
http://arxiv.org/abs/2301.13744v3
|
Video saliency prediction and detection are thriving research domains that
enable computers to simulate the distribution of visual attention akin to how
humans perceive dynamic scenes. While many approaches have crafted
task-specific training paradigms for either video saliency prediction or video
salient object detection tasks, little attention has been devoted to devising a
generalized saliency modeling framework that seamlessly bridges both these
distinct tasks. In this study, we introduce the Unified Saliency Transformer
(UniST) framework, which comprehensively utilizes the essential attributes of
video saliency prediction and video salient object detection. In addition to
extracting representations of frame sequences, a saliency-aware transformer is
designed to learn the spatio-temporal representations at progressively
increased resolutions, while incorporating effective cross-scale saliency
information to produce a robust representation. Furthermore, a task-specific
decoder is proposed to perform the final prediction for each task. To the best
of our knowledge, this is the first work that explores designing a transformer
structure for both saliency modeling tasks. Convincing experiments demonstrate
that the proposed UniST achieves superior performance across seven challenging
benchmarks for two tasks, and significantly outperforms the other
state-of-the-art methods.
|
http://arxiv.org/abs/2309.08220v1
|
This work is devoted to the study of the probability of immunity, i.e. the
effect occurs whether exposed or not. We derive necessary and sufficient
conditions for non-immunity and $\epsilon$-bounded immunity, i.e. the
probability of immunity is zero and $\epsilon$-bounded, respectively. The
former allows us to estimate the probability of benefit (i.e., the effect
occurs if and only if exposed) from a randomized controlled trial, and the
latter allows us to produce bounds of the probability of benefit that are
tighter than the existing ones. We also introduce the concept of indirect
immunity (i.e., through a mediator) and repeat our previous analysis for it.
Finally, we propose a method for sensitivity analysis of the probability of
immunity under unmeasured confounding.
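In standard potential-outcome notation (binary exposure $X$ and effect $Y$, with $Y_x$ the effect under exposure level $x$), the quantities discussed above read, using the abstract's own definitions,
$$P(\text{immunity}) = P(Y_1 = 1,\ Y_0 = 1), \qquad P(\text{benefit}) = P(Y_1 = 1,\ Y_0 = 0);$$
this is notation only and does not reproduce the paper's bounds or identification conditions.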
|
http://arxiv.org/abs/2309.11942v2
|
Autonomous vehicles (AVs) are expected to bring major benefits to transport
and society. To exploit this potential, their acceptance by society is a
necessary condition. However, AV acceptance is currently at stake: AVs face
resistance by bystanders and local communities. Resistance can prevent the
implementation and use of AVs, threatening road safety and efficiency. The
present study performed a qualitative and quantitative text analysis of
comments submitted by locals in San Francisco (SF) to the California Public
Utilities Commission (CPUC) on the fared deployment of AVs. The results of the
analysis are synthesized, and a conceptual framework explaining and predicting
resistance is proposed. The framework posits that the occurrence of resistance
is a direct result of the perception of threats, which is determined by
individual and system characteristics, direct and indirect consequences of
system use, reactions of others, and external events. AVs as a threat to safety
were associated with their unpredictable and illegal driving behavior, as well
as with producing conflict situations. The lack of explicit communication between
AVs and other road users due to the absence of a human driver behind the
steering wheel negatively contributed to perceived safety and trust, especially
for vulnerable populations in crossing situations. Respondents reported a
negative impact on road capacity, congestion, and traffic flow, with AVs
blocking other road users, such as emergency vehicles. Inaccessible vehicle
design contributed to the exclusion of vulnerable groups with disabilities. The
scientific dialogue on acceptance of AVs needs to shift towards resistance as
the 'other' essential element of acceptance to ensure that we live up to our
promise of transitioning towards more sustainable mobility that is inclusive,
equitable, fair, just, affordable, and available to all.
|
http://arxiv.org/abs/2309.10484v1
|
We present the theoretical status of the lifetimes of weakly decaying heavy
hadrons containing a bottom or a charm quark, and discuss the current
predictions, based on the framework of the Heavy Quark Expansion (HQE), for
both mesons and baryons. Potential improvements to reduce the theoretical
uncertainties are also highlighted.
|
http://arxiv.org/abs/2302.14590v1
|
As a second-order method, the Natural Gradient Descent (NGD) has the ability
to accelerate training of neural networks. However, due to the prohibitive
computational and memory costs of computing and inverting the Fisher
Information Matrix (FIM), efficient approximations are necessary to make NGD
scalable to Deep Neural Networks (DNNs). Many such approximations have been
attempted. The most sophisticated of these is KFAC, which approximates the FIM
as a block-diagonal matrix, where each block corresponds to a layer of the
neural network. By doing so, KFAC ignores the interactions between different
layers. In this work, we investigate the value of restoring some
low-frequency interactions between the layers by means of two-level methods.
Inspired by domain decomposition, several two-level corrections to KFAC using
different coarse spaces are proposed and assessed. The obtained results show
that incorporating the layer interactions in this fashion does not really
improve the performance of KFAC. This suggests that it is safe to discard the
off-diagonal blocks of the FIM, since the block-diagonal approach is
sufficiently robust, accurate and economical in computation time.
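For readers unfamiliar with KFAC, a tiny NumPy sketch of the block it keeps: for one fully connected layer, the Fisher block is approximated as a Kronecker product of the input second-moment matrix A and the output-gradient second-moment matrix G, so the preconditioned gradient is G^{-1} ∇W A^{-1}. The two-level coarse corrections studied in the paper are not shown; dimensions and damping are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 64, 20, 10
a = rng.normal(size=(batch, d_in))        # layer inputs (activations)
g = rng.normal(size=(batch, d_out))       # gradients w.r.t. layer pre-activations
grad_W = g.T @ a / batch                  # (d_out, d_in) gradient of the weights

# KFAC: F_layer ~ A (x) G with A = E[a a^T], G = E[g g^T]; one block per layer,
# cross-layer (off-diagonal) blocks of the FIM are simply dropped.
damping = 1e-3
A = a.T @ a / batch + damping * np.eye(d_in)
G = g.T @ g / batch + damping * np.eye(d_out)

# (A (x) G)^{-1} vec(grad_W) corresponds to G^{-1} grad_W A^{-1}
nat_grad_W = np.linalg.solve(G, grad_W) @ np.linalg.inv(A)
```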
|
http://arxiv.org/abs/2303.18083v2
|
We propose terminology to classify interpretations of quantum mechanics and
models that modify or complete quantum mechanics. Our focus is on models which
have previously been referred to as superdeterministic (strong or weak),
retrocausal (with or without signalling, dynamical or non-dynamical),
future-input-dependent, atemporal and all-at-once, not always with the same
meaning or context. Sometimes these models are assumed to be deterministic,
sometimes not, the word deterministic has been given different meanings, and
different notions of causality have been used when classifying them. This has
created much confusion in the literature, and we hope that the terms proposed
here will help to clarify the nomenclature. The general model framework that we
will propose may also be useful to classify other interpretations and
modifications of quantum mechanics. This document grew out of the discussions
at the 2022 Bonn Workshop on Superdeterminism and Retrocausality.
|
http://arxiv.org/abs/2309.12293v2
|
Star-forming galaxies are believed to replenish their atomic gas reservoir,
which is consumed in star-formation, through accretion of gas from their
circumgalactic mediums (CGMs). However, there are few observational constraints
today on the gas accretion rate in external galaxies. Here, we use our recent
measurement of the scaling relation between the atomic hydrogen (HI) mass
$M_{HI}$ and the stellar mass $M_*$ in star-forming galaxies at $z \approx
0.35$, with the relations between the star-formation rate (SFR) and $M_*$, and
the molecular gas mass $M_{Mol}$ and $M_*$, and the assumption that
star-forming galaxies evolve along the main sequence, to determine the
evolution of the neutral gas reservoir and the average net gas accretion rate
onto the disks of star-forming galaxies over the past 4 Gyr. For galaxies with
$M_* \gtrsim 10^9 M_{\odot}$ today, we find that both $M_*$ and $M_{HI}$ in the
disk have increased, while $M_{Mol}$ has decreased, since $z \sim 0.35$. The
average gas accretion rate onto the disk over the past 4 Gyr is similar to the
average SFR over this period, implying that main-sequence galaxies have
maintained a stable HI reservoir, despite the consumption of gas in
star-formation. We obtain an average net gas accretion rate (over the past 4
Gyr) of $\approx 6 M_{\odot} yr^{-1}$ for galaxies with the stellar mass of the
Milky Way. At low redshifts, $z \lesssim 0.4$, the reason for the decline in
the cosmic SFR density thus appears to be the inefficiency in the conversion of
atomic gas to molecular gas, rather than insufficient gas accretion from the
CGM.
|
http://arxiv.org/abs/2309.05937v2
|
Various spatial-gradient extensions of standard viscoelastic rheologies of
the Kelvin-Voigt, Maxwell, and Jeffreys types are analyzed in linear
one-dimensional situations with respect to the propagation of waves and their
dispersion and attenuation. These gradient extensions are then presented in
their large-strain nonlinear variants, where they are sometimes used for purely
analytical reasons, in either the Lagrangian or the Eulerian formulation,
without realizing this wave-propagation context. The interconnection between
these two modeling aspects is thus revealed in particular selected cases.
|
http://arxiv.org/abs/2309.05089v2
|
Large Language Models (LLMs) are trained and aligned to follow natural
language instructions with only a handful of examples, and they are prompted as
task-driven autonomous agents to adapt to various sources of execution
environments. However, deploying agent LLMs in virtual reality (VR) has been
challenging due to the lack of efficiency in online interactions and the
complex manipulation categories in 3D environments. In this work, we propose
Voice2Action, a framework that hierarchically analyzes customized voice signals
and textual commands through action and entity extraction and divides the
execution tasks into canonical interaction subsets in real-time with error
prevention from environment feedback. Experiment results in an urban
engineering VR environment with synthetic instruction data show that
Voice2Action can perform more efficiently and accurately than approaches
without optimizations.
|
http://arxiv.org/abs/2310.00092v1
|
As language models are adopted by a more sophisticated and diverse set of
users, the importance of guaranteeing that they provide factually correct
information supported by verifiable sources is critical across fields of study.
This is especially the case for high-stakes fields, such as medicine and law,
where the risk of propagating false information is high and can lead to
undesirable societal consequences. Previous work studying attribution and
factuality has not focused on analyzing these characteristics of language model
outputs in domain-specific scenarios. In this work, we conduct human evaluation
of responses from a few representative systems along various axes of
attribution and factuality, by bringing domain experts in the loop.
Specifically, we collect expert-curated questions from 484 participants across
32 fields of study, and then ask the same experts to evaluate generated
responses to their own questions. In addition, we ask experts to improve upon
responses from language models. The output of our analysis is ExpertQA, a
high-quality long-form QA dataset with 2177 questions spanning 32 fields, along
with verified answers and attributions for claims in the answers.
|
http://arxiv.org/abs/2309.07852v2
|
The paper deals with the theoretical consideration of surface
plasmon-polaritons in a graphene monolayer embedded into a dielectric with
spatially separated gain and losses. It is demonstrated that the presence of
gain and losses in the system leads to the formation of an additional mode of
graphene surface plasmon-polaritons, which has no counterpart in the
conservative system. When the gain and losses are mutually balanced, the
position of the exceptional point -- the transition point between unbroken and
broken $\mathcal{PT}$-symmetry -- can be effectively tuned by graphene's
doping. In the case of unbalanced gain and losses, the spectrum of surface
plasmon-polaritons contains a spectral singularity, whose frequency is also
adjustable through the electrostatic gating of graphene.
|
http://arxiv.org/abs/2309.16787v1
|
Real-time and efficient path planning is critical for all robotic systems. In
particular, it is of greater importance for industrial robots since the overall
planning and execution time directly impact the cycle time and automation
economics in production lines. While the problem may not be complex in static
environments, classical approaches are inefficient in high-dimensional
environments in terms of planning time and optimality. Collision checking poses
another challenge in obtaining a real-time solution for path planning in
complex environments. To address these issues, we propose an end-to-end
learning-based framework, viz. the Path Planning and Collision checking Network
(PPCNet). The PPCNet generates the path by computing waypoints sequentially
using two networks: the first network generates a waypoint, and the second one
determines whether the waypoint is on a collision-free segment of the path. The
end-to-end training process is based on imitation learning that uses data
aggregation from the experience of an expert planner to train the two networks,
simultaneously. We utilize two approaches for training a network that
efficiently approximates the exact geometrical collision checking function.
Finally, the PPCNet is evaluated in two different simulation environments and a
practical implementation on a robotic arm for a bin-picking application.
Compared to the state-of-the-art path planning methods, our results show
significant improvement in performance by greatly reducing the planning time
with comparable success rates and path lengths.
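A schematic sketch of the sequential two-network loop described above: one network proposes the next waypoint, the other classifies whether the connecting segment is collision-free. All architectures, dimensions, and the fallback behavior are placeholders, not the PPCNet design.

```python
import torch
import torch.nn as nn

state_dim = 7                       # e.g. joint configuration of a robotic arm

waypoint_net = nn.Sequential(       # proposes the next waypoint
    nn.Linear(2 * state_dim, 128), nn.ReLU(), nn.Linear(128, state_dim))
collision_net = nn.Sequential(      # classifies a segment as collision-free or not
    nn.Linear(2 * state_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

def plan(start, goal, max_steps=50, tol=0.05):
    """Generate waypoints one at a time until the goal is (approximately) reached."""
    path, current = [start], start
    for _ in range(max_steps):
        proposal = waypoint_net(torch.cat([current, goal]))
        segment_ok = collision_net(torch.cat([current, proposal])) > 0.5
        if not segment_ok:
            break                   # a real planner would re-sample or fall back here
        path.append(proposal)
        current = proposal
        if torch.norm(current - goal) < tol:
            break
    return path

waypoints = plan(torch.zeros(state_dim), torch.ones(state_dim))
```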
|
http://arxiv.org/abs/2304.00119v1
|
This paper investigates the transverse Ising model on a discretization of
two-dimensional anti-de Sitter space. We use classical and quantum algorithms
to simulate real-time evolution and measure out-of-time-ordered correlators
(OTOC). The latter can probe thermalization and scrambling of quantum
information under time evolution. We compared tensor network-based methods both
with simulation on gate-based superconducting quantum devices and analog
quantum simulation using Rydberg arrays. While studying this system's
thermalization properties, we observed different regimes depending on the
radius of curvature of the space. In particular, we find a region of parameter
space where the thermalization time depends only logarithmically on the number
of degrees of freedom.
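For reference, a standard form of the out-of-time-ordered correlator measured in such studies (the specific operators and the state used in the paper are not specified here): for unitary $W$ and $V$,
$$F(t) = \langle W^{\dagger}(t)\,V^{\dagger}\,W(t)\,V\rangle, \qquad W(t)=e^{iHt}We^{-iHt}, \qquad C(t) = \langle [W(t),V]^{\dagger}[W(t),V]\rangle = 2\bigl(1-\mathrm{Re}\,F(t)\bigr),$$
with the growth and late-time saturation of $C(t)$ serving as the scrambling diagnostic.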
|
http://arxiv.org/abs/2309.04383v2
|
We present PyRPL, an open source software package that allows the
implementation of automatic digital feedback controllers for quantum optics
experiments on commercially available, affordable FPGA boards. Our software
implements the digital generation of various types of error signals, from an
analog input through the application of loop filters of high complexity and
real-time gain adjustment for multiple analog output signals, including
different algorithms for resonance search, lock acquisition sequences and
in-loop gain optimization. Furthermore, all necessary diagnostic instruments
such as an oscilloscope, a network analyzer and a spectrum analyzer are
integrated into our software. Apart from providing a quickly scalable,
automatic feedback controller, the lock performance that can be achieved by
using PyRPL with imperfect equipment such as piezoelectric transducers and
noisy amplifiers is better than the one achievable with standard analog
controllers due to the higher complexity of implementable filters and
possibilities of nonlinear operations in the FPGA. This drastically reduces the
cost of added complexity when introducing additional feedback loops to an
experiment. The open-source character also distinguishes PyRPL from commercial
solutions, as it allows users to customize functionalities at various levels,
ranging from the easy integration of PyRPL-based feedback controllers into
existing setups to the modification of the FPGA functionality. A community of
developers provides fast and efficient implementation and testing of software
modifications.
|
http://arxiv.org/abs/2310.00086v1
|
The reduction of phase noise in electronic systems is of utmost importance in
modern communication and signal processing applications and requires an
understanding of the underlying physical processes. Here, we systematically
study the phase noise in mutually synchronized chains of nano-constriction spin
Hall nano-oscillators (SHNOs). We find that longer chains have improved phase
noise figures at low offset frequencies (1/f noise), where chains of two and
ten mutually synchronized SHNOs have 2.8 and 6.2 dB lower phase noise than
single SHNOs. This is close to the theoretical values of 3 and 10 dB, and the
deviation is ascribed to process variations between nano-constrictions.
However, at higher offset frequencies (thermal noise), the phase noise
unexpectedly increases with chain length, which we ascribe to process
variations, a higher operating temperature in the long chains at the same drive
current and phase delays in the coupling between nano-constrictions.
|
http://arxiv.org/abs/2303.18097v1
|
We present a comparative study of the effect of low-temperature opacities on
stellar models up to the Red Giant branch (RGB), computed with the GARching
STellar Evolution Code. We have used two sets of low-temperature opacities;
{\AE}SOPUS ({\AE}) from the University of Padova and those from the Wichita
State University group (F05). In the relevant range of temperatures for this
study, log \k{appa}{\AE} < log \k{appa}F 05. Therefore, to compare stellar
evolutionary tracks, we performed a solar calibration of the {\alpha}mlt, for
each set of low-temperature opacities. After carrying such a calibration, we
find that stellar evolutionary tracks are almost unaffected by the choice of
low-temperature opacities, with largest variations of 25-30 K at the latest
evolutionary stages of the RGB phase.
|
http://arxiv.org/abs/2309.10490v1
|
Finger vein pattern recognition is an emerging biometric with a good
resistance to presentation attacks and low error rates. One problem is that it
is hard to obtain ground truth finger vein patterns from live fingers. In this
paper we propose an advanced method to create finger vein phantoms using 3D
printing where we mimic the optical properties of the various tissues inside
the fingers, like bone, veins and soft tissues using different printing
materials and parameters. We demonstrate that we are able to create finger
phantoms that result in realistic finger vein images and precisely known vein
patterns. These phantoms can be used to develop and evaluate finger vein
extraction and recognition methods. In addition, we show that the finger vein
phantoms can be used to spoof a finger vein recognition system. This paper is
based on the Master's thesis of Rasmus van der Grift.
|
http://arxiv.org/abs/2309.14806v1
|
There are many artificial intelligence algorithms for autonomous driving, but
directly installing these algorithms on vehicles is unrealistic and expensive.
At the same time, many of these algorithms need an environment in which to be
trained and optimized. Simulation is a valuable and meaningful solution that
provides training and testing functions, and it can be said that simulation is
a critical link in the autonomous driving world. There are also many different
simulation applications or systems from companies or academia, such as SVL and
CARLA. These simulators claim to offer the closest match to real-world
conditions, but their environment objects, such as pedestrians and other
vehicles around the agent vehicle, are programmed in a fixed way. They can only
move along preset trajectories, or their movements are determined by random
numbers. What happens when all environment objects are also driven by
artificial intelligence, so that their behaviors resemble real people or the
natural reactions of other drivers? This problem is a blind spot for most
simulation applications, or one that they cannot easily solve. The
Neurorobotics Platform from the TUM team of Prof. Alois Knoll introduces the
ideas of "Engines" and "Transceiver Functions" to solve the multi-agent
problem. This report starts with a brief study of the Neurorobotics Platform
and analyzes the potential and possibility of developing a new simulator to
achieve the goal of true real-world simulation. Then, based on the NRP-Core
platform, this initial development aims to construct an initial demo
experiment. The report starts with basic knowledge of NRP-Core and its
installation, then focuses on explaining the components necessary for a
simulation experiment, and finally describes the construction details of the
autonomous driving system, which integrates object detection and autonomous
control.
|
http://arxiv.org/abs/2301.00089v1
|
Reliability quantification of deep reinforcement learning (DRL)-based control
is a significant challenge for the practical application of artificial
intelligence (AI) in safety-critical systems. This study proposes a method for
quantifying the reliability of DRL-based control. First, an existing method,
random network distillation, was applied to the reliability evaluation to clarify
the issues to be solved. Second, a novel method for reliability quantification
was proposed to solve these issues. The reliability is quantified using two
neural networks: reference and evaluator. They have the same structure with the
same initial parameters. The outputs of the two networks were the same before
training. During training, the evaluator network parameters were updated to
maximize the difference between the reference and evaluator networks for
trained data. Thus, the reliability of the DRL-based control for a state can be
evaluated based on the difference in output between the two networks. The
proposed method was applied to DQN-based control as an example of a simple
task, and its effectiveness was demonstrated. Finally, the proposed method was
applied to the problem of switching trained models depending on the state.
Consequently, the performance of the DRL-based control was improved by
switching the trained models according to their reliability.
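A minimal PyTorch sketch of the two-network idea described above: a frozen reference network and a trainable evaluator network with identical structure and identical initial parameters, with the reliability of a state scored by the distance between their outputs. The training objective follows the abstract's description only schematically, and all sizes and the DQN integration are placeholders.

```python
import copy
import torch
import torch.nn as nn

def make_net(state_dim=8, out_dim=32):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

reference = make_net()
evaluator = copy.deepcopy(reference)    # same structure, same initial parameters
for p in reference.parameters():
    p.requires_grad_(False)             # the reference network stays fixed

opt = torch.optim.Adam(evaluator.parameters(), lr=1e-3)

def train_step(states):
    # Per the abstract: update the evaluator to *maximize* the output difference
    # on states seen during training, so familiar states yield a large difference.
    diff = (evaluator(states) - reference(states)).pow(2).mean()
    loss = -diff
    opt.zero_grad(); loss.backward(); opt.step()

def reliability(state):
    # Larger output difference -> state is closer to the training distribution.
    with torch.no_grad():
        return (evaluator(state) - reference(state)).pow(2).mean().item()

for _ in range(100):
    train_step(torch.randn(64, 8))
print(reliability(torch.randn(1, 8)))
```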
|
http://arxiv.org/abs/2309.16977v2
|
Large language models (LLMs) are highly adept at question answering and
reasoning tasks, but when reasoning in a situational context, human
expectations vary depending on the relevant cultural common ground. As
languages are associated with diverse cultures, LLMs should also be
culturally-diverse reasoners. In this paper, we study the ability of a wide
range of state-of-the-art multilingual LLMs (mLLMs) to reason with proverbs and
sayings in a conversational context. Our experiments reveal that: (1) mLLMs
"know" limited proverbs and memorizing proverbs does not mean understanding
them within a conversational context; (2) mLLMs struggle to reason with
figurative proverbs and sayings, and when asked to select the wrong answer
(instead of asking it to select the correct answer); and (3) there is a
"culture gap" in mLLMs when reasoning about proverbs and sayings translated
from other languages. We construct and release our evaluation dataset MAPS
(MulticultrAl Proverbs and Sayings) for proverb understanding with
conversational context for six different languages.
|
http://arxiv.org/abs/2309.08591v2
|
We compare games under delayed control and delay games, two types of infinite
games modelling asynchronicity in reactive synthesis. In games under delayed
control both players suffer from partial informedness due to symmetrically
delayed communication, while in delay games, the protagonist has to grant
lookahead to the alter player. Our first main result, the interreducibility of
the existence of sure winning strategies for the protagonist, allows to
transfer known complexity results and bounds on the delay from delay games to
games under delayed control, for which no such results had been known. We
furthermore analyse existence of randomized strategies that win almost surely,
where this correspondence between the two types of games breaks down. In this
setting, some games surely won by the alter player in delay games can now be
won almost surely by the protagonist in the corresponding game under delayed
control, showing that it indeed makes a difference whether the protagonist has
to grant lookahead or both players suffer from partial informedness. These
results get even more pronounced when we finally address the quantitative goal
of winning with a probability in $[0,1]$. We show that for any rational
threshold $\theta \in [0,1]$ there is a game that can be won by the protagonist
with exactly probability $\theta$ under delayed control, while being surely won
by alter in the delay game setting. All these findings refine our original
result that games under delayed control are not determined.
|
http://arxiv.org/abs/2305.19985v4
|
Recently, M. Ludewig and G. C. Thiang introduced a notion of a uniformly
localized Wannier basis with localization centers in an arbitrary uniformly
discrete subset $D$ in a complete Riemannian manifold $X$. They show that,
under certain geometric conditions on $X$, the class of the orthogonal
projection onto the span of such a Wannier basis in the $K$-theory of the Roe
algebra $C^*(X)$ is trivial. In this short note, we clarify the geometric
conditions on $X$, which guarantee triviality of the $K$-theory class of any
Wannier projection. We show that this property is equivalent to triviality of
the unit of the uniform Roe algebra of $D$ in the $K$-theory of its Roe
algebra, and provide a geometric criterion for that. As a consequence, we prove
triviality of the $K$-theory class of any Wannier projection on a connected
proper measure space $X$ of bounded geometry with a uniformly discrete set of
localization centers, coarsely equivalent to $X$.
|
http://arxiv.org/abs/2304.00125v1
|
A millisecond pulsar (MSP) is an old neutron star (NS) that has accreted
material from its companion star, causing it to spin up, which is known as the
recycling scenario. During the mass transfer phase, the system manifests itself
as an X-ray binary. PSR J1402+13 is an MSP with a spin period of $5.89~{\rm
ms}$ and a spin period derivative of $\log\dot{P}_{\rm spin}=-16.32$. These
properties make it a notable object within the pulsar population, as MSPs
typically exhibit low spin period derivatives. In this paper, we aim to explain
how an MSP can possess a high spin period derivative through binary evolution. By
utilizing the stellar evolution code \textsc{MESA}, we examine the effects of
irradiation on the companion star and the propeller effect on the NS during
binary evolution. We demonstrate that irradiation can modify the spin period
and mass of an MSP, resulting in a higher spin period derivative. These results
suggest that the irradiation effect may serve as a key factor in explaining
MSPs with high spin period derivatives.
|
http://arxiv.org/abs/2309.16963v1
|
Conventional end-to-end Automatic Speech Recognition (ASR) models primarily
focus on exact transcription tasks, lacking flexibility for nuanced user
interactions. With the advent of Large Language Models (LLMs) in speech
processing, more organic, text-prompt-based interactions have become possible.
However, the mechanisms behind these models' speech understanding and
"reasoning" capabilities remain underexplored. To study this question from the
data perspective, we introduce instruction-following speech recognition,
training a Listen-Attend-Spell model to understand and execute a diverse set of
free-form text instructions. This enables a multitude of speech recognition
tasks -- ranging from transcript manipulation to summarization -- without
relying on predefined command sets. Remarkably, our model, trained from scratch
on Librispeech, interprets and executes simple instructions without requiring
LLMs or pre-trained speech modules. It also offers selective transcription
options based on instructions like "transcribe first half and then turn off
listening," providing an additional layer of privacy and safety compared to
existing LLMs. Our findings highlight the significant potential of
instruction-following training to advance speech foundation models.
|
http://arxiv.org/abs/2309.09843v1
|
We establish a close analogy between the thermodynamics of the nonlinear
systems far from equilibrium and the dissipative solitons. Unlike the solitons
in the Hamiltonian systems, their dissipative counterpart looks like an
aggregation of bounded quasi-particles interacting on the short range, obeying
the Rayleigh-Jeans distribution, and possessing a temperature, entropy, and
other thermodynamic characteristics. This ensemble is confined by a collective
potential, which defines its negative chemical potential. Such a dissipative
soliton represents a strongly chirped pulse generated by a mode-locked laser
with the advantage of being energy scalable by the analogy with the
Bose-Einstein condensation from an incoherent ``basin.'' We demonstrate the
main limits of the dissipative soliton energy scaling which result from the
loss of internal soliton coherency and the thermalization due to nontriviality
of a ``free energy landscape.''
|
http://arxiv.org/abs/2307.16571v2
|
Speech synthesis systems powered by neural networks hold promise for
multimedia production, but frequently face issues with producing expressive
speech and seamless editing. In response, we present the Cross-Utterance
Conditioned Variational Autoencoder speech synthesis (CUC-VAE S2) framework to
enhance prosody and ensure natural speech generation. This framework leverages
the powerful representational capabilities of pre-trained language models and
the re-expression abilities of variational autoencoders (VAEs). The core
component of the CUC-VAE S2 framework is the cross-utterance CVAE, which
extracts acoustic, speaker, and textual features from surrounding sentences to
generate context-sensitive prosodic features, more accurately emulating human
prosody generation. We further propose two practical algorithms tailored for
distinct speech synthesis applications: CUC-VAE TTS for text-to-speech and
CUC-VAE SE for speech editing. The CUC-VAE TTS is a direct application of the
framework, designed to generate audio with contextual prosody derived from
surrounding texts. On the other hand, the CUC-VAE SE algorithm leverages real
mel spectrogram sampling conditioned on contextual information, producing audio
that closely mirrors real sound and thereby facilitating flexible speech
editing based on text such as deletion, insertion, and replacement.
Experimental results on the LibriTTS datasets demonstrate that our proposed
models significantly enhance speech synthesis and editing, producing more
natural and expressive speech.
|
http://arxiv.org/abs/2309.04156v2
|
Large language models (LLMs) have shown great promise for capturing
contextual information in natural language processing tasks. We propose a novel
approach to speaker diarization that incorporates the prowess of LLMs to
exploit contextual cues in human dialogues. Our method builds upon an
acoustic-based speaker diarization system by adding lexical information from an
LLM in the inference stage. We model the multi-modal decoding process
probabilistically and perform joint acoustic and lexical beam search to
incorporate cues from both modalities: audio and text. Our experiments
demonstrate that infusing lexical knowledge from the LLM into an acoustics-only
diarization system improves overall speaker-attributed word error rate
(SA-WER). The experimental results show that LLMs can provide complementary
information to acoustic models for the speaker diarization task via the
proposed beam search decoding approach, showing up to a 39.8% relative delta-SA-WER
improvement from the baseline system. Thus, we substantiate that the proposed
technique is able to exploit contextual information that is inaccessible to
acoustics-only systems which is represented by speaker embeddings. In addition,
these findings point to the potential of using LLMs to improve speaker
diarization and other speech processing tasks by capturing semantic and
contextual cues.
|
http://arxiv.org/abs/2309.05248v3
|
The rise of unmanned aerial vehicle (UAV) operations, as well as the
vulnerability of the UAVs' sensors, has led to the need for proper monitoring
systems for detecting any abnormal behavior of the UAV. This work addresses
this problem by proposing an innovative multi-task learning framework (MLF-ST)
for UAV state identification and trajectory prediction, that aims to optimize
the performance of both tasks simultaneously. A deep neural network with shared
layers to extract features from the input data is employed, utilizing drone
sensor measurements and historical trajectory information. Moreover, a novel
loss function is proposed that combines the two objectives, encouraging the
network to jointly learn the features that are most useful for both tasks. The
proposed MLF-ST framework is evaluated on a large dataset of UAV flights,
illustrating that it is able to outperform various state-of-the-art baseline
techniques in terms of both state identification and trajectory prediction. The
evaluation of the proposed framework, using real-world data, demonstrates that
it can enable applications such as UAV-based surveillance and monitoring, while
also improving the safety and efficiency of UAV operations.
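A schematic PyTorch sketch of the shared-encoder multi-task setup with a combined loss, as described above. Layer sizes, the loss weighting, and the output dimensions (number of UAV states, trajectory horizon) are illustrative assumptions, not the MLF-ST architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskUAVNet(nn.Module):
    """Shared layers feed two heads: state identification (classification)
    and trajectory prediction (regression over future 2D positions)."""
    def __init__(self, in_dim=32, n_states=5, horizon=10):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                    nn.Linear(128, 128), nn.ReLU())
        self.state_head = nn.Linear(128, n_states)
        self.traj_head = nn.Linear(128, horizon * 2)

    def forward(self, x):
        h = self.shared(x)
        return self.state_head(h), self.traj_head(h)

def combined_loss(state_logits, traj_pred, state_labels, traj_target, w1=1.0, w2=1.0):
    # Joint objective encouraging features useful for both tasks.
    return (w1 * F.cross_entropy(state_logits, state_labels)
            + w2 * F.mse_loss(traj_pred, traj_target))

net = MultiTaskUAVNet()
x = torch.randn(16, 32)            # sensor measurements + trajectory-history features
logits, traj = net(x)
loss = combined_loss(logits, traj, torch.randint(0, 5, (16,)), torch.randn(16, 20))
```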
|
http://arxiv.org/abs/2309.06741v1
|
The attention towards food products characteristics, such as nutritional
properties and traceability, has risen substantially in recent years.
Consequently, we are witnessing an increased demand for the development of
modern tools to monitor, analyse and assess food quality and authenticity.
Within this framework, an essential set of data collection techniques is
provided by vibrational spectroscopy. In fact, methods such as Fourier near
infrared and mid infrared spectroscopy have been often exploited to analyze
different foodstuffs. Nonetheless, existing statistical methods often struggle
to deal with the challenges presented by spectral data, such as their high
dimensionality, paired with strong relationships among the wavelengths.
Therefore, the definition of proper statistical procedures accounting for the
peculiarities of spectroscopy data is paramount. In this work, motivated by two
dairy science applications, we propose an adaptive functional regression
framework for spectroscopy data. The method stems from the trend filtering
literature, allowing the definition of a highly flexible and adaptive estimator
able to handle different degrees of smoothness. We provide a fast optimization
procedure that is suitable for both Gaussian and non-Gaussian scalar responses,
and allows for the inclusion of scalar covariates. Moreover, we develop
inferential procedures for both the functional and the scalar component thus
enhancing not only the interpretability of the results, but also their
usability in real world scenarios. The method is applied to two sets of MIR
spectroscopy data, providing excellent results when predicting milk chemical
composition and cows' dietary treatments. Moreover, the developed inferential
routine provides relevant insights, potentially paving the way for a richer
interpretation and a better understanding of the impact of specific wavelengths
on milk features.
|
http://arxiv.org/abs/2309.06999v1
|
The Kruskal-Szekeres coordinates construction for the Schwarzschild spacetime
could be viewed geometrically as a squeezing of the $t$-line associated with
the asymptotic observer into a single point, at the event horizon $r=2M$.
Starting from this point, we extend the Kruskal charting to spacetimes with two
horizons, in particular the Reissner-Nordstr\"om manifold, $\mathcal{M}_{RN}$.
We develop a new method for constructing Kruskal-like coordinates and find two
algebraically distinct classes charting $\mathcal{M}_{RN}$. We pedagogically
illustrate our method by constructing two compact, conformal, and global
coordinate systems labeled $\mathcal{GK_{I}}$ and $\mathcal{GK_{II}}$ for each
class respectively. In both coordinates, the metric differentiability can be
promoted to $C^\infty$. The conformal metric factor can be explicitly written
in terms of the original $t$ and $r$ coordinates for both charts.
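For orientation, the textbook Kruskal-Szekeres chart for the Schwarzschild exterior ($r>2M$, units $G=c=1$) that the squeezing picture above refers to reads
$$U = \left(\frac{r}{2M}-1\right)^{1/2} e^{r/4M}\cosh\frac{t}{4M}, \qquad V = \left(\frac{r}{2M}-1\right)^{1/2} e^{r/4M}\sinh\frac{t}{4M},$$
with metric
$$ds^{2} = \frac{32M^{3}}{r}\,e^{-r/2M}\left(-dV^{2}+dU^{2}\right) + r^{2}\,d\Omega^{2},$$
so that at $r=2M$ the entire $t$-line collapses to the single point $U=V=0$. The two Reissner-Nordström charts $\mathcal{GK_{I}}$ and $\mathcal{GK_{II}}$ constructed in the paper are not reproduced here.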
|
http://arxiv.org/abs/2309.10123v2
|
Sentiment analysis using big data from YouTube video metadata can be
conducted to analyze public opinions on various political figures who represent
political parties. This is possible because YouTube has become one of the
platforms for people to express themselves, including their opinions on various
political figures. The resulting sentiment analysis can be useful for political
executives to gain an understanding of public sentiment and develop appropriate
and effective political strategies. This study aimed to build a sentiment
analysis system leveraging YouTube videos metadata. The sentiment analysis
system was built using Apache Kafka, Apache PySpark, and Hadoop for big data
handling; TensorFlow for deep learning handling; and FastAPI for deployment on
the server. The YouTube video metadata used in this study is the video
description. The sentiment analysis model was built using LSTM algorithm and
produces two types of sentiments: positive and negative sentiments. The
sentiment analysis results are then visualized in the form of a simple web-based
dashboard.
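
A minimal sketch of the kind of LSTM classifier described above, in TensorFlow/Keras only; the Kafka/PySpark ingestion, Hadoop storage, and FastAPI deployment layers are omitted, and the vocabulary size, layer widths, and training call are illustrative assumptions rather than the paper's settings.

import tensorflow as tf

vocab_size = 20_000   # assumed vocabulary size for tokenized video descriptions
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of positive sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(token_ids, labels, ...) where token_ids holds integer-encoded descriptions
# and labels are 0 (negative) / 1 (positive).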
|
http://arxiv.org/abs/2309.16234v1
|
Series or orthogonal basis regression is one of the most popular
non-parametric regression techniques in practice, obtained by regressing the
response on features generated by evaluating the basis functions at observed
covariate values. The most routinely used series estimator is based on ordinary
least squares fitting, which is known to be minimax rate optimal in various
settings, albeit under stringent restrictions on the basis functions and the
distribution of covariates. In this work, inspired by the recently developed
Forster-Warmuth (FW) learner, we propose an alternative series regression
estimator that can attain the minimax estimation rate under strictly weaker
conditions imposed on the basis functions and the joint law of covariates, than
existing series estimators in the literature. Moreover, a key contribution of
this work generalizes the FW-learner to a so-called counterfactual regression
problem, in which the response variable of interest may not be directly
observed (hence, the name ``counterfactual'') on all sampled units, and
therefore needs to be inferred in order to identify and estimate the regression
in view from the observed data. Although counterfactual regression is not
entirely a new area of inquiry, we propose the first-ever systematic study of
this challenging problem from a unified pseudo-outcome perspective. In fact, we
provide what appears to be the first generic and constructive approach for
generating the pseudo-outcome (to substitute for the unobserved response) which
leads to the estimation of the counterfactual regression curve of interest with
small bias, namely bias of second order. Several applications are used to
illustrate the resulting FW-learner including many nonparametric regression
problems in missing data and causal inference literature, for which we
establish high-level conditions for minimax rate optimality of the proposed
FW-learner.
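
For concreteness, the baseline OLS series estimator discussed above amounts to regressing the response on basis evaluations at the observed covariates; the cosine basis and truncation level K below are illustrative choices, and neither the FW-learner nor the counterfactual (pseudo-outcome) construction is reproduced here.

import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 15
x = rng.uniform(size=n)                                    # observed covariates on [0, 1]
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)  # synthetic response

def design(u, K):
    """Evaluate a cosine basis (constant plus K-1 cosines) at covariate values u."""
    cols = [np.ones_like(u)] + [np.sqrt(2) * np.cos(np.pi * k * u) for k in range(1, K)]
    return np.column_stack(cols)

beta_hat, *_ = np.linalg.lstsq(design(x, K), y, rcond=None)  # OLS series fit
m_hat = lambda u: design(u, K) @ beta_hat                    # fitted regression function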
|
http://arxiv.org/abs/2307.16798v4
|
This paper includes the classification, in a simple Lie algebra, of the
singularities of Slodowy slices between special nilpotent orbits that are
adjacent in the partial order on nilpotent orbits. The irreducible components
of most singularities are (up to normalization) either a simple surface
singularity or the closure of a minimal special nilpotent orbit in a smaller
rank Lie algebra. Besides those cases, there are some exceptional cases that
arise as certain quotients of the closure of a minimal orbit in types $A_2$ and
$D_n$. We also consider the action on the slice of the fundamental group of the
smaller orbit. With this action, we observe that under Lusztig-Spaltenstein
duality, in most cases, a simple surface singularity is interchanged with the
closure of a minimal special orbit of Langlands dual type (or a cover of it
with action). This empirical observation generalizes an observation of Kraft
and Procesi in type $A_n$, where all nilpotent orbits are special. We also
resolve a conjecture of Lusztig that concerns the intersection cohomology of
slices between special nilpotent orbits.
|
http://arxiv.org/abs/2310.00521v1
|
Quantum software engineering (QSE) is receiving increasing attention, as
evidenced by a growing number of publications on topics such as quantum software
modeling, testing, and debugging. However, in the literature, quantum software
requirements engineering (QSRE) remains a relatively under-investigated area of
software engineering. To this end, in this paper, we provide an initial
set of thoughts about how requirements engineering for quantum software might
differ from that for classical software after making an effort to map classical
requirements classifications (e.g., functional and extra-functional
requirements) into the context of quantum software. Moreover, we provide
discussions on various aspects of QSRE that deserve attention from the quantum
software engineering community.
|
http://arxiv.org/abs/2309.13358v1
|
We present optimization of [(15 Å) Ni$_{80}$Fe$_{20}$/(5 Å) M]$_{20}$ single
crystal multilayers on (001) MgO, with M
being Cu, Cu$_{50}$Pt$_{50}$ and Pt. These superlattices were characterized by
high-resolution X-ray reflectivity (XRR) and diffraction (XRD) as well as polar
mapping of important crystal planes. It is shown that a cube-on-cube epitaxial
relationship can be obtained when depositing at a substrate temperature of
100 $^\circ$C regardless of the lattice mismatch (5% and 14% for Cu and Pt,
respectively). At lower substrate temperatures poly-crystalline multilayers
were obtained while at higher substrate temperatures {111} planes appear at
$\sim$10$^\circ$ off normal to the film plane. It is also shown that as the
epitaxial strain increases, the easy magnetization axis rotates towards the
direction that previously was assumed to be harder, i.e. from [110] to [100],
and eventually further increase in the strain makes the magnetic hysteresis
loops isotropic in the film plane. Higher epitaxial strain is also accompanied
by increased coercivity values. Thus, the effect of epitaxial strain on the
magnetocrystalline anisotropy is much larger than what was observed previously
in similar, but polycrystalline samples with uniaxial anisotropy (Kateb et al.
2021).
|
http://arxiv.org/abs/2302.14745v1
|
Despite their success, Machine Learning (ML) models do not generalize
effectively to data not originating from the training distribution. To reliably
employ ML models in real-world healthcare systems and avoid inaccurate
predictions on out-of-distribution (OOD) data, it is crucial to detect OOD
samples. Numerous OOD detection approaches have been suggested in other fields
- especially in computer vision - but it remains unclear whether the challenge
is resolved when dealing with medical tabular data. To address this pressing
need, we propose an extensive reproducible benchmark to compare different
methods across a suite of tests including both near and far OODs. Our benchmark
leverages the latest versions of eICU and MIMIC-IV, two public datasets
encompassing tens of thousands of ICU patients in several hospitals. We
consider a wide array of density-based methods and SOTA post-hoc detectors
across diverse predictive architectures, including MLP, ResNet, and
Transformer. Our findings show that i) the problem appears to be solved for
far-OODs, but remains open for near-OODs; ii) post-hoc methods alone perform
poorly, but improve substantially when coupled with distance-based mechanisms;
iii) the transformer architecture is far less overconfident compared to MLP and
ResNet.
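
To make the notion of a "post-hoc detector" concrete, the sketch below shows two scores commonly used in such benchmarks: maximum softmax probability and a simplified, class-agnostic Mahalanobis distance in feature space. The arrays are placeholders for the logits and penultimate-layer features of a trained MLP/ResNet/Transformer; this is not the paper's evaluation code.

import numpy as np

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution."""
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def mahalanobis_score(features, train_features):
    """Negative Mahalanobis distance to the training feature mean (higher = more in-distribution)."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False) + 1e-6 * np.eye(train_features.shape[1])
    diff = features - mu
    return -np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)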
|
http://arxiv.org/abs/2309.16220v1
|
Distributed learning on edge devices has attracted increased attention with
the advent of federated learning (FL). Notably, edge devices often have limited
battery and heterogeneous energy availability, while multiple rounds are
required in FL for convergence, intensifying the need for energy efficiency.
Energy depletion may hinder the training process and the efficient utilization
of the trained model. To solve these problems, this letter considers the
integration of energy harvesting (EH) devices into an FL network with
multi-channel ALOHA, while proposing a method to ensure both low energy outage
probability and successful execution of future tasks. Numerical results
demonstrate the effectiveness of this method, particularly in critical setups
where the average energy income fails to cover the iteration cost. The method
outperforms a norm-based solution in terms of convergence time and battery
level.
|
http://arxiv.org/abs/2309.06033v1
|
Argument mining analyzes argument structure and extracts important
argumentative information from unstructured text. An argument mining system can
help people automatically uncover the causal and logical information behind a
text. As argumentative corpora grow, with more and more people arguing and
debating on social media, mining arguments from them is becoming increasingly
critical. However, argument mining remains a challenging natural language task,
and the relevant techniques are not yet mature. For example, research on
non-tree argument mining is still scarce; most works focus only on extracting
tree-structured argument information. Moreover, current methods cannot
accurately describe and capture argument relations and do not predict their
types. In this paper, we propose a novel neural model called
AutoAM to solve these problems. We first introduce an argument component
attention mechanism into our model, which captures the relevant information
between argument components so that the model can better perform argument mining.
Our model is a universal end-to-end framework that can analyze argument
structure without constraints such as tree structure and completes the three subtasks
of argument mining in one model. Experimental results show that our model
outperforms existing works on several metrics on two public datasets.
|
http://arxiv.org/abs/2309.09300v1
|
Exhibiting an explicit Boolean function with a large high-order nonlinearity
is an important problem in cryptography, coding theory, and computational
complexity. We prove lower bounds on the second-order, third-order, and
higher-order nonlinearities of some trace monomial Boolean functions.
We prove lower bounds on the second-order nonlinearities of functions
$\mathrm{tr}_n(x^7)$ and $\mathrm{tr}_n(x^{2^r+3})$ where $n=2r$. Among all
trace monomials, our bounds match the best second-order nonlinearity lower
bounds by \cite{Car08} and \cite{YT20} for odd and even $n$ respectively. We
prove a lower bound on the third-order nonlinearity for functions
$\mathrm{tr}_n(x^{15})$, which is the best third-order nonlinearity lower
bound. For any $r$, we prove that the $r$-th order nonlinearity of
$\mathrm{tr}_n(x^{2^{r+1}-1})$ is at least
$2^{n-1}-2^{(1-2^{-r})n+\frac{r}{2^{r-1}}-1}- O(2^{\frac{n}{2}})$. For $r \ll
\log_2 n$, this is the best lower bound among all explicit functions.
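
For readers outside the area, the two objects used above have standard definitions (not specific to this paper): the absolute trace over $\mathbb{F}_{2^n}$ and the $r$-th order nonlinearity,

$$ \mathrm{tr}_n(x) = x + x^{2} + x^{2^2} + \cdots + x^{2^{n-1}} \in \mathbb{F}_2, \qquad x \in \mathbb{F}_{2^n}, $$
$$ \mathrm{nl}_r(f) = \min_{\deg g \le r} d_H(f, g), $$

where the minimum runs over all Boolean functions $g$ on $\mathbb{F}_2^n$ of algebraic degree at most $r$ and $d_H$ denotes Hamming distance; $r=1$ recovers the usual nonlinearity.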
|
http://arxiv.org/abs/2309.11229v1
|
Auxiliary data sources have become increasingly important in epidemiological
surveillance, as they are often available at a finer spatial and temporal
resolution, larger coverage, and lower latency than traditional surveillance
signals. We describe the problem of spatial and temporal heterogeneity in these
signals derived from these data sources, where spatial and/or temporal biases
are present. We present a method to use a ``guiding'' signal to correct for
these biases and produce a more reliable signal that can be used for modeling
and forecasting. The method assumes that the heterogeneity can be approximated
by a low-rank matrix and that the temporal heterogeneity is smooth over time.
We also present a hyperparameter selection algorithm to choose the parameters
representing the matrix rank and degree of temporal smoothness of the
corrections. In the absence of ground truth, we use maps and plots to argue
that this method does indeed reduce heterogeneity. Reducing heterogeneity from
auxiliary data sources greatly increases their utility in modeling and
forecasting epidemics.
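
A minimal numerical sketch of the low-rank-plus-smooth idea described above: estimate the heterogeneity as a rank-truncated SVD of the discrepancy between the auxiliary signal and the guiding signal, smooth its temporal factors, and subtract it. The rank and smoothing window are illustrative stand-ins for the hyperparameters chosen by the paper's selection algorithm, which is not reproduced here.

import numpy as np

def correct_heterogeneity(aux, guide, rank=2, window=7):
    """aux, guide: (locations x time) matrices of the auxiliary and guiding signals on a common scale."""
    resid = aux - guide                                   # spatio-temporal discrepancy to explain
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    kernel = np.ones(window) / window
    Vt_smooth = np.stack([np.convolve(v, kernel, mode="same") for v in Vt[:rank]])  # temporal smoothness
    bias_hat = (U[:, :rank] * s[:rank]) @ Vt_smooth       # low-rank, temporally smooth bias estimate
    return aux - bias_hat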
|
http://arxiv.org/abs/2309.16546v1
|
The ever-increasing computational and storage requirements of modern
applications and the slowdown of technology scaling pose major challenges to
designing and implementing efficient computer architectures. In this paper, we
leverage the architectural balance principle to alleviate the bandwidth
bottleneck at the L1 data memory boundary of a tightly-coupled cluster of
processing elements (PEs). We thus explore coupling each PE with an L0 memory,
namely a private register file implemented as Standard Cell Memory (SCM).
Architecturally, the SCM is the Vector Register File (VRF) of Spatz, a compact
64-bit floating-point-capable vector processor based on RISC-V's Vector
Extension Zve64d. Unlike typical vector processors, whose VRFs are hundreds of
KiB in size, we prove that Spatz can achieve peak energy efficiency with a VRF of
only 2 KiB. An implementation of the Spatz-based cluster in GlobalFoundries'
12LPP process with eight double-precision Floating Point Units (FPUs) achieves
an FPU utilization just 3.4% lower than the ideal upper bound on a
double-precision, floating-point matrix multiplication. The cluster reaches 7.7
FMA/cycle, corresponding to 15.7 GFLOPS-DP and 95.7 GFLOPS-DP/W at 1 GHz and
nominal operating conditions (TT, 0.80 V, 25 $^\circ$C) with more than 55% of the power
spent on the FPUs. Furthermore, the optimally-balanced Spatz-based cluster
reaches a 95.0% FPU utilization (7.6 FMA/cycle), 15.2 GFLOPS-DP, and 99.3
GFLOPS-DP/W (61% of the power spent in the FPU) on a 2D workload with a 7x7
kernel, resulting in an outstanding area/energy efficiency of 171
GFLOPS-DP/W/mm^2. At equi-area, our computing cluster built upon compact vector
processors reaches a 30% higher energy efficiency than a cluster with the same
FPU count built upon scalar cores specialized for stream-based floating-point
computation.
|
http://arxiv.org/abs/2309.10137v1
|
Many infrastructure managers have the goal to increase the capacity of their
railway infrastructure due to an increasing demand. While methods for
performance calculations of railway line infrastructure are already well
established, the determination of railway junction capacity remains a
challenge. This work utilizes the concept of queueing theory to develop a
method for the capacity calculation of railway junctions, solely depending on
their infrastructure layout along with arrival and service rates. The
implementation of the introduced approach is based on advanced model-checking
techniques. It can be used to decide which infrastructure layout to build, i.e.
whether an overpass for the analysed railway junction is needed. The developed
method hence addresses the need for fast and reliable timetable independent
junction evaluation in the long-term railway capacity calculation landscape.
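
As background on the queueing quantities involved, the sketch below evaluates the textbook M/M/1 formulas from an arrival rate and a service rate; it only illustrates the kind of input-output relation used, and does not capture the junction layout or the model-checking machinery of the method above.

def mm1_metrics(arrival_rate, service_rate):
    """Arrival and service rates in trains per hour; requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate      # utilization of the conflict point
    lq = rho ** 2 / (1 - rho)              # mean number of waiting trains
    wq = lq / arrival_rate                 # mean waiting time in hours (Little's law)
    return rho, lq, wq

# Example: 20 trains/h requesting a junction element that can serve 30 trains/h.
print(mm1_metrics(20, 30))                 # roughly (0.667, 1.333, 0.067)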
|
http://arxiv.org/abs/2309.14351v1
|
We develop a semi-analytical description for the
Berezinskii-Kosterlitz-Thouless (BKT) like phase transition in nonequilibrium
Bose-Einstein condensates. Our theoretical analysis is based on a noisy
generalized Gross-Pitaevskii equation. Above a critical strength of the noise,
spontaneous vortex-antivortex pairs are generated. We provide a semi-analytical
determination of the transition point based on a linearized Bogoliubov
analysis, to which some nonlinear corrections are added. We present two
different approaches that are in agreement with our numerical calculations in a
wide range of system parameters. We find that for small losses and not too
small energy relaxation, the critical point approaches that of the equilibrium
BKT transition. Furthermore, we find that losses tend to stabilize the ordered
phase: keeping the other parameters constant and increasing the losses leads to
a higher critical noise strength for the spontaneous generation of
vortex-antivortex pairs. Our theoretical analysis is relevant for experiments
on microcavity polaritons.
|
http://arxiv.org/abs/2309.11201v1
|
Collaborative robotics is a new and challenging field in the realm of motion
control and human-robot interaction. The safety measures needed for a reliable
interaction between the robot and its environment hinder the use of classical
control methods, pushing researchers to try new techniques such as machine
learning (ML). In this context, reinforcement learning has been adopted as the
primary way to create intelligent controllers for collaborative robots; however,
supervised learning shows great promise for developing data-driven,
model-based ML controllers in a faster and safer way. In this work we study
several aspects of the methodology needed to create a dataset to be used to
learn the dynamics of a robot. To this end, we tune several PD controllers to
several trajectories, using a multi-objective genetic algorithm (GA) which
takes into account not only their accuracy, but also their safety. We
demonstrate the need to tune the controllers individually to each trajectory
and empirically explore the best population size for the GA and how the speed
of the trajectory affects the tuning and the dynamics of the robot.
|
http://arxiv.org/abs/2309.08988v1
|
The increasing capacities of large language models (LLMs) present an
unprecedented opportunity to scale up data analytics in the humanities and
social sciences, augmenting and automating qualitative analytic tasks
previously typically allocated to human labor. This contribution proposes a
systematic mixed methods framework to harness qualitative analytic expertise,
machine scalability, and rigorous quantification, with attention to
transparency and replicability. Sixteen machine-assisted case studies are showcased
as proof of concept. Tasks include linguistic and discourse analysis, lexical
semantic change detection, interview analysis, historical event cause inference
and text mining, detection of political stance, text and idea reuse, genre
composition in literature and film; social network inference, automated
lexicography, missing metadata augmentation, and multimodal visual cultural
analytics. In contrast to the focus on English in the emerging LLM
applicability literature, many examples here deal with scenarios involving
smaller languages and historical texts prone to digitization distortions. In
all but the most difficult tasks requiring expert knowledge, generative LLMs
can demonstrably serve as viable research instruments. LLM (and human)
annotations may contain errors and variation, but the agreement rate can and
should be accounted for in subsequent statistical modeling; a bootstrapping
approach is discussed. The replications among the case studies illustrate how
tasks previously requiring potentially months of team effort and complex
computational pipelines can now be accomplished by an LLM-assisted scholar in
a fraction of the time. Importantly, this approach is not intended to replace,
but to augment researcher knowledge and skills. With these opportunities in
sight, qualitative expertise and the ability to pose insightful questions have
arguably never been more critical.
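
One simple way to realize the bootstrapping idea mentioned above is to resample annotated items and inject label noise at the observed LLM-human disagreement rate, propagating annotation uncertainty into the downstream estimate. The data below are synthetic placeholders, and the paper's exact procedure may differ.

import numpy as np

rng = np.random.default_rng(0)
n = 400
llm_labels = rng.integers(0, 2, size=n)                                   # LLM annotations (0/1)
human_labels = np.where(rng.random(n) < 0.9, llm_labels, 1 - llm_labels)  # gold subsample, ~90% agreement
agreement = (llm_labels == human_labels).mean()

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)                 # resample items with replacement
    labels = llm_labels[idx].copy()
    flip = rng.random(n) < (1 - agreement)           # inject annotation noise at the observed rate
    labels[flip] = 1 - labels[flip]
    boot.append(labels.mean())                       # downstream quantity: share of class 1

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"agreement={agreement:.2f}, 95% interval for class share: [{lo:.2f}, {hi:.2f}]")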
|
http://arxiv.org/abs/2309.14379v1
|
Exploring hit positions of recorded events can help to understand and
suppress backgrounds in rare event searching experiments. In this study, we
virtually segment a small contact P-type high purity germanium detector (HPGe)
into two layers. Single-site events (SSEs) in each layer are selected by an
algorithm based on two pulse shape parameters: the charge pulse drift time
($T_{Q}$) and current pulse rise time ($T_{I}$). To determine the shapes and
volumes of the two layers, a Th-228 source is placed at top and side positions
to irradiate the detector. The double escape peak events from 2614.5 keV
$\gamma$-ray are selected as typical SSEs, and their numbers in the two layers are
used to calculate the volumes and shapes of those layers. Considering the
statistical and systematic uncertainties, the inner layer volume is evaluated
to be 47.2\%$\pm$0.26(stat.)\%$\pm$0.22(sys.)\% of the total sensitive volume.
We extend our analysis to SSEs in the 1400-2100 keV range; the spectra of inner layer
events acquired from experimental data using the selection algorithm are in
good agreement with those from the simulation. For sources outside the HPGe
detector, the outer layer can act as a shielding for the inner layer. Selecting
the inner layer as the analysis volume can reduce the external background in the
signal region of Ge-76 neutrinoless double beta (0$\nu\beta\beta$) decay. We
use the Th-228 source to evaluate the background suppression power of the
virtual segmentation. After performing the single and multi-site event
discrimination, the event rate in the 0$\nu\beta\beta$ signal region can be
further suppressed by 12\% by selecting the inner layer as the analysis volume.
The virtual segmentation could be used to efficiently suppress surface
background like electrons from Ar-42/K-42 decay in 0$\nu\beta\beta$ experiments
using germanium detectors immersed in liquid argon.
|
http://arxiv.org/abs/2309.03605v1
|
Logic locking and hardware Trojans are two fields in hardware security that
have been mostly developed independently from each other. In this paper, we
identify the relationship between these two fields. We find that a common
structure that exists in many logic locking techniques has desirable properties
of hardware Trojans (HWT). We then construct a novel type of HWT, called
Trojans based on Logic Locking (TroLL), in a way that can evade
state-of-the-art ATPG-based HWT detection techniques. In an effort to detect
TroLL, we propose customization of existing state-of-the-art ATPG-based HWT
detection approaches as well as adapting the SAT-based attacks on logic locking
to HWT detection. In our experiments, we use random sampling as reference. It
is shown that the customized ATPG-based approaches are the best performing but
only offer limited improvement over random sampling. Moreover, their efficacy
also diminishes as TroLL's triggers become longer (i.e., have more bits
specified). We thereby highlight the need to find a scalable HWT detection
approach for TroLL.
|
http://arxiv.org/abs/2309.15067v1
|
Advancements in deep neural networks have allowed automatic speech
recognition (ASR) systems to attain human parity on several publicly available
clean speech datasets. However, even state-of-the-art ASR systems experience
performance degradation when confronted with adverse conditions, as a
well-trained acoustic model is sensitive to variations in the speech domain,
e.g., background noise. Intuitively, humans address this issue by relying on
their linguistic knowledge: the meaning of ambiguous spoken terms is usually
inferred from contextual cues thereby reducing the dependency on the auditory
system. Inspired by this observation, we introduce the first open-source
benchmark to utilize external large language models (LLMs) for ASR error
correction, where N-best decoding hypotheses provide informative elements for
true transcription prediction. This approach is a paradigm shift from the
traditional language model rescoring strategy that can only select one
candidate hypothesis as the output transcription. The proposed benchmark
contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs
of N-best hypotheses and corresponding accurate transcriptions across prevalent
speech domains. Given this dataset, we examine three types of error correction
techniques based on LLMs with varying amounts of labeled
hypotheses-transcription pairs, which yield significant word error rate (WER)
reductions. Experimental evidence demonstrates that the proposed technique achieves a
breakthrough by surpassing the upper bound of traditional re-ranking-based
methods. More surprisingly, an LLM with a reasonable prompt and its generative
capability can even correct tokens that are missing from the N-best list. We
make our results publicly accessible for reproducible pipelines with released
pre-trained models, thus providing a new evaluation paradigm for ASR error
correction with LLMs.
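
A minimal sketch of the prompting side of LLM-based error correction from an N-best list: the hypotheses are packed into a single instruction and a language model is asked for the corrected transcription. The generate() call is a placeholder for whatever (possibly fine-tuned) LLM interface is used; it is not an API from the paper.

def build_prompt(nbest):
    """Format the N-best ASR hypotheses of one utterance into a correction prompt."""
    hyps = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    return ("Below are the N-best hypotheses produced by a speech recognizer for one utterance.\n"
            "Infer and report the most likely true transcription, correcting recognition errors.\n"
            f"{hyps}\nTranscription:")

nbest = ["i scream for ice cream", "eye scream for ice cream", "i scream four ice cream"]
prompt = build_prompt(nbest)
# corrected = generate(prompt)   # placeholder LLM call; fine-tuning on HP-style pairs is one option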
|
http://arxiv.org/abs/2309.15701v2
|
In this paper, we have realized the left-right symmetric model with modular
symmetry. We have used the $\Gamma(3)$ modular group, which is isomorphic to the
non-abelian discrete symmetry group $A_4$. The advantage of using modular
symmetry is that it does not require the use of extra particles called
'flavons'. In this model, the Yukawa couplings are expressed in terms of
modular forms $(Y_1,Y_2,Y_3)$. In this work, we have studied the minimal Left-Right
Symmetric Model for both type-I and type-II dominances. Here, we have
calculated the values of the Yukawa couplings and then plotted them against the
sum of the neutrino masses. The results obtained are well within the
experimental limits for the desired values of the sum of neutrino masses. We have
also briefly analyzed the implications of modular symmetry for
neutrinoless double beta decay with the new physics contributions within the
Left-Right Symmetric Model.
|
http://arxiv.org/abs/2301.13552v1
|
We introduce Secure Haplotype Imputation Employing Local Differential privacy
(SHIELD), a program for accurately estimating the genotype of target samples at
markers that are not directly assayed by array-based genotyping platforms while
preserving the privacy of donors to public reference panels. At the core of
SHIELD is the Li-Stephens model of genetic recombination, according to which
genomic information is composed of mosaics of ancestral haplotype fragments
that coalesce via a Markov random field. We use the standard forward-backward
algorithm for inferring the ancestral haplotypes of target genomes, and hence
the most likely genotype at unobserved sites, using a reference panel of
template haplotypes whose privacy is guaranteed by the randomized response
technique from differential privacy.
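
The randomized response mechanism invoked above can be sketched in a few lines: each binary allele in a donor haplotype is reported truthfully with probability $e^{\varepsilon}/(1+e^{\varepsilon})$ and flipped otherwise, which satisfies $\varepsilon$-local differential privacy for that entry. This is the generic mechanism only; SHIELD's actual perturbation and imputation pipeline is not reproduced here.

import numpy as np

def randomized_response(haplotype, eps, rng=None):
    """Privatize a 0/1 haplotype vector with epsilon-local differential privacy."""
    rng = rng or np.random.default_rng()
    p_truth = np.exp(eps) / (1.0 + np.exp(eps))        # probability of reporting the true allele
    keep = rng.random(haplotype.shape) < p_truth
    return np.where(keep, haplotype, 1 - haplotype)

noisy = randomized_response(np.array([0, 1, 1, 0, 1]), eps=3.0)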
|
http://arxiv.org/abs/2309.07305v1
|
Screw and Lie group theory allows for user-friendly modeling of multibody
systems (MBS) while at the same time giving rise to computationally efficient
recursive algorithms. The inherent frame invariance of such formulations allows
for the use of arbitrary reference frames within the kinematics modeling (rather
than obeying modeling conventions such as the Denavit-Hartenberg convention)
and avoids the introduction of joint frames. The computational efficiency is owed
to a representation of twists, accelerations, and wrenches that minimizes the
computational effort. This can be directly carried over to dynamics
formulations. In this paper, recursive $O(n)$ Newton-Euler
algorithms are derived for the four most frequently used representations of
twists, and their specific features are discussed. These formulations are
related to the corresponding algorithms that were presented in the literature.
The MBS motion equations are derived in closed form using the Lie group
formulation. One set is the so-called 'Euler-Jourdain' or 'projection' equations,
of which Kane's equations are a special case, and the other is the Lagrange
equations. The recursive kinematics formulations are readily extended to higher
orders in order to compute derivatives of the motion equations. To this end,
recursive formulations for the acceleration and jerk are derived. It is briefly
discussed how this can be employed for derivation of the linearized motion
equations and their time derivatives. The geometric modeling allows for direct
application of Lie group integration methods, which is briefly discussed.
|
http://arxiv.org/abs/2306.17793v1
|
This paper studies F-transforms based on overlap and grouping
maps, and residual and co-residual implicators over a complete lattice, from both
constructive and axiomatic approaches. Further, the duality, basic properties,
and the inverse of proposed F-transforms have been studied, and axiomatic
characterizations of proposed direct F-transforms are investigated.
|
http://arxiv.org/abs/2301.12894v1
|
Low-rank tensor completion (LRTC) aims to recover a complete low-rank tensor
from an incompletely observed tensor, attracting extensive attention in various
practical applications such as image processing and computer vision. However,
current methods often perform well only when there is a sufficient amount of observed
information, and they perform poorly or may fail when the observed information
is less than 5\%. In order to improve the utilization of observed information,
a new method called the tensor joint rank with logarithmic composite norm
(TJLC) method is proposed. This method simultaneously exploits two types of
tensor low-rank structures, namely tensor Tucker rank and tubal rank, thereby
enhancing the inherent correlations between known and missing elements. To
address the challenge of directly applying two significantly different tensor ranks
to LRTC, a new tensor logarithmic composite norm is further proposed.
Subsequently, the TJLC model and algorithm for the LRTC problem are proposed.
Additionally, theoretical convergence guarantees for the TJLC method are
provided. Experiments on various real datasets demonstrate that the proposed
method outperforms state-of-the-art methods significantly. Particularly, the
proposed method achieves satisfactory recovery even when the observed
information is as low as 1\%, and the recovery performance improves
significantly as the observed information increases.
|
http://arxiv.org/abs/2309.16208v2
|
In this work, the LvN quantization of the type IIB superstring is carried out
in a time-dependent plane wave background with a constant self-dual
Ramond-Ramond 5-form and a linear dilaton in the light-like direction. Such an
endeavour allows us to define an invariant density matrix and study important
issues in real-time string thermodynamics. In particular, the Hagedorn
temperature is calculated as a function of the thermalization time.
|
http://arxiv.org/abs/2309.11567v1
|
Numerous applications that employ a machine learning unit to process visual
input have been developed to assist visually impaired individuals.
However, a critical challenge with these applications is the sub-optimal
quality of images captured by the users. Given the complexity of operating a
camera for visually impaired individuals, we advocate for the use of Apple Live
Photos and Android Motion Photos technologies. In this study, we introduce a
straightforward methodology to evaluate and contrast the efficacy of
Live/Motion Photos against traditional image-based approaches. Our findings
reveal that both Live Photos and Motion Photos outperform single-frame images
in common visual assisting tasks, specifically in object classification and
VideoQA. We validate our results through extensive experiments on the ORBIT
dataset, which consists of videos collected by visually impaired individuals.
Furthermore, we conduct a series of ablation studies to delve deeper into the
impact of deblurring and longer temporal crops.
|
http://arxiv.org/abs/2309.08022v1
|
This letter investigates a cache-enabled multiuser mobile edge computing
(MEC) system with dynamic task arrivals, taking into account the impact of
proactive cache placement on the system's overall energy consumption. We
consider that an access point (AP) schedules a wireless device (WD) to offload
computational tasks while executing the tasks of a finite library in the
\emph{task caching} phase, such that the nearby WDs with the same task request
arriving later can directly download the task results in the \emph{task arrival
and execution} phase. We aim to minimize the system's weighted-sum energy
over a finite-time horizon, by jointly optimizing the task caching decision and
the MEC execution of the AP, and local computing as well as task offloading of
the WDs at each time slot, subject to caching capacity, task causality, and
completion deadline constraints. The formulated design problem is a
mixed-integer nonlinear program. Under the assumption of fully predictable task
arrivals, we first propose a branch-and-bound (BnB) based method to obtain the
optimal offline solution. Next, we propose two low-complexity schemes based on
convex relaxation and task-popularity, respectively. Finally, numerical results
show the benefit of the proposed schemes over existing benchmark schemes.
|
http://arxiv.org/abs/2301.13546v1
|
Most interpretability research in NLP focuses on understanding the behavior
and features of a fully trained model. However, certain insights into model
behavior may only be accessible by observing the trajectory of the training
process. We present a case study of syntax acquisition in masked language
models (MLMs) that demonstrates how analyzing the evolution of interpretable
artifacts throughout training deepens our understanding of emergent behavior.
In particular, we study Syntactic Attention Structure (SAS), a naturally
emerging property of MLMs wherein specific Transformer heads tend to focus on
specific syntactic relations. We identify a brief window in pretraining when
models abruptly acquire SAS, concurrent with a steep drop in loss. This
breakthrough precipitates the subsequent acquisition of linguistic
capabilities. We then examine the causal role of SAS by manipulating SAS during
training, and demonstrate that SAS is necessary for the development of
grammatical capabilities. We further find that SAS competes with other
beneficial traits during training, and that briefly suppressing SAS improves
model quality. These findings offer an interpretation of a real-world example
of both simplicity bias and breakthrough training dynamics.
|
http://arxiv.org/abs/2309.07311v5
|
We discuss a string-net construction on 2-framed surfaces, taking as
algebraic input a finite, rigid tensor category, which is neither assumed to be
pivotal nor semi-simple. It is shown that circle categories of our framed
string-net construction essentially compute Drinfeld centers twisted by powers
of the double dual functor.
|
http://arxiv.org/abs/2302.14779v3
|
Weyl points (WPs) are robust spectral degeneracies, which cannot be split by
small perturbations, as they are protected by their non-zero topological
charge. For larger perturbations, WPs can disappear via pairwise annihilation,
where two oppositely charged WPs merge, and the resulting neutral degeneracy
disappears. The neutral degeneracy is unstable, meaning that it requires the
fine-tuning of the perturbation. Fine-tuning of more than one parameter can
lead to more exotic WP mergers. In this work, we reveal and analyze a
fundamental connection of the WP mergers and singularity theory: phase boundary
points of Weyl phase diagrams, i.e., control parameter values where Weyl point
mergers happen, can be classified according to singularity classes of maps
between manifolds of equal dimension. We demonstrate this connection on a
Weyl--Josephson circuit, where the merger of 4 WPs draws a swallowtail
singularity, and in a random BdG Hamiltonian, which reveals a rich pattern of
fold lines and cusp points. Our results predict universal geometrical features
of Weyl phase diagrams, and generalize naturally to creation and annihilation
of Weyl points in electronic (phononic, magnonic, photonic, etc) band-structure
models, where Weyl phase transitions can be triggered by control parameters
such as mechanical strain.
|
http://arxiv.org/abs/2309.05506v1
|
This article describes a multi-modal method using simulated Lidar data via
ray tracing and image pixel loss with differentiable rendering to optimize an
object's position with respect to an observer or some referential objects in a
computer graphics scene. Object position optimization is completed using
gradient descent with the loss function being influenced by both modalities.
Typical object placement optimization is done using image pixel loss with
differentiable rendering only, this work shows the use of a second modality
(Lidar) leads to faster convergence. This method of fusing sensor input
presents a potential usefulness for autonomous vehicles, as these methods can
be used to establish the locations of multiple actors in a scene. This article
also presents a method for the simulation of multiple types of data to be used
in the training of autonomous vehicles.
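
A minimal PyTorch sketch of the two-modality objective described above: a weighted sum of an image loss from a differentiable renderer and a range loss from differentiable Lidar ray casting, minimized over the object position by gradient descent. Here render() and cast_rays() are trivially differentiable stand-ins rather than an actual renderer or ray tracer, and the weights and learning rate are illustrative.

import torch

target_image = torch.rand(64, 64)            # stand-in for the observer's reference image
target_ranges = torch.rand(32)               # stand-in for the reference Lidar ranges
true_offset = torch.tensor([0.3, -0.2, 0.1]) # pose at which both losses vanish (toy setup)

def render(pos):                             # placeholder differentiable renderer
    return target_image + (pos - true_offset).sum() * 0.01

def cast_rays(pos):                          # placeholder differentiable Lidar simulation
    return target_ranges + (pos - true_offset).norm() * 0.01

pos = torch.zeros(3, requires_grad=True)     # object position being optimized
opt = torch.optim.Adam([pos], lr=1e-2)
w_img, w_lidar = 1.0, 1.0

for _ in range(500):
    opt.zero_grad()
    loss = (w_img * ((render(pos) - target_image) ** 2).mean()
            + w_lidar * ((cast_rays(pos) - target_ranges) ** 2).mean())
    loss.backward()
    opt.step()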
|
http://arxiv.org/abs/2309.03177v1
|
Harmful Algal and Cyanobacterial Blooms (HABs), occurring in inland and
maritime waters, pose threats to natural environments by producing toxins that
affect human and animal health. In the past, HABs have been assessed mainly by
the manual collection and subsequent analysis of water samples and occasionally
by automatic instruments that acquire information from fixed locations. These
procedures do not provide data with the desirable spatial and temporal
resolution to anticipate the formation of HABs. Hence, new tools and
technologies are needed to efficiently detect, characterize and respond to HABs
that threaten water quality. This is essential nowadays, when the world's water
supply is under tremendous pressure because of climate change,
overexploitation, and pollution. This paper introduces DEVS-BLOOM, a novel
framework for real-time monitoring and management of HABs. Its purpose is to
support high-performance hazard detection with Model Based Systems Engineering
(MBSE) and Cyber-Physical Systems (CPS) infrastructure for dynamic
environments.
|
http://arxiv.org/abs/2309.04618v1
|
We provide the results of pattern recognition experiments on mathematical
expressions.
We give a few examples of conjectured results, none of which was thoroughly
checked for novelty. We did not attempt to prove all the relations found and
instead focused on their generation.
|
http://arxiv.org/abs/2301.01624v1
|
The collection of reflecting hyperplanes of a finite Coxeter group is called
a reflection arrangement and it appears in many subareas of combinatorics and
representation theory. We focus on the problem of counting regions of
reflection arrangements and their deformations. Inspired by the recent work of
Bernardi, we show that the notion of moves and sketches can be used to provide
a uniform and explicit bijection between regions of (the Catalan deformation
of) a reflection arrangement and certain non-nesting partitions. We then use
the exponential formula to describe a statistic on these partitions whose
distribution is given by the coefficients of the characteristic polynomial.
Finally, we consider a sub-arrangement of the type C arrangement called the
threshold arrangement and its Catalan and Shi deformations.
|
http://arxiv.org/abs/2308.16653v1
|
Food image classification is a fundamental step of image-based dietary
assessment, enabling automated nutrient analysis from food images. Many current
methods employ deep neural networks to train on generic food image datasets
that do not reflect the dynamism of real-life food consumption patterns, in
which food images appear sequentially over time, reflecting the progression of
what an individual consumes. Personalized food classification aims to address
this problem by training a deep neural network using food images that reflect
the consumption pattern of each individual. However, this problem is
under-explored and there is a lack of benchmark datasets with individualized
food consumption patterns due to the difficulty in data collection. In this
work, we first introduce two benchmark personalized datasets including the
Food101-Personal, which is created based on surveys of daily dietary patterns
from participants in the real world, and the VFNPersonal, which is developed
based on a dietary study. In addition, we propose a new framework for
personalized food image classification by leveraging self-supervised learning
and temporal image feature information. Our method is evaluated on both
benchmark datasets and shows improved performance compared to existing works.
The dataset has been made available at:
https://skynet.ecn.purdue.edu/~pan161/dataset_personal.html
|
http://arxiv.org/abs/2309.08744v1
|
Sonification as a complement to visualization has been under research for
decades as a new way of deploying data. The ICAD conferences gather
specialists from different disciplines to discuss sonification. Tools such as
sonoUno, starSound, and Web Sandbox are attempts to provide a tool to open
astronomical data sets and sonify them in conjunction with visualization. In this
contribution, the sonoUno web version is presented; this version allows users to
explore data sets without any installation. The data can be uploaded or a
pre-loaded file can be opened, and the sonification and the visual characteristics
of the plot can be customized in the same window. The plot, sound, and marks can
be saved. The web interface was tested with the most commonly used screen readers
in order to confirm good performance.
|
http://arxiv.org/abs/2302.00081v1
|
We investigate the internal behavior of Transformer-based Large Language
Models (LLMs) when they generate factually incorrect text. We propose modeling
factual queries as constraint satisfaction problems and use this framework to
investigate how the LLM interacts internally with factual constraints. We find
a strong positive relationship between the LLM's attention to constraint tokens
and the factual accuracy of generations. We curate a suite of 10 datasets
containing over 40,000 prompts to study the task of predicting factual errors
with the Llama-2 family across all scales (7B, 13B, 70B). We propose SAT Probe,
a method probing attention patterns that can predict factual errors and
fine-grained constraint satisfaction, and allows early error identification. The
approach and findings take another step towards using the mechanistic
understanding of LLMs to enhance their reliability.
|
http://arxiv.org/abs/2309.15098v2
|
We consider arbitrary bounded discrete time series originating from a dynamical
system. Without any use of the Fourier transform, we find periodic points which
suitably characterize (i.e., independently of the Lyapunov exponent) the
corresponding time series. In particular, a bounded discrete time series
generated by the autoregressive model (without the white noise) is equivalent
to a quasi-periodic function.
|
http://arxiv.org/abs/2310.00290v6
|
Black hole (BH) X-ray binaries cycle through different spectral states of
accretion over the course of months to years. Although fluctuations in the BH
mass accretion rate are generally recognized as the most important component of
state transitions, it is becoming increasingly evident that magnetic fields
play a similarly important role. In this article, we present the first
radiative two-temperature (2T) general relativistic magnetohydrodynamics
(GRMHD) simulations in which an accretion disk transitions from a quiescent
state at an accretion rate of $\dot{M} \sim 10^{-10} \dot{M}_{\rm Edd}$ to a
hard-intermediate state at an accretion rate of $\dot{M} \sim 10^{-2}
\dot{M}_{\rm Edd}$. This huge parameter space in mass accretion rate is bridged
by artificially rescaling the gas density scale of the simulations. We present
two jetted BH models with varying degrees of magnetic flux saturation. We
demonstrate that in `Standard and Normal Evolution' models, which are
unsaturated with magnetic flux, the hot torus collapses into a thin and cold
accretion disk when $\dot{M} \gtrsim 5\times 10^{-3} \dot{M}_{\rm Edd}$. On the
other hand, in `Magnetically Arrested Disk' models, which are fully saturated
with vertical magnetic flux, the plasma remains mostly hot with substructures
that condense into cold clumps of gas when $\dot{M} \gtrsim 1 \times 10^{-2}
\dot{M}_{\rm Edd}$. This suggests that the spectral signatures observed during
state transitions are closely tied to the level of magnetic flux saturation.
|
http://arxiv.org/abs/2309.15926v2
|
We argue that the higher weak isospin $SU(3)_L$ manifestly unifies dark
matter and normal matter in its isomultiplets for which dark matter carries a
conserved dark charge while normal matter does not. The resultant gauge
symmetry is given by $SU(3)_C\otimes SU(3)_L \otimes U(1)_X\otimes U(1)_G$,
where the first factor is the color group, while the rest defines a theory of
scotoelectroweak in which $X$ and $G$ determine electric charge
$Q=T_3-1/\sqrt{3}T_8+X$ and dark charge $D=-2/\sqrt{3}T_8+G$. This setup
provides both appropriate scotogenic neutrino masses and dark matter stability
as preserved by a residual dark parity $P_D=(-1)^D$. Interpretation of the dark
charge is further discussed, given that $SU(3)_L$ is broken at very high energy
scale.
|
http://arxiv.org/abs/2309.12091v2
|
We introduce stability conditions (in the sense of King) for representable
modules of continuous quivers of type A along with a special criteria called
the four point condition. The stability conditions are defined using a
generalization of delta functions, called half-delta functions. We show that
for a continuous quiver of type A with finitely many sinks and sources, the
stability conditions satisfying the four point condition are in bijection with
measured laminations of the hyperbolic plane. Along the way, we extend an
earlier result by the first author and Todorov regarding continuous cluster
categories for linear continuous quivers of type A and laminations of the
hyperbolic plane to all continuous quivers of type A with finitely many sinks
and sources. We also give a formula for the continuous cluster character.
|
http://arxiv.org/abs/2302.14792v1
|
We answer a question of Pakhomov by showing that there is a consistent, c.e.
theory $T$ such that no theory which is definitionally equivalent to $T$ has a
computable model. A key tool in our proof is the model-theoretic notion of
mutual algebraicity.
|
http://arxiv.org/abs/2309.11598v1
|
Oxide heterostructures exhibit a vast variety of unique physical properties.
Examples are unconventional superconductivity in layered nickelates and
topological polar order in (PbTiO$_3$)$_n$/(SrTiO$_3$)$_n$ superlattices.
Although it is clear that variations in oxygen content are crucial for the
electronic correlation phenomena in oxides, it remains a major challenge to
quantify their impact. Here, we measure the chemical composition in
multiferroic (LuFeO$_3$)$_9$/(LuFe$_2$O$_4$)$_1$ superlattices, revealing a
one-to-one correlation between the distribution of oxygen vacancies and the
electric and magnetic properties. Using atom probe tomography, we observe
oxygen vacancies arranging in a layered three-dimensional structure with a
local density on the order of 10$^{14}$ cm$^{-2}$, congruent with the
formula-unit-thick ferrimagnetic LuFe$_2$O$_4$ layers. The vacancy order is
promoted by the locally reduced formation energy and plays a key role in
stabilizing the ferroelectric domains and ferrimagnetism in the LuFeO$_3$ and
LuFe$_2$O$_4$ layers, respectively. The results demonstrate the importance of
oxygen vacancies for the room-temperature multiferroicity in this system and
establish an approach for quantifying the oxygen defects with atomic-scale
precision in 3D, giving new opportunities for deterministic defect-enabled
property control in oxide heterostructures.
|
http://arxiv.org/abs/2307.00139v1
|
The remarkable growth and significant success of machine learning have
expanded its applications into programming languages and program analysis.
However, a key challenge in adopting the latest machine learning methods is the
representation of programming languages, which directly impacts the ability of
machine learning methods to reason about programs. The absence of numerical
awareness and aggregate data structure information, and the improper way of presenting
variables in previous representation works, have limited their performance. To
overcome the limitations and challenges of current program representations, we
propose a graph-based program representation called PERFOGRAPH. PERFOGRAPH can
capture numerical information and the aggregate data structure by introducing
new nodes and edges. Furthermore, we propose an adapted embedding method to
incorporate numerical awareness. These enhancements make PERFOGRAPH a highly
flexible and scalable representation that effectively captures programs'
intricate dependencies and semantics. Consequently, it serves as a powerful
tool for various applications such as program analysis, performance
optimization, and parallelism discovery. Our experimental results demonstrate
that PERFOGRAPH outperforms existing representations and sets new
state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and
10% (NVIDIA dataset) in the well-known Device Mapping challenge. It also sets
new state-of-the-art results in various performance optimization tasks like
Parallelism Discovery and NUMA and Prefetchers Configuration prediction.
|
http://arxiv.org/abs/2306.00210v2
|
Leo T is the lowest mass galaxy known to contain neutral gas and to show
signs of recent star formation, which makes it a valuable laboratory for
studying the nature of gas and star formation at the limits of where galaxies
are found to have rejuvenating episodes of star formation. Here we discuss a
novel study of Leo T that uses data from the MUSE integral field spectrograph
and photometric data from HST. The high sensitivity of MUSE allowed us to
increase the number of Leo T stars observed spectroscopically from 19 to 75. We
studied the age and metallicity of these stars and identified two populations,
all consistent with a similar metallicity of [Fe/H] $\sim$ -1.5 dex, suggesting
that a large fraction of metals were ejected. Within the young population, we
discovered three emission line Be stars, supporting the conclusion that rapidly
rotating massive stars are common in metal-poor environments. We find
differences in the dynamics of young and old stars, with the young population
having a velocity dispersion consistent with the kinematics of the cold
component of the neutral gas. This finding directly links the recent star
formation in Leo T with the cold component of the neutral gas.
|
http://arxiv.org/abs/2309.03188v1
|
In the constrained planarity setting, we ask whether a graph admits a planar
drawing that additionally satisfies a given set of constraints. These
constraints are often derived from very natural problems; prominent examples
are Level Planarity, where vertices have to lie on given horizontal lines
indicating a hierarchy, and Clustered Planarity, where we additionally draw the
boundaries of clusters which recursively group the vertices in a crossing-free
manner. Despite receiving a significant amount of attention and substantial
theoretical progress on these problems, only very few of the found solutions
have been put into practice and evaluated experimentally.
In this paper, we describe our implementation of the recent quadratic-time
algorithm by Bl\"asius et al. [TALG Vol 19, No 4] for solving the problem
Synchronized Planarity, which can be seen as a common generalization of several
constrained planarity problems, including the aforementioned ones. Our
experimental evaluation on an existing benchmark set shows that even our
baseline implementation outperforms all competitors by at least an order of
magnitude. We systematically investigate the degrees of freedom in the
implementation of the Synchronized Planarity algorithm for larger instances and
propose several modifications that further improve the performance. Altogether,
this allows us to solve instances with up to 100 vertices in milliseconds and
instances with up to 100 000 vertices within a few minutes.
|
http://arxiv.org/abs/2310.20632v1
|
Deep learning (DL) reconstruction, particularly of MRI, has led to improvements
in image fidelity and reduction of acquisition time. In neuroimaging, DL
methods can reconstruct high-quality images from undersampled data. However, it
is essential to consider fairness in DL algorithms, particularly in terms of
demographic characteristics. This study presents the first fairness analysis in
a DL-based brain MRI reconstruction model. The model utilises the U-Net
architecture for image reconstruction and explores the presence and sources of
unfairness by implementing baseline Empirical Risk Minimisation (ERM) and
rebalancing strategies. Model performance is evaluated using image
reconstruction metrics. Our findings reveal statistically significant
performance biases between the gender and age subgroups. Surprisingly, data
imbalance and training discrimination are not the main sources of bias. This
analysis provides insights into fairness in DL-based image reconstruction and
aims to improve equity in medical AI applications.
|
http://arxiv.org/abs/2309.14392v1
|
Automatic Pronunciation Assessment (APA) is vital for computer-assisted
language learning. Prior methods rely on annotated speech-text data to train
Automatic Speech Recognition (ASR) models or speech-score data to train
regression models. In this work, we propose a novel zero-shot APA method based
on the pre-trained acoustic model, HuBERT. Our method involves encoding the speech
input and corrupting it via a masking module. We then employ the Transformer
encoder and apply k-means clustering to obtain token sequences. Finally, a
scoring module is designed to measure the number of wrongly recovered tokens.
Experimental results on speechocean762 demonstrate that the proposed method
achieves comparable performance to supervised regression baselines and
outperforms non-regression baselines in terms of Pearson Correlation
Coefficient (PCC). Additionally, we analyze how masking strategies affect the
performance of APA.
|
http://arxiv.org/abs/2305.19563v1
|
Visual active tracking is a growing research topic in robotics due to its key
role in applications such as human assistance, disaster recovery, and
surveillance. In contrast to passive tracking, active tracking approaches
combine vision and control capabilities to detect and actively track the
target. Most of the work in this area focuses on ground robots, while the very
few contributions on aerial platforms still pose important design constraints
that limit their applicability. To overcome these limitations, in this paper we
propose D-VAT, a novel end-to-end visual active tracking methodology based on
deep reinforcement learning that is tailored to micro aerial vehicle platforms.
The D-VAT agent computes the vehicle thrust and angular velocity commands
needed to track the target by directly processing monocular camera
measurements. We show that the proposed approach allows for precise and
collision-free tracking operations, outperforming different state-of-the-art
baselines on simulated environments which differ significantly from those
encountered during training. Moreover, we demonstrate a smooth real-world
transition to a quadrotor platform with mixed-reality.
|
http://arxiv.org/abs/2308.16874v2
|
Automatic text-to-3D generation that combines Score Distillation Sampling
(SDS) with the optimization of volume rendering has achieved remarkable
progress in synthesizing realistic 3D objects. Yet most existing text-to-3D
methods by SDS and volume rendering suffer from inaccurate geometry, e.g., the
Janus issue, since it is hard to explicitly integrate 3D priors into implicit
3D representations. Besides, it is usually time-consuming for them to generate
elaborate 3D models with rich colors. In response, this paper proposes GSGEN, a
novel method that brings Gaussian Splatting, a recent state-of-the-art
representation, to text-to-3D generation. GSGEN aims at generating high-quality
3D objects and addressing existing shortcomings by exploiting the explicit
nature of Gaussian Splatting, which enables the incorporation of a 3D prior.
Specifically, our method adopts a progressive optimization strategy, which
includes a geometry optimization stage and an appearance refinement stage. In
geometry optimization, a coarse representation is established under 3D point
cloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a
sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians
undergo an iterative appearance refinement to enrich texture details. In this
stage, we increase the number of Gaussians by compactness-based densification
to enhance continuity and improve fidelity. With these designs, our approach
can generate 3D assets with delicate details and accurate geometry. Extensive
evaluations demonstrate the effectiveness of our method, especially for
capturing high-frequency components. Our code is available at
https://github.com/gsgen3d/gsgen
|
http://arxiv.org/abs/2309.16585v4
|
Despite the remarkable capabilities of Large Language Models (LLMs) like
GPT-4, producing complex, structured tabular data remains challenging. Our
study assesses LLMs' proficiency in structuring tables and introduces a novel
fine-tuning method, cognizant of data structures, to bolster their performance.
We unveil Struc-Bench, a comprehensive benchmark featuring prominent LLMs
(GPT-NeoX-20B, GPT-3.5, GPT-4, and Vicuna), which spans text tables, HTML, and
LaTeX formats. Our proposed FormatCoT aids in crafting format-specific
instructions from the intended outputs to populate this benchmark. Addressing
the gap in task-centered evaluation, we propose two innovative metrics, P-Score
(Prompting Score) and H-Score (Heuristical Score), to more accurately gauge LLM
performance. Our experiments show that applying our structure-aware fine-tuning
to LLaMA-7B leads to substantial performance gains, outshining its LLM
counterparts across most measures. In-depth error analysis and creating an
ability map across six dimensions -- coverage, formatting, reasoning,
comprehension, pragmatics, and hallucination -- highlight areas for future
enhancements and suggest forthcoming research trajectories. Our code and models
can be found at https://github.com/gersteinlab/Struc-Bench.
|
http://arxiv.org/abs/2309.08963v3
|
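To make the idea of a heuristic, structure-aware score concrete, here is a toy table-similarity function in the spirit of the H-Score described above; the exact metric definitions used by Struc-Bench are not reproduced here.

```python
# Illustrative heuristic: reward matching table shape and overlapping cells.
def table_similarity(pred_rows, ref_rows):
    """Compare two tables given as lists of cell lists."""
    shape_match = 1.0 if (len(pred_rows) == len(ref_rows) and
                          all(len(p) == len(r) for p, r in zip(pred_rows, ref_rows))) else 0.0
    pred_cells = {c for row in pred_rows for c in row}
    ref_cells = {c for row in ref_rows for c in row}
    cell_overlap = len(pred_cells & ref_cells) / max(len(ref_cells), 1)
    return 0.5 * shape_match + 0.5 * cell_overlap

print(table_similarity([["a", "1"], ["b", "2"]],
                       [["a", "1"], ["b", "3"]]))  # 0.875
```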
Fact-checking in the financial domain is underexplored, and there is a shortage
of quality datasets in this domain. In this paper, we propose Fin-Fact, a
benchmark dataset for multimodal fact-checking within the financial domain.
Notably, it includes professional fact-checker annotations and justifications,
providing expertise and credibility. With its multimodal nature encompassing
both textual and visual content, Fin-Fact provides complementary information
sources to enhance factuality analysis. Its primary objective is combating
misinformation in finance, fostering transparency, and building trust in
financial reporting and news dissemination. By offering insightful
explanations, Fin-Fact empowers users, including domain experts and end-users,
to understand the reasoning behind fact-checking decisions, validate claim
credibility, and foster trust in the fact-checking process. The Fin-Fact
dataset, along with our experimental code, is available at
https://github.com/IIT-DM/Fin-Fact/.
|
http://arxiv.org/abs/2309.08793v2
|
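A minimal sketch of consuming a Fin-Fact-style record follows; the field names (claim, label, justification) are assumptions for illustration and may not match the files released in the repository above.

```python
# Hypothetical record layout for a multimodal fact-checking claim.
import json

record = json.loads(
    '{"claim": "Company X tripled revenue in Q2.",'
    ' "label": "false",'
    ' "justification": "Filings show revenue grew 12%, not 200%."}'
)
print(record["label"], "-", record["justification"])
```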
To make knowledge about autism spectrum disorder easily accessible and to
support its early screening and diagnosis, we create AsdKB, a Chinese knowledge base on
autism spectrum disorder. The knowledge base is built on top of various
sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical
descriptions on mental and behavioural disorders, 2) the diagnostic knowledge
from DSM-5 and different screening tools recommended by social organizations
and medical institutes, and 3) the expert knowledge on professional physicians
and hospitals from the Web. AsdKB contains both ontological and factual
knowledge, and is accessible as Linked Data at https://w3id.org/asdkb/. The
potential applications of AsdKB are question answering, auxiliary diagnosis,
and expert recommendation, and we illustrate them with a prototype which can be
accessed at http://asdkb.org.cn/.
|
http://arxiv.org/abs/2307.16773v2
|
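Since AsdKB is published as Linked Data, a client could in principle dereference the namespace with standard RDF content negotiation, as in the sketch below; whether the server returns Turtle is an assumption, not something stated in the abstract.

```python
# Dereference the AsdKB Linked Data namespace, asking for Turtle if available.
import requests

response = requests.get("https://w3id.org/asdkb/",
                        headers={"Accept": "text/turtle"},
                        timeout=10)
print(response.status_code, response.headers.get("Content-Type"))
```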
Agent-based modeling (ABM) and simulation have emerged as important tools for
studying emergent behaviors, especially in the context of swarming algorithms
for robotic systems. Despite significant research in this area, there is a lack
of standardized simulation environments, which hinders the development and
deployment of real-world robotic swarms. To address this issue, we present
Zespol, a modular, Python-based simulation environment that enables the
development and testing of multi-agent control algorithms. Zespol provides a
flexible and extensible sandbox for initial research, with the potential for
scaling to real-world applications. We provide a topological overview of the
system and detailed descriptions of its plug-and-play elements. We demonstrate
the fidelity of Zespol in simulated and real-world robotics by replicating
existing work, highlighting the simulation-to-real gap with the milling
behavior. We plan to leverage Zespol's plug-and-play feature for neuromorphic
computing in swarming scenarios, which involves using the modules in Zespol to
simulate the behavior of neurons and their connections as synapses. This will
enable optimizing and studying the emergent behavior of swarm systems in
complex environments. Our goal is to gain a better understanding of the
interplay between environmental factors and neural-like computations in
swarming systems.
|
http://arxiv.org/abs/2306.17744v1
|
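To illustrate the kind of plug-and-play controller module Zespol's sandbox is meant to host, here is a hypothetical milling-style controller and stepping loop; this is not the actual Zespol API.

```python
# Toy plug-and-play module: agents orbit the swarm centroid (a milling pattern).
import math

class MillingController:
    def step(self, positions):
        cx = sum(x for x, _ in positions) / len(positions)
        cy = sum(y for _, y in positions) / len(positions)
        new_positions = []
        for x, y in positions:
            angle = math.atan2(y - cy, x - cx) + 0.1  # rotate around centroid
            radius = math.hypot(x - cx, y - cy)
            new_positions.append((cx + radius * math.cos(angle),
                                  cy + radius * math.sin(angle)))
        return new_positions

controller = MillingController()
agents = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
for _ in range(10):
    agents = controller.step(agents)
print(agents[0])
```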
We study the gravitational bremsstrahlung owing to collisions mediated by a
$1/r$ potential. We combine classical and first order Born approximation
results in order to construct an approximate gravitational `Gaunt factor' for
the total emitted energy. We also obtain the cross-section with an angular
momentum cut-off, and hence the cross-section for emission via close hyperbolic
encounters in a gravitating cluster. These effects are the dominant source of
very high frequency gravitational noise in the solar system. The total
gravitational wave power of the Sun is $76\pm 20\,$MW.
|
http://arxiv.org/abs/2309.06972v2
|
Content metadata plays a very important role in movie recommender systems as
it provides valuable information about various aspects of a movie such as
genre, cast, plot synopsis, box office summary, etc. Analyzing the metadata can
help understand the user preferences to generate personalized recommendations
and item cold starting. In this talk, we will focus on one particular type of
metadata - \textit{genre} labels. Genre labels associated with a movie or a TV
series help categorize a collection of titles into different themes and
correspondingly set audience expectations. We present some of the
challenges associated with using genre label information and propose a new way
of examining the genre information that we call the \textit{Genre Spectrum}.
The Genre Spectrum helps capture the various nuanced genres in a title and our
offline and online experiments corroborate the effectiveness of the approach.
Furthermore, we also discuss applications of LLMs in augmenting content
metadata, which could eventually be used to achieve effective organization of
recommendations in the user's 2-D home-grid.
|
http://arxiv.org/abs/2309.08787v1
|
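One plausible reading of the Genre Spectrum is a weighted vector over genres rather than a discrete label set; the sketch below illustrates that reading with a simple cosine comparison and is not the talk's actual method.

```python
# Compare two titles by the overlap of their genre weight vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Weights over [comedy, drama, thriller, romance]; values are made up.
title_a = [0.6, 0.3, 0.0, 0.1]
title_b = [0.5, 0.4, 0.0, 0.1]
print(round(cosine(title_a, title_b), 3))
```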