text (string) | source (string)
---|---
Crystalline materials are promising candidates for mirror substrates or highly reflective coatings to reduce thermal noise in future laser
interferometric gravitational wave detectors. However, birefringence of such
materials could degrade the sensitivity of gravitational wave detectors, not
only because it can introduce optical losses, but also because its fluctuations
create extra phase noise in the arm cavity reflected beam. In this paper, we
analytically estimate the effects of birefringence and its fluctuations in the
mirror substrate and coating for gravitational wave detectors. Our calculations
show that the requirements for the birefringence fluctuations in silicon
substrate and AlGaAs coating will be on the order of $10^{-8}$ and $10^{-10}$
rad/$\sqrt{\rm Hz}$ at 100~Hz, respectively, for future gravitational wave
detectors. We also point out that optical cavity response needs to be carefully
taken into account to estimate optical losses from depolarization.
|
http://arxiv.org/abs/2308.00150v2
|
Motivation: Read mapping is a computationally expensive process and a major
bottleneck in genomics analyses. The performance of read mapping is mainly
limited by the performance of three key computational steps: Index Querying,
Seed Chaining, and Sequence Alignment. The first step is dominated by how fast
and frequent it accesses the main memory (i.e., memory-bound), while the latter
two steps are dominated by how fast the CPU can compute their
computationally-costly dynamic programming algorithms (i.e., compute-bound).
Accelerating these three steps by exploiting new algorithms and new hardware
devices is essential to accelerate most genome analysis pipelines that widely
use read mapping. Given the large body of work on accelerating Sequence
Alignment, this work focuses on significantly improving the remaining steps.
Results: We introduce GateSeeder, the first CPU-FPGA-based near-memory
acceleration of both short and long read mapping. GateSeeder exploits
near-memory computation capability provided by modern FPGAs that couple a
reconfigurable compute fabric with high-bandwidth memory (HBM) to overcome the
memory-bound and compute-bound bottlenecks. GateSeeder also introduces a new
lightweight algorithm for finding the potential matching segment pairs. Using
real ONT, HiFi, and Illumina sequences, we experimentally demonstrate that
GateSeeder outperforms Minimap2, without performing sequence alignment, by up
to 40.3x, 4.8x, and 2.3x, respectively. When performing read mapping with
sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33x (using KSW2)
and by 1.97-13.63x (using WFA-GPU). Availability:
https://github.com/CMU-SAFARI/GateSeeder
|
http://arxiv.org/abs/2309.17063v1
|
Snoring is a common disorder that affects people's social and marital lives.
The annoyance caused by snoring can be partially solved with active noise
control systems. In this context, the present work aims at introducing an
enhanced system based on the use of a convolutional recurrent neural network
for snoring activity detection and a delayless subband approach for active
snoring cancellation. Thanks to several experiments conducted using real
snoring signals, this work shows that the active snoring cancellation system
achieves better performance when the snoring activity detection stage is turned
on, demonstrating the beneficial effect of a preliminary snoring detection
stage from the perspective of snoring cancellation.
|
http://arxiv.org/abs/2307.16809v1
|
The buckling of a soft elastic sample under growth or swelling has
attracted new interest in materials science, morphogenesis, and biology or
physiology. Indeed, the change of mass or volume is a common fact of any living
species, and on a scale larger than the cell size, a macroscopic view can help
to explain many features of common observation. Many morphologies of soft
materials result from the accumulation of elastic compressive stress due to
growth, and thus from the minimization of a nonlinear elastic energy. The
similarity between growth and compression of a piece of rubber has revived the
instability formalism of nonlinear elastic samples under compression, and in
particular Biot's instability. Here we present a modern treatment of this
instability in the light of complex analysis and demonstrate the richness of
possible profiles that an interface can present under buckling, even if one
restricts oneself to two spatial dimensions. Special attention is given to
wrinkles, folds and cusps, a surprising observation in swelling gels or clays.
The standard techniques of complex analysis, nonlinear bifurcation theory and
path-independent integrals are revisited to highlight the role of physical
parameters at the origin of the observed patterns below and above the Biot
threshold.
|
http://arxiv.org/abs/2309.11412v1
|
Designing effective automatic speech recognition (ASR) systems for
Code-Switching (CS) often depends on the availability of the transcribed CS
resources. To address data scarcity, this paper introduces Speech Collage, a
method that synthesizes CS data from monolingual corpora by splicing audio
segments. We further improve the smoothness quality of audio generation using
an overlap-add approach. We investigate the impact of generated data on speech
recognition in two scenarios: using in-domain CS text and a zero-shot approach
with synthesized CS text. Empirical results highlight up to 34.4% and 16.2%
relative reductions in Mixed-Error Rate and Word-Error Rate for in-domain and
zero-shot scenarios, respectively. Lastly, we demonstrate that CS augmentation
bolsters the model's code-switching inclination and reduces its monolingual
bias.
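A minimal sketch of the overlap-add splicing step, assuming mono numpy arrays at a shared sample rate; the signals, fade length, and function name are illustrative stand-ins rather than the authors' pipeline:

```python
import numpy as np

def splice_overlap_add(seg_a, seg_b, sr=16000, overlap_ms=20):
    """Concatenate two mono segments with a linear crossfade over the
    overlap region (a simple overlap-add splice)."""
    n = min(int(sr * overlap_ms / 1000), len(seg_a), len(seg_b))
    fade_out = np.linspace(1.0, 0.0, n)
    # Tail of seg_a fades out while the head of seg_b fades in.
    overlap = seg_a[-n:] * fade_out + seg_b[:n] * (1.0 - fade_out)
    return np.concatenate([seg_a[:-n], overlap, seg_b[n:]])

sr = 16000
a = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)   # stand-in for a monolingual segment
b = np.sin(2 * np.pi * 330 * np.arange(sr) / sr)   # stand-in for a segment in the other language
spliced = splice_overlap_add(a, b, sr)
```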
|
http://arxiv.org/abs/2309.15674v1
|
The remarkable advancements in large language models (LLMs) have brought
about significant improvements in Natural Language Processing (NLP) tasks. This
paper presents a comprehensive review of in-context learning techniques,
focusing on different types of prompts, including discrete, continuous,
few-shot, and zero-shot, and their impact on LLM performance. We explore
various approaches to prompt design, such as manual design, optimization
algorithms, and evaluation methods, to optimize LLM performance across diverse
tasks. Our review covers key research studies in prompt engineering, discussing
their methodologies and contributions to the field. We also delve into the
challenges faced in evaluating prompt performance, given the absence of a
single "best" prompt and the importance of considering multiple metrics. In
conclusion, the paper highlights the critical role of prompt design in
harnessing the full potential of LLMs and provides insights into the
combination of manual design, optimization techniques, and rigorous evaluation
for more effective and efficient use of LLMs in various NLP tasks.
|
http://arxiv.org/abs/2309.13205v1
|
Humans can easily perceive the direction of sound sources in a visual scene,
termed sound source localization. Recent studies on learning-based sound source
localization have mainly explored the problem from a localization perspective.
However, prior work and existing benchmarks do not account for a more important
aspect of the problem, cross-modal semantic understanding, which is essential
for genuine sound source localization. Cross-modal semantic understanding is
important in understanding semantically mismatched audio-visual events, e.g.,
silent objects, or off-screen sounds. To account for this, we propose a
cross-modal alignment task as a joint task with sound source localization to
better learn the interaction between audio and visual modalities. Thereby, we
achieve high localization performance with strong cross-modal semantic
understanding. Our method outperforms the state-of-the-art approaches in both
sound source localization and cross-modal retrieval. Our work suggests that
jointly tackling both tasks is necessary to conquer genuine sound source
localization.
|
http://arxiv.org/abs/2309.10724v1
|
Contrary to a widely accepted assumption about the decisive role of driver over-reaction in the breakdown of vehicular traffic, we have shown that the cause
of the breakdown is driver over-acceleration, not driver over-reaction. To
reach this goal, we have introduced a mathematical approach for the description
of driver over-acceleration in a microscopic traffic flow model. The model, in
which no driver over-reaction occurs, explains all observed empirical
nucleation features of traffic breakdown.
|
http://arxiv.org/abs/2309.09275v1
|
This note surveys Wolfgang Lusky's proof of the uniqueness of the Gurariy space
and mentions further developments.
|
http://arxiv.org/abs/2309.06146v1
|
Echocardiography has become an indispensable clinical imaging modality for
general heart health assessment. From calculating biomarkers such as ejection
fraction to the probability of a patient's heart failure, accurate segmentation
of the heart structures allows doctors to assess the heart's condition and
devise treatments with greater precision and accuracy. However, achieving
accurate and reliable left ventricle segmentation is time-consuming and
challenging for several reasons. Hence, clinicians often rely on segmenting the left ventricle (LV) in two specific echocardiogram frames to
make a diagnosis. This limited coverage in manual LV segmentation poses a
challenge for developing automatic LV segmentation with high temporal
consistency, as the resulting dataset is typically annotated sparsely. In
response to this challenge, this work introduces SimLVSeg, a novel paradigm
that enables video-based networks for consistent LV segmentation from sparsely
annotated echocardiogram videos. SimLVSeg consists of self-supervised
pre-training with temporal masking, followed by weakly supervised learning
tailored for LV segmentation from sparse annotations. We demonstrate how
SimLVSeg outperforms the state-of-the-art solutions by achieving a 93.32%
(95% CI 93.21-93.43%) Dice score on the largest 2D+time echocardiography dataset
(EchoNet-Dynamic) while being more efficient. SimLVSeg is compatible with two
types of video segmentation networks: 2D super image and 3D segmentation. To
show the effectiveness of our approach, we provide extensive ablation studies,
including pre-training settings and various deep learning backbones. We further
conduct an out-of-distribution test to showcase SimLVSeg's generalizability on
unseen distribution (CAMUS dataset). The code is publicly available at
https://github.com/fadamsyah/SimLVSeg.
|
http://arxiv.org/abs/2310.00454v3
|
Optical microcavities are often proposed as platforms for spectroscopy in the
single- and few-photon regime due to strong light-matter coupling. For
classical-light spectroscopies, an empty microcavity simply acts as an optical
filter. However, we find that in the single- or few-photon regime treating the
empty microcavity as an optical filter does not capture the full effect on the
quantum state of the transmitted photons. Focusing on the case of entangled
photon-pair spectroscopy, we consider how the propagation of one photon through
an optical microcavity changes the joint spectrum of a frequency-entangled
photon pair. Using the input-output treatment of a Dicke model, we find that
propagation through a strongly coupled microcavity above a certain coupling
threshold enhances the entanglement entropy between the signal and idler
photons. These results show that optical microcavities are not neutral
platforms for quantum-light spectroscopies and their effects must be carefully
considered when using the change in entanglement entropy as an observable.
|
http://arxiv.org/abs/2309.04751v1
|
Motion planning is the soul of robot decision making. Classical planning
algorithms like graph search and reaction-based algorithms face challenges in
cases of dense and dynamic obstacles. Deep learning algorithms generate
suboptimal one-step predictions that cause many collisions. Reinforcement
learning algorithms generate optimal or near-optimal time-sequential
predictions. However, they suffer from slow convergence, convergence to suboptimal results, and overfitting. This paper introduces a hybrid algorithm for robotic
motion planning: long short-term memory (LSTM) pooling and skip connection for
attention-based discrete soft actor critic (LSA-DSAC). First, graph network
(relational graph) and attention network (attention weight) interpret the
environmental state for the learning of the discrete soft actor critic
algorithm. A comparative analysis of these two representations shows that, for our task, the expressive power of the attention network exceeds that of the graph network. However, attention-based DSAC suffers from overfitting during training. Second, skip connections are integrated into attention-based DSAC to mitigate overfitting and improve convergence speed. Third, LSTM pooling replaces the sum operator over the attention weights and eliminates overfitting at the cost of slightly slower convergence in early-stage training. Experiments
show that LSA-DSAC outperforms the state-of-the-art in training and most
evaluations. We also deploy and test the algorithm on a physical robot in the real world.
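A hedged PyTorch sketch of the pooling change: the baseline aggregates attention-weighted embeddings with a sum, while LSTM pooling runs the weighted sequence through an LSTM and keeps its final hidden state. Module names and sizes are illustrative assumptions, not the LSA-DSAC code:

```python
import torch
import torch.nn as nn

class AttentionSumPool(nn.Module):
    """Baseline: softmax attention weights followed by a weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                       # h: (batch, n_entities, dim)
        w = torch.softmax(self.score(h), dim=1)
        return (w * h).sum(dim=1)               # (batch, dim)

class AttentionLSTMPool(nn.Module):
    """LSTM pooling: the sum operator is replaced by an LSTM over the
    attention-weighted embeddings; the last hidden state is the summary."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, h):
        w = torch.softmax(self.score(h), dim=1)
        _, (h_n, _) = self.lstm(w * h)
        return h_n[-1]                          # (batch, dim)

x = torch.randn(4, 6, 32)   # 4 states, 6 surrounding entities, 32-dim embeddings
print(AttentionSumPool(32)(x).shape, AttentionLSTMPool(32)(x).shape)
```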
|
http://arxiv.org/abs/2309.03758v1
|
Recently, the application of computer vision to anomaly detection has attracted attention in several industrial fields. An important example is oil
pipeline defect detection. Failure of one oil pipeline can interrupt the
operation of the entire transportation system or cause a far-reaching failure.
Automated defect detection could significantly decrease the inspection time
and the related costs. However, there is a gap in the related literature when
it comes to dealing with this task. Existing studies do not sufficiently cover Magnetic Flux Leakage data or the preprocessing techniques needed to overcome the limitations of the available data. This work focuses on alleviating these issues. In doing so, we exploit recent convolutional neural network architectures and propose robust approaches aimed at achieving high performance on the related
metrics. The proposed approaches and their applicability were verified using
real-world data.
|
http://arxiv.org/abs/2310.00332v1
|
Fully CMOS-compatible photonic memory devices hold great potential for the development of ultrafast artificial neural networks. Leveraging the benefits of photonics, such as high bandwidth, low latency, low-energy interconnects, and high speed, they can overcome the existing limits of electronic processing. To satisfy all these requirements, a new photonic platform is proposed that combines low-loss nitride-rich silicon as a waveguide with low-loss transparent conductive oxides as an active material that can provide high nonlinearity and bistability under both electrical and optical signals.
|
http://arxiv.org/abs/2308.00178v1
|
Coherent-state representations are a standard tool to deal with
continuous-variable systems, as they allow one to efficiently visualize quantum
states in phase space. Here, we work out an alternative basis consisting of
monomials on the basic observables, with the crucial property of behaving well
under symplectic transformations. This basis is the analogue of the irreducible
tensors widely used in the context of SU(2) symmetry. Given the density matrix
of a state, the expansion coefficients in that basis constitute the multipoles,
which describe the state in a canonically covariant form that is both concise
and explicit. We use these quantities to assess properties such as quantumness
or Gaussianity and to furnish direct connections between tomographic
measurements and quasiprobability distribution reconstructions.
|
http://arxiv.org/abs/2309.10042v2
|
In this paper, we propose a new nonlocal model for the two-phase Stefan problem,
where the nonlocal version of the one-phase Stefan problem arises naturally as
a special case. Among other things, we obtain the optimal condition for the
pointwise convergence between the local and nonlocal one-phase Stefan problems and
an equivalent characterization of this optimal condition. Moreover, we provide
some sufficient criteria for the continuous expansion of free boundaries, and
when the sufficient conditions are violated, we construct examples to
demonstrate that jumping phenomena can occur on the free boundaries. The jumping phenomenon is essentially induced by the nonlocal diffusion and thus does not appear in the classical Stefan problem.
|
http://arxiv.org/abs/2301.13369v1
|
The paper deals with the spread of two competing viruses over a network of
population nodes, accounting for pairwise interactions and higher-order
interactions (HOI) within and between the population nodes. We study the
competitive networked bivirus susceptible-infected-susceptible (SIS) model on a
hypergraph introduced in Cui et al. [1]. We show that the system has, in a
generic sense, a finite number of equilibria, and the Jacobian associated with
each equilibrium point is nonsingular; the key tool is the Parametric
Transversality Theorem of differential topology. Since the system is also
monotone, it turns out that the typical behavior of the system is convergence
to some equilibrium point. Thereafter, we exhibit a tri-stable domain with
three locally exponentially stable equilibria. For different parameter regimes,
we establish conditions for the existence of a coexistence equilibrium (both
viruses infect separate fractions of each population node).
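For orientation, a minimal numpy sketch of the pairwise competitive bivirus SIS dynamics; the higher-order (hypergraph) infection terms studied in the paper are omitted, and all rates and matrices below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
B1 = rng.uniform(0, 0.3, (n, n))   # pairwise infection rates, virus 1
B2 = rng.uniform(0, 0.3, (n, n))   # pairwise infection rates, virus 2
d1, d2 = 0.8, 0.8                  # healing rates
x = rng.uniform(0, 0.1, n)         # fraction of node i infected with virus 1
y = rng.uniform(0, 0.1, n)         # fraction of node i infected with virus 2

dt = 0.01
for _ in range(20000):             # forward-Euler integration toward an equilibrium
    s = 1.0 - x - y                # susceptible fraction in each node
    dx = -d1 * x + s * (B1 @ x)
    dy = -d2 * y + s * (B2 @ y)
    x, y = x + dt * dx, y + dt * dy

print(x.round(3), y.round(3))      # one virus typically dominates, or the two coexist
```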
|
http://arxiv.org/abs/2309.14230v1
|
The topic of synthetic graph generators (SGGs) has recently received much
attention due to the wave of the latest breakthroughs in generative modelling.
However, many state-of-the-art SGGs do not scale well with the graph size.
Indeed, in the generation process, all the possible edges for a fixed number of
nodes must often be considered, which scales in $\mathcal{O}(N^2)$, with $N$
being the number of nodes in the graph. For this reason, many state-of-the-art
SGGs are not applicable to large graphs. In this paper, we present SANGEA, a
sizeable synthetic graph generation framework which extends the applicability
of any SGG to large graphs. By first splitting the large graph into
communities, SANGEA trains one SGG per community, then links the community
graphs back together to create a synthetic large graph. Our experiments show
that the graphs generated by SANGEA have high similarity to the original graph,
in terms of both topology and node feature distribution. Additionally, these
generated graphs achieve high utility on downstream tasks such as link
prediction. Finally, we provide a privacy assessment of the generated graphs to
show that, even though they have excellent utility, they also achieve
reasonable privacy scores.
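A toy networkx sketch of the split-generate-relink workflow, with a density-matched random-graph model standing in for a real SGG; this is an assumption-laden illustration of the idea, not the SANGEA implementation:

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def generate_like(G):
    """Split G into communities, generate one block per community, then
    re-link the blocks with the observed number of inter-community edges."""
    comms = [list(c) for c in greedy_modularity_communities(G)]
    node_to_block = {v: i for i, c in enumerate(comms) for v in c}
    synth, blocks, offset = nx.Graph(), [], 0
    for c in comms:
        p = nx.density(G.subgraph(c)) if len(c) > 1 else 0.0
        synth = nx.disjoint_union(synth, nx.gnp_random_graph(len(c), p))  # "one SGG per community"
        blocks.append(range(offset, offset + len(c)))
        offset += len(c)
    n_cross = sum(1 for u, v in G.edges() if node_to_block[u] != node_to_block[v])
    for _ in range(n_cross):                     # naive re-linking step
        a, b = random.sample(range(len(blocks)), 2)
        synth.add_edge(random.choice(blocks[a]), random.choice(blocks[b]))
    return synth

print(generate_like(nx.karate_club_graph()))
```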
|
http://arxiv.org/abs/2309.15648v1
|
The ''Propose-Test-Release'' (PTR) framework is a classic recipe for
designing differentially private (DP) algorithms that are data-adaptive, i.e.
those that add less noise when the input dataset is nice. We extend PTR to a
more general setting by privately testing data-dependent privacy losses rather
than local sensitivity, hence making it applicable beyond the standard
noise-adding mechanisms, e.g. to queries with unbounded or undefined
sensitivity. We demonstrate the versatility of generalized PTR using private
linear regression as a case study. Additionally, we apply our algorithm to
solve an open problem from ''Private Aggregation of Teacher Ensembles (PATE)''
-- privately releasing the entire model with a delicate data-dependent
analysis.
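For context, a small sketch of the classic PTR recipe that the paper generalizes, applied to releasing the mode of a categorical dataset: privately test a conservative distance to instability and release the exact answer only if the noisy test passes. The distance bound and threshold below are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def ptr_mode(data, eps, delta, seed=0):
    """Propose-Test-Release style release of the most frequent value."""
    rng = np.random.default_rng(seed)
    counts = Counter(data).most_common()
    c1 = counts[0][1]
    c2 = counts[1][1] if len(counts) > 1 else 0
    dist = (c1 - c2) // 2                        # conservative distance to a dataset with a different mode
    noisy_dist = dist + rng.laplace(scale=1.0 / eps)
    if noisy_dist > np.log(1.0 / delta) / eps:   # test passed: the answer is stable, release it exactly
        return counts[0][0]
    return None                                  # test failed: refuse to answer

print(ptr_mode(["a"] * 80 + ["b"] * 20, eps=1.0, delta=1e-6))
```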
|
http://arxiv.org/abs/2301.00301v1
|
In this perspective, we introduce recent research into the structure and
function of complex investor networks supporting sustainability efforts. Using
the case of solar, wind and hydro energy technologies, this perspective
explores the complexity in low-carbon finance markets, defined as markets that
direct capital flows towards low-carbon technologies, using network approaches
to study their structure and dynamics. Investors are modeled as nodes which
form a network or higher-order network connected by edges representing projects
in which joint funding or security-related insurance was provided or other
investment-related interaction occurred. We review the literature on investor
networks generally, particularly in the case of complex networks, and address
areas where these ideas were applied in this emerging field. The complex
investor dynamics which emerge from the extant funding scenarios are not well
understood. These dynamics have the potential to result in interesting
non-linear behaviour, growth, and decline, which can be studied, explained and
controlled using the tools of network science.
|
http://arxiv.org/abs/2309.15890v1
|
In 2022, the U.S. National Institute of Standards and Technology (NIST)
conducted the latest Language Recognition Evaluation (LRE) in an ongoing series
administered by NIST since 1996 to foster research in language recognition and
to measure state-of-the-art technology. Similar to previous LREs, LRE22 focused
on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS)
data. LRE22 also introduced new evaluation features, such as an emphasis on
African languages, including low resource languages, and a test set consisting
of segments containing between 3s and 35s of speech randomly sampled and
extracted from longer recordings. A total of 21 research organizations, forming
16 teams, participated in this 3-month-long evaluation and made a total of 65
valid system submissions to be evaluated. This paper presents an overview of
LRE22 and an analysis of system performance over different evaluation
conditions. The evaluation results suggest that Oromo and Tigrinya are easier
to detect while Xhosa and Zulu are more challenging. A greater confusability is
seen for some language pairs. As speech duration increased, system performance improved significantly up to a certain duration, after which diminishing returns were observed.
|
http://arxiv.org/abs/2302.14624v1
|
Sequential recommendation problems have received increasing attention in
research during the past few years, leading to the inception of a large variety
of algorithmic approaches. In this work, we explore how large language models
(LLMs), which are nowadays introducing disruptive effects in many AI-based
applications, can be used to build or improve sequential recommendation
approaches. Specifically, we devise and evaluate three approaches to leverage
the power of LLMs in different ways. Our results from experiments on two
datasets show that initializing the state-of-the-art sequential recommendation
model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20%
compared to the vanilla BERT4Rec model. Furthermore, we find that a simple
approach that leverages LLM embeddings for producing recommendations, can
provide competitive performance by highlighting semantically related items. We
publicly share the code and data of our experiments to ensure reproducibility.
|
http://arxiv.org/abs/2309.09261v1
|
Crystalline CaF2 is drawing considerable attention due to its great potential as the gate dielectric of two-dimensional (2D) material MOSFETs. It is deemed much superior to boron nitride and traditional SiO2 because of
its larger dielectric constant, wider band gap, and lower defect density.
Nevertheless, the CaF2-based MOSFETs fabricated in experiment still present
notable reliability issues, and the underlying reason remains unclear. Here we
studied the various intrinsic defects and adsorbates in CaF2/MoS2 and
CaF2/MoSi2N4 interface systems to reveal the most active charge trapping
centers in CaF2-based 2D material MOSFETs. A detailed table comparing the importance of different defects in both n-type and p-type devices is provided. Most impressively, the oxygen molecules adsorbed at the interface or
surface, which are inevitable in experiments, are as active as the intrinsic
defects in channel materials, and they can even change the MoSi2N4 to p-type
spontaneously. These results indicate that it is necessary to develop high-vacuum packaging processes, as well as to prepare high-quality 2D materials, for better device performance.
|
http://arxiv.org/abs/2309.06152v1
|
We developed a system for whole-body human ultrasound tomography in
reflection and transmission modes. A custom 512-element ultrasound receiver array and a rotating single-element ultrasound transmitter are used to
generate 2D isotropically resolved images across the entire human
cross-section. We demonstrate this technique in regions such as the abdomen and
legs in healthy volunteers. Compared to handheld-probe-based ultrasonography,
this approach provides a substantially larger field of view, depends less on
operator training, and obtains quantitative tissue parameter profiles in
addition to reflectivity images. Whole-body ultrasound tomography could be
valuable in applications such as organ disease screening, image-guided needle
biopsy, and treatment monitoring.
|
http://arxiv.org/abs/2307.00110v1
|
Feng--Huang (2016) introduced weighted topological entropy and pressure for
factor maps between dynamical systems and established its variational
principle. Tsukamoto (2022) redefined those invariants quite differently for
the simplest case and showed via the variational principle that the two
definitions coincide. We generalize Tsukamoto's approach, redefine the weighted
topological entropy and pressure for higher dimensions, and prove the
variational principle. Our result allows for an elementary calculation of the
Hausdorff dimension of affine-invariant sets such as self-affine sponges and
certain sofic sets that reside in Euclidean space of arbitrary dimension.
|
http://arxiv.org/abs/2307.16772v1
|
This paper goes beyond Katz-Sarnak theory on the distribution of curves over
finite fields according to their number of rational points, theoretically,
experimentally and conjecturally. In particular, we give a formula for the
limits of the moments measuring the asymmetry of this distribution for
(non-hyperelliptic) curves of genus $g \geq 3$. The experiments point to a
stronger notion of convergence than the one provided by the Katz-Sarnak
framework for all curves of genus $\geq 3$. However, for elliptic curves and
for hyperelliptic curves of every genus we prove that this stronger convergence
cannot occur.
|
http://arxiv.org/abs/2303.17825v2
|
We provide, in the setting of Gauss' capillarity theory, a rigorous
derivation of the equilibrium law for the three dimensional structures known as
Plateau borders which arise in "wet" soap films and foams. A key step in our
analysis is a complete measure-theoretic overhaul of the homotopic spanning
condition introduced by Harrison and Pugh in the study of Plateau's laws for
two-dimensional area minimizing surfaces ("dry" soap films). This new point of
view allows us to obtain effective compactness theorems and energy
representation formulae for the homotopic spanning relaxation of Gauss'
capillarity theory which, in turn, lead to prove sharp regularity properties of
energy minimizers. The equilibrium law for Plateau borders in wet foams is also
addressed as a (simpler) variant of the theory for wet soap films.
|
http://arxiv.org/abs/2310.20169v1
|
In this short note we construct an embedding of the planar algebra for
$\overline{\operatorname{Rep}(U_q(sl_3))}$ at $q = e^{2\pi i \frac{1}{24}}$
into the graph planar algebra of di Francesco and Zuber's candidate graph
$\mathcal{E}_4^{12}$. Via the graph planar algebra embedding theorem we thus
construct a rank 11 module category over
$\overline{\operatorname{Rep}(U_q(sl_3))}$ whose graph for action by the vector
representation is $\mathcal{E}_4^{12}$. This fills a small gap in the
literature on the construction of $\overline{\operatorname{Rep}(U_q(sl_3))}$
module categories. As a consequence of our construction, we obtain the
principal graphs of subfactors constructed abstractly by Evans and Pugh.
|
http://arxiv.org/abs/2308.16849v2
|
Deep neural networks (DNNs) underpin many machine learning applications.
Production-quality DNN models achieve high inference accuracy by training millions of DNN parameters, which carries a significant resource footprint. This
presents a challenge for resources operating at the extreme edge of the
network, such as mobile and embedded devices that have limited computational
and memory resources. To address this, models are pruned to create lightweight,
more suitable variants for these devices. Existing pruning methods are unable
to provide similar quality models compared to their unpruned counterparts
without significant time costs and overheads or are limited to offline use
cases. Our work rapidly derives suitable model variants while maintaining the
accuracy of the original model. The model variants can be swapped quickly when
system and network conditions change to match workload demand. This paper
presents DNNShifter, an end-to-end DNN training, spatial pruning, and model
switching system that addresses the challenges mentioned above. At the heart of
DNNShifter is a novel methodology that prunes sparse models using structured
pruning. The pruned model variants generated by DNNShifter are smaller in size
and thus faster than dense and sparse model predecessors, making them suitable
for inference at the edge while retaining accuracy close to that of the original dense model. DNNShifter generates a portfolio of model variants that
can be swiftly interchanged depending on operational conditions. DNNShifter
produces pruned model variants up to 93x faster than conventional training
methods. Compared to sparse models, the pruned model variants are up to 5.14x
smaller and have a 1.67x inference latency speedup, with no compromise to
sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead
for switching models and up to 3.8x lower memory utilisation than existing
approaches.
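A generic PyTorch sketch of structured filter pruning (not the DNNShifter system itself): whole output filters of a convolution are zeroed by L2 norm and the mask is baked into the weights; physically slicing out the zeroed filters to obtain the smaller, faster dense variant is a further step:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)  # zero 50% of the output filters
prune.remove(conv, "weight")                                      # make the pruning permanent

kept = torch.nonzero(conv.weight.detach().abs().sum(dim=(1, 2, 3))).flatten()
print(f"{kept.numel()} of {conv.out_channels} filters survive")
# A pruning system would additionally drop the zeroed filters (and the matching
# input channels of the following layer) to produce a physically smaller model variant.
```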
|
http://arxiv.org/abs/2309.06973v1
|
We consider the weighted least squares spline approximation of a noisy
dataset. By interpreting the weights as a probability distribution, we maximize
the associated entropy subject to the constraint that the mean squared error is
prescribed to a desired (small) value. Acting on this error yields a robust
regression method that automatically detects and removes outliers from the data
during the fitting procedure, by assigning them a very small weight. We discuss
the use of both spline functions and spline curves. A number of numerical illustrations are included to demonstrate the potential of the maximum-entropy approach in different application fields.
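A numpy sketch of the idea, with a polynomial basis standing in for the spline basis: for fixed residuals, the entropy-maximizing weights under a mean-squared-error constraint take the form w_i proportional to exp(-lam * r_i**2), so the fit alternates weighted least squares with a bisection on lam. This is an illustrative reconstruction under those assumptions, not the authors' algorithm:

```python
import numpy as np

def maxent_robust_fit(x, y, degree=3, target_mse=1e-3, iters=30):
    A = np.vander(x, degree + 1)                  # stand-in for a spline basis
    w = np.full(len(x), 1.0 / len(x))             # weights form a probability distribution
    for _ in range(iters):
        sw = np.sqrt(w)
        c = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]   # weighted least squares
        r2 = (y - A @ c) ** 2
        lo, hi = 0.0, 1e6
        for _ in range(80):                       # bisection on lam so the weighted MSE hits target_mse
            lam = 0.5 * (lo + hi)
            w = np.exp(-lam * (r2 - r2.min()))    # max-entropy weights; the shift avoids underflow
            w /= w.sum()
            lo, hi = (lam, hi) if w @ r2 > target_mse else (lo, lam)
    return c, w

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + 0.02 * rng.normal(size=60)
y[[5, 30]] += 2.0                                 # two gross outliers
coef, weights = maxent_robust_fit(x, y)
print(weights[[5, 30]].round(6))                  # the outliers receive (near-)zero weight
```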
|
http://arxiv.org/abs/2309.08792v1
|
Let $X$ be a compact Riemann surface of genus $g \geq 2$ and let $D\subset X$
be a fixed finite subset. We consider the moduli spaces of parabolic Higgs bundles and of parabolic connections over $X$ with parabolic structure over $D$. For generic weights, we show that these two moduli spaces have equal Grothendieck motivic classes and that their $E$-polynomials coincide. We also show that the Voevodsky and Chow motives of these two moduli spaces are equal. We further show that the Grothendieck motivic classes and the $E$-polynomials of the parabolic Higgs moduli and of the parabolic Hodge moduli are closely related. Finally, we consider the moduli spaces with fixed determinant and show that the above results also hold in the fixed-determinant case.
|
http://arxiv.org/abs/2309.06967v2
|
We show that there is a language in $\mathsf{S}_2\mathsf{E}/_1$ (symmetric
exponential time with one bit of advice) with circuit complexity at least
$2^n/n$. In particular, the above also implies the same near-maximum circuit
lower bounds for the classes $\Sigma_2\mathsf{E}$,
$(\Sigma_2\mathsf{E}\cap\Pi_2\mathsf{E})/_1$, and
$\mathsf{ZPE}^{\mathsf{NP}}/_1$. Previously, only "half-exponential" circuit
lower bounds for these complexity classes were known, and the smallest
complexity class known to require exponential circuit complexity was
$\Delta_3\mathsf{E} = \mathsf{E}^{\Sigma_2\mathsf{P}}$ (Miltersen,
Vinodchandran, and Watanabe COCOON'99).
Our circuit lower bounds are corollaries of an unconditional zero-error
pseudodeterministic algorithm with an $\mathsf{NP}$ oracle and one bit of
advice ($\mathsf{FZPP}^{\mathsf{NP}}/_1$) that solves the range avoidance
problem infinitely often. This algorithm also implies unconditional
infinitely-often pseudodeterministic $\mathsf{FZPP}^{\mathsf{NP}}/_1$
constructions for Ramsey graphs, rigid matrices, two-source extractors, linear
codes, and $\mathrm{K}^{\mathrm{poly}}$-random strings with nearly optimal
parameters.
Our proofs relativize. The two main technical ingredients are (1) Korten's
$\mathsf{P}^{\mathsf{NP}}$ reduction from the range avoidance problem to
constructing hard truth tables (FOCS'21), which was in turn inspired by a
result of Je\v{r}\'abek on provability in Bounded Arithmetic (Ann. Pure Appl.
Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu,
Oliveira, Ren, and Santhanam (FOCS'23).
|
http://arxiv.org/abs/2309.12912v1
|
We use the three-dimensional Monte Carlo radiative transfer code HDUST to
model Be stars where the disc is tilted from the equatorial plane of the star.
We compute 128 models across 4 spectral types, B0, B2, B5 and B8, tilting the
disc by $0^\circ$, $10^\circ$, $20^\circ$, and $40^\circ$, while varying disc density according
to spectral type. We also compute every model for an average and high stellar
rotation rate. We first discuss non-tilted disc temperatures and show their non-linear dependence on stellar and disc parameters. We find that tilting the
disc minimally affects the density-weighted average disc temperature, but
tilting does create a temperature asymmetry in disc cross sections, which is
more pronounced for a faster rotation rate. We also investigate the effect
tilting has on $V$-band magnitude, polarization, and the H$\alpha$ line.
Tilting the disc does affect these observables, but the changes are entirely
dependent on the position of the observer relative to the direction of tilt. We
find the observables that distinguish tilting from a change in density or
geometry are the H$\alpha$ line shape, where it can transition between
single-peaked and double-peaked, and the polarization position angle, whose
value is dependent on the projected major elongation axis of the disc on the
sky. We also present one early and one late-type model with warped discs. We
find their temperature structure varies a small amount from the uniformly
tilted models, and the different observables correspond to different tilt
angles, consistent with their expected volume of origin within the disc.
|
http://arxiv.org/abs/2309.04816v1
|
Although resonant planets have orbital periods near commensurability,
resonance is also dictated by other factors, such as the planets'
eccentricities and masses, and therefore must be confirmed through a study of
the system's dynamics. Here, we perform such a study for five multi-planet
systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. For each
system, we run a suite of N-body simulations that span the full parameter-space
that is consistent with the constrained orbital and planetary properties. We
study the stability of each system and look for resonances based on the
libration of the critical resonant angles. We find strong evidence for a
two-body resonance in each system; we confirm a 3:2 resonance between
Kepler-226c and Kepler-226d, confirm a 3:2 resonance between Kepler-254c and
Kepler-254d, and confirm a three-body 1:2:3 resonant chain between the three
planets of Kepler-363. We explore the dynamical history of two of these systems
and find that these resonances most likely formed without migration. Migration
leads to the libration of the three-body resonant angle, but these angles
circulate in both Kepler-254 and Kepler-363. Applying our methods to additional
near-resonant systems could help us identify which systems are truly resonant
or non-resonant and which systems require additional follow-up analysis.
|
http://arxiv.org/abs/2306.17751v1
|
The recent explosion of performance of large language models (LLMs) has
changed the field of Natural Language Processing (NLP) more abruptly and
seismically than any other shift in the field's 80-year history. This has
resulted in concerns that the field will become homogenized and
resource-intensive. The new status quo has put many academic researchers,
especially PhD students, at a disadvantage. This paper aims to define a new NLP
playground by proposing 20+ PhD-dissertation-worthy research directions,
covering theoretical analysis, new and challenging problems, learning
paradigms, and interdisciplinary applications.
|
http://arxiv.org/abs/2310.20633v1
|
This paper introduces a dynamic logic extension of separation logic. The
assertion language of separation logic is extended with modalities for the five
types of the basic instructions of separation logic: simple assignment,
look-up, mutation, allocation, and de-allocation. The main novelty of the
resulting dynamic logic is that it allows one to combine different approaches to
resolving these modalities. One such approach is based on the standard weakest
precondition calculus of separation logic. The other approach introduced in
this paper provides a novel alternative formalization in the proposed dynamic
logic extension of separation logic. The soundness and completeness of this
axiomatization have been formalized in the Coq theorem prover.
|
http://arxiv.org/abs/2309.08962v2
|
Various collaborative distributed machine learning (CDML) systems, including
federated learning systems and swarm learning systems, with different key
traits were developed to leverage resources for development and use of machine
learning (ML) models in a confidentiality-preserving way. To meet use case
requirements, suitable CDML systems need to be selected. However, comparison
between CDML systems regarding their suitability for use cases is often
difficult. This work presents a CDML system conceptualization and CDML
archetypes to support comparison of CDML systems and introduce scientific and
practical audiences to the principal functioning and key traits of CDML
systems.
|
http://arxiv.org/abs/2309.16584v3
|
The carbon footprint associated with large language models (LLMs) is a
significant concern, encompassing emissions from their training, inference,
experimentation, and storage processes, including operational and embodied
carbon emissions. An essential aspect is accurately estimating the carbon
impact of emerging LLMs even before their training, which heavily relies on GPU
usage. Existing studies have reported the carbon footprint of LLM training, but
only one tool, mlco2, can predict the carbon footprint of new neural networks
prior to physical training. However, mlco2 has several serious limitations. It
cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs,
disregards critical architectural parameters, focuses solely on GPUs, and
cannot model embodied carbon footprints. Addressing these gaps, we introduce
\textit{\carb}, an end-to-end carbon footprint projection model designed for
both dense and MoE LLMs. Compared to mlco2, \carb~significantly enhances the
accuracy of carbon footprint estimations for various LLMs. The source code is
released at \url{https://github.com/SotaroKaneda/MLCarbon}.
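As a back-of-the-envelope illustration of the kind of projection such a tool performs (all figures below are assumptions and far coarser than the paper's model):

```python
# Operational + embodied carbon estimate for a hypothetical training run.
gpus           = 512           # number of accelerators (assumed)
gpu_power_kw   = 0.4           # average board power per accelerator [kW] (assumed)
train_hours    = 24 * 30       # one month of training (assumed)
pue            = 1.2           # datacenter power usage effectiveness (assumed)
grid_kg_kwh    = 0.4           # grid carbon intensity [kgCO2e/kWh] (assumed)

energy_kwh     = gpus * gpu_power_kw * train_hours * pue
operational_t  = energy_kwh * grid_kg_kwh / 1000           # tonnes CO2e

gpu_embodied_kg = 150          # embodied carbon per accelerator [kgCO2e] (assumed)
gpu_lifetime_h  = 5 * 365 * 24 # amortize embodied carbon over a 5-year lifetime
embodied_t      = gpus * gpu_embodied_kg * (train_hours / gpu_lifetime_h) / 1000

print(f"operational ~{operational_t:.1f} t CO2e, embodied ~{embodied_t:.2f} t CO2e")
```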
|
http://arxiv.org/abs/2309.14393v2
|
Multi-task learning (MTL) is a powerful approach in deep learning that
leverages the information from multiple tasks during training to improve model
performance. In medical imaging, MTL has shown great potential to solve various
tasks. However, existing MTL architectures in medical imaging are limited in
sharing information across tasks, reducing the potential performance
improvements of MTL. In this study, we introduce a novel attention-based MTL
framework to better leverage inter-task interactions for various tasks from
pixel-level to image-level predictions. Specifically, we propose a Cross-Task
Attention Network (CTAN) which utilizes cross-task attention mechanisms to
incorporate information by interacting across tasks. We validated CTAN on four
medical imaging datasets that span different domains and tasks including:
radiation treatment planning prediction using planning CT images of two
different target cancers (Prostate, OpenKBP); pigmented skin lesion
segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19
diagnosis and severity prediction using chest CT scans (STOIC). Our study
demonstrates the effectiveness of CTAN in improving the accuracy of medical
imaging tasks. Compared to standard single-task learning (STL), CTAN
demonstrated a 4.67% improvement in performance and outperformed both widely
used MTL baselines: hard parameter sharing (HPS) with an average performance
improvement of 3.22%; and multi-task attention network (MTAN) with a relative
decrease of 5.38%. These findings highlight the significance of our proposed
MTL framework in solving medical imaging tasks and its potential to improve
their accuracy across domains.
|
http://arxiv.org/abs/2309.03837v1
|
Quantum measurements are key to quantum metrology. Constrained by
experimental capabilities, collective measurements on a large number of copies
of metrological probes can pose significant challenges. Therefore, the locality
in quantum measurements must be considered. In this work, we propose a method
dubbed as the "iterative matrix partition" approach to elucidate the underlying
structures of optimal local measurements, with and without classical
communications, that saturate the quantum Cram\'er-Rao Bound (qCRB).
Furthermore, we find that while exact saturation is possible for all two-qubit
pure states, it is generically restrictive for multi-qubit pure states.
However, we demonstrate that the qCRB can be universally saturated in an
approximate manner through adaptive coherent controls, as long as the initial
state is separable and the Hamiltonian allows for interaction. Our results
bridge the gap between theoretical proposals and experiments in many-body
metrology and can find immediate applications in noisy intermediate-scale
quantum devices.
|
http://arxiv.org/abs/2310.00285v1
|
Machine learning (ML) is increasingly becoming a common tool in computational
chemistry. At the same time, the rapid development of ML methods requires a
flexible software framework for designing custom workflows. MLatom 3 is a
program package designed to leverage the power of ML to enhance typical
computational chemistry simulations and to create complex workflows. This
open-source package provides plenty of choice to users, who can run simulations via command-line options, input files, or scripts using MLatom as a Python package, both on their own computers and on the XACS cloud computing service at XACScloud.com. Computational chemists can calculate energies
and thermochemical properties, optimize geometries, run molecular and quantum
dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and
two-photon absorption spectra with ML, quantum mechanical, and combined models.
The users can choose from an extensive library of methods containing
pre-trained ML models and quantum mechanical approximations such as AIQM1
approaching coupled-cluster accuracy. The developers can build their own models
using various ML algorithms. The great flexibility of MLatom is largely due to
the extensive use of the interfaces to many state-of-the-art software packages
and libraries.
|
http://arxiv.org/abs/2310.20155v1
|
Evolutionary robotics offers a powerful framework for designing and evolving
robot morphologies, particularly in the context of modular robots. However, the
role of query mechanisms during the genotype-to-phenotype mapping process has
been largely overlooked. This research addresses this gap by conducting a
comparative analysis of query mechanisms in the brain-body co-evolution of
modular robots. Using two different query mechanisms, Breadth-First Search
(BFS) and Random Query, within the context of evolving robot morphologies using
CPPNs and robot controllers using tensors, and testing them in two evolutionary
frameworks, Lamarckian and Darwinian systems, this study investigates their
influence on evolutionary outcomes and performance. The findings demonstrate
the impact of the two query mechanisms on the evolution and performance of
modular robot bodies, including morphological intelligence, diversity, and
morphological traits. This study suggests that BFS is both more effective and
efficient in producing highly performing robots. It also reveals that robot diversity is initially higher with BFS than with Random Query, but in the Lamarckian system it declines faster as the population converges to superior designs, while in the Darwinian system BFS leads to higher end-of-process diversity.
|
http://arxiv.org/abs/2309.14387v1
|
We study fermions on a finite chain, interacting repulsively when residing on
the same and on nearest-neighbor sites, and subjected to a Wannier-Stark
linearly-varying potential. Using the density matrix renormalization-group
numerical technique to solve this generalized extended Hubbard model, the
ground state exhibits a staircase of (quasi) plateaus in the average local site
density along the chain, decreasing from being doubly-filled to empty as the
potential increases. These `plateaus' represent locked-in commensurate phases
of charge density waves together with band and Mott insulators. These phases
are separated by incompressible regions with incommensurate fillings. It is
suggested that experimental variations of the slope of the potential and of the
range of the repulsive interactions will produce such a coexistence of phases
which have been individually expected theoretically and observed experimentally
for uniform systems.
|
http://arxiv.org/abs/2310.00291v2
|
Predicting and reasoning about the future lie at the heart of many
time-series questions. For example, goal-conditioned reinforcement learning can
be viewed as learning representations to predict which states are likely to be
visited in the future. While prior methods have used contrastive predictive
coding to model time series data, learning representations that encode
long-term dependencies usually requires large amounts of data. In this paper,
we introduce a temporal difference version of contrastive predictive coding
that stitches together pieces of different time series data to decrease the
amount of data required to learn predictions of future events. We apply this
representation learning method to derive an off-policy algorithm for
goal-conditioned RL. Experiments demonstrate that, compared with prior RL
methods, ours achieves $2 \times$ median improvement in success rates and can
better cope with stochastic environments. In tabular settings, we show that our
method is about $20 \times$ more sample efficient than the successor
representation and $1500 \times$ more sample efficient than the standard (Monte
Carlo) version of contrastive predictive coding.
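For context, a minimal PyTorch sketch of the standard (Monte Carlo) contrastive objective over (state, future-state) pairs that the paper builds on; the temporal-difference stitching itself is not shown, and all dimensions are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveFutureModel(nn.Module):
    """InfoNCE over (state, future state) pairs with an inner-product critic."""
    def __init__(self, state_dim, repr_dim=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, repr_dim))
        self.psi = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, repr_dim))

    def loss(self, s, s_future):
        # logits[i, j] = phi(s_i) . psi(s_future_j); positives sit on the diagonal,
        # other rows in the batch act as negatives.
        logits = self.phi(s) @ self.psi(s_future).T
        return F.cross_entropy(logits, torch.arange(s.shape[0]))

model = ContrastiveFutureModel(state_dim=8)
s = torch.randn(16, 8)           # states s_t
s_future = torch.randn(16, 8)    # states sampled from the future of the same trajectories
print(model.loss(s, s_future).item())
```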
|
http://arxiv.org/abs/2310.20141v2
|
In this paper, we develop a new rainbow Hamilton framework, which is of
independent interest, settling the problem proposed by Gupta, Hamann,
M\"{u}yesser, Parczyk, and Sgueglia when $k=3$, and draw the general conclusion
for any $k\geq3$ as follows. A $k$-graph system $\textbf{H}=\{H_i\}_{i\in[n]}$
is a family of not necessarily distinct $k$-graphs on the same $n$-vertex set
$V$, moreover, a $k$-graph $H$ on $V$ is rainbow if $E(H)\subseteq
\bigcup_{i\in[n]}E(H_i)$ and $|E(H)\cap E(H_i)|\leq1$ for $i\in[n]$. We show
that given $\gamma> 0$, sufficiently large $n$ and an $n$-vertex $k$-graph
system $\textbf{H}=\{H_i\}_{i\in[n]}$, if
$\delta_{k-2}(H_i)\geq(5/9+\gamma)\binom{n}{2}$ for $i\in[n]$ where $k\geq3$,
then there exists a rainbow tight Hamilton cycle. This result implies the
corresponding result for a single graph, which was proved independently by Lang and Sanhueza-Matamala [J. Lond. Math. Soc., 2022] and by Polcyn, Reiher, R\"{o}dl and Sch\"{u}lke [J. Combin. Theory Ser. B, 2021].
|
http://arxiv.org/abs/2302.00080v1
|
The relative fixity of a digraph $\Gamma$ is defined as the ratio between the
largest number of vertices fixed by a nontrivial automorphism of $\Gamma$ and
the number of vertices of $\Gamma$. We characterize the vertex-primitive
digraphs whose relative fixity is at least $1/3$, and we show that there are
only finitely many vertex-primitive digraphs of bounded out-valency and
relative fixity exceeding a positive constant.
|
http://arxiv.org/abs/2309.16590v1
|
This paper deals with the Vlasov-Stokes' system in three dimensions with
periodic boundary conditions in the spatial variable. We prove the existence of
a unique strong solution to this two-phase model under the assumption that
initial velocity moments of certain order are bounded. We use a fixed point
argument to arrive at a global-in-time solution.
|
http://arxiv.org/abs/2305.19576v1
|
2D materials present an interesting platform for device designs. However,
oxidation can drastically change the system's properties, which need to be
accounted for. Through {\it ab initio} calculations, we investigated
freestanding and SiC-supported As, Sb, and Bi mono-elemental layers. The
oxidation process occurs through an O$_2$ spin-state transition, accounted for
within the Landau-Zener transition framework. Additionally, we have investigated the
oxidation barriers and the role of spin-orbit coupling. Our calculations
pointed out that the presence of SiC substrate reduces the oxidation time scale
compared to a freestanding monolayer. We have extracted the energy barrier
transition, compatible with our spin-transition analysis. Besides, spin-orbit
coupling is relevant to the oxidation mechanisms and alters time scales. The
energy barriers decrease as the pnictogen changes from As to Sb to Bi for the
freestanding systems, while for the SiC-supported ones they increase across the
pnictogen family. Our computed energy barriers confirm the enhanced robustness
against oxidation for the SiC-supported systems.
|
http://arxiv.org/abs/2307.00138v1
|
TREXIO is an open-source file format and library developed for the storage
and manipulation of data produced by quantum chemistry calculations. It is
designed with the goal of providing a reliable and efficient method of storing
and exchanging wave function parameters and matrix elements, making it an
important tool for researchers in the field of quantum chemistry. In this work,
we present an overview of the TREXIO file format and library. The library
consists of a front-end implemented in the C programming language and two
different back-ends: a text back-end and a binary back-end utilizing the HDF5
library which enables fast read and write operations. It is compatible with a
variety of platforms and has interfaces for the Fortran, Python, and OCaml
programming languages. In addition, a suite of tools has been developed to
facilitate the use of the TREXIO format and library, including converters for
popular quantum chemistry codes and utilities for validating and manipulating
data stored in TREXIO files. The simplicity, versatility, and ease of use of
TREXIO make it a valuable resource for researchers working with quantum
chemistry data.
|
http://arxiv.org/abs/2302.14793v2
|
Multi-task learning (MTL) has shown great potential in medical image
analysis, improving the generalizability of the learned features and the
performance in individual tasks. However, most of the work on MTL focuses on
either architecture design or gradient manipulation, while in both scenarios,
features are learned in a competitive manner. In this work, we propose to
formulate MTL as a multi/bi-level optimization problem, and therefore force
features to learn from each task in a cooperative manner. Specifically, we
update the sub-model for each task alternately, taking advantage of the
learned sub-models of the other tasks. To alleviate the negative transfer
problem during the optimization, we search for flat minima for the current
objective function with regard to features from other tasks. To demonstrate the
effectiveness of the proposed approach, we validate our method on three
publicly available datasets. The proposed method shows the advantage of
cooperative learning, and yields promising results when compared with the
state-of-the-art MTL approaches. The code will be available online.
|
http://arxiv.org/abs/2309.12090v1
|
We report the structural and magnetic properties of RNi (R=Dy,
Tb$_{1/3}$Dy$_{1/3}$Ho$_{1/3}$, and
Gd$_{1/5}$Tb$_{1/5}$Dy$_{1/5}$Ho$_{1/5}$Er$_{1/5}$) to investigate the
high-entropy effect at the rare-earth site. The lattice parameters are almost
unchanged by the increase of configurational entropy, which is due to the
successive partial substitution of Dy by pairs of rare-earth elements located on either side of Dy in the periodic table. All compounds exhibit ferromagnetic
ground states. The replacement of Dy with Tb+Ho, which does not have magnetic
interactions in competition with Dy, does not affect the magnetic ordering
temperature. Although (Gd$_{1/5}$Tb$_{1/5}$Dy$_{1/5}$Ho$_{1/5}$Er$_{1/5}$)Ni
shows a Curie temperature close to that of DyNi, an additional magnetic
anomaly, which would be a spin reorientation, is observed probably due to the
introduction of competing magnetic interactions between R=Gd and Er compounds
and R=Tb, Dy, and Ho ones. We have also assessed the magnetocaloric effect, and
the configurational entropy dependence of the magnetic entropy change reflects
that of the temperature derivative of the magnetic susceptibility. Our analysis
suggests the possibility of enhancing magnetocaloric properties by designing
the anisotropy of rare-earth magnetic moments in the high-entropy state.
|
http://arxiv.org/abs/2309.04619v1
|
Recent legislation proposals have significantly increased the demand for
eXplainable Artificial Intelligence (XAI) in many businesses, especially in
so-called `high-risk' domains, such as recruitment. Within recruitment, AI has
become commonplace, mainly in the form of job recommender systems (JRSs), which
try to match candidates to vacancies, and vice versa. However, common XAI
techniques often fall short in this domain due to the different levels and
types of expertise of the individuals involved, making explanations difficult
to generalize. To determine the explanation preferences of the different
stakeholder types - candidates, recruiters, and companies - we created and
validated a semi-structured interview guide. Using grounded theory, we
structurally analyzed the results of these interviews and found that different
stakeholder types indeed have strongly differing explanation preferences.
Candidates indicated a preference for brief, textual explanations that allow
them to quickly judge potential matches. On the other hand, hiring managers
preferred visual graph-based explanations that provide a more technical and
comprehensive overview at a glance. Recruiters found more exhaustive textual
explanations preferable, as those provided them with more talking points to
convince both parties of the match. Based on these findings, we describe
guidelines on how to design an explanation interface that fulfills the
requirements of all three stakeholder types. Furthermore, we provide the
validated interview guide, which can assist future research in determining the
explanation preferences of different stakeholder types.
|
http://arxiv.org/abs/2309.05507v1
|
Background: The literature offers various methods for capturing software
architectural knowledge (AK), including views, viewpoints, and architecture
decision records (ADRs). In parallel, sustainability has gained prominence in
software engineering, especially concerning software architecture.
Nevertheless, practical industry reviews on these subjects seem to be lacking.
Aim: In this research, we aim to understand current practice in architectural knowledge, and to explore where sustainability can be applied to address
sustainability in software architecture in the future. Method: We conducted a survey using a questionnaire containing 34 questions, collecting responses from 45 architects working at a prominent bank in the Netherlands, to evaluate the practical representation and communication of architectural knowledge and sustainability. Result: Our analysis yielded two
primary discoveries and several intriguing detailed results regarding how AK is
captured and conveyed to diverse stakeholders. Firstly, it seems crucial to
develop a new architectural element that connects various architectural
features and perspectives tailored for different stakeholders. Secondly,
providing clear guidance, references, and goals is essential to motivate
architects to adopt Sustainable Software Engineering practices. Conclusion:
After analysing the data collected through this survey, we have concluded that:
a) There are no established domain-specific AK methods/tools in the financial
domain. Most practitioners use domain-generic tools. b) A new architectural
element that links the various architectural features and viewpoints created
for various stakeholders appears to be necessary. c) There is sufficient
sustainability awareness and motivation among software architects. However,
what they lack are clear guidance, references, and goals to practice
sustainable software engineering.
|
http://arxiv.org/abs/2309.11572v1
|
We present a reversible intermediate language with concurrency for
translating a high-level concurrent programming language to another lower-level
concurrent programming language, keeping reversibility. Intermediate languages
are commonly used in compiling a source program to an object code program
closer to the machine code, where an intermediate language enables behavioral
analysis and optimization to be decomposed in steps. We propose CRIL
(Concurrent Reversible Intermediate Language) as an extension of RIL used by
Mogensen for a functional reversible language, incorporating a multi-thread
process invocation and the synchronization primitives based on the P-V
operations. We show that the operational semantics of CRIL enjoy the properties
of reversibility, including causal safety and causal liveness as proposed by
Lanese et al., by checking the axiomatic properties. The operational semantics is
defined by composing the bidirectional control flow with the dependency
information on updating the memory, called annotation DAG. We show a simple
example of `airline ticketing' to illustrate how CRIL preserves the causality
for reversibility in imperative programs with concurrency.
|
http://arxiv.org/abs/2309.07310v1
|
In this work we present a hybrid physics-based and data-driven learning
approach to construct surrogate models for concurrent multiscale simulations of
complex material behavior. We start from robust but inflexible physics-based
constitutive models and increase their expressivity by allowing a subset of
their material parameters to change in time according to an evolution operator
learned from data. This leads to a flexible hybrid model combining a
data-driven encoder and a physics-based decoder. Apart from introducing
physics-motivated bias to the resulting surrogate, the internal variables of
the decoder act as a memory mechanism that allows path dependency to arise
naturally. We demonstrate the capabilities of the approach by combining an FNN
encoder with several plasticity decoders and training the model to reproduce
the macroscopic behavior of fiber-reinforced composites. The hybrid models are
able to provide reasonable predictions of unloading/reloading behavior while
being trained exclusively on monotonic data. Furthermore, in contrast to
traditional surrogates mapping strains to stresses, the specific architecture
of the hybrid model allows for lossless dimensionality reduction and
straightforward enforcement of frame invariance by using strain invariants as
the feature space of the encoder.
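To make the encoder/decoder split concrete, the following is a minimal sketch of the idea, assuming a toy 1D elastic-perfectly-plastic "decoder" whose yield stress is supplied at each step by a small neural encoder; the feature choice, dimensions, and material constants are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Data-driven part: maps step features to a (positive) yield stress."""
    def __init__(self, n_features: int = 2, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the parameter positive
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

def physics_decoder(strain, yield_stress, plastic_strain, E: float = 200.0):
    """Physics-based part: 1D elastic-perfectly-plastic return mapping.
    The internal variable `plastic_strain` is the memory mechanism."""
    trial = E * (strain - plastic_strain)                  # elastic predictor
    overshoot = torch.clamp(trial.abs() - yield_stress, min=0.0)
    stress = trial - torch.sign(trial) * overshoot         # plastic corrector
    plastic_strain = plastic_strain + torch.sign(trial) * overshoot / E
    return stress, plastic_strain

encoder = Encoder()
plastic = torch.zeros(1)
with torch.no_grad():                                      # forward rollout only
    for eps in torch.linspace(0.0, 0.05, steps=50):
        feats = torch.tensor([[eps.item(), plastic.item()]])
        sigma_y = encoder(feats).squeeze()                 # time-varying parameter
        stress, plastic = physics_decoder(eps, sigma_y, plastic)
```

In training, the encoder parameters would be fit so that the decoder's stress path matches the homogenized micromodel data; the internal plastic strain carried by the decoder is what provides the path dependency mentioned above.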
|
http://arxiv.org/abs/2301.13547v1
|
We consider electroweak (EW) gauge boson corrections to the masses of
pseudoscalar mesons to next-to-leading order (NLO) in $\alpha_s$ and $1/N_C$.
The pion mass shift induced by the $Z$-boson is shown to be
$m_{\pi^\pm}-m_{\pi^0} = -0.00201(12)$ MeV. While being small compared to the
electromagnetic mass shift, the prediction lies about a factor of $\sim 4$
above the precision of the current experimental measurement, and a factor
$O(10)$ below the precision of current lattice calculations. This motivates
future implementations of these EW gauge boson effects on the lattice. Finally,
we consider BSM contributions to the pion mass difference.
|
http://arxiv.org/abs/2308.00030v1
|
Automated detection of Gallbladder Cancer (GBC) from Ultrasound (US) images
is an important problem, which has drawn increased interest from researchers.
However, most of these works use difficult-to-acquire information such as
bounding box annotations or additional US videos. In this paper, we focus on
GBC detection using only image-level labels. Such annotation is usually
available based on the diagnostic report of a patient, and does not require
additional annotation effort from the physicians. However, our analysis reveals
that it is difficult to train a standard image classification model for GBC
detection. This is due to the low inter-class variance (a malignant region
usually occupies only a small portion of a US image), high intra-class variance
(due to the US sensor capturing a 2D slice of a 3D object leading to large
viewpoint variations), and low training data availability. We posit that even
when we have only the image level label, still formulating the problem as
object detection (with bounding box output) helps a deep neural network (DNN)
model focus on the relevant region of interest. Since no bounding box
annotations are available for training, we pose the problem as weakly supervised
object detection (WSOD). Motivated by the recent success of transformer models
in object detection, we train one such model, DETR, using
multi-instance-learning (MIL) with self-supervised instance selection to suit
the WSOD task. Our proposed method demonstrates an improvement in AP and
detection sensitivity over the SOTA transformer-based and CNN-based WSOD
methods. Project page is at https://gbc-iitd.github.io/wsod-gbc
|
http://arxiv.org/abs/2309.05261v1
|
The concept of cyber deception has been receiving emerging attention. The
development of cyber defensive deception techniques requires interdisciplinary
work, among which cognitive science plays an important role. In this work, we
adopt a signaling game framework between a defender and a human agent to
develop a cyber defensive deception protocol that takes advantage of the
cognitive biases of human decision-making using quantum decision theory to
combat insider attacks (IA). The defender deceives an inside human attacker by
luring him to access decoy sensors via generators producing perceptions of
classical signals to manipulate the human attacker's psychological state of
mind. Our results reveal that even without changing the classical traffic data,
strategically designed generators can make an insider attacker perform worse at
identifying decoys than under the deceptive scheme whose generators merely
produce random information based on input signals. The proposed framework leads
to fundamental theories in
designing more effective signaling schemes.
|
http://arxiv.org/abs/2309.13403v1
|
This paper presents a method for determining the area explored by a
line-sweep sensor during an area-covering mission in a two-dimensional plane.
Accurate knowledge of the explored area is crucial for various applications in
robotics, such as mapping, surveillance, and coverage optimization. The
proposed method leverages the concept of coverage measure of the environment
and its relation to the topological degree in the plane, to estimate the extent
of the explored region. In addition, we extend the approach to uncertain
coverage measure values using interval analysis. This last contribution allows
for a guaranteed characterization of the explored area, essential considering
the often critical character of area-covering missions. Finally, this paper
also proposes a novel algorithm for computing the topological degree in the
2-dimensional plane, for all the points inside an area of interest, which
differs from existing solutions that compute the topological degree for single
points. The applicability of the method is evaluated through a real-world
experiment.
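As a rough illustration of the degree-based idea (not the paper's interval-based algorithm), the sketch below classifies points as explored by computing the winding number, i.e. the topological degree in the plane, of a closed boundary curve around each query point; the circular boundary is an assumed example.

```python
import numpy as np

def winding_number(curve: np.ndarray, point: np.ndarray) -> int:
    """Topological degree of a closed polygonal curve about `point`,
    computed as the total signed angle swept, divided by 2*pi."""
    rel = curve - point                          # vectors from point to vertices
    ang = np.arctan2(rel[:, 1], rel[:, 0])       # vertex angles
    dang = np.diff(np.append(ang, ang[0]))       # increments, closing the loop
    dang = (dang + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return int(round(dang.sum() / (2 * np.pi)))

# Example: a circular swept boundary of radius 2 centred at the origin.
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
boundary = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])

for p in [np.array([0.0, 0.0]), np.array([3.0, 0.0])]:
    deg = winding_number(boundary, p)
    print(p, "explored" if deg != 0 else "unexplored", deg)
```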
|
http://arxiv.org/abs/2309.03604v1
|
The increasing demand for the realization of global-scale quantum
communication services necessitates critical investigation for a practical
quantum secure communication network that relies on full-time all-location
coverage. In this direction, the non-terrestrial quantum key distribution is
expected to play an important role in providing agility, maneuverability, relay
link, on-demand network, and last-mile coverage. In this work, we have
summarized the research and development that has happened until now in the
domain of quantum communication using non-terrestrial platforms with a specific
focus on the associated challenges and the relevant models. Further, to extend
the analysis beyond the existing know-how, a hybrid model involving the
features of Vasylyev et al. model and Liorni et al. model is introduced here.
The hybrid model entails adapting a spherical-beam to an elliptic-beam
approximation, effectively capturing the characteristics of transmittance in
densely humid weather conditions and at low altitudes. Further, to understand
the potential impact of the weather conditions of a region on atmospheric
attenuation, as an example the average monthly visibility of Pune city was
analyzed for the years 2021 and 2022. In addition, a simulation of a generic
model is performed using a software-defined network paradigm where quantum
teleportation is simulated between distant parties using a swarm of drones in
NetSquid.
|
http://arxiv.org/abs/2309.13417v1
|
Large Language Models (LLMs) excel in various Natural Language Processing
(NLP) tasks, yet their evaluation, particularly in languages beyond the top
$20$, remains inadequate due to the limitations of existing benchmarks and metrics.
Employing LLMs as evaluators to rank or score other models' outputs emerges as
a viable solution, addressing the constraints tied to human annotators and
established benchmarks. In this study, we explore the potential of LLM-based
evaluators, specifically GPT-4, in enhancing multilingual evaluation by
calibrating them against $20$K human judgments across three text-generation
tasks, five metrics, and eight languages. Our analysis reveals a bias in
GPT4-based evaluators towards higher scores, underscoring the necessity of
calibration with native speaker judgments, especially in low-resource and
non-Latin script languages, to ensure accurate evaluation of LLM performance
across diverse languages.
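One elementary ingredient of such a calibration analysis can be sketched as follows: measuring, per language, the rank agreement between the LLM evaluator's scores and native-speaker judgments (the scores below are made-up placeholders, not the study's data).

```python
from scipy.stats import kendalltau

human_scores = {
    "hi": [4, 3, 5, 2, 4, 1],   # hypothetical native-speaker ratings
    "ja": [5, 4, 4, 3, 2, 2],
}
llm_scores = {
    "hi": [5, 4, 5, 4, 5, 3],   # hypothetical GPT-4-as-evaluator ratings
    "ja": [5, 4, 4, 3, 3, 2],
}

for lang in human_scores:
    tau, p = kendalltau(human_scores[lang], llm_scores[lang])
    print(f"{lang}: Kendall tau = {tau:.2f} (p = {p:.3f})")
```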
|
http://arxiv.org/abs/2309.07462v2
|
Second language acquisition (SLA) research has extensively studied
cross-linguistic transfer, the influence of linguistic structure of a speaker's
native language [L1] on the successful acquisition of a foreign language [L2].
Effects of such transfer can be positive (facilitating acquisition) or negative
(impeding acquisition). We find that NLP literature has not given enough
attention to the phenomenon of negative transfer. To understand patterns of
both positive and negative transfer between L1 and L2, we model sequential
second language acquisition in LMs. Further, we build a Multilingual Age
Ordered CHILDES (MAO-CHILDES) -- a dataset consisting of 5 typologically
diverse languages, i.e., German, French, Polish, Indonesian, and Japanese -- to
understand the degree to which native Child-Directed Speech (CDS) [L1] can help
or conflict with English language acquisition [L2]. To examine the impact of
native CDS, we use the TILT-based cross lingual transfer learning approach
established by Papadimitriou and Jurafsky (2020) and find that, as in human
SLA, language family distance predicts more negative transfer. Additionally, we
find that conversational speech data shows greater facilitation for language
acquisition than scripted speech data. Our findings call for further research
using our novel Transformer-based SLA models and we would like to encourage it
by releasing our code, data, and models.
|
http://arxiv.org/abs/2305.19589v1
|
The hydrodynamic limit and Newtonian limit are important in the relativistic
kinetic theory. We justify rigorously the validity of the two independent
limits from the special relativistic Boltzmann equation to the classical Euler
equations without assuming any dependence between the Knudsen number
$\varepsilon$ and the light speed $\mathfrak{c}$. The convergence rates are
also obtained. This is achieved by Hilbert expansion of relativistic Boltzmann
equation. New difficulties arise when tackling the uniform in $\mathfrak{c}$ and
$\varepsilon$ estimates for the Hilbert expansion, which have been overcome by
establishing some uniform-in-$\mathfrak{c}$ estimate for relativistic Boltzmann
operators.
|
http://arxiv.org/abs/2308.16646v1
|
As the pretraining technique is growing in popularity, little work has been
done on pretrained learning-based motion prediction methods in autonomous
driving. In this paper, we propose a framework to formalize the pretraining
task for trajectory prediction of traffic participants. Within our framework,
inspired by the random masked model in natural language processing (NLP) and
computer vision (CV), objects' positions at random timesteps are masked and
then filled in by the learned neural network (NN). By changing the mask
profile, our framework can easily switch among a range of motion-related tasks.
By evaluating it on the Argoverse and NuScenes datasets, we show that our
proposed pretraining framework is able to deal with noisy inputs and improves
motion prediction accuracy and miss rate, especially for objects occluded over
time.
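A minimal sketch of the masking idea, with assumed dimensions and an off-the-shelf Transformer encoder rather than the paper's exact architecture: positions at randomly chosen timesteps are replaced by a learned mask token and the network is trained to reconstruct them.

```python
import torch
import torch.nn as nn

class MaskedTrajectoryModel(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(2, d_model)            # (x, y) position -> token
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)             # token -> reconstructed (x, y)

    def forward(self, traj, mask):
        tokens = self.embed(traj)
        # Replace masked timesteps by the learned mask token.
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        return self.head(self.encoder(tokens))

model = MaskedTrajectoryModel()
traj = torch.randn(8, 20, 2)                          # batch of 20-step (x, y) tracks
mask = torch.rand(8, 20) < 0.3                        # hide ~30% of the timesteps
recon = model(traj, mask)
loss = ((recon - traj)[mask] ** 2).mean()             # loss only on masked steps
loss.backward()
```

Changing how `mask` is generated (for example, masking only future timesteps) switches the same model between reconstruction-style pretraining and forecasting-style tasks, which is the mask-profile idea described above.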
|
http://arxiv.org/abs/2309.08989v1
|
Angular momentum coupling between a rotating magnetized plasma and torsional
Alfv\'en waves carrying orbital angular momentum (OAM) is examined. It is not
only demonstrated that rotation is the source of Fresnel-Faraday rotation - or
orbital Faraday rotation effects - for OAM carrying Alfv\'en waves, but also
that angular momentum from an OAM carrying Alfv\'en wave can be transferred to
a rotating plasma through the inverse process. For the direct process, the
transverse structure angular rotation frequency is derived by considering the
dispersion relation for modes with opposite OAM content. For the inverse
process, the torque exerted on the plasma is derived as a function of wave and
plasma parameters.
|
http://arxiv.org/abs/2309.11200v1
|
Dysarthria is a speech disorder that hinders communication due to
difficulties in articulating words. Detection of dysarthria is important for
several reasons as it can be used to develop a treatment plan and help improve
a person's quality of life and ability to communicate effectively. Much of the
literature has focused on improving ASR systems for dysarthric speech. The
objective of the current work is to develop models that can accurately classify
the presence of dysarthria and also give information about the intelligibility
level using limited data by employing a few-shot approach using a transformer
model. This work also aims to tackle the data leakage that is present in
previous studies. Our whisper-large-v2 transformer model trained on a subset of
the UASpeech dataset containing medium intelligibility level patients achieved
an accuracy of 85%, precision of 0.92, recall of 0.8, F1-score of 0.85, and
specificity of 0.91. Experimental results also demonstrate that the model
trained using the 'words' dataset performed better compared to the model
trained on the 'letters' and 'digits' dataset. Moreover, the multiclass model
achieved an accuracy of 67%.
|
http://arxiv.org/abs/2309.09329v1
|
We introduce SignBank+, a clean version of the SignBank dataset, optimized
for machine translation between spoken language text and SignWriting, a
phonetic sign language writing system. In contrast to previous work that
employs complex factorization techniques to enable translation between text and
SignWriting, we show that a traditional text-to-text translation approach
performs equally effectively on the cleaned SignBank+ dataset. Our evaluation
results indicate that models trained on SignBank+ surpass those on the original
dataset, establishing a new benchmark for SignWriting-based sign language
translation and providing an open resource for future research.
|
http://arxiv.org/abs/2309.11566v2
|
This position paper on the (meta-)theory of Structural Operational Semantic
(SOS) is motivated by the following two questions: (1) Is the (meta-)theory of
SOS dying out as a research field? (2) If so, is it possible to rejuvenate this
field with a redefined purpose?
In this article, we will consider possible answers to those questions by
first analysing the history of the EXPRESS/SOS workshops and the data
concerning the authors and the presentations featured in the editions of those
workshops as well as their subject matters.
The results of our quantitative and qualitative analyses all indicate a
diminishing interest in the theory of SOS as a field of research. Even though
`all good things must come to an end', we strive to finish this position paper
on an upbeat note by addressing our second motivating question with some
optimism. To this end, we use our personal reflections and an analysis of
recent trends in two of the flagship conferences in the field of Programming
Languages (namely POPL and PLDI) to draw some conclusions on possible future
directions that may rejuvenate research on the (meta-)theory of SOS. We hope
that our musings will entice members of the research community to breathe new
life into a field of research that has been kind to three of the authors of
this article.
|
http://arxiv.org/abs/2309.07304v1
|
With the rise of bidirectional encoder representations from Transformer
models in natural language processing, the speech community has adopted some of
their development methodologies. Therefore, the Wav2Vec models were introduced
to reduce the data required to obtain state-of-the-art results. This work
leverages this knowledge and improves the performance of the pre-trained speech
models by simply replacing the fine-tuning dense layer with a lateral
inhibition layer inspired by the biological process. Our experiments on
Romanian, a low-resource language, show an average improvement of 12.5% in word
error rate (WER) when using the lateral inhibition layer. In addition, we obtain
state-of-the-art results on both the Romanian Speech Corpus and the Robin
Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively.
|
http://arxiv.org/abs/2306.17792v1
|
This paper proposes decentralized stability conditions for multi-converter
systems based on the combination of the small gain theorem and the small phase
theorem. Instead of directly computing the closed-loop dynamics, e.g.,
eigenvalues of the state-space matrix, or using the generalized Nyquist
stability criterion, the proposed stability conditions are more scalable and
computationally lighter, which aim at evaluating the closed-loop system
stability by comparing the individual converter dynamics with the network
dynamics in a decentralized and open-loop manner. Moreover, our approach can
handle heterogeneous converters' dynamics and is suitable to analyze
large-scale multi-converter power systems that contain grid-following (GFL),
grid-forming (GFM) converters, and synchronous generators. Compared with other
decentralized stability conditions, e.g., passivity-based stability conditions,
the proposed conditions are significantly less conservative and can be
generally satisfied in practice across the whole frequency range.
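For orientation only, the two classical ingredients being combined can be recalled in their textbook forms (these are not the paper's refined decentralized conditions): for internally stable $G_1$ and $G_2$ in negative feedback, either of the following certifies closed-loop stability,

```latex
% Standard statements, quoted for context; \bar{\phi} and \underline{\phi}
% denote the largest/smallest system phase of a frequency-wise sectorial system.
\[
\underbrace{\;\|G_1\|_\infty\,\|G_2\|_\infty < 1\;}_{\text{small gain}}
\qquad\text{or}\qquad
\underbrace{\;\bar{\phi}(G_1)+\bar{\phi}(G_2) < \pi
\;\text{ and }\;
\underline{\phi}(G_1)+\underline{\phi}(G_2) > -\pi\;}_{\text{small phase}}
\]
```

each using only open-loop information about the subsystems, which is what makes the decentralized, scalable evaluation described above possible.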
|
http://arxiv.org/abs/2309.08037v2
|
The paper describes the MAGIC multi-mode focal reducer (Monitoring of Active
Galaxies by Investigation of their Cores), commissioned on the 1-m Zeiss-1000
telescope of the Special Astrophysical Observatory of the Russian Academy of
Sciences in September 2020. Three observational modes are currently realised:
photometry, polarimetry, and long-slit spectroscopy. Reducing the focal length
makes it possible to obtain a sufficiently large field of view for photometry
and a large slit height for spectroscopy of $\sim$12$'$, as well as a large
field of view for polarimetry with a quadrupole Wollaston prism of
$\sim$6$'$.4. This feature makes the complex study of extended nebulae and
galaxies efficient. The MAGIC capabilities are illustrated with examples of
observations of various astronomical objects. The spectral mode in the range of
4000-7200 AA provides a spectral resolution of $R \sim 1000$; for a starlike
target up to 14 mag in medium-band filters with a seeing of 1$''$ for 20
minutes of total exposure, the photometry accuracy is better than 0.01 mag and
the polarization accuracy is better than 0.6%. Especially for the new focal
reducer, an offset guide and a position angle rotation system were implemented.
The results of the modernization of the baffle system in the optical scheme of
the telescope for the suppression of scattered light are also described.
|
http://arxiv.org/abs/2309.13371v1
|
We establish the new main inequality as a minimizing criterion for minimal
maps to products of $\mathbb{R}$-trees, and the infinitesimal new main
inequality as a stability criterion for minimal maps to $\mathbb{R}^n$. Along
the way, we develop a new perspective on destabilizing minimal surfaces in
$\mathbb{R}^n$, and as a consequence we reprove the instability of some
classical minimal surfaces; for example, the Enneper surface.
|
http://arxiv.org/abs/2301.00249v2
|
This paper presents a systematic and comprehensive analysis of the impact of
parameter imbalance in permanent magnet synchronous machines. Analytical models
that reveal the effects of imbalance are obtained for each parameter.
Thereafter, the models are verified for accuracy by comparison with complex
simulations that closely represent true machine behavior. Such models may be
utilized for developing (general) algorithms for detection, learning and
mitigation of the negative effects of parameter imbalance including current
(and thus torque) pulsations during real-time operation.
|
http://arxiv.org/abs/2310.00508v1
|
The current interacting hand (IH) datasets are relatively simplistic in terms
of background and texture, with hand joints being annotated by a machine
annotator, which may result in inaccuracies, and the diversity of pose
distribution is limited. However, the variability of background, pose
distribution, and texture can greatly influence the generalization ability.
Therefore, we present a large-scale synthetic dataset RenderIH for interacting
hands with accurate and diverse pose annotations. The dataset contains 1M
photo-realistic images with varied backgrounds, perspectives, and hand
textures. To generate natural and diverse interacting poses, we propose a new
pose optimization algorithm. Additionally, for better pose estimation accuracy,
we introduce a transformer-based pose estimation network, TransHand, to
leverage the correlation between interacting hands and verify the effectiveness
of RenderIH in improving results. Our dataset is model-agnostic and can improve
the accuracy of any hand pose estimation method more than other real or
synthetic datasets. Experiments have shown that pretraining on our synthetic
data can significantly decrease the error from 6.76mm to 5.79mm, and our
TransHand surpasses contemporary methods. Our dataset and code are available at
https://github.com/adwardlee/RenderIH.
|
http://arxiv.org/abs/2309.09301v3
|
We show how to explicitly compute the homogenized curvature energy appearing
in the isotropic $\Gamma$-limit for flat and for curved initial configuration
Cosserat shell models, when a parental three-dimensional minimization problem
on $\Omega \subset \mathbb{R}^3$ for a Cosserat energy based on the second
order dislocation density tensor $\alpha:=\overline{R} ^T {\rm
Curl}\,\overline{R} \in \mathbb{R}^{3\times 3}$, $\overline{R}\in {\rm SO}(3)$
is used.
|
http://arxiv.org/abs/2309.06032v1
|
We study the low-energy eigenstates of a topological superconductor wire
modeled by a Kitaev chain, which is connected at one of its ends to a quantum
dot through nearest-neighbor (NN) hopping and NN Coulomb repulsion. Using an
unrestricted Hartree-Fock approximation to decouple the Coulomb term, we obtain
that the quality of the Majorana end states is seriously affected by this term
only when the dependence of the low-lying energies with the energy of the
quantum dot shows a "diamond" shape, characteristic of short wires. We discuss
limitations of the simplest effective models to describe the physics. We expect
the same behavior in more realistic models for topological superconducting
wires.
|
http://arxiv.org/abs/2309.10888v3
|
We focus on modeling the relationship between an input feature vector and the
predicted outcome of a trained decision tree using mixed-integer optimization.
This can be used in many practical applications where a decision tree or tree
ensemble is incorporated into an optimization problem to model the predicted
outcomes of a decision. We propose tighter mixed-integer optimization
formulations than those previously introduced. Existing formulations can be
shown to have linear relaxations that have fractional extreme points, even for
the simple case of modeling a single decision tree. A formulation we propose,
based on a projected union of polyhedra approach, is ideal for a single
decision tree. While the formulation is generally not ideal for tree ensembles
or if additional constraints are added, it generally has fewer extreme points,
leading to a faster time to solve, particularly if the formulation has
relatively few trees. However, previous work has shown that formulations based
on a binary representation of the feature vector perform well computationally
and hence are attractive for use in practical applications. We present multiple
approaches to tighten existing formulations with binary vectors, and show that
fractional extreme points are removed when there are multiple splits on the
same feature. At an extreme, we prove that this results in ideal formulations
for tree ensembles modeling a one-dimensional feature vector. Building on this
result, we also show via numerical simulations that these additional
constraints result in significantly tighter linear relaxations when the feature
vector is low dimensional. We also present instances where the time to solve to
optimality is significantly improved using these formulations.
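As a point of reference for the discussion above, one generic binary-variable formulation of a single trained tree (a standard construction, not necessarily the tightest one proposed here) introduces leaf-selection variables $z_\ell \ge 0$ and split indicators $u_{j,t} = [x_j \le b_{j,t}] \in \{0,1\}$ with thresholds $b_{j,1} < b_{j,2} < \dots$ on feature $j$:

```latex
\[
\sum_{\ell \in \mathrm{leaves}} z_\ell = 1, \qquad
y = \sum_{\ell} c_\ell\, z_\ell, \qquad
z_\ell \le u_{j,t} \;\; \forall (j,t)\in \mathrm{left}(\ell), \qquad
z_\ell \le 1-u_{j,t} \;\; \forall (j,t)\in \mathrm{right}(\ell),
\]
\[
u_{j,t} \le u_{j,t+1} \quad \text{for consecutive thresholds on the same feature } j,
\]
```

where $\mathrm{left}(\ell)$ and $\mathrm{right}(\ell)$ collect the splits taken on the path to leaf $\ell$ and $c_\ell$ is the leaf prediction; the last family of ordering constraints is the kind of "multiple splits on the same feature" tightening referred to above.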
|
http://arxiv.org/abs/2302.14744v1
|
Boundary effects play an important role in the study of hydrodynamic limits
in the Boltzmann theory. We justify rigorously the validity of the hydrodynamic
limit from the Boltzmann equation of soft potentials to the compressible Euler
equations by the Hilbert expansion with multi-scales. Specifically, the
Boltzmann solutions are expanded into three parts: interior part, viscous
boundary layer and Knudsen boundary layer. Due to the weak effect of collision
frequency of soft potentials, new difficulty arises when tackling the existence
of Knudsen layer solutions with a spatial decay rate, which has been overcome under
some constraint conditions and velocity-weight-loss arguments.
|
http://arxiv.org/abs/2310.02337v1
|
Chiral form fields in $d$ dimensions can be effectively described as edge
modes of topological Chern-Simons theories in $d+1$ dimensions. At the same
time, manifestly Lorentz-invariant Lagrangian description of such fields
directly in terms of a $d$-dimensional field theory is challenging and requires
introducing nontrivial auxiliary gauge fields eliminated on-shell with extra
gauge symmetries. A recent work by Arvanitakis et al.\ demonstrates
(emphasizing the case of 2d chiral bosons) that the two approaches are related,
and a peculiar reduction on the $(d+1)$-dimensional topological Lagrangian
automatically leads to $d$-dimensional Lagrangians with appropriate sets of
auxiliary fields. We develop this setup in three distinct directions. First, we
demonstrate how arbitrary Abelian self-interactions for chiral forms can be
included using nonlinear boundary terms in the Chern-Simons theory. Second, by
generalizing the Chern-Simons theory to the BF theory, we obtain an analogous
democratic description of non-chiral form fields, where electric and magnetic
potentials appear as explicit dynamical variables. Third, we discuss the
effects of introducing topological interactions in the higher-dimensional bulk,
which produce extra interaction terms in the boundary theory. When applied to a
topological 4-form field in 12 dimensions, this construction results in a
democratic description of the 3-form gauge field of the 11-dimensional
supergravity.
|
http://arxiv.org/abs/2309.04625v1
|
In this work, we investigate the potential of a gamma-ray pulsar timing array
(PTA) for detecting the gravitational wave background (GWB) using future gamma-ray detectors
with larger effective areas. We consider both spaceborne detectors and
ground-based imaging air Cherenkov telescope arrays (IACTs). We simulated the
detected photons from pulsars using the response of hypothetical detectors
taking into account the backgrounds and analyzed the sensitivities. Our results
showed that, thanks to the higher statistics of IACTs, a PTA using IACTs can
significantly improve performance compared with the PTA using Fermi-LAT
data.
|
http://arxiv.org/abs/2309.13359v1
|
The possibility of cluster emission from the trans-lead (86$\leq$Z$\leq$96)
region of the periodic chart has been explored comprehensively by employing a few
empirical formulas, which are modified by adding angular-momentum-dependent ($l$) or
isospin-dependent ($I=(N-Z)/A$) terms, or both, for the calculation of cluster
decay half-lives. These modified versions of the formulas yield smaller
${\chi}^2$ per degree of freedom and root-mean-square error, in addition to
smaller values of some other statistical parameters, when compared to their
corresponding old versions on the 61 available experimental data of cluster
radioactivity. By applying the modified version of the formula given by
Balasubramaniam \textit{et al.} [PRC 70 (2004) 017301], the most accurate
formula among these, half-lives of several clusters i.e. isotopes of Be, B, C,
N, O, F, Ne, Na, Mg, and Si are predicted systematically for the several
isotopes in the trans-lead region. The competition of cluster emission with
$\alpha$-decay has been investigated in the form of the branching ratio, which brings
several potential cluster emissions into the probable decay modes of these
nuclei. The accurate prediction of half-lives of such clusters is expected to
be crucial for the future experimental observations where $\alpha$-decay is
observed dominantly.
|
http://arxiv.org/abs/2301.00261v1
|
We present a method to precisely measure the frequencies of transitions to
high-$n$ Rydberg states of the hydrogen atom which are not subject to
uncontrolled systematic shifts caused by stray electric fields. The method
consists in recording Stark spectra of the field-insensitive $k=0$ Stark states
and the field-sensitive $k=\pm2$ Stark states, which are used to calibrate the
electric field strength. We illustrate this method with measurements of
transitions from the $2\,\text{s}(f=0\text{ and } 1)$ hyperfine levels in the
presence of intentionally applied electric fields with strengths in the range
between $0.4$ and $1.6\,$Vcm$^{-1}$. The slightly field-dependent $k=0$ level
energies are corrected with a precisely calculated shift to obtain the
corresponding Bohr energies $\left(-cR_{\mathrm{H}}/n^2\right)$. The energy
difference between $n=20$ and $n=24$ obtained with our method agrees with
Bohr's formula within the $10\,$kHz experimental uncertainty. We also
determined the hyperfine splitting of the $2\,\text{s}$ state by taking the
difference between transition frequencies from the $2\,\text{s}(f=0 \text{ and
}1)$ levels to the $n=20,k=0$ Stark states. Our results demonstrate the
possibility of carrying out precision measurements in high-$n$ hydrogenic
quantum states.
|
http://arxiv.org/abs/2309.12721v1
|
As an emerging field that aims to bridge the gap between human activities and
computing systems, human-centered computing (HCC) in cloud, edge, and fog has had a
huge impact on artificial intelligence algorithms. The quantum generative
adversarial network (QGAN) is considered to be one of the quantum machine
learning algorithms with great application prospects, which also should be
improved to conform to the human-centered paradigm. The generation process of
QGAN is relatively random and the generated model does not conform to the
human-centered concept, so it is not quite suitable for real scenarios. In
order to solve these problems, a hybrid quantum-classical conditional
generative adversarial network (QCGAN) algorithm is proposed, which is a
knowledge-driven human-computer interaction computing mode that can be
implemented in the cloud. The purposes of stabilizing the generation process and
realizing the interaction between human and computing process are achieved by
inputting artificial conditional information in the generator and
discriminator. The generator uses the parameterized quantum circuit with an
all-to-all connected topology, which facilitates the tuning of network
parameters during the training process. The discriminator uses the classical
neural network, which effectively avoids the "input bottleneck" of quantum
machine learning. Finally, the BAS training set is selected to conduct
experiment on the quantum cloud computing platform. The result shows that the
QCGAN algorithm can effectively converge to the Nash equilibrium point after
training and perform human-centered classification generation tasks.
|
http://arxiv.org/abs/2310.00246v1
|
An event-based maximum likelihood method for handling X-ray polarimetry data
is extended to include the effects of background and nonuniform sampling of the
possible position angle space. While nonuniform sampling in position angle
space generally introduces cross terms in the uncertainties of polarization
parameters that could create degeneracies, there are interesting cases that
engender no bias or parameter covariance. When including background in
Poisson-based likelihood formulation, the formula for the minimum detectable
polarization (MDP) has nearly the same form as for the case of Gaussian
statistics derived by Elsner et al. (2012) in the limiting case of an
unpolarized signal. A polarized background is also considered, which
demonstrably increases uncertainties in source polarization measurements. In
addition, a Kolmogorov-style test of the event position angle distribution is
proposed that can provide an unbinned test of models where the polarization
angle in Stokes space depends on event characteristics such as time or energy.
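For context, the Gaussian-statistics expression that the Poisson-likelihood result is said to resemble is commonly quoted (at 99% confidence) as

```latex
\[
\mathrm{MDP}_{99} \simeq \frac{4.29}{\mu\, R_S}\sqrt{\frac{R_S + R_B}{T}},
\]
```

where $\mu$ is the modulation factor, $R_S$ and $R_B$ are the source and background count rates, and $T$ is the exposure time; this standard form is recalled here only as a reminder and is not the paper's new result.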
|
http://arxiv.org/abs/2310.20196v2
|
Satellite altimetry combined with data assimilation and optimal interpolation
schemes have deeply renewed our ability to monitor sea surface dynamics.
Recently, deep learning (DL) schemes have emerged as appealing solutions to
address space-time interpolation problems. The scarcity of real altimetry
datasets, in terms of space-time coverage of the sea surface, however, impedes
the training of state-of-the-art neural schemes on real-world case-studies.
Here, we leverage both simulations of ocean dynamics and satellite altimeters
to train simulation-based neural mapping schemes for the sea surface height and
demonstrate their performance for real altimetry datasets. We analyze further
how the ocean simulation dataset used during the training phase impacts this
performance. This experimental analysis covers both the resolution from
eddy-present configurations to eddy-rich ones, forced simulations vs.
reanalyses using data assimilation and tide-free vs. tide-resolving
simulations. Our benchmarking framework focuses on a Gulf Stream region for a
realistic 5-altimeter constellation using NEMO ocean simulations and 4DVarNet
mapping schemes. All simulation-based 4DVarNets outperform the operational
observation-driven and reanalysis products, namely DUACS and GLORYS. The more
realistic the ocean simulation dataset used during the training phase, the
better the mapping. The best 4DVarNet mapping was trained from an eddy-rich and
tide-free simulation dataset. It improves the resolved longitudinal scale from
151 kilometers for DUACS and 241 kilometers for GLORYS to 98 kilometers and
reduces the root mean squared error (RMSE) by 23% and 61%. These results open
research avenues for new synergies between ocean modelling and ocean
observation using learning-based approaches.
|
http://arxiv.org/abs/2309.14350v1
|
In this paper, we first answer Chen-Zhang's problem on $p$-Bergman metric
proposed in \cite{CZ22}. Second, we prove that the off-diagonal $p$-Bergman kernel
function $K_p(z,w)$ is H\"older continuous of order $(1-\varepsilon)$ in the
second component when $p>1$ for any $\varepsilon>0$, which improves the
corresponding result of Chen-Zhang. Moreover, we prove the asymptotic behavior
of the maximizer of $p$-Bergman kernel as $p\rightarrow 1^-$. Finally, we give
a characterization of a class of holomorphic functions on $\mathbb{B}^1$ to be
$L^p$-integrable.
|
http://arxiv.org/abs/2309.04143v1
|
Trilayer graphene exhibits valley-protected gapless states when the stacking
order changes from ABC to CBA and a gate voltage is applied to outer layers.
Some of these states survive strong distortions of the trilayer. For example,
they persist when the outer layers are partially removed, yielding a system of
two trilayers of different stacking order connected by a strip of a single
graphene layer. Here we investigate how these states respond to another
perturbation, i.e., the presence of magnetic defects, which we model as
pi-vacancies. We show that the gap states hybridize with the defect states and
strongly spin-split. More importantly, it is demonstrated that by changing the
gate voltage value one can change the spin density of the gap states and the
corresponding currents at the Fermi level.
|
http://arxiv.org/abs/2309.16547v1
|
Few-shot point cloud semantic segmentation aims to train a model to quickly
adapt to new unseen classes with only a handful of support set samples.
However, the noise-free assumption in the support set can be easily violated in
many practical real-world settings. In this paper, we focus on improving the
robustness of few-shot point cloud segmentation under the detrimental influence
of noisy support sets during testing time. To this end, we first propose a
Component-level Clean Noise Separation (CCNS) representation learning to learn
discriminative feature representations that separate the clean samples of the
target classes from the noisy samples. Leveraging the well separated clean and
noisy support samples from our CCNS, we further propose a Multi-scale
Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the
support set. We conduct extensive experiments on various noise settings on two
benchmark datasets. Our results show that the combination of CCNS and MDNS
significantly improves the performance. Our code is available at
https://github.com/Pixie8888/R3DFSSeg.
|
http://arxiv.org/abs/2309.11228v1
|
Stochastic memoization is a higher-order construct of probabilistic
programming languages that is key in Bayesian nonparametrics, a modular
approach that allows us to extend models beyond their parametric limitations
and compose them in an elegant and principled manner. Stochastic memoization is
simple and useful in practice, but semantically elusive, particularly regarding
dataflow transformations. As the naive implementation resorts to the state
monad, which is not commutative, it is not clear if stochastic memoization
preserves the dataflow property -- i.e., whether we can reorder the lines of a
program without changing its semantics, provided the dataflow graph is
preserved. In this paper, we give an operational and categorical semantics to
stochastic memoization and name generation in the context of a minimal
probabilistic programming language, for a restricted class of functions. Our
contribution is a first model of stochastic memoization of constant Bernoulli
functions with a non-enumerable type, which validates data flow
transformations, bridging the gap between traditional probability theory and
higher-order probability models. Our model uses a presheaf category and a novel
probability monad on it.
|
http://arxiv.org/abs/2309.09467v2
|
Inspired by artistic practices such as beadwork and himmeli, we study the
problem of threading a single string through a set of tubes, so that pulling
the string forms a desired graph. More precisely, given a connected graph
(where edges represent tubes and vertices represent junctions where they meet),
we give a polynomial-time algorithm to find a minimum-length closed walk
(representing a threading of string) that induces a connected graph of string
at every junction. The algorithm is based on a surprising reduction to
minimum-weight perfect matching. Along the way, we give tight worst-case bounds
on the length of the optimal threading and on the maximum number of times this
threading can visit a single edge. We also give more efficient solutions to two
special cases: cubic graphs and the case when each edge can be visited at most
twice.
|
http://arxiv.org/abs/2309.10122v2
|
We perform a holographic study of the high and low temperature behaviours of
logarithmic negativity (LN) and entanglement wedge cross section (EWCS) in a
large $N$ strongly coupled thermal field theory with critical point having a
well defined gravity dual known as 1RC black hole. The critical point is
defined via $\xi \to 2$ limit where, $\xi$ is dimensionless parameter
proportional to the charge of the 1RC black hole. We show that the logarithmic
negativity in the low and high thermal limits is enhanced with increasing $\xi$. We
analytically compute the EWCS in low and high thermal limits and find an
agreement with the previously reported numerical results. We holographically
explore the correlation between two identical copies of thermal field theory
with critical point forming a thermofield double state (TFD) by computing the
thermo mutual information (TMI). TMI shows an increasing behaviour with respect
to the width of the boundary region. Further, we study the impact of an early
perturbation on the field theory by analyzing a shock wave perturbation that
grows exponentially in the dual eternal 1RC black hole and then estimate the
degradation of TMI. However, the rate of such disruption of TMI slows down as the
value of the critical parameter $\xi$ takes higher values.
|
http://arxiv.org/abs/2308.00018v3
|
We develop a general and practical framework to address the problem of the
optimal design of dynamic fee mechanisms for multiple blockchain resources. Our
framework allows us to compute policies that optimally trade off between adjusting
resource prices to handle persistent demand shifts versus being robust to local
noise in the observed block demand. In the general case with more than one
resource, our optimal policies correctly handle cross-effects (complementarity
and substitutability) in resource demands. We also show how these cross-effects
can be used to inform resource design, i.e. combining resources into bundles
that have low demand-side cross-effects can yield simpler and more efficient
price-update rules. Our framework is also practical: we demonstrate how it can
be used to refine or inform the design of heuristic fee update rules such as
EIP-1559 or EIP-4844 with two case studies. We then estimate a uni-dimensional
version of our model using real market data from the Ethereum blockchain and
empirically compare the performance of our optimal policies to EIP-1559.
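To make the kind of heuristic rule being refined concrete, here is a simplified EIP-1559-style base-fee controller for a single resource; the parameter names and the clamping are our simplifications, not the paper's notation or optimal policy.

```python
# Illustrative sketch of an EIP-1559-style base-fee update (single resource).
def update_base_fee(base_fee: float, gas_used: float, gas_target: float,
                    max_change: float = 1.0 / 8.0) -> float:
    """Multiplicative update: raise the price when blocks are fuller than the
    target, lower it when they are emptier, by at most `max_change` per block."""
    deviation = (gas_used - gas_target) / gas_target
    factor = 1.0 + max_change * max(min(deviation, 1.0), -1.0)
    return base_fee * factor

fee = 10.0  # gwei, arbitrary starting point
for used in [30e6, 30e6, 15e6, 5e6]:        # observed block demand
    fee = update_base_fee(fee, used, gas_target=15e6)
    print(round(fee, 3))
```

The optimal policies discussed above can be seen as replacing the fixed 1/8 adjustment rate with a schedule tuned to trade off tracking persistent demand shifts against robustness to block-level noise.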
|
http://arxiv.org/abs/2309.12735v1
|
The field of digital pathology has seen a proliferation of deep learning
models in recent years. Despite substantial progress, it remains rare for other
researchers and pathologists to be able to access models published in the
literature and apply them to their own images. This is due to difficulties in
both sharing and running models. To address these concerns, we introduce
WSInfer: a new, open-source software ecosystem designed to make deep learning
for pathology more streamlined and accessible. WSInfer comprises three main
elements: 1) a Python package and command line tool to efficiently apply
patch-based deep learning inference to whole slide images; 2) a QuPath
extension that provides an alternative inference engine through user-friendly
and interactive software, and 3) a model zoo, which enables pathology models
and metadata to be easily shared in a standardized form. Together, these
contributions aim to encourage wider reuse, exploration, and interrogation of
deep learning models for research purposes, by putting them into the hands of
pathologists and eliminating a need for coding experience when accessed through
QuPath. The WSInfer source code is hosted on GitHub and documentation is
available at https://wsinfer.readthedocs.io.
|
http://arxiv.org/abs/2309.04631v1
|
This paper is devoted to the development of a localized Large Language Model
(LLM) specifically for Arabic, a language imbued with unique cultural
characteristics inadequately addressed by current mainstream models.
Significant concerns emerge when addressing cultural sensitivity and local
values. To address this, the paper proposes a comprehensive solution that
includes further pre-training with Arabic texts, Supervised Fine-Tuning (SFT)
utilizing native Arabic instructions, and GPT-4 responses in Arabic, alongside
Reinforcement Learning with AI Feedback (RLAIF) employing a reward model
attuned to local culture and values. The goal is to cultivate culturally
cognizant and value-aligned Arabic LLMs capable of accommodating the diverse,
application-specific needs of Arabic-speaking communities.
Comprehensive evaluations reveal that the resulting model, dubbed `AceGPT',
sets the state-of-the-art standard for open Arabic LLMs across various
benchmarks. Codes, data, and models are available at
https://github.com/FreedomIntelligence/AceGPT.
|
http://arxiv.org/abs/2309.12053v5
|
Quantum computing is an emerging paradigm that has shown great promise in
accelerating large-scale scientific, optimization, and machine-learning
workloads. With most quantum computing solutions being offered over the cloud,
it has become imperative to protect confidential and proprietary quantum code
from being accessed by untrusted and/or adversarial agents. In response to this
challenge, we propose SPYCE, which is the first known solution to obfuscate
quantum code and output to prevent the leaking of any confidential information
over the cloud. SPYCE implements a lightweight, scalable, and effective
solution based on the unique principles of quantum computing to achieve this
task.
|
http://arxiv.org/abs/2307.16799v1
|
The success of language models, especially transformer-based architectures,
has trickled into other domains giving rise to "scientific language models"
that operate on small molecules, proteins or polymers. In chemistry, language
models contribute to accelerating the molecule discovery cycle as evidenced by
promising recent findings in early-stage drug discovery. Here, we review the
role of language models in molecular discovery, underlining their strength in
de novo drug design, property prediction and reaction chemistry. We highlight
valuable open-source software assets thus lowering the entry barrier to the
field of scientific language modeling. Last, we sketch a vision for future
molecular design that combines a chatbot interface with access to computational
chemistry tools. Our contribution serves as a valuable resource for
researchers, chemists, and AI enthusiasts interested in understanding how
language models can and will be used to accelerate chemical discovery.
|
http://arxiv.org/abs/2309.16235v1
|
The advances in virtualization technologies have sparked a growing transition
from virtual machine (VM)-based to container-based infrastructure for cloud
computing. From the resource orchestration perspective, containers' lightweight
and highly configurable nature not only enables opportunities for more
optimized strategies, but also poses greater challenges due to additional
uncertainties and a larger configuration parameter search space. Towards this
end, we propose Drone, a resource orchestration framework that adaptively
configures resource parameters to improve application performance and reduce
operational cost in the presence of cloud uncertainties. Built on Contextual
Bandit techniques, Drone is able to achieve a balance between performance and
resource cost on public clouds, and optimize performance on private clouds
where a hard resource constraint is present. We show that our algorithms can
achieve sub-linear growth in cumulative regret, a theoretically sound
convergence guarantee, and our extensive experiments show that Drone achieves
an up to 45% performance improvement and a 20% resource footprint reduction
across batch processing jobs and microservice workloads.
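As a sketch of the underlying technique (a generic LinUCB contextual bandit, not Drone's actual algorithm or reward definition): given a workload context, pick a candidate resource configuration, observe a cost-aware reward, and update the per-arm estimates.

```python
import numpy as np

class LinUCB:
    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm covariance
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm response sums

    def select(self, context: np.ndarray) -> int:
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm: int, context: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Example: 3 candidate container configurations, 4-dimensional workload context.
bandit = LinUCB(n_arms=3, dim=4)
rng = np.random.default_rng(0)
for _ in range(100):
    ctx = rng.normal(size=4)
    arm = bandit.select(ctx)
    reward = -abs(ctx @ rng.normal(size=4)) - 0.1 * arm  # toy cost signal
    bandit.update(arm, ctx, reward)
```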
|
http://arxiv.org/abs/2309.16962v1
|
Vision-language models (VLMs) have shown powerful capabilities in visual
question answering and reasoning tasks by combining visual representations with
the abstract skill set large language models (LLMs) learn during pretraining.
Vision, while the most popular modality to augment LLMs with, is only one
representation of a scene. In human-robot interaction scenarios, robot
perception requires accurate scene understanding by the robot. In this paper,
we define and demonstrate a method of aligning the embedding spaces of
different modalities (in this case, inertial measurement unit (IMU) data) to
the vision embedding space through a combination of supervised and contrastive
training, enabling the VLM to understand and reason about these additional
modalities without retraining. We opt to give the model IMU embeddings directly
over using a separate human activity recognition model that feeds directly into
the prompt to allow for any nonlinear interactions between the query, image,
and IMU signal that would be lost by mapping the IMU data to a discrete
activity label. Further, we demonstrate our methodology's efficacy through
experiments involving human activity recognition using IMU data and visual
inputs. Our results show that using multiple modalities as input improves the
VLM's scene understanding and enhances its overall performance in various
tasks, thus paving the way for more versatile and capable language models in
multi-modal contexts.
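A minimal sketch of the alignment idea under assumed shapes and a generic CLIP-style symmetric contrastive loss (the paper combines supervised and contrastive training; this is not its exact recipe): an IMU encoder is trained so that its embeddings line up with frozen vision embeddings of paired samples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IMUEncoder(nn.Module):
    def __init__(self, in_dim: int = 6, window: int = 100, embed_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                              # (B, T, 6) -> (B, T*6)
            nn.Linear(in_dim * window, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def clip_style_loss(imu_emb, vis_emb, temperature: float = 0.07):
    logits = imu_emb @ vis_emb.t() / temperature       # pairwise similarities
    targets = torch.arange(len(imu_emb))               # matched pairs on diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

encoder = IMUEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

imu = torch.randn(16, 100, 6)                          # paired IMU windows (6 axes)
vision = F.normalize(torch.randn(16, 512), dim=-1)     # frozen VLM image embeddings

loss = clip_style_loss(encoder(imu), vision)
loss.backward()
optimizer.step()
```

Once aligned, the IMU embedding can be handed to the VLM in place of (or alongside) image embeddings without retraining the language backbone, which is the point made above.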
|
http://arxiv.org/abs/2308.16493v1
|
Tuberculosis (TB) is still considered a leading cause of death and a
substantial threat to global child health. Both TB infection and disease are
curable using antibiotics. However, most children who die of TB are never
diagnosed or treated. In clinical practice, experienced physicians assess TB by
examining chest X-rays (CXR). Pediatric CXR has specific challenges compared to
adult CXR, which makes TB diagnosis in children more difficult. Computer-aided
diagnosis systems supported by Artificial Intelligence have shown performance
comparable to experienced radiologist TB readings, which could ease mass TB
screening and reduce clinical burden. We propose a multi-view deep
learning-based solution which, by following a proposed template, aims to
automatically regionalize and extract lung and mediastinal regions of interest
from pediatric CXR images where key TB findings may be present. Experimental
results have shown accurate region extraction, which can be used for further
analysis to confirm TB finding presence and severity assessment. Code publicly
available at https://github.com/dani-capellan/pTB_LungRegionExtractor.
|
http://arxiv.org/abs/2301.13786v1
|
Patents serve as valuable indicators of innovation and provide insights into
the spaces of innovation and venture formation within geographic regions. In
this study, we utilise patent data to examine the dynamics of innovation and
venture formation in the biotech sector across the United Kingdom (UK). By
analysing patents, we identify key regions that drive biotech innovation in the
UK. Our findings highlight the crucial role of biotech incubators in
facilitating knowledge exchange between scientific research and industry.
However, we observe that the incubators themselves do not significantly
contribute to the diversity of innovations which might be due to the underlying
effect of geographic proximity on the influences and impact of the patents.
These insights contribute to our understanding of the historical development
and future prospects of the biotech sector in the UK, emphasising the
importance of promoting innovation diversity and fostering inclusive enterprise
for achieving equitable economic growth.
|
http://arxiv.org/abs/2306.17547v1
|