The number of muons in an air shower is a strong indicator of the mass of the
primary particle and grows as a small power of the cosmic-ray mass $A$ set by
the $\beta$-exponent, $N_{\mu} \sim A^{1-\beta}$. This behaviour can be explained
in terms of the Heitler-Matthews model of hadronic air showers. In this paper,
we present a method for calculating $\beta$ from the Heitler-Matthews model.
The method has been successfully verified with a series of simulated events
observed by the Pierre Auger Observatory at $10^{19}$ eV. To follow real
measurements of the mass composition at this energy, the generated sample
consists of a certain fraction of events produced with p, He, N and Fe
primaries. Since hadronic interactions at the highest energies can differ from
those observed at energies reached by terrestrial accelerators, we generate a
mock data set with $\beta =0.92$ (the canonical value) and $\beta =0.96$ (a
more exotic scenario). The method can be applied to measured events to
determine the muon signal for each primary particle as well as the muon scaling
factor and the $\beta$-exponent. Determining the $\beta$-exponent can
effectively constrain the parameters that govern hadronic interactions and help
solve the so-called muon problem, where hadronic interaction models predict too
few muons relative to observed events. In this paper, we lay the foundation for
the future analysis of measured data from the Pierre Auger Observatory with a
simulation study.
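
As a worked illustration of this scaling (our arithmetic, not taken from the paper): at fixed primary energy, $N_{\mu} \sim A^{1-\beta}$ implies for iron ($A = 56$) versus proton ($A = 1$) showers

\[
  \frac{N_\mu^{\mathrm{Fe}}}{N_\mu^{\mathrm{p}}} = 56^{\,1-\beta} \approx
  \begin{cases}
    56^{0.08} \approx 1.38, & \beta = 0.92,\\
    56^{0.04} \approx 1.17, & \beta = 0.96,
  \end{cases}
\]

so even a few-percent shift in $\beta$ visibly changes the expected muon excess of heavy primaries.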
|
http://arxiv.org/abs/2308.16525v1
|
Unmanned aerial vehicles (UAVs) have become increasingly prevalent in various
domains, ranging from military operations to civilian applications. However,
the proliferation of UAVs has also given rise to concerns regarding their
potential misuse and security threats. As a result, the search and pursuit of
UAVs have become crucial tasks for law enforcement agencies and security
organizations. In this paper, we use a game theoretic approach to explore the
problem of searching for and pursuing submarines and translate the problem into
a UAV search and pursuit problem. Game theory provides a mathematical framework
for modeling and analyzing strategic interactions among multiple decision
makers. By applying game theoretic principles to the search and pursuit
problem, we aim to improve the effectiveness of UAV detection and capture
strategies. We begin by formulating the problem as a game, where the UAV
represents the evader, and the search and pursuit team represents the pursuers.
Each player's objective is to optimize their own utility while considering the
actions and strategies of the other players. By leveraging game theory, we can
gain insights into the optimal decision-making strategies for both the UAV and
the pursuers, leading to improved search and pursuit outcomes and enhanced
security in the face of UAV threats.
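
As a toy illustration of such a formulation (our own minimal example, not the paper's model), a one-shot zero-sum search game between a hiding UAV and a searching pursuer can be solved for its mixed-strategy value by linear programming:

    # Toy zero-sum search/pursuit game (illustrative only; payoffs are made up).
    # Rows: pursuer searches cell i; columns: UAV hides in cell j.
    # Payoff = probability of detection; pursuer maximizes, UAV minimizes.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[0.9, 0.1, 0.2],   # detection probabilities
                  [0.1, 0.8, 0.3],
                  [0.2, 0.2, 0.7]])

    m, n = A.shape
    # Maximize v s.t. sum_i x_i * A[i, j] >= v for all j, with x a distribution.
    # Variables: [x_1..x_m, v]; linprog minimizes, so the objective is -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - x^T A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("pursuer mixed strategy:", res.x[:m], "game value:", res.x[-1])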
|
http://arxiv.org/abs/2305.19832v1
|
In previous works, we introduced and studied certain categories called
quasi-BPS categories associated to symmetric quivers with potential,
preprojective algebras, and local surfaces. They have properties reminiscent of
BPS invariants/cohomologies in enumerative geometry; for example, they play
important roles in categorical wall-crossing formulas.
In this paper, we make the connections between quasi-BPS categories and BPS
cohomologies more precise via the cycle map for topological K-theory. We show
the existence of filtrations on topological K-theory of quasi-BPS categories
whose associated graded are isomorphic to the monodromy invariant BPS
cohomologies. Along the way, we also compute the topological K-theory of
categories of matrix factorizations in terms of the monodromy invariant
vanishing cycles (a version of this comparison was already known by work of
Blanc-Robalo-To\"en-Vezzosi), prove a Grothendieck-Riemann-Roch theorem for
matrix factorizations, and prove the compatibility between the Koszul
equivalence in K-theory and dimensional reduction in cohomology.
In a separate paper, we use the results from this paper to show that the
quasi-BPS categories of K3 surfaces recover the BPS invariants of the
corresponding local surface, which are Euler characteristics of Hilbert schemes
of points on K3 surfaces.
|
http://arxiv.org/abs/2309.08432v2
|
Feynman's diagrammatic series is a common language for a formally exact
theoretical description of systems of infinitely-many interacting quantum
particles, as well as a foundation for precision computational techniques. Here
we introduce a universal framework for efficient summation of connected or
skeleton Feynman diagrams for generic quantum many-body systems. It is based on
an explicit combinatorial construction of the sum of the integrands by dynamic
programming, at a computational cost that can be made only exponential in the
diagram order on a classical computer and potentially polynomial on a quantum
computer. We illustrate the technique by an unbiased diagrammatic Monte Carlo
calculation of the equation of state of the $2D$ $SU(N)$ Hubbard model in an
experimentally relevant regime, which has remained challenging for
state-of-the-art numerical methods.
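
The combinatorial flavor of such a construction can be conveyed by the standard subset recursion that extracts connected (cumulant-like) parts $c(S)$ from full sums $f(S)$, at a cost exponential in $|S|$; this toy sketch is ours and is not the paper's algorithm:

    # Moebius-type recursion over subsets (bitmasks): f(S) equals the sum over
    # partitions of S of products of connected parts; invert it to get c(S).
    # The cost is exponential in the order |S|, as for the dynamic programming
    # the abstract refers to (this toy version is illustrative only).

    def connected_parts(f):
        """f: dict mask -> full sum; returns dict mask -> connected part."""
        c = {}
        for mask in sorted(f, key=lambda m: bin(m).count("1")):
            lowbit = mask & -mask           # fix one element to avoid overcounting
            total = 0.0
            sub = (mask - 1) & mask         # enumerate proper submasks
            while sub:
                if sub & lowbit:            # only blocks containing the fixed element
                    total += c[sub] * f[mask ^ sub]
                sub = (sub - 1) & mask
            c[mask] = f[mask] - total
        return c

    # Example: f(S) = product of independent weights, so all connected parts
    # beyond single elements must vanish.
    n = 3
    w = [2.0, 3.0, 5.0]
    f = {}
    for mask in range(1, 1 << n):
        p = 1.0
        for i in range(n):
            if mask >> i & 1:
                p *= w[i]
        f[mask] = p
    print(connected_parts(f))  # singletons keep their weight; larger sets give 0.0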
|
http://arxiv.org/abs/2309.13774v4
|
Here we study the prediction of even and odd numbered sunspot cycles
separately, thereby taking into account the Hale cyclicity of solar magnetism.
We first show that the temporal evolution and shape of all sunspot cycles are
extremely well described by a simple parameterized mathematical expression. We
find that the parameters describing even sunspot cycles can be predicted quite
accurately using the sunspot number 41 months prior to sunspot minimum as a
precursor. We find that the parameters of the odd cycles can be best predicted
with the maximum geomagnetic aa index close to the fall equinox within a 3-year window
preceding the sunspot minimum. We use the found precursors to predict all
previous sunspot cycles and evaluate the performance with a cross-validation
methodology, which indicates that each past cycle is very accurately predicted.
For the coming sunspot cycle 25 we predict an amplitude of 171 +/- 23 and the
end of the cycle in September 2029 +/- 1.9 years. We are also able to make a
rough prediction for cycle 26 based on the predicted cycle 25. While the
uncertainty for the cycle amplitude is large, we estimate that cycle 26 will
most likely be stronger than cycle 25. These results suggest an increasing
trend in solar activity for the next decades.
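
The abstract does not reproduce the parameterized expression; a hedged sketch using a Hathaway-style shape function, a common choice for sunspot-cycle fits and purely our assumption here, would look like:

    # Hedged sketch: fit a Hathaway-style cycle shape
    #   f(t) = a * t^3 / (exp(t^2 / b^2) - c)
    # to monthly sunspot numbers of one cycle. The exact expression used in
    # the paper is not given in the abstract; this form is a common stand-in.
    import numpy as np
    from scipy.optimize import curve_fit

    def cycle_shape(t, a, b, c):
        return a * t**3 / (np.exp(t**2 / b**2) - c)

    t = np.linspace(1, 140, 140)                      # months since cycle start
    rng = np.random.default_rng(0)
    sn = cycle_shape(t, 2.2e-3, 55.0, 0.8) + rng.normal(0, 5, t.size)  # mock data

    popt, _ = curve_fit(cycle_shape, t, sn, p0=[1e-3, 60.0, 0.7])
    print("fitted (a, b, c):", popt)
    print("fitted amplitude:", cycle_shape(t, *popt).max())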
|
http://arxiv.org/abs/2309.04208v1
|
A third of greenhouse gas emissions are attributable to the food sector. A
shift in dietary habits could reduce these by half. Engaging and empowering
consumers is vital to this critical shift; yet, if we get the framing wrong, we
might cause distress or eco-anxiety, impeding initial engagement as well as
longer-term diet change. Evoking joy is a powerful yet under-explored motivator
to overcome psychological barriers and support pro-environmental attitudes.
This pictorial presents the outcomes of a one-day workshop as a series of
speculative ideas in the form of an annotated portfolio, highlighting design
qualities and interaction mechanisms that afford joy and sustainability in food
choices. Our contribution will inspire HCI researchers and designers to
reposition joy as a fundamental value in sustainability communication.
|
http://arxiv.org/abs/2309.05670v1
|
Contact electrification, or contact charging, refers to the process of static
charge accumulation after rubbing, or even simple touching, of two materials.
Despite its relevance in static electricity, various natural phenomena, and
numerous technologies, contact charging remains poorly understood. For
insulating materials, even the species of charge carrier may be unknown, and
the direction of charge-transfer lacks firm molecular-level explanation. We use
all-atom molecular dynamics simulations to investigate whether thermodynamics
can explain contact charging between insulating polymers. Building on prior
work implicating water-ions (e.g., hydronium and hydroxide) as potential charge
carriers, we predict preferred directions of charge-transfer between polymer
surfaces according to the free energy of water-ions within water droplets on
such surfaces. Broad agreement between our predictions and experimental
triboelectric series indicates that thermodynamically driven ion transfer likely
influences contact charging of polymers. Importantly, simulation analyses
reveal how specific interactions of water and water-ions proximate to the
polymer-water interface explain the observed trends. This study establishes the
relevance of thermodynamic driving forces in contact charging of insulators
with new evidence informed by molecular-level interactions. These insights have
direct implications for future mechanistic studies and applications of contact
charging involving polymeric materials.
|
http://arxiv.org/abs/2309.11605v2
|
Achieving the UN Sustainable Development Goals (SDGs) demands adequate levels
of awareness and actions to address sustainability challenges. Software systems
will play an important role in moving towards these targets. Sustainability
skills are necessary to support the development of software systems and to
provide sustainable IT-supported services for citizens. While a growing
number of academic bodies now include sustainability education in
engineering and computer science curricula, there is not yet comprehensive
research on the competencies and skills required by IT professionals to develop
such systems. This study aims to identify the industrial sustainability needs
for education and training from software engineers' perspective. We conducted
interviews and focus groups with experts from twenty-eight organisations with
an IT division from nine countries to understand their interests, goals and
achievements related to sustainability, and the skills and competencies needed
to achieve their goals. Our findings show that organisations are interested in
sustainability, both idealistically and increasingly for core business reasons.
They seek to improve the sustainability of processes and products but encounter
difficulties, like the trade-off between short-term financial profitability and
long-term sustainability goals. To fill the gaps, they have promoted in-house
training courses, collaborated with universities, and sent employees to
external training. The acquired competencies make sustainability an integral
part of software development. We conclude that educational programs should
include knowledge and skills on core sustainability concepts, system thinking,
soft skills, technical sustainability, sustainability impact and measurements,
values and ethics, standards and legal aspects, and advocacy and lobbying.
|
http://arxiv.org/abs/2305.00436v2
|
Critical nodes in networks are extremely vulnerable to malicious attacks to
trigger negative cascading events such as the spread of misinformation and
diseases. Therefore, effective moderation of critical nodes is vital for
mitigating the potential damages caused by such malicious diffusions. The
current moderation methods are computationally expensive. Furthermore, they
disregard the fundamental metric of information centrality, which measures the
dissemination power of nodes.
We investigate the problem of removing $k$ edges from a network to minimize
the information centrality of a target node $v$ while preserving the
network's connectivity. We prove that this problem is computationally
challenging: it is NP-complete and its objective function is not supermodular.
However, we propose three approximation greedy algorithms using novel
techniques such as random walk-based Schur complement approximation and fast
sum estimation. One of our algorithms runs in nearly linear time in the number
of edges.
To complement our theoretical analysis, we conduct a comprehensive set of
experiments on synthetic and real networks with over one million nodes. Across
various settings, the experimental results illustrate the effectiveness and
efficiency of our proposed algorithms.
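
For orientation, a brute-force greedy baseline for this objective (ours; the paper's algorithms instead use random walk-based Schur complement approximation and fast sum estimation to reach near-linear time) can be written with networkx:

    # Naive greedy baseline: repeatedly remove the edge whose deletion (while
    # keeping the graph connected) most decreases the information centrality
    # of the target node. Cost is O(k * m * cost(centrality)) -- far slower
    # than the paper's algorithms, but it makes the objective concrete.
    import networkx as nx

    def greedy_minimize_centrality(G, target, k):
        G = G.copy()
        removed = []
        for _ in range(k):
            best_edge, best_score = None, float("inf")
            for e in list(G.edges()):
                G.remove_edge(*e)
                if nx.is_connected(G):                  # connectivity constraint
                    # information centrality = current-flow closeness in networkx
                    score = nx.information_centrality(G)[target]
                    if score < best_score:
                        best_edge, best_score = e, score
                G.add_edge(*e)
            if best_edge is None:
                break
            G.remove_edge(*best_edge)
            removed.append(best_edge)
        return removed

    G = nx.karate_club_graph()
    print(greedy_minimize_centrality(G, target=0, k=3))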
|
http://arxiv.org/abs/2309.06392v1
|
In this paper, we focus on an asynchronous distributed optimization problem.
In our problem, each node is endowed with a convex local cost function, and is
able to communicate with its neighbors over a directed communication network.
Furthermore, we assume that the communication channels between nodes have
limited bandwidth, and each node suffers from processing delays. We present a
distributed algorithm which combines the Alternating Direction Method of
Multipliers (ADMM) strategy with a finite time quantized averaging algorithm.
In our proposed algorithm, nodes exchange quantized messages and operate
in an asynchronous fashion. More specifically, during every iteration of our
algorithm each node (i) solves a local convex optimization problem (for one
of its primal variables), and (ii) utilizes a finite-time quantized averaging
algorithm to obtain the value of the second primal variable (since the cost
function for the second primal variable is not decomposable). We show that our
algorithm converges to the optimal solution at a rate of $O(1/k)$ (where $k$ is
the number of time steps) for the case where the local cost function of every
node is convex and not necessarily differentiable. Finally, we demonstrate the
operational advantages of our algorithm against other algorithms from the
literature.
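
A centralized toy version of the two primal updates, with simple quadratic local costs and a uniform quantizer standing in for the finite-time quantized averaging step (both our assumptions; the actual algorithm is asynchronous and fully distributed over a directed graph):

    # Toy consensus ADMM: minimize sum_i (x - a_i)^2 over a common x.
    # Step (i): each node solves its local problem in closed form.
    # Step (ii): the z-update is an average, computed here through a uniform
    # quantizer to mimic the finite-time quantized averaging of the paper.
    import numpy as np

    a = np.array([1.0, 4.0, 6.0, 9.0])       # local data, one value per node
    n, rho, delta = a.size, 1.0, 0.05         # delta: quantization step
    x = np.zeros(n); u = np.zeros(n); z = 0.0

    for k in range(100):
        # (i) closed-form argmin of (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # (ii) averaging over quantized messages (stand-in for distributed step)
        z = delta * round(np.mean(x + u) / delta)
        u = u + x - z

    print("consensus value:", z, " optimum:", a.mean())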
|
http://arxiv.org/abs/2309.04585v1
|
This report presents the results of the shared tasks organized as part of the
VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on
Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects
(VarDial), co-located with EACL 2023. Three separate shared tasks were included
this year: Slot and intent detection for low-resource language varieties
(SID4LR), Discriminating Between Similar Languages -- True Labels (DSL-TL), and
Discriminating Between Similar Languages -- Speech (DSL-S). All three tasks
were organized for the first time this year.
|
http://arxiv.org/abs/2305.20080v1
|
Dielectric barrier discharge (DBD) plasma actuators can generate a wall jet
without moving parts by interacting with ionized and neutral molecules in an
electric field. The coupling between electrohydrodynamic (EHD), turbulence,
inertial and viscous effects in the flow boundary layer remains poorly
understood and requires investigation. We present an experimental investigation
of momentum injection by DBD actuators into the free stream flow with Re =
35,000 and 75,000 in co-flow and counter-flow scenarios over a range of VAC =
12 kV - 19.5 kV peak-to-peak at a frequency of 2 kHz. In the co-flow
configuration, the DBD actuator injects momentum into the boundary layer,
thinning it, while in the counter-flow configuration, flow separation can
occur. For the tested conditions, a separation bubble is observed at Re =
35,000. The momentum
displacement in the counter-flow configuration is six times greater than the
EHD jet momentum in a quiescent environment. Both co-flow and counter-flow
momentum injections show diminishing effects with increasing external
velocities. This work highlights that the resulting flow pattern is not a
simple superposition of the EHD jet and the free stream but is determined by
the coupling of inertial, viscous, and Coulombic effects in the EHD-driven wall
jet and the external flow. The velocity profiles and momentum measurements
presented here can be used to validate numerical models and inform the design
of DBD actuators for active flow control.
|
http://arxiv.org/abs/2304.00079v1
|
Valley magnetic moments play a crucial role in valleytronics in 2D hexagonal
materials. Traditionally, based on studies of quantum states in homogeneous
bulks, it is widely believed that only materials with broken structural
inversion symmetry can exhibit nonvanishing valley magnetic moments. Such a
constraint excludes from relevant applications those with inversion symmetry,
as specifically exemplified by gapless monolayer graphene despite its
technological advantage in routine growth and production. This work revisits
valley-derived magnetic moments in a broad context covering inhomogeneous
structures as well. It generalizes the notion of valley magnetic moment for a
state from an integrated total quantity to the local field called "local valley
magnetic moment" with space-varying distribution. In suitable
inversion-symmetric structures with inhomogeneity, e.g., zigzag nanoribbons of
gapless monolayer graphene, it is shown that the local moment of a state can be
nonvanishing with sizable magnitude, while the corresponding total moment is
subject to the broken symmetry constraint. Moreover, it is demonstrated that
such local moment can interact with space-dependent electric and magnetic
fields manifesting pronounced field effects and making possible a local valley
control with external fields. Overall, a path to "local valleytronics" is
illustrated which exploits local valley magnetic moments for device
applications, relaxes the broken symmetry constraint on materials, and expands
flexibility in the implementation of valleytronics.
|
http://arxiv.org/abs/2309.00091v1
|
In the current study, we investigate a scalar field cosmological model with
Lyra's geometry to explain the present cosmic expansion in a homogeneous and
isotropic flat FRW universe. In Einstein's field equations, we presupposed a
variable displacement vector as an element of Lyra's geometry. In the context
of the conventional theory of gravity, we suggest a suitable parameterization
of the scalar field's dark energy density as a hybrid function of redshift
$z$, confirming the essential transition behavior of the universe from a
decelerating era to the present accelerated scenario. We present constraints on
model parameters using the most recent observational data sets from OHD,
BAO/CMB, and Pantheon, taking Markov Chain Monte Carlo (MCMC) analysis into
account. For the proposed model, the best estimated values of parameters for
the combined dataset (OHD, BAO/CMB, and Pantheon) are $ H_0 = 71.15\pm 0.26$
km/s/Mpc, $ \Omega_{m0}=0.2625\pm 0.0024$, $ \Omega_{\phi0} = 0.676\pm0.038$, $
\alpha=-0.22\pm0.13$, $n = 0.096\pm0.079$, and $k = 0.38\pm0.32$. The model
exhibits a flipping nature, and the redshift transition occurs at $z_t =
0.756^{+0.005}_{-0.015}$. The current value of the deceleration parameter for
the proposed model is calculated as $q_0 = -0.625^{+0.067}_{-0.085}$ for the
combined dataset. Some dynamical properties of the model like energy density
($\rho_{\phi}$), scalar field pressure ($p_{\phi}$), EoS parameter of scalar
field ($\omega_{\phi}$), and effective EoS parameter ($\omega_{eff}$) are
analyzed and presented. Further, we have also examined the statefinder
diagnosis and jerk parameters of the derived model. The total density parameter
for the derived model is found to be unity, which is in good agreement with
recent standard findings.
|
http://arxiv.org/abs/2309.10282v2
|
We report on the existence of exceptional points (EPs) in single-resonance
autoionization and provide analytical expressions for their positions in
parameter space, in terms of the Fano asymmetry parameter. We additionally
propose a reliable method for the experimental determination of EPs, based
solely on information about their ionization probability as a function of the
system parameters. The links between EPs, the maxima of the asymmetric profile
and the effective decay rate of the ground state are investigated in detail.
Quantitative numerical examples pertaining to the doubly excited $2s2p({}^1P)$
state of Helium confirm the validity of our formulation and results. In
addition to unveiling hidden aspects of autoionization, our treatment and
results provide a benchmark for the exploration of EPs and their properties in
a variety of materials exhibiting Fano profiles with a broad perspective of
possible applications.
|
http://arxiv.org/abs/2305.19615v2
|
The pursuit of understanding the mysteries surrounding dark energy has
sparked significant interest within the field of cosmology. While conventional
approaches, such as the cosmological constant, have been extensively explored,
alternative theories incorporating scalar field-based models and modified
gravity have emerged as intriguing avenues. Among these, teleparallel theories
of gravity, specifically the $f(T,\phi)$ formulation, have gained prominence as
a means to comprehend dark energy within the framework of teleparallelism. In
this study, we investigate two well-studied models of teleparallel dark energy
and examine the presence of cosmological singularities within these scenarios.
Using the Goriely-Hyde procedure, we examine the dynamical systems governing
the cosmological equations of these models. Our analysis reveals that both
models exhibit Type IV singularities, but only for a limited range of initial
conditions. These results could indicate a potential edge for teleparallel
cosmological models over other modified gravity counterparts, as the
models we examine appear to allow only weak singularities, and even then only
under non-generic conditions.
|
http://arxiv.org/abs/2310.20222v2
|
We apply the thermal (imaginary time) perturbative expansion to the relevant
effective field theory to compute characteristics of the phase transition to
the ordered state which can occur at low temperatures in the gas of
(nonrelativistic) spin 1/2 fermions interacting through a short-range
spin-independent repulsive binary interaction potential. We show how to obtain a
systematic expansion of the system's free energy depending on the densities
$n_+$ and $n_-$ of spin-up and spin-down fermions. In this paper we truncate
this expansion at the second order and determine, by numerically minimizing the
free energy, the equilibrium proportions of $n_+$ and $n_-$ (that is, the
system's polarization) as functions of the temperature, the system's overall
density $n = n_+ + n_-$ and the strength of the interaction.
|
http://arxiv.org/abs/2309.14782v1
|
Many problems in science and technology require finding global minima or
maxima of various objective functions. The functions are typically
high-dimensional; each function evaluation may entail a significant
computational cost. The importance of global optimization has inspired
development of numerous heuristic algorithms based on analogies with physical,
chemical or biological systems. Here we present a novel algorithm, SmartRunner,
which employs a Bayesian probabilistic model informed by the history of
accepted and rejected moves to make a decision about the next random trial.
Thus, SmartRunner intelligently adapts its search strategy to a given objective
function and moveset, with the goal of maximizing fitness gain (or energy loss)
per function evaluation. Our approach can be viewed as adding a simple adaptive
penalty to the original objective function, with SmartRunner performing hill
ascent or descent on the modified landscape. This penalty can be added to many
other global optimization algorithms. We explored SmartRunner's performance on
a standard set of test functions, finding that it compares favorably against
several widely-used alternatives: simulated annealing, stochastic hill
climbing, an evolutionary algorithm, and tabu search. Interestingly, adding the
adaptive penalty to the first three of these algorithms considerably enhances
their performance. We have also employed SmartRunner to study the
Sherrington-Kirkpatrick (SK) spin glass model and Kauffman's NK fitness model -
two NP-hard problems characterized by numerous local optima. In systems with
quenched disorder, SmartRunner performs well compared to the other global
optimizers. Moreover, in finite SK systems it finds close-to-optimal
ground-state energies averaged over disorder.
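
The adaptive-penalty idea can be caricatured as follows; note that SmartRunner's actual penalty comes from a Bayesian model of accepted and rejected moves, which this sketch replaces with a simple visit-count rule of our own:

    # Hill ascent on a penalized landscape: F'(s) = F(s) - penalty(s), where
    # the penalty grows with how often the current state has been revisited.
    # SmartRunner's real penalty is derived from a Bayesian model of the move
    # history; the visit-count rule below is only a simple stand-in.
    import random

    def smartrunner_like(F, neighbors, s0, steps=10000):
        visits = {}                     # crude move-history statistics
        s, best = s0, s0
        for _ in range(steps):
            t = random.choice(neighbors(s))
            penalty = 0.1 * visits.get(s, 0)      # adaptive penalty at state s
            if F(t) - (F(s) - penalty) > 0:       # ascend on modified landscape
                s = t
            visits[s] = visits.get(s, 0) + 1
            if F(s) > F(best):
                best = s
        return best

    # Example: rugged 1-D fitness with many local bumps on integers 0..999.
    F = lambda s: -((s - 700) ** 2) / 1000 + 10 * ((s * 2654435761) % 7 == 0)
    neighbors = lambda s: [max(0, s - 1), min(999, s + 1)]
    print(smartrunner_like(F, neighbors, s0=0))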
|
http://arxiv.org/abs/2309.04591v1
|
Traditional CNN models are trained and tested on relatively low resolution
images (<300 px), and cannot directly operate on large-scale images due to
compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an
effective learning strategy that allows training existing CNN architectures
on large-scale images in an end-to-end manner. PatchGD is based on the
hypothesis that instead of performing gradient-based updates on an entire image
at once, it should be possible to achieve a good solution by performing model
updates on only small parts of the image at a time, ensuring that the majority
of it is covered over the course of iterations. PatchGD thus enjoys
substantially better memory and compute efficiency when training models on
large-scale images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST
with ResNet50 and MobileNetV2 models under different memory constraints. Our
evaluation clearly shows that PatchGD is much more stable and efficient than
the standard gradient-descent method in handling large images, and especially
when the compute memory is limited.
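
A condensed view of the core loop (a hedged PyTorch-style sketch; the published method additionally maintains a grid of patch embeddings that the classification head consumes, omitted here for brevity):

    # Minimal PatchGD-flavoured step: instead of one gradient step on the full
    # image, accumulate gradients from a few random patches per optimizer step,
    # so only a patch (not the whole large image) needs GPU memory at once.
    import torch

    def patchgd_step(model, loss_fn, optimizer, image, label, patch=512, k=4):
        _, _, H, W = image.shape                 # assumes H, W >= patch
        optimizer.zero_grad()
        for _ in range(k):                       # cover part of the image per step
            y = torch.randint(0, H - patch + 1, (1,)).item()
            x = torch.randint(0, W - patch + 1, (1,)).item()
            crop = image[:, :, y:y + patch, x:x + patch]
            loss = loss_fn(model(crop), label) / k
            loss.backward()                      # gradients accumulate over patches
        optimizer.step()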
|
http://arxiv.org/abs/2301.13817v1
|
The problem of generating microstructures of complex materials in silico has
been approached from various directions including simulation, Markov, deep
learning and descriptor-based approaches. This work presents a hybrid method
that is inspired by all four categories and has interesting scalability
properties. A neural cellular automaton is trained to evolve microstructures
based on local information. Unlike most machine learning-based approaches, it
does not directly require a data set of reference micrographs, but is trained
from statistical microstructure descriptors that can stem from a single
reference. This means that the training cost scales only with the complexity of
the structure and associated descriptors. Since the size of the reconstructed
structures can be set during inference, even extremely large structures can be
efficiently generated. Similarly, the method is very efficient if many
structures are to be reconstructed from the same descriptor for statistical
evaluations. The method is formulated and discussed in detail by means of
various numerical experiments, demonstrating its utility and scalability.
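
The update rule of a typical neural cellular automaton looks as follows (a hedged sketch; the paper's exact architecture and descriptor-based loss are not reproduced here):

    # Typical neural cellular automaton step: each cell perceives its 3x3
    # neighborhood through a convolution and updates its state with a small
    # learned residual; descriptor-based training would backpropagate a loss
    # on statistical microstructure descriptors through many such steps.
    import torch
    import torch.nn as nn

    class NCA(nn.Module):
        def __init__(self, channels=16, hidden=64):
            super().__init__()
            self.perceive = nn.Conv2d(channels, hidden, 3, padding=1)
            self.update = nn.Conv2d(hidden, channels, 1)

        def forward(self, state, steps=32):
            for _ in range(steps):
                dx = self.update(torch.relu(self.perceive(state)))
                mask = (torch.rand_like(state[:, :1]) < 0.5).float()  # async update
                state = state + dx * mask          # local residual update
            return state

    nca = NCA()
    micro = nca(torch.randn(1, 16, 64, 64))       # evolve a 64x64 microstructure
    print(micro.shape)

Because the rule is purely local, the same trained weights can be rolled out on a grid of any size at inference, which is the scalability property highlighted above.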
|
http://arxiv.org/abs/2309.16195v1
|
We present the first implementation of Drinfeld modules fully integrated in
the SageMath ecosystem. First features will be released with SageMath 10.0.
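
A flavour of the interface, as we recall it from the SageMath 10.0 documentation (treat the exact names as assumptions and check DrinfeldModule? in a live session):

    # Sage session sketch (illustrative; verify against current SageMath docs).
    # A Drinfeld module phi over K is given by the image of T, encoded by the
    # coefficients of the Ore polynomial phi_T.
    Fq = GF(4)
    A.<T> = Fq[]                      # function ring F_q[T]
    K.<z> = Fq.extension(3)           # base field, an A-field via T |-> z
    phi = DrinfeldModule(A, [z, 1, z^2])
    phi(T)                            # the Ore polynomial phi_T
    phi.rank()                        # here: 2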
|
http://arxiv.org/abs/2305.00422v1
|
In this article, we investigate the rate at which the first Dirichlet
eigenvalue of geodesic balls decreases as the radius approaches infinity. We
prove that if the conformal infinity of an asymptotically hyperbolic Einstein
manifold is of nonnegative Yamabe type, then the two-term asymptotics of the
eigenvalues are the same as in hyperbolic space.
|
http://arxiv.org/abs/2307.16439v1
|
Lung diseases are a leading cause of child mortality in the developing world,
with India accounting for approximately half of global pneumonia deaths
(370,000) in 2016. Timely diagnosis is crucial for reducing mortality rates.
This paper introduces a low-density neural network structure to mitigate
topological challenges in deep networks. The network incorporates parameters
into a feature pyramid, enhancing data extraction and minimizing information
loss. Soft Non-Maximal Suppression optimizes regional proposals generated by
the Region Proposal Network. The study evaluates the model on chest X-ray
images, computing a confusion matrix to determine accuracy, precision,
sensitivity, and specificity. We analyze loss functions, highlighting their
trends during training. The regional proposal loss and classification loss
assess model performance during training and classification phases. This paper
analyzes lung disease detection and neural network structures.
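
For reference, the Gaussian variant of Soft Non-Maximal Suppression mentioned above follows this pattern (standard algorithm; the boxes and sigma below are illustrative, not taken from the paper):

    # Gaussian Soft-NMS: instead of discarding proposals that overlap the
    # current best box, decay their scores by exp(-IoU^2 / sigma).
    import numpy as np

    def iou(a, b):
        x1, y1 = np.maximum(a[:2], b[:2]); x2, y2 = np.minimum(a[2:], b[2:])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
        scores = scores.copy()
        keep, idx = [], list(range(len(scores)))
        while idx:
            best = max(idx, key=lambda i: scores[i])
            keep.append(best)
            idx.remove(best)
            for i in idx:
                scores[i] *= np.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
            idx = [i for i in idx if scores[i] > score_thresh]
        return keep

    boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
    print(soft_nms(boxes, np.array([0.9, 0.8, 0.7])))   # -> [0, 2, 1] after rescoring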
|
http://arxiv.org/abs/2309.06386v1
|
Hallucinations and off-target translation remain unsolved problems in MT,
especially for low-resource languages and massively multilingual models. In
this paper, we introduce two related methods to mitigate these failure cases
with a modified decoding objective, without either requiring retraining or
external models. In source-contrastive decoding, we search for a translation
that is probable given the correct input, but improbable given a random input
segment. In language-contrastive decoding, we search for a translation that is
probable, but improbable given the wrong language indicator token. Experiments
on the massively multilingual models M2M-100 (418M) and SMaLL-100 show that
these methods suppress hallucinations and off-target translations, reducing the
number of translations with segment-level chrF2 below 10 by 67-83% on average,
and the number of translations with oscillatory hallucinations by 75-92% on
average, across 57 tested translation directions. In a proof of concept on
out-of-English translation, we also show that we can suppress off-target
translations with large language models. We release our source code at
https://github.com/ZurichNLP/ContraDecode.
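
Schematically, the modified objective scores candidate tokens as follows (a sketch; the logprob stub is a placeholder of ours, not an API of the M2M-100 or SMaLL-100 wrappers, and the weight is illustrative):

    # Source-contrastive decoding, schematically: prefer tokens that are
    # probable given the true source x but improbable given a random source.
    import math, random

    def logprob(model, src, prefix, tok):
        """Placeholder for a seq2seq token log-probability; swap in a real
        scoring call here. Not an actual M2M-100/SMaLL-100 API."""
        random.seed(hash((src, prefix, tok)))
        return math.log(random.random())

    def contrastive_score(model, x, x_rand, prefix, tok, lam=0.5):
        # probable given the correct input, improbable given a random input
        return (logprob(model, x, prefix, tok)
                - lam * logprob(model, x_rand, prefix, tok))

    print(contrastive_score(None, "ein Haus", "random segment", ("a",), "house"))

The language-contrastive variant replaces the random source with the same source paired with a wrong target-language indicator token, penalizing off-target continuations.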
|
http://arxiv.org/abs/2309.07098v2
|
This paper is the first to propose an allelopathic phytoplankton competition
ODE model influenced by a fear effect based on natural biological phenomena. It
is shown that the interplay of this fear effect and the allelopathic term cause
rich dynamics in the proposed competition model, such as global stability,
transcritical bifurcation, pitchfork bifurcation, and saddle-node bifurcation.
We also consider the spatially explicit version of the model and prove
analogous results. Numerical simulations verify the feasibility of the
theoretical analysis. The results demonstrate that the primary cause of the
extinction of non-toxic species is fear of the toxic species rather than the
toxins themselves. Allelopathy only affects the density of non-toxic species. The
discussion provides guidance for the conservation of species and the
maintenance of biodiversity.
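
One plausible concrete form of such a system, assumed here for illustration since the abstract does not state the equations (fear scaling the non-toxic species' growth, allelopathy entering quadratically), can be integrated numerically:

    # Illustrative allelopathic competition ODE with a fear effect (assumed
    # form, not the paper's exact system): u = non-toxic, v = toxic species.
    # The factor 1/(1 + f*v) suppresses u's growth; -g*u*v^2 is allelopathy.
    import numpy as np
    from scipy.integrate import solve_ivp

    r1, r2, a11, a12, a21, a22, f, g = 1.0, 0.8, 1.0, 0.5, 0.4, 1.0, 2.0, 0.3

    def rhs(t, y):
        u, v = y
        du = u * (r1 / (1 + f * v) - a11 * u - a12 * v) - g * u * v**2
        dv = v * (r2 - a21 * u - a22 * v)
        return [du, dv]

    sol = solve_ivp(rhs, [0, 200], [0.5, 0.5])
    print("final densities (u, v):", sol.y[:, -1])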
|
http://arxiv.org/abs/2309.08383v1
|
Local minimizers of integral functionals of the calculus of variations are
analyzed under growth conditions dictated by different lower and upper bounds
for the integrand. Growths not necessarily of power type are allowed. The local
boundedness of the relevant minimizers is established under a suitable balance
between the lower and the upper bounds. Classical minimizers, as well as
quasi-minimizers are included in our discussion. Functionals subject to
so-called $p,q$-growth conditions are embraced as special cases and the
corresponding sharp results available in the literature are recovered.
|
http://arxiv.org/abs/2309.16803v2
|
We introduce PyQBench, an innovative open-source framework for benchmarking
gate-based quantum computers. PyQBench can benchmark NISQ devices by verifying
their capability of discriminating between two von Neumann measurements.
PyQBench offers a simplified, ready-to-use, command line interface (CLI) for
running benchmarks using a predefined parametrized Fourier family of
measurements. For more advanced scenarios, PyQBench offers a way of employing
user-defined measurements instead of predefined ones.
|
http://arxiv.org/abs/2304.00045v1
|
The concept of the Metaverse aims to bring a fully-fledged extended reality
environment to provide next generation applications and services. Development
of the Metaverse is backed by many technologies, including, 5G, artificial
intelligence, edge computing and extended reality. The advent of 6G is
envisaged to mark a significant milestone in the development of the Metaverse,
facilitating near-zero-latency, a plethora of new services and upgraded
real-world infrastructure. This paper establishes the advantages of providing
the Metaverse services over 6G along with an overview of the demanded technical
requirements. The paper provides an insight to the concepts of the Metaverse
and the envisaged technical capabilities of 6G mobile networks. Then, the
technical aspects covering 6G for the development of the Metaverse, ranging
from validating digital assets, interoperability, and efficient user
interaction in the Metaverse to related security and privacy aspects are
elaborated. Subsequently, the role of 6G technologies towards enabling the
Metaverse, including artificial intelligence, blockchain, open radio access
networks, edge computing, cloudification and internet of everything. The paper
also presents 6G integration challenges and outlines ongoing projects towards
developing the Metaverse technologies to facilitate the Metaverse applications
and services.
|
http://arxiv.org/abs/2301.03386v1
|
We study the problem of discovering joinable datasets at scale. We approach
the problem from a learning perspective relying on profiles. These are succinct
representations that capture the underlying characteristics of the schemata and
data values of datasets, which can be efficiently extracted in a distributed
and parallel fashion. Profiles are then compared, to predict the quality of a
join operation among a pair of attributes from different datasets. In contrast
to the state-of-the-art, we define a novel notion of join quality that relies
on a metric considering both the containment and cardinality proportion between
join candidate attributes. We implement our approach in a system called
NextiaJD, and present experiments to show the predictive performance and
computational efficiency of our method. Our experiments show that NextiaJD
obtains greater predictive performance than that of hash-based methods while
scaling up to larger volumes of data.
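
The two ingredients of this quality notion can be made concrete as follows (the min() aggregation is our illustrative stand-in; the abstract does not specify how NextiaJD combines them):

    # Join-quality ingredients per the abstract: containment of one attribute
    # in the other, and the cardinality proportion of the two attributes.
    def join_quality(a_values, b_values):
        A, B = set(a_values), set(b_values)
        containment = len(A & B) / len(A)                 # fraction of A found in B
        cardinality_proportion = min(len(A), len(B)) / max(len(A), len(B))
        return min(containment, cardinality_proportion)   # illustrative combination

    print(join_quality(["de", "fr", "it"], ["de", "fr", "it", "es", "pt"]))  # 0.6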
|
http://arxiv.org/abs/2305.19629v1
|
Deep learning has achieved remarkable success in the field of bearing fault
diagnosis. However, this success comes with larger models and more complex
computations, which cannot be transferred to industrial settings requiring
models of high speed, strong portability, and low power consumption. In
this paper, we propose a lightweight and deployable model for bearing fault
diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly,
aided by a well-trained large model, we train BearingPGA-Net via decoupled
knowledge distillation. Despite its small size, our model demonstrates
excellent fault diagnosis performance compared to other lightweight
state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for
BearingPGA-Net using Verilog. This scheme involves customized quantization
and the design of programmable logic gates for each layer of BearingPGA-Net on the
FPGA, with an emphasis on parallel computing and module reuse to enhance the
computational speed. To the best of our knowledge, this is the first instance
of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental
results reveal that our deployment scheme achieves over 200 times faster
diagnosis speed compared to CPU, while achieving a lower-than-0.4\% performance
drop in terms of F1, Recall, and Precision score on our independently-collected
bearing dataset. Our code is available at
\url{https://github.com/asdvfghg/BearingPGA-Net}.
|
http://arxiv.org/abs/2307.16363v1
|
DANSS is a solid state scintillator neutrino spectrometer placed at a small
distance from the commercial nuclear reactor of Kalininskaya NPP. The distance
from the detector to the center of the reactor core can be changed online in
the range 10.9-12.9 m. This fact together with a very high neutrino counting
rate (more than 5000 events per day) and low background makes DANSS an ideal
detector to search for neutrino oscillations in the $\Delta m^2 \sim 1~\mathrm{eV}^2$ range. We
report the results based on the statistics of 6 million events, obtained
between April 2016 and March 2022. The results include limits in the short
range oscillation parameter space, fuel evolution studies and the bump in the
neutrino spectrum. The talk will also cover our plans of the detector upgrade.
|
http://arxiv.org/abs/2305.07417v1
|
Existing exploration algorithms mainly generate frontiers using random
sampling or motion primitive methods within a specific sensor range or search
space. However, frontiers generated within constrained spaces lead to
back-and-forth maneuvers in large-scale environments, thereby diminishing
exploration efficiency. To address this issue, we propose a method that
utilizes a 3D dense map to generate Segmented Exploration Regions (SERs) and
generate frontiers from a global-scale perspective. In particular, this paper
presents a novel topological map generation approach that fully utilizes
Line-of-Sight (LOS) features of LiDAR sensor points to enhance exploration
efficiency inside large-scale subterranean environments. Our topological map
contains the contributions of keyframes that generate each SER, enabling rapid
exploration through a switch between local path planning and global path
planning to each frontier. The proposed method achieved higher explored volume
generation than the state-of-the-art algorithm in a large-scale simulation
environment and demonstrated a 62% improvement in explored volume increment
performance. For validation, we conducted field tests using UAVs in real
subterranean environments, demonstrating the efficiency and speed of our
method.
|
http://arxiv.org/abs/2309.08397v1
|
Fault-tolerant quantum computation with bosonic qubits often necessitates the
use of noisy discrete-variable ancillae. In this work, we establish a
comprehensive and practical fault-tolerance framework for such a hybrid system
and synthesize it with fault-tolerant protocols by combining bosonic quantum
error correction (QEC) and advanced quantum control techniques. We introduce
essential building blocks of error-corrected gadgets by leveraging
ancilla-assisted bosonic operations using a generalized variant of
path-independent quantum control (GPI). Using these building blocks, we
construct a universal set of error-corrected gadgets that tolerate a single
photon loss and an arbitrary ancilla fault for four-legged cat qubits. Notably,
our construction only requires dispersive coupling between bosonic modes and
ancillae, as well as beam-splitter coupling between bosonic modes, both of
which have been experimentally demonstrated with strong strengths and high
accuracy. Moreover, each error-corrected bosonic qubit comprises only a
single bosonic mode and a three-level ancilla, featuring the hardware
efficiency of bosonic QEC in the full fault-tolerant setting. We numerically
demonstrate the feasibility of our schemes using current experimental
parameters in the circuit-QED platform. Finally, we present a
hardware-efficient architecture for fault-tolerant quantum computing by
concatenating the four-legged cat qubits with an outer qubit code utilizing
only beam-splitter couplings. Our estimates suggest that the overall noise
threshold can be reached using existing hardware. These developed
fault-tolerant schemes extend beyond their applicability to four-legged cat
qubits and can be adapted for other rotation-symmetrical codes, offering a
promising avenue toward scalable and robust quantum computation with bosonic
qubits.
|
http://arxiv.org/abs/2310.20578v1
|
Allowing organizations to share their data for training of machine learning
(ML) models without unintended information leakage is an open problem in
practice. A promising technique for this problem is to train models
on the encoded data. Our approach, called Privately Encoded Open Datasets with
Public Labels (PEOPL), uses a certain class of randomly constructed transforms
to encode sensitive data. Organizations publish their randomly encoded data and
associated raw labels for ML training, where training is done without knowledge
of the encoding realization. We investigate several important aspects of this
problem: We introduce information-theoretic scores for privacy and utility,
which quantify the average performance of an unfaithful user (e.g., adversary)
and a faithful user (e.g., model developer) that have access to the published
encoded data. We then theoretically characterize primitives in building
families of encoding schemes that motivate the use of random deep neural
networks. Empirically, we compare the performance of our randomized encoding
scheme and a linear scheme to a suite of computational attacks, and we also
show that our scheme achieves competitive prediction accuracy to raw-sample
baselines. Moreover, we demonstrate that multiple institutions, using
independent random encoders, can collaborate to train improved ML models.
|
http://arxiv.org/abs/2304.00047v1
|
Background: Test-case quality has always been one of the major concerns in
software testing. To improve test-case quality, it is important to better
understand how practitioners perceive the quality of test-cases. Objective:
Motivated by that need, we investigated how practitioners define test-case
quality and which aspects of test-cases are important for quality assessment.
Method: We conducted semi-structured interviews with professional developers,
testers and test architects from a multinational software company in Sweden.
Before the interviews, we asked participants for actual test cases (written in
natural language) that they perceive as good, normal, and bad respectively
together with rationales for their assessment. We also compared their opinions
on shared test cases and contrasted their views with the relevant literature.
Results: We present a quality model which consists of 11 test-case quality
attributes. We also identify a misalignment in defining test-case quality among
practitioners and between academia and industry, along with suggestions for
improving test-case quality in industry. Conclusion: The results show that
practitioners' background, including roles and working experience, are critical
dimensions of how test-case quality is defined and assessed.
|
http://arxiv.org/abs/2309.16801v1
|
There has been a growing realisation that school science curricula do not
adequately reflect the revolutionary changes in our scientific understanding of
the 20th century. This discrepancy between current school education and our
modern scientific understanding has led to calls for the modernisation of the
science curriculum. Although there have been attempts to introduce topics of
Einsteinian physics (i.e., quantum physics and relativity) to school education,
often at the secondary level, we still lack a seamless curriculum in which
modern science concepts are gradually introduced in primary and middle schools.
Guided by the Model of Educational Reconstruction and following a mixed-methods
research design, the Einstein-First project aims to address this gap.
Einstein-First has developed and implemented an Einsteinian curriculum from
Years 3 to 10 (students aged 7-16) that resolves the disconnect between
science in schools and the modern world. This paper presents the concepts,
rationale, and learning outcomes of the curriculum implementation in six
Australian schools with 315 students across Years 3 to 10. Our findings lay the
foundation for informed curriculum development towards a school education that
can enhance students' understanding and appreciation of the fundamental
concepts of modern science and its impact on our society.
|
http://arxiv.org/abs/2306.17342v2
|
Precise and timely fault diagnosis is a prerequisite for a distribution
system to ensure minimum downtime and maintain reliable operation. This
necessitates access to a comprehensive procedure that can provide the grid
operators with insightful information in the case of a fault event. In this
paper, we propose a heterogeneous multi-task learning graph neural network
(MTL-GNN) capable of detecting, locating and classifying faults in addition to
providing an estimate of the fault resistance and current. Using a graph neural
network (GNN) allows for learning the topological representation of the
distribution system as well as feature learning through a message-passing
scheme. We investigate the robustness of our proposed model using the IEEE-123
test feeder system. This work also proposes a novel GNN-based explainability
method to identify key nodes in the distribution system which then facilitates
informed sparse measurements. Numerical tests validate the performance of the
model across all tasks.
|
http://arxiv.org/abs/2309.09921v2
|
We propose a parallel (distributed) version of the spectral proper orthogonal
decomposition (SPOD) technique. The parallel SPOD algorithm distributes the
spatial dimension of the dataset preserving time. This approach is adopted to
preserve the non-distributed fast Fourier transform of the data in time,
thereby avoiding the associated bottlenecks. The parallel SPOD algorithm is
implemented in the PySPOD (https://github.com/MathEXLab/PySPOD) library and
makes use of the standard message passing interface (MPI) library, implemented
in Python via mpi4py (https://mpi4py.readthedocs.io/en/stable/). An extensive
performance evaluation of the parallel package is provided, including strong
and weak scalability analyses. The open-source library allows the analysis of
large datasets of interest across the scientific community. Here, we present
applications in fluid dynamics and geophysics that are extremely difficult (if
not impossible) to achieve without a parallel algorithm. This work opens the
path toward modal analyses of big quasi-stationary data, helping to uncover new
unexplored spatio-temporal patterns.
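
The layout reads, in schematic mpi4py form (a sketch of the distribution strategy, not the PySPOD implementation):

    # Schematic of the parallel SPOD data layout: the spatial dimension is
    # split across ranks while each rank keeps the full time axis, so the
    # FFT in time needs no communication. (Sketch only; see PySPOD for the
    # real implementation.)
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_space, n_time = 1024, 256
    local = np.random.rand(n_space // size, n_time)   # this rank's spatial slice

    local_hat = np.fft.rfft(local, axis=1)            # time FFT: purely local

    # Cross-spectral quantities at one frequency bin reduce over space:
    f = 3
    local_energy = np.vdot(local_hat[:, f], local_hat[:, f]).real
    total_energy = comm.allreduce(local_energy, op=MPI.SUM)
    if rank == 0:
        print("modal energy at bin", f, ":", total_energy)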
|
http://arxiv.org/abs/2309.11808v2
|
Music classification has been one of the most popular tasks in the field of
music information retrieval. With the development of deep learning models, the
last decade has seen impressive improvements in a wide range of classification
tasks. However, the increasing model complexity makes both training and
inference computationally expensive. In this paper, we integrate the ideas of
transfer learning and feature-based knowledge distillation and systematically
investigate using pre-trained audio embeddings as teachers to guide the
training of low-complexity student networks. By regularizing the feature space
of the student networks with the pre-trained embeddings, the knowledge in the
teacher embeddings can be transferred to the students. We use various
pre-trained audio embeddings and test the effectiveness of the method on the
tasks of musical instrument classification and music auto-tagging. Results show
that our method significantly improves the results in comparison to the
identical model trained without the teacher's knowledge. This technique can
also be combined with classical knowledge distillation approaches to further
improve the model's performance.
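
The regularization amounts to a distillation loss of roughly this shape (a PyTorch sketch; the projection head and the weighting lambda are illustrative assumptions, not the paper's exact choices):

    # Feature-space distillation: pull the student's penultimate features
    # toward frozen pre-trained teacher embeddings via a learned projection.
    import torch
    import torch.nn as nn

    class DistilledStudent(nn.Module):
        def __init__(self, student, feat_dim, teacher_dim):
            super().__init__()
            self.student = student                       # returns (features, logits)
            self.proj = nn.Linear(feat_dim, teacher_dim) # align feature dimensions

        def loss(self, x, labels, teacher_emb, lam=1.0):
            feats, logits = self.student(x)
            task = nn.functional.cross_entropy(logits, labels)
            distill = nn.functional.mse_loss(self.proj(feats), teacher_emb)
            return task + lam * distill                  # joint objective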
|
http://arxiv.org/abs/2306.17424v1
|
We analyse several aspects of detectors with uniform acceleration $a$ and
uniform rotation $\Omega$ in de Sitter ($\Lambda>0$) and anti-de Sitter
($\Lambda<0$) spacetimes, focusing particularly on the periodicity, in
(Euclidean) proper time $\tau_{\rm traj}$, of geodesic interval $\tau_{\rm
geod}$ between two events on the trajectory. For $\Lambda<0$, $\tau_{\rm geod}$
is periodic in ${\rm i} \tau_{\rm traj}$ for specific values of $a$ and
$\Omega$. These results are used to obtain numerical plots for the response
rate $\dot{\mathcal{F}}$ of Unruh-DeWitt detectors, which display non-trivial
combined effects of rotation and curvature through the dimensionless parameter
$\Lambda c^2/\Omega^2$. In particular, periodicity does not imply thermality
due to additional poles in the Wightman function away from the imaginary axis.
We then present some results for stationary rotational motion in arbitrary
curved spacetime, as a perturbative expansion in curvature.
|
http://arxiv.org/abs/2307.16413v3
|
Deep reinforcement learning (RL) is notoriously impractical to deploy due to
sample inefficiency. Meta-RL directly addresses this sample inefficiency by
learning to perform few-shot learning when a distribution of related tasks is
available for meta-training. While many specialized meta-RL methods have been
proposed, recent work suggests that end-to-end learning in conjunction with an
off-the-shelf sequential model, such as a recurrent network, is a surprisingly
strong baseline. However, such claims have been controversial due to limited
supporting evidence, particularly in the face of prior work establishing
precisely the opposite. In this paper, we conduct an empirical investigation.
While we likewise find that a recurrent network can achieve strong performance,
we demonstrate that the use of hypernetworks is crucial to maximizing their
potential. Surprisingly, when combined with hypernetworks, the recurrent
baselines that are far simpler than existing specialized methods actually
achieve the strongest performance of all methods evaluated. We provide code at
https://github.com/jacooba/hyper.
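
The hypernetwork ingredient, schematically (a PyTorch sketch under our own simplifications; the paper's exact conditioning and initialization are not reproduced):

    # Hypernetwork head for meta-RL: a recurrent trunk summarizes the task
    # from experience, and a hypernetwork maps that summary to the weights of
    # the policy's output layer (instead of feeding it in as an extra input).
    import torch
    import torch.nn as nn

    class HyperPolicy(nn.Module):
        def __init__(self, obs_dim, act_dim, hid=128):
            super().__init__()
            self.hid, self.act_dim = hid, act_dim
            self.rnn = nn.GRU(obs_dim, hid, batch_first=True)     # task summary
            self.hyper = nn.Linear(hid, hid * act_dim + act_dim)  # emits W and b
            self.body = nn.Linear(obs_dim, hid)                   # per-step features

        def forward(self, history, obs):
            _, h = self.rnn(history)                 # task embedding from history
            p = self.hyper(h[-1])
            W = p[:, :self.hid * self.act_dim].view(-1, self.act_dim, self.hid)
            b = p[:, self.hid * self.act_dim:]
            feat = torch.tanh(self.body(obs))
            return torch.bmm(W, feat.unsqueeze(-1)).squeeze(-1) + b

    policy = HyperPolicy(obs_dim=8, act_dim=4)
    logits = policy(torch.randn(2, 10, 8), torch.randn(2, 8))
    print(logits.shape)                              # (2, 4)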
|
http://arxiv.org/abs/2309.14970v4
|
Social tipping points are promising levers to achieve net-zero greenhouse gas
emission targets. They describe how social, political, economic or
technological systems can move rapidly into a new state if cascading positive
feedback mechanisms are triggered. Analysing the potential of social tipping
for rapid decarbonization requires considering the inherent complexity of
social systems. Here, we identify that existing scientific literature is
inclined to a narrative-based account of social tipping, lacks a broad
empirical framework and a multi-systems view. We subsequently outline a dynamic
systems approach that entails (i) a systems outlook involving interconnected
feedback mechanisms alongside cross-system and cross-scale interactions, and
including a socioeconomic and environmental injustice perspective, (ii) directed
data collection efforts to provide empirical evidence for and monitor social
tipping dynamics, (iii) global, integrated, descriptive modelling to project
future dynamics and provide ex-ante evidence for interventions. Research on
social tipping must be accordingly solidified for climate policy relevance.
|
http://arxiv.org/abs/2309.14964v1
|
The dissipation rates of the basic turbulent second-order moments are the key
parameters controlling turbulence energetics and spectra, turbulent fluxes of
momentum and heat, and playing a vital role in turbulence modelling. In this
paper, we use the results of Direct Numerical Simulations (DNS) to evaluate
dissipation rates of the basic turbulent second-order moments and revise the
energy and flux-budget turbulence closure model for stably stratified
turbulence. We delve into the theoretical implications of this approach and
substantiate our closure hypotheses through DNS data. We also show why the
concept of down-gradient turbulent transport becomes incomplete when applied to
the vertical turbulent flux of potential temperature under very stable
stratification. We reveal essential feedback between turbulent kinetic energy,
the vertical flux of buoyancy and turbulent potential energy, which is
responsible for maintaining shear-produced stably stratified turbulence up to
extreme static stability.
|
http://arxiv.org/abs/2309.05869v1
|
We study the election control problem with multi-votes, where each voter can
present a single vote according to different views (or layers; we use "layer" to
represent "view"). For example, according to the attributes of candidates, such
as education, hobbies or the relationships of candidates, a voter may present
different preferences for the same candidate set. Here, we consider a new model
of election control that, by assigning different rules to the votes from
different layers, makes the special candidate p the winner of the
election (a rule can be assigned to different layers). Given a set of
candidates C containing a special candidate p, a set of voters V, t layers
where each voter gives t votes over all candidates (one for each layer), and a
set of voting rules R, the task is to find an assignment of rules to the layers
under which p is acceptable for the voters (a possible winner of the election). Three models are
considered (denoted as sum-model, max-model, and min-model) to measure the
satisfaction of each voter. In this paper, we analyze the computational
complexity of finding such a rule assignment, including classical complexity
and parameterized complexity. It is interesting to find out that 1) it is
NP-hard even if there are only two voters in the sum-model, or there are only
two rules in sum-model and max-model; 2) it is intractable with the number of
layers as the parameter for all three models; 3) even if the satisfaction of each
vote is set as dichotomous, 1 or 0, it remains hard to find an acceptable
rule assignment. Furthermore, we also get some other intractable and tractable
results.
|
http://arxiv.org/abs/2306.17430v1
|
Much of explainable AI research treats explanations as a means for model
inspection. Yet, this neglects findings from human psychology that describe the
benefit of self-explanations in an agent's learning process. Motivated by this,
we introduce a novel workflow in the context of image classification, termed
Learning by Self-Explaining (LSX). LSX utilizes aspects of self-refining AI and
human-guided explanatory machine learning. The underlying idea is that a
learner model, in addition to optimizing for the original predictive task, is
further optimized based on explanatory feedback from an internal critic model.
Intuitively, a learner's explanations are considered "useful" if the internal
critic can perform the same task given these explanations. We provide an
overview of important components of LSX and, based on this, perform extensive
experimental evaluations via three different example instantiations. Our
results indicate improvements via Learning by Self-Explaining on several
levels: in terms of model generalization, reducing the influence of confounding
factors, and providing more task-relevant and faithful model explanations.
Overall, our work provides evidence for the potential of self-explaining within
the learning phase of an AI model.
|
http://arxiv.org/abs/2309.08395v3
|
An analytical solution for high supersonic flow over a circular cylinder
based on Schneider's inverse method has been presented. In the inverse method,
a shock shape is assumed and the corresponding flow field and the shape of the
body producing the shock are found by integrating the equations of motion using
the stream function. A shock shape theorised by Moeckel has been assumed and it
is optimized by minimising the error between the shape of the body obtained
using Schneider's method and the actual shape of the body. A further
improvement in the shock shape is also found by using Moeckel's shock shape
in a small series expansion. With this shock shape, the whole flow field in the
shock layer has been calculated using Schneider's method by integrating the
equations of motion. This solution is compared against a fifth order accurate
numerical solution using the discontinuous Galerkin method (DGM) and the
maximum error in density is found to be of the order of 0.001 which
demonstrates the accuracy of the method used for both plane and axisymmetric
flows.
|
http://arxiv.org/abs/2307.16407v1
|
Let $\mathcal{T}_n$ be the set of all mappings
$T:\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}$. The corresponding graph of $T$ is a
union of disjoint connected unicyclic components. We assume that each
$T\in\mathcal{T}_n$ is chosen uniformly at random (i.e., with probability
$n^{-n}$). The cycle of $T$ contained within its largest component is called
the deepest one. For any $T\in\mathcal{T}_n$, let $\nu_n=\nu_n(T)$ denote the
length of this cycle. In this paper, we establish the convergence in
distribution of $\nu_n/\sqrt{n}$ and find the limits of its expectation and
variance as $n\to\infty$. For $n$ large enough, we also show that nearly $55\%$
of all cyclic vertices of a random mapping $T\in\mathcal{T}_n$ lie in the
deepest cycle and that a vertex from the longest cycle of $T$ does not belong
to its largest component with approximate probability $0.075$.
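
The quoted $55\%$ figure is easy to probe numerically (a Monte Carlo sketch of the stated quantities):

    # Monte Carlo sketch of the deepest-cycle statistics for random mappings.
    # Cyclic vertices are exactly the image of T^m for m >= n, obtained by
    # repeated squaring; each component of the functional graph owns one cycle.
    import numpy as np

    def deepest_cycle_fraction(n, trials=200, seed=1):
        rng = np.random.default_rng(seed)
        fracs = []
        for _ in range(trials):
            T = rng.integers(0, n, n)
            f = T.copy()
            for _ in range(int(np.log2(n)) + 1):
                f = f[f]                               # f = T^(2^k); tails die off
            cyclic = np.unique(f)                      # the cyclic vertices
            parent = np.arange(n)                      # union-find over i -- T(i)
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for i in range(n):
                parent[find(i)] = find(T[i])
            labels = np.array([find(i) for i in range(n)])
            largest = np.bincount(labels).argmax()
            fracs.append(np.mean(labels[cyclic] == largest))
        return float(np.mean(fracs))

    print(deepest_cycle_fraction(2000))   # tends toward ~0.55 as n grows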
|
http://arxiv.org/abs/2301.13829v3
|
Human-Scene Interaction (HSI) is a vital component of fields like embodied AI
and virtual reality. Despite advancements in motion quality and physical
plausibility, two pivotal factors, versatile interaction control and the
development of a user-friendly interface, require further exploration before
the practical application of HSI. This paper presents a unified HSI framework,
UniHSI, which supports unified control of diverse interactions through language
commands. This framework is built upon the definition of interaction as Chain
of Contacts (CoC): steps of human joint-object part pairs, which is inspired by
the strong correlation between interaction types and human-object contact
regions. Based on the definition, UniHSI constitutes a Large Language Model
(LLM) Planner to translate language prompts into task plans in the form of CoC,
and a Unified Controller that turns CoC into uniform task execution. To
facilitate training and evaluation, we collect a new dataset named ScenePlan
that encompasses thousands of task plans generated by LLMs based on diverse
scenarios. Comprehensive experiments demonstrate the effectiveness of our
framework in versatile task execution and generalizability to real scanned
scenes. The project page is at https://github.com/OpenRobotLab/UniHSI .
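
A Chain of Contacts is, structurally, just a sequence of joint-part pairs. The snippet below is a hypothetical illustration of such a task plan (the field names are our guess, not the UniHSI schema):

```python
from dataclasses import dataclass

@dataclass
class ContactStep:
    joint: str        # human joint, e.g. "pelvis"
    object_part: str  # object part, e.g. "chair_seat"
    contact: bool     # establish (True) or release (False) the contact

# Hypothetical CoC plan for "sit on the chair":
sit_down = [
    ContactStep("pelvis", "chair_seat", True),
    ContactStep("torso", "chair_back", True),
]
print(sit_down)
```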
|
http://arxiv.org/abs/2309.07918v4
|
Deep Neural Networks (DNNs) for 3D point cloud recognition are vulnerable to
adversarial examples, threatening their practical deployment. Despite the many
research endeavors made to tackle this issue in recent years, the diversity of
adversarial examples on 3D point clouds makes them more challenging to defend
against than those on 2D images. For example, attackers can generate
adversarial examples by adding, shifting, or removing points. Consequently,
existing defense strategies struggle to counter unseen point cloud adversarial
examples. In this paper, we first establish a comprehensive and rigorous point
cloud adversarial robustness benchmark, which provides a detailed
understanding of the effects of defense and attack methods. We then collect
existing defense tricks in point cloud adversarial defenses and perform
extensive and systematic experiments to identify an effective combination of
these tricks. Furthermore, we propose a hybrid training augmentation method
that incorporates various types of point cloud adversarial examples into
adversarial training, significantly improving adversarial robustness. By
combining these tricks, we construct a more robust defense framework that
achieves an average accuracy of 83.45\% against various attacks, demonstrating
its capability to enable robust learners. Our codebase is open-sourced at:
\url{https://github.com/qiufan319/benchmark_pc_attack.git}.
|
http://arxiv.org/abs/2307.16361v2
|
The applicability of the effective models to the description of baryons and
the behaviour of ratios of strange baryons to pions is discussed. In the
framework of the EPNJL model, the Bethe - Salpeter equation is used to find
masses of baryons, which are considered as diquark-quark states. Baryon melting
is discussed at a finite chemical potential, and a flavor dependence of the
hadronic deconfinement temperature is pointed out. It is shown that the description
of the diquark-quark state at finite chemical potential is limited due to the
occurrence of the Bose condensate. This effect is strongly manifested in the
description of light diquarks and baryons. Both $\Lambda^0/\pi^+$ and
$\Xi^-/\pi^+$ ratios show a sharp behaviour as functions of the $T/\mu_B$
variable, where $T$ and $\mu_B$ are calculated along the melting lines.
|
http://arxiv.org/abs/2309.16815v1
|
Valuable insights, such as frequently visited environments in the wake of the
COVID-19 pandemic, can oftentimes only be gained by analyzing sensitive data
spread across edge-devices like smartphones. To facilitate such an analysis, we
present a toolchain called PrivAgE for a distributed, privacy-preserving
aggregation of local data by taking the limited resources of edge-devices into
account. The distributed aggregation is based on secure summation and
simultaneously satisfies the notion of differential privacy. In this way, other
parties can neither learn the sensitive data of single clients nor a single
client's influence on the final result. We perform an evaluation of the power
consumption, the running time and the bandwidth overhead on real as well as
simulated devices and demonstrate the flexibility of our toolchain by
presenting an extension of the summation of histograms to distributed
clustering.
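
The two ingredients named above can be illustrated in a few lines. The sketch below is a toy model under our own assumptions (pairwise additive masks for secure summation, plus per-client Laplace noise for differential privacy); it is not PrivAgE's actual protocol, which must handle dropouts, resource limits, and noise calibration far more carefully.

```python
import math
import random

def laplace(rng, scale):
    """Sample Laplace(0, scale) via inverse-CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def aggregate(values, epsilon, sensitivity=1.0, seed=0):
    rng = random.Random(seed)
    n = len(values)
    # Pairwise masks: client i adds m, client j adds -m; all masks cancel in
    # the sum, so the server learns only the (noisy) total, not single inputs.
    masked = list(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-1e6, 1e6)
            masked[i] += m
            masked[j] -= m
    # Each client perturbs its own report with Laplace noise so the released
    # sum satisfies differential privacy (real protocols split the noise more
    # carefully to waste less utility).
    reports = [x + laplace(rng, sensitivity / epsilon) for x in masked]
    return sum(reports)

print(aggregate([3.0, 1.0, 4.0, 1.0, 5.0], epsilon=1.0))
```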
|
http://arxiv.org/abs/2309.12483v2
|
3D object detection in point clouds is important for autonomous driving
systems. A primary challenge in 3D object detection stems from the sparse
distribution of points within the 3D scene. Existing high-performance methods
typically employ 3D sparse convolutional neural networks with small kernels to
extract features. To reduce computational costs, these methods resort to
submanifold sparse convolutions, which prevent the information exchange among
spatially disconnected features. Some recent approaches have attempted to
address this problem by introducing large-kernel convolutions or self-attention
mechanisms, but they either achieve limited accuracy improvements or incur
excessive computational costs. We propose HEDNet, a hierarchical
encoder-decoder network for 3D object detection, which leverages
encoder-decoder blocks to capture long-range dependencies among features in the
spatial space, particularly for large and distant objects. We conducted
extensive experiments on the Waymo Open and nuScenes datasets. HEDNet achieved
detection accuracy superior to previous state-of-the-art methods on both
datasets, with competitive efficiency. The code is available at
https://github.com/zhanggang001/HEDNet.
|
http://arxiv.org/abs/2310.20234v1
|
The paradigm of vertical federated learning (VFL), where institutions
collaboratively train machine learning models via combining each other's local
feature or label information, has achieved great success in applications to
financial risk management (FRM). The surging developments of graph
representation learning (GRL) have opened up new opportunities for FRM
applications under FL via efficiently utilizing the graph-structured data
generated from underlying transaction networks. Meanwhile, transaction
information is often considered highly sensitive. To prevent data leakage
during training, it is critical to develop FL protocols with formal privacy
guarantees. In this paper, we present an end-to-end GRL framework in the VFL
setting called VESPER, which is built upon a general privatization scheme
termed perturbed message passing (PMP) that allows the privatization of many
popular graph neural architectures. Based on PMP, we discuss the strengths and
weaknesses of specific design choices of concrete graph neural architectures
and provide solutions and improvements for both dense and sparse graphs.
Extensive empirical evaluations over both public datasets and an industry
dataset demonstrate that VESPER is capable of training high-performance GNN
models over both sparse and dense graphs under reasonable privacy budgets.
|
http://arxiv.org/abs/2310.20552v1
|
We report on the discovery of two potential polar ring galaxies (PRGs) in the
WALLABY Pilot Data Release 1 (PDR1). These untargeted detections,
cross-matched to NGC 4632 and NGC 6156, are some of the first galaxies where
the HI observations show two distinct components. We used the iDaVIE virtual
reality software to separate the anomalous gas from the galactic gas and find
that the anomalous gas comprises ~50% of the total HI content of both
systems. We have generated plausible 3D kinematic models for each galaxy
assuming that the rings are circular and inclined at 90 degrees to the galaxy
bodies. These models show that the data are consistent with PRGs, but do not
definitively prove that the galaxies are PRGs. By projecting these models at
different combinations of main disk inclinations, ring orientations, and
angular resolutions in mock datacubes, we have further investigated the
detectability of similar PRGs in WALLABY. Assuming that these galaxies are
indeed PRGs, the detectability fraction, combined with the size distribution of
WALLABY PDR1 galaxies, implies an incidence rate of ~ 1% - 3%. If this rate
holds true, the WALLABY survey will detect hundreds of new polar ring galaxies.
|
http://arxiv.org/abs/2309.05841v2
|
We present an entangled quantum radar protocol. It consists of scanning the
sky with a thin Gaussian beam and measuring the travel time of the radiation
reflected from the target, as in conventional radars. Here the Gaussian beam is
composed of $N$ photons entangled in the frequency degrees of freedom. We show
that this provides a $\sqrt{N}$ quantum enhancement over the unentangled case,
as is usual in quantum metrology.
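
The claimed enhancement follows the standard quantum-metrology scalings for time-of-arrival estimation with $N$ photons of effective bandwidth $\sigma_\omega$ (a textbook-level summary added for context, not a derivation from the paper):
$$\delta t_{\mathrm{unent}} \sim \frac{1}{\sigma_\omega \sqrt{N}}, \qquad \delta t_{\mathrm{ent}} \sim \frac{1}{\sigma_\omega N}, \qquad \frac{\delta t_{\mathrm{unent}}}{\delta t_{\mathrm{ent}}} \sim \sqrt{N}.$$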
|
http://arxiv.org/abs/2309.11834v1
|
A quantum register coupled to a spin-photon interface is a key component in
quantum communication and information processing. Group-IV color centers in
diamond (SiV, GeV, and SnV) are promising candidates for this application,
comprising an electronic spin with optical transitions coupled to a nuclear
spin as the quantum register. However, the creation of a quantum register for
these color centers with deterministic and strong coupling to the spin-photon
interface remains challenging. Here, we make first-principles predictions of
the hyperfine parameters of the group-IV color centers, which we verify
experimentally with a comprehensive comparison between the spectra of spin
active and spin neutral intrinsic dopant nuclei in single GeV and SnV emitters.
In line with the theoretical predictions, detailed spectroscopy on large sample
sizes reveals that hyperfine coupling causes a splitting of the optical
transition of SnV an order of magnitude larger than the optical linewidth and
provides a magnetic-field insensitive transition. This strong coupling provides
access to a new regime for quantum registers in diamond color centers, opening
avenues for novel spin-photon entanglement and quantum sensing schemes for
these well-studied emitters.
|
http://arxiv.org/abs/2306.00164v2
|
A ReLU neural network leads to a finite polyhedral decomposition of input
space and a corresponding finite dual graph. We show that while this dual graph
is a coarse quantization of input space, it is sufficiently robust that it can
be combined with persistent homology to detect homological signals of manifolds
in the input space from samples. This property holds for a variety of networks
trained for a wide range of purposes that have nothing to do with this
topological application. We found this feature to be surprising and
interesting; we hope it will also be useful.
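
A minimal sketch of the objects involved (our own toy construction, not the authors' pipeline): each input sample is mapped to the ReLU activation pattern of a small random network; distinct patterns label cells of the polyhedral decomposition, and patterns differing in few bits are candidates for adjacency in the dual graph. Running persistent homology on this coarse quantization would then use a TDA library and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

def activation_pattern(x):
    """Binary code of which ReLUs fire; constant on each polyhedral cell."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2
    return tuple((np.concatenate([h1, h2]) > 0).astype(int))

# Sample a circle (a manifold with one 1-dimensional homology class).
theta = rng.uniform(0, 2 * np.pi, size=500)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
cells = {activation_pattern(x) for x in points}
print(f"{len(cells)} distinct cells hit by 500 samples on the circle")
```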
|
http://arxiv.org/abs/2306.17418v1
|
Pancreatic cancer is a lethal form of cancer that significantly contributes
to cancer-related deaths worldwide. Early detection is essential to improve
patient prognosis and survival rates. Despite advances in medical imaging
techniques, pancreatic cancer remains a challenging disease to detect.
Endoscopic ultrasound (EUS) is the most effective diagnostic tool for detecting
pancreatic cancer. However, it requires expert interpretation of complex
ultrasound images to complete a reliable patient scan. To obtain complete
imaging of the pancreas, practitioners must learn to guide the endoscope into
multiple "EUS stations" (anatomical locations), which provide different views
of the pancreas. This is a difficult skill to learn, involving over 225
proctored procedures with the support of an experienced doctor. We build an
AI-assisted tool that utilizes deep learning techniques to identify these
stations of the stomach in real time during EUS procedures. This
computer-assisted diagnostic (CAD) tool will help train doctors more efficiently.
Historically, the challenge faced in developing such a tool has been the amount
of retrospective labeling required by trained clinicians. To solve this, we
developed an open-source user-friendly labeling web app that streamlines the
process of annotating stations during the EUS procedure with minimal effort
from the clinicians. Our research shows that using only 43 procedures and
no hyperparameter fine-tuning, we obtained a balanced accuracy of 89%, comparable
to the current state of the art. In addition, we employ Grad-CAM, a
visualization technology that provides clinicians with interpretable and
explainable visualizations.
|
http://arxiv.org/abs/2309.11820v3
|
III-Nitride micropillar structures show great promise for applications in
micro light-emitting diodes and vertical power transistors due to their
excellent scalability and outstanding electrical properties. Typically,
III-Nitride micropillars are fabricated through a top-down approach using
reactive ion etching, which leads to roughened, non-vertical sidewalls that
result in significant performance degradation. Thus, it is essential to remove this
plasma etch induced surface damage. Here, we show that potassium hydroxide
(KOH) acts as a crystallographic etchant for III-Nitride micropillars,
preferentially exposing the vertical <1-100> m-plane, and effectively removing
dry etch damage and reducing the structure diameter at up to 36.6 nm/min. Both
KOH solution temperature and concentration have a dramatic effect on this wet
etch progression. We found that a solution of 20% AZ400K (2% KOH) at 90 °C is
effective at producing smooth, highly vertical sidewalls with RMS surface
roughness as low as 2.59 nm, ideal for high-performance electronic and
optoelectronic devices.
|
http://arxiv.org/abs/2310.20546v1
|
We obtain tight lower bounds for the trace norm $\Vert \cdot \Vert_1$ of some
matrices with diagonal zero, in terms of the entry-wise $L^1$-norm (denoted by
$\Vert \cdot \Vert_{(1)}$). It is shown that on the space of nonzero real
symmetric matrices $A$ of order $n$ with diagonal zero, the minimum value of
the quantity $\frac{\Vert A\Vert_1}{\Vert A\Vert_{(1)}}$ is equal to
$\frac{2}{n}$. The answer to the analogous problem in the space of Hermitian
matrices is also obtained, being equal to $\tan(\frac{\pi}{2n})$. The
equivalent "dual" form of these results gives upper bounds for the
distance to the nearest diagonal matrix for a given symmetric or Hermitian
matrix, when the distance is computed in the spectral norm.
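
The symmetric bound is attained, for instance, by the all-ones off-diagonal matrix (a quick check we add for illustration): for $A = J_n - I_n$, the eigenvalues are $n-1$ (once) and $-1$ ($n-1$ times), so
$$\Vert A\Vert_1 = (n-1) + (n-1) = 2(n-1), \qquad \Vert A\Vert_{(1)} = n(n-1), \qquad \frac{\Vert A\Vert_1}{\Vert A\Vert_{(1)}} = \frac{2}{n}.$$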
|
http://arxiv.org/abs/2309.14958v2
|
Total Variation regularization (TV) is a seminal approach for image recovery.
TV involves the norm of the image's gradient, aggregated over all pixel
locations. Therefore, TV leads to piece-wise constant solutions, resulting in
what is known as the "staircase effect." To mitigate this effect, the Hessian
Schatten norm regularization (HSN) employs second-order derivatives,
represented by the $p$-th norm of the eigenvalues of the image Hessian, summed across
all pixels. HSN demonstrates superior structure-preserving properties compared
to TV. However, HSN solutions tend to be overly smoothed. To address this, we
introduce a non-convex shrinkage penalty applied to the Hessian's eigenvalues,
deviating from the convex $\ell_p$ norm. It is important to note that the shrinkage
penalty is not defined directly in closed form, but specified indirectly
through its proximal operation. This makes constructing a provably convergent
algorithm difficult as the singular values are also defined through a
non-linear operation. However, we were able to derive a provably convergent
algorithm using proximal operations. We prove the convergence by establishing
that the proposed regularization adheres to restricted proximal regularity. The
images recovered by this regularization were sharper than the convex
counterparts.
|
http://arxiv.org/abs/2309.04593v1
|
Accurate and precise climate projections are required for climate adaptation
and mitigation, but Earth system models still exhibit great uncertainties.
Several approaches have been developed to reduce the spread of climate
projections and feedbacks, yet those methods cannot capture the non-linear
complexity inherent in the climate system. Using a Transfer Learning approach,
we show that Machine Learning can be used to optimally leverage and merge the
knowledge gained from Earth system models simulations and historical
observations to more accurately project global surface air temperature fields
in the 21st century. We reach an uncertainty reduction of more than 50% with
respect to state-of-the-art approaches. We give evidence that our novel method
provides narrower projection uncertainty together with more accurate mean
climate projections, urgently required for climate adaptation.
|
http://arxiv.org/abs/2309.14780v4
|
What makes waveform-based deep learning so hard? Despite numerous attempts at
training convolutional neural networks (convnets) for filterbank design, they
often fail to outperform hand-crafted baselines. These baselines are linear
time-invariant systems: as such, they can be approximated by convnets with wide
receptive fields. Yet, in practice, gradient-based optimization leads to
suboptimal approximations. In our article, we approach this phenomenon from the
perspective of initialization. We present a theory of large deviations for the
energy response of FIR filterbanks with random Gaussian weights. We find that
deviations worsen for large filters and locally periodic input signals, which
are both typical for audio signal processing applications. Numerical
simulations align with our theory and suggest that the condition number of a
convolutional layer follows a logarithmic scaling law between the number and
length of the filters, which is reminiscent of discrete wavelet bases.
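
The scaling law can be probed numerically. This sketch (our own minimal experiment, not the article's) draws a random Gaussian FIR filterbank, computes its energy response $\sum_k |\hat h_k(\omega)|^2$ on a dense frequency grid, and reports the resulting condition number.

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters, filter_length, n_fft = 64, 256, 4096
h = rng.normal(size=(n_filters, filter_length))
H = np.fft.rfft(h, n=n_fft, axis=1)      # frequency responses of the filters
energy = (np.abs(H) ** 2).sum(axis=0)    # energy response over omega
# The extrema of the energy response are the frame bounds of the (circular)
# convolutional layer; its condition number is the root of their ratio.
print("condition number:", np.sqrt(energy.max() / energy.min()))
```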
|
http://arxiv.org/abs/2309.05855v4
|
Let $T$ be a complete, model complete o-minimal theory extending the theory
of real closed ordered fields and assume that $T$ is power bounded. Let $K$ be
a model of $T$ equipped with a $T$-convex valuation ring $\mathcal{O}$ and a
$T$-derivation $\partial$ such that $\partial$ is monotone, i.e., weakly
contractive with respect to the valuation induced by $\mathcal{O}$. We show
that the theory of monotone $T$-convex $T$-differential fields, i.e., the
common theory of such $K$, has a model completion, which is complete and
distal. Among the axioms of this model completion, we isolate an analogue of
henselianity that we call $T^{\partial}$-henselianity. We establish an
Ax--Kochen/Ershov theorem and further results for monotone $T$-convex
$T$-differential fields that are $T^{\partial}$-henselian.
|
http://arxiv.org/abs/2309.13951v2
|
This article solves Hume's problem of induction using a probabilistic
approach. From the probabilistic perspective, the core task of induction is to
estimate the probability of an event and judge the accuracy of the estimation.
Following this principle, the article provides a method for calculating the
confidence on a given confidence interval, and furthermore, degree of
confirmation. The law of large numbers shows that as the number of experiments
tends to infinity, for any small confidence interval, the confidence approaches
100\% in a probabilistic sense; thus, Hume's problem of induction is solved.
The foundation of this method is the existence of probability, or in other
words, the identity of physical laws. The article points out that it cannot be
guaranteed that all things possess identity, but humans only concern themselves
with things that possess identity, and identity is built on the foundation of
pragmatism. After solving Hume's problem, a novel demarcation of science is
proposed, providing science with the legitimacy of being referred to as truth.
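
For concreteness, one standard way to quantify such a confidence (a hedged illustration via the Hoeffding bound, which may differ from the article's exact formula): after $N$ independent trials with empirical frequency $\hat p_N$,
$$\Pr\big(|\hat p_N - p| \le \varepsilon\big) \;\ge\; 1 - 2e^{-2N\varepsilon^2} \;\longrightarrow\; 1 \quad (N\to\infty),$$
which indeed approaches 100\% for any fixed interval half-width $\varepsilon$, in line with the law-of-large-numbers claim above.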
|
http://arxiv.org/abs/2309.07924v1
|
Privacy-preserving crowd density analysis finds application across a wide
range of scenarios, substantially enhancing smart building operation and
management while upholding privacy expectations in various spaces. We propose a
non-speech audio-based approach for crowd analytics, leveraging a
transformer-based model. Our results demonstrate that non-speech audio alone
can be used to conduct such analysis with remarkable accuracy. To the best of
our knowledge, this is the first time non-speech audio signals have been
proposed for predicting occupancy. To accomplish this, we deployed our
sensor-based platform in the waiting room of a large hospital with IRB approval
over a period of several months to capture non-speech audio and thermal images
for the training and evaluation of our models. The proposed non-speech-based
approach outperformed the thermal camera-based model and all other baselines.
In addition to demonstrating superior performance without utilizing speech
audio, we conduct further analysis using differential privacy techniques to
provide additional privacy guarantees. Overall, our work demonstrates the
viability of employing non-speech audio data for accurate occupancy estimation,
while also ensuring the exclusion of speech-related content and providing
robust privacy protections through differential privacy guarantees.
|
http://arxiv.org/abs/2309.10280v2
|
Stabilization of a coupled system consisting of a parabolic partial
differential equation and an elliptic partial differential equation is
considered. Even in the situation when the parabolic equation is exponentially
stable on its own, the coupling between the two equations can cause instability
in the overall system. A backstepping approach is used to derive a boundary
control input that stabilizes the coupled system. The result is an explicit
expression for the stabilizing control law. The second part of the paper
involves the design of exponentially convergent observers to estimate the state
of the coupled system, given some partial boundary measurements. The
observation error system is shown to be exponentially stable, again by
employing a backstepping method. This leads to the design of observer gains in
closed-form. Finally, we address the output-feedback problem by combining the
observers with the state feedback boundary control. The theoretical results are
demonstrated with numerical simulations.
|
http://arxiv.org/abs/2309.00093v1
|
The `Main' galaxy cluster in the Abell 781 system is undergoing a significant
merger and accretion process with peripheral emission to the north and
southeastern flanks of the merging structure. Here we present a full
polarimetric study of this field, using radio interferometric data taken at 21
and 92 cm with the Westerbork Synthesis Radio Telescope, to a sensitivity
better than any 21 cm (L-band) observation to date. We detect evidence of
extended low-level emission of 1.9 mJy associated with the Main cluster at 21
cm, although this detection necessitates further follow-up by modern
instruments due to the limited resolution of the Westerbork Synthesis Radio
Telescope. Our polarimetric study indicates that, most likely, the peripheral
emission associated with this cluster is not a radio relic.
|
http://arxiv.org/abs/2309.09909v1
|
We present a review of known models and a new simple mathematical modelling
for border completion in the visual cortex V1 highlighting the striking
analogies with bicycle rear wheel motions in the plane.
|
http://arxiv.org/abs/2304.00084v1
|
We study how to verify specific frequency distributions when we observe a
stream of $N$ data items taken from a universe of $n$ distinct items. We
introduce the \emph{relative Fr\'echet distance} to compare two frequency
functions in a homogeneous manner. We consider two streaming models: insertions
only and sliding windows. We present a Tester for a certain class of functions,
which decides if $f$ is close to $g$ or if $f$ is far from $g$ with high
probability, when $f$ is given and $g$ is defined by a stream. If $f$ is
uniform we show a space $\Omega(n)$ lower bound. If $f$ decreases fast enough,
we then only use space $O(\log^2 n\cdot \log\log n)$. The analysis relies on
the Space-Saving algorithm \cite{MAE2005,Z22} and on sampling the stream.
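
For reference, the Space-Saving sketch below is a textbook rendition (ours, simplified; the cited works use an efficient min-structure): it keeps at most $k$ counters, and each estimate overshoots the true count by at most $N/k$.

```python
def space_saving(stream, k):
    """Approximate heavy hitters with at most k counters."""
    counts = {}
    for item in stream:
        if item in counts:
            counts[item] += 1
        elif len(counts) < k:
            counts[item] = 1
        else:
            # Evict the current minimum; the newcomer inherits its count.
            victim = min(counts, key=counts.get)
            counts[item] = counts.pop(victim) + 1
    return counts

print(space_saving("abracadabra", k=3))
```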
|
http://arxiv.org/abs/2309.11175v1
|
We consider the two-dimensional, $\beta$-plane, vorticity equations for an
incompressible flow, where the zonally averaged flow varies on scales much
larger than the perturbation. We prove global existence and uniqueness of the
solution to the equations on periodic settings.
|
http://arxiv.org/abs/2303.00023v2
|
We present a new pre-training strategy called M$^{3}$3D
($\underline{M}$ulti-$\underline{M}$odal $\underline{M}$asked $\underline{3D}$)
built on multi-modal masked autoencoders that can leverage 3D priors and
learned cross-modal representations in RGB-D data. We integrate two major
self-supervised learning frameworks, Masked Image Modeling (MIM) and
contrastive learning, aiming to effectively embed masked 3D priors and
modality-complementary features to enhance the correspondence between modalities. In
contrast to recent approaches which are either focusing on specific downstream
tasks or require multi-view correspondence, we show that our pre-training
strategy is ubiquitous, enabling improved representation learning that can
transfer into improved performance on various downstream tasks such as video
action recognition, video action detection, 2D semantic segmentation and depth
estimation. Experiments show that M$^{3}$3D outperforms the existing
state-of-the-art approaches on ScanNet, NYUv2, UCF-101 and OR-AR, particularly
with an improvement of +1.3\% mIoU against Mask3D on ScanNet semantic
segmentation. We further evaluate our method on low-data regime and demonstrate
its superior data efficiency compared to current state-of-the-art approaches.
|
http://arxiv.org/abs/2309.15313v1
|
Data analytics using GUI-based dataflows is an iterative process in which an
analyst makes many iterations of changes to refine the dataflow, generating a
different version at each iteration. In many cases, the result of executing a
dataflow version is equivalent to that of a previously executed version.
Identifying such equivalence between the execution results of different
dataflow versions is important for optimizing the performance of a dataflow by
reusing results from a previous run. The size of the dataflows and the
complexity of their operators often render existing equivalence verifiers (EVs)
unable to solve the problem. In this paper, we present "Veer," which
leverages the fact that two dataflow versions can be very similar except for a
few changes. The solution divides the dataflow version pair into small parts,
called windows, and verifies the equivalence within each window by using an
existing EV as a black box. We develop solutions to efficiently generate
windows and verify the equivalence within each window. Our thorough experiments
on real dataflows show that Veer is able not only to verify the equivalence of
dataflows that existing EVs cannot support but also to perform the verification
efficiently.
|
http://arxiv.org/abs/2309.13762v3
|
Global environmental change is pushing many socio-environmental systems
towards critical thresholds, where ecological systems' states are on the
precipice of tipping points and interventions are needed to navigate or avert
impending transitions. Flickering, where a system vacillates between
alternative stable states, is touted as a useful early warning signal of
irreversible transitions to undesirable ecological regimes. However, while
flickering may presage an ecological tipping point, these dynamics also pose
unique challenges for human adaptation. In this work, we link an ecological
model that can exhibit flickering to a model of human adaptation to a changing
environment. This allows us to explore the impact of flickering on the utility
of adaptive agents in a coupled socio-environmental system. We highlight the
conditions under which flickering causes wellbeing to decline
disproportionately, and explore how these dynamics impact the optimal timing of
a transformational change that partially decouples wellbeing from environmental
variability. The implications of flickering on nomadic communities in Mongolia,
artisanal fisheries, and wildfire systems are explored as possible case
studies. Flickering, driven in part by climate change and changes to governance
systems, may already be impacting communities. We argue that governance
interventions investing in adaptive capacity could blunt the negative impact of
flickering that can occur as socio-environmental systems pass through tipping
points, and therefore contribute to the sustainability of these systems.
|
http://arxiv.org/abs/2309.04578v1
|
The rapid growth of information in the field of Generative Artificial
Intelligence (AI), particularly in the subfields of Natural Language Processing
(NLP) and Machine Learning (ML), presents a significant challenge for
researchers and practitioners to keep pace with the latest developments. To
address the problem of information overload, this report by the Natural
Language Learning Group at Bielefeld University focuses on identifying the most
popular papers on arXiv, with a specific emphasis on NLP and ML. The objective
is to offer a quick guide to the most relevant and widely discussed research,
aiding both newcomers and established researchers in staying abreast of current
trends. In particular, we compile a list of the 40 most popular papers based on
normalized citation counts from the first half of 2023. We observe the
dominance of papers related to Large Language Models (LLMs) and specifically
ChatGPT during the first half of 2023, although the latter has shown signs of
declining popularity more recently. Further, NLP-related papers are
the most influential (around 60\% of top papers) even though there are twice as
many ML-related papers in our data. Core issues investigated in the most
heavily cited papers are: LLM efficiency, evaluation techniques, ethical
considerations, embodied agents, and problem-solving with LLMs. Additionally,
we examine the characteristics of top papers in comparison to others outside
the top-40 list (noting the top papers' focus on LLM-related issues and their
higher number of co-authors) and analyze the citation distributions in our
dataset, among others.
|
http://arxiv.org/abs/2308.04889v1
|
Context. In the scope of space weather forecasting, it is crucial to be able
to more reliably predict the arrival time, speed, and magnetic field
configuration of coronal mass ejections (CMEs). From the time a CME is
launched, the dominant factor influencing all of the above is the interaction
of the interplanetary CME (ICME) with the ambient plasma and interplanetary
magnetic field. Aims. Due to a generally anisotropic heliosphere, differently
oriented ICMEs may interact differently with the ambient plasma and
interplanetary magnetic field, even when the initial eruption conditions are
similar. For this, we examined the possible link between the orientation of an
ICME and its propagation in the heliosphere (up to 1 AU). Methods. We
investigated 31 CME-ICME associations in the period from 1997 to 2018. The CME
orientation in the near-Sun environment was determined using an ellipse-fitting
technique applied to single-spacecraft data from SOHO/LASCO C2 and C3
coronagraphs. In the near-Earth environment, we obtained the orientation of the
corresponding ICME using in situ plasma and magnetic field data. The shock
orientation and nonradial flows in the sheath region for differently oriented
ICMEs were investigated. In addition, we calculated the ICME transit time to
Earth and drag parameter to probe the overall drag force for differently
oriented ICMEs. The drag parameter was calculated using the reverse modeling
procedure with the drag-based model. Results. We found a significant difference
in nonradial flows for differently oriented ICMEs, whereas a significant
difference in drag for differently oriented ICMEs was not found.
|
http://arxiv.org/abs/2309.15475v1
|
Despite a growing sample of precisely measured stellar rotation periods and
ages, the strength of magnetic braking and the degree of departure from
standard (Skumanich-like) spindown have remained persistent questions,
particularly for stars more evolved than the Sun. Rotation periods can be
measured for stars older than the Sun by leveraging asteroseismology, enabling
models to be tested against a larger sample of old field stars. Because
asteroseismic measurements of rotation do not depend on starspot modulation,
they avoid potential biases introduced by the need for a stellar dynamo to
drive starspot production. Using a neural network trained on a grid of stellar
evolution models and a hierarchical model-fitting approach, we constrain the
onset of weakened magnetic braking. We find that a sample of stars with
asteroseismically-measured rotation periods and ages is consistent with models
that depart from standard spindown prior to reaching the evolutionary stage of
the Sun. We test our approach using neural networks trained on model grids
produced by separate stellar evolution codes with differing physical
assumptions and find that the choices of grid physics can influence the
inferred properties of the braking law. We identify the normalized critical
Rossby number ${\rm Ro}_{\rm crit}/{\rm Ro}_\odot = 0.91\pm0.03$ as the
threshold for the departure from standard rotational evolution. This suggests
that weakened magnetic braking poses challenges to gyrochronology for roughly
half of the main sequence lifetime of sun-like stars.
|
http://arxiv.org/abs/2309.05666v1
|
We propose a framework for thinking about eccentricity in terms of blocks. We
extend the familiar definitions of radius and center to blocks and verify that
a central block contains all central points. We classify graphs into two types
depending upon the relationship between block radius and vertex radius and
between central blocks and central vertices; from this we derive a new lower
bound on diameter in terms of the diameter of the central block. We also
identify a subgraph which respects the block structure of the original graph
and realizes the same vertex radius, and we use it to verify that cactus graphs
satisfy a conjectured bound between vertex radius and the Randić index, an
invariant from mathematical chemistry.
|
http://arxiv.org/abs/2309.11613v1
|
A system of two gravitating bodies floating around a restricted region of
strong gravitational field is investigated. We consider two concentric
spherically symmetric timelike shells spatially constrained by a perfectly
reflecting inner and outer boundary. It is shown numerically that even when the
gravitational radius of a contracting shell is larger than the radius of the
inner boundary, energy transfer occurs due to the intersection with the other
expanding shell before the contracting shell becomes a black hole, resulting
in nonlinearly stable motion. The system appears to be in a permanently stable
periodic motion due to the repetition of forward and reverse energy transfer.
The larger the specific energy of a shell, the more stable the motion is. In
addition, the motion of the null shell as the fastest limit of the timelike
shell is also investigated. Unlike the timelike shell, the motion of the two
null shells reduces to exact recurrence equations. By analyzing the recurrence
equations, we find the null shells also allow stable motions. Using the
algebraic computation of the recurrence equations, we show numerical
integration is not necessary for the nonlinear dynamics of the null shells in
confined geometry.
|
http://arxiv.org/abs/2302.14419v2
|
Reconfigurable intelligent surfaces (RIS)-assisted massive multiple-input
multiple-output (mMIMO) is a promising technology for applications in
next-generation networks. However, reflecting-only RIS provides limited
coverage compared to a simultaneously transmitting and reflecting RIS
(STAR-RIS). Hence, in this paper, we focus on the downlink achievable rate and
its optimization of a STAR-RIS-assisted mMIMO system. Contrary to previous
works on STAR-RIS, we consider mMIMO, correlated fading, and multiple user
equipments (UEs) at both sides of the RIS. In particular, we introduce an
estimation approach of the aggregated channel with the main benefit of reduced
overhead links instead of estimating the individual channels. Next, leveraging
channel hardening in mMIMO and the use-and-forget bounding technique, we obtain
an achievable rate in closed-form that only depends on statistical channel
state information (CSI). To optimize the amplitudes and phase shifts of the
STAR-RIS, we employ a projected gradient ascent method (PGAM) that
simultaneously adjusts the amplitudes and phase shifts for both energy
splitting (ES) and mode switching (MS) STAR-RIS operation protocols. By
considering large-scale fading, the proposed optimization can be performed
every several coherence intervals, which can significantly reduce overhead.
Considering that STAR-RIS has twice the number of controllable parameters
compared to conventional reflecting-only RIS, this accomplishment offers
substantial practical benefits. Simulations are carried out to verify the
analytical results, reveal the interplay of the achievable rate with
fundamental parameters, and show the superiority of STAR-RIS regarding its
achievable rate compared to its reflecting-only counterpart.
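
The projection step of the PGAM can be made concrete for the ES protocol, where each element's transmission and reflection energies satisfy $\beta_t + \beta_r = 1$. The sketch below is generic (the paper's closed-form rate objective is replaced by a toy stand-in, and all names are ours):

```python
import numpy as np

def project_es(beta):
    # Exact Euclidean projection of each (beta_t, beta_r) pair onto the
    # segment {beta_t + beta_r = 1, beta >= 0}.
    t = np.clip((beta[:, 0] - beta[:, 1] + 1.0) / 2.0, 0.0, 1.0)
    return np.stack([t, 1.0 - t], axis=1)

def pgam(grad_fn, beta0, theta0, step=1e-2, iters=500):
    beta, theta = beta0, theta0
    for _ in range(iters):
        g_beta, g_theta = grad_fn(beta, theta)
        beta = project_es(beta + step * g_beta)            # ascend, project
        theta = np.mod(theta + step * g_theta, 2 * np.pi)  # phases are periodic
    return beta, theta

# Toy stand-in objective (the paper maximizes a closed-form achievable rate).
N = 8
rng = np.random.default_rng(0)
w = rng.normal(size=N)
grad = lambda b, th: (np.stack([w, -w], axis=1), np.cos(th))
b, th = pgam(grad, np.full((N, 2), 0.5), np.zeros(N))
print(b.round(2), th.round(2))
```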
|
http://arxiv.org/abs/2309.08342v1
|
This paper delves into the transformative power of Generative AI-driven
storytelling in the realm of marketing. Generative AI, distinct from
traditional machine learning, offers the capability to craft narratives that
resonate with consumers on a deeply personal level. Through real-world examples
from industry leaders like Google, Netflix and Stitch Fix, we elucidate how
this technology shapes marketing strategies, personalizes consumer experiences,
and navigates the challenges it presents. The paper also explores future
directions and recommendations for generative AI-driven storytelling, including
prospective applications such as real-time personalized storytelling, immersive
storytelling experiences, and social media storytelling. By shedding light on
the potential and impact of generative AI-driven storytelling in marketing,
this paper contributes to the understanding of this cutting-edge approach and
its transformative power in the field of marketing.
|
http://arxiv.org/abs/2309.09048v1
|
Current proposals for topological quantum computation (TQC) based on Majorana
zero modes (MZM) have mostly been focused on coupled-wire architecture which
can be challenging to implement experimentally. To explore alternative building
blocks of TQC, in this work we study the possibility of obtaining robust MZM at
the corners of triangular superconducting islands, which often appear
spontaneously in epitaxial growth. We first show that a minimal three-site
triangle model of spinless $p$-wave superconductor allows MZM to appear at
different pairs of vertices controlled by a staggered vector potential, which
may be realized using coupled quantum dots and can already demonstrate
braiding. For systems with less fine-tuned parameters, we suggest an
alternative structure of a "hollow" triangle subject to uniform supercurrents
or vector potentials, in which MZM generally appear when two of the edges are
in a different topological phase from the third. We also discuss the
feasibility of constructing the triangles using existing candidate MZM systems
and of braiding more MZM in networks of such triangles.
|
http://arxiv.org/abs/2309.11607v2
|
The recent advancements in machine learning have motivated researchers to
generate classification models dealing with hundreds of classes such as in the
case of image datasets. However, visualization of classification models with
a high number of classes and inter-model comparison in such classification
problems are two areas that have not received much attention in the literature,
despite the ever-increasing use of classification models to address problems
with very large class categories. In this paper, we present our interactive
visual analytics tool, called Circles, that allows a visual inter-model
comparison of numerous classification models with 1K classes in one view. To
mitigate the tricky issue of visual clutter, we chose a concentric radial line
layout for our inter-model comparison task. Our prototype shows the results of
9 models with 1K classes.
|
http://arxiv.org/abs/2309.05672v1
|
This paper proposes an unmanned aerial vehicle (UAV)-based distributed
sensing framework that uses orthogonal frequency-division multiplexing (OFDM)
waveforms to detect the position of a ground target, and UAVs operate in
half-duplex mode. A spatial grid approach is proposed, where a specific area
on the ground is divided into cells of equal size; then, the radar cross-section
(RCS) of each cell is jointly estimated by a network of dual-function UAVs. For
this purpose, three estimation algorithms are proposed employing the maximum
likelihood criterion, and digital beamforming is used for the local signal
acquisition at the receiving UAVs. It is also considered that the coordination,
fusion of sensing data, and central estimation are performed at a certain UAV
acting as a fusion center (FC). Monte Carlo simulations are performed to obtain
the absolute estimation error of the proposed framework. The results show
improved accuracy and resolution by the proposed framework compared to a
single monostatic UAV benchmark, due to the distributed approach among the
UAVs. It is also evidenced that a reduced overhead is obtained when compared to
a general compressive sensing (CS) approach.
|
http://arxiv.org/abs/2309.05114v1
|
In this paper, we study the propagation of external fields in Horndeski
theory, including the scalar field, electromagnetic field and Dirac field. We
extensively explore the quasinormal frequencies, time evolution, greybody
factors and emission rates of those massless perturbing fields by solving the
corresponding master equations in the Horndeski hairy black hole. With the use
of both numerical and analytical methods, we disclose the
competitive/promotional influences of the Horndeski hair, spin and quantum
momentum number of the external fields on those phenomenal physics. Our results
show that the Horndeski hairy black hole is stable under those perturbations.
Moreover, a larger Horndeski hair could enhance the intensity of energy
emission rate for Hawking radiation of various particles, indicating that
compared to the Schwarzschild black hole, the Horndeski hairy black hole could
have longer or shorter lifetime depending on the sign of the Horndeski hair.
|
http://arxiv.org/abs/2309.03565v1
|
We identify a family of $O(|E(G)|^2)$ nontrivial facets of the connected
matching polytope of a graph $G$, that is, the convex hull of incidence vectors
of matchings in $G$ whose covered vertices induce a connected subgraph.
Accompanying software to further inspect the polytope of an input graph is
available.
|
http://arxiv.org/abs/2309.14019v2
|
Artificial intelligence models and methods commonly lack causal
interpretability. Despite the advancements in interpretable machine learning
(IML) methods, they frequently assign importance to features which lack causal
influence on the outcome variable. Selecting causally relevant features among
those identified as relevant by these methods, or even before model training,
would offer a solution. Feature selection methods utilizing information
theoretical quantities have been successful in identifying statistically
relevant features. However, the information theoretical quantities they are
based on do not incorporate causality, rendering them unsuitable for such
scenarios. To address this challenge, this article proposes information
theoretical quantities that incorporate the causal structure of the system,
which can be used to evaluate causal importance of features for some given
outcome variable. Specifically, we introduce causal versions of entropy and
mutual information, termed causal entropy and causal information gain, which
are designed to assess how much control a feature provides over the outcome
variable. These newly defined quantities capture changes in the entropy of a
variable resulting from interventions on other variables. Fundamental results
connecting these quantities to the existence of causal effects are derived. The
use of causal information gain in feature selection is demonstrated,
highlighting its superiority over standard mutual information in revealing
which features provide control over a chosen outcome variable. Our
investigation paves the way for the development of methods with improved
interpretability in domains involving causation.
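
To make the definitions concrete, consider a toy structural model $Y = X \oplus U$ with exogenous $U \sim \mathrm{Bernoulli}(q)$ and a uniform intervention policy over $X$. The sketch below is our own minimal reading of the quantities named above (the paper's formal treatment is more general): it computes the causal entropy of $Y$ under $do(X)$ and the corresponding causal information gain.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

q = 0.1                       # P(U = 1), exogenous noise
def p_y_given_do_x(x):        # interventional distribution of Y = X xor U
    p1 = q if x == 0 else 1 - q
    return [1 - p1, p1]

# Causal entropy: expected entropy of Y under a uniform intervention on X.
causal_entropy = 0.5 * entropy(p_y_given_do_x(0)) + 0.5 * entropy(p_y_given_do_x(1))
h_y = entropy([0.5, 0.5])     # entropy of Y marginalized over uniform do(X)
print("causal entropy H(Y|do(X)) =", round(causal_entropy, 3))
print("causal information gain   =", round(h_y - causal_entropy, 3))
```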
|
http://arxiv.org/abs/2309.07703v2
|
Inequalities among symmetric functions are fundamental questions in
mathematics and have various applications in science and engineering. In this
paper, we tackle a conjecture about inequalities among the complete homogeneous
symmetric functions $H_{n,\lambda}$, namely that the inequality $H_{n,\lambda}\leq
H_{n,\mu}$ implies the majorization order $\lambda\preceq\mu$. This conjecture was
proposed by Cuttler, Greene and Skandera in 2011. The conjecture is a close
analogy with other known results on Muirhead-type inequalities. In 2021, Heaton
and Shankar disproved the conjecture by showing a counterexample for degree
$d=8$ and number of variables $n=3$. They then asked whether the conjecture is
true when the number of variables $n$ is large enough. In this paper, we
answer the question by proving that the conjecture does not hold when $d\geq8$
and $n\geq2$. A crucial step of the proof relies on variable reduction.
Inspired by this, we propose a new conjecture for $H_{n,\lambda}\leq
H_{n,\mu}$.
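
Such inequalities can be probed numerically. The sketch below is our own (it ignores the normalization conventions of the cited papers and simply scans a box of sample points, so it can only refute, never prove, an inequality): it evaluates $H_{n,\lambda} = \prod_i h_{\lambda_i}(x)$ via Newton's identity $k\,h_k = \sum_{i=1}^{k} p_i h_{k-i}$ and checks majorization combinatorially.

```python
import random

def h_values(x, max_k):
    """Complete homogeneous symmetric polynomials h_0..h_max_k at point x."""
    p = [sum(xi ** i for xi in x) for i in range(max_k + 1)]  # power sums
    h = [1.0] + [0.0] * max_k
    for k in range(1, max_k + 1):
        h[k] = sum(p[i] * h[k - i] for i in range(1, k + 1)) / k
    return h

def H(x, lam):
    h = h_values(x, max(lam))
    out = 1.0
    for part in lam:
        out *= h[part]
    return out

def majorizes(mu, lam):
    """Does mu majorize lam? (partitions of equal degree, weakly decreasing)"""
    cm = cl = 0
    for a, b in zip(mu, lam):
        cm, cl = cm + a, cl + b
        if cm < cl:
            return False
    return True

lam, mu, n = (4, 4), (5, 3), 3   # both partitions of degree 8
print("mu majorizes lam:", majorizes(mu, lam))
rng = random.Random(0)
worst = min(H(pt, mu) - H(pt, lam)
            for pt in ([rng.uniform(-1, 1) for _ in range(n)]
                       for _ in range(10_000)))
print("min of H_mu - H_lam over sample points:", worst)
```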
|
http://arxiv.org/abs/2305.19830v1
|
We survey various recent results that rigorously study the complexity of
learning quantum states. These include progress on quantum tomography, learning
physical quantum states, alternate learning models to tomography and learning
classical functions encoded as quantum states. We highlight how these results
are paving the way for a highly successful theory with a range of exciting open
questions. To this end, we distill 25 open questions from these results.
|
http://arxiv.org/abs/2305.20069v1
|
I introduce a new iterative method to solve problems in small-strain
non-linear elasticity. The method is inspired by recent work in data-driven
computational mechanics, which reformulated the classic boundary value problem
of continuum mechanics using the concept of "phase space". The latter is an
abstract metric space, whose coordinates are indexed by strains and stress
components, where each possible state of the discretized body corresponds to a
point. Since the phase space is associated to the discretized body, it is
finite dimensional. Two subsets are then defined: an affine space termed
"physically-admissible set" made up by those points that satisfy equilibrium
and a "materially-admissible set" containing points that satisfy the
constitutive law. Solving the boundary-value problem amounts to finding the
intersection between these two subdomains. In the linear-elastic setting, this
can be achieved through the solution of a set of linear equations; when
material non-linearity enters the picture, such is not the case anymore and
iterative solution approaches are necessary. Our iterative method consists of
projecting points alternately from one set to the other until convergence.
The method is similar in spirit to the "method of alternating projections" and
to the "method of projections onto convex sets", for which there is a solid
mathematical foundation that furnishes conditions for existence and uniqueness
of solutions, upon which we rely to uphold our new method's performance. We
present two examples to illustrate the applicability of the method, and to
showcase its strengths when compared to the classic Newton-Raphson method, the
usual tool of choice in non-linear continuum mechanics.
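
To convey the idea in the smallest possible setting, the sketch below (ours: one scalar strain $e$, one scalar stress $s$, a toy constitutive law $s = \tanh(e)$, and a single affine equilibrium constraint, far simpler than the paper's finite-element phase space) alternates Euclidean projections between the two sets until the iterates settle on their intersection.

```python
import numpy as np

k = lambda e: np.tanh(e)   # toy non-linear constitutive law s = tanh(e)
c, f = 1.0, 0.8            # equilibrium (affine) set: c*e + s = f

def project_equilibrium(e, s):
    # Euclidean projection onto the line {(e, s): c*e + s = f}.
    r = (c * e + s - f) / (c * c + 1.0)
    return e - r * c, s - r

def project_material(e, s):
    # Nearest point on the curve {(e, k(e))} via a dense 1-D grid search.
    grid = np.linspace(-3, 3, 20001)
    d = (grid - e) ** 2 + (k(grid) - s) ** 2
    g = grid[np.argmin(d)]
    return g, k(g)

e, s = 0.0, 0.0
for _ in range(100):
    e, s = project_material(*project_equilibrium(e, s))
# Residual should be near zero (up to the grid resolution of the search).
print("state:", round(e, 4), round(s, 4), " equilibrium residual:", c * e + s - f)
```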
|
http://arxiv.org/abs/2309.14031v1
|
Off-Policy Estimation (OPE) methods allow us to learn and evaluate
decision-making policies from logged data. This makes them an attractive choice
for the offline evaluation of recommender systems, and several recent works
have reported successful adoption of OPE methods to this end. An important
assumption that makes this approach work is the absence of unobserved confounders:
random variables that influence both actions and rewards at data collection
time. Because the data collection policy is typically under the practitioner's
control, the unconfoundedness assumption is often left implicit, and its
violations are rarely dealt with in the existing literature.
This work aims to highlight the problems that arise when performing
off-policy estimation in the presence of unobserved confounders, specifically
focusing on a recommendation use-case. We focus on policy-based estimators,
where the logging propensities are learned from logged data. We characterise
the statistical bias that arises due to confounding, and show how existing
diagnostics are unable to uncover such cases. Because the bias depends directly
on the true and unobserved logging propensities, it is non-identifiable. As the
unconfoundedness assumption is famously untestable, this becomes especially
problematic. This paper emphasises this common, yet often overlooked issue.
Through synthetic data, we empirically show how na\"ive propensity estimation
under confounding can lead to severely biased metric estimates that are allowed
to fly under the radar. We aim to cultivate an awareness among researchers and
practitioners of this important problem, and touch upon potential research
directions towards mitigating its effects.
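
The bias can be reproduced with a few lines of synthetic simulation in the spirit of the experiments described above (this toy setup, variable names included, is ours): a hidden binary confounder drives both the logged action and the reward, and naive propensity estimation yields a badly biased IPS estimate, while oracle propensities recover the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.binomial(1, 0.5, n)                   # unobserved confounder
p_a = np.where(u == 1, 0.9, 0.1)              # logging policy depends on u
a = rng.binomial(1, p_a)                      # logged actions
r = rng.binomial(1, 0.2 + 0.6 * (a & u))      # reward depends on a AND u

# Target policy: always play a = 1.
true_value = (0.2 + 0.6 * u).mean()           # E[r | do(a=1)] = 0.5
naive_prop = a.mean()                         # propensity fitted ignoring u
ips_naive = (r * (a == 1) / naive_prop).mean()
ips_oracle = (r * (a == 1) / p_a).mean()      # uses the true propensities
print(f"truth {true_value:.3f}  naive IPS {ips_naive:.3f}"
      f"  oracle IPS {ips_oracle:.3f}")
```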
|
http://arxiv.org/abs/2309.04222v1
|
We use the stellar fossil record to constrain the stellar metallicity
evolution and star-formation histories of the post-starburst (PSB) regions
within 45 local post-starburst galaxies from the MaNGA survey. The direct
measurement of the regions' stellar metallicity evolution is achieved by a new
two-step metallicity model that allows for stellar metallicity to change at the
peak of the starburst. We also employ a Gaussian process noise model that
accounts for correlated errors introduced by the observational data reduction
or inaccuracies in the models. We find that a majority of PSB regions (69% at
$>1\sigma$ significance) increased in stellar metallicity during the recent
starburst, with an average increase of 0.8 dex and a standard deviation of 0.4
dex. A much smaller fraction of PSBs are found to have remained constant (22%)
or declined in metallicity (9%, average decrease 0.4 dex, standard deviation
0.3 dex). The pre-burst metallicities of the PSB galaxies are in good agreement
with the mass-metallicity relation of local star-forming galaxies. These
results are consistent with hydrodynamic simulations, which suggest that
mergers between gas-rich galaxies are the primary formation mechanism of local
PSBs, and rapid metal recycling during the starburst outweighs the impact of
dilution by any gas inflows. The final mass-weighted metallicities of the PSB
galaxies are consistent with the mass-metallicity relation of local passive
galaxies. Our results suggest that rapid quenching following a merger-driven
starburst is entirely consistent with the observed gap between the stellar
mass-metallicity relations of local star-forming and passive galaxies.
|
http://arxiv.org/abs/2309.16626v3
|
This paper focuses on an elastic dislocation problem that is motivated by
applications in the geophysical and seismological communities. In our model,
the displacement satisfies the Lam\'e system in a bounded domain with a mixed
homogeneous boundary condition. We also allow the occurrence of discontinuities
in both the displacement and traction fields on the fault curve/surface. By the
variational approach, we first prove the well-posedness of the direct
dislocation problem in a rather general setting with the Lam\'e parameters
being real-valued $L^\infty$ functions satisfying the strong convexity
condition. Next, by considering the scenario that the Lam\'e parameters are
constant and the fault curve/surface possesses certain corner singularities, we
establish a local characterisation of the slip vectors at the corner points
over the dislocation curve/surface. In our study the dislocation is
geometrically rather general and may be open or closed. For both cases, we
establish the uniqueness results for the inverse problem of determining the
dislocation curve/surface and the slips.
|
http://arxiv.org/abs/2309.09706v2
|
We investigate the early time dynamics of heavy ion collisions studying the
time evolution of the energy-momentum tensor as well as energy-momentum
correlations within a uniformly thermalizing holographic QGP. From these
quantities, we suggest a far-from equilibrium definition of shear viscosity,
which is a crucial property of QCD matter as it significantly determines the
generation of elliptic flow already at early times. During an exemplary initial
heating phase of the holographic QGP, the shear viscosity to entropy density
ratio decreases to 60%, followed by an overshoot to 110% of the
near-equilibrium value, $\eta/s=1/(4\pi)$. Implications for the QCD QGP are
discussed. Subsequently, we consider a holographic QGP which is
Bjorken-expanding. Its energy-momentum tensor components have a known
hydrodynamic attractor to which all time evolutions collapse independent of the
initial conditions. Based on this, we propose a definition for a
far-from-equilibrium speed of sound and analytically compute its hydrodynamic
attractor. Subjecting this Bjorken-expanding plasma to an external magnetic
field and an axial chemical potential, we study the chiral magnetic effect far
from equilibrium.
|
http://arxiv.org/abs/2309.06435v1
|
We present a method for reproducing complex multi-character interactions for
physically simulated humanoid characters using deep reinforcement learning. Our
method learns control policies for characters that imitate not only individual
motions, but also the interactions between characters, while maintaining
balance and matching the complexity of reference data. Our approach uses a
novel reward formulation based on an interaction graph that measures distances
between pairs of interaction landmarks. This reward encourages control policies
to efficiently imitate the character's motion while preserving the spatial
relationships of the interactions in the reference motion. We evaluate our
method on a variety of activities, from simple interactions such as a high-five
greeting to more complex interactions such as gymnastic exercises, Salsa
dancing, and box carrying and throwing. This approach can be used to
``clean-up'' existing motion capture data to produce physically plausible
interactions or to retarget motion to new characters with different sizes,
kinematics or morphologies while maintaining the interactions in the original
data.
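
A toy rendition of an interaction-graph reward (our guess at the general form; the paper's exact formulation may differ): compare pairwise distances between interaction landmarks in the simulated and reference motions.

```python
import numpy as np

def interaction_reward(sim_landmarks, ref_landmarks, sigma=0.5):
    """Reward near 1 when simulated landmark geometry matches the reference."""
    def pdists(P):  # pairwise distances between landmark positions
        d = P[:, None, :] - P[None, :, :]
        return np.linalg.norm(d, axis=-1)
    err = np.abs(pdists(sim_landmarks) - pdists(ref_landmarks)).mean()
    return float(np.exp(-err / sigma))

rng = np.random.default_rng(0)
ref = rng.normal(size=(6, 3))                     # 6 landmarks in 3-D
sim = ref + 0.01 * rng.normal(size=ref.shape)     # slightly perturbed motion
print(interaction_reward(sim, ref))
```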
|
http://arxiv.org/abs/2305.20041v1
|
In the last decade, despite rapid advancements in artificial intelligence
(AI) transforming many industry practices, construction largely lags in
adoption. Recently, the emergence and rapid adoption of advanced large language
models (LLM) like OpenAI's GPT, Google's PaLM, and Meta's Llama have shown
great potential and sparked considerable global interest. However, the current
surge lacks a study investigating the opportunities and challenges of
implementing Generative AI (GenAI) in the construction sector, creating a
critical knowledge gap for researchers and practitioners. This underlines the
necessity to explore the prospects and complexities of GenAI integration.
Bridging this gap is fundamental to optimizing GenAI's early-stage adoption
within the construction sector. Given GenAI's unprecedented capabilities to
generate human-like content based on learning from existing content, we reflect
on two guiding questions: What will the future bring for GenAI in the
construction industry? What are the potential opportunities and challenges in
implementing GenAI in the construction industry? This study delves into
reflected perception in literature, analyzes the industry perception using
programming-based word cloud and frequency analysis, and integrates authors'
opinions to answer these questions. This paper recommends a conceptual GenAI
implementation framework, provides practical recommendations, summarizes future
research questions, and builds foundational literature to foster subsequent
research expansion in GenAI within the construction and its allied architecture
& engineering domains.
|
http://arxiv.org/abs/2310.04427v1
|
The subsurface oceans of icy satellites are among the most compelling among
the potentially habitable environments in our Solar System. The question of
whether a liquid subsurface layer can be maintained over geological timescales
depends on its chemical composition. The composition of icy satellites is
linked to that of the circumplanetary disk (CPD) in which they form. The CPD
accretes material from the surrounding circumstellar disk in the vicinity of
the planet, however, the degree of chemical inheritance is unclear. We aim to
investigate the composition of ices in chemically reset or inherited
circumplanetary disks to inform interior modeling and the interpretation of in
situ measurements of icy solar system satellites, with an emphasis on the
Galilean moon system. We used a radiation-thermochemical code to produce
circumplanetary disk models and extract the ice composition from time-dependent
chemistry, incorporating gas-phase and grain-surface reactions. The initial
sublimation of ices during accretion may result in a CO2-rich ice composition.
Sublimated ammonia ice is destroyed by background radiation while drifting
towards the CPD midplane. Liberated nitrogen becomes locked in N2 due to
efficient self-shielding, leaving ices depleted of ammonia. A significant
ammonia ice component remains only when ices are inherited from the
circumstellar disk. The observed composition of the Galilean moons is
consistent with the sublimation of ices during accretion onto the CPD. In this
scenario, the Galilean moon ices are nitrogen-poor and CO2 on Callisto is
endogenous and primordial. The ice composition is significantly altered after
an initial reset of accreted circumstellar ice. The chemical history of the
Galilean moons stands in contrast to the Saturnian system, where the
composition of the moons corresponds more closely with the directly inherited
circumstellar disk material.
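The full radiation-thermochemical model is far beyond a toy example, but the qualitative mechanism described above (photodestruction of NH3 ice feeding self-shielded N2) can be caricatured with a two-species rate equation; the rate constant, timestep, and abundances below are invented purely for illustration:

```python
# Toy kinetics: NH3 is photodestroyed at rate k_ph and its nitrogen is
# locked into N2 (assumed perfectly self-shielded, so N2 is not destroyed).
k_ph = 1e-12            # assumed photodestruction rate [1/s]
n_nh3, n_n2 = 1.0, 0.0  # normalized abundances
dt, steps = 1e9, 5000   # ~30 yr per step, ~160 kyr total

for _ in range(steps):
    destroyed = k_ph * n_nh3 * dt
    n_nh3 -= destroyed
    n_n2 += destroyed / 2.0  # two N atoms per N2 molecule
print(f"NH3: {n_nh3:.3f}  N2: {n_n2:.3f}")
```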
|
http://arxiv.org/abs/2302.14425v1
|
A huge amount of information is produced in digital form. The Semantic Web
stems from the realisation that dealing efficiently with this production
requires getting better at interlinking digital informational resources
together. Its focus is on linking data. Linking data alone, however, is not enough. We need to
provide infrastructural support for linking all sorts of informational
resources including resources whose understanding and fine interlinking
requires domain-specific human expertise. At times when many problems scale to
planetary dimensions, it is essential to scale coordination of information
processing and information production without giving up on expertise and depth
of analysis, and without forcing onto thinkers, decision-makers, and innovators
languages and formalisms that suit only some forms of intelligence. This
article makes a proposal in this direction, in line with
the idea of interlinking championed by the Semantic Web.
|
http://arxiv.org/abs/2309.10531v1
|
Band engineering stands as an efficient route to induce strongly correlated
quantum many-body phenomena. Besides inspiring analogies among diverse physical
fields, tuning on demand the group velocity is highly attractive in photonics
because it allows unconventional flows of light. $\Lambda$-schemes offer a
route to control the propagation of light in lattice-free configurations,
enabling exotic phases such as slow light and allowing for highly nonlinear
optical systems. Here, we realize room-temperature intercavity Frenkel
polaritons excited across two strongly coupled cavities. We demonstrate the
formation of a tuneable heavy-polariton, akin to slow light, appearing in the
absence of a periodic in-plane potential. Our photonic architecture based on a
simple three-level scheme enables the unique spatial segregation of photons and
excitons in different cavities and maintains a balanced degree of mixing
between them. This unveils a dynamical competition between many-body scattering
processes and the underlying polariton nature which leads to an increased
fluorescence lifetime. The intercavity polariton features are further revealed
under appropriate resonant pumping, where we observe suppression of the
polariton fluorescence intensity.
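The three-level scheme can be illustrated with a toy coupled-oscillator Hamiltonian: two cavity photon modes coupled only through the shared exciton, diagonalized at a single in-plane momentum. Sweeping momentum would trace out the branches, with the middle ("heavy") branch staying flat. All energies and the coupling below are assumed values, not the measured ones:

```python
import numpy as np

# Basis: (cavity 1 photon, Frenkel exciton, cavity 2 photon), energies in eV.
E_c1, E_x, E_c2 = 2.08, 2.05, 2.12  # illustrative, slightly detuned cavities
g = 0.05                            # assumed light-matter coupling

H = np.array([[E_c1, g,   0.0],
              [g,    E_x, g  ],
              [0.0,  g,   E_c2]])   # no direct cavity-cavity coupling

energies, states = np.linalg.eigh(H)
print(energies)            # lower, middle ("heavy"), upper polariton
print(np.abs(states)**2)   # photon/exciton fractions of each branch
```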
|
http://arxiv.org/abs/2309.04544v2
|
In the context of the interaction between a moving plane shock wave and an
inclined wall (wedge), it is possible to distinguish four distinct shock
reflection configurations. These shock wave reflections, which depend on the
characteristics of the incident shock wave and the geometry of the surface that
it interacts with, are (i) regular reflection (RR), (ii) simple Mach reflection
(SMR), (iii) transition Mach reflection (TMR), and (iv) double Mach reflection
(DMR). The impact of these shock reflections on flow properties can be
significant, so understanding them is important when predicting the behavior of
shock waves in more complex flow configurations. Previous research has
explored these shock reflections through both numerical and experimental
approaches, employing various gases and different flow and geometrical
configurations. The present study involves the use of a high-fidelity
computational fluid dynamics (CFD) tool, known as PeleC, which is a
compressible solver based on AMReX specifically designed to handle complex flow
configurations. Accordingly, by solving the time-dependent Euler equations for
various 2D flow configurations, this work studies shock wave reflections
accounting for four different Mach-based operating conditions, and analyzes the
resulting density profiles on the wedge wall, comparing them with experimental
data. To strike a balance between model accuracy and computational efficiency,
adaptive mesh refinement (AMR) is incorporated, and a mesh independence study
is performed by varying the number of AMR levels. The results of this study
demonstrate the capabilities of the CFD tool employed as it accurately predicts
the sensitivity of wave characteristics to different operating conditions.
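As a back-of-the-envelope companion to such Mach-based operating conditions, the post-shock state behind an incident moving shock follows from the standard normal-shock (Rankine-Hugoniot) relations; the Mach numbers below are illustrative, not those simulated in the study:

```python
# Normal-shock jump relations for a perfect gas (gamma = 1.4).
gamma = 1.4
for Ms in (1.5, 2.0, 2.5):  # illustrative incident-shock Mach numbers
    p_ratio = (2.0 * gamma * Ms**2 - (gamma - 1.0)) / (gamma + 1.0)
    rho_ratio = ((gamma + 1.0) * Ms**2) / ((gamma - 1.0) * Ms**2 + 2.0)
    print(f"Ms={Ms}: p2/p1={p_ratio:.2f}, rho2/rho1={rho_ratio:.2f}")
```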
|
http://arxiv.org/abs/2309.05882v1
|