text | source
---|---
We revisit an argument due to Lesch (Topology 32 (1993), no. 3, 611-623) for
proving the cobordism invariance of the index of Dirac operators on
even-dimensional closed manifolds and combine this with recent work by the
author (New York J. Math. 28 (2022), 705-772) to show vanishing results for the
spectral flow for families of selfadjoint Fredholm realizations of elliptic
operators in case the family is induced on the boundary by an elliptic operator
on a compact space. This work is motivated by studying the behavior of the
index of realizations of elliptic operators under cobordisms of stratified
manifolds.
|
http://arxiv.org/abs/2301.00100v1
|
We consider branching processes describing structured, interacting
populations in continuous time. The dynamics of each individual's characteristics
and branching properties can be influenced by the entire population. We propose
a Girsanov-type result based on a spinal construction, and establish a
many-to-one formula. By combining this result with the spinal decomposition, we
derive a generalized continuous-time version of the Kesten-Stigum theorem that
incorporates interactions. Additionally, we propose an alternative approach to
the spine construction for exact simulations of stochastic size-dependent
populations.
|
http://arxiv.org/abs/2309.15449v2
|
Unstructured data in Electronic Health Records (EHRs) often contains critical
information -- complementary to imaging -- that could inform radiologists'
diagnoses. But the large volume of notes often associated with patients
together with time constraints renders manually identifying relevant evidence
practically infeasible. In this work we propose and evaluate a zero-shot
strategy for using LLMs as a mechanism to efficiently retrieve and summarize
unstructured evidence in patient EHR relevant to a given query. Our method
entails tasking an LLM to infer whether a patient has, or is at risk of, a
particular condition on the basis of associated notes; if so, we ask the model
to summarize the supporting evidence. Under expert evaluation, we find that
this LLM-based approach provides outputs consistently preferred to a pre-LLM
information retrieval baseline. Manual evaluation is expensive, so we also
propose and validate a method using an LLM to evaluate (other) LLM outputs for
this task, allowing us to scale up evaluation. Our findings indicate the
promise of LLMs as interfaces to EHR, but also highlight the outstanding
challenge posed by "hallucinations". In this setting, however, we show that
model confidence in outputs strongly correlates with faithful summaries,
offering a practical means to limit confabulations.
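
As a rough illustration of the two-stage prompting strategy described in this abstract, the sketch below first asks an LLM whether the notes indicate the queried condition and, only if so, asks it to summarize the supporting evidence. The `complete` callable, prompt wording, and yes/no parsing are hypothetical placeholders rather than the authors' implementation.

```python
from typing import Callable, Optional

def retrieve_and_summarize(notes: list[str], condition: str,
                           complete: Callable[[str], str]) -> Optional[str]:
    """Zero-shot sketch: screen notes for a condition, then summarize the evidence.

    `complete` is an assumed LLM interface mapping a prompt string to a response.
    """
    corpus = "\n\n".join(notes)
    screen_prompt = (
        f"Patient notes:\n{corpus}\n\n"
        f"Question: Does the patient have, or is the patient at risk of, "
        f"{condition}? Answer 'yes' or 'no'."
    )
    if not complete(screen_prompt).strip().lower().startswith("yes"):
        return None  # model found no relevant evidence; skip summarization
    summary_prompt = (
        f"Patient notes:\n{corpus}\n\n"
        f"Summarize the evidence in these notes indicating that the patient "
        f"has, or is at risk of, {condition}."
    )
    return complete(summary_prompt)
```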
|
http://arxiv.org/abs/2309.04550v3
|
There are countless digital sky surveys and automated scans of the night sky
which use computer algorithms to detect and categorize objects. With the advent
of Artificial Intelligence such surveys will become even more efficient in the
near future. Despite this, some objects are missed by surveys or pose no initial
interest. At times such missed objects are unique in nature and of decent
angular sizes, demanding research, unlike the billions of tiny specks of
galaxies that would be too tedious to name and study. In this scenario, the
amateur astronomer, with their spirit for old-school astronomical discovery, steps
in to manually comb the sky and catalogue unique objects, as was done in the
early days of astronomy. In this paper two unique, previously uncatalogued
galaxy candidates, namely Shaheer I and Shaheer II are identified and studied.
Both galaxies lie at a distance of 6.67 arc-minutes from each other in the
constellation of Camelopardalis. One boasts an unusual morphological profile,
akin to a molar tooth, while the other seems to be shooting through space at
tremendous velocities. The objects were discovered during visual inspection of
digital surveys and then imaged from amateur telescopes at Taqwa observatory,
Pakistan's first and only dark-sky observatory (Bortle 1). We perform
photometry using PetroFit to discuss the potential nature of the galaxies and
call for further collaborative research to fully uncover their characteristics.
|
http://arxiv.org/abs/2309.14743v1
|
The existing methods for video anomaly detection mostly utilize videos
containing identifiable facial and appearance-based features. The use of videos
with identifiable faces raises privacy concerns, especially when used in a
hospital or community-based setting. Appearance-based features can also be
sensitive to pixel-based noise, straining the anomaly detection methods to
model the changes in the background and making it difficult to focus on the
actions of humans in the foreground. Structural information in the form of
skeletons describing the human motion in the videos is privacy-protecting and
can overcome some of the problems posed by appearance-based features. In this
paper, we present a survey of privacy-protecting deep learning anomaly
detection methods using skeletons extracted from videos. We present a novel
taxonomy of algorithms based on the various learning approaches. We conclude
that skeleton-based approaches for anomaly detection can be a plausible
privacy-protecting alternative for video anomaly detection. Lastly, we identify
major open research questions and provide guidelines to address them.
|
http://arxiv.org/abs/2301.00114v4
|
We have conducted a revised analysis of the first-order phase transition that
is associated with symmetry breaking in a classically scale-invariant model
that has been extended with a new $SU(2)$ gauge group. By incorporating recent
developments in the understanding of supercooled phase transitions, we were
able to calculate all of its features and significantly limit the parameter
space. We were also able to predict the gravitational wave spectra generated
during this phase transition and found that this model is readily testable with
LISA. Additionally, we have made predictions regarding the relic dark matter
abundance. Our predictions are consistent with observations but only within a
narrow part of the parameter space. We have placed significant constraints on
the supercool dark matter scenario by improving the description of percolation
and reheating after the phase transition, as well as including the running of
couplings. Finally, we have also analyzed the renormalization-scale dependence
of our results.
|
http://arxiv.org/abs/2303.18122v1
|
Traffic forecasting is a challenging task due to the complex spatio-temporal
correlations among traffic series. In this paper, we identify an underexplored
problem in multivariate traffic series prediction: extreme events. Road
congestion and rush hours can result in low correlation in vehicle speeds at
various intersections during adjacent time periods. Existing methods generally
predict future series based on recent observations and entirely discard
training data during the testing phase, rendering them unreliable for
forecasting highly nonlinear multivariate time series. To tackle this issue, we
propose a test-time compensated representation learning framework comprising a
spatio-temporal decomposed data bank and a multi-head spatial transformer model
(CompFormer). The former component explicitly separates all training data along
the temporal dimension according to periodicity characteristics, while the
latter component establishes a connection between recent observations and
historical series in the data bank through a spatial attention matrix. This
enables the CompFormer to transfer robust features to overcome anomalous events
while using fewer computational resources. Our modules can be flexibly
integrated with existing forecasting methods through end-to-end training, and
we demonstrate their effectiveness on the METR-LA and PEMS-BAY benchmarks.
Extensive experimental results show that our method is particularly effective
under extreme events, and can achieve significant improvements over six strong
baselines, with an overall improvement of up to 28.2%.
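
For intuition about how a spatial attention matrix can connect recent observations to series retrieved from a data bank, here is a generic single-head cross-attention sketch; the shapes, dimensionality, and naming are assumptions and do not reproduce the CompFormer architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BankAttention(nn.Module):
    """Single-head cross-attention from recent observations to a historical data bank."""

    def __init__(self, d_model: int = 64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, recent: torch.Tensor, bank: torch.Tensor) -> torch.Tensor:
        # recent: (N, d) embeddings of the latest observations, one per sensor/node
        # bank:   (M, d) embeddings of historical series grouped by periodicity
        q, k, v = self.q(recent), self.k(bank), self.v(bank)
        attn = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)  # (N, M) spatial attention matrix
        return attn @ v  # (N, d) features borrowed from the historical series
```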
|
http://arxiv.org/abs/2309.09074v1
|
In this article, we try to capture the influence of deviations from the standard
Kerr black hole spacetime on the observed high-frequency quasi-periodic
oscillation signal. We explore the dynamics of test particles in the field of
rotating compact objects governed by the various modifications of the standard
Kerr black hole spacetime and apply the model of epicyclic oscillations of
Keplerian discs to the observed microquasars and active galactic nuclei
high-frequency quasi-periodic oscillations data. We present a generalized
formalism for the fitting of high-frequency quasi-periodic oscillation models,
namely the so-called epicyclic resonance and relativistic precession models, under
the assumption of stationary, axisymmetric, and asymptotically flat spacetimes.
Recently, we have used the same set of stationary, axisymmetric, and
asymptotically flat spacetimes, and estimated constraints on the spacetime
parameters with the help of hot-spot data of three flares observed at Sgr~A* by the
GRAVITY instrument \citep{Shahzadi-et-al:2022:EPJC:}. The aim of this work is
not to test a particular theoretical model or to determine and constrain its
parameters, but to map a set of astrophysically well-motivated deviations from
classical Kerr black hole spacetime and demonstrate which ones provide the best
fit for high-frequency quasi-periodic oscillations data and could be fruitful
for future exploration.
|
http://arxiv.org/abs/2309.09712v1
|
Plug-and-Play (PnP) priors are a widely used family of methods for solving
imaging inverse problems by integrating physical measurement models with image
priors specified using image denoisers. PnP methods have been shown to achieve
state-of-the-art performance when the prior is obtained using powerful deep
denoisers. Despite extensive work on PnP, the topic of distribution mismatch
between the training and testing data has often been overlooked in the PnP
literature. This paper presents a set of new theoretical and numerical results
on the topic of prior distribution mismatch and domain adaptation for
the alternating direction method of multipliers (ADMM) variant of PnP. Our
theoretical result provides an explicit error bound for PnP-ADMM due to the
mismatch between the desired denoiser and the one used for inference. Our
analysis contributes to the work in the area by considering the mismatch under
nonconvex data-fidelity terms and expansive denoisers. Our first set of
numerical results quantifies the impact of the prior distribution mismatch on
the performance of PnP-ADMM on the problem of image super-resolution. Our
second set of numerical results considers a simple and effective domain
adaptation strategy that closes the performance gap due to the use of mismatched
denoisers. Our results suggest the relative robustness of PnP-ADMM to prior
distribution mismatch, while also showing that the performance gap can be
significantly reduced with few training samples from the desired distribution.
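
For readers unfamiliar with the iteration being analyzed, a minimal generic PnP-ADMM sketch is given below: a data-fidelity proximal step, a denoiser step standing in for the prior's proximal operator, and a dual update. The function names, step size, and loop structure are illustrative, and the prox/denoiser arguments are assumed to be supplied by the user.

```python
import numpy as np

def pnp_admm(prox_data, denoiser, x0, gamma=1.0, n_iter=50):
    """Generic PnP-ADMM sketch (scaled dual form).

    prox_data(z, gamma): proximal operator of the (possibly nonconvex) data-fidelity term.
    denoiser(z):         image denoiser playing the role of the prior's proximal operator.
    """
    x = x0.copy()
    v = x0.copy()
    u = np.zeros_like(x0)            # scaled dual variable
    for _ in range(n_iter):
        x = prox_data(v - u, gamma)  # enforce consistency with the measurements
        v = denoiser(x + u)          # impose the learned image prior
        u = u + x - v                # dual ascent on the splitting constraint
    return x
```

When the denoiser is trained on a distribution different from the test images, the fixed point of this loop shifts; the error bound discussed in the abstract quantifies that effect.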
|
http://arxiv.org/abs/2310.00133v1
|
Fishes, cetaceans, and many other aquatic vertebrates undulate their bodies
to propel themselves through water. Swimming requires an intricate interplay
between sensing the environment, making decisions, controlling internal
dynamics, and moving the body in interaction with the external medium. Within
this sequence of actions initiating locomotion, biological and physical laws
manifest complex and nonlinear effects, which does not prevent natural swimmers
from demonstrating efficient movement. This raises two complementary questions: how
to model this intricacy and how to abstract it for practical swimming. In the
context of robotics, the second question is of paramount importance to build
efficient artificial swimmers driven by digital signals and mechanics. In this
study, we tackle these two questions by leveraging a biomimetic robotic swimmer
as a platform for investigating optimal control strategies for thrust
generation. Through a combination of machine learning techniques and intuitive
models, we identify a control signal that maximizes thrust production. Optimum
tail-beat frequency and amplitude result from the subtle interplay between the
swimmer's internal dynamics and its interaction with the surrounding fluid. We
then propose a practical implementation for autonomous robotic swimmers that
requires no prior knowledge of systems or equations. Direct fluid-structure
simulations confirm the effectiveness and reliability of the proposed
approach. Hence, our findings bridge fluid dynamics, robotics, and biology,
providing valuable insights into the physics of aquatic locomotion.
|
http://arxiv.org/abs/2309.14025v3
|
This paper develops the MUFIN technique for extreme classification (XC) tasks
with millions of labels where datapoints and labels are endowed with visual and
textual descriptors. Applications of MUFIN to product-to-product recommendation
and bid query prediction over several millions of products are presented.
Contemporary multi-modal methods frequently rely on purely embedding-based
methods. On the other hand, XC methods utilize classifier architectures to
offer superior accuracy to embedding-only methods but mostly focus on
text-based categorization tasks. MUFIN bridges this gap by reformulating
multi-modal categorization as an XC problem with several millions of labels.
This presents the twin challenges of developing multi-modal architectures that
can offer embeddings sufficiently expressive to allow accurate categorization
over millions of labels; and training and inference routines that scale
logarithmically in the number of labels. MUFIN develops an architecture based
on cross-modal attention and trains it in a modular fashion using pre-training
and positive and negative mining. A novel product-to-product recommendation
dataset MM-AmazonTitles-300K containing over 300K products was curated from
publicly available amazon.com listings with each product endowed with a title
and multiple images. On all datasets, MUFIN offered at least 3% higher
accuracy than leading text-based, image-based and multi-modal techniques. Code
for MUFIN is available at https://github.com/Extreme-classification/MUFIN
|
http://arxiv.org/abs/2309.04961v1
|
Let us say that a graph $G$ is Ramsey for a tuple $(H_1,\dots,H_r)$ of graphs
if every $r$-coloring of the edges of $G$ contains a monochromatic copy of
$H_i$ in color $i$, for some $i \in [r]$. A famous conjecture of Kohayakawa and
Kreuter, extending seminal work of R\"odl and Ruci\'nski, predicts the
threshold at which the binomial random graph $G_{n,p}$ becomes Ramsey for
$(H_1,\dots,H_r)$ asymptotically almost surely. In this paper, we resolve the
Kohayakawa-Kreuter conjecture for almost all tuples of graphs. Moreover, we
reduce its validity to the truth of a certain deterministic statement, which is
a clear necessary condition for the conjecture to hold. All of our results
actually hold in greater generality, when one replaces the graphs
$H_1,\dots,H_r$ by finite families $\mathcal{H}_1,\dots,\mathcal{H}_r$.
Additionally, we pose a natural (deterministic) graph-partitioning conjecture,
which we believe to be of independent interest, and whose resolution would
imply the Kohayakawa-Kreuter conjecture.
|
http://arxiv.org/abs/2307.16611v1
|
The design of an induction machine is a challenging task due to different
electromagnetic and thermal constraints. Quick estimation of a machine's
dimensions is important in sales tools to provide quick quotations to
customers based on specific requirements. The key part of this process is to
select different design parameters like length, diameter, tooth tip height and
winding turns to achieve certain torque, current and temperature of the
machine. Electrical machine designers, with their experience, know how to alter
different machine design parameters to achieve customer-specific operational
requirements. We propose a reinforcement learning algorithm to design a
customised induction motor. The neural network model is trained off-line by
simulating different instances of the electrical machine design game with a
reward or penalty function when a good or bad design choice is made. The
results demonstrate that the suggested method automates electrical machine
design without applying any human engineering knowledge.
|
http://arxiv.org/abs/2306.17626v1
|
Software supply chain (SSC) attacks have become one of the crucial issues
that are increasing rapidly with the advancement of the software
development domain. In general, SSC attacks executed during the software
development process lead to vulnerabilities in software products targeting
downstream customers and even involved stakeholders. Machine learning
approaches have proven effective in detecting and preventing software security
vulnerabilities. Besides, emerging quantum machine learning can be promising in
addressing SSC attacks. Considering the distinction between traditional and
quantum machine learning, performance may vary based on the proportions
of the experimental dataset. In this paper, we conduct a comparative analysis
between quantum neural networks (QNN) and conventional neural networks (NN)
with a software supply chain attack dataset known as ClaMP. Our goal is to
distinguish the performance between QNN and NN and to conduct the experiment,
we develop two different models for QNN and NN by utilizing PennyLane for the
quantum model and TensorFlow with Keras for the traditional model, respectively.
We evaluated the performance of both models with different proportions of the
ClaMP dataset to identify the F1 score, recall, precision, and accuracy. We also
measure the execution time to check the efficiency of both models. The results
indicate that the execution time of the QNN is slower than that of the NN for
larger proportions of the dataset. Given recent advancements in QNN, larger-scale
experiments should be carried out in future research to understand both models
accurately.
|
http://arxiv.org/abs/2306.08060v1
|
We identify a new type of risk, common firm-level investor fears, from
commonalities within the cross-sectional distribution of individual stock
options. We define firm-level fears that link with upward price movements as
good fears, and those relating to downward price movements as bad fears. Such
information is different from the market fears that we extract from index options.
Stocks with high sensitivities to common firm-level investor fears earn lower
returns, with investors demanding a higher compensation for exposure to common
bad fears relative to common good fears. Risk premium estimates for common bad
fears range from -5.63% to -4.92% per annum.
|
http://arxiv.org/abs/2309.03968v1
|
The coherent dynamics and control of spin qubits are essential requirements
for quantum technology. A prominent challenge for coherent control of a spin
qubit in a set of qubits is the destructive effect of the applied magnetic
field on the coherent dynamics of neighbouring qubits due to its spatial
extension. We propose a novel scheme to characterize the coherent dynamics of
these quantum systems and to coherently control them using a magnetic field.
Our scheme consists of a resonator that encompasses the desired quantum system
and a modulated electron beam that passes through the resonator in close
proximity to the quantum system of interest. The dynamics of the system is
obtained by solving the Lindblad master equation. To verify the reliability of
our model, we tested it on a potassium atom ($^{41}$K) and the NV$^-$ centre
in diamond. The results show that by properly controlling the parameters of the
resonator and the electron beam, the coherence and decoherence rates of these
quantum systems can be improved. Our model has the potential to be used for
characterizing different types of spin-based quantum systems, and implementing
quantum logic gates for quantum computation.
|
http://arxiv.org/abs/2303.17952v1
|
High-quality text embedding is pivotal in improving semantic textual
similarity (STS) tasks, which are crucial components in Large Language Model
(LLM) applications. However, a common challenge existing text embedding models
face is the problem of vanishing gradients, primarily due to their reliance on
the cosine function in the optimization objective, which has saturation zones.
To address this issue, this paper proposes a novel angle-optimized text
embedding model called AnglE. The core idea of AnglE is to introduce angle
optimization in a complex space. This novel approach effectively mitigates the
adverse effects of the saturation zone in the cosine function, which can impede
gradients and hinder the optimization process. To set up a comprehensive STS
evaluation, we experimented on existing short-text STS datasets and a newly
collected long-text STS dataset from GitHub Issues. Furthermore, we examine
domain-specific STS scenarios with limited labeled data and explore how AnglE
works with LLM-annotated data. Extensive experiments were conducted on various
tasks including short-text STS, long-text STS, and domain-specific STS tasks.
The results show that AnglE outperforms the state-of-the-art (SOTA) STS models
that ignore the cosine saturation zone. These findings demonstrate the ability
of AnglE to generate high-quality text embeddings and the usefulness of angle
optimization in STS.
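
The saturation issue can be seen directly: the gradient of a cosine-similarity objective shrinks as the predicted similarity approaches $\pm 1$. The toy check below is only meant to illustrate that effect (it is not the AnglE objective; the vectors and loss are arbitrary).

```python
import torch
import torch.nn.functional as F

def grad_norm_of_cosine_loss(a: torch.Tensor, b: torch.Tensor) -> float:
    a = a.clone().requires_grad_(True)
    loss = 1.0 - F.cosine_similarity(a, b, dim=0)  # push cos(a, b) toward 1
    loss.backward()
    return a.grad.norm().item()

b = torch.tensor([1.0, 0.0])
print(grad_norm_of_cosine_loss(torch.tensor([1.0, 0.01]), b))  # nearly aligned: ~0.01 (saturated)
print(grad_norm_of_cosine_loss(torch.tensor([1.0, 1.00]), b))  # 45 degrees apart: ~0.5
```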
|
http://arxiv.org/abs/2309.12871v8
|
We show that minimally 3-rigid block-and-hole graphs, with one block or one
hole, are characterised as those which are constructible from $K_3$ by vertex
splitting, and also, as those having associated looped face graphs which are
$(3,0)$-tight. This latter property can be verified in polynomial time by a
form of pebble game algorithm. We also indicate connections to the rigidity
properties of polyhedral surfaces known as origami and to graph rigidity in
$\ell_p^3$ for $p\not=2$.
|
http://arxiv.org/abs/2309.06804v1
|
In many applications, a combinatorial problem must be repeatedly solved with
similar, but distinct parameters. Yet, the parameters $w$ are not directly
observed; only contextual data $d$ that correlates with $w$ is available. It is
tempting to use a neural network to predict $w$ given $d$. However, training
such a model requires reconciling the discrete nature of combinatorial
optimization with the gradient-based frameworks used to train neural networks.
We study the case where the problem in question is an Integer Linear Program
(ILP). We propose applying a three-operator splitting technique, also known as
Davis-Yin splitting (DYS), to the quadratically regularized continuous
relaxation of the ILP. We prove that the resulting scheme is compatible with
the recently introduced Jacobian-free backpropagation (JFB). Our experiments on
two representative ILPs, the shortest path problem and the knapsack problem,
demonstrate that this combination (DYS on the forward pass, JFB on the backward
pass) yields a scheme which scales more effectively to high-dimensional problems
than existing schemes. All code associated with this paper is available at
github.com/mines-opt-ml/fpo-dys.
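
To make the forward pass concrete, here is a small numerical sketch of Davis-Yin splitting applied to a quadratically regularized LP relaxation, $\min_x w^\top x + \tfrac{\alpha}{2}\|x\|^2$ over $\{x \in [0,1]^n : Ax = b\}$, with the box constraint, the equality constraint, and the smooth term as the three operators. The step size, regularization weight, and dense pseudoinverse projection are illustrative choices, not the paper's implementation (which pairs DYS with JFB for training).

```python
import numpy as np

def davis_yin_lp(w, A, b, alpha=1.0, gamma=0.5, n_iter=500):
    """Davis-Yin splitting for min w.x + (alpha/2)||x||^2 over [0,1]^n subject to Ax = b."""
    pinv = np.linalg.pinv(A)                       # used to project onto {x : Ax = b}
    proj_box = lambda x: np.clip(x, 0.0, 1.0)      # prox of the box indicator
    proj_eq = lambda x: x - pinv @ (A @ x - b)     # prox of the equality indicator
    grad_smooth = lambda x: w + alpha * x          # gradient of the smooth term
    z = np.zeros_like(w, dtype=float)
    for _ in range(n_iter):
        x = proj_box(z)
        y = proj_eq(2 * x - z - gamma * grad_smooth(x))
        z = z + y - x                              # relaxation parameter fixed to 1
    return proj_box(z)
```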
|
http://arxiv.org/abs/2301.13395v4
|
PSO J334.2028+1.4075 (PSO J334) is a luminous quasar located at redshift
z=2.06. The source gained attention when periodic flux density variations were
discovered in its optical light curve. These variations were initially
interpreted as the variability due to the orbital motion of a supermassive
black hole binary (SMBHB) residing in a single circumbinary accretion disk.
However, subsequent multiwavelength observations provided evidence against the
binary hypothesis as no optical periodicity was found on extended time
baselines. On the other hand, detailed radio analysis with the Karl G. Jansky
Very Large Array (VLA) and the Very Long Baseline Array (VLBA) revealed a
lobe-dominated quasar at kpc scales, and possibly a precessing jet, which could
retain PSO J334 as a binary SMBH candidate. We aim to study both the large- and
small-scale radio structures in PSO J334 to provide additional evidence for or
against the binary scenario. We observed the source at 1.7 GHz with the
European Very Long Baseline Interferometry Network (EVN), and at 1.5 and 6.2
GHz with the VLA, at frequencies that complement the previous radio
interferometric study. Our images reveal a single component at parsec scales
slightly resolved in the southeast-northwest direction and a lobe-dominated
quasar at kiloparsec scales with a complex structure. The source morphology and
polarization in our VLA maps suggest that the jet is interacting with dense
clumps of the ambient medium. While we also observe a misalignment between the
inner jet and the outer lobes, we suggest that this is due to the restarted
nature of the radio jet activity and the possible presence of a warped
accretion disk rather than due to the perturbing effects of a companion SMBH.
Our analysis suggests that PSO J334 is most likely a jetted AGN with a single
SMBH, and there is no clear evidence of a binary SMBH system in its central
engine.
|
http://arxiv.org/abs/2306.17632v1
|
The rise of data-intensive applications exposed the limitations of
conventional processor-centric von-Neumann architectures that struggle to meet
the off-chip memory bandwidth demand. Therefore, recent innovations in computer
architecture advocate compute-in-memory (CIM) and compute-near-memory (CNM),
non-von-Neumann paradigms achieving orders-of-magnitude improvements in
performance and energy consumption. Despite significant technological
breakthroughs in the last few years, the programmability of these systems is
still a serious challenge. Their programming models are too low-level and
specific to particular system implementations. Since such future architectures
are predicted to be highly heterogeneous, developing novel compiler abstractions
and frameworks becomes necessary. To this end, we present CINM (Cinnamon), a
first end-to-end compilation flow that leverages hierarchical abstractions to
generalize over different CIM and CNM devices and enable device-agnostic and
device-aware optimizations. Cinnamon progressively lowers input programs and
performs optimizations at each level in the lowering pipeline. To show its
efficacy, we evaluate CINM on a set of benchmarks for the well-known UPMEM CNM
system and memristor-based CIM accelerators. We show that Cinnamon,
supporting multiple hardware targets, generates high-performance code
comparable to or better than state-of-the-art implementations.
|
http://arxiv.org/abs/2301.07486v4
|
Organic molecular solids can exhibit rich phase diagrams. In addition to
structurally unique phases, translational and rotational degrees of freedom can
melt at different state points, giving rise to partially disordered solid
phases. The structural and dynamic disorder in these materials can have a
significant impact on the physical properties of the organic solid,
necessitating a thorough understanding of disorder at the atomic scale. When
these disordered phases form at low temperatures, especially in crystals with
light nuclei, the prediction of materials properties can be complicated by the
importance of nuclear quantum effects. As an example, we investigate nuclear
quantum effects on the structure and dynamics of the
orientationally-disordered, translationally-ordered plastic phase of the
acetylene:ammonia (1:1) co-crystal that is expected to exist on the surface of
Saturn's moon Titan. Titan's low surface temperature (~90 K) suggests that the
quantum mechanical behavior of nuclei may be important in this and other
molecular solids in these environments. By using neural network potentials
combined with ring polymer molecular dynamics simulations, we show that nuclear
quantum effects increase orientational disorder and rotational dynamics within
the acetylene:ammonia (1:1) co-crystal by weakening hydrogen bonds. Our results
suggest that nuclear quantum effects are important to accurately model
molecular solids and their physical properties in low temperature environments.
|
http://arxiv.org/abs/2310.00480v1
|
Comets are considered a potential source of inner solar system volatiles, but
the timing of this delivery relative to that of Earth's accretion is still
poorly understood. Measurements of xenon isotopes in comet
67P/Churyumov-Gerasimenko revealed that comets partly contributed to the
Earth's atmosphere. However, there is no conclusive evidence of a significant
cometary component in the Earth's mantle. These geochemical constraints would
favour a contribution of comets mainly occurring after the last stages of
Earth's formation. Here, we evaluate whether dynamical simulations satisfy
these constraints in the context of an Early Instability model. We perform
dynamical simulations of the solar system, calculate the probability of
collision between comets and the component embryos of Earth analogs through time and
estimate the total cometary mass accreted in Earth analogs as a function of
time. While our results are in excellent agreement with geochemical
constraints, we also demonstrate that the contribution of comets to Earth might
have been delayed with respect to the timing of the instability, due to a
stochastic component of the bombardment. More importantly, we show that it is
possible that enough cometary mass has been brought to Earth after it had
finished forming so that the xenon constraint is not necessarily in conflict
with an Early Instability scenario. However, it appears very likely that a few
comets were delivered to Earth early in its accretion history, thus
contributing to the mantle's budget. Finally, we compare the delivery of
cometary material to Earth with that to Venus and Mars. These results emphasize the
stochastic nature of the cometary bombardment in the inner solar system.
|
http://arxiv.org/abs/2309.03954v1
|
The training of a parameterized model largely depends on the landscape of the
underlying loss function. In particular, vanishing gradients are a central
bottleneck in the scalability of variational quantum algorithms (VQAs), and are
known to arise in various ways. However, a caveat of most existing gradient
bound results is the requirement of t-design circuit assumptions that are
typically not satisfied in practice. In this work, we loosen these assumptions
altogether and derive tight upper and lower bounds on loss and gradient
concentration for a large class of parameterized quantum circuits and arbitrary
observables, which are significantly stronger than prior work. Moreover, we
show that these bounds, as well as the variance of the loss itself, can be
estimated efficiently and classically, providing practical tools to study the
loss landscapes of VQA models, including verifying whether or not a
circuit/observable induces barren plateaus. In particular, our results can
readily be leveraged to rule out barren plateaus for a realistic class of
ans\"atze and mixed observables, namely, observables containing a non-vanishing
local term. This insight has direct implications for hybrid Quantum Generative
Adversarial Networks (qGANs). We prove that designing the discriminator
appropriately leads to 1-local weights that stay constant in the number of
qubits, regardless of discriminator depth. This implies that qGANs with
appropriately chosen generators do not suffer from barren plateaus even at
scale, making them a promising candidate for applications in generative quantum
machine learning. We demonstrate this result by training a qGAN to learn a 2D
mixture of Gaussian distributions with up to 16 qubits, and provide numerical
evidence that global contributions to the gradient, while initially
exponentially small, may kick in substantially over the course of training.
|
http://arxiv.org/abs/2309.12681v3
|
Combinatorial optimization is one of the fields where near term quantum
devices are being utilized with hybrid quantum-classical algorithms to
demonstrate potentially practical applications of quantum computing. One of the
most well studied problems in combinatorial optimization is the Max-Cut
problem. The problem is also highly relevant to quantum and other types of
"post Moore" architectures due to its similarity with the Ising model and other
reasons. In this paper, we introduce a scalable hybrid multilevel approach to
solve large instances of Max-Cut using both classical-only solvers and the quantum
approximate optimization algorithm (QAOA). We compare the results of our solver
to existing state-of-the-art large-scale Max-Cut solvers. We demonstrate
excellent performance of both classical and hybrid quantum-classical approaches
and show that using QAOA within our framework is comparable to classical
approaches.
|
http://arxiv.org/abs/2309.08815v1
|
Whether or not $z \gtrsim 6$ quasars lie in the most massive dark-matter
halos of the Universe is still a subject of dispute. While most theoretical
studies support this scenario, current observations yield discordant results
when they probe the halo mass through the detection rate of quasar companion
galaxies. Feedback processes from supermassive black holes and dust obscuration
have been blamed for this discrepancy, but the impact of these effects is
complex and far from being clearly understood. This paper aims to improve the
interpretation of current far-infrared observations by taking into account the
cosmological volume probed by the Atacama Large Millimeter/submillimeter Array
(ALMA) and to explain the observational discrepancies. We statistically
investigate the detection rate of quasar companions in current observations and
verify if they match the expected distribution from various theoretical models,
once convolved with the ALMA field-of-view, through the use of Monte Carlo
simulations. We demonstrate that the telescope geometrical bias is fundamental
and can alone explain the scatter in the number of detected satellite galaxies
in different observations. We conclude that the resulting companion densities
depend on the chosen galaxy distributions. According to our fiducial models,
current data favour a density scenario where quasars lie in dark-matter halos
of virial mass $M_{\rm vir} \gtrsim 10^{12}~{\rm M_{\odot}}$, in agreement with
most theoretical studies. According to our analysis, each quasar has about 2
companion galaxies, with a [CII] luminosity $L_{\rm [CII]} \gtrsim 10^8~{\rm
L}_{\odot}$, within a distance of about 1~Mpc from the quasar.
|
http://arxiv.org/abs/2309.03940v1
|
With the increasing penetration of Inverter-Based Resources (IBRs) and their
impact on power system stability and operation, the concept of
stability-constrained optimization has drawn significant attention from
researchers. In order to manage the parametric uncertainty due to inaccurate
modeling that influences the system dynamics, this work proposes a
distributionally robust stability constraint formulation. However, the
uncertainty of system dynamic parameters influences the stability constraints
indirectly through a nonlinear and implicit relationship. To address this
issue, a propagation mechanism from the uncertainty of the system dynamic
parameters to the stability constraint coefficients is established. Since these
coefficients are connected to the uncertain parameters through highly nonlinear
and implicit functions, an approximation approach utilizing Taylor expansion
and the Delta method is developed to estimate the statistical moments of the
stability constraint coefficients based on the first and second-order
derivatives, with which an ambiguity set for the distributionally robust
optimization can be formulated. The accuracy of the uncertainty propagation as
well as the effectiveness of the distributionally robust stability constraints
are demonstrated through detailed case studies in the modified IEEE 39-bus
system.
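
For reference, the first- and second-order Delta-method approximations alluded to above take the following generic form: if a stability-constraint coefficient is $c = g(\theta)$ for uncertain dynamic parameters $\theta$ with mean $\mu$ and covariance $\Sigma$, then (under smoothness assumptions, with notation chosen here purely for illustration)
$$\mathbb{E}[c] \approx g(\mu) + \tfrac{1}{2}\,\mathrm{tr}\!\big(\nabla^2 g(\mu)\,\Sigma\big), \qquad \mathrm{Var}[c] \approx \nabla g(\mu)^{\top}\,\Sigma\,\nabla g(\mu),$$
and the resulting moment estimates can be used to parameterize the ambiguity set of the distributionally robust formulation.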
|
http://arxiv.org/abs/2309.03798v2
|
We have devised a data-driven framework for uncovering hidden control
strategies used by an evolutionary system described by an evolutionary
probability distribution. This innovative framework enables deciphering of the
concealed mechanisms that contribute to the progression or mitigation of such
situations as the spread of COVID-19. Novel algorithms are used to estimate the
optimal control in tandem with the parameters for evolution in general
dynamical systems, thereby extending the concept of model predictive control.
This is a significant departure from conventional control methods, which
require knowledge of the system to manipulate its evolution and of the
controller's strategy or parameters. We used a generalized additive model,
supplemented by extensive statistical testing, to identify a set of predictor
covariates closely linked to the control. Using real-world COVID-19 data, we
successfully delineated the descriptive behaviors of the COVID-19 epidemics in
five prefectures in Japan and nine countries. We compared these nine countries
and grouped them on the basis of shared profiles, providing valuable insights
into their pandemic responses. Our findings underscore the potential of our
framework as a powerful tool for understanding and managing complex
evolutionary processes.
|
http://arxiv.org/abs/2309.15844v1
|
Based on work presented in [4], we define $S^2$-Upper Triangular Matrices and
$S^2$-Lower Triangular Matrices, two special types of $d\times d(2d-1)$
matrices generalizing Upper and Lower Triangular Matrices, respectively. Then,
we show that the property that the determinant of an Upper Triangular Matrix is
the product of its diagonal entries is generalized under our construction.
Further, we construct the algebra of $S^2$-Upper Triangular Matrices and give
conditions for an LU-Decomposition with $S^2$-Lower Triangular and $S^2$-Upper
Triangular Matrices, respectively.
|
http://arxiv.org/abs/2310.00494v1
|
Citation maturity time varies for different articles. However, the impact of
all articles is measured in a fixed window. Clustering their citation
trajectories helps understand the knowledge diffusion process and reveals that
not all articles gain immediate success after publication. Moreover, clustering
trajectories is necessary for paper impact recommendation algorithms. It is a
challenging problem because citation time series exhibit significant
variability due to nonlinear and nonstationary characteristics. Prior works
propose a set of arbitrary thresholds and a fixed rule-based approach. All
methods are primarily parameter-dependent. Consequently, this leads to
inconsistencies while defining similar trajectories and ambiguities regarding
their specific number. Most studies only capture extreme trajectories. Thus, a
generalised clustering framework is required. This paper proposes a
feature-based multiple k-means cluster ensemble framework. 195,783 and 41,732
well-cited articles from the Microsoft Academic Graph data are considered for
clustering short-term (10-year) and long-term (30-year) trajectories,
respectively. It has linear run time. Four distinct trajectories are obtained:
Early Rise-Rapid Decline (2.2%), Early Rise-Slow Decline (45%), Delayed Rise-No
Decline (53%), and Delayed Rise-Slow Decline (0.8%). Individual trajectory
differences for the two different spans are studied. Most papers exhibit Early
Rise-Slow Decline and Delayed Rise-No Decline patterns. The growth and decay times,
cumulative citation distribution, and peak characteristics of individual
trajectories are redefined empirically. A detailed comparative study reveals
our proposed methodology can detect all distinct trajectory classes.
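
Since the abstract does not spell out the ensemble step, the following is only a generic sketch of a feature-based multiple k-means cluster ensemble: several k-means runs vote through a co-association matrix, which is then clustered into the final trajectory classes. The feature choices, k values, and consensus step are assumptions for illustration, not the paper's exact framework.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cluster_ensemble(features, k_values=(3, 4, 5, 6), n_seeds=10, n_final=4):
    """features: (n_articles, n_features) array of normalized trajectory descriptors,
    e.g. peak year, peak citations, growth and decay rates."""
    n = features.shape[0]
    coassoc = np.zeros((n, n))
    runs = 0
    for k in k_values:
        for seed in range(n_seeds):
            labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(features)
            coassoc += (labels[:, None] == labels[None, :])
            runs += 1
    coassoc /= runs  # fraction of runs in which two articles fall in the same cluster
    # Consensus step: cluster articles by their co-association profiles.
    return KMeans(n_clusters=n_final, random_state=0, n_init=10).fit_predict(coassoc)
```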
|
http://arxiv.org/abs/2309.04949v1
|
QUIC is a new protocol standardized in 2021 designed to improve on the widely
used TCP / TLS stack. The main goal is to speed up web traffic via HTTP, but it
is also used in other areas like tunneling. Based on UDP it offers features
like reliable in-order delivery, flow and congestion control, stream-based
multiplexing, and always-on encryption using TLS 1.3. Unlike TCP, QUIC
implements all these features in user space, only requiring kernel interaction
for UDP. While running in user space provides more flexibility, it profits less
from efficiency and optimization within the kernel. Multiple implementations
exist, differing in programming language, architecture, and design choices.
This paper presents an extension to the QUIC Interop Runner, a framework for
testing interoperability of QUIC implementations. Our contribution enables
reproducible QUIC benchmarks on dedicated hardware. We provide baseline results
on 10G links, including multiple implementations, evaluate how OS features like
buffer sizes and NIC offloading impact QUIC performance, and show which data
rates can be achieved with QUIC compared to TCP. Our results show that QUIC
performance varies widely between client and server implementations from 90
Mbit/s to 4900 Mbit/s. We show that the OS generally sets the default buffer
size too small, which should be increased by at least an order of magnitude
based on our findings. Furthermore, QUIC benefits less from NIC offloading and
AES NI hardware acceleration while both features improve the goodput of TCP to
around 8000 Mbit/s. Our framework can be applied to evaluate the effects of
future improvements to the protocol or the OS.
|
http://arxiv.org/abs/2309.16395v1
|
A novel mode-selective thermo-optic phase shifter (MS-TOPS) enabled by
subwavelength grating (SWG) structures is proposed and experimentally
demonstrated on a 220 nm waveguide thick silicon photonics chip for the first
two quasi-transverse electric modes (TE0, TE1). Mode-selective relative phase
manipulation of modes unlocks several processing tasks in mode division
multiplexing systems. This integrated solution provides a direct phase
manipulation of modes without converting them to their fundamental modes. A
Mach-Zehnder interferometer is deployed as a test structure incorporating the
proposed MS-TOPS in one arm and a mode-insensitive thermo-optic phase shifter
(MI-TOPS) in another. The effect of the SWG duty cycle ratio is investigated by
both numerical simulations and experimental measurements. A mode-selectivity of
1.44 is experimentally demonstrated. In other words, the thermo-optic
coefficient of TE0 is 44% larger than that of TE1. The phase shifter's
insertion loss is at most 2.5 dB, with a worst-case crosstalk of -13.1 dB over a
40 nm wavelength range from 1520 to 1560 nm. A cascaded configuration of the
proposed MS-TOPS and an MI-TOPS provides sufficient degrees of freedom to
manipulate the relative phase of each mode independently. Numerous potential
applications of such devices include optical switching, multimode quantum
optical processors, and scaling-up conventional optical processors with a mode
selective building block.
|
http://arxiv.org/abs/2307.16639v1
|
The assumption of no unmeasured confounders is a critical but unverifiable
assumption required for causal inference, yet quantitative sensitivity analyses
to assess the robustness of real-world evidence remain underutilized. The lack of
use is likely in part due to the complexity of implementation and the often specific
and restrictive data requirements for applying each method. With
the advent of sensitivity analyses methods that are broadly applicable in that
they do not require identification of a specific unmeasured confounder, along
with publicly available code for implementation, roadblocks toward broader use
are decreasing. To spur greater application, here we present a best practice
guidance to address the potential for unmeasured confounding at both the design
and analysis stages, including a set of framing questions and an analytic
toolbox for researchers. The questions at the design stage guide the researcher
through steps evaluating the potential robustness of the design while
encouraging gathering of additional data to reduce uncertainty due to potential
confounding. At the analysis stage, the questions guide researchers toward
quantifying the robustness of the observed result, providing
a clearer indication of the robustness of their conclusions. We
demonstrate the application of the guidance using simulated data based on a
real-world fibromyalgia study, applying multiple methods from our analytic
toolbox for illustration purposes.
|
http://arxiv.org/abs/2309.07273v1
|
In order to use the Dual Simplex Method, one needs to prove a certain
bijection between the dictionaries associated with the primal problem and those
associated with its dual. We give a short conceptual proof of why this
bijection exists.
|
http://arxiv.org/abs/2310.02268v1
|
The momentum of light in a medium and the mechanisms of momentum transfer
between light and dielectrics have long been the topic of controversies and
confusion. We discuss here the problem of momentum transfers that follow the
refraction of light by dilute, inhomogeneous ensembles of ultra-cold atoms. We
show experimentally and theoretically that the refraction of light rays by a
dilute gas does not entail momentum transfers to first order in the light-atom
coupling coefficient, in contradiction with the work reported in Matzliah et
al. Phys. Rev. Lett. 119, 189902 (2017).
|
http://arxiv.org/abs/2309.05464v1
|
In recent years face recognition systems have been brought to the mainstream
due to development in hardware and software. Consistent efforts are being made
to make them better and more secure. This has also brought developments in 3D
face recognition systems at a rapid pace. These 3DFR systems are expected to
overcome certain vulnerabilities of 2DFR systems. One such problem that the
domain of 2DFR systems faces is face image morphing. A substantial amount of
research is being done for generation of high quality face morphs along with
detection of attacks from these morphs. Comparatively, the vulnerability of
3DFR systems to 3D face morphs is less well understood. At the same
time, 3DFR systems are expected to be more robust against such
attacks. This paper attempts to shed more light on this
matter. The paper describes a couple of methods that can be used to generate 3D
face morphs. The face morphs generated using these methods are then
compared to the contributing faces to obtain similarity scores. The highest
MMPMR obtained is around 40%, with an RMMR of 41.76%, when the 3DFR systems are
attacked with look-alike morphs.
|
http://arxiv.org/abs/2309.12118v1
|
In this work, we use the communication of intent as a means to facilitate
cooperation between autonomous vehicle agents. Generally speaking, an intent can
be any reliable information about a vehicle's future behavior that it
communicates to another vehicle. We implement this as an intent-sharing task
atop the merging environment in the highway-env simulator, which provides a
collection of environments for learning decision-making strategies for
autonomous vehicles. Under a simple setting between two agents, we carefully
investigate how intent-sharing can aid the receiving vehicle in adjusting its
behavior in highway merging scenarios.
|
http://arxiv.org/abs/2309.13206v1
|
By tracking trajectories of dark matter (DM) particles accreting onto haloes
in cosmological $N$-body simulations, we investigate the radial phase-space
distribution of cold dark matter (CDM) haloes, paying attention to their inner
regions deep inside the halo boundary called the splashback radius, where the
particles undergo multi-stream flows. Improving the analysis by Sugiura et al.,
we classify DM particles by the number of apocenter passages, $p$, and count it
up to $p=40$ for each halo over a wide mass range. Quantifying the radial
density profile for particles having the same value of $p$, we find that it
generally exhibits a double-power law feature, whose indices of inner and outer
slopes are well-described by $-1$ and $-8$, respectively. Its characteristic
scale and density are given as a simple fitting function of $p$, with a weak
halo mass dependence. Interestingly, summing up these double-power law profiles
beyond $p=40$ reproduces well the total density profile of simulated haloes.
The double-power law nature is persistent and generic not only in mass-selected
haloes but also in haloes selected by different criteria. Our results are
compared with self-similar solutions that describe the stationary and spherical
accretion of DM. We find that even when introducing a non-zero angular
momentum, none of them explain the radial multi-stream structure. The analysis
with particle trajectories tracing back to higher redshifts suggests that the
double-power law nature has been established during an early accretion phase
and remains stable.
|
http://arxiv.org/abs/2309.13560v3
|
In this work, we consider the optimization process of minibatch stochastic
gradient descent (SGD) on a 2-layer neural network with data separated by a
quadratic ground truth function. We prove that with data drawn from the
$d$-dimensional Boolean hypercube labeled by the quadratic ``XOR'' function $y
= -x_ix_j$, it is possible to train to a population error $o(1)$ with $d
\:\text{polylog}(d)$ samples. Our result considers simultaneously training both
layers of the two-layer-neural network with ReLU activations via standard
minibatch SGD on the logistic loss. To our knowledge, this work is the first to
give a sample complexity of $\tilde{O}(d)$ for efficiently learning the XOR
function on isotropic data on a standard neural network with standard training.
Our main technique is showing that the network evolves in two phases: a
$\textit{signal-finding}$ phase where the network is small and many of the
neurons evolve independently to find features, and a $\textit{signal-heavy}$
phase, where SGD maintains and balances the features. We leverage the
simultaneous training of the layers to show that it is sufficient for only a
small fraction of the neurons to learn features, since those neurons will be
amplified by the simultaneous growth of their second layer weights.
|
http://arxiv.org/abs/2309.15111v2
|
Virtual and augmented realities are increasingly popular tools in many
domains such as architecture, production, training and education,
(psycho)therapy, gaming, and others. For a convincing rendering of sound in
virtual and augmented environments, audio signals must be convolved in
real-time with impulse responses that change from one moment in time to
another. Key requirements for the implementation of such time-variant real-time
convolution algorithms are short latencies, moderate computational cost and
memory footprint, and no perceptible switching artifacts. In this engineering
report, we introduce a partitioned convolution algorithm that is able to
quickly switch between impulse responses without introducing perceptible
artifacts, while maintaining a constant computational load and low memory
usage. Implementations in several popular programming languages are freely
available via GitHub.
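
As a simplified illustration of time-variant convolution with smooth switching, the sketch below processes the input block by block with overlap-add and crossfades the outputs of the old and new impulse responses whenever the selection changes. This plain block scheme is not the uniformly partitioned algorithm of the report; block size and fade shape are arbitrary choices.

```python
import numpy as np

def block_convolve_switchable(x, irs, ir_index_per_block, block=256):
    """Convolve x block by block, crossfading whenever the selected impulse response changes.

    irs: list of impulse responses (1-D arrays of equal length L).
    ir_index_per_block: index of the IR to use for each block of x.
    """
    L = len(irs[0])
    n_blocks = int(np.ceil(len(x) / block))
    out = np.zeros(n_blocks * block + L - 1)
    prev = ir_index_per_block[0]
    for b in range(n_blocks):
        seg = x[b * block:(b + 1) * block]
        cur = ir_index_per_block[b]
        y = np.convolve(seg, irs[cur])
        if cur != prev:                         # linear crossfade from the old IR to the new one
            y_old = np.convolve(seg, irs[prev])
            fade = np.ones(len(y))
            m = min(block, len(y))
            fade[:m] = np.linspace(0.0, 1.0, m)
            y = fade * y + (1.0 - fade) * y_old
        out[b * block:b * block + len(y)] += y  # overlap-add the convolution tails
        prev = cur
    return out[:len(x) + L - 1]
```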
|
http://arxiv.org/abs/2310.00319v1
|
We propose a Small Area Estimation model based on Generalized Additive Models
for Location, Scale and Shape (SAE-GAMLSS), for the estimation of household
economic indicators. SAE-GAMLSS relax the exponential family distributional
assumption and allow each distributional parameter to depend on covariates. A
bootstrap approach to estimate MSE is proposed. The SAE-GAMLSS estimator shows
a largely better performance than the well-known EBLUP, under various simulated
scenarios. Based on SAE-GAMLSS, the per-capita consumption of Italian and foreign
households in Italian regions, in urban and rural areas, is estimated. Results
show that the well-known Italian North-South divide does not hold for
foreigners.
|
http://arxiv.org/abs/2302.00108v4
|
We present a novel efficient theoretical and numerical framework for solving
global non-convex polynomial optimization problems. We analytically demonstrate
that such problems can be efficiently reformulated using a non-linear objective
over a convex set; further, these reformulated problems possess no spurious
local minima (i.e., every local minimum is a global minimum). We introduce an
algorithm for solving these resulting problems using the augmented Lagrangian
and the method of Burer and Monteiro. We show through numerical experiments
that polynomial scaling in dimension and degree is achievable for computing the
optimal value and location of previously intractable global polynomial
optimization problems in high dimension.
|
http://arxiv.org/abs/2308.16731v2
|
In this work, we address music representation learning using convolution-free
transformers. We build on top of existing spectrogram-based audio transformers
such as AST and train our models on a supervised task using patchout training
similar to PaSST. In contrast to previous works, we study how specific design
decisions affect downstream music tagging tasks instead of focusing on the
training task. We assess the impact of initializing the models with different
pre-trained weights, using various input audio segment lengths, using learned
representations from different blocks and tokens of the transformer for
downstream tasks, and applying patchout at inference to speed up feature
extraction. We find that 1) initializing the model from ImageNet or AudioSet
weights and using longer input segments are beneficial both for the training
and downstream tasks, 2) the best representations for the considered downstream
tasks are located in the middle blocks of the transformer, and 3) using
patchout at inference allows faster processing than our convolutional baselines
while maintaining superior performance. The resulting models, MAEST, are
publicly available and obtain the best performance among open models in music
tagging tasks.
|
http://arxiv.org/abs/2309.16418v1
|
Inverse text normalization (ITN) is crucial for converting spoken-form into
written-form, especially in the context of automatic speech recognition (ASR).
While most downstream tasks of ASR rely on written-form, ASR systems often
output spoken-form, highlighting the necessity for robust ITN in product-level
ASR-based applications. Although neural ITN methods have shown promise, they
still encounter performance challenges, particularly when dealing with
ASR-generated spoken text. These challenges arise from the out-of-domain
problem between training data and ASR-generated text. To address this, we
propose a direct training approach that utilizes ASR-generated written or
spoken text, with pairs augmented through ASR linguistic context emulation and
a semi-supervised learning method enhanced by a large language model,
respectively. Additionally, we introduce a post-aligning method to manage
unpredictable errors, thereby enhancing the reliability of ITN. Our experiments
show that our proposed methods remarkably improved ITN performance in various
ASR scenarios.
|
http://arxiv.org/abs/2309.08626v1
|
Sketch-based terrain generation seeks to create realistic landscapes for
virtual environments in various applications such as computer games, animation
and virtual reality. Recently, deep learning based terrain generation has
emerged, notably the ones based on generative adversarial networks (GAN).
However, these methods often struggle to fulfill the requirements of flexible
user control and maintain generative diversity for realistic terrain.
Therefore, we propose a novel diffusion-based method, namely terrain diffusion
network (TDN), which actively incorporates user guidance for enhanced
controllability, taking into account terrain features like rivers, ridges,
basins, and peaks. Instead of adhering to a conventional monolithic denoising
process, which often compromises the fidelity of terrain details or the
alignment with user control, a multi-level denoising scheme is proposed to
generate more realistic terrains by taking into account fine-grained details,
particularly those related to climatic patterns influenced by erosion and
tectonic activities. Specifically, three terrain synthesisers are designed for
structural, intermediate, and fine-grained level denoising purposes, which
allow each synthesiser to concentrate on a distinct terrain aspect. Moreover, to
maximise the efficiency of our TDN, we further introduce terrain and sketch
latent spaces for the synthesisers with pre-trained terrain autoencoders.
Comprehensive experiments on a new dataset constructed from NASA Topology
Images clearly demonstrate the effectiveness of our proposed method, achieving
state-of-the-art performance. Our code and dataset will be publicly
available.
|
http://arxiv.org/abs/2308.16725v1
|
Inspired by recent work demonstrating the promise of smaller
Transformer-based language models pretrained on carefully curated data, we
supercharge such approaches by investing heavily in curating a novel, high
quality, non-synthetic data mixture based solely on evaluation benchmarks.
Using our novel dataset mixture consisting of less than 100 thousand tokens, we
pretrain a 1 million parameter transformer-based LLM \textbf{phi-CTNL}
(pronounced ``fictional'') that achieves perfect results across diverse academic
benchmarks, strictly outperforming all known foundation models.
\textbf{phi-CTNL} also beats power-law scaling and exhibits a never-before-seen
grokking-like ability to accurately predict downstream evaluation benchmarks'
canaries.
|
http://arxiv.org/abs/2309.08632v1
|
Using local operations and classical communication (LOCC), entanglement can
be manipulated but not created. However, entanglement can be embezzled. In this
work, we completely characterize universal embezzling families and demonstrate
how this singles out the original family introduced by van Dam and Hayden. To
achieve this, we first give a full characterization of pure to mixed state
LOCC-conversions. Then, we introduce a new conversion distance and derive a
closed-form expression for it. These results might be of independent interest.
|
http://arxiv.org/abs/2303.17749v3
|
An irreducible polynomial $f\in\Bbb F_q[X]$ of degree $n$ is {\em normal}
over $\Bbb F_q$ if and only if its roots $r, r^q,\dots,r^{q^{n-1}}$ satisfy the
condition $\Delta_n(r, r^q,\dots,r^{q^{n-1}})\ne 0$, where
$\Delta_n(X_0,\dots,X_{n-1})$ is the $n\times n$ circulant determinant. By
finding a suitable {\em symmetrization} of $\Delta_n$ (a multiple of $\Delta_n$
which is symmetric in $X_0,\dots,X_{n-1}$), we obtain a condition on the
coefficients of $f$ that is sufficient for $f$ to be normal. This approach
works well for $n\le 5$ but encounters computational difficulties when $n\ge
6$. In the present paper, we consider irreducible polynomials of the form
$f=X^n+X^{n-1}+a\in\Bbb F_q[X]$. For $n=6$ and $7$, by an indirect method, we
are able to find simple conditions on $a$ that are sufficient for $f$ to be
normal. In a more general context, we also explore the normal polynomials of a
finite Galois extension through the irreducible characters of the Galois group.
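For concreteness, the circulant determinant governing normality can be written
out explicitly for small $n$; for $n=3$ (our own worked illustration, not taken
from the paper),
$$\Delta_3(X_0,X_1,X_2)=\det\begin{pmatrix}X_0 & X_1 & X_2\\ X_2 & X_0 & X_1\\ X_1 & X_2 & X_0\end{pmatrix}=X_0^3+X_1^3+X_2^3-3X_0X_1X_2,$$
so a cubic irreducible $f$ is normal exactly when its roots do not annihilate
this form.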
|
http://arxiv.org/abs/2309.05470v1
|
Diffusion models achieve great success in generating diverse and
high-fidelity images, yet their widespread application, especially in real-time
scenarios, is hampered by their inherently slow generation speed. The slow
generation stems from the necessity of multi-step network inference. While
certain predictions benefit from the full computation of the model in each
sampling iteration, not every iteration requires the same amount of
computation, potentially leading to inefficient computation. Unlike typical
adaptive computation challenges that deal with single-step generation problems,
diffusion processes with a multi-step generation need to dynamically adjust
their computational resource allocation based on the ongoing assessment of each
step's importance to the final image output, presenting a unique set of
challenges. In this work, we propose AdaDiff, an adaptive framework that
dynamically allocates computation resources in each sampling step to improve
the generation efficiency of diffusion models. To assess the effects of changes
in computational effort on image quality, we present a timestep-aware
uncertainty estimation module (UEM). Integrated at each intermediate layer, the
UEM evaluates the predictive uncertainty. This uncertainty measurement serves
as an indicator for determining whether to terminate the inference process.
Additionally, we introduce an uncertainty-aware layer-wise loss aimed at
bridging the performance gap between full models and their adaptive
counterparts.
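As a rough illustration of the uncertainty-gated early exit described above
(module names, the toy architecture, and the threshold are our own assumptions,
not the paper's implementation):

```python
# Minimal sketch of uncertainty-gated early exit within one denoising step.
# All module names and the threshold value are illustrative assumptions.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    def __init__(self, dim=64, n_blocks=6, threshold=0.1):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_blocks)])
        # One lightweight uncertainty head per intermediate block (the "UEM" idea).
        self.uncertainty_heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_blocks)])
        self.out = nn.Linear(dim, dim)
        self.threshold = threshold

    def forward(self, x):
        for block, head in zip(self.blocks, self.uncertainty_heads):
            x = torch.relu(block(x))
            uncertainty = torch.sigmoid(head(x)).mean()
            if uncertainty < self.threshold:   # confident enough: stop this step early
                break
        return self.out(x)

model = ToyDenoiser()
noisy = torch.randn(8, 64)
denoised = model(noisy)            # per-step compute now depends on the exit point
print(denoised.shape)
```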
|
http://arxiv.org/abs/2309.17074v3
|
Simultaneous machine translation (SiMT) outputs translation while reading the
source sentence. Unlike conventional sequence-to-sequence (seq2seq) training,
existing SiMT methods adopt the prefix-to-prefix (prefix2prefix) training,
where the model predicts target tokens based on partial source tokens. However,
the prefix2prefix training diminishes the ability of the model to capture
global information and introduces forced predictions due to the absence of
essential source information. Consequently, it is crucial to bridge the gap
between the prefix2prefix training and seq2seq training to enhance the
translation capability of the SiMT model. In this paper, we propose a novel
method that glances at the future in curriculum learning to achieve the
transition from seq2seq training to prefix2prefix training. Specifically, we gradually
reduce the available source information from the whole sentence to the prefix
corresponding to that latency. Our method is applicable to a wide range of SiMT
methods and experiments demonstrate that our method outperforms strong
baselines.
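A toy schedule for such a curriculum might look as follows (the linear decay
and the function name are our own illustrative choices, not the paper's):

```python
# Sketch: gradually shrink the visible source context from the full sentence
# down to the prefix allowed by the target latency k (wait-k style).
def visible_source_length(src_len, target_pos, k, step, total_steps):
    """Number of source tokens the model may attend to at this training step."""
    full = src_len                         # seq2seq regime: whole sentence visible
    prefix = min(src_len, target_pos + k)  # prefix2prefix regime at latency k
    progress = min(step / total_steps, 1.0)
    # Linearly interpolate from full-sentence training to prefix-only training.
    return round(full + (prefix - full) * progress)

# Example: a 20-token source, predicting target position 3 with latency k=3.
for step in (0, 5000, 10000):
    print(step, visible_source_length(20, target_pos=3, k=3, step=step, total_steps=10000))
```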
|
http://arxiv.org/abs/2309.06179v1
|
Information about autonomic nervous system (ANS) activity may be valuable for
personalized atrial fibrillation (AF) treatment but is not easily accessible
from the ECG. In this study, we propose a new approach for ECG-based assessment
of respiratory modulation in AV nodal refractory period and conduction delay. A
1-dimensional convolutional neural network (1D-CNN) was trained to estimate
respiratory modulation of AV nodal conduction properties from 1-minute segments
of RR series, respiration signals, and atrial fibrillatory rates (AFR) using
synthetic data that replicates clinical ECG-derived data. The synthetic data
were generated using a network model of the AV node and 4 million unique model
parameter sets. The 1D-CNN was then used to analyze respiratory modulation in
clinical deep breathing test data of 28 patients in AF, where an ECG-derived
respiration signal was extracted using a novel approach based on periodic
component analysis. We demonstrated using synthetic data that the 1D-CNN can
predict the respiratory modulation from RR series alone ($\rho$ = 0.805) and
that the addition of either respiration signal ($\rho$ = 0.830), AFR ($\rho$ =
0.837), or both ($\rho$ = 0.855) improves the prediction. Results from analysis
of clinical ECG data of 20 patients with sufficient signal quality suggest that
respiratory modulation decreased in response to deep breathing for five
patients, increased for five patients, and remained similar for ten patients,
indicating a large inter-patient variability.
|
http://arxiv.org/abs/2309.05458v1
|
A harmonious coloring of a $k$-uniform hypergraph $H$ is a vertex coloring
such that no two vertices in the same edge have the same color, and each
$k$-element subset of colors appears on at most one edge. The harmonious number
$h(H)$ is the least number of colors needed for such a coloring.
The paper contains a new proof of the upper bound $h(H)=O(\sqrt[k]{k!m})$ on
the harmonious number of hypergraphs of maximum degree $\Delta$ with $m$ edges.
We use the local cut lemma of A. Bernshteyn.
|
http://arxiv.org/abs/2301.00302v3
|
Methods to detect malignant lesions from screening mammograms are usually
trained with fully annotated datasets, where images are labelled with the
localisation and classification of cancerous lesions. However, real-world
screening mammogram datasets commonly have a subset that is fully annotated and
another subset that is weakly annotated with just the global classification
(i.e., without lesion localisation). Given the large size of such datasets,
researchers usually face a dilemma with the weakly annotated subset: to not use
it or to fully annotate it. The first option will reduce detection accuracy
because it does not use the whole dataset, and the second option is too
expensive given that the annotation needs to be done by expert radiologists. In
this paper, we propose a middle-ground solution for the dilemma, which is to
formulate the training as a weakly- and semi-supervised learning problem that
we refer to as malignant breast lesion detection with incomplete annotations.
To address this problem, our new method comprises two stages, namely: 1)
pre-training a multi-view mammogram classifier with weak supervision from the
whole dataset, and 2) extending the trained classifier to become a multi-view
detector that is trained with semi-supervised student-teacher learning, where
the training set contains fully and weakly-annotated mammograms. We provide
extensive detection results on two real-world screening mammogram datasets
containing incomplete annotations, and show that our proposed approach achieves
state-of-the-art results in the detection of malignant breast lesions with
incomplete annotations.
|
http://arxiv.org/abs/2301.13418v4
|
An antimagic labeling of a graph $G(V,E)$ is a bijection $f: E \to \{1,2,
\dots, |E|\}$ so that $\sum_{e \in E(u)} f(e) \neq \sum_{e \in E(v)} f(e)$
holds for all $u, v \in V(G)$ with $u \neq v$, where $E(v)$ is the set of edges
incident to $v$. We call $G$ antimagic if it admits an antimagic labeling. A
forest is a graph without cycles; equivalently, every component of a forest is
a tree. It was proved by Kaplan, Lev, and Roditty [2009], and by Liang, Wong,
and Zhu [2014] that every tree with at most one vertex of degree-2 is
antimagic. A major tool used in the proof is the zero-sum partition introduced
by Kaplan, Lev, and Roditty [2009]. In this article, we provide an algorithmic
representation for the zero-sum partition method and apply this method to show
that every forest with at most one vertex of degree-2 is also antimagic.
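The definition can be checked by brute force on small graphs; a minimal sketch
(our own illustration of the definition, unrelated to the zero-sum partition
technique used in the proof):

```python
# Brute-force check that a small graph admits an antimagic labeling.
from itertools import permutations

def is_antimagic(vertices, edges):
    """edges: list of 2-tuples; tries all bijections f: E -> {1..|E|}."""
    m = len(edges)
    for labels in permutations(range(1, m + 1)):
        sums = {v: sum(l for e, l in zip(edges, labels) if v in e) for v in vertices}
        if len(set(sums.values())) == len(vertices):
            return True, dict(zip(edges, labels))
    return False, None

# A small path graph on 4 vertices (a tree, hence a forest).
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(is_antimagic(vertices, edges))
```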
|
http://arxiv.org/abs/2307.16836v1
|
Machine learning has been successfully applied to grid-based PDE modeling in
various scientific applications. However, learned PDE solvers based on
Lagrangian particle discretizations, which are the preferred approach to
problems with free surfaces or complex physics, remain largely unexplored. We
present LagrangeBench, the first benchmarking suite for Lagrangian particle
problems, focusing on temporal coarse-graining. In particular, our contribution
is: (a) seven new fluid mechanics datasets (four in 2D and three in 3D)
generated with the Smoothed Particle Hydrodynamics (SPH) method including the
Taylor-Green vortex, lid-driven cavity, reverse Poiseuille flow, and dam break,
each of which includes different physics like solid wall interactions or free
surface, (b) efficient JAX-based API with various recent training strategies
and three neighbor search routines, and (c) JAX implementation of established
Graph Neural Networks (GNNs) like GNS and SEGNN with baseline results. Finally,
to measure the performance of learned surrogates we go beyond established
position errors and introduce physical metrics like kinetic energy MSE and
Sinkhorn distance for the particle distribution. Our codebase is available at
https://github.com/tumaer/lagrangebench .
|
http://arxiv.org/abs/2309.16342v2
|
Leveraging Graphics Processing Units (GPUs) to accelerate scientific software
has proven to be highly successful, but in order to extract more performance,
GPU programmers must overcome the high latency costs associated with their use.
One method of reducing or hiding this latency cost is to use asynchronous
streams to issue commands to the GPU. While performant, the streams model is an
invasive abstraction, and has therefore proven difficult to integrate into
general-purpose libraries. In this work, we enumerate the difficulties specific
to library authors in adopting streams, and present recent work on addressing
them. Finally, we present a unified asynchronous programming model for use in
the Portable, Extensible Toolkit for Scientific Computation (PETSc) to
overcome these challenges. The new model shows broad performance benefits while
remaining ergonomic to the user.
|
http://arxiv.org/abs/2306.17801v1
|
Despite considerable advances in automated fake news detection, due to the
timely nature of news, it remains a critical open question how to effectively
predict the veracity of news articles based on limited fact-checks. Existing
approaches typically follow a "Train-from-Scratch" paradigm, which is
fundamentally bounded by the availability of large-scale annotated data. While
expressive pre-trained language models (PLMs) have been adapted in a
"Pre-Train-and-Fine-Tune" manner, the inconsistency between pre-training and
downstream objectives also requires costly task-specific supervision. In this
paper, we propose "Prompt-and-Align" (P&A), a novel prompt-based paradigm for
few-shot fake news detection that jointly leverages the pre-trained knowledge
in PLMs and the social context topology. Our approach mitigates label scarcity
by wrapping the news article in a task-related textual prompt, which is then
processed by the PLM to directly elicit task-specific knowledge. To supplement
the PLM with social context without inducing additional training overheads,
motivated by empirical observation on user veracity consistency (i.e., social
users tend to consume news of the same veracity type), we further construct a
news proximity graph among news articles to capture the veracity-consistent
signals in shared readerships, and align the prompting predictions along the
graph edges in a confidence-informed manner. Extensive experiments on three
real-world benchmarks demonstrate that P&A sets a new state-of-the-art in
few-shot fake news detection performance by significant margins.
|
http://arxiv.org/abs/2309.16424v1
|
Glitches are sudden spin-up events of pulsars and are usually thought to be
induced by unpinning of neutron superfluid vortices in pulsar crusts. Unpinning
and repinning of superfluid vortices, and even thermoelectric effects induced
by the deposited heat released during glitches, may vary the velocity fields in
pulsars. We show that the generally invoked magnetic dipole fields of pulsars
cannot remain stationary during the variation of the velocity fields, so that
multipole components must be generated. We argue that the increase of the spark
frequency of periodic radio pulses is the indicator for the emergence of the
multipole components. Interpretations of pulsar nulling, rebrightening of
radio-quiet magnetars, differences between Crab and Vela pulsars after
glitches, and extra-galactic fast radio burst-like events from SGR 1935+2154
have been proposed based on the influence of the variation of the velocity
field on the magnetic field.
|
http://arxiv.org/abs/2301.04602v2
|
Polarization is a unique tool to study the properties of dust grains of
protoplanetary disks and detail the initial conditions of planet formation.
Polarization around HL Tau was previously imaged using the Atacama Large
Millimeter/submillimeter Array (ALMA) at Bands 3 (3.1 mm), 6 (1.3 mm), and 7
(0.87 mm), showing that the polarization orientation changes across wavelength
$\lambda$. The polarization morphology at Band 7 is predominantly parallel to
the disk minor axis but appears azimuthally oriented at Band 3, with the
morphology at Band 6 in between the two. We present new ~0.2" (29 au)
polarization observations at Q-Band (7.0 mm) using the Karl G. Jansky Very
Large Array (VLA) and at Bands 4 (2.1 mm), 5 (1.5 mm), and 7 using ALMA,
consolidating HL Tau's position as the protoplanetary disk with the most
complete wavelength coverage in dust polarization. The polarization patterns at
Bands 4 and 5 continue to follow the morphological transition with wavelength
previously identified in Bands 3, 6, and 7. Based on the azimuthal variation,
we decompose the polarization into contributions from scattering ($s$) and
thermal emission ($t$). We find that $s$ decreases slowly with increasing
$\lambda$, and $t$ increases more rapidly with $\lambda$, as expected
from optical depth effects of toroidally aligned, scattering prolate grains.
The relatively weak $\lambda$ dependence of $s$ is consistent with large,
porous grains. The sparse polarization detections from the Q-band image are
also consistent with toroidally aligned prolate grains.
|
http://arxiv.org/abs/2309.10055v1
|
We introduce a novel family of mechanisms for constrained allocation problems
which we call local priority mechanisms. These mechanisms are parameterized by
a function which assigns a set of agents, the local compromisers, to every
infeasible allocation. The mechanism then greedily attempts to match agents
with their top choices. Whenever it reaches an infeasible allocation, the local
compromisers move to their next favorite alternative. Local priority mechanisms
exist for any constraint, so this provides a method of constructing new designs
for any constrained allocation problem. We give axioms which characterize local
priority mechanisms. Since constrained allocation includes many canonical
problems as special constraints, we apply this characterization to show that
several well-known mechanisms, including deferred acceptance for school choice,
top trading cycles for house allocation, and serial dictatorship can be
understood as instances of local priority mechanisms. Other mechanisms,
including the Boston mechanism, are not local priority mechanisms. We give
sufficient conditions for a local priority mechanism to be group
strategy-proof. We also provide conditions which enable welfare comparisons
across local priority mechanisms.
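A stylized sketch of the greedy procedure described above (the feasibility
predicate and the compromiser-selection function are placeholders we supply for
illustration, not part of the paper):

```python
# Sketch of a local priority mechanism: start everyone at their top choice and,
# whenever the allocation is infeasible, let the designated "local compromisers"
# move down to their next-favorite alternative.
def local_priority_mechanism(preferences, feasible, compromisers_of):
    """
    preferences: dict agent -> list of alternatives, best first.
    feasible(allocation): True if the allocation satisfies the constraint.
    compromisers_of(allocation): set of agents asked to compromise here.
    """
    pointer = {a: 0 for a in preferences}                # index into each preference list
    allocation = {a: prefs[0] for a, prefs in preferences.items()}
    while not feasible(allocation):
        for a in compromisers_of(allocation):
            pointer[a] += 1                              # move to next favorite
            allocation[a] = preferences[a][pointer[a]]
    return allocation

# Toy capacity constraint: two agents, one unit of good "x" available.
prefs = {"alice": ["x", "y"], "bob": ["x", "z"]}
feasible = lambda alloc: list(alloc.values()).count("x") <= 1
compromisers = lambda alloc: {"bob"}                     # bob compromises first
print(local_priority_mechanism(prefs, feasible, compromisers))
```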
|
http://arxiv.org/abs/2309.04020v2
|
This study provides Urdu poetry generated using different deep-learning
techniques and algorithms. The data was collected through the Rekhta website,
containing 1341 text files with several couplets. The data on poetry was not
from any specific genre or poet. Instead, it was a collection of mixed Urdu
poems and Ghazals. Different deep learning techniques have been used; in
particular, the model applies Long Short-Term Memory (LSTM) networks and Gated
Recurrent Units (GRU). Natural Language Processing (NLP) may be used in machine
learning to understand, analyze, and generate a language humans may use and
understand. Much work has been done on generating poetry for different
languages using different techniques. The collection and use of data were also
different for different researchers. The primary purpose of this project is to
provide a model that generates Urdu poems by using the data in full rather than
by sampling it. In addition, the model generates poems in pure Urdu, not Roman
Urdu as in the base paper. The results have shown good accuracy in the poems generated
by the model.
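For orientation, a character-level LSTM of the kind mentioned above can be set
up in a few lines (the hyperparameters and the tiny toy corpus are our own
placeholders, not the Rekhta data or the paper's configuration):

```python
# Minimal character-level LSTM language model sketch (illustrative only).
import torch
import torch.nn as nn

corpus = "dil se dil tak"                       # placeholder text, not the Rekhta corpus
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.head(out)                   # next-character logits per position

x = torch.tensor([[stoi[c] for c in corpus[:-1]]])
y = torch.tensor([[stoi[c] for c in corpus[1:]]])
model = CharLSTM(len(chars))
loss = nn.CrossEntropyLoss()(model(x).transpose(1, 2), y)
loss.backward()                                 # one toy training step
print(float(loss))
```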
|
http://arxiv.org/abs/2309.14233v1
|
The process of designing costmaps for off-road driving tasks is often a
challenging and engineering-intensive task. Recent work in costmap design for
off-road driving focuses on training deep neural networks to predict costmaps
from sensory observations using corpora of expert driving data. However, such
approaches are generally subject to over-confident mispredictions and are
rarely evaluated in-the-loop on physical hardware. We present an inverse
reinforcement learning-based method of efficiently training deep cost functions
that are uncertainty-aware. We do so by leveraging recent advances in highly
parallel model-predictive control and robotic risk estimation. In addition to
demonstrating improvement at reproducing expert trajectories, we also evaluate
the efficacy of these methods in challenging off-road navigation scenarios. We
observe that our method significantly outperforms a geometric baseline,
resulting in 44% improvement in expert path reconstruction and 57% fewer
interventions in practice. We also observe that varying the risk tolerance of
the vehicle results in qualitatively different navigation behaviors, especially
with respect to higher-risk scenarios such as slopes and tall grass.
|
http://arxiv.org/abs/2302.00134v1
|
The use of Implicit Neural Representation (INR) through a hash-table has
demonstrated impressive effectiveness and efficiency in characterizing
intricate signals. However, current state-of-the-art methods exhibit
insufficient regularization, often yielding unreliable and noisy results during
interpolations. We find that this issue stems from broken gradient flow between
input coordinates and indexed hash-keys, where the chain rule attempts to model
discrete hash-keys, rather than the continuous coordinates. To tackle this
concern, we introduce RHINO, in which a continuous analytical function is
incorporated to facilitate regularization by additionally connecting the input
coordinate to the network, without modifying the architecture of current
hash-based INRs. This connection ensures a seamless backpropagation of
gradients from the network's output back to the input coordinates, thereby
enhancing regularization. Our experimental results not only showcase the
broadened regularization capability across different hash-based INRs like DINER
and Instant NGP, but also across a variety of tasks such as image fitting,
representation of signed distance functions, and optimization of 5D static / 6D
dynamic neural radiance fields. Notably, RHINO outperforms current
state-of-the-art techniques in both quality and speed, affirming its
superiority.
|
http://arxiv.org/abs/2309.12642v1
|
We present the methods and results from the discovery and photometric
measurement of 26 bright (VR $>$ 24) trans-Neptunian objects (TNOs) during the
first year (2019-20) of the DECam Ecliptic Exploration Project (DEEP). The DEEP
survey is an observational TNO survey with wide sky coverage, high sensitivity,
and a fast photometric cadence. We apply a computer vision technique known as a
progressive probabilistic Hough transform to identify linearly-moving transient
sources within DEEP photometric catalogs. After subsequent visual vetting, we
provide a photometric and astrometric catalog of our TNOs. By modeling the
partial lightcurve amplitude distribution of the DEEP TNOs using Monte Carlo
techniques, we find our data to be most consistent with an average TNO axis
ratio b/a $<$ 0.5, implying a population dominated by non-spherical objects.
Based on ellipsoidal gravitational stability arguments, we find our data to be
consistent with a TNO population containing a high fraction of contact binaries
or other extremely non-spherical objects. We also discuss our data as evidence
that the expected binarity fraction of TNOs may be size-dependent.
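The probabilistic Hough transform mentioned above is available in standard
libraries; a small sketch of recovering a linear track from a synthetic
detection mask (the synthetic data and parameter values are our own, not the
DEEP pipeline):

```python
# Sketch: find a linearly-moving source as a line of detections with the
# progressive probabilistic Hough transform (cv2.HoughLinesP).
import numpy as np
import cv2

mask = np.zeros((200, 200), dtype=np.uint8)
for t in range(0, 60, 4):                      # synthetic transient moving diagonally
    cv2.circle(mask, (40 + t, 60 + t), 1, 255, -1)

lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=10,
                        minLineLength=30, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print("candidate track:", (x1, y1), "->", (x2, y2))
```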
|
http://arxiv.org/abs/2309.04034v1
|
Josephson junctions are important components in superconducting qubits. They
introduce anharmonicity to the energy level spacings of the qubit, which allows
us to identify two unique quantum energy states for computing. It is difficult
to fabricate multiple junctions within the same desired parameter range.
Characterisation of the junctions is, therefore, a necessary step after
fabrication. In particular, the critical current of the junctions is determined
by measuring their normal state resistance. This is done via two-point or
four-point resistance measurement at a manual probe station which is a
time-consuming process, especially for wafer-scale fabrication. This bottleneck
can be circumvented by automation with object detection. The base of the
automated probe station is a 3D printer modified with multiple Arduino Uno
microcontrollers and motorised linear stages. The automation process is
achieved via auto-alignment of the probes and an automatic measurement
procedure. As a result, the fully automated process will take about 27-29
seconds to measure the resistance of one junction which saves 28-51% of the
time compared to the manual probe station and can be unsupervised. Due to the
reuse of a commercial 3D printer, the cost of this system is 800 SGD which is
much less than comparable commercial solutions.
|
http://arxiv.org/abs/2310.00331v1
|
The decoherence of point defect qubits is often governed by the electron
spin-nuclear spin hyperfine interaction that can be parameterized by using ab
initio calculations in principle. So far most of the theoretical works have
focused on the hyperfine interaction of the closest nuclear spins, while the
accuracy of the predictions for distinct nuclear spins is barely discussed. We
demonstrate for the case of the NV center in diamond that the absolute relative
error of the computed hyperfine parameters can exceed 100\% in VASP for weakly
coupled nuclear spins. To overcome this issue, we implement an alternative
method and report on significantly improved hyperfine values with $O$(1\%)
relative mean error at all distances. The provided accurate hyperfine data for
the NV center enables high-precision simulation of NV quantum nodes for quantum
information processing and positioning of nuclear spins by comparing
experimental and theoretical hyperfine data.
|
http://arxiv.org/abs/2309.03983v3
|
We present a theoretical investigation of electron heat current in
asymmetrical length armchair graphene nanoribbon (AGNR) heterostructures with
vacancies, focusing on the topological states (TSs). In particular, we examine
the 9-7-9 AGNR heterostructures where the TSs are well-isolated from the
conduction and valence subbands. This isolation effectively mitigates thermal
noise of subbands arising from temperature fluctuations during charge
transport. Moreover, when the TSs exhibit an orbital off-set, intriguing
electron heat rectification phenomena are observed, primarily attributed to
inter-TS electron Coulomb interactions. To enhance the heat rectification ratio
($\eta_Q$), we manipulate the coupling strengths between the heat sources and
the TSs by introducing asymmetrical lengths in the 9-AGNRs. This approach
offers control over the rectification properties, enabling significant
enhancements. Additionally, we introduce vacancies strategically positioned
between the heat sources and the TSs to suppress phonon heat current. This
arrangement effectively reduces the overall phonon heat current, while leaving
the TSs unaffected. Our findings provide valuable insights into the behavior of
electron heat current in AGNR heterostructures, highlighting the role of
topological states, inter-TS electron Coulomb interactions, and the impact of
structural modifications such as asymmetrical lengths and vacancy positioning.
These results pave the way for the design and optimization of graphene-based
devices with improved thermal management and efficient control of electron heat
transport.
|
http://arxiv.org/abs/2309.06623v2
|
The adiabatic connection interaction strength interpolation (ISI)-like method
provides a high-level expression for the correlation energy, being in principle
exact in the weak-interaction limit, where it recovers the second-order
G\"orling-Levy perturbation term, but also in the strong-interaction limit that
is described by the strictly correlated electron approach. In this work, we
construct the genISI functional made accurate for the uniform electron gas, a
solid-state physics paradigm that is a very difficult test for ISI-like
correlation functionals. We assess the genISI functional for various jellium
spheres with the number of electrons Z $\leq$ 912 and for the non-relativistic
noble atoms with Z $\leq$ 290. For the jellium clusters, the genISI is
remarkably accurate, while for the noble atoms, it shows a good performance,
similar to other ISI-like methods. The genISI functional can thus open the
path to using ISI-like methods in solid-state calculations.
|
http://arxiv.org/abs/2309.16430v1
|
We use the link between Jacobi continued fractions and the generating
functions of certain moment sequences to study some simple transformations on
them. In particular, we define and study a transformation that is appropriate
for the study of spidernet graphs and their moments, and the free Meixner law.
|
http://arxiv.org/abs/2307.00098v1
|
We present a novel inference scheme, self-speculative decoding, for
accelerating Large Language Models (LLMs) without the need for an auxiliary
model. This approach is characterized by a two-stage process: drafting and
verification. The drafting stage generates draft tokens at a slightly lower
quality but more quickly, which is achieved by selectively skipping certain
intermediate layers during drafting. Subsequently, the verification stage
employs the original LLM to validate those draft output tokens in one forward
pass. This process ensures the final output remains identical to that produced
by the unaltered LLM. Moreover, the proposed method requires no additional
neural network training and no extra memory footprint, making it a
plug-and-play and cost-effective solution for inference acceleration.
Benchmarks with LLaMA-2 and its variants demonstrated a speedup of up to
1.99$\times$.
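A schematic of the draft-then-verify loop described above, with toy stand-ins
for the full and layer-skipped models (everything here is an illustrative
assumption, not the paper's code):

```python
# Sketch of speculative draft-and-verify with greedy decoding.
# full_next / draft_next are toy stand-ins for the unaltered LLM and the
# layer-skipped drafting pass; in the real method they share one set of weights
# and verification happens in a single forward pass.
def full_next(tokens):
    return (sum(tokens) * 7 + 3) % 50          # deterministic toy "LLM"

def draft_next(tokens):
    return (sum(tokens) * 7 + 3) % 50 if sum(tokens) % 4 else 0   # occasionally wrong

def generate(prompt, n_new, k=4):
    tokens = list(prompt)
    while len(tokens) < len(prompt) + n_new:
        # 1) draft k tokens cheaply
        draft = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))
        # 2) verify: accept the longest prefix the full model agrees with,
        #    then append the full model's own token at the first mismatch,
        #    so the output matches unaltered greedy decoding.
        accepted = []
        for tok in draft:
            target = full_next(tokens + accepted)
            if tok == target:
                accepted.append(tok)
            else:
                accepted.append(target)
                break
        tokens.extend(accepted)
    return tokens[:len(prompt) + n_new]

print(generate([1, 2, 3], n_new=8))
```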
|
http://arxiv.org/abs/2309.08168v2
|
The reliability of a learning model is key to the successful deployment of
machine learning in various applications. Creating a robust model, particularly
one unaffected by adversarial attacks, requires a comprehensive understanding
of the adversarial examples phenomenon. However, it is difficult to describe
the phenomenon due to the complicated nature of the problems in machine
learning. It has been shown that adversarial training can improve the
robustness of the hypothesis. However, this improvement comes at the cost of
decreased performance on natural samples. Hence, it has been suggested that
robustness and accuracy of a hypothesis are at odds with each other. In this
paper, we put forth the alternative proposal that it is the continuity of a
hypothesis that is incompatible with its robustness and accuracy. In other
words, a continuous function cannot effectively learn the optimal robust
hypothesis. To this end, we will introduce a framework for a rigorous study of
harmonic and holomorphic hypotheses in learning theory terms and provide
empirical evidence that continuous hypotheses do not perform as well as
discontinuous hypotheses in some common machine learning tasks. From a
practical point of view, our results suggest that a robust and accurate
learning rule would train different continuous hypotheses for different regions
of the domain. From a theoretical perspective, our analysis explains the
adversarial examples phenomenon as a conflict between the continuity of a
sequence of functions and its uniform convergence to a discontinuous function.
|
http://arxiv.org/abs/2309.17048v1
|
The task of state estimation in active distribution systems faces a major
challenge due to the integration of different measurements with multiple
reporting rates. As a result, distribution systems are essentially unobservable
in real time, indicating the existence of multiple states that result in
identical values for the available measurements. Certain existing approaches
utilize historical data to infer the relationship between real-time available
measurements and the state. Other learning-based methods aim to estimate the
measurements acquired with a delay, generating pseudo-measurements. Our paper
presents a methodology that utilizes the outcome of an unobservable state
estimator to exploit information on the joint probability distribution between
real-time available measurements and delayed ones. Through numerical
simulations conducted on a realistic distribution grid with insufficient
real-time measurements, the proposed procedure showcases superior performance
compared to existing state forecasting approaches and those relying on inferred
pseudo-measurements.
|
http://arxiv.org/abs/2307.16822v2
|
Localized atomic orbitals are the preferred basis-set choice for large-scale
explicit correlated calculations, and high-quality hierarchical
correlation-consistent basis sets are a prerequisite for correlated methods to
deliver numerically reliable results. At present, Numeric Atom-centered Orbital
(NAO) basis sets with valence correlation consistency (VCC), designated as
NAO-VCC-$n$Z, are only available for light elements from hydrogen (H) to argon
(Ar) (\textit{New J. Phys.} \textbf{15}, 123033, (2013) ). In this work, we
extend this series by developing NAO-VCC-$n$Z basis sets for krypton (Kr), a
prototypical element in the fourth row of the periodic table. We demonstrate that
NAO-VCC-$n$Z basis sets facilitate the convergence of electronic total-energy
calculations using the Random Phase Approximation (RPA), which can be used
together with a two-point extrapolation scheme to approach the
complete-basis-set (CBS) limit. Notably, the Basis Set Superposition Error
(BSSE) associated with the newly generated NAO basis sets is minimal, making
them suitable for applications where BSSE correction is either cumbersome or
impractical. After confirming the reliability of NAO basis sets for Kr,
we proceed to calculate the Helmholtz free energy for Kr crystal at the
theoretical level of RPA plus renormalized single excitation (rSE) correction.
From this, we derive the pressure-volume ($P$-$V$) diagram, which shows
excellent agreement with the latest experimental data. Our work demonstrates
the capability of correlation-consistent NAO basis sets for heavy elements,
paving the way toward numerically reliable correlated calculations for bulk
materials.
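The two-point extrapolation mentioned above is commonly taken in the
inverse-cubic form (quoted here as the standard expression; the paper may use a
variant),
$$E_{\mathrm{CBS}} \approx \frac{n^{3}E_{n}-(n-1)^{3}E_{n-1}}{n^{3}-(n-1)^{3}},$$
where $E_n$ denotes the (RPA) energy obtained with the NAO-VCC-$n$Z basis.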
|
http://arxiv.org/abs/2309.06145v1
|
In Grammatical Error Correction (GEC), it is crucial to ensure the user's
comprehension of a reason for correction. Existing studies present tokens,
examples, and hints as to the basis for correction but do not directly explain
the reasons for corrections. Although methods that use Large Language Models
(LLMs) to provide direct explanations in natural language have been proposed
for various tasks, no such method exists for GEC. Generating explanations for
GEC corrections involves aligning input and output tokens, identifying
correction points, and presenting corresponding explanations consistently.
However, it is not straightforward to specify a complex format to generate
explanations, because explicit control of generation is difficult with prompts.
This study introduces a method called controlled generation with Prompt
Insertion (PI) so that LLMs can explain the reasons for corrections in natural
language. In PI, LLMs first correct the input text, and then we automatically
extract the correction points based on the rules. The extracted correction
points are sequentially inserted into the LLM's explanation output as prompts,
guiding the LLMs to generate explanations for the correction points. We also
create an Explainable GEC (XGEC) dataset of correction reasons by annotating
NUCLE, CoNLL2013, and CoNLL2014. Although generations from GPT-3 and ChatGPT
using original prompts miss some correction points, the generation control
using PI can explicitly guide the models to describe explanations for all correction
points, contributing to improved performance in generating correction reasons.
|
http://arxiv.org/abs/2309.11439v1
|
Heart failure (HF) is a critical condition in which the accurate prediction
of mortality plays a vital role in guiding patient management decisions.
However, clinical datasets used for mortality prediction in HF often suffer
from an imbalanced distribution of classes, posing significant challenges. In
this paper, we explore preprocessing methods for enhancing one-month mortality
prediction in HF patients. We present a comprehensive preprocessing framework
including scaling, outlier processing, and resampling as key techniques. We
also employed an aware encoding approach to effectively handle missing values
in clinical datasets. Our study utilizes a comprehensive dataset from the
Persian Registry Of cardio Vascular disease (PROVE) with a significant class
imbalance. By leveraging appropriate preprocessing techniques and Machine
Learning (ML) algorithms, we aim to improve mortality prediction performance
for HF patients. The results reveal an average enhancement of approximately
3.6% in F1 score and 2.7% in MCC for tree-based models, specifically Random
Forest (RF) and XGBoost (XGB). This demonstrates the efficiency of our
preprocessing approach in effectively handling Imbalanced Clinical Datasets
(ICD). Our findings hold promise in guiding healthcare professionals to make
informed decisions and improve patient outcomes in HF management.
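A compact sketch of the kind of preprocessing-plus-tree-model pipeline described
above, using standard scikit-learn / imbalanced-learn components on synthetic
imbalanced data (our illustration, not the PROVE study code):

```python
# Sketch: scaling + resampling + random forest on an imbalanced synthetic dataset.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)                  # ~5% positive class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),          # scaling
    ("resample", SMOTE(random_state=0)),  # resampling applied to training folds only
    ("clf", RandomForestClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)
pred = pipe.predict(X_te)
print("F1:", f1_score(y_te, pred), "MCC:", matthews_corrcoef(y_te, pred))
```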
|
http://arxiv.org/abs/2310.00457v1
|
We propose a cluster-based method to detect and locate eavesdropping events
in optical line systems characterized by small power losses. Our findings
indicate that detecting such subtle losses from eavesdropping can be
accomplished solely through optical performance monitoring (OPM) data collected
at the receiver. On the other hand, the localization of such events can be
effectively achieved by leveraging in-line OPM data.
|
http://arxiv.org/abs/2309.14541v1
|
Images or videos captured by the Under-Display Camera (UDC) suffer from
severe degradation, such as saturation degeneration and color shift. While
restoration for UDC has been a critical task, existing works of UDC restoration
focus only on images. UDC video restoration (UDC-VR) has not been explored in
the community. In this work, we first propose a GAN-based generation pipeline
to simulate the realistic UDC degradation process. With the pipeline, we build
the first large-scale UDC video restoration dataset called PexelsUDC, which
includes two subsets named PexelsUDC-T and PexelsUDC-P corresponding to
different displays for UDC. Using the proposed dataset, we conduct extensive
benchmark studies on existing video restoration methods and observe their
limitations on the UDC-VR task. To this end, we propose a novel
transformer-based baseline method that adaptively enhances degraded videos. The
key components of the method are a spatial branch with local-aware
transformers, a temporal branch embedded with temporal transformers, and a
spatial-temporal fusion module. These components drive the model to fully
exploit spatial and temporal information for UDC-VR. Extensive experiments show
that our method achieves state-of-the-art performance on PexelsUDC. The
benchmark and the baseline method are expected to promote the progress of
UDC-VR in the community, which will be made public.
|
http://arxiv.org/abs/2309.04752v1
|
A high-pressure hydrogen micromix combustor has been investigated using
direct numerical simulation with detailed chemistry to examine the flame
structure and stabilisation mechanism. The configuration of the combustor was
based on the design by Schefer [1], using numerical periodicity to mimic a
large square array. A precursor simulation of an opposed jet-in-crossflow was
first conducted to generate appropriate partially-premixed inflow boundary
conditions for the subsequent reacting simulation. The resulting flame can be
described as a predominantly-lean inhomogeneously-premixed lifted jet flame.
Five main zones were identified: a jet mixing region, a core flame, a
peripheral flame, a recirculation zone, and combustion products. The core
flame, situated over the jet mixing region, was found to burn as a thin
reaction front, responsible for over 85% of the total fuel consumption. The
peripheral flame shrouded the core flame, had low mean flow with high
turbulence, and burned at very lean conditions (in the distributed burning
regime). It was shown that turbulent premixed flame propagation was an
order-of-magnitude too slow to stabilise the flame at these conditions.
Stabilisation was identified to be due to ignition events resulting from
turbulent mixing of fuel from the jet into mean recirculation of very lean hot
products. Ignition events were found to correlate with shear-driven
Kelvin-Helmholtz vortices, and increased in likelihood with streamwise
distance. At the flame base, isolated events were observed, which developed
into rapidly burning flame kernels that were blown downstream. Further
downstream, near-simultaneous spatially-distributed ignition events were
observed, which appeared more like ignition sheets. The paper concludes with a
broader discussion that considers generalising from the conditions considered
here.
|
http://arxiv.org/abs/2309.04815v1
|
In this paper, we reexamine one of the most promising candidates for
determining the neutrino mass scale -- the unique first forbidden $\beta$
transition from $^{187}$Re($5/2^+$) to $^{187}$Os($1/2^-$). With the
lowest-known ground-state to ground-state $Q$-value for a $\beta$ transition at
$2.4709$ keV, rhenium's $\beta$ decay can offer insights into the neutrino mass
scale puzzle. However, understanding its electron spectrum is a complex task.
Besides involving a mixture of $s_{1/2}$-state and $p_{3/2}$-state electrons,
the rhenium $\beta$ spectrum could be strongly influenced by various atomic
corrections. In addition to our previous paper, we have incorporated finite
nuclear size, diffuse nuclear surface, screening, and exchange corrections into
the rhenium $\beta$ decay model. We have accounted for the last two effects
within the framework of the Dirac-Hartree-Fock-Slater self-consistent method.
We have discovered that both screening and exchange effects significantly alter
the partial decay rates for the $s_{1/2}$- and $p_{3/2}$-state emission
channels, while still maintaining the experimentally confirmed dominance of the
$p_{3/2}$-state emission. The ratio between the respective decay rates has been
found to be approximately $10^4$. When compared to the other corrections, the
exchange effect stands out due to the modification it induces in the spectrum
shape. We demonstrate that calculations with and without the exchange effect
lead to entirely different shape factors for the decay spectrum. Finally, we
illustrate that to preserve the linearity of the Kurie plot, it is essential to
include the exchange correction in its definition. We conclude that atomic
effects, especially the exchange effect, should be taken into account in
current and future investigations of the neutrino mass scale from $\beta$
decays.
|
http://arxiv.org/abs/2309.15918v1
|
We introduce a new method of detecting when the fundamental group of a Dehn
surgery on a knot admits a left-ordering, a method which is particularly useful
for 2-bridge knots. As an illustration of this method, we show that all Dehn
surgeries on the knot $6_2$ with slope in the interval $(-4, 8)\cap\mathbb{Q}$
have left-orderable fundamental groups by exhibiting a family of hyperbolic
$\widetilde{PSL}(2,\mathbb{R})$-representations of the knot complement group.
|
http://arxiv.org/abs/2307.00107v1
|
Recently, the development of large language models (LLMs) has significantly
enhanced question answering and dialogue generation, making them increasingly
popular in practical scenarios. Unlike general dialogue systems, which
emphasize semantic performance, task-oriented dialogue (ToD) systems aim to
achieve the dialogue goal efficiently and successfully in multiple turns.
Unfortunately, existing LLM-induced ToD systems lack a direct reward toward the
final goal and do not take into account the dialogue proactivity that can
strengthen dialogue efficiency. To fill these gaps, we introduce the ProToD (Proactively
Goal-Driven LLM-Induced ToD) approach, which anticipates the future dialogue
actions and incorporates the goal-oriented reward signal to enhance ToD
systems. Additionally, we present a novel evaluation method that assesses ToD
systems based on goal-driven dialogue simulations. This method allows us to
gauge user satisfaction, system efficiency, and success rate while overcoming
the limitations of current Information and Success metrics. Empirical
experiments conducted on the MultiWoZ 2.1 dataset demonstrate that our model
can achieve superior performance using only 10% of the data compared to
previous end-to-end fully supervised models. This improvement is accompanied by
enhanced user satisfaction and efficiency.
|
http://arxiv.org/abs/2309.08949v1
|
Efficient power coupling between on-chip guided and free-space optical modes
requires precision spatial mode matching with apodized grating couplers. Yet,
grating apodizations are often limited by the minimum feature size of the
fabrication approach. This is especially challenging when small feature sizes
are required to fabricate gratings at short wavelengths or to achieve weakly
scattered light for large-area gratings. Here, we demonstrate a fish-bone
grating coupler for precision beam shaping and the generation of
millimeter-scale beams at 461 nm wavelength. Our design decouples the minimum
feature size from the minimum achievable optical scattering strength, allowing
smooth turn-on and continuous control of the emission. Our approach is
compatible with commercial foundry photolithography and has reduced sensitivity
to both the resolution and the variability of the fabrication approach compared
to subwavelength meta-gratings, which often require electron beam lithography.
|
http://arxiv.org/abs/2309.08791v1
|
Current speech large language models build upon discrete speech
representations, which can be categorized into semantic tokens and acoustic
tokens. However, existing speech tokens are not specifically designed for
speech language modeling. To assess the suitability of speech tokens for
building speech language models, we established the first benchmark,
SLMTokBench. Our results indicate that neither semantic nor acoustic tokens are
ideal for this purpose. Therefore, we propose SpeechTokenizer, a unified speech
tokenizer for speech large language models. SpeechTokenizer adopts the
Encoder-Decoder architecture with residual vector quantization (RVQ). Unifying
semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of
speech information hierarchically across different RVQ layers. Furthermore, we
construct a Unified Speech Language Model (USLM) leveraging SpeechTokenizer.
Experiments show that SpeechTokenizer performs comparably to EnCodec in speech
reconstruction and demonstrates strong performance on the SLMTokBench
benchmark. Also, USLM outperforms VALL-E in zero-shot Text-to-Speech tasks.
Code and models are available at
https://github.com/ZhangXInFD/SpeechTokenizer/.
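The residual vector quantization at the core of such a tokenizer can be
illustrated in a few lines of numpy (random codebooks; purely a schematic, not
the SpeechTokenizer implementation):

```python
# Sketch of residual vector quantization (RVQ): each codebook quantizes what
# the previous ones left over, so early layers carry coarse content and later
# layers refine the detail.
import numpy as np

rng = np.random.default_rng(0)
dim, n_codebooks, codebook_size = 8, 4, 16
codebooks = rng.normal(size=(n_codebooks, codebook_size, dim))

def rvq_encode(x):
    residual, codes = x.copy(), []
    for cb in codebooks:
        idx = np.argmin(np.linalg.norm(cb - residual, axis=1))  # nearest code vector
        codes.append(int(idx))
        residual = residual - cb[idx]                            # pass on the residual
    return codes

def rvq_decode(codes):
    return sum(cb[i] for cb, i in zip(codebooks, codes))

x = rng.normal(size=dim)
codes = rvq_encode(x)
print(codes, np.linalg.norm(x - rvq_decode(codes)))              # error shrinks with depth
```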
|
http://arxiv.org/abs/2308.16692v2
|
In this article, we show that there is no cofibration category structure on
the category of finite graphs with $\times$-homotopy equivalences as the class
of weak equivalences. Further, we show that it is not possible to enlarge the
class of weak equivalences to get cofibration category structure on the
category of finite graphs without including morphisms where domain and codomain
have non-isomorphic stiff subgraphs.
|
http://arxiv.org/abs/2301.13587v2
|
We examine online safe multi-agent reinforcement learning using constrained
Markov games in which agents compete by maximizing their expected total rewards
under a constraint on expected total utilities. Our focus is confined to an
episodic two-player zero-sum constrained Markov game with independent
transition functions that are unknown to agents, adversarial reward functions,
and stochastic utility functions. For such a Markov game, we employ an approach
based on the occupancy measure to formulate it as an online constrained
saddle-point problem with an explicit constraint. We extend the Lagrange
multiplier method in constrained optimization to handle the constraint by
creating a generalized Lagrangian with minimax decision primal variables and a
dual variable. Next, we develop an upper confidence reinforcement learning
algorithm to solve this Lagrangian problem while balancing exploration and
exploitation. Our algorithm updates the minimax decision primal variables via
online mirror descent and the dual variable via projected gradient step and we
prove that it enjoys sublinear rate $O((|X|+|Y|) L \sqrt{T(|A|+|B|)})$ for
both regret and constraint violation after playing $T$ episodes of the game.
Here, $L$ is the horizon of each episode, $(|X|,|A|)$ and $(|Y|,|B|)$ are the
state/action space sizes of the min-player and the max-player, respectively. To
the best of our knowledge, we provide the first provably efficient online safe
reinforcement learning algorithm in constrained Markov games.
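Schematically, such a generalized Lagrangian couples the reward objective to the
utility constraint through a single dual variable; a hedged sketch in generic
notation (not necessarily the paper's exact occupancy-measure form) is
$$\mathcal{L}(x,y,\lambda)=V_{r}(x,y)+\lambda\bigl(V_{u}(x,y)-c\bigr),$$
where $x,y$ are the min-/max-player primal variables, $V_r$ and $V_u$ denote
expected total reward and utility, $c$ is the constraint threshold, and
$\lambda\ge 0$ is the dual variable updated by projected gradient steps.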
|
http://arxiv.org/abs/2306.00212v1
|
AI developers often apply safety alignment procedures to prevent the misuse
of their AI systems. For example, before Meta released Llama 2-Chat - a
collection of instruction fine-tuned large language models - they invested
heavily in safety training, incorporating extensive red-teaming and
reinforcement learning from human feedback. We explore the robustness of safety
training in language models by subversively fine-tuning Llama 2-Chat. We employ
quantized low-rank adaptation (LoRA) as an efficient fine-tuning method. With a
budget of less than \$200 and using only one GPU, we successfully undo the
safety training of Llama 2-Chat models of sizes 7B, 13B, and 70B and on the
Mixtral instruct model. Specifically, our fine-tuning technique significantly
reduces the rate at which the model refuses to follow harmful instructions. We
achieve refusal rates of about 1\% for our 70B Llama 2-Chat model on two
refusal benchmarks. Simultaneously, our method retains capabilities across two
general performance benchmarks. We show that subversive fine-tuning is
practical and effective, and hence argue that evaluating risks from fine-tuning
should be a core part of risk assessments for releasing model weights. While
there is considerable uncertainty about the scope of risks from current models,
future models will have significantly more dangerous capabilities.
|
http://arxiv.org/abs/2310.20624v2
|
Neutrino experiments, in the next years, aim to determine with precision all
the six parameters of the three-neutrino standard paradigm. The complete
success of the experimental program is, nevertheless, attached to the
non-existence (or at least smallness) of Non-Standard Interactions (NSI). In
this work, anticipating the data to be taken from long-baseline neutrino experiments,
we map all the weakly coupled theories that could induce sizable NSI, with the
potential to be determined in these experiments, in particular DUNE. Once
present constraints from other experiments are taken into account, in
particular charged-lepton flavor violation, we find that only models containing
leptoquarks (scalar or vector) and/or neutral isosinglet vector bosons are
viable. We provide the explicit matching formulas connecting weakly coupled
models and NSI, both in propagation and production. Departing from the weakly
coupled completion with masses at TeV scale, we also provide a global fit on
all NSI for DUNE, finding that NSI smaller than $10^{-2}$ cannot be probed even
in the best-case scenario.
|
http://arxiv.org/abs/2309.15924v2
|
This article examines India's first science lander mission, launched on 22 July
2019, which attempted a historic landing in the lunar south pole region. Communication was
lost at 2.1 km above the lunar surface during the rough braking phase. The
cause of the Chandrayaan 2 lander "Vikram" failure remains undisclosed.
Possible factors such as vibrations, thruster issues, and power depletion are
considered. Recommendations include backup power sources and direct
communication systems for interplanetary missions. Despite the setback, ISRO
proposed "Chandrayaan 3" to explore the lunar polar region. Chandrayaan 2's
legacy influences future missions, shaping India's aspirations for pioneering
space endeavors. Gratitude is expressed to ISRO for insights gained during live
coverage.
|
http://arxiv.org/abs/2309.14384v1
|
A risk in adopting third-party dependencies into an application is their
potential to serve as a doorway for malicious code to be injected (most often
unknowingly). While many initiatives from both industry and research
communities focus on the most critical dependencies (i.e., those most depended
upon within the ecosystem), little is known about whether the rest of the
ecosystem suffers the same fate. Our vision is to promote and establish safer
practices throughout the ecosystem. To motivate our vision, in this paper, we
present preliminary data based on three representative samples from a
population of 88,416 pull requests (PRs) and identify unsafe dependency updates
(i.e., any pull request that risks being unsafe during runtime), which clearly
shows that unsafe dependency updates are not limited to highly impactful
libraries. To draw attention to the long tail, we propose a research agenda
comprising six key research questions that further explore how to safeguard
against these unsafe activities. This includes developing best practises to
address unsafe dependency updates not only in top-tier libraries but throughout
the entire ecosystem.
|
http://arxiv.org/abs/2309.04197v1
|
Retrieval-Augmented Language Modeling (RALM) methods, which condition a
language model (LM) on relevant documents from a grounding corpus during
generation, were shown to significantly improve language modeling performance.
In addition, they can mitigate the problem of factually inaccurate text
generation and provide a natural source attribution mechanism. Existing RALM
approaches focus on modifying the LM architecture in order to facilitate the
incorporation of external information, significantly complicating deployment.
This paper considers a simple alternative, which we dub In-Context RALM:
leaving the LM architecture unchanged and prepending grounding documents to the
input, without any further training of the LM. We show that In-Context RALM
that builds on off-the-shelf general purpose retrievers provides surprisingly
large LM gains across model sizes and diverse corpora. We also demonstrate that
the document retrieval and ranking mechanism can be specialized to the RALM
setting to further boost performance. We conclude that In-Context RALM has
considerable potential to increase the prevalence of LM grounding, particularly
in settings where a pretrained LM must be used without modification or even via
API access.
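The in-context variant can be sketched with entirely off-the-shelf parts:
retrieve a relevant document with any retriever (TF-IDF here for brevity),
prepend it to the prompt, and generate with an unmodified LM (the model choice,
toy corpus, and prompt format are our own placeholders):

```python
# Sketch of In-Context RALM: retrieval + prompt prepending, no LM modification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
query = "When was the Eiffel Tower completed?"

# 1) retrieve the most relevant grounding document (any retriever would do)
vec = TfidfVectorizer().fit(corpus + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
doc = corpus[scores.argmax()]

# 2) prepend it to the input and generate with an unchanged, off-the-shelf LM
generator = pipeline("text-generation", model="gpt2")
prompt = f"{doc}\n\nQuestion: {query}\nAnswer:"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```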
|
http://arxiv.org/abs/2302.00083v3
|
In this paper, we prove that isotropic Gaussian functions are characterized
by a rearrangement inequality for weighted perimeter in dimensions $n \ge 2$
within the class of non-negative weights in $L^1(\mathbb{R}^n) \cap
W^{1,1}_{loc}(\mathbb{R}^n)$. More specifically, we prove that within this
class generalized Ehrhard symmetrization is perimeter non-increasing for all
Borel sets $E$ in all directions $\vec{v}$ if and only if the distribution
function is an isotropic Gaussian.
|
http://arxiv.org/abs/2310.00292v1
|
Wave-based imaging techniques use wavefield data from receivers on the
boundary of a domain to produce an image of the underlying structure in the
domain of interest. These images are defined by the imaging condition, which
maps recorded data to their reflection points in the domain. In this paper, we
introduce a nonlinear modification to the standard imaging condition that can
produce images with resolutions greater than that ordinarily expected using the
standard imaging condition. We show that the phase of the integrand of the
imaging condition, in the Fourier domain, has a special significance in some
settings that can be exploited to derive a super-resolved modification of the
imaging condition. Whereas standard imaging techniques can resolve features of
a length scale of $\lambda$, our technique allows for resolution level $R <
\lambda$, where the super-resolution factor (SRF) is typically $\lambda/R$. We
show that, in the presence of noise, $R \sim \sigma$.
|
http://arxiv.org/abs/2304.01013v2
|
The convergence of deterministic policy gradient under the Hadamard
parameterization is studied in the tabular setting and the linear convergence
of the algorithm is established. To this end, we first show that the error
decreases at an $O(\frac{1}{k})$ rate for all the iterations. Based on this
result, we further show that the algorithm has a faster local linear
convergence rate after $k_0$ iterations, where $k_0$ is a constant that only
depends on the MDP problem and the initialization. To show the local linear
convergence of the algorithm, we have indeed established the contraction of the
sub-optimal probability $b_s^k$ (i.e., the probability of the output policy
$\pi^k$ on non-optimal actions) when $k\ge k_0$.
|
http://arxiv.org/abs/2305.19575v2
|
Human-centric video frame interpolation has great potential for improving
people's entertainment experiences and finding commercial applications in the
sports analysis industry, e.g., synthesizing slow-motion videos. Although there
are multiple benchmark datasets available in the community, none of them is
dedicated to human-centric scenarios. To bridge this gap, we introduce
SportsSloMo, a benchmark consisting of more than 130K video clips and 1M video
frames of high-resolution ($\geq$720p) slow-motion sports videos crawled from
YouTube. We re-train several state-of-the-art methods on our benchmark, and the
results show a decrease in their accuracy compared to other datasets. It
highlights the difficulty of our benchmark and suggests that it poses
significant challenges even for the best-performing methods, as human bodies
are highly deformable and occlusions are frequent in sports videos. To improve
the accuracy, we introduce two loss terms considering the human-aware priors,
where we add auxiliary supervision to panoptic segmentation and human keypoints
detection, respectively. The loss terms are model agnostic and can be easily
plugged into any video frame interpolation approaches. Experimental results
validate the effectiveness of our proposed loss terms, leading to consistent
performance improvements over 5 existing models and establishing strong
baselines on our benchmark. The dataset and code can be found at:
https://neu-vi.github.io/SportsSlomo/.
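To make the model-agnostic auxiliary losses concrete, here is a minimal sketch of how human-aware terms could be added to an arbitrary frame-interpolation loss; the specific loss functions and weights are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def human_aware_loss(pred_frame, gt_frame,
                     seg_logits, gt_panoptic,
                     kpt_heatmaps, gt_heatmaps,
                     w_seg=0.1, w_kpt=0.1):
    """Hypothetical composition of an interpolation loss with two
    human-aware auxiliary terms; weights and loss choices are illustrative."""
    recon = F.l1_loss(pred_frame, gt_frame)          # base interpolation loss
    seg = F.cross_entropy(seg_logits, gt_panoptic)   # panoptic-segmentation supervision
    kpt = F.mse_loss(kpt_heatmaps, gt_heatmaps)      # keypoint-heatmap supervision
    return recon + w_seg * seg + w_kpt * kpt
```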
|
http://arxiv.org/abs/2308.16876v2
|
We report a non-detection of the [OI] 63-um emission line from the z = 6.03
galaxy G09.83808 using ALMA Band 9 observations, refuting the previously
claimed APEX detection by Rybak et al. (2020); the new upper limit on the
[OI] 63-um flux is almost 20 times lower. The [OI] 63-um line could be a
powerful tracer of neutral gas in the Epoch of Reionisation, yet our null
result shows that detecting [OI] 63-um from z$\geq$6 galaxies is more
challenging than previously hypothesised.
|
http://arxiv.org/abs/2309.12939v1
|
User churn, characterized by customers ending their relationship with a
business, has profound economic consequences across various
Business-to-Customer scenarios. For numerous system-to-user actions, such as
promotional discounts and retention campaigns, predicting potential churners
stands as a primary objective. In volatile sectors like fantasy sports,
unpredictable factors such as international sports events can influence even
regular spending habits. Consequently, while transaction history and
user-product interaction are valuable in predicting churn, they demand deep
domain knowledge and intricate feature engineering. Additionally, feature
development for churn prediction systems can be resource-intensive,
particularly in production settings serving 200m+ users, where inference
pipelines largely focus on feature engineering. This paper conducts an
exhaustive study on predicting user churn using historical data. We aim to
create a model forecasting customer churn likelihood, facilitating businesses
in comprehending attrition trends and formulating effective retention plans.
Our approach treats churn prediction as multivariate time series
classification, demonstrating that combining user activity data with deep
neural networks yields remarkable results for churn prediction in complex
business-to-customer contexts.
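A minimal sketch of the multivariate time-series classification framing follows; the architecture, window length, and feature names are assumptions for illustration, not the production system described above.

```python
import torch
import torch.nn as nn

class ChurnClassifier(nn.Module):
    """Toy sequence classifier for churn: per-user activity over T days with
    F features (e.g. logins, deposits, contest entries); all choices are
    illustrative placeholders."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, T, n_features)
        _, h = self.encoder(x)               # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # churn probability per user

# Dummy usage: 8 users, a 30-day window, 4 activity features.
probs = ChurnClassifier(n_features=4)(torch.randn(8, 30, 4))   # shape (8, 1)
```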
|
http://arxiv.org/abs/2309.14390v1
|
We derive exact solutions of massless free field equations and tree-level
two-point amplitudes up to spin 2 on self-dual Taub-NUT space-time, as well as
on its single copy, the self-dual dyon. We use Killing spinors to build
analogues of momentum eigenstates, finding that, in the spirit of
color-kinematics duality, those for the self-dual dyon lift directly to provide
states on the self-dual Taub-NUT background if one replaces charge with energy.
We discover that they are forced to have faster growth at infinity than in flat
space due to the topological non-triviality of these backgrounds. The
amplitudes for massless scalars and spinning particles in the $(+\,+)$ and
$(+\,-)$ helicity configurations vanish for generic kinematics as a consequence
of the integrability of the self-dual sector. The $(-\,-)$ amplitudes are
non-vanishing and we compute them exactly in the backgrounds, which are treated
non-perturbatively. It is explained how spin is easily introduced via a
Newman-Janis imaginary shift along the spin-vector, leading directly to the
well-known additional exponential factor in the dot product of the spin with
the momenta. We also observe a double copy relation between the gluon amplitude
on a self-dual dyon and graviton amplitude on a self-dual Taub-NUT space-time.
|
http://arxiv.org/abs/2309.03834v1
|
The parabolic Airy line ensemble $\mathfrak A$ is a central limit object in
the KPZ universality class and related areas. On any compact set $K = \{1,
\dots, k\} \times [a, a + t]$, the law of the recentered ensemble $\mathfrak A
- \mathfrak A(a)$ has a density $X_K$ with respect to the law of $k$
independent Brownian motions. We show that
$$
X_K(f) = \exp \left(-\textsf{S}(f) + o(\textsf{S}(f))\right)
$$
where $\textsf{S}$ is an explicit, tractable, non-negative function of $f$.
We use this formula to show that $X_K$ is bounded above by a $K$-dependent
constant, give a sharp estimate on the size of the set where $X_K < \epsilon$
as $\epsilon \to 0$, and prove a large deviation principle for $\mathfrak A$.
We also give density estimates that take into account the relative positions of
the Airy lines, and prove sharp two-point tail bounds that are stronger than
those for Brownian motion. These estimates are a key input in the
classification of geodesic networks in the directed landscape. The paper is
essentially self-contained, requiring only tail bounds on the Airy point
process and the Brownian Gibbs property as inputs.
|
http://arxiv.org/abs/2302.00097v4
|
This is a continuation of our previous work entitled "Alternating Proximity
Mapping Method for Convex-Concave Saddle-Point Problems", in which we
proposed the alternating proximal mapping method and showed convergence results
on the sequence of our iterates, the sequence of averages of our iterates, and
the sequence of function values evaluated at the averages of the iterates for
solving convex-concave saddle-point problems.
In this work, we extend the application of the alternating proximal mapping
method to solve strongly convex-strongly concave saddle-point problems. We
present two sets of sufficient conditions, together with their simplified
versions, which guarantee the linear convergence of the sequence of iterates
towards a desired saddle-point. Additionally, we provide two sets of sufficient
conditions, along with their simplified versions, that ensure the linear
convergence of the sequence of function values evaluated at the convex
combinations of iteration points to the desired function value of a
saddle-point.
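For reference, the proximal mapping on which such alternating schemes are built is the standard one: for a proper, closed, convex function $f$ and step size $\lambda > 0$,
$$
\operatorname{prox}_{\lambda f}(v) \;=\; \arg\min_{x}\Big( f(x) + \tfrac{1}{2\lambda}\,\|x - v\|^{2} \Big).
$$
This is only the generic definition; the exact alternating iteration and its step-size conditions are those of the works described above.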
|
http://arxiv.org/abs/2310.20156v1
|
We consider a scenario where the scalaron of $f({\cal R})$ models is related
to the volume modulus of string compactifications, leaving only one scalar
degree of freedom at low energy. The coefficient of the leading curvature
squared contribution to the low energy effective action of gravity determines
the mass of the scalaron. We impose that this mass is small enough to allow for
the scalaron to drive Starobinsky's inflation. After inflation, the
renormalisation group evolution of the couplings of the $f({\cal R})$ theory,
viewed as a scalar-tensor theory, provides the link with the infrared regime.
We consider a scenario where the corrections to the mass of the scalaron are
large and reduce it below the electron mass in the infrared, so that the
scalaron plays a central role in the low energy dynamics of the Universe. In
particular, this leads to a connection between the scalaron mass and the
measured vacuum energy, provided its renormalisation group running at energies
higher than the electron mass never drops below the present-day value of the
dark energy.
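For orientation, a standard fact about $R^2$ gravity (not a result quoted from this abstract) makes the stated link between the curvature-squared coefficient and the scalaron mass concrete:
$$
f({\cal R}) \;=\; {\cal R} + \frac{{\cal R}^{2}}{6M^{2}} \quad\Longrightarrow\quad m_{\rm scalaron} = M ,
$$
so a larger curvature-squared coefficient corresponds to a lighter scalaron, in line with the requirement above that the mass be small enough to drive Starobinsky inflation.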
|
http://arxiv.org/abs/2309.12087v2
|