We conduct a spectral and timing analysis of GX 339-4 and EXO 1846-031 with
the aim of studying the evolution of Type-C QPOs with spectral parameters. The
high-cadence data from Insight-HXMT and NICER allow us to track them. Type-C QPOs appear at the end of the low-hard state and/or in the hard-intermediate state. The
results reveal that the QPO frequency is closely related to the inner disk
radius and mass accretion rate in the two sources. Such a correlation is nicely
consistent with the dynamic frequency model.
|
http://arxiv.org/abs/2305.18249v2
|
The detection of leaf diseases in plants generally involves visual
observation of patterns appearing on the leaf surface. However, there are many
diseases that are distinguished based on very subtle changes in these visually
observable patterns. This paper attempts to identify plant leaf diseases using
image processing techniques. The focus of this study is on the detection of
citrus leaf canker disease. Canker is a bacterial infection of leaves. Symptoms of citrus canker include brown spots on the leaves, often with a watery or oily appearance. The spots (called lesions in botany) are usually surrounded by a yellow halo and are found on both the top and bottom of the leaf. This paper describes various methods that have been used to detect
citrus leaf canker disease. The methods used are histogram comparison and
k-means clustering. Using these methods, citrus canker development was detected
based on histograms generated based on leaf patterns. The results thus obtained
can be used, after consultation with experts in the field of agriculture, to identify suitable treatments for the detected disease.
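The abstract names its two techniques without implementation detail; below is a minimal sketch of both, histogram comparison and k-means colour clustering, using OpenCV (the channel choice, bin count, and darkest-cluster heuristic are illustrative assumptions, not the paper's settings).

```python
import cv2
import numpy as np

def histogram_similarity(leaf_bgr, reference_bgr, bins=32):
    """Compare hue histograms of a leaf image against a healthy reference."""
    hists = []
    for img in (leaf_bgr, reference_bgr):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
        hists.append(cv2.normalize(h, h).flatten())
    # Correlation close to 1.0 means the leaf resembles the healthy reference.
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

def kmeans_lesion_mask(leaf_bgr, k=3):
    """Cluster pixel colours with k-means; return the darkest cluster
    (an illustrative stand-in for brown lesions) as a binary mask."""
    pixels = leaf_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    darkest = int(np.argmin(centers.sum(axis=1)))
    return (labels.reshape(leaf_bgr.shape[:2]) == darkest).astype(np.uint8)
```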
|
http://arxiv.org/abs/2306.16734v1
|
Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON, to align base language
models with minimal human supervision, using only a small set of human-defined
principles, yet achieving superior performance. Central to our approach is an
instructable reward model. Trained on synthetic preference data, this model can
generate reward scores based on arbitrary human-defined principles. By merely
adjusting these principles during the RL training phase, we gain full control
over the preferences with the instructable reward model, subsequently
influencing the behavior of the RL-trained policy models, and reducing the
reliance on the collection of online human preferences. Applying our method to
the LLaMA-2-70b base language model, we developed an AI assistant named
Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined
principles, Dromedary-2 significantly surpasses the performance of several
state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark
datasets. We have open-sourced the code and model weights to encourage further
research into aligning LLM-based AI agents with enhanced supervision
efficiency, improved controllability, and scalable oversight.
|
http://arxiv.org/abs/2310.05910v2
|
We present a comprehensive experimental study on pretrained feature
extractors for visual out-of-distribution (OOD) detection, focusing on adapting
contrastive language-image pretrained (CLIP) models. Without fine-tuning on the
training data, we are able to establish a positive correlation ($R^2\geq0.92$)
between in-distribution classification and unsupervised OOD detection for CLIP
models in $4$ benchmarks. We further propose a new simple and scalable method
called \textit{pseudo-label probing} (PLP) that adapts vision-language models
for OOD detection. Given a set of label names of the training set, PLP trains a
linear layer using the pseudo-labels derived from the text encoder of CLIP. To
test the OOD detection robustness of pretrained models, we develop a novel
feature-based adversarial OOD data manipulation approach to create adversarial
samples. Intriguingly, we show that (i) PLP outperforms the previous
state-of-the-art \citep{ming2022mcm} on all $5$ large-scale benchmarks based on
ImageNet, specifically by an average AUROC gain of 3.4\% using the largest CLIP
model (ViT-G), (ii) linear probing outperforms fine-tuning by large margins for CLIP architectures (e.g., CLIP ViT-H achieves a mean gain of 7.3\% AUROC across all ImageNet-based benchmarks), and (iii)
billion-parameter CLIP models still fail at detecting adversarially manipulated
OOD images. The code and adversarially created datasets will be made publicly
available.
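A minimal sketch of pseudo-label probing as described above, assuming precomputed, L2-normalized CLIP image and text features (tensor names, epoch count, and learning rate are illustrative):

```python
import torch
import torch.nn.functional as F

def pseudo_label_probe(image_feats, text_feats, epochs=10, lr=1e-3):
    """image_feats: (N, D) frozen CLIP image features, L2-normalized.
    text_feats: (C, D) CLIP text features of the class names, L2-normalized.
    Returns a linear layer trained on zero-shot pseudo-labels."""
    # Pseudo-labels: nearest text embedding (standard zero-shot assignment).
    pseudo = (image_feats @ text_feats.t()).argmax(dim=1)
    probe = torch.nn.Linear(image_feats.shape[1], text_feats.shape[0])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(probe(image_feats), pseudo)
        loss.backward()
        opt.step()
    return probe

# At test time, the probe's maximum softmax probability (or logit) can
# serve as an OOD score for an unseen sample.
```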
|
http://arxiv.org/abs/2303.05828v2
|
Speech processing Universal PERformance Benchmark (SUPERB) is a leaderboard
to benchmark the performance of Self-Supervised Learning (SSL) models on
various speech processing tasks. However, SUPERB largely considers English
speech in its evaluation. This paper presents multilingual SUPERB (ML-SUPERB),
covering 143 languages (ranging from high-resource to endangered), and
considering both automatic speech recognition and language identification.
Following the concept of SUPERB, ML-SUPERB utilizes frozen SSL features and
employs a simple framework for multilingual tasks by learning a shallow
downstream model. Similar to the SUPERB benchmark, we find speech SSL models
can significantly improve performance compared to FBANK features. Furthermore,
we find that multilingual models do not always perform better than their
monolingual counterparts. We will release ML-SUPERB as a challenge with
organized datasets and reproducible training scripts for future multilingual
representation research.
|
http://arxiv.org/abs/2305.10615v2
|
For almost 20 years, the Wikimedia Foundation has been publishing statistics
about how many people visited each Wikipedia page on each day. This data helps
Wikipedia editors determine where to focus their efforts to improve the online
encyclopedia, and enables academic research. In June 2023, the Wikimedia
Foundation, helped by Tumult Labs, addressed a long-standing request from
Wikipedia editors and academic researchers: it started publishing these
statistics with finer granularity, including the country of origin in the daily
counts of page views. This new data publication uses differential privacy to
provide robust guarantees to people browsing or editing Wikipedia. This paper
describes this data publication: its goals, the process followed from its
inception to its deployment, the algorithms used to produce the data, and the
outcomes of the data release.
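The abstract does not specify the mechanism; for background, the textbook Laplace mechanism for a sensitivity-1 count query, which underlies many differentially private count releases, looks like this (a generic illustration, not Wikimedia's actual algorithm):

```python
import numpy as np

def dp_count(true_count: int, epsilon: float,
             rng=np.random.default_rng()) -> float:
    """Release a count with epsilon-differential privacy via the Laplace
    mechanism: one user changes the count by at most 1 (sensitivity 1)."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g., a noisy daily page-view count for one (page, country) cell:
# dp_count(1234, epsilon=0.5)
```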
|
http://arxiv.org/abs/2308.16298v2
|
This paper proposes a model called TMR to mine valuable information from
simulated data environments. We intend to complete the submission of this
paper.
|
http://arxiv.org/abs/2306.10345v2
|
The projected energy correlator measures the energy deposited in multiple
detectors as a function of the largest angular distance $x_L = (1 -
\cos\chi_L)/2$ between detectors. The collinear limit $x_L\to 0$ of the
projected energy correlator is particularly interesting for understanding the
jet substructure, while the large logarithms of $x_L$ could potentially spoil
the perturbation theory and must be resummed. As a necessary ingredient for its
resummation at next-to-next-to-leading logarithmic (NNLL) accuracy, we
calculate the two-loop jet functions for the projected three-point energy
correlator (E3C), using the direct integration method and the parameter-space Integration-by-Parts (IBP) method. We then present the NNLL resummation for $e^+e^-$ annihilation and an approximate NNLL resummation for the $pp\rightarrow jj$ process, where the two-loop hard constant is estimated in the latter case.
The convergence is improved and the hadronization effect in the collinear limit
is suppressed when considering the ratio of E3C distribution to two-point
energy-energy correlator (EEC). Our results show potential in precision
determination of the strong coupling constant using energy correlators from both
$e^+e^-$ data and $pp$ data.
|
http://arxiv.org/abs/2307.07510v1
|
This paper is focused on the coherent effects that appear in tracer
statistics in two-dimensional incompressible turbulence in the presence of an
average velocity. We show that this determines strong modifications of the
transport and trajectory statistics, which are essentially caused by hidden
coherent components of the motion.
|
http://arxiv.org/abs/2306.07639v1
|
Language models pretrained on large collections of tabular data have
demonstrated their effectiveness in several downstream tasks. However, many of
these models do not take into account the row/column permutation invariances,
hierarchical structure, etc. that exist in tabular data. To alleviate these
limitations, we propose HYTREL, a tabular language model, that captures the
permutation invariances and three more structural properties of tabular data by
using hypergraphs - where the table cells make up the nodes and the cells
occurring jointly together in each row, column, and the entire table are used
to form three different types of hyperedges. We show that HYTREL is maximally
invariant under certain conditions for tabular data, i.e., two tables obtain
the same representations via HYTREL iff the two tables are identical up to
permutations. Our empirical results demonstrate that HYTREL consistently
outperforms other competitive baselines on four downstream tasks with minimal
pretraining, illustrating the advantages of incorporating the inductive biases
associated with tabular data into the representations. Finally, our qualitative
analyses showcase that HYTREL can assimilate the table structures to generate
robust representations for the cells, rows, columns, and the entire table.
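A minimal sketch of the hypergraph construction described above, with one hyperedge per row, one per column, and one spanning the whole table (the incidence-list representation is an illustrative choice):

```python
def table_to_hypergraph(n_rows: int, n_cols: int):
    """Return the three types of hyperedges as lists of cell-node ids."""
    node = lambda r, c: r * n_cols + c          # cell (r, c) -> node id
    row_edges = [[node(r, c) for c in range(n_cols)] for r in range(n_rows)]
    col_edges = [[node(r, c) for r in range(n_rows)] for c in range(n_cols)]
    table_edge = [node(r, c) for r in range(n_rows) for c in range(n_cols)]
    return row_edges, col_edges, table_edge

# Permutation invariance: shuffling rows permutes members *within* hyperedges
# but leaves the set structure, hence the representation, unchanged.
```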
|
http://arxiv.org/abs/2307.08623v2
|
Following the remarkable success of diffusion models on image generation,
recent works have also demonstrated their impressive ability to address a
number of inverse problems in an unsupervised way, by properly constraining the
sampling process based on a conditioning input. Motivated by this, in this
paper, we present the first approach to use diffusion models as a prior for
highly accurate 3D facial BRDF reconstruction from a single image. We start by
leveraging a high-quality UV dataset of facial reflectance (diffuse and
specular albedo and normals), which we render under varying illumination
settings to simulate natural RGB textures and, then, train an unconditional
diffusion model on concatenated pairs of rendered textures and reflectance
components. At test time, we fit a 3D morphable model to the given image and
unwrap the face in a partial UV texture. By sampling from the diffusion model,
while retaining the observed texture part intact, the model inpaints not only
the self-occluded areas but also the unknown reflectance components, in a
single sequence of denoising steps. In contrast to existing methods, we
directly acquire the observed texture from the input image, thus, resulting in
more faithful and consistent reflectance estimation. Through a series of
qualitative and quantitative comparisons, we demonstrate superior performance
in both texture completion as well as reflectance reconstruction tasks.
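A schematic of the sampling loop described above, which keeps the observed UV-texture region intact while the model inpaints the rest; the reverse-diffusion step and forward-noising function are assumed given, and this known-region replacement is a generic inpainting pattern rather than the paper's exact procedure:

```python
import torch

@torch.no_grad()
def inpaint_sample(denoise_step, x_T, known, mask, add_noise, T):
    """denoise_step(x_t, t) -> x_{t-1}: one reverse-diffusion step (assumed given).
    known: the observed texture; mask: 1 where observed, 0 where missing.
    add_noise(x0, t): diffuses clean data to noise level t (assumed given)."""
    x = x_T
    for t in reversed(range(1, T + 1)):
        x = denoise_step(x, t)
        # Overwrite the observed region with a correspondingly-noised copy of
        # the known texture, so only unobserved areas are actually generated.
        x = mask * add_noise(known, t - 1) + (1 - mask) * x
    return x
```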
|
http://arxiv.org/abs/2305.06077v2
|
This paper studies abelian categories that can be decomposed into smaller
abelian categories via iterated recollements - such a decomposition we call a
stratification. Examples include the categories of (equivariant) perverse
sheaves and epsilon-stratified categories (in particular highest weight
categories) in the sense of Brundan-Stroppel (2018).
We give necessary and sufficient conditions for an abelian category with a
stratification to be equivalent to a category of finite dimensional modules of
a finite dimensional algebra - this generalizes the main result of
Cipriani-Woolf (2022). Furthermore, we give necessary and sufficient conditions
for such a category to be epsilon-stratified - this generalizes the
characterisation of highest weight categories given by Krause (2017).
|
http://arxiv.org/abs/2303.14925v1
|
Text injection for automatic speech recognition (ASR), wherein unpaired
text-only data is used to supplement paired audio-text data, has shown
promising improvements for word error rate. This study examines the use of text
injection for auxiliary tasks, which are the non-ASR tasks often performed by
an E2E model. In this work, we use joint end-to-end and internal language model
training (JEIT) as our text injection algorithm to train an ASR model which
performs two auxiliary tasks. The first is capitalization, which is a
de-normalization task. The second is turn-taking prediction, which attempts to
identify whether a user has completed their conversation turn in a digital
assistant interaction. We show results demonstrating that our text injection
method boosts capitalization performance for long-tail data, and improves
turn-taking detection recall.
|
http://arxiv.org/abs/2308.07395v1
|
Federated learning (FL) has been a hot topic in recent years. Ever since it
was introduced, researchers have endeavored to devise FL systems that protect
privacy or ensure fair results, with most research focusing on one or the
other. As two crucial ethical notions, the interactions between privacy and
fairness are comparatively less studied. However, since privacy and fairness
compete, considering each in isolation will inevitably come at the cost of the
other. To provide a broad view of these two critical topics, we present a detailed literature review of privacy and fairness issues, highlighting the unique challenges posed by FL and the solutions available in federated settings. We further systematically survey the different interactions between privacy and fairness,
trying to reveal how privacy and fairness could affect each other and point out
new research directions in fair and private FL.
|
http://arxiv.org/abs/2306.14123v1
|
The metallic antiferromagnet CoNb$_3$S$_6$ exhibits a giant anomalous Hall
effect (AHE) that cannot be explained by a collinear N\'eel order on
intercalated Co ions. Thus, a noncoplanar structure is expected. We carried out
resonant elastic x-ray scattering (REXS) to reexamine the magnetic structure of
CoNb$_3$S$_6$ and found a double-$Q$ ($2Q$) order with a $(\frac{1}{2}00)$
commensurate component and a long-wavelength modulation. Circular dichroism and
linear polarization analysis reveal that the commensurate components on the two
Co sites are noncollinear and the modulation is helical. The resulting magnetic
structure has a staggered scalar spin chirality forming a stripe pattern in
real space. Furthermore, we found that the helical modulation wavevector
exhibits a sample dependence and develops a low-symmetry domain structure. We
propose that quenched-in lattice strain controls the helical domain structure,
accounting for much of the sample dependence. These results provide insight
into the mechanism of the AHE in CoNb$_3$S$_6$ and identify potential routes
for controlling the Hall response and realizing other unconventional electronic
phenomena in metallic antiferromagnets.
|
http://arxiv.org/abs/2307.03776v1
|
Unlike influence lines, the concept of influence zones is remarkably absent
within the field of structural engineering, despite its existence in the
closely related domain of geotechnics. This paper proposes the novel concept of
a structural influence zone in relation to continuous beam systems and explores
its size numerically with various design constraints applicable to steel framed
buildings. The key challenge involves explicitly defining the critical load
arrangements, and is tackled by using the novel concepts of polarity sequences
and polarity zones. These lead to the identification of flexural load arrangements and the discovery of shear load arrangements, with an equation demarcating when the latter arise. After developing algorithms that help identify both types of critical
load arrangements, design data sets are generated and the influence zone values
are extracted. The results indicate that the influence zone under ultimate
state considerations is typically less than 3, rising to a maximum size of 5
adjacent members for any given continuous beam. Additional insights from the
influence zone concept, specifically in comparison to influence lines, are
highlighted, and the avenues for future research, such as in relation to the
newly identified shear load arrangements, are discussed.
|
http://arxiv.org/abs/2305.02211v1
|
Decision-focused learning (DFL) is an emerging paradigm that integrates
machine learning (ML) and constrained optimization to enhance decision quality
by training ML models in an end-to-end system. This approach shows significant
potential to revolutionize combinatorial decision-making in real-world
applications that operate under uncertainty, where estimating unknown
parameters within decision models is a major challenge. This paper presents a
comprehensive review of DFL, providing an in-depth analysis of both
gradient-based and gradient-free techniques used to combine ML and constrained
optimization. It evaluates the strengths and limitations of these techniques
and includes an extensive empirical evaluation of eleven methods across seven
problems. The survey also offers insights into recent advancements and future
research directions in DFL.
Code and benchmark: https://github.com/PredOpt/predopt-benchmarks
|
http://arxiv.org/abs/2307.13565v4
|
Robotic navigation in unknown, cluttered environments with limited sensing
capabilities poses significant challenges in robotics. Local trajectory
optimization methods, such as Model Predictive Path Integral (MPPI), are a
promising solution to this challenge. However, global guidance is required to
ensure effective navigation, especially when encountering challenging
environmental conditions or navigating beyond the planning horizon. This study
presents the GP-MPPI, an online learning-based control strategy that integrates
MPPI with a local perception model based on Sparse Gaussian Process (SGP). The
key idea is to leverage the learning capability of SGP to construct a variance
(uncertainty) surface, which enables the robot to learn about the navigable
space surrounding it, identify a set of suggested subgoals, and ultimately
recommend the optimal subgoal that minimizes a predefined cost function to the
local MPPI planner. Afterward, MPPI computes the optimal control sequence that
satisfies the robot and collision avoidance constraints. Such an approach
eliminates the necessity of a global map of the environment or an offline
training process. We validate the efficiency and robustness of our proposed
control strategy through both simulated and real-world experiments of 2D
autonomous navigation tasks in complex unknown environments, demonstrating its
superiority in guiding the robot safely towards its desired goal while avoiding
obstacles and escaping entrapment in local minima. The GPU implementation of
GP-MPPI, including the supplementary video, is available at
https://github.com/IhabMohamed/GP-MPPI.
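A toy sketch of the subgoal-recommendation idea: fit a GP to local occupancy observations, read off the variance surface, and pick the candidate subgoal minimizing a cost that trades goal distance against predicted occupancy and uncertainty (scikit-learn's exact GP stands in for the sparse GP, and the cost weights are invented for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def recommend_subgoal(obs_xy, occupancy, candidates, goal, w_unc=1.0):
    """obs_xy: (N, 2) sensed points; occupancy: (N,) 1 = obstacle, 0 = free.
    candidates: (M, 2) candidate subgoals on the perception horizon."""
    gp = GaussianProcessRegressor().fit(obs_xy, occupancy)
    mean, std = gp.predict(candidates, return_std=True)
    # Illustrative cost: distance to the global goal plus predicted occupancy,
    # with high variance (unexplored space) treated as a navigable gap.
    cost = np.linalg.norm(candidates - goal, axis=1) + mean - w_unc * std
    return candidates[int(np.argmin(cost))]  # handed to the local MPPI planner
```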
|
http://arxiv.org/abs/2307.04019v3
|
In recent years, pre-trained language models have undergone rapid development
with the emergence of large-scale models. However, there is a lack of
open-sourced chat models specifically designed for the Chinese language,
especially in the field of Chinese finance, at the scale of hundreds of
billions. To address this gap, we introduce XuanYuan 2.0, the largest Chinese
chat model to date, built upon the BLOOM-176B architecture. Additionally, we
propose a novel training method called hybrid-tuning to mitigate catastrophic
forgetting. By combining general-domain with domain-specific knowledge and
integrating the stages of pre-training and fine-tuning, XuanYuan 2.0 is capable
of providing accurate and contextually appropriate responses in the Chinese
financial domain.
|
http://arxiv.org/abs/2305.12002v1
|
Given a connected semisimple Lie group $G$ and an arithmetic subgroup
$\Gamma$, it is well-known that each irreducible representation $\pi$ of $G$
occurs in the discrete spectrum $L^2_{\text{disc}}(\Gamma\backslash G)$ of
$L^2(\Gamma\backslash G)$ with at most a finite multiplicity $m_{\Gamma}(\pi)$.
While $m_{\Gamma}(\pi)$ is unknown in general, we are interested in its limit
as $\Gamma$ is taken to be in a tower of lattices $\Gamma_1\supset
\Gamma_2\supset\dots$. For a bounded measurable subset $X$ of the unitary dual
$\widehat{G}$, we let $m_{\Gamma_n}(X)$ be the sum of the multiplicity
$m_{\Gamma_n}(\pi)$ of a representation $\pi$ over all $\pi$ in $X$. Let $H_X$
be the direct integral of the irreducible representations in $X$, which is also
a module over the group von Neumann algebra $\mathcal{L}\Gamma_n$. We prove:
\begin{center} $\lim\limits_{n\to
\infty}\cfrac{m_{\Gamma_n}(X)}{\dim_{\mathcal{L}\Gamma_n}H_X}=1$, \end{center}
for any bounded subset $X$ of $\widehat{G}$, when i) $\Gamma_n$'s are
cocompact, or, ii) $G=\mathrm{SL}(n,\mathbb{R})$ and $\{\Gamma_n\}$ are principal
congruence subgroups.
|
http://arxiv.org/abs/2306.02999v1
|
A hybrid numerical model previously developed for combustion simulations is
extended in this article to describe flame propagation and stabilization in
porous media. The model, with a special focus on flame/wall interaction
processes, is validated via corresponding benchmarks involving flame
propagation in channels with both adiabatic and constant-temperature walls.
Simulations with different channel widths show that the model can correctly
capture the changes in flame shape and propagation speed as well as the dead
zone and quenching limit, as found in channels with cold walls. The model is
further assessed considering a pseudo 2-D porous burner involving an array of
cylindrical obstacles at constant temperature, investigated in a companion
experimental study. Furthermore, the model is used to simulate pore-scale flame
dynamics in a randomly generated 3-D porous medium. The results are promising,
opening the door for future simulations of flame propagation in realistic
porous media.
|
http://arxiv.org/abs/2304.05657v1
|
With the increasing prevalence of scalable file systems in the context of
High Performance Computing (HPC), the importance of accurate anomaly detection
on runtime logs is increasing. But as it currently stands, many
state-of-the-art methods for log-based anomaly detection, such as DeepLog, have
encountered numerous challenges when applied to logs from many parallel file
systems (PFSes), often due to their irregularity and ambiguity in time-based
log sequences. To circumvent these problems, this study proposes ClusterLog, a
log pre-processing method that clusters the temporal sequence of log keys based
on their semantic similarity. By grouping semantically and sentimentally
similar logs, this approach aims to represent log sequences with the smallest
amount of unique log keys, intending to improve the ability of a downstream
sequence-based model to effectively learn the log patterns. The preliminary
results of ClusterLog indicate not only its effectiveness in reducing the
granularity of log sequences without the loss of important sequence information
but also its generalizability to different file systems' logs.
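A minimal sketch of the pre-processing idea: embed the log-key templates, cluster them by semantic similarity, and rewrite each sequence with cluster ids so the downstream model sees fewer unique keys (the embedding model and cluster count are illustrative assumptions):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_log_keys(log_keys, sequences, n_clusters=20):
    """log_keys: list of unique log-key template strings.
    sequences: list of log-key index sequences (ints into log_keys)."""
    emb = SentenceTransformer("all-MiniLM-L6-v2").encode(log_keys)
    cluster_of = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    # Replace each log key by its cluster id, shrinking the vocabulary.
    return [[int(cluster_of[k]) for k in seq] for seq in sequences]
```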
|
http://arxiv.org/abs/2301.07846v1
|
Adding tactile sensors to a robotic system is becoming a common practice to
achieve more complex manipulation skills than those robotics systems that only
use external cameras to manipulate objects. The key of tactile sensors is that
they provide extra information about the physical properties of the grasping.
In this paper, we implemented a system to predict and quantify the rotational
slippage of objects in hand using the vision-based tactile sensor known as
Digit. Our system comprises a neural network that obtains the segmented contact region (object-sensor), from which the slippage rotation angle is then calculated using a thinning algorithm. In addition, we created our own tactile segmentation dataset, which is the first of its kind in the literature as far as we are aware, to train and evaluate our neural network, obtaining results of 95% and 91% in the Dice and IoU metrics, respectively. In real-scenario experiments, our system is
able to predict rotational slippage with a maximum mean rotational error of 3
degrees with previously unseen objects. Thus, our system can be used to prevent
an object from falling due to its slippage.
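A sketch of the angle-from-region step described above: thin the segmented contact region to a skeleton and take the principal axis of the skeleton points as the orientation; differencing orientations across frames gives the rotational slippage (the library choices are assumptions, since the abstract names only a thinning algorithm):

```python
import numpy as np
from skimage.morphology import skeletonize

def contact_angle(mask: np.ndarray) -> float:
    """mask: binary (H, W) segmented contact region. Returns the orientation
    in degrees of the thinned (skeletonized) region's principal axis."""
    ys, xs = np.nonzero(skeletonize(mask > 0))
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Principal direction of the skeleton points via PCA (SVD).
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    dx, dy = vt[0]
    return float(np.degrees(np.arctan2(dy, dx)))

# Rotational slippage between frames: contact_angle(m1) - contact_angle(m0).
```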
|
http://arxiv.org/abs/2305.04660v1
|
We introduce a novel and efficient method for Event Coreference Resolution
(ECR) applied to a lower-resourced language domain. By framing ECR as a graph
reconstruction task, we are able to combine deep semantic embeddings with
structural coreference chain knowledge to create a parameter-efficient family
of Graph Autoencoder models (GAE). Our method significantly outperforms
classical mention-pair methods on a large Dutch event coreference corpus in
terms of overall score, efficiency and training speed. Additionally, we show
that our models are consistently able to classify more difficult coreference
links and are far more robust in low-data settings when compared to
transformer-based mention-pair coreference algorithms.
|
http://arxiv.org/abs/2310.11965v1
|
We present and analyze a new derivation of the meso-level behavior of a
discrete microscopic model of heat transfer. This construction is based on the
principle of dynamic consistency. Our work reproduces and corrects, when
needed, all the major previous expressions which provide modifications to the
standard heat PDE. However, unlike earlier efforts, we do not allow the
microscopic level parameters to have zero limiting values. We also give insight
into the difficulties of constructing physically valid heat equations within the framework of the general mathematical inequivalence of difference and differential equations.
|
http://arxiv.org/abs/2301.06580v1
|
In this work we develop a weight theory in the setting of hyperbolic spaces.
Our starting point is a variant of the well-known endpoint Fefferman-Stein
inequality for the centered Hardy-Littlewood maximal function. This inequality
generalizes, in the hyperbolic setting, the weak $(1,1)$ estimates obtained by
Str\"omberg in "Weak type L1 estimates for maximal functions on noncompact
symmetric spaces", Ann. of Math. 114 (1981), where Str\"omberg answered a
question posed by Stein and Wainger in "Problems in harmonic analysis related
to curvature", Bull. Amer. Math. Soc. 84 (1978). Our approach is based on a
combination of geometrical arguments and the techniques used in the discrete
setting of regular trees by Naor and Tao in "Random martingales and
localization of maximal inequalities", J. Funct. Anal. 259 (2010). This variant
of the Fefferman-Stein inequality paves the road to weighted estimates for the
maximal function for $p>1$. On the one hand, we show that the classical $A_p$
conditions are not the right ones in this setting. On the other hand, we
provide sharp sufficient conditions for weighted weak and strong type $(p,p)$
boundedness of the centered maximal function, when $p>1$. The sharpness is in
the sense that, given $p>1$, we can construct a weight satisfying our
sufficient condition for that $p$, and so it satisfies the weak type $(p,p)$
inequality, but the strong type $(p,p)$ inequality fails. In particular, the
weak type $(q,q)$ fails as well for every $q < p$.
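For context, the classical Euclidean endpoint Fefferman-Stein inequality, of which the result above is a hyperbolic variant, states that for every weight $w$ and every $\lambda>0$,

$$w\big(\{x\in\mathbb{R}^n : Mf(x)>\lambda\}\big)\;\le\;\frac{C_n}{\lambda}\int_{\mathbb{R}^n}|f(x)|\,Mw(x)\,dx,$$

where $M$ denotes the (centered) Hardy-Littlewood maximal function.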
|
http://arxiv.org/abs/2305.14473v1
|
OUXT-Polaris has been developing an autonomous navigation system by
participating in the Maritime RobotX Challenge 2014, 2016, and 2018. In this
paper, we describe the improvement of the previous vessel system. We also
indicate the advantages of the improved design. Moreover, we describe our development method under COVID-19, which uses simulation and miniature-size hardware, and the components planned for the next RobotX Challenge.
|
http://arxiv.org/abs/2306.13894v1
|
Hardware security keys undoubtedly have an advantage for users, as the "usability" pain is trivial compared to the maximum "security" gain in authentication. Naturally, the hardware factor in authentication has received widespread adoption amongst average users, as it is ergonomically less demanding than phone texts or authentication prompts. This ergonomic advantage is particularly essential for users who are blind or low vision, as their interaction with a phone is impractical. However, the "usability" pain for low-vision or blind users might be much higher than for an average able-bodied user for the same "security" gain. In an effort to learn more, we conducted a usability assessment
with ten low vision or blind users setting up the OnlyKey two-factor
authentication key. First, the setup process was insurmountable for more than
half of the participants, resulting in a situation where the hardware key was
abandoned. Secondly, the lack of tactile orientation led participants to consider it both impractical and prone to being misplaced or lost. We discuss the implications of our findings for future improvements in
usable authentication for visually impaired users.
|
http://arxiv.org/abs/2308.05582v1
|
Existing results for the estimation of the L\'evy measure are mostly limited
to the one-dimensional setting. We apply the spectral method to multidimensional
L\'evy processes in order to construct a nonparametric estimator for the
multivariate jump distribution. We prove convergence rates for the uniform
estimation error under both a low- and a high-frequency observation regime. The
method is robust to various dependence structures. Along the way, we present a
uniform risk bound for the multivariate empirical characteristic function and
its partial derivatives. The method is illustrated with simulation examples.
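For reference, the multivariate empirical characteristic function appearing in the risk bound is

$$\hat{\varphi}_n(u)\;=\;\frac{1}{n}\sum_{j=1}^{n}e^{i\langle u,X_j\rangle},\qquad u\in\mathbb{R}^d,$$

computed here from the observed increments $X_j$ of the process.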
|
http://arxiv.org/abs/2305.14315v1
|
There is a growing interest in the implementation of platform trials, which
provide the flexibility to incorporate new treatment arms during the trial and
the ability to halt treatments early based on lack of benefit or observed
superiority. In such trials, it can be important to ensure that error rates are
controlled. This paper introduces a multi-stage design that enables the
addition of new treatment arms, at any point, in a pre-planned manner within a
platform trial, while still maintaining control over the family-wise error
rate. This paper focuses on finding the required sample size to achieve a
desired level of statistical power when treatments continue to be tested even after a superior treatment has already been found. This may be of interest if other sponsors' treatments, which are also superior to the current control, or multiple doses are being tested. The calculations to determine the expected sample size are given. A motivating trial is presented in which the sample size of different configurations is studied. Additionally, the approach is compared to running multiple separate trials, and it is shown that, in many scenarios, if family-wise error rate control is needed, there may be no benefit in using a platform trial in terms of sample size.
|
http://arxiv.org/abs/2308.12798v1
|
This paper generalizes the result of Sarnak and Ubis \cite{sarnak-ubis} about
non-concentration of primes in horocycle orbits on $PSL_2(\mathbb{Z})
\backslash PSL_2(\mathbb{R})$ to any lattice in $PSL_2(\mathbb{R})$. The proof
combines the asymptotic result of Str\"ombergsson \parencite{strombergsson} and
Venkatesh's method \parencite{venkatesh} with the approach of Sarnak and Ubis
of approximating horocycle pieces with periodic horocycles. The key step is to
establish a dichotomy between $\{\xi h(t), t \in [0, T] \}$ having good
equidistribution in $\Gamma \backslash PSL_2(\mathbb{R})$ and it being
approximable by closed horocycle pieces with small period. In a follow-up
paper, a similar approach will be used to show equidistribution of $\xi
h(n^{1+\gamma})$ for small $\gamma>0$, generalizing Venkatesh's result
\parencite{venkatesh} to non-compact $\Gamma$.
|
http://arxiv.org/abs/2303.07781v1
|
Recently, there has been renewed interest in a crossing-symmetric dispersion
relation from the 1970s due to its implications for both regular quantum field
theory and conformal field theory. However, this dispersion relation introduces
nonlocal spurious singularities and requires additional locality constraints
for their removal, a process that presents considerable technical challenges.
In this Letter, we address this issue by deriving a new crossing-symmetric
dispersion relation that is free of spurious singularities, resulting in a
compact form of the contact terms in crossing-symmetric blocks. Our results
establish a solid foundation for the Polyakov bootstrap in conformal field
theories and the crossing-symmetry S-matrix bootstrap in quantum field
theories.
|
http://arxiv.org/abs/2305.03669v2
|
Understanding how external stimuli are encoded in distributed neural activity
is of significant interest in clinical and basic neuroscience. To address this
need, it is essential to develop analytical tools capable of handling limited
data and the intrinsic stochasticity present in neural data. In this study, we
propose a straightforward Bayesian time series classifier (BTsC) model that
tackles these challenges whilst maintaining a high level of interpretability.
We demonstrate the classification capabilities of this approach by utilizing
neural data to decode colors in a visual task. The model exhibits consistent
and reliable average performance of 75.55% on a dataset of 4 patients, improving upon state-of-the-art machine learning techniques by about 3.0 percent. In
addition to its high classification accuracy, the proposed BTsC model provides
interpretable results, making the technique a valuable tool to study neural
activity in various tasks and categories. The proposed solution can be applied to neural data recorded in various tasks, wherever there is a need for interpretable results and high classification accuracy.
|
http://arxiv.org/abs/2307.15672v1
|
The shapes of Stokes profiles contain much information about the atmospheric
conditions that produced them. However, a variety of different atmospheric
structures can produce very similar profiles. Thus, it is important for proper
interpretation of observations to have a good understanding of how the shapes
of Stokes profiles depend on the underlying atmosphere. An excellent tool in
this regard is forward modeling, i.e. computing and studying synthetic spectra
from realistic simulations of the solar atmosphere. Modern simulations
routinely produce several hundred thousand spectral profiles per snapshot. With
such numbers, it becomes necessary to use automated procedures in order to
organize the profiles according to their shape. Here we illustrate the use of
two complementary methods, k-means and k-Shape, to cluster similarly shaped
profiles, and demonstrate how the resulting clusters can be combined with
knowledge of the simulation's atmosphere to interpret spectral shapes. We
generate synthetic Stokes profiles for the Ca II 854.2 nm line using the
Multi3D code from a Bifrost simulation snapshot. We then apply the k-means and
k-Shape clustering techniques to group the profiles together according to their
shape. We show and compare the classes of profile shapes we retrieve from
applying both k-means and k-Shape to our synthetic intensity spectra. We then
show the structure of the underlying atmosphere for two particular classes of
profile shapes retrieved by the clustering, and demonstrate how this leads to
an interpretation for the formation of those profile shapes. Furthermore, we
apply both methods to the subset of our profiles containing the strongest
Stokes V signals, and demonstrate how k-Shape can be qualitatively better than
k-means at retrieving complex profile shapes when using a small number of
clusters.
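A minimal sketch of clustering profiles with the two methods named above, using scikit-learn and tslearn (the cluster count and z-normalization are illustrative choices):

```python
import numpy as np
from sklearn.cluster import KMeans
from tslearn.clustering import KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

def cluster_profiles(profiles: np.ndarray, n_clusters: int = 20):
    """profiles: (n_profiles, n_wavelengths) intensity spectra."""
    km_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(profiles)
    # k-Shape works on z-normalized series with a shape-based
    # (cross-correlation) distance, so it groups by shape, not amplitude.
    X = TimeSeriesScalerMeanVariance().fit_transform(profiles[..., np.newaxis])
    ks_labels = KShape(n_clusters=n_clusters, random_state=0).fit_predict(X)
    return km_labels, ks_labels
```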
|
http://arxiv.org/abs/2306.05748v1
|
Automatically associating ICD codes with electronic health data is a
well-known NLP task in medical research. NLP has evolved significantly in
recent years with the emergence of pre-trained language models based on
Transformers architecture, mainly in the English language. This paper adapts
these models to automatically associate the ICD codes. Several neural network
architectures have been experimented with to address the challenges of dealing
with a large set of both input tokens and labels to be guessed. In this paper,
we propose a model that combines the latest advances in NLP and multi-label
classification for ICD-10 code association. Fair experiments on a clinical dataset in the French language show that our approach increases the $F_1$-score
metric by more than 55\% compared to state-of-the-art results.
|
http://arxiv.org/abs/2304.02886v1
|
Irregular satellites are the minor bodies found orbiting all four Solar
System giant planets, with large semi-major axes, eccentricities, and
inclinations. Previous studies have determined that the Solar System's
irregular satellites are extremely collisionally evolved populations today,
having lost $\sim$99 per cent of their initial mass over the course of hundreds
of Myr. Such an evolution implies that the irregular satellites must have
produced a population of dusty collisional debris in the past, which is
potentially observable due to the resulting reprocessing of stellar light. In
this paper we examine the signatures of the debris discs produced by extrasolar
analogues of this process. Radiation pressure, quantified by the parameter
$\beta$, is the driving force behind the liberation of dust grains from the
planetary Hill sphere, and results in the formation of circumstellar dust
rings, even in the absence of an underlying belt of asteroids in the system.
Our simulated discs reproduce many of the same features seen in some classes of
observed debris discs, such as thin ring morphology, a large blowout size, and
azimuthal symmetry. We compare our simulated discs' radial profiles to those of
the narrow dust rings observed around Fomalhaut and HR 4796A, and show that
they can broadly reproduce the observed radial distribution of dust.
|
http://arxiv.org/abs/2304.13753v1
|
Pre-trained language models (PLMs) have become a prevalent technique in deep
learning for code, utilizing a two-stage pre-training and fine-tuning procedure
to acquire general knowledge about code and specialize in a variety of
downstream tasks. However, the dynamic nature of software codebases poses a
challenge to the effectiveness and robustness of PLMs. In particular,
real-world scenarios potentially lead to significant differences between
the distribution of the pre-training and test data, i.e., distribution shift,
resulting in a degradation of the PLM's performance on downstream tasks. In
this paper, we stress the need for adapting PLMs of code to software data whose
distribution changes over time, a crucial problem that has been overlooked in
previous works. The motivation of this work is to consider the PLM in a
non-stationary environment, where fine-tuning data evolves over time according
to a software evolution scenario. Specifically, we design a scenario where the
model needs to learn from a stream of programs containing new, unseen APIs over
time. We study two widely used PLM architectures, i.e., a GPT2 decoder and a
RoBERTa encoder, on two downstream tasks, API call and API usage prediction. We
demonstrate that the most commonly used fine-tuning technique from prior work
is not robust enough to handle the dynamic nature of APIs, leading to the loss
of previously acquired knowledge, i.e., catastrophic forgetting. To address
these issues, we implement five continual learning approaches, including
replay-based and regularization-based methods. Our findings demonstrate that
utilizing these straightforward methods effectively mitigates catastrophic
forgetting in PLMs across both downstream tasks while achieving comparable or
superior performance.
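A minimal sketch of the replay-based family mentioned above: keep a reservoir of past examples and mix them into each fine-tuning batch (the buffer size and mixing ratio are illustrative, not the paper's settings):

```python
import random

class ReplayBuffer:
    """Reservoir-style memory of past training examples."""
    def __init__(self, capacity=1000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:  # reservoir sampling keeps a uniform sample of the stream
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def mix(self, batch, ratio=0.5):
        """Return the batch augmented with replayed past examples."""
        k = min(int(len(batch) * ratio), len(self.data))
        return batch + random.sample(self.data, k)
```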
|
http://arxiv.org/abs/2305.04106v2
|
Accurate simulation of granular flow dynamics is crucial for assessing
various geotechnical risks, including landslides and debris flows. Granular
flows involve a dynamic rearrangement of particles exhibiting complex
transitions from solid-like to fluid-like responses. Traditional continuum and
discrete numerical methods are limited by their computational cost in
simulating large-scale systems. Statistical or machine learning-based models
offer an alternative. Still, they are largely empirical, based on a limited set
of parameters. Due to their permutation-dependent learning, traditional machine
learning-based models require huge training data to generalize. To resolve
these problems, we use a graph neural network, a state-of-the-art machine
learning architecture that learns local interactions. Graphs represent the
state of dynamically changing granular flows and the interaction laws, such as
energy and momentum exchange between grains. We develop a graph neural
network-based simulator (GNS) that takes the current state of granular flow and
predicts the next state using Euler explicit integration by learning the local
interaction laws. We train GNS on different granular trajectories. We then
assess the performance of GNS by predicting granular column collapse. GNS
accurately predicts flow dynamics for column collapses with different aspect
ratios unseen during training. GNS is hundreds of times faster than
high-fidelity numerical simulators. The model also generalizes to domains much
larger than the training data, handling more than twice the number of particles it was trained on.
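The update rule described above, in sketch form: the learned network predicts per-particle accelerations from the current state, and the state is advanced by explicit Euler integration (the trained GNN is assumed given as a callable):

```python
import numpy as np

def gns_step(positions, velocities, predict_acceleration, dt=1.0):
    """positions, velocities: (n_particles, dim) arrays.
    predict_acceleration: learned GNN mapping the current state -> (n, dim)."""
    acc = predict_acceleration(positions, velocities)
    velocities = velocities + dt * acc          # explicit Euler
    positions = positions + dt * velocities
    return positions, velocities

# Rollout: feed the predicted state back in repeatedly to simulate a collapse.
```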
|
http://arxiv.org/abs/2305.05218v2
|
A useful capability is that of classifying some agent's behavior using data
from a sequence, or trace, of sensor measurements. The sensor selection problem
involves choosing a subset of available sensors to ensure that, when generated,
observation traces will contain enough information to determine whether the
agent's activities match some pattern. In generalizing prior work, this paper
studies a formulation in which multiple behavioral itineraries may be supplied,
with sensors selected to distinguish between behaviors. This allows one to pose
fine-grained questions, e.g., to position the agent's activity on a spectrum.
In addition, with multiple itineraries, one can also ask about choices of
sensors where some behavior is always plausibly concealed by (or mistaken for)
another. Using sensor ambiguity to limit the acquisition of knowledge is a
strong privacy guarantee, a form of guarantee which some earlier work examined
under formulations distinct from our inter-itinerary conflation approach. By
concretely formulating privacy requirements for sensor selection, this paper
connects both lines of work in a novel fashion: privacy, where there is a bound from above, and behavior verification, where sensor choices are bounded from below. We examine the worst-case computational complexity that results from
both types of bounds, proving that upper bounds are more challenging under
standard computational complexity assumptions. The problem is intractable in
general, but we introduce an approach to solving this problem that can exploit
interrelationships between constraints, and identify opportunities for
optimizations. Case studies are presented to demonstrate the usefulness and
scalability of our proposed solution, and to assess the impact of the
optimizations.
|
http://arxiv.org/abs/2307.13203v2
|
We study Lindström quantifiers that satisfy certain closure properties which
are motivated by the study of polymorphisms in the context of constraint
satisfaction problems (CSP). When the algebra of polymorphisms of a finite
structure B satisfies certain equations, this gives rise to a natural closure
condition on the class of structures that map homomorphically to B. The
collection of quantifiers that satisfy closure conditions arising from a fixed
set of equations are rather more general than those arising as CSP. For any
such conditions P, we define a pebble game that delimits the distinguishing
power of the infinitary logic with all quantifiers that are P-closed. We use
the pebble game to show that the problem of deciding whether a system of linear
equations is solvable in Z2 is not expressible in the infinitary logic with all
quantifiers closed under a near-unanimity condition.
|
http://arxiv.org/abs/2308.03695v1
|
We extend the framework of analyzing the 2HDM in its orbit space to study the
one-loop effective potential before and after electroweak symmetry breaking. In
this framework, we present a comprehensive analysis of global symmetries of the
one-loop thermal effective potential in the 2HDM, demonstrating when the global
symmetries of the tree-level 2HDM potential are broken by loop contributions.
By introducing light-cone coordinates and generalizing the bilinear notation
around the vacuum, we present a geometric view of the scalar mass matrix and
on-shell renormalization conditions.
|
http://arxiv.org/abs/2305.12764v2
|
Computer vision-based object detection is a key modality for advanced
Detect-And-Avoid systems that allow for autonomous flight missions of UAVs.
While standard object detection frameworks do not predict the actual depth of
an object, this information is crucial to avoid collisions. In this paper, we
propose several novel extensions to state-of-the-art methods for monocular
object detection from images at long range. Firstly, we propose Sigmoid and
ReLU-like encodings when modeling depth estimation as a regression task.
Secondly, we frame the depth estimation as a classification problem and
introduce a Soft-Argmax function in the calculation of the training loss. The
extensions are exemplarily applied to the YOLOX object detection framework. We
evaluate the performance using the Amazon Airborne Object Tracking dataset. In
addition, we introduce the Fitness score as a new metric that jointly assesses
both object detection and depth estimation performance. Our results show that
the proposed methods outperform state-of-the-art approaches w.r.t. existing, as
well as the proposed metrics.
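A sketch of the classification-with-Soft-Argmax formulation mentioned above: the network outputs logits over discrete depth bins, and a differentiable expectation over bin centers yields a continuous depth estimate for the loss (the bin layout is an illustrative choice):

```python
import torch
import torch.nn.functional as F

def soft_argmax_depth(logits: torch.Tensor, d_min=10.0, d_max=250.0):
    """logits: (batch, n_bins) scores over depth bins; d_min/d_max are
    illustrative range limits. Returns a differentiable depth per sample."""
    n_bins = logits.shape[-1]
    centers = torch.linspace(d_min, d_max, n_bins, device=logits.device)
    probs = F.softmax(logits, dim=-1)
    return (probs * centers).sum(dim=-1)   # expectation over bin centers

# Training: loss = F.l1_loss(soft_argmax_depth(logits), true_depth)
```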
|
http://arxiv.org/abs/2302.08943v1
|
Pseudospherical surfaces determined by Cauchy problems involving the
Camassa-Holm equation are considered herein. We study how global solutions
influence the corresponding surface, and we investigate two sorts of singularities of the metric: the first occurs when the co-frame of dual forms is not linearly independent. The second sort of singularity is that
arising from solutions blowing up. In particular, it is shown that the metric
blows up if and only if the solution breaks in finite time.
|
http://arxiv.org/abs/2310.18941v1
|
We show a natural extension of the Novikov numbers associated to the basic
cohomology class of a closed $1$-form on an orbifold, thus proving
corresponding Novikov inequalities for the compact case.
|
http://arxiv.org/abs/2306.05990v1
|
Searchable encrypted (SE) indexing systems are a useful tool for utilizing
cloud services to store and manage sensitive information. However, much of the
work on SE systems to date has remained theoretical. In order to make them of
practical use, more work is needed to develop optimal protocols and working
models for them. This includes, in particular, the creation of a working update
model in order to maintain an encrypted index of a dynamic document set such as
an email inbox. I have created a working, real-world end-to-end SE
implementation that satisfies these needs, including the first empirical
performance evaluation of the dynamic SE update operation. In doing so, I show
a viable path to move from the theoretical concepts described by previous
researchers to a future production-worthy implementation and identify issues
for follow-on investigation.
|
http://arxiv.org/abs/2308.13486v1
|
Text reading order is a crucial aspect in the output of an OCR engine, with a
large impact on downstream tasks. Its difficulty lies in the large variation of
domain-specific layout structures, and is further exacerbated by real-world
image degradations such as perspective distortions. We propose a lightweight,
scalable and generalizable approach to identify text reading order with a
multi-modal, multi-task graph convolutional network (GCN) running on a sparse
layout based graph. Predictions from the model provide hints of bidimensional
relations among text lines and layout region structures, upon which a
post-processing cluster-and-sort algorithm generates an ordered sequence of all
the text lines. The model is language-agnostic and runs effectively across
multi-language datasets that contain various types of images taken in
uncontrolled conditions, and it is small enough to be deployed on virtually any
platform including mobile devices.
|
http://arxiv.org/abs/2305.02577v1
|
The performance of neural networks in content-based image retrieval (CBIR) is
highly influenced by the chosen loss (objective) function. The majority of
objective functions for neural models can be divided into metric learning and
statistical learning. Metric learning approaches require a pair mining strategy
that often lacks efficiency, while statistical learning approaches do not generate highly compact features due to their indirect feature optimization.
To this end, we propose a novel repeller-attractor loss that falls in the
metric learning paradigm, yet directly optimizes for the L2 metric without the
need of generating pairs. Our loss is formed of three components. One leading
objective ensures that the learned features are attracted to each designated
learnable class anchor. The second loss component regulates the anchors and
forces them to be separable by a margin, while the third objective ensures that
the anchors do not collapse to zero. Furthermore, we develop a more efficient
two-stage retrieval system by harnessing the learned class anchors during the
first stage of the retrieval process, eliminating the need to compare the query with every image in the database. We establish a set of four datasets
(CIFAR-100, Food-101, SVHN, and Tiny ImageNet) and evaluate the proposed
objective in the context of few-shot and full-set training on the CBIR task, by
using both convolutional and transformer architectures. Compared to existing
objective functions, our empirical evidence shows that the proposed objective
is generating superior and more consistent results.
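A sketch of the three-component loss as described: features are attracted to their learnable class anchor, anchors repel each other up to a margin, and a third term keeps anchors from collapsing to zero (the margin and equal weighting are illustrative; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def repeller_attractor_loss(feats, labels, anchors, margin=1.0):
    """feats: (B, D) embeddings; labels: (B,); anchors: (C, D) learnable."""
    attract = ((feats - anchors[labels]) ** 2).sum(dim=1).mean()  # pull to anchor
    dists = torch.cdist(anchors, anchors)                         # (C, C)
    off_diag = ~torch.eye(len(anchors), dtype=torch.bool, device=dists.device)
    repel = F.relu(margin - dists[off_diag]).mean()  # separate anchors by margin
    anti_collapse = F.relu(margin - anchors.norm(dim=1)).mean()  # keep ||a|| > 0
    return attract + repel + anti_collapse
```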
|
http://arxiv.org/abs/2306.00630v2
|
In this article, we study the problems found in the Susa Mathematical Texts
No.\,24 and No.\,25 (\textbf{SMT No.\,24} and \textbf{SMT No.\,25}) which
concern excavation projects such as canals and holes. We also examine certain
Elamite structures, such as the canal systems serving Susa and a reservoir at
the ziggurat of Chogha Zanbil, in whose construction geometry might well have
played an important role.
|
http://arxiv.org/abs/2304.01357v1
|
We introduce two 1D tight-binding models based on the Tribonacci
substitution, the hopping and on-site Tribonacci chains, which generalize the
Fibonacci chain. For both hopping and on-site models, a perturbative real-space
renormalization procedure is developed. We show that the two models are
equivalent at the fixed point of the renormalization group flow, and that the
renormalization procedure naturally gives the Local Resonator Modes.
Additionally, the Rauzy fractal, inherent to the Tribonacci substitution, is
shown to serve as the analog of conumbering for the Tribonacci chain. The
renormalization procedure is used to repeatedly subdivide the Rauzy fractal
into copies of itself, which can be used to describe the eigenstates in terms
of Local Resonator Modes. Finally, the multifractal dimensions of the energy
spectrum and eigenstates of the hopping Tribonacci chain are computed, from
which it can be concluded that the Tribonacci chains are critical.
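For reference, the Tribonacci substitution generating these chains is $a\to ab$, $b\to ac$, $c\to a$; a few iterations in code (how the letters map to hoppings or on-site energies is a modeling choice not fixed here):

```python
RULES = {"a": "ab", "b": "ac", "c": "a"}

def tribonacci_word(n_iter: int) -> str:
    word = "a"
    for _ in range(n_iter):
        word = "".join(RULES[ch] for ch in word)
    return word

# Word lengths 1, 2, 4, 7, 13, 24, ... obey the Tribonacci recursion
# T(n) = T(n-1) + T(n-2) + T(n-3).
print(tribonacci_word(4))  # "abacabaabacab" (length 13)
```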
|
http://arxiv.org/abs/2304.11144v2
|
This paper deals with both the higher order Tur\'an inequalities and the
Laguerre inequalities for quasi-polynomial-like functions, that is, expressions of the form $f(n)=c_l(n)n^l+\cdots+c_d(n)n^d+o(n^d)$, where
$d,l\in\mathbb{N}$ and $d\leqslant l$. A natural example of such a function is
the $A$-partition function $p_{A}(n)$, which enumerates the number of
partitions of $n$ with parts in the fixed finite multiset
$A=\{a_1,a_2,\ldots,a_k\}$ of positive integers. For an arbitrary positive
integer $d$, we present efficient criteria for both the order $d$ Tur\'an
inequality and the $d$th Laguerre inequality for quasi-polynomial-like
functions. In particular, we apply these results to deduce non-trivial
analogues for $p_A(n)$.
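For orientation, the classical order-$2$ Tur\'an inequality and the first Laguerre inequality, the two statements generalized above to arbitrary order $d$, read:

$$a_n^2\;\geqslant\;a_{n-1}a_{n+1},\qquad \big(f'(x)\big)^2-f(x)\,f''(x)\;\geqslant\;0.$$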
|
http://arxiv.org/abs/2310.13814v1
|
In this conceptual paper, we discuss quantum formalisms which do not use the
famous Axiom of Choice. We also consider the fundamental problem which
addresses the (in)correctness of having the complex numbers as the base field
for Hilbert spaces in the K{\o}benhavn interpretation of quantum theory, and
propose a new approach to this problem (based on the Lefschetz principle).
Rather than a Theorem--Proof paper, this paper describes two new research
programs on the foundational level, and focuses on fundamental open questions
in these programs which come along the way.
|
http://arxiv.org/abs/2305.10173v1
|
Derived $A_\infty$-algebras have a wealth of theoretical advantages over
regular $A_\infty$-algebras. However, due to their bigraded nature, in practice
they are often unwieldy to work with. We develop a framework involving brace
algebras on operads which allows us to study derived $A_\infty$-algebras in a
new conceptual context. One particular advantage is that this construction
allows us to generalize the Lie algebra structure on the Hochschild complex of
an $A_\infty$-algebra, obtaining new and rigorous versions of the Deligne
conjecture.
|
http://arxiv.org/abs/2307.11414v3
|
A class of almost paratopological groups is introduced, which (1) contains paratopological groups and Hausdorff quasitopological groups; (2) is closed under products; and (3) is closed under subgroups. Almost paratopological $T_1$ groups $G$ are
characterized by the fact that $\{(x,y)\in G^2: xy=e\}$ is closed in $G^2$. A
compact almost paratopological group is topological. A regular $\Sigma$-space
with countable extent and a separately continuous Mal'tsev operation is
$\omega$-cellular (and ccc). A $\sigma$-compact regular almost paratopological
group is ccc. In particular, a $\sigma$-compact regular quasitopological group
is ccc.
|
http://arxiv.org/abs/2306.06241v2
|
Most existing large-scale academic search engines are built to retrieve
text-based information. However, there are no large-scale retrieval services
for scientific figures and tables. One challenge for such services is
understanding scientific figures' semantics, such as their types and purposes.
A key obstacle is the need for datasets containing annotated scientific figures
and tables, which can then be used for classification, question-answering, and
auto-captioning. Here, we develop a pipeline that extracts figures and tables
from the scientific literature and a deep-learning-based framework that
classifies scientific figures using visual features. Using this pipeline, we
built the first large-scale automatically annotated corpus, ACL-Fig, consisting
of 112,052 scientific figures extracted from ~56K research papers in the ACL
Anthology. The ACL-Fig-Pilot dataset contains 1,671 manually labeled scientific
figures belonging to 19 categories. The dataset is accessible at
https://huggingface.co/datasets/citeseerx/ACL-fig under a CC BY-NC license.
|
http://arxiv.org/abs/2301.12293v1
|
We study the problem of privately estimating the parameters of
$d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components. For this,
we develop a technique to reduce the problem to its non-private counterpart.
This allows us to privatize existing non-private algorithms in a blackbox
manner, while incurring only a small overhead in the sample complexity and
running time. As the main application of our framework, we develop an
$(\varepsilon, \delta)$-differentially private algorithm to learn GMMs using
the non-private algorithm of Moitra and Valiant [MV10] as a blackbox.
Consequently, this gives the first sample complexity upper bound and first
polynomial time algorithm for privately learning GMMs without any boundedness
assumptions on the parameters. As part of our analysis, we prove a tight (up to
a constant factor) lower bound on the total variation distance of
high-dimensional Gaussians which can be of independent interest.
|
http://arxiv.org/abs/2303.04288v2
|
The quality of a wood log in the wood industry depends heavily on the
presence of both outer and inner defects, including inner knots that are a
result of the growth of tree branches. Today, locating the inner knots requires the use of expensive equipment such as X-ray scanners. In this paper, we
address the task of predicting the location of inner defects from the outer
shape of the logs. The dataset is built by extracting both the contours and the
knots with X-ray measurements. We propose to solve this binary segmentation
task by leveraging convolutional recurrent neural networks. Once the neural
network is trained, inference can be performed from the outer shape measured
with cheap devices such as laser profilers. We demonstrate the effectiveness of
our approach on fir and spruce tree species and perform ablation on the
recurrence to demonstrate its importance.
|
http://arxiv.org/abs/2308.11291v1
|
Human genetic diseases often arise from point mutations, emphasizing the
critical need for precise genome editing techniques. Among these, base editing
stands out as it allows targeted alterations at the single nucleotide level.
However, its clinical application is hindered by low editing efficiency and
unintended mutations, necessitating extensive trial-and-error experimentation
in the laboratory. To speed up this process, we present an attention-based
two-stage machine learning model that learns to predict the likelihood of all
possible editing outcomes for a given genomic target sequence. We further
propose a multi-task learning schema to jointly learn multiple base editors
(i.e. variants) at once. Our model's predictions consistently demonstrated a
strong correlation with the actual experimental results on multiple datasets
and base editor variants. These results provide further validation for the
models' capacity to enhance and accelerate the process of refining base editing
designs.
|
http://arxiv.org/abs/2310.02919v2
|