text | source
---|---
Automated visualization recommendation facilitates the rapid creation of
effective visualizations, which is especially beneficial for users with limited
time and limited knowledge of data visualization. There is an increasing trend
in leveraging machine learning (ML) techniques to achieve an end-to-end
visualization recommendation. However, existing ML-based approaches implicitly
assume that there is only one appropriate visualization for a specific dataset,
which is often not true for real applications. Also, they often work like a
black box, making it difficult for users to understand the reasons for
recommending specific visualizations. To fill the research gap, we propose
AdaVis, an adaptive and explainable approach to recommend one or multiple
appropriate visualizations for a tabular dataset. It leverages a box
embedding-based knowledge graph to effectively model the possible one-to-many mapping
relations among different entities (i.e., data features, dataset columns,
datasets, and visualization choices). The embeddings of the entities and
relations can be learned from dataset-visualization pairs. Also, AdaVis
incorporates the attention mechanism into the inference framework. Attention
can indicate the relative importance of data features for a dataset and provide
fine-grained explainability. Our extensive evaluations through quantitative
metric evaluations, case studies, and user interviews demonstrate the
effectiveness of AdaVis.
|
http://arxiv.org/abs/2310.11742v1
|
Our objective is to derive the range and velocity of multiple targets from
the delay-Doppler domain for radar sensing using orthogonal time frequency
space (OTFS) signaling. Noise contamination affects the performance of OTFS
signals in real-world environments, making radar sensing challenging. This work
introduces a two-stage approach to tackle this issue. In the first stage, we
use a generative adversarial network to denoise the corrupted OTFS samples,
significantly improving the data quality. Following this, the denoised signals
are passed to a convolutional neural network model to predict the values of the
velocities and ranges of multiple targets. The proposed two-stage approach can
predict the range and velocity of multiple targets, even in very low
signal-to-noise ratio scenarios, with high accuracy and outperforms existing
methods.
|
http://arxiv.org/abs/2310.00897v2
|
Plant phenology and phenotype prediction using remote sensing data is
increasingly gaining the attention of the plant science community to improve
agricultural productivity. This work aims to generate synthetic forestry images
that satisfy certain phenotypic attributes, viz. canopy greenness. We harness a
Generative Adversarial Network (GAN) to synthesize biologically plausible and
phenotypically stable forestry images conditioned on the greenness of
vegetation (a continuous attribute) over a specific region of interest
(describing a particular vegetation type in a mixed forest). The training data
is based on the automated digital camera imagery provided by the National
Ecological Observatory Network (NEON) and processed by the PhenoCam Network.
Our method helps render the appearance of forest sites specific to a greenness
value. The synthetic images are utilized to predict another phenotypic
attribute, viz., redness of plants. The Structural SIMilarity (SSIM) index is
used to assess the quality of the synthetic images. The greenness and redness
indices of the generated synthetic images are compared against those of the
original images using Root Mean Squared Percentage Error (RMSPE) to evaluate
their accuracy and integrity. The generalizability and scalability of our
proposed GAN model are demonstrated by adapting it to generate
synthetic images for other forest sites and vegetation types.
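As a point of reference, the two evaluation metrics mentioned above can be computed as in the following sketch; array and variable names are placeholder assumptions, not the paper's code.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def rmspe(reference, synthetic):
    """Root Mean Squared Percentage Error between two index series
    (assumes the reference values are non-zero)."""
    reference = np.asarray(reference, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    return 100.0 * np.sqrt(np.mean(((reference - synthetic) / reference) ** 2))

# real_img, fake_img: H x W x 3 uint8 arrays (hypothetical inputs)
# quality = ssim(real_img, fake_img, channel_axis=-1)   # scikit-image >= 0.19
# error = rmspe(greenness_real, greenness_synthetic)    # index accuracy
```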
|
http://arxiv.org/abs/2307.03789v2
|
Deep learning achieves outstanding results in many machine learning tasks.
Nevertheless, it is vulnerable to backdoor attacks that modify the training set
to embed a secret functionality in the trained model. The modified training
samples have a secret property, i.e., a trigger. At inference time, the secret
functionality is activated when the input contains the trigger, while the model
functions correctly in other cases. While there are many known backdoor attacks
(and defenses), deploying a stealthy attack is still far from trivial.
Successfully creating backdoor triggers depends on numerous parameters.
Unfortunately, research has not yet determined which parameters contribute most
to the attack performance.
This paper systematically analyzes the most relevant parameters for the
backdoor attacks, i.e., trigger size, position, color, and poisoning rate.
Using transfer learning, which is very common in computer vision, we evaluate
the attack on state-of-the-art models (ResNet, VGG, AlexNet, and GoogLeNet) and
datasets (MNIST, CIFAR10, and TinyImageNet). Our attacks cover the majority of
backdoor settings in research, providing concrete directions for future works.
Our code is publicly available to facilitate the reproducibility of our
results.
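To make the studied parameters concrete, below is a hedged sketch of a patch-style trigger injection covering trigger size, position, color, and poisoning rate; it is an illustration, not the paper's exact implementation.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_size=3,
                   position=(0, 0), color=(255, 255, 255),
                   poisoning_rate=0.01, seed=0):
    """Stamp a square trigger on a random fraction of (N, H, W, 3) images
    and relabel the poisoned samples with the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poisoning_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    r, c = position
    for i in idx:
        images[i, r:r + trigger_size, c:c + trigger_size] = color
        labels[i] = target_label
    return images, labels, idx
```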
|
http://arxiv.org/abs/2302.01740v2
|
We construct a logarithmic version of the Hilbert scheme, and more generally
the Quot scheme, of a simple normal crossings pair. The logarithmic Quot space
admits a natural tropicalisation called the space of tropical supports, which
is a functor on the category of cone complexes. The fibers of the map to the
space of tropical supports are algebraic. The space of tropical supports is
representable by ``piecewise linear spaces'', which are introduced here to
generalise fans and cone complexes to allow non-convex geometries. The space
of tropical supports can be seen as a polyhedral analogue of the Hilbert
scheme. The logarithmic Quot space parameterises quotient sheaves on
logarithmic modifications that satisfy a natural transversality condition. We
prove that this space is a logarithmic algebraic space, is separated, and is
universally closed. The logarithmic Hilbert space parameterizes families of
proper monomorphisms, and in this way is exactly analogous to the classical
Hilbert scheme. The new complexity of the space can then be viewed as stemming
from the complexity of proper monomorphisms in logarithmic geometry. Our
construction generalises the logarithmic Donaldson--Thomas space studied by
Maulik--Ranganathan to arbitrary rank and dimension, and the good degenerations
of Quot schemes of Li--Wu to simple normal crossings geometries.
|
http://arxiv.org/abs/2308.14470v1
|
We propose a simple generalization of standard and empirically successful
decision tree learning algorithms such as ID3, C4.5, and CART. These
algorithms, which have been central to machine learning for decades, are greedy
in nature: they grow a decision tree by iteratively splitting on the best
attribute. Our algorithm, Top-$k$, considers the $k$ best attributes as
possible splits instead of just the single best attribute. We demonstrate,
theoretically and empirically, the power of this simple generalization. We
first prove a {\sl greediness hierarchy theorem} showing that for every $k \in
\mathbb{N}$, Top-$(k+1)$ can be dramatically more powerful than Top-$k$: there
are data distributions for which the former achieves accuracy $1-\varepsilon$,
whereas the latter only achieves accuracy $\frac1{2}+\varepsilon$. We then
show, through extensive experiments, that Top-$k$ outperforms the two main
approaches to decision tree learning: classic greedy algorithms and more recent
"optimal decision tree" algorithms. On one hand, Top-$k$ consistently enjoys
significant accuracy gains over greedy algorithms across a wide range of
benchmarks. On the other hand, Top-$k$ is markedly more scalable than optimal
decision tree algorithms and is able to handle dataset and feature set sizes
that remain far beyond the reach of these algorithms.
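A minimal sketch of the core idea: score every attribute at a node and keep the $k$ highest-scoring candidates instead of committing to the single best. Scoring by information gain on binary features is an assumption made here for illustration; the full procedure recurses on each candidate and keeps the most accurate resulting subtree.

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(X, y, attr):
    mask = X[:, attr] == 1            # binary features assumed
    if mask.all() or (~mask).all():
        return 0.0
    w = mask.mean()
    return entropy(y) - w * entropy(y[mask]) - (1 - w) * entropy(y[~mask])

def top_k_attributes(X, y, k):
    """Candidate splits Top-k explores at a node; a greedy learner keeps only the first."""
    gains = [information_gain(X, y, a) for a in range(X.shape[1])]
    return list(np.argsort(gains)[::-1][:k])
```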
|
http://arxiv.org/abs/2310.01551v2
|
The IceCube South Pole Neutrino Observatory is a Cherenkov detector
instrumented in a cubic kilometer of ice at the South Pole. IceCube's primary
scientific goal is the detection of TeV neutrino emissions from astrophysical
sources. At the lower center of the IceCube array, there is a subdetector
called DeepCore, which has a denser configuration that makes it possible to
lower the energy threshold of IceCube and observe GeV-scale neutrinos, opening
the window to atmospheric neutrino oscillation studies. Advances in physics
sensitivity have recently been achieved by employing Convolutional Neural
Networks to reconstruct neutrino interactions in the DeepCore detector. In this
contribution, the recent IceCube result from the atmospheric muon neutrino
disappearance analysis using the CNN-reconstructed neutrino sample is presented
and compared to the existing worldwide measurements.
|
http://arxiv.org/abs/2307.15855v1
|
Score-based generative models are a new class of generative models that have
been shown to accurately generate high dimensional calorimeter datasets. Recent
advances in generative models have used images with 3D voxels to represent and
model complex calorimeter showers. Point clouds, however, are likely a more
natural representation of calorimeter showers, particularly in calorimeters
with high granularity. Point clouds preserve all of the information of the
original simulation, more naturally deal with sparse datasets, and can be
implemented with more compact models and data files. In this work, two
state-of-the-art score-based models are trained on the same set of calorimeter
simulations and directly compared.
|
http://arxiv.org/abs/2307.04780v2
|
In geographic data videos, camera movements are frequently used and combined
to present information from multiple perspectives. However, creating and
editing camera movements requires significant time and professional skills.
This work aims to lower the barrier of crafting diverse camera movements for
geographic data videos. First, we analyze a corpus of 66 geographic data videos
and derive a design space of camera movements with a dimension for geospatial
targets and one for narrative purposes. Based on the design space, we propose a
set of adaptive camera shots and further develop an interactive tool called
GeoCamera. This interactive tool allows users to flexibly design camera
movements for geographic visualizations. We verify the expressiveness of our
tool through case studies and evaluate its usability with a user study. The
participants find that the tool facilitates the design of camera movements.
|
http://arxiv.org/abs/2303.06460v3
|
N and $\Delta$ baryons hold an important place towards understanding the
quark dynamics inside hadrons. The hypercentral Constituent Quark Model (hCQM)
has been employed in various studies ranging from light to heavy hadrons. In
the present article, a screened potential has been used to study light baryon
resonances. The Regge trajectories have been plotted, along with the details of
their slopes and intercepts. The strong pion decay widths have been calculated
for some channels using the present masses.
|
http://arxiv.org/abs/2305.02588v1
|
The aim of this paper is, making use of the Gaia DR3 catalogue and Virtual
Observatory tools, to confirm and characterize 428 binary and multiple stellar
systems classified as neglected (only one observation) in the Washington Double
Star Catalogue (WDS). The components of the stellar systems have the same
parallax and proper motion (within the errors) and are separated by less than
50 000 AU, which minimizes the number of by-chance counterparts. Effective
temperatures calculated using VOSA were used to estimate stellar masses.
Binding energies were calculated for 42 binary systems confirming they are
physical pairs. We also found 75 pairs with F/G-M spectral types, which are
very useful for improving the determination of the metallicity of the M star
from the higher-mass component.
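For reference, the binding-energy check mentioned above is typically based on the Newtonian two-body potential energy, here written with the projected separation $s$ as a proxy for the true separation (our notation; the paper's exact criterion and threshold are not restated):
$$ U^{*} = -\,G\,\frac{M_{1} M_{2}}{s}, $$
where $M_1$ and $M_2$ are the component masses estimated from the VOSA effective temperatures, and a pair is taken to be physically bound when $|U^{*}|$ exceeds an empirical threshold.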
|
http://arxiv.org/abs/2310.06558v1
|
Large language models (LLMs) are increasingly capable and prevalent, and can
be used to produce creative content. The quality of content is influenced by
the prompt used, with more specific prompts that incorporate examples generally
producing better results. Building on this, instructions written for
crowdsourcing tasks (which are specific and include examples to guide workers)
could serve as effective LLM prompts. To explore this,
we used a previous crowdsourcing pipeline that gave examples to people to help
them generate a collectively diverse corpus of motivational messages. We then
used this same pipeline to generate messages using GPT-4, and compared the
collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the
pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts
using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages
than the two baseline prompts. We also discuss implications from messages
generated by both human writers and LLMs.
|
http://arxiv.org/abs/2308.13479v1
|
In this work, a near-wall model, which couples the inverse of a recently
developed compressible velocity transformation [Griffin, Fu, & Moin, PNAS,
118:34, 2021] and an algebraic temperature-velocity relation, is developed for
high-speed turbulent boundary layers. As input, the model requires the mean
flow state at one wall-normal height in the inner layer of the boundary layer
and at the boundary-layer edge. As output, the model can predict mean
temperature and velocity profiles across the entire inner layer, as well as the
wall shear stress and heat flux. The model is tested in an a priori sense using
a wide database of direct numerical simulation high-Mach-number turbulent
channel flows, pipe flows, and boundary layers (48 cases with edge Mach numbers
in the range of 0.77--11 and semi-local friction Reynolds numbers in the range
of 170--5700). The present model is significantly more accurate than the
classical ordinary differential equation (ODE) model for all cases tested. The
model is deployed as a wall model for large-eddy simulations in channel flows
with bulk Mach numbers in the range of 0.7--4 and friction Reynolds numbers in
the range of 320--1800. When compared to the classical framework, in the a
posteriori sense, the present method greatly improves the predicted heat flux,
wall stress, and temperature and velocity profiles, especially in cases with
strong heat transfer. In addition, the present model solves one ODE instead of
two and has a similar computational cost and implementation complexity as the
commonly used ODE model.
|
http://arxiv.org/abs/2307.04958v1
|
The fully charmed hadronic scalar molecules $\mathcal{M}_1=\eta_c \eta_c$ and
$\mathcal{M}_2=\chi_{c0}\chi_{c0}$ are studied in the context of the QCD sum
rule method. The masses $m$, $\widetilde{m}$ and current couplings $f$, $
\widetilde{f}$ of these states are calculated using the two-point sum rule
approach. The obtained results $m=(6264 \pm 50)~\mathrm{MeV}$ and $
\widetilde{m}=(6954 \pm 50)~\mathrm{MeV}$ are employed to determine their decay
channels. It is demonstrated that the processes $\mathcal{M}_1\to J/\psi J/\psi
$ and $\mathcal{M}_1\to \eta _{c}\eta _{c}$ are kinematically allowed decay
modes of $\mathcal{M}_1$. The molecule $\mathcal{M}_2$ decays to $J/\psi
J/\psi$, $J/\psi \psi^{\prime}$, $\eta _{c}\eta _{c}$, $\eta _{c}\eta
_{c}(2S)$, $\eta _{c}\chi _{c1}(1P)$, and $\chi_{c0} \chi_{c0}$ mesons. The
partial widths of all these processes are evaluated by means of the three-point
sum rule calculations, which are necessary to extract the strong couplings
$g_i$ at vertices $\mathcal{M}_1J/\psi J/\psi $, $\mathcal{M }_1\eta _{c}\eta
_{c}$, and others. Our estimates for the full widths of the molecules
$\Gamma_{\mathcal{M}_1}=(320 \pm 72)~\mathrm{MeV}$ and $\Gamma _{
\mathcal{M}_2}=(138 \pm 18)~\mathrm{MeV}$, as well as their masses are compared
with parameters of the $X$ resonances discovered by the LHCb-ATLAS-CMS
Collaborations in the di-$J/\psi$ and $J/\psi\psi^{\prime}$ invariant mass
distributions. We argue that the molecule $\mathcal{M}_1$ can be considered a
viable candidate for the resonance $X(6200)$. The structure $ \mathcal{M}_2$ may
be interpreted as $X(6900)$ or one of its components in combination with a
scalar tetraquark.
|
http://arxiv.org/abs/2305.03696v2
|
The evil twin attack is a major security threat to WLANs. An evil twin is a
rogue AP installed by a malicious user to impersonate legitimate APs. It
intends to attract victims in order to intercept their credentials, to steal
their sensitive information, to eavesdrop on their data, etc. In this paper, we
study the security mechanisms of wireless networks and we introduce the
different authentication methods, including 802.1X authentication. We show that
802.1X has improved security through the use of digital certificates but does
not define any practical technique for the user to check the network
certificate. Therefore, it remains vulnerable to the evil twin attack. To
repair this vulnerability, we introduce Robust Certificate Management System
(RCMS) which takes advantage of the digital certificates of 802.1X to protect
the users against rogue APs. RCMS defines a new verification code to allow the
user device to check the network certificate. This practical verification
combined with the reliability of digital certificates provides a perfect
protection against rogue APs. RCMS requires a small software update on the user
terminal and does not need any modification of IEEE 802.11. It has a
significant flexibility since trusting a single AP is enough to trust all the
APs of the extended network. This allows the administrators to extend their
networks easily without the need to update any database of trusted APs on the
user devices.
|
http://arxiv.org/abs/2302.00338v1
|
We investigate the Brusselator system with diffusion and Dirichlet boundary
conditions on a one-dimensional spatial interval. Our proof demonstrates that, for
certain parameter values, a periodic orbit exists. This proof is
computer-assisted and rooted in the rigorous integration of partial
differential equations. Additionally, we present evidence of the occurrence
of a period-doubling bifurcation.
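For readers unfamiliar with the model, the Brusselator with diffusion in one space dimension is commonly written as follows (the parameter values, diffusion constants, and Dirichlet boundary data used in the paper are not restated here):
\begin{align*}
u_t &= d_1\,u_{xx} + A - (B+1)\,u + u^{2}v,\\
v_t &= d_2\,v_{xx} + B\,u - u^{2}v, \qquad x \in (0,L),\ t>0,
\end{align*}
with prescribed Dirichlet values of $u$ and $v$ at $x=0$ and $x=L$.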
|
http://arxiv.org/abs/2303.03518v2
|
Matching cross-modality features between images and point clouds is a
fundamental problem for image-to-point cloud registration. However, due to the
modality difference between images and points, it is difficult to learn robust
and discriminative cross-modality features by existing metric learning methods
for feature matching. Instead of applying metric learning on cross-modality
data, we propose to unify the modality between images and point clouds by
pretrained large-scale models first, and then establish robust correspondence
within the same modality. We show that the intermediate features, called
diffusion features, extracted by depth-to-image diffusion models are
semantically consistent between images and point clouds, which enables the
building of coarse but robust cross-modality correspondences. We further
extract geometric features on depth maps produced by the monocular depth
estimator. By matching such geometric features, we significantly improve the
accuracy of the coarse correspondences produced by diffusion features.
Extensive experiments demonstrate that without any task-specific training,
direct utilization of both features produces accurate image-to-point cloud
registration. On three public indoor and outdoor benchmarks, the proposed
method achieves, on average, a 20.6 percent improvement in Inlier Ratio, a
three-fold higher Inlier Number, and a 48.6 percent improvement in Registration
Recall over existing state-of-the-art methods.
|
http://arxiv.org/abs/2310.03420v2
|
Dilated Convolution with Learnable Spacings (DCLS) is a recently proposed
variation of the dilated convolution in which the spacings between the non-zero
elements in the kernel, or equivalently their positions, are learnable.
Non-integer positions are handled via interpolation. Thanks to this trick,
positions have well-defined gradients. The original DCLS used bilinear
interpolation, and thus only considered the four nearest pixels. Yet here we
show that longer range interpolations, and in particular a Gaussian
interpolation, allow improving performance on ImageNet1k classification on two
state-of-the-art convolutional architectures (ConvNeXt and ConvFormer),
without increasing the number of parameters. The method code is based on
PyTorch and is available at
https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch
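As an illustration of the idea (a hedged sketch, not the repository's implementation), a Gaussian interpolation spreads each learnable kernel position over the integer grid with a smooth profile rather than the 2x2 neighbourhood used by bilinear interpolation:

```python
import torch

def gaussian_interpolation_weights(pos, grid_size, sigma=1.0):
    """Spread a unit weight at a real-valued 2D position over an integer kernel
    grid with a normalized Gaussian profile; weights are differentiable in pos."""
    ys = torch.arange(grid_size, dtype=pos.dtype)
    xs = torch.arange(grid_size, dtype=pos.dtype)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    d2 = (gy - pos[0]) ** 2 + (gx - pos[1]) ** 2
    w = torch.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()

# pos = torch.tensor([2.3, 0.7], requires_grad=True)
# w = gaussian_interpolation_weights(pos, grid_size=7)   # 7x7 weight map
```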
|
http://arxiv.org/abs/2306.00817v2
|
We consider a parametric nonautonomous $(p, q)$-equation with unbalanced
growth as follows
\begin{align*}
\left\{ \begin{aligned} &-\Delta_p^\alpha u(z)-\Delta_q u(z)=\lambda \vert
u(z)\vert^{\tau-2}u(z)+f(z, u(z)), \quad \quad \hbox{in }\Omega,\\
&u|_{\partial \Omega}=0, \end{aligned} \right. \end{align*} where $\Omega
\subseteq \mathbb{R}^N$ is a bounded domain with Lipschitz boundary
$\partial\Omega$, $\alpha \in L^{\infty}(\Omega)\backslash \{0\}$, $\alpha(z)\geq 0$
for a.e. $z \in \Omega$, $1<\tau<q<p<N$ and $\lambda>0$. In the reaction
there is a parametric concave term and a perturbation $f(z, x)$. Under the
minimal conditions on $f(z, 0)$, which essentially restrict its growth near
zero, by employing variational tools, truncation and comparison techniques, as
well as critical groups, we prove that for all small values of the parameter
$\lambda>0$, the problem has at least three nontrivial bounded solutions
(positive, negative, nodal), which are ordered and asymptotically vanish as
$\lambda \rightarrow 0^{+}$.
|
http://arxiv.org/abs/2309.01354v1
|
A large variety of real-world Reinforcement Learning (RL) tasks is
characterized by a complex and heterogeneous structure that makes end-to-end
(or flat) approaches hardly applicable or even infeasible. Hierarchical
Reinforcement Learning (HRL) provides general solutions to address these
problems thanks to a convenient multi-level decomposition of the tasks, making
their solution accessible. Although such approaches are often used in practice,
few works provide theoretical guarantees to justify their effectiveness. Thus, it is not yet
clear when to prefer such approaches compared to standard flat ones. In this
work, we provide an option-dependent upper bound to the regret suffered by
regret minimization algorithms in finite-horizon problems. We illustrate that
the performance improvement derives from the planning horizon reduction induced
by the temporal abstraction enforced by the hierarchical structure. Then,
focusing on a sub-setting of HRL approaches, the options framework, we
highlight how the average duration of the available options affects the
planning horizon and, consequently, the regret itself. Finally, we relax the
assumption of having pre-trained options to show how in particular situations,
learning hierarchically from scratch could be preferable to using a standard
approach.
|
http://arxiv.org/abs/2305.06936v1
|
We show that if $\lbrace \varphi_i\rbrace_{i\in \Gamma}$ and $\lbrace
\psi_j\rbrace_{j\in\Lambda}$ are self-affine iterated function systems on the
plane that satisfy strong separation, domination and irreducibility, then for
any associated self-affine measures $\mu$ and $\nu$, the inequality $$\dim_{\rm
H}(\mu*\nu) < \min \lbrace 2, \dim_{\rm H} \mu + \dim_{\rm H} \nu \rbrace$$
implies that there is algebraic resonance between the eigenvalues of the linear
parts of $\varphi_i$ and $\psi_j$. This extends to planar non-conformal setting
the existing analogous results for self-conformal measures on the line.
|
http://arxiv.org/abs/2302.05240v4
|
Computing the 4D Euclidean path integral to one-loop order we find the large
quantum corrections that govern the behavior of a spherically symmetric
non-supersymmetric near-extremal black hole at very low temperature. These
corrections appear from the near-horizon geometry of the near-extremal black
hole. Using first-order perturbation theory we find that such corrections arise
from the zero modes of the extremal background. In the logarithm of the
partition function, these correspond to terms involving logarithm of
temperature. Part of our result matches with the existing one in literature
derived from an effective Schwarzian theory.
|
http://arxiv.org/abs/2303.12415v1
|
Brain tumors are a complex and potentially life-threatening medical condition
that requires accurate diagnosis and timely treatment. In this paper, we
present a machine learning-based system designed to assist healthcare
professionals in the classification and diagnosis of brain tumors using MRI
images. Our system provides a secure login, where doctors can upload or take a
photo of an MRI and our app can classify the image and segment the tumor,
providing the doctor with a folder of each patient's history, name, and
results. Our system can also add results or MRI scans to this folder, annotate the
MRI before sending it to another doctor, and save important results to a dedicated
page in the app. Furthermore, our system can classify in less than 1 second and allows
doctors to chat with a community of brain tumor doctors.
To achieve these objectives, our system uses a state-of-the-art machine
learning algorithm that has been trained on a large dataset of MRI images. The
algorithm can accurately classify different types of brain tumors and provide
doctors with detailed information on the size, location, and severity of the
tumor. Additionally, our system has several features to ensure its security and
privacy, including secure login and data encryption.
We evaluated our system using a dataset of real-world MRI images and compared
its performance to other existing systems. Our results demonstrate that our
system is highly accurate, efficient, and easy to use. We believe that our
system has the potential to revolutionize the field of brain tumor diagnosis
and treatment and provide healthcare professionals with a powerful tool for
improving patient outcomes.
|
http://arxiv.org/abs/2304.07901v1
|
In this paper, we define the KW cell system on a graph $\Gamma$, depending on
parameters $N\in \mathbb{N}$, $q$ a root of unity, and $\omega$ an $N$-th root
of unity. This is a polynomial system of equations depending on $\Gamma$ and
the parameters. Using the graph planar algebra embedding theorem, we prove that
when $q = e^{2\pi i \frac{1}{2(N+k)}}$, solutions to the KW cell system on
$\Gamma$ classify module categories over
$\overline{\mathrm{Rep}(U_q(sl_N))^\omega}$ whose action graph for the object
$\Lambda_1$ is $\Gamma$. The KW cell system is a generalisation of the
Etingof-Ostrik and the De Commer-Yamashita classifying data for
$\overline{\mathrm{Rep}(U_q(sl_2))}$ module categories, and Ocneanu's cell
calculus for $\overline{\mathrm{Rep}(U_q(sl_3))}$ module categories.
To demonstrate the effectiveness of this cell calculus, we solve the KW cell
systems corresponding to the exceptional module categories over
$\overline{\mathrm{Rep}(U_q(sl_4))}$ when $q= e^{2\pi i \frac{1}{2(4+k)}}$, as
well as for all three infinite families of charge conjugation modules. Building
on the work of the second author, this explicitly constructs and classifies all
irreducible module categories over $\mathcal{C}(sl_4, k)$ for all $k\in
\mathbb{N}$. These results prove claims made by Ocneanu on the quantum
subgroups of $SU(4)$. We also construct exceptional module categories over
$\overline{\mathrm{Rep}(U_q(sl_4))^\omega}$ where $\omega\in \{-1, i, -i\}$.
Two of these module categories have no analogue when $\omega=1$.
The main technical contributions of this paper are a proof of the graph
planar algebra embedding theorem for oriented planar algebras, and a refinement
of Kazhdan and Wenzl's skein theory presentation of the category
$\overline{\mathrm{Rep}(U_q(sl_N))^\omega}$. We also explicitly describe the
subfactors coming from a solution to a KW cell system.
|
http://arxiv.org/abs/2301.13172v2
|
The Lott-Sturm-Villani curvature-dimension condition $\mathsf{CD}(K,N)$
provides a synthetic notion for a metric measure space to have curvature
bounded from below by $K$ and dimension bounded from above by $N$. It has been
recently proved that this condition does not hold in sub-Riemannian geometry
for every choice of the parameters $K$ and $N$. In this paper, we extend this
result to the context of sub-Finsler geometry, showing that the $\mathsf{CD}(K,N)$
condition is not well-suited to characterize curvature in this setting.
Firstly, we show that this condition fails in (strict) sub-Finsler manifolds
equipped with a smooth strongly convex norm and with a positive smooth measure.
Secondly, we focus on the sub-Finsler Heisenberg group, proving that
curvature-dimension bounds cannot hold even when the reference norm is less
regular, in particular when it is of class $C^{1,1}$. The strategy for proving
these results is a non-trivial adaptation of the work of Juillet [Rev. Mat.
Iberoam., 37(1):177-188, 2021], and it requires the introduction of new tools
and ideas of independent interest. Finally, we demonstrate the failure of the
(weaker) measure contraction property $\mathsf{MCP}(K,N)$ in the sub-Finsler
Heisenberg group, equipped with a singular strictly convex norm and with a
positive smooth measure. This result contrasts with what happens in the sub-Riemannian
Heisenberg group, which instead satisfies $\mathsf{MCP}(0,5)$.
|
http://arxiv.org/abs/2307.01820v2
|
We present a design methodology that enables the semi-automatic generation of
hardware-accelerated graph building architectures for locally constrained
graphs based on formally described detector definitions. In addition, we define
a similarity measure in order to compare our locally constrained graph building
approaches with commonly used k-nearest neighbour building approaches. To
demonstrate the feasibility of our solution for particle physics applications,
we implemented a real-time graph building approach in a case study for the
Belle~II central drift chamber using Field-Programmable Gate Arrays~(FPGAs).
Our presented solution adheres to all throughput and latency constraints
currently present in the hardware-based trigger of the Belle~II experiment. We
achieve constant time complexity at the expense of linear space complexity and
thus prove that our automated methodology generates online graph building
designs suitable for a wide range of particle physics applications. By enabling
hardware-accelerated pre-processing of graphs, we enable the deployment of
novel Graph Neural Networks~(GNNs) in first level triggers of particle physics
experiments.
|
http://arxiv.org/abs/2307.07289v2
|
Several types of energetic supernovae, such as superluminous supernovae
(SLSNe) and broad-line Ic supernovae (Ic-BL SNe), could be powered by the
spin-down of a rapidly rotating magnetar. Currently, most models used to infer
the parameters for potential magnetar-driven supernovae make several unsuitable
assumptions that likely bias the estimated parameters. In this work, we present
a new model for magnetar-driven supernovae that relaxes several of these
assumptions and an inference workflow that enables accurate estimation of
parameters from lightcurves of magnetar-driven supernovae. In particular, in
this model, we include the dynamical evolution of the ejecta, coupling it to
the energy injected by the magnetar itself while also allowing for non-dipole
spin down. We show that the model can reproduce SLSN and Ic-BL SN light curves
consistent with the parameter space from computationally expensive numerical
models. We also show the results of parameter inference on four well-known
example supernovae, demonstrating the model's effectiveness at capturing the
considerable diversity in magnetar-driven supernova lightcurves. The model fits
each light curve well and recovers parameters broadly consistent with previous
works. This model will allow us to explore the full diversity of
magnetar-driven supernovae under one theoretical framework, more accurately
characterize these supernovae from only photometric data, and make more
accurate predictions of future multiwavelength emission to test the
magnetar-driven scenario better.
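For context, the spin-down luminosity injected into the ejecta in such models is commonly parametrized with a general braking index $n$, of which magnetic dipole spin-down is the special case $n=3$ (this is the standard parametrization, not necessarily the paper's exact one):
\begin{align*}
L_{\rm sd}(t) = L_0\left(1+\frac{t}{t_{\rm sd}}\right)^{\frac{1+n}{1-n}},
\qquad n=3 \;\Rightarrow\; L_{\rm sd}(t)=\frac{L_0}{\left(1+t/t_{\rm sd}\right)^{2}},
\end{align*}
where $L_0$ is the initial spin-down luminosity and $t_{\rm sd}$ the spin-down timescale.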
|
http://arxiv.org/abs/2308.12997v2
|
With software systems permeating our lives, we are entitled to expect that
such systems are secure by design, and that such security endures throughout
the use of these systems and their subsequent evolution. Although adaptive
security systems have been proposed to continuously protect assets from harm,
they can only mitigate threats arising from changes foreseen at design time. In
this paper, we propose the notion of Sustainable Adaptive Security (SAS) which
reflects such enduring protection by augmenting adaptive security systems with
the capability of mitigating newly discovered threats. To achieve this
objective, a SAS system should be designed by combining automation (e.g., to
discover and mitigate security threats) and human intervention (e.g., to
resolve uncertainties during threat discovery and mitigation). In this paper,
we use a smart home example to showcase how we can engineer the activities of
the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems
satisfying sustainable adaptive security. We suggest that using anomaly
detection together with abductive reasoning can help discover new threats and
guide the evolution of security requirements and controls. We also exemplify
situations when humans can be involved in the execution of the activities of
the MAPE loop and discuss the requirements to engineer human interventions.
|
http://arxiv.org/abs/2306.04481v1
|
We propose principled Gaussian processes (GPs) for modeling functions defined
over the edge set of a simplicial 2-complex, a structure similar to a graph in
which edges may form triangular faces. This approach is intended for learning
flow-type data on networks where edge flows can be characterized by the
discrete divergence and curl. Drawing upon the Hodge decomposition, we first
develop classes of divergence-free and curl-free edge GPs, suitable for various
applications. We then combine them to create \emph{Hodge-compositional edge
GPs} that are expressive enough to represent any edge function. These GPs
facilitate direct and independent learning for the different Hodge components
of edge functions, enabling us to capture their relevance during hyperparameter
optimization. To highlight their practical potential, we apply them for flow
data inference in currency exchange, ocean currents and water supply networks,
comparing them to alternative models.
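For reference, the Hodge decomposition underlying the construction, written for edge functions on a simplicial 2-complex with node-to-edge incidence matrix $B_1$ and edge-to-triangle incidence matrix $B_2$ (notation ours):
\begin{align*}
\mathbb{R}^{E} = \operatorname{im}\big(B_1^{\top}\big)\;\oplus\;\operatorname{im}\big(B_2\big)\;\oplus\;\ker\big(L_1\big),
\qquad L_1 = B_1^{\top}B_1 + B_2 B_2^{\top},
\end{align*}
where the three summands are the gradient (curl-free), curl (divergence-free), and harmonic components of an edge flow, respectively.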
|
http://arxiv.org/abs/2310.19450v3
|
Based on a formalism introduced in our previous work, we reconstruct the
phenomenological function $G_{\rm eff}(z)$ describing deviations from General
Relativity (GR) in a model-independent manner. In this alternative approach, we
model $\mu\equiv G_\mathrm{eff}/G$ as a Gaussian process and use forecasted
growth-rate measurements from a stage-IV survey to reconstruct its shape for
two different toy models. We follow a two-step procedure: (i) we first
reconstruct the background expansion history from Supernovae (SNe) and Baryon
Acoustic Oscillation (BAO) measurements; (ii) we then use it to obtain the
growth history $f\sigma_8$, that we fit to redshift-space distortions (RSD)
measurements to reconstruct $G_\mathrm{eff}$. We find that upcoming surveys
such as the Dark Energy Spectroscopic Instrument (DESI) might be capable of
detecting deviations from GR, provided the dark energy behavior is accurately
determined. We might even be able to constrain the transition redshift from
$G\to G_\mathrm{eff}$ for some particular models. We further assess the impact
of massive neutrinos on the reconstructions of $G_\mathrm{eff}$ (or $\mu$)
assuming the expansion history is given, and only the neutrino mass is free to
vary. Given the tight constraints on the neutrino mass, and for the profiles we
considered in this work, we recover numerically that the effect of such massive
neutrinos does not alter our conclusions. Finally, we stress that incorrectly
assuming a $\Lambda$CDM expansion history leads to a degraded reconstruction of
$\mu$, and/or a non-negligible bias in the
($\Omega_\mathrm{m,0}$,$\sigma_{8,0}$)-plane.
|
http://arxiv.org/abs/2301.00640v3
|
We introduce a novel spiking neural network model for learning distributed
internal representations from data in an unsupervised procedure. We achieved
this by transforming the non-spiking feedforward Bayesian Confidence
Propagation Neural Network (BCPNN) model, employing an online correlation-based
Hebbian-Bayesian learning and rewiring mechanism, shown previously to perform
representation learning, into a spiking neural network with Poisson statistics
and low firing rate comparable to in vivo cortical pyramidal neurons. We
evaluated the representations learned by our spiking model using a linear
classifier and show performance close to the non-spiking BCPNN, and competitive
with other Hebbian-based spiking networks when trained on MNIST and F-MNIST
machine learning benchmarks.
|
http://arxiv.org/abs/2305.03866v2
|
While large language models demonstrate remarkable capabilities, they often
present challenges in terms of safety, alignment with human values, and
stability during training. Here, we focus on two prevalent methods used to
align these models, Supervised Fine-Tuning (SFT) and Reinforcement Learning
from Human Feedback (RLHF). SFT is simple and robust, powering a host of
open-source models, while RLHF is a more sophisticated method used in top-tier
models like ChatGPT but also suffers from instability and susceptibility to
reward hacking. We propose a novel approach, Supervised Iterative Learning from
Human Feedback (SuperHF), which seeks to leverage the strengths of both
methods. Our hypothesis is two-fold: that the reward model used in RLHF is
critical for efficient data use and model generalization and that the use of
Proximal Policy Optimization (PPO) in RLHF may not be necessary and could
contribute to instability issues. SuperHF replaces PPO with a simple supervised
loss and a Kullback-Leibler (KL) divergence prior. It creates its own training
data by repeatedly sampling a batch of model outputs and filtering them through
the reward model in an online learning regime. We then break down the reward
optimization problem into three components: robustly optimizing the training
rewards themselves, preventing reward hacking (exploitation of the reward model
that degrades model performance), as measured by a novel METEOR similarity
metric, and maintaining good performance on downstream evaluations. Our
experimental results show SuperHF exceeds PPO-based RLHF on the training
objective, easily and favorably trades off high reward with low reward hacking,
improves downstream calibration, and performs the same on our GPT-4 based
qualitative evaluation scheme, all while being significantly simpler to
implement, highlighting SuperHF's potential as a competitive language model
alignment technique.
|
http://arxiv.org/abs/2310.16763v1
|
Central limit theorems (CLTs) have a long history in probability and
statistics. They play a fundamental role in constructing valid statistical
inference procedures. Over the last century, various techniques have been
developed in probability and statistics to prove CLTs under a variety of
assumptions on random variables. Quantitative versions of CLTs (e.g.,
Berry--Esseen bounds) have also been parallelly developed. In this article, we
propose to use approximation theory from functional analysis to derive explicit
bounds on the difference between expectations of functions.
|
http://arxiv.org/abs/2306.05947v2
|
Medical data often exhibits long-tail distributions with heavy class
imbalance, which naturally leads to difficulty in classifying the minority
classes (i.e., boundary regions or rare objects). Recent work has significantly
improved semi-supervised medical image segmentation in long-tailed scenarios by
equipping them with unsupervised contrastive criteria. However, it remains
unclear how well they will perform in the labeled portion of data where class
distribution is also highly imbalanced. In this work, we present ACTION++, an
improved contrastive learning framework with adaptive anatomical contrast for
semi-supervised medical segmentation. Specifically, we propose an adaptive
supervised contrastive loss, where we first compute the optimal locations of
class centers uniformly distributed on the embedding space (i.e., off-line),
and then perform online contrastive matching training by encouraging different
class features to adaptively match these distinct and uniformly distributed
class centers. Moreover, we argue that blindly adopting a constant temperature
$\tau$ in the contrastive loss on long-tailed medical data is not optimal, and
propose to use a dynamic $\tau$ via a simple cosine schedule to yield better
separation between majority and minority classes. Empirically, we evaluate
ACTION++ on ACDC and LA benchmarks and show that it achieves state-of-the-art
across two semi-supervised settings. Theoretically, we analyze the performance
of adaptive anatomical contrast and confirm its superiority in label
efficiency.
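A minimal sketch of a cosine temperature schedule of the kind described above; the endpoint values and schedule length are assumptions for illustration, not the paper's settings.

```python
import math

def cosine_tau(step, total_steps, tau_min=0.07, tau_max=0.5):
    """Dynamic contrastive temperature decayed from tau_max to tau_min."""
    progress = min(step / max(total_steps, 1), 1.0)
    return tau_min + 0.5 * (tau_max - tau_min) * (1.0 + math.cos(math.pi * progress))
```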
|
http://arxiv.org/abs/2304.02689v3
|
The study focuses on the impact of microlensing in modern cosmology and
introduces a new framework for the static spherically symmetric wormhole in
terms of the radial equation of state. Following a standard procedure, the
study calculates the lensing equation, magnification, and event rate based on
the radial equation of state. The analysis highlights that the image
problem of the light source is complex. Furthermore, the study suggests that
larger values for the throat radius of the wormhole and the radial equation of
state lead to higher event rates. Additionally, it is proposed that the event
rate of a wormhole will be larger compared to that of a black hole, provided
their masses and distances from the light source and observer are comparable.
This study offers the potential to distinguish between a wormhole and a black
hole under similar conditions.
|
http://arxiv.org/abs/2303.11134v5
|
The identification of the main sulfur reservoir on its way from the diffuse
interstellar medium to the cold dense star-forming cores and eventually to
protostars is a long-standing problem. Despite sulfur's astrochemical
relevance, the abundance of S-bearing molecules in dense cores and regions
around protostars is still insufficiently constrained. The goal of this
investigation is to derive the gas-phase H$_2$S/OCS ratio for several low-mass
protostars, which could provide crucial information about the physical and
chemical conditions in the birth cloud of Sun-like stars. Using ALMA ACA Band 6
observations, H$_2$S, OCS, and their isotopologs are searched for in 10 Class
0/I protostars with different source properties such as age, mass, and
environmental conditions. An LTE model is used to fit synthetic spectra to the
detected lines and to derive the column densities based solely on optically
thin lines. The H$_2$S and OCS column densities span four orders of magnitude
across the sample. The H$_2$S/OCS ratio is found to be in the range from 0.2 to
above 9.7. IRAS 16293-2422 A and Ser-SMM3 have the lowest ratio, while
BHR71-IRS1 has the highest. Only the H$_2$S/OCS ratio of BHR71-IRS1 agrees
within uncertainties with the ratio in comet 67P/C$-$G. The determined
gas-phase H$_2$S/OCS ratios can be below the upper limits on the solid-state
ratios by as much as an order of magnitude. The H$_2$S/OCS ratio depends
significantly on the environment of the birth cloud, such as UV-irradiation and
heating received prior to the formation of a protostar. The highly isolated
birth environment of BHR71-IRS1 is hypothesized to be the reason for its high
gaseous H$_2$S/OCS ratio due to lower rates of photoreactions and more
efficient hydrogenation reactions under such dark, cold conditions. The gaseous
inventory of S-bearing molecules in BHR71-IRS1 appears to be most similar to
that of interstellar ices.
|
http://arxiv.org/abs/2302.09452v1
|
Recently, helicity-dependent photocurrent was reported in Bi single thin
films. It is proposed that the origin of this photocurrent is the combination of
photo-spin conversion and spin-charge conversion effects in Bi and efficient
spin conversion in Bi is expected. In this study, we measured two types of
terahertz (THz) emissions from Bi/Co bilayer films induced by spin current
generation using laser-induced demagnetization of the Co layer and photo-spin
conversion effect in the Bi layer to investigate the spin current induced by
the two mechanisms simultaneously. We clearly observed different Bi thickness
dependence of peak intensity and that of bandwidth for THz spin current in two
experiments, i.e., spin current induced by demagnetization of Co and that by
photo-spin conversion in Bi. The different Bi thickness dependence of spin
current intensity and bandwidth in two experiments is caused by different spin
relaxation properties of optically excited spin currents in Bi layers.
|
http://arxiv.org/abs/2301.06231v2
|
We introduce the notion of a weighted inversion statistic on the symmetric
group, and examine its distribution on each conjugacy class. Our work
generalizes the study of several common permutation statistics, including the
number of inversions, the number of descents, the major index, and the number
of excedances. As a consequence, we obtain explicit formulas for the first
moments of several statistics by conjugacy class. We also show that when the
cycle lengths are sufficiently large, the higher moments of arbitrary
permutation statistics are independent of the conjugacy class. Fulman (J. Comb.
Theory Ser. A., 1998) previously established this result for major index and
descents. We obtain these results, in part, by generalizing the techniques of
Fulman (ibid.), and introducing the notion of permutation constraints. For
permutation statistics that can be realized via symmetric constraints, we show
that each moment is a polynomial in the degree of the symmetric group.
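For reference, the classical statistics generalized by this framework can be computed directly on a permutation given in one-line notation; a small sketch:

```python
def inversions(p):
    """Number of pairs i < j with p[i] > p[j]."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def descents(p):
    """Positions i with p[i] > p[i+1] (0-indexed)."""
    return [i for i in range(len(p) - 1) if p[i] > p[i + 1]]

def major_index(p):
    """Sum of descent positions, 1-indexed as is conventional."""
    return sum(i + 1 for i in descents(p))

def excedances(p):
    """Number of positions i with p(i) > i, treating p as a map on 1..n."""
    return sum(p[i] > i + 1 for i in range(len(p)))

# p = (3, 1, 4, 2): inversions=3, descents=[0, 2], major_index=4, excedances=2
```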
|
http://arxiv.org/abs/2301.00898v2
|
The so-called `impossibly early galaxy' problem, first identified via the
Hubble Space Telescope's observation of galaxies at redshifts z > 10, appears
to have been exacerbated by the more recent James Webb Space Telescope (JWST)
discovery of galaxy candidates at even higher redshifts (z ~ 17) which,
however, are yet to be confirmed spectroscopically. These candidates would have
emerged only ~ 230 million years after the big bang in the context of LCDM,
requiring a more rapid star formation in the earliest galaxies than appears to
be permitted by simulations adopting the concordance model parameters. This
time-compression problem would therefore be inconsistent with the age-redshift
relation predicted by LCDM. Instead, the sequence of star formation and galaxy
assembly would confirm the timeline predicted by the R_h=ct universe, a
theoretically advanced version of LCDM that incorporates the `zero active mass'
condition from general relativity. This model has accounted for many
cosmological observations better than LCDM, and eliminates all of its inconsistencies,
including the horizon and initial entropy problems. The latest JWST discoveries
at z > 14, if confirmed, would add further support to the idea that the R_h=ct
universe is favored by the observations over the current standard model.
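For concreteness, the age-redshift relations being contrasted are, in our notation (the LCDM expression is the matter-dominated approximation valid at $z \gg 1$):
$$ t_{R_{\rm h}=ct}(z) = \frac{1}{H_0\,(1+z)}, \qquad t_{\Lambda\rm CDM}(z) \approx \frac{2}{3H_0\sqrt{\Omega_{\rm m}}}\,(1+z)^{-3/2}, $$
so a galaxy observed at a given high redshift has had substantially more time to assemble in the $R_h=ct$ timeline.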
|
http://arxiv.org/abs/2302.10103v1
|
A class of occupancy models for detection/non-detection data is proposed to
relax the closure assumption of N$-$mixture models. We introduce a community
parameter $c$, ranging from $0$ to $1$, which characterizes a certain portion
of individuals being fixed across multiple visits. As a result, when $c$ equals
$1$, the model reduces to the N$-$mixture model; this reduced model is shown to
overestimate abundance when the closure assumption is not fully satisfied.
Additionally, by including a zero-inflated component, the proposed model can
bridge the standard occupancy model ($c=0$) and the zero-inflated N$-$mixture
model ($c=1$). We then study the behavior of the estimators for the two extreme
models as $c$ varies from $0$ to $1$. An interesting finding is that the
zero-inflated N$-$mixture model can consistently estimate the zero-inflated
probability (occupancy) as $c$ approaches $0$, but the estimate can be positively
biased, negatively biased, or unbiased when $c>0$, depending on other parameters. We also
demonstrate these results through simulation studies and data analysis.
|
http://arxiv.org/abs/2304.02851v1
|
TextDescriptives is a Python package for calculating a large variety of
metrics from text. It is built on top of spaCy and can be easily integrated
into existing workflows. The package has already been used for analysing the
linguistic stability of clinical texts, creating features for predicting
neuropsychiatric conditions, and analysing linguistic goals of primary school
students. This paper describes the package and its features.
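A brief usage sketch following the package's documented spaCy-pipeline pattern; component names vary between versions, so treat the specific strings below as illustrative assumptions.

```python
import spacy
import textdescriptives as td

nlp = spacy.load("en_core_web_sm")
# add one of the TextDescriptives pipeline components (name assumed here)
nlp.add_pipe("textdescriptives/descriptive_stats")
doc = nlp("TextDescriptives computes a large variety of metrics from text.")
df = td.extract_df(doc)  # metrics returned as a pandas DataFrame
```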
|
http://arxiv.org/abs/2301.02057v3
|
The Knowledge Base Question Answering (KBQA) task aims to answer natural
language questions based on a given knowledge base. Recently, Large Language
Models (LLMs) have shown strong capabilities in language understanding and can
be used to solve this task. In doing so, a major challenge for LLMs is to
overcome the immensity and heterogeneity of knowledge base schemas. Existing
methods bypass this challenge by initially employing LLMs to generate drafts of
logic forms without schema-specific details. Then, an extra module is used to
inject schema information into these drafts. In contrast, in this paper, we
propose a simple In-Context Schema Understanding (ICSU) method that enables
LLMs to directly understand schemas by leveraging in-context learning.
Specifically, ICSU provides schema information to LLMs using schema-related
annotated examples. We investigate three example retrieval strategies based on
raw questions, anonymized questions, and generated SPARQL queries. Experimental
results show that ICSU demonstrates competitive performance compared to
baseline methods on both the KQA Pro and WebQSP datasets.
|
http://arxiv.org/abs/2310.14174v2
|
Metal halide perovskites have shown great performance as solar energy
materials, but their outstanding optoelectronic properties are paired with
unusually strong anharmonic effects. It has been proposed that this intriguing
combination of properties derives from the "lone pair" 6$s^2$ electron
configuration of the Pb$^{2+}$ cations, and associated weak pseudo-Jahn-Teller
effect, but the precise impact of this chemical feature remains unclear. Here
we show that in fact an $ns^2$ electron configuration is not a prerequisite for
the strong anharmonicity and low-energy lattice dynamics encountered in this
class of materials. We combine X-ray diffraction, infrared and Raman
spectroscopies, and first-principles molecular dynamics calculations to
directly contrast the lattice dynamics of CsSrBr$_3$ with those of CsPbBr$_3$,
two compounds which bear close structural similarity but with the former
lacking the propensity to form lone pairs on the 5$s^0$ octahedral cation. We
exploit low-frequency diffusive Raman scattering, nominally symmetry-forbidden
in the cubic phase, as a fingerprint to detect anharmonicity and reveal that
low-frequency tilting occurs irrespective of octahedral cation electron
configuration. This work highlights the key role of structure in perovskite
lattice dynamics, providing important design rules for the emerging class of
soft perovskite semiconductors for optoelectronic and light-harvesting devices.
|
http://arxiv.org/abs/2310.03408v2
|
The objective of this work is to quantify the reconstruction error in sparse
inverse problems with measures and stochastic noise, motivated by optimal
sensor placement. To be useful in this context, the error quantities must be
explicit in the sensor configuration and robust with respect to the source, yet
relatively easy to compute in practice, compared to a direct evaluation of the
error by a large number of samples. In particular, we consider the
identification of a measure consisting of an unknown linear combination of
point sources from a finite number of measurements contaminated by Gaussian
noise. The statistical framework for recovery relies on two main ingredients:
first, a convex but non-smooth variational Tikhonov point estimator over the
space of Radon measures and, second, a suitable mean-squared error based on its
Hellinger-Kantorovich distance to the ground truth. To quantify the error, we
employ a non-degenerate source condition as well as careful linearization
arguments to derive a computable upper bound. This leads to asymptotically
sharp error estimates in expectation that are explicit in the sensor
configuration. Thus they can be used to estimate the expected reconstruction
error for a given sensor configuration and guide the placement of sensors in
sparse inverse problems.
|
http://arxiv.org/abs/2308.01055v2
|
We consider the boundary value problem $-\Delta_p u_\lambda -\Delta_q
u_\lambda =\lambda g(x) u_\lambda^{-\beta}$ in $\Omega$ , $u_\lambda=0$ on
$\partial \Omega$ with $u_\lambda>0$ in $\Omega.$ We assume $\Omega$ is a
bounded open set in $\mathbb{R}^N$ with smooth boundary, $1<p<q<\infty$,
$\beta\in [0,1),$ $g$ is a positive weight function and $\lambda$ is a positive
parameter. We derive an estimate for $u_\lambda$ which describes its exact
behavior when the parameter $\lambda$ is large. In general, by invoking
appropriate comparison principles, this estimate can be used as a powerful tool
in deducing the existence, non-existence and multiplicity of positive solutions
of nonlinear elliptic boundary value problems. Here, as an application of this
estimate, we obtain a uniqueness result for a nonlinear elliptic boundary value
problem with a singular nonlinearity.
|
http://arxiv.org/abs/2302.04176v1
|
Mass spectra, which are agglomerations of ionized fragments from targeted
molecules, play a crucial role across various fields for the identification of
molecular structures. A prevalent analysis method involves spectral library
searches, where unknown spectra are cross-referenced with a database. The
effectiveness of such search-based approaches, however, is restricted by the
scope of the existing mass spectra database, underscoring the need to expand
the database via mass spectra prediction. In this research, we propose the
Motif-based Mass Spectrum Prediction Network (MoMS-Net), a system that predicts
mass spectra using the information derived from structural motifs and the
implementation of Graph Neural Networks (GNNs). We have tested our model across
diverse mass spectra and have observed its superiority over other existing
models. MoMS-Net considers substructure at the graph level, which facilitates
the incorporation of long-range dependencies while using less memory compared
to the graph transformer model.
|
http://arxiv.org/abs/2306.16085v1
|
Bistable mechanisms are prevalent across a broad spectrum of applications due
to their ability to maintain two distinct stable states. Their energy
consumption is predominantly confined to the process of state transitions,
thereby enhancing their efficiency. However, the transition often requires two
distinct inputs, implicating the requirement of multiple actuators. Here, we
propose an elastic and contactless design strategy for inducing state
transitions in bistable mechanisms, requiring only a single cyclic input. The
strategy leverages internal information, interpreted as system state, as an
extra input to make a weighted decision for transitioning to the subsequent
state. We characterize the behavior using a spring-based rigid-body model,
consisting of a column near bifurcation, combined with a non-linear spring
connected to a bistable element that represents the information state. The
results show that a nonlinear spring with a quadratic stiffness function, i.e.,
representing internal instability, is crucial for regulating state-switching
behavior. We then demonstrate this design strategy by developing a monolithic
and compliant design embodiment and experimentally evaluate its behavior.
|
http://arxiv.org/abs/2308.09409v1
|
The inference of Large language models (LLMs) requires immense computation
and memory resources. To curtail these costs, quantisation has emerged as a
promising solution, but existing LLM quantisation mainly focuses on 8-bit. In
this work, we explore the statistical and learning properties of the LLM layer
and attribute the bottleneck of LLM quantisation to numerical scaling offsets.
To address this, we adapt block quantisations for LLMs, a family of methods
that share scaling factors across packed numbers. Block quantisations
efficiently reduce the numerical scaling offsets solely from an arithmetic
perspective, without additional treatments in the computational path. Our
nearly-lossless quantised 6-bit LLMs achieve a $19\times$ higher arithmetic
density and $5\times$ memory density than the float32 baseline, surpassing the
prior art 8-bit quantisation by $2.5\times$ in arithmetic density and
$1.2\times$ in memory density, without requiring any data calibration or
re-training. We also share our insights into sub-8-bit LLM quantisation,
including the mismatch between activation and weight distributions, optimal
fine-tuning strategies, and a lower quantisation granularity inherent in the
statistical properties of LLMs. The latter two tricks enable nearly-lossless
4-bit LLMs on downstream tasks. Our code is open-sourced.
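As an illustration of the idea of sharing scaling factors across packed numbers, the following is a minimal NumPy sketch of block quantisation; the block size, bit width, and rounding scheme are illustrative choices and not the paper's exact configuration.

import numpy as np

def block_quantise(x, bits=6, block_size=64):
    # Pad to a multiple of the block size, then group numbers into blocks.
    pad = (-x.size) % block_size
    xb = np.pad(x.ravel(), (0, pad)).reshape(-1, block_size)
    qmax = 2 ** (bits - 1) - 1
    # One shared scaling factor per block absorbs the numerical scaling offsets.
    scale = np.abs(xb).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0
    q = np.clip(np.round(xb / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def block_dequantise(q, scale, shape):
    return (q * scale).ravel()[:int(np.prod(shape))].reshape(shape)

w = np.random.randn(1000).astype(np.float32)
q, s = block_quantise(w)
print(np.abs(w - block_dequantise(q, s, w.shape)).max())  # per-block rounding error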
|
http://arxiv.org/abs/2310.05079v2
|
Escherichia coli is one of many bacterial inhabitants found in human
intestines and any adaptation as a result of mutations may affect its host. A
commonly used technique employed to study these mutations is Restriction
Fragment Length Polymorphism (RFLP), followed by the use of a suitable distance
coefficient to quantify genetic differences between two samples. Dice is
considered a suitable distance coefficient in RFLP analyses, while the
suitability of other coefficients remains unstudied. Hence, this study aims to
identify substitutes for Dice. Experimental data were obtained by subculturing
E. coli for 72 passages in 8 different adaptation media, and RFLP profiles were
analyzed using 20 distance coefficients. Our results suggest that the Dennis,
Fossum, Matching, and Russell and Rao coefficients work as well as or better
than Dice. The Dennis, Matching, and Fossum coefficients had the highest
discriminatory abilities but are limited by the lack of upper or lower bounds.
The Russell and Rao coefficient is highly correlated with the Dice coefficient
(r^2 = 0.998) and has both upper and lower bounds, suggesting that it can be
used as a substitute for the Dice coefficient when studying genetic distances
in E. coli.
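For reference, the two coefficients singled out above can be computed, in their standard binary similarity form, from band-presence vectors of two RFLP profiles; the variable names are ours and the study's exact preprocessing may differ.

import numpy as np

def dice(x, y):
    # x, y are binary band-presence vectors (1 = band present).
    a = np.sum((x == 1) & (y == 1))                     # bands shared by both samples
    return 2 * a / (np.sum(x == 1) + np.sum(y == 1))

def russell_rao(x, y):
    a = np.sum((x == 1) & (y == 1))
    return a / len(x)                                   # shared bands over all positions

x = np.array([1, 1, 0, 1, 0, 1])
y = np.array([1, 0, 0, 1, 0, 1])
print(dice(x, y), russell_rao(x, y))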
|
http://arxiv.org/abs/2302.12714v1
|
The $\text{PSL}(4,\mathbb{R})$ Hitchin component of a closed surface group
$\pi_1(S)$ consists of holonomies of properly convex foliated projective
structures on the unit tangent bundle of $S$. We prove that the leaves of the
codimension-$1$ foliation of any such projective structure are all projectively
equivalent if and only if its holonomy is Fuchsian. This implies constraints on
the symmetries and shapes of these leaves.
We also give an application to the topology of the non-${\rm T}_0$ space
$\mathfrak{C}(\mathbb{RP}^n)$ of projective classes of properly convex domains
in $\mathbb{RP}^n$. Namely, Benz\'ecri asked in 1960 if every closed subset of
$\mathfrak{C}(\mathbb{RP}^n)$ that contains no proper nonempty closed subset is
a point. Our results imply a negative resolution for $n \geq 2$.
|
http://arxiv.org/abs/2304.01380v2
|
In this paper, we explore the impact of adding tactile sensation to video
prediction models for physical robot interactions. Predicting the impact of
robotic actions on the environment is a fundamental challenge in robotics.
Current methods leverage visual and robot action data to generate video
predictions over a given time period, which can then be used to adjust robot
actions. However, humans rely on both visual and tactile feedback to develop
and maintain a mental model of their physical surroundings. Motivated by this,
we propose three multi-modal
integration approaches and compare the performance of these tactile-enhanced
video prediction models. Additionally, we introduce two new datasets of robot
pushing that use a magnetic-based tactile sensor for unsupervised learning. The
first dataset contains visually identical objects with different physical
properties, while the second dataset mimics existing robot-pushing datasets of
household object clusters. Our results demonstrate that incorporating tactile
feedback into video prediction models improves scene prediction accuracy and
enhances the agent's perception of physical interactions and understanding of
cause-effect relationships during physical robot interactions.
|
http://arxiv.org/abs/2304.11193v1
|
Learning to Rank (LTR) methods are vital in online economies, affecting users
and item providers. Fairness in LTR models is crucial to allocate exposure
proportionally to item relevance. Widely used deterministic LTR models can lead
to unfair exposure distribution, especially when items with the same relevance
receive slightly different ranking scores. Stochastic LTR models, incorporating
the Plackett-Luce (PL) ranking model, address fairness issues but suffer from
high training cost. In addition, they cannot provide guarantees on utility or
fairness, which can lead to dramatically degraded utility when optimized for
fairness. To overcome these limitations, we propose Inference-time Stochastic
Ranking with Risk Control (ISRR), a novel method that performs stochastic
ranking at inference time with guaranteed utility or fairness, given pretrained
scoring functions from deterministic or stochastic LTR models. Comprehensive
experimental results on three widely adopted datasets demonstrate that our
proposed method achieves utility and fairness comparable to existing stochastic
ranking methods with much lower computational cost. In addition, results verify
that our method provides finite-sample guarantees on utility and fairness. This
advancement represents a significant contribution to the field of stochastic
ranking and fair LTR with promising real-world applications.
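For context on the ranking model involved, here is a minimal sketch (ours, not the proposed ISRR method) of drawing one ranking from the Plackett-Luce model given per-item scores, which is what a stochastic LTR model does at inference time.

import numpy as np

def sample_plackett_luce(scores, rng):
    # Gumbel-max trick: adding i.i.d. Gumbel noise to the scores and sorting
    # yields a permutation distributed according to the Plackett-Luce model
    # with log-weights equal to the scores.
    gumbel = rng.gumbel(size=scores.shape)
    return np.argsort(-(scores + gumbel))

rng = np.random.default_rng(0)
scores = np.array([2.00, 2.01, 0.50])      # two near-tied items and one weaker item
print(sample_plackett_luce(scores, rng))   # best-ranked item first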
|
http://arxiv.org/abs/2306.07188v3
|
We give a necessary and sufficient condition for an inverse sequence $S_0
\leftarrow S_1 \leftarrow \dots$ indexed by natural numbers to have ${\rm
lim}^1S=0$. This condition can be treated as a transfinite version of the
Mittag-Leffler condition. We consider inverse sequences in an arbitrary abelian
category having a generator and satisfying Grothendieck axioms ${\rm (AB3)}$
and ${\rm (AB4^*)}.$ We also show that the class of inverse sequences $S$ such
that ${\rm lim}\: S={\rm lim}^1 S=0$ is the least class of inverse sequences
containing the trivial inverse sequence and closed with respect to small limits
and a certain type of extensions.
|
http://arxiv.org/abs/2310.02716v4
|
Graph algorithms are challenging to implement due to their varying topology
and irregular access patterns. Real-world graphs are dynamic in nature and
routinely undergo edge and vertex additions, as well as deletions. Typical
examples of dynamic graphs are social networks, collaboration networks, and
road networks. Applying static algorithms repeatedly on dynamic graphs is
inefficient. Unfortunately, we know little about how to efficiently process
dynamic graphs on massively parallel architectures such as GPUs. Existing
approaches to represent and process dynamic graphs are either not general or
inefficient. In this work, we propose a library-based framework for dynamic
graph algorithms that provides a GPU-tailored graph representation and exploits
the warp-cooperative execution model. The library, named Meerkat, builds upon a
recently proposed dynamic graph representation on GPUs. This representation
exploits a hashtable-based mechanism to store a vertex's neighborhood. Meerkat
also enables fast iteration through a group of vertices, such as the whole set
of vertices or the neighbors of a vertex. Based on the efficient iterative
patterns encoded in Meerkat, we implement dynamic versions of the popular graph
algorithms such as breadth-first search, single-source shortest paths, triangle
counting, weakly connected components, and PageRank. Compared to the
state-of-the-art dynamic graph analytics framework Hornet, Meerkat is
$12.6\times$, $12.94\times$, and $6.1\times$ faster, for query, insert, and
delete operations, respectively. Using a variety of real-world graphs, we
observe that Meerkat significantly improves the efficiency of the underlying
dynamic graph algorithm. On average, Meerkat performs $1.17\times$ better than
Hornet for BFS, $1.32\times$ for SSSP, $1.74\times$ for PageRank, and
$6.08\times$ for WCC.
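To make the representation concrete, here is a CPU-side Python analogy of a hashtable-based per-vertex neighborhood store; Meerkat's actual data structure is GPU-resident and warp-cooperative, so this is only a sketch of the access pattern, not the library's API.

class DynamicGraph:
    # Each vertex maps to a hash table of its neighbors, so edge insertions and
    # deletions are O(1) on average and iterating over a vertex's neighbors
    # remains straightforward.
    def __init__(self):
        self.adj = {}

    def insert_edge(self, u, v, w=1.0):
        self.adj.setdefault(u, {})[v] = w

    def delete_edge(self, u, v):
        self.adj.get(u, {}).pop(v, None)

    def neighbors(self, u):
        return self.adj.get(u, {}).items()

g = DynamicGraph()
g.insert_edge(0, 1)
g.insert_edge(0, 2)
g.delete_edge(0, 1)
print(list(g.neighbors(0)))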
|
http://arxiv.org/abs/2305.17813v2
|
Cyber attacks deceive machines into believing something that does not exist
in the first place. However, there are some to which even humans fall prey. One
such famous attack that attackers have used over the years to exploit the
vulnerability of vision is the homoglyph attack. It employs a simple yet
effective mechanism to create illegitimate domains that are hard to
differentiate from legitimate ones. Moreover, because the difference is nearly
impossible for a user to notice, users often cannot avoid clicking on these
homoglyph domain names. In many cases, that results in either
information theft or malware attack on their systems. Existing approaches use
simple, string-based comparison techniques applied in primary language-based
tasks. Although they are impactful to some extent, they usually fail because
they are not robust to different types of homoglyphs and are computationally
infeasible, since their running time grows with the string length. Similarly,
neural network-based approaches are employed to determine
real domain strings from fake ones. Nevertheless, the problem with both methods
is that they require paired sequences of real and fake domain strings to work
with, which is often not the case in the real world, as the attacker only sends
the illegitimate or homoglyph domain to the vulnerable user. Therefore,
existing approaches are not suitable for practical scenarios in the real world.
In our work, we created GlyphNet, an image dataset that contains 4M domains,
both real and homoglyphs. Additionally, we introduce a baseline method for a
homoglyph attack detection system using an attention-based convolutional neural
network. We show that our model can reach state-of-the-art accuracy in
detecting homoglyph attacks with a 0.93 AUC on our dataset.
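The image-based formulation lends itself to a simple rendering step; below is a minimal sketch (ours, not GlyphNet's exact pipeline) of turning a domain string into a CNN-ready grayscale array.

from PIL import Image, ImageDraw, ImageFont
import numpy as np

def render_domain(domain, size=(224, 32), font=None):
    # Draw the domain string onto a blank grayscale canvas so a CNN can compare
    # domains visually rather than character-by-character. For non-Latin
    # homoglyphs (e.g. Cyrillic letters), pass a Unicode-capable TrueType font;
    # the built-in default font only covers basic Latin.
    img = Image.new("L", size, color=255)
    draw = ImageDraw.Draw(img)
    draw.text((2, 8), domain, fill=0, font=font or ImageFont.load_default())
    return np.asarray(img, dtype=np.float32) / 255.0

x = render_domain("example.com")
print(x.shape)   # (32, 224), ready to feed into a CNN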
|
http://arxiv.org/abs/2306.10392v1
|
We carry out the calculation of kinematical higher-twist corrections to the
cross section of $\gamma^* \to M_1 M_2 \gamma$ up to twist 4, where $M_i$ is a
scalar or pseudoscalar neutral meson. The three independent helicity amplitudes
are presented in terms of the twist-2 generalized distribution amplitudes
(GDAs), which are important non-perturbative quantities for understanding the
3D structure of hadrons. Since this process can be measured by BESIII in $e^+
e^-$ collisions, we perform the numerical estimate of the kinematical
higher-twist corrections by using the kinematics of BESIII. We adopt the $\pi
\pi$ GDA extracted from Belle measurements and the asymptotic $\pi \pi$ GDA to
study the size of the kinematical corrections in the case of pion meson pair,
and a model $\eta \eta$ GDA is used to see the impact of target mass
corrections $\mathcal O(m^2/s)$ for $\gamma^* \to \eta \eta \gamma$. Our
results show that the kinematical higher-twist corrections account for $\sim
20\%$ of the cross sections at BESIII on average, and it is necessary to
include them if one aims to extract GDAs precisely from experimental
measurements. We also comment on the case of $\pi^0 \eta$ production, which is
important for the search of hybrid mesons.
|
http://arxiv.org/abs/2304.06389v2
|
The aim of the paper is to present a novel class of time-dependent controls
to realize ultra-fast magnetization switching in nanomagnets driven by
spin-torques produced by spin-polarized electric currents. Magnetization
dynamics in such systems is governed by the Landau-Lifshitz-Slonczewski
equation which describes the precessional motion of (dimensionless)
magnetization vector on the unit-sphere. The relevant case of nanoparticles
with uniaxial anisotropy having in-plane easy and intermediate axes and
out-of-plane hard axis is considered. By exploiting the characteristic
smallness of damping and spin-torque intensity, the aforementioned controls are
constructed via suitable perturbative tools in a way to realise approximate
\emph{latitudinal solutions} (i.e. motions on a sphere in which the
out-of-plane magnetization component stays constant) with the effect of rapidly
``switching'' the system from one stationary state to another. The possibility
of keeping a (``small'') bounded value of the out-of-plane coordinate
throughout this ``transfer'' process turns out to be advantageous in
applications, as it considerably reduces the post-switching relaxation
oscillations that may cause the
failure of switching in real samples. Further relevant quantitative results on
the behaviour of the solutions during the pre- and post-switching stages
(termed ``expulsion'' and ``attraction'', respectively), are given as a
byproduct. A selection of validating numerical experiments is presented
alongside the corresponding theoretical results.
|
http://arxiv.org/abs/2310.02070v1
|
In Einstein-Gauss-Bonnet gravity, we study the quasi-normal modes (QNMs) of
the tensor perturbation for the so-called Maeda-Dadhich black hole which
locally has a topology $\mathcal{M}^n \simeq M^4 \times \mathcal{K}^{n-4}$. Our
discussion is based on the tensor perturbation equation derived
in~\cite{Cao:2021sty}, where the Kodama-Ishibashi gauge invariant formalism for
Einstein gravity theory has been generalized to the Einstein-Gauss-Bonnet
gravity theory. With the help of characteristic tensors for the constant
curvature space $\mathcal{K}^{n-4}$, we investigate the effect of extra
dimensions and obtain the scalar equation in four dimensional spacetime, which
is quite different from the Klein-Gordon equation. Using the asymptotic
iteration method and the numerical integration method with the Kumaresan-Tufts
frequency extraction method, we numerically calculate the QNM frequencies. In
our setups, characteristic frequencies depend on six distinct factors. They are
the spacetime dimension $n$, the Gauss-Bonnet coupling constant $\alpha$, the
black hole mass parameter $\mu$, the black hole charge parameter $q$, and two
``quantum numbers'' $l$ and $\gamma$. Without loss of generality, the impact of
each parameter on the characteristic frequencies is investigated while fixing
the other five parameters. Interestingly, the dimension of the
compactification part
has no significant impact on the lifetime of QNMs.
|
http://arxiv.org/abs/2307.06801v2
|
In this paper, we investigate the almost sure convergence, in supremum norm,
of the rank-based linear wavelet estimator for a multivariate copula density.
Based on empirical process tools, we prove a uniform limit law for the
deviation, from its expectation, of an oracle estimator (obtained for known
margins), from which we derive the exact convergence rate of the rank-based
linear estimator. This rate turns out to be optimal in a minimax sense over Besov
balls for the supremum norm loss, whenever the resolution level is suitably
chosen.
|
http://arxiv.org/abs/2303.05627v1
|
Although randomized controlled trials (RCTs) are a cornerstone of comparative
effectiveness, they typically have much smaller sample size than observational
studies because of financial and ethical considerations. Therefore there is
interest in using plentiful historical data (either observational data or prior
trials) to reduce trial sizes. Previous estimators developed for this purpose
rely on unrealistic assumptions, without which the added data can bias the
treatment effect estimate. Recent work proposed an alternative method
(prognostic covariate adjustment) that imposes no additional assumptions and
increases efficiency in trial analyses. The idea is to use historical data to
learn a prognostic model: a regression of the outcome onto the covariates. The
predictions from this model, generated from the RCT subjects' baseline
variables, are then used as a covariate in a linear regression analysis of the
trial data. In this work, we extend prognostic adjustment to trial analyses
with nonparametric efficient estimators, which are more powerful than linear
regression. We provide theory that explains why prognostic adjustment improves
small-sample point estimation and inference without any possibility of bias.
Simulations corroborate the theory: efficient estimators with prognostic
adjustment provide greater power (i.e., smaller standard errors) than those
without when the trial is small. Population shifts between historical and trial
data attenuate benefits but do not introduce bias. We showcase our estimator
using clinical trial data provided by Novo Nordisk A/S that evaluates insulin
therapy for individuals with type II diabetes.
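A minimal sketch of the prognostic covariate adjustment recipe described above, assuming scikit-learn and using a gradient-boosting learner purely for illustration; the final linear step stands in for the nonparametric efficient estimators this work develops.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

def prognostic_adjusted_effect(X_hist, y_hist, X_trial, treat, y_trial):
    # 1) Learn a prognostic model (outcome regressed on covariates) from the
    #    historical data; the choice of learner here is illustrative.
    prog = GradientBoostingRegressor().fit(X_hist, y_hist)
    # 2) Score the trial subjects' baseline covariates with that model.
    m_hat = prog.predict(X_trial)
    # 3) Use the prognostic score as a covariate in the trial analysis
    #    (plain linear adjustment here, not the paper's efficient estimators).
    fit = LinearRegression().fit(np.column_stack([treat, m_hat]), y_trial)
    return fit.coef_[0]    # estimated treatment effect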
|
http://arxiv.org/abs/2305.19180v4
|
Mobile augmented reality (MAR) is widely acknowledged as one of the
ubiquitous interfaces to the digital twin and Metaverse, demanding unparalleled
levels of latency, computational power, and energy efficiency. The existing
solutions for realizing MAR combine multiple technologies like edge, cloud
computing, and fifth-generation (5G) networks. However, the inherent
communication latency of visual data imposes apparent limitations on the
quality of experience (QoE). To address the challenge, we propose an emergent
semantic communication framework to learn the communication protocols in MAR.
Specifically, we train two agents through a modified Lewis signaling game so
that a discrete communication protocol emerges spontaneously. Based on this
protocol,
two agents can communicate about the abstract idea of visual data through
messages with extremely small data sizes in a noisy channel, which leads to
message errors. To better simulate real-world scenarios, we incorporate channel
uncertainty into our training process. Experiments have shown that the proposed
scheme has better generalization on unseen objects than traditional object
recognition used in MAR and can effectively enhance communication efficiency
through the utilization of small-size messages.
|
http://arxiv.org/abs/2308.07342v1
|
We consider the obnoxious facility location problem (in which agents prefer
the facility location to be far from them) and propose a hierarchy of
distance-based proportional fairness concepts for the problem. These fairness
axioms ensure that groups of agents at the same location are guaranteed to be a
distance from the facility proportional to their group size. We consider
deterministic and randomized mechanisms, and compute tight bounds on the price
of proportional fairness. In the deterministic setting, not only are our
proportional fairness axioms incompatible with strategyproofness, but the Nash
equilibria may also fail to guarantee welfare within a constant factor of the
optimal welfare. On the other hand, in the randomized setting, we identify
proportionally fair and strategyproof mechanisms that give an expected welfare
within a constant factor of the optimal welfare.
|
http://arxiv.org/abs/2301.04340v1
|
This paper addresses the escalating challenges posed by the ever-increasing
data volume, velocity, and the demand for low-latency applications, driven by
the proliferation of smart devices and Internet of Things (IoT) applications.
To mitigate service delay and enhance Quality of Service (QoS), we introduce a
hybrid of Particle Swarm Optimization (PSO) and Chemical Reaction Optimization
(CRO) to reduce service delay in FogPlan, an offline framework that prioritizes
QoS and
enables dynamic fog service deployment. The method optimizes fog service
allocation based on incoming traffic to each fog node, formulating it as an
Integer Non-Linear Programming (INLP) problem, considering various service
attributes and costs. Our proposed algorithm aims to minimize service delay and
QoS degradation. The evaluation using real MAWI Working Group traffic data
demonstrates a substantial 29.34% reduction in service delay, a 66.02% decrease
in service costs, and a noteworthy 50.15% reduction in delay violations
compared to the FogPlan framework.
|
http://arxiv.org/abs/2301.12522v2
|
This paper aims to evaluate how changing patterns of sectoral gender
segregation play a role in accounting for women's employment contracts and
wages in the UK between 2005 and 2020. We then study wage differentials in
gender-specific dominated sectors. We found that the propensity of women to be
distributed differently across sectors is a major factor in explaining the
differences in wages and contract opportunities. Hence, the
disproportion of women in female-dominated sectors implies contractual features
and lower wages typical of that sector, on average, for all workers. This
difference is primarily explained by "persistent discriminatory constraints",
while human capital-related characteristics play a minor role. However, wage
differentials would shrink if workers had the same potential and residual wages
as men in male-dominated sectors. Moreover, this does not happen at the top of
the wage distribution, where wage differentials among women working in
female-dominated sectors are always more pronounced than those of men.
|
http://arxiv.org/abs/2303.04539v3
|
We say that a chessboard filled with integer entries satisfies the
neighbour-sum property if the number appearing on each cell is the sum of
entries in its neighbouring cells, where neighbours are cells sharing a common
edge or vertex. We show that an $n\times n$ chessboard satisfies this property
if and only if $n\equiv 5\pmod 6$. The existence of solutions is further
investigated for rectangular and toroidal boards, as well as for Neumann
neighbourhoods, including a nice connection to discrete harmonic functions.
Constructions of solutions on infinite boards are also presented. Finally,
three-dimensional analogues of these boards are explored using properties of
cyclotomic polynomials, and relevant conjectures are proposed.
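The property is easy to check mechanically; here is a small verifier (ours) for a given filled board, noting that the all-zero board is of course a trivial solution.

import numpy as np

def has_neighbour_sum_property(board):
    # Each entry must equal the sum of its (up to eight) edge- or
    # vertex-adjacent neighbours, i.e. every 3x3 window sum must equal
    # twice the centre entry.
    padded = np.pad(board, 1)
    n, m = board.shape
    for i in range(n):
        for j in range(m):
            if padded[i:i + 3, j:j + 3].sum() != 2 * board[i, j]:
                return False
    return True

print(has_neighbour_sum_property(np.zeros((5, 5), dtype=int)))  # trivially True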
|
http://arxiv.org/abs/2310.04401v1
|
A triangular solution [Phys. Rev. D 107, 044005 (2023)] has recently been
found for the planar circular three-body problem in the parametrized
post-Newtonian (PPN) formalism, focusing on a class of fully conservative
theories characterized by the Eddington-Robertson parameters
$\beta$ and $\gamma$. The present paper extends the PPN triangular solution to
quasi-elliptic motion, for which the shape of the triangular configuration
changes with time at the PPN order. The periastron shift due to the PPN effects
is also obtained.
|
http://arxiv.org/abs/2310.14612v2
|
We present an extension of the notion of in-splits from symbolic dynamics to
topological graphs and, more generally, to C*-correspondences. We demonstrate
that in-splits provide examples of strong shift equivalences of
C*-correspondences. Furthermore, we provide a streamlined treatment of Muhly,
Pask, and Tomforde's proof that any strong shift equivalence of regular
C*-correspondences induces a (gauge-equivariant) Morita equivalence between
Cuntz-Pimsner algebras. For topological graphs, we prove that in-splits induce
diagonal-preserving gauge-equivariant *-isomorphisms in analogy with the
results for Cuntz-Krieger algebras. Additionally, we examine the notion of
out-splits for C*-correspondences.
|
http://arxiv.org/abs/2305.01917v2
|
Alzheimer's Disease (AD) is a progressive disease preceded by Mild Cognitive
Impairment (MCI). Early detection of AD is crucial for making treatment
decisions. However, most of the literature on computer-assisted detection of AD
focuses on classifying brain images into one of three major categories:
healthy, MCI, and AD; or categorizing MCI patients into (1) progressive: those
who progress from MCI to AD at a future examination time, and (2) stable: those
who stay as MCI and never progress to AD. This misses the opportunity to
accurately identify the trajectory of progressive MCI patients. In this paper,
we revisit the brain image classification task for AD identification and
re-frame it as an ordinal classification task to predict how close a patient is
to the severe AD stage. To this end, we select progressive MCI patients from
the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and construct an
ordinal dataset with a prediction target that indicates the time to progression
to AD. We train a Siamese network model to predict the time to onset of AD
based on MRI brain images. We also propose a weighted variant of the Siamese
network and compare its performance to a baseline model. Our evaluations show
that incorporating a weighting factor into Siamese networks brings considerable
performance gain at predicting how close input brain MRI images are to
progressing to AD. Moreover, we complement our results with an interpretation
of the learned embedding space of the Siamese networks using a model
explainability technique.
|
http://arxiv.org/abs/2304.07097v2
|
In this paper, we present a deterministic attack on the (EC)DSA signature
scheme, provided that several signatures are known such that the corresponding
ephemeral keys share a certain amount of bits without knowing their value. By
eliminating the shared blocks of bits between the ephemeral keys, we obtain a
lattice whose dimension equals the number of signatures and which contains a
vector encoding the private key. We compute an upper bound for the distance of
this
vector from a target vector, and next, using Kannan's enumeration algorithm, we
determine it and hence the secret key. The attack can be made highly efficient
by appropriately selecting the number of shared bits and the number of
signatures.
|
http://arxiv.org/abs/2307.03979v1
|
Water is routinely exposed to external electric fields (EFs). Whether, e.g.,
at physiological conditions, in contact with biological systems, or at the
interface of polar surfaces in countless technological and industrial settings,
water responds to EFs on the order of a few V/{\AA} in a manner that is still
under intense investigation. Dating back to the $19^{th}$ century, the
possibility of solidifying water upon applying an EF instead of adjusting
temperature and pressure -- a process known as electrofreezing -- is an
alluring promise that has channeled major efforts ever since, with uncertain
outcomes. In this work, we perform long \emph{ab initio} molecular dynamics
simulations of water at ambient conditions exposed to EFs of different
intensities. While the response of single water molecules is almost
instantaneous, the cooperativity of the hydrogen bonds induces slower
reorganizations that can be captured by dividing the trajectories into disjoint
time windows and performing the analysis on each of them separately. Upon
adopting this approach, we find that EFs of $0.10\leq \mathrm{EF} \leq
0.15$~V/{\AA} induce electrofreezing occurring after $\sim150$~ps. We observe a
continuous transition to a disordered state characterized by frozen dynamical
properties, damped oscillations, lower energy, and enhanced local structural
properties. We therefore ascribe this state to a new ferroelectric amorphous
phase, which we term f-GW (ferroelectric glassy water). Our work represents the
first evidence of electrofreezing of liquid water at ambient conditions and
therefore impacts several fields, from fundamental chemical physics to biology
and catalysis.
|
http://arxiv.org/abs/2308.04893v1
|
Modified theories of gravity encompass a class of $f(R)$-models that seek to
elucidate the observed late time accelerated expansion of the universe. In this
study, we examine a set of viable $f(R)$ models (Hu-Sawicki: two cases,
Starobinsky, Tsujikawa, exponential, and arcTanh models) in metric formalism,
using recent cosmological data sets: type Ia supernovae data, cosmic
chronometer observations, baryonic acoustic oscillations data, data from
H\textsc{ii} starburst galaxies, and local measurements of the Hubble parameter
$H_0$. The model parameters are constrained using a Bayesian analysis with the
Monte Carlo Markov Chain method. We employ statistical tools such as the Akaike
Information Criterion, Bayesian Information Criterion, and reduced chi-square
statistics to conduct a comparative investigation of these models. We determine
the transition redshift, the evolution of total equation-of-state (EoS)
parameter, and the EoS for the component responsible for current accelerated
expansion to characterize the expansion's evolution. Taking into account the
``Hubble tension,'' we perform the study with and without a Gaussian prior for
$H_0$ from local measurements. Our findings are as follows: (i) in many cases
the $f(R)$ models are strongly favored over the standard $\Lambda$CDM model,
(ii) the deviation parameter ($b$) significantly deviates from zero in several
cases, (iii) the inclusion of local $H_0$ not only increases the fitted value
of $H_0$ (as expected) but also affects the gap between predictions of $f(R)$
models and the $\Lambda$CDM model, and (iv) the relevant quantities
characterizing the (accelerated) expansion of the universe obtained in our
models are consistent with those obtained in a model-independent way by others.
Our investigation and results present a compelling case for pursuing further
research on $f(R)$ models with future observations to come.
|
http://arxiv.org/abs/2306.12585v1
|
It is commonly recognized that the Landauer bound holds in (irreversible)
quantum operations. In this study, we verified this bound by extracting a
single spin from a spin-spin magnetic interaction experiment to demonstrate
that the Landauer bound can be approached quantitatively, with an approach rate
of 79.3 percent, via quantum spin tunneling. An optically manipulated
spin-encoded quantum computer is designed, in which an energy bound near $k_B
T$ to erase a spin qubit is theoretically sensible and experimentally verified.
This
work may represent the last piece of the puzzle in quantum Landauer erasure in
terms of a single spin being the smallest information carrier.
|
http://arxiv.org/abs/2302.00476v2
|
A result of Hohloch links the theory of integer partitions with the Monge
formulation of the optimal transport problem, giving the optimal transport map
between (Young diagrams of) integer partitions and their corresponding
symmetric partitions. Our aim is to extend Hohloch's result to the higher
dimensional case. In doing so, we show the Kantorovich formulation of the
optimal transport problem provides the tool to study the matching of higher
dimensional partitions with their corresponding symmetric partitions.
|
http://arxiv.org/abs/2310.10474v1
|
The Koopman framework is a popular approach to transform a finite dimensional
nonlinear system into an infinite dimensional, but linear model through a
lifting process, using so-called observable functions. While there is an
extensive theory on infinite dimensional representations in the operator sense,
there are few constructive results on how to select the observables to realize
them. When it comes to the possibility of finite Koopman representations, which
are highly important from a practical point of view, there is no constructive
theory. Hence, in practice, a data-based method with an ad-hoc choice of the
observable functions is often used. When truncating to a finite number of basis
functions,
there is also no clear indication of the introduced approximation error. In
this paper, we propose a systematic method to compute the finite dimensional
Koopman embedding of a specific class of polynomial nonlinear systems in
continuous time such that the embedding, without approximation, can fully
represent the dynamics of the nonlinear system.
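For intuition, a standard textbook example (not taken from this paper) of a polynomial system that admits an exact finite-dimensional Koopman embedding is
\[
\dot{x}_1 = \mu x_1, \qquad \dot{x}_2 = \lambda\,(x_2 - x_1^2),
\]
which becomes linear after lifting with the observables $z = (x_1,\, x_2,\, x_1^2)$:
\[
\frac{d}{dt}\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
= \begin{pmatrix} \mu & 0 & 0 \\ 0 & \lambda & -\lambda \\ 0 & 0 & 2\mu \end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix},
\qquad \text{since } \dot{z}_3 = 2 x_1 \dot{x}_1 = 2\mu z_3 .
\]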
|
http://arxiv.org/abs/2301.06557v1
|
Methods for object detection and segmentation often require abundant
instance-level annotations for training, which are time-consuming and expensive
to collect. To address this, the task of zero-shot object detection (or
segmentation) aims at learning effective methods for identifying and localizing
object instances for the categories that have no supervision available.
Constructing architectures for these tasks requires choosing from a myriad of
design options, ranging from the form of the class encoding used to transfer
information from seen to unseen categories, to the nature of the function being
optimized for learning. In this work, we extensively study these design
choices, and carefully construct a simple yet extremely effective zero-shot
recognition method. Through extensive experiments on the MSCOCO dataset on
object detection and segmentation, we highlight that our proposed method
outperforms existing, considerably more complex, architectures. Our findings
and method, which we propose as a competitive future baseline, point towards
the need to revisit some of the recent design trends in zero-shot detection /
segmentation.
|
http://arxiv.org/abs/2302.07319v1
|
Recently, deep learning has produced encouraging results for kidney stone
classification using endoscope images. However, the shortage of annotated
training data poses a severe problem in improving the performance and
generalization ability of the trained model. It is thus crucial to fully
exploit the limited data at hand. In this paper, we propose SegPrompt to
alleviate the data shortage problems by exploiting segmentation maps from two
aspects. First, SegPrompt integrates segmentation maps to facilitate
classification training so that the classification model is aware of the
regions of interest. The proposed method allows the image and segmentation
tokens to interact with each other to fully utilize the segmentation map
information. Second, we use the segmentation maps as prompts to tune the
pretrained deep model, resulting in much fewer trainable parameters than
vanilla finetuning. We perform extensive experiments on the collected kidney
stone dataset. The results show that SegPrompt can achieve an advantageous
balance between the model fitting ability and the generalization ability,
eventually leading to an effective model with limited training data.
|
http://arxiv.org/abs/2303.08303v1
|
In unmanned aerial vehicle (UAV)-assisted millimeter wave (mmWave) systems,
channel state information (CSI) feedback is critical for the selection of
modulation schemes, resource management, beamforming, etc. However, traditional
CSI feedback methods lead to significant feedback overhead and energy
consumption of the UAV transmitter, therefore shortening the system operation
time. To tackle these issues, inspired by superimposed feedback and integrated
sensing and communications (ISAC), a line of sight (LoS) sensing-based
superimposed CSI feedback scheme is proposed. Specifically, on the UAV
transmitter side, the ground-to-UAV (G2U) CSI is superimposed on the
UAV-to-ground (U2G) data to feed back to the ground base station (gBS). At the
gBS, a dedicated LoS sensing network (LoS-SenNet) is designed to sense the U2G
CSI in LoS and NLoS scenarios. With the sensed result of LoS-SenNet, the G2U
CSI determined from the initial feature extraction serves as prior information
to guide the subsequent operation. Specifically, for the G2U CSI in
NLoS, a CSI recovery network (CSI-RecNet) and superimposed interference
cancellation are developed to recover the G2U CSI and U2G data. As for the LoS
scenario, a dedicated LoS aid network (LoS-AidNet) is embedded before the
CSI-RecNet and the block of superimposed interference cancellation to highlight
the feature of the G2U CSI. Compared with other methods of superimposed CSI
feedback, simulation results demonstrate that the proposed feedback scheme
effectively improves the recovery accuracy of the G2U CSI and U2G data.
Moreover, the proposed feedback scheme remains robust against parameter
variations.
|
http://arxiv.org/abs/2302.10665v1
|
Text simplification aims to make the text easier to understand by applying
rewriting transformations. There has been very little research on Chinese text
simplification for a long time. The lack of generic evaluation data is an
essential reason for this phenomenon. In this paper, we introduce MCTS, a
multi-reference Chinese text simplification dataset. We describe the annotation
process of the dataset and provide a detailed analysis. Furthermore, we
evaluate the performance of several unsupervised methods and advanced large
language models. We additionally provide Chinese text simplification parallel
data that can be used for training, acquired by utilizing machine translation
and English text simplification. We hope to build a basic understanding of
Chinese text simplification through this foundational work and to provide
references for future research. All of the code and data are released at
https://github.com/blcuicall/mcts/.
|
http://arxiv.org/abs/2306.02796v3
|
Four-dimensional weak-constraint variational data assimilation estimates a
state given partial noisy observations and a dynamical model by minimizing a cost
function that takes into account both discrepancy between the state and
observations and model error over time. It can be formulated as a Gauss-Newton
iteration of an associated least-squares problem. In this paper, we introduce a
parameter in front of the observation mismatch and show analytically that this
parameter is crucial either for convergence to the true solution when
observations are noise-free or for boundedness of the error when observations
are noisy with bounded observation noise. We also consider joint
state-parameter estimation. We illustrate the theoretical results with
numerical experiments using the Lorenz 63 and Lorenz 96 models.
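For reference, one common form of the weak-constraint cost function, written in our notation with a hypothetical weight $\alpha$ placed in front of the observation mismatch as described above, is
\[
J(x_0,\dots,x_K) = \alpha \sum_{k=0}^{K} \bigl\| y_k - H_k(x_k) \bigr\|_{R_k^{-1}}^{2}
+ \sum_{k=1}^{K} \bigl\| x_k - M_k(x_{k-1}) \bigr\|_{Q_k^{-1}}^{2},
\]
where $M_k$ is the dynamical model, $H_k$ the observation operator, and $R_k$, $Q_k$ the observation- and model-error weighting matrices (a background term on $x_0$ may also be included); the Gauss-Newton iteration is then applied to the associated nonlinear least-squares problem.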
|
http://arxiv.org/abs/2304.05858v1
|
Visual error metrics play a fundamental role in the quantification of
perceived image similarity. Most recently, use cases for them in real-time
applications have emerged, such as content-adaptive shading and shading reuse
to increase performance and improve efficiency. A wide range of different
metrics has been established, with the most sophisticated being capable of
capturing the perceptual characteristics of the human visual system. However,
their complexity, computational expense, and reliance on reference images to
compare against prevent their generalized use in real-time, restricting such
applications to using only the simplest available metrics. In this work, we
explore the abilities of convolutional neural networks to predict a variety of
visual metrics without requiring either reference or rendered images.
Specifically, we train and deploy a neural network to estimate the visual error
resulting from reusing shading or using reduced shading rates. The resulting
models account for 70%-90% of the variance while achieving up to an order of
magnitude faster computation times. Our solution combines image-space
information that is readily available in most state-of-the-art deferred shading
pipelines with reprojection from previous frames to enable an adequate estimate
of visual errors, even in previously unseen regions. We describe a suitable
convolutional network architecture and considerations for data preparation for
training. We demonstrate the capability of our network to predict complex error
metrics at interactive rates in a real-time application that implements
content-adaptive shading in a deferred pipeline. Depending on the portion of
unseen image regions, our approach can achieve up to $2\times$ performance
compared to state-of-the-art methods.
|
http://arxiv.org/abs/2310.09125v1
|
We present our study of stability of differentially rotating, axisymmetric
neutron stars described by a polytropic equation of state with $\Gamma = 2$. We
focus on quasi-toroidal solutions with a degree of differential rotation
$\widetilde A=1$. Our results show that for a wide range of parameters
hypermassive, quasi-toroidal neutron stars are dynamically stable against
quasi-radial perturbations, which may have implications for newly born neutron
stars and binary neutron star mergers.
|
http://arxiv.org/abs/2302.06007v1
|
$\mathrm{MoS_2}$ is an emergent van der Waals material that shows promising
prospects in semiconductor industry and optoelectronic applications. However,
its electronic properties are not yet fully understood. In particular, the
nature of the insulating state at low carrier density deserves further
investigation, as it is important for fundamental research and applications. In
this study, we investigate the insulating state of a dual-gated exfoliated
bilayer $\mathrm{MoS_2}$ field-effect transistor by performing magnetotransport
experiments. We observe positive and non-saturating magnetoresistance, in a
regime where only one band contributes to electron transport. At low electron
density ($\sim 1.4\times 10^{12}~\mathrm{cm^{-2}}$) and a perpendicular
magnetic field of 7 Tesla, the resistance exceeds the zero-field resistance by
more than one order of magnitude and drops exponentially with increasing
temperature. We attribute this observation to strong electron localization.
Both temperature and magnetic field dependence can, at least qualitatively, be
described by the Efros-Shklovskii law, predicting the formation of a Coulomb
gap in the density of states due to Coulomb interactions. However, the
localization length obtained from fitting the temperature dependence exceeds
the one obtained from the magnetic field dependence by more than one order of
magnitude. We attribute this discrepancy to the presence of a nearby metallic
gate, which provides electrostatic screening and thus reduces long-range
Coulomb interactions. The result of our study suggests that the insulating
state of $\mathrm{MoS_2}$ originates from a combination of disorder-driven
electron localization and Coulomb interactions.
|
http://arxiv.org/abs/2308.13337v2
|
In this paper, we estimate an operator norm of dilation operators on block
spaces ($\mathfrak{B}_{r,\alpha}(\mathbb{Q}_p)$) over $p$-adic field. With this
estimate, we establish the boundedness of $p$-adic Hardy-Hilbert type integral
operator on $\mathfrak{B}_{r,\alpha}(\mathbb{Q}_p)$. Moreover, as an
application of our result, we obtain the $p$-adic Hilbert inequality, $p$-adic
Hardy
inequality and $p$-adic Hardy-Littlewood-P\'olya inequality on
$\mathfrak{B}_{r,\alpha}(\mathbb{Q}_p)$.
|
http://arxiv.org/abs/2303.11652v1
|
Secure computation often benefits from the use of correlated randomness to
achieve fast, non-cryptographic online protocols. A recent paradigm put forth
by Boyle $\textit{et al.}$ (CCS 2018, Crypto 2019) showed how pseudorandom
correlation generators (PCG) can be used to generate large amounts of useful
forms of correlated (pseudo)randomness, using minimal interactions followed
solely by local computations, yielding silent secure two-party computation
protocols (protocols where the preprocessing phase requires almost no
communication). An additional property called programmability allows one to
extend this to build N-party protocols. However, known constructions for
programmable PCG's can only produce OLE's over large fields, and rely on the
rather new splittable Ring-LPN assumption.
In this work, we overcome both limitations. To this end, we introduce the
quasi-abelian syndrome decoding problem (QA-SD), a family of assumptions which
generalises the well-established quasi-cyclic syndrome decoding assumption.
Building upon QA-SD, we construct new programmable PCG's for OLE's over any
field $\mathbb{F}_q$ with $q>2$. Our analysis also sheds light on the security
of the ring-LPN assumption used in Boyle $\textit{et al.}$ (Crypto 2020). Using
our new PCG's, we obtain the first efficient N-party silent secure computation
protocols for computing general arithmetic circuits over $\mathbb{F}_q$ for any
$q>2$.
|
http://arxiv.org/abs/2306.03488v1
|
We work to create a multilingual speech synthesis system which can generate
speech with the proper accent while retaining the characteristics of an
individual voice. This is challenging to do because it is expensive to obtain
bilingual training data in multiple languages, and the lack of such data
results in strong correlations that entangle speaker, language, and accent,
resulting in poor transfer capabilities. To overcome this, we present a
multilingual, multiaccented, multispeaker speech synthesis model based on
RADTTS with explicit control over accent, language, speaker and fine-grained
$F_0$ and energy features. Our proposed model does not rely on bilingual
training data. We demonstrate an ability to control synthesized accent for any
speaker in an open-source dataset comprising 7 accents. Human subjective
evaluation demonstrates that our model can better retain a speaker's voice and
accent quality than controlled baselines while synthesizing fluent speech in
all target languages and accents in our dataset.
|
http://arxiv.org/abs/2301.10335v1
|
Neuroevolution (NE) has recently proven a competitive alternative to learning
by gradient descent in reinforcement learning tasks. However, the majority of
NE methods and associated simulation environments differ crucially from
biological evolution: the environment is reset to initial conditions at the end
of each generation, whereas natural environments are continuously modified by
their inhabitants; agents reproduce based on their ability to maximize rewards
within a population, while biological organisms reproduce and die based on
internal physiological variables that depend on their resource consumption;
simulation environments are primarily single-agent while the biological world
is inherently multi-agent and evolves alongside the population. In this work we
present a method for continuously evolving adaptive agents without any
environment or population reset. The environment is a large grid world with
complex spatiotemporal resource generation, containing many agents that are
each controlled by an evolvable recurrent neural network and locally reproduce
based on their internal physiology. The entire system is implemented in JAX,
allowing very fast simulation on a GPU. We show that NE can operate in an
ecologically-valid non-episodic multi-agent setting, finding sustainable
collective foraging strategies in the presence of a complex interplay between
ecological and evolutionary dynamics.
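To make the contrast with reward-maximizing NE concrete, here is a toy sketch (ours, far simpler than the grid world and JAX implementation described above) of physiology-driven, non-episodic reproduction.

import numpy as np

def agent_step(weights, energy, food_gain, rng,
               metabolic_cost=0.1, reproduce_at=2.0, mutate_sigma=0.02):
    # Internal physiology: energy rises with foraged food and falls with a
    # metabolic cost; there is no per-generation reset of the environment.
    energy = energy + food_gain - metabolic_cost
    offspring = None
    if energy > reproduce_at:
        # Reproduction is triggered by the agent's own internal state, not by
        # an external fitness ranking; the offspring gets mutated weights.
        offspring = weights + mutate_sigma * rng.normal(size=weights.shape)
        energy -= 1.0   # energy cost of reproduction
    alive = energy > 0.0
    return energy, offspring, alive

rng = np.random.default_rng(0)
e, child, alive = agent_step(np.zeros(8), energy=1.95, food_gain=0.3, rng=rng)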
|
http://arxiv.org/abs/2302.09334v3
|
We discuss problems associated with the notion of pH in heterogeneous
systems. For homogeneous systems, standardization protocols lead to a well
defined quantity, which although different from S\o rensen's original idea of
pH, is well reproducible and has become accepted as the measure of the
``hydrogen potential". On the other hand, for heterogeneous systems, pH defined
in terms of the chemical part of the electrochemical activity is
thermodynamically inconsistent and runs afoul of the Gibbs-Guggenheim principle
that forbids splitting of the electrochemical potential into separate chemical
and electrostatic parts -- since only the sum of two has any thermodynamic
meaning. The problem is particularly relevant for modern simulation methods
which involve charge regulation of proteins, polyelectrolytes, nanoparticles,
colloidal suspensions etc. In this paper we show that titration isotherms
calculated using semi-grand canonical simulations can be very different from
the ones obtained using canonical reactive Monte Carlo simulations.
|
http://arxiv.org/abs/2310.01579v1
|
We introduce MLFMF, a collection of data sets for benchmarking recommendation
systems used to support formalization of mathematics with proof assistants.
These systems help humans identify which previous entries (theorems,
constructions, datatypes, and postulates) are relevant in proving a new theorem
or carrying out a new construction. Each data set is derived from a library of
formalized mathematics written in proof assistants Agda or Lean. The collection
includes the largest Lean~4 library Mathlib, and some of the largest Agda
libraries: the standard library, the library of univalent mathematics
Agda-unimath, and the TypeTopology library. Each data set represents the
corresponding library in two ways: as a heterogeneous network, and as a list of
s-expressions representing the syntax trees of all the entries in the library.
The network contains the (modular) structure of the library and the references
between entries, while the s-expressions give complete and easily parsed
information about every entry. We report baseline results using standard graph
and word embeddings, tree ensembles, and instance-based learning algorithms.
The MLFMF data sets provide solid benchmarking support for further
investigation of the numerous machine learning approaches to formalized
mathematics. The methodology used to extract the networks and the s-expressions
readily applies to other libraries, and is applicable to other proof
assistants. With more than $250\,000$ entries in total, this is currently the
largest collection of formalized mathematical knowledge in machine learnable
format.
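As a pointer to how the s-expression representation can be consumed, here is a tiny parser (ours; the MLFMF files may include additional annotations) that turns an s-expression into a nested Python list.

def parse_sexpr(text):
    # Tokenize parentheses and atoms, then build the nested tree recursively.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(i):
        if tokens[i] == "(":
            node, i = [], i + 1
            while tokens[i] != ")":
                child, i = parse(i)
                node.append(child)
            return node, i + 1
        return tokens[i], i + 1
    tree, _ = parse(0)
    return tree

print(parse_sexpr("(def foo (lambda (x) (plus x 1)))"))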
|
http://arxiv.org/abs/2310.16005v1
|
Multipath-based simultaneous localization and mapping (SLAM) is a promising
approach to obtain position information of transmitters and receivers as well
as information regarding the propagation environments in future mobile
communication systems. Usually, specular reflections of the radio signals
occurring at flat surfaces are modeled by virtual anchors (VAs) that are mirror
images of the physical anchors (PAs). In existing methods for multipath-based
SLAM, each VA is assumed to generate only a single measurement. However, due to
imperfections of the measurement equipment such as non-calibrated antennas or
model mismatch due to roughness of the reflective surfaces, there are
potentially multiple multipath components (MPCs) that are associated to one
single VA. In this paper, we introduce a Bayesian particle-based sum-product
algorithm (SPA) for multipath-based SLAM that can cope with multiple
measurements being associated with a single VA. Furthermore, we
introduce a novel statistical measurement model that is strongly related to the
radio signal. It introduces additional dispersion parameters into the
likelihood function to capture additional MPC-related measurements. We
demonstrate that the proposed SLAM method can robustly fuse multiple
measurements per VA based on numerical simulations.
|
http://arxiv.org/abs/2304.05680v4
|
The growing adoption of IT solutions in the healthcare sector is leading to a
steady increase in the number of cybersecurity incidents. As a result,
organizations worldwide have introduced regulations, standards, and best
practices to address cybersecurity and data protection issues in this sector.
However, the application of this large corpus of documents presents operational
difficulties, and operators continue to lag behind in resilience to cyber
attacks. This paper contributes a systematization of the significant
cybersecurity documents relevant to the healthcare sector. We collected the 49
most significant documents and used the NIST cybersecurity framework to
categorize key information and support the implementation of cybersecurity
measures.
|
http://arxiv.org/abs/2304.14955v1
|
The increasing focus on long-term time series prediction across various
fields has been significantly strengthened by advancements in quantum
computation. In this paper, we introduce a data-driven method designed for
long-term time series prediction with quantum dynamical embedding (QDE). This
approach enables a trainable embedding of the data space into an extended state
space, allowing for the recursive retrieval of time series information. Because
the method is independent of the time series length, it achieves
depth-efficient quantum circuits that are crucial for near-term quantum
computers. Numerical
simulations demonstrate the model's improved performance in prediction accuracy
and resource efficiency over existing methods, as well as its effective
denoising capabilities. We implement this model on the Origin ``Wukong''
superconducting quantum processor with a learnable error-cancellation layer
(LECL) for error mitigation, further validating the practical applicability of
our approach on near-term quantum devices. Furthermore, the theoretical
analysis of the QDE's dynamical properties and its universality enhances its
potential for time series prediction. This study establishes a significant step
towards the processing of long-term time series on near-term quantum computers,
integrating data-driven learning with discrete dynamical embedding for enhanced
forecasting capabilities.
|
http://arxiv.org/abs/2305.15976v3
|
On the modern web, trackers and advertisers frequently construct and monetize
users' detailed behavioral profiles without consent. Despite various studies on
web tracking mechanisms and advertisements, there has been no rigorous study
focusing on websites targeted at children. To address this gap, we present a
measurement of tracking and (targeted) advertising on websites directed at
children. Motivated by the lack of a comprehensive list of child-directed (i.e.,
targeted at children) websites, we first build a multilingual classifier based
on web page titles and descriptions. Applying this classifier to over two
million pages, we compile a list of two thousand child-directed websites.
Crawling these sites from five vantage points, we measure the prevalence of
trackers, fingerprinting scripts, and advertisements. Our crawler detects ads
displayed on child-directed websites and determines if ad targeting is enabled
by scraping ad disclosure pages whenever available. Our results show that
around 90% of child-directed websites embed one or more trackers, and about 27%
contain targeted advertisements--a practice that should require verifiable
parental consent. Next, we identify improper ads on child-directed websites by
developing an ML pipeline that processes both images and text extracted from
ads. The pipeline allows us to run semantic similarity queries for arbitrary
search terms, revealing ads that promote services related to dating, weight
loss, and mental health; as well as ads for sex toys and flirting chat
services. Some of these ads feature repulsive and sexually explicit imagery. In
summary, our findings indicate a trend of non-compliance with privacy
regulations and troubling ad safety practices among many advertisers and
child-directed websites. To protect children and create a safer online
environment, regulators and stakeholders must adopt and enforce more stringent
measures.
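A generic sketch of the semantic similarity query step described above (our simplification, not the study's pipeline): embed the extracted ad texts and the search term with any text encoder, then rank ads by cosine similarity.

import numpy as np

def rank_ads_by_query(ad_embeddings, query_embedding, top_k=10):
    # Cosine similarity between each ad embedding and the query embedding;
    # the embeddings themselves can come from any text or image encoder.
    a = ad_embeddings / np.linalg.norm(ad_embeddings, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = a @ q
    return np.argsort(-scores)[:top_k]

ads = np.random.randn(100, 384)      # placeholder embeddings for 100 ads
query = np.random.randn(384)         # placeholder embedding of a search term
print(rank_ads_by_query(ads, query, top_k=5))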
|
http://arxiv.org/abs/2308.04887v2
|
The memristor-aided logic (MAGIC) design style holds high promise for realizing
digital logic-in-memory functionality. The ability to implement a specific gate
in a MAGIC design style hinges on the SET-to-RESET threshold ratio. The TaOx
memristive devices exhibit distinct SET-to-RESET ratios, enabling the
implementation of OR and NOT operations. As the adoption of the MAGIC design
style gains momentum, it becomes crucial to understand the breakdown of energy
consumption in the various phases of its operation. This paper presents
experimental demonstrations of the OR and NOT gates on a 1T1R crossbar array.
Additionally, it provides insights into the energy distribution for performing
these operations at different stages. Through our experiments across different
gates, we found that the energy consumption is dominated by initialization in
the MAGIC design style. The energy split-up is 14.8%, 85%, and 0.2% for
execution, initialization, and read operations, respectively.
|
http://arxiv.org/abs/2310.10460v1
|
GRB221009A is the brightest gamma-ray burst ever detected. To probe the
very-high-energy (VHE, $>$\!100 GeV) emission, the High Energy Stereoscopic
System (H.E.S.S.) began observations 53 hours after the triggering event, when
the brightness of the moonlight no longer precluded observations. We derive
differential and integral upper limits using H.E.S.S. data from the third,
fourth, and ninth nights after the initial GRB detection, after applying
atmospheric corrections. The combined observations yield an integral energy
flux upper limit of $\Phi_\mathrm{UL}^{95\%} = 9.7 \times
10^{-12}~\mathrm{erg\,cm^{-2}\,s^{-1}}$ above $E_\mathrm{thr} = 650$ GeV. The
constraints derived from the H.E.S.S. observations complement the available
multiwavelength data. The radio to X-ray data are consistent with synchrotron
emission from a single electron population, with the peak in the SED occurring
above the X-ray band. Compared to the VHE-bright GRB190829A, the upper limits
for GRB221009A imply a smaller gamma-ray to X-ray flux ratio in the afterglow.
Even in the absence of a detection, the H.E.S.S. upper limits thus contribute
to the multiwavelength picture of GRB221009A, effectively ruling out an
IC-dominated scenario.
|
http://arxiv.org/abs/2303.10558v1
|
White dwarf photospheric parameters are usually obtained by means of
spectroscopic or photometric analysis. These results are not always consistent
with each other, with the published values often including just the statistical
uncertainties. The differences are more dramatic for white dwarfs with
helium-dominated photospheres, so to obtain realistic uncertainties we have
analysed a sample of 13 of these white dwarfs, applying both techniques to up
to three different spectroscopic and photometric data sets for each star. We
found mean standard deviations of $\langle \sigma T_{\mathrm{eff}} \rangle = 524$ K,
$\langle \sigma \log g \rangle = 0.27$ dex, and
$\langle \sigma \log(\mathrm{H/He}) \rangle = 0.31$ dex for the effective temperature,
surface gravity, and relative hydrogen abundance, respectively, when modelling
diverse spectroscopic data. The photometric fits provided mean standard
deviations of up to $\langle \sigma T_{\mathrm{eff}} \rangle = 1210$ K and
$\langle \sigma \log g \rangle = 0.13$ dex. We suggest that these values be adopted as
realistic lower limits to the published uncertainties in parameters derived
from spectroscopic and photometric fits for white dwarfs with similar
characteristics. In addition, we investigate the effect of fitting the
observational data adopting three different photospheric chemical compositions.
In general, pure helium model spectra result in larger $T_{\mathrm{eff}}$
compared to those derived from models with traces of hydrogen. The $\log g$
shows opposite trends: smaller spectroscopic values and larger photometric ones
when compared to models with hydrogen. The addition of metals to the models
also affects the derived atmospheric parameters, but a clear trend is not
found.
|
http://arxiv.org/abs/2301.09670v1
|
Reassembling 3D broken objects is a challenging task. A robust solution that
generalizes well must deal with diverse patterns associated with different
types of broken objects. We propose a method that tackles the pairwise assembly
of 3D point clouds, is agnostic to the type of object, and relies
solely on their geometric information, without any prior knowledge of the
shape of the reconstructed object. The method receives two point clouds as
input and segments them into regions using detected closed boundary contours,
known as breaking curves. Possible alignment combinations of the regions of
each broken object are evaluated and the best one is selected as the final
alignment. Experiments were carried out both on available 3D scanned objects
and on a recent benchmark for synthetic broken objects. Results show that our
solution performs well in reassembling different kinds of broken objects.
|
http://arxiv.org/abs/2306.02782v1
|
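The abstract above evaluates candidate alignments between regions of two fragments; the sketch below shows one simple way such a candidate could be scored (mean nearest-neighbour distance after a rigid transform). It is an illustration under assumed names and a made-up scoring rule, not the authors' implementation.

```python
# Scoring one candidate alignment between two fragment regions.
# Illustrative only: names, data, and the scoring rule are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def alignment_score(region_a, region_b, rotation, translation):
    """Mean nearest-neighbour distance of region_a onto region_b after a
    candidate rigid transform (lower means a better fit)."""
    transformed = region_a @ rotation.T + translation
    dists, _ = cKDTree(region_b).query(transformed)
    return dists.mean()

# Toy usage: a point cloud against a slightly perturbed copy of itself.
rng = np.random.default_rng(0)
a = rng.normal(size=(500, 3))
b = a + rng.normal(scale=0.01, size=(500, 3))
print(alignment_score(a, b, np.eye(3), np.zeros(3)))
```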
Explainable artificial intelligence is increasingly used in machine learning
(ML) based decision-making systems in healthcare. However, little research has
compared the utility of different explanation methods in guiding healthcare
experts for patient care. Moreover, it is unclear how useful, understandable,
actionable and trustworthy these methods are for healthcare experts, as they
often require technical ML knowledge. This paper presents an explanation
dashboard that predicts the risk of diabetes onset and explains those
predictions with data-centric, feature-importance, and example-based
explanations. We designed an interactive dashboard to assist healthcare
experts, such as nurses and physicians, in monitoring the risk of diabetes
onset and recommending measures to minimize risk. We conducted a qualitative
study with 11 healthcare experts and a mixed-methods study with 45 healthcare
experts and 51 diabetic patients to compare the different explanation methods
in our dashboard in terms of understandability, usefulness, actionability, and
trust. Results indicate that our participants preferred our representation of
data-centric explanations, which provide local explanations alongside a global
overview, over the other methods. Therefore, this paper highlights the importance
of visually directive data-centric explanation methods for assisting healthcare
experts to gain actionable insights from patient health records. Furthermore,
we share our design implications for tailoring the visual representation of
different explanation methods for healthcare experts.
|
http://arxiv.org/abs/2302.10671v1
|
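As a small, self-contained companion to the explanation types compared above, the sketch below computes a feature-importance explanation with permutation importance on synthetic data. It is not the dashboard's implementation; the model, dataset, and feature names are placeholders.

```python
# Feature-importance explanation via permutation importance (illustrative).
# Synthetic data and model choice are placeholders, not the study's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```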
Motivated by the papers of Mladenovic and Piterbarg (2006), Krajka (2011) and
Pereira and Tan (2017), we study the limit properties of the maxima of
nonstationary random fields subject to missing observations and obtain
weak convergence and almost sure convergence results for these maxima. Some
examples, such as Gaussian random fields, $\chi$-random fields and Gaussian order
statistics fields, are given to illustrate the obtained results.
|
http://arxiv.org/abs/2306.13857v1
|
The concept of fairness is gaining popularity in academia and industry.
Social media is especially vulnerable to media biases and toxic language and
comments. We propose a fair ML pipeline that takes a text as input and
determines whether it contains biases and toxic content. Then, based on
pre-trained word embeddings, it suggests a set of new words to substitute for
the biased words; the idea is to lessen the effects of those biases by replacing
them with alternative words. We compare our approach to existing fairness
models to determine its effectiveness. The results show that our proposed
pipeline can detect, identify, and mitigate biases in social media data.
|
http://arxiv.org/abs/2303.07024v1
|
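The word-substitution step described above can be illustrated with off-the-shelf pre-trained embeddings: given a word flagged by a bias detector, nearest neighbours in the embedding space serve as candidate replacements. The embedding model and the flagged word below are assumptions for illustration, not the paper's choices, and the detection step itself is not shown.

```python
# Suggesting substitutes for a flagged word via pre-trained embeddings.
# Illustrative only: model name and flagged word are assumptions.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")   # small pre-trained GloVe model

flagged_word = "crazy"                          # e.g. output of a bias detector
candidates = vectors.most_similar(flagged_word, topn=10)

# A real pipeline would re-rank candidates (e.g. by a toxicity score);
# here the nearest neighbours are simply listed as potential replacements.
for word, similarity in candidates:
    print(f"{word}: {similarity:.2f}")
```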
Although non-intuitive, an accelerated electron along a particular trajectory
can be shown to emit classical electromagnetic radiation in the form of a
Fermi-Dirac spectral distribution when observed in a particular angular regime.
We investigate the relationship between the distribution, spectrum, and
particle count. The result for the moving point charge is classical, as it
accelerates along an exactly known trajectory. We map to the semi-classical
regime of the moving mirror model with a quantized spin-0 field. The scalars
also possess a $\beta$ Bogoliubov coefficient distribution with Fermi-Dirac
form in the respective frequency regime.
|
http://arxiv.org/abs/2307.12860v1
|
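For reference, the Fermi-Dirac spectral form referred to above corresponds to a per-mode occupation of $\langle N_\omega \rangle = 1/(e^{\omega/T} + 1)$, where $\omega$ is the mode frequency and $T$ an effective temperature set by the trajectory; this parameterization is quoted only as the generic Fermi-Dirac form, not as the paper's specific result.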