Dataset Viewer
text (string) | source (string)
---|---
Given the recent advances with image-generating algorithms, deep image
completion methods have made significant progress. However, state-of-the-art
methods typically provide poor cross-scene generalization, and generated masked
areas often contain blurry artifacts. Predictive filtering is a method for
restoring images, which predicts the most effective kernels based on the input
scene. Motivated by this approach, we address image completion as a filtering
problem. Deep feature-level semantic filtering is introduced to fill in missing
information, while preserving local structure and generating visually realistic
content. In particular, a Dual-path Cooperative Filtering (DCF) model is
proposed, where one path predicts dynamic kernels, and the other path extracts
multi-level features by using Fast Fourier Convolution to yield semantically
coherent reconstructions. Experiments on three challenging image completion
datasets show that our proposed DCF outperforms state-of-the-art methods.
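
For illustration, the core predictive-filtering operation described above — applying a per-pixel kernel predicted by one path to the pixels or features produced by the other — can be sketched roughly as follows. This is a minimal PyTorch sketch with an assumed 3x3 kernel size, not the authors' DCF implementation.

```python
import torch
import torch.nn.functional as F

def apply_predicted_kernels(image: torch.Tensor, kernels: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Filter each pixel with its own predicted k x k kernel.

    image:   (B, C, H, W) input (e.g. a masked image or a feature map)
    kernels: (B, k*k, H, W) per-pixel kernels, here assumed to come from a
             kernel-prediction network (illustrative only, not DCF itself).
    """
    b, c, h, w = image.shape
    # Gather the k*k neighbourhood of every pixel: (B, C*k*k, H*W)
    patches = F.unfold(image, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    # Weight each neighbour by its predicted kernel value and sum.
    weights = kernels.view(b, 1, k * k, h, w)
    return (patches * weights).sum(dim=2)

# Example with random inputs and softmax-normalised kernels.
img = torch.rand(1, 3, 64, 64)
ker = torch.softmax(torch.rand(1, 9, 64, 64), dim=1)
out = apply_predicted_kernels(img, ker)   # (1, 3, 64, 64)
```
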
|
http://arxiv.org/abs/2305.00379v1
|
We analyzed four epochs of beamformed EVN data of the Crab Pulsar at 1658.49
MHz. With the high sensitivity resulting from resolving out the Crab Nebula, we
are able to detect even the faint high-frequency components in the folded
profile. We also detect a total of 65951 giant pulses, which we use to
investigate the rates, fluence, phase, and arrival time distributions. We find
that for the main pulse component, our giant pulses represent about 80% of the
total flux. This suggests we have a nearly complete giant pulse energy
distribution, although it is not obvious how the observed distribution could be
extended to cover the remaining 20% of the flux without invoking large numbers
of faint bursts for every rotation. Looking at the difference in arrival time
between subsequent bursts in single rotations, we confirm that the likelihood
of finding giant pulses close to each other is increased beyond that expected
for randomly occurring bursts - some giant pulses consist of causally related
microbursts, with typical separations of $\sim\!30{\rm\;\mu s}$ - but also find
evidence that at separations $\gtrsim\!100{\rm\;\mu s}$ the likelihood of
finding another giant pulse is suppressed. In addition, our high sensitivity
enabled us to detect weak echo features in the brightest pulses (at
$\sim\!0.4\%$ of the peak giant pulse flux), which are delayed by up to
$\sim\!300{\rm\;\mu s}$.
|
http://arxiv.org/abs/2307.16362v2
|
Understanding and evaluating uncertainty play a key role in decision-making.
When a viewer studies a visualization that demands inference, it is necessary
that uncertainty is portrayed in it. This paper showcases the importance of
representing uncertainty in visualizations. It provides an overview of
uncertainty visualization and the challenges authors and viewers face when
working with such charts. I divide the visualization pipeline into four parts,
namely data collection, preprocessing, visualization, and inference, to
evaluate how uncertainty impacts them. Next, I investigate the authors'
methodologies to process and design uncertainty. Finally, I contribute by
exploring future paths for uncertainty visualization.
|
http://arxiv.org/abs/2301.07687v1
|
Vocoder models have recently achieved substantial progress in generating
authentic audio comparable to human quality while significantly reducing memory
requirement and inference time. However, these data-hungry generative models
require large-scale audio data for learning good representations. In this
paper, we apply contrastive learning methods in training the vocoder to improve
the perceptual quality of the vocoder without modifying its architecture or
adding more data. We design an auxiliary task with mel-spectrogram contrastive
learning to enhance the utterance-level quality of the vocoder model under
data-limited conditions. We also extend the task to include waveforms to
improve the multi-modality comprehension of the model and address the
discriminator overfitting problem. We optimize the additional task
simultaneously with GAN training objectives. Our results show that the tasks
improve model performance substantially in data-limited settings.
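
As a rough illustration of the kind of auxiliary objective described above, the sketch below computes an InfoNCE-style contrastive loss between embeddings of two views of the same mel-spectrograms. The encoder `enc`, the way views are constructed, and the loss weight `lambda_con` are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N utterances.
    Positive pairs are (z1[i], z2[i]); all other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Illustrative use alongside the GAN objective (enc and lambda_con are hypothetical):
# loss = gan_loss + lambda_con * info_nce(enc(mel_view_a), enc(mel_view_b))
```
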
|
http://arxiv.org/abs/2309.09088v2
|
Ongoing research explores thermal switching materials to control heat flow.
Specifically, there has been interest in magneto-thermal switching (MTS)
materials based on superconductors, which only exhibited switching behavior
when a magnetic field was applied. However, a recent report highlighted
nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux
trapping. In this study, we focused on flux trapping in a type-II
superconductor MgB2. Magnetization and thermal conductivity measurements under
magnetic fields were conducted on polycrystalline MgB2. We confirmed that
magnetic flux was indeed trapped in MgB2 even after demagnetization.
Additionally, we observed nonvolatile MTS in MgB2 as well as Sn-Pb solders.
These results suggest that the nonvolatile MTS may be a widespread
characteristic of superconducting materials with flux trapping.
|
http://arxiv.org/abs/2307.16404v1
|
Scene text image super-resolution (STISR) is an important pre-processing
technique for text recognition from low-resolution scene images. Nowadays,
various methods have been proposed to extract text-specific information from
high-resolution (HR) images to supervise STISR model training. However, due to
uncontrollable factors (e.g. shooting equipment, focus, and environment) in
manually photographing HR images, the quality of HR images cannot be
guaranteed, which unavoidably impacts STISR performance. Observing the quality
issue of HR images, in this paper we propose a novel idea to boost STISR by
first enhancing the quality of HR images and then using the enhanced HR images
as supervision to do STISR. Concretely, we develop a new STISR framework,
called High-Resolution ENhancement (HiREN), which consists of two branches and a
quality estimation module. The first branch is developed to recover the
low-resolution (LR) images, and the other is an HR quality enhancement branch
aiming at generating high-quality (HQ) text images based on the HR images to
provide more accurate supervision to the LR images. As the degradation from HQ
to HR may be diverse, and there is no pixel-level supervision for HQ image
generation, we design a kernel-guided enhancement network to handle various
degradations, and exploit the feedback from a recognizer and text-level
annotations as weak supervision signals to train the HR enhancement branch.
Then, a quality estimation module is employed to evaluate the qualities of HQ
images, which are used to suppress the erroneous supervision information by
weighting the loss of each image. Extensive experiments on TextZoom show that
HiREN can work well with most existing STISR methods and significantly boost
their performance.
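
The last step — suppressing unreliable supervision by weighting each image's loss with its estimated quality — can be sketched as follows. This is an illustrative fragment; the placeholder L1 loss and the quality scores stand in for HiREN's actual quality estimation module.

```python
import torch

def quality_weighted_loss(sr_out: torch.Tensor, hq_target: torch.Tensor,
                          quality: torch.Tensor) -> torch.Tensor:
    """sr_out, hq_target: (B, C, H, W) super-resolved outputs and enhanced HQ targets;
    quality: (B,) scores in [0, 1] estimated per HQ image (higher = more trustworthy).
    Placeholder L1 loss; the actual training loss in HiREN may differ."""
    per_image = (sr_out - hq_target).abs().mean(dim=(1, 2, 3))
    return (quality * per_image).sum() / quality.sum().clamp(min=1e-8)
```
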
|
http://arxiv.org/abs/2307.16410v1
|
Synchrotron and inverse Compton emission successfully explain the observed
spectra of gamma-ray burst (GRB) afterglows. It is thought that most GRBs are
products of extremely relativistic outflows and the afterglow marks the
interaction of that ejecta with the surrounding matter. Faster decay of
afterglow light curves at late times is indicative of non-spherical geometries,
and is usually interpreted as evidence for a jet geometry. Recent numerical
simulations have shown that ring-like geometries are also permissible for
relativistic outflows. We therefore extend the standard theory of afterglow
evolution to ring geometries. An analytic prescription for the light curves and
spectra produced by relativistic toroidal blast waves is presented. We compare
these to their spherical and jet-like counterparts, and show that ring
afterglows decay faster than spherical outflows but not as fast as jets.
|
http://arxiv.org/abs/2304.00044v1
|
Recent studies in active learning, particularly in uncertainty sampling, have
focused on the decomposition of model uncertainty into reducible and
irreducible uncertainties. In this paper, the aim is to simplify the
computational process while eliminating the dependence on observations.
Crucially, the inherent uncertainty in the labels, namely the uncertainty of the
oracles, is considered. Two strategies are proposed: sampling by Klir
uncertainty, which tackles the exploration-exploitation dilemma, and sampling
by evidential epistemic uncertainty, which extends the concept of reducible
uncertainty within the evidential framework, both using the theory of belief
functions. Experimental results in active learning demonstrate that our
proposed method can outperform uncertainty sampling.
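
For context, generic uncertainty sampling ranks unlabeled instances by an uncertainty score and queries the top ones; a minimal entropy-based sketch is below. The paper replaces the entropy score with Klir and evidential epistemic uncertainty computed from belief functions, which is not shown here.

```python
import numpy as np

def uncertainty_sampling(probs: np.ndarray, batch_size: int) -> np.ndarray:
    """probs: (N, C) predicted class probabilities for the unlabeled pool.
    Returns indices of the batch_size most uncertain instances (highest entropy).
    Illustrative baseline only; the paper uses belief-function-based measures."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-batch_size:]
```
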
|
http://arxiv.org/abs/2309.12494v2
|
Recent advancements in Automatic Speech Recognition (ASR) systems,
exemplified by Whisper, have demonstrated the potential of these systems to
approach human-level performance given sufficient data. However, this progress
doesn't readily extend to ASR for children due to the limited availability of
suitable child-specific databases and the distinct characteristics of
children's speech. A recent study investigated leveraging the My Science Tutor
(MyST) children's speech corpus to enhance Whisper's performance in recognizing
children's speech. They were able to demonstrate some improvement on a limited
test set. This paper builds on these findings by enhancing the utility of the
MyST dataset through more efficient data preprocessing. We reduce the Word
Error Rate (WER) on the MyST test set from 13.93% to 9.11% with Whisper-Small and
from 13.23% to 8.61% with Whisper-Medium, and show that this improvement can be
generalized to unseen datasets. We also highlight important challenges towards
improving children's ASR performance. The results showcase the viable and
efficient integration of Whisper for effective children's speech recognition.
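
For reference, the relative improvements implied by the quoted WER figures:

```python
# Relative WER reduction = (old - new) / old
small = (13.93 - 9.11) / 13.93   # ~0.346 -> about 34.6% relative reduction (Whisper-Small)
medium = (13.23 - 8.61) / 13.23  # ~0.349 -> about 34.9% relative reduction (Whisper-Medium)
```
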
|
http://arxiv.org/abs/2309.07927v3
|
In the literature, Benford's Law is considered for base-b expansions where
b>1 is an integer. In this paper, we investigate the distribution of leading
"digits" of a sequence of positive integers under other expansions such as
the Zeckendorf expansion, and state what Benford's Law should be under the
generalized Zeckendorf expansion.
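
As a small illustration of the objects involved, the greedy Zeckendorf expansion writes an integer as a sum of non-consecutive Fibonacci numbers, and the largest term plays the role of the leading "digit". This is a sketch for intuition, not the paper's formal definition.

```python
def zeckendorf(n: int) -> list[int]:
    """Greedy Zeckendorf expansion: n as a sum of non-consecutive Fibonacci numbers."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    terms = []
    for f in reversed(fibs):
        if f <= n:
            terms.append(f)
            n -= f
    return terms

print(zeckendorf(100))  # [89, 8, 3]; the leading "digit" analogue here is 89
```
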
|
http://arxiv.org/abs/2309.00090v1
|
General Relativity predicts that black holes do not possess an internal
structure and consequently cannot be excited. This leads to a specific
prediction about the waveform of gravitational waves, which they emit during a
binary black hole inspiral and to the vanishing of their Love numbers. However,
if astrophysical black holes do possess an internal structure, their Love
numbers would no longer vanish, and they could be excited during an inspiral by
the transfer of orbital energy. This would affect the orbital period and lead
to an observable imprint on the emitted gravitational-wave waveform. The
effect is enhanced if one of the binary companions is resonantly excited. We
discuss the conditions for resonant excitation of a hypothetical internal
structure of black holes and calculate the phase change of the gravitational-wave
waveform that is induced by such resonant excitation during
intermediate- and extreme-mass-ratio inspirals. We then relate the phase change
to the electric quadrupolar Love number of the larger companion, which is
resonantly excited by its smaller companion. We discuss the statistical error
on measuring the Love number by LISA and show that, because of this phase
change, the statistical error is small even for small values of the Love
number. Our results provide a strong indication that the Love number could be
detected by LISA with remarkable accuracy, much higher than what can be
achieved via tidal deformation effects. Our results further indicate that
resonant excitation of the central black hole during an extreme- or
intermediate-mass-ratio inspiral is the most promising effect for putting
bounds on, or detecting, non-vanishing tidal Love numbers of black holes.
|
http://arxiv.org/abs/2306.00173v1
|
We describe the tropical mirror for complex toric surfaces. In particular we
provide an explicit expression for the mirror states and show that they can be
written in enumerative form. Their holomorphic germs give an explicit form of
good section for Landau-Ginzburg-Saito theory. We use an explicit form of
holomorphic germs to derive the divisor relation for tropical Gromov-Witten
invariants. We interpret the deformation of the theory by a point observable as
a blow up of a point on the toric surface. We describe the implication of such
interpretation for the tropical Gromov-Witten invariants.
|
http://arxiv.org/abs/2305.00423v2
|
The magnetotail current sheet's spatial configuration and stability control
the onset of magnetic reconnection - the driving process for magnetospheric
substorms. The near-Earth current sheet has been thoroughly investigated by
numerous missions, whereas the midtail current sheet has not been adequately
explored. This is especially the case for the long-term variation of its
configuration in response to the solar wind. We present a statistical analysis
of 1261 magnetotail current sheet crossings by the Acceleration, Reconnection,
Turbulence and Electrodynamics of Moon's Interaction with the Sun (ARTEMIS)
mission orbiting the moon (X~-60 RE), collected during the entirety of Solar
Cycle 24. We demonstrate that the magnetotail current sheet typically remains
extremely thin, with a characteristic thickness comparable to the thermal ion
gyroradius, even at such large distances from Earth's dipole. We also find that
a substantial fraction (~one quarter) of the observed current sheets have a
partially force-free magnetic field configuration, with a negligible
contribution of the thermal pressure and a significant contribution of the
magnetic field shear component to the pressure balance. Further, we quantify
the impact of the changing solar wind driving conditions on the properties of
the midtail around the lunar orbit. During active solar wind driving
conditions, we observe an increase in the occurrence rate of thin current
sheets, whereas quiet solar wind driving conditions seem to favor the formation
of partially force-free current sheets.
|
http://arxiv.org/abs/2309.16194v1
|
The utilization of programming language (PL) models, pre-trained on
large-scale code corpora, as a means of automating software engineering
processes has demonstrated considerable potential in streamlining various code
generation tasks such as code completion, code translation, and program
synthesis. However, current approaches mainly rely on supervised fine-tuning
objectives borrowed from text generation, neglecting unique sequence-level
characteristics of code, including but not limited to compilability as well as
syntactic and functional correctness. To address this limitation, we propose
PPOCoder, a new framework for code generation that synergistically combines
pre-trained PL models with Proximal Policy Optimization (PPO) which is a widely
used deep reinforcement learning technique. By utilizing non-differentiable
feedback from code execution and structure alignment, PPOCoder seamlessly
integrates external code-specific knowledge into the model optimization
process. It's important to note that PPOCoder is a task-agnostic and
model-agnostic framework that can be used across different code generation
tasks and PLs. Extensive experiments on three code generation tasks demonstrate
the effectiveness of our proposed approach compared to SOTA methods, achieving
significant improvements in compilation success rates and functional
correctness across different PLs.
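
One ingredient of the non-differentiable feedback mentioned above — a compilability signal — could look roughly like the following for Python targets. This is an illustrative reward function only; PPOCoder's actual reward design, including execution- and structure-alignment terms, is not reproduced here.

```python
def compilability_reward(generated_code: str) -> float:
    """Return 1.0 if the generated snippet parses/compiles, else 0.0.
    Illustrative sketch: the signal is non-differentiable, so it enters training
    through the PPO advantage rather than through backpropagation."""
    try:
        compile(generated_code, "<generated>", "exec")
        return 1.0
    except SyntaxError:
        return 0.0

assert compilability_reward("def f(x): return x + 1") == 1.0
assert compilability_reward("def f(x) return x + 1") == 0.0
```
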
|
http://arxiv.org/abs/2301.13816v4
|
In manufacturing processes, surface inspection is a key requirement for
quality assessment and damage localization. Due to this, automated surface
anomaly detection has become a promising area of research in various industrial
inspection systems. A particular challenge in industries with large-scale
components, like aircraft and heavy machinery, is inspecting large parts with
very small defect dimensions. Moreover, these parts can be of curved shapes. To
address this challenge, we present a 2-stage multi-modal inspection pipeline
with visual and tactile sensing. Our approach combines the best of both visual
and tactile sensing by identifying and localizing defects using a global view
(vision) and using the localized area for tactile scanning for identifying
remaining defects. To benchmark our approach, we propose a novel real-world
dataset with multiple metallic defect types per image, collected in the
production environments on real aerospace manufacturing parts, as well as
online robot experiments in two environments. Our approach is able to identify
85% of defects using Stage I and 100% of defects after Stage II. The dataset
is publicly available at https://zenodo.org/record/8327713
|
http://arxiv.org/abs/2309.04590v1
|
The design dataset is the backbone of data-driven design. Ideally, the
dataset should be fairly distributed in both shape and property spaces to
efficiently explore the underlying relationship. However, the classical
experimental design focuses on shape diversity and thus yields biased
exploration in the property space. Recently developed methods either conduct
subset selection from a large dataset or employ assumptions with severe
limitations. In this paper, fairness- and uncertainty-aware data generation
(FairGen) is proposed to actively detect and generate missing properties
starting from a small dataset. At each iteration, its coverage module computes
the data coverage to guide the selection of the target properties. The
uncertainty module ensures that the generative model can make certain and thus
accurate shape predictions. Integrating the two modules, Bayesian optimization
determines the target properties, which are thereafter fed into the generative
model to predict the associated shapes. The new designs, whose properties are
analyzed by simulation, are added to the design dataset. An S-slot design
dataset case study was implemented to demonstrate the efficiency of FairGen in
auxetic structural design. Compared with grid and randomized sampling, FairGen
increased the coverage score at twice the speed and significantly expanded the
sampled region in the property space. As a result, the generative models
trained with FairGen-generated datasets showed consistent and significant
reductions in mean absolute errors.
|
http://arxiv.org/abs/2309.05842v1
|
Model-based diagnosis has been an active research topic in different
communities including artificial intelligence, formal methods, and control.
This has led to a set of disparate approaches addressing different classes of
systems and seeking different forms of diagnoses. In this paper, we resolve
such disparities by generalising Reiter's theory to be agnostic to the types of
systems and diagnoses considered. This more general theory of diagnosis from
first principles defines the minimal diagnosis as the set of preferred
diagnosis candidates in a search space of hypotheses. Computing the minimal
diagnosis is achieved by exploring the space of diagnosis hypotheses, testing
sets of hypotheses for consistency with the system's model and the observation,
and generating conflicts that rule out successors and other portions of the
search space. Under relatively mild assumptions, our algorithms correctly
compute the set of preferred diagnosis candidates. The main difficulty here is
that the search space is no longer a powerset as in Reiter's theory, and that,
as a consequence, many of the implicit properties (such as finiteness of the
search space) no longer hold. The notion of conflict also needs to be
generalised, and we present such a generalisation. We present two
implementations of these algorithms, using test solvers based on satisfiability
and heuristic search, respectively, which we evaluate on instances from two
real-world discrete event problems. Despite the greater generality of our
theory, these implementations surpass the special purpose algorithms designed
for discrete event systems, and enable solving instances that were out of reach
of existing diagnosis approaches.
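
In the classical special case where the hypothesis space is the powerset of components ordered by set inclusion, the exploration described above reduces to enumerating candidates in preference order and keeping the subset-minimal consistent ones. The sketch below illustrates only that baseline with a user-supplied consistency test; it is not the paper's conflict-driven algorithm.

```python
from itertools import combinations

def minimal_diagnoses(components, consistent):
    """components: iterable of component names; consistent(candidate_set) -> bool
    tests whether assuming exactly these components are faulty is consistent with
    the system model and the observation. Returns all subset-minimal diagnoses.
    Baseline sketch only (no conflicts, no pruning beyond minimality)."""
    components = list(components)
    found = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            cand = set(cand)
            if any(d <= cand for d in found):      # a smaller diagnosis already covers it
                continue
            if consistent(cand):
                found.append(cand)
    return found
```
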
|
http://arxiv.org/abs/2309.16180v1
|
A dry frictional interface loaded in shear often displays stick-slip. The
amplitude of this cycle depends on the probability that a slip event nucleates
into a rupture, and on the rate at which slip events are triggered. This rate
is determined by the distribution $P(x)$ of soft spots which yield if the
shear stress is increased by some amount $x$. In minimal models of a frictional
interface that include disorder, inertia and long-range elasticity, we
discovered an 'armouring' mechanism, by which the interface is greatly
stabilised after a large slip event: $P(x)$ then vanishes at small arguments,
as $P(x)\sim x^\theta$ [1]. The exponent $\theta>0$, which exists only in the
presence of inertia (otherwise $\theta=0$), was found to depend on the
statistics of the disorder in the model, a phenomenon that was not explained.
Here, we show that a single-particle toy model with inertia and disorder
captures the existence of a non-trivial exponent $\theta>0$, which we can
analytically relate to the statistics of the disorder.
|
http://arxiv.org/abs/2301.13802v1
|
Enabling-preserving bisimilarity (ep-bisimilarity) is a refinement of strong bisimilarity that
preserves safety as well as liveness properties. To define it properly,
labelled transition systems needed to be upgraded with a successor relation,
capturing concurrency between transitions enabled in the same state. We enrich
the well-known De Simone format to handle inductive definitions of this
successor relation. We then establish that ep-bisimilarity is a congruence for
the operators, as well as lean congruence for recursion, for all (enriched) De
Simone languages.
|
http://arxiv.org/abs/2309.07933v1
|
Most modern ticketing systems rely on a first-come-first-serve or randomized
allocation system to determine the allocation of tickets. Such systems have
received considerable backlash in recent years due to their inequitable allotment
and allocative inefficiency. We analyze a ticketing protocol based on a
variation of the marginal price auction system. Users submit bids to the
protocol based on their own utilities. The protocol awards tickets to the
highest bidders and determines the final ticket price paid by all bidders using
the lowest winning submitted bid. A game-theoretic proof is provided to show that
the protocol more efficiently allocates the tickets to the bidders with the
highest utilities. We also prove that the protocol extracts more economic rents
for the event organizers and the non-optimality of ticket scalping under
time-invariant bidder utilities.
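
The allocation rule analyzed here is easy to state concretely: award tickets to the highest bidders and charge everyone the lowest winning bid. A minimal sketch, ignoring ties and reserve prices:

```python
def marginal_price_auction(bids: list[float], num_tickets: int):
    """Uniform-price ticket auction: top bidders win, all pay the lowest winning bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = order[:num_tickets]
    clearing_price = min(bids[i] for i in winners)
    return winners, clearing_price

winners, price = marginal_price_auction([120, 45, 80, 200, 60], num_tickets=3)
# winners are the bidders who bid 200, 120 and 80; everyone pays 80
```
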
|
http://arxiv.org/abs/2309.11189v1
|
We show that for $1\leq p, q<\infty$ with $p/q \notin \mathbb{N}$, the doubly
atomless separable $L_pL_q$ Banach lattice $L_p(L_q)$ is approximately
ultrahomogeneous (AUH) over the class of its finitely generated sublattices.
The above is not true when $p/q \in \mathbb{N}$. However, for any $p\neq q$,
$L_p(L_q)$ is AUH over the finitely generated lattices in the class $BL_pL_q$
of bands of $L_pL_q$ lattices.
|
http://arxiv.org/abs/2309.10297v1
|
We place observational constraints on a dark energy (DE) model in which a
quintessence scalar field $\phi$ is coupled to dark matter (DM) through
momentum and energy exchanges. The momentum transfer is weighed by an
interaction between the field derivative and DM four velocity with a coupling
constant $\beta$, whereas the energy exchange is characterized by an
exponential scalar-field coupling to the DM density with a coupling constant
$Q$. A positive coupling $\beta$ leads to the suppression of the growth of DM
density perturbations at low redshifts, a property that offers a possibility for
resolving the $\sigma_8$ tension problem. A negative coupling $Q$ gives rise to
a $\phi$-matter-dominated epoch, whose presence can reduce the sound horizon
around the Cosmic Microwave Background (CMB) decoupling epoch. Using the data
of Planck 2018, the 12th Sloan Digital Sky Survey, the Pantheon supernovae samples,
and the 1-year Dark Energy Survey, we find that the two couplings are constrained
to be $\beta=0.332^{+1.246}_{-0.237}$ and $Q =-0.0312^{+0.0312}_{-0.0085}$ at
68\,\% confidence level (CL). Thus, there is an interesting observational
signature of the momentum exchange ($\beta \neq 0$) between DE and DM, with a
peak of the probability distribution of the energy transfer coupling at $Q<0$.
|
http://arxiv.org/abs/2309.13946v2
|
Discovery of mathematical descriptors of physical phenomena from
observational and simulated data, as opposed to from the first principles, is a
rapidly evolving research area. Two factors, time-dependence of the inputs and
hidden translation invariance, are known to complicate this task. To ameliorate
these challenges, we combine Lagrangian dynamic mode decomposition with a
locally time-invariant approximation of the Koopman operator. The former
component of our method yields the best linear estimator of the system's
dynamics, while the latter deals with the system's nonlinearity and
non-autonomous behavior. We provide theoretical estimators (bounds) of
prediction accuracy and perturbation error to guide the selection of both rank
truncation and temporal discretization. We demonstrate the performance of our
approach on several non-autonomous problems, including two-dimensional
Navier-Stokes equations.
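
For orientation, the plain (non-Lagrangian) dynamic mode decomposition underlying the method fits the best low-rank linear propagator between snapshot matrices; a minimal numpy sketch with rank truncation `r` is below. The Lagrangian and locally time-invariant Koopman extensions of the paper are not shown.

```python
import numpy as np

def dmd(X: np.ndarray, Y: np.ndarray, r: int):
    """Exact DMD: given snapshot matrices X = [x_0 .. x_{m-1}] and Y = [x_1 .. x_m]
    (snapshots as columns), return eigenvalues and modes of the best rank-r linear
    map Y ~ A X. Plain DMD only; not the paper's Lagrangian/Koopman variant."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)  # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W             # exact DMD modes
    return eigvals, modes
```
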
|
http://arxiv.org/abs/2309.05117v2
|
Software-Defined Networking (SDN) significantly simplifies programming,
reconfiguring, and optimizing network devices, such as switches and routers.
The de facto standard for programming SDN devices is the P4 language. However,
the flexibility and power of P4, and SDN more generally, gives rise to
important risks. As a number of incidents at major cloud providers have shown,
errors in SDN programs can compromise the availability of networks, leaving
them in a non-functional state. The focus of this paper is on errors in
control-plane programs that interact with P4-enabled network devices via the
standardized P4Runtime API. For clients of the P4Runtime API it is easy to make
mistakes that lead to catastrophic failures, despite the use of Google's
Protocol Buffers as an interface definition language.
This paper proposes P4R-Type, a novel verified P4Runtime API for Scala that
performs static checks for P4 control plane operations, ruling out mismatches
between P4 tables, allowed actions, and action parameters. As a formal
foundation of P4R-Type, we present the $F_{\text{P4R}}$ calculus and its typing
system, which ensure that well-typed programs never get stuck by issuing
invalid P4Runtime operations. We evaluate the safety and flexibility of
P4R-Type with 3 case studies. To the best of our knowledge, this is the first
work that formalises P4Runtime control plane applications, and a typing
discipline ensuring the correctness of P4Runtime operations.
|
http://arxiv.org/abs/2309.03566v1
|
Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by
Aistleitner and Dick (2015) and let points
$\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ have a non-negative
local discrepancy (NNLD) everywhere in $[0,1]^d$. We show how to use these
properties to get a non-asymptotic and computable upper bound for the integral
of $f$ over $[0,1]^d$. An analogous non-positive local discrepancy (NPLD)
property provides a computable lower bound. It has been known since Gabai
(1967) that the two dimensional Hammersley points in any base $b\ge2$ have
non-negative local discrepancy. Using the probabilistic notion of associated
random variables, we generalize Gabai's finding to digital nets in any base
$b\ge2$ and any dimension $d\ge1$ when the generator matrices are permutation
matrices. We show that permutation matrices cannot attain the best values of
the digital net quality parameter when $d\ge3$. As a consequence the computable
absolutely sure bounds we provide come with less accurate estimates than the
usual digital net estimates do in high dimensions. We are also able to
construct high dimensional rank one lattice rules that are NNLD. We show that
those lattices do not have good discrepancy properties: any lattice rule with
the NNLD property in dimension $d\ge2$ either fails to be projection regular or
has all its points on the main diagonal. Complete monotonicity is a very strict
requirement that for some integrands can be mitigated via a control variate.
|
http://arxiv.org/abs/2309.04209v2
|
As Diffusion Models have shown promising performance, a lot of efforts have
been made to improve the controllability of Diffusion Models. However, how to
train Diffusion Models to have the disentangled latent spaces and how to
naturally incorporate the disentangled conditions during the sampling process
have been underexplored. In this paper, we present a training framework for
feature disentanglement of Diffusion Models (FDiff). We further propose two
sampling methods that can boost the realism of our Diffusion Models and also
enhance the controllability. Concisely, we train Diffusion Models conditioned
on two latent features, a spatial content mask, and a flattened style
embedding. We rely on the inductive bias of the denoising process of Diffusion
Models to encode pose/layout information in the content feature and
semantic/style information in the style feature. Regarding the sampling
methods, we first generalize Composable Diffusion Models (GCDM) by breaking the
conditional independence assumption to allow for some dependence between
conditional inputs, which is shown to be effective in realistic generation in
our experiments. Second, we propose timestep-dependent weight scheduling for
content and style features to further improve the performance. We also observe
better controllability of our proposed methods compared to existing methods in
image manipulation and image translation.
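
As background, standard Composable Diffusion Models combine per-condition noise predictions under a conditional-independence assumption, roughly as below (elementwise on arrays or tensors). GCDM's relaxation of that assumption and the timestep-dependent weight scheduling are not shown here.

```python
import numpy as np

def composed_noise(eps_uncond, eps_content, eps_style, w_content=1.0, w_style=1.0):
    """Classifier-free-guidance-style composition of two conditions, assuming the
    conditions act independently given the image (the assumption GCDM relaxes)."""
    return (eps_uncond
            + w_content * (eps_content - eps_uncond)
            + w_style * (eps_style - eps_uncond))

# Toy example with random stand-ins for the three noise predictions.
e0, e1, e2 = (np.random.randn(3, 64, 64) for _ in range(3))
eps_hat = composed_noise(e0, e1, e2, w_content=2.0, w_style=1.5)
```
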
|
http://arxiv.org/abs/2302.14368v3
|
Good teachers always tailor their explanations to the learners. Cognitive
scientists model this process under the rationality principle: teachers try to
maximise the learner's utility while minimising teaching costs. To this end,
human teachers seem to build mental models of the learner's internal state, a
capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build
on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor
their teaching strategies to the learners. Our ToM-equipped teachers construct
models of learners' internal states from observations and leverage them to
select demonstrations that maximise the learners' rewards while minimising
teaching costs. Our experiments in simulated environments demonstrate that
learners taught this way are more efficient than those taught in a
learner-agnostic way. This effect gets stronger when the teacher's model of the
learner better aligns with the actual learner's state, either using a more
accurate prior or after accumulating observations of the learner's behaviour.
This work is a first step towards social machines that teach us and each other,
see https://teacher-with-tom.github.io.
|
http://arxiv.org/abs/2309.17275v1
|
Extracting precise geographical information from textual contents is crucial
in a plethora of applications. For example, during hazardous events, a robust
and unbiased toponym extraction framework can provide an avenue to tie the
location concerned to the topic discussed by news media posts and pinpoint
humanitarian help requests or damage reports from social media. Early studies
have leveraged rule-based, gazetteer-based, deep learning, and hybrid
approaches to address this problem. However, the performance of existing tools
is deficient in supporting operations like emergency rescue, which relies on
fine-grained, accurate geographic information. The emerging pretrained language
models can better capture the underlying characteristics of text information,
including place names, offering a promising pathway to optimize toponym
recognition to underpin practical applications. In this paper, TopoBERT, a
toponym recognition module based on a one-dimensional Convolutional Neural
Network (CNN1D) and Bidirectional Encoder Representations from Transformers
(BERT), is proposed and fine-tuned. Three datasets (CoNLL2003-Train,
Wikipedia3000, WNUT2017) are leveraged to tune the hyperparameters, discover
the best training strategy, and train the model. Another two datasets
(CoNLL2003-Test and Harvey2017) are used to evaluate the performance. Three
distinct classifiers, linear, multi-layer perceptron, and CNN1D, are
benchmarked to determine the optimal model architecture. TopoBERT achieves
state-of-the-art performance (f1-score=0.865) compared to the other five
baseline models and can be applied to diverse toponym recognition tasks without
additional training.
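
A rough sketch of the kind of architecture described — contextual token embeddings from BERT fed into a one-dimensional convolutional classification head — is given below. The model name, kernel size, channel width, and label set are placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertCNN1DTagger(nn.Module):
    def __init__(self, model_name: str = "bert-base-cased", num_labels: int = 3):
        super().__init__()
        # model_name, conv width and num_labels are illustrative placeholders.
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.conv = nn.Conv1d(hidden, 256, kernel_size=3, padding=1)
        self.classifier = nn.Linear(256, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h = torch.relu(self.conv(h.transpose(1, 2))).transpose(1, 2)  # (B, L, 256)
        return self.classifier(h)  # per-token logits, e.g. B-LOC / I-LOC / O
```
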
|
http://arxiv.org/abs/2301.13631v2
|
Human motion prediction is important for mobile service robots and
intelligent vehicles to operate safely and smoothly around people. The more
accurate predictions are, particularly over extended periods of time, the
better a system can, e.g., assess collision risks and plan ahead. In this
paper, we propose to exploit maps of dynamics (MoDs, a class of general
representations of place-dependent spatial motion patterns, learned from prior
observations) for long-term human motion prediction (LHMP). We present a new
MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data
efficient, explainable, and insensitive to errors from an upstream tracking
system. Our approach uses CLiFF-map, a specific MoD trained with human motion
data recorded in the same environment. We bias a constant velocity prediction
with samples from the CLiFF-map to generate multi-modal trajectory predictions.
On two public datasets we show that this algorithm outperforms the state of the
art for predictions over very extended periods of time, achieving 45% more
accurate prediction performance at 50s compared to the baseline.
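
The prediction step can be caricatured as rolling out a constant-velocity estimate while repeatedly nudging it toward velocities sampled from the learned map of dynamics at the current position. The sketch below uses a hypothetical `sample_velocity` lookup and a blending factor `beta`; it is not the CLiFF-LHMP implementation.

```python
import numpy as np

def biased_cv_rollout(pos, vel, sample_velocity, horizon_s=50.0, dt=0.5, beta=0.3):
    """pos, vel: (2,) arrays; sample_velocity(pos) -> (2,) velocity drawn from the
    map of dynamics at that location (hypothetical lookup); beta controls how
    strongly map samples bias the constant-velocity estimate."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    trajectory = [pos.copy()]
    for _ in range(int(horizon_s / dt)):
        vel = (1.0 - beta) * vel + beta * sample_velocity(pos)
        pos = pos + vel * dt
        trajectory.append(pos.copy())
    return np.array(trajectory)
```
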
|
http://arxiv.org/abs/2309.07066v1
|
This paper focuses on the identification of different algorithm-based biases
in robotic behaviour and their consequences in human-robot mixed groups. We
propose to develop computational models to detect episodes of microaggression,
discrimination, and social exclusion informed by a) observing human coping
behaviours that are used to regain social inclusion and b) using system
inherent information that reveal unequal treatment of human interactants. Based
on this information we can start to develop regulatory mechanisms to promote
fairness and social inclusion in HRI.
|
http://arxiv.org/abs/2310.01574v1
|
Legged locomotion is a complex control problem that requires both accuracy
and robustness to cope with real-world challenges. Legged systems have
traditionally been controlled using trajectory optimization with inverse
dynamics. Such hierarchical model-based methods are appealing due to intuitive
cost function tuning, accurate planning, generalization, and most importantly,
the insightful understanding gained from more than one decade of extensive
research. However, model mismatch and violation of assumptions are common
sources of faulty operation. Simulation-based reinforcement learning, on the
other hand, results in locomotion policies with unprecedented robustness and
recovery skills. Yet, all learning algorithms struggle with sparse rewards
emerging from environments where valid footholds are rare, such as gaps or
stepping stones. In this work, we propose a hybrid control architecture that
combines the advantages of both worlds to simultaneously achieve greater
robustness, foot-placement accuracy, and terrain generalization. Our approach
utilizes a model-based planner to roll out a reference motion during training.
A deep neural network policy is trained in simulation, aiming to track the
optimized footholds. We evaluate the accuracy of our locomotion pipeline on
sparse terrains, where pure data-driven methods are prone to fail. Furthermore,
we demonstrate superior robustness in the presence of slippery or deformable
ground when compared to model-based counterparts. Finally, we show that our
proposed tracking controller generalizes across different trajectory
optimization methods not seen during training. In conclusion, our work unites
the predictive capabilities and optimality guarantees of online planning with
the inherent robustness attributed to offline learning.
|
http://arxiv.org/abs/2309.15462v2
|
The increasing availability of large clinical datasets collected from
patients can enable new avenues for computational characterization of complex
diseases using different analytic algorithms. One of the promising new methods
for extracting knowledge from large clinical datasets involves temporal pattern
mining integrated with machine learning workflows. However, mining these
temporal patterns is a computational intensive task and has memory
repercussions. Current algorithms, such as the temporal sequence pattern mining
(tSPM) algorithm, are already providing promising outcomes, but still leave
room for optimization. In this paper, we present the tSPM+ algorithm, a
high-performance implementation of the tSPM algorithm, which adds a new
dimension by adding the duration to the temporal patterns. We show that the
tSPM+ algorithm provides a speed-up of up to a factor of 980 and an up to 48-fold
improvement in memory consumption. Moreover, we present a Docker container with
an R package and provide vignettes for easy integration into existing
machine learning workflows, and we use the mined temporal sequences to
identify Post COVID-19 patients and their symptoms according to the WHO
definition.
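
The basic objects being mined — ordered pairs of clinical concepts together with the time elapsed between them — can be illustrated as follows. This is a toy sketch; the tSPM+ package's actual data structures and R interface differ.

```python
from itertools import combinations

def temporal_pairs(events):
    """events: list of (day, concept) tuples for one patient, e.g. from an EHR.
    Returns (concept_a, concept_b, duration_in_days) for every ordered pair.
    Toy sketch of the temporal sequences with durations described above."""
    events = sorted(events)                      # chronological order
    pairs = []
    for (d1, c1), (d2, c2) in combinations(events, 2):
        pairs.append((c1, c2, d2 - d1))
    return pairs

print(temporal_pairs([(0, "fever"), (3, "anosmia"), (40, "fatigue")]))
# [('fever', 'anosmia', 3), ('fever', 'fatigue', 40), ('anosmia', 'fatigue', 37)]
```
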
|
http://arxiv.org/abs/2309.05671v1
|
Conversational aspect-based sentiment quadruple analysis (DiaASQ) aims to
extract the quadruple of target-aspect-opinion-sentiment within a dialogue. In
DiaASQ, a quadruple's elements often cross multiple utterances. This situation
complicates the extraction process, emphasizing the need for an adequate
understanding of conversational context and interactions. However, existing
work independently encodes each utterance, thereby struggling to capture
long-range conversational context and overlooking the deep inter-utterance
dependencies. In this work, we propose a novel Dynamic Multi-scale Context
Aggregation network (DMCA) to address the challenges. Specifically, we first
utilize dialogue structure to generate multi-scale utterance windows for
capturing rich contextual information. After that, we design a Dynamic
Hierarchical Aggregation module (DHA) to integrate progressive cues between
them. In addition, we form a multi-stage loss strategy to improve model
performance and generalization ability. Extensive experimental results show
that the DMCA model outperforms baselines significantly and achieves
state-of-the-art performance.
|
http://arxiv.org/abs/2309.15476v1
|
Human-Computer Interaction (HCI) has been the subject of research for many
years, and recent studies have focused on improving its performance through
various techniques. In the past decade, deep learning studies have shown high
performance in various research areas, leading researchers to explore their
application to HCI. Convolutional neural networks can be used to recognize hand
gestures from images using deep architectures. In this study, we evaluated
pre-trained high-performance deep architectures on the HG14 dataset, which
consists of 14 different hand gesture classes. Among 22 different models,
versions of the VGGNet and MobileNet models attained the highest accuracy
rates. Specifically, the VGG16 and VGG19 models achieved accuracy rates of
94.64% and 94.36%, respectively, while the MobileNet and MobileNetV2 models
achieved accuracy rates of 96.79% and 94.43%, respectively. We performed hand
gesture recognition on the dataset using an ensemble learning technique, which
combined the four most successful models. By utilizing these models as base
learners and applying the Dirichlet ensemble technique, we achieved an accuracy
rate of 98.88%. These results demonstrate the effectiveness of the deep
ensemble learning technique for HCI and its potential applications in areas
such as augmented reality, virtual reality, and game technologies.
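
The ensembling step can be sketched as a search over Dirichlet-sampled convex weights applied to the base learners' softmax outputs, keeping the weighting that performs best on validation data. This is an illustrative reading of "Dirichlet ensemble", not necessarily the exact procedure used in the study.

```python
import numpy as np

def dirichlet_ensemble(probs, y_val, n_trials=2000, seed=0):
    """probs: (M, N, C) softmax outputs of M base models on N validation images;
    y_val: (N,) labels. Returns the best convex weights over the M models.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(probs.shape[0]))            # convex model weights
        acc = np.mean(np.einsum("m,mnc->nc", w, probs).argmax(1) == y_val)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```
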
|
http://arxiv.org/abs/2309.11610v1
|
In this work, we assess the theoretical limitations of determining guaranteed
stability and accuracy of neural networks in classification tasks. We consider
the classical distribution-agnostic framework and algorithms minimising empirical
risks, potentially subject to some weight regularisation. We show that
there is a large family of tasks for which computing and verifying ideal stable
and accurate neural networks in the above settings is extremely challenging, if
at all possible, even when such ideal solutions exist within the given class of
neural architectures.
|
http://arxiv.org/abs/2309.07072v1
|
Black holes violate the third law of thermodynamics, and this gives rise to
difficulties with the microscopic description of the entropy of black holes.
Recently, it has been shown that the microscopic description of the
Schwarzschild black hole thermodynamics in $D = 4$ spacetime dimensions is
provided by the analytical continuation of the entropy of Bose gas with
non-relativistic one particle energy to d =-4 negative spatial dimension. In
this paper, we show that the D=5 and D=6 Schwarzschild black holes
thermodynamics can be modeled by the d-dimensional Bose gas, d=1,2,3..., with
the one particle energy $\varepsilon(k)=k^\alpha$ under conditions
$\alpha=-d/3$ and $\alpha=-d/4$, respectively. In these cases the free energy
of the Bose gas has divergences and we introduce a cut-off and perform the
minimal renormalizations. We also perform renormalizations using analytical
regularization and prove that the minimal cut-off renormalization gives the
same answer as the analytical regularization by the Riemann zeta-function.
|
http://arxiv.org/abs/2305.19827v1
|
The paper establishes an equivalence between localizations of (diagrams of)
cubical sets and (diagrams of) directed topological spaces by those maps
defining (natural) cubical homotopy equivalences after application of the
directed singular functor and a directed analogue of fibrant replacement. This
equivalence both lifts and extends an equivalence between classical homotopy
categories of cubical sets and topological spaces. Some simple applications
include combinatorial descriptions and subsequent calculations of directed
homotopy monoids and directed singular 1-cohomology monoids. Another
application is a characterization of isomorphisms between small categories up
to zig-zags of natural transformations as directed homotopy equivalences
between directed classifying spaces. Cubical sets throughout the paper are
taken to mean presheaves over the minimal symmetric monoidal variant of the
cube category. Along the way, the paper characterizes morphisms in this variant
as the interval-preserving lattice homomorphisms between finite Boolean lattices
and describes some of the test model structure on presheaves over this variant.
|
http://arxiv.org/abs/2309.16619v1
|
We perform calculations of the energy shift of the nuclear clock transition
frequency of $^{229}$Th as a function of the number of electrons in the Th ion.
We demonstrate that the dependence of the nuclear frequency on the electron
configuration is significant. E.g., removing one electron from the atom leads
to a relative shift of the nuclear frequency of $\sim 10^{-7}$, which is twelve
orders of magnitude larger than the expected relative uncertainty of the nuclear
clock transition frequency ($\sim 10^{-19}$). This leads to a difference of the
nuclear clock frequencies in Th~IV, Th~III, Th~II and Th~I.
The relative change of the nuclear frequency between neutral Th and its bare
nucleus is 1\%. We also calculate the field shift constants for isotopic and
isomeric shifts of atomic electron transitions in Th ions.
|
http://arxiv.org/abs/2309.11176v1
|
The development of large high-quality datasets and high-performing models
have led to significant advancements in the domain of Extractive Question
Answering (EQA). This progress has sparked considerable interest in exploring
unanswerable questions within the EQA domain. Training EQA models with
unanswerable questions helps them avoid extracting misleading or incorrect
answers for queries that lack valid responses. However, manually annotating
unanswerable questions is labor-intensive. To address this, we propose AGent, a
novel pipeline that automatically creates new unanswerable questions by
re-matching a question with a context that lacks the necessary information for
a correct answer. In this paper, we demonstrate the usefulness of this AGent
pipeline by creating two sets of unanswerable questions from answerable
questions in SQuAD and HotpotQA. These created question sets exhibit low error
rates. Additionally, models fine-tuned on these questions show comparable
performance with those fine-tuned on the SQuAD 2.0 dataset on multiple EQA
benchmarks.
|
http://arxiv.org/abs/2309.05103v1
|
Accretion disks around compact objects are expected to enter an unstable
phase at high luminosity. One instability may occur when the radiation pressure
generated by accretion modifies the disk viscosity, resulting in the cyclic
depletion and refilling of the inner disk on short timescales. Such a scenario,
however, has only been quantitatively verified for a single stellar-mass black
hole. Although there are hints of these cycles in a few isolated cases, their
apparent absence in the variable emission of most bright accreting neutron
stars and black holes has been a lingering puzzle. Here we report the presence
of the same multiwavelength instability around an accreting neutron star.
Moreover, we show that the variability across the electromagnetic spectrum-from
radio to X-ray-of both black holes and neutron stars at high accretion rates
can be explained consistently if the accretion disks are unstable, producing
relativistic ejections during transitions that deplete or refill the inner
disk. This new association allows us to identify the main physical components
responsible for the fast multiwavelength variability of highly accreting
compact objects.
|
http://arxiv.org/abs/2303.00020v1
|
We present a highly efficient workflow for designing semiconductor structures
with specific physical properties, which can be utilized for a range of
applications, including photocatalytic water splitting. Our algorithm generates
candidate structures composed of earth-abundant elements that exhibit optimal
light-trapping, high efficiency in \ce{H2} and/or \ce{O2} production, and
resistance to reduction and oxidation in aqueous media. To achieve this, we use
an ionic translation model trained on the Inorganic Crystal Structure Database
(ICSD) to predict over thirty thousand undiscovered semiconductor compositions.
These predictions are then screened for redox stability under Hydrogen
Evolution Reaction (HER) or Oxygen Evolution Reaction (OER) conditions before
generating thermodynamically stable crystal structures and calculating accurate
band gap values for the compounds. Our approach results in the identification
of dozens of promising semiconductor candidates with ideal properties for
artificial photosynthesis, offering a significant advancement toward the
conversion of sunlight into chemical fuels.
|
http://arxiv.org/abs/2310.00118v1
|
Let $r$ be a positive integer, $N$ be a nonnegative integer and $\Omega
\subset \mathbb{R}^{r}$ be a domain. Further, for all multi-indices $\alpha \in
\mathbb{N}^{r}$, $|\alpha|\leq N$, let us consider the partial differential
operator $D^{\alpha}$ defined by \[
D^{\alpha}= \frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots
\partial x_{r}^{\alpha_{r}}}, \] where $\alpha= (\alpha_{1}, \ldots,
\alpha_{r})$. Here by definition we mean $D^{0}\equiv \mathrm{id}$. An easy
computation shows that if $f, g\in \mathscr{C}^{N}(\Omega)$ and $\alpha \in
\mathbb{N}^{r}, |\alpha|\leq N$, then we have \[ \tag{$\ast$} D^{\alpha}(f\cdot
g) = \sum_{\beta\leq \alpha}\binom{\alpha}{\beta}D^{\beta}(f)\cdot D^{\alpha -
\beta}(g). \] This paper is devoted to the study of identity $(\ast)$ in the
space $\mathscr{C}(\Omega)$. More precisely, if $r$ is a positive integer, $N$
is a nonnegative integer and $\Omega \subset \mathbb{R}^{r}$ is a domain, then
we describe those mappings $T_{\alpha} \colon \mathscr{C}(\Omega)\to
\mathscr{C}(\Omega)$, $\alpha \in \mathbb{N}^{r}, |\alpha|\leq N$ that satisfy
identity $(\ast)$ for all possible multi-indices $\alpha\in \mathbb{N}^{r}$,
$|\alpha|\leq N$. Our main result says that if the domain is
$\mathscr{C}(\Omega)$, then the mappings $T_{\alpha}$ are of a rather special
form. Related results in the space $\mathscr{C}^{N}(\Omega)$ are also
presented.
|
http://arxiv.org/abs/2309.03572v1
|
Recent years have witnessed the adoption of differential privacy (DP) in
practical database systems like PINQ, FLEX, and PrivateSQL. Such systems allow
data analysts to query sensitive data while providing a rigorous and provable
privacy guarantee. However, the existing design of these systems does not
distinguish data analysts of different privilege levels or trust levels. This
design can lead to an unfair apportionment of the privacy budget among the data
analysts if they are treated as a single entity, or waste the privacy budget if
they are considered non-colluding parties and their queries are answered
independently. In this paper, we propose DProvDB, a fine-grained privacy
provenance framework for the multi-analyst scenario that tracks the privacy
loss to each single data analyst. Under this framework, when given a fixed
privacy budget, we build algorithms that maximize the number of queries that
could be answered accurately and apportion the privacy budget according to the
privilege levels of the data analysts.
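
The accounting idea — tracking privacy loss per analyst against a privilege-dependent cap instead of a single global budget — can be caricatured with a toy ledger as below. DProvDB's actual provenance tracking and budget-apportioning algorithms are considerably more involved.

```python
class PrivacyLedger:
    """Toy per-analyst privacy accounting with privilege-dependent caps (epsilon).
    Illustrative only; not DProvDB's API or mechanism."""
    def __init__(self, caps: dict[str, float]):
        self.caps = caps                   # analyst -> maximum allowed epsilon
        self.spent = {a: 0.0 for a in caps}

    def charge(self, analyst: str, epsilon: float) -> bool:
        """Record a query's privacy cost; refuse it if the analyst's cap is exceeded."""
        if self.spent[analyst] + epsilon > self.caps[analyst]:
            return False
        self.spent[analyst] += epsilon
        return True

ledger = PrivacyLedger({"junior": 1.0, "senior": 4.0})
assert ledger.charge("junior", 0.5) and not ledger.charge("junior", 0.8)
```
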
|
http://arxiv.org/abs/2309.10240v1
|
We present the first comprehensive study of a giant, $\approx \! \! 70$
kpc-scale nebula around a radio-quiet quasar at $z<1$. The analysis is based on
deep integral field spectroscopy with MUSE of the field of HE$\,$0238$-$1904, a
luminous quasar at $z=0.6282$. The nebula emits strongly in $\mathrm{[O \,
II]}$, $\rm H \beta$, and $\mathrm{[O \, III]}$, and the quasar resides in an
unusually overdense environment for a radio-quiet system. The environment
likely consists of two groups which may be merging, and in total have an
estimated dynamical mass of $M_{\rm dyn}\approx 4\times 10^{13}$ to $10^{14}\
{\rm M_\odot}$. The nebula exhibits largely quiescent kinematics and irregular
morphology. The nebula may arise primarily through interaction-related
stripping of circumgalactic and interstellar medium (CGM/ISM) of group members,
with some potential contributions from quasar outflows. The simultaneous
presence of the giant nebula and a radio-quiet quasar in a rich environment
suggests a correlation between such circum-quasar nebulae and environmental
effects. This possibility can be tested with larger samples. The upper limits
on the electron number density implied by the $\mathrm{[O \, II]}$ doublet
ratio range from $\log(n_{\rm e, \, [O \, II]} / \mathrm{cm^{-3}}) < 1.2$ to
$2.8$. However, assuming a constant quasar luminosity and negligible projection
effects, the densities implied from the measured line ratios between different
ions (e.g., $\mathrm{[O\,II]}$, $\mathrm{[O\,III]}$, and $\mathrm{[Ne\,V]}$)
and photoionization simulations are often $10{-}400$ times larger. This large
discrepancy can be explained by quasar variability on a timescale of $\approx
10^4{-}10^5$ years.
|
http://arxiv.org/abs/2309.00053v3
|
The cultural heritage buildings (CHB), which are part of mankind's history
and identity, are in constant danger of damage or in extreme situations total
destruction. That being said, it's of utmost importance to preserve them by
identifying the existent, or presumptive, defects using novel methods so that
renovation processes can be done in a timely manner and with higher accuracy.
The main goal of this research is to use new deep learning (DL) methods in the
process of preserving CHBs (situated in Iran); a goal that has been neglected
especially in developing countries such as Iran, as these countries still
preserve their CHBs using manual, and even archaic, methods that need direct
human supervision. Having proven their effectiveness and performance when it
comes to processing images, convolutional neural networks (CNNs) are a
staple in the computer vision (CV) literature, and this paper is no exception. When
lacking enough CHB images, training a CNN from scratch would be very difficult
and prone to overfitting; that's why we opted to use a technique called
transfer learning (TL), in which we used pre-trained ResNet, MobileNet, and
Inception networks for classification. Moreover, Grad-CAM was utilized to
localize the defects to some extent. The final results were very favorable
compared with those of similar research. The final proposed model can pave the way
for moving from manual to unmanned CHB conservation, hence an increase in
accuracy and a decrease in human-induced errors.
|
http://arxiv.org/abs/2302.14354v1
|
Rare life events significantly impact mental health, and their detection in
behavioral studies is a crucial step towards health-based interventions. We
envision that mobile sensing data can be used to detect these anomalies.
However, the human-centered nature of the problem, combined with the
infrequency and uniqueness of these events makes it challenging for
unsupervised machine learning methods. In this paper, we first investigate
Granger causality between life events and human behavior using sensing data.
Next, we propose a multi-task framework with an unsupervised autoencoder to
capture irregular behavior, and an auxiliary sequence predictor that identifies
transitions in workplace performance to contextualize events. We perform
experiments using data from a mobile sensing study comprising N=126 information
workers from multiple industries, spanning 10106 days with 198 rare events
(<2%). Through personalized inference, we detect the exact day of a rare event
with an F1 of 0.34, demonstrating that our method outperforms several
baselines. Finally, we discuss the implications of our work from the context of
real-world deployment.
|
http://arxiv.org/abs/2305.20056v1
|
CaSb$_2$ is a bulk superconductor and a topological semimetal, making it a
great platform for realizing topological superconductivity. In this work, we
investigate the superconducting upper and lower critical field anisotropy using
magnetic susceptibility, and study the superconducting state using muon
spin-relaxation. The temperature dependence of transverse-field relaxation rate
can be fitted with a single-gap model or two-gap model. Zero-field relaxation
shows little temperature dependence when the muon-spin is parallel to the
$c^*$-axis, while an increase in relaxation appears below 1 K when the muon-spin
is parallel to the $ab$-plane. We conclude that an $s+is$ order parameter is realized,
considering the breaking of time-reversal symmetry (TRS), which originates from
competing interband interactions between the three bands of CaSb$_2$. To
explain the direction-dependent breaking of TRS we suggest loop currents
developing in the plane of distorted square-net of Sb atoms.
|
http://arxiv.org/abs/2309.12457v3
|
We developed a theoretical scheme of incorporating the magnetoelastic
contribution into the thermal elastic dynamics for the thin membranes of 2D
antiferromagnetic material with restricted geometry. We extended the elastic
Gr\"uneisen relation into an effective version which includes the magnetic
counterpart to the volume change of internal energy. Based on the specific heat
and thermal conductivity from the elastic and magnetic origins, we predicted the
dependence of observables, such as the effective Gr\"uneisen parameter, thermal
expansion coefficient, and damping factor, over a wide range of
temperature across the phase transition. Our model of analysis has been
validated by applying it to the case of a FePS$_3$ flake resonator, and the theoretical
predictions fit well with the reported experimental data.
|
http://arxiv.org/abs/2309.13991v2
|
Let $\mathcal{S}$ be a finite set of integer points in $\mathbb{R}^d$, which
we assume has many symmetries, and let $P\in\mathbb{R}^d$ be a fixed point. We
calculate the distances from $P$ to the points in $\mathcal{S}$ and compare the
results. In some of the most common cases, we find that they lead to unexpected
conclusions if the dimension is sufficiently large. For example, if
$\mathcal{S}$ is the set of vertices of a hypercube in $\mathbb{R}^d$ and $P$
is any point inside, then almost all triangles $PAB$ with $A,B\in\mathcal{S}$
are almost equilateral. Or, if $P$ is close to the center of the cube, then
almost all triangles $PAB$ with $A\in \mathcal{S}$ and $B$ anywhere in the
hypercube are almost right triangles.
|
http://arxiv.org/abs/2309.15338v1
|
We study the dynamics of a magneto-optical trap (MOT) operating at
high-bandwidth. We find that high recapture efficiency between cycles is
essential to maintain a practical atom number. We develop a simple model
accounting for MOT trapping forces and pressure-induced collisions and validate
it with experimental data using $\mathrm{{}^{87}Rb}$. This is then applied to
quantum sensing, predicting a shot-noise-limited sensitivity of
$\mathrm{10^{-7}g/\sqrt{Hz}}$ for a gravimeter at 100 Hz operation. The results
are useful for understanding MOT operation at high-bandwidth, particularly in
the context of developing mobile high-bandwidth quantum inertial sensors
targeting dynamic environments and navigation applications.
|
http://arxiv.org/abs/2309.14026v1
|
The classical Minkowski inequality implies that the volume of a bounded
convex domain is controlled from above by the integral of the mean curvature of
its boundary. In this note, we establish an analogous inequality without the
convexity assumption for all bounded smooth domains in a complete manifold with
its bottom spectrum being suitably large relative to its Ricci curvature lower
bound. An immediate implication is the nonexistence of embedded compact minimal
hypersurfaces in such manifolds. This nonexistence issue is also considered for
steady and expanding Ricci solitons.
|
http://arxiv.org/abs/2309.13749v1
|
Quantum dynamics of a collection of atoms subjected to phase modulation has
been carefully revisited. We present an exact analysis of the evolution of a
two-level system (represented by a spinor) under the action of a time-dependent
matrix Hamiltonian. The dynamics is shown to evolve on two coupled potential
energy surfaces, one of them binding and the other of scattering type. The
dynamics is shown to be quasi-integrable with nonlinear resonances. The bounded
dynamics with intermittent scattering at random moments presents a scenario
reminiscent of Anderson and dynamical localization. We believe that a careful
analytical investigation of a multi-component system which is classically
non-integrable is relevant to many other fields, including quantum computation
with multi-qubit systems.
|
http://arxiv.org/abs/2309.04235v1
|
To resolve the non-convex optimization problem in partial wave analysis, this
paper introduces a novel approach that incorporates fraction constraints into
the likelihood function. This method offers significant improvements in both
the efficiency of pole searching and the reliability of resonance selection
within partial wave analysis.
|
http://arxiv.org/abs/2309.14740v1
|
Deep learning algorithms utilizing magnetic resonance (MR) images have
demonstrated cutting-edge proficiency in autonomously segmenting multiple
sclerosis (MS) lesions. Despite their achievements, these algorithms may
struggle to extend their performance across various sites or scanners, leading
to domain generalization errors. While few-shot or one-shot domain adaptation
emerges as a potential solution to mitigate generalization errors, its efficacy
might be hindered by the scarcity of labeled data in the target domain. This
paper seeks to tackle this challenge by integrating one-shot adaptation data
with harmonized training data that incorporates labels. Our approach involves
synthesizing new training data with a contrast akin to that of the test domain,
a process we refer to as "contrast harmonization" in MRI. Our experiments
illustrate that the amalgamation of one-shot adaptation data with harmonized
training data surpasses the performance of utilizing either data source in
isolation. Notably, domain adaptation using exclusively harmonized training
data achieved comparable or even superior performance compared to one-shot
adaptation. Moreover, all adaptations required only minimal fine-tuning,
ranging from 2 to 5 epochs for convergence.
|
http://arxiv.org/abs/2310.20586v1
|
The recent introduction of Transformers language representation models
allowed great improvements in many natural language processing (NLP) tasks.
However, while the performance achieved by this kind of architecture is
impressive, its usability is limited by the large number of parameters that
constitute the network, resulting in high computational and memory demands. In
this work we present BERTino, a DistilBERT model proposed as the first
lightweight alternative to the BERT architecture specific to the Italian
language. We evaluated BERTino on the Italian ISDT, Italian ParTUT, Italian
WikiNER and multiclass classification tasks, obtaining F1 scores comparable to
those obtained by a BERTBASE model, with a remarkable improvement in training
and inference speed.
|
http://arxiv.org/abs/2303.18121v1
|
A sequential pattern with negation, or negative sequential pattern, takes the
form of a sequential pattern for which the negation symbol may be used in front
of some of the pattern's itemsets. Intuitively, such a pattern occurs in a
sequence if negated itemsets are absent in the sequence. Recent work has shown
that different semantics can be attributed to these pattern forms, and that
state-of-the-art algorithms do not extract the same sets of patterns. This
raises the important question of the interpretability of sequential patterns
with negation. In this study, our focus is on exploring how potential users
perceive negation in sequential patterns. Our aim is to determine whether
specific semantics are more "intuitive" than others and whether these align
with the semantics employed by one or more state-of-the-art algorithms. To
achieve this, we designed a questionnaire to reveal which semantics each user
finds intuitive. This article presents both the design of the questionnaire and an
in-depth analysis of the 124 responses obtained. The outcomes indicate that two
of the semantics are predominantly intuitive; however, neither of them aligns
with the semantics of the primary state-of-the-art algorithms. As a result, we
provide recommendations to account for this disparity in the conclusions drawn.
|
http://arxiv.org/abs/2309.11638v1
|
There are many ways of engineering the band gap of nanoribbons, including the
application of stress, electric fields, and functionalization of the edges. In
this article, we investigate separately the effects of these methods on
armchair graphene and boron nitride nanoribbons. By means of density functional
theory calculations, we show that, despite their similar structure, the two
materials respond in opposite ways to these stimuli. By treating them as
perturbations of a heteroatomic ladder model based on the tight-binding
formalism, we connect the two behaviours to the different symmetries of the top
valence and bottom conduction wave functions. These results indicate that
opposite and complementary strategies are preferable to engineer the gap width
of armchair graphene and boron nitride nanoribbons.
|
http://arxiv.org/abs/2302.14432v2
|
Translation automation mechanisms and tools have been developed for several
years to bring people who speak different languages together. A "new search
only approach to machine translation" was adopted to tackle some of the
slowness and inaccuracy of the other technologies. The idea is to develop a
solution that, by indexing an incremental set of words that combine a certain
semantic meaning, makes it possible to create a process of correspondence
between their native language record and the language of translation. This
research principle assumes that the vocabulary used in a given type of
publication/document is relatively limited in terms of language style and word
diversity, which enhances instantaneity and rigor in the translation process
through indexing. A volume of electronic text documents was processed and
loaded into a database, then analyzed and measured in order to confirm this
premise. Although the observed and
projected metric values did not give encouraging results, it was possible to
develop and make available a translation tool using this approach.
|
http://arxiv.org/abs/2309.10526v1
|
Blind deconvolution over graphs involves using (observed) output graph
signals to obtain both the inputs (sources) as well as the filter that drives
(models) the graph diffusion process. This is an ill-posed problem that
requires additional assumptions, such as the sources being sparse, to be
solvable. This paper addresses the blind deconvolution problem in the presence
of imperfect graph information, where the observed graph is a perturbed version
of the (unknown) true graph. While not having perfect knowledge of the graph is
arguably more the norm than the exception, the body of literature on this topic
is relatively small. This is partly due to the fact that translating the
uncertainty about the graph topology to standard graph signal processing tools
(e.g. eigenvectors or polynomials of the graph) is a challenging endeavor. To
address this limitation, we propose an optimization-based estimator that solves
the blind identification in the vertex domain, aims at estimating the inverse
of the generating filter, and accounts explicitly for additive graph
perturbations. Preliminary numerical experiments showcase the effectiveness and
potential of the proposed algorithm.
|
http://arxiv.org/abs/2309.09063v1
|
The Gene Regulatory Network (GRN) of biological cells governs a number of key
functionalities that enable them to adapt and survive through different
environmental conditions. Close observation of the GRN shows that its structure
and operational principles resemble those of an Artificial Neural Network (ANN), which
can pave the way for the development of Biological Artificial Intelligence. In
particular, a gene's transcription and translation process resembles a
sigmoidal-like property based on transcription factor inputs. In this paper, we
develop a mathematical model of gene-perceptron using a dual-layered
transcription-translation chemical reaction model, enabling us to transform a
GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis
for each gene-perceptron within the fully-connected GRNN sub network to
determine temporal as well as stable concentration outputs that will result in
reliable computing performance. We focus on a non-linear classifier application
for the GRNN, where we analyzed generic multi-layer GRNNs as well as an
E. coli GRNN derived from trans-omic experimental data. Our analysis found that
varying the parameters of the chemical reactions allows us to shift the
boundaries of the classification region, laying the platform for programmable
GRNNs that suit diverse application requirements.
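As a rough illustration of the gene-perceptron idea (a sketch, not the authors' dual-layered transcription-translation chemical reaction model), the snippet below treats each gene's steady-state output as a Hill-type sigmoidal function of its transcription-factor inputs and wires two such units into a small two-layer GRNN. All weights, thresholds, and Hill coefficients are arbitrary assumptions.

```python
# Sketch: a "gene-perceptron" with sigmoidal (Hill-type) response, composed
# into a tiny two-layer gene regulatory neural network (GRNN).
import numpy as np

def gene_perceptron(tf_inputs, weights, threshold, hill_n=2.0):
    """Steady-state protein output of one gene given TF concentrations."""
    drive = float(np.dot(weights, tf_inputs))  # combined transcription-factor activation
    return drive**hill_n / (threshold**hill_n + drive**hill_n)

def grnn_layer(tf_inputs, weight_matrix, thresholds):
    return np.array([gene_perceptron(tf_inputs, w, t)
                     for w, t in zip(weight_matrix, thresholds)])

# Two-layer GRNN acting as a simple nonlinear map of two TF inputs.
x = np.array([0.8, 0.3])  # input TF concentrations (toy values)
hidden = grnn_layer(x, weight_matrix=np.array([[1.0, 0.5], [0.3, 1.2]]),
                    thresholds=np.array([0.6, 0.7]))
output = gene_perceptron(hidden, weights=np.array([1.0, 1.0]), threshold=0.8)
print(hidden, output)
```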
|
http://arxiv.org/abs/2310.04424v1
|
Nonlinear optical effects including stimulated Brillouin scattering (SBS) and
four-wave mixing (FWM) play an important role in microwave photonics, optical
frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss
and versatile integrated platform would open the path to building large-scale
Brillouin/Kerr-based photonic integrated circuits. In this letter, we
investigate the Brillouin and Kerr properties of a low-index (n=1.513 @ 1550
nm) silicon oxynitride (SiON) platform. We observed, for the first time,
backward SBS in SiON waveguides with a Brillouin gain coefficient of 0.3$\rm
m^{-1}W^{-1}$, which can potentially be increased to 0.95$\rm m^{-1}W^{-1}$ by
just tailoring the waveguide cross-section. We also performed FWM experiments
in SiON rings and obtained the nonlinear parameter $\gamma$, of 0.02 $\rm
m^{-1}W^{-1}$. Our results point to a low-loss and low-index photonic
integrated platform that is both Brillouin and Kerr active.
|
http://arxiv.org/abs/2301.13619v1
|
We introduce a new neural architecture for solving visual abstract reasoning
tasks inspired by human cognition, specifically by observations that human
abstract reasoning often interleaves perceptual and conceptual processing as
part of a flexible, iterative, and dynamic cognitive process. Inspired by this
principle, our architecture models visual abstract reasoning as an iterative,
self-contrasting learning process that pursues consistency between perceptual
and conceptual processing of visual stimuli. We explain how this new
Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning
problems in the style of the well-known Raven's Progressive Matrices
intelligence test. Experiments on the machine learning dataset RAVEN show that
CPCNet achieves higher accuracy than all previously published models while also
using the weakest inductive bias. We also point out a substantial and
previously unremarked class imbalance in the original RAVEN dataset, and we
propose a new variant of RAVEN -- AB-RAVEN -- that is more balanced in terms of
abstract concepts.
|
http://arxiv.org/abs/2309.10532v3
|
A large class of type-I fracton models, including the X-cube model, have been
found to be fixed points of the foliated renormalization group (RG). The system
size of such foliated models can be changed by adding or removing decoupled
layers of $2$D topological states and continuous deformation of the
Hamiltonian. In this paper, we study a closely related model -- the Ising
cage-net model -- and find that this model is not foliated in the same sense.
In fact, we point out certain unnatural restrictions in the foliated RG, and
find that removing these restrictions leads to a generalized foliated RG under
which the Ising cage-net model is a fixed point, and which includes the
original foliated RG as a special case. The Ising cage-net model thus gives a
prototypical example of the generalized foliated RG, and its system size can be
changed either by condensing / uncondensing bosonic planon excitations near a
2D plane or through a linear depth quantum circuit in the same plane. We show
that these two apparently different RG procedures are closely related, as they
lead to the same gapped boundary when implemented in part of a plane. Finally,
we briefly discuss the implications for foliated fracton phases, whose
universal properties will need to be reexamined in light of the generalized
foliated RG.
|
http://arxiv.org/abs/2301.00103v2
|
We study the semi-random graph process, and a variant process recently
suggested by Nick Wormald. We show that these two processes are asymptotically
equally fast in constructing a semi-random graph $G$ that has property
${\mathcal P}$, for the following examples of ${\mathcal P}$:
- ${\mathcal P}$ is the set of graphs containing a $d$-degenerate subgraph,
where $d\ge 1$ is fixed;
- ${\mathcal P}$ is the set of $k$-connected graphs, where $k\ge 1$ is fixed.
In particular, our result on $k$-connectedness above settles the open case
$k=2$ of the original semi-random graph process.
We also prove that there exist properties ${\mathcal P}$ where the two
semi-random graph processes do not construct a graph in ${\mathcal P}$
asymptotically equally fast. We further propose some conjectures on ${\mathcal
P}$ for which the two processes perform differently.
|
http://arxiv.org/abs/2309.05881v1
|
Magnetically arrested accretion disks (MADs) around a rapidly rotating black
hole (BH) have been proposed as a model for jetted tidal disruption events
(TDEs). However, the dynamics of strongly magnetized disks in a more realistic
simulation which can mimic the chaotic dynamics during a TDE have previously
been unexplored. Here we employ global GRMHD simulations of a pre-existing MAD
disk interacting with an injected TDE stream with impact parameter $\beta\equiv
R_t/R_p=4-7$ to investigate how strongly magnetized TDEs differ from the
standard MAD picture. We demonstrate for the first time that a MAD or semi-MAD
state can be sustained and jets powered by the BH spin are produced in a TDE.
We also demonstrate that the strength of the self-intersection shock depends on
how dense the disk is relative to the stream, or the density contrast
$f_\rho=\rho_d/\rho_s$. The jet or funnel can become significantly tilted (by
$10-30^\circ$) due to the self-intersection outflow when $f_\rho \leq 0.1$. In
models with a powerful jet and $f_\rho\leq 0.01$, the tilted jet interacts with
and ultimately tilts the disk by as much as 23 degrees from the incoming
stream. We illustrate that as $f_\rho$ increases, the tilt of the jet and disk
is expected to realign with the BH spin once $f_\rho \geq 0.1$. We illustrate
how the tilt can rapidly realign if $f_\rho$ increases rapidly and apply this
to TDEs which have shown X-ray evolution on timescales of days-weeks.
|
http://arxiv.org/abs/2310.20592v1
|
When deploying machine learning estimators in science and engineering (SAE)
domains, it is critical to avoid failed estimations that can have disastrous
consequences, e.g., in aero engine design. This work focuses on detecting and
correcting failed state estimations before adopting them in SAE inverse
problems, by utilizing simulations and performance metrics guided by physical
laws. We suggest flagging a machine learning estimation when its physical model
error exceeds a feasible threshold, and propose a novel approach, GEESE, to
correct it through optimization, aiming at delivering both low error and high
efficiency. The key designs of GEESE include (1) a hybrid surrogate error model
to provide fast error estimations to reduce simulation cost and to enable
gradient based backpropagation of error feedback, and (2) two generative models
to approximate the probability distributions of the candidate states for
simulating the exploitation and exploration behaviours. All three models are
constructed as neural networks. GEESE is tested on three real-world SAE inverse
problems and compared to a number of state-of-the-art optimization/search
approaches. Results show that it fails the least number of times in terms of
finding a feasible state correction, and requires physical evaluations less
frequently in general.
|
http://arxiv.org/abs/2309.13985v2
|
Inspired by the detection of $T_{cc}$ tetraquark state by LHCb Collaboration,
we perform a systematic investigation of the low-lying doubly heavy charm
tetraquark states with strangeness in the quark delocalization color screening
model in the present work. Two kinds of configurations, the meson-meson
configuration and diquark-antidiquark configuration, are considered in the
calculation. Our estimations indicate that the coupled channel effects play an
important role in the multiquark system, and a bound state with $J^{P}=1^{+}$
and a resonance state with $J^{P}=0^{+}$ have been predicted. The mass of the
bound state is evaluated to be $(3971\sim3975)$ MeV, while the mass and width
of the resonance are determined to be $(4113\sim4114)$ MeV and $(14.3\sim
16.1)$ MeV, respectively.
|
http://arxiv.org/abs/2309.07728v1
|
Large language model (LLM) platforms, such as ChatGPT, have recently begun
offering an app ecosystem to interface with third-party services on the
internet. While these apps extend the capabilities of LLM platforms, they are
developed by arbitrary third parties and thus cannot be implicitly trusted.
Apps also interface with LLM platforms and users using natural language, which
can have imprecise interpretations. In this paper, we propose a framework that
lays a foundation for LLM platform designers to analyze and improve the
security, privacy, and safety of current and future third-party integrated LLM
platforms. Our framework is a formulation of an attack taxonomy that is
developed by iteratively exploring how LLM platform stakeholders could leverage
their capabilities and responsibilities to mount attacks against each other. As
part of our iterative process, we apply our framework in the context of
OpenAI's plugin (apps) ecosystem. We uncover plugins that concretely
demonstrate the potential for the types of issues that we outline in our attack
taxonomy. We conclude by discussing novel challenges and by providing
recommendations to improve the security, privacy, and safety of present and
future LLM-based computing platforms.
|
http://arxiv.org/abs/2309.10254v2
|
The quantum approximate optimization algorithm (QAOA) is an appealing
proposal to solve NP problems on noisy intermediate-scale quantum (NISQ)
hardware. Making NISQ implementations of the QAOA resilient to noise requires
short ansatz circuits with as few CNOT gates as possible. Here, we present
Dynamic-ADAPT-QAOA. Our algorithm significantly reduces the circuit depth and
the CNOT count of standard ADAPT-QAOA, a leading proposal for near-term
implementations of the QAOA. Throughout our algorithm, the decision to apply
CNOT-intensive operations is made dynamically, based on algorithmic benefits.
Using density-matrix simulations, we benchmark the noise resilience of
ADAPT-QAOA and Dynamic-ADAPT-QAOA. We compute the gate-error probability
$p_\text{gate}^\star$ below which these algorithms provide, on average, more
accurate solutions than the classical, polynomial-time approximation algorithm
by Goemans and Williamson. For small systems with $6-10$ qubits, we show that
$p_{\text{gate}}^\star>10^{-3}$ for Dynamic-ADAPT-QAOA. Compared to standard
ADAPT-QAOA, this constitutes an order-of-magnitude improvement in noise
resilience. This improvement should make Dynamic-ADAPT-QAOA viable for
implementations on superconducting NISQ hardware, even in the absence of error
mitigation.
|
http://arxiv.org/abs/2309.00047v1
|
Tensor networks are useful toy models for understanding the structure of
entanglement in holographic states and reconstruction of bulk operators within
the entanglement wedge. They are, however, constrained to only prepare
so-called "fixed-area states" with flat entanglement spectra, limiting their
utility in understanding general features of holographic entanglement. Here, we
overcome this limitation by constructing a variant of random tensor networks
that enjoys bulk gauge symmetries. Our model includes a gauge theory on a
general graph, whose gauge-invariant states are fed into a random tensor
network. We show that the model satisfies the quantum-corrected Ryu-Takayanagi
formula with a nontrivial area operator living in the center of a
gauge-invariant algebra. We also demonstrate nontrivial, n-dependent
contributions to the R\'enyi entropy and R\'enyi mutual information from this
area operator, a feature shared by general holographic states.
|
http://arxiv.org/abs/2309.06436v1
|
We implement the Bayesian inference to retrieve energy spectra of all
neutrinos from a galactic core-collapse supernova (CCSN). To achieve high
statistics and full sensitivity to all flavours of neutrinos, we adopt a
combination of several reaction channels from different large-scale neutrino
observatories, namely inverse beta decay on proton and elastic scattering on
electron from Hyper-Kamiokande (Hyper-K), charged current absorption on Argon
from Deep Underground Neutrino Experiment (DUNE) and coherent elastic
scattering on Lead from RES-NOVA. Assuming no neutrino oscillation or specific
oscillation models, we obtain mock data for each channel through Poisson
processes with the predictions, for a typical source distance of 10 kpc in our
Galaxy, and then evaluate the probability distributions for all spectral
parameters of theoretical neutrino spectrum model with Bayes' theorem. Although
the results for either the electron neutrinos or electron antineutrinos retain
relatively large uncertainties (depending on the neutrino mass hierarchy), a
precision of a few percent (i.e., $\pm 1 \% \sim \pm 4 \%$ at a credible
interval of $2 \sigma$) is achieved for primary spectral parameters (e.g., mean
energy and total emitted energy) of other neutrino species. Moreover, the
correlation coefficients between different parameters are computed as well and
interesting patterns are found. Especially, the mixing-induced correlations are
sensitive to the neutrino mass hierarchy, which potentially makes it a brand
new probe to determine the neutrino mass hierarchy in the detection of galactic
supernova neutrinos. Finally, we discuss the origin of such correlation
patterns and perspectives for further improvement on our results.
|
http://arxiv.org/abs/2305.00392v2
|
Non-governmental organizations for environmental conservation have a
significant interest in monitoring conservation-related media and getting
timely updates about infrastructure construction projects as they may cause
massive impact to key conservation areas. Such monitoring, however, is
difficult and time-consuming. We introduce NewsPanda, a toolkit which
automatically detects and analyzes online articles related to environmental
conservation and infrastructure construction. We fine-tune a BERT-based model
using active learning methods and noise correction algorithms to identify
articles that are relevant to conservation and infrastructure construction. For
the identified articles, we perform further analysis, extracting keywords and
finding potentially related sources. NewsPanda has been successfully deployed
by the World Wide Fund for Nature teams in the UK, India, and Nepal since
February 2022. It currently monitors over 80,000 websites and 1,074
conservation sites across India and Nepal, saving more than 30 hours of human
efforts weekly. We have now scaled it up to cover 60,000 conservation sites
globally.
|
http://arxiv.org/abs/2305.01503v1
|
This paper provides a comprehensive tutorial for Bayesian practitioners in
pharmacometrics using Pumas workflows. We start by giving a brief motivation of
Bayesian inference for pharmacometrics highlighting limitations in existing
software that Pumas addresses. We then follow with a description of all the steps
of a standard Bayesian workflow for pharmacometrics using code snippets and
examples. This includes: model definition, prior selection, sampling from the
posterior, prior and posterior simulations and predictions, counter-factual
simulations and predictions, convergence diagnostics, visual predictive checks,
and finally model comparison with cross-validation. Finally, the background and
intuition behind many advanced concepts in Bayesian statistics are explained in
simple language. This includes many important ideas and precautions that users
need to keep in mind when performing Bayesian analysis. Many of the algorithms,
codes, and ideas presented in this paper are highly applicable to clinical
research and statistical learning at large but we chose to focus our
discussions on pharmacometrics in this paper to have a narrower scope in mind
and given the nature of Pumas as software primarily for pharmacometricians.
|
http://arxiv.org/abs/2304.04752v1
|
In this paper, we develop a novel, efficient, and robust nonparametric
regression estimator under a feedforward neural network framework. The proposed
estimator has several interesting characteristics. First, the loss function is
built upon an estimated maximum likelihood function, which integrates the
information from the observed data as well as from the data structure.
Consequently, the resulting estimator has desirable optimal properties, such as
efficiency. Second, unlike traditional maximum likelihood estimation (MLE), the
proposed method avoids specifying the distribution and is therefore flexible
with respect to any kind of distribution, such as heavy-tailed, multimodal, or
heterogeneous distributions. Third, the proposed loss function relies on
probabilities rather than direct observations as in least squares, contributing
to the robustness of the proposed estimator. Finally, the proposed loss
function involves the nonparametric regression function only. This
enables a direct application of existing packages, simplifying the computation
and programming. We establish the large sample property of the proposed
estimator in terms of its excess risk and minimax near-optimal rate. The
theoretical results demonstrate that the proposed estimator is equivalent to
the true MLE in which the density function is known. Our simulation studies
show that the proposed estimator outperforms the existing methods in terms of
prediction accuracy, efficiency and robustness. Particularly, it is comparable
to the true MLE, and even gets better as the sample size increases. This
implies that the adaptive and data-driven loss function from the estimated
density may offer an additional avenue for capturing valuable information. We
further apply the proposed method to four real data examples, resulting in
significantly reduced out-of-sample prediction errors compared to existing
methods.
|
http://arxiv.org/abs/2309.12872v1
|
While several previous studies have devised methods for segmentation of
polyps, most of these methods are not rigorously assessed on multi-center
datasets. Variability due to appearance of polyps from one center to another,
difference in endoscopic instrument grades, and acquisition quality result in
methods with good performance on in-distribution test data, and poor
performance on out-of-distribution or underrepresented samples. Unfair models
have serious implications and pose a critical challenge to clinical
applications. We adapt an implicit bias mitigation method which leverages
Bayesian predictive uncertainties during training to encourage the model to
focus on underrepresented sample regions. We demonstrate the potential of this
approach to improve generalisability without sacrificing state-of-the-art
performance on a challenging multi-center polyp segmentation dataset (PolypGen)
with different centers and image modalities.
|
http://arxiv.org/abs/2309.06807v2
|
This paper discusses the application of artificial intelligence (AI)
technology in optical communication networks and 5G. It primarily introduces
representative applications of AI technology and the potential risks of AI
failures caused by the openness of optical communication networks. It then
proposes several coping strategies, including modeling AI systems through
modularization and miniaturization, combining them with traditional network
modeling and planning methods, and improving the effectiveness and
interpretability of AI techniques. It also proposes response strategies based
on network protection against possible AI failures and attacks.
|
http://arxiv.org/abs/2301.13396v1
|
We study the properties of a new class of circumgalactic medium absorbers
identified in the Lyman-$\alpha$ forest: "Strong, Blended Lyman-$\alpha$" (or
SBLA) absorption systems. We study SBLAs at $2.4<z<3.1$ in SDSS-IV/eBOSS
spectra by their strong extended Lyman-$\alpha$ absorption complexes covering
138 $\,\,{\rm km}\,{\rm s}^{-1}$ with an integrated $\log (N_{HI}/$cm$^{-2})
=16.04\substack{+0.05 \\ -0.06}$ and Doppler parameter $b=18.1 \substack{+0.7
\\ -0.4}\,\,{\rm km}\,{\rm s}^{-1}$. Clustering with the Lyman-$\alpha$ forest
provides a large-scale structure bias of $b = 2.34\pm0.06$ and halo mass
estimate of $M_h \approx 10^{12}{\rm h^{-1}M_{sol}}$ for our SBLA sample. We
measure the ensemble mean column densities of 22 metal features in the SBLA
composite spectrum and find that no single-population multiphase model for them
is viable. We therefore explore the underlying SBLA population by forward
modelling the SBLA absorption distribution. Based on covariance measurements
and favoured populations we find that $\approx 25$\% of our SBLAs have stronger
metals. Using silicon only we find that our strong metal SBLAs trace gas with a
$\log(n_H / $cm$^{-3}) > -2.40$ for $T=10^{3.5}$K and show gas clumping on
$<210$ parsec scales. We fit multiphase models to this strong sub-population
and find a low ionization phase with $n_H=1$cm$^{-3}$, $T=10^{3.5}$K and
$[X/H]=0.8$, an intermediate ionization phase with $\log(n_H / $cm$^{-3}) =
-3.05$, $T=10^{3.5}$K and $[X/H]=-0.8$, and a poorly constrained higher
ionization phase. We find that the low ionization phase favours cold, dense
super-solar metallicity gas with a clumping scale of just 0.009 parsecs.
|
http://arxiv.org/abs/2309.06813v2
|
Large language models (LLM) such as OpenAI's ChatGPT and GPT-3 offer unique
testbeds for exploring the translation challenges of turning literacy into
numeracy. Previous publicly-available transformer models from eighteen months
prior and 1000 times smaller failed to provide basic arithmetic. The
statistical analysis of four complex datasets described here combines
arithmetic manipulations that cannot be memorized or encoded by simple rules.
The work examines whether next-token prediction extends beyond sentence
completion into the realm of actual numerical understanding. For example, the
work highlights cases for descriptive statistics on in-memory datasets that the
LLM initially loads from memory or generates randomly using python libraries.
The resulting exploratory data analysis showcases the model's capabilities to
group by or pivot categorical sums, infer feature importance, derive
correlations, and predict unseen test cases using linear regression. To extend
the model's testable range, the research deletes and appends random rows such
that recall alone cannot explain emergent numeracy.
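For context, a small pandas example of the kinds of operations the study probes (grouped categorical sums, correlations, and a linear-regression prediction on held-out rows); the synthetic data frame below is illustrative and unrelated to the four datasets analysed in the paper.

```python
# Illustrative exploratory-data-analysis tasks of the kind posed to the LLM.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "category": rng.choice(["A", "B", "C"], size=200),
    "x": rng.normal(size=200),
})
df["y"] = 2.0 * df["x"] + rng.normal(scale=0.1, size=200)

print(df.groupby("category")["y"].sum())   # grouped categorical sums
print(df[["x", "y"]].corr())               # derived correlations

train, test = df.iloc[:150], df.iloc[150:]  # predict unseen test cases
model = LinearRegression().fit(train[["x"]], train["y"])
print(model.score(test[["x"]], test["y"]))
```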
|
http://arxiv.org/abs/2301.13382v1
|
Loop quantum gravity, as one branch of quantum gravity, holds the potential
to explore the fundamental nature of black holes. Recently, according to the
quantum Oppenheimer-Snyder model in loop quantum cosmology, a novel loop
quantum corrected black hole in de Sitter spacetime has been discovered. Here,
we first investigate the corresponding quasinormal modes and late-time behavior
of massless neutral scalar field perturbations based on such a quantum-modified
black hole in de Sitter spacetime. The frequency and time domain analysis of
the lowest-lying quasinormal modes is derived by Prony method, Matrix method as
well as WKB approximation. The influences of loop quantum correction, the black
hole mass ratio, and the cosmological constant on the quasinormal frequencies
are studied in detail. The late-time behavior of quantum-modified black holes
exhibits an exponential decay, which is determined not only by the
multipole number but also by the cosmological constant. The impact of loop
quantum correction on the late-time tail is negligible, but it has a
significant impact on damping oscillation. To explore spacetime singularities,
we examine the validity of strong cosmic censorship for a near-extremal
quantum-modified black hole in de Sitter spacetime. As a result, it is found
that the strong cosmic censorship is destroyed as the black hole approaches the
near-extremal limit, but the violation becomes weaker as the cosmological
constant and the loop quantum correction increase.
|
http://arxiv.org/abs/2309.04962v2
|
A \emph{$\nu$-reliable spanner} of a metric space $(X,d)$, is a (dominating)
graph $H$, such that for any possible failure set $B\subseteq X$, there is a
set $B^+$, only slightly larger ($|B^+|\le(1+\nu)\cdot|B|$), such that all distances
between pairs in $X\setminus B^+$ are (approximately) preserved in $H\setminus
B$. Recently, there have been several works on sparse reliable spanners in
various settings, but so far, the weight of such spanners has not been analyzed
at all. In this work, we initiate the study of \emph{light} reliable spanners,
whose weight is proportional to that of the Minimum Spanning Tree (MST) of $X$.
We first observe that unlike sparsity, the lightness of any deterministic
reliable spanner is huge, even for the metric of the simple path graph.
Therefore, randomness must be used: an \emph{oblivious} reliable spanner is a
distribution over spanners, and the bound on $|B^+|$ holds in expectation.
We devise an oblivious $\nu$-reliable $(2+\frac{2}{k-1})$-spanner for any
$k$-HST, whose lightness is $\approx \nu^{-2}$. We demonstrate a matching
$\Omega(\nu^{-2})$ lower bound on the lightness (for any finite stretch). We
also note that any stretch below 2 must incur linear lightness.
For general metrics, doubling metrics, and metrics arising from minor-free
graphs, we construct {\em light} tree covers, in which every tree is a $k$-HST
of low weight. Combining these covers with our results for $k$-HSTs, we obtain
oblivious reliable light spanners for these metric spaces, with nearly optimal
parameters. In particular, for doubling metrics we get an oblivious
$\nu$-reliable $(1+\varepsilon)$-spanner with lightness $\varepsilon^{-O({\rm
ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)$, which is best possible (up to
lower order terms).
|
http://arxiv.org/abs/2307.16612v1
|
Field-level inference is emerging as a promising technique for optimally
extracting information from cosmological datasets. Indeed, previous analyses
have shown field-based inference produces tighter parameter constraints than
power spectrum analyses. However, estimates of the detailed quantitative gain
in constraining power differ. Here, we demonstrate the gain in constraining
power depends on the parameter space being constrained. As a specific example,
we find that field-based analysis of an LSST Y1-like mock data set only
marginally improves constraints relative to a 2-point function analysis in
$\Lambda$CDM, yet it more than doubles the constraining power of the data in
the context of $w$CDM models. This effect reconciles some, but not all, of the
discrepant results found in the literature. Our results demonstrate the
importance of using a full systematics model when quantifying the information
gain for realistic field-level analyses of future data sets.
|
http://arxiv.org/abs/2307.00070v1
|
Randomized control trials, RCTs, have become a powerful tool for assessing
the impact of interventions and policies in many contexts. They are considered
the gold-standard for inference in the biomedical fields and in many social
sciences. Researchers have published an increasing number of studies that rely
on RCTs for at least part of the inference, and these studies typically include
the response data collected, de-identified and sometimes protected through
traditional disclosure limitation methods. In this paper, we empirically assess
the impact of strong privacy-preservation methodology (with differential
privacy (DP) guarantees) on published analyses from RCTs, leveraging the
availability of replication packages (research compendia) in economics and
policy analysis. We provide simulation studies and demonstrate how we can
replicate the analysis
in a published economics article on privacy-protected data under various
parametrizations. We find that relatively straightforward DP-based methods
allow for inference-valid protection of the published data, though
computational issues may limit more complex analyses from using these methods.
The results have applicability to researchers wishing to share RCT data,
especially in the context of low- and middle-income countries, with strong
privacy protection.
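As a minimal sketch of the kind of DP protection involved (not necessarily the methodology used in the paper), the snippet below releases a bounded-outcome treatment-effect estimate via the Laplace mechanism; the bounds, epsilon, and toy RCT data are assumptions for illustration.

```python
# Laplace-mechanism release of a bounded mean, applied to a toy RCT contrast.
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng):
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # sensitivity of the mean
    noise = rng.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

rng = np.random.default_rng(0)
outcomes = rng.binomial(1, 0.4, size=500)               # toy binary RCT outcome
treated = rng.binomial(1, 0.5, size=500).astype(bool)   # toy treatment indicator

# DP estimate of the treatment effect as a difference of protected group means.
effect = (dp_mean(outcomes[treated], 0, 1, epsilon=0.5, rng=rng)
          - dp_mean(outcomes[~treated], 0, 1, epsilon=0.5, rng=rng))
print(f"DP-protected treatment effect estimate: {effect:.3f}")
```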
|
http://arxiv.org/abs/2309.14581v1
|
In this paper, maze generation using quantum annealing is proposed. We
reformulate a standard algorithm to generate a maze into a specific form of a
quadratic unconstrained binary optimization problem suitable for the input of
the quantum annealer. To generate more difficult mazes, we introduce an
additional cost function $Q_{update}$ to increase the difficulty. The
difficulty of the mazes was evaluated by the time taken by 12 human subjects to
solve the maze. To check the efficiency of our scheme to create the maze, we
investigated the time-to-solution of a quantum processing unit, classical
computer, and hybrid solver.
|
http://arxiv.org/abs/2309.04792v2
|
Object understanding in egocentric visual data is arguably a fundamental
research topic in egocentric vision. However, existing object datasets are
either non-egocentric or have limitations in object categories, visual content,
and annotation granularities. In this work, we introduce EgoObjects, a
large-scale egocentric dataset for fine-grained object understanding. Its Pilot
version contains over 9K videos collected by 250 participants from 50+
countries using 4 wearable devices, and over 650K object annotations from 368
object categories. Unlike prior datasets containing only object category
labels, EgoObjects also annotates each object with an instance-level
identifier, and includes over 14K unique object instances. EgoObjects was
designed to capture the same object under diverse background complexities,
surrounding objects, distance, lighting and camera motion. In parallel to the
data collection, we conducted data annotation by developing a multi-stage
federated annotation process to accommodate the growing nature of the dataset.
To bootstrap the research on EgoObjects, we present a suite of 4 benchmark
tasks around egocentric object understanding, including a novel instance-level
and the classical category-level object detection. Moreover, we also
introduce 2 novel continual learning object detection tasks. The dataset and
API are available at https://github.com/facebookresearch/EgoObjects.
|
http://arxiv.org/abs/2309.08816v1
|
We find that, in the mesoscopic regime, modification of the material's
surface can induce an extensive change of the material's magnetic moment. In
other words, perturbation of order $N^2$ atoms on the surface of a
3-dimensional solid can change the magnetic moment proportionally to $N^3$.
When the solid's surface is perturbed, it triggers two changes in the
magnetization. One arises from variations of the electron wavefunction and
energy, while the other arises from a modification in the kinetic angular
momentum operator. In the macroscopic regime of our model, these two bulk
effects cancel each other, resulting in no impact of the surface perturbation
on the magnetization - consistent with prior work. In the mesoscopic regime, we
find a departure from this behavior, as the cancelation of two terms is not
complete.
|
http://arxiv.org/abs/2309.03957v3
|
Multi-array systems are widely used in sonar and radar applications. They can
improve communication speeds, target discrimination, and imaging. In the case
of a multibeam sonar system that can operate two receiving arrays, we derive
new adaptive detectors to improve detection capabilities compared to traditional sonar
detection approaches. To do so, we more specifically consider correlated
arrays, whose covariance matrices are estimated up to scale factors, and an
impulsive clutter. In a partially homogeneous environment, the 2-step
Generalized Likelihood ratio Test (GLRT) and Rao approach lead to a
generalization of the Adaptive Normalized Matched Filter (ANMF) test and an
equivalent numerically simpler detector with a well-established texture
Constant False Alarm Rate (CFAR) behavior. Performance is discussed and
illustrated with theoretical examples, numerous simulations, and insights into
experimental data. Results show that these detectors outperform their
competitors and have stronger robustness to environmental unknowns.
|
http://arxiv.org/abs/2303.17979v2
|
Event identification is increasingly recognized as crucial for enhancing the
reliability, security, and stability of the electric power system. With the
growing deployment of Phasor Measurement Units (PMUs) and advancements in data
science, there are promising opportunities to explore data-driven event
identification via machine learning classification techniques. However,
obtaining accurately-labeled eventful PMU data samples remains challenging due
to its labor-intensive nature and uncertainty about the event type (class) in
real-time. Thus, it is natural to use semi-supervised learning techniques,
which make use of both labeled and unlabeled samples. We propose a novel
semi-supervised framework to assess the effectiveness of incorporating
unlabeled eventful samples to enhance existing event identification
methodologies. We evaluate three categories of classical semi-supervised
approaches: (i) self-training, (ii) transductive support vector machines
(TSVM), and (iii) graph-based label spreading (LS) method. Our approach
characterizes events using physically interpretable features extracted from
modal analysis of synthetic eventful PMU data. In particular, we focus on the
identification of four event classes whose identification is crucial for grid
operations. We have developed and publicly shared a comprehensive Event
Identification package which consists of three aspects: data generation,
feature extraction, and event identification with limited labels using
semi-supervised methodologies. Using this package, we generate and evaluate
eventful PMU data for the South Carolina synthetic network. Our evaluation
consistently demonstrates that graph-based LS outperforms the other two
semi-supervised methods that we consider, and can noticeably improve event
identification performance relative to the setting with only a small number of
labeled samples.
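A minimal sketch of the graph-based label spreading baseline using scikit-learn; the synthetic modal features, class count, and labeling rate below are placeholders rather than the released package's data.

```python
# Graph-based label spreading with mostly unlabeled event samples.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # stand-in modal features per PMU event
y_true = rng.integers(0, 4, size=500)  # four event classes (toy labels)
y = y_true.copy()
unlabeled = rng.random(500) > 0.1      # keep labels for only ~10% of events
y[unlabeled] = -1                      # scikit-learn convention for "unlabeled"

model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2).fit(X, y)
acc = (model.transduction_[unlabeled] == y_true[unlabeled]).mean()
print(f"accuracy on unlabeled events: {acc:.2f}")
```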
|
http://arxiv.org/abs/2309.10095v2
|
We present RLLTE: a long-term evolution, extremely modular, and open-source
framework for reinforcement learning (RL) research and application. Beyond
delivering top-notch algorithm implementations, RLLTE also serves as a toolkit
for developing algorithms. More specifically, RLLTE decouples the RL algorithms
completely from the exploitation-exploration perspective, providing a large
number of components to accelerate algorithm development and evolution. In
particular, RLLTE is the first RL framework to build a complete and luxuriant
ecosystem, which includes model training, evaluation, deployment, benchmark
hub, and large language model (LLM)-empowered copilot. RLLTE is expected to set
standards for RL engineering practice and to be highly stimulating for industry
and academia.
|
http://arxiv.org/abs/2309.16382v1
|
Unit testing is a commonly-used approach in software engineering to test the
correctness and robustness of written code. Unit tests are tests designed to
test small components of a codebase in isolation, such as an individual
function or method. Although unit tests have historically been written by human
programmers, recent advancements in AI, particularly LLMs, have shown
corresponding advances in automatic unit test generation. In this study, we
explore the effect of different prompts on the quality of unit tests generated
by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the
Quixbugs dataset, and we focus on prompting due to the ease with which users
can make use of our findings and observations. We find that the quality of the
generated unit tests is not sensitive to changes in minor details in the
prompts provided. However, we observe that Code Interpreter is often able to
effectively identify and correct mistakes in code that it writes, suggesting
that providing it runnable code to check the correctness of its outputs would
be beneficial, even though we find that it is already often able to generate
correctly-formatted unit tests. Our findings suggest that, when prompting
models similar to Code Interpreter, it is important to include the basic
information necessary to generate unit tests, but minor details are not as
important.
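For illustration, a hypothetical prompt of the kind the study varies, showing the basic information (function source, test framework, expected output format) that the findings suggest including; the bucketsort snippet stands in for a QuixBugs-style function and is not taken from the paper.

```python
# Hypothetical unit-test-generation prompt (assumed format, not the paper's).
function_source = '''
def bucketsort(arr, k):
    counts = [0] * k
    for x in arr:
        counts[x] += 1
    return [i for i, c in enumerate(counts) for _ in range(c)]
'''

prompt = (
    "Write unit tests for the following Python function using the unittest "
    "framework. Cover typical inputs and edge cases (empty list, repeated "
    "values), and return a single runnable test file.\n\n"
    f"```python\n{function_source}\n```"
)
print(prompt)
```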
|
http://arxiv.org/abs/2310.00483v1
|
Fano varieties are basic building blocks in geometry - they are `atomic
pieces' of mathematical shapes. Recent progress in the classification of Fano
varieties involves analysing an invariant called the quantum period. This is a
sequence of integers which gives a numerical fingerprint for a Fano variety. It
is conjectured that a Fano variety is uniquely determined by its quantum
period. If this is true, one should be able to recover geometric properties of
a Fano variety directly from its quantum period. We apply machine learning to
the question: does the quantum period of X know the dimension of X? Note that
there is as yet no theoretical understanding of this. We show that a simple
feed-forward neural network can determine the dimension of X with 98% accuracy.
Building on this, we establish rigorous asymptotics for the quantum periods of
a class of Fano varieties. These asymptotics determine the dimension of X from
its quantum period. Our results demonstrate that machine learning can pick out
structure from complex mathematical data in situations where we lack
theoretical understanding. They also give positive evidence for the conjecture
that the quantum period of a Fano variety determines that variety.
|
http://arxiv.org/abs/2309.05473v1
|
Online social media have become an important forum for exchanging political
opinions. In response to COVID measures citizens expressed their policy
preferences directly on these platforms. Quantifying political preferences in
online social media remains challenging: The vast amount of content requires
scalable automated extraction of political preferences; however, fine-grained
political preference extraction is difficult with current machine learning (ML)
technology due to the lack of data sets. Here we present a novel data set of
tweets with fine-grained political preference annotations. A text
classification model trained on this data is used to extract policy preferences
in a German Twitter corpus ranging from 2019 to 2022. Our results indicate that
in response to the COVID pandemic, expression of political opinions increased.
Using a well-established taxonomy of policy preferences we analyse fine-grained
political views and highlight changes in distinct political categories. These
analyses suggest that the increase in policy preference expression is dominated
by the categories pro-welfare, pro-education and pro-governmental
administration efficiency. All training data and code used in this study are
made publicly available to encourage other researchers to further improve
automated policy preference extraction methods. We hope that our findings
contribute to a better understanding of political statements in online social
media and to a better assessment of how COVID measures impact political
preferences.
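As a rough sketch of the policy-preference classification step (using a TF-IDF plus logistic-regression baseline rather than the model trained in the paper), with invented example tweets and labels drawn from the categories mentioned above:

```python
# Toy policy-preference classifier; texts and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Schools need more funding and smaller classes.",
    "Expand unemployment benefits during the lockdown.",
    "Cut red tape so agencies can act faster.",
    "Invest in teachers and digital classrooms.",
]
labels = ["pro-education", "pro-welfare", "pro-admin-efficiency", "pro-education"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["More support payments for families hit by COVID measures."]))
```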
|
http://arxiv.org/abs/2308.04444v1
|
We consider a version of the classical group testing problem motivated by PCR
testing for COVID-19. In the so-called tropical group testing model, the
outcome of a test is the lowest cycle threshold (Ct) level of the individuals
pooled within it, rather than a simple binary indicator variable. We introduce
the tropical counterparts of three classical non-adaptive algorithms (COMP, DD
and SCOMP), and analyse their behaviour through both simulations and bounds on
error probabilities. By comparing the results of the tropical and classical
algorithms, we gain insight into the extra information provided by learning the
outcomes (Ct levels) of the tests. We show that in a limiting regime the
tropical COMP algorithm requires as many tests as its classical counterpart,
but that for sufficiently dense problems tropical DD can recover more
information with fewer tests, and can be viewed as essentially optimal in
certain regimes.
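A minimal sketch of the tropical testing model and a COMP-style decoder, assuming a test's outcome is the minimum Ct value among the pooled individuals (infinite if none are infected); the random design matrix and Ct values are toy choices, not those analysed in the paper.

```python
# Tropical group testing sketch: outcomes are minimum Ct values per pool,
# and a COMP-style decoder clears anyone appearing in an all-negative test.
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 8
design = rng.random((T, n)) < 0.3   # design[t, i]: item i is pooled in test t
ct = np.full(n, np.inf)             # non-infected individuals -> infinite Ct
defective = rng.random(n) < 0.1
ct[defective] = rng.uniform(15, 35, defective.sum())

# Tropical outcomes: minimum Ct over the pooled individuals (inf if no defectives).
outcomes = np.array([ct[design[t]].min() if design[t].any() else np.inf
                     for t in range(T)])

# COMP-style decoding: any item appearing in a test with infinite outcome is
# declared non-defective; the remaining items are declared (possibly) defective.
cleared = (design & np.isinf(outcomes)[:, None]).any(axis=0)
declared_defective = ~cleared
print(defective.astype(int))
print(declared_defective.astype(int))
```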
|
http://arxiv.org/abs/2309.07264v2
|
An accurate motion model is a fundamental component of most autonomous
navigation systems. While much work has been done on improving model
formulation, no standard protocol exists for gathering empirical data required
to train models. In this work, we address this issue by proposing Data-driven
Robot Input Vector Exploration (DRIVE), a protocol that enables characterizing
uncrewed ground vehicles (UGVs) input limits and gathering empirical model
training data. We also propose a novel learned slip approach outperforming
similar acceleration learning approaches. Our contributions are validated
through an extensive experimental evaluation, totaling over 7 km and 1.8 h of
driving data over three distinct UGVs and four terrain types. We show that our
protocol offers increased predictive performance over common human-driven
data-gathering protocols. Furthermore, our protocol converges with 46 s of
training data, almost four times less than the shortest human dataset gathering
protocol. We show that the operational limit for our model is reached in
extreme slip conditions encountered on surfaced ice. DRIVE is an efficient way
of characterizing UGV motion in its operational conditions. Our code and
dataset are both available online at this link:
https://github.com/norlab-ulaval/DRIVE.
|
http://arxiv.org/abs/2309.10718v2
|
The correlation between the sharpness of loss minima and generalisation in
the context of deep neural networks has been subject to discussion for a long
time. Whilst mostly investigated in the context of selected benchmark data sets
in the area of computer vision, we explore this aspect for the acoustic scene
classification task of the DCASE2020 challenge data. Our analysis is based on
two-dimensional filter-normalised visualisations and a derived sharpness
measure. Our exploratory analysis shows that sharper minima tend to show better
generalisation than flat minima (even more so for out-of-domain data recorded
from previously unseen devices), thus adding to the dispute about better
generalisation capabilities of flat minima. We further find that, in
particular, the choice of optimisers is a main driver of the sharpness of
minima and we discuss resulting limitations with respect to comparability. Our
code, trained model states and loss landscape visualisations are publicly
available.
|
http://arxiv.org/abs/2309.16369v2
|
The recent advancements in transformer-based visual trackers have led to
significant progress, attributed to their strong modeling capabilities.
However, as performance improves, running latency correspondingly increases,
presenting a challenge for real-time robotics applications, especially on edge
devices with computational constraints. In response to this, we introduce
LiteTrack, an efficient transformer-based tracking model optimized for
high-speed operations across various devices. It achieves a more favorable
trade-off between accuracy and efficiency than the other lightweight trackers.
The main innovations of LiteTrack encompass: 1) asynchronous feature extraction
and interaction between the template and search region for better feature
fusion and reduced redundant computation, and 2) pruning encoder layers from a
heavy tracker to refine the balance between performance and speed. As an
example, our fastest variant, LiteTrack-B4, achieves 65.2% AO on the GOT-10k
benchmark, surpassing all preceding efficient trackers, while running over 100
fps with ONNX on the Jetson Orin NX edge device. Moreover, our LiteTrack-B9
reaches competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and
operates at 171 fps on an NVIDIA 2080Ti GPU. The code and demo materials will
be available at https://github.com/TsingWei/LiteTrack.
|
http://arxiv.org/abs/2309.09249v1
|
We introduce logical synchrony, a framework that allows distributed computing
to be coordinated as tightly as in synchronous systems without the distribution
of a global clock or any reference to universal time. We develop a model of
events called a logical synchrony network, in which nodes correspond to
processors and every node has an associated local clock which generates the
events. We construct a measure of logical latency and develop its properties. A
further model, called a multiclock network, is then analyzed and shown to be a
refinement of the logical synchrony network. We present the bittide mechanism
as an instantiation of multiclock networks, and discuss the clock control
mechanism that ensures that buffers do not overflow or underflow. Finally we
give conditions under which a logical synchrony network has an equivalent
synchronous realization.
|
http://arxiv.org/abs/2308.00144v3
|
Axions and axion-like particles (ALPs) are one of the most widely discussed
extensions of the Standard Model when it comes to the strong CP problem and
dark matter candidates. Current experiments are focused on the indirect
searches of invisible pseudoscalars in a wide parameter range. In this paper we
investigate limits on ALP mass, and its couplings to photons and leptons from
3-photon annihilation at $e^+e^-$ colliders. We provide detailed calculations
and apply them to the particular kinematics of the Belle II experiment,
covering the ALP mass range from a few hundred MeV to around 10 GeV. Our
results, which improve upon previous analyses by also including the ALP
coupling to electrons, show that such future analyses will make it possible to
significantly extend the ALP search range and impose much more stringent
restrictions on their
couplings.
|
http://arxiv.org/abs/2309.15106v2
|
We propose a new boson expansion method using a norm operator. The small
parameter expansion, in which the boson approximation becomes the zeroth-order
approximation, requires the double commutation relations between phonon
operators that are not closed between the phonon excitation modes adopted as
boson excitations. This results in an infinite expansion regardless of whether
the type of the boson expansion is Hermitian or non-Hermitian. The small
parameter expansion does not hold when the commutation relations are closed.
The norm operator is expressed as a function of the number operator in the
physical subspace, which enables us to obtain an essentially finite boson
expansion regardless of the Hermitian or non-Hermitian type. We also point out
problems with the conventional boson expansion methods. The normal-ordered
linked-cluster expansion theory has failed to refute Marshalek's claim that
KT-1 and KT-2 are chimerical boson expansions. The Dyson boson expansion
theory does not have exceptional superiority over other types. Previous studies
using the boson expansion methods should be re-examined.
|
http://arxiv.org/abs/2303.17986v2
|
In recent years, there has been growing interest in combining techniques from
the areas of Statistics and Machine Learning in order to obtain
the benefits of both approaches. In this article, the statistical technique
lasso for variable selection is represented through a neural network. It is
observed that, although both the statistical approach and its neural version
have the same objective function, they differ due to their optimization. In
particular, the neural version is usually optimized in one step using a single
validation set, while the statistical counterpart uses a two-step optimization
based on cross-validation. The more elaborate optimization of the statistical
method results in more accurate parameter estimation, especially when the
training set is small. For this reason, a modification of the standard approach
for training neural networks, that mimics the statistical framework, is
proposed. During the development of the above modification, a new optimization
algorithm for identifying the significant variables emerged. Experimental
results, using synthetic and real data sets, show that this new optimization
algorithm achieves better performance than any of the three previous
optimization approaches.
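To make the contrast concrete, here is a hypothetical sketch of the two
optimisation styles discussed above using scikit-learn's lasso implementations
rather than the paper's neural formulation; the penalty grid, split sizes, and
random seed are illustrative assumptions.

```python
# Hypothetical sketch: (a) cross-validated penalty selection vs.
# (b) a single validation split, mirroring the two optimisation styles above.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV
from sklearn.model_selection import train_test_split

def fit_cross_validated(X, y):
    """Two-step route: choose the penalty by 5-fold CV, then refit."""
    return LassoCV(cv=5).fit(X, y)

def fit_single_split(X, y, alphas=np.logspace(-3, 1, 20)):
    """One-step analogue: pick the penalty on a single held-out validation set."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    best_alpha, best_err = None, np.inf
    for a in alphas:
        model = Lasso(alpha=a).fit(X_tr, y_tr)
        err = np.mean((model.predict(X_val) - y_val) ** 2)
        if err < best_err:
            best_alpha, best_err = a, err
    return Lasso(alpha=best_alpha).fit(X_tr, y_tr)
```

With small training sets the cross-validated route generally selects the
penalty more reliably than a single split, which mirrors the accuracy gap the
abstract attributes to the statistical approach.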
|
http://arxiv.org/abs/2309.03770v1
|
Quantum amplification is recognized as a key resource for precision
measurements. However, most conventional paradigms employ an ensemble of
independent particles that usually limit the performance of quantum
amplification in gain, spectral linewidth, etc. Here we demonstrate a new
signal-amplification scheme using cooperative 129Xe nuclear spins embedded
within a feedback circuit, where the noble-gas spin coherence time is enhanced
by at least one order of magnitude. Using this technique, the magnetic field
can be substantially pre-enhanced by more than three orders of magnitude and
read out in situ with an embedded 87Rb magnetometer. We realize an ultrahigh
magnetic sensitivity of 4.0 fT/Hz$^{1/2}$ that surpasses the photon-shot-noise
limit and even falls below the spin-projection noise of the embedded atomic
magnetometer, allowing
for exciting applications including searches for dark matter with sensitivity
well beyond supernova constraints. Our findings extend the physics of quantum
amplification to cooperative spin systems and can be generalized to a wide
variety of existing sensors, enabling a new class of cooperative quantum
sensors.
|
http://arxiv.org/abs/2309.11374v1
|