ID,TITLE,ABSTRACT,Computer Science,Physics,Mathematics,Statistics,Quantitative Biology,Quantitative Finance 16778,Probing Primordial-Black-Hole Dark Matter with Gravitational Waves," Primordial black holes (PBHs) have long been suggested as a candidate for making up some or all of the dark matter in the Universe. Most of the theoretically possible mass range for PBH dark matter has been ruled out with various null observations of expected signatures of their interaction with standard astrophysical objects. However, current constraints are significantly less robust in the 20 M_sun < M_PBH < 100 M_sun mass window, which has received much attention recently, following the detection of merging black holes with estimated masses of ~30 M_sun by LIGO and the suggestion that these could be black holes formed in the early Universe. We consider the potential of advanced LIGO (aLIGO) operating at design sensitivity to probe this mass range by looking for peaks in the mass spectrum of detected events. To quantify the background, which is due to black holes that are formed from dying stars, we model the shape of the stellar-black-hole mass function and calibrate its amplitude to match the O1 results. Adopting very conservative assumptions about the PBH and stellar-black-hole merger rates, we show that ~5 years of aLIGO data can be used to detect a contribution of >20 M_sun PBHs to dark matter down to f_PBH<0.5 at >99.9% confidence level. Combined with other probes that already suggest tension with f_PBH=1, the obtainable independent limits from aLIGO will thus enable a firm test of the scenario that PBHs make up all of dark matter. ",0,1,0,0,0,0 16779,Fundamental limits of low-rank matrix estimation: the non-symmetric case," We consider the high-dimensional inference problem where the signal is a low-rank matrix which is corrupted by an additive Gaussian noise. 
Given a probabilistic model for the low-rank matrix, we compute the limit in the large dimension setting for the mutual information between the signal and the observations, as well as the matrix minimum mean square error, while the rank of the signal remains constant. This allows us to locate the information-theoretic threshold for this estimation problem, i.e. the critical value of the signal intensity below which it is impossible to recover the low-rank matrix. ",0,0,1,0,0,0 16780,Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees," Gaussian processes (GPs) offer a flexible class of priors for nonparametric Bayesian regression, but popular GP posterior inference methods are typically prohibitively slow or lack desirable finite-data guarantees on quality. We develop an approach to scalable approximate GP regression with finite-data guarantees on the accuracy of pointwise posterior mean and variance estimates. Our main contribution is a novel objective for approximate inference in the nonparametric setting: the preconditioned Fisher (pF) divergence. We show that unlike the Kullback--Leibler divergence (used in variational inference), the pF divergence bounds the 2-Wasserstein distance, which in turn provides tight bounds on the pointwise difference of the mean and variance functions. We demonstrate that, for sparse GP likelihood approximations, we can minimize the pF divergence efficiently. Our experiments show that optimizing the pF divergence has the same computational requirements as variational sparse GPs while providing comparable empirical performance--in addition to our novel finite-data quality guarantees. ",0,0,0,1,0,0 16781,Integral representations and asymptotic behaviours of Mittag-Leffler type functions of two variables," The paper explores various special functions which generalize the two-parametric Mittag-Leffler type function of two variables. 
Integral representations for these functions in different domains of variation of arguments for certain values of the parameters are obtained. The asymptotic expansion formulas and asymptotic properties of such functions are also established for large values of the variables. Theorems stating these formulas and their corresponding properties are provided. ",0,0,1,0,0,0 16782,Efficient Mendler-Style Lambda-Encodings in Cedille," It is common to model inductive datatypes as least fixed points of functors. We show that within the Cedille type theory we can relax functoriality constraints and generically derive an induction principle for Mendler-style lambda-encoded inductive datatypes, which arise as least fixed points of covariant schemes where the morphism lifting is defined only on identities. Additionally, we implement a destructor for these lambda-encodings that runs in constant time. As a result, we can define lambda-encoded natural numbers with an induction principle and a constant-time predecessor function so that the normal form of a numeral requires only linear space. The paper also includes several more advanced examples. ",1,0,0,0,0,0 16783,Super Jack-Laurent Polynomials," Let $\mathcal{D}_{n,m}$ be the algebra of the quantum integrals of the deformed Calogero-Moser-Sutherland problem corresponding to the root system of the Lie superalgebra $\frak{gl}(n,m)$. The algebra $\mathcal{D}_{n,m}$ acts naturally on the quasi-invariant Laurent polynomials and we investigate the corresponding spectral decomposition. Even for general values of the parameter $k$ the spectral decomposition is not simple, and we prove that the image of the algebra $\mathcal{D}_{n,m}$ in the algebra of endomorphisms of the generalised eigen-space is $k[\varepsilon]^{\otimes r}$, where $k[\varepsilon]$ is the algebra of the dual numbers, and the corresponding representation is the regular representation of the algebra $k[\varepsilon]^{\otimes r}$. 
",0,0,1,0,0,0 16784,A New Classification of Technologies," This study suggests a classification of technologies based on taxonomic characteristics of the interaction between technologies in complex systems, a topic that has not been studied in the economics of technical change. The proposed taxonomy categorizes technologies into four typologies, in a broad analogy with ecology: 1) technological parasitism is a relationship between two technologies T1 and T2 in a complex system S where one technology T1 benefits from the interaction with T2, whereas T2 is negatively affected by the interaction with T1; 2) technological commensalism is a relationship between two technologies in S where one technology benefits from the other without affecting it; 3) technological mutualism is a relationship in which each technology benefits from the activity of the other within complex systems; 4) technological symbiosis is a long-term interaction between two (or more) technologies that evolve together in complex systems. This taxonomy systematizes the typologies of interactive technologies within complex systems and predicts their evolutionary pathways that generate stepwise coevolutionary processes of complex systems of technology. This study begins the process of generalizing, as far as possible, critical typologies of interactive technologies that explain the long-run evolution of technology. The theoretical framework developed here opens the black box of the interaction between technologies that affects, with different types of technologies, the evolutionary pathways of complex systems of technology over time and space. Overall, then, this new theoretical framework may be useful for bringing a new perspective to categorizing the gradient of benefit to technologies from interaction with other technologies, and can serve as groundwork for the development of more sophisticated concepts to clarify technological and economic change in human society. 
",1,0,0,0,0,0 16785,Potential functions on Grassmannians of planes and cluster transformations," With a triangulation of a planar polygon with $n$ sides, one can associate an integrable system on the Grassmannian of 2-planes in an $n$-space. In this paper, we show that the potential functions of Lagrangian torus fibers of the integrable systems associated with different triangulations glue together by cluster transformations. We also prove that the cluster transformations coincide with the wall-crossing formula in Lagrangian intersection Floer theory. ",0,0,1,0,0,0 16786,Physical properties of the first spectroscopically confirmed red supergiant stars in the Sculptor Group galaxy NGC 55," We present K-band Multi-Object Spectrograph (KMOS) observations of 18 Red Supergiant (RSG) stars in the Sculptor Group galaxy NGC 55. Radial velocities are calculated and are shown to be in good agreement with previous estimates, confirming the supergiant nature of the targets and providing the first spectroscopically confirmed RSGs in NGC 55. Stellar parameters are estimated for 14 targets using the $J$-band analysis technique, making use of state-of-the-art stellar model atmospheres. The metallicities estimated confirm the low-metallicity nature of NGC 55, in good agreement with previous studies. This study provides an independent estimate of the metallicity gradient of NGC 55, in excellent agreement with recent results published using hot massive stars. In addition, we calculate luminosities of our targets and compare their distribution of effective temperatures and luminosities to other RSGs, in different environments, estimated using the same technique. 
",0,1,0,0,0,0 16787,"Gas near a wall: a shortened mean free path, reduced viscosity, and the manifestation of a turbulent Knudsen layer in the Navier-Stokes solution of a shear flow"," For the gas near a solid planar wall, we propose a scaling formula for the mean free path of a molecule as a function of the distance from the wall, under the assumption of a uniform distribution of the incident directions of the molecular free flight. We subsequently impose the same scaling onto the viscosity of the gas near the wall, and compute the Navier-Stokes solution of the velocity of a shear flow parallel to the wall. This solution exhibits the Knudsen velocity boundary layer in agreement with the corresponding Direct Simulation Monte Carlo computations for argon and nitrogen. We also find that the proposed mean free path and viscosity scaling sets the second derivative of the velocity to infinity at the wall boundary of the flow domain, which suggests that the gas flow is formally turbulent within the Knudsen boundary layer near the wall. ",0,1,0,0,0,0 16788,Greater data science at baccalaureate institutions," Donoho's JCGS (in press) paper is a spirited call to action for statisticians, who he points out are losing ground in the field of data science by refusing to accept that data science is its own domain. (Or, at least, a domain that is becoming distinctly defined.) He calls on writings by John Tukey, Bill Cleveland, and Leo Breiman, among others, to remind us that statisticians have been dealing with data science for years, and encourages acceptance of the direction of the field while also ensuring that statistics is tightly integrated. As faculty at baccalaureate institutions (where the growth of undergraduate statistics programs has been dramatic), we are keen to ensure statistics has a place in data science and data science education. In his paper, Donoho is primarily focused on graduate education. 
At our undergraduate institutions, we are considering many of the same questions. ",0,0,0,1,0,0 16789,Direct evidence of hierarchical assembly at low masses from isolated dwarf galaxy groups," The demographics of dwarf galaxy populations have long been in tension with predictions from the Cold Dark Matter (CDM) paradigm. If primordial density fluctuations were scale-free as predicted, dwarf galaxies should themselves host dark matter subhaloes, the most massive of which may have undergone star formation resulting in dwarf galaxy groups. Ensembles of dwarf galaxies are observed as satellites of more massive galaxies, and there is observational and theoretical evidence to suggest that these satellites at z=0 were captured by the massive host halo as a group. However, the evolution of dwarf galaxies is highly susceptible to environment making these satellite groups imperfect probes of CDM in the low mass regime. We have identified one of the clearest examples to date of hierarchical structure formation at low masses: seven isolated, spectroscopically confirmed groups with only dwarf galaxies as members. Each group hosts 3-5 known members, has a baryonic mass of ~4.4 x 10^9 to 2 x 10^10 Msun, and requires a mass-to-light ratio of <100 to be gravitationally bound. Such groups are predicted to be rare theoretically and found to be rare observationally at the current epoch and thus provide a unique window into the possible formation mechanism of more massive, isolated galaxies. ",0,1,0,0,0,0 16790,Short-term Mortality Prediction for Elderly Patients Using Medicare Claims Data," Risk prediction is central to both clinical medicine and public health. While many machine learning models have been developed to predict mortality, they are rarely applied in the clinical literature, where classification tasks typically rely on logistic regression. 
One reason for this is that existing machine learning models often seek to optimize predictions by incorporating features that are not present in the databases readily available to providers and policy makers, limiting generalizability and implementation. Here we tested a number of machine learning classifiers for prediction of six-month mortality in a population of elderly Medicare beneficiaries, using an administrative claims database of the kind available to the majority of health care payers and providers. We show that machine learning classifiers substantially outperform current widely-used methods of risk prediction but only when used with an improved feature set incorporating insights from clinical medicine, developed for this study. Our work has applications to supporting patient and provider decision making at the end of life, as well as population health-oriented efforts to identify patients at high risk of poor outcomes. ",1,0,0,1,0,0 16791,R-C3D: Region Convolutional 3D Network for Temporal Activity Detection," We address the problem of activity detection in continuous, untrimmed video streams. This is a difficult task that requires extracting meaningful spatio-temporal features to capture activities and accurately localizing the start and end times of each activity. We introduce a new model, Region Convolutional 3D Network (R-C3D), which encodes the video streams using a three-dimensional fully convolutional network, then generates candidate temporal regions containing activities, and finally classifies selected regions into specific activities. Computation is saved due to the sharing of convolutional features between the proposal and the classification pipelines. The entire model is trained end-to-end with jointly optimized localization and classification losses. R-C3D is faster than existing methods (569 frames per second on a single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS'14. 
We further demonstrate that our model is a general activity detection framework that does not rely on assumptions about particular dataset properties by evaluating our approach on ActivityNet and Charades. Our code is available at this http URL. ",1,0,0,0,0,0 16792,Regularization for Deep Learning: A Taxonomy," Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps reveal links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods. ",1,0,0,1,0,0 16793,Re-Evaluating the Netflix Prize - Human Uncertainty and its Impact on Reliability," In this paper, we examine the statistical soundness of comparative assessments within the field of recommender systems in terms of reliability and human uncertainty. From a controlled experiment, we get the insight that users provide different ratings on the same items when repeatedly asked. This volatility of user ratings justifies the assumption of using probability densities instead of single rating scores. As a consequence, the well-known accuracy metrics (e.g. MAE, MSE, RMSE) yield a density themselves that emerges from the convolution of all rating densities. When two different systems produce different RMSE distributions with significant intersection, then there exists a probability of error for each possible ranking. As an application, we examine possible ranking errors of the Netflix Prize. 
We are able to show that all top rankings are more or less subject to high probabilities of error and that some rankings may be deemed to be caused by mere chance rather than system quality. ",1,0,0,0,0,0 16794,Infinite monochromatic sumsets for colourings of the reals," N. Hindman, I. Leader and D. Strauss proved that it is consistent that there is a finite colouring of $\mathbb R$ so that no infinite sumset $X+X=\{x+y:x,y\in X\}$ is monochromatic. Our aim in this paper is to prove a consistency result in the opposite direction: we show that, under certain set-theoretic assumptions, for any $c:\mathbb R\to r$ with $r$ finite there is an infinite $X\subseteq \mathbb R$ so that $c$ is constant on $X+X$. ",0,0,1,0,0,0 16795,Mean-Field Games with Differing Beliefs for Algorithmic Trading," Even when confronted with the same data, agents often disagree on a model of the real world. Here, we address the question of how interacting heterogeneous agents, who disagree on what model the real world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact they have on prices. This leads to a large stochastic game, where each agent's performance criterion is computed under a different probability measure. We analyse the mean-field game (MFG) limit of the stochastic game and show that the Nash equilibrium is given by the solution to a non-standard vector-valued forward-backward stochastic differential equation. Under some mild assumptions, we construct the solution in terms of expectations of the filtered states. We prove the MFG strategy forms an \epsilon-Nash equilibrium for the finite player game. Lastly, we present a least-squares Monte Carlo based algorithm for computing the optimal control and illustrate the results through simulation in a market where agents disagree on the model. 
",0,0,0,0,0,1 16796,"Energy efficiency of finite difference algorithms on multicore CPUs, GPUs, and Intel Xeon Phi processors"," In addition to hardware wall-time restrictions commonly seen in high-performance computing systems, it is likely that future systems will also be constrained by energy budgets. In the present work, finite difference algorithms of varying computational and memory intensity are evaluated with respect to both energy efficiency and runtime on an Intel Ivy Bridge CPU node, an Intel Xeon Phi Knights Landing processor, and an NVIDIA Tesla K40c GPU. The conventional way of storing the discretised derivatives to global arrays for solution advancement is found to be inefficient in terms of energy consumption and runtime. In contrast, a class of algorithms in which the discretised derivatives are evaluated on-the-fly or stored as thread-/process-local variables (yielding high compute intensity) is optimal both with respect to energy consumption and runtime. On all three hardware architectures considered, a speed-up of ~2 and an energy saving of ~2 are observed for the high compute intensive algorithms compared to the memory intensive algorithm. The energy consumption is found to be proportional to runtime, irrespective of the power consumed and the GPU has an energy saving of ~5 compared to the same algorithm on a CPU node. ",1,1,0,0,0,0 16797,A Plane of High Velocity Galaxies Across the Local Group," We recently showed that several Local Group (LG) galaxies have much higher radial velocities (RVs) than predicted by a 3D dynamical model of the standard cosmological paradigm. Here, we show that 6 of these 7 galaxies define a thin plane with root mean square thickness of only 101 kpc despite a widest extent of nearly 3 Mpc, much larger than the conventional virial radius of the Milky Way (MW) or M31. This plane passes within ${\sim 70}$ kpc of the MW-M31 barycentre and is oriented so the MW-M31 line is inclined by $16^\circ$ to it. 
We develop a toy model to constrain the scenario whereby a past MW-M31 flyby in Modified Newtonian Dynamics (MOND) forms tidal dwarf galaxies that settle into the recently discovered planes of satellites around the MW and M31. The scenario is viable only for a particular MW-M31 orbital plane. This roughly coincides with the plane of LG dwarfs with anomalously high RVs. Using a restricted $N$-body simulation of the LG in MOND, we show how the once fast-moving MW and M31 gravitationally slingshot test particles outwards at high speeds. The most distant such particles preferentially lie within the MW-M31 orbital plane, probably because the particles ending up with the highest RVs are those flung out almost parallel to the motion of the perturber. This suggests a dynamical reason for our finding of a similar trend in the real LG, something not easily explained as a chance alignment of galaxies with an isotropic or mildly flattened distribution (probability $= {0.0015}$). ",0,1,0,0,0,0 16798,Scalable Magnetic Field SLAM in 3D Using Gaussian Process Maps," We present a method for scalable and fully 3D magnetic field simultaneous localisation and mapping (SLAM) using local anomalies in the magnetic field as a source of position information. These anomalies are due to the presence of ferromagnetic material in the structure of buildings and in objects such as furniture. We represent the magnetic field map using a Gaussian process model and take well-known physical properties of the magnetic field into account. We build local maps using three-dimensional hexagonal block tiling. To make our approach computationally tractable we use reduced-rank Gaussian process regression in combination with a Rao-Blackwellised particle filter. We show that it is possible to obtain accurate position and orientation estimates using measurements from a smartphone, and that our approach provides a scalable magnetic field SLAM algorithm in terms of both computational complexity and map storage. 
",1,0,0,1,0,0 16799,K-means Algorithm over Compressed Binary Data," We consider a network of binary-valued sensors with a fusion center. The fusion center has to perform K-means clustering on the binary data transmitted by the sensors. In order to reduce the amount of data transmitted within the network, the sensors compress their data with a source coding scheme based on binary sparse matrices. We propose to apply the K-means algorithm directly over the compressed data without reconstructing the original sensor measurements, in order to avoid potentially complex decoding operations. We provide approximated expressions of the error probabilities of the K-means steps in the compressed domain. From these expressions, we show that applying the K-means algorithm in the compressed domain makes it possible to recover the clusters of the original domain. Monte Carlo simulations illustrate the accuracy of the obtained approximated error probabilities, and show that the coding rate needed to perform K-means clustering in the compressed domain is lower than the rate needed to reconstruct all the measurements. ",1,0,1,0,0,0 16800,Variational Inference for Gaussian Process Models with Linear Complexity," Large-scale Gaussian process inference has long faced practical challenges due to time and space complexity that is superlinear in dataset size. While sparse variational Gaussian process models are capable of learning from large-scale data, standard strategies for sparsifying the model can prevent the approximation of complex functions. In this work, we propose a novel variational Gaussian process model that decouples the representation of mean and covariance functions in reproducing kernel Hilbert space. We show that this new parametrization generalizes previous models. 
Furthermore, it yields a variational inference problem that can be solved by stochastic gradient ascent with time and space complexity that is only linear in the number of mean function parameters, regardless of the choice of kernels, likelihoods, and inducing points. This strategy makes the adoption of large-scale expressive Gaussian process models possible. We run several experiments on regression tasks and show that this decoupled approach greatly outperforms previous sparse variational Gaussian process inference procedures. ",1,0,0,1,0,0 16801,Incorporation of prior knowledge of the signal behavior into the reconstruction to accelerate the acquisition of MR diffusion data," Diffusion MRI measurements using hyperpolarized gases are generally acquired during patient breath hold, which yields a compromise between achievable image resolution, lung coverage and number of b-values. In this work, we propose a novel method that accelerates the acquisition of MR diffusion data by undersampling in both spatial and b-value dimensions, thanks to incorporating knowledge about the signal decay into the reconstruction (SIDER). SIDER is compared to total variation (TV) reconstruction by assessing their effect on both the recovery of ventilation images and estimated mean alveolar dimensions (MAD). Both methods are assessed by retrospectively undersampling diffusion datasets of normal volunteers and COPD patients (n=8) for acceleration factors between x2 and x10. TV led to large errors and artefacts for acceleration factors equal to or larger than x5. SIDER improved TV, presenting lower errors and histograms of MAD closer to those obtained from fully sampled data for acceleration factors up to x10. SIDER preserved image quality at all acceleration factors but images were slightly smoothed and some details were lost at x10. 
In conclusion, we have developed and validated a novel compressed sensing method for lung MR imaging and achieved high acceleration factors, which can be used to increase the amount of data acquired during a breath-hold. This methodology is expected to improve the accuracy of estimated lung microstructure dimensions and widen the possibilities of studying lung diseases with MRI. ",1,1,0,0,0,0 16802,Rabi noise spectroscopy of individual two-level tunneling defects," Understanding the nature of two-level tunneling defects is important for minimizing their disruptive effects in various nano-devices. By exploiting the resonant coupling of these defects to a superconducting qubit, one can probe and coherently manipulate them individually. In this work we utilize a phase qubit to induce Rabi oscillations of single tunneling defects and measure their dephasing rates as a function of the defect's asymmetry energy, which is tuned by an applied strain. The dephasing rates scale quadratically with the external strain and are inversely proportional to the Rabi frequency. These results are analyzed and explained within a model of interacting standard defects, in which pure dephasing of coherent high-frequency (GHz) defects is caused by interaction with incoherent low-frequency thermally excited defects. ",0,1,0,0,0,0 16803,Learning rate adaptation for federated and differentially private learning," We propose an algorithm for the adaptation of the learning rate for stochastic gradient descent (SGD) that avoids the need for validation set use. The idea for the adaptiveness comes from the technique of extrapolation: to get an estimate for the error against the gradient flow which underlies SGD, we compare the result obtained by one full step and two half-steps. The algorithm is applied in two separate frameworks: federated and differentially private learning. 
Using examples of deep neural networks, we empirically show that the adaptive algorithm is competitive with manually tuned, commonly used optimisation methods for differentially private training. We also show that it works robustly in the case of federated learning, unlike commonly used optimisation methods. ",0,0,0,1,0,0 16804,Holomorphic Hermite polynomials in two variables," Generalizations of the Hermite polynomials to many variables and/or to the complex domain have been located in mathematical and physical literature for some decades. Polynomials traditionally called complex Hermite ones are mostly understood as polynomials in $z$ and $\bar{z}$, which in fact makes them polynomials in two real variables with complex coefficients. The present paper proposes to investigate for the first time holomorphic Hermite polynomials in two variables. Their algebraic and analytic properties are developed here. While the algebraic properties do not differ too much from those considered so far, their analytic features are based on a kind of non-rotational orthogonality invented by van Eijndhoven and Meyers. Inspired by their invention we merely follow the idea of Bargmann's seminal paper (1961) giving explicit construction of reproducing kernel Hilbert spaces based on those polynomials. ""Homotopic"" behavior of our new formation culminates in comparing it to the very classical Bargmann space of two variables on one edge and the aforementioned Hermite polynomials in $z$ and $\bar{z}$ on the other. Unlike in the case of Bargmann's basis, our Hermite polynomials are not product ones, but factorize to it when bonded together with the first case of limit properties leading both to the Bargmann basis and a suitable form of the reproducing kernel. Also in the second limit we recover standard results obeyed by Hermite polynomials in $z$ and $\bar{z}$. 
",0,0,1,0,0,0 16805,"Equilibria, information and frustration in heterogeneous network games with conflicting preferences"," Interactions between people are the basis on which the structure of our society arises as a complex system and, at the same time, are the starting point of any physical description of it. In the last few years, much theoretical research has addressed this issue by combining the physics of complex networks with a description of interactions in terms of evolutionary game theory. We here take this research a step further by introducing a most salient societal factor such as the individuals' preferences, a characteristic that is key to understanding much of the social phenomenology these days. We consider a heterogeneous, agent-based model in which agents interact strategically with their neighbors, but their preferences and payoffs for the possible actions differ. We study how such a heterogeneous network behaves under evolutionary dynamics and different strategic interactions, namely coordination games and best shot games. With this model we study the emergence of the equilibria predicted analytically in random graphs under best response dynamics, and we extend this test to unexplored contexts like proportional imitation and scale-free networks. We show that some theoretically predicted equilibria do not arise in simulations with incomplete information, and we demonstrate the importance of the graph topology and the payoff function parameters for some games. Finally, we discuss our results with available experimental evidence on coordination games, showing that our model agrees with the experiment better than standard economic theories do, and draw hints as to how to maximize social efficiency in situations of conflicting preferences. ",1,1,0,0,0,0 16806,Scalable Generalized Dynamic Topic Models," Dynamic topic models (DTMs) model the evolution of prevalent themes in literature, online media, and other forms of text over time. 
DTMs assume that word co-occurrence statistics change continuously and therefore impose continuous stochastic process priors on their model parameters. These dynamical priors make inference much harder than in regular topic models, and also limit scalability. In this paper, we present several new results around DTMs. First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs). This allows us to explore topics that develop smoothly over time, that have a long-term memory, or that are temporally concentrated (for event detection). Second, we show how to perform scalable approximate inference in these models based on ideas around stochastic variational inference and sparse Gaussian processes. This way we can train a rich family of DTMs on massive data. Our experiments on several large-scale datasets show that our generalized model allows us to find interesting patterns that were not accessible by previous approaches. ",0,0,0,1,0,0 16807,Session Types for Orchestrated Interactions," In the setting of the pi-calculus with binary sessions, we aim at relaxing the notion of duality of session types by the concept of retractable compliance developed in contract theory. This leads to extending session types with a new type operator of ""speculative selection"", including choices not necessarily offered by a compliant partner. We address the problem of selecting successful communicating branches by means of an operational semantics based on orchestrators, which has been shown to be equivalent to the retractable semantics of contracts, but clearly more feasible. A type system, sound with respect to such a semantics, is hence provided. ",1,0,0,0,0,0 16808,An Agent-Based Approach for Optimizing Modular Vehicle Fleet Operation," Modularity in military vehicle designs enables on-base assembly, disassembly, and reconfiguration of vehicles, which can be beneficial in promoting fleet adaptability and life cycle cost savings. 
To properly manage the fleet operation and to control the resupply, demand prediction, and scheduling process, this paper illustrates an agent-based approach customized for highly modularized military vehicle fleets and studies the feasibility and flexibility of modularity for various mission scenarios. Given deterministic field demands with operation stochasticity, we compare the performance of a modular fleet to a conventional fleet under equivalent operation strategies, and also compare fleet performance driven by heuristic rules and by optimization. Several indicators are selected to quantify the fleet performance, including operation costs, total resupplied resources, and fleet readiness. When the model is implemented for a military Joint Tactical Transport System (JTTS) mission, our results indicate that fleet modularity can reduce total resource supplies without significant losses in fleet readiness. The benefits of fleet modularity can also be amplified through a real-time optimized operation strategy. To highlight the feasibility of fleet modularity, a parametric study is performed to show the impact of working capacity on modular fleet performance. Finally, we provide practical suggestions for modular vehicle designs based on the analysis, along with other possible usages. ",1,0,0,0,0,0 16809,Delta-epsilon functions and uniform continuity on metric spaces," Under certain general conditions, an explicit formula to compute the greatest delta-epsilon function of a continuous function is given. From this formula, a new way to analyze the uniform continuity of a continuous function is derived. Several examples illustrating the theory are discussed. ",0,0,1,0,0,0 16810,Deterministic Dispersion of Mobile Robots in Dynamic Rings," In this work, we study the problem of dispersion of mobile robots on dynamic rings. The problem of dispersion of $n$ robots on an $n$ node graph, introduced by Augustine and Moses Jr. 
[1], requires robots to coordinate with each other and reach a configuration where exactly one robot is present on each node. This problem has real-world applications and applies whenever we want to minimize the total cost of $n$ agents sharing $n$ resources, located at various places, subject to the constraint that the cost of an agent moving to a different resource is comparatively much smaller than the cost of multiple agents sharing a resource (e.g. smart electric cars sharing recharge stations). The study of this problem also provides indirect benefits to the study of scattering on graphs, the study of exploration by mobile robots, and the study of load balancing on graphs. We solve the problem of dispersion in the presence of two types of dynamism in the underlying graph: (i) vertex permutation and (ii) 1-interval connectivity. We introduce the notion of vertex permutation dynamism, by which we mean that, for a given set of nodes, in every round the adversary ensures a ring structure is maintained, but the connections between the nodes may change. We use the idea of 1-interval connectivity from Di Luna et al. [10], where for a given ring, in each round, the adversary chooses at most one edge to remove. We assume robots have full visibility and present asymptotically time-optimal algorithms to achieve dispersion in the presence of both types of dynamism when robots have chirality. When robots do not have chirality, we present asymptotically time-optimal algorithms to achieve dispersion subject to certain constraints. Finally, we provide impossibility results for dispersion when robots have no visibility. ",1,0,0,0,0,0 16811,A brain signature highly predictive of future progression to Alzheimer's dementia," Early prognosis of Alzheimer's dementia is hard. Mild cognitive impairment (MCI) typically precedes Alzheimer's dementia, yet only a fraction of MCI individuals will progress to dementia, even when screened using biomarkers. 
We propose here to identify a subset of individuals who share a common brain signature highly predictive of oncoming dementia. This signature was composed of brain atrophy and functional dysconnectivity, and was discovered using a machine learning model in patients suffering from dementia. The model recognized the same brain signature in MCI individuals, 90% of whom progressed to dementia within three years. This result is a marked improvement on the state-of-the-art in prognostic precision, while the brain signature still identified 47% of all MCI progressors. We thus discovered a sizable MCI subpopulation which represents an excellent recruitment target for clinical trials at the prodromal stage of Alzheimer's disease. ",0,0,0,1,0,0 16812,Deep scattering transform applied to note onset detection and instrument recognition," Automatic Music Transcription (AMT) is one of the oldest and most well-studied problems in the field of music information retrieval. Within this challenging research field, onset detection and instrument recognition play important roles in transcription systems, as they respectively help to determine exact onset times of notes and to recognize the corresponding instrument sources. The aim of this study is to explore the usefulness of multiscale scattering operators for these two tasks on plucked string instrument and piano music. After reviewing the theoretical background and illustrating the key features of this sound representation method, we evaluate its performance in comparison with other classical sound representations. Using both MIDI-driven datasets with real instrument samples and real musical pieces, scattering is shown to outperform other sound representations for these AMT subtasks, putting forward its richer sound representation and invariance properties. ",1,0,0,1,0,0 16813,Gaschütz Lemma for Compact Groups," We prove the Gaschütz Lemma holds for all metrisable compact groups. 
",0,0,1,0,0,0 16814,Driven flow with exclusion and spin-dependent transport in graphenelike structures," We present a simplified description for spin-dependent electronic transport in honeycomb-lattice structures with spin-orbit interactions, using generalizations of the stochastic non-equilibrium model known as the totally asymmetric simple exclusion process. Mean field theory and numerical simulations are used to study currents, density profiles and current polarization in quasi-one-dimensional systems with open boundaries, and externally-imposed particle injection ($\alpha$) and ejection ($\beta$) rates. We investigate the influence of allowing for double site occupancy, according to Pauli's exclusion principle, on the behavior of the quantities of interest. We find that double occupancy shows strong signatures for specific combinations of rates, namely high $\alpha$ and low $\beta$, but otherwise its effects are quantitatively suppressed. Comments are made on the possible relevance of the present results to experiments on suitably doped graphenelike structures. ",0,1,0,0,0,0 16815,MOG: Mapper on Graphs for Relationship Preserving Clustering," The interconnected nature of graphs often results in difficult-to-interpret clutter. Typically, techniques focus on either decluttering by clustering nodes with similar properties or grouping edges with similar relationships. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on given data by utilizing a scalar function defined on every point in the data and a cover of the scalar function's codomain. The output of mapper is a graph that summarizes the shape of the space. 
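The mapper construction just described (a scalar filter function, an overlapping cover of its codomain, and clustering within each cover preimage) can be sketched in a few lines. This is a minimal, illustrative 1-D sketch rather than the paper's implementation: the name `mapper_1d`, the interval cover, and the gap-threshold clustering step are assumptions made for the example.

```python
def mapper_1d(points, f, cover, eps):
    """Minimal mapper sketch on 1-D data.

    points: list of scalars; f: filter function; cover: overlapping intervals
    covering the filter's range; eps: gap threshold used as a simple
    single-linkage clustering stand-in.
    """
    nodes = []  # each node: (cover interval index, frozenset of point indices)
    for ci, (lo, hi) in enumerate(cover):
        idx = [i for i, p in enumerate(points) if lo <= f(p) <= hi]
        idx.sort(key=lambda i: points[i])
        clusters, cur = [], []
        for i in idx:
            if cur and points[i] - points[cur[-1]] > eps:  # gap -> new cluster
                clusters.append(cur)
                cur = []
            cur.append(i)
        if cur:
            clusters.append(cur)
        nodes.extend((ci, frozenset(c)) for c in clusters)
    # connect two nodes whenever their clusters share a data point
    edges = {(a, b) for a in range(len(nodes)) for b in range(a + 1, len(nodes))
             if nodes[a][1] & nodes[b][1]}
    return nodes, edges

# Ten points on a line with the identity filter yield a path-shaped summary.
pts = [float(v) for v in range(10)]
nodes, edges = mapper_1d(pts, lambda p: p, [(0, 3.5), (2.5, 6.5), (5.5, 9)], eps=1.5)
```

On this toy input each of the three cover intervals contributes one cluster, and consecutive clusters overlap in one shared point, so the output summary is a 3-node path.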
In this paper, we outline how to use this mapper construction on an input graph, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real-world data sets and demonstrate how our method can give meaningful summaries for graphs with various complexities. ",0,0,0,1,0,0 16816,"Variation Evolving for Optimal Control Computation, A Compact Way"," A compact version of the Variation Evolving Method (VEM) is developed for optimal control computation. It follows an idea that originates from continuous-time dynamics stability theory in the control field. The optimal solution is analogized to the equilibrium point of a dynamic system and is anticipated to be obtained in an asymptotically evolving way. With the introduction of a virtual dimension, the variation time, the Evolution Partial Differential Equation (EPDE), which describes the variation motion towards the optimal solution, is deduced from the Optimal Control Problem (OCP), and the equivalent optimality conditions with no employment of costates are established. In particular, it is found that theoretically the analytic feedback optimal control law does not exist for general OCPs because the optimal control is related to the future state. Since the derived EPDE is suitable for solution with the semi-discrete method from the field of numerical PDE calculation, the resulting Initial-value Problems (IVPs) may be solved with mature Ordinary Differential Equation (ODE) numerical integration methods. ",1,0,0,0,0,0 16817,Transforming Sensor Data to the Image Domain for Deep Learning - an Application to Footstep Detection," Convolutional Neural Networks (CNNs) have become the state-of-the-art in various computer vision tasks, but they are still premature for most sensor data, especially in pervasive and wearable computing. 
A major reason for this is the limited amount of annotated training data. In this paper, we propose the idea of leveraging the discriminative power of pre-trained deep CNNs on 2-dimensional sensor data by transforming the sensor modality to the visual domain. Using three proposed strategies, 2D sensor output is converted into pressure-distribution imagery. Then we utilize a pre-trained CNN for transfer learning on the converted imagery data. We evaluate our method on a gait dataset of floor surface pressure mapping. We obtain a classification accuracy of 87.66%, which outperforms the conventional machine learning methods by over 10%. ",1,0,0,0,0,0 16818,The Price of Differential Privacy For Online Learning," We design differentially private algorithms for the problem of online linear optimization in the full information and bandit settings with optimal $\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our results demonstrate that $\epsilon$-differential privacy may be ensured for free -- in particular, the regret bounds scale as $O(\sqrt{T})+\tilde{O}\left(\frac{1}{\epsilon}\right)$. For bandit linear optimization, and as a special case, for non-stochastic multi-armed bandits, the proposed algorithm achieves a regret of $\tilde{O}\left(\frac{1}{\epsilon}\sqrt{T}\right)$, while the previously known best regret bound was $\tilde{O}\left(\frac{1}{\epsilon}T^{\frac{2}{3}}\right)$. ",1,0,0,1,0,0 16819,Simulation chain and signal classification for acoustic neutrino detection in seawater," Acoustic neutrino detection is a promising approach to extend the energy range of neutrino telescopes to energies beyond $10^{18}$\,eV. Currently operational and planned water-Cherenkov neutrino telescopes, most notably KM3NeT, include acoustic sensors in addition to the optical ones. These acoustic sensors could be used as instruments for acoustic detection, while their main purpose is the position calibration of the detection units. 
In this article, a Monte Carlo simulation chain for acoustic detectors will be presented, covering the initial interaction of the neutrino up to the signal classification of recorded events. The ambient and transient background in the simulation was implemented according to data recorded by the acoustic set-up AMADEUS inside the ANTARES detector. The effects of refraction on the neutrino signature in the detector are studied, and a classification of the recorded events is implemented. As bipolar waveforms similar to those of the expected neutrino signals are also emitted from other sound sources, additional features like the geometrical shape of the propagation have to be considered for the signal classification. This leads to a large improvement of the background suppression by almost two orders of magnitude, since a flat cylindrical ""pancake"" propagation pattern is a distinctive feature of neutrino signals. An overview of the simulation chain and the signal classification will be presented, and preliminary studies of the performance of the classification will be discussed. ",0,1,0,0,0,0 16820,Parameter Space Noise for Exploration," Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows us to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. 
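The contrast between action-space and parameter-space noise described above can be illustrated with a toy sketch (not the paper's algorithm; the linear policy and helper names are hypothetical). Perturbing the parameters once, e.g. at episode start, gives the consistent exploration mentioned in the abstract: the same state always maps to the same action within an episode, whereas action-space noise re-randomizes at every step.

```python
import random

def linear_policy(w, s):
    # toy deterministic policy: scalar action from a 1-D state
    return w * s

def act_with_action_noise(w, s, rng, sigma=0.1):
    # action-space noise: a fresh Gaussian perturbation at every step
    return linear_policy(w, s) + rng.gauss(0.0, sigma)

def perturb_parameters(w, rng, sigma=0.1):
    # parameter-space noise: perturb the weights once (e.g. per episode),
    # then act deterministically with the perturbed weights
    return w + rng.gauss(0.0, sigma)

rng = random.Random(0)
w, s = 1.0, 2.0
w_tilde = perturb_parameters(w, rng)

# parameter noise: identical state -> identical action within the episode
p1 = linear_policy(w_tilde, s)
p2 = linear_policy(w_tilde, s)

# action noise: identical state -> different actions at different steps
a1 = act_with_action_noise(w, s, rng)
a2 = act_with_action_noise(w, s, rng)
```

The point of the sketch is only the temporal structure: `p1` and `p2` coincide, while `a1` and `a2` differ.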
Our results show that RL with parameter noise learns more efficiently than either traditional RL with action space noise or evolutionary strategies alone. ",1,0,0,1,0,0 16821,Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network," We present Deep Illumination, a novel machine learning technique for approximating global illumination (GI) in real-time applications using a Conditional Generative Adversarial Network. Our primary focus is on generating indirect illumination and soft shadows with offline rendering quality at interactive rates. Inspired by recent advances in image-to-image translation problems using deep generative convolutional networks, we introduce a variant of this network that learns a mapping from Gbuffers (depth map, normal map, and diffuse map) and direct illumination to any global illumination solution. Our primary contribution is showing that a generative model can be used to learn a density estimation from screen space buffers to an advanced illumination model for a 3D environment. Once trained, our network can approximate global illumination for scene configurations it has never encountered before within the environment it was trained on. We evaluate Deep Illumination through a comparison with both a state-of-the-art real-time GI technique (VXGI) and an offline rendering GI technique (path tracing). We show that our method produces effective GI approximations and is also computationally cheaper than existing GI techniques. Our technique has the potential to replace existing precomputed and screen-space techniques for producing global illumination effects in dynamic scenes with physically-based rendering quality. ",1,0,0,0,0,0 16822,Fraternal Dropout," Recurrent neural networks (RNNs) are an important class of neural network architectures, useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder than for feed-forward neural networks. 
A number of techniques have been proposed in the literature to address this problem. In this paper, we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to the dropout mask, and thus robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective, which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks. ",1,0,0,1,0,0 16823,Finite-sample bounds for the multivariate Behrens-Fisher distribution with proportional covariances," The Behrens-Fisher problem is a well-known hypothesis testing problem in statistics concerning two-sample mean comparison. In this article, we confirm one conjecture in Eaton and Olshen (1972), which provides stochastic bounds for the multivariate Behrens-Fisher test statistic under the null hypothesis. We also extend their results on the stochastic ordering of random quotients to the arbitrary finite-dimensional case. This work can also be seen as a generalization of Hsu (1938), which provided bounds for the univariate Behrens-Fisher problem. The results obtained in this article can be used to derive a testing procedure for the multivariate Behrens-Fisher problem that strongly controls the Type I error. 
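For illustration, a sketch of the univariate case studied by Hsu (1938): the Welch statistic with Welch-Satterthwaite degrees of freedom (the function name and the choice of df formula are assumptions made for the example; the multivariate statistic replaces the scalar variances by the matrix $S_1/n_1 + S_2/n_2$).

```python
from statistics import mean, variance
from math import sqrt

def welch_statistic(x, y):
    """Univariate Behrens-Fisher (Welch) statistic and approximate df.

    Returns the t-type statistic for unequal variances together with the
    Welch-Satterthwaite degrees of freedom.
    """
    n1, n2 = len(x), len(y)
    v1, v2 = variance(x) / n1, variance(y) / n2  # per-group squared standard errors
    t = (mean(x) - mean(y)) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Identical samples: zero statistic, and equal variances recover df = n1 + n2 - 2.
t, df = welch_statistic([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```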
",0,0,1,1,0,0 16824,Evidence for mixed rationalities in preference formation," Understanding the mechanisms underlying the formation of cultural traits, such as preferences, opinions and beliefs, is an open challenge. Trait formation is intimately connected to cultural dynamics, which has been the focus of a variety of quantitative models. Recently, some studies have emphasized the importance of connecting those models to snapshots of cultural dynamics that are empirically accessible. By analyzing data obtained from different sources, it has been suggested that culture has properties that are universally present, and that empirical cultural states differ systematically from randomized counterparts. Hence, a question about the mechanism responsible for the observed patterns naturally arises. This study proposes a stochastic structural model for generating cultural states that retain those robust, empirical properties. One ingredient of the model, already used in previous work, assumes that every individual's set of traits is partly dictated by one of several, universal ""rationalities"", informally postulated by several social science theories. The second, new ingredient taken from the same theories assumes that, apart from a dominant rationality, each individual also has a certain exposure to the other rationalities. It is shown that both ingredients are required for reproducing the empirical regularities. This key result suggests that the effects of cultural dynamics in the real world can be described as an interplay of multiple, mixing rationalities, and thus provides indirect evidence for the class of social science theories postulating such mixing. The model should be seen as a static, effective description of culture, while a dynamical, more fundamental description is left for future research. ",1,1,0,0,0,0 16825,A Variance Maximization Criterion for Active Learning," Active learning aims to train a classifier as fast as possible with as few labels as possible. 
The core element in virtually any active learning strategy is the criterion that measures the usefulness of the unlabeled data, based on which the new points to be labeled are picked. We propose a novel approach, which we refer to as maximizing variance for active learning, or MVAL for short. MVAL measures the value of unlabeled instances by evaluating the rate of change of output variables caused by changes in the next sample to be queried and its potential labelling. In a sense, this criterion measures how unstable the classifier's output is for the unlabeled data points under perturbations of the training data. MVAL maintains what we refer to as retraining information matrices to keep track of these output scores, and exploits two kinds of variance to measure the informativeness and representativeness, respectively. By fusing these variances, MVAL is able to select the instances which are both informative and representative. We employ our technique in combination with both logistic regression and support vector machines, and demonstrate that MVAL achieves state-of-the-art performance in experiments on a large number of standard benchmark datasets. ",1,0,0,1,0,0 16826,"Polarization, plasmon, and Debye screening in doped 3D ani-Weyl semimetal"," We compute the polarization function in a doped three-dimensional anisotropic-Weyl semimetal, in which the fermion energy dispersion is linear in two components of the momenta and quadratic in the third. Through detailed calculations, we find that the long wavelength plasmon mode depends on the fermion density $n_e$ in the form $\Omega_{p}^{\bot}\propto n_{e}^{3/10}$ within the basal plane and behaves as $\Omega_{p}^{z}\propto n_{e}^{1/2}$ along the third direction. This unique characteristic of the plasmon mode can be probed by various experimental techniques, such as electron energy-loss spectroscopy. The Debye screening at finite chemical potential and finite temperature is also analyzed based on the polarization function. 
",0,1,0,0,0,0 16827,Identifying Product Order with Restricted Boltzmann Machines," Unsupervised machine learning via a restricted Boltzmann machine is a useful tool in distinguishing an ordered phase from a disordered phase. Here we study its application on the two-dimensional Ashkin-Teller model, which features a partially ordered product phase. We train the neural network with spin configuration data generated by Monte Carlo simulations and show that distinct features of the product phase can be learned from non-ergodic samples resulting from symmetry breaking. Careful analysis of the weight matrices inspires us to define a nontrivial machine-learning-motivated quantity of the product form, which resembles the conventional product order parameter. ",0,1,0,0,0,0 16828,A finite temperature study of ideal quantum gases in the presence of one dimensional quasi-periodic potential," We study the thermodynamics of the ideal Bose gas as well as the transport properties of non-interacting bosons and fermions in a one-dimensional quasi-periodic potential, namely the Aubry-André (AA) model, at finite temperature. For bosons in finite-size systems, the effects of the quasi-periodic potential on the crossover phenomena corresponding to Bose-Einstein condensation (BEC), superfluidity and localization phenomena at finite temperatures are investigated. From the ground state number fluctuation we calculate the crossover temperature of BEC, which exhibits non-monotonic behavior with the strength of the AA potential and vanishes at the self-dual critical point following a power law. Appropriate rescaling of the crossover temperatures reveals universal behavior, which is studied for different quasi-periodicity of the AA model. Finally, we study the temperature and flux dependence of the persistent current of fermions in the presence of a quasi-periodic potential to identify the localization at the Fermi energy from the decay of the current. 
",0,1,0,0,0,0 16829,High-Frequency Analysis of Effective Interactions and Bandwidth for Transient States after Monocycle Pulse Excitation of Extended Hubbard Model," Using a high-frequency expansion in periodically driven extended Hubbard models, where the strengths and ranges of density-density interactions are arbitrary, we obtain the effective interactions and bandwidth, which depend sensitively on the polarization of the driving field. Then, we numerically calculate modulations of correlation functions in a quarter-filled extended Hubbard model with nearest-neighbor interactions on a triangular lattice with trimers after monocycle pulse excitation. We discuss how the resultant modulations are compatible with the effective interactions and bandwidth derived above on the basis of their dependence on the polarization of photoexcitation, which is easily accessible by experiments. Some correlation functions after monocycle pulse excitation are consistent with the effective interactions, which are weaker or stronger than the original ones. However, the photoinduced enhancement of anisotropic charge correlations previously discussed for the three-quarter-filled organic conductor $\alpha$-(bis[ethylenedithio]-tetrathiafulvalene)$_2$I$_3$ [$\alpha$-(BEDT-TTF)$_2$I$_3$] in the metallic phase is not fully explained by the effective interactions or bandwidth, which are derived independently of the filling. ",0,1,0,0,0,0 16830,"Fast binary embeddings, and quantized compressed sensing with structured matrices"," This paper deals with two related problems, namely distance-preserving binary embeddings and quantization for compressed sensing. First, we propose fast methods to replace points from a subset $\mathcal{X} \subset \mathbb{R}^n$, associated with the Euclidean metric, with points in the cube $\{\pm 1\}^m$, and we associate the cube with a pseudo-metric that approximates Euclidean distance among points in $\mathcal{X}$. 
Our methods rely on quantizing fast Johnson-Lindenstrauss embeddings based on bounded orthonormal systems and partial circulant ensembles, both of which admit fast transforms. Our quantization methods utilize noise-shaping, and include Sigma-Delta schemes and distributed noise-shaping schemes. The resulting approximation errors decay polynomially and exponentially fast in $m$, depending on the embedding method. This dramatically outperforms the current decay rates associated with binary embeddings and Hamming distances. Additionally, it is the first such binary embedding result that applies to fast Johnson-Lindenstrauss maps while preserving $\ell_2$ norms. Second, we again consider noise-shaping schemes, albeit this time to quantize compressed sensing measurements arising from bounded orthonormal ensembles and partial circulant matrices. We show that these methods yield a reconstruction error that again decays with the number of measurements (and bits), when using convex optimization for reconstruction. Specifically, for Sigma-Delta schemes, the error decays polynomially in the number of measurements, and it decays exponentially for distributed noise-shaping schemes based on beta encoding. These results are near optimal and the first of their kind dealing with bounded orthonormal systems. ",0,0,0,1,0,0 16831,The Many Faces of Link Fraud," Most past work on social network link fraud detection tries to separate genuine users from fraudsters, implicitly assuming that there is only one type of fraudulent behavior. But is this assumption true? And, in either case, what are the characteristics of such fraudulent behaviors? In this work, we set up honeypots (""dummy"" social network accounts), and buy fake followers (after careful IRB approval). We report the signs of such behaviors including oddities in local network connectivity, account attributes, and similarities and differences across fraud providers. 
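Returning to the noise-shaping schemes of the preceding quantization abstract: a first-order Sigma-Delta quantizer can be sketched as below. This is a generic textbook sketch rather than the paper's scheme; the stability bound in the comment holds for inputs bounded by 1.

```python
def sigma_delta_quantize(y):
    """First-order Sigma-Delta: map y_k to +/-1 while shaping the error.

    State update: u_k = u_{k-1} + y_k - q_k with q_k = sign(u_{k-1} + y_k).
    For |y_k| <= 1 the state satisfies |u_k| <= 1, so the running quantization
    error sum_{i<=k} (y_i - q_i) stays bounded instead of accumulating.
    """
    u, q, states = 0.0, [], []
    for yk in y:
        v = u + yk
        qk = 1.0 if v >= 0 else -1.0
        u = v - qk
        q.append(qk)
        states.append(u)
    return q, states

import random
rng = random.Random(1)
y = [rng.uniform(-1, 1) for _ in range(1000)]
q, states = sigma_delta_quantize(y)
```

The bounded internal state is exactly the noise-shaping property that the reconstruction guarantees in the abstract rely on.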
Most valuably, we discover and characterize several types of fraud behaviors. We discuss how to leverage our insights in practice by engineering strongly performing entropy-based features and demonstrating high classification accuracy. Our contributions are (a) instrumentation: we detail our experimental setup and carefully engineered data collection process to scrape Twitter data while respecting API rate-limits, (b) observations on fraud multimodality: we analyze our honeypot fraudster ecosystem and give surprising insights into the multifaceted behaviors of these fraudster types, and (c) features: we propose novel features that give strong (>0.95 precision/recall) discriminative power on ground-truth Twitter data. ",1,0,0,0,0,0 16832,Diophantine approximation by special primes," We show that whenever $\delta>0$, $\eta$ is real and constants $\lambda_i$ satisfy some necessary conditions, there are infinitely many prime triples $p_1,\, p_2,\, p_3$ satisfying the inequality $|\lambda_1p_1 + \lambda_2p_2 + \lambda_3p_3+\eta|<(\max p_j)^{-1/12+\delta}$ and such that, for each $i\in\{1,2,3\}$, $p_i+2$ has at most $28$ prime factors. ",0,0,1,0,0,0 16833,A Compositional Treatment of Iterated Open Games," Compositional Game Theory is a new, recently introduced model of economic games based upon the computer science idea of compositionality. In it, complex and irregular games can be built up from smaller and simpler games, and the equilibria of these complex games can be defined recursively from the equilibria of their simpler subgames. This paper extends the model by providing a final coalgebra semantics for infinite games. In the course of this, we introduce a new operator on games to model the economic concept of subgame perfection. ",1,0,0,0,0,0 16834,Bayesian inference for spectral projectors of covariance matrix," Let $X_1, \ldots, X_n$ be an i.i.d. sample in $\mathbb{R}^p$ with zero mean and covariance matrix $\mathbf{\Sigma^*}$. 
The classic principal component analysis estimates the projector $\mathbf{P^*_{\mathcal{J}}}$ onto the direct sum of some eigenspaces of $\mathbf{\Sigma^*}$ by its empirical counterpart $\mathbf{\widehat{P}_{\mathcal{J}}}$. Recent papers [Koltchinskii, Lounici (2017)], [Naumov et al. (2017)] investigate the asymptotic distribution of the Frobenius distance between the projectors $\| \mathbf{\widehat{P}_{\mathcal{J}}} - \mathbf{P^*_{\mathcal{J}}} \|_2$. A problem arises when one tries to build an effective confidence set for the true projector. We consider the problem from a Bayesian perspective and derive an approximation for the posterior distribution of the Frobenius distance between projectors. The derived theorems hold true for non-Gaussian data: the only assumption that we impose is the concentration of the sample covariance $\mathbf{\widehat{\Sigma}}$ in a vicinity of $\mathbf{\Sigma^*}$. The obtained results are applied to the construction of sharp confidence sets for the true projector. Numerical simulations illustrate good performance of the proposed procedure even on non-Gaussian data in a quite challenging regime. ",0,0,1,1,0,0 16835,Handling Incomplete Heterogeneous Data using VAEs," Variational autoencoders (VAEs), as well as other generative models, have been shown to be efficient and accurate at capturing the latent structure of vast amounts of complex high-dimensional data. However, existing VAEs still cannot directly handle data that are heterogeneous (mixed continuous and discrete) or incomplete (with missing data at random), which is indeed common in real-world applications. In this paper, we propose a general framework to design VAEs, suitable for fitting incomplete heterogeneous data. The proposed HI-VAE includes likelihood models for real-valued, positive real-valued, interval, categorical, ordinal and count data, and allows us to estimate (and potentially impute) missing data accurately. 
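The Frobenius distance between empirical and true spectral projectors from the PCA abstract above can be computed directly. A small numpy sketch, where the spectrum, sample size, and the final threshold are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, J = 5, 2000, 2
eigvals = np.array([5.0, 4.0, 0.5, 0.3, 0.2])   # true spectrum with a gap after J = 2
P_star = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])     # true projector onto the top-2 eigenspace

# sample zero-mean data with diagonal covariance diag(eigvals),
# then form the empirical projector from the sample covariance
X = rng.standard_normal((n, p)) * np.sqrt(eigvals)
Sigma_hat = X.T @ X / n
w, V = np.linalg.eigh(Sigma_hat)                # eigenvalues in ascending order
V_J = V[:, -J:]                                  # top-J empirical eigenvectors
P_hat = V_J @ V_J.T

frob_dist = np.linalg.norm(P_hat - P_star, "fro")
```

With a well-separated spectral gap and a large sample, the empirical projector is an orthogonal projector of rank $J$ that lies close to the true one.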
Furthermore, HI-VAE presents competitive predictive performance in supervised tasks, outperforming supervised models when trained on incomplete data. ",0,0,0,1,0,0 16836,Geometric mean of probability measures and geodesics of Fisher information metric," The space of all probability measures having positive density function on a connected compact smooth manifold $M$, denoted by $\mathcal{P}(M)$, carries the Fisher information metric $G$. We define the geometric mean of probability measures, with the aid of which we investigate information geometry of $\mathcal{P}(M)$, equipped with $G$. We show that a geodesic segment joining arbitrary probability measures $\mu_1$ and $\mu_2$ is expressed by using the normalized geometric mean of its endpoints. As an application, we show that any two points of $\mathcal{P}(M)$ can be joined by a geodesic. Moreover, we prove that the function $\ell$ defined by $\ell(\mu_1, \mu_2):=2\arccos\int_M \sqrt{p_1\,p_2}\,d\lambda$, $\mu_i=p_i\,\lambda$, $i=1,2$ gives the distance function on $\mathcal{P}(M)$. It is shown that geodesics are all minimal. ",0,0,1,0,0,0 16837,On automorphism groups of Toeplitz subshifts," In this article we study automorphisms of Toeplitz subshifts. Such groups are abelian and any finitely generated torsion subgroup is finite and cyclic. When the complexity is non superlinear, we prove that the automorphism group is, modulo a finite cyclic group, generated by a unique root of the shift. In the subquadratic complexity case, we show that the automorphism group modulo the torsion is generated by the roots of the shift map and that the result of the non superlinear case is optimal. Namely, for any $\varepsilon > 0$ we construct examples of minimal Toeplitz subshifts with complexity bounded by $C n^{1+\varepsilon}$ whose automorphism groups are not finitely generated.
Finally, we observe that coalescence and the automorphism group give no restriction on the complexity, since we provide a family of coalescent Toeplitz subshifts with positive entropy such that their automorphism groups are arbitrary finitely generated infinite abelian groups with cyclic torsion subgroup (eventually restricted to powers of the shift). ",0,0,1,0,0,0 16838,How to Generate Pseudorandom Permutations Over Other Groups," Recent results by Alagic and Russell have given some evidence that the Even-Mansour cipher may be secure against quantum adversaries with quantum queries, if considered over groups other than $(\mathbb{Z}/2)^n$. This prompts the question as to whether or not other classical schemes may be generalized to arbitrary groups and whether classical results still apply to those generalized schemes. In this thesis, we generalize the Even-Mansour cipher and the Feistel cipher. We show that Even and Mansour's original notions of secrecy are obtained for a one-key, group variant of the Even-Mansour cipher. We generalize the result by Kilian and Rogaway, that the Even-Mansour cipher is pseudorandom, to super pseudorandomness, also in the one-key, group case. Using a slide attack, we match the bound found above. After generalizing the Feistel cipher to arbitrary groups, we resolve an open problem of Patel, Ramzan, and Sundaram by showing that the 3-round Feistel cipher over an arbitrary group is not super pseudorandom. We generalize a result by Gentry and Ramzan showing that the Even-Mansour cipher can be implemented using the Feistel cipher as the public permutation. In this result, we also consider the one-key case over a group and generalize their bound. Finally, we consider Zhandry's result on quantum pseudorandom permutations, showing that his result may be generalized to hold for arbitrary groups. In this regard, we consider whether certain card shuffles may be generalized as well.
",1,0,1,0,0,0 16839,Measures of Tractography Convergence," In the present work, we use information theory to understand the empirical convergence rate of tractography, a widely-used approach to reconstruct anatomical fiber pathways in the living brain. Based on diffusion MRI data, tractography is the starting point for many methods to study brain connectivity. Of the available methods to perform tractography, most reconstruct a finite set of streamlines, or 3D curves, representing probable connections between anatomical regions, yet relatively little is known about how the sampling of this set of streamlines affects downstream results, and how exhaustive the sampling should be. Here we provide a method to measure the information theoretic surprise (self-cross entropy) for tract sampling schemas. We then empirically assess four streamline methods. We demonstrate that the relative information gain is very low after a moderate number of streamlines have been generated for each tested method. The results give rise to several guidelines for optimal sampling in brain connectivity analyses. ",0,0,0,1,1,0 16840,Network Flow Based Post Processing for Sales Diversity," Collaborative filtering is a broad and powerful framework for building recommendation systems that has seen widespread adoption. Over the past decade, the propensity of such systems for favoring popular products and thus creating echo chambers has been observed. This has given rise to an active area of research that seeks to diversify recommendations generated by such algorithms. We address the problem of increasing diversity in recommendation systems that are based on collaborative filtering and use past ratings to predict a rating quality for potential recommendations.
Following our earlier work, we formulate recommendation system design as a subgraph selection problem from a candidate super-graph of potential recommendations where both diversity and rating quality are explicitly optimized: (1) On the modeling side, we define a new flexible notion of diversity that allows a system designer to prescribe the number of recommendations each item should receive, and smoothly penalizes deviations from this distribution. (2) On the algorithmic side, we show that minimum-cost network flow methods yield fast algorithms in theory and practice for designing recommendation subgraphs that optimize this notion of diversity. (3) On the empirical side, we show the effectiveness of our new model and method to increase diversity while maintaining high rating quality in standard rating data sets from Netflix and MovieLens. ",1,0,0,0,0,0 16841,Lattice Model for Production of Gas," We define a lattice model for rock, absorbers, and gas that makes it possible to examine the flow of gas to a complicated absorbing boundary over long periods of time. The motivation is to deduce the geometry of the boundary from the time history of gas absorption. We find a solution to this model using Green's function techniques, and apply the solution to three absorbing networks of increasing complexity. ",0,1,0,0,0,0 16842,Adaptive Representation Selection in Contextual Bandit," We consider an extension of the contextual bandit setting, motivated by several practical applications, where an unlabeled history of contexts can become available for pre-training before the online decision-making begins. We propose an approach for improving the performance of the contextual bandit in such a setting, via adaptive, dynamic representation learning, which combines offline pre-training on the unlabeled history of contexts with online selection and modification of embedding functions.
Our experiments on a variety of datasets and in different nonstationary environments demonstrate clear advantages of our approach over the standard contextual bandit. ",0,0,0,1,0,0 16843,Algebraic surfaces with zero-dimensional cohomology support locus," Using the theory of cohomology support locus, we give a necessary condition for the Albanese map of a smooth projective surface to be a submersion. More precisely, assuming the cohomology support locus of any finite abelian cover of a smooth projective surface consists of finitely many points, we prove that the surface has trivial first Betti number, or is a ruled surface of genus one, or is an abelian surface. ",0,0,1,0,0,0 16844,Insight into the temperature dependent properties of the ferromagnetic Kondo lattice YbNiSn," Analyzing temperature dependent photoemission (PE) data of the ferromagnetic Kondo-lattice (KL) system YbNiSn in the light of the Periodic Anderson model (PAM), we show that the KL behavior is not limited to temperatures below a temperature T_K, defined empirically from resistivity and specific heat measurements. As characteristic for weakly hybridized Ce and Yb systems, the PE spectra reveal a 4f-derived Fermi level peak, which reflects contributions from the Kondo resonance and its crystal electric field (CEF) satellites. In YbNiSn this peak has an unusual temperature dependence: with decreasing temperature a steady linear increase of intensity is observed, which extends over a large interval ranging from 100 K down to 1 K without showing any peculiarities in the region of T_K ~ T_C = 5.6 K. In the light of the single-impurity Anderson model (SIAM) this intensity variation reflects a linear increase of 4f occupancy with decreasing temperature, indicating an onset of Kondo screening at temperatures above 100 K.
Within the PAM this phenomenon could be described by a non-Fermi-liquid-like T-linear damping of the self-energy, which accounts phenomenologically for the feedback from the closely spaced CEF states. ",0,1,0,0,0,0 16845,Some Ageing Properties of Dynamic Additive Mean Residual Life Model," Although the proportional hazard rate model is a very popular model to analyze failure time data, sometimes it becomes important to study the additive hazard rate model. Again, sometimes the concept of the hazard rate function is abstract, in comparison to the concept of the mean residual life function. A new model called `dynamic additive mean residual life model' where the covariates are time-dependent has been defined in the literature. Here we study the closure properties of the model for different positive and negative ageing classes under certain condition(s). Quite a few examples are presented to illustrate different properties of the model. ",0,0,1,1,0,0 16846,From Monte Carlo to Las Vegas: Improving Restricted Boltzmann Machine Training Through Stopping Sets," We propose a Las Vegas transformation of Markov Chain Monte Carlo (MCMC) estimators of Restricted Boltzmann Machines (RBMs). We denote our approach Markov Chain Las Vegas (MCLV). MCLV gives statistical guarantees in exchange for random running times. MCLV uses a stopping set built from the training data and has a maximum number of Markov chain steps K (referred to as MCLV-K). We present a MCLV-K gradient estimator (LVS-K) for RBMs and explore the correspondence and differences between LVS-K and Contrastive Divergence (CD-K), with LVS-K significantly outperforming CD-K in training RBMs on the MNIST dataset, indicating MCLV to be a promising direction in learning generative models.
",1,0,0,1,0,0 16847,CD meets CAT," We show that if a noncollapsed $CD(K,n)$ space $X$ with $n\ge 2$ has curvature bounded above by $\kappa$ in the sense of Alexandrov then $K\le (n-1)\kappa$ and $X$ is an Alexandrov space of curvature bounded below by $K-\kappa (n-2)$. We also show that if a $CD(K,n)$ space $Y$ with finite $n$ has curvature bounded above then it is infinitesimally Hilbertian. ",0,0,1,0,0,0 16848,Cost Models for Selecting Materialized Views in Public Clouds," Data warehouse performance is usually achieved through physical data structures such as indexes or materialized views. In this context, cost models can help select a relevant set of such performance optimization structures. Nevertheless, selection becomes more complex in the cloud. The criterion to optimize is indeed at least two-dimensional, with monetary cost balancing overall query response time. This paper introduces new cost models that fit into the pay-as-you-go paradigm of cloud computing. Based on these cost models, an optimization problem is defined to discover, among candidate views, those to be materialized to minimize both the overall cost of using and maintaining the database in a public cloud and the total response time of a given query workload. We experimentally show that maintaining materialized views is always advantageous, both in terms of performance and cost. ",1,0,0,0,0,0 16849,Dealing with Integer-valued Variables in Bayesian Optimization with Gaussian Processes," Bayesian optimization (BO) methods are useful for optimizing functions that are expensive to evaluate, lack an analytical expression and whose evaluations can be contaminated by noise. These methods rely on a probabilistic model of the objective function, typically a Gaussian process (GP), upon which an acquisition function is built. This function guides the optimization process and measures the expected utility of performing an evaluation of the objective at a new point. GPs assume continuous input variables.
When this is not the case, such as when some of the input variables take integer values, one has to introduce extra approximations. A common approach is to round the suggested variable value to the closest integer before evaluating the objective. We show that this can lead to problems in the optimization process and describe a more principled approach to account for input variables that are integer-valued. We illustrate in both synthetic and real experiments the utility of our approach, which significantly improves the results of standard BO methods on problems involving integer-valued variables. ",0,0,0,1,0,0 16850,Second-order constrained variational problems on Lie algebroids: applications to optimal control," The aim of this work is to study, from an intrinsic and geometric point of view, second-order constrained variational problems on Lie algebroids, that is, optimization problems defined by a cost functional which depends on higher-order derivatives of admissible curves on a Lie algebroid. Extending the classical Skinner and Rusk formalism for mechanics in the context of Lie algebroids, for second-order constrained mechanical systems, we derive the corresponding dynamical equations. We find a symplectic Lie subalgebroid where, under some mild regularity conditions, the second-order constrained variational problem, seen as a presymplectic Hamiltonian system, has a unique solution. We study the relationship of this formalism with the second-order constrained Euler-Poincaré and Lagrange-Poincaré equations, among others. Our study is applied to the optimal control of mechanical systems.
",0,0,1,0,0,0 16851,"The Galactic Cosmic Ray Electron Spectrum from 3 to 70 MeV Measured by Voyager 1 Beyond the Heliopause, What This Tells Us About the Propagation of Electrons and Nuclei In and Out of the Galaxy at Low Energies"," The cosmic ray electrons measured by Voyager 1 between 3-70 MeV beyond the heliopause have intensities several hundred times those measured at the Earth by PAMELA at nearly the same energies. This paper compares this new V1 data with data from the earth-orbiting PAMELA experiment up to energies greater than 10 GeV where solar modulation effects are negligible. In this energy regime we assume the main parameters governing electron propagation are diffusion and energy loss and we use a Monte Carlo program to describe this propagation in the galaxy. To reproduce the new Voyager electron spectrum, which is ~E^-1.3, together with that measured by PAMELA, which is ~E^-3.20 above 10 GeV, we require a diffusion coefficient which is ~P^0.45 at energies above 0.5 GeV, changing to a P^-1.00 dependence at lower rigidities. The entire electron spectrum observed at both V1 and PAMELA from 3 MeV to 30 GeV can then be described by a simple source spectrum, dj/dP ~ P^-2.25, with a spectral exponent that is independent of rigidity. The change in exponent of the measured electron spectrum from -1.3 at low energies to -3.2 at the highest energies can be explained by galactic propagation effects related to the changing dependence of the diffusion coefficient below 0.5 GeV, and the increasing importance above 0.5 GV of energy loss from synchrotron and inverse Compton radiation, which are both ~E^2, and which are responsible for most of the changing spectral exponent above 1.0 GV. As a result of the P^-1.00 dependence of the diffusion coefficient below 0.5 GV that is required to fit the V1 electron spectrum, there is a rapid flow of these low energy electrons out of the galaxy.
These electrons in local IG space are unobservable to us at any wavelength and therefore form a dark energy component which is 100 times the electrons' rest energy. ",0,1,0,0,0,0 16852,Online Scheduling of Spark Workloads with Mesos using Different Fair Allocation Algorithms," In the following, we present illustrative and experimental results comparing fair schedulers allocating resources from multiple servers to distributed application frameworks. Resources are allocated so that at least one resource is exhausted in every server. Schedulers considered include DRF (DRFH) and Best-Fit DRF (BF-DRF), TSF, and PS-DSF. We also consider server selection under Randomized Round Robin (RRR) and based on their residual (unreserved) resources. In the following, we consider cases with frameworks of equal priority and without server-preference constraints. We first give typical results of an illustrative numerical study and then give typical results of a study involving Spark workloads on Mesos, which we have modified and open-sourced to prototype different schedulers. ",1,0,0,0,0,0 16853,On the representation dimension and finitistic dimension of special multiserial algebras," For monomial special multiserial algebras, which in general are of wild representation type, we construct radical embeddings into algebras of finite representation type. As a consequence, we show that the representation dimension of monomial and self-injective special multiserial algebras is less than or equal to three. This implies that the finitistic dimension conjecture holds for all special multiserial algebras. ",0,0,1,0,0,0 16854,Would You Like to Motivate Software Testers? Ask Them How," Context. Considering the importance of software testing to the development of high quality and reliable software systems, this paper aims to investigate how work-related factors can influence the motivation of software testers. Method.
We applied a questionnaire that was developed using a previous theory of motivation and satisfaction of software engineers to conduct a survey-based study to explore and understand how professional software testers perceive and value work-related factors that could influence their motivation at work. Results. With a sample of 80 software testers, we observed that software testers are strongly motivated by variety of work, creative tasks, recognition for their work, and activities that allow them to acquire new knowledge, but in general the social impact of this activity has low influence on their motivation. Conclusion. This study discusses the difference of opinions among software testers regarding work-related factors that could impact their motivation, which can be relevant for managers and leaders in software engineering practice. ",1,0,0,0,0,0 16855,POMDP Structural Results for Controlled Sensing," This article provides a short review of some structural results in controlled sensing when the problem is formulated as a partially observed Markov decision process. In particular, monotone value functions, Blackwell dominance and quickest detection are described. ",1,0,0,0,0,0 16856,Low Rank Matrix Recovery with Simultaneous Presence of Outliers and Sparse Corruption," We study a data model in which the data matrix D can be expressed as D = L + S + C, where L is a low rank matrix, S an element-wise sparse matrix and C a matrix whose non-zero columns are outlying data points. To date, robust PCA algorithms have solely considered models with either S or C, but not both. As such, existing algorithms cannot account for simultaneous element-wise and column-wise corruptions. In this paper, a new robust PCA algorithm that is robust to both types of corruption simultaneously is proposed.
Our approach hinges on the sparse approximation of a sparsely corrupted column, so that the sparse expansion of a column with respect to the other data points is used to distinguish a sparsely corrupted inlier column from an outlying data point. We also develop a randomized design which provides a scalable implementation of the proposed approach. The core idea of sparse approximation is analyzed theoretically, where we show that the underlying $\ell_1$-norm minimization can recover the representation of an inlier in the presence of sparse corruptions. ",1,0,0,1,0,0 16857,Power-Sum Denominators," The power sum $1^n + 2^n + \cdots + x^n$ has been of interest to mathematicians since classical times. Johann Faulhaber, Jacob Bernoulli, and others who followed expressed power sums as polynomials in $x$ of degree $n+1$ with rational coefficients. Here we consider the denominators of these polynomials, and prove some of their properties. A remarkable one is that such a denominator equals $n+1$ times the squarefree product of certain primes $p$ obeying the condition that the sum of the base-$p$ digits of $n+1$ is at least $p$. As an application, we derive a squarefree product formula for the denominators of the Bernoulli polynomials. ",0,0,1,0,0,0 16858,A resource-frugal probabilistic dictionary and applications in bioinformatics," Indexing massive data sets is extremely expensive for large scale problems. In many fields, huge amounts of data are currently generated; however, extracting meaningful information from voluminous data sets, such as computing similarity between elements, is far from trivial. It remains nonetheless a fundamental need. This work proposes a probabilistic data structure based on a minimal perfect hash function for indexing large sets of keys. Our structure out-competes the hash table in construction time, query time and memory usage, in the case of indexing a static set.
To illustrate the impact of algorithm performance, we provide two applications based on similarity computation between collections of sequences, for which this calculation is an expensive but required operation. In particular, we show a practical case in which other bioinformatics tools fail to scale up to the tested data set or provide results of lower recall quality. ",1,0,0,0,0,0 16859,Fast learning rate of deep learning via a kernel perspective," We develop a new theoretical framework to analyze the generalization error of deep learning, and derive a new fast learning rate for two representative algorithms: empirical risk minimization and Bayesian deep learning. The series of theoretical analyses of deep learning has revealed its high expressive power and universal approximation capability. Although these analyses are highly nonparametric, existing generalization error analyses have been developed mainly in a fixed dimensional parametric model. To compensate for this gap, we develop an infinite dimensional model that is based on an integral form as performed in the analysis of the universal approximation capability. This allows us to define a reproducing kernel Hilbert space corresponding to each layer. Our point of view is to deal with the ordinary finite dimensional deep neural network as a finite approximation of the infinite dimensional one. The approximation error is evaluated by the degree of freedom of the reproducing kernel Hilbert space in each layer. To estimate a good finite dimensional model, we consider both empirical risk minimization and Bayesian deep learning. We derive its generalization error bound and show that a bias-variance trade-off appears in terms of the number of parameters of the finite dimensional approximation. We show that the optimal width of the internal layers can be determined through the degree of freedom, and that the convergence rate can be faster than the $O(1/\sqrt{n})$ rate shown in existing studies.
",1,0,1,1,0,0 16860,Closed-loop field development optimization with multipoint geostatistics and statistical assessment," Closed-loop field development (CLFD) optimization is a comprehensive framework for optimal development of subsurface resources. CLFD involves three major steps: 1) optimization of the full development plan based on the current set of models, 2) drilling new wells and collecting new spatial and temporal (production) data, 3) model calibration based on all data. This process is repeated until the optimal number of wells is drilled. This work introduces an efficient CLFD implementation for complex systems described by multipoint geostatistics (MPS). Model calibration is accomplished in two steps: conditioning to spatial data by a geostatistical simulation method, and conditioning to production data by optimization-based PCA. A statistical procedure is presented to assess the performance of CLFD. The methodology is applied to an oil reservoir example for 25 different true-model cases. Application of a single step of CLFD improved the true NPV in 64%--80% of cases. The full CLFD procedure (with three steps) improved the true NPV in 96% of cases, with an average improvement of 37%. ",1,0,0,1,0,0 16861,Reduction and regular $t$-balanced Cayley maps on split metacyclic 2-groups," A regular $t$-balanced Cayley map (RBCM$_t$ for short) on a group $\Gamma$ is an embedding of a Cayley graph on $\Gamma$ into a surface with some special symmetric properties. We propose a reduction method to study RBCM$_t$'s, and as a first application, we completely classify RBCM$_t$'s for a class of split metacyclic 2-groups.
",0,0,1,0,0,0 16862,Perovskite Substrates Boost the Thermopower of Cobaltate Thin Films at High Temperatures," Transition metal oxides are promising candidates for thermoelectric applications, because they are stable at high temperature and because strong electronic correlations can generate large Seebeck coefficients, but their thermoelectric power factors are limited by the low electrical conductivity. We report transport measurements on Ca3Co4O9 films on various perovskite substrates and show that reversible incorporation of oxygen into SrTiO3 and LaAlO3 substrates activates a parallel conduction channel for p-type carriers, greatly enhancing the thermoelectric performance of the film-substrate system at temperatures above 450 °C. Thin-film structures that take advantage of both electronic correlations and the high oxygen mobility of transition metal oxides thus open up new perspectives for thermopower generation at high temperature. ",0,1,0,0,0,0 16863,Motion of a rod pushed at one point in a weightless environment in space," We analyze the motion of a rod floating in a weightless environment in space when a force is applied at some point on the rod in a direction perpendicular to its length. If the force applied is at the centre of mass, then the rod gets a linear motion perpendicular to its length. However, if the same force is applied at a point other than the centre of mass, say, near one end of the rod, thereby giving rise to a torque, then there will also be a rotation of the rod about its centre of mass, in addition to the motion of the centre of mass itself. If the force applied is for a very short duration, but imparting nevertheless a finite impulse, like in a sudden (quick) hit at one end of the rod, then the centre of mass will move with a constant linear speed and superimposed on it will be a rotation of the rod with constant angular speed about the centre of mass. 
However, if the force is applied continuously, say by strapping a tiny rocket at one end of the rod, then the rod will spin faster and faster about the centre of mass, with angular speed increasing linearly with time. As the direction of the applied force, as seen by an external (inertial) observer, will be changing continuously with the rotation of the rod, the acceleration of the centre of mass would also not be in one fixed direction. However, it turns out that the locus of the velocity vector of the centre of mass will describe a Cornu spiral, with the velocity vector reaching a final constant value with time. The mean motion of the centre of mass will be in a straight line, with superposed initial oscillations that soon die down. ",0,1,0,0,0,0 16864,Dimension theory and components of algebraic stacks," We prove some basic results on the dimension theory of algebraic stacks, and on the multiplicities of their irreducible components, for which we do not know a reference. ",0,0,1,0,0,0 16865,When is a polynomial ideal binomial after an ambient automorphism?," Can an ideal I in a polynomial ring k[x] over a field be moved by a change of coordinates into a position where it is generated by binomials $x^a - cx^b$ with c in k, or by unital binomials (i.e., with c = 0 or 1)? Can a variety be moved into a position where it is toric? By fibering the G-translates of I over an algebraic group G acting on affine space, these problems are special cases of questions about a family F of ideals over an arbitrary base B. The main results in this general setting are algorithms to find the locus of points in B over which the fiber of F - is contained in the fiber of a second family F' of ideals over B; - defines a variety of dimension at least d; - is generated by binomials; or - is generated by unital binomials. A faster containment algorithm is also presented when the fibers of F are prime. The big-fiber algorithm is probabilistic but likely faster than known deterministic ones.
Applications include the setting where a second group T acts on affine space, in addition to G, in which case algorithms compute the set of G-translates of I - whose stabilizer subgroups in T have maximal dimension; or - that admit a faithful multigrading by $Z^r$ of maximal rank r. Even with no ambient group action given, the final application is an algorithm to - decide whether a normal projective variety is abstractly toric. All of these loci in B and subsets of G are constructible; in some cases they are closed. ",1,0,1,0,0,0 16866,Annihilators in $\mathbb{N}^k$-graded and $\mathbb{Z}^k$-graded rings," It has been shown by McCoy that a right ideal of a polynomial ring with several indeterminates has a non-trivial homogeneous right annihilator of degree 0 provided its right annihilator is non-trivial to begin with. In this note, it is documented that any $\mathbb{N}$-graded ring $R$ has a slightly weaker property: the right annihilator of a right ideal contains a homogeneous non-zero element, if it is non-trivial to begin with. If $R$ is a subring of a $\mathbb{Z}^k$-graded ring $S$ satisfying a certain non-annihilation property (which is the case if $S$ is strongly graded, for example), then it is possible to find annihilators of degree 0. ",0,0,1,0,0,0 16867,q-Neurons: Neuron Activations based on Stochastic Jackson's Derivative Operators," We propose a new generic type of stochastic neurons, called $q$-neurons, that considers activation functions based on Jackson's $q$-derivatives with stochastic parameters $q$. Our generalization of neural network architectures with $q$-neurons is shown to be both scalable and very easy to implement. We demonstrate experimentally consistent performance improvements over state-of-the-art standard activation functions, on both training and testing loss functions.
",0,0,0,1,0,0 16868,Liveness-Driven Random Program Generation," Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix. ",1,0,0,0,0,0 16869,Modeling and optimal control of HIV/AIDS prevention through PrEP," Pre-exposure prophylaxis (PrEP) consists of the use of an antiretroviral medication to prevent the acquisition of HIV infection by uninfected individuals, and has recently been demonstrated to be highly efficacious for HIV prevention. We propose a new epidemiological model for HIV/AIDS transmission including PrEP. Existence, uniqueness and global stability of the disease-free and endemic equilibria are proved. The model with no PrEP is calibrated with the cumulative cases of infection by HIV and AIDS reported in Cape Verde from 1987 to 2014, showing that it predicts this reality well. An optimal control problem with a mixed state control constraint is then proposed and analyzed, where the control function represents the PrEP strategy and the mixed constraint models the fact that, due to PrEP costs, epidemic context and program coverage, the number of individuals under PrEP is limited at each instant of time.
The objective is to determine the PrEP strategy that satisfies the mixed state control constraint and minimizes the number of individuals with pre-AIDS HIV-infection as well as the costs associated with PrEP. The optimal control problem is studied analytically. Through numerical simulations, we demonstrate that PrEP reduces HIV transmission significantly. ",0,0,1,0,0,0 16870,STARIMA-based Traffic Prediction with Time-varying Lags," Based on the observation that the correlation between observed traffic at two measurement points or traffic stations may be time-varying, attributable to the time-varying speed which subsequently causes variations in the time required to travel between the two points, in this paper, we develop a modified Space-Time Autoregressive Integrated Moving Average (STARIMA) model with time-varying lags for short-term traffic flow prediction. Particularly, the temporal lags in the modified STARIMA change with the time-varying speed at different time of the day or equivalently change with the (time-varying) time required to travel between two measurement points. Firstly, a technique is developed to evaluate the temporal lag in the STARIMA model, where the temporal lag is formulated as a function of the spatial lag (spatial distance) and the average speed. Secondly, an unsupervised classification algorithm based on ISODATA algorithm is designed to classify different time periods of the day according to the variation of the speed. The classification helps to determine the appropriate time lag to use in the STARIMA model. Finally, a STARIMA-based model with time-varying lags is developed for short-term traffic prediction. Experimental results using real traffic data show that the developed STARIMA-based model with time-varying lags has superior accuracy compared with its counterpart developed using the traditional cross-correlation function and without employing time-varying lags. 
",1,0,1,0,0,0 16871,Electric Vehicle Charging Station Placement Method for Urban Areas," For accommodating more electric vehicles (EVs) to battle against fossil fuel emission, the problem of charging station placement is inevitable and could be costly if done improperly. Some researches consider a general setup, using conditions such as driving ranges for planning. However, most of the EV growths in the next decades will happen in the urban area, where driving ranges is not the biggest concern. For such a need, we consider several practical aspects of urban systems, such as voltage regulation cost and protection device upgrade resulting from the large integration of EVs. Notably, our diversified objective can reveal the trade-off between different factors in different cities worldwide. To understand the global optimum of large-scale analysis, we add constraint one-by-one to see how to preserve the problem convexity. Our sensitivity analysis before and after convexification shows that our approach is not only universally applicable but also has a small approximation error for prioritizing the most urgent constraint in a specific setup. Finally, numerical results demonstrate the trade-off, the relationship between different factors and the global objective, and the small approximation error. A unique observation in this study shows the importance of incorporating the protection device upgrade in urban system planning on charging stations. ",1,0,0,0,0,0 16872,The Steinberg linkage class for a reductive algebraic group," Let G be a reductive algebraic group over a field of positive characteristic and denote by C(G) the category of rational G-modules. In this note we investigate the subcategory of C(G) consisting of those modules whose composition factors all have highest weights linked to the Steinberg weight. This subcategory is denoted ST and called the Steinberg component. We give an explicit equivalence between ST and C(G) and we derive some consequences. 
In particular, our result allows us to relate the Frobenius contracting functor to the projection functor from C(G) onto ST . ",0,0,1,0,0,0 16873,Detection and Resolution of Rumours in Social Media: A Survey," Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e. pieces of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how natural language processing and data mining techniques may be used to find ways of determining their veracity. In this survey we introduce and discuss two types of rumours that circulate on social media; long-standing rumours that circulate for long periods of time, and newly-emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far towards the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for detection and resolution of rumours. ",1,0,0,0,0,0 16874,On harmonic analysis of spherical convolutions on semisimple Lie groups," This paper contains a non-trivial generalization of the Harish-Chandra transforms on a connected semisimple Lie group $G,$ with finite center, into what we term spherical convolutions. 
Among other results we show that its integral over the collection of bounded spherical functions at the identity element $e \in G$ is a weighted Fourier transform of the Abel transform at $0.$ Being a function on $G,$ the restriction of this integral of its spherical Fourier transforms to the positive-definite spherical functions is then shown to be (the non-zero constant multiple of) a positive-definite distribution on $G,$ which is tempered and invariant on $G=SL(2,\mathbb{R}).$ These results suggest the consideration of a calculus on the Schwartz algebras of spherical functions. The Plancherel measure of the spherical convolutions is also explicitly computed. ",0,0,1,0,0,0 16875,Relaxation-based viscosity mapping for magnetic particle imaging," Magnetic Particle Imaging (MPI) has been shown to provide remarkable contrast for imaging applications such as angiography, stem cell tracking, and cancer imaging. Recently, there has been growing interest in the functional imaging capabilities of MPI, where color MPI techniques have explored separating different nanoparticles, which could potentially be used to distinguish nanoparticles in different states or environments. Viscosity mapping is a promising functional imaging application for MPI, as increased viscosity levels in vivo have been associated with numerous diseases such as hypertension, atherosclerosis, and cancer. In this work, we propose a viscosity mapping technique for MPI through the estimation of the relaxation time constant of the nanoparticles. Importantly, the proposed time constant estimation scheme does not require any prior information regarding the nanoparticles. We validate this method with extensive experiments in an in-house magnetic particle spectroscopy (MPS) setup at four different frequencies (between 250 Hz and 10.8 kHz) and at three different field strengths (between 5 mT and 15 mT) for viscosities ranging from 0.89 mPa.s to 15.33 mPa.s. 
Our results demonstrate the viscosity mapping ability of MPI in the biologically relevant viscosity range. ",0,1,0,0,0,0 16876,Detecting Statistically Significant Communities," Community detection is a key data analysis problem across different fields. During the past decades, numerous algorithms have been proposed to address this issue. However, most work on community detection does not address the issue of statistical significance. Although some research efforts have been made towards mining statistically significant communities, deriving an analytical solution of p-value for one community under the configuration model is still a challenging mission that remains unsolved. To partially fulfill this void, we present a tight upper bound on the p-value of a single community under the configuration model, which can be used for quantifying the statistical significance of each community analytically. Meanwhile, we present a local search method to detect statistically significant communities in an iterative manner. Experimental results demonstrate that our method is comparable with the competing methods on detecting statistically significant communities. ",1,0,0,0,0,0 16877,On the effectivity of spectra representing motivic cohomology theories," Let k be an infinite perfect field. We provide a general criterion for a spectrum in the stable homotopy category over k to be effective, i.e. to be in the localizing subcategory generated by the suspension spectra of smooth schemes. As a consequence, we show that two recent versions of generalized motivic cohomology theories coincide. ",0,0,1,0,0,0 16878,Aggregated Momentum: Stability Through Passive Damping," Momentum is a simple and widely used trick which allows gradient-based optimizers to pick up speed along low curvature directions. Its performance depends crucially on a damping coefficient $\beta$. 
Large $\beta$ values can potentially deliver much larger speedups, but are prone to oscillations and instability; hence one typically resorts to small values such as 0.5 or 0.9. We propose Aggregated Momentum (AggMo), a variant of momentum which combines multiple velocity vectors with different $\beta$ parameters. AggMo is trivial to implement, but significantly dampens oscillations, enabling it to remain stable even for aggressive $\beta$ values such as 0.999. We reinterpret Nesterov's accelerated gradient descent as a special case of AggMo and analyze rates of convergence for quadratic objectives. Empirically, we find that AggMo is a suitable drop-in replacement for other momentum methods, and frequently delivers faster convergence. ",0,0,0,1,0,0 16879,On the Number of Bins in Equilibria for Signaling Games," We investigate the equilibrium behavior for the decentralized quadratic cheap talk problem in which an encoder and a decoder, viewed as two decision makers, have misaligned objective functions. In prior work, we have shown that the number of bins under any equilibrium has to be at most countable, generalizing a classical result due to Crawford and Sobel who considered sources with density supported on $[0,1]$. In this paper, we refine this result in the context of exponential and Gaussian sources. For exponential sources, a relation between the upper bound on the number of bins and the misalignment in the objective functions is derived, the equilibrium costs are compared, and it is shown that there also exist equilibria with infinitely many bins under certain parametric assumptions. For Gaussian sources, it is shown that there exist equilibria with infinitely many bins. ",1,0,0,0,0,0 16880,The Dantzig selector for a linear model of diffusion processes," In this paper, a linear model of diffusion processes with unknown drift and diagonal diffusion matrices is discussed. 
We will consider the estimation problems for unknown parameters based on the discrete time observation in high-dimensional and sparse settings. To estimate drift matrices, the Dantzig selector, which was proposed by Candès and Tao in 2007, will be applied. Then, we will prove two types of consistency of the estimator of the drift matrix; one is the consistency in the sense of $l_q$ norm for every $q \in [1,\infty]$ and the other is the variable selection consistency. Moreover, we will construct an asymptotically normal estimator of the drift matrix by using the variable selection consistency of the Dantzig selector. ",0,0,1,1,0,0 16881,A Spatio-Temporal Multivariate Shared Component Model with an Application in Iran Cancer Data," Among the proposals for joint disease mapping, the shared component model has become more popular. Another recent advance to strengthen inference of disease data has been the extension of purely spatial models to include time and space-time interaction. Such analyses have additional benefits over purely spatial models. However, only a few of the proposed spatio-temporal models address analysing multiple diseases jointly. In the proposed model, each component is shared by different subsets of diseases, spatial and temporal trends are considered for each component, and the relative weight of these trends for each component for each relevant disease can be estimated. We present an application of the proposed method on incidence rates of seven prevalent cancers in Iran. The effect of the shared components on the individual cancer types can be identified. Regional and temporal variation in relative risks is shown. We present a model which combines the benefits of shared-components with spatio-temporal techniques for multivariate data. We show how the model allows one to analyse geographical and temporal variation among diseases beyond previous approaches. 
",0,0,0,1,0,0 16882,The dynamo effect in decaying helical turbulence," We show that in decaying hydromagnetic turbulence with initial kinetic helicity, a weak magnetic field eventually becomes fully helical. The sign of magnetic helicity is opposite to that of the kinetic helicity - regardless of whether or not the initial magnetic field was helical. The magnetic field undergoes inverse cascading with the magnetic energy decaying approximately like t^{-1/2}. This is even slower than in the fully helical case, where it decays like t^{-2/3}. In this parameter range, the product of magnetic energy and correlation length raised to a certain power slightly larger than unity, is approximately constant. This scaling of magnetic energy persists over long time scales. At very late times and for domain sizes large enough to accommodate the growing spatial scales, we expect a cross-over to the t^{-2/3} decay law that is commonly observed for fully helical magnetic fields. Regardless of the presence or absence of initial kinetic helicity, the magnetic field experiences exponential growth during the first few turnover times, which is suggestive of small-scale dynamo action. Our results have applications to a wide range of experimental dynamos and astrophysical time-dependent plasmas, including primordial turbulence in the early universe. ",0,1,0,0,0,0 16883,Geometrically finite amalgamations of hyperbolic 3-manifold groups are not LERF," We prove that, for any two finite volume hyperbolic $3$-manifolds, the amalgamation of their fundamental groups along any nontrivial geometrically finite subgroup is not LERF. This generalizes the author's previous work on nonLERFness of amalgamations of hyperbolic $3$-manifold groups along abelian subgroups. A consequence of this result is that closed arithmetic hyperbolic $4$-manifolds have nonLERF fundamental groups. 
Combined with the author's previous work, this shows that, for any arithmetic hyperbolic manifold of dimension at least $4$, with possible exceptions in $7$-dimensional manifolds defined by the octonion, the fundamental group is not LERF. ",0,0,1,0,0,0 16884,"Dining Philosophers, Leader Election and Ring Size problems, in the quantum setting"," We provide the first quantum (exact) protocol for the Dining Philosophers problem (DP), a central problem in distributed algorithms. It is well known that the problem cannot be solved exactly in the classical setting. We then use our DP protocol to provide a new quantum protocol for the tightly related problem of exact leader election (LE) on a ring, improving significantly in both time and memory complexity over the known LE protocol by Tani et al. To do this, we show that in some sense the exact DP and exact LE problems are equivalent; interestingly, in the classical non-exact setting they are not. Hopefully, the results will lead to exact quantum protocols for other important distributed algorithmic questions; in particular, we discuss interesting connections to the ring size problem, as well as to a physically motivated question of breaking symmetry in 1D translationally invariant systems. ",1,0,0,0,0,0 16885,An Online Secretary Framework for Fog Network Formation with Minimal Latency," Fog computing is seen as a promising approach to perform distributed, low-latency computation for supporting Internet of Things applications. However, due to the unpredictable arrival of available neighboring fog nodes, the dynamic formation of a fog network can be challenging. In essence, a given fog node must smartly select the set of neighboring fog nodes that can provide low-latency computations. In this paper, this problem of fog network formation and task distribution is studied considering a hybrid cloud-fog architecture. 
The goal of the proposed framework is to minimize the maximum computational latency by enabling a given fog node to form a suitable fog network, under uncertainty on the arrival process of neighboring fog nodes. To solve this problem, a novel approach based on the online secretary framework is proposed. To find the desired set of neighboring fog nodes, an online algorithm is developed to enable a task initiating fog node to decide on which other nodes can be used as part of its fog network, to offload computational tasks, without knowing any prior information on the future arrivals of those other nodes. Simulation results show that the proposed online algorithm can successfully select an optimal set of neighboring fog nodes while achieving a latency that is as small as the one resulting from an ideal, offline scheme that has complete knowledge of the system. The results also show how, using the proposed approach, the computational tasks can be properly distributed between the fog network and a remote cloud server. ",1,0,0,0,0,0 16886,Computer-assisted proof of heteroclinic connections in the one-dimensional Ohta-Kawasaki model," We present a computer-assisted proof of heteroclinic connections in the one-dimensional Ohta-Kawasaki model of diblock copolymers. The model is a fourth-order parabolic partial differential equation subject to homogeneous Neumann boundary conditions, which contains as a special case the celebrated Cahn-Hilliard equation. While the attractor structure of the latter model is completely understood for one-dimensional domains, the diblock copolymer extension exhibits considerably richer long-term dynamical behavior, which includes a high level of multistability. In this paper, we establish the existence of certain heteroclinic connections between the homogeneous equilibrium state, which represents a perfect copolymer mixture, and all local and global energy minimizers. 
In this way, we show that not every solution originating near the homogeneous state will converge to the global energy minimizer, but rather is trapped by a stable state with higher energy. This phenomenon cannot be observed in the one-dimensional Cahn-Hilliard equation, where generic solutions are attracted by a global minimizer. ",1,0,1,0,0,0 16887,Dust and Gas in Star Forming Galaxies at z~3 - Extending Galaxy Uniformity to 11.5 Billion Years," We present millimetre dust emission measurements of two Lyman Break Galaxies at z~3 and construct for the first time fully sampled infrared spectral energy distributions (SEDs), from mid-IR to the Rayleigh-Jeans tail, of individually detected, unlensed, UV-selected, main sequence (MS) galaxies at $z=3$. The SED modelling of the two sources confirms previous findings, based on stacked ensembles, of an increasing mean radiation field with redshift, consistent with a rapidly decreasing gas metallicity in z > 2 galaxies. Complementing our study with CO[3-2] emission line observations, we measure the molecular gas mass (M_H2) reservoir of the systems using three independent approaches: 1) CO line observations, 2) the dust to gas mass ratio vs metallicity relation and 3) a single band, dust emission flux on the Rayleigh-Jeans side of the SED. All techniques return consistent M_H2 estimates within a factor of ~2 or less, yielding gas depletion time-scales (tau_dep ~ 0.35 Gyrs) and gas-to-stellar mass ratios (M_H2/M* ~ 0.5-1) for our z~3 massive MS galaxies. The overall properties of our galaxies are consistent with trends and relations established at lower redshifts, extending the apparent uniformity of star-forming galaxies over the last 11.5 billion years. ",0,1,0,0,0,0 16888,Flow speed has little impact on propulsive characteristics of oscillating foils," Experiments are reported on the performance of a pitching and heaving two-dimensional foil in a water channel in either continuous or intermittent motion. 
We find that the thrust and power are independent of the mean freestream velocity for two-fold changes in the mean velocity (four-fold in the dynamic pressure), and for oscillations in the velocity up to 38\% of the mean, where the oscillations are intended to mimic those of freely swimming motions where the thrust varies during the flapping cycle. We demonstrate that the correct velocity scale is not the flow velocity but the mean velocity of the trailing edge. We also find little or no impact of streamwise velocity change on the wake characteristics such as vortex organization, vortex strength, and time-averaged velocity profile development---the wake is both qualitatively and quantitatively unchanged. Our results suggest that constant velocity studies can be used to make robust conclusions about swimming performance without a need to explore the free-swimming condition. ",0,1,0,0,0,0 16889,"Switching and Data Injection Attacks on Stochastic Cyber-Physical Systems: Modeling, Resilient Estimation and Attack Mitigation"," In this paper, we consider the problem of attack-resilient state estimation, that is to reliably estimate the true system states despite two classes of attacks: (i) attacks on the switching mechanisms and (ii) false data injection attacks on actuator and sensor signals, in the presence of unbounded stochastic process and measurement noise signals. We model the systems under attack as hidden mode stochastic switched linear systems with unknown inputs and propose the use of a multiple-model inference algorithm to tackle these security issues. Moreover, we characterize fundamental limitations to resilient estimation (e.g., upper bound on the number of tolerable signal attacks) and discuss the topics of attack detection, identification and mitigation under this framework. 
Simulation examples of switching and false data injection attacks on a benchmark system and an IEEE 68-bus test system show the efficacy of our approach to recover resilient (i.e., asymptotically unbiased) state estimates as well as to identify and mitigate the attacks. ",1,0,1,0,0,0 16890,Generating Sentence Planning Variations for Story Telling," There has been a recent explosion in applications for dialogue interaction ranging from direction-giving and tourist information to interactive story systems. Yet the natural language generation (NLG) component for many of these systems remains largely handcrafted. This limitation greatly restricts the range of applications; it also means that it is impossible to take advantage of recent work in expressive and statistical language generation that can dynamically and automatically produce a large number of variations of given content. We propose that a solution to this problem lies in new methods for developing language generation resources. We describe the ES-Translator, a computational language generator that has previously been applied only to fables, and quantitatively evaluate the domain independence of the EST by applying it to personal narratives from weblogs. We then take advantage of recent work on language generation to create a parameterized sentence planner for story generation that provides aggregation operations, variations in discourse and in point of view. Finally, we present a user evaluation of different personal narrative retellings. ",1,0,0,0,0,0 16891,"Detection of planet candidates around K giants, HD 40956, HD 111591, and HD 113996"," Aims. The purpose of this paper is to detect and investigate the nature of long-term radial velocity (RV) variations of K-type giants and to confirm planetary companions around the stars. Methods. 
We have conducted two planet search programs by precise RV measurement using the 1.8 m telescope at Bohyunsan Optical Astronomy Observatory (BOAO) and the 1.88 m telescope at Okayama Astrophysical Observatory (OAO). The BOAO program searches for planets around 55 early K giants. The OAO program searches for planets around 190 G-K type giants. Results. In this paper, we report the detection of long-period RV variations of three K giant stars, HD 40956, HD 111591, and HD 113996. We investigated the cause of the observed RV variations and conclude that substellar companions are most likely the cause of the RV variations. The orbital analyses yield P = 578.6 $\pm$ 3.3 d, $m$ sin $i$ = 2.7 $\pm$ 0.6 $M_{\rm{J}}$, $a$ = 1.4 $\pm$ 0.1 AU for HD 40956; P = 1056.4 $\pm$ 14.3 d, $m$ sin $i$ = 4.4 $\pm$ 0.4 $M_{\rm{J}}$, $a$ = 2.5 $\pm$ 0.1 AU for HD 111591; P = 610.2 $\pm$ 3.8 d, $m$ sin $i$ = 6.3 $\pm$ 1.0 $M_{\rm{J}}$, $a$ = 1.6 $\pm$ 0.1 AU for HD 113996. ",0,1,0,0,0,0 16892,Perfect Half Space Games," We introduce perfect half space games, in which the goal of Player 2 is to make the sums of encountered multi-dimensional weights diverge in a direction which is consistent with a chosen sequence of perfect half spaces (chosen dynamically by Player 2). We establish that the bounding games of Jurdziński et al. (ICALP 2015) can be reduced to perfect half space games, which in turn can be translated to the lexicographic energy games of Colcombet and Niwiński, and are positionally determined in a strong sense (Player 2 can play without knowing the current perfect half space). We finally show how perfect half space games and bounding games can be employed to solve multi-dimensional energy parity games in pseudo-polynomial time when both the numbers of energy dimensions and of priorities are fixed, regardless of whether the initial credit is given as part of the input or existentially quantified. 
This also yields an optimal 2-EXPTIME complexity with given initial credit, where the best known upper bound was non-elementary. ",1,0,0,0,0,0 16893,"Energy-efficient Analog Sensing for Large-scale, High-density Persistent Wireless Monitoring"," The research challenge of current Wireless Sensor Networks~(WSNs) is to design energy-efficient, low-cost, high-accuracy, self-healing, and scalable systems for applications such as environmental monitoring. Traditional WSNs consist of low density, power-hungry digital motes that are expensive and cannot remain functional for long periods on a single charge. In order to address these challenges, a \textit{dumb-sensing and smart-processing} architecture that splits sensing and computation capabilities among tiers is proposed. Tier-1 consists of dumb sensors that only sense and transmit, while the nodes in Tier-2 do all the smart processing on Tier-1 sensor data. A low-power and low-cost solution for Tier-1 sensors has been proposed using Analog Joint Source Channel Coding~(AJSCC). An analog circuit that realizes the rectangular type of AJSCC has been proposed and realized on a Printed Circuit Board for feasibility analysis. A prototype consisting of three Tier-1 sensors (sensing temperature and humidity) communicating to a Tier-2 Cluster Head has been demonstrated to verify the proposed approach. Results show that our framework is indeed feasible to support large scale high density and persistent WSN deployment. ",1,0,0,0,0,0 16894,Computation of ground-state properties in molecular systems: back-propagation with auxiliary-field quantum Monte Carlo," We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. 
This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties for several substances, including constituents of the primordial terrestrial atmosphere and small organic molecules. ",0,1,0,0,0,0 16895,Demonstration of the Relationship between Sensitivity and Identifiability for Inverse Uncertainty Quantification," Inverse Uncertainty Quantification (UQ), or Bayesian calibration, is the process to quantify the uncertainties of random input parameters based on experimental data. The introduction of model discrepancy term is significant because ""over-fitting"" can theoretically be avoided. But it also poses challenges in the practical applications. One of the mostly concerned and unresolved problem is the ""lack of identifiability"" issue. With the presence of model discrepancy, inverse UQ becomes ""non-identifiable"" in the sense that it is difficult to precisely distinguish between the parameter uncertainties and model discrepancy when estimating the calibration parameters. Previous research to alleviate the non-identifiability issue focused on using informative priors for the calibration parameters and the model discrepancy, which is usually not a viable solution because one rarely has such accurate and informative prior knowledge. In this work, we show that identifiability is largely related to the sensitivity of the calibration parameters with regards to the chosen responses. We adopted an improved modular Bayesian approach for inverse UQ that does not require priors for the model discrepancy term. 
The relationship between sensitivity and identifiability was demonstrated with a practical example in nuclear engineering. It was shown that, in order for a certain calibration parameter to be statistically identifiable, it should be significant to at least one of the responses whose data are used for inverse UQ. Good identifiability cannot be achieved for a certain calibration parameter if it is not significant to any of the responses. It is also demonstrated that ""fake identifiability"" is possible if model responses are not appropriately chosen, or inaccurate but informative priors are specified. ",0,0,0,1,0,0 16896,The GENIUS Approach to Robust Mendelian Randomization Inference," Mendelian randomization (MR) is a popular instrumental variable (IV) approach. A key IV identification condition known as the exclusion restriction requires no direct effect of an IV on the outcome not through the exposure which is unrealistic in most MR analyses. As a result, possible violation of the exclusion restriction can seldom be ruled out in such studies. To address this concern, we introduce a new class of IV estimators which are robust to violation of the exclusion restriction under a large collection of data generating mechanisms consistent with parametric models commonly assumed in the MR literature. Our approach named ""MR G-Estimation under No Interaction with Unmeasured Selection"" (MR GENIUS) may be viewed as a modification to Robins' G-estimation approach that is robust to both additive unmeasured confounding and violation of the exclusion restriction assumption. We also establish that estimation with MR GENIUS may also be viewed as a robust generalization of the well-known Lewbel estimator for a triangular system of structural equations with endogeneity. 
Specifically, we show that unlike Lewbel estimation, MR GENIUS is, under fairly weak conditions, also robust to unmeasured confounding of the effects of the genetic IVs, another possible violation of a key IV identification condition. Furthermore, while Lewbel estimation involves specification of linear models for both the outcome and the exposure, MR GENIUS generally does not require specification of a structural model for the direct effect of invalid IVs on the outcome, therefore allowing the latter model to be unrestricted. Finally, unlike Lewbel estimation, MR GENIUS is shown to apply equally to binary, discrete or continuous exposure and outcome variables and can be used under prospective sampling, or retrospective sampling such as in a case-control study. ",0,0,0,1,0,0 16897,Evaluation of Trace Alignment Quality and its Application in Medical Process Mining," Trace alignment algorithms have been used in process mining for discovering the consensus treatment procedures and process deviations. Different alignment algorithms, however, may produce very different results. No widely-adopted method exists for evaluating the results of trace alignment. Existing reference-free evaluation methods cannot adequately and comprehensively assess the alignment quality. We analyzed and compared the existing evaluation methods, identified their limitations, and introduced improvements in two reference-free evaluation methods. Our approach assesses the alignment result globally instead of locally, and therefore helps the algorithm to optimize overall alignment quality. We also introduced a novel metric to measure alignment complexity, which can be used as a constraint on alignment algorithm optimization. We tested our evaluation methods on a trauma resuscitation dataset and provided a medical explanation of the activities and patterns identified as deviations using our proposed evaluation methods. 
",1,0,0,0,0,0 16898,Size Constraints on Majorana Beamsplitter Interferometer: Majorana Coupling and Surface-Bulk Scattering," Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here, we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the centre of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored. ",0,1,0,0,0,0 16899,Counting Arithmetical Structures on Paths and Cycles," Let $G$ be a finite, simple, connected graph. An arithmetical structure on $G$ is a pair of positive integer vectors $\mathbf{d},\mathbf{r}$ such that $(\mathrm{diag}(\mathbf{d})-A)\mathbf{r}=0$, where $A$ is the adjacency matrix of $G$. We investigate the combinatorics of arithmetical structures on path and cycle graphs, as well as the associated critical groups (the cokernels of the matrices $(\mathrm{diag}(\mathbf{d})-A)$). For paths, we prove that arithmetical structures are enumerated by the Catalan numbers, and we obtain refined enumeration results related to ballot sequences. For cycles, we prove that arithmetical structures are enumerated by the binomial coefficients $\binom{2n-1}{n-1}$, and we obtain refined enumeration results related to multisets. In addition, we determine the critical groups for all arithmetical structures on paths and cycles. 
",0,0,1,0,0,0 16900,From synaptic interactions to collective dynamics in random neuronal networks models: critical role of eigenvectors and transient behavior," The study of neuronal interactions is currently at the center of several neuroscience big collaborative projects (including the Human Connectome, the Blue Brain, the Brainome, etc.) which attempt to obtain a detailed map of the entire brain matrix. Under certain constraints, mathematical theory can advance predictions of the expected neural dynamics based solely on the statistical properties of such synaptic interaction matrix. This work explores the application of free random variables (FRV) to the study of large synaptic interaction matrices. Besides recovering in a straightforward way known results on eigenspectra of neural networks, we extend them to heavy-tailed distributions of interactions. More importantly, we derive analytically the behavior of eigenvector overlaps, which determine stability of the spectra. We observe that upon imposing the neuronal excitation/inhibition balance, although the eigenvalues remain unchanged, their stability dramatically decreases due to strong non-orthogonality of associated eigenvectors. It leads us to the conclusion that the understanding of the temporal evolution of asymmetric neural networks requires considering the entangled dynamics of both eigenvectors and eigenvalues, which might bear consequences for learning and memory processes in these models. Considering the success of FRV analysis in a wide variety of branches disciplines, we hope that the results presented here foster additional application of these ideas in the area of brain sciences. ",0,0,0,0,1,0 16901,Exploring cosmic origins with CORE: mitigation of systematic effects," We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. 
We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate to the mission science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough not to hamper the scientific capability of the mission, nor significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms. ",0,1,0,0,0,0 16902,A non-ordinary peridynamics implementation for anisotropic materials," Peridynamics (PD) represents a new approach for modelling fracture mechanics, where a continuum domain is modelled through particles connected via physical bonds. This formulation allows us to model crack initiation, propagation, branching and coalescence without special assumptions. To date, anisotropic materials were modelled in the PD framework as different isotropic materials (for instance, fibre and matrix of a composite laminate), where the stiffness of the bond depends on its orientation. 
A non-ordinary state-based formulation will enable the modelling of generally anisotropic materials, where the material properties are directly embedded in the formulation. Other material models include rocks, concrete and biomaterials such as bones. In this paper, we implemented this model and validated it for anisotropic composite materials. A composite damage criterion has been employed to model the crack propagation behaviour. Several numerical examples have been used to validate the approach, and the results have been compared to benchmark solutions from the finite element method (FEM) and to experimental results when available. ",1,1,0,0,0,0 16903,Discrete-attractor-like Tracking in Continuous Attractor Neural Networks," Continuous attractor neural networks generate a set of smoothly connected attractor states. In memory systems of the brain, these attractor states may represent continuous pieces of information such as spatial locations and head directions of animals. However, during the replay of previous experiences, hippocampal neurons show a discontinuous sequence in which discrete transitions of neural state are phase-locked with the slow-gamma (30-40 Hz) oscillation. Here, we explored the underlying mechanisms of the discontinuous sequence generation. We found that a continuous attractor neural network has several phases depending on the interactions between external input and local inhibitory feedback. The discrete-attractor-like behavior naturally emerges in one of these phases without any discreteness assumption. We propose that the dynamics of continuous attractor neural networks is the key to generating discontinuous state changes phase-locked to the brain rhythm. ",0,0,0,0,1,0 16904,Framework for an Innovative Perceptive Mobile Network Using Joint Communication and Sensing," In this paper, we develop a framework for an innovative perceptive mobile (i.e. 
cellular) network that integrates sensing with communication, and supports a wide range of new applications in transportation, surveillance and environmental sensing. Three types of sensing methods implemented in the base-stations are proposed, using either uplink or downlink multiuser communication signals. The required changes to system hardware and major technical challenges are briefly discussed. We also demonstrate the feasibility of estimating sensing parameters by developing a compressive sensing based scheme and providing simulation results to validate its effectiveness. ",1,0,0,0,0,0 16905,On the smallest non-abelian quotient of $\mathrm{Aut}(F_n)$," We show that the smallest non-abelian quotient of $\mathrm{Aut}(F_n)$ is $\mathrm{PSL}_n(\mathbb{Z}/2\mathbb{Z}) = \mathrm{L}_n(2)$, thus confirming a conjecture of Mecchia--Zimmermann. In the course of the proof we give an exponential (in $n$) lower bound for the cardinality of a set on which $\mathrm{SAut}(F_n)$, the unique index $2$ subgroup of $\mathrm{Aut}(F_n)$, can act non-trivially. We also offer new results on the representation theory of $\mathrm{SAut(F_n)}$ in small dimensions over small, positive characteristics, and on rigidity of maps from $\mathrm{SAut}(F_n)$ to finite groups of Lie type and algebraic groups in characteristic $2$. ",0,0,1,0,0,0 16906,Property Testing in High Dimensional Ising models," This paper explores the information-theoretic limitations of graph property testing in zero-field Ising models. Instead of learning the entire graph structure, sometimes testing a basic graph property such as connectivity, cycle presence or maximum clique size is a more relevant and attainable objective. Since property testing is more fundamental than graph recovery, any necessary conditions for property testing imply corresponding conditions for graph recovery, while custom property tests can be statistically and/or computationally more efficient than graph recovery based algorithms. 
Understanding the statistical complexity of property testing requires the distinction of ferromagnetic (i.e., positive interactions only) and general Ising models. Using combinatorial constructs such as graph packing and strong monotonicity, we characterize how target properties affect the corresponding minimax upper and lower bounds within the realm of ferromagnets. On the other hand, by studying the detection of an antiferromagnetic (i.e., negative interactions only) Curie-Weiss model buried in Rademacher noise, we show that property testing is strictly more challenging over general Ising models. In terms of methodological development, we propose two types of correlation based tests: computationally efficient screening for ferromagnets, and score type tests for general models, including a fast cycle presence test. Our correlation screening tests match the information-theoretic bounds for property testing in ferromagnets. ",0,0,1,1,0,0 16907,Stratification and duality for homotopical groups," In this paper, we show that the category of module spectra over $C^*(B\mathcal{G},\mathbb{F}_p)$ is stratified for any $p$-local compact group $\mathcal{G}$, thereby giving a support-theoretic classification of all localizing subcategories of this category. To this end, we generalize Quillen's $F$-isomorphism theorem, Quillen's stratification theorem, Chouinard's theorem, and the finite generation of cohomology rings from finite groups to homotopical groups. Moreover, we show that $p$-compact groups admit a homotopical form of Gorenstein duality. ",0,0,1,0,0,0 16908,"Efficiently and easily integrating differential equations with JiTCODE, JiTCDDE, and JiTCSDE"," We present a family of Python modules for the numerical integration of ordinary, delay, or stochastic differential equations. 
The key features are that the user enters the derivative symbolically and it is just-in-time-compiled, allowing the user to efficiently integrate differential equations from a higher-level interpreted language. The presented modules are particularly suited for large systems of differential equations such as those used to describe dynamics on complex networks. Through the selected method of input, the presented modules also make it possible to almost completely automate the process of estimating regular as well as transversal Lyapunov exponents for ordinary and delay differential equations. We conceptually discuss the modules' design, analyze their performance, and demonstrate their capabilities by application to timely problems. ",1,1,0,0,0,0 16909,Adaptive Diffusions for Scalable Learning over Graphs," Diffusion-based classifiers, such as those relying on the Personalized PageRank and the Heat kernel, enjoy remarkable classification accuracy at modest computational requirements. Their performance, however, is affected by the extent to which the chosen diffusion captures a typically unknown label propagation mechanism, which can be specific to the underlying graph and potentially different for each class. The present work introduces a disciplined, data-efficient approach to learning class-specific diffusion functions adapted to the underlying network topology. The novel learning approach leverages the notion of ""landing probabilities"" of class-specific random walks, which can be computed efficiently, thereby ensuring scalability to large graphs. This is supported by rigorous analysis of the properties of the model as well as the proposed algorithms. Furthermore, a robust version of the classifier facilitates learning even in noisy environments. 
Classification tests on real networks demonstrate that adapting the diffusion function to the given graph and observed labels significantly improves performance over fixed diffusions, reaching -- and many times surpassing -- the classification accuracy of computationally heavier state-of-the-art competing methods that rely on node embeddings and deep neural networks. ",0,0,0,1,0,0 16910,On the Discrimination Power and Effective Utilization of Active Learning Measures in Version Space Search," Active Learning (AL) methods have proven cost-saving compared with passive supervised methods in many application domains. An active learner, aiming to find some target hypothesis, formulates sequential queries to some oracle. The set of hypotheses consistent with the already answered queries is called the version space. Several query selection measures (QSMs) for determining the best query to ask next have been proposed. Assuming binary-outcome queries, we analyze various QSMs with respect to the discrimination power of their selected queries within the current version space. As a result, we derive superiority and equivalence relations between these QSMs and introduce improved versions of existing QSMs to overcome identified issues. The obtained picture gives a hint about which QSMs should preferably be used in pool-based AL scenarios. Moreover, we deduce properties that optimal queries must satisfy with respect to QSMs. Based on these, we demonstrate how efficient heuristic search methods for optimal queries in query synthesis AL scenarios can be devised. ",1,0,0,0,0,0 16911,Torchbearer: A Model Fitting Library for PyTorch," We introduce torchbearer, a model fitting library for PyTorch aimed at researchers working on deep learning or differentiable programming. The torchbearer library provides a high-level metric and callback API that can be used for a wide range of applications. 
We also include a series of built in callbacks that can be used for: model persistence, learning rate decay, logging, data visualization and more. The extensive documentation includes an example library for deep learning and dynamic programming problems and can be found at this http URL. The code is licensed under the MIT License and available at this https URL. ",0,0,0,1,0,0 16912,On estimation of contamination from hydrogen cyanide in carbon monoxide line intensity mapping," Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjoint population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN--halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model based on our understanding of the galaxy--halo connection, with the bias in overall CO detection significance due to HCN expected to be less than 1%. ",0,1,0,0,0,0 16913,Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints," Classifiers can be trained with data-dependent constraints to satisfy fairness goals, reduce churn, achieve a targeted false positive rate, or other policy goals. We study the generalization performance for such constrained optimization problems, in terms of how well the constraints are satisfied at evaluation time, given that they are satisfied at training time. 
To improve generalization performance, we frame the problem as a two-player game where one player optimizes the model parameters on a training dataset, and the other player enforces the constraints on an independent validation dataset. We build on recent work in two-player constrained optimization to show that if one uses this two-dataset approach, then constraint generalization can be significantly improved. As we illustrate experimentally, this approach works not only in theory, but also in practice. ",0,0,0,1,0,0 16914,Automated Website Fingerprinting through Deep Learning," Several studies have shown that the network traffic that is generated by a visit to a website over Tor reveals information specific to the website through the timing and sizes of network packets. By capturing traffic traces between users and their Tor entry guard, a network eavesdropper can leverage this meta-data to reveal which website Tor users are visiting. The success of such attacks heavily depends on the particular set of traffic features that are used to construct the fingerprint. Typically, these features are manually engineered and, as such, any change introduced to the Tor network can render these carefully constructed features ineffective. In this paper, we show that an adversary can automate the feature engineering process, and thus automatically deanonymize Tor traffic by applying our novel method based on deep learning. We collect a dataset comprised of more than three million network traces, which is the largest dataset of web traffic ever used for website fingerprinting, and find that the performance achieved by our deep learning approaches is comparable to known methods which include various research efforts spanning over multiple years. The obtained success rate exceeds 96% for a closed world of 100 websites and 94% for our biggest closed world of 900 classes. 
In our open world evaluation, the most performant deep learning model is 2% more accurate than the state-of-the-art attack. Furthermore, we show that the implicit features automatically learned by our approach are far more resilient to dynamic changes of web content over time. We conclude that the ability to automatically construct the most relevant traffic features and perform accurate traffic recognition makes our deep learning based approach an efficient, flexible and robust technique for website fingerprinting. ",1,0,0,0,0,0 16915,"DataCite as a novel bibliometric source: Coverage, strengths and limitations"," This paper explores the characteristics of DataCite to determine its possibilities and potential as a new bibliometric data source to analyze the scholarly production of open data. Open science and the increasing data sharing requirements from governments, funding bodies, institutions and scientific journals has led to a pressing demand for the development of data metrics. As a very first step towards reliable data metrics, we need to better comprehend the limitations and caveats of the information provided by sources of open data. In this paper, we critically examine records downloaded from the DataCite's OAI API and elaborate a series of recommendations regarding the use of this source for bibliometric analyses of open data. We highlight issues related to metadata incompleteness, lack of standardization, and ambiguous definitions of several fields. Despite these limitations, we emphasize DataCite's value and potential to become one of the main sources for data metrics development. ",1,0,0,0,0,0 16916,Parameter Estimation of Complex Fractional Ornstein-Uhlenbeck Processes with Fractional Noise," We obtain strong consistency and asymptotic normality of a least squares estimator of the drift coefficient for complex-valued Ornstein-Uhlenbeck processes disturbed by fractional noise, extending the result of Y. Hu and D. Nualart, [Statist. Probab. 
Lett., 80 (2010), 1030-1038] to a special two-dimensional case. The strategy is to exploit the Garsia-Rodemich-Rumsey inequality and complex fourth moment theorems. The main ingredients of this paper are the sample path regularity of a multiple Wiener-Ito integral and two equivalent conditions of complex fourth moment theorems in terms of the contractions of integral kernels and complex Malliavin derivatives. ",0,0,1,1,0,0 16917,E-learning Information Technology Based on an Ontology Driven Learning Engine," In this article, we propose a new e-learning information technology based on an ontology driven learning engine, matched with modern pedagogical technologies. Using the proposed engine and a developed question database, we conducted an experiment in which students were tested. The developed ontology driven system of e-learning facilitates the creation of favorable conditions for the development of personal qualities and the formation of a holistic understanding of the subject area among students throughout the educational process. ",1,0,0,0,0,0 16918,Global regularity for 1D Eulerian dynamics with singular interaction forces," The Euler-Poisson-Alignment (EPA) system appears in mathematical biology and is used to model, in a hydrodynamic limit, a set of agents interacting through mutual attraction/repulsion as well as alignment forces. We consider the one-dimensional EPA system with a class of singular alignment terms as well as natural attraction/repulsion terms. The singularity of the alignment kernel produces an interesting effect regularizing the solutions of the equation, leading to global regularity for a wide range of initial data. This was recently observed in the paper by Do, Kiselev, Ryzhik and Tan. Our goal in this paper is to generalize the result and to incorporate the attractive/repulsive potential. We prove that global regularity persists for these more general models. 
",0,0,1,0,0,0 16919,A $q$-generalization of the para-Racah polynomials," New bispectral orthogonal polynomials are obtained from an unconventional truncation of the Askey-Wilson polynomials. In the limit $q \to 1$, they reduce to the para-Racah polynomials which are orthogonal with respect to a quadratic bi-lattice. The three term recurrence relation and q-difference equation are obtained through limits of those of the Askey-Wilson polynomials. An explicit expression in terms of hypergeometric series and the orthogonality relation are provided. A $q$-generalization of the para-Krawtchouk polynomials is obtained as a special case. Connections with the $q$-Racah and dual-Hahn polynomials are also presented. ",0,0,1,0,0,0 16920,Data Poisoning Attack against Unsupervised Node Embedding Methods," Unsupervised node embedding methods (e.g., DeepWalk, LINE, and node2vec) have attracted growing interests given their simplicity and effectiveness. However, although these methods have been proved effective in a variety of applications, none of the existing work has analyzed the robustness of them. This could be very risky if these methods are attacked by an adversarial party. In this paper, we take the task of link prediction as an example, which is one of the most fundamental problems for graph analysis, and introduce a data positioning attack to node embedding methods. We give a complete characterization of attacker's utilities and present efficient solutions to adversarial attacks for two popular node embedding methods: DeepWalk and LINE. We evaluate our proposed attack model on multiple real-world graphs. Experimental results show that our proposed model can significantly affect the results of link prediction by slightly changing the graph structures (e.g., adding or removing a few edges). We also show that our proposed model is very general and can be transferable across different embedding methods. 
Finally, we conduct a case study on a coauthor network to better understand our attack method. ",1,0,0,0,0,0 16921,Entanglement properties of the two-dimensional SU(3) AKLT state," Two-dimensional (spin-$2$) Affleck-Kennedy-Lieb-Tasaki (AKLT) type valence bond solids on the square lattice are known to be symmetry protected topological (SPT) gapped spin liquids [Shintaro Takayoshi, Pierre Pujol, and Akihiro Tanaka, Phys. Rev. B ${\bf 94}$, 235159 (2016)]. Using the projected entangled pair state (PEPS) framework, we extend the construction of the AKLT state to the case of $SU(3)$, relevant for cold atom systems. The entanglement spectrum is shown to be described by an alternating $SU(3)$ chain of ""quarks"" and ""antiquarks"", subject to exponentially decaying (with distance) Heisenberg interactions, in close similarity with its $SU(2)$ analog. We discuss the SPT feature of the state. ",0,1,0,0,0,0 16922,Heart Rate Variability during Periods of Low Blood Pressure as a Predictor of Short-Term Outcome in Preterms," Efficient management of low blood pressure (BP) in preterm neonates remains challenging, with considerable variability in clinical practice. The ability to assess preterm wellbeing during episodes of low BP will help to decide when and whether hypotension treatment should be initiated. This work aims to investigate the relationship between heart rate variability (HRV), BP and the short-term neurological outcome in preterm infants less than 32 weeks gestational age (GA). The predictive power of common HRV features with respect to the outcome is assessed and shown to improve when HRV is observed during episodes of low mean arterial pressure (MAP) - with the single best feature leading to an AUC of 0.87. Combining multiple features with a boosted decision tree classifier achieves an AUC of 0.97. 
The work presents a promising step towards the use of multimodal data in building an objective decision support tool for clinical prediction of short-term outcome in preterms who suffer episodes of low BP. ",0,0,0,1,0,0 16923,Understanding News Outlets' Audience-Targeting Patterns," The power of the press to shape the informational landscape of a population is unparalleled, even now in the era of democratic access to all information outlets. However, it is known that news outlets (particularly more traditional ones) tend to discriminate between whom they want to reach and whom to leave aside. In this work, we attempt to shed some light on the audience-targeting patterns of newspapers, using the Chilean media ecosystem. First, we use the gravity model to analyze geography as a factor in explaining audience reachability. This shows that some newspapers are indeed driven by geographical factors (mostly local news outlets) but some others are not (national-distribution outlets). For those which are not, we use a regression model to study the influence of socioeconomic and political characteristics on news outlet adoption. We conclude that indeed larger, national-distribution news outlets target populations based on these factors, rather than on geography or immediacy. ",1,0,0,0,0,0 16924,Deriving a Representative Vector for Ontology Classes with Instance Word Vector Embeddings," Selecting a representative vector for a set of vectors is a very common requirement in many algorithmic tasks. Traditionally, the mean or median vector is selected. Ontology classes are sets of homogeneous instance objects that can be converted to a vector space by word vector embeddings. This study proposes a methodology to derive a representative vector for ontology classes whose instances were converted to the vector space. We start by deriving five candidate vectors, which are then used to train a machine learning model that calculates a representative vector for the class. 
We show that our methodology outperforms the traditional mean and median vector representations. ",1,0,0,0,0,0 16925,The ESA Gaia Archive: Data Release 1," The ESA Gaia mission is producing the most accurate source catalogue in astronomy to date. This poses a challenge for the archiving area: making this information accessible to astronomers in an efficient way. New astronomical missions have also reinforced changes in the development of archives. Archives are evolving from simple applications for accessing the data into complex data center structures where computing services are available to users and data mining tools are integrated on the server side. For astronomical science involving big catalogues, as in Gaia (or the upcoming Euclid), the common ways of working with the data need to shift to a new paradigm, ""move the code close to the data"", which implies that data mining functionalities are becoming a must for science exploitation. To enable these capabilities, a TAP+ interface, crossmatch capabilities, full catalogue histograms, serialisation of intermediate results in cloud resources like VOSpace, etc. have been implemented for Gaia DR1, enabling the community to exploit these science resources without bottlenecks from the connection bandwidth. We present the architecture, infrastructure and tools already available in the Gaia Archive Data Release 1 (this http URL) and describe their capabilities. ",0,1,0,0,0,0 16926,Efficient algorithm for large spectral partitions," We present an improvement of currently known algorithms for optimal spectral partitioning problems. The idea is to retain the advantages of a representation using density functions while decreasing the computational time. This is done by restricting the computation to neighbourhoods of regions where the associated densities are above a certain threshold. 
The algorithm extends and improves known methods in the plane and on surfaces in dimension 3. It also makes possible some of the first computations of volumetric 3D spectral partitions on sufficiently large discretizations. ",0,0,1,0,0,0 16927,A Martian Origin for the Mars Trojan Asteroids," Seven of the nine known Mars Trojan asteroids belong to an orbital cluster named after its largest member 5261 Eureka. Eureka is likely the progenitor of the whole cluster, which formed at least 1 Gyr ago. It was suggested that the thermal YORP effect spun up Eureka, resulting in fragments being ejected by the rotational-fission mechanism. Eureka's spectrum exhibits a broad and deep absorption band around 1 {\mu}m, indicating an olivine-rich composition. Here we show evidence that the Trojan Eureka cluster progenitor could have originated as impact debris excavated from the Martian mantle. We present new near-infrared observations of two Trojans (311999 2007 NS2 and 385250 2001 DH47) and find that both exhibit an olivine-rich reflectance spectrum similar to Eureka's. These measurements confirm that the progenitor of the cluster has an achondritic composition. Olivine-rich reflectance spectra are rare amongst asteroids but are seen around the largest basins on Mars. They are also consistent with some Martian meteorites (e.g. Chassigny), and with the material comprising much of the Martian mantle. Using numerical simulations, we show that the Mars Trojans are more likely to be impact ejecta from Mars than captured olivine-rich asteroids transported from the main belt. This result directly links specific asteroids to debris from the forming planets. ",0,1,0,0,0,0 16928,Far-infrared metallicity diagnostics: Application to local ultraluminous infrared galaxies," The abundance of metals in galaxies is a key parameter which permits distinguishing between different galaxy formation and evolution models. Most metallicity determinations are based on optical line ratios. 
However, the optical spectral range is subject to dust extinction and, for high-z objects (z > 3), some of the lines used in optical metallicity diagnostics are shifted to wavelengths not accessible to ground-based observatories. For this reason, we explore metallicity diagnostics using far-infrared (IR) line ratios which can provide a suitable alternative in such situations. To investigate these far-IR line ratios, we modeled the emission of a starburst with the photoionization code CLOUDY. The far-IR ratios most sensitive to metallicity are the [OIII]52$\mu$m and 88$\mu$m to [NIII]57$\mu$m ratios. We show that these ratios produce robust metallicities in the presence of an AGN and are insensitive to changes in the age of the ionizing stellar population. Another metallicity-sensitive ratio is the [OIII]88$\mu$m/[NII]122$\mu$m ratio, although it depends on the ionization parameter. We propose various mid- and far-IR line ratios to break this dependency. Finally, we apply these far-IR diagnostics to a sample of 19 local ultraluminous IR galaxies (ULIRGs) observed with Herschel and Spitzer. We find that the gas-phase metallicity in these local ULIRGs is in the range 0.7 < Z_gas/Z_sun < 1.5, which corresponds to 8.5 < 12 + log (O/H) < 8.9. The inferred metallicities agree well with previous estimates for local ULIRGs, confirming that they lie below the local mass-metallicity relation. ",0,1,0,0,0,0 16929,Quantum communication by means of collapse of the wave function," We show that quantum communication by means of collapse of the wave function is possible. In this study, quantum communication does not mean quantum teleportation or quantum cryptography, but transmission of information itself. Because of consistency with special relativity, the possibility of quantum communication leads to a further conclusion: the collapse of the wave function must propagate at the speed of light or slower. 
We show this requirement is consistent with nonlocality in quantum mechanics. We also demonstrate that the Einstein-Podolsky-Rosen experiment does not disprove our conclusion. ",0,1,0,0,0,0 16930,DeepTerramechanics: Terrain Classification and Slip Estimation for Ground Robots via Deep Learning," Terramechanics plays a critical role in the areas of ground vehicles and ground mobile robots since understanding and estimating the variables influencing the vehicle-terrain interaction may mean the success or the failure of an entire mission. This research applies state-of-the-art algorithms in deep learning to two key problems: estimating wheel slip and classifying the terrain being traversed by a ground robot. Three data sets collected by ground robotic platforms (MIT single-wheel testbed, MSL Curiosity rover, and tracked robot Fitorobot) are employed in order to compare the performance of traditional machine learning methods (i.e. Support Vector Machine (SVM) and Multi-layer Perceptron (MLP)) against Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs). This work also shows the impact that certain tuning parameters and the network architecture (MLP, DNN and CNN) have on the performance of those methods. This paper also contributes a deep discussion of the lessons learned in the implementation of DNNs and CNNs and of how these methods can be extended to solve other problems. ",1,0,0,0,0,0 16931,"Characterizations of multinormality and corresponding tests of fit, including for Garch models"," We provide novel characterizations of multivariate normality that incorporate both the characteristic function and the moment generating function, and we employ these results to construct a class of affine invariant, consistent and easy-to-use goodness-of-fit tests for normality. The test statistics are suitably weighted $L^2$-statistics, and we provide their asymptotic behavior both for i.i.d. 
observations as well as in the context of testing that the innovation distribution of a multivariate GARCH model is Gaussian. We also study the finite-sample behavior of the new tests and compare the new criteria with alternative existing tests. ",0,0,1,1,0,0 16932,Grouped Gaussian Processes for Solar Power Prediction," We consider multi-task regression models where the observations are assumed to be a linear combination of several latent node functions and weight functions, which are both drawn from Gaussian process priors. Driven by the problem of developing scalable methods for forecasting distributed solar and other renewable power generation, we propose coupled priors over groups of (node or weight) processes to exploit spatial dependence between functions. We estimate forecast models for solar power at multiple distributed sites and ground wind speed at multiple proximate weather stations. Our results show that our approach maintains or improves point-prediction accuracy relative to competing solar benchmarks and improves over wind forecast benchmark models on all measures. Our approach consistently dominates the equivalent model without coupled priors, achieving faster gains in forecast accuracy. At the same time our approach provides better quantification of predictive uncertainties. ",0,0,0,1,0,0 16933,Modeling epidemics on d-cliqued graphs," Since social interactions have been shown to lead to symmetric clusters, we propose here that symmetries play a key role in epidemic modeling. Mathematical models on d-ary tree graphs were recently shown to be particularly effective for modeling epidemics in simple networks [Seibold & Callender, 2016]. To account for symmetric relations, we generalize this to a new type of networks modeled on d-cliqued tree graphs, which are obtained by adding edges to regular d-trees to form d-cliques. 
This setting gives a more realistic model for epidemic outbreaks originating, for example, within a family or classroom, which could then reach a wider population by transmission via children in schools. Specifically, we quantify how an infection starting in a clique (e.g. a family) can reach other cliques through the body of the graph (e.g. public places). Moreover, we propose and study the notion of a safe zone, a subset that has a negligible probability of infection. ",1,0,0,0,1,0 16934,On the K-theory stable bases of the Springer resolution," Cohomological and K-theoretic stable bases originated from the study of quantum cohomology and quantum K-theory. The restriction formula for cohomological stable bases played an important role in computing the quantum connection of the cotangent bundle of partial flag varieties. In this paper we study the K-theoretic stable bases of cotangent bundles of flag varieties. We describe these bases in terms of the action of the affine Hecke algebra and the twisted group algebra of Kostant-Kumar. Using this algebraic description and the method of root polynomials, we give a restriction formula for the stable bases. We apply it to obtain the restriction formula for partial flag varieties. We also build a relation between the stable basis and the Casselman basis in the principal series representations of the Langlands dual group. As an application, we give a closed formula for the transition matrix between the Casselman basis and the characteristic functions. ",0,0,1,0,0,0 16935,Recency Bias in the Era of Big Data: The Need to Strengthen the Status of History of Mathematics in Nigerian Schools," The amount of information available to the mathematics teacher is so enormous that the selection of desirable content is gradually becoming a huge task in itself. 
With respect to the inclusion of elements of history of mathematics in mathematics instruction, the era of Big Data introduces a high likelihood of Recency Bias, a hitherto unconnected challenge for stakeholders in mathematics education. This tendency to choose recent information at the expense of relevant older, composite, historical facts stands to defeat the aims and objectives of the epistemological and cultural approach to mathematics instructional delivery. This study is a didactic discourse with focus on this threat to the history and pedagogy of mathematics, particularly as it affects mathematics education in Nigeria. The implications for mathematics curriculum developers, teacher-training programmes, teacher lesson preparation, and publication of mathematics instructional materials were also deeply considered. ",1,0,1,0,0,0 16936,Convergence Analysis of Deterministic Kernel-Based Quadrature Rules in Misspecified Settings," This paper presents a convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces. In particular, we deal with misspecified settings where a test integrand is less smooth than a Sobolev RKHS based on which a quadrature rule is constructed. We provide convergence guarantees based on two different assumptions on a quadrature rule: one on quadrature weights, and the other on design points. More precisely, we show that convergence rates can be derived (i) if the sum of absolute weights remains constant (or does not increase quickly), or (ii) if the minimum distance between design points does not decrease very quickly. As a consequence of the latter result, we derive a rate of convergence for Bayesian quadrature in misspecified settings. 
We reveal a condition on design points to make Bayesian quadrature robust to misspecification, and show that, under this condition, it may adaptively achieve the optimal rate of convergence in the Sobolev space of a lesser order (i.e., of the unknown smoothness of a test integrand), under a slightly stronger regularity condition on the integrand. ",1,0,0,1,0,0 16937,The toric Frobenius morphism and a conjecture of Orlov," We combine the Bondal-Uehara method for producing exceptional collections on toric varieties with a result of the first author and Favero to expand the set of varieties satisfying Orlov's Conjecture on derived dimension. ",0,0,1,0,0,0 16938,Friendship Maintenance and Prediction in Multiple Social Networks," Due to the proliferation of online social networks (OSNs), users find themselves participating in multiple OSNs. These users leave their activity traces as they maintain friendships and interact with other users in these OSNs. In this work, we analyze how users maintain friendship in multiple OSNs by studying users who have accounts in both Twitter and Instagram. Specifically, we study the similarity of a user's friendships and the evenness of friendship distribution in multiple OSNs. Our study shows that most users in Twitter and Instagram prefer to maintain different friendships in the two OSNs, keeping only a small clique of common friends across the OSNs. Based upon our empirical study, we conduct link prediction experiments to predict missing friendship links in multiple OSNs using neighborhood features, neighborhood friendship maintenance features and cross-link features. Our link prediction experiments show that unsupervised methods can yield good accuracy in predicting links in one OSN using another OSN's data, and the link prediction accuracy can be further improved using supervised methods with friendship maintenance and other measures as features. 
",1,1,0,0,0,0 16939,Learning to Generate Music with BachProp," As deep learning advances, algorithms of music composition increase in performance. However, most of the successful models are designed for specific musical structures. Here, we present BachProp, an algorithmic composer that can generate music scores in many styles given sufficient training data. To adapt BachProp to a broad range of musical styles, we propose a novel representation of music and train a deep network to predict the note transition probabilities of a given music corpus. In this paper, new music scores generated by BachProp are compared with the original corpora as well as with different network architectures and other related models. We show that BachProp captures important features of the original datasets better than other models and invite the reader to a qualitative comparison on a large collection of generated songs. ",1,0,0,0,0,0 16940,How to Quantize $n$ Outputs of a Binary Symmetric Channel to $n-1$ Bits?," Suppose that $Y^n$ is obtained by observing a uniform Bernoulli random vector $X^n$ through a binary symmetric channel with crossover probability $\alpha$. The ""most informative Boolean function"" conjecture postulates that the maximal mutual information between $Y^n$ and any Boolean function $\mathrm{b}(X^n)$ is attained by a dictator function. In this paper, we consider the ""complementary"" case in which the Boolean function is replaced by $f:\left\{0,1\right\}^n\to\left\{0,1\right\}^{n-1}$, namely, an $n-1$ bit quantizer, and show that $I(f(X^n);Y^n)\leq (n-1)\cdot\left(1-h(\alpha)\right)$ for any such $f$. Thus, in this case, the optimal function is of the form $f(x^n)=(x_1,\ldots,x_{n-1})$. ",1,0,1,0,0,0 16941,Semi-Supervised Deep Learning for Monocular Depth Map Prediction," Supervised deep learning often suffers from the lack of sufficient training data. 
Specifically in the context of monocular depth map prediction, it is barely possible to determine dense ground truth depth images in realistic dynamic outdoor environments. When using LiDAR sensors, for instance, noise is present in the distance measurements, the calibration between sensors cannot be perfect, and the measurements are typically much sparser than the camera images. In this paper, we propose a novel approach to depth map prediction from monocular images that learns in a semi-supervised way. While we use sparse ground-truth depth for supervised learning, we also train our deep network to produce photoconsistent dense depth maps in a stereo setup using a direct image alignment loss. In experiments we demonstrate superior performance in depth map prediction from single images compared to the state-of-the-art methods. ",1,0,0,0,0,0 16942,Approximation Schemes for Clustering with Outliers," Clustering problems are well-studied in a variety of fields such as data science, operations research, and computer science. Such problems include variants of centre location problems, $k$-median, and $k$-means to name a few. In some cases, not all data points need to be clustered; some may be discarded for various reasons. We study clustering problems with outliers. More specifically, we look at Uncapacitated Facility Location (UFL), $k$-Median, and $k$-Means. In UFL with outliers, we have to open some centres, discard up to $z$ points of $\cal X$ and assign every other point to the nearest open centre, minimizing the total assignment cost plus centre opening costs. In $k$-Median and $k$-Means, we have to open up to $k$ centres but there are no opening costs. In $k$-Means, the cost of assigning $j$ to $i$ is $\delta^2(j,i)$. We present several results. Our main focus is on cases where $\delta$ is a doubling metric or is the shortest path metric of graphs from a minor-closed family of graphs. 
For uniform-cost UFL with outliers on such metrics we show that a simple multiswap local search heuristic yields a PTAS. With a bit more work, we extend this to bicriteria approximations for the $k$-Median and $k$-Means problems in the same metrics where, for any constant $\epsilon > 0$, we can find a solution using $(1+\epsilon)k$ centres whose cost is at most a $(1+\epsilon)$-factor of the optimum and uses at most $z$ outliers. We also show that natural local search heuristics that do not violate the number of clusters and outliers for $k$-Median (or $k$-Means) will have an unbounded gap even in Euclidean metrics. Furthermore, we show how our analysis can be extended to general metrics for $k$-Means with outliers to obtain a $(25+\epsilon,1+\epsilon)$ bicriteria approximation. ",1,0,0,0,0,0 16943,Order preserving pattern matching on trees and DAGs," The order preserving pattern matching (OPPM) problem is, given a pattern string $p$ and a text string $t$, to find all substrings of $t$ which have the same relative order as $p$. In this paper, we consider two variants of the OPPM problem where a set of text strings is given as a tree or a DAG. We show that the OPPM problem for a single pattern $p$ of length $m$ and a text tree $T$ of size $N$ can be solved in $O(m+N)$ time if the characters of $p$ are drawn from an integer alphabet of polynomial size. The time complexity becomes $O(m \log m + N)$ if the pattern $p$ is over a general ordered alphabet. We then show that the OPPM problem for a single pattern and a text DAG is NP-complete. ",1,0,0,0,0,0 16944,Categorical Probabilistic Theories," We present a simple categorical framework for the treatment of probabilistic theories, with the aim of reconciling the fields of Categorical Quantum Mechanics (CQM) and Operational Probabilistic Theories (OPTs). 
In recent years, both CQM and OPTs have found successful application to a number of areas in quantum foundations and information theory: they present many similarities, both in spirit and in formalism, but they remain separated by a number of subtle yet important differences. We attempt to bridge this gap by adopting a minimal number of operationally motivated axioms which provide clean categorical foundations, in the style of CQM, for the treatment of the problems that OPTs are concerned with. ",0,0,1,0,0,0 16945,Maximal polynomial modulations of singular integrals," Let $K$ be a standard Hölder continuous Calderón--Zygmund kernel on $\mathbb{R}^{\mathbf{d}}$ whose truncations define $L^2$ bounded operators. We show that the maximal operator obtained by modulating $K$ by polynomial phases of a fixed degree is bounded on $L^p(\mathbb{R}^{\mathbf{d}})$ for $1 < p < \infty$. This extends Sjölin's multidimensional Carleson theorem and Lie's polynomial Carleson theorem. ",0,1,0,0,0,0 16946,Effects of Hubbard term correction on the structural parameters and electronic properties of wurtzite ZnO," The effects of including the Hubbard on-site Coulombic correction on the structural parameters and valence energy states of wurtzite ZnO were explored. Due to the changes in the structural parameters caused by correction of the hybridization between Zn d states and O p states, suitable Hubbard parameters have to be determined for an accurate prediction of ZnO properties. Using the LDA+${U}$ method with Hubbard corrections $U_d$ applied to Zn 3d states and $U_p$ to O 2p states, the lattice constants were underestimated for all tested Hubbard parameters. The combination of both $U_d$ and $U_p$ correction terms managed to widen the band gap of wurtzite ZnO to the experimental value. Pairs of $U_d$ and $U_p$ parameters with the correct positioning of the d-band and accurate bandwidths were selected, in addition to predicting an accurate band gap value. 
Inspection of vibrational properties, however, revealed mismatches between the estimated gamma-point phonon frequencies and experimental values. The selection of Hubbard terms based on electronic band properties alone cannot ensure an accurate vibrational description in LDA+${U}$ calculations. ",0,1,0,0,0,0 16947,A multi-scale Gaussian beam parametrix for the wave equation: the Dirichlet boundary value problem," We present a construction of a multi-scale Gaussian beam parametrix for the Dirichlet boundary value problem associated with the wave equation, and study its convergence rate to the true solution in the highly oscillatory regime. The construction elaborates on the wave-atom parametrix of Bao, Qian, Ying, and Zhang and extends to a multi-scale setting the technique of Gaussian beam propagation from a boundary of Katchalov, Kurylev and Lassas. ",0,0,1,0,0,0 16948,Uncertainty and sensitivity analysis of functional risk curves based on Gaussian processes," A functional risk curve gives the probability of an undesirable event as a function of the value of a critical parameter of a considered physical system. In several application settings, this curve is built using phenomenological numerical models which simulate complex physical phenomena. To avoid CPU-time-expensive numerical models, we propose to use Gaussian process regression to build functional risk curves. An algorithm is given to provide confidence bounds due to this approximation. Two methods of global sensitivity analysis of the models' random input parameters on the functional risk curve are also studied. In particular, the PLI sensitivity indices allow one to understand the effect of misjudgment on the input parameters' probability density functions. 
",0,0,1,1,0,0 16949,Global optimization for low-dimensional switching linear regression and bounded-error estimation," The paper provides global optimization algorithms for two particularly difficult nonconvex problems raised by hybrid system identification: switching linear regression and bounded-error estimation. While most works focus on local optimization heuristics without global optimality guarantees or with guarantees valid only under restrictive conditions, the proposed approach always yields a solution with a certificate of global optimality. This approach relies on a branch-and-bound strategy for which we devise lower bounds that can be efficiently computed. In order to obtain scalable algorithms with respect to the number of data, we directly optimize the model parameters in a continuous optimization setting without involving integer variables. Numerical experiments show that the proposed algorithms offer a higher accuracy than convex relaxations with a reasonable computational burden for hybrid system identification. In addition, we discuss how bounded-error estimation is related to robust estimation in the presence of outliers and exact recovery under sparse noise, for which we also obtain promising numerical results. ",1,0,0,1,0,0 16950,An Incremental Self-Organizing Architecture for Sensorimotor Learning and Prediction," During visuomotor tasks, robots must compensate for temporal delays inherent in their sensorimotor processing systems. Delay compensation becomes crucial in a dynamic environment where the visual input is constantly changing, e.g., during the interacting with a human demonstrator. For this purpose, the robot must be equipped with a prediction mechanism for using the acquired perceptual experience to estimate possible future motor commands. In this paper, we present a novel neural network architecture that learns prototypical visuomotor representations and provides reliable predictions on the basis of the visual input. 
These predictions are used to compensate for the delayed motor behavior in an online manner. We investigate the performance of our method with a set of experiments comprising a humanoid robot that has to learn and generate visually perceived arm motion trajectories. We evaluate the accuracy in terms of mean prediction error and analyze the response of the network to novel movement demonstrations. Additionally, we report experiments with incomplete data sequences, showing the robustness of the proposed architecture in the case of a noisy and faulty visual sensor. ",1,0,0,0,0,0 16951,A CutFEM method for two-phase flow problems," In this article, we present a cut finite element method for two-phase Navier-Stokes flows. The main feature of the method is the formulation of a unified continuous interior penalty stabilisation approach for, on the one hand, stabilising advection and the pressure-velocity coupling and, on the other hand, stabilising the cut region. The accuracy of the algorithm is enhanced by the development of extended fictitious domains to guarantee a well defined velocity from previous time steps in the current geometry. Finally, the robustness of the moving-interface algorithm is further improved by the introduction of a curvature smoothing technique that reduces spurious velocities. The algorithm is shown to perform remarkably well for low capillary number flows, and is a first step towards flexible and robust CutFEM algorithms for the simulation of microfluidic devices. ",1,0,0,0,0,0 16952,Learning under selective labels in the presence of expert consistency," We explore the problem of learning under selective labels in the context of algorithm-assisted decision making. Selective labels is a pervasive selection bias problem that arises when historical decision making blinds us to the true outcome for certain instances. 
Examples of this are common in many applications, ranging from predicting recidivism using pre-trial release data to diagnosing patients. In this paper we discuss why selective labels often cannot be effectively tackled by standard methods for adjusting for sample selection bias, even if there are no unobservables. We propose a data augmentation approach that can be used to either leverage expert consistency to mitigate the partial blindness that results from selective labels, or to empirically validate whether learning under such a framework may lead to unreliable models prone to systemic discrimination. ",0,0,0,1,0,0 16953,Opacity limit for supermassive protostars," We present a model for the evolution of supermassive protostars from their formation at $M_\star \simeq 0.1\,\text{M}_\odot$ until their growth to $M_\star \simeq 10^5\,\text{M}_\odot$. To calculate the initial properties of the object in the optically thick regime we follow two approaches: based on idealized thermodynamic considerations, and on a more detailed one-zone model. Both methods derive a similar value of $n_{\rm F} \simeq 2 \times 10^{17} \,\text{cm}^{-3}$ for the density of the object when opacity becomes important, i.e. the opacity limit. The subsequent evolution of the growing protostar is determined by the accretion of gas onto the object and can be described by a mass-radius relation of the form $R_\star \propto M_\star^{1/3}$ during the early stages, and of the form $R_\star \propto M_\star^{1/2}$ when internal luminosity becomes important. For the case of a supermassive protostar, this implies that the radius of the star grows from $R_\star \simeq 0.65 \,{\rm AU}$ to $R_\star \simeq 250 \,{\rm AU}$ during its evolution. Finally, we use this model to construct a sub-grid recipe for accreting sink particles in numerical simulations. 
A prime ingredient thereof is a physically motivated prescription for the accretion radius and the effective temperature of the growing protostar embedded inside it. From the latter, we can conclude that photo-ionization feedback can be neglected until very late in the assembly process of the supermassive object. ",0,1,0,0,0,0 16954,Learning to Imagine Manipulation Goals for Robot Task Planning," Prospection is an important part of how humans come up with new task plans, but has not been explored in depth in robotics. Predicting multiple task-level outcomes is a challenging problem that involves capturing both task semantics and continuous variability over the state of the world. Ideally, we would combine the ability of machine learning to leverage big data for learning the semantics of a task, while using techniques from task planning to reliably generalize to new environments. In this work, we propose a method for learning a model encoding just such a representation for task planning. We learn a neural net that encodes the $k$ most likely outcomes of high-level actions from a given world. Our approach creates comprehensible task plans that allow us to predict changes to the environment many time steps into the future. We demonstrate this approach via application to a stacking task in a cluttered environment, where the robot must select between different colored blocks while avoiding obstacles, in order to perform a task. We also show results on a simple navigation task. Our algorithm generates realistic image and pose predictions at multiple points in a given task. 
It determines not only the spectrum of a graph, and the angles between standard basis vectors and the eigenspaces, but even the angles between projections of standard basis vectors into the eigenspaces. Here, we investigate the combinatorial power of WL[2]. For sufficiently large k, WL[k] determines all combinatorial properties of a graph. Many traditionally used combinatorial invariants are determined by WL[k] for small k. We focus on two fundamental invariants, the number of cycles Cp of length p, and the number of cliques Kp of size p. We show that WL[2] determines the number of cycles of lengths up to 6, but not those of length 8. Also, WL[2] does not determine the number of 4-cliques. 
Here we demonstrate that datasets of electronic properties calculated at the ab initio level can be effectively used to identify and understand physical trends in magnetic materials, thus opening new avenues for accelerated materials discovery. Following a data-centric approach, we utilize a database of Heusler alloys calculated at the density functional theory level to identify the ideal ions neighbouring Fe in the $X_2$Fe$Z$ Heusler prototype. The hybridization of Fe with the nearest neighbour $X$ ion is found to cause redistribution of the on-site Fe charge and a net increase of its magnetic moment proportional to the valence of $X$. Thus, late transition metals are ideal Fe neighbours for producing high-moment Fe-based Heusler magnets. At the same time a thermodynamic stability analysis is found to restrict $Z$ to main group elements. Machine learning regressors, trained to predict magnetic moment and volume of Heusler alloys, are used to determine the magnetization for all materials belonging to the proposed prototype. We find that Co$_2$Fe$Z$ alloys, and in particular Co$_2$FeSi, maximize the magnetization, which reaches values up to 1.2T. This is in good agreement with both ab initio and experimental data. Furthermore, we identify the Cu$_2$Fe$Z$ family to be a cost-effective materials class, offering a magnetization of approximately 0.65T. ",0,1,0,0,0,0 16958,On a diffuse interface model for tumour growth with non-local interactions and degenerate mobilities," We study a non-local variant of a diffuse interface model proposed by Hawkins--Daarud et al. (2012) for tumour growth in the presence of a chemical species acting as nutrient. The system consists of a Cahn--Hilliard equation coupled to a reaction-diffusion equation. For non-degenerate mobilities and smooth potentials, we derive well-posedness results, which are the non-local analogue of those obtained in Frigeri et al. (European J. Appl. Math. 2015). 
Furthermore, we establish existence of weak solutions for the case of degenerate mobilities and singular potentials, which serves to confine the order parameter to its physically relevant interval. Due to the non-local nature of the equations, under additional assumptions continuous dependence on initial data can also be shown. ",0,0,1,0,0,0 16959,Gradient Descent Can Take Exponential Time to Escape Saddle Points," Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et al., 2017] is not slowed down by saddle points - it can find an approximate local minimizer in polynomial time. This result implies that GD is inherently slower than perturbed GD, and justifies the importance of adding perturbations for efficient non-convex optimization. While our focus is theoretical, we also present experiments that illustrate our theoretical findings. ",1,0,1,1,0,0 16960,Spectral parameter power series for arbitrary order linear differential equations," Let $L$ be the $n$-th order linear differential operator $Ly = \phi_0y^{(n)} + \phi_1y^{(n-1)} + \cdots + \phi_ny$ with variable coefficients. A representation is given for $n$ linearly independent solutions of $Ly=\lambda r y$ as power series in $\lambda$, generalizing the SPPS (spectral parameter power series) solution which has been previously developed for $n=2$. The coefficient functions in these series are obtained by recursively iterating a simple integration process, beginning with a solution system for $\lambda=0$. It is shown how to obtain such an initializing system working upwards from equations of lower order. 
The values of the successive derivatives of the power series solutions at the basepoint of integration are given, which provides a technique for numerical solution of $n$-th order initial value problems and spectral problems. ",0,0,1,0,0,0 16961,Antropologia de la Informatica Social: Teoria de la Convergencia Tecno-Social," The traditional humanism of the twentieth century, inspired by the culture of the book, systematically distanced itself from the new society of digital information; the Internet and tools of information processing revolutionized the world, and society during this period developed certain adaptive characteristics based on coexistence (Human - Machine); this transformation is based on the impact of three technology segments: devices, applications and infrastructure of social communication, which are involved in various physical, behavioural and cognitive changes of the human being; and the emergence of new models of influence and social control through the new ubiquitous communication; however, in this new process of conviviality, new models such as ""collaborative thinking"" and ""InfoSharing"" develop, managing social information under three ontological dimensions: Human (h) - Information (i) - Machine (m), which is the basis of a new physical-cyber ecosystem, where they coexist and develop new social units called ""virtual communities"". 
This new communication infrastructure and social management of information has revealed areas of vulnerability (the ""social perspective of risk""), impacting all social units through the massive impact vector (i); the virtual environment ""H + i + M"" and its components, as well as the life cycle management of social information, allow us to understand the path of ""Techno - Social"" integration and set out a new contribution to cybernetics, within the convergence of technology with society and the new challenges of coexistence, aimed at a new holistic and not pragmatic vision, as the human component (h) in the virtual environment is the precursor of the future and needs to be studied not as an application, but as the hub of a new society. ",1,0,0,0,0,0 16962,A Deterministic Approach to Avoid Saddle Points," Loss functions with a large number of saddle points are one of the main obstacles to training many modern machine learning models. Gradient descent (GD) is a fundamental algorithm for machine learning and converges to a saddle point for certain initial data. We call the region formed by these initial values the ""attraction region."" For quadratic functions, GD converges to a saddle point if the initial data is in a subspace of up to n-1 dimensions. In this paper, we prove that a small modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher, et al., arXiv:1806.06317] contributes to avoiding saddle points without sacrificing the convergence rate of GD. In particular, we show that the dimension of the LSGD's attraction region is at most floor((n-1)/2) for a class of quadratic functions which is significantly smaller than GD's (n-1)-dimensional attraction region. ",1,0,0,1,0,0 16963,Automatic Generation of Typographic Font from a Small Font Subset," This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. 
Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters. ",1,0,0,0,0,0 16964,The Second Postulate of Euclid and the Hyperbolic Geometry," The article deals with the connection between the second postulate of Euclid and non-Euclidean geometry. It is shown that the violation of the second postulate of Euclid inevitably leads to hyperbolic geometry. This eliminates misunderstandings about the sums of some divergent series. The connection between hyperbolic geometry and relativistic computations is noted. ",0,0,1,0,0,0 16965,Transkernel: An Executor for Commodity Kernels on Peripheral Cores," Modern mobile and embedded platforms see a large number of ephemeral tasks driven by background activities. In order to execute such a task, the OS kernel wakes up the platform beforehand and puts it back to sleep afterwards. In doing so, the kernel operates various IO devices and orchestrates their power state transitions. 
Such kernel execution phases are lengthy, incur a high energy cost, and yet are difficult to optimize. We advocate for relieving the CPU from these kernel phases by executing them on a low-power, microcontroller-like core, dubbed peripheral core, hence leaving the CPU off. Yet, for a peripheral core to execute phases in a complex commodity kernel (e.g. Linux), existing approaches either incur high engineering effort or high runtime overhead. We take a radical approach with a new executor model called transkernel. Running on a peripheral core, a transkernel executes the binary of the commodity kernel through cross-ISA, dynamic binary translation (DBT). The transkernel translates stateful kernel code while emulating a small set of stateless kernel services; it sets a narrow, stable binary interface for emulated services; it specializes for the kernel's beaten paths; it exploits ISA similarities for low DBT cost. With a concrete implementation on a heterogeneous ARM SoC, we demonstrate the feasibility and benefit of transkernel. Our result contributes a new OS structure that combines cross-ISA DBT and emulation for harnessing a heterogeneous SoC. Our result demonstrates that while cross-ISA DBT is typically used under the assumption of efficiency loss, it can be used for efficiency gain, even atop off-the-shelf hardware. ",1,0,0,0,0,0 16966,No iterated identities satisfied by all finite groups," We show that there is no iterated identity satisfied by all finite groups. For $w$ being a non-trivial word of length $l$, we show that there exists a finite group $G$ of cardinality at most $\exp(l^C)$ which does not satisfy the iterated identity $w$. The proof uses the approach of Borisov and Sapir, who used dynamics of polynomial mappings for the proof of non-residual finiteness of some groups. 
",0,0,1,0,0,0 16967,Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning," The goal of this tutorial is to introduce key models, algorithms, and open questions related to the use of optimization methods for solving problems arising in machine learning. It is written with an INFORMS audience in mind, specifically those readers who are familiar with the basics of optimization algorithms, but less familiar with machine learning. We begin by deriving a formulation of a supervised learning problem and show how it leads to various optimization problems, depending on the context and underlying assumptions. We then discuss some of the distinctive features of these optimization problems, focusing on the examples of logistic regression and the training of deep neural networks. The latter half of the tutorial focuses on optimization algorithms, first for convex logistic regression, for which we discuss the use of first-order methods, the stochastic gradient method, variance reducing stochastic methods, and second-order methods. Finally, we discuss how these approaches can be employed in the training of deep neural networks, emphasizing the difficulties that arise from the complex, nonconvex structure of these models. ",1,0,0,1,0,0 16968,A Unified Parallel Algorithm for Regularized Group PLS Scalable to Big Data," Partial Least Squares (PLS) methods have been heavily exploited to analyse the association between two blocks of data. These powerful approaches can be applied to data sets where the number of variables is greater than the number of observations and in the presence of high collinearity between variables. Different sparse versions of PLS have been developed to integrate multiple data sets while simultaneously selecting the contributing variables. Sparse modelling is a key factor in obtaining better estimators and identifying associations between multiple data sets. 
The cornerstone of the sparse versions of PLS methods is the link between the SVD of a matrix (constructed from deflated versions of the original matrices of data) and least squares minimisation in linear regression. We present here an accurate description of the most popular PLS methods, alongside their mathematical proofs. A unified algorithm is proposed to perform all four types of PLS including their regularised versions. Various approaches to decrease the computation time are offered, and we show how the whole procedure can be scalable to big data sets. ",0,0,0,1,0,0 16969,Asymptotic behaviour of the fifth Painlevé transcendents in the space of initial values," We study the asymptotic behaviour of the solutions of the fifth Painlevé equation as the independent variable approaches zero and infinity in the space of initial values. We show that the limit set of each solution is compact and connected and, moreover, that any solution with the essential singularity at zero has an infinite number of poles and zeroes, and any solution with the essential singularity at infinity has an infinite number of poles and takes the value $1$ infinitely many times. ",0,1,1,0,0,0 16970,Hidden Treasures - Recycling Large-Scale Internet Measurements to Study the Internet's Control Plane," Internet-wide scans are a common active measurement approach to study the Internet, e.g., studying security properties or protocol adoption. They involve probing large address ranges (IPv4 or parts of IPv6) for specific ports or protocols. Besides their primary use for probing (e.g., studying protocol adoption), we show that - at the same time - they provide valuable insights into the Internet control plane informed by ICMP responses to these probes - a currently unexplored secondary use. We collect one week of ICMP responses (637.50M messages) to several Internet-wide ZMap scans covering multiple TCP and UDP ports as well as DNS-based scans covering > 50% of the domain name space. 
This perspective enables us to study the Internet's control plane as a by-product of Internet measurements. We receive ICMP messages from ~171M different IPs in roughly 53K different autonomous systems. Additionally, we uncover multiple control plane problems, e.g., we detect a plethora of outdated and misconfigured routers and uncover the presence of large-scale persistent routing loops in IPv4. ",1,0,0,0,0,0 16971,Image Registration and Predictive Modeling: Learning the Metric on the Space of Diffeomorphisms," We present a method for metric optimization in the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, by treating the induced Riemannian metric on the space of diffeomorphisms as a kernel in a machine learning context. For simplicity, we choose kernel Fisher Linear Discriminant Analysis (KLDA) as the framework. Optimizing the kernel parameters in an Expectation-Maximization framework, we define model fidelity via the hinge loss of the decision function. The resulting algorithm optimizes the parameters of the LDDMM norm-inducing differential operator as a solution to a group-wise registration and classification problem. In practice, this may lead to a biology-aware registration, focusing its attention on the predictive task at hand such as identifying the effects of disease. We first tested our algorithm on a synthetic dataset, showing that our parameter selection improves registration quality and classification accuracy. We then tested the algorithm on 3D subcortical shapes from the Schizophrenia cohort Schizconnect. Our Schizophrenia-Control predictive model showed significant improvement in ROC AUC compared to baseline parameters. ",0,0,0,1,0,0 16972,On a direct algorithm for constructing recursion operators and Lax pairs for integrable models," We suggest an algorithm for constructing recursion operators for nonlinear integrable equations. 
It was observed that the recursion operator $R$ can be represented as a ratio of the form $R=L_1^{-1}L_2$ where the linear differential operators $L_1$ and $L_2$ are chosen in such a way that the ordinary differential equation $(L_2-\lambda L_1)U=0$ is consistent with the linearization of the given nonlinear integrable equation for any value of the parameter $\lambda\in \textbf{C}$. For constructing the operator $L_1$ we use the concept of the invariant manifold which is a generalization of the symmetry. Then for searching $L_2$ we take an auxiliary linear equation connected with the linearized equation by the Darboux transformation. Connection of the invariant manifold with the Lax pairs and the Dubrovin-Weierstrass equations is discussed. ",0,1,0,0,0,0 16973,Network Classification and Categorization," To the best of our knowledge, this paper presents the first large-scale study that tests whether network categories (e.g., social networks vs. web graphs) are distinguishable from one another (using both categories of real-world networks and synthetic graphs). A classification accuracy of $94.2\%$ was achieved using a random forest classifier with both real and synthetic networks. This work makes two important findings. First, real-world networks from various domains have distinct structural properties that allow us to predict with high accuracy the category of an arbitrary network. Second, classifying synthetic networks is trivial as our models can easily distinguish between synthetic graphs and the real-world networks they are supposed to model. ",1,0,0,1,0,0 16974,A Polynomial-Time Algorithm for Solving the Minimal Observability Problem in Conjunctive Boolean Networks," Many complex systems in biology, physics, and engineering include a large number of state-variables, and measuring the full state of the system is often impossible. Typically, a set of sensors is used to measure part of the state-variables. 
A system is called observable if these measurements allow one to reconstruct the entire state of the system. When the system is not observable, an important and practical problem is how to add a \emph{minimal} number of sensors so that the system becomes observable. This minimal observability problem is practically useful and theoretically interesting, as it pinpoints the most informative nodes in the system. We consider the minimal observability problem for an important special class of Boolean networks, called conjunctive Boolean networks (CBNs). Using a graph-theoretic approach, we provide a necessary and sufficient condition for observability of a CBN with $n$ state-variables, and an efficient~$O(n^2)$-time algorithm for solving the minimal observability problem. We demonstrate the usefulness of these results by studying the properties of a class of random CBNs. ",1,0,1,0,0,0 16975,The Description and Scaling Behavior for the Inner Region of the Boundary Layer for 2-D Wall-bounded Flows," A second derivative-based moment method is proposed for describing the thickness and shape of the region where viscous forces are dominant in turbulent boundary layer flows. Rather than the fixed location sublayer model presently employed, the new method defines thickness and shape parameters that are experimentally accessible without differentiation. It is shown theoretically that one of the new length parameters used as a scaling parameter is also a similarity parameter for the velocity profile. In fact, we show that this new length scale parameter removes one of the theoretical inconsistencies present in the traditional Prandtl Plus scalings. Furthermore, the new length parameter and the Prandtl Plus scaling parameters perform identically when operating on experimental datasets. This means that many of the past successes ascribed to the Prandtl Plus scaling also apply to the new parameter set but without one of the theoretical inconsistencies. 
Examples are offered to show how the new description method is useful in exploring the actual physics of the boundary layer. ",0,1,0,0,0,0 16976,Completely Sidon sets in $C^*$-algebras (New title)," A sequence in a $C^*$-algebra $A$ is called completely Sidon if its span in $A$ is completely isomorphic to the operator space version of the space $\ell_1$ (i.e. $\ell_1$ equipped with its maximal operator space structure). The latter can also be described as the span of the free unitary generators in the (full) $C^*$-algebra of the free group $\F_\infty$ with countably infinitely many generators. Our main result is a generalization to this context of Drury's classical theorem stating that Sidon sets are stable under finite unions. In the particular case when $A=C^*(G)$ the (maximal) $C^*$-algebra of a discrete group $G$, we recover the non-commutative (operator space) version of Drury's theorem that we recently proved. We also give several non-commutative generalizations of our recent work on uniformly bounded orthonormal systems to the case of von Neumann algebras equipped with normal faithful tracial states. ",0,0,1,0,0,0 16977,Conflict-Free Coloring of Planar Graphs," A conflict-free k-coloring of a graph assigns one of k different colors to some of the vertices such that, for every vertex v, there is a color that is assigned to exactly one vertex among v and v's neighbors. Such colorings have applications in wireless networking, robotics, and geometry, and are well-studied in graph theory. Here we study the natural problem of the conflict-free chromatic number chi_CF(G) (the smallest k for which conflict-free k-colorings exist). We provide results both for closed neighborhoods N[v], for which a vertex v is a member of its neighborhood, and for open neighborhoods N(v), for which vertex v is not a member of its neighborhood. 
For closed neighborhoods, we prove the conflict-free variant of the famous Hadwiger Conjecture: If an arbitrary graph G does not contain K_{k+1} as a minor, then chi_CF(G) <= k. For planar graphs, we obtain a tight worst-case bound: three colors are sometimes necessary and always sufficient. We also give a complete characterization of the computational complexity of conflict-free coloring. Deciding whether chi_CF(G)<= 1 is NP-complete for planar graphs G, but polynomial for outerplanar graphs. Furthermore, deciding whether chi_CF(G)<= 2 is NP-complete for planar graphs G, but always true for outerplanar graphs. For the bicriteria problem of minimizing the number of colored vertices subject to a given bound k on the number of colors, we give a full algorithmic characterization in terms of complexity and approximation for outerplanar and planar graphs. For open neighborhoods, we show that every planar bipartite graph has a conflict-free coloring with at most four colors; on the other hand, we prove that for k in {1,2,3}, it is NP-complete to decide whether a planar bipartite graph has a conflict-free k-coloring. Moreover, we establish that any general planar graph has a conflict-free coloring with at most eight colors. ",1,0,1,0,0,0 16978,Explicit solutions to utility maximization problems in a regime-switching market model via Laplace transforms," We study the problem of utility maximization from terminal wealth in which an agent optimally builds her portfolio by investing in a bond and a risky asset. The asset price dynamics follow a diffusion process with regime-switching coefficients modeled by a continuous-time finite-state Markov chain. We consider an investor with a Constant Relative Risk Aversion (CRRA) utility function. We deduce the associated Hamilton-Jacobi-Bellman equation to construct the solution and the optimal trading strategy and verify optimality by showing that the value function is the unique constrained viscosity solution of the HJB equation. 
By means of a Laplace transform method, we show how to explicitly compute the value function and illustrate the method with the two- and three-state cases. This method is interesting in its own right and can be adapted to other applications involving hybrid systems and using other types of transforms with basic properties similar to the Laplace transform. ",0,0,0,0,0,1 16979,Spectroscopic study of the elusive globular cluster ESO452-SC11 and its surroundings," Globular clusters (GCs) are amongst the oldest objects in the Galaxy and play a pivotal role in deciphering its early history. We present the first spectroscopic study of the GC ESO452-SC11 using the AAOmega spectrograph at medium resolution. Given the sparsity of this object and high degree of foreground contamination due to its location toward the bulge, few details are known for this cluster: there is no consensus on its age, metallicity, or its association with the disk or bulge. We identify 5 members based on radial velocity, metallicity, and position within the GC. Using spectral synthesis, accurate abundances of Fe and several $\alpha$-, Fe-peak, neutron-capture elements (Si,Ca,Ti,Cr,Co,Ni,Sr,Eu) were measured. Two of the 5 cluster candidates are likely non-members, as they have deviant Fe abundances and [$\alpha$/Fe] ratios. The mean radial velocity is 19$\pm$2 km s$^{-1}$ with a low dispersion of 2.8$\pm$3.4 km s$^{-1}$, in line with its low mass. The mean Fe-abundance from spectral fitting is $-0.88\pm0.03$, with a spread driven by observational errors. The $\alpha$-elements of the GC candidates are marginally lower than expected for the bulge at similar metallicities. As spectra of hundreds of stars were collected in a 2 degree field around ESO452-SC11, detailed abundances in the surrounding field were measured. Most non-members have higher [$\alpha$/Fe] ratios, typical of the nearby bulge population. 
Stars with measured Fe-peak abundances show a large scatter around Solar values, though with large uncertainties. Our study provides the first systematic measurement of Sr in a Galactic bulge GC. The Eu and Sr abundances of the GC candidates are consistent with a disk or bulge association. Our calculations place ESO452 on an elliptical orbit in the central 3 kpc of the bulge. We find no evidence of extratidal stars in our data. (Abridged) ",0,1,0,0,0,0 16980,The Fundamental Infinity-Groupoid of a Parametrized Family," Given an infinity-category C, one can naturally construct an infinity-category Fam(C) of families of objects in C indexed by infinity-groupoids. An ordinary categorical version of this construction was used by Borceux and Janelidze in the study of generalized covering maps in categorical Galois theory. In this paper, we develop the homotopy theory of such ""parametrized families"" as generalization of the classical homotopy theory of spaces. In particular, we study homotopy-theoretical constructions that arise from the fundamental infinity-groupoids of families in an infinity-category. In the same spirit, we show that Fam(C) admits a Grothendieck topology which generalizes the canonical/epimorphism topology on the infinity-topos of infinity-groupoids in the sense of Carchedi. ",0,0,1,0,0,0 16981,Leveraging Pre-Trained 3D Object Detection Models For Fast Ground Truth Generation," Training 3D object detectors for autonomous driving has been limited to small datasets due to the effort required to generate annotations. Reducing both task complexity and the amount of task switching done by annotators is key to reducing the effort and time required to generate 3D bounding box annotations. This paper introduces a novel ground truth generation method that combines human supervision with pretrained neural networks to generate per-instance 3D point cloud segmentation, 3D bounding boxes, and class annotations. 
The annotators provide object anchor clicks which behave as a seed to generate instance segmentation results in 3D. The points belonging to each instance are then used to regress object centroids, bounding box dimensions, and object orientation. Our proposed annotation scheme requires 30x less human annotation time. We use the KITTI 3D object detection dataset to evaluate the efficiency and the quality of our annotation scheme. We also test the proposed scheme on previously unseen data from the Autonomoose self-driving vehicle to demonstrate generalization capabilities of the network. ",0,0,0,1,0,0 16982,Epidemic spreading in multiplex networks influenced by opinion exchanges on vaccination," We study the changes of opinions about vaccination together with the evolution of a disease. In our model we consider a multiplex network consisting of two layers. One of the layers corresponds to a social network where people share their opinions and influence others' opinions. The social model that rules the dynamics is the M-model, which takes into account two different processes that occur in a society: persuasion and compromise. These two processes are related through a parameter $r$: $r<1$ describes a moderate and committed society, for $r>1$ the society tends to have extremist opinions, while $r=1$ represents a neutral society. This social network may be of real or virtual contacts. On the other hand, the second layer corresponds to a network of physical contacts where the disease spreading is described by the SIR-Model. In this model the individuals may be in one of the following four states: Susceptible ($S$), Infected ($I$), Recovered ($R$) or Vaccinated ($V$). A Susceptible individual can: i) get vaccinated, if his opinion in the other layer is totally in favor of the vaccine, ii) get infected, with probability $\beta$ if he is in contact with an infected neighbor. Those $I$ individuals recover after a certain period $t_r=6$. 
Vaccinated individuals have an extremist positive opinion that does not change. We consider that the vaccine has a certain effectiveness $\omega$ and as a consequence vaccinated nodes can be infected with probability $\beta (1 - \omega)$ if they are in contact with an infected neighbor. In this case, if the infection process is successful, the new infected individual changes his opinion from extremist positive to totally against the vaccine. We find that, depending on the trend in the opinion of the society, which depends on $r$, different behaviors in the spread of the epidemic occur. An epidemic threshold was found. ",0,1,0,0,0,0 16983,CASP Solutions for Planning in Hybrid Domains," CASP is an extension of ASP that allows for numerical constraints to be added in the rules. PDDL+ is an extension of the PDDL standard language of automated planning for modeling mixed discrete-continuous dynamics. In this paper, we present CASP solutions for dealing with PDDL+ problems, i.e., encoding from PDDL+ to CASP, and extensions to the algorithm of the EZCSP CASP solver in order to solve CASP programs arising from PDDL+ domains. An experimental analysis, performed on well-known linear and non-linear variants of PDDL+ domains, involving various configurations of the EZCSP solver, other CASP solvers, and PDDL+ planners, shows the viability of our solution. ",1,0,0,0,0,0 16984,Primordial black holes from inflaton and spectator field perturbations in a matter-dominated era," We study production of primordial black holes (PBHs) during an early matter-dominated phase. As a source of perturbations, we consider either the inflaton field with a running spectral index or a spectator field that has a blue spectrum and thus provides a significant contribution to the PBH production at small scales. First, we identify the region of the parameter space where a significant fraction of the observed dark matter can be produced, taking into account all current PBH constraints. 
Then, we present constraints on the amplitude and spectral index of the spectator field as a function of the reheating temperature. We also derive constraints on the running of the inflaton spectral index, ${\rm d}n/{\rm d}{\rm ln}k \lesssim -0.002$, which are comparable to those from the Planck satellite for a scenario where the spectator field is absent. ",0,1,0,0,0,0 16985,State Space Decomposition and Subgoal Creation for Transfer in Deep Reinforcement Learning," Typical reinforcement learning (RL) agents learn to complete tasks specified by reward functions tailored to their domain. As such, the policies they learn do not generalize even to similar domains. To address this issue, we develop a framework through which a deep RL agent learns to generalize policies from smaller, simpler domains to more complex ones using a recurrent attention mechanism. The task is presented to the agent as an image and an instruction specifying the goal. A meta-controller guides the agent towards its goal by designing a sequence of smaller subtasks on the part of the state space within the attention, effectively decomposing it. As a baseline, we consider a setup without attention as well. Our experiments show that the meta-controller learns to create subgoals within the attention. ",1,0,0,1,0,0 16986,Robust Imitation of Diverse Behaviors," Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches.
The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated, yielding a correspondingly smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment. ",1,0,0,0,0,0 16987,Efficient Measurement of the Vibrational Rogue Waves by Compressive Sampling Based Wavelet Analysis," In this paper we discuss the possible usage of compressive sampling based wavelet analysis for the efficient measurement and early detection of one dimensional (1D) vibrational rogue waves. We study the construction of the triangular (V-shaped) wavelet spectra using compressive samples of rogue waves that can be modeled as Peregrine and Akhmediev-Peregrine solitons. We show that triangular wavelet spectra can be sensed by compressive measurements at the early stages of the development of vibrational rogue waves. Our results may lead to the development of efficient vibrational rogue wave measurement and early sensing systems with reduced memory requirements which use compressive sampling algorithms. In typical solid mechanics applications, compressed measurements can be acquired by randomly positioning single sensors and multisensors. ",0,0,0,1,0,0 16988,SceneCut: Joint Geometric and Object Segmentation for Indoor Scenes," This paper presents SceneCut, a novel approach to jointly discover previously unseen objects and non-object surfaces using a single RGB-D image.
SceneCut's joint reasoning over scene semantics and geometry allows a robot to detect and segment object instances in complex scenes where modern deep learning-based methods either fail to separate object instances, or fail to detect objects that were not seen during training. SceneCut automatically decomposes a scene into meaningful regions which either represent objects or scene surfaces. The decomposition is qualified by a unified energy function over objectness and geometric fitting. We show how this energy function can be optimized efficiently by utilizing hierarchical segmentation trees. Moreover, we leverage a pre-trained convolutional oriented boundary network to predict accurate boundaries from images, which are used to construct high-quality region hierarchies. We evaluate SceneCut on several different indoor environments, and the results show that SceneCut significantly outperforms all the existing methods. ",1,0,0,0,0,0 16989,An Invariant Model of the Significance of Different Body Parts in Recognizing Different Actions," In this paper, we show that different body parts do not play equally important roles in recognizing a human action in video data. We investigate to what extent a body part plays a role in recognition of different actions and hence propose a generic method of assigning weights to different body points. The approach is inspired by the strong evidence in the applied perception community that humans perform recognition in a foveated manner, that is, they recognize events or objects by only focusing on visually significant aspects. An important contribution of our method is that the computation of the weights assigned to body parts is invariant to viewing directions and camera parameters in the input data. We have performed extensive experiments to validate the proposed approach and demonstrate its significance.
In particular, results show that considerable improvement in performance is gained by taking into account the relative importance of different body parts as defined by our approach. ",1,0,0,0,0,0 16990,Forecasting Crime with Deep Learning," The objective of this work is to take advantage of deep neural networks in order to make next-day crime count predictions in a fine-grain city partition. We make predictions using Chicago and Portland crime data, which is augmented with additional datasets covering weather, census data, and public transportation. The crime counts are broken into 10 bins and our model predicts the most likely bin for each spatial region at a daily level. We train on this data using increasingly complex neural network structures, including variations that are suited to the spatial and temporal aspects of the crime prediction problem. With our best model we are able to predict the correct bin for overall crime count with 75.6% and 65.3% accuracy for Chicago and Portland, respectively. The results show the efficacy of neural networks for the prediction problem and the value of using external datasets in addition to standard crime data. ",0,0,0,1,0,0 16991,A family of compact semitoric systems with two focus-focus singularities," About 6 years ago, semitoric systems were classified by Pelayo & Vu Ngoc by means of five invariants. Standard examples are the coupled spin oscillator on $\mathbb{S}^2 \times \mathbb{R}^2$ and coupled angular momenta on $\mathbb{S}^2 \times \mathbb{S}^2$, both having exactly one focus-focus singularity. But so far there were no explicit examples of systems with more than one focus-focus singularity which are semitoric in the sense of that classification. This paper introduces a 6-parameter family of integrable systems on $\mathbb{S}^2 \times \mathbb{S}^2$ and proves that, for certain ranges of the parameters, it is a compact semitoric system with precisely two focus-focus singularities.
Since the twisting index (one of the semitoric invariants) captures the relationship between different focus-focus points, this paper provides systems for the future study of the twisting index. ",0,0,1,0,0,0 16992,Mixed Threefolds Isogenous to a Product," In this paper we study \emph{threefolds isogenous to a product of mixed type}, i.e. quotients of a product of three compact Riemann surfaces $C_i$ of genus at least two by the action of a finite group $G$, which is free, but not diagonal. In particular, we are interested in the systematic construction and classification of these varieties. Our main result is the full classification of threefolds isogenous to a product of mixed type with $\chi(\mathcal O_X)=-1$ assuming that any automorphism in $G$, which restricts to the trivial element in $Aut(C_i)$ for some $C_i$, is the identity on the product. Since the holomorphic Euler-Poincaré-characteristic of a smooth threefold of general type with ample canonical class is always negative, these examples lie on the boundary, in the sense of threefold geography. To achieve our result we use techniques from computational group theory. Indeed, we develop a MAGMA algorithm to classify these threefolds for any given value of $\chi(\mathcal O_X)$. ",0,0,1,0,0,0 16993,Discriminatory Transfer," We observe that standard transfer learning can improve prediction accuracies of target tasks at the cost of lowering their prediction fairness -- a phenomenon we name discriminatory transfer. We examine prediction fairness of a standard hypothesis transfer algorithm and a standard multi-task learning algorithm, and show that they both suffer discriminatory transfer on the real-world Communities and Crime data set. The presented case study introduces an interaction between fairness and transfer learning, as an extension of existing fairness studies that focus on single task learning.
",1,0,0,1,0,0 16994,Ultrafast relaxation of hot phonons in Graphene-hBN Heterostructures," Fast carrier cooling is important for high power graphene based devices. Strongly Coupled Optical Phonons (SCOPs) play a major role in the relaxation of photoexcited carriers in graphene. Heterostructures of graphene and hexagonal boron nitride (hBN) have shown exceptional mobility and high saturation current, which makes them ideal for applications, but the effect of the hBN substrate on carrier cooling mechanisms is not understood. We track the cooling of hot photo-excited carriers in graphene-hBN heterostructures using ultrafast pump-probe spectroscopy. We find that the carriers cool down four times faster in the case of graphene on hBN than on a silicon oxide substrate thus overcoming the hot phonon (HP) bottleneck that plagues cooling in graphene devices. ",0,1,0,0,0,0 16995,Non-linear Associative-Commutative Many-to-One Pattern Matching with Sequence Variables," Pattern matching is a powerful tool which is part of many functional programming languages as well as computer algebra systems such as Mathematica. Among the existing systems, Mathematica offers the most expressive pattern matching. Unfortunately, no open source alternative has comparable pattern matching capabilities. Notably, these features include support for associative and/or commutative function symbols and sequence variables. While those features have individually been subject of previous research, their comprehensive combination has not yet been investigated. Furthermore, in many applications, a fixed set of patterns is matched repeatedly against different subjects. This many-to-one matching can be sped up by exploiting similarities between patterns. Discrimination nets are the state-of-the-art solution for many-to-one matching. In this thesis, a generalized discrimination net which supports the full feature set is presented. All algorithms have been implemented as an open-source library for Python. 
In experiments on real world examples, significant speedups of many-to-one over one-to-one matching have been observed. ",1,0,0,0,0,0 16996,Pair Background Envelopes in the SiD Detector," The beams at the ILC produce electron-positron pairs due to beam-beam interactions. This note presents for the first time a study of these processes in a detailed simulation, which shows that these pair background particles appear at angles that extend to the inner layers of the detector. The full data set of pairs produced in one bunch crossing was used to calculate the helix tracks, which the particles form in the solenoid field of the SiD detector. The results suggest further study of reducing the beam pipe radius, and therefore either adding another SiD vertex detector layer or reducing the radius of the existing vertex detector layers, without increasing the detector occupancy significantly. This has to go along with additional studies of whether the improvement in physics reconstruction methods, like c-tagging, is worth the increased background level at smaller radii. ",0,1,0,0,0,0 16997,Expansion of percolation critical points for Hamming graphs," The Hamming graph $H(d,n)$ is the Cartesian product of $d$ complete graphs on $n$ vertices. Let $m=d(n-1)$ be the degree and $V = n^d$ be the number of vertices of $H(d,n)$. Let $p_c^{(d)}$ be the critical point for bond percolation on $H(d,n)$. We show that, for $d \in \mathbb N$ fixed and $n \to \infty$, \begin{equation*} p_c^{(d)}= \dfrac{1}{m} + \dfrac{2d^2-1}{2(d-1)^2}\dfrac{1}{m^2} + O(m^{-3}) + O(m^{-1}V^{-1/3}), \end{equation*} which extends the asymptotics found in \cite{BorChaHofSlaSpe05b} by one order. The term $O(m^{-1}V^{-1/3})$ is the width of the critical window. For $d=4,5,6$ we have $m^{-3} = O(m^{-1}V^{-1/3})$, and so the above formula represents the full asymptotic expansion of $p_c^{(d)}$.
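The two leading terms of this expansion are easy to evaluate numerically; a small sketch (dropping the error terms, so only indicative for large $n$; not part of the paper):

```python
def pc_expansion(d, n):
    """Two-term expansion of the bond-percolation critical point on the
    Hamming graph H(d, n): p_c ~ 1/m + (2d^2 - 1) / (2 (d-1)^2 m^2),
    where m = d(n-1) is the degree.  Requires d >= 2; the error terms
    O(m^-3) and O(m^-1 V^-1/3) from the abstract are dropped."""
    m = d * (n - 1)
    return 1.0 / m + (2 * d * d - 1) / (2.0 * (d - 1) ** 2 * m * m)
```

The second-order correction is positive, so this approximation always sits slightly above the mean-field value $1/m$, and the gap shrinks like $1/m^2$ as $n$ grows.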
In \cite{FedHofHolHul16a}, this formula is a crucial ingredient in the study of critical bond percolation on $H(d,n)$ for $d=2,3,4$. The proof uses a lace expansion for the upper bound and a novel comparison with a branching random walk for the lower bound. The proof of the lower bound also yields refined asymptotics for the susceptibility of a subcritical Erdős-Rényi random graph. ",0,0,1,0,0,0 16998,Efficiently Manifesting Asynchronous Programming Errors in Android Apps," Android, the #1 mobile app framework, enforces the single-GUI-thread model, in which a single UI thread manages GUI rendering and event dispatching. Due to this model, it is vital to avoid blocking the UI thread for responsiveness. One common practice is to offload long-running tasks into async threads. To achieve this, Android provides various async programming constructs, and leaves it to developers themselves to obey the rules implied by the model. However, as our study reveals, more than 25% of apps violate these rules and introduce hard-to-detect, fail-stop errors, which we term async programming errors (APEs). To this end, this paper introduces APEChecker, a technique to automatically and efficiently manifest APEs. The key idea is to characterize APEs as specific fault patterns, and synergistically combine static analysis and dynamic UI exploration to detect and verify such errors. Among 40 real-world Android apps, APEChecker unveils and processes 61 APEs, of which 51 are confirmed (83.6% hit rate). Specifically, APEChecker detects 3X more APEs than the state-of-the-art testing tools (Monkey, Sapienz and Stoat), and reduces testing time from half an hour to a few minutes. On a specific type of APEs, APEChecker confirms 5X more errors than the data race detection tool, EventRacer, with very few false alarms. ",1,0,0,0,0,0 16999,AI Challenges in Human-Robot Cognitive Teaming," Among the many anticipated roles for robots in the future is that of being a human teammate.
Aside from all the technological hurdles that have to be overcome with respect to hardware and control to make robots fit to work with humans, the added complication here is that humans have many conscious and subconscious expectations of their teammates - indeed, we argue that teaming is mostly a cognitive rather than physical coordination activity. This introduces new challenges for the AI and robotics community and requires fundamental changes to the traditional approach to the design of autonomy. With this in mind, we propose an update to the classical view of the intelligent agent architecture, highlighting the requirements for mental modeling of the human in the deliberative process of the autonomous agent. In this article, we briefly outline our recent efforts, and those of others in the community, towards developing cognitive teammates along these guidelines. ",1,0,0,0,0,0 17000,"Generalizing the MVW involution, and the contragredient"," For certain quasi-split reductive groups $G$ over a general field $F$, we construct an automorphism $\iota_G$ of $G$ over $F$, well-defined as an element of ${\rm Aut}(G)(F)/jG(F)$ where $j:G(F) \rightarrow {\rm Aut}(G)(F)$ is the inner-conjugation action of $G(F)$ on $G$. The automorphism $\iota_G$ generalizes (although only for quasi-split groups) an involution due to Moeglin-Vigneras-Waldspurger in [MVW] for classical groups which takes any irreducible admissible representation $\pi$ of $G(F)$ for $G$ a classical group and $F$ a local field, to its contragredient $\pi^\vee$. The paper also formulates a conjecture on the contragredient of an irreducible admissible representation of $G(F)$ for $G$ a reductive algebraic group over a local field $F$ in terms of the (enhanced) Langlands parameter of the representation.
",0,0,1,0,0,0 17001,Miraculous cancellations for quantum $SL_2$," In earlier work, Helen Wong and the author discovered certain ""miraculous cancellations"" for the quantum trace map connecting the Kauffman bracket skein algebra of a surface to its quantum Teichmueller space, occurring when the quantum parameter $q$ is a root of unity. The current paper is devoted to giving a more representation theoretic interpretation of this phenomenon, in terms of the quantum group $U_q(sl_2)$ and its dual Hopf algebra $SL_2^q$. ",0,0,1,0,0,0 17002,Energy and time measurements with high-granular silicon devices," This note is a short summary of the workshop on ""Energy and time measurements with high-granular silicon devices"" that took place on the 13/6/16 and the 14/6/16 at DESY/Hamburg in the frame of the first AIDA-2020 Annual Meeting. This note tries to put forward trends that could be spotted and to emphasise in particular open issues that were addressed by the speakers. ",0,1,0,0,0,0 17003,Action Tubelet Detector for Spatio-Temporal Action Localization," Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. The same way state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. 
The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds. ",1,0,0,0,0,0 17004,Significance of Side Information in the Graph Matching Problem," Percolation based graph matching algorithms rely on the availability of seed vertex pairs as side information to efficiently match users across networks. Although such algorithms work well in practice, there are other types of side information available which are potentially useful to an attacker. In this paper, we consider the problem of matching two correlated graphs when an attacker has access to side information, either in the form of community labels or an imperfect initial matching. In the former case, we propose a naive graph matching algorithm by introducing the community degree vectors which harness the information from community labels in an efficient manner. Furthermore, we analyze a variant of the basic percolation algorithm proposed in literature for graphs with community structure. In the latter case, we propose a novel percolation algorithm with two thresholds which uses an imperfect matching as input to match correlated graphs. We evaluate the proposed algorithms on synthetic as well as real world datasets using various experiments. The experimental results demonstrate the importance of communities as side information especially when the number of seeds is small and the networks are weakly correlated. ",1,1,0,0,0,0 17005,Extended Gray-Wyner System with Complementary Causal Side Information," We establish the rate region of an extended Gray-Wyner system for 2-DMS $(X,Y)$ with two additional decoders having complementary causal side information. 
This extension is interesting because in addition to the operationally significant extreme points of the Gray-Wyner rate region, which include Wyner's common information, Gács-Körner common information and information bottleneck, the rate region for the extended system also includes the Körner graph entropy, the privacy funnel and excess functional information, as well as three new quantities of potential interest, as extreme points. To simplify the investigation of the 5-dimensional rate region of the extended Gray-Wyner system, we establish an equivalence of this region to a 3-dimensional mutual information region that consists of the set of all triples of the form $(I(X;U),\,I(Y;U),\,I(X,Y;U))$ for some $p_{U|X,Y}$. We further show that projections of this mutual information region yield the rate regions for many settings involving a 2-DMS, including lossless source coding with causal side information, distributed channel synthesis, and lossless source coding with a helper. ",1,0,1,0,0,0 17006,Learning Powers of Poisson Binomial Distributions," We introduce the problem of simultaneously learning all powers of a Poisson Binomial Distribution (PBD). A PBD of order $n$ is the distribution of a sum of $n$ mutually independent Bernoulli random variables $X_i$, where $\mathbb{E}[X_i] = p_i$. The $k$'th power of this distribution, for $k$ in a range $[m]$, is the distribution of $P_k = \sum_{i=1}^n X_i^{(k)}$, where each Bernoulli random variable $X_i^{(k)}$ has $\mathbb{E}[X_i^{(k)}] = (p_i)^k$. The learning algorithm can query any power $P_k$ several times and succeeds in learning all powers in the range, if with probability at least $1- \delta$: given any $k \in [m]$, it returns a probability distribution $Q_k$ with total variation distance from $P_k$ at most $\epsilon$. We provide almost matching lower and upper bounds on query complexity for this problem.
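The sampling oracle for the $k$'th power $P_k$ is straightforward to simulate; a minimal sketch (not from the paper, parameters illustrative):

```python
import random

def sample_pbd_power(ps, k, rng=random):
    """One draw from P_k, the k-th power of a PBD with parameters ps:
    a sum of independent Bernoulli(p_i ** k) random variables."""
    return sum(rng.random() < p ** k for p in ps)
```

Averaging many draws of `sample_pbd_power(ps, k)` approximates the mean $\sum_i p_i^k$, which is how a learner's empirical estimates of each power would be formed.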
We first show a lower bound on the query complexity for PBD power instances with many distinct, well-separated parameters $p_i$, and we almost match this lower bound by examining the query complexity of simultaneously learning all the powers of a special class of PBDs resembling the PBDs of our lower bound. We study the fundamental setting of a Binomial distribution, and provide an optimal algorithm which uses $O(1/\epsilon^2)$ samples. Diakonikolas, Kane and Stewart [COLT'16] showed a lower bound of $\Omega(2^{1/\epsilon})$ samples to learn the $p_i$'s within error $\epsilon$. The question of whether sampling from powers of PBDs can reduce this sampling complexity has a negative answer, since we show that an exponential number of samples is inevitable. Having sampling access to the powers of a PBD, we then give a nearly optimal algorithm that learns its $p_i$'s. To prove our last two lower bounds, we extend the classical minimax risk definition from statistics to estimating functions of sequences of distributions. ",1,0,1,1,0,0 17007,Geometry of simplices in Minkowski spaces," There are many problems and configurations in Euclidean geometry that were never extended to the framework of (normed or) finite dimensional real Banach spaces, although their original versions are inspiring for this type of generalization, and the analogous definitions for normed spaces represent a promising topic. An example is the geometry of simplices in non-Euclidean normed spaces. We present new generalizations of well-known properties of Euclidean simplices. These results refer to analogues of circumcenters, Euler lines, and Feuerbach spheres of simplices in normed spaces. Using duality, we also get natural theorems on angular bisectors as well as in- and exspheres of (dual) simplices.
",0,0,1,0,0,0 17008,DLR : Toward a deep learned rhythmic representation for music content analysis," In the use of deep neural networks, it is crucial to provide appropriate input representations for the network to learn from. In this paper, we propose an approach to learn a representation that focuses on rhythmic content, named DLR (Deep Learning Rhythmic representation). The proposed approach aims to learn DLR from the raw audio signal and use it for other music informatics tasks. A 1-dimensional convolutional network is utilised in the learning of DLR. In the experiment, we present the results from the source task and the target task as well as visualisations of DLRs. The results reveal that DLR provides compact rhythmic information which can be used on multi-tagging tasks. ",1,0,0,0,0,0 17009,Phylogeny-based tumor subclone identification using a Bayesian feature allocation model," Tumor cells acquire different genetic alterations during the course of evolution in cancer patients. As a result of competition and selection, only a few subgroups of cells with distinct genotypes survive. These subgroups of cells are often referred to as subclones. In recent years, many statistical and computational methods have been developed to identify tumor subclones, leading to biologically significant discoveries and shedding light on tumor progression, metastasis, drug resistance and other processes. However, most existing methods are either not able to infer the phylogenetic structure among subclones, or not able to incorporate copy number variations (CNV). In this article, we propose SIFA (tumor Subclone Identification by Feature Allocation), a Bayesian model which takes into account both CNV and tumor phylogeny structure to infer tumor subclones. We compare the performance of SIFA with two other commonly used methods using simulation studies with varying sequencing depth, evolutionary tree size, and tree complexity.
SIFA consistently yields better results in terms of Rand Index and cellularity estimation accuracy. The usefulness of SIFA is also demonstrated through its application to whole genome sequencing (WGS) samples from four patients in a breast cancer study. ",0,0,0,1,1,0 17010,Confidence-based Graph Convolutional Networks for Semi-Supervised Learning," Predicting properties of nodes in a graph is an important problem with applications in a variety of domains. Graph-based Semi-Supervised Learning (SSL) methods aim to address this problem by labeling a small subset of the nodes as seeds and then utilizing the graph structure to predict label scores for the rest of the nodes in the graph. Recently, Graph Convolutional Networks (GCNs) have achieved impressive performance on the graph-based SSL task. In addition to label scores, it is also desirable to have confidence scores associated with them. Unfortunately, confidence estimation in the context of GCN has not been previously explored. We fill this important gap in this paper and propose ConfGCN, which estimates label scores along with their confidences jointly in a GCN-based setting. ConfGCN uses these estimated confidences to determine the influence of one node on another during neighborhood aggregation, thereby acquiring anisotropic capabilities. Through extensive analysis and experiments on standard benchmarks, we find that ConfGCN is able to outperform state-of-the-art baselines. We have made ConfGCN's source code available to encourage reproducible research. ",1,0,0,1,0,0 17011,Long-range fluctuations and multifractality in connectivity density time series of a wind speed monitoring network," This paper studies the daily connectivity time series of a wind speed-monitoring network using multifractal detrended fluctuation analysis. It investigates the long-range fluctuation and multifractality in the residuals of the connectivity time series.
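For reference, the monofractal ($q=2$) core of detrended fluctuation analysis can be sketched as follows (a simplified stand-in for the multifractal DFA used in this paper; scales and inputs are illustrative):

```python
def dfa1(x, scales):
    """Order-1 detrended fluctuation analysis: build the cumulative
    profile, detrend each segment with a least-squares line, and return
    the RMS fluctuation F(scale) as {scale: F}."""
    n = len(x)
    mean = sum(x) / n
    profile, s = [], 0.0
    for v in x:                          # cumulative sum of deviations
        s += v - mean
        profile.append(s)
    out = {}
    for w in scales:
        nseg = n // w
        tbar = (w - 1) / 2.0
        var_t = sum((t - tbar) ** 2 for t in range(w))
        acc = 0.0
        for i in range(nseg):
            seg = profile[i * w:(i + 1) * w]
            ybar = sum(seg) / w
            slope = sum((t - tbar) * (y - ybar) for t, y in enumerate(seg)) / var_t
            inter = ybar - slope * tbar
            acc += sum((y - (inter + slope * t)) ** 2 for t, y in enumerate(seg)) / w
        out[w] = (acc / nseg) ** 0.5
    return out
```

The multifractal variant generalizes the quadratic segment average to arbitrary moments $q$, from which the generalized Hurst exponents and the degree of multifractality are read off.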
Our findings reveal that the daily connectivity of the correlation-based network is persistent for any correlation threshold. Further, the multifractality degree is higher for larger absolute values of the correlation threshold. ",0,0,0,0,1,0 17012,The Dynamics of Norm Change in the Cultural Evolution of Language," What happens when a new social convention replaces an old one? While the possible forces favoring norm change - such as institutions or committed activists - have long been identified, little is known about how a population adopts a new convention, due to the difficulties of finding representative data. Here we address this issue by looking at changes that occurred to 2,541 orthographic and lexical norms in English and Spanish through the analysis of a large corpus of books published between the years 1800 and 2008. We detect three markedly distinct patterns in the data, depending on whether the behavioral change results from the action of a formal institution, an informal authority or a spontaneous process of unregulated evolution. We propose a simple evolutionary model able to capture all the observed behaviors and we show that it reproduces quantitatively the empirical data. This work identifies general mechanisms of norm change and we anticipate that it will be of interest to researchers investigating the cultural evolution of language and, more broadly, human collective behavior. ",0,0,0,0,1,0 17013,Bayesian Joint Spike-and-Slab Graphical Lasso," In this article, we propose a new class of priors for Bayesian inference with multiple Gaussian graphical models. We introduce fully Bayesian treatments of two popular procedures, the group graphical lasso and the fused graphical lasso, and extend them to a continuous spike-and-slab framework to allow self-adaptive shrinkage and model selection simultaneously. We develop an EM algorithm that performs fast and dynamic explorations of posterior modes.
Our approach selects sparse models efficiently with substantially smaller bias than would be induced by alternative regularization procedures. The performance of the proposed methods is demonstrated through simulation and two real data examples. ",0,0,0,1,0,0 17014,Variations on the theme of the uniform boundary condition," The uniform boundary condition in a normed chain complex asks for a uniform linear bound on fillings of null-homologous cycles. For the $\ell^1$-norm on the singular chain complex, Matsumoto and Morita established a characterisation of the uniform boundary condition in terms of bounded cohomology. In particular, spaces with amenable fundamental group satisfy the uniform boundary condition in every degree. We will give an alternative proof of statements of this type, using geometric Følner arguments on the chain level instead of passing to the dual cochain complex. These geometric methods have the advantage that they also lead to integral refinements. In particular, we obtain applications in the context of integral foliated simplicial volume. ",0,0,1,0,0,0 17015,revisit: a Workflow Tool for Data Science," In recent years there has been widespread concern in the scientific community over a reproducibility crisis. Among the major causes that have been identified, one is statistical: in much scientific research, the statistical analysis (including data preparation) suffers from a lack of transparency and methodological problems, major obstructions to reproducibility. The revisit package aims to remedy this problem by generating a ""software paper trail"" of the statistical operations applied to a dataset. This record can be ""replayed"" for verification purposes, as well as be modified to enable alternative analyses. The software also issues warnings of certain kinds of potential errors in statistical methodology, again related to the reproducibility issue.
",1,0,0,1,0,0 17016,Programmatically Interpretable Reinforcement Learning," We present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefits of being more easily interpreted than neural networks, and being amenable to verification by symbolic methods. We propose a new method, called Neurally Directed Program Search (NDPS), for solving the challenging nonsmooth optimization problem of finding a programmatic policy with maximal reward. NDPS works by first learning a neural policy network using DRL, and then performing a local search over programmatic policies that seeks to minimize a distance from this neural ""oracle"". We evaluate NDPS on the task of learning to drive a simulated car in the TORCS car-racing environment. We demonstrate that NDPS is able to discover human-readable policies that pass some significant performance bars. We also show that PIRL policies can have smoother trajectories, and can be more easily transferred to environments not encountered during training, than corresponding policies discovered by DRL. ",1,0,0,1,0,0 17017,Kinetic Simulation of Collisional Magnetized Plasmas with Semi-Implicit Time Integration," Plasmas with varying collisionalities occur in many applications, such as tokamak edge regions, where the flows are characterized by significant variations in density and temperature. While a kinetic model is necessary for weakly-collisional high-temperature plasmas, high collisionality in colder regions render the equations numerically stiff due to disparate time scales. 
In this paper, we propose an implicit-explicit algorithm for such cases, where the collisional term is integrated implicitly in time, while the advective term is integrated explicitly in time, thus allowing time step sizes that are comparable to the advective time scales. This partitioning results in a more efficient algorithm than those using explicit time integrators, where the time step sizes are constrained by the stiff collisional time scales. We implement semi-implicit additive Runge-Kutta methods in COGENT, a finite-volume gyrokinetic code for mapped, multiblock grids and test the accuracy, convergence, and computational cost of these semi-implicit methods for test cases with highly-collisional plasmas. ",1,1,0,0,0,0 17018,VC-dimension of short Presburger formulas," We study VC-dimension of short formulas in Presburger Arithmetic, defined to have a bounded number of variables, quantifiers and atoms. We give both lower and upper bounds, which are tight up to a polynomial factor in the bit length of the formula. ",1,0,1,0,0,0 17019,Real-time Traffic Accident Risk Prediction based on Frequent Pattern Tree," Traffic accident data are usually noisy, contain missing values, and heterogeneous. How to select the most important variables to improve real-time traffic accident risk prediction has become a concern of many recent studies. This paper proposes a novel variable selection method based on the Frequent Pattern tree (FP tree) algorithm. First, all the frequent patterns in the traffic accident dataset are discovered. Then for each frequent pattern, a new criterion, called the Relative Object Purity Ratio (ROPR) which we proposed, is calculated. This ROPR is added to the importance score of the variables that differentiate one frequent pattern from the others. To test the proposed method, a dataset was compiled from the traffic accidents records detected by only one detector on interstate highway I-64 in Virginia in 2005. 
This dataset was then linked to other variables such as real-time traffic information and weather conditions. Both the proposed method based on the FP tree algorithm and the widely utilized random forest method were then used to identify the important variables for the Virginia dataset. The results indicate that there are some differences between the variables deemed important by the FP tree and those selected by the random forest method. Following this, two baseline models (i.e. a k-nearest neighbor (k-NN) method and a Bayesian network) were developed to predict accident risk based on the variables identified by both the FP tree method and the random forest method. The results show that the models based on the variable selection using the FP tree performed better than those based on the random forest method for several versions of the k-NN and Bayesian network models. The best results were derived from a Bayesian network model using variables from the FP tree. That model could predict 61.11% of accidents accurately while having a false alarm rate of 38.16%. ",1,0,0,1,0,0 17020,Do Developers Update Their Library Dependencies? An Empirical Study on the Impact of Security Advisories on Library Migration," Third-party library reuse has become common practice in contemporary software development, as it includes several benefits for developers. Library dependencies are constantly evolving, with newly added features and patches that fix bugs in older versions. To take full advantage of third-party reuse, developers should always keep up to date with the latest versions of their library dependencies. In this paper, we investigate the extent to which developers update their library dependencies. Specifically, we conducted an empirical study on library migration that covers over 4,600 GitHub software projects and 2,700 library dependencies. 
Results show that although many of these systems rely heavily on dependencies, 81.5% of the studied systems still keep their outdated dependencies. In the case of updating a vulnerable dependency, the study reveals that affected developers are not likely to respond to a security advisory. Surveying these developers, we find that 69% of the interviewees claim that they were unaware of their vulnerable dependencies. Furthermore, developers are not likely to prioritize library updates, citing it as extra effort and added responsibility. This study concludes that even though third-party reuse is commonplace, the practice of updating a dependency is not as common for many developers. ",1,0,0,0,0,0 17021,Is Smaller Better: A Proposal To Consider Bacteria For Biologically Inspired Modeling," Bacteria are easily characterizable model organisms with an impressively complicated set of capabilities. Among their capabilities is quorum sensing, a detailed cell-cell signaling system that may have a common origin with eukaryotic cell-cell signaling. Not only are the two phenomena similar, but quorum sensing, as is the case with any bacterial phenomenon when compared to eukaryotes, is also easier to study in depth than eukaryotic cell-cell signaling. This ease of study is a contrast to the only partially understood cellular dynamics of neurons. Here we review the literature on the strikingly neuron-like qualities of bacterial colonies and biofilms, including ion-based and hormonal signaling, and action potential-like behavior. This allows them to feasibly act as an analog for neurons that could produce more detailed and more accurate biologically-based computational models. Using bacteria as the basis for biologically feasible computational models may allow models to better harness the tremendous ability of biological organisms to make decisions and process information. 
Additionally, principles gleaned from bacterial function have the potential to influence computational efforts divorced from biology, just as neuronal function has in the abstract influenced countless machine learning efforts. ",1,0,0,0,0,0 17022,A Bayesian Data Augmentation Approach for Learning Deep Models," Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data augmentation. The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation to data augmentation, where new annotated training points are treated as missing variables and generated based on the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm --- generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the better performance of our proposed method compared to the current dominant data augmentation approach mentioned above --- the results also show that our approach produces better classification results than similar GAN models. 
",1,0,0,0,0,0 17023,Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach," Knowledge bases are employed in a variety of applications from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt ""pedagogical approaches"" (from the literature on neural networks) so as to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches have to be adapted to take upon the large-scale relational aspects of knowledge bases and show experimentally their strengths and weaknesses. ",0,0,0,1,0,0 17024,Parameterized complexity of machine scheduling: 15 open problems," Machine scheduling problems are a long-time key domain of algorithms and complexity research. A novel approach to machine scheduling problems are fixed-parameter algorithms. To stimulate this thriving research direction, we propose 15 open questions in this area whose resolution we expect to lead to the discovery of new approaches and techniques both in scheduling and parameterized complexity theory. ",1,0,0,0,0,0 17025,"Potential Conditional Mutual Information: Estimators, Properties and Applications"," The conditional mutual information I(X;Y|Z) measures the average information that X and Y contain about each other given Z. This is an important primitive in many learning problems including conditional independence testing, graphical model inference, causal strength estimation and time-series problems. In several applications, it is desirable to have a functional purely of the conditional distribution p_{Y|X,Z} rather than of the joint distribution p_{X,Y,Z}. 
We define the potential conditional mutual information as the conditional mutual information calculated with a modified joint distribution p_{Y|X,Z} q_{X,Z}, where q_{X,Z} is a potential distribution, fixed a priori. We develop K nearest neighbor based estimators for this functional, employing importance sampling and a coupling trick, and prove the finite-k consistency of such an estimator. We demonstrate that the estimator has excellent practical performance and show an application in dynamical system inference. ",1,0,0,1,0,0 17026,"A new approach to divergences in quantum electrodynamics, concrete examples"," An interesting attempt for solving infrared divergence problems via the theory of generalized wave operators was made by P. Kulish and L. Faddeev. Our method of using the ideas from the theory of generalized wave operators is essentially different. We assume that the unperturbed operator $A_0$ is known and that the scattering operator $S$ and the unperturbed operator $A_0$ are permutable. (In the Kulish-Faddeev theory this basic property is not fulfilled.) The permutability of $S$ and $A_0$ gives us important information about the structure of the scattering operator. We show that the divergences appeared because the deviations of the initial and final waves from the free waves were not taken into account. The approach is demonstrated on important examples. ",0,0,1,0,0,0 17027,Indefinite boundary value problems on graphs," We consider the spectral structure of indefinite second order boundary-value problems on graphs. A variational formulation for such boundary-value problems on graphs is given and we obtain both full and half-range completeness results. This leads to a max-min principle and as a consequence we can formulate an analogue of Dirichlet-Neumann bracketing and this in turn gives rise to asymptotic approximations for the eigenvalues. 
",0,0,1,0,0,0 17028,Integral curvatures of Finsler manifolds and applications," In this paper, we study the integral curvatures of Finsler manifolds. Some Bishop-Gromov relative volume comparisons and several Myers type theorems are obtained. We also establish a Gromov type precompactness theorem and a Yamaguchi type finiteness theorem. Furthermore, the isoperimetric and Sobolev constants of a closed Finsler manifold are estimated by integral curvature bounds. ",0,0,1,0,0,0 17029,L-functions and sharp resonances of infinite index congruence subgroups of $SL_2(\mathbb{Z})$," For convex co-compact subgroups of SL2(Z) we consider the ""congruence subgroups"" for p prime. We prove a factorization formula for the Selberg zeta function in term of L-functions related to irreducible representations of the Galois group SL2(Fp) of the covering, together with a priori bounds and analytic continuation. We use this factorization property combined with an averaging technique over representations to prove a new existence result of non-trivial resonances in an effective low frequency strip. ",0,0,1,0,0,0 17030,An Enhanced Lumped Element Electrical Model of a Double Barrier Memristive Device," The massive parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems - nonlinear resistors with memory. They are fabricated in nanotechnology and hence parameter spread during fabrication may aggravate reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte-Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and neither convenient for investigations of whole circuits nor for real-time applications, e.g. 
emulation purposes. Instead, a concentrated model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky-contact. The introduced model leads to very fast simulations by using standard circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte-Carlo simulations based on a distributed model and experimental data have been utilized as references to verify the concentrated model. ",1,1,0,0,0,0 17031,Non-perturbative positive Lyapunov exponent of Schrödinger equations and its applications to skew-shift," We first study the discrete Schrödinger equations with analytic potentials given by a class of transformations. It is shown that if the coupling number is large, then its logarithm equals approximately to the Lyapunov exponents. When the transformation becomes the skew-shift, we prove that the Lyapunov exponent is weak Hölder continuous, and the spectrum satisfies Anderson Localization and contains large intervals. Moreover, all of these conclusions are non-perturbative. ",0,0,1,0,0,0 17032,Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees," Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees. MP and FW address optimization over the linear span and the convex hull of a set of atoms, respectively. 
In this paper, we consider the intermediate case of optimization over the convex cone, parametrized as the conic hull of a generic atom set, leading to the first principled definitions of non-negative MP algorithms for which we give explicit convergence rates and demonstrate excellent empirical performance. In particular, we derive sublinear ($\mathcal{O}(1/t)$) convergence on general smooth and convex objectives, and linear convergence ($\mathcal{O}(e^{-t})$) on strongly convex objectives, in both cases for general sets of atoms. Furthermore, we establish a clear correspondence of our algorithms to known algorithms from the MP and FW literature. Our novel algorithms and analyses target general atom sets and general objective functions, and hence are directly applicable to a large variety of learning settings. ",1,0,0,1,0,0 17033,Probing the accretion disc structure by the twin kHz QPOs and spins of neutron stars in LMXBs," We analyze the relation between the emission radii of twin kilohertz quasi-periodic oscillations (kHz QPOs) and the co-rotation radii of the 12 neutron star low mass X-ray binaries (NS-LMXBs) which are simultaneously detected with the twin kHz QPOs and NS spins. We find that the average co-rotation radius of these sources is r_co about 32 km, and all the emission positions of twin kHz QPOs lie inside the corotation radii, indicating that the twin kHz QPOs are formed in the spin-up process. It is noticed that the upper frequency of twin kHz QPOs is higher than NS spin frequency by > 10%, which may account for a critical velocity difference between the Keplerian motion of accretion matter and NS spin that is corresponding to the production of twin kHz QPOs. In addition, we also find that about 83% of twin kHz QPOs cluster around the radius range of 15-20 km, which may be affected by the hard surface or the local strong magnetic field of NS. 
As a special case, SAX J1808.4-3658 shows larger emission radii of twin kHz QPOs, r about 21-24 km, which may be due to its low accretion rate or small measured NS mass (< 1.4 solar masses). ",0,1,0,0,0,0 17034,Can scientists and their institutions become their own open access publishers?," This article offers a personal perspective on the current state of academic publishing, and posits that the scientific community is beset with journals that contribute little valuable knowledge, overload the community's capacity for high-quality peer review, and waste resources. Open access publishing can offer solutions that benefit researchers and other information users, as well as institutions and funders, but commercial journal publishers have influenced open access policies and practices in ways that favor their economic interests over those of other stakeholders in knowledge creation and sharing. One way to free research from constraints on access is the diamond route of open access publishing, in which institutions and funders that produce new knowledge reclaim responsibility for publication via institutional journals or other open platforms. I argue that research journals (especially those published for profit) may no longer be fit for purpose, and hope that readers will consider whether the time has come to put responsibility for publishing back into the hands of researchers and their institutions. The potential advantages and challenges involved in a shift away from for-profit journals in favor of institutional open access publishing are explored. 
",1,0,0,0,0,0 17035,Character sums for elliptic curve densities," If $E$ is an elliptic curve over $\mathbb{Q}$, then it follows from work of Serre and Hooley that, under the assumption of the Generalized Riemann Hypothesis, the density of primes $p$ such that the group of $\mathbb{F}_p$-rational points of the reduced curve $\tilde{E}(\mathbb{F}_p)$ is cyclic can be written as an infinite product $\prod \delta_\ell$ of local factors $\delta_\ell$ reflecting the degree of the $\ell$-torsion fields, multiplied by a factor that corrects for the entanglements between the various torsion fields. We show that this correction factor can be interpreted as a character sum, and the resulting description allows us to easily determine non-vanishing criteria for it. We apply this method in a variety of other settings. Among these, we consider the aforementioned problem with the additional condition that the primes $p$ lie in a given arithmetic progression. We also study the conjectural constants appearing in Koblitz's conjecture, a conjecture which relates to the density of primes $p$ for which the cardinality of the group of $\mathbb{F}_p$-points of $E$ is prime. ",0,0,1,0,0,0 17036,A monolithic fluid-structure interaction formulation for solid and liquid membranes including free-surface contact," A unified fluid-structure interaction (FSI) formulation is presented for solid, liquid and mixed membranes. Nonlinear finite elements (FE) and the generalized-alpha scheme are used for the spatial and temporal discretization. The membrane discretization is based on curvilinear surface elements that can describe large deformations and rotations, and also provide a straightforward description for contact. The fluid is described by the incompressible Navier-Stokes equations, and its discretization is based on stabilized Petrov-Galerkin FE. 
The coupling between fluid and structure uses a conforming sharp interface discretization, and the resulting non-linear FE equations are solved monolithically within the Newton-Raphson scheme. An arbitrary Lagrangian-Eulerian formulation is used for the fluid in order to account for the mesh motion around the structure. The formulation is very general and admits diverse applications that include contact at free surfaces. This is demonstrated by two analytical and three numerical examples exhibiting strong coupling between fluid and structure. The examples include balloon inflation, droplet rolling and flapping flags. They span a Reynolds-number range from 0.001 to 2000. One of the examples considers the extension to rotation-free shells using isogeometric FE. ",1,1,0,0,0,0 17037,Different Non-extensive Models for heavy-ion collisions," The transverse momentum ($p_T$) spectra from heavy-ion collisions at intermediate momenta are described by non-extensive statistical models. Assuming a fixed relative variance of the temperature fluctuating event by event or alternatively a fixed mean multiplicity in a negative binomial distribution (NBD), two different linear relations emerge between the temperature, $T$, and the Tsallis parameter $q-1$. Our results qualitatively agree with that of G.~Wilk. Furthermore we revisit the ""Soft+Hard"" model, proposed recently by G.~G.~Barnaföldi \textit{et.al.}, by a $T$-independent average $p_T^2$ assumption. Finally we compare results with those predicted by another deformed distribution, using Kaniadakis' $\kappa$ parametrization. ",0,1,0,0,0,0 17038,Efficient Toxicity Prediction via Simple Features Using Shallow Neural Networks and Decision Trees," Toxicity prediction of chemical compounds is a grand challenge. Lately, it achieved significant progress in accuracy but using a huge set of features, implementing a complex blackbox technique such as a deep neural network, and exploiting enormous computational resources. 
In this paper, we strongly argue for models and methods that are simple in machine learning characteristics, efficient in computing resource usage, and powerful enough to achieve very high accuracy levels. To demonstrate this, we develop a single-task-based chemical toxicity prediction framework using only 2D features that are less compute-intensive. We effectively use a decision tree to obtain an optimum number of features from a collection of thousands of them. We use a shallow neural network and jointly optimize it with the decision tree, taking both network parameters and input features into account. Our model needs only a minute on a single CPU for its training while existing methods using deep neural networks need about 10 min on an NVidia Tesla K40 GPU. Nevertheless, we obtain similar or better performance on several toxicity benchmark tasks. We also develop a cumulative feature ranking method which enables us to identify features that can help chemists perform prescreening of toxic compounds effectively. ",1,0,0,1,0,0 17039,Minmax Hierarchies and Minimal Surfaces in Manifolds," We introduce a general scheme that permits to generate successive min-max problems for producing critical points of higher and higher indices to Palais-Smale Functionals in Banach manifolds equipped with Finsler structures. We call the resulting tree of minmax problems a minmax hierarchy. Using the viscosity approach to the minmax theory of minimal surfaces introduced by the author in a series of recent works, we explain how this scheme can be deformed for producing smooth minimal surfaces of strictly increasing area in arbitrary codimension. We implement this scheme to the case of the $3-$dimensional sphere. In particular we are giving a min-max characterization of the Clifford Torus and conjecture what are the next minimal surfaces to come in the $S^3$ hierarchy. Among other results we prove here the lower semi continuity of the Morse Index in the viscosity method below an area level. 
",0,0,1,0,0,0 17040,Nonseparable Multinomial Choice Models in Cross-Section and Panel Data," Multinomial choice models are fundamental for empirical modeling of economic choices among discrete alternatives. We analyze identification of binary and multinomial choice models when the choice utilities are nonseparable in observed attributes and multidimensional unobserved heterogeneity with cross-section and panel data. We show that derivatives of choice probabilities with respect to continuous attributes are weighted averages of utility derivatives in cross-section models with exogenous heterogeneity. In the special case of random coefficient models with an independent additive effect, we further characterize that the probability derivative at zero is proportional to the population mean of the coefficients. We extend the identification results to models with endogenous heterogeneity using either a control function or panel data. In time stationary panel models with two periods, we find that differences over time of derivatives of choice probabilities identify utility derivatives ""on the diagonal,"" i.e. when the observed attributes take the same values in the two periods. We also show that time stationarity does not identify structural derivatives ""off the diagonal"" both in continuous and multinomial choice panel models. ",0,0,0,1,0,0 17041,Corona limits of tilings : Periodic case," We study the limit shape of successive coronas of a tiling, which models the growth of crystals. We define basic terminologies and discuss the existence and uniqueness of corona limits, and then prove that corona limits are completely characterized by directional speeds. As an application, we give another proof that the corona limit of a periodic tiling is a centrally symmetric convex polyhedron (see [Zhuravlev 2001], [Maleev-Shutov 2011]). 
",0,0,1,0,0,0 17042,The Spatial Range of Conformity," Properties of galaxies like their absolute magnitude and their stellar mass content are correlated. These correlations are tighter for close pairs of galaxies, which is called galactic conformity. In hierarchical structure formation scenarios, galaxies form within dark matter halos. To explain the amplitude and the spatial range of galactic conformity two--halo terms or assembly bias become important. With the scale dependent correlation coefficients the amplitude and the spatial range of conformity are determined from galaxy and halo samples. The scale dependent correlation coefficients are introduced as a new descriptive statistic to quantify the correlations between properties of galaxies or halos, depending on the distances to other galaxies or halos. These scale dependent correlation coefficients can be applied to the galaxy distribution directly. Neither a splitting of the sample into subsamples, nor an a priori clustering is needed. This new descriptive statistic is applied to galaxy catalogues derived from the Sloan Digital Sky Survey III and to halo catalogues from the MultiDark simulations. In the galaxy sample the correlations between absolute Magnitude, velocity dispersion, ellipticity, and stellar mass content are investigated. The correlations of mass, spin, and ellipticity are explored in the halo samples. Both for galaxies and halos a scale dependent conformity is confirmed. Moreover the scale dependent correlation coefficients reveal a signal of conformity out to 40Mpc and beyond. The halo and galaxy samples show a differing amplitude and range of conformity. ",0,1,0,0,0,0 17043,Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis," Stochastic Gradient Langevin Dynamics (SGLD) is a popular variant of Stochastic Gradient Descent, where properly scaled isotropic Gaussian noise is added to an unbiased estimate of the gradient at each iteration. 
This modest change allows SGLD to escape local minima and suffices to guarantee asymptotic convergence to global minimizers for sufficiently regular non-convex objectives (Gelfand and Mitter, 1991). The present work provides a nonasymptotic analysis in the context of non-convex learning problems, giving finite-time guarantees for SGLD to find approximate minimizers of both empirical and population risks. As in the asymptotic setting, our analysis relates the discrete-time SGLD Markov chain to a continuous-time diffusion process. A new tool that drives the results is the use of weighted transportation cost inequalities to quantify the rate of convergence of SGLD to a stationary distribution in the Euclidean $2$-Wasserstein distance. ",1,0,1,1,0,0 17044,Multilevel preconditioner of Polynomial Chaos Method for quantifying uncertainties in a blood pump," More than 23 million people are suffered by Heart failure worldwide. Despite the modern transplant operation is well established, the lack of heart donations becomes a big restriction on transplantation frequency. With respect to this matter, ventricular assist devices (VADs) can play an important role in supporting patients during waiting period and after the surgery. Moreover, it has been shown that VADs by means of blood pump have advantages for working under different conditions. While a lot of work has been done on modeling the functionality of the blood pump, but quantifying uncertainties in a numerical model is a challenging task. We consider the Polynomial Chaos (PC) method, which is introduced by Wiener for modeling stochastic process with Gaussian distribution. The Galerkin projection, the intrusive version of the generalized Polynomial Chaos (gPC), has been densely studied and applied for various problems. 
The intrusive Galerkin approach can represent the stochastic process directly with Polynomial Chaos series expansions, and therefore reduces the total computing effort compared with classical non-intrusive methods. In our previous work we compared different preconditioning techniques for a steady-state simulation of a blood pump configuration; the comparison showed that an inexact multilevel preconditioner has promising performance. In this work, we show an instationary blood flow through an FDA blood pump configuration with the Galerkin projection method, which is implemented in our open source Finite Element library Hiflow3. Three uncertainty sources are considered: the inflow boundary condition, the rotor angular speed, and the dynamic viscosity. The numerical results are demonstrated with more than 30 million degrees of freedom using a supercomputer. ",0,1,0,0,0,0 17045,Superradiant Mott Transition," The combination of strong correlation and emergent lattice can be achieved when quantum gases are confined in a superradiant Fabry-Perot cavity. In addition to the discoveries of exotic phases, such as density wave ordered Mott insulator and superfluid, a surprising kink structure is found in the slope of the cavity strength as a function of the pumping strength. In this Letter, we show that the appearance of such a kink is a manifestation of a liquid-gas like transition between two superfluids with different densities. The slopes in the immediate neighborhood of the kink become divergent at the liquid-gas critical points and display a critical scaling law with a critical exponent 1 in the quantum critical region. Our predictions could be tested in current experimental set-up. ",0,1,0,0,0,0 17046,Communication via FRET in Nanonetworks of Mobile Proteins," A practical, biologically motivated case of protein complexes (immunoglobulin G and FcRII receptors) moving on the surface of mastcells, that are common parts of an immunological system, is investigated. 
Proteins are considered as nanomachines that create a nanonetwork. Accurate molecular models of the proteins and of the fluorophores which act as their nanoantennas are used to simulate the communication between the nanomachines when they are close to each other. The theory of diffusion-based Brownian motion is applied to model the movements of the proteins. It is assumed that the fluorophore molecules send and receive signals using Förster Resonance Energy Transfer. The probability of efficient signal transfer and the respective bit error rate are calculated and discussed. ",0,0,0,0,1,0 17047,"Multivariate generalized Pareto distributions: parametrizations, representations, and properties"," Multivariate generalized Pareto distributions arise as the limit distributions of exceedances over multivariate thresholds of random vectors in the domain of attraction of a max-stable distribution. These distributions can be parametrized and represented in a number of different ways. Moreover, generalized Pareto distributions enjoy a number of interesting stability properties. An overview of the main features of such distributions is given, expressed compactly in several parametrizations, giving the potential user of these distributions a convenient catalogue of ways to handle and work with generalized Pareto distributions. ",0,0,1,1,0,0 17048,Invertibility of spectral x-ray data with pileup--two dimension-two spectrum case," In the Alvarez-Macovski method, the line integrals of the x-ray basis set coefficients are computed from measurements with multiple spectra. An important question is whether the transformation from measurements to line integrals is invertible. This paper presents a proof that for a system with two spectra and a photon counting detector, pileup does not affect the invertibility of the system. If the system is invertible with no pileup, it will remain invertible with pileup although the reduced Jacobian may lead to increased noise. 
",0,1,0,0,0,0 17049,Steinberg representations and harmonic cochains for split adjoint quasi-simple groups," Let $G$ be an adjoint quasi-simple group defined and split over a non-archimedean local field $K$. We prove that the dual of the Steinberg representation of $G$ is isomorphic to a certain space of harmonic cochains on the Bruhat-Tits building of $G$. The Steinberg representation is considered with coefficients in any commutative ring. ",0,0,1,0,0,0 17050,Lorentzian surfaces and the curvature of the Schmidt metric," The b-boundary is a mathematical tool used to attach a topological boundary to incomplete Lorentzian manifolds using a Riemannian metric, called the Schmidt metric, on the frame bundle. In this paper, we give the general form of the Schmidt metric in the case of Lorentzian surfaces. Furthermore, we write the Ricci scalar of the Schmidt metric in terms of the Ricci scalar of the Lorentzian manifold and give some examples. Finally, we discuss some applications to general relativity. ",0,0,1,0,0,0 17051,Mixed Precision Solver Scalable to 16000 MPI Processes for Lattice Quantum Chromodynamics Simulations on the Oakforest-PACS System," Lattice Quantum Chromodynamics (Lattice QCD) is a quantum field theory on a finite discretized space-time box, used to numerically compute the dynamics of quarks and gluons and to explore the nature of the subatomic world. Solving the equation of motion of quarks (the quark solver) is the most compute-intensive part of lattice QCD simulations and is one of the legacy HPC applications. We have developed a mixed-precision quark solver for a large Intel Xeon Phi (KNL) system named ""Oakforest-PACS"", employing the $O(a)$-improved Wilson quarks as the discretized equation of motion. The nested-BiCGSTab algorithm for the solver was implemented and optimized using mixed-precision arithmetic, communication-computation overlapping with MPI-offloading, SIMD vectorization, and thread stealing techniques. 
The solver achieved 2.6 PFLOPS in the single-precision part on a $400^3\times 800$ lattice using 16000 MPI processes on 8000 nodes of the system. ",0,1,0,0,0,0 17052,A spectral/hp element MHD solver," A new MHD solver, based on the Nektar++ spectral/hp element framework, is presented in this paper. The velocity and electric potential quasi-static MHD model is used. The Hartmann flow in a plane channel and its stability, the Hartmann flow in a rectangular duct, and the stability of Hunt's flow are explored as examples. Exponential convergence is achieved, and the resulting numerical values were found to have an accuracy up to $10^{-12}$ for the steady-state flows compared to an exact solution, and $10^{-5}$ for the stability eigenvalues compared to independent numerical results. ",0,1,0,0,0,0 17053,"Journalists' information needs, seeking behavior, and its determinants on social media"," We describe the results of a qualitative study on journalists' information seeking behavior on social media. Based on interviews with eleven journalists, along with a study of a set of university-level journalism modules, we determined the categories of information need types that lead journalists to social media. We also determined the ways that social media is exploited as a tool to satisfy information needs, and identified the influential factors that impact journalists' information seeking behavior. We find that not only is social media used as an information source, but it can also be a supplier of stories found serendipitously. We find seven information need types that expand the types found in previous work. We also find five categories of influential factors that affect the way journalists seek information. ",1,0,0,0,0,0 17054,Fast Low-Rank Bayesian Matrix Completion with Hierarchical Gaussian Prior Models," The problem of low-rank matrix completion is considered in this paper. 
To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods. ",1,0,0,1,0,0 17055,Emergence of superconductivity in the canonical heavy-electron metal YbRh2Si2," We report magnetic and calorimetric measurements down to T = 1 mK on the canonical heavy-electron metal YbRh2Si2. The data reveal the development of nuclear antiferromagnetic order slightly above 2 mK. The latter weakens the primary electronic antiferromagnetism, thereby paving the way for heavy-electron superconductivity below Tc = 2 mK. Our results demonstrate that superconductivity driven by quantum criticality is a general phenomenon. ",0,1,0,0,0,0 17056,Obtaining a Proportional Allocation by Deleting Items," We consider the following control problem on fair allocation of indivisible goods. Given a set $I$ of items and a set of agents, each having strict linear preferences over the items, we ask for a minimum subset of the items whose deletion guarantees the existence of a proportional allocation in the remaining instance; we call this problem Proportionality by Item Deletion (PID). Our main result is a polynomial-time algorithm that solves PID for three agents. 
By contrast, we prove that PID is computationally intractable when the number of agents is unbounded, even if the number $k$ of item deletions allowed is small, since the problem turns out to be W[3]-hard with respect to the parameter $k$. Additionally, we provide some tight lower and upper bounds on the complexity of PID when regarded as a function of $|I|$ and $k$. ",1,0,0,0,0,0 17057,DeepFace: Face Generation using Deep Learning," We use CNNs to build a system that both classifies images of faces based on a variety of different facial attributes and generates new faces given a set of desired facial characteristics. After introducing the problem and providing context in the first section, we discuss recent work related to image generation in Section 2. In Section 3, we describe the methods used to fine-tune our CNN and generate new images using a novel approach inspired by a Gaussian mixture model. In Section 4, we discuss our working dataset and describe our preprocessing steps and handling of facial attributes. Finally, in Sections 5, 6 and 7, we explain our experiments and results and conclude in the following section. Our classification system has 82\% test accuracy. Furthermore, our generation pipeline successfully creates well-formed faces. ",1,0,0,0,0,0 17058,High quality mesh generation using cross and asterisk fields: Application on coastal domains," This paper presents a method to generate high quality triangular or quadrilateral meshes that uses direction fields and a frontal point insertion strategy. Two types of direction fields are considered: asterisk fields and cross fields. With asterisk fields we generate high quality triangulations, while with cross fields we generate right-angled triangulations that are optimal for transformation to quadrilateral meshes. The input of our algorithm is an initial triangular mesh and a direction field calculated on it. 
The goal is to compute the vertices of the final mesh by an advancing front strategy along the direction field. We present an algorithm that enables us to generate the points efficiently using only information from the base mesh. A multi-threaded implementation of our algorithm is presented, allowing us to achieve significant speedup of the point generation. Regarding the quadrangulation process, we develop a quality criterion for right-angled triangles with respect to the local cross field and an optimization process based on it. Thus we are able to further improve the quality of the output quadrilaterals. The algorithm is demonstrated on the sphere, and examples of high quality triangular and quadrilateral meshes of coastal domains are presented. ",1,0,0,0,0,0 17059,ELFI: Engine for Likelihood-Free Inference," Engine for Likelihood-Free Inference (ELFI) is a Python software library for performing likelihood-free inference (LFI). ELFI provides a convenient syntax for arranging components in LFI, such as priors, simulators, summaries or distances, to a network called ELFI graph. The components can be implemented in a wide variety of languages. The stand-alone ELFI graph can be used with any of the available inference methods without modifications. A central method implemented in ELFI is Bayesian Optimization for Likelihood-Free Inference (BOLFI), which has recently been shown to accelerate likelihood-free inference up to several orders of magnitude by surrogate-modelling the distance. ELFI also has inbuilt support for storing output data for reuse and analysis, and supports parallelization of computation from multiple cores up to a cluster environment. ELFI is designed to be extensible and provides interfaces for widening its functionality. This makes adding new inference methods to ELFI straightforward and automatically compatible with the inbuilt features. 
",1,0,0,1,0,0 17060,Boosting Adversarial Attacks with Momentum," Deep neural networks are vulnerable to adversarial examples, which poses security concerns about these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions. ",1,0,0,1,0,0 17061,Information spreading during emergencies and anomalous events," The most critical time for information to spread is in the aftermath of a serious emergency, crisis, or disaster. Individuals affected by such situations can now turn to an array of communication channels, from mobile phone calls and text messages to social media posts, when alerting social ties. These channels drastically improve the speed of information in a time-sensitive event, and provide extant records of human dynamics during and after the event. 
Retrospective analysis of such anomalous events provides researchers with a class of ""found experiments"" that may be used to better understand social spreading. In this chapter, we study information spreading due to a number of emergency events, including the Boston Marathon Bombing and a plane crash at a western European airport. We also contrast the information that may be gleaned from social media data with that from mobile phone data, and we estimate the rate of anomalous events in a mobile phone dataset using a proposed anomaly detection method. 
Using this interface, we characterize how widely used implementations of several deep reinforcement learning algorithms fare on a number of GVGAI games. We further analyze the results to provide a first indication of how difficult these games are relative to each other, and relative to those in the Arcade Learning Environment under similar conditions. ",0,0,0,1,0,0 17064,Purely infinite labeled graph $C^*$-algebras," In this paper, we consider pure infiniteness of generalized Cuntz-Krieger algebras associated to labeled spaces $(E,\mathcal{L},\mathcal{E})$. It is shown that a $C^*$-algebra $C^*(E,\mathcal{L},\mathcal{E})$ is purely infinite in the sense that every nonzero hereditary subalgebra contains an infinite projection (we call this property (IH)) if $(E, \mathcal{L},\mathcal{E})$ is disagreeable and every vertex connects to a loop. We also prove that under the condition analogous to (K) for usual graphs, $C^*(E,\mathcal{L},\mathcal{E})=C^*(p_A, s_a)$ is purely infinite in the sense of Kirchberg and R{\o}rdam if and only if every generating projection $p_A$, $A\in \mathcal{E}$, is properly infinite, and also if and only if every quotient of $C^*(E,\mathcal{L},\mathcal{E})$ has the property (IH). ",0,0,1,0,0,0 17065,From safe screening rules to working sets for faster Lasso-type solvers," Convex sparsity-promoting regularizations are ubiquitous in modern statistical learning. By construction, they yield solutions with few non-zero coefficients, which correspond to saturated constraints in the dual optimization formulation. Working set (WS) strategies are generic optimization techniques that consist in solving simpler problems that only consider a subset of constraints, whose indices form the WS. Working set methods therefore involve two nested iterations: the outer loop corresponds to the definition of the WS and the inner loop calls a solver for the subproblems. 
For the Lasso estimator a WS is a set of features, while for a Group Lasso it refers to a set of groups. In practice, WS are generally small in this context so the associated feature Gram matrix can fit in memory. Here we show that the Gauss-Southwell rule (a greedy strategy for block coordinate descent techniques) leads to fast solvers in this case. Combined with a working set strategy based on an aggressive use of so-called Gap Safe screening rules, we propose a solver achieving state-of-the-art performance on sparse learning problems. Results are presented on Lasso and multi-task Lasso estimators. ",1,0,1,1,0,0 17066,Exoplanet Radius Gap Dependence on Host Star Type," Exoplanets smaller than Neptune are numerous, but the nature of the planet populations in the 1-4 Earth radii range remains a mystery. The complete Kepler sample of Q1-Q17 exoplanet candidates shows a radius gap at ~ 2 Earth radii, as reported by us in January 2017 in LPSC conference abstract #1576 (Zeng et al. 2017). A careful analysis of Kepler host stars spectroscopy by the CKS survey allowed Fulton et al. (2017) in March 2017 to unambiguously show this radius gap. The cause of this gap is still under discussion (Ginzburg et al. 2017; Lehmer & Catling 2017; Owen & Wu 2017). Here we add to our original analysis the dependence of the radius gap on host star type. ",0,1,0,0,0,0 17067,Mapping the Invocation Structure of Online Political Interaction," The surge in political information, discourse, and interaction has been one of the most important developments in social media over the past several years. There is rich structure in the interaction among different viewpoints on the ideological spectrum. However, we still have only a limited analytical vocabulary for expressing the ways in which these viewpoints interact. 
In this paper, we develop network-based methods that operate on the ways in which users share content; we construct \emph{invocation graphs} on Web domains showing the extent to which pages from one domain are invoked by users to reply to posts containing pages from other domains. When we locate the domains on a political spectrum induced from the data, we obtain an embedded graph showing how these interaction links span different distances on the spectrum. The structure of this embedded network, and its evolution over time, helps us derive macro-level insights about how political interaction unfolded through 2016, leading up to the US Presidential election. In particular, we find that the domains invoked in replies spanned increasing distances on the spectrum over the months approaching the election, and that there was clear asymmetry between the left-to-right and right-to-left patterns of linkage. ",1,0,0,0,0,0 17068,Collective decision for open set recognition," In open set recognition (OSR), almost all existing methods are designed specially for recognizing individual instances, even when these instances come collectively in a batch. In making decisions, recognizers either reject instances or categorize them to some known class using an empirically set threshold. The threshold thus plays a key role; however, its selection usually depends on knowledge of the known classes, inevitably incurring risks due to the lack of available information from unknown classes. On the other hand, a more realistic OSR system should NOT just rest on a reject decision but should go further, especially in discovering the hidden unknown classes among the rejected instances, an aspect to which existing OSR methods pay no special attention. In this paper, we introduce a novel collective/batch decision strategy that aims to extend existing OSR for new class discovery while considering correlations among the testing instances. 
Specifically, a collective decision-based OSR framework (CD-OSR) is proposed by slightly modifying the Hierarchical Dirichlet process (HDP). Thanks to the HDP, our CD-OSR does not need a pre-specified threshold and can automatically reserve space for unknown classes during testing, naturally providing a new class discovery function. Finally, extensive experiments on benchmark datasets demonstrate the validity of CD-OSR. ",0,0,0,1,0,0 17069,HARPS-N high spectral resolution observations of Cepheids I. The Baade-Wesselink projection factor of δ Cep revisited," The projection factor p is the key quantity used in the Baade-Wesselink (BW) method for distance determination; it converts radial velocities into pulsation velocities. Several methods are used to determine p, such as geometrical and hydrodynamical models or the inverse BW approach when the distance is known. We analyze new HARPS-N spectra of delta Cep to measure its cycle-averaged atmospheric velocity gradient in order to better constrain the projection factor. We first apply the inverse BW method to derive p directly from observations. The projection factor can be divided into three subconcepts: (1) a geometrical effect (p0); (2) the velocity gradient within the atmosphere (fgrad); and (3) the relative motion of the optical pulsating photosphere with respect to the corresponding mass elements (fo-g). We then measure the fgrad value of delta Cep for the first time. When the HARPS-N mean cross-correlated line-profiles are fitted with a Gaussian profile, the projection factor is pcc-g = 1.239 +/- 0.034(stat) +/- 0.023(syst). When we consider the different amplitudes of the radial velocity curves that are associated with 17 selected spectral lines, we measure projection factors ranging from 1.273 to 1.329. We find a relation between fgrad and the line depth measured when the Cepheid is at minimum radius. 
This relation is consistent with that obtained from our best hydrodynamical model of delta Cep and with our projection factor decomposition. Using the observational values of p and fgrad found for the 17 spectral lines, we derive a semi-theoretical value of fo-g. We alternatively obtain fo-g = 0.975+/-0.002 or 1.006+/-0.002 assuming models using radiative transfer in plane-parallel or spherically symmetric geometries, respectively. The new HARPS-N observations of delta Cep are consistent with our decomposition of the projection factor. ",0,1,0,0,0,0 17070,Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US," The United States spends more than $1B each year on initiatives such as the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed half a decade. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may provide a cheaper and faster alternative. Here, we present a method that determines socioeconomic trends from 50 million images of street scenes, gathered in 200 American cities by Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22M automobiles in total (8% of all automobiles in the US), was used to accurately estimate income, race, education, and voting patterns, with single-precinct resolution. (The average US precinct contains approximately 1000 people.) The resulting associations are surprisingly simple and powerful. 
For instance, if the number of sedans encountered during a 15-minute drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next Presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographic trends may effectively complement labor-intensive approaches, with the potential to detect trends with fine spatial resolution, in close to real time. ",1,0,0,0,0,0 17071,Gaussian Process Neurons Learn Stochastic Activation Functions," We propose stochastic, non-parametric activation functions that are fully learnable and individual to each neuron. Complexity and the risk of overfitting are controlled by placing a Gaussian process prior over these functions. The result is the Gaussian process neuron, a probabilistic unit that can be used as the basic building block for probabilistic graphical models that resemble the structure of neural networks. The proposed model can intrinsically handle uncertainties in its inputs and self-estimate the confidence of its predictions. Using variational Bayesian inference and the central limit theorem, a fully deterministic loss function is derived, allowing it to be trained as efficiently as a conventional neural network using mini-batch gradient descent. The posterior distribution of activation functions is inferred from the training data alongside the weights of the network. The proposed model favorably compares to deep Gaussian processes, both in model complexity and efficiency of inference. It can be directly applied to recurrent or convolutional network structures, allowing its use in audio and image processing tasks. As a preliminary empirical evaluation we present experiments on regression and classification tasks, in which our model achieves performance comparable to or better than a Dropout regularized neural network with a fixed activation function. 
Experiments are ongoing and results will be added as they become available. ",1,0,0,1,0,0 17072,The short-term price impact of trades is universal," We analyze a proprietary dataset of trades by a single asset manager, comparing their price impact with that of the trades of the rest of the market. In the context of a linear propagator model we find no significant difference between the two, suggesting that both the magnitude and time dependence of impact are universal in anonymous, electronic markets. This result is important as optimal execution policies often rely on propagators calibrated on anonymous data. We also find evidence that in the wake of a trade the order flow of other market participants first adds further copy-cat trades enhancing price impact on very short time scales. The induced order flow then quickly inverts, thereby contributing to impact decay. ",0,1,0,0,0,0 17073,Questions on mod p representations of reductive p-adic groups," This is a list of questions raised by our joint work arXiv:1412.0737 and its sequels. ",0,0,1,0,0,0 17074,Filamentary superconductivity in semiconducting polycrystalline ZrSe2 compound with Zr vacancies," ZrSe2 is a band semiconductor that has been studied for a long time. It has interesting electronic properties, and because of its layered structure it can be intercalated with different atoms to change some of its physical properties. In this investigation we found that Zr deficiencies alter the semiconducting behavior, and the compound can be turned into a superconductor. In this paper we report our studies related to this discovery. Decreasing the number of Zr atoms in a small proportion according to the formula ZrxSe2, where x is varied from about 8.1 to 8.6, changes the semiconducting behavior to superconducting, with transition temperatures ranging between 7.8 and 8.5 K depending on the deficiencies. Outside of those ranges the compound behaves as a semiconductor with the already known properties. 
In our experiments we found that this new superconductor contains only a very small fraction of superconducting material, as determined by magnetic measurements with an applied magnetic field of 10 Oe. Our conclusion is that the superconductivity is filamentary. However, in one studied sample the fraction was about 10.2 %, whereas in others it is only about 1 % or less. We determined the superconducting characteristics; the critical fields indicate a type-II superconductor with a Ginzburg-Landau κ parameter of about 2.7. The synthesis procedure is quite standard, following the conventional solid state reaction. Included in this paper are the electronic characteristics, the transition temperature, and the evolution of the critical fields with temperature. ",0,1,0,0,0,0 17075,Stochastic Block Model Reveals the Map of Citation Patterns and Their Evolution in Time," In this study we map out the large-scale structure of citation networks of science journals and follow their evolution in time by using stochastic block models (SBMs). The SBM fitting procedures are principled methods that can be used to find hierarchical grouping of journals into blocks that show similar incoming and outgoing citations patterns. These methods work directly on the citation network without the need to construct auxiliary networks based on similarity of nodes. We fit the SBMs to the networks of journals we have constructed from the data set of around 630 million citations and find a variety of different types of blocks, such as clusters, bridges, sources, and sinks. In addition we use a recent generalization of SBMs to determine how much a manually curated classification of journals into subfields of science is related to the block structure of the journal network and how this relationship changes in time. 
The SBM method tries to find a network of blocks that is the best high-level representation of the network of journals, and we illustrate how these block networks (at various levels of resolution) can be used as maps of science. ",1,1,0,0,0,0 17076,Limits on the anomalous speed of gravitational waves from binary pulsars," A large class of modified theories of gravity used as models for dark energy predicts a propagation speed for gravitational waves which can differ from the speed of light. This difference of propagation speeds for photons and gravitons has an impact on the emission of gravitational waves by binary systems. Thus, we revisit the usual quadrupolar emission of a binary system for an arbitrary propagation speed of gravitational waves and obtain the corresponding period decay formula. We then use timing data from the Hulse-Taylor binary pulsar and find that the speed of gravitational waves can only differ from the speed of light at the percentage level. This bound places tight constraints on dark energy models featuring an anomalous propagation speed for gravitational waves. ",0,1,0,0,0,0 17077,Central limit theorems for entropy-regularized optimal transport on finite spaces and statistical applications," The notion of entropy-regularized optimal transport, also known as Sinkhorn divergence, has recently gained popularity in machine learning and statistics, as it makes feasible the use of smoothed optimal transportation distances for data analysis. The Sinkhorn divergence allows the fast computation of an entropically regularized Wasserstein distance between two probability distributions supported on a finite metric space of (possibly) high-dimension. For data sampled from one or two unknown probability distributions, we derive the distributional limits of the empirical Sinkhorn divergence and its centered version (Sinkhorn loss). 
We also propose a bootstrap procedure which allows one to obtain new test statistics for measuring the discrepancies between multivariate probability distributions. Our work is inspired by the results of Sommerfeld and Munk (2016) on the asymptotic distribution of the empirical Wasserstein distance on finite spaces using unregularized transportation costs. Incidentally, we also analyze the asymptotic distribution of entropy-regularized Wasserstein distances when the regularization parameter tends to zero. Simulated and real datasets are used to illustrate our approach. ",0,0,1,1,0,0 17078,Inference for Stochastically Contaminated Variable Length Markov Chains," In this paper, we present a methodology to estimate the parameters of stochastically contaminated models under two contamination regimes. In both regimes, we assume that the original process is a variable length Markov chain that is contaminated by a random noise. In the first regime we consider that the random noise is added to the original source, and in the second regime the random noise is multiplied by the original source. Given a contaminated sample of these models, the original process is hidden. We then propose a two-step estimator for the parameters of these models, that is, the transition probabilities and the noise parameter, and prove its consistency. The first step is an adaptation of the Baum-Welch algorithm for Hidden Markov Models. This step provides an estimate of a complete order $k$ Markov chain, where $k$ is larger than the order of the variable length Markov chain if it has finite order, and is a constant depending on the sample size if the hidden process has infinite order. In the second estimation step, given a sample of the Markov chain estimated in the first step, we propose a bootstrap Bayesian Information Criterion to obtain the variable length time dependence structure associated with the hidden process. 
We present a simulation study showing that our methodology is able to accurately recover the parameters of the models for a reasonable range of random noise levels. ",0,0,0,1,0,0 17079,Variable-Length Resolvability for General Sources and Channels," We introduce the problem of variable-length source resolvability, where a given target probability distribution is approximated by encoding a variable-length uniform random number, and the asymptotically minimum average length rate of the uniform random numbers, called the (variable-length) resolvability, is investigated. We first analyze the variable-length resolvability with the variational distance as an approximation measure. Next, we investigate the case under the divergence as an approximation measure. When asymptotically exact approximation is required, it is shown that the resolvability under the two kinds of approximation measures coincides. We then extend the analysis to the case of channel resolvability, where the target distribution is the output distribution via a general channel given a fixed general source as input. The obtained characterization of the channel resolvability is fully general in the sense that when the channel is just the identity mapping, the characterization reduces to the general formula for the source resolvability. We also analyze the second-order variable-length resolvability. ",1,0,0,0,0,0 17080,Diattenuation of Brain Tissue and its Impact on 3D Polarized Light Imaging," 3D-Polarized Light Imaging (3D-PLI) reconstructs nerve fibers in histological brain sections by measuring their birefringence. This study investigates another effect caused by the optical anisotropy of brain tissue: diattenuation. Based on numerical and experimental studies and a complete analytical description of the optical system, the diattenuation was determined to be below 4 % in rat brain tissue. 
It was demonstrated that the diattenuation effect has a negligible impact on the fiber orientations derived by 3D-PLI. The diattenuation signal, however, was found to highlight different anatomical structures that cannot be distinguished with current imaging techniques, which makes Diattenuation Imaging a promising extension to 3D-PLI. ",0,1,0,0,0,0 17081,Higgs Modes in the Pair Density Wave Superconducting State," The pair density wave (PDW) superconducting state has been proposed to explain the layer-decoupling effect observed in the compound La$_{2-x}$Ba$_x$CuO$_4$ at $x=1/8$ (Phys. Rev. Lett. 99, 127003). In this state the superconducting order parameter is spatially modulated, in contrast with the usual superconducting (SC) state where the order parameter is uniform. In this work, we study the properties of the amplitude (Higgs) modes in a unidirectional PDW state. To this end we consider a phenomenological model of PDW-type states coupled to a Fermi surface of fermionic quasiparticles. In contrast to conventional superconductors that have a single Higgs mode, unidirectional PDW superconductors have two Higgs modes. While in the PDW state the Fermi surface largely remains gapless, we find that the damping of the PDW Higgs modes into fermionic quasiparticles requires exceeding an energy threshold. We show that this suppression of damping in the PDW state is due to kinematics. As a result, only one of the two Higgs modes is significantly damped. In addition, motivated by the experimental phase diagram, we discuss the mixing of Higgs modes in the coexistence regime of the PDW and uniform SC states. These results should be observable directly in Raman spectroscopy, in momentum-resolved electron energy loss spectroscopy, and in resonant inelastic X-ray scattering, thus providing evidence of the PDW states. 
",0,1,0,0,0,0 17082,A Serverless Tool for Platform Agnostic Computational Experiment Management," Neuroscience has been carried into the domain of big data and high performance computing (HPC) on the backs of initiatives in data collection and increasingly compute-intensive tools. While managing HPC experiments requires considerable technical acumen, platforms and standards have been developed to ease this burden on scientists. While web portals make resources widely accessible, data organizations such as the Brain Imaging Data Structure and tool description languages such as Boutiques provide researchers with a foothold to tackle these problems using their own datasets, pipelines, and environments. Although these standards lower the barrier to adoption of HPC and cloud systems for neuroscience applications, they still require the consolidation of disparate domain-specific knowledge. We present Clowdr, a lightweight tool to launch experiments on HPC systems and clouds, record rich execution records, and enable the accessible sharing of experimental summaries and results. Clowdr uniquely sits between web platforms and bare-metal applications for experiment management by preserving the flexibility of do-it-yourself solutions while providing a low barrier for developing, deploying and disseminating neuroscientific analysis. ",1,0,0,0,0,0 17083,Traveling-wave parametric amplifier based on three-wave mixing in a Josephson metamaterial," We have developed a recently proposed Josephson traveling-wave parametric amplifier with three-wave mixing [A. B. Zorin, Phys. Rev. Applied 6, 034006, 2016]. The amplifier consists of a microwave transmission line formed by a serial array of nonhysteretic one-junction SQUIDs. These SQUIDs are flux-biased in such a way that the phase drops across the Josephson junctions are equal to 90 degrees and the persistent currents in the SQUID loops are equal to the Josephson critical current values. 
Such a one-dimensional metamaterial possesses a maximal quadratic nonlinearity and zero cubic (Kerr) nonlinearity. This property allows phase matching and exponential power gain of traveling microwaves to take place over a wide frequency range. We report a proof-of-principle experiment performed at a temperature of T = 4.2 K on Nb trilayer samples, which demonstrated that our concept of a practical broadband Josephson parametric amplifier is valid and very promising for achieving quantum-limited operation. ",0,1,0,0,0,0 17084,Measuring LDA Topic Stability from Clusters of Replicated Runs," Background: Unstructured and textual data is increasing rapidly, and Latent Dirichlet Allocation (LDA) topic modeling is a popular data analysis method for it. Past work suggests that instability of LDA topics may lead to systematic errors. Aim: We propose a method that relies on replicated LDA runs, clustering, and providing a stability metric for the topics. Method: We generate k LDA topics and replicate this process n times, resulting in n*k topics. Then we use K-medoids to cluster the n*k topics into k clusters. The k clusters now represent the original LDA topics, and we present them like normal LDA topics, showing the ten most probable words. For the clusters, we try multiple stability metrics, out of which we recommend Rank-Biased Overlap, showing the stability of the topics inside the clusters. Results: We provide an initial validation where our method is used for 270,000 Mozilla Firefox commit messages with k=20 and n=20. We show how our topic stability metrics are related to the contents of the topics. Conclusions: Advances in text mining enable us to analyze large masses of text in software engineering, but non-deterministic algorithms, such as LDA, may lead to unreplicable conclusions. Our approach makes LDA stability transparent and is also complementary rather than alternative to many prior works that focus on LDA parameter tuning. 
",1,0,0,0,0,0 17085,Continuum Foreground Polarization and Na~I Absorption in Type Ia SNe," We present a study of the continuum polarization over the 400--600 nm range of 19 Type Ia SNe obtained with FORS at the VLT. We separate them into those that show Na I D lines at the velocity of their hosts and those that do not. Continuum polarization of the sodium sample near maximum light displays a broad range of values, from extremely polarized cases like SN 2006X to almost unpolarized ones like SN 2011ae. The non-sodium sample typically shows smaller polarization values. The continuum polarization of the sodium sample in the 400--600 nm range is linear with wavelength and can be characterized by the mean polarization (P$_{\rm{mean}}$). Its values span a wide range and show a linear correlation with color, color excess, and extinction in the visual band. Correlations with larger dispersion were found with the equivalent widths of the Na I D and Ca II H & K lines, and also a noisy relation between P$_{\rm{mean}}$ and $R_{V}$, the ratio of total to selective extinction. Redder SNe show stronger continuum polarization, with larger color excesses and extinctions. We also confirm that high continuum polarization is associated with small values of $R_{V}$. The correlation between extinction and polarization -- and polarization angles -- suggests that the dominant fraction of dust polarization is imprinted in interstellar regions of the host galaxies. We show that Na I D lines from foreground matter in the SN host are usually associated with non-galactic ISM, challenging the typical assumptions in foreground interstellar polarization models. ",0,1,0,0,0,0 17086,Toward Faultless Content-Based Playlists Generation for Instrumentals," This study deals with content-based musical playlist generation focused on Songs and Instrumentals. Automatic playlist generation relies on collaborative filtering and autotagging algorithms. 
Autotagging can solve the cold start issue and popularity bias that are critical in music recommender systems. However, autotagging remains to be improved and cannot yet generate satisfying music playlists. In this paper, we suggest improvements toward better autotagging-generated playlists compared to the state of the art. To assess our method, we focus on the Song and Instrumental tags. Song and Instrumental are two objective and opposite tags that are under-studied compared to genres or moods, which are subjective and multi-modal tags. We consider an industrial real-world musical database that is unevenly distributed between Songs and Instrumentals and larger than the databases used in previous studies. We set up three incremental experiments to enhance automatic playlist generation. Our suggested approach generates an Instrumental playlist with up to three times fewer false positives than cutting-edge methods. Moreover, we provide a design-of-experiment framework to foster research on Songs and Instrumentals. We give insight into how to further improve the quality of generated playlists and how to extend our methods to other musical tags. Furthermore, we provide the source code to guarantee reproducible research. ",1,0,0,0,0,0 17087,Direct observation of the band gap transition in atomically thin ReS$_2$," ReS$_2$ is considered as a promising candidate for novel electronic and sensor applications. The low crystal symmetry of the van der Waals compound ReS$_2$ leads to a highly anisotropic optical, vibrational, and transport behavior. However, the details of the electronic band structure of this fascinating material are still largely unexplored. We present a momentum-resolved study of the electronic structure of monolayer, bilayer, and bulk ReS$_2$ using k-space photoemission microscopy in combination with first-principles calculations. 
We demonstrate that the valence electrons in bulk ReS$_2$ are - contrary to assumptions in recent literature - significantly delocalized across the van der Waals gap. Furthermore, we directly observe the evolution of the valence band dispersion as a function of the number of layers, revealing a significantly increased effective electron mass in single-layer crystals. We also find that only bilayer ReS$_2$ has a direct band gap. Our results establish bilayer ReS$_2$ as an advantageous building block for two-dimensional devices and van der Waals heterostructures. ",0,1,0,0,0,0 17088,Lattice embeddings between types of fuzzy sets. Closed-valued fuzzy sets," In this paper we deal with the problem of extending Zadeh's operators on fuzzy sets (FSs) to interval-valued (IVFSs), set-valued (SVFSs) and type-2 (T2FSs) fuzzy sets. Namely, it is known that seeing FSs as SVFSs, or T2FSs, whose membership degrees are singletons is not order-preserving. We then describe a family of lattice embeddings from FSs to SVFSs. Alternatively, if the former singleton viewpoint is required, we reformulate the intersection on hesitant fuzzy sets and introduce what we have called closed-valued fuzzy sets. This new type of fuzzy sets extends standard union and intersection on FSs. In addition, it allows handling together membership degrees of a different nature such as, for instance, closed intervals and finite sets. Finally, all these constructions are viewed as T2FSs forming a chain of lattices. ",1,0,0,0,0,0 17089,Coupling of Magneto-Thermal and Mechanical Superconducting Magnet Models by Means of Mesh-Based Interpolation," In this paper we present an algorithm for the coupling of magneto-thermal and mechanical finite element models representing superconducting accelerator magnets. The mechanical models are used during the design of the mechanical structure as well as the optimization of the magnetic field quality under nominal conditions. 
The magneto-thermal models allow for the analysis of transient phenomena occurring during quench initiation, propagation, and protection. Mechanical analysis of quenching magnets is of high importance for the design of new protection systems and the study of new superconductor types. We use field/circuit coupling to determine the temperature and electromagnetic force evolution during the magnet discharge. These quantities are provided as a load to existing mechanical models. The models are discretized with different meshes and, therefore, we employ a mesh-based interpolation method to exchange coupled quantities. The coupling algorithm is illustrated with a simulation of the mechanical response of a standalone high-field dipole magnet protected with CLIQ (Coupling-Loss Induced Quench) technology. ",1,1,0,0,0,0 17090,Converging expansions for Lipschitz self-similar perforations of a plane sector," In contrast with the well-known methods of matching asymptotics and multiscale (or compound) asymptotics, the ""functional analytic approach"" of Lanza de Cristoforis (Analysis 28, 2008) allows one to prove convergence of expansions around interior small holes of size $\epsilon$ for solutions of elliptic boundary value problems. Using the method of layer potentials, the asymptotic behavior of the solution as $\epsilon$ tends to zero is described not only by asymptotic series in powers of $\epsilon$, but by convergent power series. Here we use this method to investigate the Dirichlet problem for the Laplace operator where holes are collapsing at a polygonal corner of opening $\omega$. Then in addition to the scale $\epsilon$ there appears the scale $\eta = \epsilon^{\pi/\omega}$. We prove that when $\pi/\omega$ is irrational, the solution of the Dirichlet problem is given by convergent series in powers of these two small parameters. 
Due to interference of the two scales, this convergence is obtained, in full generality, by grouping together integer powers of the two scales that are very close to each other. Nevertheless, there exists a dense subset of openings $\omega$ (characterized by Diophantine approximation properties), for which real analyticity in the two variables $\epsilon$ and $\eta$ holds and the power series converge unconditionally. When $\pi/\omega$ is rational, the series are unconditionally convergent, but contain terms in log $\epsilon$. ",0,0,1,0,0,0 17091,A Viral Timeline Branching Process to study a Social Network," Bio-inspired paradigms are proving to be useful in analyzing propagation and dissemination of information in networks. In this paper we explore the use of multi-type branching processes to analyse viral properties of content in a social network, with and without competition from other sources. We derive and compute various virality measures, e.g., the probability of virality, the expected number of shares, or the rate of growth of the expected number of shares. They allow one to predict the emergence of global macro properties (e.g., viral spread of a post in the entire network) from the laws and parameters that determine local interactions. The local interactions greatly depend upon the structure of the timelines holding the content and the number of friends (i.e., connections) of users of the network. We then formulate a non-cooperative game problem and study the Nash equilibria as a function of the parameters. The branching processes modelling the social network under competition turn out to be decomposable, multi-type, continuous-time variants. For such processes, types belonging to different sub-classes evolve at different rates and have different probabilities of extinction. We compute content-provider-wise extinction probabilities, rates of growth, etc. We also conjecture the content-provider-wise growth rate of expected shares. 
",1,0,0,0,0,0 17092,Algorithmic Bio-surveillance For Precise Spatio-temporal Prediction of Zoonotic Emergence," Viral zoonoses have emerged as the key drivers of recent pandemics. Human infections by zoonotic viruses are either spillover events -- isolated infections that fail to cause a widespread contagion -- or species jumps, where successful adaptation to the new host leads to a pandemic. Despite expensive bio-surveillance efforts, historically emergence response has been reactive and post hoc. Here we use machine inference to demonstrate a high-accuracy predictive bio-surveillance capability, designed to pro-actively localize an impending species jump via automated interrogation of massive sequence databases of viral proteins. Our results suggest that a jump might not purely be the result of an isolated unfortunate cross-infection localized in space and time; there are subtle yet detectable patterns of genotypic changes accumulating in the global viral population leading up to emergence. Using tens of thousands of protein sequences simultaneously, we train models that track the maximum achievable accuracy for disambiguating host tropism from the primary structure of surface proteins, and show that the inverse classification accuracy is a quantitative indicator of jump risk. We validate our claim in the context of the 2009 swine flu outbreak and the 2004 emergence of the H5N1 subspecies of Influenza A from avian reservoirs, illustrating that interrogation of the global viral population can unambiguously track a near-monotonic risk elevation over several preceding years leading to eventual emergence. ",0,0,0,1,1,0 17093,Practical Machine Learning for Cloud Intrusion Detection: Challenges and the Way Forward," Operationalizing machine learning based security detections is extremely challenging, especially in a continuously evolving cloud environment. 
Conventional anomaly detection does not produce satisfactory results for analysts who are investigating security incidents in the cloud. Model evaluation alone presents its own set of problems due to a lack of benchmark datasets. When deploying these detections, we must deal with model compliance, localization, and data silo issues, among many others. We pose the problem of ""attack disruption"" as a way forward in the security data science space. In this paper, we describe the framework, challenges, and open questions surrounding the successful operationalization of machine learning based security detections in a cloud environment and provide some insights on how we have addressed them. ",1,0,0,0,0,0 17094,SOI RF Switch for Wireless Sensor Network," The objective of this research was to design a 0-5 GHz RF SOI switch, with the 0.18 um power Jazz SOI technology, using Cadence software, for health care applications. This paper introduces the design of an RF switch implemented in shunt-series topology. An insertion loss of 0.906 dB and an isolation of 30.95 dB were obtained at 5 GHz. The switch also achieved a third-order distortion of 53.05 dBm, and the 1 dB compression point reached 50.06 dBm. The RF switch performance meets the desired specification requirements. ",1,0,0,0,0,0 17095,The Pentagonal Inequality," Given a positive linear combination of five (respectively seven) cosines, where the angles are positive and sum to pi, the aim of this article is to express the sharp bound of the combination as a Positive Real Fraction in the coefficients (hence cosine-free). The method uses algebraic and arithmetic manipulations with judicious transformations. ",0,0,1,0,0,0 17096,The Landscape of Deep Learning Algorithms," This paper studies the landscape of the empirical risk of deep neural networks by theoretically analyzing its convergence behavior to the population risk as well as its stationary points and properties. 
For an $l$-layer linear neural network, we prove that its empirical risk uniformly converges to its population risk at the rate of $\mathcal{O}(r^{2l}\sqrt{d\log(l)}/\sqrt{n})$ with training sample size $n$, total weight dimension $d$, and magnitude bound $r$ on the weights of each layer. We then derive the stability and generalization bounds for the empirical risk based on this result. Besides, we establish the uniform convergence of the gradient of the empirical risk to its population counterpart. We prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks with convergence guarantees, which describes the landscape of deep neural networks. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions. We prove similar results for the convergence behavior of their empirical risks as well as the gradients and analyze the properties of their non-degenerate stationary points. To the best of our knowledge, this work is the first one theoretically characterizing the landscapes of deep learning algorithms. Besides, our results provide the sample complexity of training a good deep neural network. We also provide theoretical understanding of how the neural network depth $l$, the layer width, the network size $d$ and the parameter magnitude determine the neural network landscapes. ",1,0,1,1,0,0 17097,"The effect of the environment on the structure, morphology and star-formation history of intermediate-redshift galaxies"," With the aim of understanding the effect of the environment on the star formation history and morphological transformation of galaxies, we present a detailed analysis of the colour, morphology and internal structure of cluster and field galaxies at $0.4 \le z \le 0.8$. We use {\em HST} data for over 500 galaxies from the ESO Distant Cluster Survey (EDisCS) to quantify how the galaxies' light distributions deviate from symmetric smooth profiles. 
We visually inspect the galaxies' images to identify the likely causes for such deviations. We find that the residual flux fraction ($RFF$), which measures the fractional contribution to the galaxy light of the residuals left after subtracting a symmetric and smooth model, is very sensitive to the degree of structural disturbance but not to the causes of such disturbance. On the other hand, the asymmetry of these residuals ($A_{\rm res}$) is more sensitive to the causes of the disturbance, with merging galaxies having the highest values of $A_{\rm res}$. Using these quantitative parameters we find that, at a fixed morphology, cluster and field galaxies show statistically similar degrees of disturbance. However, there is a higher fraction of symmetric and passive spirals in the cluster than in the field. These galaxies have smoother light distributions than their star-forming counterparts. We also find that while almost all field and cluster S0s appear undisturbed, there is a relatively small population of star-forming S0s in clusters but not in the field. These findings are consistent with relatively gentle environmental processes acting on galaxies infalling onto clusters. ",0,1,0,0,0,0 17098,Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning," Recommender systems play a crucial role in mitigating the problem of information overload by suggesting personalized items or services to users. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. 
We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements for these items from users' feedback. Users' feedback can be positive or negative, and both types of feedback have great potential to boost recommendations. However, the amount of negative feedback is much larger than that of positive feedback; thus incorporating them simultaneously is challenging, since positive feedback could be buried by negative feedback. In this paper, we develop a novel approach to incorporate both into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations. ",0,0,0,1,0,0 17099,Accumulated Gradient Normalization," This work addresses the instability in asynchronous data parallel optimization. It does so by introducing a novel distributed optimizer which is able to efficiently optimize a centralized model under communication constraints. The optimizer achieves this by pushing a normalized sequence of first-order gradients to a parameter server. This implies that the magnitude of a worker delta is smaller compared to an accumulated gradient, and provides a better direction towards a minimum compared to first-order gradients, which in turn also forces possible implicit momentum fluctuations to be more aligned, since we make the assumption that all workers contribute towards a single minimum. 
As a result, our approach mitigates the parameter staleness problem more effectively, since staleness in asynchrony induces (implicit) momentum, and achieves a better convergence rate compared to other optimizers such as asynchronous EASGD and DynSGD, as we show empirically. ",1,0,0,1,0,0 17100,An Optimal Algorithm for Changing from Latitudinal to Longitudinal Formation of Autonomous Aircraft Squadrons," This work presents an algorithm for changing from latitudinal to longitudinal formation of autonomous aircraft squadrons. The maneuvers are defined dynamically by using a predefined set of 3D basic maneuvers. This formation change is necessary when the squadron has to perform tasks which demand both formations, such as lift-off, georeferencing, obstacle avoidance and landing. Simulations show that the formation change is made without collision. The time complexity analysis of the transformation algorithm reveals that its efficiency is optimal, and the proof of correctness ensures its longitudinal formation features. ",1,0,0,0,0,0 17101,Ideal Cluster Points in Topological Spaces," Given an ideal $\mathcal{I}$ on $\omega$, we show that a sequence in a topological space $X$ is $\mathcal{I}$-convergent if and only if there exists a ""big"" $\mathcal{I}$-convergent subsequence. In addition, we study several properties of $\mathcal{I}$-cluster points. As a consequence, the underlying topology $\tau$ coincides with the topology generated by the pair $(\tau,\mathcal{I})$. Then, we obtain two characterizations of the set of $\mathcal{I}$-cluster points as classical cluster points of a filter on $X$ and as the smallest closed set containing ""almost all"" the sequence. ",0,0,1,0,0,0 17102,Spin Hall effect of gravitational waves," Gravitons possess a Berry curvature due to their helicity. We derive the semiclassical equations of motion for gravitons taking into account the Berry curvature. 
We show that this quantum correction leads to the splitting of the trajectories of right- and left-handed gravitational waves in curved space, and that this correction can be understood as a topological phenomenon. This is the spin Hall effect (SHE) of gravitational waves. We find that the SHE of gravitational waves is twice as large as that of light. Possible future observations of the SHE of gravitational waves can potentially test the quantum nature of gravitons beyond classical general relativity. ",0,1,0,0,0,0 17103,Many-Goals Reinforcement Learning," All-goals updating exploits the off-policy nature of Q-learning to update all possible goals an agent could have from each transition in the world, and was introduced into Reinforcement Learning (RL) by Kaelbling (1993). In prior work this was mostly explored in small-state RL problems that allowed tabular representations and where all possible goals could be explicitly enumerated and learned separately. In this paper we empirically explore 3 different extensions of the idea of updating many (instead of all) goals in the context of RL with deep neural networks (or DeepRL for short). First, in a direct adaptation of Kaelbling's approach, we explore whether many-goals updating can be used to achieve mastery in non-tabular visual-observation domains. Second, we explore whether many-goals updating can be used to pre-train a network to subsequently learn faster and better on a single main task of interest. Third, we explore whether many-goals updating can be used to provide auxiliary task updates in training a network to learn faster and better on a single main task of interest. We provide comparisons to baselines for each of the 3 extensions. ",0,0,0,1,0,0 17104,Localized Structured Prediction," Key to structured prediction is exploiting the problem structure to simplify the learning process. 
A major challenge arises when data exhibit a local structure (e.g., are made by ""parts"") that can be leveraged to better approximate the relation between (parts of) the input and (parts of) the output. Recent literature on signal processing, and in particular computer vision, has shown that capturing these aspects is indeed essential to achieve state-of-the-art performance. While such algorithms are typically derived on a case-by-case basis, in this work we propose the first theoretical framework to deal with part-based data from a general perspective. We derive a novel approach to deal with these problems and study its generalization properties within the setting of statistical learning theory. Our analysis is novel in that it explicitly quantifies the benefits of leveraging the part-based structure of the problem with respect to the learning rates of the proposed estimator. ",0,0,0,1,0,0 17105,Routing in FRET-based Nanonetworks," Nanocommunications, understood as communications between nanoscale devices, is commonly regarded as a technology essential for cooperation of large groups of nanomachines and thus crucial for development of the whole area of nanotechnology. While solutions for point-to-point nanocommunications have been already proposed, larger networks cannot function properly without routing. In this article we focus on the nanocommunications via Förster Resonance Energy Transfer (FRET), which was found to be a technique with a very high signal propagation speed, and discuss how to route signals through nanonetworks. We introduce five new routing mechanisms, based on biological properties of specific molecules. We experimentally validate one of these mechanisms. Finally, we analyze open issues showing the technical challenges for signal transmission and routing in FRET-based nanocommunications. 
",0,0,0,0,1,0 17106,"FFT Convolutions are Faster than Winograd on Modern CPUs, Here is Why"," Winograd-based convolution has quickly gained traction as a preferred approach to implement convolutional neural networks (ConvNet) on various hardware platforms because it requires fewer floating point operations than FFT-based or direct convolutions. This paper compares three highly optimized implementations (regular FFT--, Gauss--FFT--, and Winograd--based convolutions) on modern multi-- and many--core CPUs. Although all three implementations employed the same optimizations for modern CPUs, our experimental results with two popular ConvNets (VGG and AlexNet) show that the FFT--based implementations generally outperform the Winograd--based approach, contrary to the popular belief. To understand the results, we use a Roofline performance model to analyze the three implementations in detail, by looking at each of their computation phases and by considering not only the number of floating point operations, but also the memory bandwidth and the cache sizes. The performance analysis explains why, and under what conditions, the FFT--based implementations outperform the Winograd--based one, on modern CPUs. ",1,0,0,0,0,0 17107,The application of selection principles in the study of the properties of function spaces," In this paper we investigate the properties of function spaces using the selection principles. ",0,0,1,0,0,0 17108,Progressive Neural Architecture Search," We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. 
Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet. ",1,0,0,1,0,0 17109,Lower Bounds for Maximum Gap in (Inverse) Cyclotomic Polynomials," The maximum gap $g(f)$ of a polynomial $f$ is the maximum of the differences (gaps) between two consecutive exponents that appear in $f$. Let $\Phi_{n}$ and $\Psi_{n}$ denote the $n$-th cyclotomic and $n$-th inverse cyclotomic polynomial, respectively. In this paper, we give several lower bounds for $g(\Phi_{n})$ and $g(\Psi_{n})$, where $n$ is the product of odd primes. We observe that they are very often exact. We also give an exact expression for $g(\Psi_{n})$ under a certain condition. Finally we conjecture an exact expression for $g(\Phi_{n})$ under a certain condition. ",0,0,1,0,0,0 17110,"Dynamical Analysis of Cylindrically Symmetric Anisotropic Sources in $f(R,T)$ Gravity"," In this paper, we have analyzed the stability of a cylindrically symmetric collapsing object filled with locally anisotropic fluid in $f(R,T)$ theory, where $R$ is the scalar curvature and $T$ is the trace of stress-energy tensor of matter. Modified field equations and dynamical equations are constructed in $f(R,T)$ gravity. Evolution or collapse equation is derived from dynamical equations by performing linear perturbation on them. Instability range is explored in both Newtonian and post-Newtonian regimes with the help of the adiabatic index, which defines the impact of physical parameters on the instability range. Some conditions are imposed on physical quantities to secure the stability of the gravitating sources. 
",0,1,0,0,0,0 17111,Trace and Künneth formulas for singularity categories and applications," We present an $\ell$-adic trace formula for saturated and admissible dg-categories over a base monoidal dg-category. Moreover, we prove Künneth formulas for dg-categories of singularities, and for inertia-invariant vanishing cycles. As an application, we prove a version of Bloch's Conductor Conjecture (stated by Spencer Bloch in 1985), under the additional hypothesis that the monodromy action of the inertia group is unipotent. ",0,0,1,0,0,0 17112,A Redshift Survey of the Nearby Galaxy Cluster Abell 2199: Comparison of the Spatial and Kinematic Distributions of Galaxies with the Intracluster Medium," We present the results from an extensive spectroscopic survey of the central region of the nearby galaxy cluster Abell 2199 at $z=0.03$. By combining 775 new redshifts from the MMT/Hectospec observations with the data in the literature, we construct a large sample of 1624 galaxies with measured redshifts at $R<30^\prime$, which results in high spectroscopic completeness at $r_{\rm petro,0}<20.5$ (77%). We use these data to study the kinematics and clustering of galaxies focusing on the comparison with those of the intracluster medium (ICM) from Suzaku X-ray observations. We identify 406 member galaxies of A2199 at $R<30^\prime$ using the caustic technique. The velocity dispersion profile of cluster members appears smoothly connected to the stellar velocity dispersion profile of the cD galaxy. The luminosity function is well fitted with a Schechter function at $M_r<-15$. The radial velocities of cluster galaxies generally agree well with those of the ICM, but there are some regions where the velocity difference between the two is about a few hundred kilometers per second. The cluster galaxies show a hint of global rotation at $R<5^\prime$ with $v_{\rm rot}=300{-}600\,\textrm{km s}^{-1}$, but the ICM in the same region does not show such rotation. 
We apply a friends-of-friends algorithm to the cluster galaxy sample at $R<60^\prime$, identify 32 group candidates, and examine the spatial correlation between the galaxy groups and X-ray emission. This extensive survey in the central region of A2199 provides an important basis for future studies of interplay among the galaxies, the ICM and the dark matter in the cluster. ",0,1,0,0,0,0 17113,A trapped field of 13.4 T in a stack of HTS tapes with 30 μm substrate," Superconducting bulk (RE)Ba$_2$Cu$_3$O$_{7-x}$ materials (RE-rare earth elements) have been successfully used to generate magnetic flux densities in excess of 17 T. This work investigates an alternative approach by trapping flux in stacks of second generation high temperature superconducting tape from several manufacturers using field cooling and pulsed field magnetisation techniques. Flux densities of up to 13.4 T were trapped by field cooling at ~5 K between two 12 mm square stacks, an improvement of 70% over the previous value achieved in an HTS tape stack. The trapped flux approaches the record values in (RE)BCO bulks and reflects the rapid developments still being made in the HTS tape performance. ",0,1,0,0,0,0 17114,Exact Simulation of the Extrema of Stable Processes," We exhibit an exact simulation algorithm for the supremum of a stable process over a finite time interval using dominated coupling from the past (DCFTP). We establish a novel perpetuity equation for the supremum (via the representation of the concave majorants of Lévy processes) and apply it to construct a Markov chain in the DCFTP algorithm. We prove that the number of steps taken backwards in time before the coalescence is detected is finite. ",0,0,0,1,0,0 17115,Nonparametric Bayesian estimation of a Hölder continuous diffusion coefficient," We consider a nonparametric Bayesian approach to estimate the diffusion coefficient of a stochastic differential equation given discrete time observations over a fixed time interval. 
As a prior on the diffusion coefficient, we employ a histogram-type prior with piecewise constant realisations on bins forming a partition of the time interval. Specifically, these constants are realisations of independent inverse Gamma distributed random variables. We justify our approach by deriving the rate at which the corresponding posterior distribution asymptotically concentrates around the data-generating diffusion coefficient. This posterior contraction rate turns out to be optimal for estimation of a Hölder-continuous diffusion coefficient with smoothness parameter $0<\lambda\leq 1.$ Our approach is straightforward to implement, as the posterior distributions turn out to be inverse Gamma again, and leads to good practical results in a wide range of simulation examples. Finally, we apply our method on exchange rate data sets. ",0,0,1,1,0,0 17116,Dirac State in a Centrosymmetric Superconductor alpha-PdBi2," Topological superconductor (TSC) hosting Majorana fermions has been established as a milestone that may shift our scientific trajectory from research to applications in topological quantum computing. Recently, superconducting Pd-Bi binaries have attracted great attention as a possible medium for the TSC phase as a result of their large spin-orbit coupling strength. Here, we report a systematic high-resolution angle-resolved photoemission spectroscopy (ARPES) study on the normal state electronic structure of superconducting alpha-PdBi2 (Tc = 1.7 K). Our results show the presence of Dirac states at higher-binding energy with the location of the Dirac point at 1.26 eV below the chemical potential at the zone center. Furthermore, the ARPES data indicate multiple band crossings at the chemical potential, consistent with the metallic behavior of alpha-PdBi2. Our detailed experimental studies are complemented by first-principles calculations, which reveal the presence of surface Rashba states residing in the vicinity of the chemical potential. 
The obtained results provide an opportunity to investigate the relationship between superconductivity and topology, as well as explore pathways to possible future platforms for topological quantum computing. ",0,1,0,0,0,0 17117,Hölder and Lipschitz continuity of functions definable over Henselian rank one valued fields," Consider a Henselian rank one valued field $K$ of equicharacteristic zero with the three-sorted language $\mathcal{L}$ of Denef--Pas. Let $f: A \to K$ be a continuous $\mathcal{L}$-definable (with parameters) function on a closed bounded subset $A \subset K^{n}$. The main purpose is to prove that then $f$ is Hölder continuous with some exponent $s\geq 0$ and constant $c \geq 0$, a fortiori, $f$ is uniformly continuous. Further, if $f$ is locally Lipschitz continuous with a constant $c$, then $f$ is (globally) Lipschitz continuous with possibly some larger constant $d$. Also stated are some problems concerning continuous and Lipschitz continuous functions definable over Henselian valued fields. ",0,0,1,0,0,0 17118,Bistability of Cavity Magnon Polaritons," We report the first observation of the magnon-polariton bistability in a cavity magnonics system consisting of cavity photons strongly interacting with the magnons in a small yttrium iron garnet (YIG) sphere. The bistable behaviors are emerged as sharp frequency switchings of the cavity magnon-polaritons (CMPs) and related to the transition between states with large and small number of polaritons. In our experiment, we align, respectively, the [100] and [110] crystallographic axes of the YIG sphere parallel to the static magnetic field and find very different bistable behaviors (e.g., clockwise and counter-clockwise hysteresis loops) in these two cases. The experimental results are well fitted and explained as being due to the Kerr nonlinearity with either positive or negative coefficient. 
Moreover, when the magnetic field is tuned away from the anticrossing point of CMPs, we observe simultaneous bistability of both magnons and cavity photons by applying a drive field on the lower branch. ",0,1,0,0,0,0 17119,Errors and secret data in the Italian research assessment exercise. A comment to a reply," Italy adopted a performance-based system for funding universities that is centered on the results of a national research assessment exercise, realized by a governmental agency (ANVUR). ANVUR evaluated papers by using 'a dual system of evaluation', that is by informed peer review or by bibliometrics. In view of validating that system, ANVUR performed an experiment for estimating the agreement between informed review and bibliometrics. Ancaiani et al. (2015) presents the main results of the experiment. Baccini and De Nicolao (2017) documented in a letter, among other critical issues, that the statistical analysis was not realized on a random sample of articles. A reply to the letter has been published by Research Evaluation (Benedetto et al. 2017). This note highlights that in the reply there are (1) errors in data, (2) problems with 'representativeness' of the sample, (3) unverifiable claims about weights used for calculating kappas, (4) undisclosed averaging procedures; (5) a statement about 'same protocol in all areas' contradicted by official reports. Last but not least: the data used by the authors continue to be undisclosed. A general warning concludes: many recently published papers use data originating from Italian research assessment exercise. These data are not accessible to the scientific community and consequently these papers are not reproducible. They can be hardly considered as containing sound evidence at least until authors or ANVUR disclose the data necessary for replication. 
",1,0,0,0,0,0 17120,Exploring Features for Predicting Policy Citations," In this study we performed an initial investigation and evaluation of altmetrics and their relationship with public policy citation of research papers. We examined methods for using altmetrics and other data to predict whether a research paper is cited in public policy and applied receiver operating characteristic (ROC) curve analysis to various feature groups in order to evaluate their potential usefulness. From the methods we tested, classifying based on tweet count provided the best results, achieving an area under the ROC curve of 0.91. ",1,0,0,0,0,0 17121,Recovery guarantees for compressed sensing with unknown errors," From a numerical analysis perspective, assessing the robustness of l1-minimization is a fundamental issue in compressed sensing and sparse regularization. Yet, the recovery guarantees available in the literature usually depend on a priori estimates of the noise, which can be very hard to obtain in practice, especially when the noise term also includes unknown discrepancies between the finite model and data. In this work, we study the performance of l1-minimization when these estimates are not available, providing robust recovery guarantees for quadratically constrained basis pursuit and random sampling in bounded orthonormal systems. Several applications of this work are approximation of high-dimensional functions, infinite-dimensional sparse regularization for inverse problems, and fast algorithms for non-Cartesian Magnetic Resonance Imaging. ",0,0,1,0,0,0 17122,Synthetic Homology in Homotopy Type Theory," This paper defines homology in homotopy type theory; in the process, stable homotopy groups are also defined. Previous research in synthetic homotopy theory is relied on, in particular the definition of cohomology. This work lays the foundation for a computer-checked construction of homology. 
",1,0,1,0,0,0 17123,Spatial Models of Vector-Host Epidemics with Directed Movement of Vectors Over Long Distances," We investigate a time-dependent spatial vector-host epidemic model with non-coincident domains for the vector and host populations. The host population resides in small non-overlapping sub-regions, while the vector population resides throughout a much larger region. The dynamics of the populations are modeled by a reaction-diffusion-advection compartmental system of partial differential equations. The disease is transmitted through vector and host populations in criss-cross fashion. We establish global well-posedness and uniform a priori bounds as well as the long-term behavior. The model is applied to simulate the outbreak of bluetongue disease in sheep transmitted by midges infected with bluetongue virus. We show that the long-range directed movement of the midge population, due to wind-aided movement, enhances the transmission of the disease to sheep in distant sites. ",0,0,0,0,1,0 17124,Complex and Quaternionic Principal Component Pursuit and Its Application to Audio Separation," Recently, the principal component pursuit has received increasing attention in signal processing research ranging from source separation to video surveillance. So far, all existing formulations are real-valued and lack the concept of phase, which is inherent in inputs such as complex spectrograms or color images. Thus, in this letter, we extend principal component pursuit to the complex and quaternionic cases to account for the missing phase information. Specifically, we present both complex and quaternionic proximity operators for the $\ell_1$- and trace-norm regularizers. These operators can be used in conjunction with proximal minimization methods such as the inexact augmented Lagrange multiplier algorithm. The new algorithms are then applied to the singing voice separation problem, which aims to separate the singing voice from the instrumental accompaniment. 
Results on the iKala and MSD100 datasets confirmed the usefulness of phase information in principal component pursuit. ",0,0,0,1,0,0 17125,Light axion-like dark matter must be present during inflation," Axion-like particles (ALPs) might constitute the totality of the cold dark matter (CDM) observed. The parameter space of ALPs depends on the mass of the particle $m$ and on the energy scale of inflation $H_I$, the latter being bound by the non-detection of primordial gravitational waves. We show that the bound on $H_I$ implies the existence of a mass scale $m_\chi = 10 {\rm \,neV} \div 0.5 {\rm \,peV}$, depending on the ALP susceptibility $\chi$, such that the energy density of ALPs of mass smaller than $m_\chi$ is too low to explain the present CDM budget, if the ALP field has originated after the end of inflation. This bound affects Ultra-Light Axions (ULAs), which have recently regained popularity as CDM candidates. Light ($m < m_\chi$) ALPs can then be CDM candidates only if the ALP field has already originated during the inflationary period, in which case the parameter space is constrained by the non-detection of axion isocurvature fluctuations. We comment on the effects on these bounds from additional physics beyond the Standard Model, besides ALPs. ",0,1,0,0,0,0 17126,Boolean function analysis meets stochastic optimization: An approximation scheme for stochastic knapsack," The stochastic knapsack problem is the stochastic variant of the classical knapsack problem in which the algorithm designer is given a knapsack with a given capacity and a collection of items where each item is associated with a profit and a probability distribution on its size. The goal is to select a subset of items with maximum profit and violate the capacity constraint with probability at most $p$ (referred to as the overflow probability). While several approximation algorithms have been developed for this problem, most of these algorithms relax the capacity constraint of the knapsack. 
In this paper, we design efficient approximation schemes for this problem without relaxing the capacity constraint. (i) Our first result is in the case when item sizes are Bernoulli random variables. In this case, we design a (nearly) fully polynomial time approximation scheme (FPTAS) which only relaxes the overflow probability. (ii) Our second result generalizes the first result to the case when all the item sizes are supported on a (common) set of constant size. (iii) Our third result is in the case when item sizes are so-called ""hypercontractive"" random variables i.e., random variables whose second and fourth moments are within constant factors of each other. In other words, the kurtosis of the random variable is upper bounded by a constant. Crucially, all of our algorithms meet the capacity constraint exactly, a result which was previously known only when the item sizes were Poisson or Gaussian random variables. Our results rely on new connections between Boolean function analysis and stochastic optimization. We believe that these ideas and techniques may prove to be useful in other stochastic optimization problems as well. ",1,0,0,0,0,0 17127,Nonparanormal Information Estimation," We study the problem of using i.i.d. samples from an unknown multivariate probability distribution $p$ to estimate the mutual information of $p$. This problem has recently received attention in two settings: (1) where $p$ is assumed to be Gaussian and (2) where $p$ is assumed only to lie in a large nonparametric smoothness class. Estimators proposed for the Gaussian case converge in high dimensions when the Gaussian assumption holds, but are brittle, failing dramatically when $p$ is not Gaussian. Estimators proposed for the nonparametric case fail to converge with realistic sample sizes except in very low dimensions. As a result, there is a lack of robust mutual information estimators for many realistic data. 
To address this, we propose estimators for mutual information when $p$ is assumed to be a nonparanormal (a.k.a., Gaussian copula) model, a semiparametric compromise between Gaussian and nonparametric extremes. Using theoretical bounds and experiments, we show these estimators strike a practical balance between robustness and scaling with dimensionality. ",1,0,1,1,0,0 17128,Controlling a population," We introduce a new setting where a population of agents, each modelled by a finite-state system, are controlled uniformly: the controller applies the same action to every agent. The framework is largely inspired by the control of a biological system, namely a population of yeasts, where the controller may only change the environment common to all cells. We study a synchronisation problem for such populations: no matter how individual agents react to the actions of the controller, the controller aims at driving all agents synchronously to a target state. The agents are naturally represented by a non-deterministic finite state automaton (NFA), the same for every agent, and the whole system is encoded as a 2-player game. The first player (Controller) chooses actions, and the second player (Agents) resolves non-determinism for each agent. The game with m agents is called the m -population game. This gives rise to a parameterized control problem (where control refers to 2 player games), namely the population control problem: can Controller control the m-population game for all m in N whatever Agents does? ",1,0,0,0,0,0 17129,Finite scale local Lyapunov exponents distribution in fully developed homogeneous isotropic turbulence," The present work analyzes the distribution function of the finite scale local Lyapunov exponent of a pair fluid particles trajectories in fully developed incompressible homogeneous isotropic turbulence. 
According to the hypothesis of fully developed chaos, this PDF is reasonably estimated by maximizing the entropy associated with this distribution, which turns out to be a uniform distribution function in a proper interval of variation of the local Lyapunov exponents. From this PDF, we determine the relationship between the average and maximum Lyapunov exponents and the longitudinal velocity correlation function. This link, which leads to the closure of von Kármán--Howarth and Corrsin equations, agrees with the relation obtained in the previous work, supporting the proposed PDF calculation, at least for the purposes of the energy cascade effect estimation. Furthermore, through the property that the Lyapunov vectors tend to align to the direction of the maximum growth rate of trajectories distance, we obtain the link between maximum and average Lyapunov exponents in line with the previous result. ",0,1,0,0,0,0 17130,Improved Power Decoding of One-Point Hermitian Codes," We propose a new partial decoding algorithm for one-point Hermitian codes that can decode up to the same number of errors as the Guruswami--Sudan decoder. Simulations suggest that it has a similar failure probability as the latter one. The algorithm is based on a recent generalization of the power decoding algorithm for Reed--Solomon codes and does not require an expensive root-finding step. In addition, it promises improvements for decoding interleaved Hermitian codes. ",1,0,0,0,0,0 17131,Finite numbers of initial ideals in non-Noetherian polynomial rings," In this article, we generalize the well-known result that ideals of Noetherian polynomial rings have only finitely many initial ideals to the situation of ascending ideal chains in non-Noetherian polynomial rings. 
More precisely, we study ideal chains in the polynomial ring $R=K[x_{i,j}\,|\,1\leq i\leq c,j\in N]$ that are invariant under the action of the monoid $Inc(N)$ of strictly increasing functions on $N$, which acts on $R$ by shifting the second variable index. We show that for every such ideal chain, the number of initial ideal chains with respect to term orders on $R$ that are compatible with the action of $Inc(N)$ is finite. As a consequence of this, we will see that $Inc(N)$-invariant ideals of $R$ have only finitely many initial ideals with respect to $Inc(N)$-compatible term orders. The article also addresses the question of how many such term orders exist. We give a complete list of the $Inc(N)$-compatible term orders for the case $c=1$ and show that there are infinitely many for $c > 1$. This answers a question by Hillar, Krone, and Leykin. ",0,0,1,0,0,0 17132,The Diederich-Fornaess Index and Good Vector Fields," We consider the relationship between two sufficient conditions for regularity of the Bergman Projection on smooth, bounded, pseudoconvex domains. We show that if the set of infinite type points is reasonably well-behaved, then the existence of a family of good vector fields in the sense of Boas and Straube implies that the Diederich-Fornaess Index of the domain is equal to one. ",0,0,1,0,0,0 17133,Complete Minors of Self-Complementary Graphs," We show that any self-complementary graph with $n$ vertices contains a $K_{\lfloor \frac{n+1}{2}\rfloor}$ minor. We derive topological properties of self-complementary graphs. ",0,0,1,0,0,0 17134,Statistical estimation of the Oscillating Brownian Motion," We study the asymptotic behavior of estimators of a two-valued, discontinuous diffusion coefficient in a Stochastic Differential Equation, called an Oscillating Brownian Motion. 
Using the relation of the latter process with the Skew Brownian Motion, we propose two natural consistent estimators, which are variants of the integrated volatility estimator and take the occupation times into account. We show the stable convergence of the renormalized estimation errors toward a Gaussian mixture, possibly corrected by a term that depends on the local time. These limits stem from the lack of ergodicity as well as the behavior of the local time at zero of the process. We test both estimators on simulated processes, finding complete agreement with the theoretical predictions. ",0,0,1,1,0,0 17135,Competing Ferromagnetic and Anti-Ferromagnetic interactions in Iron Nitride $ζ$-Fe$_2$N," The paper discusses the magnetic state of zeta phase of iron nitride viz. $\zeta$-Fe$_2$N on the basis of spin polarized first principles electronic structure calculations together with a review of already published data. Results of our first principles study suggest that the ground state of $\zeta$-Fe$_2$N is ferromagnetic (FM) with a magnetic moment of 1.528 $\mu_\text{B}$ on the Fe site. The FM ground state is lower than the anti-ferromagnetic (AFM) state by 8.44 meV and non-magnetic (NM) state by 191 meV per formula unit. These results are important in view of reports which claim that $\zeta$-Fe$_2$N undergoes an AFM transition below 10K and others which do not observe any magnetic transition up to 4.2K. We argue that the experimental results of AFM transition below 10K are inconclusive and we propose the presence of competing FM and AFM superexchange interactions between Fe sites mediated by nitrogen atoms, which are consistent with Goodenough-Kanamori-Anderson rules. We find that the anti-ferromagnetically coupled Fe sites are outnumbered by ferromagnetically coupled Fe sites leading to a stable FM ground state. A Stoner analysis of the results also supports our claim of a FM ground state. 
",0,1,0,0,0,0 17136,A cautionary tale: limitations of a brightness-based spectroscopic approach to chromatic exoplanet radii," Determining wavelength-dependent exoplanet radii measurements is an excellent way to probe the composition of exoplanet atmospheres. In light of this, Borsa et al. (2016) sought to develop a technique to obtain such measurements by comparing ground-based transmission spectra to the expected brightness variations during an exoplanet transit. However, we demonstrate herein that this is not possible due to the transit light curve normalisation necessary to remove the effects of the Earth's atmosphere on the ground-based observations. This is because the recoverable exoplanet radius is set by the planet-to-star radius ratio within the transit light curve; we demonstrate this both analytically and with simulated planet transits, as well as through a reanalysis of the HD 189733b data. ",0,1,0,0,0,0 17137,Generating Memorable Mnemonic Encodings of Numbers," The major system is a mnemonic system that can be used to memorize sequences of numbers. In this work, we present a method to automatically generate sentences that encode a given number. We propose several encoding models and compare the most promising ones in a password memorability study. The results of the study show that a model combining part-of-speech sentence templates with an $n$-gram language model produces the most memorable password representations. ",1,0,0,0,0,0 17138,De-excitation spectroscopy of strongly interacting Rydberg gases," We present experimental results on the controlled de-excitation of Rydberg states in a cold gas of Rb atoms. The effect of the van der Waals interactions between the Rydberg atoms is clearly seen in the de-excitation spectrum and dynamics. Our observations are confirmed by numerical simulations. 
In particular, for off-resonant (facilitated) excitation we find that the de-excitation spectrum reflects the spatial arrangement of the atoms in the quasi one-dimensional geometry of our experiment. We discuss future applications of this technique and implications for detection and controlled dissipation schemes. ",0,1,0,0,0,0 17139,Hyperplane Clustering Via Dual Principal Component Pursuit," We extend the theoretical analysis of a recently proposed single subspace learning algorithm, called Dual Principal Component Pursuit (DPCP), to the case where the data are drawn from a union of hyperplanes. To gain insight into the properties of the $\ell_1$ non-convex problem associated with DPCP, we develop a geometric analysis of a closely related continuous optimization problem. Then, transferring this analysis to the discrete problem, our results state that, as long as the hyperplanes are sufficiently separated, the dominant hyperplane is sufficiently dominant and the points are uniformly distributed inside the associated hyperplanes, the non-convex DPCP problem has a unique global solution, equal to the normal vector of the dominant hyperplane. This suggests the correctness of a sequential hyperplane learning algorithm based on DPCP. A thorough experimental evaluation reveals that hyperplane learning schemes based on DPCP dramatically improve over the state-of-the-art methods for the case of synthetic data, while being competitive with the state-of-the-art in the case of 3D plane clustering for Kinect data. ",1,0,0,1,0,0 17140,Utilizing Domain Knowledge in End-to-End Audio Processing," End-to-end neural network based approaches to audio modelling are generally outperformed by models trained on high-level data representations. In this paper we present preliminary work that shows the feasibility of training the first layers of a deep convolutional neural network (CNN) model to learn the commonly-used log-scaled mel-spectrogram transformation. 
Secondly, we demonstrate that upon initializing the first layers of an end-to-end CNN classifier with the learned transformation, convergence and performance on the ESC-50 environmental sound classification dataset are similar to those of a CNN-based model trained on the highly pre-processed log-scaled mel-spectrogram features. ",1,0,0,1,0,0 17141,Testing Microfluidic Fully Programmable Valve Arrays (FPVAs)," The Fully Programmable Valve Array (FPVA) has emerged as a new architecture for the next-generation flow-based microfluidic biochips. This 2D-array consists of regularly-arranged valves, which can be dynamically configured by users to realize microfluidic devices of different shapes and sizes as well as interconnections. Additionally, the regularity of the underlying structure renders FPVAs easier to integrate on a tiny chip. However, these arrays may suffer from various manufacturing defects such as blockage and leakage in control and flow channels. Unfortunately, no efficient method is yet known for testing such a general-purpose architecture. In this paper, we present a novel formulation using the concept of flow paths and cut-sets, and describe an ILP-based hierarchical strategy for generating compact test sets that can detect multiple faults in FPVAs. Simulation results demonstrate the efficacy of the proposed method in detecting manufacturing faults with only a small number of test vectors. ",1,0,0,0,0,0 17142,"Fundamental groups, slalom curves and extremal length"," We define the extremal length of elements of the fundamental group of the twice punctured complex plane and give upper and lower bounds for this invariant. The bounds differ by a multiplicative constant. The main motivation comes from $3$-braid invariants and their application. ",0,0,1,0,0,0 17143,Topology and stability of the Kondo phase in quark matter," We investigate properties of the ground state of light quark matter with heavy quark impurities. 
This system exhibits the ""QCD Kondo effect"" where the interaction strength between a light quark near the Fermi surface and a heavy quark increases with decreasing energy of the light quark towards the Fermi energy, and diverges at some scale near the Fermi energy, called the Kondo scale. Around and below the Kondo scale, we must treat the dynamics nonperturbatively. As a typical nonperturbative method to treat the strong coupling regime, we adopt a mean-field approach where we introduce a condensate, the Kondo condensate, representing a mixing between a light quark and a heavy quark, and determine the ground state in the presence of the Kondo condensate. We show that the ground state is a topologically non-trivial state and the heavy quark spin forms the hedgehog configuration in the momentum space. We can define the Berry phase for the ground-state wavefunction in the momentum space which is associated with a monopole at the position of a heavy quark. We also investigate fluctuations around the mean field in the random-phase approximation, and show the existence of (exciton-like) collective excitations made of a hole $h$ of a light quark and a heavy quark $Q$. ",0,1,0,0,0,0 17144,Second order nonlinear gyrokinetic theory : From the particle to the gyrocenter," A gyrokinetic reduction is based on a specific ordering of the different small parameters characterizing the background magnetic field and the fluctuating electromagnetic fields. In this tutorial, we consider the following ordering of the small parameters: $\epsilon\_B=\epsilon\_\delta^2$ where $\epsilon\_B$ is the small parameter associated with spatial inhomogeneities of the background magnetic field and $\epsilon\_\delta$ characterizes the small amplitude of the fluctuating fields. In particular, we do not make any assumption on the amplitude of the background magnetic field. 
Given this choice of ordering, we describe a self-contained and systematic derivation which is particularly well suited for the gyrokinetic reduction, following a two-step procedure. We follow the approach developed in [Sugama, Physics of Plasmas 7, 466 (2000)]: In a first step, using a translation in velocity, we embed the transformation performed on the symplectic part of the gyrocentre reduction in the guiding-centre one. In a second step, using a canonical Lie transform, we eliminate the gyroangle dependence from the Hamiltonian. As a consequence, we explicitly derive the fully electromagnetic gyrokinetic equations at second order in $\epsilon_\delta$. ",0,1,0,0,0,0 17145,On the Classification and Algorithmic Analysis of Carmichael Numbers," In this paper, we study the properties of Carmichael numbers, false positives for several primality tests. We provide a classification for Carmichael numbers with a proportion of Fermat witnesses of less than 50%, based on whether the smallest prime factor is greater than a determined lower bound. In addition, we conduct a Monte Carlo simulation as part of a probabilistic algorithm to detect if a given composite number is Carmichael. We modify this highly accurate algorithm with a deterministic primality test to create a novel, more efficient algorithm that differentiates between Carmichael numbers and prime numbers. ",0,0,1,0,0,0 17146,Phase reduction and synchronization of a network of coupled dynamical elements exhibiting collective oscillations," A general phase reduction method for a network of coupled dynamical elements exhibiting collective oscillations, which is applicable to arbitrary networks of heterogeneous dynamical elements, is developed. A set of coupled adjoint equations for phase sensitivity functions, which characterize the phase response of the collective oscillation to small perturbations applied to individual elements, is derived. 
Using the phase sensitivity functions, the collective oscillation of the network under weak perturbation can be described approximately by a one-dimensional phase equation. As an example, mutual synchronization between a pair of collectively oscillating networks of excitable and oscillatory FitzHugh-Nagumo elements with random coupling is studied. ",0,1,0,0,0,0 17147,Analysis of the Impact of Negative Sampling on Link Prediction in Knowledge Graphs," Knowledge graphs are large, useful, but incomplete knowledge repositories. They encode knowledge through entities and relations which define each other through the connective structure of the graph. This has inspired methods for the joint embedding of entities and relations in continuous low-dimensional vector spaces that can be used to induce new edges in the graph, i.e., link prediction in knowledge graphs. Learning these representations relies on contrasting positive instances with negative ones. Knowledge graphs include only positive relation instances, leaving the door open for a variety of methods for selecting negative examples. In this paper we present an empirical study on the impact of negative sampling on the learned embeddings, assessed through the task of link prediction. We use state-of-the-art knowledge graph embeddings -- \rescal, TransE, DistMult and ComplEX -- and evaluate on benchmark datasets -- FB15k and WN18. We compare well-known methods for negative sampling and additionally propose embedding-based sampling methods. We note a marked difference in the impact of these sampling methods on the two datasets, with the ""traditional"" corrupting-positives method leading to the best results on WN18, while embedding-based methods benefit the task on FB15k. ",1,0,0,0,0,0 17148,Reverse Curriculum Generation for Reinforcement Learning," Many relevant tasks require an agent to reach a certain state, or to manipulate objects into a desired configuration. 
For example, we might want a robot to align and assemble a gear onto an axle or insert and turn a key in a lock. These goal-oriented tasks present a considerable challenge for reinforcement learning, since their natural reward function is sparse and prohibitive amounts of exploration are required to reach the goal and receive some learning signal. Past approaches tackle these problems by exploiting expert demonstrations or by manually designing a task-specific reward shaping function to guide the learning agent. Instead, we propose a method to learn these tasks without requiring any prior knowledge other than obtaining a single state in which the task is achieved. The robot is trained in reverse, gradually learning to reach the goal from a set of start states increasingly far from the goal. Our method automatically generates a curriculum of start states that adapts to the agent's performance, leading to efficient training on goal-oriented tasks. We demonstrate our approach on difficult simulated navigation and fine-grained manipulation problems, not solvable by state-of-the-art reinforcement learning methods. ",1,0,0,0,0,0 17149,Mutually touching infinite cylinders in the 3D world of lines," Recently we gave arguments that only two unique topologically different configurations of 7 equal, mutually touching round cylinders (the configurations being mirror reflections of each other) are possible in 3D, although a whole world of configurations is possible already for round cylinders of arbitrary radii. It was found that as many as 9 round cylinders (all mutually touching) are possible in 3D, while the upper bound for arbitrary cylinders was estimated to be not more than 14 under plausible arguments. 
Now, by using the chirality and Ring matrices that we introduced earlier for the topological classification of line configurations, we have given arguments that the maximal number of mutually touching straight infinite cylinders of arbitrary cross-section (provided that its boundary is a smooth curve) in 3D cannot exceed 10. We generated numerically several configurations of 10 cylinders, restricting ourselves to elliptic cylinders. Configurations of 8 and 9 equal elliptic cylinders (all mutually touching) are generated numerically as well. The possibility of, and restrictions on, continuous transformations from elliptic into round cylinder configurations are discussed. Some curious results concerning the properties of the chirality matrix (which coincides with Seidel's adjacency matrix, important in graph theory) are presented. ",0,0,1,0,0,0 17150,Introduction to Formal Concept Analysis and Its Applications in Information Retrieval and Related Fields," This paper is a tutorial on Formal Concept Analysis (FCA) and its applications. FCA is an applied branch of Lattice Theory, a mathematical discipline which enables formalisation of concepts as basic units of human thinking and analysing data in the object-attribute form. Originating in the early 80s, during the last three decades it became a popular human-centred tool for knowledge representation and data analysis with numerous applications. Since the tutorial was specially prepared for RuSSIR 2014, the covered FCA topics include Information Retrieval with a focus on visualisation aspects, Machine Learning, Data Mining and Knowledge Discovery, Text Mining and several others. ",1,0,0,1,0,0 17151,Fully stripped? The dynamics of dark and luminous matter in the massive cluster collision MACSJ0553.4$-$3342," We present the results of a multiwavelength investigation of the very X-ray luminous galaxy cluster MACSJ0553.4-3342 ($z = 0.4270$; hereafter MACSJ0553). 
Combining high-resolution data obtained with the Hubble Space Telescope and the Chandra X-ray Observatory with ground-based galaxy spectroscopy, our analysis establishes the system unambiguously as a binary, post-collision merger of massive clusters. Key characteristics include perfect alignment of luminous and dark matter for one component, a separation of almost 650 kpc (in projection) between the dark-matter peak of the other subcluster and the second X-ray peak, extremely hot gas ($kT > 15$ keV) at either end of the merger axis, a potential cold front in the east, an unusually low gas mass fraction of approximately 0.075 for the western component, a velocity dispersion of $1490_{-130}^{+104}$ km s$^{-1}$, and no indication of significant substructure along the line of sight. We propose that the MACSJ0553 merger proceeds not in the plane of the sky, but at a large inclination angle, is observed very close to turnaround, and that the eastern X-ray peak is the cool core of the slightly less massive western component that was fully stripped and captured by the eastern subcluster during the collision. If correct, this hypothesis would make MACSJ0553 a superb target for a competitive study of ram-pressure stripping and the collisional behaviour of luminous and dark matter during cluster formation. ",0,1,0,0,0,0 17152,When intuition fails in assessing conditional risks: the example of the frog riddle," Recently, the educational initiative TED-Ed has published a popular brain teaser coined the 'frog riddle', which illustrates non-intuitive implications of conditional probabilities. In its intended form, the frog riddle is a reformulation of the classic boy-girl paradox. However, the authors alter the narrative of the riddle in a form that subtly changes the way information is conveyed. 
The presented solution, unfortunately, does not take this point into full account and, as a consequence, lacks consistency in the sense that different parts of the problem are treated on unequal footing. We here review how the mechanism of receiving information matters, and why this is exactly the reason that such kinds of problems challenge intuitive thinking. Subsequently, we present a generalized solution that accounts for the above difficulties and preserves full logical consistency. Finally, the relation to the boy-girl paradox is discussed. ",0,1,0,0,0,0 17153,Tuning parameter selection rules for nuclear norm regularized multivariate linear regression," We consider the tuning parameter selection rules for nuclear norm regularized multivariate linear regression (NMLR) in the high-dimensional setting. High-dimensional multivariate linear regression is widely used in statistics and machine learning, and regularization techniques are commonly applied to deal with the special structures in high-dimensional data. How to select the tuning parameter is an essential issue for the regularization approach, and it directly affects the model estimation performance. To the best of our knowledge, there are no rules about the tuning parameter selection for NMLR from the point of view of optimization. In order to establish such rules, we study the duality theory of NMLR. Then, we claim that the choice of tuning parameter for NMLR is based on the sample data and the solution of the NMLR dual problem, which is a projection onto a nonempty, closed and convex set. Moreover, based on the (firm) nonexpansiveness and the idempotence of the projection operator, we build four tuning parameter selection rules: PSR, PSRi, PSRfn and PSR+. Furthermore, we give a sequence of tuning parameters and the corresponding intervals for every rule, which states that the rank of the estimation coefficient matrix is no more than a fixed number for the tuning parameter in the given interval. 
The relationships between these rules are also discussed, and PSR+ is the most efficient one for selecting the tuning parameter. Finally, numerical results are reported on simulated and real data, which show that these four tuning parameter selection rules are valuable. ",0,0,1,1,0,0 17154,Understanding the evolution of multimedia content in the Internet through BitTorrent glasses," Today's Internet traffic is mostly dominated by multimedia content and the prediction is that this trend will intensify in the future. Therefore, the main Internet players, such as ISPs, content delivery platforms (e.g. Youtube, BitTorrent, Netflix, etc.) or CDN operators, need to understand the evolution of multimedia content availability and popularity in order to adapt their infrastructures and resources to satisfy clients' requirements while minimizing their costs. This paper presents a thorough analysis of the evolution of multimedia content available in BitTorrent. Specifically, we analyze the evolution of four relevant metrics across different content categories: content availability, content popularity, content size and users' feedback. To this end we leverage a large-scale dataset formed by 4 snapshots collected from the most popular BitTorrent portal, namely The Pirate Bay, between Nov. 2009 and Feb. 2012. Overall, our dataset comprises more than 160k content items that attracted more than 185M download sessions. ",1,0,0,0,0,0 17155,The weak rate of convergence for the Euler-Maruyama approximation of one-dimensional stochastic differential equations involving the local times of the unknown process," In this paper, we consider the weak convergence of the Euler-Maruyama approximation for one-dimensional stochastic differential equations involving the local times of the unknown process. 
We use a transformation in order to remove the local time from the stochastic differential equations, and we provide the Euler-Maruyama approximation for the stochastic differential equations without local time. After that, we deduce the Euler-Maruyama approximation for one-dimensional stochastic differential equations involving the local times of the unknown process, and we provide the rate of weak convergence for any function $G$ in a certain class. ",0,0,1,0,0,0 17156,Study of deteriorating semiopaque turquoise lead-potassium glass beads at different stages of corrosion using micro-FTIR spectroscopy," Nowadays, the problem of conserving historical beadworks in museum collections is more pressing than ever because of the fatal corrosion of 19th-century glass beads. Vibrational spectroscopy is a powerful method for investigating glass, in particular the correlation between structure and chemical composition. Therefore, Fourier-transform infrared spectroscopy was used to examine degradation processes in cloudy turquoise glass beads, which, in contrast to beads of other colors, deteriorate especially strongly. Micro-X-ray fluorescence spectrometry of samples has shown that lead-potassium glass PbO-K$_2$O-SiO$_2$ with a small amount of Cu and Sb was used for the manufacture of cloudy turquoise beads. A Fourier-transform infrared spectroscopy study of the beads at different stages of glass corrosion was carried out in the range from 200 to 4000 cm$^{-1}$ in the attenuated total reflection mode. In all the spectra, we have observed shifts of two major absorption bands to the low-frequency range (~1000 and ~775 cm$^{-1}$) compared to the ones typical for amorphous SiO$_2$ (~1100 and 800 cm$^{-1}$, respectively). Such an effect is connected with the addition of Pb$^{2+}$ and K$^+$ to the glass network. The presence of a weak band at ~1630 cm$^{-1}$ in all the spectra is attributed to the adsorption of H$_2$O. 
After annealing of the beads, the band disappeared completely in the less deteriorated samples and became significantly weaker in the more severely damaged ones. Based on this, we conclude that there is adsorbed molecular water on the beads. However, products of corrosion (e.g., alkali in the form of white crystals or droplets of liquid alkali) were not observed on the glass surface. We have also observed glass depolymerisation in the strongly degraded beads, which is exhibited in the dominance of the band peaking at ~1000 cm$^{-1}$. ",0,1,0,0,0,0 17157,Stratified surgery and K-theory invariants of the signature operator," In the work of Higson and Roe, the fundamental role of the signature as a homotopy and bordism invariant for oriented manifolds is made manifest in how it and related secondary invariants define a natural transformation between the (Browder-Novikov-Sullivan-Wall) surgery exact sequence and a long exact sequence of C*-algebra K-theory groups. In recent years the (higher) signature invariants have been extended from closed oriented manifolds to a class of stratified spaces known as L-spaces or Cheeger spaces. In this paper we show that secondary invariants, such as the rho-class, also extend from closed manifolds to Cheeger spaces. We revisit a surgery exact sequence for stratified spaces originally introduced by Browder-Quinn and obtain a natural transformation analogous to that of Higson-Roe. We also discuss geometric applications. ",0,0,1,0,0,0 17158,Weighted Tensor Decomposition for Learning Latent Variables with Partial Data," Tensor decomposition methods are popular tools for learning latent variables given only lower-order moments of the data. However, the standard assumption is that we have sufficient data to estimate these moments to high accuracy. 
In this work, we consider the case in which certain dimensions of the data are not always observed---common in applied settings, where not all measurements may be taken for all observations---resulting in moment estimates of varying quality. We derive a weighted tensor decomposition approach that is computationally as efficient as the non-weighted approach, and demonstrate that it outperforms methods that do not appropriately leverage these less-observed dimensions. ",0,0,0,1,0,0 17159,Sparse Bounds for Discrete Quadratic Phase Hilbert Transform," Consider the discrete quadratic phase Hilbert Transform acting on $\ell^{2}$ finitely supported functions $$ H^{\alpha} f(n) : = \sum_{m \neq 0} \frac{e^{2 \pi i\alpha m^2} f(n - m)}{m}. $$ We prove that, uniformly in $\alpha \in \mathbb{T}$, there is a sparse bound for the bilinear form $\langle H^{\alpha} f , g \rangle$. The sparse bound implies several mapping properties such as weighted inequalities in an intersection of Muckenhoupt and reverse Hölder classes. ",0,0,1,0,0,0 17160,Fast Distributed Approximation for TAP and 2-Edge-Connectivity," The tree augmentation problem (TAP) is a fundamental network design problem, in which the input is a graph $G$ and a spanning tree $T$ for it, and the goal is to augment $T$ with a minimum set of edges $Aug$ from $G$, such that $T \cup Aug$ is 2-edge-connected. TAP has been widely studied in the sequential setting. The best known approximation ratio of 2 for the weighted case dates back to the work of Frederickson and JáJá, SICOMP 1981. Recently, a 3/2-approximation was given for the unweighted case by Kortsarz and Nutov, TALG 2016, and recent breakthroughs by Adjiashvili, SODA 2017, and by Fiorini et al., 2017, give approximations better than 2 for bounded weights. In this paper, we provide the first fast distributed approximations for TAP. We present a distributed $2$-approximation for weighted TAP which completes in $O(h)$ rounds, where $h$ is the height of $T$. 
When $h$ is large, we give a much faster 4-approximation algorithm for the unweighted case, completing in $O(D+\sqrt{n}\log^*{n})$ rounds, where $n$ is the number of vertices and $D$ is the diameter of $G$. Immediate consequences of our results are an $O(D)$-round 2-approximation algorithm for the minimum size 2-edge-connected spanning subgraph, which significantly improves upon the running time of previous approximation algorithms, and an $O(h_{MST}+\sqrt{n}\log^{*}{n})$-round 3-approximation algorithm for the weighted case, where $h_{MST}$ is the height of the MST of the graph. Additional applications are algorithms for verifying 2-edge-connectivity and for augmenting the connectivity of any connected spanning subgraph to 2. Finally, we complement our study by proving lower bounds for distributed approximations of TAP. ",1,0,0,0,0,0 17161,Generative Adversarial Source Separation," Generative source separation methods such as non-negative matrix factorization (NMF) or auto-encoders rely on the assumption of an output probability density. Generative Adversarial Networks (GANs) can learn data distributions without needing a parametric assumption on the output density. We show on a speech source separation experiment that a multi-layer perceptron trained with a Wasserstein-GAN formulation outperforms NMF, auto-encoders trained with maximum likelihood, and variational auto-encoders in terms of source-to-distortion ratio. ",1,0,0,1,0,0 17162,Bootstrapping Generalization Error Bounds for Time Series," We consider the problem of finding confidence intervals for the risk of forecasting the future of a stationary, ergodic stochastic process, using a model estimated from the past of the process. We show that a bootstrap procedure provides valid confidence intervals for the risk, when the data source is sufficiently mixing, and the loss function and the estimator are suitably smooth. 
Autoregressive (AR(d)) models estimated by least squares obey the necessary regularity conditions, even when mis-specified, and simulations show that the finite-sample coverage of our bounds quickly converges to the theoretical asymptotic level. As an intermediate step, we derive sufficient conditions for asymptotic independence between empirical distribution functions formed by splitting a realization of a stochastic process, a result of independent interest. ",0,0,1,1,0,0 17163,Sign reversal of magnetoresistance and p to n transition in Ni doped ZnO thin film," We report magnetoresistance and nonlinear Hall effect studies over a wide temperature range in a pulsed-laser-deposited Ni$_{0.07}$Zn$_{0.93}$O thin film. Negative and positive contributions to the magnetoresistance at high and low temperatures have been successfully modeled by the localized magnetic moment and a two-band conduction process involving heavy- and light-hole subbands, respectively. Nonlinearity in the Hall resistance also agrees well with the two-channel conduction model. A negative Hall voltage has been found for $T \geq 50$ K, implying conduction dominated by electrons, whereas a positive Hall voltage for $T < 50$ K shows hole-dominated conduction in this material. The crossover in the sign of the magnetoresistance from negative to positive reveals the spin polarization of the charge carriers and hence the applicability of Ni-doped ZnO thin films for spintronic applications. ",0,1,0,0,0,0 17164,Proceedings of the Third Workshop on Formal Integrated Development Environment," This volume contains the proceedings of F-IDE 2016, the third international workshop on Formal Integrated Development Environment, which was held as an FM 2016 satellite event, on November 8, 2016, in Limassol (Cyprus). High safety, security and privacy standards require the use of formal methods to specify and develop compliant software (sub)systems. 
Any standard comes with an assessment process, which requires a complete documentation of the application in order to ease the justification of design choices and the review of code and proofs. Thus tools are needed for handling specifications, program constructs and verification artifacts. The aim of the F-IDE workshop is to provide a forum for presenting and discussing research efforts as well as experience reports on the design, development and usage of formal IDEs aimed at making formal methods ""easier"" for both specialists and non-specialists. ",1,0,0,0,0,0 17165,A Useful Motif for Flexible Task Learning in an Embodied Two-Dimensional Visual Environment," Animals (especially humans) have an amazing ability to learn new tasks quickly, and switch between them flexibly. How brains support this ability is largely unknown, both neuroscientifically and algorithmically. One reasonable supposition is that modules drawing on an underlying general-purpose sensory representation are dynamically allocated on a per-task basis. Recent results from neuroscience and artificial intelligence suggest the role of the general-purpose visual representation may be played by a deep convolutional neural network, and give some clues how task modules based on such a representation might be discovered and constructed. In this work, we investigate module architectures in an embodied two-dimensional touchscreen environment, in which an agent's learning must occur via interactions with an environment that emits images and rewards, and accepts touches as input. This environment is designed to capture the physical structure of the task environments that are commonly deployed in visual neuroscience and psychophysics. We show that in this context, very simple changes in the nonlinear activations used by such a module can significantly influence how fast it is at learning visual tasks and how suitable it is for switching to new tasks. 
",1,0,0,1,0,0 17166,3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation," In this paper, we present a new approach to visual servoing for robotics, referred to as 3D Move to See (3DMTS), based on the principle of finding the next best view using a 3D camera array and a robotic manipulator to obtain multiple samples of the scene from different perspectives. The method uses semantic vision and an objective function applied to each perspective to sample a gradient representing the direction of the next best view. The method is demonstrated within simulation and on a real robotic platform containing a custom 3D camera array for the challenging scenario of robotic harvesting in a highly occluded and unstructured environment. It was shown on a real robotic platform that moving the end effector along the gradient of an objective function leads to a locally optimal view of the object of interest, even amongst occlusions. Overall, the 3DMTS method obtained a mean increase in target size of 29.3% compared to a baseline method using a single RGB-D camera, which obtained 9.17%. The results demonstrate qualitatively and quantitatively that the 3DMTS method performed better in most scenarios, and yielded three times the target size compared to the baseline method. The increased target size in the final view will improve the detection of key features of the object of interest for further manipulation, such as grasping and harvesting. ",1,0,0,0,0,0 17167,Modeling influenza-like illnesses through composite compartmental models," Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. 
Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for the modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We show that our method describes, on average, 44\% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the populations, which we demonstrate match independently collected epidemiological data. We further show that the basic reproductive numbers ($R_0$) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population. ",1,1,0,0,0,0 17168,Notes on the Polish Algorithm," We study, with the help of a computer program, the Polish Algorithm for finite terms satisfying various algebraic laws, e.g., left distributivity a(bc) = (ab)(ac). While the termination of the algorithm for left distributivity remains open in general, we can establish some partial results, which might be useful towards a positive solution. In contrast, we show the divergence of the algorithm for the laws a(bc) = (ab)(cc) and a(bc) = (ab)(a(ac)). 
",0,0,1,0,0,0 17169,Penalized Maximum Tangent Likelihood Estimation and Robust Variable Selection," We introduce a new class of mean regression estimators -- penalized maximum tangent likelihood estimation -- for high-dimensional regression estimation and variable selection. We first explain the motivations for the key ingredient, maximum tangent likelihood estimation (MTE), and establish its asymptotic properties. We further propose a penalized MTE for variable selection and show that it is $\sqrt{n}$-consistent and enjoys the oracle property. The proposed class of estimators consists of penalized $\ell_2$ distance, penalized exponential squared loss, penalized least trimmed squares and penalized least squares as special cases and can be regarded as a mixture of minimum Kullback-Leibler distance estimation and minimum $\ell_2$ distance estimation. Furthermore, we consider the proposed class of estimators under the high-dimensional setting where the number of variables $d$ can grow exponentially with the sample size $n$, and show that the entire class of estimators (including the aforementioned special cases) can achieve the optimal rate of convergence of order $\sqrt{\ln(d)/n}$. Finally, simulation studies and real data analysis demonstrate the advantages of the penalized MTE. ",0,0,0,1,0,0 17170,A minimally-dissipative low-Mach number solver for complex reacting flows in OpenFOAM," Large eddy simulation (LES) has become the de facto computational tool for modeling complex reacting flows, especially in gas turbine applications. However, readily usable general-purpose LES codes for complex geometries are typically academic or proprietary/commercial in nature. The objective of this work is to develop and disseminate an open source LES tool for low-Mach number turbulent combustion using the OpenFOAM framework. In particular, a collocated-mesh approach suited for unstructured grid formulation is provided. 
Unlike other fluid dynamics models, LES accuracy is intricately linked to the so-called primary and secondary conservation properties of the numerical discretization schemes. This implies that although the solver only evolves equations for mass, momentum, and energy, the implied discrete equation for kinetic energy (the square of velocity) should be minimally dissipative. Here, a specific spatial and temporal discretization is imposed such that this kinetic energy dissipation is minimized. The method is demonstrated using a manufactured solutions approach on regular and skewed meshes, a canonical flow problem, and a turbulent sooting flame in a complex domain relevant to gas turbine applications. ",0,1,0,0,0,0 17171,Surface energy of strained amorphous solids," Surface stress and surface energy are fundamental quantities which characterize the interface between two materials. Although these quantities are identical for interfaces involving only fluids, the Shuttleworth effect demonstrates that this is not the case for most interfaces involving solids, since their surface energies change with strain. Crystalline materials are known to have strain-dependent surface energies, but in amorphous materials, such as polymeric glasses and elastomers, the strain dependence is debated due to a dearth of direct measurements. Here, we utilize contact angle measurements on strained glassy and elastomeric solids to address this matter. We show conclusively that interfaces involving polymeric glasses exhibit strain-dependent surface energies, and give strong evidence for the absence of such a dependence for incompressible elastomers. The results provide fundamental insight into our understanding of the interfaces of amorphous solids and their interaction with contacting liquids. 
",0,1,0,0,0,0 17172,Biaxial magnetic field setup for angular magnetic measurements of thin films and spintronic nanodevices," The biaxial magnetic-field setup for angular magnetic measurements of thin film and spintronic devices is designed and presented. The setup allows for application of the in-plane magnetic field using a quadrupole electromagnet, controlled by power supply units and integrated with an electromagnet biaxial magnetic field sensor. In addition, the probe station is equipped with a microwave circuitry, which enables angle-resolved spin torque oscillation measurements. The angular dependencies of magnetoresistance and spin diode effect in a giant magnetoresistance strip are shown as an operational verification of the experimental setup. We adapted an analytical macrospin model to reproduce both the resistance and spin-diode angular dependency measurements. ",0,1,0,0,0,0 17173,Active matter invasion of a viscous fluid: unstable sheets and a no-flow theorem," We investigate the dynamics of a dilute suspension of hydrodynamically interacting motile or immotile stress-generating swimmers or particles as they invade a surrounding viscous fluid. Colonies of aligned pusher particles are shown to elongate in the direction of particle orientation and undergo a cascade of transverse concentration instabilities, governed at small times by an equation which also describes the Saffman-Taylor instability in a Hele-Shaw cell, or Rayleigh-Taylor instability in two-dimensional flow through a porous medium. Thin sheets of aligned pusher particles are always unstable, while sheets of aligned puller particles can either be stable (immotile particles), or unstable (motile particles) with a growth rate which is non-monotonic in the force dipole strength. We also prove a surprising ""no-flow theorem"": a distribution initially isotropic in orientation loses isotropy immediately but in such a way that results in no fluid flow everywhere and for all time. 
",0,0,0,0,1,0 17174,Interplay of synergy and redundancy in diamond motif," The formalism of partial information decomposition provides independent or non-overlapping components constituting the total information content provided by a set of source variables about the target variable. These components are recognised as unique information, synergistic information, and redundant information. The metric of net synergy, conceived as the difference between synergistic and redundant information, is capable of detecting synergy, redundancy, and information independence among stochastic variables. It can be quantified, as is done here, using appropriate combinations of different Shannon mutual information terms. Utilisation of such a metric in network motifs with the nodes representing different biochemical species, involved in information sharing, uncovers a rich store of interesting results. In the current study, we make use of this formalism to obtain a comprehensive understanding of the relative information processing mechanism in a diamond motif and two of its sub-motifs, namely the bifurcation and integration motifs embedded within the diamond motif. The emerging patterns of synergy and redundancy and their effective contribution towards ensuring high fidelity information transmission are duly compared in the sub-motifs and independent motifs (bifurcation and integration). In this context, the crucial roles played by various time scales and activation coefficients in the network topologies are especially emphasised. We show that the origin of synergy and redundancy in information transmission can be physically justified by decomposing the diamond motif into bifurcation and integration motifs. 
",0,1,0,0,0,0 17175,Hardy-Sobolev equations with asymptotically vanishing singularity: Blow-up analysis for the minimal energy," We study the asymptotic behavior of a sequence of positive solutions $(u_{\epsilon})_{\epsilon >0}$ as $\epsilon \to 0$ to the family of equations \begin{equation*} \left\{\begin{array}{ll} \Delta u_{\epsilon}+a(x)u_{\epsilon}= \frac{u_{\epsilon}^{2^*(s_{\epsilon})-1}}{|x|^{s_{\epsilon}}}& \hbox{ in }\Omega\\ u_{\epsilon}=0 & \hbox{ on }\partial\Omega. \end{array}\right. \end{equation*} where $(s_{\epsilon})_{\epsilon >0}$ is a sequence of positive real numbers such that $\lim \limits_{\epsilon \rightarrow 0} s_{\epsilon}=0$, $2^{*}(s_{\epsilon}):= \frac{2(n-s_{\epsilon})}{n-2}$ and $\Omega \subset \mathbb{R}^{n}$ is a bounded smooth domain such that $0 \in \partial \Omega$. When the sequence $(u_{\epsilon})_{\epsilon >0}$ is uniformly bounded in $L^{\infty}$, then up to a subsequence it converges strongly to a minimizing solution of the stationary Schrödinger equation with critical growth. In case the sequence blows up, we obtain strong pointwise control on the blow-up sequence, and then using the Pohozaev identity localize the point of singularity, which in this case can at most be one, and derive precise blow-up rates. In particular, when $n=3$ or $a\equiv 0$, then blow-up can occur only at an interior point of $\Omega$ or the point $0 \in \partial \Omega$. ",0,0,1,0,0,0 17176,"Some results on Ricatti Equations, Floquet Theory and Applications"," In this paper, we present two new results in classical Floquet theory, which provide the Floquet multipliers for two classes of planar periodic systems. One of these results provides the Floquet multipliers independently of the solution of the system. To demonstrate the application of these analytical results, we consider a cholera epidemic model with phage dynamics and seasonality incorporated. 
",0,0,1,0,0,0 17177,"Size, Shape, and Phase Control in Ultrathin CdSe Nanosheets"," Ultrathin two-dimensional nanosheets raise a rapidly increasing interest due to their unique dimensionality-dependent properties. Most of the two-dimensional materials are obtained by exfoliation of layered bulk materials or are grown on substrates by vapor deposition methods. To produce free-standing nanosheets, solution-based colloidal methods are emerging as promising routes. In this work, we demonstrate ultrathin CdSe nanosheets with controllable size, shape and phase. The key of our approach is the use of halogenated alkanes as additives in a hot-injection synthesis. Increasing concentrations of bromoalkanes can tune the shape from sexangular to quadrangular to triangular and the phase from zinc blende to wurtzite. Geometry and crystal structure evolution of the nanosheets take place in the presence of halide ions, acting as cadmium complexing agents and as surface X-type ligands, according to mass spectrometry and X-ray photoelectron spectroscopies. Our experimental findings show that the degree of these changes depends on the molecular structure of the halogen alkanes and the type of halogen atom. ",0,1,0,0,0,0 17178,Transfer of magnetic order and anisotropy through epitaxial integration of 3$d$ and 4$f$ spin systems," Resonant x-ray scattering at the Dy $M_5$ and Ni $L_3$ absorption edges was used to probe the temperature and magnetic field dependence of magnetic order in epitaxial LaNiO$_3$-DyScO$_3$ superlattices. For superlattices with 2 unit cell thick LaNiO$_3$ layers, a commensurate spiral state develops in the Ni spin system below 100 K. Upon cooling below $T_{ind} = 18$ K, Dy-Ni exchange interactions across the LaNiO$_3$-DyScO$_3$ interfaces induce collinear magnetic order of interfacial Dy moments as well as a reorientation of the Ni spins to a direction dictated by the strong magneto-crystalline anisotropy of Dy. 
This transition is reversible by an external magnetic field of 3 T. Tailored exchange interactions between rare-earth and transition-metal ions thus open up new perspectives for the manipulation of spin structures in metal-oxide heterostructures and devices. ",0,1,0,0,0,0 17179,Exotic limit sets of Teichmüller geodesics in the HHS boundary," We answer a question of Durham, Hagen, and Sisto, proving that a Teichmüller geodesic ray does not necessarily converge to a unique point in the hierarchically hyperbolic space boundary of Teichmüller space. In fact, we prove that the limit set can be almost anything allowed by the topology. ",0,0,1,0,0,0 17180,"Decoupling ""when to update"" from ""how to update"""," Deep learning requires data. A useful approach to obtain data is to be creative and mine data from various sources, that were created for different purposes. Unfortunately, this approach often leads to noisy labels. In this paper, we propose a meta algorithm for tackling the noisy labels problem. The key idea is to decouple ""when to update"" from ""how to update"". We demonstrate the effectiveness of our algorithm by mining data for gender classification by combining the Labeled Faces in the Wild (LFW) face recognition dataset with a textual genderizing service, which leads to a noisy dataset. While our approach is very simple to implement, it leads to state-of-the-art results. We analyze some convergence properties of the proposed algorithm. ",1,0,0,0,0,0 17181,Advection of potential temperature in the atmosphere of irradiated exoplanets: a robust mechanism to explain radius inflation," The anomalously large radii of strongly irradiated exoplanets have remained a major puzzle in astronomy. 
Based on a 2D steady-state atmospheric circulation model, the validity of which is assessed by comparison to 3D calculations, we reveal a new mechanism, namely the advection of potential temperature due to mass and longitudinal momentum conservation, a process occurring in the Earth's atmosphere or oceans. At depth, the vanishing heating flux forces the atmospheric structure to converge to a hotter adiabat than the one obtained with 1D calculations, implying a larger radius for the planet. Not only do the calculations reproduce the observed radius of HD209458b, but also the observed correlation between radius inflation and irradiation for transiting planets. Vertical advection of potential temperature induced by non-uniform atmospheric heating thus provides a robust mechanism explaining the inflated radii of irradiated hot Jupiters. ",0,1,0,0,0,0 17182,Band depths based on multiple time instances," Bands of vector-valued functions $f:T\mapsto\mathbb{R}^d$ are defined by considering convex hulls generated by their values concatenated at $m$ different values of the argument. The obtained $m$-bands are families of functions, ranging from the conventional band in case the time points are individually considered (for $m=1$) to the convex hull in the functional space if the number $m$ of simultaneously considered time points becomes large enough to fill the whole time domain. These bands give rise to a depth concept that is new both for real-valued and vector-valued functions. ",0,0,1,1,0,0 17183,On Biased Correlation Estimation," In general, underestimation of risk is something which should be avoided as far as possible. Especially in financial asset management, equity risk is typically characterized by the measure of portfolio variance, or indirectly by quantities which are derived from it. 
Since there is a linear dependency between the variance and the empirical correlation between asset classes, one is compelled to control or to avoid the possibility of underestimating correlation coefficients. In the present approach, we formalize common practice and classify these approaches by computing their probability of underestimation. In addition, we introduce a new estimator which has the advantage of a constant and controllable probability of underestimation. We prove that the new estimator is statistically consistent. ",0,0,1,1,0,0 17184,Atomic Swaptions: Cryptocurrency Derivatives," The atomic swap protocol allows for the exchange of cryptocurrencies on different blockchains without the need to trust a third party. However, market participants who desire to hold derivative assets such as options or futures would also benefit from trustless exchange. In this paper I propose the atomic swaption, which extends the atomic swap to allow for such exchanges. Crucially, atomic swaptions do not require the use of oracles. I also introduce the margin contract, which provides the ability to create leveraged and short positions. Lastly, I discuss how atomic swaptions may be routed on the Lightning Network. ",0,0,0,0,0,1 17185,An arbitrary order scheme on generic meshes for miscible displacements in porous media," We design, analyse and implement an arbitrary order scheme applicable to generic meshes for a coupled elliptic-parabolic PDE system describing miscible displacement in porous media. The discretisation is based on several adaptations of the Hybrid-High-Order (HHO) method due to Di Pietro et al. [Computational Methods in Applied Mathematics, 14(4), (2014)]. 
The equation governing the pressure is discretised using an adaptation of the HHO method for variable diffusion, while the discrete concentration equation is based on the HHO method for advection-diffusion-reaction problems combined with numerically stable flux reconstructions for the advective velocity that we have derived using the results of Cockburn et al. [ESAIM: Mathematical Modelling and Numerical Analysis, 50(3), (2016)]. We perform some rigorous analysis of the method to demonstrate its $L^2$ stability under the irregular data often presented by reservoir engineering problems and present several numerical tests to demonstrate the quality of the results that are produced by the proposed scheme. ",1,0,0,0,0,0 17186,"Parametricity, automorphisms of the universe, and excluded middle"," It is known that one can construct non-parametric functions by assuming classical axioms. Our work is a converse to that: we prove classical axioms in dependent type theory assuming specific instances of non-parametricity. We also address the interaction between classical axioms and the existence of automorphisms of a type universe. We work over intensional Martin-Löf dependent type theory, and in some results assume further principles including function extensionality, propositional extensionality, propositional truncation, and the univalence axiom. ",1,0,1,0,0,0 17187,Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity," We extend the classic convergence rate theory for subgradient methods to apply to non-Lipschitz functions. For the deterministic projected subgradient method, we present a global $O(1/\sqrt{T})$ convergence rate for any convex function which is locally Lipschitz around its minimizers. This approach is based on Shor's classic subgradient analysis and implies generalizations of the standard convergence rates for gradient descent on functions with Lipschitz or Hölder continuous gradients. 
Further, we show a $O(1/\sqrt{T})$ convergence rate for the stochastic projected subgradient method on convex functions with at most quadratic growth, which improves to $O(1/T)$ under either strong convexity or a weaker quadratic lower bound condition. ",1,0,0,0,0,0 17188,A quantum dynamics method for excited electrons in molecular aggregate system using a group diabatic Fock matrix," We introduce a practical calculation scheme for the description of excited electron dynamics in molecular aggregated systems within a locally group diabatic Fock representation. This scheme makes it easy to analyze the interacting time-dependent excitations of local sites in complex systems. In addition, light-electron couplings are considered. The present scheme is intended for investigations on the migration dynamics of excited electrons in light-energy conversion systems. The scheme was applied to two systems: a naphthalene(NPTL)-tetracyanoethylene(TCNE) dimer and a 20-mer circle of ethylene molecules. Through local group analyses of the dynamical electrons, we obtained an intuitive understanding of the electron transfers between the monomers. ",0,1,0,0,0,0 17189,Products of topological groups in which all closed subgroups are separable," We prove that if $H$ is a topological group such that all closed subgroups of $H$ are separable, then the product $G\times H$ has the same property for every separable compact group $G$. Let $c$ be the cardinality of the continuum. Assuming $2^{\omega_1} = c$, we show that there exist: (1) pseudocompact topological abelian groups $G$ and $H$ such that all closed subgroups of $G$ and $H$ are separable, but the product $G\times H$ contains a closed non-separable $\sigma$-compact subgroup; (2) pseudocomplete locally convex vector spaces $K$ and $L$ such that all closed vector subspaces of $K$ and $L$ are separable, but the product $K\times L$ contains a closed non-separable $\sigma$-compact vector subspace. 
",0,0,1,0,0,0 17190,Low rank solutions to differentiable systems over matrices and applications," Differentiable systems in this paper mean systems of equations that are described by differentiable real functions in real matrix variables. This paper proposes algorithms for finding minimal rank solutions to such systems over (arbitrary and/or several structured) matrices by using the Levenberg-Marquardt method (LM-method) for solving least squares problems. We then apply these algorithms to solve several engineering problems such as the low-rank matrix completion problem and the low-dimensional Euclidean embedding one. Some numerical experiments illustrate the validity of the approach. On the other hand, we provide some further properties of low rank solutions to systems of linear matrix equations. This is useful when the differentiable function is linear or quadratic. ",0,0,1,0,0,0 17191,A Simple and Realistic Pedestrian Model for Crowd Simulation and Application," The simulation of pedestrian crowds that reflect reality is a major challenge for researchers. Several crowd simulation models have been proposed, such as cellular automata models, agent-based models, fluid dynamic models, etc. It is important to note that the agent-based model is able, over other approaches, to provide a natural description of the system and thus to capture complex human behaviors. In this paper, we propose a multi-agent simulation model in which pedestrian positions are updated at discrete time intervals. It takes into account the major normal conditions of a simple pedestrian situated in a crowd, such as preferences and realistic perception of the environment. Our objective is to simulate the pedestrian crowd realistically, towards a simulation of believable pedestrian behaviors. Typical pedestrian phenomena, including unidirectional and bidirectional movement in a corridor as well as flow through a bottleneck, are simulated. 
The conducted simulations show that our model is able to produce realistic pedestrian behaviors. The obtained fundamental diagram and flow rate at bottleneck agree very well with classic conclusions and empirical study results. It is hoped that the idea of this study may be helpful in promoting the modeling and simulation of pedestrian crowd in a simple way. ",1,1,0,0,0,0 17192,Local connectivity modulates multi-scale relaxation dynamics in a metallic glass-forming system," The structural description for the intriguing link between the fast vibrational dynamics and slow diffusive dynamics in glass-forming systems is one of the most challenging issues in physical science. Here, in a model of metallic supercooled liquid, we find that local connectivity as an atomic-level structural order parameter tunes the short-time vibrational excitations of the icosahedrally coordinated particles and meanwhile modulates their long-time relaxation dynamics changing from stretched to compressed exponentials, denoting a dynamic transition from subdiffusive to hyperdiffusive motions of such particles. Our result indicates that long-time dynamics has an atomic-level structural origin which is related to the short-time dynamics, thus suggests a structural bridge to link the fast vibrational dynamics and the slow structural relaxation in glassy materials. ",0,1,0,0,0,0 17193,SATR-DL: Improving Surgical Skill Assessment and Task Recognition in Robot-assisted Surgery with Deep Neural Networks," Purpose: This paper focuses on an automated analysis of surgical motion profiles for objective skill assessment and task recognition in robot-assisted surgery. Existing techniques heavily rely on conventional statistic measures or shallow modelings based on hand-engineered features and gesture segmentation. Such developments require significant expert knowledge, are prone to errors, and are less efficient in online adaptive training systems. 
Methods: In this work, we present an efficient analytic framework with a parallel deep learning architecture, SATR-DL, to assess trainee expertise and recognize surgical training activity. Through an end-to-end learning technique, abstract information of spatial representations and temporal dynamics is jointly obtained directly from raw motion sequences. Results: By leveraging a shared high-level representation learning, the resulting model is successful in the recognition of trainee skills and surgical tasks: suturing, needle-passing, and knot-tying. Meanwhile, we explore the use of ensembles in classification at the trial level, where SATR-DL outperforms state-of-the-art performance by achieving accuracies of 0.960 and 1.000 in skill assessment and task recognition, respectively. Conclusion: This study highlights the potential of SATR-DL to provide improvements for efficient data-driven assessment in intelligent robotic surgery. ",1,0,0,0,0,0 17194,Equivalence of weak and strong modes of measures on topological vector spaces," A strong mode of a probability measure on a normed space $X$ can be defined as a point $u$ such that the mass of the ball centred at $u$ uniformly dominates the mass of all other balls in the small-radius limit. Helin and Burger weakened this definition by considering only pairwise comparisons with balls whose centres differ by vectors in a dense, proper linear subspace $E$ of $X$, and posed the question of when these two types of modes coincide. We show that, in a more general setting of metrisable vector spaces equipped with measures that are finite on bounded sets, the density of $E$ and a uniformity condition suffice for the equivalence of these two types of modes. We accomplish this by introducing a new, intermediate type of mode. We also show that these modes can be inequivalent if the uniformity condition fails. 
Our results shed light on the relationships among various notions of maximum a posteriori estimator in non-parametric Bayesian inference. ",0,0,1,1,0,0 17195,Algebras of Quasi-Plücker Coordinates are Koszul," Motivated by the theory of quasi-determinants, we study non-commutative algebras of quasi-Plücker coordinates. We prove that these algebras provide new examples of non-homogeneous quadratic Koszul algebras by showing that their quadratic duals have quadratic Gröbner bases. ",0,0,1,0,0,0 17196,Doubly-Attentive Decoder for Multi-modal Neural Machine Translation," We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation. Our decoder learns to attend to source-language words and parts of an image independently by means of two separate attention mechanisms as it generates words in the target language. We find that our model can efficiently exploit not just back-translated in-domain multi-modal data but also large general-domain text-only MT corpora. We also report state-of-the-art results on the Multi30k data set. ",1,0,0,0,0,0 17197,prDeep: Robust Phase Retrieval with a Flexible Deep Network," Phase retrieval algorithms have become an important component in many modern computational imaging systems. For instance, in the context of ptychography and speckle correlation imaging, they enable imaging past the diffraction limit and through scattering media, respectively. Unfortunately, traditional phase retrieval algorithms struggle in the presence of noise. Progress has been made recently on more robust algorithms using signal priors, but at the expense of limiting the range of supported measurement models (e.g., to Gaussian or coded diffraction patterns). 
In this work we leverage the regularization-by-denoising framework and a convolutional neural network denoiser to create prDeep, a new phase retrieval algorithm that is both robust and broadly applicable. We test and validate prDeep in simulation to demonstrate that it is robust to noise and can handle a variety of system models. A MatConvNet implementation of prDeep is available at this https URL. ",0,0,0,1,0,0 17198,Novel Exotic Magnetic Spin-order in Co5Ge3 Nano-size Materials," Cobalt-germanium (Co-Ge) is a fascinating complex alloy system that has a unique structure and exhibits a range of interesting magnetic properties, which change when reduced to nanoscale dimensions. In this experimental work, high-aspect-ratio Co5Ge3 nanoparticles with an average size of 8 nm were synthesized by gas aggregation-type cluster-deposition technology. The nanostructure morphology of the as-made binary Co5Ge3 nanoparticles demonstrates an excellent single-crystalline hexagonal structure with preferential growth along the (110) and (102) directions. Whereas the bulk possesses Pauli paramagnetic spin order at all temperatures, here we discover a new size-driven magnetic ordering of the as-synthesized Co5Ge3 nanoparticles, which exhibit ferromagnetism at room temperature with a saturation magnetization of Ms = 32.2 emu/cm3. This is the first report of such new magnetic spin ordering in this kind of material at the nanoscale, in which the magnetization has low sensitivity to thermal energy fluctuations and exhibits a high Curie temperature close to 850 K. This ferromagnetic behavior, along with the high Curie temperature of the Co5Ge3 nanoparticles, is attributed to low-dimensionality and quantum-confinement effects, which impose strong spin coupling and provide a new set of size-driven spin structures in the Co5Ge3 nanoparticles; no such magnetic behavior is present in the bulk of the same material. 
This fundamental scientific study provides important insights into the formation, structure, and magnetic properties of sub-10 nm Co5Ge3 nanostructures, which should lead to promising and versatile practical applications for magneto-germanide-based nanodevices. ",0,1,0,0,0,0 17199,Toward Multimodal Image-to-Image Translation," Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a \emph{distribution} of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity. ",1,0,0,1,0,0 17200,On the Sample Complexity of the Linear Quadratic Regulator," This paper addresses the optimal control problem known as the Linear Quadratic Regulator in the case when the dynamics are unknown. We propose a multi-stage procedure, called Coarse-ID control, that estimates a model from a few experimental trials, estimates the error in that model with respect to the truth, and then designs a controller using both the model and uncertainty estimate. Our technique uses contemporary tools from random matrix theory to bound the error in the estimation procedure. 
We also employ a recently developed approach to control synthesis called System Level Synthesis that enables robust control design by solving a convex optimization problem. We provide end-to-end bounds on the relative error in control cost that are nearly optimal in the number of parameters and that highlight salient properties of the system to be controlled such as closed-loop sensitivity and optimal control magnitude. We show experimentally that the Coarse-ID approach enables efficient computation of a stabilizing controller in regimes where simple control schemes that do not take the model uncertainty into account fail to stabilize the true system. ",1,0,0,1,0,0 17201,Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification," Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example, whose label can be correctly predicted by a DNN classifier. An attacker can add a small carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label, where the crafted testing example is called an adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety and security critical applications such as self-driving cars. In this work, we develop new methods to defend against evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. For a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label.
In contrast, traditional classifiers perform point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to various evasion attacks. ",1,0,0,1,0,0 17202,Partition-free families of sets," Let $m(n)$ denote the maximum size of a family of subsets which does not contain two disjoint sets along with their union. In 1968 Kleitman proved that $m(n) = {n\choose m+1}+\ldots +{n\choose 2m+1}$ if $n=3m+1$. Confirming the conjecture of Kleitman, we establish the same equality for the cases $n=3m$ and $n=3m+2$, and also determine all extremal families. Unlike the case $n=3m+1$, the extremal families are not unique. This is a plausible reason behind the relative difficulty of our proofs. We completely settle the case of several families as well. ",1,0,0,0,0,0 17203,Measuring the Galactic Cosmic Ray Flux with the LISA Pathfinder Radiation Monitor," Test mass charging caused by cosmic rays will be a significant source of acceleration noise for space-based gravitational wave detectors like LISA. Operating between December 2015 and July 2017, the technology demonstration mission LISA Pathfinder included a bespoke monitor to help characterise the relationship between test mass charging and the local radiation environment. The radiation monitor made in situ measurements of the cosmic ray flux while also providing information about its energy spectrum.
We describe the monitor and present measurements which show a gradual 40% increase in count rate coinciding with the declining phase of the solar cycle. Modulations of up to 10% were also observed with periods of 13 and 26 days that are associated with co-rotating interaction regions and heliospheric current sheet crossings. These variations in the flux above the monitor detection threshold (approximately 70 MeV) are shown to be coherent with measurements made by the IREM monitor on-board the Earth orbiting INTEGRAL spacecraft. Finally we use the measured deposited energy spectra, in combination with a GEANT4 model, to estimate the galactic cosmic ray differential energy spectrum over the course of the mission. ",0,1,0,0,0,0 17204,Thread-Modular Static Analysis for Relaxed Memory Models," We propose a memory-model-aware static program analysis method for accurately analyzing the behavior of concurrent software running on processors with weak consistency models such as x86-TSO, SPARC-PSO, and SPARC-RMO. At the center of our method is a unified framework for deciding the feasibility of inter-thread interferences to avoid propagating spurious data flows during static analysis and thus boost the performance of the static analyzer. We formulate the checking of interference feasibility as a set of Datalog rules which are both efficiently solvable and general enough to capture a range of hardware-level memory models. Compared to existing techniques, our method can significantly reduce the number of bogus alarms as well as unsound proofs. We implemented the method and evaluated it on a large set of multithreaded C programs. Our experiments show the method significantly outperforms state-of-the-art techniques in terms of accuracy with only moderate run-time overhead.
",1,0,0,0,0,0 17205,Bidirectional Evaluation with Direct Manipulation," We present an evaluation update (or simply, update) algorithm for a full-featured functional programming language, which synthesizes program changes based on output changes. Intuitively, the update algorithm retraces the steps of the original evaluation, rewriting the program as needed to reconcile differences between the original and updated output values. Our approach, furthermore, allows expert users to define custom lenses that augment the update algorithm with more advanced or domain-specific program updates. To demonstrate the utility of evaluation update, we implement the algorithm in Sketch-n-Sketch, a novel direct manipulation programming system for generating HTML documents. In Sketch-n-Sketch, the user writes an ML-style functional program to generate HTML output. When the user directly manipulates the output using a graphical user interface, the update algorithm reconciles the changes. We evaluate bidirectional evaluation in Sketch-n-Sketch by authoring ten examples comprising approximately 1400 lines of code in total. These examples demonstrate how a variety of HTML documents and applications can be developed and edited interactively in Sketch-n-Sketch, mitigating the tedious edit-run-view cycle in traditional programming environments. ",1,0,0,0,0,0 17206,Extreme Event Statistics in a Drifting Markov Chain," We analyse extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. 
We present a detailed analysis of four different rare event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that for our data the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories. ",0,1,0,0,0,0 17207,On a Possibility of Self Acceleration of Electrons in a Plasma," The self-consistent nonlinear interaction of a monoenergetic bunch with cold plasma is considered. It is shown that under certain conditions a self-acceleration of the bunch tail electrons up to high energies is possible. ",0,1,0,0,0,0 17208,An Adaptive Strategy for Active Learning with Smooth Decision Boundary," We present the first adaptive strategy for active learning in the setting of classification with smooth decision boundary. The problem of adaptivity (to unknown distributional parameters) has remained open since the seminal work of Castro and Nowak (2007), which first established (active learning) rates for this setting. While some recent advances on this problem establish adaptive rates in the case of univariate data, adaptivity in the more practical setting of multivariate data has so far remained elusive. Combining insights from various recent works, we show that, for the multivariate case, a careful reduction to univariate-adaptive strategies yields near-optimal rates without prior knowledge of distributional parameters.
",1,0,0,1,0,0 17209,Towards Adversarial Retinal Image Synthesis," Synthesizing images of the eye fundus is a challenging task that has been previously approached by formulating complex models of the anatomy of the eye. New images can then be generated by sampling a suitable parameter space. In this work, we propose a method that learns to synthesize eye fundus images directly from data. For that, we pair true eye fundus images with their respective vessel trees, by means of a vessel segmentation technique. These pairs are then used to learn a mapping from a binary vessel tree to a new retinal image. For this purpose, we use a recent image-to-image translation technique, based on the idea of adversarial learning. Experimental results show that the original and the generated images are visually different in terms of their global appearance, in spite of sharing the same vessel tree. Additionally, a quantitative quality analysis of the synthetic retinal images confirms that the produced images retain a high proportion of the true image set quality. ",1,0,0,1,0,0 17210,Tailoring the SiC surface - a morphology study on the epitaxial growth of graphene and its buffer layer," We investigate the growth of the graphene buffer layer and the involved step bunching behavior of the silicon carbide substrate surface using atomic force microscopy. The formation of local buffer layer domains are identified to be the origin of undesirably high step edges in excellent agreement with the predictions of a general model of step dynamics. The applied polymer-assisted sublimation growth method demonstrates that the key principle to suppress this behavior is the uniform nucleation of the buffer layer. In this way, the silicon carbide surface is stabilized such that ultra-flat surfaces can be conserved during graphene growth on a large variety of silicon carbide substrate surfaces. 
The analysis of the experimental results describes different growth modes which extend the current understanding of epitaxial graphene growth by emphasizing the importance of buffer layer nucleation and critical mass transport processes. ",0,1,0,0,0,0 17211,Polarization of the Vaccination Debate on Facebook," Vaccine hesitancy has been recognized as a major global health threat. Access to any type of information on social media has been suggested as a potentially powerful factor influencing hesitancy. Recent studies in fields other than vaccination show that access to a vast amount of content through the Internet without intermediaries resulted in major segregation of users into polarized groups. Users select the information adhering to their system of beliefs and tend to ignore dissenting information. In this paper we assess whether there is polarization in social media use in the field of vaccination. We perform a thorough quantitative analysis on Facebook, analyzing 2.6M users interacting with 298,018 posts over a time span of seven years and five months. We used community detection algorithms to automatically detect the emergent communities from the users' activity and to quantify the cohesiveness of the communities over time. Our findings show that content consumption about vaccines is dominated by the echo-chamber effect and that polarization increased over the years. Communities emerge from the users' consumption habits, i.e. the majority of users only consume information either in favor of or against vaccines, not both. The existence of echo chambers may explain why social media campaigns providing accurate information may have limited reach, may be effective only in sub-groups, and might even foment further polarization of opinions. Dissenting information introduced into a sub-group is disregarded and can have a backfire effect, further reinforcing the existing opinions within the sub-group.
",1,0,0,0,0,0 17212,"Predicting Demographics, Moral Foundations, and Human Values from Digital Behaviors"," Personal electronic devices including smartphones give access to behavioural signals that can be used to learn about the characteristics and preferences of individuals. In this study, we explore the connection between demographic and psychological attributes and the digital behavioural records, for a cohort of 7,633 people, closely representative of the US population with respect to gender, age, geographical distribution, education, and income. Along with the demographic data, we collected self-reported assessments on validated psychometric questionnaires for moral traits and basic human values and combined this information with passively collected multi-modal digital data from web browsing behaviour and smartphone usage. A machine learning framework was then designed to infer both the demographic and psychological attributes from the behavioural data. In a cross-validated setting, our models predicted demographic attributes with good accuracy as measured by the weighted AUROC score (Area Under the Receiver Operating Characteristic), but were less performant for the moral traits and human values. These results call for further investigation since they are still far from unveiling individuals' psychological fabric. This connection, along with the most predictive features that we provide for each attribute, might prove useful for designing personalised services, communication strategies, and interventions, and can be used to sketch a portrait of people with a similar worldview. ",1,0,0,0,0,0 17213,Isotonic regression in general dimensions," We study the least squares regression function estimator over the class of real-valued functions on $[0,1]^d$ that are increasing in each coordinate. 
For uniformly bounded signals and with a fixed, cubic lattice design, we establish that the estimator achieves the minimax rate of order $n^{-\min\{2/(d+2),1/d\}}$ in the empirical $L_2$ loss, up to poly-logarithmic factors. Further, we prove a sharp oracle inequality, which reveals in particular that when the true regression function is piecewise constant on $k$ hyperrectangles, the least squares estimator enjoys a faster, adaptive rate of convergence of $(k/n)^{\min(1,2/d)}$, again up to poly-logarithmic factors. Previous results are confined to the case $d \leq 2$. Finally, we establish corresponding bounds (which are new even in the case $d=2$) in the more challenging random design setting. There are two surprising features of these results: first, they demonstrate that it is possible for a global empirical risk minimisation procedure to be rate optimal up to poly-logarithmic factors even when the corresponding entropy integral for the function class diverges rapidly; second, they indicate that the adaptation rate for shape-constrained estimators can be strictly worse than the parametric rate. ",0,0,1,1,0,0 17214,Extraction of Schottky barrier height insensitive to temperature via forward current-voltage-temperature measurements," The thermal stability of most electronic and photo-electronic devices strongly depends on the relationship between the Schottky Barrier Height (SBH) and temperature. In this paper, the thermionic current, as described by a correct and reliable relationship between forward current and voltage, is discussed; the intrinsic SBH, insensitive to temperature, can be calculated via the modification of the Richardson-Dushman formula suggested in this paper. The results of application to four heterojunctions prove that the proposed method is credible, which suggests that the I/V/T method is a feasible alternative for characterizing these heterojunctions.
",0,1,0,0,0,0 17215,Emergent Open-Endedness from Contagion of the Fittest," In this paper, we study emergent irreducible information in populations of randomly generated computable systems that are networked and follow a ""Susceptible-Infected-Susceptible"" contagion model of imitation of the fittest neighbor. We show that there is a lower bound for the stationary prevalence (or average density of ""infected"" nodes) that triggers an unlimited increase of the expected local emergent algorithmic complexity (or information) of a node as the population size grows. We call this phenomenon expected (local) emergent open-endedness. In addition, we show that static networks with a power-law degree distribution following the Barabási-Albert model satisfy this lower bound and, thus, display expected (local) emergent open-endedness. ",1,0,0,0,0,0 17216,Incompressible Limit of isentropic Navier-Stokes equations with Navier-slip boundary," This paper concerns the low Mach number limit of weak solutions to the compressible Navier-Stokes equations for isentropic fluids in a bounded domain with a Navier-slip boundary condition. In \cite{DGLM99}, it has been proved that if the velocity is imposed the homogeneous Dirichlet boundary condition, as the Mach number goes to 0, the velocity of the compressible flow converges strongly in $L^2$ under the geometrical assumption (H) on the domain. We justify the same strong convergence when the slip length in the Navier condition is the reciprocal of the square root of the Mach number. ",0,0,1,0,0,0 17217,On Some Exponential Sums Related to the Coulter's Polynomial," In this paper, the formulas of some exponential sums over finite field, related to the Coulter's polynomial, are settled based on the Coulter's theorems on Weil sums, which may have potential application in the construction of linear codes with few weights. 
",1,0,0,0,0,0 17218,Distribution-Preserving k-Anonymity," Preserving the privacy of individuals by protecting their sensitive attributes is an important consideration during microdata release. However, it is equally important to preserve the quality or utility of the data for at least some targeted workloads. We propose a novel framework for privacy preservation based on the k-anonymity model that is ideally suited for workloads that require preserving the probability distribution of the quasi-identifier variables in the data. Our framework combines the principles of distribution-preserving quantization and k-member clustering, and we specialize it to two variants that respectively use intra-cluster and Gaussian dithering of cluster centers to achieve distribution preservation. We perform theoretical analysis of the proposed schemes in terms of distribution preservation, and describe their utility in workloads such as covariate shift and transfer learning where such a property is necessary. Using extensive experiments on real-world Medical Expenditure Panel Survey data, we demonstrate the merits of our algorithms over standard k-anonymization for a hallmark health care application where an insurance company wishes to understand the risk in entering a new market. Furthermore, by empirically quantifying the reidentification risk, we also show that the proposed approaches indeed maintain k-anonymity. ",1,0,0,1,0,0 17219,Using controlled disorder to probe the interplay between charge order and superconductivity in NbSe2," The interplay between superconductivity and charge density waves (CDW) in $H$-NbSe2 is not fully understood despite decades of study. Artificially introduced disorder can tip the delicate balance between two competing forms of long-range order, and reveal the underlying interactions that give rise to them. 
Here we introduce disorder by electron irradiation and perform in-plane resistivity, Hall resistivity, X-ray scattering, and London penetration depth measurements. With increasing disorder, $T_{\textrm{c}}$ varies nonmonotonically, whereas $T_{\textrm{CDW}}$ monotonically decreases and becomes unresolvable above a critical irradiation dose where $T_{\textrm{c}}$ drops sharply. Our results imply that CDW order initially competes with superconductivity, but eventually assists it. We argue that at the transition where the long-range CDW order disappears, the cooperation with superconductivity is dramatically suppressed. X-ray scattering and Hall resistivity measurements reveal that the short-range CDW survives above the transition. Superconductivity persists to much higher dose levels, consistent with fully gapped superconductivity and moderate interband pairing. ",0,1,0,0,0,0 17220,A training process for improving the quality of software projects developed by a practitioner," Background: The quality of a software product depends on the quality of the software process followed in developing the product. Therefore, many higher education institutions (HEI) and software organizations have implemented software process improvement (SPI) training courses to improve the software quality. Objective: Because the duration of a course is a concern for HEI and software organizations, we investigate whether the quality of software projects will be improved by reorganizing the activities of the ten assignments of the original personal software process (PSP) course into a modified PSP having fewer assignments (i.e., seven assignments). Method: The assignments were developed by following a modified PSP with fewer assignments but including the phases, forms, standards, and logs suggested in the original PSP. The measurement of the quality of the software assignments was based on defect density.
Results: When the activities of the original PSP were reorganized into fewer assignments, the defect density improved with statistical significance as practitioners progressed through the PSP training. Conclusions: Our modified PSP could be applied in academic and industrial environments that are concerned with reducing the PSP training time. ",1,0,0,0,0,0 17221,Gaia Data Release 1. Cross-match with external catalogues - Algorithm and results," Although the Gaia catalogue on its own will be a very powerful tool, it is the combination of this highly accurate archive with other archives that will truly open up amazing possibilities for astronomical research. The advanced interoperation of archives is based on cross-matching, leaving the user with the feeling of working with one single data archive. The data retrieval should work not only across data archives, but also across wavelength domains. The first step for seamless data access is the computation of the cross-match between Gaia and external surveys. The matching of astronomical catalogues is a complex and challenging problem both scientifically and technologically (especially when matching large surveys like Gaia). We describe the cross-match algorithm used to pre-compute the match of Gaia Data Release 1 (DR1) with a selected list of large publicly available optical and IR surveys. The overall principles of the adopted cross-match algorithm are outlined. Details are given on the developed algorithm, including the methods used to account for position errors, proper motions, and environment; to define the neighbours; and to define the figure of merit used to select the most probable counterpart. Statistics on the results are also given. The results of the cross-match are part of the official Gaia DR1 catalogue.
",0,1,0,0,0,0 17222,"Masses of Kepler-46b, c from Transit Timing Variations"," We use 16 quarters of the \textit{Kepler} mission data to analyze the transit timing variations (TTVs) of the extrasolar planet Kepler-46b (KOI-872). Our dynamical fits confirm that the TTVs of this planet (period $P=33.648^{+0.004}_{-0.005}$ days) are produced by a non-transiting planet Kepler-46c ($P=57.325^{+0.116}_{-0.098}$ days). The Bayesian inference tool \texttt{MultiNest} is used to infer the dynamical parameters of Kepler-46b and Kepler-46c. We find that the two planets have nearly coplanar and circular orbits, with eccentricities $\simeq 0.03$ somewhat higher than previously estimated. The masses of the two planets are found to be $M_{b}=0.885^{+0.374}_{-0.343}$ and $M_{c}=0.362^{+0.016}_{-0.016}$ Jupiter masses, with $M_{b}$ being determined here from TTVs for the first time. Due to the precession of its orbital plane, Kepler-46c should start transiting its host star in a few decades from now. ",0,1,0,0,0,0 17223,Recovering water wave elevation from pressure measurements," The reconstruction of water wave elevation from bottom pressure measurements is an important issue for coastal applications, but corresponds to a difficult mathematical problem. In this paper we present the derivation of a method which allows the elevation reconstruction of water waves in intermediate and shallow waters. From comparisons with numerical Euler solutions and wave-tank experiments we show that our nonlinear method provides much better results of the surface elevation reconstruction compared to the linear transfer function approach commonly used in coastal applications. More specifically, our methodaccurately reproduces the peaked and skewed shape of nonlinear wave fields. Therefore, it is particularly relevant for applications on extreme waves and wave-induced sediment transport. 
",0,1,0,0,0,0 17224,Tractable and Scalable Schatten Quasi-Norm Approximations for Rank Minimization," The Schatten quasi-norm was introduced to bridge the gap between the trace norm and rank function. However, existing algorithms are too slow or even impractical for large-scale problems. Motivated by the equivalence relation between the trace norm and its bilinear spectral penalty, we define two tractable Schatten norms, i.e.\ the bi-trace and tri-trace norms, and prove that they are in essence the Schatten-$1/2$ and $1/3$ quasi-norms, respectively. By applying the two defined Schatten quasi-norms to various rank minimization problems such as MC and RPCA, we only need to solve much smaller factor matrices. We design two efficient linearized alternating minimization algorithms to solve our problems and establish that each bounded sequence generated by our algorithms converges to a critical point. We also provide the restricted strong convexity (RSC) based and MC error bounds for our algorithms. Our experimental results verified both the efficiency and effectiveness of our algorithms compared with the state-of-the-art methods. ",0,0,0,1,0,0 17225,Spectral Radii of Truncated Circular Unitary Matrices," Consider a truncated circular unitary matrix which is a $p_n$ by $p_n$ submatrix of an $n$ by $n$ circular unitary matrix by deleting the last $n-p_n$ columns and rows. Jiang and Qi (2017) proved that the maximum absolute value of the eigenvalues (known as spectral radius) of the truncated matrix, after properly normalized, converges in distribution to the Gumbel distribution if $p_n/n$ is bounded away from $0$ and $1$. In this paper we investigate the limiting distribution of the spectral radius under one of the following four conditions: (1). $p_n\to\infty$ and $p_n/n\to 0$ as $n\to\infty$; (2). $(n-p_n)/n\to 0$ and $(n-p_n)/(\log n)^3\to\infty$ as $n\to\infty$; (3). $n-p_n\to\infty$ and $(n-p_n)/\log n\to 0$ as $n\to\infty$ and (4). $n-p_n=k\ge 1$ is a fixed integer. 
We prove that the spectral radius converges in distribution to the Gumbel distribution under the first three conditions and to a reversed Weibull distribution under the fourth condition. ",0,0,1,1,0,0 17226,Information Assisted Dictionary Learning for fMRI data analysis," In this paper, the task-related fMRI problem is treated in its matrix factorization formulation. The focus of the reported work is on the dictionary learning (DL) matrix factorization approach. A major novelty of the paper lies in the incorporation of well-established assumptions associated with the GLM technique, which is currently in use by the neuroscientists. These assumptions are embedded as constraints in the DL formulation. In this way, our approach provides a framework of combining well-established and understood techniques with a more ``modern'' and powerful tool. Furthermore, this paper offers a way to relax a major drawback associated with DL techniques; that is, the proper tuning of the DL regularization parameter. This parameter plays a critical role in DL-based fMRI analysis since it essentially determines the shape and structures of the estimated functional brain networks. However, in actual fMRI data analysis, the lack of ground truth renders the a priori choice of the regularization parameter a truly challenging task. Indeed, the values of the DL regularization parameter, associated with the $\ell_1$ sparsity promoting norm, do not convey any tangible physical meaning. So it is practically difficult to guess its proper value. In this paper, the DL problem is reformulated around a sparsity-promoting constraint that can directly be related to the minimum amount of voxels that the spatial maps of the functional brain networks occupy. Such information is documented and it is readily available to neuroscientists and experts in the field. 
The proposed method is tested against a number of other popular techniques and the obtained performance gains are reported on a number of synthetic fMRI datasets. Results with real data have also been obtained in the context of a number of experiments and will soon be reported in a different publication. ",0,0,0,1,0,0 17227,"Efficient, Certifiably Optimal Clustering with Applications to Latent Variable Graphical Models"," Motivated by the task of clustering either $d$ variables or $d$ points into $K$ groups, we investigate efficient algorithms to solve the Peng-Wei (P-W) $K$-means semi-definite programming (SDP) relaxation. The P-W SDP has been shown in the literature to have good statistical properties in a variety of settings, but remains intractable to solve in practice. To this end we propose FORCE, a new algorithm to solve this SDP relaxation. Compared to the naive interior point method, our method reduces the computational complexity of solving the SDP from $\tilde{O}(d^7\log\epsilon^{-1})$ to $\tilde{O}(d^{6}K^{-2}\epsilon^{-1})$ arithmetic operations for an $\epsilon$-optimal solution. Our method combines a primal first-order method with a dual optimality certificate search, which when successful, allows for early termination of the primal method. We show for certain variable clustering problems that, with high probability, FORCE is guaranteed to find the optimal solution to the SDP relaxation and provide a certificate of exact optimality. As verified by our numerical experiments, this allows FORCE to solve the P-W SDP with dimensions in the hundreds in only tens of seconds. For a variation of the P-W SDP where $K$ is not known a priori a slight modification of FORCE reduces the computational complexity of solving this problem as well: from $\tilde{O}(d^7\log\epsilon^{-1})$ using a standard SDP solver to $\tilde{O}(d^{4}\epsilon^{-1})$.
",0,0,0,1,0,0 17228,Two- and three-dimensional wide-field weak lensing mass maps from the Hyper Suprime-Cam Subaru Strategic Program S16A data," We present wide-field (167 deg$^2$) weak lensing mass maps from the Hyper Supreme-Cam Subaru Strategic Program (HSC-SSP). We compare these weak lensing based dark matter maps with maps of the distribution of the stellar mass associated with luminous red galaxies. We find a strong correlation between these two maps with a correlation coefficient of $\rho=0.54\pm0.03$ (for a smoothing size of $8'$). This correlation is detected even with a smaller smoothing scale of $2'$ ($\rho=0.34\pm 0.01$). This detection is made uniquely possible because of the high source density of the HSC-SSP weak lensing survey ($\bar{n}\sim 25$ arcmin$^{-2}$). We also present a variety of tests to demonstrate that our maps are not significantly affected by systematic effects. By using the photometric redshift information associated with source galaxies, we reconstruct a three-dimensional mass map. This three-dimensional mass map is also found to correlate with the three-dimensional galaxy mass map. Cross-correlation tests presented in this paper demonstrate that the HSC-SSP weak lensing mass maps are ready for further science analyses. ",0,1,0,0,0,0 17229,A Time-spectral Approach to Numerical Weather Prediction," Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal, CFL-like critera are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM. Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. 
For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling. ",0,1,0,0,0,0 17230,Trivial Constraints on Orbital-free Kinetic Energy Density Functionals," Kinetic energy density functionals (KEDFs) are central to orbital-free density functional theory. Limitations on the spatial derivative dependencies of KEDFs have been claimed from differential virial theorems. We point out a central defect in the argument: the relationships are not true for an arbitrary density but hold only for the minimizing density and corresponding chemical potential. Contrary to the claims therefore, the relationships are not constraints and provide no independent information about the spatial derivative dependencies of approximate KEDFs. A simple argument also shows that validity for arbitrary $v$-representable densities is not restored by appeal to the density-potential bijection. ",0,1,0,0,0,0 17231,The Multi-layer Information Bottleneck Problem," The multi-layer information bottleneck (IB) problem, where information is propagated (or successively refined) from layer to layer, is considered. Based on information forwarded by the preceding layer, each stage of the network is required to preserve a certain level of relevance with regard to a specific hidden variable, quantified by the mutual information. The hidden variables and the source can be arbitrarily correlated. 
The optimal trade-off between rates of relevance and compression (or complexity) is obtained through a single-letter characterization, referred to as the rate-relevance region. Conditions for successive refinability are given. A binary source with BSC hidden variables and a binary source with BSC/BEC mixed hidden variables are both proved to be successively refinable. We further extend our result to Gaussian models. A counterexample of successive refinability is also provided. ",1,0,0,1,0,0 17232,A geometric approach to non-linear correlations with intrinsic scatter," We propose a new mathematical model for $(n-k)$-dimensional non-linear correlations with intrinsic scatter in $n$-dimensional data. The model is based on Riemannian geometry, and is naturally symmetric with respect to the measured variables and invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated datasets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in $n$-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions, and compare our results to existing methods. Finally, we apply our methodology to the well-known $M_\text{BH}$-$\sigma$ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is $\sim 6$ for the datasets used, rather than the values in the range $\sim 4$-$5$ typically quoted in the literature for these data. 
",0,1,1,1,0,0 17233,Computing simplicial representatives of homotopy group elements," A central problem of algebraic topology is to understand the homotopy groups $\pi_d(X)$ of a topological space $X$. For the computational version of the problem, it is well known that there is no algorithm to decide whether the fundamental group $\pi_1(X)$ of a given finite simplicial complex $X$ is trivial. On the other hand, there are several algorithms that, given a finite simplicial complex $X$ that is simply connected (i.e., with $\pi_1(X)$ trivial), compute the higher homotopy group $\pi_d(X)$ for any given $d\geq 2$. %The first such algorithm was given by Brown, and more recently, Čadek et al. However, these algorithms come with a caveat: They compute the isomorphism type of $\pi_d(X)$, $d\geq 2$ as an \emph{abstract} finitely generated abelian group given by generators and relations, but they work with very implicit representations of the elements of $\pi_d(X)$. Converting elements of this abstract group into explicit geometric maps from the $d$-dimensional sphere $S^d$ to $X$ has been one of the main unsolved problems in the emerging field of computational homotopy theory. Here we present an algorithm that, given a~simply connected space $X$, computes $\pi_d(X)$ and represents its elements as simplicial maps from a suitable triangulation of the $d$-sphere $S^d$ to $X$. For fixed $d$, the algorithm runs in time exponential in $size(X)$, the number of simplices of $X$. Moreover, we prove that this is optimal: For every fixed $d\geq 2$, we construct a family of simply connected spaces $X$ such that for any simplicial map representing a generator of $\pi_d(X)$, the size of the triangulation of $S^d$ on which the map is defined, is exponential in $size(X)$. 
",1,0,1,0,0,0 17234,Multiobjective Optimization of Solar Powered Irrigation System with Fuzzy Type-2 Noise Modelling," Optimization is becoming a crucial element in industrial applications involving sustainable alternative energy systems. During the design of such systems, the engineer/decision maker would often encounter noise factors (e.g. solar insolation and ambient temperature fluctuations) when their system interacts with the environment. In this chapter, the sizing and design optimization of the solar powered irrigation system was considered. This problem is multivariate, noisy, nonlinear and multiobjective. This design problem was tackled by first using the Fuzzy Type II approach to model the noise factors. Consequently, the Bacterial Foraging Algorithm (BFA) (in the context of a weighted sum framework) was employed to solve this multiobjective fuzzy design problem. This method was then used to construct the approximate Pareto frontier as well as to identify the best solution option in a fuzzy setting. Comprehensive analyses and discussions were performed on the generated numerical results with respect to the implemented solution methods. ",1,0,0,0,0,0 17235,Convergence Analysis of Gradient EM for Multi-component Gaussian Mixture," In this paper, we study convergence properties of the gradient Expectation-Maximization algorithm \cite{lange1995gradient} for Gaussian Mixture Models for general number of clusters and mixing coefficients. We derive the convergence rate depending on the mixing coefficients, minimum and maximum pairwise distances between the true centers and dimensionality and number of components; and obtain a near-optimal local contraction radius. While there have been some recent notable works that derive local convergence rates for EM in the two equal mixture symmetric GMM, in the more general case, the derivations need structurally different and non-trivial arguments. 
We use recent tools from learning theory and empirical processes to achieve our theoretical results. ",1,0,1,1,0,0 17236,The Gravitational-Wave Physics," The direct detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory indicates the coming of the era of gravitational-wave astronomy and gravitational-wave cosmology. It is expected that more and more gravitational-wave events will be detected by currently existing and planned gravitational-wave detectors. Gravitational waves open a new window to explore the Universe, and various mysteries will be disclosed through gravitational-wave detection, combined with other cosmological probes. Gravitational-wave physics is not only related to gravitation theory but is also closely tied to fundamental physics, cosmology and astrophysics. In this review article, three kinds of sources of gravitational waves and the relevant physics will be discussed: gravitational waves produced during the inflation and preheating phases of the Universe; gravitational waves produced during the first-order phase transition as the Universe cools down; and gravitational waves from the three phases (inspiral, merger and ringdown) of a compact binary system. We will also discuss gravitational waves as a standard siren to explore the evolution of the Universe. ",0,1,0,0,0,0 17237,Multivariant Assertion-based Guidance in Abstract Interpretation," Approximations during program analysis are a necessary evil, as they ensure essential properties, such as soundness and termination of the analysis, but they also imply that useful results are not always produced. Automatic techniques have been studied to prevent precision loss, typically at the expense of larger resource consumption. 
In both cases (i.e., when the analysis produces inaccurate results and when resource consumption is too high), it is necessary to have some means for users to provide information to guide the analysis and thus improve precision and/or performance. We present techniques for supporting, within an abstract interpretation framework, a rich set of assertions that can deal with multivariance/context-sensitivity, and can handle different run-time semantics for those assertions that cannot be discharged at compile time. We show how the proposed approach can be applied to both improving precision and accelerating analysis. We also provide some formal results on the effects of such assertions on the analysis results. ",1,0,0,0,0,0 17238,An Experimental Comparison of Uncertainty Sets for Robust Shortest Path Problems," Through the development of efficient algorithms, data structures and preprocessing techniques, real-world shortest path problems in street networks are now very fast to solve. But in reality, the exact travel times along each arc in the network may not be known. This led to the development of robust shortest path problems, where all possible arc travel times are contained in a so-called uncertainty set of possible outcomes. Research in robust shortest path problems typically assumes this set to be given, and provides complexity results as well as algorithms depending on its shape. However, what can actually be observed in real-world problems are only discrete raw data points. The shape of the uncertainty set is already a modelling assumption. In this paper we test several of the most widely used assumptions on the uncertainty set using real-world traffic measurements provided by the City of Chicago. We calculate the resulting different robust solutions, and evaluate which uncertainty approach is actually reasonable for our data. 
This anchors theoretical research in a real-world application and allows us to point out which robust models should be the future focus of algorithmic development. ",1,0,1,0,0,0 17239,An experimental comparison of velocities underneath focussed breaking waves," Nonlinear wave interactions affect the evolution of steep wave groups, their breaking and the associated kinematic field. Laboratory experiments are performed to investigate the effect of the underlying focussing mechanism on the shape of the breaking wave and its velocity field. In this regard, it is found that the shape of the wave spectrum plays a substantial role. Broader underlying wave spectra lead to energetic plungers at a relatively low amplitude. For narrower spectra, waves break at higher amplitudes but with a less energetic spiller. Comparison with standard engineering methods commonly used to predict the velocity underneath extreme waves shows that, under certain conditions, the measured velocity profile strongly deviates from engineering predictions. ",0,1,0,0,0,0 17240,Full Momentum and Energy Resolved Spectral Function of a 2D Electronic System," The single-particle spectral function measures the density of electronic states (DOS) in a material as a function of both momentum and energy, providing central insights into phenomena such as superconductivity and Mott insulators. While scanning tunneling microscopy (STM) and other tunneling methods have provided partial spectral information, until now only angle-resolved photoemission spectroscopy (ARPES) has permitted a comprehensive determination of the spectral function of materials in both momentum and energy. However, ARPES operates only on electronic systems at the material surface and cannot work in the presence of applied magnetic fields. Here, we demonstrate a new method for determining the full momentum and energy resolved electronic spectral function of a two-dimensional (2D) electronic system embedded in a semiconductor. 
In contrast with ARPES, the technique remains operational in the presence of large externally applied magnetic fields and functions for electronic systems with zero electrical conductivity or with zero electron density. It provides a direct high-resolution and high-fidelity probe of the dispersion and dynamics of the interacting 2D electron system. By ensuring the system of interest remains under equilibrium conditions, we uncover delicate signatures of many-body effects involving electron-phonon interactions, plasmons, polarons, and a novel phonon analog of the vacuum Rabi splitting in atomic systems. ",0,1,0,0,0,0 17241,Complete parallel mean curvature surfaces in two-dimensional complex space-forms," The purpose of this article is to determine explicitly the complete surfaces with parallel mean curvature vector, both in the complex projective plane and the complex hyperbolic plane. The main results are as follows: When the curvature of the ambient space is positive, there exists a unique such surface up to rigid motions of the target space. On the other hand, when the curvature of the ambient space is negative, there are `non-trivial' complete parallel mean curvature surfaces generated by Jacobi elliptic functions and they exhaust such surfaces. ",0,0,1,0,0,0 17242,Parallelized Linear Classification with Volumetric Chemical Perceptrons," In this work, we introduce a new type of linear classifier that is implemented in a chemical form. We propose a novel encoding technique which simultaneously represents multiple datasets in an array of microliter-scale chemical mixtures. Parallel computations on these datasets are performed as robotic liquid handling sequences, whose outputs are analyzed by high-performance liquid chromatography. As a proof of concept, we chemically encode several MNIST images of handwritten digits and demonstrate successful chemical-domain classification of the digits using volumetric perceptrons. 
We additionally quantify the performance of our method with a larger dataset of binary vectors and compare the experimental measurements against predicted results. Paired with appropriate chemical analysis tools, our approach can work on increasingly parallel datasets. We anticipate that related approaches will be scalable to multilayer neural networks and other more complex algorithms. Much like recent demonstrations of archival data storage in DNA, this work blurs the line between chemical and electrical information systems, and offers early insight into the computational efficiency and massive parallelism that may come with computing in chemical domains. ",0,0,0,0,1,0 17243,Learning Spatial Regularization with Image-level Supervisions for Multi-label Image Classification," Multi-label image classification is a fundamental but challenging task in computer vision. Great progress has been achieved by exploiting semantic relations between labels in recent years. However, conventional approaches are unable to model the underlying spatial relations between labels in multi-label images, because spatial annotations of the labels are generally not provided. In this paper, we propose a unified deep neural network that exploits both semantic and spatial relations between labels with only image-level supervisions. Given a multi-label image, our proposed Spatial Regularization Network (SRN) generates attention maps for all labels and captures the underlying relations between them via learnable convolutions. By aggregating the regularized classification results with the original results from a ResNet-101 network, the classification performance can be consistently improved. The whole deep neural network is trained end-to-end with only image-level annotations, and thus requires no additional effort on image annotation. 
Extensive evaluations on 3 public datasets with different types of labels show that our approach significantly outperforms state-of-the-art methods and has strong generalization capability. Analysis of the learned SRN model demonstrates that it can effectively capture both semantic and spatial relations of labels for improving classification performance. ",1,0,0,0,0,0 17244,The critical binary star separation for a planetary system origin of white dwarf pollution," The atmospheres of between one quarter and one half of observed single white dwarfs in the Milky Way contain heavy element pollution from planetary debris. The pollution observed in white dwarfs in binary star systems is, however, less clear, because companion star winds can generate a stream of matter which is accreted by the white dwarf. Here we (i) discuss the necessity or lack thereof of a major planet in order to pollute a white dwarf with orbiting minor planets in both single and binary systems, and (ii) determine the critical binary separation beyond which the accretion source is from a planetary system. We hence obtain user-friendly functions relating this distance to the masses and radii of both stars, the companion wind, and the accretion rate onto the white dwarf, for a wide variety of published accretion prescriptions. We find that for the majority of white dwarfs in known binaries, if pollution is detected, then that pollution should originate from planetary material. ",0,1,0,0,0,0 17245,A quantum Mirković-Vybornov isomorphism," We present a quantization of an isomorphism of Mirković and Vybornov which relates the intersection of a Slodowy slice and a nilpotent orbit closure in $\mathfrak{gl}_N$, to a slice between spherical Schubert varieties in the affine Grassmannian of $PGL_n$ (with weights encoded by the Jordan types of the nilpotent orbits). A quantization of the former variety is provided by a parabolic W-algebra and of the latter by a truncated shifted Yangian. 
Building on earlier work of Brundan and Kleshchev, we define an explicit isomorphism between these non-commutative algebras, and show that its classical limit is a variation of the original isomorphism of Mirković and Vybornov. As a corollary, we deduce that the W-algebra is free as a left (or right) module over its Gelfand-Tsetlin subalgebra, as conjectured by Futorny, Molev, and Ovsienko. ",0,0,1,0,0,0 17246,Portfolio diversification and model uncertainty: a robust dynamic mean-variance approach," This paper is concerned with a multi-asset mean-variance portfolio selection problem under model uncertainty. We develop a continuous time framework for taking into account ambiguity aversion about both expected return rates and the correlation matrix of the assets, and for studying the effects on portfolio diversification. We prove a separation principle for the associated robust control problem, which allows us to reduce the determination of the optimal dynamic strategy to the parametric computation of the minimal risk premium function. Our results provide a justification for under-diversification, as documented in empirical studies. We explicitly quantify the degree of under-diversification in terms of correlation and Sharpe ratio ambiguity. In particular, we show that an investor with poor confidence in the expected return estimation does not hold any risky asset, and on the other hand, trades only one risky asset when the level of ambiguity on the correlation matrix is large. This extends to the continuous-time setting the results obtained by Garlappi, Uppal and Wang [13], and Liu and Zeng [24] in a one-period model. JEL Classification: G11, C61 MSC Classification: 91G10, 91G80, 60H30 ",0,0,0,0,0,1 17247,TensorLayer: A Versatile Library for Efficient Deep Learning Development," Deep learning has enabled major advances in the fields of computer vision, natural language processing, and multimedia among many others. 
Developing a deep learning system is arduous and complex, as it involves constructing neural network architectures, managing training/trained models, tuning the optimization process, preprocessing and organizing data, etc. TensorLayer is a versatile Python library that aims at helping researchers and engineers efficiently develop deep learning systems. It offers rich abstractions for neural networks, model and data management, and a parallel workflow mechanism. While boosting efficiency, TensorLayer maintains both performance and scalability. TensorLayer was released in September 2016 on GitHub, and has helped people from academia and industry develop real-world applications of deep learning. ",1,0,0,1,0,0 17248,Effects of excess carriers on native defects in wide bandgap semiconductors: illumination as a method to enhance p-type doping," Undesired unintentional doping and doping limits in semiconductors are typically caused by compensating defects with low formation energies. Since the formation energy of a charged defect depends linearly on the Fermi level, doping limits can be especially pronounced in wide bandgap semiconductors where the Fermi level can vary substantially. Introduction of non-equilibrium carrier concentrations during growth or processing alters the chemical potentials of band carriers and thus provides the possibility of modifying populations of charged defects in ways impossible at thermal equilibrium. Herein we demonstrate that, for an ergodic system with excess carriers, the rates of carrier capture and emission involving a defect charge transition level rigorously determine the admixture of electron and hole quasi-Fermi levels determining the formation energy of non-zero charge states of that defect type. To catalog the range of possible responses to excess carriers, we investigate the behavior of a single donor-like defect as functions of extrinsic doping and the energy of the charge transition level. 
The technologically most important finding is that excess carriers will increase the formation energy of compensating defects for most values of the charge transition level in the bandgap. Thus, it may be possible to overcome limitations on doping imposed by native defects. Cases also exist in wide bandgap semiconductors in which the concentration of defects with the same charge polarity as the majority dopant is either left unchanged or actually increases. The causes of these various behaviors are rationalized in terms of the capture and emission rates, and guidelines for carrying out experimental tests of this model are given. ",0,1,0,0,0,0 17249,LATTE: Application Oriented Social Network Embedding," In recent years, many research works have proposed to embed network-structured data into a low-dimensional feature space, where each node is represented as a feature vector. However, due to the detachment of the embedding process from external tasks, the embeddings learned by most existing models can be ineffective for application tasks with specific objectives, e.g., community detection or information diffusion. In this paper, we propose to study the application oriented heterogeneous social network embedding problem. Significantly different from the existing works, besides network structure preservation, the problem should also incorporate the objectives of external applications in the objective function. To resolve the problem, we propose a novel network embedding framework, namely the ""appLicAtion orienTed neTwork Embedding"" (Latte) model. In Latte, the heterogeneous network structure can be applied to compute the node ""diffusive proximity"" scores, which capture both local and global network structures. 
Based on these computed scores, Latte learns the network representation feature vectors by extending the autoencoder model to the heterogeneous network scenario, which can also effectively unite the objectives of network embedding and external application tasks. Extensive experiments have been done on real-world heterogeneous social network datasets, and the experimental results have demonstrated the outstanding performance of Latte in learning the representation vectors for specific application tasks. ",1,0,0,0,0,0 17250,Giant paramagnetism induced valley polarization of electrons in charge-tunable monolayer MoSe2," For applications exploiting the valley pseudospin degree of freedom in transition metal dichalcogenide monolayers, efficient preparation of electrons or holes in a single valley is essential. Here, we show that a magnetic field of 7 Tesla leads to a near-complete valley polarization of electrons in a MoSe2 monolayer with a density of 1.6x10^{12} cm^{-2}; in the absence of exchange interactions favoring single-valley occupancy, a similar degree of valley polarization would have required a pseudospin g-factor exceeding 40. To investigate the magnetic response, we use polarization resolved photoluminescence as well as resonant reflection measurements. In the latter, we observe gate voltage dependent transfer of oscillator strength from the exciton to the attractive-Fermi-polaron: stark differences in the spectrum of the two light helicities provide a confirmation of valley polarization. Our findings suggest an interaction induced giant paramagnetic response of MoSe2, which paves the way for valleytronics applications. ",0,1,0,0,0,0 17251,Highrisk Prediction from Electronic Medical Records via Deep Attention Networks," Predicting high-risk vascular diseases is a significant issue in the medical domain. 
Most existing methods predict the prognosis of patients from pathological and radiological measurements, which are expensive and require much time to be analyzed. Here we propose deep attention models that predict the onset of high-risk vascular diseases from the symbolic medical history sequences of hypertension patients, such as ICD-10 and pharmacy codes only: Medical History-based Prediction using Attention Network (MeHPAN). We demonstrate two types of attention models based on 1) a bidirectional gated recurrent unit (R-MeHPAN) and 2) a 1D convolutional multilayer model (C-MeHPAN). The two MeHPAN models are evaluated on approximately 50,000 hypertension patients with respect to precision, recall, f1-measure and area under the curve (AUC). Experimental results show that our MeHPAN methods outperform standard classification models. Comparing the two MeHPANs, R-MeHPAN provides better discriminative capability with respect to all metrics, while C-MeHPAN presents a much shorter training time with competitive accuracy. ",1,0,0,1,0,0 17252,Agent based simulation of the evolution of society as an alternate maximization problem," Understanding the evolution of human society, as a complex adaptive system, is a task that has been looked upon from various angles. In this paper, we simulate an agent-based model with a high enough population tractably. To do this, we characterize an entity called \textit{society}, which helps us reduce the complexity of each step from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$. We propose a very realistic setting, where we design a joint alternate maximization step algorithm to maximize a certain \textit{fitness} function, which we believe simulates the way societies develop. 
Our key contributions include (i) proposing a novel protocol for simulating the evolution of a society with cheap, non-optimal joint alternate maximization steps, (ii) providing a framework for carrying out experiments that adhere to this joint-optimization simulation framework, (iii) carrying out experiments to show that it makes sense empirically, and (iv) providing an alternate justification for the use of \textit{society} in the simulations. ",1,0,0,1,0,0 17253,Can a heart rate variability biomarker identify the presence of autism spectrum disorder in eight year old children?," Autonomic nervous system (ANS) activity is altered in autism spectrum disorder (ASD). Heart rate variability (HRV) derived from electrocardiogram (ECG) has been a powerful tool to identify alterations in ANS due to a plethora of pathophysiological conditions, including psychological ones such as depression. ECG-derived HRV thus carries a yet to be explored potential to be used as a diagnostic and follow-up biomarker of ASD. However, few studies have explored this potential. In a cohort of boys (ages 8 - 11 years) with (n=18) and without ASD (n=18), we tested a set of linear and nonlinear HRV measures, including phase rectified signal averaging (PRSA), applied to a segment of ECG collected under resting conditions, for their ability to predict ASD. We identified HRV measures derived from time, frequency and geometric signal-analytical domains which are changed in ASD children relative to peers without ASD and correlate with psychometric scores (p<0.05 for each). The areas under the receiver operating curves ranged between 0.71 and 0.74 for each HRV measure. Despite being a small cohort lacking external validation, these promising preliminary results warrant larger prospective validation studies. ",0,0,0,0,1,0 17254,Semantic Entity Retrieval Toolkit," Unsupervised learning of low-dimensional, semantic representations of words and entities has recently gained attention. 
In this paper we describe the Semantic Entity Retrieval Toolkit (SERT) that provides implementations of our previously published entity representation models. The toolkit provides a unified interface to different representation learning algorithms and fine-grained parsing configuration, and can be used transparently with GPUs. In addition, users can easily modify existing models or implement their own models in the framework. After model training, SERT can be used to rank entities according to a textual query and extract the learned entity/word representation for use in downstream algorithms, such as clustering or recommendation. ",1,0,0,0,0,0 17255,Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection," In the Best-$k$-Arm problem, we are given $n$ stochastic bandit arms, each associated with an unknown reward distribution. We are required to identify the $k$ arms with the largest means by taking as few samples as possible. In this paper, we make progress towards a complete characterization of the instance-wise sample complexity bounds for the Best-$k$-Arm problem. On the lower bound side, we obtain a novel complexity term to measure the sample complexity that every Best-$k$-Arm instance requires. This is derived by an interesting and nontrivial reduction from the Best-$1$-Arm problem. We also provide an elimination-based algorithm that matches the instance-wise lower bound within doubly-logarithmic factors. The sample complexity of our algorithm strictly dominates the state-of-the-art for Best-$k$-Arm (modulo constant factors). ",1,0,0,1,0,0 17256,Dimensions of equilibrium measures on a class of planar self-affine sets," We study equilibrium measures (Käenmäki measures) supported on self-affine sets generated by a finite collection of diagonal and anti-diagonal matrices acting on the plane and satisfying the strong separation property. 
Our main result is that such measures are exact dimensional and that the dimension satisfies the Ledrappier-Young formula, which gives an explicit expression for the dimension in terms of the entropy and Lyapunov exponents as well as the dimension of the important coordinate projection of the measure. In particular, we do this by showing that the Käenmäki measure is equal to the sum of (the pushforwards of) two Gibbs measures on an associated subshift of finite type. ",0,0,1,0,0,0 17257,Hubble PanCET: An isothermal day-side atmosphere for the bloated gas-giant HAT-P-32Ab," We present a thermal emission spectrum of the bloated hot Jupiter HAT-P-32Ab from a single eclipse observation made in spatial scan mode with the Wide Field Camera 3 (WFC3) aboard the Hubble Space Telescope (HST). The spectrum covers the wavelength regime from 1.123 to 1.644 microns, which is binned into 14 eclipse depths measured to an average precision of 104 parts per million. The spectrum is unaffected by dilution from the close M-dwarf companion HAT-P-32B, which was fully resolved. We complemented our spectrum with literature results and performed a comparative forward and retrieval analysis with the 1D radiative-convective ATMO model. Assuming solar abundances in the planet's atmosphere, we find that the measured spectrum can best be explained by the spectrum of a blackbody isothermal atmosphere with Tp = 1995 +/- 17K, but can equally well be described by a spectrum with a modest thermal inversion. The retrieved spectrum suggests emission from VO at the WFC3 wavelengths and no evidence of the 1.4 micron water feature. Emission models with temperature profiles decreasing with height are rejected at high confidence. An isothermal or inverted spectrum can imply a clear atmosphere with an absorber, a dusty cloud deck or a combination of both. 
We find that the planet can have a continuum of values for the albedo and recirculation, ranging from high albedo and poor recirculation to low albedo and efficient recirculation. Optical spectroscopy of the planet's day-side or thermal emission phase curves can potentially resolve the current albedo-recirculation degeneracy. ",0,1,0,0,0,0 17258,One Model To Learn Them All," Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all. ",1,0,0,1,0,0 17259,Porcupine Neural Networks: (Almost) All Local Optima are Global," Neural networks have been used prominently in several machine learning and statistics applications. In general, the underlying optimization of neural networks is non-convex, which makes their performance analysis challenging. 
In this paper, we take a novel approach to this problem by asking whether one can constrain a neural network's weights to make its optimization landscape have good theoretical properties while, at the same time, remaining a good approximation of the unconstrained one. For two-layer neural networks, we provide affirmative answers to these questions by introducing Porcupine Neural Networks (PNNs) whose weight vectors are constrained to lie over a finite set of lines. We show that most local optima of PNN optimizations are global while we have a characterization of regions where bad local optima may exist. Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN. ",1,0,0,1,0,0 17260,Configuration Path Integral Monte Carlo Approach to the Static Density Response of the Warm Dense Electron Gas," Precise knowledge of the static density response function (SDRF) of the uniform electron gas (UEG) serves as key input for numerous applications, most importantly for density functional theory beyond generalized gradient approximations. Here we extend the configuration path integral Monte Carlo (CPIMC) formalism that was previously applied to the spatially uniform electron gas to the case of an inhomogeneous electron gas by adding a spatially periodic external potential. This procedure has recently been successfully used in permutation blocking path integral Monte Carlo simulations (PB-PIMC) of the warm dense electron gas [Dornheim \textit{et al.}, Phys. Rev. E in press, arXiv:1706.00315], but this method is restricted to low and moderate densities. Implementing this procedure into CPIMC allows us to obtain exact finite temperature results for the SDRF of the electron gas at \textit{high to moderate densities}, closing the gap left open by the PB-PIMC data. 
In this paper we demonstrate how the CPIMC formalism can be efficiently extended to the spatially inhomogeneous electron gas and present the first data points. Finally, we discuss finite-size errors involved in the quantum Monte Carlo results for the SDRF in detail and present a solution for removing them that is based on a generalization of ground state techniques. ",0,1,0,0,0,0 17261,Superzone gap formation and low lying crystal electric field levels in PrPd$_2$Ge$_2$ single crystal," The magnetocrystalline anisotropy exhibited in PrPd$_2$Ge$_2$ single crystal has been investigated by measuring the magnetization, magnetic susceptibility, electrical resistivity and heat capacity. PrPd$_2$Ge$_2$ crystallizes in the well known ThCr$_2$Si$_2$\--type tetragonal structure. The antiferromagnetic ordering is confirmed at 5.1~K with the [001]-axis as the easy axis of magnetization. A superzone gap formation is observed from the electrical resistivity measurement when the current is passed along the [001] direction. The crystal electric field (CEF) analysis on the magnetic susceptibility, magnetization and the heat capacity measurements confirms a doublet ground state with a relatively low overall CEF level splitting. The CEF level spacings and the Zeeman splitting at high fields become comparable and lead to a metamagnetic transition at 34~T due to the CEF level crossing. ",0,1,0,0,0,0 17262,Adaptive Real-Time Software Defined MIMO Visible Light Communications using Spatial Multiplexing and Spatial Diversity," In this paper, we experimentally demonstrate a real-time software defined multiple input multiple output (MIMO) visible light communication (VLC) system employing link adaptation of spatial multiplexing and spatial diversity. Real-time MIMO signal processing is implemented by using the Field Programmable Gate Array (FPGA) based Universal Software Radio Peripheral (USRP) devices. 
Software-defined implementation of MIMO VLC can assist in enabling an adaptive and reconfigurable communication system without hardware changes. We measured the error vector magnitude (EVM), bit error rate (BER) and spectral efficiency performance for single carrier M-QAM MIMO VLC using spatial diversity and spatial multiplexing. Results show that spatial diversity MIMO VLC improves the error performance at the cost of the spectral efficiency that spatial multiplexing would otherwise provide. We propose an adaptive MIMO solution in which both the modulation scheme and the MIMO scheme are dynamically adapted to the changing channel conditions to enhance the error performance and spectral efficiency. The average error-free spectral efficiency of the adaptive 2x2 MIMO VLC reached 12 b/s/Hz over 2 meters of indoor dynamic transmission. ",1,0,1,0,0,0 17263,Maximum Principle Based Algorithms for Deep Learning," The continuous dynamical system approach to deep learning is explored in order to devise alternative frameworks for training algorithms. Training is recast as a control problem and this allows us to formulate necessary optimality conditions in continuous time using Pontryagin's maximum principle (PMP). A modification of the method of successive approximations is then used to solve the PMP, giving rise to an alternative training algorithm for deep learning. This approach has the advantage that rigorous error estimates and convergence results can be established. We also show that it may avoid some pitfalls of gradient-based methods, such as slow convergence on flat landscapes near saddle points. Furthermore, we demonstrate that it obtains a favorable initial convergence rate per iteration, provided Hamiltonian maximization can be efficiently carried out - a step which is still in need of improvement. 
Overall, the approach opens up new avenues to attack problems associated with deep learning, such as trapping in slow manifolds and the inapplicability of gradient-based methods for discrete trainable variables. ",1,0,0,1,0,0 17264,Factorization Machines Leveraging Lightweight Linked Open Data-enabled Features for Top-N Recommendations," With the popularity of Linked Open Data (LOD) and the associated rise in freely accessible knowledge that can be accessed via LOD, exploiting LOD for recommender systems has been widely studied based on various approaches such as graph-based or using different machine learning models with LOD-enabled features. Many of the previous approaches require construction of an additional graph to run graph-based algorithms or to extract path-based features by combining user-item interactions (e.g., likes, dislikes) and background knowledge from LOD. In this paper, we investigate Factorization Machines (FMs) based on particularly lightweight LOD-enabled features which can be directly obtained via a public SPARQL Endpoint without any additional effort to construct a graph. Firstly, we aim to study whether using FM with these lightweight LOD-enabled features can provide competitive performance compared to a learning-to-rank approach leveraging LOD as well as other well-established approaches such as kNN-item and BPRMF. Secondly, we are interested in finding out to what extent each set of LOD-enabled features contributes to the recommendation performance. Experimental evaluation on a standard dataset shows that our proposed approach using FM with lightweight LOD-enabled features provides the best performance compared to other approaches in terms of five evaluation metrics. 
In addition, the study of the recommendation performance based on different sets of LOD-enabled features indicates that property-object lists and PageRank scores of items are useful for improving the performance, and can provide the best performance when used together in FM. We observe that subject-property lists of items do not contribute to the recommendation performance but rather decrease it. ",1,0,0,0,0,0 17265,Wave propagation and homogenization in 2D and 3D lattices: a semi-analytical approach," Wave motion in two- and three-dimensional periodic lattices of beam members supporting longitudinal and flexural waves is considered. An analytic method for solving the Bloch wave spectrum is developed, characterized by a generalized eigenvalue equation obtained by enforcing the Floquet condition. The dynamic stiffness matrix is shown to be explicitly Hermitian and to admit positive eigenvalues. Lattices with hexagonal, rectangular, tetrahedral and cubic unit cells are analyzed. The semi-analytical method can be asymptotically expanded for low frequency yielding explicit forms for the Christoffel matrix describing wave motion in the quasistatic limit. ",0,1,0,0,0,0 17266,Waring's problem for unipotent algebraic groups," In this paper, we formulate an analogue of Waring's problem for an algebraic group $G$. At the field level we consider a morphism of varieties $f\colon \mathbb{A}^1\to G$ and ask whether every element of $G(K)$ is the product of a bounded number of elements of $f(\mathbb{A}^1(K)) = f(K)$. We give an affirmative answer when $G$ is unipotent and $K$ is a characteristic zero field which is not formally real. The idea is the same at the integral level, except one must work with schemes, and the question is whether every element in a finite index subgroup of $G(\mathcal{O})$ can be written as a product of a bounded number of elements of $f(\mathcal{O})$. 
We prove this is the case when $G$ is unipotent and $\mathcal{O}$ is the ring of integers of a totally imaginary number field. ",0,0,1,0,0,0 17267,Spreading of localized attacks in spatial multiplex networks," Many real-world multilayer systems such as critical infrastructure are interdependent and embedded in space with links of a characteristic length. They are also vulnerable to localized attacks or failures, such as terrorist attacks or natural catastrophes, which affect all nodes within a given radius. Here we study the effects of localized attacks on spatial multiplex networks of two layers. We find a metastable region where a localized attack larger than a critical size induces a nucleation transition as a cascade of failures spreads throughout the system, leading to its collapse. We develop a theory to predict the critical attack size and find that it exhibits novel scaling behavior. We further find that localized attacks in these multiplex systems can induce a previously unobserved combination of random and spatial cascades. Our results demonstrate important vulnerabilities in real-world interdependent networks and show new theoretical features of spatial networks. ",1,1,0,0,0,0 17268,Greedy Sparse Signal Reconstruction Using Matching Pursuit Based on Hope-tree," The reconstruction of sparse signals requires the solution of an $\ell_0$-norm minimization problem in Compressed Sensing. Previous research has focused on the investigation of a single candidate to identify the support (index of nonzero elements) of a sparse signal. To ensure that the optimal candidate can be obtained in each iteration, we propose here an iterative greedy reconstruction algorithm (GSRA). First, the intersection of the support sets estimated by the Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP) is set as the initial support set. Then, a hope-tree is built to expand the set. Finally, a developed decreasing subspace pursuit method is used to rectify the candidate set. 
Detailed simulation results demonstrate that GSRA is more accurate than other typical methods in recovering Gaussian signals, 0--1 sparse signals, and synthetic signals. ",1,0,1,0,0,0 17269,Attack-Aware Multi-Sensor Integration Algorithm for Autonomous Vehicle Navigation Systems," In this paper, we propose a fault detection and isolation based attack-aware multi-sensor integration algorithm for the detection of cyberattacks in autonomous vehicle navigation systems. The proposed algorithm uses an extended Kalman filter to construct robust residuals in the presence of noise, and then uses a parametric statistical tool to identify cyberattacks. The parametric statistical tool is based on residuals constructed from the measurement history rather than from one measurement at a time, exploiting the properties of discrete-time signals and dynamic systems. This approach allows the proposed multi-sensor integration algorithm to provide quick detection and low false alarm rates for applications in dynamic systems. An example of INS/GNSS integration of autonomous navigation systems is presented to validate the proposed algorithm by using a software-in-the-loop simulation. ",1,0,0,0,0,0 17270,"Turbulence, cascade and singularity in a generalization of the Constantin-Lax-Majda equation"," We study numerically a Constantin-Lax-Majda-De Gregorio model generalized by Okamoto, Sakajo and Wunsch, which is a model of fluid turbulence in one dimension with an inviscid conservation law. In the presence of viscosity and two types of large-scale forcing, we show that a turbulent cascade of the inviscid invariant, which is not limited to a quadratic quantity, occurs and that properties of this model's turbulent state are related to the singularity of the inviscid case by adopting standard tools for analyzing fluid turbulence. 
",0,1,0,0,0,0 17271,Fitting phase--type scale mixtures to heavy--tailed data and distributions," We consider the fitting of heavy tailed data and distribution with a special attention to distributions with a non--standard shape in the ""body"" of the distribution. To this end we consider a dense class of heavy tailed distributions introduced recently, employing an EM algorithm for the the maximum likelihood estimates of its parameters. We present methods for fitting to observed data, histograms, censored data, as well as to theoretical distributions. Numerical examples are provided with simulated data and a benchmark reinsurance dataset. We empirically demonstrate that our model can provide excellent fits to heavy--tailed data/distributions with minimal assumptions ",0,0,1,1,0,0 17272,Deep Incremental Boosting," This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep Incremental Boosting brings to traditional Ensemble methods in Deep Learning. ",1,0,0,1,0,0 17273,Empirical Likelihood for Linear Structural Equation Models with Dependent Errors," We consider linear structural equation models that are associated with mixed graphs. The structural equations in these models only involve observed variables, but their idiosyncratic error terms are allowed to be correlated and non-Gaussian. We propose empirical likelihood (EL) procedures for inference, and suggest several modifications, including a profile likelihood, in order to improve tractability and performance of the resulting methods. 
Through simulations, we show that when the error distributions are non-Gaussian, the use of EL and the proposed modifications may increase statistical efficiency and improve assessment of significance. ",0,0,0,1,0,0 17274,Grassmannian flows and applications to nonlinear partial differential equations," We show how solutions to a large class of partial differential equations with nonlocal Riccati-type nonlinearities can be generated from the corresponding linearized equations, from arbitrary initial data. It is well known that evolutionary matrix Riccati equations can be generated by projecting linear evolutionary flows on a Stiefel manifold onto a coordinate chart of the underlying Grassmann manifold. Our method relies on extending this idea to the infinite dimensional case. The key is an integral equation analogous to the Marchenko equation in integrable systems, that represents the coordinate chart map. We show explicitly how to generate such solutions to scalar partial differential equations of arbitrary order with nonlocal quadratic nonlinearities using our approach. We provide numerical simulations that demonstrate the generation of solutions to Fisher--Kolmogorov--Petrovskii--Piskunov equations with nonlocal nonlinearities. We also indicate how the method might extend to more general classes of nonlinear partial differential systems. ",0,1,1,0,0,0 17275,The Reinhardt Conjecture as an Optimal Control Problem," In 1934, Reinhardt conjectured that the shape of the centrally symmetric convex body in the plane whose densest lattice packing has the smallest density is a smoothed octagon. This conjecture is still open. We formulate the Reinhardt Conjecture as a problem in optimal control theory. The smoothed octagon is a Pontryagin extremal trajectory with bang-bang control. More generally, the smoothed regular $6k+2$-gon is a Pontryagin extremal with bang-bang control. The smoothed octagon is a strict (micro) local minimum to the optimal control problem. 
The optimal solution to the Reinhardt problem is a trajectory without singular arcs. The extremal trajectories that do not meet the singular locus have bang-bang controls with finitely many switching times. Finally, we reduce the Reinhardt problem to an optimization problem on a five-dimensional manifold. (Each point on the manifold is an initial condition for a potential Pontryagin extremal lifted trajectory.) We suggest that the Reinhardt conjecture might eventually be fully resolved through optimal control theory. Some proofs are computer-assisted using a computer algebra system. ",0,0,1,0,0,0 17276,Deep submillimeter and radio observations in the SSA22 field. I. Powering sources and Lyα escape fraction of Lyα blobs," We study the heating mechanisms and Ly{\alpha} escape fractions of 35 Ly{\alpha} blobs (LABs) at z = 3.1 in the SSA22 field. Dust continuum sources have been identified in 11 of the 35 LABs, all with star formation rates (SFRs) above 100 Msun/yr. Likely radio counterparts are detected in 9 out of 29 investigated LABs. The detection of submm dust emission is more linked to the physical size of the Ly{\alpha} emission than to the Ly{\alpha} luminosities of the LABs. A radio excess in the submm/radio detected LABs is common, hinting at the presence of active galactic nuclei. Most radio sources without X-ray counterparts are located at the centers of the LABs. However, all X-ray counterparts avoid the central regions. This may be explained by absorption due to exceptionally large column densities along the line-of-sight or by LAB morphologies, which are highly orientation dependent. The median Ly{\alpha} escape fraction is about 3\% among the submm-detected LABs, which is lower than a lower limit of 11\% for the submm-undetected LABs. 
We suspect that the large difference is due to the high dust attenuation supported by the large SFRs and the dense large-scale environment, as well as to large uncertainties in the extinction corrections required when interpreting optical data. ",0,1,0,0,0,0 17277,Modeling temporal constraints for a system of interactive scores," In this chapter we briefly explain the fundamentals of the interactive scores formalism. Then we develop a solution for implementing the ECO machine by mixing Petri nets and constraint propagation. We also present another solution for implementing the ECO machine using concurrent constraint programming. Finally, we present an extension of interactive scores with conditional branching. ",1,0,0,0,0,0 17278,Electronic structure of ThRu2Si2 studied by angle-resolved photoelectron spectroscopy: Elucidating the contribution of U 5f states in URu2Si2," The electronic structure of ThRu2Si2 was studied by angle-resolved photoelectron spectroscopy (ARPES) with incident photon energies of hν=655-745 eV. Detailed band structure and the three-dimensional shapes of Fermi surfaces were derived experimentally, and their characteristic features were mostly explained by means of band structure calculations based on the density functional theory. Comparison of the experimental ARPES spectra of ThRu2Si2 with those of URu2Si2 shows that they have considerably different spectral profiles particularly in the energy range of 1 eV from the Fermi level, suggesting that U 5f states are substantially hybridized in these bands. The relationship between the ARPES spectra of URu2Si2 and ThRu2Si2 is very different from the one between the ARPES spectra of CeRu2Si2 and LaRu2Si2, where the intrinsic difference in their spectra is limited only in the very vicinity of the Fermi energy. The present result suggests that the U 5f electrons in URu2Si2 have strong hybridization with ligand states and have an essentially itinerant character. 
",0,1,0,0,0,0 17279,Non-zero constant curvature factorable surfaces in pseudo-Galilean space," Factorable surfaces, i.e. graphs associated with the product of two functions of one variable, constitute a wide class of surfaces. Such surfaces in the pseudo-Galilean space with zero Gaussian and mean curvature were obtained in [1]. In this study, we provide new classification results relating to the factorable surfaces with non-zero Gaussian and mean curvature. ",0,0,1,0,0,0 17280,Darboux and Binary Darboux Transformations for Discrete Integrable Systems. II. Discrete Potential mKdV Equation," The paper presents two results. First it is shown how the discrete potential modified KdV equation and its Lax pairs in matrix form arise from the Hirota-Miwa equation by a 2-periodic reduction. Then Darboux transformations and binary Darboux transformations are derived for the discrete potential modified KdV equation and it is shown how these may be used to construct exact solutions. ",0,1,0,0,0,0 17281,Nearly second-order asymptotic optimality of sequential change-point detection with one-sample updates," Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackle complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to a exponential family and the estimators satisfy the logarithm regret property, we show that this approach is nearly second-order asymptotically optimal. 
This means that the upper bound for the false alarm rate of the algorithm (measured by the average-run-length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory. ",1,0,1,1,0,0 17282,Algorithms in the classical Néron Desingularization," We give algorithms to construct the Néron Desingularization and the easy case from \cite{KK} of the General Néron Desingularization. ",0,0,1,0,0,0 17283,Recent Advances in Neural Program Synthesis," In recent years, deep learning has made tremendous progress in a number of fields that were previously out of reach for artificial intelligence. The successes in these problems have led researchers to consider the possibilities for intelligent systems to tackle a problem that humans have only recently themselves considered: program synthesis. This challenge is unlike others such as object recognition and speech translation, since its abstract nature and demand for rigor make it difficult even for human minds to attempt. While it is still far from being solved or even competitive with most existing methods, neural program synthesis is a rapidly growing discipline which holds great promise if completely realized. In this paper, we start with exploring the problem statement and challenges of program synthesis. Then, we examine the fascinating evolution of program induction models, along with how they have succeeded, failed and been reimagined since. Finally, we conclude with a contrastive look at program synthesis and future research recommendations for the field. ",1,0,0,0,0,0 17284,Generator Reversal," We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. 
Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we propose instead to use more flexible code distributions. These distributions are estimated non-parametrically by reversing the generator map during training. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization. ",1,0,0,1,0,0 17285,Finite model reasoning over existential rules," Ontology-based query answering (OBQA) asks whether a Boolean conjunctive query is satisfied by all models of a logical theory consisting of a relational database paired with an ontology. The introduction of existential rules (i.e., Datalog rules extended with existential quantifiers in rule-heads) as a means to specify the ontology gave birth to Datalog+/-, a framework that has received increasing attention in the last decade, with focus also on decidability and finite controllability to support effective reasoning. Five basic decidable fragments have been singled out: linear, weakly-acyclic, guarded, sticky, and shy. Moreover, for all these fragments, except shy, the important property of finite controllability has been proved, ensuring that a query is satisfied by all models of the theory iff it is satisfied by all its finite models. In this paper we complete the picture by demonstrating that finite controllability of OBQA holds also for shy ontologies, and it therefore applies to all basic decidable Datalog+/- classes. To make the demonstration, we devise a general technique to facilitate the process of (dis)proving finite controllability of an arbitrary ontological fragment. This paper is under consideration for acceptance in TPLP. 
",1,0,0,0,0,0 17286,On the convergence properties of a $K$-step averaging stochastic gradient descent algorithm for nonconvex optimization," Despite their popularity, the practical performance of asynchronous stochastic gradient descent methods (ASGD) for solving large scale machine learning problems are not as good as theoretical results indicate. We adopt and analyze a synchronous K-step averaging stochastic gradient descent algorithm which we call K-AVG. We establish the convergence results of K-AVG for nonconvex objectives and explain why the K-step delay is necessary and leads to better performance than traditional parallel stochastic gradient descent which is a special case of K-AVG with $K=1$. We also show that K-AVG scales better than ASGD. Another advantage of K-AVG over ASGD is that it allows larger stepsizes. On a cluster of $128$ GPUs, K-AVG is faster than ASGD implementations and achieves better accuracies and faster convergence for \cifar dataset. ",1,0,0,1,0,0 17287,Adversarial Neural Machine Translation," In this paper, we study a new learning paradigm for Neural Machine Translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as Adversarial-NMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed Convolutional Neural Network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. 
Experimental results on English$\rightarrow$French and German$\rightarrow$English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines. ",1,0,0,1,0,0 17288,Surface group amalgams that (don't) act on 3-manifolds," We determine which amalgamated products of surface groups identified over multiples of simple closed curves are not fundamental groups of 3-manifolds. We prove each surface amalgam considered is virtually the fundamental group of a 3-manifold. We prove that each such surface group amalgam is abstractly commensurable to a right-angled Coxeter group from a related family. In an appendix, we determine the quasi-isometry classes among these surface amalgams and their related right-angled Coxeter groups. ",0,0,1,0,0,0 17289,Shading Annotations in the Wild," Understanding shading effects in images is critical for a variety of vision and graphics problems, including intrinsic image decomposition, shadow removal, image relighting, and inverse rendering. As is the case with other vision tasks, machine learning is a promising approach to understanding shading - but there is little ground truth shading data available for real-world images. We introduce Shading Annotations in the Wild (SAW), a new large-scale, public dataset of shading annotations in indoor scenes, comprised of multiple forms of shading judgments obtained via crowdsourcing, along with shading annotations automatically generated from RGB-D imagery. We use this data to train a convolutional neural network to predict per-pixel shading information in an image. We demonstrate the value of our data and network in an application to intrinsic images, where we can reduce decomposition artifacts produced by existing algorithms. Our database is available at this http URL. 
",1,0,0,0,0,0 17290,Koszul cycles and Golod rings," Let $S$ be the power series ring or the polynomial ring over a field $K$ in the variables $x_1,\ldots,x_n$, and let $R=S/I$, where $I$ is proper ideal which we assume to be graded if $S$ is the polynomial ring. We give an explicit description of the cycles of the Koszul complex whose homology classes generate the Koszul homology of $R=S/I$ with respect to $x_1,\ldots,x_n$. The description is given in terms of the data of the free $S$-resolution of $R$. The result is used to determine classes of Golod ideals, among them proper ordinary powers and proper symbolic powers of monomial ideals. Our theory is also applied to stretched local rings. ",0,0,1,0,0,0 17291,PacGAN: The power of two samples in generative adversarial networks," Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples. Despite remarkable recent improvements in generating realistic images, one of their major shortcomings is the fact that in practice, they tend to produce samples with little diversity, even when trained on diverse datasets. This phenomenon, known as mode collapse, has been the main focus of several recent advances in GANs. Yet there is little understanding of why mode collapse happens and why existing approaches are able to mitigate mode collapse. We propose a principled approach to handling mode collapse, which we call packing. The main idea is to modify the discriminator to make decisions based on multiple samples from the same class, either real or artificially generated. We borrow analysis tools from binary hypothesis testing---in particular the seminal result of Blackwell [Bla53]---to prove a fundamental connection between packing and mode collapse. We show that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during the training process. 
Numerical experiments on benchmark datasets suggest that packing provides significant improvements in practice as well. ",1,0,0,1,0,0 17292,Stein-like Estimators for Causal Mediation Analysis in Randomized Trials," Causal mediation analysis aims to estimate the natural direct and indirect effects under clearly specified assumptions. Traditional mediation analysis based on Ordinary Least Squares (OLS) relies on the absence of unmeasured causes of the putative mediator and outcome. When this assumption cannot be justified, Instrumental Variables (IV) estimators can be used in order to produce an asymptotically unbiased estimator of the mediator-outcome link. However, provided that valid instruments exist, bias removal comes at the cost of variance inflation for standard IV procedures such as Two-Stage Least Squares (TSLS). A Semi-Parametric Stein-Like (SPSL) estimator has been proposed in the literature that strikes a natural trade-off between the unbiasedness of the TSLS procedure and the relatively small variance of the OLS estimator. Moreover, the SPSL has the advantage that its shrinkage parameter can be directly estimated from the data. In this paper, we demonstrate how this Stein-like estimator can be implemented in the context of the estimation of natural direct and natural indirect effects of treatments in randomized controlled trials. The performance of the competing methods is studied in a simulation study, in which both the strength of hidden confounding and the strength of the instruments are independently varied. These considerations are motivated by a trial in mental health evaluating the impact of a primary care-based intervention to reduce depression in the elderly. ",0,0,0,1,0,0 17293,Structure-Based Subspace Method for Multi-Channel Blind System Identification," In this work, a novel subspace-based method for blind identification of multichannel finite impulse response (FIR) systems is presented.
Here, we directly exploit the Toeplitz channel structure embedded in the linear signal model to build a quadratic form whose minimization leads to the desired channel estimate up to a scalar factor. This method can be extended to estimate any predefined linear structure, e.g. Hankel, that is usually encountered in linear systems. Simulation findings are provided to highlight the appealing advantages of the new structure-based subspace (SSS) method over the standard subspace (SS) method in certain adverse identification scenarios. ",1,0,0,1,0,0 17294,On Certain Analytical Representations of Cellular Automata," We extend a previously introduced semi-analytical representation of a decomposition of CA dynamics in arbitrary dimensions and neighborhood schemes via the use of certain universal maps in which CA rule vectors are derivable from the equivalent of superpotentials. The results justify the search for alternative analog models of computation and their possible physical connections. ",0,1,0,0,0,0 17295,Strong consistency and optimality for generalized estimating equations with stochastic covariates," In this article we study the existence and strong consistency of GEE estimators, when the generalized estimating functions are martingales with random coefficients. Furthermore, we characterize estimating functions which are asymptotically optimal. ",0,0,0,1,0,0 17296,Synthesis and electronic properties of Ruddlesden-Popper strontium iridate epitaxial thin films stabilized by control of growth kinetics," We report on the selective fabrication of high-quality Sr$_2$IrO$_4$ and SrIrO$_3$ epitaxial thin films from a single polycrystalline Sr$_2$IrO$_4$ target by pulsed laser deposition.
Using a combination of X-ray diffraction and photoemission spectroscopy characterizations, we discover that within a relatively narrow range of substrate temperature, the oxygen partial pressure plays a critical role in the cation stoichiometric ratio of the films, and triggers the stabilization of different Ruddlesden-Popper (RP) phases. Resonant X-ray absorption spectroscopy measurements taken at the Ir $L$-edge and the O $K$-edge demonstrate the presence of strong spin-orbit coupling, and reveal the electronic and orbital structures of both compounds. These results suggest that in addition to conventional thermodynamic considerations, higher members of the Sr$_{n+1}$Ir$_n$O$_{3n+1}$ series can possibly be achieved by kinetic control away from the thermodynamic limit. These findings offer a new approach to the synthesis of ultra-thin films of the RP series of iridates and can be extended to other complex oxides with layered structures. ",0,1,0,0,0,0 17297,A proof on energy gap for Yang-Mills connection," In this note, we prove an ${L^{\frac{n}{2}}}$-energy gap result for Yang-Mills connections on a principal $G$-bundle over a compact manifold without using the Lojasiewicz-Simon gradient inequality (arXiv:1502.00668). ",0,0,1,0,0,0 17298,Realisability of Pomsets via Communicating Automata," Pomsets are a model of concurrent computations introduced by Pratt. They can provide a syntax-oblivious description of the semantics of coordination models based on asynchronous message-passing, such as Message Sequence Charts (MSCs). In this paper, we study conditions that ensure a specification expressed as a set of pomsets can be faithfully realised via communicating automata. Our main contributions are (i) the definition of a realisability condition accounting for termination soundness, (ii) conditions for global specifications with ""multi-threaded"" participants, and (iii) the definition of realisability conditions that can be decided directly over pomsets.
A positive by-product of our approach is the efficiency gain in the verification of the realisability conditions obtained when restricting to specific classes of choreographies characterisable in terms of behavioural types. ",1,0,0,0,0,0 17299,Complex pattern formation driven by the interaction of stable fronts in a competition-diffusion system," The ecological invasion problem in which a weaker exotic species invades an ecosystem inhabited by two strongly competing native species is modelled by a three-species competition-diffusion system. It is known that for a certain range of parameter values competitor-mediated coexistence occurs and complex spatio-temporal patterns are observed in two spatial dimensions. In this paper we uncover the mechanism which generates such patterns. Under some assumptions on the parameters the three-species competition-diffusion system admits two planarly stable travelling waves. Their interaction in one spatial dimension may result in either reflection or merging into a single homoclinic wave, depending on the strength of the invading species. This transition can be understood by studying the bifurcation structure of the homoclinic wave. In particular, a time-periodic homoclinic wave (breathing wave) is born from a Hopf bifurcation and its unstable branch acts as a separator between the reflection and merging regimes. The same transition occurs in two spatial dimensions: the stable regular spiral associated to the homoclinic wave destabilizes, giving rise first to an oscillating breathing spiral and then breaking up producing a dynamic pattern characterized by many spiral cores. We find that these complex patterns are generated by the interaction of two planarly stable travelling waves, in contrast with many other well known cases of pattern formation where planar instability plays a central role.
",0,0,0,0,1,0 17300,Solitons with rings and vortex rings on solitons in nonlocal nonlinear media," Nonlocality is a key feature of many physical systems since it prevents a catastrophic collapse and a symmetry-breaking azimuthal instability of intense wave beams in a bulk self-focusing nonlinear media. This opens up an intriguing perspective for stabilization of complex topological structures such as higher-order solitons, vortex rings and vortex ring-on-line complexes. Using direct numerical simulations, we find a class of cylindrically-symmetric $n$-th order spatial solitons having the intensity distribution with a central bright spot surrounded by $n$ bright rings of varying size. We investigate dynamical properties of these higher-order solitons in a media with thermal nonlocal nonlinear response. We show theoretically that a vortex complex of vortex ring and vortex line, carrying two independent winding numbers, can be created by perturbation of the stable optical vortex soliton in nonlocal nonlinear media. ",0,1,0,0,0,0 17301,Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection," We introduce Deep-HiTS, a rotation invariant convolutional neural network (CNN) model for classifying images of transients candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RF). We show that our CNN significantly outperforms the RF model reducing the error by almost half. Furthermore, for a fixed number of approximately 2,000 allowed false transient candidates per night we are able to reduce the miss-classified real transients by approximately 1/5. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. 
Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope (LSST). We have made all our code and data available to the community for the sake of allowing further developments and comparisons at this https URL. ",1,1,0,0,0,0 17302,"On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL"," In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies. In this chapter, we basically argue that if such classifiers, in their respective training phases, optimize a so-called surrogate loss, then it may also be valuable to compare the behavior of this loss on the test set, next to the regular classification error rate. It can provide us with an additional view on the classifiers' relative performances that error rates cannot capture. As an example, limited but convincing empirical results demonstrate that we may be able to find semi-supervised learning strategies that can guarantee performance improvements with increasing numbers of unlabeled data in terms of log-likelihood. In contrast, the latter may be impossible to guarantee for the classification error rate. ",1,0,0,1,0,0 17303,Do Reichenbachian Common Cause Systems of Arbitrary Finite Size Exist?," The principle of common cause asserts that positive correlations between causally unrelated events ought to be explained through the action of some shared causal factors. Reichenbachian common cause systems are probabilistic structures aimed at accounting for cases where correlations of the aforesaid sort cannot be explained through the action of a single common cause.
The existence of Reichenbachian common cause systems of arbitrary finite size for each pair of non-causally correlated events was allegedly demonstrated by Hofer-Szabó and Rédei in 2006. This paper shows that their proof is logically deficient, and we propose an improved proof. ",1,1,0,1,0,0 17304,Co-evolution of nodes and links: diversity driven coexistence in cyclic competition of three species," When three species compete cyclically in a well-mixed, stochastic system of $N$ individuals, extinction is known to typically occur at times scaling as the system size $N$. This happens, for example, in rock-paper-scissors games or conserved Lotka-Volterra models in which every pair of individuals can interact on a complete graph. Here we show that if the competing individuals also have a ""social temperament"" to be either introverted or extroverted, leading them to cut or add links, respectively, then long-lived states in which all species coexist can occur when both introverts and extroverts are present. These states are non-equilibrium quasi-steady states, maintained by a subtle balance between species competition and network dynamics. Remarkably, much of the phenomenology is embodied in a mean-field description. However, an intuitive understanding of why diversity stabilizes the co-evolving node and link dynamics remains an open issue. ",0,0,0,0,1,0 17305,Online Learning with an Almost Perfect Expert," We study the multiclass online learning problem where a forecaster makes a sequence of predictions using the advice of $n$ experts. Our main contribution is to analyze the regime where the best expert makes at most $b$ mistakes and to show that when $b = o(\log_4{n})$, the expected number of mistakes made by the optimal forecaster is at most $\log_4{n} + o(\log_4{n})$. We also describe an adversary strategy showing that this bound is tight and that the worst case is attained for binary prediction.
",0,0,0,1,0,0 17306,Actively Learning what makes a Discrete Sequence Valid," Deep learning techniques have been hugely successful for traditional supervised and unsupervised machine learning problems. In large part, these techniques solve continuous optimization problems. Recently however, discrete generative deep learning models have been successfully used to efficiently search high-dimensional discrete spaces. These methods work by representing discrete objects as sequences, for which powerful sequence-based deep models can be employed. Unfortunately, these techniques are significantly hindered by the fact that these generative models often produce invalid sequences. As a step towards solving this problem, we propose to learn a deep recurrent validator model. Given a partial sequence, our model learns the probability of that sequence occurring as the beginning of a full valid sequence. Thus this identifies valid versus invalid sequences and crucially it also provides insight about how individual sequence elements influence the validity of discrete objects. To learn this model we propose an approach inspired by seminal work in Bayesian active learning. On a synthetic dataset, we demonstrate the ability of our model to distinguish valid and invalid sequences. We believe this is a key step toward learning generative models that faithfully produce valid discrete objects. ",1,0,0,1,0,0 17307,Symmetries and conservation laws of Hamiltonian systems," In this paper we study the infinitesimal symmetries, Newtonoid vector fields, infinitesimal Noether symmetries and conservation laws of Hamiltonian systems. Using the dynamical covariant derivative and Jacobi endomorphism on the cotangent bundle we find the invariant equations of infinitesimal symmetries and Newtonoid vector fields and prove that the canonical nonlinear connection induced by a regular Hamiltonian can be determined by these symmetries. Finally, an example from optimal control theory is given. 
",0,0,1,0,0,0 17308,Fractional differential and fractional integral modified-Bloch equations for PFG anomalous diffusion and their general solutions," The studying of anomalous diffusion by pulsed field gradient (PFG) diffusion technique still faces challenges. Two different research groups have proposed modified Bloch equation for anomalous diffusion. However, these equations have different forms and, therefore, yield inconsistent results. The discrepancy in these reported modified Bloch equations may arise from different ways of combining the fractional diffusion equation with the precession equation where the time derivatives have different derivative orders and forms. Moreover, to the best of my knowledge, the general PFG signal attenuation expression including finite gradient pulse width (FGPW) effect for time-space fractional diffusion based on the fractional derivative has yet to be reported by other methods. Here, based on different combination strategy, two new modified Bloch equations are proposed, which belong to two significantly different types: a differential type based on the fractal derivative and an integral type based on the fractional derivative. The merit of the integral type modified Bloch equation is that the original properties of the contributions from linear or nonlinear processes remain unchanged at the instant of the combination. The general solutions including the FGPW effect were derived from these two equations as well as from two other methods: a method observing the signal intensity at the origin and the recently reported effective phase shift diffusion equation method. The relaxation effect was also considered. It is found that the relaxation behavior influenced by fractional diffusion based on the fractional derivative deviates from that of normal diffusion. The general solution agrees perfectly with continuous-time random walk (CTRW) simulations as well as reported literature results. 
The new modified Bloch equations are a valuable tool for describing PFG anomalous diffusion in NMR and MRI. ",0,1,0,0,0,0 17309,Change of the vortex core structure in two-band superconductors at impurity-scattering-driven $s_\pm/s_{++}$ crossover," We report a nontrivial transition in the core structure of vortices in two-band superconductors as a function of interband impurity scattering. We demonstrate that, in addition to singular zeros of the order parameter, the vortices there can acquire a circular nodal line around the singular point in one of the superconducting components. It results in the formation of the peculiar ""moat""-like profile in one of the superconducting gaps. The moat-core vortices occur generically in the vicinity of the impurity-induced crossover between $s_{\pm}$ and $s_{++}$ states. ",0,1,0,0,0,0 17310,Nearly Optimal Adaptive Procedure with Change Detection for Piecewise-Stationary Bandit," Multi-armed bandit (MAB) is a class of online learning problems where a learning agent aims to maximize its expected cumulative reward while repeatedly selecting to pull arms with unknown reward distributions. We consider a scenario where the reward distributions may change in a piecewise-stationary fashion at unknown time steps. We show that by incorporating a simple change-detection component with classic UCB algorithms to detect and adapt to changes, our so-called M-UCB algorithm can achieve a nearly optimal regret bound on the order of $O(\sqrt{MKT\log T})$, where $T$ is the number of time steps, $K$ is the number of arms, and $M$ is the number of stationary segments. Comparison with the best available lower bound shows that our M-UCB is nearly optimal in $T$ up to a logarithmic factor. We also compare M-UCB with the state-of-the-art algorithms in numerical experiments using a public Yahoo! dataset to demonstrate its superior performance.
",0,0,0,1,0,0 17311,An initial-boundary value problem of the general three-component nonlinear Schrodinger equation with a 4x4 Lax pair on a finite interval," We investigate the initial-boundary value problem for the general three-component nonlinear Schrodinger (gtc-NLS) equation with a 4x4 Lax pair on a finite interval by extending the Fokas unified approach. The solutions of the gtc-NLS equation can be expressed in terms of the solutions of a 4x4 matrix Riemann-Hilbert (RH) problem formulated in the complex k-plane. Moreover, the relevant jump matrices of the RH problem can be explicitly found via the three spectral functions arising from the initial data, the Dirichlet-Neumann boundary data. The global relation is also established to deduce two distinct but equivalent types of representations (i.e., one by using the large k of asymptotics of the eigenfunctions and another one in terms of the Gelfand-Levitan-Marchenko (GLM) method) for the Dirichlet and Neumann boundary value problems. Moreover, the relevant formulae for boundary value problems on the finite interval can reduce to ones on the half-line as the length of the interval approaches to infinity. Finally, we also give the linearizable boundary conditions for the GLM representation. ",0,1,1,0,0,0 17312,Deep Learning Microscopy," We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably better resolution, matching the performance of higher numerical aperture lenses, also significantly surpassing their limited field-of-view and depth-of-field. 
These results are transformative for various fields that use microscopy tools, including, e.g., the life sciences, where optical microscopy is considered one of the most widely used and deployed techniques. Beyond such applications, our presented approach is broadly applicable to other imaging modalities, also spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better and better as they continue to image specimens and establish new transformations among different modes of imaging. ",1,1,0,0,0,0 17313,Effects of pressure impulse and peak pressure of a shock wave on microjet velocity and the onset of cavitation in a microchannel," The development of needle-free injection systems utilizing high-speed microjets is of great importance to world healthcare. It is thus crucial to control the microjets, which are often induced by underwater shock waves. In this contribution, from a fluid-mechanics point of view, we experimentally investigate the effect of a shock wave on the velocity of a free surface (microjet) and underwater cavitation onset in a microchannel, focusing on the pressure impulse and peak pressure of the shock wave. The shock wave used had a non-spherically-symmetric peak pressure distribution and a spherically symmetric pressure impulse distribution [Tagawa et al., J. Fluid Mech., 2016, 808, 5-18]. First, we investigate the effect of the shock wave on the jet velocity by installing a narrow tube and a hydrophone in different configurations in a large water tank, and measuring the shock wave pressure and the jet velocity simultaneously. The results suggest that the jet velocity depends only on the pressure impulse of the shock wave. We then investigate the effect of the shock wave on the cavitation onset by taking measurements in an L-shaped microchannel. The results suggest that the probability of cavitation onset depends only on the peak pressure of the shock wave.
In addition, the jet velocity varies according to the presence or absence of cavitation. The above findings provide new insights for advancing a control method for high-speed microjets. ",0,1,0,0,0,0 17314,Clustering with Noisy Queries," In this paper, we initiate a rigorous theoretical study of clustering with noisy queries (or a faulty oracle). Given a set of $n$ elements, our goal is to recover the true clustering by asking a minimum number of pairwise queries to an oracle. The oracle can answer queries of the form: ""do elements $u$ and $v$ belong to the same cluster?"" -- the queries can be asked interactively (adaptive queries), or non-adaptively up-front, but its answer can be erroneous with probability $p$. In this paper, we provide the first information theoretic lower bound on the number of queries for clustering with a noisy oracle in both situations. We design novel algorithms that closely match this query complexity lower bound, even when the number of clusters is unknown. Moreover, we design computationally efficient algorithms both for the adaptive and non-adaptive settings. The problem captures/generalizes multiple application scenarios. It is directly motivated by the growing body of work that uses crowdsourcing for {\em entity resolution}, a fundamental and challenging data mining task aimed at identifying all records in a database referring to the same entity. Here the crowd represents the noisy oracle, and the number of queries directly relates to the cost of crowdsourcing. Another application comes from the problem of {\em sign edge prediction} in social networks, where social interactions can be both positive and negative, and one must identify the sign of all pair-wise interactions by querying a few pairs. Furthermore, clustering with a noisy oracle is intimately connected to correlation clustering, leading to improvements therein.
Finally, it introduces a new direction of study in the popular {\em stochastic block model} where one has an incomplete stochastic block model matrix to recover the clusters. ",1,0,0,1,0,0 17315,Divide-and-Conquer Checkpointing for Arbitrary Programs with No User Annotation," Classical reverse-mode automatic differentiation (AD) imposes only a small constant-factor overhead in operation count over the original computation, but has storage requirements that grow, in the worst case, in proportion to the time consumed by the original computation. This storage blowup can be ameliorated by checkpointing, a process that reorders application of classical reverse-mode AD over an execution interval to trade off space versus time. Application of checkpointing in a divide-and-conquer fashion to strategically chosen nested execution intervals can break classical reverse-mode AD into stages which can reduce the worst-case growth in storage from linear to sublinear. Doing this has been fully automated only for computations of particularly simple form, with checkpoints spanning execution intervals resulting from a limited set of program constructs. Here we show how the technique can be automated for arbitrary computations. The essential innovation is to apply the technique at the level of the language implementation itself, thus allowing checkpoints to span any execution interval. ",1,0,0,0,0,0 17316,Bow Ties in the Sky II: Searching for Gamma-ray Halos in the Fermi Sky Using Anisotropy," Many-degree-scale gamma-ray halos are expected to surround extragalactic high-energy gamma ray sources. These arise from the inverse Compton emission of an intergalactic population of relativistic electron/positron pairs generated by the annihilation of >100 GeV gamma rays on the extragalactic background light. These are typically anisotropic due to the jetted structure from which they originate or the presence of intergalactic magnetic fields.
Here we propose a novel method for detecting these inverse-Compton gamma-ray halos based upon this anisotropic structure. Specifically, we show that by stacking suitably defined angular power spectra instead of images it is possible to robustly detect gamma-ray halos with existing Fermi Large Area Telescope (LAT) observations for a broad class of intergalactic magnetic fields. Importantly, these are largely insensitive to systematic uncertainties within the LAT instrumental response or associated with contaminating astronomical sources. ",0,1,0,0,0,0 17317,Gain-loss-driven travelling waves in PT-symmetric nonlinear metamaterials," In this work we investigate a one-dimensional parity-time (PT)-symmetric magnetic metamaterial consisting of split-ring dimers having gain or loss. Employing a Melnikov analysis we study the existence of localized travelling waves, i.e. homoclinic or heteroclinic solutions. We find conditions under which the homoclinic or heteroclinic orbits persist. Our analytical results are found to be in good agreement with direct numerical computations. For the particular nonlinearity admitting travelling kinks, numerically we observe homoclinic snaking in the bifurcation diagram. The Melnikov analysis yields a good approximation to one of the boundaries of the snaking profile. ",0,1,0,0,0,0 17318,CapsuleGAN: Generative Adversarial Capsule Network," We present Generative Adversarial Capsule Network (CapsuleGAN), a framework that uses capsule networks (CapsNets) instead of the standard convolutional neural networks (CNNs) as discriminators within the generative adversarial network (GAN) setting, while modeling image data. We provide guidelines for designing CapsNet discriminators and the updated GAN objective function, which incorporates the CapsNet margin loss, for training CapsuleGAN models. 
We show that CapsuleGAN outperforms convolutional-GAN at modeling the image data distribution on MNIST and CIFAR-10 datasets, evaluated on the generative adversarial metric and at semi-supervised image classification. ",0,0,0,1,0,0 17319,sourceR: Classification and Source Attribution of Infectious Agents among Heterogeneous Populations," Zoonotic diseases are a major cause of morbidity and productivity losses in both human and animal populations. Identifying the source of food-borne zoonoses (e.g. an animal reservoir or food product) is crucial for the identification and prioritisation of food safety interventions. For many zoonotic diseases it is difficult to attribute human cases to sources of infection because there is little epidemiological information on the cases. However, microbial strain typing allows zoonotic pathogens to be categorised, and the relative frequencies of the strain types among the sources and in human cases allows inference on the likely source of each infection. We introduce sourceR, an R package for quantitative source attribution, aimed at food-borne diseases. It implements a fully joint Bayesian model using strain-typed surveillance data from both human cases and source samples, capable of identifying important sources of infection. The model measures the force of infection from each source, allowing for varying survivability, pathogenicity and virulence of pathogen strains, and varying abilities of the sources to act as vehicles of infection. A Bayesian non-parametric (Dirichlet process) approach is used to cluster pathogen strain types by epidemiological behaviour, avoiding model overfitting and allowing detection of strain types associated with potentially high 'virulence'. sourceR is demonstrated using Campylobacter jejuni isolate data collected in New Zealand between 2005 and 2008.
It enables straightforward attribution of cases of zoonotic infection to putative sources of infection by epidemiologists and public health decision makers. As sourceR develops, we intend it to become an important and flexible resource for food-borne disease attribution studies. ",0,0,0,1,0,0 17320,Low resistive edge contacts to CVD-grown graphene using a CMOS compatible metal," The exploitation of the excellent intrinsic electronic properties of graphene for device applications is hampered by a large contact resistance between the metal and graphene. The formation of edge contacts rather than top contacts is one of the most promising solutions for realizing low ohmic contacts. In this paper the fabrication and characterization of edge contacts to large area CVD-grown monolayer graphene by means of optical lithography using CMOS compatible metals, i.e. Nickel and Aluminum, are reported. Extraction of the contact resistance by the Transfer Line Method (TLM) as well as the direct measurement using Kelvin Probe Force Microscopy demonstrates a very low width specific contact resistance. ",0,1,0,0,0,0 17321,Uniqueness of planar vortex patch in incompressible steady flow," We investigate a steady planar flow of an ideal fluid in a bounded, simply connected domain and focus on the vortex patch problem with prescribed vorticity strength. There are two methods to deal with the existence of solutions for this problem: the vorticity method and the stream function method. A long-standing open problem is whether these two entirely different methods result in the same solution. In this paper, we will give a positive answer to this problem by studying the local uniqueness of the solutions. Another result obtained in this paper is that if the domain is convex, then the vortex patch problem has a unique solution.
",0,0,1,0,0,0 17322,An Equivalence of Fully Connected Layer and Convolutional Layer," This article demonstrates that the convolutional operation can be converted to a matrix multiplication, which is computed in the same way as a fully connected layer. The article helps beginners of neural networks understand how the fully connected layer and the convolutional layer work in the backend. To be concise and to make the article more readable, we only consider the linear case. It can be extended to the non-linear case easily by plugging a non-linear encapsulation into the values, i.e. $\sigma(x)$, denoted as $x^{\prime}$. 
",1,0,0,1,0,0 17324,When the Annihilator Graph of a Commutative Ring Is Planar or Toroidal?," Let $R$ be a commutative ring with identity, and let $Z(R)$ be the set of zero-divisors of $R$. The annihilator graph of $R$ is defined as the undirected graph $AG(R)$ with the vertex set $Z(R)^*=Z(R)\setminus\{0\}$, and two distinct vertices $x$ and $y$ are adjacent if and only if $ann_R(xy)\neq ann_R(x)\cup ann_R(y)$. In this paper, all rings whose annihilator graphs can be embedded in the plane or on the torus are classified. ",0,0,1,0,0,0 17325,Econometric modelling and forecasting of intraday electricity prices," In the following paper we analyse the ID$_3$-Price on the German Intraday Continuous Electricity Market using an econometric time series model. A multivariate approach is conducted for hourly and quarter-hourly products separately. We estimate the model using lasso and elastic net techniques and perform an out-of-sample very short-term forecasting study. The model's performance is compared with benchmark models and is discussed in detail. Forecasting results provide new insights into the German Intraday Continuous Electricity Market regarding its efficiency and into the ID$_3$-Price behaviour. The supplementary materials are available online. ",0,0,0,0,0,1 17326,Matrix-Based Characterization of the Motion and Wrench Uncertainties in Robotic Manipulators," Characterization of the uncertainty in robotic manipulators is the focus of this paper. Based on the random matrix theory (RMT), we propose uncertainty characterization schemes in which the uncertainty is modeled at the macro (system) level. This is different from the traditional approaches that model the uncertainty in the parametric space at the micro (state) level. We show that perturbing the system matrices rather than the state of the system provides unique advantages, especially for robotic manipulators. 
First, it requires only limited statistical information, which is effective when dealing with complex systems for which detailed information on their variability is not available. Second, the RMT-based models are aware of the system state and configuration, which are significant factors affecting the level of uncertainty in system behavior. In this study, in addition to the motion uncertainty analysis that was first proposed in our earlier work, we also develop an RMT-based model for the quantification of the static wrench uncertainty in multi-agent cooperative systems. This model is intended as an alternative to the elaborate parametric formulation when only rough bounds are available on the system parameters. We discuss how the RMT-based model becomes advantageous as the complexity of the system increases. We perform experimental studies on a KUKA youBot arm to demonstrate the superiority of the RMT-based motion uncertainty models. We show how these models outperform the traditional models built upon the Gaussianity assumption in capturing real-system uncertainty and providing accurate bounds on the state estimation errors. In addition, to experimentally support our wrench uncertainty quantification model, we study the behavior of a cooperative system of mobile robots. It is shown that one can rely on the less demanding RMT-based formulation and yet meet acceptable accuracy. ",1,0,0,1,0,0 17327,Good Similar Patches for Image Denoising," Patch-based denoising algorithms like BM3D have achieved outstanding performance. An important idea for the success of these methods is to exploit the recurrence of similar patches in an input image to estimate the underlying image structures. However, in these algorithms, the similar patches used for denoising are obtained via Nearest Neighbour Search (NNS) and are sometimes not optimal. First, due to the existence of noise, NNS can select similar patches with similar noise patterns to the reference patch. 
Second, the unreliable noisy pixels in digital images can bring a bias to the patch searching process and result in a loss of color fidelity in the final denoising result. We observe that given a set of good similar patches, their distribution is not necessarily centered at the noisy reference patch and can be approximated by a Gaussian component. Based on this observation, we present a patch searching method that clusters similar patch candidates into patch groups using Gaussian Mixture Model-based clustering, and selects the patch group that contains the reference patch as the final patches for denoising. We also use an unreliable pixel estimation algorithm to pre-process the input noisy images to further improve the patch searching. Our experiments show that our approach can better capture the underlying patch structures and can consistently enable the state-of-the-art patch-based denoising algorithms, such as BM3D, LPCA and PLOW, to better denoise images by providing them with patches found by our approach, without modifying these algorithms. ",1,0,0,0,0,0 17328,Ginzburg-Landau expansion in strongly disordered attractive Anderson-Hubbard model," We have studied disordering effects on the coefficients of the Ginzburg-Landau expansion in powers of the superconducting order parameter in the attractive Anderson-Hubbard model within the generalized $DMFT+\Sigma$ approximation. We consider the wide region of attractive potentials $U$, from the weak coupling region, where superconductivity is described by the BCS model, to the strong coupling region, where the superconducting transition is related to Bose-Einstein condensation (BEC) of compact Cooper pairs formed at temperatures essentially larger than the temperature of the superconducting transition, and the wide range of disorder, from weak to strong, where the system is in the vicinity of the Anderson transition. 
In the case of a semi-elliptic bare density of states, the influence of disorder upon the coefficients $A$ and $B$ before the square and the fourth power of the order parameter is universal for any value of the electron correlation and is related only to the general disorder-induced widening of the bare band (generalized Anderson theorem). Such universality is absent for the gradient term expansion coefficient $C$. In the usual theory of ""dirty"" superconductors the $C$ coefficient drops with the growth of disorder. In the BCS limit at strong disorder the coefficient $C$ is very sensitive to the effects of Anderson localization, which lead to its further drop with disorder growth up to the region of the Anderson insulator. In the region of the BCS-BEC crossover and in the BEC limit the coefficient $C$ and all related physical properties are weakly dependent on disorder. In particular, this leads to a relatively weak disorder dependence of both the penetration depth and the coherence length, as well as of the related slope of the upper critical magnetic field at the superconducting transition, in the region of very strong coupling. ",0,1,0,0,0,0 17329,Reallocating and Resampling: A Comparison for Inference," Simulation-based inference plays a major role in modern statistics, and often employs either reallocating (as in a randomization test) or resampling (as in bootstrapping). Reallocating mimics random allocation to treatment groups, while resampling mimics random sampling from a larger population; does it matter whether the simulation method matches the data collection method? Moreover, do the results differ for testing versus estimation? Here we answer these questions in a simple setting by exploring the distribution of a sample difference in means under a basic two-group design and four different scenarios: true random allocation, true random sampling, reallocating, and resampling. 
For testing a sharp null hypothesis, reallocating is superior in small samples, but reallocating and resampling are asymptotically equivalent. For estimation, resampling is generally superior, unless the effect is truly additive. Moreover, these results hold regardless of whether the data were collected by random sampling or random allocation. ",0,0,1,1,0,0 17330,An Efficient Algorithm for Bayesian Nearest Neighbours," K-Nearest Neighbours (k-NN) is a popular classification and regression algorithm, yet one of its main limitations is the difficulty in choosing the number of neighbours. We present a Bayesian algorithm to compute the posterior probability distribution for k given a target point within a data-set, efficiently and without the use of Markov Chain Monte Carlo (MCMC) methods or simulation - alongside an exact solution for distributions within the exponential family. The central idea is that data points around our target are generated by the same probability distribution, extending outwards over the appropriate, though unknown, number of neighbours. Once the data is projected onto a distance metric of choice, we can transform the choice of k into a change-point detection problem, for which there is an efficient solution: we recursively compute the probability of the last change-point as we move towards our target, and thus de facto compute the posterior probability distribution over k. Applying this approach to both a classification and a regression UCI data-set, we compare favourably and, most importantly, by removing the need for simulation, we are able to compute the posterior probability of k exactly and rapidly. As an example, the computational time for the Ripley data-set is a few milliseconds compared to a few hours when using an MCMC approach. 
",1,0,0,1,0,0 17331,In search of a new economic model determined by logistic growth," In this paper we extend the work by Ryuzo Sato devoted to the development of economic growth models within the framework of the Lie group theory. We propose a new growth model based on the assumption of logistic growth in factors. It is employed to derive new production functions and introduce a new notion of wage share. In the process it is shown that the new functions compare reasonably well against relevant economic data. The corresponding problem of maximization of profit under conditions of perfect competition is solved with the aid of one of these functions. In addition, it is explained in reasonably rigorous mathematical terms why Bowley's law no longer holds true in post-1960 data. ",0,0,1,0,0,0 17332,Limits on light WIMPs with a 1 kg-scale germanium detector at 160 eVee physics threshold at the China Jinping Underground Laboratory," We report results of a search for light weakly interacting massive particle (WIMP) dark matter from the CDEX-1 experiment at the China Jinping Underground Laboratory (CJPL). Constraints on WIMP-nucleon spin-independent (SI) and spin-dependent (SD) couplings are derived with a physics threshold of 160 eVee, from an exposure of 737.1 kg-days. The SI and SD limits extend the lower reach of light WIMPs to 2 GeV and improve over our earlier bounds at WIMP mass less than 6 GeV. ",0,1,0,0,0,0 17333,"A stellar census of the nearby, young 32 Orionis group"," The 32 Orionis group was discovered almost a decade ago and despite the fact that it represents the first northern, young (age ~ 25 Myr) stellar aggregate within 100 pc of the Sun ($d \simeq 93$ pc), a comprehensive survey for members and detailed characterisation of the group has yet to be performed. 
We present the first large-scale spectroscopic survey for new (predominantly M-type) members of the group after combining kinematic and photometric data to select candidates with Galactic space motion and positions in colour-magnitude space consistent with membership. We identify 30 new members, increasing the number of known 32 Ori group members by a factor of three and bringing the total number of identified members to 46, spanning spectral types B5 to L1. We also identify the lithium depletion boundary (LDB) of the group, i.e. the luminosity at which lithium remains unburnt in a coeval population. We estimate the age of the 32 Ori group independently using both isochronal fitting and LDB analyses and find it is essentially coeval with the {\beta} Pictoris moving group, with an age of $24\pm4$ Myr. Finally, we have also searched for circumstellar disc hosts utilising the AllWISE catalogue. Although we find no evidence for warm, dusty discs, we identify several stars with excess emission in the WISE W4-band at 22 {\mu}m. Based on the limited number of W4 detections we estimate a debris disc fraction of $32^{+12}_{-8}$ per cent for the 32 Ori group. ",0,1,0,0,0,0 17334,A High-Level Rule-based Language for Software Defined Network Programming based on OpenFlow," This paper proposes XML-Defined Network policies (XDNP), a new high-level language based on XML notation, to describe network control rules in Software Defined Network environments. We rely on existing OpenFlow controllers, specifically Floodlight, but the novelty of this project is to separate complicated language- and framework-specific APIs from policy descriptions. This separation makes it possible to extend the current work as a northbound higher-level abstraction that can support a wide range of controllers that are based on different programming languages. By this approach, we believe that network administrators can develop and deploy network control policies more easily and quickly. 
",1,0,0,0,0,0 17335,Domain Objects and Microservices for Systems Development: a roadmap," This paper discusses a roadmap to investigate whether Domain Objects are an adequate formalism to capture the peculiarity of microservice architecture, and to support software development from the early stages. It provides a survey of both Microservices and Domain Objects, and it discusses plans and reflections on how to investigate whether a modeling approach suited to adaptable service-based components can also be applied with success to the microservice scenario. ",1,0,0,0,0,0 17336,Stabilization of prethermal Floquet steady states in a periodically driven dissipative Bose-Hubbard model," We discuss the effect of dissipation on the heating which occurs in periodically driven quantum many-body systems. We especially focus on a periodically driven Bose-Hubbard model coupled to an energy and particle reservoir. Without dissipation, this model is known to undergo parametric instabilities which can be considered as an initial stage of heating. By taking the weak on-site interaction limit as well as the weak system-reservoir coupling limit, we find that parametric instabilities are suppressed if the dissipation is stronger than the on-site interaction strength, and stable steady states appear. Our results demonstrate that periodically driven systems can emit energy, which they absorb from the external driving, to the reservoir so that they can avoid heating. ",0,1,0,0,0,0 17337,Compressed Sensing using Generative Models," The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. 
Instead, we suppose that vectors lie near the range of a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. Our main theorem is that, if $G$ is $L$-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$ recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use $5$-$10$x fewer measurements than Lasso for the same accuracy. ",1,0,0,1,0,0 17338,Two-part models with stochastic processes for modelling longitudinal semicontinuous data: computationally efficient inference and modelling the overall marginal mean," Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. 
We describe our proposed efficient implementation procedure for the standard two-part model parameterisation, and for the case when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability. ",0,0,0,1,0,0 17339,Progressive Image Deraining Networks: A Better and Simpler Baseline," Along with the deraining performance improvement of deep networks, their structures and learning become more and more complicated and diverse, making it difficult to analyze the contribution of various network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both the stage-wise result and the original rainy image as input to each ResNet and finally output the prediction of the residual image. As for loss functions, single MSE or negative SSIM losses are sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering their simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at this https URL. 
",1,0,0,0,0,0 17340,Optimal Nonparametric Inference under Quantization," Statistical inference based on lossy or incomplete samples is of fundamental importance in research areas such as signal/image processing, medical image storage, remote sensing, and signal transmission. In this paper, we propose a nonparametric testing procedure based on quantized samples. In contrast to the classic nonparametric approach, our method lives on a coarse grid of sample information and is simple to use. Under mild technical conditions, we establish the asymptotic properties of the proposed procedures, including the asymptotic null distribution of the quantization test statistic as well as its minimax power optimality. Concrete quantizers are constructed for achieving the minimax optimality in practical use. Simulation results and a real data analysis are provided to demonstrate the validity and effectiveness of the proposed test. Our work bridges classical nonparametric inference to the modern lossy data setting. ",1,0,1,1,0,0 17341,Nearest neighbor imputation for general parameter estimation in survey sampling," Nearest neighbor imputation is popular for handling item nonresponse in survey sampling. In this article, we study the asymptotic properties of the nearest neighbor imputation estimator for general population parameters, including population means, proportions and quantiles. For variance estimation, the conventional bootstrap inference for matching estimators with a fixed number of matches has been shown to be invalid due to the nonsmooth nature of the matching estimator. We propose asymptotically valid replication variance estimation. The key strategy is to construct replicates of the estimator directly based on linear terms, instead of individual records of variables. A simulation study confirms that the new procedure provides valid variance estimation. 
",0,0,0,1,0,0 17342,Time-delay signature suppression in a chaotic semiconductor laser by fiber random grating induced distributed feedback," We demonstrate that a semiconductor laser perturbed by the distributed feedback from a fiber random grating can emit light chaotically without the time delay signature. A theoretical model is developed based on the Lang-Kobayashi model in order to numerically explore the chaotic dynamics of the laser diode subjected to the random distributed feedback. It is predicted that the random distributed feedback is superior to the single reflection feedback in suppressing the time-delay signature. In experiments, a massive number of feedbacks with randomly varied time delays induced by a fiber random grating introduce large numbers of external cavity modes into the semiconductor laser, leading to the high dimension of chaotic dynamics and thus the concealment of the time delay signature. The obtained time delay signature with the maximum suppression is 0.0088, which is the smallest to date. ",0,1,0,0,0,0 17343,SAFS: A Deep Feature Selection Approach for Precision Medicine," In this paper, we propose a new deep feature selection method based on deep architecture. Our method uses stacked auto-encoders for feature representation in higher-level abstraction. We developed and applied a novel feature learning approach to a specific precision medicine problem, which focuses on assessing and prioritizing risk factors for hypertension (HTN) in a vulnerable demographic subgroup (African-American). Our approach is to use deep learning to identify significant risk factors affecting left ventricular mass indexed to body surface area (LVMI) as an indicator of heart damage risk. The results show that our feature learning and representation approach leads to better results in comparison with others. 
",1,0,0,1,0,0 17344,Deep Reasoning with Multi-scale Context for Salient Object Detection," To detect and segment salient objects accurately, existing methods are usually devoted to designing complex network architectures to fuse powerful features from the backbone networks. However, they put much less effort into the saliency inference module and only use a few fully convolutional layers to perform saliency reasoning from the fused features. Should feature fusion strategies receive so much attention while saliency reasoning is largely ignored? In this paper, we find that the weakness of the saliency reasoning unit limits salient object detection performance, and claim that saliency reasoning after multi-scale convolutional feature fusion is critical. To verify our findings, we first extract multi-scale features with a fully convolutional network, and then directly reason from these comprehensive features using a deep yet light-weight network, modified from ShuffleNet, to quickly and precisely predict salient objects. Such a simple design is shown to be capable of reasoning from multi-scale saliency features as well as giving superior saliency detection performance with less computational cost. Experimental results show that our simple framework outperforms the best existing method with improvements of 2.3\% and 3.6\% in F-measure scores and a 2.8\% reduction in MAE score on the PASCAL-S, DUT-OMRON and SOD datasets respectively. ",1,0,0,0,0,0 17345,On Estimation of $L_{r}$-Norms in Gaussian White Noise Models," We provide a complete picture of asymptotically minimax estimation of $L_r$-norms (for any $r\ge 1$) of the mean in the Gaussian white noise model over Nikolskii-Besov spaces. In this regard, we complement the work of Lepski, Nemirovski and Spokoiny (1999), who considered the cases of $r=1$ (with poly-logarithmic gap between upper and lower bounds) and $r$ even (with asymptotically sharp upper and lower bounds) over Hölder spaces. 
We additionally consider the case of asymptotically adaptive minimax estimation and demonstrate a difference between even and non-even $r$ in terms of an investigator's ability to produce asymptotically adaptive minimax estimators without paying a penalty. ",1,0,1,1,0,0 17346,Secure communications with cooperative jamming: Optimal power allocation and secrecy outage analysis," This paper studies the secrecy rate maximization problem of a secure wireless communication system, in the presence of multiple eavesdroppers. The security of the communication link is enhanced through cooperative jamming, with the help of multiple jammers. First, a feasibility condition is derived to achieve a positive secrecy rate at the destination. Then, we solve the original secrecy rate maximization problem, which is not convex in terms of power allocation at the jammers. To circumvent this non-convexity, the achievable secrecy rate is approximated for a given power allocation at the jammers and the approximated problem is formulated into a geometric programming one. Based on this approximation, an iterative algorithm has been developed to obtain the optimal power allocation at the jammers. Next, we provide a bisection approach, based on one-dimensional search, to validate the optimality of the proposed algorithm. In addition, by assuming Rayleigh fading, the secrecy outage probability (SOP) of the proposed cooperative jamming scheme is analyzed. More specifically, a single-integral form expression for SOP is derived for the most general case as well as a closed-form expression for the special case of two cooperative jammers and one eavesdropper. Simulation results have been provided to validate the convergence and the optimality of the proposed algorithm as well as the theoretical derivations of the presented SOP analysis. 
",1,0,1,0,0,0 17347,Stochastic Calculus with respect to Gaussian Processes: Part I," Stochastic integration \textit{wrt} Gaussian processes has raised strong interest in recent years, motivated in particular by its applications in Internet traffic modeling, biomedicine and finance. The aim of this work is to define and develop a White Noise Theory-based anticipative stochastic calculus with respect to all Gaussian processes that have an integral representation over a real (possibly infinite) interval. Very rich, this class of Gaussian processes contains, among many others, Volterra processes (and thus fractional Brownian motion) as well as processes whose regularity varies over time (such as multifractional Brownian motion). A systematic comparison of the stochastic calculus (including the Itô formula) we provide here, to the ones given by Malliavin calculus in \cite{nualart,MV05,NuTa06,KRT07,KrRu10,LN12,SoVi14,LN12}, and by Itô stochastic calculus is also made. Not only our stochastic calculus fully generalizes and extends the ones originally proposed in \cite{MV05} and in \cite{NuTa06} for Gaussian processes, but also the ones proposed in \cite{ell,bosw,ben1} for fractional Brownian motion (\textit{resp.} in \cite{JLJLV1,JL13,LLVH} for multifractional Brownian motion). ",0,0,1,0,0,0 17348,Path-like integrals of length on surfaces of constant curvature," We naturally associate a measurable space of paths to a couple of orthogonal vector fields over a surface and we integrate the length function over it. This integral is interpreted as a natural continuous generalization of indirect influences on finite graphs and can be thought of as a tool to capture geometric information of the surface. As a byproduct we calculate volumes in different examples of spaces of paths. 
",0,0,1,0,0,0 17349,Automated Synthesis of Divide and Conquer Parallelism," This paper focuses on automated synthesis of divide-and-conquer parallelism, which is a common parallel programming skeleton supported by many cross-platform multithreaded libraries. The challenges of producing (manually or automatically) a correct divide-and-conquer parallel program from a given sequential code are two-fold: (1) assuming that individual worker threads execute a code identical to the sequential code, the programmer has to provide the extra code for dividing the tasks and combining the computation results, and (2) sometimes, the sequential code may not be usable as is, and may need to be modified by the programmer. We address both challenges in this paper. We present an automated synthesis technique for the case where no modifications to the sequential code are required, and we propose an algorithm for modifying the sequential code to make it suitable for parallelization when some modification is necessary. The paper presents theoretical results for when this {\em modification} is efficiently possible, and experimental evaluation of the technique and the quality of the produced parallel programs. ",1,0,0,0,0,0 17350,"Nikol'skiĭ, Jackson and Ul'yanov type inequalities with Muckenhoupt weights"," In the present work we prove a Nikol'skiĭ inequality for trigonometric polynomials and Ul'yanov type inequalities for functions in Lebesgue spaces with Muckenhoupt weights. A realization result and Jackson inequalities are obtained. Simultaneous approximation by polynomials is considered. Some uniform norm inequalities are transferred to the weighted Lebesgue space. ",0,0,1,0,0,0 17351,CosmoGAN: creating high-fidelity weak lensing convergence maps using Generative Adversarial Networks," Inferring model parameters from experimental data is a grand challenge in many sciences, including cosmology. 
This often relies critically on high fidelity numerical simulations, which are prohibitively computationally expensive. The application of deep learning techniques to generative modeling is renewing interest in using high dimensional density estimators as computationally inexpensive emulators of fully-fledged simulations. These generative models have the potential to make a dramatic shift in the field of scientific simulations, but for that shift to happen we need to study the performance of such generators in the precision regime needed for science applications. To this end, in this work we apply Generative Adversarial Networks to the problem of generating weak lensing convergence maps. We show that our generator network produces maps that are described by, with high statistical confidence, the same summary statistics as the fully simulated maps. ",1,1,0,0,0,0 17352,Gaussian approximation of maxima of Wiener functionals and its application to high-frequency data," This paper establishes an upper bound for the Kolmogorov distance between the maximum of a high-dimensional vector of smooth Wiener functionals and the maximum of a Gaussian random vector. As a special case, we show that the maximum of multiple Wiener-Itô integrals with common orders is well-approximated by its Gaussian analog in terms of the Kolmogorov distance if their covariance matrices are close to each other and the maximum of the fourth cumulants of the multiple Wiener-Itô integrals is close to zero. This may be viewed as a new kind of fourth moment phenomenon, which has attracted considerable attention in the recent studies of probability. This type of Gaussian approximation result has many potential applications to statistics. To illustrate this point, we present two statistical applications in high-frequency financial econometrics: One is the hypothesis testing problem for the absence of lead-lag effects and the other is the construction of uniform confidence bands for spot volatility. 
",0,0,1,1,0,0 17353,A Kronecker-type identity and the representations of a number as a sum of three squares," By considering a limiting case of a Kronecker-type identity, we obtain an identity found by both Andrews and Crandall. We then use the Andrews-Crandall identity to give a new proof of a formula of Gauss for the representations of a number as a sum of three squares. From the Kronecker-type identity, we also deduce Gauss's theorem that every positive integer is representable as a sum of three triangular numbers. ",0,0,1,0,0,0 17354,DeepTrend: A Deep Hierarchical Neural Network for Traffic Flow Prediction," In this paper, we consider the temporal pattern in traffic flow time series, and implement a deep learning model for traffic flow prediction. Detrending-based methods decompose the original flow series into trend and residual series, in which the trend describes the fixed temporal pattern in traffic flow and the residual series is used for prediction. Inspired by the detrending method, we propose DeepTrend, a deep hierarchical neural network for traffic flow prediction which considers and extracts the time-variant trend. DeepTrend has two stacked layers: an extraction layer and a prediction layer. The extraction layer, a fully connected layer, is used to extract the time-variant trend in traffic flow by feeding it the original flow series concatenated with the corresponding simple average trend series. The prediction layer, an LSTM layer, is used to make the flow prediction by feeding it the trend obtained from the output of the extraction layer and the calculated residual series. To make the model more effective, DeepTrend first needs to be pre-trained layer by layer and then fine-tuned over the entire network. Experiments show that DeepTrend can noticeably boost the prediction performance compared with some traditional prediction models and LSTM with detrending-based methods. 
",1,0,0,0,0,0 17355,A new approach to Kaluza-Klein Theory," We propose in this paper a new approach to the Kaluza-Klein idea of a five dimensional space-time unifying gravitation and electromagnetism, and its extension to higher-dimensional space-times. By considering a natural geometric definition of a matter fluid and abandoning the usual requirement of a Ricci-flat five dimensional space-time, we show that a unified geometrical frame can be set for gravitation and electromagnetism, giving, by projection on the classical 4-dimensional space-time, the known Einstein-Maxwell-Lorentz equations for charged fluids. Thus, although not introducing new physics, we get a very aesthetic presentation of classical physics in the spirit of general relativity. The usual physical concepts, such as mass, energy, charge, trajectory, Maxwell-Lorentz law, are shown to be only various aspects of the geometry, for example curvature, of space-time considered as a Lorentzian manifold; that is, no physical objects are introduced in space-time, no laws are given, everything is only geometry. We then extend these ideas to more than 5 dimensions, by considering spacetime as a generalization of a $(S^1\times W)$-fiber bundle, which we name a multi-fiber bundle, where $S^1$ is the circle and $W$ a compact manifold. We will use this geometric structure as a possible way to model or encode deviations from standard 4-dimensional General Relativity, or ""dark"" effects such as dark matter or energy. ",0,0,1,0,0,0 17356,Density of orbits of dominant regular self-maps of semiabelian varieties," We prove a conjecture of Medvedev and Scanlon in the case of regular morphisms of semiabelian varieties. 
That is, if $G$ is a semiabelian variety defined over an algebraically closed field $K$ of characteristic $0$, and $\varphi\colon G\to G$ is a dominant regular self-map of $G$ which is not necessarily a group homomorphism, we prove that one of the following holds: either there exists a non-constant rational fibration preserved by $\varphi$, or there exists a point $x\in G(K)$ whose $\varphi$-orbit is Zariski dense in $G$. ",0,0,1,0,0,0 17357,Asymptotic coverage probabilities of bootstrap percentile confidence intervals for constrained parameters," The asymptotic behaviour of the commonly used bootstrap percentile confidence interval is investigated when the parameters are subject to linear inequality constraints. We concentrate on the important one- and two-sample problems with data generated from general parametric distributions in the natural exponential family. The focus of this paper is on quantifying the coverage probabilities of the parametric bootstrap percentile confidence intervals, in particular their limiting behaviour near boundaries. We propose a local asymptotic framework to study this subtle coverage behaviour. Under this framework, we discover that when the true parameters are on, or close to, the restriction boundary, the asymptotic coverage probabilities can always exceed the nominal level in the one-sample case; however, they can be, remarkably, both under and over the nominal level in the two-sample case. Using illustrative examples, we show that the results provide theoretical justification and guidance on applying the bootstrap percentile method to constrained inference problems. ",0,0,1,1,0,0 17358,Correlations and enlarged superconducting phase of $t$-$J_\perp$ chains of ultracold molecules on optical lattices," We compute physical properties across the phase diagram of the $t$-$J_\perp$ chain with long-range dipolar interactions, which describe ultracold polar molecules on optical lattices. 
Our results, obtained with the density-matrix renormalization group (DMRG), indicate that superconductivity is enhanced when the Ising component $J_z$ of the spin-spin interaction and the charge component $V$ are tuned to zero, and even further by the long-range dipolar interactions. At low densities, a substantially larger spin gap is obtained. We provide evidence that long-range interactions lead to algebraically decaying correlation functions despite the presence of a gap. Although this has recently been observed in other long-range interacting spin and fermion models, the correlations in our case have the peculiar property of having a small and continuously varying exponent. We construct simple analytic models and arguments to understand the most salient features. ",0,1,0,0,0,0 17359,MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks," We introduce MinimalRNN, a new recurrent neural network architecture that achieves performance comparable to the popular gated RNNs with a simplified structure. It employs minimal updates within the RNN, which not only leads to efficient learning and testing but, more importantly, to better interpretability and trainability. We demonstrate that by endorsing the more restrictive update rule, MinimalRNN learns disentangled RNN states. We further examine the learning dynamics of different RNN structures using input-output Jacobians, and show that MinimalRNN is able to capture longer range dependencies than existing RNN architectures. ",0,0,0,1,0,0 17360,Boolean quadric polytopes are faces of linear ordering polytopes," Let $BQP(n)$ be a boolean quadric polytope and $LOP(m)$ a linear ordering polytope. It is shown that $BQP(n)$ is linearly isomorphic to a face of $LOP(2n)$. 
",1,0,0,0,0,0 17361,Sparse Matrix Code Dependence Analysis Simplification at Compile Time," Analyzing array-based computations to determine data dependences is useful for many applications including automatic parallelization, race detection, computation and communication overlap, verification, and shape analysis. For sparse matrix codes, array data dependence analysis is made more difficult by the use of index arrays that make it possible to store only the nonzero entries of the matrix (e.g., in A[B[i]], B is an index array). Here, dependence analysis is often stymied by such indirect array accesses due to the values of the index array not being available at compile time. Consequently, many dependences cannot be proven unsatisfiable or determined until runtime. Nonetheless, index arrays in sparse matrix codes often have properties such as monotonicity of index array elements that can be exploited to reduce the amount of runtime analysis needed. In this paper, we contribute a formulation of array data dependence analysis that includes encoding index array properties as universally quantified constraints. This makes it possible to leverage existing SMT solvers to determine whether such dependences are unsatisfiable and significantly reduces the number of dependences that require runtime analysis in a set of eight sparse matrix kernels. Another contribution is an algorithm for simplifying the remaining satisfiable data dependences by discovering equalities and/or subset relationships. These simplifications are essential to make a runtime-inspection-based approach feasible. ",1,0,0,0,0,0 17362,ICA based on the data asymmetry," Independent Component Analysis (ICA) - one of the basic tools in data analysis - aims to find a coordinate system in which the components of the data are independent. Most of existing methods are based on the minimization of the function of fourth-order moment (kurtosis). Skewness (third-order moment) has received much less attention. 
In this paper we present a competitive approach to ICA based on the Split Gaussian distribution, which is well adapted to asymmetric data. Consequently, we obtain a method which works better than the classical approaches, especially in the case when the underlying density is not symmetric, which is a typical situation in the color distribution in images. ",0,0,1,1,0,0 17363,Solid hulls of weighted Banach spaces of analytic functions on the unit disc with exponential weights," We study weighted $H^\infty$ spaces of analytic functions on the open unit disc in the case of non-doubling weights, which decrease rapidly with respect to the boundary distance. We characterize the solid hulls of such spaces and give quite explicit representations of them in the case of the most natural exponentially decreasing weights. ",0,0,1,0,0,0 17364,Line bundles defined by the Schwarz function," Cauchy and exponential transforms are characterized, and constructed, as canonical holomorphic sections of certain line bundles on the Riemann sphere defined in terms of the Schwarz function. A well known natural connection between Schwarz reflection and line bundles defined on the Schottky double of a planar domain is briefly discussed in the same context. ",0,0,1,0,0,0 17365,Collisional excitation of NH3 by atomic and molecular hydrogen," We report extensive theoretical calculations on the rotation-inversion excitation of interstellar ammonia (NH3) due to collisions with atomic and molecular hydrogen (both para- and ortho-H2). Close-coupling calculations are performed for total energies in the range 1-2000 cm-1 and rotational cross sections are obtained for all transitions among the lowest 17 and 34 rotation-inversion levels of ortho- and para-NH3, respectively. Rate coefficients are deduced for kinetic temperatures up to 200 K. 
Propensity rules for the three colliding partners are discussed and we also compare the new results to previous calculations for the spherically symmetrical He and para-H2 projectiles. Significant differences are found between the different sets of calculations. Finally, we test the impact of the new rate coefficients on the calibration of the ammonia thermometer. We find that the calibration curve is only weakly sensitive to the colliding partner and we confirm that the ammonia thermometer is robust. ",0,1,0,0,0,0 17366,Deterministic and Probabilistic Conditions for Finite Completability of Low-rank Multi-View Data," We consider the multi-view data completion problem, i.e., to complete a matrix $\mathbf{U}=[\mathbf{U}_1|\mathbf{U}_2]$ where the ranks of $\mathbf{U},\mathbf{U}_1$, and $\mathbf{U}_2$ are given. In particular, we investigate the fundamental conditions on the sampling pattern, i.e., locations of the sampled entries for finite completability of such a multi-view data given the corresponding rank constraints. In contrast with the existing analysis on Grassmannian manifold for a single-view matrix, i.e., conventional matrix completion, we propose a geometric analysis on the manifold structure for multi-view data to incorporate more than one rank constraint. We provide a deterministic necessary and sufficient condition on the sampling pattern for finite completability. We also give a probabilistic condition in terms of the number of samples per column that guarantees finite completability with high probability. Finally, using the developed tools, we derive the deterministic and probabilistic guarantees for unique completability. ",1,0,1,0,0,0 17367,Grid-forming Control for Power Converters based on Matching of Synchronous Machines," We consider the problem of grid-forming control of power converters in low-inertia power systems. 
Starting from an average-switch three-phase inverter model, we draw parallels to a synchronous machine (SM) model and propose a novel grid-forming converter control strategy which dwells upon the main characteristic of a SM: the presence of an internal rotating magnetic field. In particular, we augment the converter system with a virtual oscillator whose frequency is driven by the DC-side voltage measurement and which sets the converter pulse-width-modulation signal, thereby achieving exact matching between the converter in closed-loop and the SM dynamics. We then provide a sufficient condition assuring existence, uniqueness, and global asymptotic stability of equilibria in a coordinate frame attached to the virtual oscillator angle. By actuating the DC-side input of the converter we are able to enforce this sufficient condition. In the same setting, we highlight strict incremental passivity, droop, and power-sharing properties of the proposed framework, which are compatible with conventional requirements of power system operation. We subsequently adopt disturbance decoupling techniques to design additional control loops that regulate the DC-side voltage, as well as AC-side frequency and amplitude, while in the end validating them with numerical experiments. ",0,0,1,0,0,0 17368,Characterizing Dust Attenuation in Local Star-Forming Galaxies: Near-Infrared Reddening and Normalization," We characterize the near-infrared (NIR) dust attenuation for a sample of ~5500 local (z<0.1) star-forming galaxies and obtain an estimate of their average total-to-selective attenuation $k(\lambda)$. We utilize data from the United Kingdom Infrared Telescope (UKIRT) and the Two Micron All-Sky Survey (2MASS), which is combined with previously measured UV-optical data for these galaxies. 
The average attenuation curve is slightly lower in the far-UV than that of local starburst galaxies, by roughly 15%, but appears similar at longer wavelengths with a total-to-selective normalization at V-band of $R_V=3.67\substack{+0.44 \\ -0.35}$. Under the assumption of energy balance, the total attenuated energy inferred from this curve is found to be broadly consistent with the observed infrared dust emission ($L_{\rm{TIR}}$) in a small sample of local galaxies for which far-IR measurements are available. However, the significant scatter in this quantity among the sample may reflect large variations in the attenuation properties of individual galaxies. We also derive the attenuation curve for sub-populations of the main sample, separated according to mean stellar population age (via $D_n4000$), specific star formation rate, stellar mass, and metallicity, and find that they show only tentative trends with low significance, at least over the range which is probed by our sample. These results indicate that a single curve is reasonable for applications seeking to broadly characterize large samples of galaxies in the local Universe, while applications to individual galaxies would yield large uncertainties and are not recommended. ",0,1,0,0,0,0 17369,Sequential Checking: Reallocation-Free Data-Distribution Algorithm for Scale-out Storage," Using tape or optical devices for scale-out storage is one option for storing a vast amount of data. However, it is impossible or almost impossible to rewrite data with such devices. Thus, scale-out storage using such devices cannot use standard data-distribution algorithms because they rewrite data for moving between servers constituting the scale-out storage when the server configuration is changed. Even with rewritable devices, when server capacity is huge, moving data between servers is very hard whenever the server configuration is changed. 
In this paper, a data-distribution algorithm called Sequential Checking is proposed, which can be used for scale-out storage composed of devices that are hardly able to rewrite data. Sequential Checking 1) does not need to move data between servers when the server configuration is changed, 2) distributes data, the amount of which depends on the server's volume, 3) selects a unique server when a datum is written, and 4) selects candidate servers when a datum is read (there are few such servers in most cases) and finds among them the unique server that stores the newest datum. These basic characteristics were confirmed through proofs and simulations. Data can be read by accessing 1.98 servers on average from a storage comprising 256 servers under a realistic condition. Evaluations in a real environment also confirm that access time is acceptable. Sequential Checking makes scale-out storage using tape or optical devices, or using huge-capacity servers, realistic. 
RP achieves state-of-the-art noise estimation and F1, error, and AUC-PR for both MNIST and CIFAR datasets, regardless of the amount of noise, and performs similarly impressively when a large portion of training examples are noise drawn from a third distribution. To highlight, RP with a CNN classifier can predict if an MNIST digit is a ""one"" or ""not"" with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples. ",1,0,0,1,0,0 17371,code2vec: Learning Distributed Representations of Code," We present a neural model for representing snippets of code as continuous distributed vectors (""code embeddings""). The main idea is to represent a code snippet as a single fixed-length $\textit{code vector}$, which can be used to predict semantic properties of the snippet. This is performed by decomposing code into a collection of paths in its abstract syntax tree, and learning the atomic representation of each path $\textit{simultaneously}$ with learning how to aggregate a set of them. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 14M methods. We show that code vectors trained on this dataset can predict method names from files that were completely unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. Compared to previous techniques over the same dataset, our approach obtains a relative improvement of over 75%, being the first to successfully predict method names based on a large, cross-project corpus. Our trained model, visualizations and vector similarities are available as an interactive online demo at this http URL. The code, data, and trained models are available at this https URL. 
",1,0,0,1,0,0 17372,Learning a Local Feature Descriptor for 3D LiDAR Scans," Robust data association is necessary for virtually every SLAM system, and finding corresponding points is typically a preprocessing step for scan alignment algorithms. Traditionally, handcrafted feature descriptors were used for these problems, but recently learned descriptors have been shown to perform more robustly. In this work, we propose a local feature descriptor for 3D LiDAR scans. The descriptor is learned using a Convolutional Neural Network (CNN). Our proposed architecture consists of a Siamese network for learning a feature descriptor and a metric learning network for matching the descriptors. We also present a method for estimating local surface patches and obtaining ground-truth correspondences. In extensive experiments, we compare our learned feature descriptor with existing 3D local descriptors and report highly competitive results for multiple experiments in terms of matching accuracy and computation time. ",1,0,0,0,0,0 17373,Dynamical tides in exoplanetary systems containing Hot Jupiters: confronting theory and observations," We study the effect of dynamical tides associated with the excitation of gravity waves in an interior radiative region of the central star on orbital evolution in observed systems containing Hot Jupiters. We consider WASP-43, Ogle-tr-113, WASP-12, and WASP-18 which contain stars on the main sequence (MS). For these systems there are observational estimates regarding the rate of change of the orbital period. We also investigate Kepler-91 which contains an evolved giant star. We adopt the formalism of Ivanov et al. for calculating the orbital evolution. For the MS stars we determine expected rates of orbital evolution under different assumptions about the amount of dissipation acting on the tides, estimate the effect of stellar rotation for the two most rapidly rotating stars and compare results with observations. 
All cases apart from possibly WASP-43 are consistent with a regime in which gravity waves are damped during their propagation over the star. However, at present this is not definitive as observational errors are large. We find that, although it is expected to apply to Kepler-91, linear radiative damping cannot explain this dissipation regime applying to MS stars. Thus, a nonlinear mechanism may be needed. Kepler-91 is found to be such that the time scale for evolution of the star is comparable to that for the orbit. This implies that significant orbital circularisation may have occurred through tides acting on the star. Quasi-static tides, stellar winds, hydrodynamic drag and tides acting on the planet have likely played a minor role. ",0,1,0,0,0,0 17374,Metastability versus collapse following a quench in attractive Bose-Einstein condensates," We consider a Bose-Einstein condensate (BEC) with attractive two-body interactions in a cigar-shaped trap, initially prepared in its ground state for a given negative scattering length, which is quenched to a larger absolute value of the scattering length. Using the mean-field approximation, we compute numerically, for an experimentally relevant range of aspect ratios and initial strengths of the coupling, two critical values of quench: one corresponds to the weakest attraction strength the quench to which causes the system to collapse before completing even a single return from the narrow configuration (""perihelion"") in its breathing cycle. The other is a similar critical point for the occurrence of collapse before completing two returns. In the latter case, we also compute the limiting value, as we keep increasing the strength of the post-quench attraction towards its critical value, of the time interval between the first two perihelia. We also use a Gaussian variational model to estimate the critical quenched attraction strength below which the system is stable against the collapse for long times. 
These time intervals and critical attraction strengths---apart from being fundamental properties of nonlinear dynamics of self-attractive BECs---may provide clues to the design of upcoming experiments that are trying to create robust BEC breathers. ",0,1,0,0,0,0 17375,A similarity criterion for sequential programs using truth-preserving partial functions," The execution of sequential programs allows them to be represented using mathematical functions formed by the composition of statements following one after the other. Each such statement is in itself a partial function, which allows only inputs satisfying a particular Boolean condition to carry forward the execution, and hence the composition of such functions (as a result of sequential execution of the statements) strengthens the valid set of input state variables for the program to complete its execution and halt successfully. With this thought in mind, this paper tries to study a particular class of partial functions, which tend to preserve the truth of two given Boolean conditions whenever the state variables satisfying one are mapped through such functions into a domain of state variables satisfying the other. The existence of such maps allows us to study isomorphism between different programs, based not only on their structural characteristics (e.g. the kind of programming constructs used and the overall input-output transformation), but also on the nature of the computation performed on seemingly different inputs. Consequently, we can now relate programs which perform a given type of computation, like a loop counting down indefinitely, without caring about the input sets they work on individually or the set of statements each program contains. 
We derive laws of large numbers for the sampler output, by relating randomized subsampling to distributional invariance: Assuming an invariance holds is tantamount to assuming the sample has been generated by a specific algorithm. That in turn yields a notion of ergodicity. Sampling algorithms induce model classes---graphon models, sparse generalizations of exchangeable graphs, and random multigraphs with exchangeable edges can all be obtained in this manner, and we specialize our results to a number of examples. One class of sampling algorithms emerges as special: Roughly speaking, those defined as limits of random transformations drawn uniformly from certain sequences of groups. Some known pathologies of network models based on graphons are explained as a form of selection bias. ",0,0,1,1,0,0 17377,Taylor coefficients of non-holomorphic Jacobi forms and applications," In this paper, we prove modularity results for Taylor coefficients of certain non-holomorphic Jacobi forms. It is well-known that Taylor coefficients of holomorphic Jacobi forms are quasimodular forms. However, there has recently been wide interest in Taylor coefficients of non-holomorphic Jacobi forms, arising for example in combinatorics. Here, we show that such coefficients still inherit modular properties. We then work out the precise spaces in which these coefficients lie for two examples. ",0,0,1,0,0,0 17378,Beamspace SU-MIMO for Future Millimeter Wave Wireless Communications," For future networks (i.e., the fifth generation (5G) wireless networks and beyond), millimeter-wave (mmWave) communication with large available unlicensed spectrum is a promising technology that enables gigabit multimedia applications. Thanks to the short wavelength of mmWave radio, massive antenna arrays can be packed into the limited dimensions of mmWave transceivers. 
Therefore, with directional beamforming (BF), both mmWave transmitters (MTXs) and mmWave receivers (MRXs) are capable of supporting multiple beams in 5G networks. However, for the transmission between an MTX and an MRX, most works have only considered a single beam, which means that they do not exploit the full potential of mmWave. Furthermore, the connectivity of single-beam transmission can easily be blocked. In this context, we propose a single-user multi-beam concurrent transmission scheme for future mmWave networks with multiple reflected paths. Based on spatial spectrum reuse, the scheme can be described as a multiple-input multiple-output (MIMO) technique in beamspace (i.e., in the beam-number domain). Moreover, this study investigates the challenges and potential solutions for implementing this scheme, including multi-beam selection, cooperative beam tracking, multi-beam power allocation and synchronization. The theoretical and numerical results show that the proposed beamspace SU-MIMO can substantially improve the achievable rate of the transmission between an MTX and an MRX and, meanwhile, can maintain the connectivity. 
We evaluate our method on the Animals with Attributes and Caltech-UCSD Birds 200-2011 datasets with a wide range of applications, including zero- and few-shot image recognition and retrieval, from inductive to transductive settings. Empirically, we show that our framework improves over the current state of the art on many of the considered tasks. ",1,0,0,0,0,0 17380,Quantitative estimates of the surface habitability of Kepler-452b," Kepler-452b is currently the best example of an Earth-size planet in the habitable zone of a sun-like star, a type of planet whose number of detections is expected to increase in the future. Searching for biosignatures in the supposedly thin atmospheres of these planets is a challenging goal that requires a careful selection of the targets. Under the assumption of a rocky-dominated nature for Kepler-452b, we considered it as a test case to calculate a temperature-dependent habitability index, $h_{050}$, designed to maximize the potential presence of biosignature-producing activity (Silva et al.\ 2016). The surface temperature has been computed for a broad range of climate factors using a climate model designed for terrestrial-type exoplanets (Vladilo et al.\ 2015). After fixing the planetary data according to the experimental results (Jenkins et al.\ 2015), we changed the surface gravity, CO$_2$ abundance, surface pressure, orbital eccentricity, rotation period, axis obliquity and ocean fraction within the range of validity of our model. For most choices of parameters we find habitable solutions with $h_{050}>0.2$ only for CO$_2$ partial pressure $p_\mathrm{CO_2} \lesssim 0.04$\,bar. At this limiting value of CO$_2$ abundance the planet is still habitable if the total pressure is $p \lesssim 2$\,bar. In all cases the habitability drops for eccentricity $e \gtrsim 0.3$. Changes of rotation period and obliquity affect the habitability through their impact on the equator-pole temperature difference rather than on the mean global temperature. 
We calculated the variation of $h_{050}$ resulting from the luminosity evolution of the host star for a wide range of input parameters. Only a small set of parameter combinations yields habitability-weighted lifetimes $\gtrsim 2$\,Gyr, sufficiently long to develop atmospheric biosignatures still detectable at the present time. ",0,1,0,0,0,0 17381,Design and implementation of dynamic logic gates and R-S flip-flop using quasiperiodically driven Murali-Lakshmanan-Chua circuit," We report the propagation of a square wave signal in a quasi-periodically driven Murali-Lakshmanan-Chua (QPDMLC) circuit system. It is observed that signal propagation is possible only above a certain threshold strength of the square wave or digital signal, and all the values above the threshold amplitude are termed the 'region of signal propagation'. Then, we extend this region of signal propagation to perform various logical operations like AND/NAND/OR/NOR and hence it is also designated the 'region of logical operation'. Based on this region, we propose implementing the dynamic logic gates, namely AND/NAND/OR/NOR, which can be decided by the asymmetrical input square waves without altering the system parameters. Further, we show that a single QPDMLC system will simultaneously produce two outputs which are complementary to each other. As a result, a single QPDMLC system yields either AND as well as NAND or OR as well as NOR gates simultaneously. Then we combine the corresponding two QPDMLC systems in a cross-coupled way and report that its dynamics mimics that of a fundamental R-S flip-flop circuit. All these phenomena have been explained with analytical solutions of the circuit equations characterizing the system and finally the results are compared with the corresponding numerical and experimental analysis. 
",0,1,0,0,0,0 17382,"Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects"," We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioned on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision. ",0,0,0,1,0,0 17383,Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks," We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. The descriptor is produced by a convolutional network that is trained to embed geometrically and semantically similar points close to one another in descriptor space. The network processes surface neighborhoods around points on a shape that are captured at multiple scales by a succession of progressively zoomed-out views, taken from carefully selected camera positions. We leverage two extremely large sources of data to train our network. First, since our network processes rendered views in the form of 2D images, we repurpose architectures pre-trained on massive image datasets. 
Second, we automatically generate a synthetic dense point correspondence dataset by non-rigid alignment of corresponding shape parts in a large collection of segmented 3D models. As a result of these design choices, our network effectively encodes multi-scale local context and fine-grained surface detail. Our network can be trained to produce either category-specific descriptors or more generic descriptors by learning from multiple shape categories. Once trained, at test time, the network extracts local descriptors for shapes without requiring any part segmentation as input. Our method can produce effective local descriptors even for shapes whose category is unknown or different from those used during training. We demonstrate through several experiments that our learned local descriptors are more discriminative than state-of-the-art alternatives, and are effective in a variety of shape analysis applications. ",1,0,0,0,0,0 17384,Alternating minimization for dictionary learning with random initialization," We present theoretical guarantees for an alternating minimization algorithm for the dictionary learning/sparse coding problem. The dictionary learning problem is to factorize vector samples $y^{1},y^{2},\ldots, y^{n}$ into an appropriate basis (dictionary) $A^*$ and sparse vectors $x^{1*},\ldots,x^{n*}$. Our algorithm is a simple alternating minimization procedure that switches between $\ell_1$ minimization and gradient descent in alternate steps. Dictionary learning and specifically alternating minimization algorithms for dictionary learning are well studied both theoretically and empirically. However, in contrast to previous theoretical analyses for this problem, we replace the condition on the operator norm (that is, the largest magnitude singular value) of the true underlying dictionary $A^*$ with a condition on the matrix infinity norm (that is, the entry of largest magnitude). 
This not only allows us to get convergence rates for the error of the estimated dictionary measured in the matrix infinity norm, but also ensures that a random initialization will provably converge to the global optimum. Our guarantees are under a reasonable generative model that allows for dictionaries with growing operator norms, and can handle an arbitrary level of overcompleteness, while having sparsity that is information-theoretically optimal. We also establish upper bounds on the sample complexity of our algorithm. ",1,0,0,1,0,0 17385,Optimal Transmission Line Switching under Geomagnetic Disturbances," In recent years, there have been increasing concerns about how geomagnetic disturbances (GMDs) impact electrical power systems. Geomagnetically-induced currents (GICs) can saturate transformers, induce hot-spot heating and increase reactive power losses. These effects can potentially cause catastrophic damage to transformers and severely impact the ability of a power system to deliver power. To address this problem, we develop a model of GIC impacts on power systems that includes 1) GIC thermal capacity of transformers as a function of normal Alternating Current (AC) and 2) reactive power losses as a function of GIC. We use this model to derive an optimization problem that protects power systems from GIC impacts through line switching, generator redispatch, and load shedding. We employ state-of-the-art convex relaxations of AC power flow equations to lower bound the objective. We demonstrate the approach on a modified RTS96 system and the UIUC 150-bus system and show that line switching is an effective means to mitigate GIC impacts. We also provide a sensitivity analysis of optimal switching decisions with respect to GMD direction. 
",1,0,0,0,0,0 17386,Image Forgery Localization Based on Multi-Scale Convolutional Neural Networks," In this paper, we propose to utilize Convolutional Neural Networks (CNNs) and the segmentation-based multi-scale analysis to locate tampered areas in digital images. First, to deal with color input sliding windows of different scales, a unified CNN architecture is designed. Then, we elaborately design the training procedures of CNNs on sampled training patches. With a set of robust multi-scale tampering detectors based on CNNs, complementary tampering possibility maps can be generated. Last but not least, a segmentation-based method is proposed to fuse the maps and generate the final decision map. By exploiting the benefits of both the small-scale and large-scale analyses, the segmentation-based multi-scale analysis can lead to a performance leap in forgery localization of CNNs. Numerous experiments are conducted to demonstrate the effectiveness and efficiency of our method. ",1,0,0,0,0,0 17387,The QKP limit of the quantum Euler-Poisson equation," In this paper, we consider the derivation of the Kadomtsev-Petviashvili (KP) equation for cold ion-acoustic wave in the long wavelength limit of the two-dimensional quantum Euler-Poisson system, under different scalings for varying directions in the Gardner-Morikawa transform. It is shown that the types of the KP equation depend on the scaled quantum parameter $H>0$. The QKP-I is derived for $H>2$, QKP-II for $07$), slow slip events play a major role in accommodating tectonic motion on plate boundaries. These slip transients are the slow release of built-up tectonic stress that are geodetically imaged as a predominantly aseismic rupture, which is smooth in both time and space. We demonstrate here that large slow slip events are in fact a cluster of short-duration slow transients. 
Using a dense catalog of low-frequency earthquakes as a guide, we investigate the $M_w7.5$ slow slip event that occurred in 2006 along the subduction interface 40~km beneath Guerrero, Mexico. We show that while the long-period surface displacement as recorded by GPS suggests a six-month duration, motion in the direction of tectonic release occurs only sporadically over 55 days, and its surface signature is attenuated by rapid relocking of the plate interface. These results demonstrate that our current conceptual model of slow and continuous rupture is an artifact of low-resolution geodetic observations of a superposition of small, clustered slip events. Our proposed description of slow slip as a cluster of slow transients implies that we systematically overestimate the duration $T$ and underestimate the moment magnitude $M$ of large slow slip events. ",0,1,0,0,0,0 17392,General $N$-solitons and their dynamics in several nonlocal nonlinear Schrödinger equations," General $N$-solitons in three recently proposed nonlocal nonlinear Schrödinger equations are presented. These nonlocal equations include the reverse-space, reverse-time, and reverse-space-time nonlinear Schrödinger equations, which are nonlocal reductions of the Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy. It is shown that general $N$-solitons in these different equations can be derived from the same Riemann-Hilbert solutions of the AKNS hierarchy, except that symmetry relations on the scattering data are different for these equations. This Riemann-Hilbert framework allows us to identify new types of solitons with novel eigenvalue configurations in the spectral plane. Dynamics of $N$-solitons in these equations is also explored. In all three nonlocal equations, a generic feature of their solutions is repeated collapsing. In addition, multi-solitons can behave very differently from fundamental solitons and may not correspond to a nonlinear superposition of fundamental solitons. 
",0,1,0,0,0,0 17393,Revisiting wireless network jamming by SIR-based considerations and Multiband Robust Optimization," We revisit the mathematical models for wireless network jamming introduced by Commander et al.: we first point out the strong connections with classical wireless network design and then we propose a new model based on the explicit use of signal-to-interference quantities. Moreover, to address the intrinsic uncertain nature of the jamming problem and tackle the peculiar right-hand-side (RHS) uncertainty of the problem, we propose an original robust cutting-plane algorithm drawing inspiration from Multiband Robust Optimization. Finally, we assess the performance of the proposed cutting-plane algorithm by experiments on realistic network instances. ",1,0,1,0,0,0 17394,New models for symbolic data analysis," Symbolic data analysis (SDA) is an emerging area of statistics based on aggregating individual-level data into group-based distributional summaries (symbols), and then developing statistical methods to analyse them. It is ideal for analysing large and complex datasets, and has immense potential to become a standard inferential technique in the near future. However, existing SDA techniques are either non-inferential, do not easily permit meaningful statistical models, are unable to distinguish between competing models, or are based on simplifying assumptions that are known to be false. Further, the procedure for constructing symbols from the underlying data is erroneously not considered relevant to the resulting statistical analysis. In this paper we introduce a new general method for constructing likelihood functions for symbolic data based on a desired probability model for the underlying classical data, while only observing the distributional summaries. 
This approach resolves many of the conceptual and practical issues with current SDA methods, opens the door to new classes of symbol design and construction, and develops SDA into a viable tool to enable and improve upon classical data analyses, particularly for very large and complex datasets. This work creates a new direction for SDA research, which we illustrate through several real and simulated data analyses. ",0,0,0,1,0,0 17395,Soft Methodology for Cost-and-error Sensitive Classification," Many real-world data mining applications need varying costs for different types of classification errors and thus call for cost-sensitive classification algorithms. Existing algorithms for cost-sensitive classification are successful in terms of minimizing the cost, but can result in a high error rate as the trade-off. The high error rate holds back the practical use of those algorithms. In this paper, we propose a novel cost-sensitive classification methodology that takes both the cost and the error rate into account. The methodology, called soft cost-sensitive classification, is established from a multicriteria optimization problem of the cost and the error rate, and can be viewed as regularizing cost-sensitive classification with the error rate. The simple methodology allows immediate improvements of existing cost-sensitive classification algorithms. Experiments on the benchmark and the real-world data sets show that our proposed methodology indeed achieves lower test error rates and similar (sometimes lower) test costs than existing cost-sensitive classification algorithms. We also demonstrate that the methodology can be extended for considering the weighted error rate instead of the original error rate. This extension is useful for tackling unbalanced classification problems. 
",1,0,0,0,0,0 17396,Raman LIDARs and atmospheric calibration for the Cherenkov Telescope Array," The Cherenkov Telescope Array (CTA) is the next generation of Imaging Atmospheric Cherenkov Telescopes. It will reach a sensitivity and energy resolution never before obtained by any other high-energy gamma-ray experiment. Understanding the systematic uncertainties in general will be a crucial issue for the performance of CTA. It is well known that atmospheric conditions contribute particularly to this aspect. Within the CTA consortium several groups are currently building Raman LIDARs to be installed on the two sites. Raman LIDARs are devices composed of a powerful laser that shoots into the atmosphere, a collector that gathers the backscattered light from molecules and aerosols, a photo-sensor, an optical module that spectrally selects wavelengths of interest, and a read-out system. Unlike currently used elastic LIDARs, they can help reduce the systematic uncertainties of the molecular and aerosol components of the atmosphere to <5% so that CTA can achieve its energy resolution requirements of <10% uncertainty at 1 TeV. All the Raman LIDARs in this work have design features that make them different from typical Raman LIDARs used in atmospheric science and are characterized by large collecting mirrors (2.5 m^2) and reduced acquisition time. They provide both multiple elastic and Raman read-out channels and a custom-made optics design. In this paper, the motivation for Raman LIDARs, the design and the current status of these technologies are described. ",0,1,0,0,0,0 17397,Generalized notions of sparsity and restricted isometry property. Part II: Applications," The restricted isometry property (RIP) is a universal tool for data recovery. We explore the implications of the RIP in the framework of generalized sparsity and group measurements introduced in the Part I paper. 
It turns out that for a given measurement instrument the number of measurements for RIP can be improved by optimizing over families of Banach spaces. Second, we investigate the preservation of the difference of two sparse vectors, which is not trivial in generalized models. Third, we extend the RIP of partial Fourier measurements at optimal scaling of the number of measurements with random sign to far more general group-structured measurements. Lastly, we also obtain the RIP in infinite dimension in the context of Fourier measurement concepts with sparsity naturally replaced by smoothness assumptions. ",0,0,0,1,0,0 17398,Mellin-Meijer-kernel density estimation on $\mathbb{R}^+$," Nonparametric kernel density estimation is a very natural procedure which simply makes use of the smoothing power of the convolution operation. Yet, it performs poorly when the density of a positive variable is to be estimated (boundary issues, spurious bumps in the tail). So various extensions of the basic kernel estimator allegedly suitable for $\mathbb{R}^+$-supported densities, such as those using Gamma or other asymmetric kernels, abound in the literature. Those, however, are not based on any valid smoothing operation analogous to the convolution, which typically leads to inconsistencies. By contrast, in this paper a kernel estimator for $\mathbb{R}^+$-supported densities is defined by making use of the Mellin convolution, the natural analogue of the usual convolution on $\mathbb{R}^+$. From there, a very transparent theory flows and leads to a new type of asymmetric kernels strongly related to Meijer's $G$-functions. The numerous pleasant properties of this `Mellin-Meijer-kernel density estimator' are demonstrated in the paper. Its pointwise and $L_2$-consistency (with optimal rate of convergence) is established for a large class of densities, including densities unbounded at 0 and showing power-law decay in their right tail. 
Its practical behaviour is investigated further through simulations and some real data analyses. ",0,0,1,1,0,0 17399,Gene Ontology (GO) Prediction using Machine Learning Methods," We applied machine learning to predict whether a gene is involved in axon regeneration. We extracted 31 features from different databases and trained five machine learning models. Our optimal model, a Random Forest Classifier with 50 submodels, yielded a test score of 85.71%, which is 4.1% higher than the baseline score. We concluded that our models have some predictive capability. Similar methodology and features could be applied to predict other Gene Ontology (GO) terms. ",1,0,0,1,0,0 17400,Dimension Spectra of Lines," This paper investigates the algorithmic dimension spectra of lines in the Euclidean plane. Given any line L with slope a and vertical intercept b, the dimension spectrum sp(L) is the set of all effective Hausdorff dimensions of individual points on L. We draw on Kolmogorov complexity and geometrical arguments to show that if the effective Hausdorff dimension dim(a, b) is equal to the effective packing dimension Dim(a, b), then sp(L) contains a unit interval. We also show that, if the dimension dim(a, b) is at least one, then sp(L) is infinite. Together with previous work, this implies that the dimension spectrum of any line is infinite. ",1,0,0,0,0,0 17401,Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search," Inference in log-linear models scales linearly in the size of output space in the worst-case. This is often a bottleneck in natural language processing and computer vision tasks when the output space is feasibly enumerable but very large. We propose a method to perform inference in log-linear models with sublinear amortized cost. 
Our idea hinges on using Gumbel random variable perturbations and a pre-computed Maximum Inner Product Search data structure to access the most likely elements in sublinear amortized time. Our method yields provable runtime and accuracy guarantees. Further, we present empirical experiments on ImageNet and Word Embeddings showing significant speedups for sampling, inference, and learning in log-linear models. ",1,0,0,1,0,0 17402,Temperature Dependence of Magnetic Excitations: Terahertz Magnons above the Curie Temperature," When an ordered spin system of a given dimensionality undergoes a second-order phase transition, the dependence of the order parameter, i.e. magnetization, on temperature can be well described by thermal excitations of elementary collective spin excitations (magnons). However, the behavior of magnons themselves, as a function of temperature and across the transition temperature TC, remains an open question. Utilizing spin-polarized high-resolution electron energy loss spectroscopy we monitor the high-energy (terahertz) magnons, excited in an ultrathin ferromagnet, as a function of temperature. We show that the magnons' energy and lifetime decrease with temperature. The temperature-induced renormalization of the magnons' energy and lifetime depends on the wave vector. We provide quantitative results on the temperature-induced damping and discuss possible mechanisms, e.g., multi-magnon scattering. A careful investigation of physical quantities determining the magnons' propagation indicates that terahertz magnons sustain their propagating character even at temperatures far above TC. ",0,1,0,0,0,0 17403,On stable solitons and interactions of the generalized Gross-Pitaevskii equation with PT- and non-PT-symmetric potentials," We report the bright solitons of the generalized Gross-Pitaevskii (GP) equation with some types of physically relevant parity-time- (PT-) and non-PT-symmetric potentials. 
We find that the constant momentum coefficient can modulate the linear stability and complicated transverse power-flows (not always from gain toward loss) of nonlinear modes. However, the varying momentum coefficient Gamma(x) can modulate both unbroken linear PT-symmetric phases and stability of nonlinear modes. Particularly, the nonlinearity can excite the unstable linear mode (i.e., broken linear PT-symmetric phase) to stable nonlinear modes. Moreover, we also find stable bright solitons in the presence of a non-PT-symmetric harmonic-Gaussian potential. The interactions of two bright solitons are also illustrated in PT-symmetric potentials. Finally, we consider nonlinear modes and transverse power-flows in the three-dimensional (3D) GP equation with the generalized PT-symmetric Scarf-II potential. ",0,1,1,0,0,0 17404,Mechanical properties of borophene films: A reactive molecular dynamics investigation," The most recent experimental advances could provide ways for the fabrication of several atomic thick and planar forms of boron atoms. For the first time, we explore the mechanical properties of five types of boron films with various vacancy ratios ranging from 0.1 to 0.15, using molecular dynamics simulations with the ReaxFF force field. It is found that the Young's modulus and tensile strength decrease with increasing the temperature. We found that boron sheets exhibit an anisotropic mechanical response due to the different arrangement of atoms along the armchair and zigzag directions. At room temperature, the 2D Young's modulus and fracture stress of these five sheets are about 63 N/m and 12 N/m, respectively. In addition, the strains at tensile strength are about 9, 11, and 10 percent at 1, 300, and 600 K, respectively. This investigation not only reveals the remarkable stiffness of 2D boron, but also relates the mechanical properties of the boron sheets to the loading direction, temperature and atomic structures. 
",0,1,0,0,0,0 17405,Stronger selection can slow down evolution driven by recombination on a smooth fitness landscape," Stronger selection implies faster evolution---that is, the greater the force, the faster the change. This apparently self-evident proposition, however, is derived under the assumption that genetic variation within a population is primarily supplied by mutation (i.e.\ mutation-driven evolution). Here, we show that this proposition does not actually hold for recombination-driven evolution, i.e.\ evolution in which genetic variation is primarily created by recombination rather than mutation. By numerically investigating population genetics models of recombination, migration and selection, we demonstrate that stronger selection can slow down evolution on a perfectly smooth fitness landscape. Through simple analytical calculation, this apparently counter-intuitive result is shown to stem from two opposing effects of natural selection on the rate of evolution. On the one hand, natural selection tends to increase the rate of evolution by increasing the fixation probability of fitter genotypes. On the other hand, natural selection tends to decrease the rate of evolution by decreasing the chance of recombination between immigrants and resident individuals. As a consequence of these opposing effects, there is a finite selection pressure maximizing the rate of evolution. Hence, stronger selection can imply slower evolution if genetic variation is primarily supplied by recombination. ",0,1,0,0,0,0 17406,Differences Among Noninformative Stopping Rules Are Often Relevant to Bayesian Decisions," L.J. Savage once hoped to show that ""the superficially incompatible systems of ideas associated on the one hand with [subjective Bayesianism] and on the other hand with [classical statistics]...lend each other mutual support and clarification."" By 1972, however, he had largely ""lost faith in the devices"" of classical statistics. 
One aspect of those ""devices"" that he found objectionable is that differences among the ""stopping rules"" used to decide when to end an experiment, differences that are ""noninformative"" from a Bayesian perspective, can affect decisions made using a classical approach. Two experiments that produce the same data using different stopping rules seem to differ only in the intentions of the experimenters regarding whether or not they would have carried on if the data had been different, which seem irrelevant to the evidential import of the data and thus to facts about what actions the data warrant. I argue that classical and Bayesian ideas about stopping rules do in fact ""lend each other"" the kind of ""mutual support and clarification"" that Savage had originally hoped to find. They do so in a kind of case that is common in scientific practice, in which those who design an experiment have different interests from those who will make decisions in light of its results. I show that, in cases of this kind, Bayesian principles provide qualified support for the classical statistical practice of ""penalizing"" ""biased"" stopping rules. However, they require this practice in a narrower range of circumstances than classical principles do, and for different reasons. I argue that classical arguments for this practice are compelling in precisely the class of cases in which Bayesian principles also require it, and thus that we should regard Bayesian principles as clarifying classical statistical ideas about stopping rules rather than the reverse. ",0,0,1,1,0,0 17407,CNNs are Globally Optimal Given Multi-Layer Support," Stochastic Gradient Descent (SGD) is the central workhorse for training modern CNNs. Although it gives impressive empirical performance, it can be slow to converge. In this paper we explore a novel alternation strategy for training a CNN that offers substantial speedups during training. 
We make the following contributions: (i) we replace the ReLU non-linearity within a CNN with positive hard-thresholding, (ii) we reinterpret this non-linearity as a binary state vector making the entire CNN linear if the multi-layer support is known, and (iii) we demonstrate that under certain conditions a global optimum of the CNN can be found through local descent. We then employ a novel alternation strategy (between weights and support) for CNN training that leads to substantially faster convergence rates, has nice theoretical properties, and achieves state-of-the-art results across large-scale datasets (e.g. ImageNet) as well as other standard benchmarks. ",1,0,0,0,0,0 17408,The Kontsevich integral for bottom tangles in handlebodies," The Kontsevich integral is a powerful link invariant, taking values in spaces of Jacobi diagrams. In this paper, we extend the Kontsevich integral to construct a functor on the category of bottom tangles in handlebodies. This functor gives a universal finite type invariant of bottom tangles, and refines a functorial version of the Le-Murakami-Ohtsuki 3-manifold invariant for Lagrangian cobordisms of surfaces. ",0,0,1,0,0,0 17409,Commutativity theorems for groups and semigroups," In this note we prove a selection of commutativity theorems for various classes of semigroups. For instance, if in a separative or completely regular semigroup $S$ we have $x^p y^p = y^p x^p$ and $x^q y^q = y^q x^q$ for all $x,y\in S$ where $p$ and $q$ are relatively prime, then $S$ is commutative. In a separative or inverse semigroup $S$, if there exist three consecutive integers $i$ such that $(xy)^i = x^i y^i$ for all $x,y\in S$, then $S$ is commutative. Finally, if $S$ is a separative or inverse semigroup satisfying $(xy)^3=x^3y^3$ for all $x,y\in S$, and if the cubing map $x\mapsto x^3$ is injective, then $S$ is commutative. 
",0,0,1,0,0,0 17410,Content-based Approach for Vietnamese Spam SMS Filtering," Short Message Service (SMS) spam is a serious problem in Vietnam because of the availability of very cheap pre-paid SMS packages. There are some systems to detect and filter spam messages for English, most of which use machine learning techniques to analyze the content of messages and classify them. For Vietnamese, there is some research on spam email filtering but none focused on SMS. In this work, we propose the first system for filtering Vietnamese spam SMS. We first propose an appropriate preprocessing method, since existing tools for Vietnamese preprocessing cannot give good accuracy on our dataset. We then experiment with vector representations and classifiers to find the best model for this problem. Our system achieves an accuracy of 94% when labelling spam messages, while the misclassification rate of legitimate messages is relatively small, at only about 0.4%. This is an encouraging result compared to that for English and can serve as a strong baseline for future development of Vietnamese SMS spam prevention systems. ",1,0,0,0,0,0 17411,Measuring Software Performance on Linux," Measuring and analyzing the performance of software has become highly complex, owing to more advanced processor designs and the intricate interaction between user programs, the operating system, and the processor's microarchitecture. In this report, we summarize our experience of how the performance characteristics of software should be measured when running on a Linux operating system and a modern processor. In particular, (1) we provide a general overview of hardware and operating system features that may have a significant impact on timing and how they interact, (2) we identify sources of error that need to be controlled in order to obtain unbiased measurement results, and (3) we propose a measurement setup for Linux that minimizes errors.
Although not the focus of this report, we describe the measurement process using hardware performance counters, which can faithfully reflect the real bottlenecks on a given processor. Our experiments confirm that our measurement setup has a large impact on the results. More surprisingly, however, they also suggest that the impact of the setup can be negligible for certain analysis methods. Furthermore, we found that our setup maintains significantly better performance under background load conditions, which means it can be used to improve software in high-performance applications. ",1,0,0,0,0,0 17412,A general method for calculating lattice Green functions on the branch cut," We present a method for calculating the complex Green function $G_{ij} (\omega)$ at any real frequency $\omega$ between any two sites $i$ and $j$ on a lattice. Starting from numbers of walks on square, cubic, honeycomb, triangular, bcc, fcc, and diamond lattices, we derive Chebyshev expansion coefficients for $G_{ij} (\omega)$. The convergence of the Chebyshev series can be accelerated by constructing functions $f(\omega)$ that mimic the van Hove singularities in $G_{ij} (\omega)$ and subtracting their Chebyshev coefficients from the original coefficients. We demonstrate this explicitly for the square lattice and bcc lattice. Our algorithm achieves typical accuracies of 6--9 significant figures using 1000 series terms. ",0,1,0,0,0,0 17413,"Towards A Novel Unified Framework for Developing Formal, Network and Validated Agent-Based Simulation Models of Complex Adaptive Systems"," Literature on the modeling and simulation of complex adaptive systems (cas) has primarily advanced vertically in different scientific domains, with scientists developing a variety of domain-specific approaches and applications.
However, while cas researchers are inherently interested in an interdisciplinary comparison of models, to the best of our knowledge, there is currently no single unified framework for facilitating the development, comparison, communication and validation of models across different scientific domains. In this thesis, we propose first steps towards such a unified framework using a combination of agent-based and complex network-based modeling approaches and guidelines formulated in the form of a set of four levels of usage, which allow multidisciplinary researchers to adopt a suitable framework level on the basis of available data types, their research study objectives and expected outcomes, thus allowing them to better plan and conduct their respective research case studies. ",1,1,0,0,0,0 17414,Recover Fine-Grained Spatial Data from Coarse Aggregation," In this paper, we study a new type of spatial sparse recovery problem: to infer the fine-grained spatial distribution of certain density data in a region based only on the aggregate observations recorded for each of its subregions. One typical example of this spatial sparse recovery problem is to infer the spatial distribution of cellphone activity based on aggregate mobile traffic volumes observed at sparsely scattered base stations. We propose a novel Constrained Spatial Smoothing (CSS) approach, which exploits the local continuity that exists in many types of spatial data to perform sparse recovery via finite-element methods, while enforcing the aggregated observation constraints through an innovative use of the ADMM algorithm. We also improve the approach to further utilize additional geographical attributes. Extensive evaluations based on a large dataset of phone call records and a demographic dataset from the city of Milan show that our approach significantly outperforms various state-of-the-art approaches, including Spatial Spline Regression (SSR).
",1,0,0,0,0,0 17415,The Power Allocation Game on Dynamic Networks: Subgame Perfection," In the game theory literature, there appears to be little research on equilibrium selection for normal-form games with an infinite strategy space and discontinuous utility functions. Moreover, many existing selection methods are not applicable to games involving both cooperative and noncooperative scenarios (e.g., ""games on signed graphs""). With the purpose of equilibrium selection, the power allocation game developed in \cite{allocation}, which is a static resource allocation game on signed graphs, will be reformulated into an extensive form. Results about the subgame perfect Nash equilibria in the extensive-form game will be given. This appears to be the first time that subgame perfection based on time-varying graphs has been used for equilibrium selection in network games. This idea of subgame perfection proposed in the paper may be extrapolated to other network games, which will be illustrated with a simple example of congestion games. ",1,0,0,0,0,0 17416,MIMO Graph Filters for Convolutional Neural Networks," Superior performance and ease of implementation have fostered the adoption of Convolutional Neural Networks (CNNs) for a wide array of inference and reconstruction tasks. CNNs implement three basic blocks: convolution, pooling and pointwise nonlinearity. Since the first two operations are well-defined only on regular-structured data such as audio or images, applying CNNs to contemporary datasets where the information is defined on irregular domains is challenging. This paper investigates CNN architectures that operate on signals whose support can be modeled using a graph. Architectures that replace the regular convolution with a so-called linear shift-invariant graph filter have recently been proposed.
This paper goes one step further and, under the framework of multiple-input multiple-output (MIMO) graph filters, imposes additional structure on the adopted graph filters to obtain three new (more parsimonious) architectures. The proposed architectures result in a lower number of model parameters, reducing the computational complexity, facilitating the training, and mitigating the risk of overfitting. Simulations show that the proposed simpler architectures achieve performance similar to that of more complex models. ",0,0,0,1,0,0 17417,An Edge Driven Wavelet Frame Model for Image Restoration," Wavelet frame systems are known to be effective in capturing singularities from noisy and degraded images. In this paper, we introduce a new edge driven wavelet frame model for image restoration by approximating images as piecewise smooth functions. With an implicit representation of image singularity sets, the proposed model imposes different strengths of regularization on smooth and singular image regions and edges. The proposed edge driven model is robust to both image approximation and singularity estimation. The implicit formulation also enables an asymptotic analysis of the proposed models and a rigorous connection between the discrete model and a general continuous variational model. Finally, numerical results on image inpainting and deblurring show that the proposed model compares favorably against several popular image restoration models. ",1,0,1,0,0,0 17418,An exact algorithm exhibiting RS-RSB/easy-hard correspondence for the maximum independent set problem," A recently proposed exact algorithm for the maximum independent set problem is analyzed. The typical running time is improved exponentially in some parameter regions compared to simple binary search. The algorithm also overcomes the core transition point, where the conventional leaf removal algorithm fails, and works up to the replica symmetry breaking (RSB) transition point.
This suggests that a leaf removal core itself is not enough for typical hardness in the random maximum independent set problem, providing further evidence for RSB being the obstacle for algorithms in general. ",1,1,0,0,0,0 17419,Flexible Support for Fast Parallel Commutative Updates," Privatizing data is a useful strategy for increasing parallelism in a shared memory multithreaded program. Independent cores can compute independently on duplicates of shared data, combining their results at the end of their computations. Conventional approaches to privatization, however, rely on explicit static or dynamic memory allocation for duplicated state, increasing memory footprint and contention for cache resources, especially in shared caches. In this work, we describe CCache, a system for on-demand privatization of data manipulated by commutative operations. CCache garners the benefits of privatization, without the increase in memory footprint or cache occupancy. Each core in CCache dynamically privatizes commutatively manipulated data, operating on a copy. Periodically, or at the end of its computation, the core merges its value with the value resident in memory, and when all cores have merged, the in-memory copy contains the up-to-date value. We describe a low-complexity architectural implementation of CCache that extends a conventional multicore to support on-demand privatization without using additional memory for private copies. We evaluate CCache on several high-value applications, including a random access key-value store, clustering, breadth-first search and graph ranking, showing speedups of up to 3.2X. ",1,0,0,0,0,0 17420,Deep learning based supervised semantic segmentation of Electron Cryo-Subtomograms," Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for the 3D visualization of cellular structure and organization at submolecular resolution.
It enables analyzing the native structures of macromolecular complexes and their spatial organization inside single cells. However, due to the high degree of structural complexity and practical imaging limitations, systematic macromolecular structural recovery inside CECT images remains challenging. In particular, the recovery of a macromolecule is likely to be biased by its neighboring structures due to high molecular crowding. To reduce this bias, we introduce a novel 3D convolutional neural network inspired by the Fully Convolutional Network and Encoder-Decoder Architecture for the supervised segmentation of macromolecules of interest in subtomograms. Tests of our models on realistically simulated CECT data demonstrate that our new approach has significantly improved segmentation performance compared to our baseline approach. We also demonstrate that the proposed model has the ability to generalize, segmenting new structures that do not exist in the training data. ",0,0,0,1,1,0 17421,CNN-MERP: An FPGA-Based Memory-Efficient Reconfigurable Processor for Forward and Backward Propagation of Convolutional Neural Networks," Large-scale deep convolutional neural networks (CNNs) are widely used in machine learning applications. While CNNs involve huge complexity, VLSI (ASIC and FPGA) chips that deliver high-density integration of computational resources are regarded as a promising platform for CNN implementation. With massive parallelism of computational units, however, the external memory bandwidth, which is constrained by the pin count of the VLSI chip, becomes the system bottleneck. Moreover, VLSI solutions are usually regarded as lacking the flexibility to be reconfigured for the various parameters of CNNs. This paper presents CNN-MERP to address these issues.
CNN-MERP incorporates an efficient memory hierarchy that significantly reduces the bandwidth requirements through multiple optimizations, including on/off-chip data allocation, data flow optimization and data reuse. The proposed 2-level reconfigurability is utilized to enable fast and efficient reconfiguration, based on the control logic and the multiboot feature of the FPGA. As a result, an external memory bandwidth requirement of 1.94MB/GFlop is achieved, which is 55% lower than prior art. Under limited DRAM bandwidth, a system throughput of 1244GFlop/s is achieved on the Virtex UltraScale platform, which is 5.48 times higher than state-of-the-art FPGA implementations. ",1,0,0,0,0,0 17422,Enstrophy Cascade in Decaying Two-Dimensional Quantum Turbulence," We report evidence for an enstrophy cascade in large-scale point-vortex simulations of decaying two-dimensional quantum turbulence. Devising a method to generate quantum vortex configurations with kinetic energy narrowly localized near a single length scale, we find the dynamics to be well-characterised by a superfluid Reynolds number, $\mathrm{Re_s}$, that depends only on the number of vortices and the initial kinetic energy scale. Under free evolution the vortices exhibit features of a classical enstrophy cascade, including a $k^{-3}$ power-law kinetic energy spectrum, and a steady enstrophy flux associated with inertial transport to small scales. Clear signatures of the cascade emerge for $N\gtrsim 500$ vortices. Simulating up to very large Reynolds numbers ($N = 32,768$ vortices), additional features of the classical theory are observed: the Kraichnan-Batchelor constant is found to converge to $C' \approx 1.6$, and the width of the $k^{-3}$ range scales as $\mathrm{Re_s}^{1/2}$. The results support a universal phenomenology underpinning classical and quantum fluid turbulence.
",0,1,0,0,0,0 17423,Contextual Multi-armed Bandits under Feature Uncertainty," We study contextual multi-armed bandit problems under linear realizability of rewards and uncertainty (or noise) in features. For the case of identical noise on features across actions, we propose an algorithm, coined {\em NLinRel}, achieving an $O\left(T^{\frac{7}{8}} \left(\log{(dT)}+K\sqrt{d}\right)\right)$ regret bound for $T$ rounds, $K$ actions, and $d$-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including {\em NLinRel}, cannot achieve such sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an $O\left(T^{\frac23}\sqrt{\log d}\right)$ regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of {\em NLinRel}, coined {\em Universal-NLinRel}, for arbitrary feature distributions. It first runs {\em NLinRel} to find the `true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of {\em Universal-NLinRel} on both synthetic and real-world datasets. ",1,0,0,1,0,0 17424,Asymmetric Variational Autoencoders," Variational inference for latent variable models is prevalent in various machine learning problems, typically solved by maximizing the Evidence Lower Bound (ELBO) of the true data likelihood with respect to a variational distribution. However, freely enriching the family of variational distributions is challenging since the ELBO requires variational likelihood evaluations of the latent variables. In this paper, we propose a novel framework that enriches the variational family by incorporating auxiliary variables.
The resulting inference network does not require density evaluations for the auxiliary variables, and thus complex implicit densities over the auxiliary variables can be constructed by neural networks. It can be shown that the actual variational posterior of the proposed approach essentially models a rich probabilistic mixture of simple variational posteriors indexed by the auxiliary variables, so a flexible inference model can be built. Empirical evaluations on several density estimation tasks demonstrate the effectiveness of the proposed method. ",1,0,0,1,0,0 17425,Dynamics of homogeneous shear turbulence: A key role of the nonlinear transverse cascade in the bypass concept," To understand the self-sustenance of subcritical turbulence in spectrally stable shear flows, we performed direct numerical simulations of homogeneous shear turbulence for different aspect ratios of the flow domain and analyzed the dynamical processes in Fourier space. There are no exponentially growing modes in such flows, and the turbulence is energetically supported only by the linear growth of perturbation harmonics due to the shear flow non-normality. This non-normality-induced, or nonmodal, growth is anisotropic in spectral space, which, in turn, leads to anisotropy of nonlinear processes in this space. As a result, a transverse (angular) redistribution of harmonics in Fourier space appears to be the main nonlinear process in these flows, rather than direct or inverse cascades. We refer to this type of nonlinear redistribution as the nonlinear transverse cascade. It is demonstrated that the turbulence is sustained by a subtle interplay between the linear nonmodal growth and the nonlinear transverse cascade that exemplifies a well-known bypass scenario of subcritical turbulence. These two basic processes mainly operate at large length scales, comparable to the domain size.
Therefore, this central, small wave number area of Fourier space is crucial in the self-sustenance; we defined its size and labeled it as the vital area of turbulence. Outside the vital area, the nonmodal growth and the transverse cascade are of secondary importance. Although the cascades and the self-sustaining process of turbulence are qualitatively the same at different aspect ratios, the number of harmonics actively participating in this process varies, but always remains quite large. This implies that the self-sustenance of subcritical turbulence cannot be described by low-order models. ",0,1,0,0,0,0 17426,A taxonomy of learning dynamics in 2 x 2 games," Learning would be a convincing method to achieve coordination on an equilibrium. But does learning converge, and to what? We answer this question in generic 2-player, 2-strategy games, using Experience-Weighted Attraction (EWA), which encompasses many extensively studied learning algorithms. We exhaustively characterize the parameter space of EWA learning, for any payoff matrix, and we understand the generic properties that imply convergent or non-convergent behaviour in 2 x 2 games. Irrational choice and lack of incentives imply convergence to a mixed strategy in the centre of the strategy simplex, possibly far from the Nash Equilibrium (NE). In the opposite limit, in which the players quickly modify their strategies, the behaviour depends on the payoff matrix: (i) a strong discrepancy between the pure strategies describes dominance-solvable games, which show convergence to a unique fixed point close to the NE; (ii) a preference towards profiles of strategies along the main diagonal describes coordination games, with multiple stable fixed points corresponding to the NE; (iii) a cycle of best responses defines discoordination games, which commonly yield limit cycles or low-dimensional chaos. 
While it is well known that mixed strategy equilibria may be unstable, our approach is novel from several perspectives: we fully analyse EWA and provide explicit thresholds that define the onset of instability; we find an emerging taxonomy of the learning dynamics, without focusing on specific classes of games ex-ante; we show that chaos can occur even in the simplest games; we make a precise theoretical prediction that can be tested against data on experimental learning of discoordination games. ",0,1,0,0,0,0 17427,Dynamic attitude planning for trajectory tracking in underactuated VTOL UAVs," This paper addresses the trajectory tracking control problem for underactuated VTOL UAVs. According to the different actuation mechanisms, the most common UAV platforms can achieve only a partial decoupling of attitude and position tasks. Since position tracking is of utmost importance for applications involving aerial vehicles, we propose a control scheme in which position tracking is the primary objective. To this end, this work introduces the concept of attitude planner, a dynamical system through which the desired attitude reference is processed to guarantee the satisfaction of the primary objective: the attitude tracking task is considered as a secondary objective which can be realized as long as the desired trajectory satisfies specific trackability conditions. Two numerical simulations are performed by applying the proposed control law to a hexacopter with and without tilted propellers, which accounts for unmodeled dynamics and external disturbances not included in the control design model. ",1,0,0,0,0,0 17428,The JCMT Transient Survey: Data Reduction and Calibration Methods," Though there has been a significant amount of work investigating the early stages of low-mass star formation in recent years, the evolution of the mass assembly rate onto the central protostar remains largely unconstrained. 
Examining in depth the variation in this rate is critical to understanding the physics of star formation. Instabilities in the outer and inner circumstellar disk can lead to episodic outbursts. Observing these brightness variations at infrared or submillimetre wavelengths sets constraints on current accretion models. The JCMT Transient Survey is a three-year project dedicated to studying the continuum variability of deeply embedded protostars in eight nearby star-forming regions at a one-month cadence. We use the SCUBA-2 instrument to simultaneously observe these regions at wavelengths of 450 $\mu$m and 850 $\mu$m. In this paper, we present the data reduction techniques, image alignment procedures, and relative flux calibration methods for the 850 $\mu$m data. We compare the properties and locations of bright, compact emission sources fitted with Gaussians over time. Doing so, we achieve a spatial alignment of better than 1"" between the repeated observations and an uncertainty of 2-3\% in the relative peak brightness of significant, localised emission. This level of imaging performance is unprecedented in ground-based, single-dish submillimetre observations. Finally, we identify a few sources that show possible and confirmed brightness variations. These sources will be closely monitored and presented in further detail in additional studies throughout the duration of the survey. ",0,1,0,0,0,0 17429,"Synchronization Strings: Explicit Constructions, Local Decoding, and Applications"," This paper gives new results for synchronization strings, a powerful combinatorial object that allows one to efficiently deal with insertions and deletions in various communication settings: $\bullet$ We give a deterministic, linear time synchronization string construction, improving over an $O(n^5)$ time randomized construction. Independently of this work, a deterministic $O(n\log^2\log n)$ time construction was just put on arXiv by Cheng, Li, and Wu.
We also give a deterministic linear time construction of an infinite synchronization string, which was not known to be computable before. Both constructions are highly explicit, i.e., the $i^{th}$ symbol can be computed in $O(\log i)$ time. $\bullet$ This paper also introduces a generalized notion we call long-distance synchronization strings, which allow for local and very fast decoding. In particular, only $O(\log^3 n)$ time and access to logarithmically many symbols is required to decode any index. We give several applications for these results: $\bullet$ For any $\delta<1$ and $\epsilon>0$ we provide an insdel correcting code with rate $1-\delta-\epsilon$ which can correct any $O(\delta)$ fraction of insdel errors in $O(n\log^3n)$ time. This near linear computational efficiency is surprising given that we do not even know how to compute the (edit) distance between the decoding input and output in sub-quadratic time. We show that such codes can not only efficiently recover from a $\delta$ fraction of insdel errors but, similar to [Schulman, Zuckerman; TransInf'99], also from any $O(\delta/\log n)$ fraction of block transpositions and replications. $\bullet$ We show that this high explicitness and local decoding allow for infinite channel simulations with exponentially smaller memory and decoding time requirements. These simulations can be used to give the first near linear time interactive coding scheme for insdel errors. ",1,0,0,0,0,0 17430,Independent Component Analysis via Energy-based and Kernel-based Mutual Dependence Measures," We apply both distance-based (Jin and Matteson, 2017) and kernel-based (Pfister et al., 2016) mutual dependence measures to independent component analysis (ICA), and generalize dCovICA (Matteson and Tsay, 2017) to MDMICA, minimizing empirical dependence measures as an objective function in both deflation and parallel manners.
To solve this minimization problem, we introduce Latin hypercube sampling (LHS) (McKay et al., 2000) and a global optimization method, Bayesian optimization (BO) (Mockus, 1994), to improve the initialization of the Newton-type local optimization method. The performance of MDMICA is evaluated in various simulation studies and an image data example. When the ICA model is correct, MDMICA achieves competitive results compared to existing approaches. When the ICA model is misspecified, the estimated independent components are less mutually dependent than the observed components when using MDMICA, while they are prone to be even more mutually dependent than the observed components when using other approaches. ",0,0,0,1,0,0 17431,The local geometry of testing in ellipses: Tight control via localized Kolmogorov widths," We study the local geometry of testing a mean vector within a high-dimensional ellipse against a compound alternative. Given samples of a Gaussian random vector, the goal is to distinguish whether the mean is equal to a known vector within an ellipse, or equal to some other unknown vector in the ellipse. Such ellipse testing problems lie at the heart of several applications, including non-parametric goodness-of-fit testing, signal detection in cognitive radio, and regression function testing in reproducing kernel Hilbert spaces. While past work on such problems has focused on the difficulty in a global sense, we study difficulty in a way that is localized to each vector within the ellipse. Our main result is to give sharp upper and lower bounds on the localized minimax testing radius in terms of an explicit formula involving the Kolmogorov width of the ellipse intersected with a Euclidean ball.
When applied to particular examples, our general theorems yield interesting rates that were not known before: as a particular case, for testing in Sobolev ellipses of smoothness $\alpha$, we demonstrate rates that vary from $(\sigma^2)^{\frac{4 \alpha}{4 \alpha + 1}}$, corresponding to the classical global rate, to the faster rate $(\sigma^2)^{\frac{8 \alpha}{8 \alpha + 1}}$, achievable for vectors at favorable locations within the ellipse. We also show that the optimal test for this problem is achieved by a linear projection test that is based on an explicit lower-dimensional projection of the observation vector. ",0,0,1,1,0,0 17432,A Unified Optimization View on Generalized Matching Pursuit and Frank-Wolfe," Two of the most fundamental prototypes of greedy optimization are the matching pursuit and Frank-Wolfe algorithms. In this paper, we take a unified view on both classes of methods, leading to the first explicit convergence rates of matching pursuit methods in an optimization sense, for general sets of atoms. We derive sublinear ($1/t$) convergence for both classes on general smooth objectives, and linear convergence on strongly convex objectives, as well as a clear correspondence of algorithm variants. Our presented algorithms and rates are affine invariant, and do not need any incoherence or sparsity assumptions. ",1,0,0,1,0,0 17433,Subsampled Rényi Differential Privacy and Analytical Moments Accountant," We study the problem of subsampling in differential privacy (DP), a question that is the centerpiece of many successful differentially private machine learning algorithms. Specifically, we provide a tight upper bound on the Rényi Differential Privacy (RDP) (Mironov, 2017) parameters for algorithms that: (1) subsample the dataset, and then (2) apply a randomized mechanism M to the subsample, in terms of the RDP parameters of M and the subsampling probability parameter. Our results generalize the moments accounting technique, developed by Abadi et al.
(2016) for the Gaussian mechanism, to any subsampled RDP mechanism. ",0,0,0,1,0,0 17434,"Concurrency and Probability: Removing Confusion, Compositionally"," Assigning a satisfactory truly concurrent semantics to Petri nets with confusion and distributed decisions is a long standing problem, especially if one wants to fully replace nondeterminism with probability distributions and no stochastic structure is desired/allowed. Here we propose a general solution based on a recursive, static decomposition of (finite, occurrence) nets in loci of decision, called structural branching cells (s-cells). Each s-cell exposes a set of alternatives, called transactions, that can be equipped with a general probabilistic distribution. The solution is formalised as a transformation from a given Petri net to another net whose transitions are the transactions of the s-cells and whose places are the places of the original net, with some auxiliary structure for bookkeeping. The resulting net is confusion-free, namely if a transition is enabled, then all its conflicting alternatives are also enabled. Thus sets of conflicting alternatives can be equipped with probability distributions, while nonintersecting alternatives are purely concurrent and do not introduce any nondeterminism: they are Church-Rosser and their probability distributions are independent. The validity of the construction is witnessed by a tight correspondence result with the recent approach by Abbes and Benveniste (AB) based on recursively stopped configurations in event structures. Some advantages of our approach over AB's are that: i) s-cells are defined statically and locally in a compositional way, whereas AB's branching cells are defined dynamically and globally; ii) their recursively stopped configurations correspond to possible executions, but the existing concurrency is not made explicit. 
Instead, our resulting nets are equipped with an original concurrency structure exhibiting a so-called complete concurrency property. ",1,0,0,0,0,0 17435,Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration," We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network using raw images as input and generating robot arm trajectories, with the parameters shared across the tasks. The controller also combines VAE-GAN-based reconstruction with autoregressive multimodal action prediction. Our results demonstrate that it is possible to learn complex manipulation tasks, such as picking up a towel, wiping an object, and depositing the towel to its previous position, entirely from raw images with direct behavior cloning. We show that weight sharing and reconstruction-based regularization substantially improve generalization and robustness, and training on multiple tasks simultaneously increases the success rate on all tasks. ",1,0,0,0,0,0 17436,Dynamic Word Embeddings," We present a probabilistic language model for time-stamped text data which tracks the semantic evolution of individual words over time. The model represents words and contexts by latent trajectories in an embedding space. At each moment in time, the embedding vectors are inferred from a probabilistic version of word2vec [Mikolov et al., 2013]. These embedding vectors are connected in time through a latent diffusion process. We describe two scalable variational inference algorithms--skip-gram smoothing and skip-gram filtering--that allow us to train the model jointly over all times; thus learning on all data while simultaneously allowing word and context vectors to drift. 
Experimental results on three different corpora demonstrate that our dynamic model infers word embedding trajectories that are more interpretable and lead to higher predictive likelihoods than competing methods that are based on static models trained separately on time slices. ",0,0,0,1,0,0 17437,Multiple scattering effect on angular distribution and polarization of radiation by relativistic electrons in a thin crystal," The multiple scattering of ultra relativistic electrons in an amorphous matter leads to the suppression of the soft part of radiation spectrum (the Landau-Pomeranchuk-Migdal effect), and also can change essentially the angular distribution of the emitted photons. A similar effect must take place in a crystal for the coherent radiation of relativistic electron. The results of the theoretical investigation of angular distributions and polarization of radiation by a relativistic electron passing through a thin (in comparison with a coherence length) crystal at a small angle to the crystal axis are presented. The electron trajectories in crystal were simulated using the binary collision model which takes into account both coherent and incoherent effects at scattering. The angular distribution of radiation and polarization were calculated as a sum of radiation from each electron. It is shown that there are nontrivial angular distributions of the emitted photons and their polarization that are connected to the superposition of the coherent scattering of electrons by atomic rows (""doughnut scattering"" effect) and the suppression of radiation (similar to the Landau-Pomeranchuk-Migdal effect in an amorphous matter). It is also shown that circular polarization of radiation in the considered case is identically zero. 
",0,1,0,0,0,0 17438,Branched coverings of $CP^2$ and other basic 4-manifolds," We give necessary and sufficient conditions for a 4-manifold to be a branched covering of $CP^2$, $S^2\times S^2$, $S^2 \mathbin{\tilde\times} S^2$ and $S^3 \times S^1$, which are expressed in terms of the Betti numbers and the intersection form of the 4-manifold. ",0,0,1,0,0,0 17439,Instantaneous effects of photons on electrons in semiconductors," The photoelectric effect established by Einstein is well known, which indicates that electrons on lower energy levels can jump up to higher levels by absorbing photons, or jump down from higher levels to lower levels and give out photons1-3. However, how do photons act on electrons and further on atoms have kept unknown up to now. Here we show the results that photons collide on electrons with energy-transmission in semiconductors and pass their momenta to electrons, which make the electrons jump up from lower energy levels to higher levels. We found that (i) photons have rest mass of 7.287exp(-38) kg and 2.886exp(-35) kg, in vacuum and silicon respectively; (ii) excited by photons with energy of 1.12eV, electrons in silicon may jump up from the top of valance band to the bottom of conduction band with initial speed of 2.543exp(3) m/s and taking time of 4.977exp(-17) s; (iii) acted by photons with energy of 4.6eV, the atoms who lose electrons may be catapulted out of the semiconductors by the extruded neighbor atoms, and taking time of 2.224exp(-15) s. These results make reasonable explanation to rapid thermal annealing, laser ablation and laser cutting. ",0,1,0,0,0,0 17440,Mitigating the Impact of Speech Recognition Errors on Chatbot using Sequence-to-Sequence Model," We apply sequence-to-sequence model to mitigate the impact of speech recognition errors on open domain end-to-end dialog generation. We cast the task as a domain adaptation problem where ASR transcriptions and original text are in two different domains. 
In this paper, our proposed model includes two individual encoders, one for each domain, and makes their hidden states similar to ensure that the decoder predicts the same dialog text. The method shows that the sequence-to-sequence model can learn that an ASR transcription and its original text have the same meaning, and thereby eliminate the speech recognition errors. Experimental results on the Cornell movie dialog dataset demonstrate that the domain adaptation system helps the spoken dialog system generate responses more similar to the original text answers. ",1,0,0,0,0,0 17441,"Simultaneous 183 GHz H2O Maser and SiO Observations Towards Evolved Stars Using APEX SEPIA Band 5"," We investigate the use of 183 GHz H2O masers for characterization of the physical conditions and mass loss process in the circumstellar envelopes of evolved stars. We used APEX SEPIA Band 5 to observe the 183 GHz H2O line towards 2 Red Supergiant and 3 Asymptotic Giant Branch stars. Simultaneously, we observed lines in 28SiO v0, 1, 2 and 3, and for 29SiO v0 and 1. We detected the 183 GHz H2O line towards all the stars with peak flux densities greater than 100 Jy, including a new detection from VY CMa. Towards all 5 targets, the water line had indications of being due to maser emission and had higher peak flux densities than for the SiO lines. The SiO lines appear to originate from both thermal and maser processes. Comparison with simulations and models indicates that 183 GHz maser emission is likely to extend to greater radii in the circumstellar envelopes than SiO maser emission and to similar or greater radii than water masers at 22, 321 and 325 GHz. We speculate that a prominent blue-shifted feature in the W Hya 183 GHz spectrum is amplifying the stellar continuum, and is located at a similar distance from the star as mainline OH maser emission. 
From a comparison of the individual polarizations, we find that the SiO maser linear polarization fraction of several features exceeds the maximum fraction allowed under standard maser assumptions and requires strong anisotropic pumping of the maser transition and strongly saturated maser emission. The low polarization fraction of the H2O maser, however, fits with the expectation for a non-saturated maser. 183 GHz H2O masers can provide strong probes of the mass loss process of evolved stars. Higher angular resolution observations of this line using ALMA Band 5 will enable detailed investigation of the emission location in circumstellar envelopes and can also provide information on magnetic field strength and structure. ",0,1,0,0,0,0 17442,"What does the free energy principle tell us about the brain?"," The free energy principle has been proposed as a unifying theory of brain function. It is closely related to, and in some cases subsumes, earlier unifying ideas such as Bayesian inference, predictive coding, and active learning. This article clarifies these connections, teasing apart distinctive and shared predictions. ",0,0,0,0,1,0 17443,"Learning with Changing Features"," In this paper we study the setting where features are added or change interpretation over time, which has applications in multiple domains such as retail, manufacturing, and finance. In particular, we propose an approach to provably determine the time instant from which the new/changed features start becoming relevant with respect to an output variable in an agnostic (supervised) learning setting. We also suggest an efficient version of our approach which has the same asymptotic performance. Moreover, our theory also applies when we have more than one such change point. Independent post analysis of a change point identified by our method for a large retailer revealed that it corresponded in time with certain unflattering news stories about a brand that resulted in the change in customer behavior. 
We also applied our method to data from an advanced manufacturing plant, identifying the time instant from which downstream features became relevant. To the best of our knowledge, this is the first work that formally studies change point detection in a distribution independent agnostic setting, where the change point is based on the changing relationship between input and output. ",1,0,0,1,0,0 17444,"Estimation Considerations in Contextual Bandits"," Contextual bandit algorithms are sensitive to the estimation method of the outcome model as well as the exploration method used, particularly in the presence of rich heterogeneity or complex outcome models, which can lead to difficult estimation problems along the path of learning. We study a consideration for the exploration vs. exploitation framework that does not arise in multi-armed bandits but is crucial in contextual bandits: the way exploration and exploitation is conducted in the present affects the bias and variance in the potential outcome model estimation in subsequent stages of learning. We develop parametric and non-parametric contextual bandits that integrate balancing methods from the causal inference literature in their estimation to make it less prone to problems of estimation bias. We provide the first regret bound analyses for contextual bandits with balancing in the domain of linear contextual bandits that match the state of the art regret bounds. We demonstrate the strong practical advantage of balanced contextual bandits on a large number of supervised learning datasets and on a synthetic example that simulates model mis-specification and prejudice in the initial training data. Additionally, we develop contextual bandits with simpler assignment policies by leveraging sparse model estimation methods from the econometrics literature and demonstrate empirically that in the early stages they can improve the rate of learning and decrease regret. 
",1,0,0,1,0,0 17445,Using solar and load predictions in battery scheduling at the residential level," Smart solar inverters can be used to store, monitor and manage a home's solar energy. We describe a smart solar inverter system with battery which can either operate in an automatic mode or receive commands over a network to charge and discharge at a given rate. In order to make battery storage financially viable and advantageous to the consumers, effective battery scheduling algorithms can be employed. Particularly, when time-of-use tariffs are in effect in the region of the inverter, it is possible in some cases to schedule the battery to save money for the individual customer, compared to the ""automatic"" mode. Hence, this paper presents and evaluates the performance of a novel battery scheduling algorithm for residential consumers of solar energy. The proposed battery scheduling algorithm optimizes the cost of electricity over next 24 hours for residential consumers. The cost minimization is realized by controlling the charging/discharging of battery storage system based on the predictions for load and solar power generation values. The scheduling problem is formulated as a linear programming problem. We performed computer simulations over 83 inverters using several months of hourly load and PV data. The simulation results indicate that key factors affecting the viability of optimization are the tariffs and the PV to Load ratio at each inverter. Depending on the tariff, savings of between 1% and 10% can be expected over the automatic approach. The prediction approach used in this paper is also shown to out-perform basic ""persistence"" forecasting approaches. We have also examined the approaches for improving the prediction accuracy and optimization effectiveness. 
",1,0,0,0,0,0 17446,Towards Large-Pose Face Frontalization in the Wild," Despite recent advances in face recognition using deep learning, severe accuracy drops are observed for large pose variations in unconstrained environments. Learning pose-invariant features is one solution, but needs expensively labeled large-scale data and carefully designed feature learning algorithms. In this work, we focus on frontalizing faces in the wild under various head poses, including extreme profile views. We propose a novel deep 3D Morphable Model (3DMM) conditioned Face Frontalization Generative Adversarial Network (GAN), termed as FF-GAN, to generate neutral head pose face images. Our framework differs from both traditional GANs and 3DMM based modeling. Incorporating 3DMM into the GAN structure provides shape and appearance priors for fast convergence with less training data, while also supporting end-to-end training. The 3DMM-conditioned GAN employs not only the discriminator and generator loss but also a new masked symmetry loss to retain visual quality under occlusions, besides an identity loss to recover high frequency information. Experiments on face recognition, landmark localization and 3D reconstruction consistently show the advantage of our frontalization method on faces in the wild datasets. ",1,0,0,0,0,0 17447,Managing the Public to Manage Data: Citizen Science and Astronomy," Citizen science projects recruit members of the public as volunteers to process and produce datasets. These datasets must win the trust of the scientific community. The task of securing credibility involves, in part, applying standard scientific procedures to clean these datasets. However, effective management of volunteer behavior also makes a significant contribution to enhancing data quality. 
Through a case study of Galaxy Zoo, a citizen science project set up to generate datasets based on volunteer classifications of galaxy morphologies, this paper explores how those involved in running the project manage volunteers. The paper focuses on how methods for crediting volunteer contributions motivate volunteers to provide higher quality contributions and to behave in a way that better corresponds to statistical assumptions made when combining volunteer contributions into datasets. These methods have made a significant contribution to the success of the project in securing trust in these datasets, which have been well used by other scientists. Implications for practice are then presented for citizen science projects, providing a list of considerations to guide choices regarding how to credit volunteer contributions to improve the quality and trustworthiness of citizen science-produced datasets. ",1,1,0,0,0,0 17448,Modular representations in type A with a two-row nilpotent central character," We study the category of representations of $\mathfrak{sl}_{m+2n}$ in positive characteristic, whose p-character is a nilpotent whose Jordan type is the two-row partition (m+n,n). In a previous paper with Anno, we used Bezrukavnikov-Mirkovic-Rumynin's theory of positive characteristic localization and exotic t-structures to give a geometric parametrization of the simples using annular crossingless matchings. Building on this, here we give combinatorial dimension formulae for the simple objects, and compute the Jordan-Holder multiplicities of the simples inside the baby Vermas (in special case where n=1, i.e. that a subregular nilpotent, these were known from work of Jantzen). We use Cautis-Kamnitzer's geometric categorification of the tangle calculus to study the images of the simple objects under the [BMR] equivalence. 
The dimension formulae may be viewed as a positive characteristic analogue of the combinatorial character formulae for simple objects in parabolic category O for $\mathfrak{sl}_{m+2n}$, due to Lascoux and Schutzenberger. ",0,0,1,0,0,0 17449,Knowledge Acquisition: A Complex Networks Approach," Complex networks have been found to provide a good representation of the structure of knowledge, as understood in terms of discoverable concepts and their relationships. In this context, the discovery process can be modeled as agents walking in a knowledge space. Recent studies proposed more realistic dynamics, including the possibility of agents being influenced by others with higher visibility or by their own memory. However, rather than dealing with these two concepts separately, as previously approached, in this study we propose a multi-agent random walk model for knowledge acquisition that incorporates both concepts. More specifically, we employed the true self avoiding walk alongside a new dynamics based on jumps, in which agents are attracted by the influence of others. That was achieved by using a Lévy flight influenced by a field of attraction emanating from the agents. In order to evaluate our approach, we use a set of network models and two real networks, one generated from Wikipedia and another from the Web of Science. The results were analyzed globally and by regions. In the global analysis, we found that most of the dynamics parameters do not significantly affect the discovery dynamics. The local analysis revealed a substantial difference of performance depending on the network regions where the dynamics are occurring. In particular, the dynamics at the core of networks tend to be more effective. The choice of the dynamics parameters also had no significant impact to the acquisition performance for the considered knowledge networks, even at the local scale. 
",1,1,0,0,0,0 17450,Frequent flaring in the TRAPPIST-1 system - unsuited for life?," We analyze short cadence K2 light curve of the TRAPPIST-1 system. Fourier analysis of the data suggests $P_\mathrm{rot}=3.295\pm0.003$ days. The light curve shows several flares, of which we analyzed 42 events, these have integrated flare energies of $1.26\times10^{30}-1.24\times10^{33}$ ergs. Approximately 12% of the flares were complex, multi-peaked eruptions. The flaring and the possible rotational modulation shows no obvious correlation. The flaring activity of TRAPPIST-1 probably continuously alters the atmospheres of the orbiting exoplanets, making these less favorable for hosting life. ",0,1,0,0,0,0 17451,Playing Pairs with Pepper," As robots become increasingly prevalent in almost all areas of society, the factors affecting humans trust in those robots becomes increasingly important. This paper is intended to investigate the factor of robot attributes, looking specifically at the relationship between anthropomorphism and human development of trust. To achieve this, an interaction game, Matching the Pairs, was designed and implemented on two robots of varying levels of anthropomorphism, Pepper and Husky. Participants completed both pre- and post-test questionnaires that were compared and analyzed predominantly with the use of quantitative methods, such as paired sample t-tests. Post-test analyses suggested a positive relationship between trust and anthropomorphism with $80\%$ of participants confirming that the robots' adoption of facial features assisted in establishing trust. The results also indicated a positive relationship between interaction and trust with $90\%$ of participants confirming this for both robots post-test ",1,0,0,0,0,0 17452,Wild theories with o-minimal open core," Let $T$ be a consistent o-minimal theory extending the theory of densely ordered groups and let $T'$ be a consistent theory. 
Then there is a complete theory $T^*$ extending $T$ such that $T$ is an open core of $T^*$, but every model of $T^*$ interprets a model of $T'$. If $T'$ is NIP, $T^*$ can be chosen to be NIP as well. From this we deduce the existence of an NIP expansion of the real field that has no distal expansion. ",0,0,1,0,0,0 17453,Objective Bayesian inference with proper scoring rules," Standard Bayesian analyses can be difficult to perform when the full likelihood, and consequently the full posterior distribution, is too complex and difficult to specify or if robustness with respect to data or to model misspecifications is required. In these situations, we suggest to resort to a posterior distribution for the parameter of interest based on proper scoring rules. Scoring rules are loss functions designed to measure the quality of a probability distribution for a random variable, given its observed value. Important examples are the Tsallis score and the Hyvärinen score, which allow us to deal with model misspecifications or with complex models. Also the full and the composite likelihoods are both special instances of scoring rules. The aim of this paper is twofold. Firstly, we discuss the use of scoring rules in the Bayes formula in order to compute a posterior distribution, named SR-posterior distribution, and we derive its asymptotic normality. Secondly, we propose a procedure for building default priors for the unknown parameter of interest that can be used to update the information provided by the scoring rule in the SR-posterior distribution. In particular, a reference prior is obtained by maximizing the average $\alpha-$divergence from the SR-posterior distribution. For $0 \leq |\alpha|<1$, the result is a Jeffreys-type prior that is proportional to the square root of the determinant of the Godambe information matrix associated to the scoring rule. Some examples are discussed. 
",0,0,1,1,0,0 17454,Large Area X-ray Proportional Counter (LAXPC) Instrument on AstroSat," Large Area X-ray Proportional Counter (LAXPC) is one of the major AstroSat payloads. LAXPC instrument will provide high time resolution X-ray observations in 3 to 80 keV energy band with moderate energy resolution. A cluster of three co-aligned identical LAXPC detectors is used in AstroSat to provide large collection area of more than 6000 cm2 . The large detection volume (15 cm depth) filled with xenon gas at about 2 atmosphere pressure, results in detection efficiency greater than 50%, above 30 keV. With its broad energy range and fine time resolution (10 microsecond), LAXPC instrument is well suited for timing and spectral studies of a wide variety of known and transient X-ray sources in the sky. We have done extensive calibration of all LAXPC detectors using radioactive sources as well as GEANT4 simulation of LAXPC detectors. We describe in brief some of the results obtained during the payload verification phase along with LXAPC capabilities. ",0,1,0,0,0,0 17455,"A Counterexample to the Vector Generalization of Costa's EPI, and Partial Resolution"," We give a counterexample to the vector generalization of Costa's entropy power inequality (EPI) due to Liu, Liu, Poor and Shamai. In particular, the claimed inequality can fail if the matix-valued parameter in the convex combination does not commute with the covariance of the additive Gaussian noise. Conversely, the inequality holds if these two matrices commute. ",1,0,0,0,0,0 17456,Models for Predicting Community-Specific Interest in News Articles," In this work, we ask two questions: 1. Can we predict the type of community interested in a news article using only features from the article content? and 2. How well do these models generalize over time? To answer these questions, we compute well-studied content-based features on over 60K news articles from 4 communities on reddit.com. 
We train and test models over three different time periods between 2015 and 2017 to demonstrate which features degrade in performance the most due to concept drift. Our models can classify news articles into communities with high accuracy, ranging from 0.81 ROC AUC to 1.0 ROC AUC. However, while we can predict the community-specific popularity of news articles with high accuracy, practitioners should approach these models carefully. Predictions are both community-pair dependent and feature group dependent. Moreover, these feature groups generalize over time differently, with some only degrading slightly over time, but others degrading greatly. Therefore, we recommend that community-interest predictions be done in a hierarchical structure, where multiple binary classifiers can be used to separate community pairs, rather than with a traditional multi-class model. Second, these models should be retrained over time based on accuracy goals and the availability of training data. ",0,0,0,1,0,0 17457,"On subfiniteness of graded linear series"," Hilbert's 14th problem studies the finite generation property of the intersection of an integral algebra of finite type with a subfield of the field of fractions of the algebra. It has a negative answer due to the counterexample of Nagata. We show that a subfinite version of Hilbert's 14th problem has an affirmative answer. We then establish a graded analogue of this result, which permits us to show that the subfiniteness of graded linear series does not depend on the function field in which we consider it. Finally, we apply the subfiniteness result to the study of geometric and arithmetic graded linear series. ",0,0,1,0,0,0 17458,"Natasha 2: Faster Non-Convex Optimization Than SGD"," We design a stochastic algorithm to train any smooth neural network to $\varepsilon$-approximate local minima, using $O(\varepsilon^{-3.25})$ backpropagations. The best result was essentially $O(\varepsilon^{-4})$ by SGD. 
More broadly, it finds $\varepsilon$-approximate local minima of any smooth nonconvex function in rate $O(\varepsilon^{-3.25})$, with only oracle access to stochastic gradients. ",1,0,0,1,0,0 17459,Evidence for a Dayside Thermal Inversion and High Metallicity for the Hot Jupiter WASP-18b," We find evidence for a strong thermal inversion in the dayside atmosphere of the highly irradiated hot Jupiter WASP-18b (T$_{eq}=2411K$, $M=10.3M_{J}$) based on emission spectroscopy from Hubble Space Telescope secondary eclipse observations and Spitzer eclipse photometry. We demonstrate a lack of water vapor in either absorption or emission at 1.4$\mu$m. However, we infer emission at 4.5$\mu$m and absorption at 1.6$\mu$m that we attribute to CO, as well as a non-detection of all other relevant species (e.g., TiO, VO). The most probable atmospheric retrieval solution indicates a C/O ratio of 1 and a high metallicity (C/H=$283^{+395}_{-138}\times$ solar). The derived composition and T/P profile suggest that WASP-18b is the first example of both a planet with a non-oxide driven thermal inversion and a planet with an atmospheric metallicity inconsistent with that predicted for Jupiter-mass planets at $>2\sigma$. Future observations are necessary to confirm the unusual planetary properties implied by these results. ",0,1,0,0,0,0 17460,$α$-$β$ and $β$-$γ$ phase boundaries of solid oxygen observed by adiabatic magnetocaloric effect," The magnetic-field-temperature phase diagram of solid oxygen is investigated by the adiabatic magnetocaloric effect (MCE) measurement with pulsed magnetic fields. Relatively large temperature decrease with hysteresis is observed at just below the $\beta$-$\gamma$ and $\alpha$-$\beta$ phase transition temperatures owing to the field-induced transitions. The magnetic field dependences of these phase boundaries are obtained as $T_\mathrm{\beta\gamma}(H)=43.8-1.55\times10^{-3}H^2$ K and $T_\mathrm{\alpha\beta}(H)=23.9-0.73\times10^{-3}H^2$ K. 
The magnetic Clausius-Clapeyron equation quantitatively explains the $H$ dependence of $T_\mathrm{\beta\gamma}$, but not that of $T_\mathrm{\alpha\beta}$. The MCE curve at $T_\mathrm{\beta\gamma}$ is typical of a first-order transition, while the curve at $T_\mathrm{\alpha\beta}$ seems to have characteristics of both first- and second-order transitions. We discuss the order of the $\alpha$-$\beta$ phase transition and propose possible reasons for the unusual behavior. ",0,1,0,0,0,0 17461,"Localization Algorithm with Circular Representation in 2D and its Similarity to Mammalian Brains"," The Extended Kalman filter (EKF) does not guarantee consistent mean and covariance under linearization, even though it is the main framework for robotic localization. While Lie groups improve the modeling of the state space in localization, the EKF on Lie groups still relies on the arbitrary Gaussian assumption in the face of nonlinear models. We instead use a von Mises filter for orientation estimation together with the conventional Kalman filter for position estimation, and thus we are able to characterize the first two moments of the state estimates. Since the proposed algorithm has a solid probabilistic basis, it is fundamentally relieved from the inconsistency problem. Furthermore, we extend the localization algorithm to a fully circular representation even for position, which is similar to grid patterns found in mammalian brains and in recurrent neural networks. The applicability of the proposed algorithms is substantiated not only by a strong mathematical foundation but also by the comparison against other common localization methods. ",1,0,0,0,1,0 17462,"Lusin-type approximation of Sobolev by Lipschitz functions, in Gaussian and $RCD(K,\infty)$ spaces"," We establish new approximation results, in the sense of Lusin, of Sobolev functions by Lipschitz ones, in some classes of non-doubling metric measure structures. 
Our proof technique relies upon estimates for heat semigroups and applies to Gaussian and $RCD(K, \infty)$ spaces. As a consequence, we obtain quantitative stability for regular Lagrangian flows in Gaussian settings. ",0,0,1,0,0,0 17463,"Distributed Coordination for a Class of Nonlinear Multi-agent Systems with Regulation Constraints"," In this paper, a multi-agent coordination problem with steady-state regulation constraints is investigated for a class of nonlinear systems. Unlike existing leader-following coordination formulations, the reference signal is not given by a dynamic autonomous leader but determined as the optimal solution of a distributed optimization problem. Furthermore, we consider a global constraint having noisy data observations for the optimization problem, which implies that the reference signal is not trivially available with existing optimization algorithms. To handle those challenges, we present a passivity-based analysis and design approach using only the local objective function, local data observations and information exchanged with neighbors. The proposed distributed algorithms are shown to achieve the optimal steady-state regulation by rejecting the unknown observation disturbances for passive nonlinear agents, which are pervasive in various practical problems. Applications and simulation examples are then given to verify the effectiveness of our design. ",1,0,1,0,0,0 17464,"Intermodulation distortion of actuated MEMS capacitive switches"," For the first time, intermodulation distortion of micro-electromechanical capacitive switches in the actuated state was analyzed both theoretically and experimentally. The distortion, although higher than that of switches in the suspended state, was found to decrease with increasing bias voltage but to depend weakly on modulation frequencies between 55 kHz and 1.1 MHz. This dependence could be explained by the orders-of-magnitude increase of the spring constant when the switches were actuated. 
Additionally, the analysis suggested that increasing the spring constant and decreasing the contact roughness could improve the linearity of actuated switches. These results are important for micro-electromechanical capacitive switches used in tuners, filters, phase shifters, etc., where the linearity of both the suspended and actuated states is critical. ",0,1,0,0,0,0 17465,Parasitic Bipolar Leakage in III-V FETs: Impact of Substrate Architecture," InGaAs-based Gate-all-Around (GAA) FETs with moderate to high In content are shown experimentally and theoretically to be unsuitable for low-leakage advanced CMOS nodes. The primary cause for this is the large leakage penalty induced by the Parasitic Bipolar Effect (PBE), which is seen to be particularly difficult to remedy in GAA architectures. Experimental evidence of PBE in In70Ga30As GAA FETs is demonstrated, along with a simulation-based analysis of the PBE behavior. The impact of PBE is investigated by simulation for alternative device architectures, such as bulk FinFETs and FinFETs-on-insulator. PBE is found to be non-negligible in all standard InGaAs FET designs. Practical PBE metrics are introduced and the design of a substrate architecture for PBE suppression is elucidated. Finally, it is concluded that the GAA architecture is not suitable for low-leakage InGaAs FETs; a bulk FinFET is better suited for the role. ",0,1,0,0,0,0 17466,Properties of Ultra Gamma Function," In this paper we study the integral of the type \[_{\delta,a}\Gamma_{\rho,b}(x) =\Gamma(\delta,a;\rho,b)(x)=\int_{0}^{\infty}t^{x-1}e^{-\frac{t^{\delta}}{a}-\frac{t^{-\rho}}{b}}dt.\] Different authors have called this integral by different names, such as the ultra gamma function, generalized gamma function, Kratzel integral, inverse Gaussian integral, reaction-rate probability integral, Bessel integral, etc. We prove several identities and recurrence relations of the above integral, which we call the Four Parameter Gamma Function. 
We also establish relations between the Four Parameter Gamma Function, the p-k Gamma Function and the Classical Gamma Function. Under some conditions we can evaluate the Four Parameter Gamma Function in terms of the hypergeometric function. ",0,0,1,0,0,0 17467,Further remarks on liftings of crossed modules," In this paper we define the notion of pullback lifting of a lifting crossed module over a crossed module morphism and interpret this notion in the category of group-groupoid actions as a pullback action. Moreover, we give a criterion for the lifting of homotopic crossed module morphisms to be homotopic, which will be called the homotopy lifting property for crossed module morphisms. Finally, we investigate some properties of derivations of lifting crossed modules according to base crossed module derivations. ",0,0,1,0,0,0 17468,Submodular Maximization through the Lens of Linear Programming," The simplex algorithm for linear programming is based on the fact that any local optimum with respect to the polyhedral neighborhood is also a global optimum. We show that a similar result carries over to submodular maximization. In particular, every local optimum of a constrained monotone submodular maximization problem yields a $1/2$-approximation, and we also present an appropriate extension to the non-monotone setting. However, reaching a local optimum quickly is a non-trivial task. To this end, we describe a fast and very general local search procedure that applies to a wide range of constraint families, and unifies as well as extends previous methods. In our framework, we match known approximation guarantees while disentangling and simplifying previous approaches. Moreover, despite its generality, we are able to show that our local search procedure is slightly faster than previous specialized methods. 
Furthermore, we resolve an open question on the relation between linear optimization and submodular maximization; namely, whether a linear optimization oracle may be enough to obtain strong approximation algorithms for submodular maximization. We show that this is not the case by providing an example of a constraint family on a ground set of size $n$ for which, if only given a linear optimization oracle, any algorithm for submodular maximization with a polynomial number of calls to the linear optimization oracle will have an approximation ratio of only $O ( \frac{1}{\sqrt{n}} \cdot \frac{\log n}{\log\log n} )$. ",1,0,0,0,0,0 17469,Channel Estimation for Diffusive MIMO Molecular Communications," In diffusion-based communication, as for molecular systems, the achievable data rate is very low due to the slow nature of diffusion and the existence of severe inter-symbol interference (ISI). Multiple-input multiple-output (MIMO) technique can be used to improve the data rate. Knowledge of channel impulse response (CIR) is essential for equalization and detection in MIMO systems. This paper presents a training-based CIR estimation for diffusive MIMO (D-MIMO) channels. Maximum likelihood and least-squares estimators are derived, and the training sequences are designed to minimize the corresponding Cramér-Rao bound. Sub-optimal estimators are compared to Cramér-Rao bound to validate their performance. ",1,0,0,1,0,0 17470,Multi-stage splitting integrators for sampling with modified Hamiltonian Monte Carlo methods," Modified Hamiltonian Monte Carlo (MHMC) methods combine the ideas behind two popular sampling approaches: Hamiltonian Monte Carlo (HMC) and importance sampling. As in the HMC case, the bulk of the computational cost of MHMC algorithms lies in the numerical integration of a Hamiltonian system of differential equations. We suggest novel integrators designed to enhance accuracy and sampling performance of MHMC methods. 
The novel integrators belong to families of splitting algorithms and are therefore easily implemented. We identify optimal integrators within the families by minimizing the energy error or the average energy error. We derive and discuss in detail the modified Hamiltonians of the new integrators, as the evaluation of those Hamiltonians is key to the efficiency of the overall algorithms. Numerical experiments show that the use of the new integrators may improve very significantly the sampling performance of MHMC methods, in both statistical and molecular dynamics problems. ",1,0,0,0,0,0 17471,Proposal for a High Precision Tensor Processing Unit," This whitepaper proposes the design and adoption of a new generation of Tensor Processing Unit which has the performance of Google's TPU, yet performs operations on wide precision data. The new generation TPU is made possible by implementing arithmetic circuits which compute using a new general purpose, fractional arithmetic based on the residue number system. ",1,0,0,0,0,0 17472,DICOD: Distributed Convolutional Sparse Coding," In this paper, we introduce DICOD, a convolutional sparse coding algorithm which builds shift invariant representations for long signals. This algorithm is designed to run in a distributed setting, with local message passing, making it communication efficient. It is based on coordinate descent and uses locally greedy updates which accelerate the resolution compared to greedy coordinate selection. We prove the convergence of this algorithm and highlight its computational speed-up which is super-linear in the number of cores used. We also provide empirical evidence for the acceleration properties of our algorithm compared to state-of-the-art methods. 
",1,0,0,1,0,0 17473,On uniqueness results for Dirichlet problems of elliptic systems without DeGiorgi-Nash-Moser regularity," We study uniqueness of Dirichlet problems of second order divergence-form elliptic systems with transversally independent coefficients on the upper half-space in the absence of regularity of solutions. To this end, we develop a substitute for the fundamental solution used to invert elliptic operators on the whole space by means of a representation via abstract single layer potentials. We also show that such layer potentials are uniquely determined. ",0,0,1,0,0,0 17474,(LaTiO$_3$)$_n$/(LaVO$_3$)$_n$ as a model system for unconventional charge transfer and polar metallicity," At interfaces between oxide materials, lattice and electronic reconstructions always play important roles in exotic phenomena. In this study, density functional theory and maximally localized Wannier functions are employed to investigate the (LaTiO$_3$)$_n$/(LaVO$_3$)$_n$ magnetic superlattices. The electron transfer from Ti$^{3+}$ to V$^{3+}$ is predicted, which violates the intuitive band alignment based on the electronic structures of LaTiO$_3$ and LaVO$_3$. Such unconventional charge transfer mostly quenches the magnetism of the LaTiO$_3$ layer and leads to a metal-insulator transition in the $n=1$ superlattice when the stacking orientation is altered. In addition, the compatibility among the polar structure, ferrimagnetism, and metallicity is predicted in the $n=2$ superlattice. ",0,1,0,0,0,0 17475,Modeling sorption of emerging contaminants in biofilms," A mathematical model for emerging contaminant sorption in multispecies biofilms, based on a continuum approach and mass conservation principles, is presented. Diffusion of contaminants within the biofilm is described using a diffusion-reaction equation. Binding site formation and occupation are modeled by two systems of hyperbolic partial differential equations, which are mutually connected through the two growth rate terms. 
The model is completed with a system of hyperbolic equations governing the microbial species growth within the biofilm; a system of parabolic equations for substrates diffusion and reaction and a nonlinear ordinary differential equation describing the free boundary evolution. Two real special cases are modelled. The first one describes the dynamics of a free sorbent component diffusing and reacting in a multispecies biofilm. In the second illustrative case, the fate of two different contaminants has been modelled. ",0,1,0,0,0,0 17476,Stabilizing Training of Generative Adversarial Networks through Regularization," Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f-divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning. ",1,0,0,1,0,0 17477,Spurious Vanishing Problem in Approximate Vanishing Ideal," Approximate vanishing ideal, which is a new concept from computer algebra, is a set of polynomials that almost takes a zero value for a set of given data points. The introduction of approximation to exact vanishing ideal has played a critical role in capturing the nonlinear structures of noisy data by computing the approximate vanishing polynomials. 
However, the approximation introduces a theoretical issue: the spurious vanishing problem, whereby any polynomial can be turned into an approximate vanishing polynomial by coefficient scaling. In the present paper, we propose a general method that enables many basis construction methods to overcome this problem. Furthermore, a coefficient truncation method is proposed that balances theoretical soundness and computational cost. The experiments show that the proposed method overcomes the spurious vanishing problem and significantly increases the accuracy of classification. ",1,0,0,1,0,0 17478,Opportunities for Two-color Experiments at the SASE3 undulator line of the European XFEL," X-ray Free Electron Lasers (XFELs) have been proven to generate short and powerful radiation pulses allowing for a wide class of novel experiments. If an XFEL facility supports the generation of two X-ray pulses with different wavelengths and controllable delay, the range of possible experiments is broadened even further to include X-ray-pump/X-ray-probe applications. In this work we discuss the possibility of applying a simple and cost-effective method for producing two-color pulses at the SASE3 soft X-ray beamline of the European XFEL. The technique is based on the installation of a magnetic chicane in the baseline undulator and can be accomplished in several steps. We discuss the scientific interest of this upgrade for the Small Quantum Systems (SQS) instrument, in connection with the high-repetition rate of the European XFEL, and we provide start-to-end simulations up to the radiation focus on the sample, proving the feasibility of our concept. ",0,1,0,0,0,0 17479,Braiding errors in interacting Majorana quantum wires," Avenues of Majorana bound states (MBSs) have become one of the primary directions towards a possible realization of topological quantum computation. 
For a Y-junction of Kitaev quantum wires, we numerically investigate the braiding of MBSs while considering the full quasi-particle background. The two central sources of braiding errors are found to be the fidelity loss due to the incomplete adiabaticity of the braiding operation and the hybridization of the MBSs. The explicit extraction of the braiding phase in the low-energy Majorana sector from the full many-particle Hilbert space allows us to analyze the breakdown of the independent-particle picture of Majorana braiding. Furthermore, we find nearest-neighbor interactions to significantly affect the braiding performance for better or worse, depending on the sign and magnitude of the coupling. ",0,1,0,0,0,0 17480,Machine Learning for Structured Clinical Data," Research is a tertiary priority in the EHR, where the priorities are patient care and billing. Because of this, the data is not standardized or formatted in a manner easily adapted to machine learning approaches. Data may be missing for a large variety of reasons ranging from individual input styles to differences in clinical decision making, for example, which lab tests to issue. Few patients are annotated at a research quality, limiting sample size and presenting a moving gold standard. Patient progression over time is key to understanding many diseases but many machine learning algorithms require a snapshot, at a single time point, to create a usable vector form. Furthermore, algorithms that produce black box results do not provide the interpretability required for clinical adoption. This chapter discusses these challenges and others in applying machine learning techniques to the structured EHR (i.e. Patient Demographics, Family History, Medication Information, Vital Signs, Laboratory Tests, Genetic Testing). It does not cover feature extraction from additional sources such as imaging data or free-text patient notes, but the approaches discussed can include features extracted from these sources. 
",1,0,0,0,0,0 17481,Hidden order and symmetry protected topological states in quantum link ladders," We show that whereas spin-1/2 one-dimensional U(1) quantum-link models (QLMs) are topologically trivial, when implemented in ladder-like lattices these models may present an intriguing ground-state phase diagram, which includes a symmetry protected topological (SPT) phase that may be readily revealed by analyzing long-range string spin correlations along the ladder legs. We propose a simple scheme for the realization of spin-1/2 U(1) QLMs based on single-component fermions loaded in an optical lattice with s- and p-bands, showing that the SPT phase may be experimentally realized by adiabatic preparation. ",0,1,0,0,0,0 17482,The effect of prudence on the optimal allocation in possibilistic and mixed models," In this paper two portfolio choice models are studied: a purely possibilistic model, in which the return of a risky asset is a fuzzy number, and a mixed model in which a probabilistic background risk is added. For the two models an approximate formula of the optimal allocation is computed, with respect to the possibilistic moments associated with fuzzy numbers and the indicators of the investor risk preferences (risk aversion, prudence). ",0,0,0,0,0,1 17483,On universal operators and universal pairs," We study some basic properties of the class of universal operators on Hilbert space, and provide new examples of universal operators and universal pairs. ",0,0,1,0,0,0 17484,Interleaving Lattice for the APS Linac," To realize and test advanced accelerator concepts and hardware, a beamline is being reconfigured in the Linac Extension Area (LEA) of APS linac. A photo-cathode RF gun installed at the beginning of the APS linac will provide a low emittance electron beam into the LEA beamline. The thermionic RF gun beam for the APS storage ring, and the photo-cathode RF gun beam for LEA beamline will be accelerated through the linac in an interleaved fashion. 
In this paper, the design studies for the interleaving lattice realization in the APS linac are described, along with initial experimental results. ",0,1,0,0,0,0 17485,\textit{Ab Initio} Study of the Magnetic Behavior of Metal Hydrides: A Comparison with the Slater-Pauling Curve," We investigated the magnetic behavior of metal hydrides FeH$_{x}$, CoH$_{x}$ and NiH$_{x}$ for several concentrations of hydrogen ($x$) by using Density Functional Theory calculations. Several structural phases of the metallic host: bcc ($\alpha$), fcc ($\gamma$), hcp ($\varepsilon$), dhcp ($\varepsilon'$), tetragonal structure for FeH$_{x}$ and $\varepsilon$-$\gamma$ phases for CoH$_{x}$, were studied. We found that for CoH$_{x}$ and NiH$_{x}$ the magnetic moment ($m$) decreases regardless of the concentration $x$. However, for FeH$_{x}$ systems, $m$ increases or decreases depending on the variation in $x$. In order to find a general trend for these changes of $m$ in magnetic metal hydrides, we compare our results with the Slater-Pauling curve for ferromagnetic metallic binary alloys. It is found that the $m$ of metal hydrides made of Fe, Co and Ni fits the shape of the Slater-Pauling curve as a function of $x$. Our results indicate that there are two main effects that determine the $m$ value due to hydrogenation: an increase of volume causes $m$ to increase, and the addition of an extra electron to the metal always causes it to decrease. We discuss these behaviors in detail. ",0,1,0,0,0,0 17486,Secret Sharing for Cloud Data Security," Cloud computing helps reduce costs, increase business agility and deploy solutions with a high return on investment for many types of applications. However, data security is of premium importance to many users and often restrains their adoption of cloud technologies. Various approaches, i.e., data encryption, anonymization, replication and verification, help enforce different facets of data security. Secret sharing is a particularly interesting cryptographic technique. 
Its most advanced variants indeed simultaneously enforce data privacy, availability and integrity, while allowing computation on encrypted data. The aim of this paper is thus to wholly survey secret sharing schemes with respect to data security, data access and costs in the pay-as-you-go paradigm. ",1,0,0,0,0,0 17487,Design and Analysis of a Secure Three Factor User Authentication Scheme Using Biometric and Smart Card," Password security can no longer provide enough security in the area of remote user authentication. Considering this security drawback, researchers are trying to find solution with multifactor remote user authentication system. Recently, three factor remote user authentication using biometric and smart card has drawn a considerable attention of the researchers. However, most of the current proposed schemes have security flaws. They are vulnerable to attacks like user impersonation attack, server masquerading attack, password guessing attack, insider attack, denial of service attack, forgery attack, etc. Also, most of them are unable to provide mutual authentication, session key agreement and password, or smart card recovery system. Considering these drawbacks, we propose a secure three factor user authentication scheme using biometric and smart card. Through security analysis, we show that our proposed scheme can overcome drawbacks of existing systems and ensure high security in remote user authentication. ",1,0,0,0,0,0 17488,Real-World Modeling of a Pathfinding Robot Using Robot Operating System (ROS)," This paper presents a practical approach towards implementing pathfinding algorithms on real-world and low-cost non- commercial hardware platforms. While using robotics simulation platforms as a test-bed for our algorithms we easily overlook real- world exogenous problems that are developed by external factors. Such problems involve robot wheel slips, asynchronous motors, abnormal sensory data or unstable power sources. 
These real-world dynamics can make even simple algorithms like a wavefront planner or A-star search painful to execute. This paper addresses the design of techniques that are robust as well as reusable across hardware platforms, covering problems like controlling asynchronous drives, odometry offset issues, and handling abnormal sensory feedback. The algorithm implementation medium and hardware design tools have been kept general in order to present our work as a serving platform for future researchers and robotics enthusiasts working in the field of path-planning robotics. ",1,0,0,0,0,0 17489,Understanding Convolution for Semantic Segmentation," Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the ""gridding issue"" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-the-art result of 80.1% mIOU in the test set at the time of submission. We also have achieved state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at this https URL . 
",1,0,0,0,0,0 17490,Mechanical Failure in Amorphous Solids: Scale Free Spinodal Criticality," The mechanical failure of amorphous media is a ubiquitous phenomenon from material engineering to geology. It has been noticed for a long time that the phenomenon is ""scale-free"", indicating some type of criticality. In spite of attempts to invoke ""Self-Organized Criticality"", the physical origin of this criticality, and also its universal nature, being quite insensitive to the nature of microscopic interactions, remained elusive. Recently we proposed that the precise nature of this critical behavior is manifested by a spinodal point of a thermodynamic phase transition. Moreover, at the spinodal point there exists a divergent correlation length which is associated with the system-spanning instabilities (known also as shear bands) which are typical to the mechanical yield. Demonstrating this requires the introduction of an ""order parameter"" that is suitable for distinguishing between disordered amorphous systems, and an associated correlation function, suitable for picking up the growing correlation length. The theory, the order parameter, and the correlation functions used are universal in nature and can be applied to any amorphous solid that undergoes mechanical yield. Critical exponents for the correlation length divergence and the system size dependence are estimated. The phenomenon is seen at its sharpest in athermal systems, as is explained below; in this paper we extend the discussion also to thermal systems, showing that at sufficiently high temperatures the spinodal phenomenon is destroyed by thermal fluctuations. 
",0,1,0,0,0,0 17491,Moment conditions in strong laws of large numbers for multiple sums and random measures," The validity of the strong law of large numbers for multiple sums $S_n$ of independent identically distributed random variables $Z_k$, $k\leq n$, with $r$-dimensional indices is equivalent to the integrability of $|Z|(\log^+|Z|)^{r-1}$, where $Z$ is the typical summand. We consider the strong law of large numbers for more general normalisations, without assuming that the summands $Z_k$ are identically distributed, and prove a multiple sum generalisation of the Brunk--Prohorov strong law of large numbers. In the case of identical finite moments of order $2q$ with integer $q\geq1$, we show that the strong law of large numbers holds with the normalisation $\|n_1\cdots n_r\|^{1/2}(\log n_1\cdots\log n_r)^{1/(2q)+\varepsilon}$ for any $\varepsilon>0$. The obtained results are also formulated in the setting of ergodic theorems for random measures, in particular those generated by marked point processes. ",0,0,1,0,0,0 17492,Connecting Clump Sizes in Turbulent Disk Galaxies to Instability Theory," In this letter we study the mean sizes of Halpha clumps in turbulent disk galaxies relative to kinematics, gas fractions, and Toomre Q. We use 100~pc resolution HST images, IFU kinematics, and gas fractions of a sample of rare, nearby turbulent disks with properties closely matched to z~1.5-2 main-sequence galaxies (the DYNAMO sample). We find linear correlations of normalized mean clump sizes with both the gas fraction and the velocity dispersion-to-rotation velocity ratio of the host galaxy. We show that these correlations are consistent with predictions derived from a model of instabilities in a self-gravitating disk (the so-called ""violent disk instability model""). We also observe, using a two-fluid model for Q, a correlation between the size of clumps and self-gravity driven unstable regions. 
These results are most consistent with the hypothesis that massive star forming clumps in turbulent disks are the result of instabilities in self-gravitating gas-rich disks, and therefore provide a direct connection between resolved clump sizes and this in situ mechanism. ",0,1,0,0,0,0 17493,Anomalous slowing down of individual human activity due to successive decision-making processes," Motivated by a host of empirical evidences revealing the bursty character of human dynamics, we develop a model of human activity based on successive switching between an hesitation state and a decision-realization state, with residency times in the hesitation state distributed according to a heavy-tailed Pareto distribution. This model is particularly reminiscent of an individual strolling through a randomly distributed human crowd. Using a stochastic model based on the concept of anomalous and non-Markovian Lévy walk, we show exactly that successive decision-making processes drastically slow down the progression of an individual faced with randomly distributed obstacles. Specifically, we prove exactly that the average displacement exhibits a sublinear scaling with time that finds its origins in: (i) the intrinsically non-Markovian character of human activity, and (ii) the power law distribution of hesitation times. ",0,1,0,0,0,0 17494,"The spectrum, radiation conditions and the Fredholm property for the Dirichlet Laplacian in a perforated plane with semi-infinite inclusions"," We consider the spectral Dirichlet problem for the Laplace operator in the plane $\Omega^{\circ}$ with double-periodic perforation but also in the domain $\Omega^{\bullet}$ with a semi-infinite foreign inclusion so that the Floquet-Bloch technique and the Gelfand transform do not apply directly. We describe waves which are localized near the inclusion and propagate along it. We give a formulation of the problem with radiation conditions that provides a Fredholm operator of index zero. 
The main conclusion concerns the spectra $\sigma^{\circ}$ and $\sigma^{\bullet}$ of the problems in $\Omega^{\circ}$ and $\Omega^{\bullet},$ namely we present a concrete geometry which supports the relation $\sigma^{\circ}\varsubsetneqq\sigma^{\bullet}$ due to a new non-empty spectral band caused by the semi-infinite inclusion called an open waveguide in the double-periodic medium. ",0,0,1,0,0,0 17495,On the Semantics and Complexity of Probabilistic Logic Programs," We examine the meaning and the complexity of probabilistic logic programs that consist of a set of rules and a set of independent probabilistic facts (that is, programs based on Sato's distribution semantics). We focus on two semantics, respectively based on stable and on well-founded models. We show that the semantics based on stable models (referred to as the ""credal semantics"") produces sets of probability models that dominate infinitely monotone Choquet capacities; we describe several useful consequences of this result. We then examine the complexity of inference with probabilistic logic programs. We distinguish between the complexity of inference when a probabilistic program and a query are given (the inferential complexity), and the complexity of inference when the probabilistic program is fixed and the query is given (the query complexity, akin to data complexity as used in database theory). We obtain results on the inferential and query complexity for acyclic, stratified, and cyclic propositional and relational programs; complexity reaches various levels of the counting hierarchy and even exponential levels. ",1,0,0,0,0,0 17496,Fitting Probabilistic Index Models on Large Datasets," Recently, Thas et al. (2012) introduced a new statistical model for the probability index. This index is defined as $P(Y \leq Y^*|X, X^*)$ where Y and Y* are independent random response variables associated with covariates X and X* [...] 
Crucially, to estimate the parameters of the model, a set of pseudo-observations is constructed. For a sample size n, a total of $n(n-1)/2$ pairwise comparisons between observations is considered. Consequently, for large sample sizes, it becomes computationally infeasible or even impossible to fit the model as the set of pseudo-observations increases nearly quadratically. In this dissertation, we provide two solutions to fit a probabilistic index model. The first algorithm consists of splitting the entire data set into unique partitions. On each of these, we fit the model and then aggregate the estimates. A second algorithm is a subsampling scheme in which we select $K \ll n$ observations without replacement and after $B$ iterations aggregate the estimates. In Monte Carlo simulations, we show how the partitioning algorithm outperforms the latter [...] We illustrate the partitioning algorithm and the interpretation of the probabilistic index model on a real data set (Przybylski and Weinstein, 2017) of n = 116,630, where we compare it against the ordinary least squares method. By modelling the probabilistic index, we give an intuitive and meaningful quantification of the effect of the time adolescents spend using digital devices such as smartphones on self-reported mental well-being. We show how moderate usage is associated with an increased probability of reporting a higher mental well-being compared to random adolescents who do not use a smartphone. On the other hand, adolescents who excessively use their smartphone are associated with a higher probability of reporting a lower mental well-being than randomly chosen peers who do not use a smartphone. [...] 
",0,0,0,1,0,0 17497,BICEP2 / Keck Array IX: New Bounds on Anisotropies of CMB Polarization Rotation and Implications for Axion-Like Particles and Primordial Magnetic Fields," We present the strongest constraints to date on anisotropies of CMB polarization rotation derived from $150$ GHz data taken by the BICEP2 & Keck Array CMB experiments up to and including the 2014 observing season (BK14). The definition of polarization angle in BK14 maps has gone through self-calibration in which the overall angle is adjusted to minimize the observed $TB$ and $EB$ power spectra. After this procedure, the $QU$ maps lose sensitivity to a uniform polarization rotation but are still sensitive to anisotropies of polarization rotation. This analysis places constraints on the anisotropies of polarization rotation, which could be generated by CMB photons interacting with axion-like pseudoscalar fields or Faraday rotation induced by primordial magnetic fields. The sensitivity of BK14 maps ($\sim 3\mu$K-arcmin) makes it possible to reconstruct anisotropies of polarization rotation angle and measure their angular power spectrum much more precisely than previous attempts. Our data are found to be consistent with no polarization rotation anisotropies, improving the upper bound on the amplitude of the rotation angle spectrum by roughly an order of magnitude compared to the previous best constraints. Our results lead to an order of magnitude better constraint on the coupling constant of the Chern-Simons electromagnetic term $f_a \geq 1.7\times 10^2\times (H_I/2\pi)$ ($2\sigma$) than the constraint derived from uniform rotation, where $H_I$ is the inflationary Hubble scale. The upper bound on the amplitude of the primordial magnetic fields is 30nG ($2\sigma$) from the polarization rotation anisotropies. ",0,1,0,0,0,0 17498,Unsupervised Object Discovery and Segmentation of RGBD-images," In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. 
The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand-tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state-of-the-art on a challenging real-world dataset. ",1,0,0,0,0,0 17499,Enabling large-scale viscoelastic calculations via neural network acceleration," One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Computationally intensive viscoelastic codes must be evaluated at thousands of times and locations, and as a result, studies tend to adopt a few fixed rheological structures and model geometries, and examine the predicted time-dependent deformation over short (<10 yr) time periods at a given depth after a large earthquake. Training a deep neural network to learn a computationally efficient representation of viscoelastic solutions, at any time, location, and for a large range of rheological structures, allows these calculations to be done quickly and reliably, with high spatial and temporal resolution. We demonstrate that this machine learning approach accelerates viscoelastic calculations by more than 50,000%. This magnitude of acceleration will enable the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. 
",0,1,0,0,0,0 17500,Speaker identification from the sound of the human breath," This paper examines the speaker identification potential of breath sounds in continuous speech. Speech is largely produced during exhalation. In order to replenish air in the lungs, speakers must periodically inhale. When inhalation occurs in the midst of continuous speech, it is generally through the mouth. Intra-speech breathing behavior has been the subject of much study, including the patterns, cadence, and variations in energy levels. However, an often ignored characteristic is the {\em sound} produced during the inhalation phase of this cycle. Intra-speech inhalation is rapid and energetic, performed with open mouth and glottis, effectively exposing the entire vocal tract to enable maximum intake of air. This results in vocal tract resonances evoked by turbulence that are characteristic of the speaker's speech-producing apparatus. Consequently, the sounds of inhalation are expected to carry information about the speaker's identity. Moreover, unlike other spoken sounds which are subject to active control, inhalation sounds are generally more natural and less affected by voluntary influences. The goal of this paper is to demonstrate that breath sounds are indeed bio-signatures that can be used to identify speakers. We show that these sounds by themselves can yield remarkably accurate speaker recognition with appropriate feature representations and classification frameworks. ",1,0,0,1,0,0 17501,Vector valued maximal Carleson type operators on the weighted Lorentz spaces," In this paper, by using the idea of linearizing maximal op-erators originated by Charles Fefferman and the TT* method of Stein-Wainger, we establish a weighted inequality for vector valued maximal Carleson type operators with singular kernels proposed by Andersen and John on the weighted Lorentz spaces with vector-valued functions. 
",0,0,1,0,0,0 17502,Rigidity of square-tiled interval exchange transformations," We look at interval exchange transformations defined as first return maps on the set of diagonals of a flow of direction $\theta$ on a square-tiled surface: using a combinatorial approach, we show that, when the surface has at least one true singularity both the flow and the interval exchange are rigid if and only if tan $\theta$ has bounded partial quotients. Moreover, if all vertices of the squares are singularities of the flat metric, and tan $\theta$ has bounded partial quotients, the square-tiled interval exchange transformation T is not of rank one. Finally, for another class of surfaces, those defined by the unfolding of billiards in Veech triangles, we build an uncountable set of rigid directional flows and an uncountable set of rigid interval exchange transformations. ",0,0,1,0,0,0 17503,Global Marcinkiewicz estimates for nonlinear parabolic equations with nonsmooth coefficients," Consider the parabolic equation with measure data \begin{equation*} \left\{ \begin{aligned} &u_t-{\rm div} \mathbf{a}(D u,x,t)=\mu&\text{in}& \quad \Omega_T, &u=0 \quad &\text{on}& \quad \partial_p\Omega_T, \end{aligned}\right. \end{equation*} where $\Omega$ is a bounded domain in $\mathbb{R}^n$, $\Omega_T=\Omega\times (0,T)$, $\partial_p\Omega_T=(\partial\Omega\times (0,T))\cup (\Omega\times\{0\})$, and $\mu$ is a signed Borel measure with finite total mass. Assume that the nonlinearity ${\bf a}$ satisfies a small BMO-seminorm condition, and $\Omega$ is a Reifenberg flat domain. This paper proves a global Marcinkiewicz estimate for the SOLA (Solution Obtained as Limits of Approximation) to the parabolic equation. ",0,0,1,0,0,0 17504,Can Who-Edits-What Predict Edit Survival?," As the number of contributors to online peer-production systems grows, it becomes increasingly important to predict whether the edits that users make will eventually be beneficial to the project. 
Existing solutions either rely on a user reputation system or consist of a highly specialized predictor that is tailored to a specific peer-production system. In this work, we explore a different point in the solution space that goes beyond user reputation but does not involve any content-based feature of the edits. We view each edit as a game between the editor and the component of the project. We posit that the probability that an edit is accepted is a function of the editor's skill, of the difficulty of editing the component and of a user-component interaction term. Our model is broadly applicable, as it only requires observing data about who makes an edit, what the edit affects and whether the edit survives or not. We apply our model on Wikipedia and the Linux kernel, two examples of large-scale peer-production systems, and we seek to understand whether it can effectively predict edit survival: in both cases, we provide a positive answer. Our approach significantly outperforms those based solely on user reputation and bridges the gap with specialized predictors that use content-based features. It is simple to implement, computationally inexpensive, and in addition it enables us to discover interesting structure in the data. ",1,0,0,1,0,0 17505,Introduction to intelligent computing unit 1," This brief note highlights some basic concepts required toward understanding the evolution of machine learning and deep learning models. The note starts with an overview of artificial intelligence and its relationship to biological neuron that ultimately led to the evolution of todays intelligent models. ",1,0,0,1,0,0 17506,Spatial heterogeneities shape collective behavior of signaling amoeboid cells," We present novel experimental results on pattern formation of signaling Dictyostelium discoideum amoeba in the presence of a periodic array of millimeter-sized pillars. We observe concentric cAMP waves that initiate almost synchronously at the pillars and propagate outwards. 
These waves have a higher frequency than the other firing centers and dominate the system dynamics. The cells respond chemotactically to these circular waves and stream towards the pillars, forming periodic Voronoi domains that reflect the periodicity of the underlying lattice. We performed comprehensive numerical simulations of a reaction-diffusion model to study the characteristics of the boundary conditions given by the obstacles. Our simulations show that the obstacles can act as the wave source depending on the imposed boundary condition. Interestingly, a critical minimum accumulation of cAMP around the obstacles is needed for the pillars to act as the wave source. This critical value is lower at smaller production rates of the intracellular cAMP, which can be controlled in our experiments using caffeine. Experiments and simulations also show that in the presence of caffeine the number of firing centers is reduced, which is crucial in our system for circular waves emitted from the pillars to successfully take over the dynamics. These results are crucial to understanding the signaling mechanism of Dictyostelium cells that experience spatial heterogeneities in their natural habitat. ",0,0,0,0,1,0 17507,Yamabe Solitons on three-dimensional normal almost paracontact metric manifolds," The purpose of the paper is to study Yamabe solitons on three-dimensional para-Sasakian, paracosymplectic and para-Kenmotsu manifolds. Mainly, we proved that *If the semi-Riemannian metric of a three-dimensional para-Sasakian manifold is a Yamabe soliton, then it is of constant scalar curvature, and the flow vector field V is Killing. In the next step, we proved that either manifold has constant curvature -1 and reduces to an Einstein manifold, or V is an infinitesimal automorphism of the paracontact metric structure on the manifold. *If the semi-Riemannian metric of a three-dimensional paracosymplectic manifold is a Yamabe soliton, then it has constant scalar curvature. 
Furthermore, the manifold is either $\eta$-Einstein or Ricci flat. *If the semi-Riemannian metric on a three-dimensional para-Kenmotsu manifold is a Yamabe soliton, then the manifold is of constant sectional curvature -1 and reduces to an Einstein manifold. Furthermore, the Yamabe soliton is expanding with $\lambda=-6$ and the vector field V is Killing. Finally, we construct examples to illustrate the results obtained in previous sections. ",0,0,1,0,0,0 17508,Clustering and Hitting Times of Threshold Exceedances and Applications," We investigate exceedances of the process over a sufficiently high threshold. The exceedances determine the risk of hazardous events like climate catastrophes, huge insurance claims, and losses and delays in telecommunication networks. Due to dependence, such exceedances tend to occur in clusters. The cluster structure of social networks is caused by dependence (social relationships and interests) between nodes and possibly heavy-tailed distributions of the node degrees. A minimal time to reach a large node determines the first hitting time. We derive an asymptotically equivalent distribution and a limit expectation of the first hitting time to exceed the threshold $u_n$ as the sample size $n$ tends to infinity. The results can be extended to the second and, generally, to the $k$th ($k> 2$) hitting times. Applications in large-scale networks such as social, telecommunication and recommender systems are discussed. ",0,0,1,1,0,0 17509,Classification of Local Field Potentials using Gaussian Sequence Model," A problem of classification of local field potentials (LFPs), recorded from the prefrontal cortex of a macaque monkey, is considered. An adult macaque monkey is trained to perform a memory-based saccade. The objective is to decode the eye movement goals from the LFP collected during a memory period. The LFP classification problem is modeled as that of classification of smooth functions embedded in Gaussian noise. 
It is then argued that using minimax function estimators as features would lead to consistent LFP classifiers. The theory of Gaussian sequence models allows us to represent minimax estimators as finite dimensional objects. The LFP classifier resulting from this mathematical endeavor is a spectrum based technique, where Fourier series coefficients of the LFP data, followed by appropriate shrinkage and thresholding, are used as features in a linear discriminant classifier. The classifier is then applied to the LFP data to achieve high decoding accuracy. The function classification approach taken in the paper also provides a systematic justification for using Fourier series, with shrinkage and thresholding, as features for the problem, as opposed to using the power spectrum. It also suggests that phase information is crucial to the decision making. ",0,0,0,1,0,0 17510,A few explicit examples of complex dynamics of inertia groups on surfaces - a question of Professor Igor Dolgachev," We give a few explicit examples which answer an open minded question of Professor Igor Dolgachev on complex dynamics of the inertia group of a smooth rational curve on a projective K3 surface and its variants for a rational surface and a non-projective K3 surface. ",0,0,1,0,0,0 17511,Ancestral inference from haplotypes and mutations," We consider inference about the history of a sample of DNA sequences, conditional upon the haplotype counts and the number of segregating sites observed at the present time. After deriving some theoretical results in the coalescent setting, we implement rejection sampling and importance sampling schemes to perform the inference. The importance sampling scheme addresses an extension of the Ewens Sampling Formula for a configuration of haplotypes and the number of segregating sites in the sample. The implementations include both constant and variable population size models. The methods are illustrated by two human Y chromosome data sets. 
",0,0,1,1,0,0 17512,Ellipsoid Method for Linear Programming made simple," In this paper, ellipsoid method for linear programming is derived using only minimal knowledge of algebra and matrices. Unfortunately, most authors first describe the algorithm, then later prove its correctness, which requires a good knowledge of linear algebra. ",1,0,0,0,0,0 17513,Collective strong coupling of cold atoms to an all-fiber ring cavity," We experimentally demonstrate a ring geometry all-fiber cavity system for cavity quantum electrodynamics with an ensemble of cold atoms. The fiber cavity contains a nanofiber section which mediates atom-light interactions through an evanescent field. We observe well-resolved, vacuum Rabi splitting of the cavity transmission spectrum in the weak driving limit due to a collective enhancement of the coupling rate by the ensemble of atoms within the evanescent field, and we present a simple theoretical model to describe this. In addition, we demonstrate a method to control and stabilize the resonant frequency of the cavity by utilizing the thermal properties of the nanofiber. ",0,1,0,0,0,0 17514,Robust Model-Based Clustering of Voting Records," We explore the possibility of discovering extreme voting patterns in the U.S. Congressional voting records by drawing ideas from the mixture of contaminated normal distributions. A mixture of latent trait models via contaminated normal distributions is proposed. We assume that the low dimensional continuous latent variable comes from a contaminated normal distribution and, therefore, picks up extreme patterns in the observed binary data while clustering. We consider in particular such model for the analysis of voting records. The model is applied to a U.S. Congressional Voting data set on 16 issues. Note this approach is the first instance within the literature of a mixture model handling binary data with possible extreme patterns. 
",0,0,0,1,0,0 17515,The Quest for Solvable Multistate Landau-Zener Models," Recently, integrability conditions (ICs) in mutistate Landau-Zener (MLZ) theory were proposed [1]. They describe common properties of all known solved systems with linearly time-dependent Hamiltonians. Here we show that ICs enable efficient computer assisted search for new solvable MLZ models that span complexity range from several interacting states to mesoscopic systems with many-body dynamics and combinatorially large phase space. This diversity suggests that nontrivial solvable MLZ models are numerous. In addition, we refine the formulation of ICs and extend the class of solvable systems to models with points of multiple diabatic level crossing. ",0,1,1,0,0,0 17516,Bayesian Inference of the Multi-Period Optimal Portfolio for an Exponential Utility," We consider the estimation of the multi-period optimal portfolio obtained by maximizing an exponential utility. Employing Jeffreys' non-informative prior and the conjugate informative prior, we derive stochastic representations for the optimal portfolio weights at each time point of portfolio reallocation. This provides a direct access not only to the posterior distribution of the portfolio weights but also to their point estimates together with uncertainties and their asymptotic distributions. Furthermore, we present the posterior predictive distribution for the investor's wealth at each time point of the investment period in terms of a stochastic representation for the future wealth realization. This in turn makes it possible to use quantile-based risk measures or to calculate the probability of default. We apply the suggested Bayesian approach to assess the uncertainty in the multi-period optimal portfolio by considering assets from the FTSE 100 in the weeks after the British referendum to leave the European Union. 
The behaviour of the novel portfolio estimation method in a precarious market situation is illustrated by calculating the predictive wealth, the risk associated with the holding portfolio, and the default probability in each period. ",0,0,1,1,0,0 17517,Reconstructing the gravitational field of the local universe," Tests of gravity at the galaxy scale are in their infancy. As a first step to systematically uncovering the gravitational significance of galaxies, we map three fundamental gravitational variables -- the Newtonian potential, acceleration and curvature -- over the galaxy environments of the local universe to a distance of approximately 200 Mpc. Our method combines the contributions from galaxies in an all-sky redshift survey, halos from an N-body simulation hosting low-luminosity objects, and linear and quasi-linear modes of the density field. We use the ranges of these variables to determine the extent to which galaxies expand the scope of generic tests of gravity and are capable of constraining specific classes of model for which they have special significance. Finally, we investigate the improvements afforded by upcoming galaxy surveys. ",0,1,0,0,0,0 17518,Hierarchical organization of H. Eugene Stanley scientific collaboration community in weighted network representation," By mapping the most advanced elements of the contemporary social interactions, the world scientific collaboration network develops an extremely involved and heterogeneous organization. Selected characteristics of this heterogeneity are studied here and identified by focusing on the scientific collaboration community of H. Eugene Stanley - one of the most prolific world scholars at the present time. Based on the Web of Science records as of March 28, 2016, several variants of networks are constructed. It is found that the Stanley #1 network - this in analogy to the Erdős # - develops a largely consistent hierarchical organization and Stanley himself obeys rules of the same hierarchy. 
However, this is seen exclusively in the weighted network representation. When such a weighted network is evolving, an existing relevant model indicates that the spread of weight is stimulated toward multiplicative bursts over the neighbouring nodes, which leads to a balanced growth of interconnections among them. Such behaviour, while not exclusive to Stanley, is not the rule, however. Networks of other outstanding scholars studied here more often develop a star-like form and the central hubs constitute the outliers. This study is complemented by a spectral analysis of the normalised Laplacian matrices derived from the weighted variants of the corresponding networks and, among others, it points to the efficiency of such a procedure for identifying the component communities and relations among them in the complex weighted networks. ",1,1,0,0,0,0 17519,Rational links and DT invariants of quivers," We prove that the generating functions for the colored HOMFLY-PT polynomials of rational links are specializations of the generating functions of the motivic Donaldson-Thomas invariants of appropriate quivers that we naturally associate with these links. This shows that the conjectural links-quivers correspondence of Kucharski-Reineke-Stošić-Su{\l}kowski as well as the LMOV conjecture hold for rational links. Along the way, we extend the links-quivers correspondence to tangles and, thus, explore elements of a skein theory for motivic Donaldson-Thomas invariants. ",0,0,1,0,0,0 17520,G2-structures for N=1 supersymmetric AdS4 solutions of M-theory," We study the N=1 supersymmetric solutions of D=11 supergravity obtained as a warped product of four-dimensional anti-de-Sitter space with a seven-dimensional Riemannian manifold M. Using the octonion bundle structure on M we reformulate the Killing spinor equations in terms of sections of the octonion bundle on M. The solutions then define a single complexified G2-structure on M or equivalently two real G2-structures. 
We then study the torsion of these G2-structures and the relationships between them. ",0,0,1,0,0,0 17521,A Neural Stochastic Volatility Model," In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models, namely the MCMC-based model \emph{stochvol} as well as the Gaussian process volatility model \emph{GPVol}, in terms of average negative log-likelihood. ",1,0,0,1,0,0 17522,A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds," This paper proposes a segmentation-free, automatic and efficient procedure to detect general geometric quadric forms in point clouds, where clutter and occlusions are inevitable. Our everyday world is dominated by man-made objects which are designed using 3D primitives (such as planes, cones, spheres, cylinders, etc.). These objects are also omnipresent in industrial environments. This gives rise to the possibility of abstracting 3D scenes through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding. 
As opposed to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal & dual spaces and requiring as few as 4 oriented points. Around this fit, we design a novel, local null-space voting strategy to reduce the 4-point case to 3. Voting is coupled with the well-known RANSAC and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing a generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method. ",1,0,0,0,0,0 17523,Mass-to-Light versus Color Relations for Dwarf Irregular Galaxies," We have determined new relations between $UBV$ colors and mass-to-light ratios ($M/L$) for dwarf irregular (dIrr) galaxies, as well as for transformed $g^\prime - r^\prime$. These $M/L$ to color relations (MLCRs) are based on stellar mass density profiles determined for 34 LITTLE THINGS dwarfs from spectral energy distribution fitting to multi-wavelength surface photometry in passbands from the FUV to the NIR. These relations can be used to determine stellar masses in dIrr galaxies for situations where other determinations of stellar mass are not possible. Our MLCRs are shallower than comparable MLCRs in the literature determined for spiral galaxies. We divided our dwarf data into four metallicity bins and found indications of a steepening of the MLCR with increased oxygen abundance, perhaps due to more line blanketing occurring at higher metallicity. ",0,1,0,0,0,0 17524,Insight into High-order Harmonic Generation from Solids: The Contributions of the Bloch Wave-packets Moving on the Group and Phase Velocities," We study numerically the Bloch electron wavepacket dynamics in periodic potentials to simulate laser-solid interactions. 
We introduce a new perspective in the coordinate space combined with the motion of the Bloch electron wavepackets moving at group and phase velocities under the laser fields. This model interprets the origins of the two contributions (intra- and interband transitions) of the high-order harmonic generation (HHG) by investigating the local and global behavior of the wavepackets. It also elucidates the underlying physical picture of the HHG intensity enhancement by means of carrier-envelope phase (CEP), chirp and inhomogeneous fields. It provides a deep insight into the emission of high-order harmonics from solids. This model is instructive for experimental measurements and provides a new avenue to distinguish mechanisms of the HHG from solids in different laser fields. ",0,1,0,0,0,0 17525,Using deterministic approximations to accelerate SMC for posterior sampling," Sequential Monte Carlo has become a standard tool for Bayesian inference of complex models. This approach can be computationally demanding, especially when initialized from the prior distribution. On the other hand, deterministic approximations of the posterior distribution are often available with no theoretical guarantees. We propose a bridge sampling scheme starting from such a deterministic approximation of the posterior distribution and targeting the true one. The resulting Shortened Bridge Sampler (SBS) relies on a sequence of distributions that is determined in an adaptive way. We illustrate the robustness and the efficiency of the methodology in a large simulation study. When applied to network datasets, SBS inference leads to different statistical conclusions from those supplied by the standard variational Bayes approximation. 
The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real-world stimuli that can potentially be leveraged to capture psychological representations. We find that state-of-the-art object classification networks provide surprisingly accurate predictions of human similarity judgments for natural images, but fail to capture some of the structure represented by people. We show that a simple transformation that corrects these discrepancies can be obtained through convex optimization. We use the resulting representations to predict the difficulty of learning novel categories of natural images. Our results extend the scope of psychological experiments and computational modeling by enabling tractable use of large natural stimulus sets. ",1,0,0,0,0,0 17527,An approach to nonsolvable base change and descent," We present a collection of conjectural trace identities and explain why they are equivalent to base change and descent of automorphic representations of $\mathrm{GL}_n(\mathbb{A}_F)$ along nonsolvable extensions (under some simplifying hypotheses). The case $n=2$ is treated in more detail and applications towards the Artin conjecture for icosahedral Galois representations are given. ",0,0,1,0,0,0 17528,Towards Attack-Tolerant Networks: Concurrent Multipath Routing and the Butterfly Network," Targeted attacks against network infrastructure are notoriously difficult to guard against. In the case of communication networks, such attacks can leave users vulnerable to censorship and surveillance, even when cryptography is used. Much of the existing work on network fault-tolerance focuses on random faults and does not apply to adversarial faults (attacks). 
Centralized networks have single points of failure by definition, leading to a growing popularity in decentralized architectures and protocols for greater fault-tolerance. However, centralized network structure can arise even when protocols are decentralized. Despite their decentralized protocols, the Internet and World-Wide Web have been shown both theoretically and historically to be highly susceptible to attack, in part due to emergent structural centralization. When single points of failure exist, they are potentially vulnerable to non-technological (i.e., coercive) attacks, suggesting the importance of a structural approach to attack-tolerance. We show how the assumption of partial trust transitivity, while more realistic than the assumption underlying webs of trust, can be used to quantify the effective redundancy of a network as a function of trust transitivity. We also prove that the effective redundancy of the wrap-around butterfly topology increases exponentially with trust transitivity and describe a novel concurrent multipath routing algorithm for constructing paths to utilize that redundancy. When portions of network structure can be dictated our results can be used to create scalable, attack-tolerant infrastructures. More generally, our results provide a theoretical formalism for evaluating the effects of network structure on adversarial fault-tolerance. ",1,0,0,0,0,0 17529,PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-Making," Reinforcement learning and symbolic planning have both been used to build intelligent autonomous agents. Reinforcement learning relies on learning from interactions with real world, which often requires an unfeasibly large amount of experience. Symbolic planning relies on manually crafted symbolic knowledge, which may not be robust to domain uncertainties and changes. 
In this paper we present a unified framework {\em PEORL} that integrates symbolic planning with hierarchical reinforcement learning (HRL) to cope with decision-making in a dynamic environment with uncertainties. Symbolic plans are used to guide the agent's task execution and learning, and the learned experience is fed back to symbolic knowledge to improve planning. This method leads to rapid policy search and robust symbolic plans in complex domains. The framework is tested on benchmark domains of HRL. ",0,0,0,1,0,0 17530,Structure of Native Two-dimensional Oxides on III--Nitride Surfaces," When pristine material surfaces are exposed to air, highly reactive broken bonds can promote the formation of surface oxides with structures and properties differing greatly from bulk. Determination of the oxide structure, however, is often elusive through the use of indirect diffraction methods or techniques that probe only the outermost layer. As a result, surface oxides forming on widely used materials, such as group III-nitrides, have not been unambiguously resolved, even though critical properties can depend sensitively on their presence. In this work, aberration corrected scanning transmission electron microscopy reveals directly, and with depth dependence, the structure of native two--dimensional oxides that form on AlN and GaN surfaces. Through atomic resolution imaging and spectroscopy, we show that the oxide layers are composed of tetrahedra--octahedra cation--oxygen units, similar to bulk $\theta$--Al$_2$O$_3$ and $\beta$--Ga$_2$O$_3$. By applying density functional theory, we show that the observed structures are more stable than previously proposed surface oxide models. We place the impact of these observations in the context of key III-nitride growth, device issues, and the recent discovery of two-dimensional nitrides. 
",0,1,0,0,0,0 17531,Hydrodynamic charge and heat transport on inhomogeneous curved spaces," We develop the theory of hydrodynamic charge and heat transport in strongly interacting quasi-relativistic systems on manifolds with inhomogeneous spatial curvature. In solid-state physics, this is analogous to strain disorder in the underlying lattice. In the hydrodynamic limit, we find that the thermal and electrical conductivities are dominated by viscous effects, and that the thermal conductivity is most sensitive to this disorder. We compare the effects of inhomogeneity in the spatial metric to inhomogeneity in the chemical potential, and discuss the extent to which our hydrodynamic theory is relevant for experimentally realizable condensed matter systems, including suspended graphene at the Dirac point. ",0,1,0,0,0,0 17532,Simulation assisted machine learning," Predicting how a proposed cancer treatment will affect a given tumor can be cast as a machine learning problem, but the complexity of biological systems, the number of potentially relevant genomic and clinical features, and the lack of very large scale patient data repositories make this a unique challenge. ""Pure data"" approaches to this problem are underpowered to detect combinatorially complex interactions and are bound to uncover false correlations despite statistical precautions taken (1). To investigate this setting, we propose a method to integrate simulations, a strong form of prior knowledge, into machine learning, a combination which to date has been largely unexplored. The results of multiple simulations (under various uncertainty scenarios) are used to compute similarity measures between every pair of samples: sample pairs are given a high similarity score if they behave similarly under a wide range of simulation parameters. 
These similarity values, rather than the original high dimensional feature data, are used to train kernelized machine learning algorithms such as support vector machines, thus handling the curse-of-dimensionality that typically affects genomic machine learning. Using four synthetic datasets of complex systems--three biological models and one network flow optimization model--we demonstrate that when the number of training samples is small compared to the number of features, the simulation kernel approach dominates over no-prior-knowledge methods. In addition to biology and medicine, this approach should be applicable to other disciplines, such as weather forecasting, financial markets, and agricultural management, where predictive models are sought and informative yet approximate simulations are available. The Python SimKern software, the models (in MATLAB, Octave, and R), and the datasets are made freely available at this https URL. ",0,0,0,1,1,0 17533,Rechargeable redox flow batteries: Maximum current density with electrolyte flow reactant penetration in a serpentine flow structure," Rechargeable redox flow batteries with serpentine flow field designs have been demonstrated to deliver higher current density and power density in medium and large-scale stationary energy storage applications. Nevertheless, the fundamental mechanisms involved with improved current density in flow batteries with flow field designs have not been understood. Here we report a maximum current density concept associated with stoichiometric availability of electrolyte reactant flow penetration through the porous electrode that can be achieved in a flow battery system with a ""zero-gap"" serpentine flow field architecture. This concept can explain how a higher current density is achieved while allowing reactions of all species soluble in the electrolyte. 
Further validation is provided by experimental data for a vanadium flow battery with a serpentine flow structure over a carbon paper electrode. ",0,1,0,0,0,0 17534,Open problems in mathematical physics," We present a list of open questions in mathematical physics. After a historical introduction, a number of problems in a variety of different fields are discussed, with the intention of giving an overall impression of the current status of mathematical physics, particularly in the topical fields of classical general relativity, cosmology and the quantum realm. This list is motivated by the recent article proposing 42 fundamental questions (in physics) which must be answered on the road to full enlightenment. But paraphrasing a famous quote by the British football manager Bill Shankly, in response to the question of whether mathematics can answer the Ultimate Question of Life, the Universe, and Everything, mathematics is, of course, much more important than that. ",0,1,1,0,0,0 17535,Stochastic Bandit Models for Delayed Conversions," Online advertising and product recommendation are important domains of applications for multi-armed bandit methods. In these fields, the reward that is immediately available is most often only a proxy for the actual outcome of interest, which we refer to as a conversion. For instance, in web advertising, clicks can be observed within a few seconds after an ad display but the corresponding sale --if any-- will take hours, if not days, to happen. This paper proposes and investigates a new stochastic multi-armed bandit model in the framework proposed by Chapelle (2014) --based on empirical studies in the field of web advertising-- in which each action may trigger a future reward that will then happen with a stochastic delay. 
We assume that the probability of conversion associated with each action is unknown while the distribution of the conversion delay is known, distinguishing between the (idealized) case where the conversion events may be observed whatever their delay and the more realistic setting in which late conversions are censored. We provide performance lower bounds as well as two simple but efficient algorithms based on the UCB and KLUCB frameworks. The latter algorithm, which is preferable when conversion rates are low, is based on a Poissonization argument, of independent interest in other settings where aggregation of Bernoulli observations with different success probabilities is required. ",1,0,0,0,0,0 17536,Fitting Analysis using Differential Evolution Optimization (FADO): Spectral population synthesis through genetic optimization under self-consistency boundary conditions," The goal of population spectral synthesis (PSS) is to decipher from the spectrum of a galaxy the mass, age and metallicity of its constituent stellar populations. This technique has been established as a fundamental tool in extragalactic research. It has been extensively applied to large spectroscopic data sets, notably the SDSS, leading to important insights into the galaxy assembly history. However, despite significant improvements over the past decade, all current PSS codes suffer from two major deficiencies that inhibit us from gaining sharp insights into the star-formation history (SFH) of galaxies and potentially introduce substantial biases in studies of their physical properties (e.g., stellar mass, mass-weighted stellar age and specific star formation rate). These are i) the neglect of nebular emission in spectral fits and, consequently, ii) the lack of a mechanism that ensures consistency between the best-fitting SFH and the observed nebular emission characteristics of a star-forming (SF) galaxy. 
In this article, we present FADO (Fitting Analysis using Differential evolution Optimization): a conceptually novel, publicly available PSS tool with the distinctive capability of permitting identification of the SFH that reproduces the observed nebular characteristics of a SF galaxy. This so-far unique self-consistency concept allows us to significantly alleviate degeneracies in current spectral synthesis. The innovative character of FADO is further augmented by its mathematical foundation: FADO is the first PSS code employing genetic differential evolution optimization. This, in conjunction with other unique elements in its mathematical concept (e.g., optimization of the spectral library using artificial intelligence, convergence test, quasi-parallelization) results in key improvements with respect to computational efficiency and uniqueness of the best-fitting SFHs. ",0,1,0,0,0,0 17537,A Combinatoric Shortcut to Evaluate CHY-forms," In \cite{Chen:2016fgi} we proposed a differential operator for the evaluation of the multi-dimensional residues on isolated (zero-dimensional) poles. In this paper we discuss some new insights on evaluating the (generalized) Cachazo-He-Yuan (CHY) forms of the scattering amplitudes using this differential operator. We introduce a tableau representation for the coefficients appearing in the proposed differential operator. Combining the tableaux with the polynomial forms of the scattering equations, the evaluation of the generalized CHY form becomes a simple combinatoric problem. It is thus possible to obtain the coefficients arising in the differential operator in a straightforward way. We present the procedure for a complete solution of the $n$-gon amplitudes at one-loop level in a generalized CHY form. We also apply our method to fully evaluate the one-loop five-point amplitude in the maximally supersymmetric Yang-Mills theory; the final result is identical to the one obtained by Q-Cut. 
",0,0,1,0,0,0 17538,ART: adaptive residual--time restarting for Krylov subspace matrix exponential evaluations," In this paper, a new restarting method for Krylov subspace matrix exponential evaluations is proposed. Since our restarting technique essentially employs the residual, some convergence results for the residual are given. We also discuss how the restart length can be adjusted after each restart cycle, which leads to an adaptive restarting procedure. Numerical tests are presented to compare our restarting with three other restarting methods. Some of the algorithms described in this paper are a part of the Octave/Matlab package expmARPACK available at this http URL. ",1,0,0,0,0,0 17539,Nil extensions of simple regular ordered semigroup," In this paper, we study nil extensions of some special types of ordered semigroups, such as simple regular ordered semigroups and left simple and right regular ordered semigroups. Moreover, we characterize the complete semilattice decomposition of all ordered semigroups which are nil extensions of ordered semigroups. ",0,0,1,0,0,0 17540,The Unreasonable Effectiveness of Structured Random Orthogonal Embeddings," We examine a class of embeddings based on structured random matrices with orthogonal rows which can be applied in many machine learning applications including dimensionality reduction and kernel approximation. For both the Johnson-Lindenstrauss transform and the angular kernel, we show that we can select matrices yielding guaranteed improved performance in accuracy and/or speed compared to earlier methods. We introduce matrices with complex entries which give significant further accuracy improvement. We provide geometric and Markov chain-based perspectives to help understand the benefits, and empirical results which suggest that the approach is helpful in a wider range of applications. 
",0,0,0,1,0,0 17541,The Bayesian optimist's guide to adaptive immune receptor repertoire analysis," Probabilistic modeling is fundamental to the statistical analysis of complex data. In addition to forming a coherent description of the data-generating process, probabilistic models enable parameter inference about given data sets. This procedure is well-developed in the Bayesian perspective, in which one infers probability distributions describing to what extent various possible parameters agree with the data. In this paper we motivate and review probabilistic modeling for adaptive immune receptor repertoire data, and then describe progress and prospects for future work, from germline haplotyping to adaptive immune system deployment across tissues. The relevant quantities in immune sequence analysis include not only continuous parameters such as gene use frequency, but also discrete objects such as B cell clusters and lineages. Throughout this review, we unravel the many opportunities for probabilistic modeling in adaptive immune receptor analysis, including settings for which the Bayesian approach holds substantial promise (especially if one is optimistic about new computational methods). From our perspective, the greatest prospects for progress in probabilistic modeling for repertoires concern ancestral sequence estimation for B cell receptor lineages, including uncertainty from germline genotype, rearrangement, and lineage development. ",0,0,0,0,1,0 17542,Predictive Indexing," There has been considerable research on automated index tuning in database management systems (DBMSs). But the majority of these solutions tune the index configuration by retrospectively making computationally expensive physical design changes all at once. Such changes degrade the DBMS's performance during the process, and have reduced utility during subsequent query processing due to the delay between a workload shift and the associated change. 
A better approach is to generate small changes that tune the physical design over time, forecast the utility of these changes, and apply them ahead of time to maximize their impact. This paper presents predictive indexing that continuously improves a database's physical design using lightweight physical design changes. It uses a machine learning model to forecast the utility of these changes, and continuously refines the index configuration of the database to handle evolving workloads. We introduce a lightweight hybrid scan operator with which a DBMS can make use of partially-built indexes for query processing. Our evaluation shows that predictive indexing improves the throughput of a DBMS by 3.5--5.2x compared to other state-of-the-art indexing approaches. We demonstrate that predictive indexing works seamlessly with other lightweight automated physical design tuning methods. ",1,0,0,0,0,0 17543,Making Deep Q-learning methods robust to time discretization," Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q- learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performances over a wide range of time discretizations, and confirm this robustness empirically. 
",1,0,0,1,0,0 17544,Anomalous current in diffusive ferromagnetic Josephson junctions," We demonstrate that in diffusive superconductor/ferromagnet/superconductor (S/F/S) junctions a finite, {\it anomalous}, Josephson current can flow even at zero phase difference between the S electrodes. The conditions for the observation of this effect are a non-coplanar magnetization distribution and a broken magnetization inversion symmetry of the superconducting current. The latter symmetry is intrinsic for the widely used quasiclassical approximation and prevents previous works, based on this approximation, from obtaining the Josephson anomalous current. We show that this symmetry can be removed by introducing spin-dependent boundary conditions for the quasiclassical equations at the superconducting/ferromagnet interfaces in diffusive systems. Using this recipe, we consider generic multilayer magnetic systems and determine the ideal experimental conditions in order to maximize the anomalous current. ",0,1,0,0,0,0 17545,Rate Optimal Estimation and Confidence Intervals for High-dimensional Regression with Missing Covariates," Although a majority of the theoretical literature in high-dimensional statistics has focused on settings which involve fully-observed data, settings with missing values and corruptions are common in practice. We consider the problems of estimation and of constructing component-wise confidence intervals in a sparse high-dimensional linear regression model when some covariates of the design matrix are missing completely at random. We analyze a variant of the Dantzig selector [9] for estimating the regression model and we use a de-biasing argument to construct component-wise confidence intervals. Our first main result is to establish upper bounds on the estimation error as a function of the model parameters (the sparsity level s, the expected fraction of observed covariates $\rho_*$, and a measure of the signal strength $\|\beta^*\|_2$). 
We find that even in an idealized setting where the covariates are assumed to be missing completely at random, somewhat surprisingly and in contrast to the fully-observed setting, there is a dichotomy in the dependence on model parameters and much faster rates are obtained if the covariance matrix of the random design is known. To study this issue further, our second main contribution is to provide lower bounds on the estimation error showing that this discrepancy in rates is unavoidable in a minimax sense. We then consider the problem of high-dimensional inference in the presence of missing data. We construct and analyze confidence intervals using a de-biased estimator. In the presence of missing data, inference is complicated by the fact that the de-biasing matrix is correlated with the pilot estimator and this necessitates the design of a new estimator and a novel analysis. We also complement our mathematical study with extensive simulations on synthetic and semi-synthetic data that show the accuracy of our asymptotic predictions for finite sample sizes. ",0,0,0,1,0,0 17546,Applications of an algorithm for solving Fredholm equations of the first kind," In this paper we use an iterative algorithm for solving Fredholm equations of the first kind. The basic algorithm is known and is based on an EM algorithm when the involved functions are non-negative and integrable. With this algorithm we demonstrate two examples involving the estimation of a mixing density and a first passage time density function for Brownian motion. We also develop the basic algorithm to include functions which are not necessarily non-negative and again present illustrations under this scenario. A self-contained proof of convergence of all the algorithms employed is presented. 
",0,0,1,1,0,0 17547,Fully symmetric kernel quadrature," Kernel quadratures and other kernel-based approximation methods typically suffer from prohibitive cubic time and quadratic space complexity in the number of function evaluations. The problem arises because a system of linear equations needs to be solved. In this article we show that the weights of a kernel quadrature rule can be computed efficiently and exactly for up to tens of millions of nodes if the kernel, integration domain, and measure are fully symmetric and the node set is a union of fully symmetric sets. This is based on the observations that in such a setting there are only as many distinct weights as there are fully symmetric sets and that these weights can be solved from a linear system of equations constructed out of row sums of certain submatrices of the full kernel matrix. We present several numerical examples that show feasibility, both for a large number of nodes and in high dimensions, of the developed fully symmetric kernel quadrature rules. Most prominent of the fully symmetric kernel quadrature rules we propose are those that use sparse grids. ",1,0,1,1,0,0 17548,Conditional Accelerated Lazy Stochastic Gradient Descent," In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with optimal number of calls to a stochastic first-order oracle and convergence rate $O\left(\frac{1}{\varepsilon^2}\right)$ improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale [2012] with convergence rate $O\left(\frac{1}{\varepsilon^4}\right)$. ",1,0,0,1,0,0 17549,MMD GAN: Towards Deeper Understanding of Moment Matching Network," Generative moment matching network (GMMN) is a deep generative model that differs from Generative Adversarial Network (GAN) by replacing the discriminator in GAN with a two-sample test based on kernel maximum mean discrepancy (MMD). 
Although some theoretical guarantees of MMD have been studied, the empirical performance of GMMN is still not as competitive as that of GAN on challenging and large benchmark datasets. The computational efficiency of GMMN is also less desirable in comparison with GAN, partially due to its requirement for a rather large batch size during the training. In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN. The new approach combines the key ideas in both GMMN and GAN, hence we name it MMD GAN. The new distance measure in MMD GAN is a meaningful loss that enjoys the advantage of weak topology and can be optimized via gradient descent with relatively small batch sizes. In our evaluation on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, the performance of MMD GAN significantly outperforms GMMN, and is competitive with other representative GAN works. ",1,0,0,1,0,0 17550,Multipartite entanglement after a quantum quench," We study the multipartite entanglement of a quantum many-body system undergoing a quantum quench. We quantify multipartite entanglement through the quantum Fisher information (QFI) density and we are able to express it after a quench in terms of a generalized response function. For pure state initial conditions and in the thermodynamic limit, we can express the QFI as the fluctuations of an observable computed in the so-called diagonal ensemble. We apply the formalism to the dynamics of a quantum Ising chain after a quench in the transverse field. In this model the asymptotic state is, in almost all cases, more than two-partite entangled. Moreover, starting from the ferromagnetic phase, we find a divergence of multipartite entanglement for small quenches closely connected to a corresponding divergence of the correlation length. 
",0,1,0,0,0,0 17551,Thermal properties of graphene from path-integral simulations," Thermal properties of graphene monolayers are studied by path-integral molecular dynamics (PIMD) simulations, which take into account the quantization of vibrational modes in the crystalline membrane, and allow one to consider anharmonic effects in these properties. This system was studied at temperatures in the range from 12 to 2000~K and zero external stress, by describing the interatomic interactions through the LCBOPII effective potential. We analyze the internal energy and specific heat and compare the results derived from the simulations with those yielded by a harmonic approximation for the vibrational modes. This approximation turns out to be rather precise up to temperatures of about 400~K. At higher temperatures, we observe an influence of the elastic energy, due to the thermal expansion of the graphene sheet. Zero-point and thermal effects on the in-plane and ""real"" surface of graphene are discussed. The thermal expansion coefficient $\alpha$ of the real area is found to be positive at all temperatures, in contrast to the expansion coefficient $\alpha_p$ of the in-plane area, which is negative at low temperatures, and becomes positive for $T \gtrsim$ 1000~K. ",0,1,0,0,0,0 17552,Measuring Affectiveness and Effectiveness in Software Systems," The summary presented in this paper highlights the results obtained in a four-year project aiming at analyzing the development process of software artifacts from two points of view: Effectiveness and Affectiveness. The first attribute is meant to analyze the productivity of the Open Source Communities by measuring the time required to resolve an issue, while the latter provides a novel approach for studying the development process by analyzing the affectiveness expressed by developers in their comments posted during the issue resolution phase. Affectiveness is obtained by measuring Sentiment, Politeness and Emotions. 
All the studies presented in this summary are based on Jira, one of the most used software repositories. ",1,0,0,0,0,0 17553,Intertangled stochastic motifs in networks of excitatory-inhibitory units," A stochastic model of excitatory and inhibitory interactions which bears universality traits is introduced and studied. The endogenous component of noise, stemming from finite size corrections, drives robust inter-node correlations that persist at large distances. Anti-phase synchrony at small frequencies is resolved on adjacent nodes and found to promote the spontaneous generation of long-ranged stochastic patterns that invade the network as a whole. These patterns are lacking under the idealized deterministic scenario, and could provide novel hints on how living systems implement and handle a large gallery of delicate computational tasks. ",0,1,0,0,0,0 17554,Accurate Computation of the Distribution of Sums of Dependent Log-Normals with Applications to the Black-Scholes Model," We present a new Monte Carlo methodology for the accurate estimation of the distribution of the sum of dependent log-normal random variables. The methodology delivers statistically unbiased estimators for three distributional quantities of significant interest in finance and risk management: the left tail, or cumulative distribution function, the probability density function, and the right tail, or complementary distribution function of the sum of dependent log-normal factors. In all of these three cases our methodology delivers fast and highly accurate estimators in settings for which existing methodology delivers estimators with large variance that tend to underestimate the true quantity of interest. We provide insight into the computational challenges using theory and numerical experiments, and explain their much wider implications for Monte Carlo statistical estimators of rare-event probabilities. 
In particular, we find that theoretically strongly-efficient estimators should be used with great caution in practice, because they may yield inaccurate results in the pre-limit. Further, this inaccuracy may not be detectable from the output of the Monte Carlo simulation, because the simulation output may severely underestimate the true variance of the estimator. ",0,0,0,1,0,0 17555,"The complete unitary dual of non-compact Lie superalgebra su(p,q|m) via the generalised oscillator formalism, and non-compact Young diagrams"," We study the unitary representations of the non-compact real forms of the complex Lie superalgebra sl(n|m). Among them, only the real form su(p,q|m) (p+q=n) admits nontrivial unitary representations, and all such representations are of the highest-weight type (or the lowest-weight type). We extend the standard oscillator construction of the unitary representations of non-compact Lie superalgebras over standard Fock spaces to generalised Fock spaces which allows us to define the action of oscillator determinants raised to non-integer powers. We prove that the proposed construction yields all the unitary representations including those with continuous labels. The unitary representations can be diagrammatically represented by non-compact Young diagrams. We apply our general results to the physically important case of four-dimensional conformal superalgebra su(2,2|4) and show how it yields readily its unitary representations including those corresponding to supermultiplets of conformal fields with continuous (anomalous) scaling dimensions. ",0,0,1,0,0,0 17556,DeepPainter: Painter Classification Using Deep Convolutional Autoencoders," In this paper we describe the problem of painter classification, and propose a novel approach based on deep convolutional autoencoder neural networks. 
While previous approaches relied on image processing and manual feature extraction from paintings, our approach operates on the raw pixel level, without any preprocessing or manual feature extraction. We first train a deep convolutional autoencoder on a dataset of paintings, and subsequently use it to initialize a supervised convolutional neural network for the classification phase. The proposed approach substantially outperforms previous methods, improving the state-of-the-art accuracy for the 3-painter classification problem from 90.44% to 96.52%, i.e., a 63% reduction in error rate. ",1,0,0,1,0,0 17557,"Sharp gradient estimate for heat kernels on $RCD^*(K,N)$ metric measure spaces"," In this paper, we will establish an elliptic local Li-Yau gradient estimate for weak solutions of the heat equation on metric measure spaces with generalized Ricci curvature bounded from below. One of its main applications is a sharp gradient estimate for the logarithm of heat kernels. These results seem new even for smooth Riemannian manifolds. ",0,0,1,0,0,0 17558,Thermodynamic properties of diatomic molecules systems under anharmonic Eckart potential," Since the vibrational contribution is one of the most representative contributions to the energy of diatomic molecules, we consider the generalized Morse potential (GMP) as one of the typical potentials of interaction for one-dimensional microscopic systems, which describes local anharmonic effects. From the Eckart potential (EP) model, it is possible to find a connection with the GMP model, as well as to obtain the analytical expression for the energy spectrum, because it is based on $S\,O\left(2,1\right)$ algebras. In this work we find the macroscopic properties such as vibrational mean energy $U$, specific heat $C$, Helmholtz free energy $F$ and entropy $S$ for a heteronuclear diatomic system, along with the exact partition function and its approximation for the high temperature region. 
Finally, we make a comparison between the graphs of some thermodynamic functions obtained with the GMP and the Morse potential (MP) for $H\,Cl$ molecules. ",0,1,0,0,0,0 17559,Effects of ultrasound waves intensity on the removal of Congo red color from the textile industry wastewater by Fe3O4@TiO2 core-shell nanospheres," Environmental pollutants, such as colors from the textile industry, affect water quality indicators like color, smell, and taste. These substances in the water cause the obstruction of filters and membranes and thereby reduce the efficiency of advanced water treatment processes. In addition, they are harmful to human health because of reaction with disinfectants and production of by-products. Iron oxide nanoparticles are considered effective absorbents for the removal of pollutants from aqueous environments. In order to increase the stability and dispersion, nanospheres with iron oxide core and titanium dioxide coating were used in this research and their ability to absorb Congo red color was evaluated. Iron oxide-titanium oxide nanospheres were prepared based on the coprecipitation method and then their physical properties were determined using a tunneling electron microscope (TEM) and an X-ray diffraction device. Morphological investigation of the absorbent surface showed that iron oxide-titanium oxide nanospheres sized about 5 to 10 nm. X-ray dispersion survey also suggested the high purity of the sample. In addition, the absorption rate was measured in the presence of ultrasound waves and the results indicated that the capacity of the synthesized sample to absorb Congo red is greatly dependent on the intensity power of ultrasound waves, as the absorption rate reaches 100% at powers above 30 watts. ",0,1,0,0,0,0 17560,Enumeration of Tree-like Maps with Arbitrary Number of Vertices," This paper provides the generating series for the embedding of tree-like graphs of arbitrary number of vertices, according to their genus. 
It applies and extends the techniques of Chan, where it was used to give an alternate proof of the Goulden and Slofstra formula. Furthermore, this greatly generalizes the famous Harer-Zagier formula, which computes the Euler characteristic of the moduli space of curves, and is equivalent to the computation of one vertex maps. ",0,0,1,0,0,0 17561,Depth resolved chemical speciation of a superlattice structure," We report results of simultaneous x-ray reflectivity and grazing incidence x-ray fluorescence measurements in combination with x-ray standing wave assisted depth resolved near edge x-ray absorption measurements to reveal new insights on chemical speciation of W in a W-B4C superlattice structure. Interestingly, our results show the existence of various unusual electronic states for the W atoms, especially those sitting at the surface and interface boundary of a thin film medium as compared to that of the bulk. These observations are found to be consistent with the results obtained using first principles calculations. Unlike the conventional x-ray absorption measurements, the present approach has an advantage that it permits the determination of depth resolved chemical nature of an element in the thin layered materials at atomic length scale resolutions. ",0,1,0,0,0,0 17562,Optospintronics in graphene via proximity coupling," The observation of micron-size spin relaxation makes graphene a promising material for applications in spintronics requiring long distance spin communication. However, spin dependent scatterings at the contact/graphene interfaces affect the spin injection efficiencies and hence prevent the material from achieving its full potential. While this major issue could be eliminated by nondestructive direct optical spin injection schemes, graphene's intrinsically low spin-orbit coupling strength and optical absorption place an obstacle in their realization. 
We overcome this challenge by creating sharp artificial interfaces between graphene and WSe2 monolayers. Application of circularly polarized light activates the spin polarized charge carriers in the WSe2 layer due to its spin coupled valley selective absorption. These carriers diffuse into the superjacent graphene layer, transport over a 3.5 um distance, and are finally detected electrically using BN/Co contacts in a non local geometry. Polarization-dependent measurements confirm the spin origin of the non local signal. ",0,1,0,0,0,0 17563,Control strategy to limit duty cycle impact of earthquakes on the LIGO gravitational-wave detectors," Advanced gravitational-wave detectors such as the Laser Interferometer Gravitational-Wave Observatories (LIGO) require an unprecedented level of isolation from the ground. When in operation, they are expected to observe changes in the space-time continuum of less than one thousandth of the diameter of a proton. Strong teleseismic events like earthquakes disrupt the proper functioning of the detectors, and result in a loss of data until the detectors can be returned to their operating states. An earthquake early-warning system, as well as a prediction model, have been developed to help understand the impact of earthquakes on LIGO. This paper describes a control strategy to use this early-warning system to reduce the LIGO downtime by 30%. It also presents a plan to implement this new earthquake configuration in the LIGO automation system. ",0,1,0,0,0,0 17564,Multi-rendezvous Spacecraft Trajectory Optimization with Beam P-ACO," The design of spacecraft trajectories for missions visiting multiple celestial bodies is here framed as a multi-objective bilevel optimization problem. A comparative study is performed to assess the performance of different Beam Search algorithms at tackling the combinatorial problem of finding the ideal sequence of bodies. 
Special focus is placed on the development of a new hybridization between Beam Search and the Population-based Ant Colony Optimization algorithm. An experimental evaluation shows all algorithms achieving exceptional performance on a hard benchmark problem. It is found that a properly tuned deterministic Beam Search always outperforms the remaining variants. Beam P-ACO, however, demonstrates lower parameter sensitivity, while offering superior worst-case performance. Being an anytime algorithm, it is then found to be the preferable choice for certain practical applications. ",1,1,0,0,0,0 17565,Types and unitary representations of reductive p-adic groups," We prove that for every Bushnell-Kutzko type that satisfies a certain rigidity assumption, the equivalence of categories between the corresponding Bernstein component and the category of modules for the Hecke algebra of the type induces a bijection between irreducible unitary representations in the two categories. This is a generalization of the unitarity criterion of Barbasch and Moy for representations with Iwahori fixed vectors. ",0,0,1,0,0,0 17566,Average values of L-functions in even characteristic," Let $k = \mathbb{F}_{q}(T)$ be the rational function field over a finite field $\mathbb{F}_{q}$, where $q$ is a power of $2$. In this paper we solve the problem of averaging the quadratic $L$-functions $L(s, \chi_{u})$ over fundamental discriminants. Any separable quadratic extension $K$ of $k$ is of the form $K = k(x_{u})$, where $x_{u}$ is a zero of $X^2+X+u=0$ for some $u\in k$. We characterize the family $\mathcal I$ (resp. $\mathcal F$, $\mathcal F'$) of rational functions $u\in k$ such that any separable quadratic extension $K$ of $k$ in which the infinite prime $\infty = (1/T)$ of $k$ ramifies (resp. splits, is inert) can be written as $K = k(x_{u})$ with a unique $u\in\mathcal I$ (resp. $u\in\mathcal F$, $u\in\mathcal F'$). 
For almost all $s\in\mathbb C$ with ${\rm Re}(s)\ge \frac{1}{2}$, we obtain the asymptotic formulas for the summation of $L(s,\chi_{u})$ over all $k(x_{u})$ with $u\in \mathcal I$, all $k(x_{u})$ with $u\in \mathcal F$ or all $k(x_{u})$ with $u\in \mathcal F'$ of given genus. As applications, we obtain the asymptotic mean value formulas of $L$-functions at $s=\frac{1}{2}$ and $s=1$ and the asymptotic mean value formulas of the class number $h_{u}$ or the class number times regulator $h_{u} R_{u}$. ",0,0,1,0,0,0 17567,"Decoupled molecules with binding polynomials of bidegree (n,2)"," We present a result on the number of decoupled molecules for systems binding two different types of ligands. In the case of $n$ and $2$ binding sites respectively, we show that, generically, there are $2(n!)^{2}$ decoupled molecules with the same binding polynomial. For molecules with more binding sites for the second ligand, we provide computational results. ",1,1,0,0,0,0 17568,Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization," Learning to remember long sequences remains a challenging task for recurrent neural networks. Register memory and attention mechanisms were both proposed to resolve the issue with either high computational cost to retain memory differentiability, or by discounting the RNN representation learning towards encoding shorter local contexts than encouraging long sequence encoding. Associative memory, which studies the compression of multiple patterns in a fixed size memory, was rarely considered in recent years. Although some recent work tries to introduce associative memory in RNN and mimic the energy decay process in Hopfield nets, it inherits the shortcoming of rule-based memory updates, and the memory capacity is limited. This paper proposes a method to learn the memory update rule jointly with task objective to improve memory capacity for remembering long sequences. 
Also, we propose an architecture that uses multiple such associative memories for more complex input encoding. We observed some interesting facts when comparing it to other RNN architectures on some well-studied sequence learning tasks. ",1,0,0,1,0,0 17569,McDiarmid Drift Detection Methods for Evolving Data Streams," Increasingly, Internet of Things (IoT) domains, such as sensor networks, smart cities, and social networks, generate vast amounts of data. Such data are not only unbounded, but also rapidly evolving: the content thereof dynamically evolves over time, often in unforeseen ways. These variations are due to so-called concept drifts, caused by changes in the underlying data generation mechanisms. In a classification setting, concept drift causes the previously learned models to become inaccurate, unsafe and even unusable. Accordingly, concept drifts need to be detected, and handled, as soon as possible. In medical applications and emergency response settings, for example, change in behaviours should be detected in near real-time, to avoid potential loss of life. To this end, we introduce the McDiarmid Drift Detection Method (MDDM), which utilizes McDiarmid's inequality in order to detect concept drift. The MDDM approach proceeds by sliding a window over prediction results, and associating window entries with weights. Higher weights are assigned to the most recent entries, in order to emphasize their importance. As instances are processed, the detection algorithm compares a weighted mean of elements inside the sliding window with the maximum weighted mean observed so far. A significant difference between the two weighted means, upper-bounded by the McDiarmid inequality, implies a concept drift. Our extensive experimentation against synthetic and real-world data streams shows that our novel method outperforms the state-of-the-art. 
Specifically, MDDM yields shorter detection delays as well as lower false negative rates, while maintaining high classification accuracies. ",1,0,0,1,0,0 17570,Yield in Amorphous Solids: The Ant in the Energy Landscape Labyrinth," It has recently been shown that yield in amorphous solids under oscillatory shear is a dynamical transition from asymptotically periodic to asymptotically chaotic, diffusive dynamics. However, the type and universality class of this transition are still undecided. Here we show that the diffusive behavior of the vector of coordinates of the particles comprising an amorphous solid when subject to oscillatory shear, is analogous to that of a particle diffusing in a percolating lattice, the so-called ""ant in the labyrinth"" problem, and that yield corresponds to a percolation transition in the lattice. We explain this as a transition in the connectivity of the energy landscape, which affects the phase-space regions accessible to the coordinate vector for a given maximal strain amplitude. This transition provides a natural explanation to the observed limit-cycles, periods larger than one and diverging time-scales at yield. ",0,1,0,0,0,0 17571,Statistical methods in astronomy," We present a review of data types and statistical methods often encountered in astronomy. The aim is to provide an introduction to statistical applications in astronomy for statisticians and computer scientists. We highlight the complex, often hierarchical, nature of many astronomy inference problems and advocate for cross-disciplinary collaborations to address these challenges. ",0,1,0,1,0,0 17572,A note on some algebraic trapdoors for block ciphers," We provide sufficient conditions to guarantee that a translation based cipher is not vulnerable with respect to the partition-based trapdoor. This trapdoor has been introduced, recently, by Bannier et al. (2016) and it generalizes that introduced by Paterson in 1999. 
Moreover, we discuss the fact that studying the group generated by the round functions of a block cipher may not be sufficient to guarantee security against these trapdoors for the cipher. ",1,0,1,0,0,0 17573,Bi-Demographic Changes and Current Account using SVAR Modeling," The paper, as a new contribution, aims to explore the impacts of bi-demographic structure on the current account and growth. By using a SVAR modeling, we track the dynamic impacts between the underlying variables of the Saudi economy. New insights have been developed to study the interrelations between population growth, current account and economic growth inside the neoclassical theory of population. The long-run net impact on economic growth of the bi-population growth is negative, due to the typically lower skill sets of the immigrant labor population. Besides, the negative long-run contribution of immigrant workers to the current account growth largely exceeds that of contributions from the native population, because of the increasing levels of remittance outflows from the country. We find that a positive shock in immigration leads to a negative impact on native active age ratio. Thus, the immigrants appear to be more substitutes than complements for native workers. ",0,0,0,0,0,1 17574,Supervised Machine Learning for Signals Having RRC Shaped Pulses," Classification performances of the supervised machine learning techniques such as support vector machines, neural networks and logistic regression are compared for modulation recognition purposes. The simple and robust features are used to distinguish continuous-phase FSK from QAM-PSK signals. Signals having root-raised-cosine shaped pulses are simulated in extreme noisy conditions having joint impurities of block fading, lack of symbol and sampling synchronization, carrier offset, and additive white Gaussian noise. 
The features are based on sample mean and sample variance of the imaginary part of the product of two consecutive complex signal values. ",1,0,0,0,0,0 17575,End-to-End Optimized Transmission over Dispersive Intensity-Modulated Channels Using Bidirectional Recurrent Neural Networks," We propose an autoencoding sequence-based transceiver for communication over dispersive channels with intensity modulation and direct detection (IM/DD), designed as a bidirectional deep recurrent neural network (BRNN). The receiver uses a sliding window technique to allow for efficient data stream estimation. We find that this sliding window BRNN (SBRNN), based on end-to-end deep learning of the communication system, achieves a significant bit-error-rate reduction at all examined distances in comparison to previous block-based autoencoders implemented as feed-forward neural networks (FFNNs), leading to an increase of the transmission distance. We also compare the end-to-end SBRNN with a state-of-the-art IM/DD solution based on two level pulse amplitude modulation with an FFNN receiver, simultaneously processing multiple received symbols and approximating nonlinear Volterra equalization. Our results show that the SBRNN outperforms such systems at both 42 and 84\,Gb/s, while training fewer parameters. Our novel SBRNN design aims at tailoring the end-to-end deep learning-based systems for communication over nonlinear channels with memory, such as the optical IM/DD fiber channel. ",1,0,0,1,0,0 17576,Effective difference elimination and Nullstellensatz," We prove effective Nullstellensatz and elimination theorems for difference equations in sequence rings. 
More precisely, we compute an explicit function of geometric quantities associated to a system of difference equations (and these geometric quantities may themselves be bounded by a function of the number of variables, the order of the equations, and the degrees of the equations) so that for any system of difference equations in variables $\mathbf{x} = (x_1, \ldots, x_m)$ and $\mathbf{u} = (u_1, \ldots, u_r)$, if these equations have any nontrivial consequences in the $\mathbf{x}$ variables, then such a consequence may be seen algebraically considering transforms up to the order of our bound. Specializing to the case of $m = 0$, we obtain an effective method to test whether a given system of difference equations is consistent. ",0,0,1,0,0,0 17577,A note on primitive $1-$normal elements over finite fields," Let $q$ be a prime power of a prime $p$, $n$ a positive integer and $\mathbb F_{q^n}$ the finite field with $q^n$ elements. The $k-$normal elements over finite fields were introduced and characterized by Huczynska et al (2013). Under the condition that $n$ is not divisible by $p$, they obtained an existence result on primitive $1-$normal elements of $\mathbb F_{q^n}$ over $\mathbb F_q$ for $q>2$. In this note, we extend their result to the excluded case $q=2$. ",0,0,1,0,0,0 17578,Sparse Coding Predicts Optic Flow Specificities of Zebrafish Pretectal Neurons," Zebrafish pretectal neurons exhibit specificities for large-field optic flow patterns associated with rotatory or translatory body motion. We investigate the hypothesis that these specificities reflect the input statistics of natural optic flow. Realistic motion sequences were generated using computer graphics simulating self-motion in an underwater scene. Local retinal motion was estimated with a motion detector and encoded in four populations of directionally tuned retinal ganglion cells, represented as two signed input variables. 
This activity was then used as input into one of two learning networks: a sparse coding network (competitive learning) and a backpropagation network (supervised learning). Both simulations develop specificities for optic flow which are comparable to those found in a neurophysiological study (Kubo et al. 2014), and relative frequencies of the various neuronal responses are best modeled by the sparse coding approach. We conclude that the optic flow neurons in the zebrafish pretectum do reflect the optic flow statistics. The predicted vectorial receptive fields show typical optic flow fields but also ""Gabor"" and dipole-shaped patterns that likely reflect difference fields needed for reconstruction by linear superposition. ",0,0,0,0,1,0 17579,Revisiting the problem of audio-based hit song prediction using convolutional neural networks," Being able to predict whether a song can be a hit has important applications in the music industry. Although it is true that the popularity of a song can be greatly affected by external factors such as social and commercial influences, to which degree audio features computed from musical signals (which we regard as internal factors) can predict song popularity is an interesting research question on its own. Motivated by the recent success of deep learning techniques, we attempt to extend previous work on hit song prediction by jointly learning the audio features and prediction models using deep learning. Specifically, we experiment with a convolutional neural network model that takes the primitive mel-spectrogram as the input for feature learning, a more advanced JYnet model that uses an external song dataset for supervised pre-training and auto-tagging, and the combination of these two models. We also consider the inception model to characterize audio information in different scales. 
Our experiments suggest that deep structures are indeed more accurate than shallow structures in predicting the popularity of either Chinese or Western Pop songs in Taiwan. We also use the tags predicted by JYnet to gain insights into the results of different models. ",1,0,0,1,0,0 17580,Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations," We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as the representation space becomes less entangled. We explore the structure of the representation space by interpolating between sentences, which yields interesting pseudo-English sentences, many of which have recognizable syntactic structure. Lastly, we point out an interesting property of our models: The difference-vector between two sentences can be added to change a third sentence with similar features in a meaningful way. ",0,0,0,1,0,0 17581,Temporal Logistic Neural Bag-of-Features for Financial Time series Forecasting leveraging Limit Order Book Data," Time series forecasting is a crucial component of many important applications, ranging from forecasting the stock markets to energy load prediction. The high-dimensionality, velocity and variety of the data collected in these applications pose significant and unique challenges that must be carefully addressed for each of them. In this work, a novel Temporal Logistic Neural Bag-of-Features approach that can be used to tackle these challenges is proposed. The proposed method can be effectively combined with deep neural networks, leading to powerful deep learning models for time series analysis. 
However, combining existing BoF formulations with deep feature extractors poses significant challenges: the distribution of the input features is not stationary, tuning the hyper-parameters of the model can be especially difficult, and the normalizations involved in the BoF model can cause significant instabilities during the training process. The proposed method is capable of overcoming these limitations by employing a novel adaptive scaling mechanism and replacing the classical Gaussian-based density estimation involved in the regular BoF model with a logistic kernel. The effectiveness of the proposed approach is demonstrated using extensive experiments on a large-scale financial time series dataset that consists of more than 4 million limit orders. ",1,0,0,1,0,1 17582,Extended Bose Hubbard model for two leg ladder systems in artificial magnetic fields," We investigate the ground state properties of ultracold atoms with long range interactions trapped in a two leg ladder configuration in the presence of an artificial magnetic field. Using a Gross-Pitaevskii approach and a mean field Gutzwiller variational method, we explore both the weakly interacting and strongly interacting regime, respectively. We calculate the boundaries between the density-wave/supersolid and the Mott-insulator/superfluid phases as a function of magnetic flux and uncover regions of supersolidity. The mean-field results are confirmed by numerical simulations using a cluster mean field approach. ",0,1,0,0,0,0 17583,Insensitivity of The Distance Ladder Hubble Constant Determination to Cepheid Calibration Modeling Choices," Recent determination of the Hubble constant via Cepheid-calibrated supernovae by \citet{riess_2.4_2016} (R16) find $\sim 3\sigma$ tension with inferences based on cosmic microwave background temperature and polarization measurements from $Planck$. This tension could be an indication of inadequacies in the concordance $\Lambda$CDM model. 
Here we investigate the possibility that the discrepancy could instead be due to systematic bias or uncertainty in the Cepheid calibration step of the distance ladder measurement by R16. We consider variations in total-to-selective extinction of Cepheid flux as a function of line-of-sight, hidden structure in the period-luminosity relationship, and potentially different intrinsic color distributions of Cepheids as a function of host galaxy. Considering all potential sources of error, our final determination of $H_0 = 73.3 \pm 1.7~{\rm km/s/Mpc}$ (not including systematic errors from the treatment of geometric distances or Type Ia Supernovae) shows remarkable robustness and agreement with R16. We conclude systematics from the modeling of Cepheid photometry, including Cepheid selection criteria, cannot explain the observed tension between Cepheid-variable and CMB-based inferences of the Hubble constant. Considering a `model-independent' approach to relating Cepheids in galaxies with known distances to Cepheids in galaxies hosting a Type Ia supernova and finding agreement with the R16 result, we conclude no generalization of the model relating anchor and host Cepheid magnitude measurements can introduce significant bias in the $H_0$ inference. ",0,1,0,0,0,0 17584,Phrase-based Image Captioning with Hierarchical LSTM Model," Automatic generation of caption to describe the content of an image has been gaining a lot of research interests recently, where most of the existing works treat the image caption as pure sequential data. Natural language, however possess a temporal hierarchy structure, with complex dependencies between each subsequence. In this paper, we propose a phrase-based hierarchical Long Short-Term Memory (phi-LSTM) model to generate image description. In contrast to the conventional solutions that generate caption in a pure sequential manner, our proposed model decodes image caption from phrase to sentence. 
It consists of a phrase decoder at the bottom hierarchy to decode noun phrases of variable length, and an abbreviated sentence decoder at the upper hierarchy to decode an abbreviated form of the image description. A complete image caption is formed by combining the generated phrases with the sentence during the inference stage. Empirically, our proposed model shows a better or competitive result on the Flickr8k, Flickr30k and MS-COCO datasets in comparison to the state-of-the-art models. We also show that our proposed model is able to generate more novel captions (not seen in the training data) which are richer in word contents in all these three datasets. ",1,0,0,0,0,0 17585,The three-dimensional structure of swirl-switching in bent pipe flow," Swirl-switching is a low-frequency oscillatory phenomenon which affects the Dean vortices in bent pipes and may cause fatigue in piping systems. Despite thirty years' worth of research, the mechanism that causes these oscillations and the frequencies that characterise them remain unclear. Here we show that a three-dimensional wave-like structure is responsible for the low-frequency switching of the dominant Dean vortex. The present study, performed via direct numerical simulation, focuses on the turbulent flow through a 90 \degree pipe bend preceded and followed by straight pipe segments. A pipe with curvature 0.3 (defined as ratio between pipe radius and bend radius) is studied for a bulk Reynolds number Re = 11 700, corresponding to a friction Reynolds number Re_\tau \approx 360. Synthetic turbulence is generated at the inflow section and used instead of the classical recycling method in order to avoid the interference between recycling and swirl-switching frequencies. The flow field is analysed by three-dimensional proper orthogonal decomposition (POD) which for the first time allows the identification of the source of swirl-switching: a wave-like structure that originates in the pipe bend. 
Contrary to some previous studies, the flow in the upstream pipe does not show any direct influence on the swirl-switching modes. Our analysis further shows that a three-dimensional characterisation of the modes is crucial to understand the mechanism, and that reconstructions based on 2D POD modes are incomplete. ",0,1,0,0,0,0 17586,InfoVAE: Information Maximizing Variational Autoencoders," A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We again identify the cause in existing training criteria and propose a new class of objectives (InfoVAE) that mitigate these problems. We show that our model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. Through extensive qualitative and quantitative analyses, we demonstrate that our models outperform competing approaches on multiple performance metrics. ",1,0,0,1,0,0 17587,Between-class Learning for Image Classification," In this paper, we propose a novel learning method for image classification called Between-Class learning (BC learning). We generate between-class images by mixing two images belonging to different classes with a random ratio. We then input the mixed image to the model and train the model to output the mixing ratio. BC learning has the ability to impose constraints on the shape of the feature distributions, and thus the generalization ability is improved. 
BC learning is originally a method developed for sounds, which can be digitally mixed. Mixing two image data does not appear to make sense; however, we argue that because convolutional neural networks have an aspect of treating input data as waveforms, what works on sounds must also work on images. First, we propose a simple mixing method using internal divisions, which surprisingly proves to significantly improve performance. Second, we propose a mixing method that treats the images as waveforms, which leads to a further improvement in performance. As a result, we achieved 19.4% and 2.26% top-1 errors on ImageNet-1K and CIFAR-10, respectively. ",1,0,0,1,0,0 17588,Girsanov reweighting for path ensembles and Markov state models," The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSM) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor ""on the fly"" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process to an artificial many-body system and alanine dipeptide and valine dipeptide in implicit and explicit water. 
The method can be used to study the sensitivity of molecular dynamics to external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics. ",0,1,0,0,0,0 17589,Coordination of multi-agent systems via asynchronous cloud communication," In this work we study a multi-agent coordination problem in which agents are only able to communicate with each other intermittently through a cloud server. To reduce the amount of required communication, we develop a self-triggered algorithm that allows agents to communicate with the cloud only when necessary rather than at some fixed period. Unlike the vast majority of similar works that propose distributed event- and/or self-triggered control laws, this work does not assume agents can be ""listening"" continuously. In other words, when an event is triggered by one agent, neighboring agents will not be aware of this until the next time they establish communication with the cloud themselves. Using a notion of ""promises"" about future control inputs, agents are able to keep track of higher-quality estimates about their neighbors, allowing them to stay disconnected from the cloud for longer periods of time while still guaranteeing a positive contribution to the global task. We prove that our self-triggered coordination algorithm guarantees that the system asymptotically reaches the set of desired states. Simulations illustrate our results. ",1,0,1,0,0,0 17590,PatternListener: Cracking Android Pattern Lock Using Acoustic Signals," Pattern lock has been widely used for authentication to protect user privacy on mobile devices (e.g., smartphones and tablets). Given its pervasive usage, the compromise of pattern lock could lead to serious consequences. Several attacks have been constructed to crack the lock. 
However, these approaches require the attackers to either be physically close to the target device or be able to manipulate the network facilities (e.g., WiFi hotspots) used by the victims. Therefore, the effectiveness of the attacks is significantly impacted by the environment of the mobile devices. Also, these attacks are not scalable, since they cannot easily infer the unlock patterns of a large number of devices. Motivated by the observation that fingertip motions on the screen of a mobile device can be captured by analyzing the surrounding acoustic signals, we propose PatternListener, a novel acoustic attack that cracks pattern lock by analyzing imperceptible acoustic signals reflected by the fingertip. It leverages the speakers and microphones of the victim's device to play imperceptible audio and record the acoustic signals reflected by the fingertip. In particular, it infers each unlock pattern by analyzing the individual lines that compose the pattern and are the trajectories of the fingertip. We propose several algorithms to construct signal segments according to the captured signals for each line and infer possible candidates of each individual line according to the signal segments. Finally, we map all line candidates into grid patterns and thereby obtain the candidates of the entire unlock pattern. We implement a PatternListener prototype using off-the-shelf smartphones and thoroughly evaluate it using 130 unique patterns. The experimental results demonstrate that PatternListener can successfully crack over 90% of patterns within five attempts. ",1,0,0,0,0,0 17591,Marginal Release Under Local Differential Privacy," Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. 
In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals. ",1,0,0,0,0,0 17592,A possible flyby anomaly for Juno at Jupiter," In the last decades there has been increasing interest in improving the accuracy of spacecraft navigation and trajectory data. In the course of this plan some anomalies have been found that cannot, in principle, be explained in the context of the most accurate orbital models including all known effects from classical dynamics and general relativity. Of particular interest for its puzzling nature, and the lack of any accepted explanation for the moment, is the flyby anomaly discovered in some spacecraft flybys of the Earth over the course of twenty years. This anomaly manifests itself as the impossibility of matching the pre- and post-encounter Doppler tracking and ranging data within a single orbit but, on the contrary, a difference of a few mm$/$s in the asymptotic velocities is required to perform the fitting. Nevertheless, no dedicated missions have been carried out to elucidate the origin of this phenomenon with the objective either of revising our understanding of gravity or of improving the accuracy of spacecraft Doppler tracking by revealing a conventional origin. With the occasion of the Juno mission's arrival at Jupiter and the close flybys of this planet, which are currently being performed, we have developed an orbital model suited to the time window close to the perijove. This model shows that an anomalous acceleration of a few mm$/$s$^2$ is also present in this case. 
The possibility of an overlooked conventional explanation, or of an unconventional one, is discussed. ",0,1,0,0,0,0 17593,Crossover between various initial conditions in KPZ growth: flat to stationary," We conjecture the universal probability distribution at large time for the one-point height in the 1D Kardar-Parisi-Zhang (KPZ) stochastic growth universality class, with initial conditions interpolating from any one of the three main classes (droplet, flat, stationary) on the left, to another on the right, allowing for drifts and also for a step near the origin. The result is obtained from a replica Bethe ansatz calculation starting from the KPZ continuum equation, together with a ""decoupling assumption"" in the large time limit. Some cases are checked to be equivalent to previously known results from other models in the same class, which provides a test of the method; others appear to be new. In particular we obtain the crossover distribution between flat and stationary initial conditions (crossover from Airy$_1$ to Airy$_{\rm stat}$) in a simple compact form. ",0,1,1,0,0,0 17594,Multimodel Response Assessment for Monthly Rainfall Distribution in Some Selected Indian Cities Using Best Fit Probability as a Tool," We carry out a study of the statistical distribution of rainfall precipitation data for 20 cities in India. We have determined the best-fit probability distribution for these cities from the monthly precipitation data spanning 100 years of observations, from 1901 to 2002. To fit the observed data, we considered 10 different distributions. The efficacy of the fits for these distributions was evaluated using five criteria: the Kolmogorov-Smirnov, Anderson-Darling, and Chi-Square goodness-of-fit tests, the Akaike information criterion, and the Bayesian information criterion. Finally, the best-fit distribution was reported by combining the results from these model comparison tests. 
We then find that for most of the cities, the Generalized Extreme-Value distribution or the Inverse Gaussian distribution most adequately fits the observed data. ",0,1,0,1,0,0 17595,Small Boxes Big Data: A Deep Learning Approach to Optimize Variable Sized Bin Packing," Bin packing problems have been widely studied because of their broad applications in different domains. Known as a set of NP-hard problems, they have different variations, and many heuristics have been proposed for obtaining approximate solutions. Specifically, for the 1D variable sized bin packing problem, the two key sets of optimization heuristics are the bin assignment and the bin allocation. Usually the performance of a single static optimization heuristic cannot beat that of a dynamic one which is tailored for each bin packing instance. Building such an adaptive system requires modeling the relationship between bin features and packing performance profiles. The primary drawbacks of traditional machine learning approaches for this task are the natural limitations of feature engineering, such as the curse of dimensionality and feature selection quality. We introduce a deep learning approach to overcome these drawbacks by applying a large training data set, automatic feature selection, and fast, accurate labeling. We show in this paper how to build such a system through both theoretical formulation and engineering practices. Our prediction system achieves up to 89% training accuracy and 72% validation accuracy in selecting the best heuristic that can generate a better quality bin packing solution. ",1,0,0,1,0,0 17596,Surges of collective human activity emerge from simple pairwise correlations," Human populations exhibit complex behaviors---characterized by long-range correlations and surges in activity---across a range of social, political, and technological contexts. Yet it remains unclear where these collective behaviors come from, or if there even exists a set of unifying principles. 
Indeed, existing explanations typically rely on context-specific mechanisms, such as traffic jams driven by work schedules or spikes in online traffic induced by significant events. However, analogies with statistical mechanics suggest a more general mechanism: that collective patterns can emerge organically from fine-scale interactions within a population. Here, across four different modes of human activity, we show that the simplest correlations in a population---those between pairs of individuals---can yield accurate quantitative predictions for the large-scale behavior of the entire population. To quantify the minimal consequences of pairwise correlations, we employ the principle of maximum entropy, making our description equivalent to an Ising model whose interactions and external fields are notably calculated from past observations of population activity. In addition to providing accurate quantitative predictions, we show that the topology of learned Ising interactions resembles the network of inter-human communication within a population. Together, these results demonstrate that fine-scale correlations can be used to predict large-scale social behaviors, a perspective that has critical implications for modeling and resource allocation in human populations. ",1,0,0,0,0,0 17597,Semantically Enhanced Dynamic Bayesian Network for Detecting Sepsis Mortality Risk in ICU Patients with Infection," Although timely sepsis diagnosis and prompt interventions in Intensive Care Unit (ICU) patients are associated with reduced mortality, early clinical recognition is frequently impeded by non-specific signs of infection and failure to detect signs of sepsis-induced organ dysfunction in a constellation of dynamically changing physiological data. The goal of this work is to identify patients at risk of life-threatening sepsis using a data-centered and machine learning-driven approach. 
We derive a mortality risk predictive dynamic Bayesian network (DBN) guided by a customized sepsis knowledgebase and compare the predictive accuracy of the derived DBN with the Sepsis-related Organ Failure Assessment (SOFA) score, the Quick SOFA (qSOFA) score, the Simplified Acute Physiological Score (SAPS-II) and the Modified Early Warning Score (MEWS) tools. A customized sepsis ontology was used to derive the DBN node structure and semantically characterize temporal features derived from both structured physiological data and unstructured clinical notes. We assessed the performance of the DBN predictive model in predicting mortality risk and compared it to the other models using Receiver Operating Characteristic (ROC) curves, area under the curve (AUROC), calibration curves, and risk distributions. The derived dataset consists of 24,506 ICU stays from 19,623 patients with evidence of suspected infection, with 2,829 patients deceased at discharge. The DBN AUROC was found to be 0.91, which outperformed the SOFA (0.843), qSOFA (0.66), MEWS (0.73), and SAPS-II (0.77) scoring tools. Continuous Net Reclassification Index and Integrated Discrimination Improvement analyses supported the superiority of the DBN. Compared with conventional rule-based risk scoring tools, the sepsis knowledgebase-driven DBN algorithm offers improved performance for predicting the mortality of infected patients in ICUs. ",0,0,0,1,0,0 17598,Hölder regularity of viscosity solutions of some fully nonlinear equations in the Heisenberg group," In this paper we prove the Hölder regularity of bounded, uniformly continuous, viscosity solutions of some degenerate fully nonlinear equations in the Heisenberg group. 
",0,0,1,0,0,0 17599,Normal form for transverse instability of the line soliton with a nearly critical speed of propagation," There exists a critical speed of propagation of the line solitons in the Zakharov-Kuznetsov (ZK) equation such that small transversely periodic perturbations are unstable for line solitons with larger-than-critical speeds and orbitally stable for those with smaller-than-critical speeds. The normal form for transverse instability of the line soliton with a nearly critical speed of propagation is derived by means of symplectic projections and near-identity transformations. Justification of this normal form is provided with the energy method. The normal form predicts a transformation of the unstable line solitons with larger-than-critical speeds to the orbitally stable transversely modulated solitary waves. ",0,1,1,0,0,0 17600,The CLaC Discourse Parser at CoNLL-2016," This paper describes our submission ""CLaC"" to the CoNLL-2016 shared task on shallow discourse parsing. We used two complementary approaches for the task. A standard machine learning approach for the parsing of explicit relations, and a deep learning approach for non-explicit relations. Overall, our parser achieves an F1-score of 0.2106 on the identification of discourse relations (0.3110 for explicit relations and 0.1219 for non-explicit relations) on the blind CoNLL-2016 test set. ",1,0,0,0,0,0 17601,Ordinary differential equations in algebras of generalized functions," A local existence and uniqueness theorem for ODEs in the special algebra of generalized functions is established, as well as versions including parameters and dependence on initial values in the generalized sense. Finally, a Frobenius theorem is proved. In all these results, composition of generalized functions is based on the notion of c-boundedness. ",0,0,1,0,0,0 17602,Interesting Paths in the Mapper," The Mapper produces a compact summary of high dimensional data as a simplicial complex. 
We study the problem of quantifying the interestingness of subpopulations in a Mapper, which appear as long paths, flares, or loops. First, we create a weighted directed graph G using the 1-skeleton of the Mapper. We use the average values at the vertices of a target function to direct edges (from low to high). The difference between the average values at vertices (high-low) is set as the edge's weight. Covariation of the remaining h functions (independent variables) is captured by a h-bit binary signature assigned to the edge. An interesting path in G is a directed path whose edges all have the same signature. We define the interestingness score of such a path as a sum of its edge weights multiplied by a nonlinear function of their ranks in the path. Second, we study three optimization problems on this graph G. In the problem Max-IP, we seek an interesting path in G with the maximum interestingness score. We show that Max-IP is NP-complete. For the special case when G is a directed acyclic graph (DAG), we show that Max-IP can be solved in polynomial time - in O(mnd_i) where d_i is the maximum indegree of a vertex in G. In the more general problem IP, the goal is to find a collection of edge-disjoint interesting paths such that the overall sum of their interestingness scores is maximized. We also study a variant of IP termed k-IP, where the goal is to identify a collection of edge-disjoint interesting paths each with k edges, and their total interestingness score is maximized. While k-IP can be solved in polynomial time for k <= 2, we show k-IP is NP-complete for k >= 3 even when G is a DAG. We develop polynomial time heuristics for IP and k-IP on DAGs. ",1,0,1,0,0,0 17603,On a three dimensional vision based collision avoidance model," This paper presents a three dimensional collision avoidance approach for aerial vehicles inspired by coordinated behaviors in biological groups. 
The proposed strategy aims to enable a group of vehicles to converge to a common destination point while avoiding collisions with each other and with moving obstacles in their environment. The interaction rules lead the agents to adapt their velocity vectors through a modification of the relative bearing angle and the relative elevation. Moreover, the model satisfies the limited field-of-view constraints resulting from individual perception sensitivity. From the proposed individual-based model, a mean-field kinetic model is derived. Simulations are performed to show the effectiveness of the proposed model. ",0,0,1,0,0,0 17604,Algorithms for Covering Multiple Barriers," In this paper, we consider problems of covering multiple intervals on a line. Given a set $B$ of $m$ line segments (called ""barriers"") on a horizontal line $L$ and another set $S$ of $n$ horizontal line segments of the same length in the plane, we want to move all segments of $S$ to $L$ so that their union covers all barriers and the maximum movement of all segments of $S$ is minimized. Previously, an $O(n^3\log n)$-time algorithm was given for the case $m=1$. In this paper, we propose an $O(n^2\log n\log \log n+nm\log m)$-time algorithm for a more general setting with any $m\geq 1$, which also improves the previous work when $m=1$. We then consider a line-constrained version of the problem in which the segments of $S$ are all initially on the line $L$. Previously, an $O(n\log n)$-time algorithm was known for the case $m=1$. We present an algorithm of $O(m\log m+n\log m \log n)$ time for any $m\geq 1$. These problems may have applications in mobile sensor barrier coverage in wireless sensor networks. ",1,0,0,0,0,0 17605,Shattering the glass ceiling? How the institutional context mitigates the gender gap in entrepreneurship," We examine how the institutional context affects the relationship between gender and opportunity entrepreneurship. 
To do this, we develop a multi-level model that connects feminist theory at the micro-level to institutional theory at the macro-level. It is hypothesized that the gender gap in opportunity entrepreneurship is more pronounced in low-quality institutional contexts and less pronounced in high-quality institutional contexts. Using data from the Global Entrepreneurship Monitor (GEM) and regulation data from the economic freedom of the world index (EFW), we test our predictions and find evidence in support of our model. Our findings suggest that, while there is a gender gap in entrepreneurship, these disparities are reduced as the quality of the institutional context improves. ",0,0,0,0,0,1 17606,An Automated Text Categorization Framework based on Hyperparameter Optimization," A great variety of text tasks such as topic or spam identification, user profiling, and sentiment analysis can be posed as a supervised learning problem and tackled using a text classifier. A text classifier consists of several subprocesses, some of which are general enough to be applied to any supervised learning problem, whereas others are specifically designed to tackle a particular task, using complex and computationally expensive processes such as lemmatization, syntactic analysis, etc. Contrary to traditional approaches, we propose a minimalistic and general system able to tackle text classification tasks independent of domain and language, namely microTC. It is composed of some easy-to-implement text transformations, text representations, and a supervised learning algorithm. These pieces produce a competitive classifier even in the domain of informally written text. We provide a detailed description of microTC along with an extensive experimental comparison with relevant state-of-the-art methods. microTC was compared on 30 different datasets. Regarding accuracy, microTC obtained the best performance in 20 datasets while achieving competitive results in the remaining 10. 
The compared datasets include several problems like topic and polarity classification, spam detection, user profiling, and authorship attribution. Furthermore, it is important to note that our approach allows the technology to be used even without knowledge of machine learning and natural language processing. ",1,0,0,1,0,0 17607,Abdominal aortic aneurysms and endovascular sealing: deformation and dynamic response," Endovascular sealing is a new technique for the repair of abdominal aortic aneurysms. Commercially available in Europe since~2013, it takes a revolutionary approach to aneurysm repair through minimally invasive techniques. Although aneurysm sealing may be thought of as more stable than conventional endovascular stent graft repairs, post-implantation movement of the endoprosthesis has been described, potentially leading to late complications. The paper presents for the first time a model which explains the nature of the forces, in static and dynamic regimes, acting on sealed abdominal aortic aneurysms, with references to real case studies. It is shown that elastic deformation of the aorta and of the endoprosthesis induced by static forces and vibrations during daily activities can potentially promote undesired movements of the endovascular sealing structure. ",0,1,0,0,0,0 17608,Affinity Scheduling and the Applications on Data Center Scheduling with Data Locality," The MapReduce framework is the de facto standard in Hadoop. Considering data locality in data centers, the load balancing problem of map tasks is a special case of the affinity scheduling problem. There is a huge body of work on affinity scheduling, proposing heuristic algorithms which try to increase data locality in data centers, like Delay Scheduling and Quincy. However, not enough attention has been paid to theoretical guarantees on the throughput and delay optimality of such algorithms. In this work, we present and compare different algorithms and discuss their shortcomings and strengths. 
To the best of our knowledge, most data centers use static load balancing algorithms, which are not efficient and result in wasted resources and unnecessary delays for users. ",1,0,0,0,0,0 17609,Multivariate Regression with Gross Errors on Manifold-valued Data," We consider the topic of multivariate regression on manifold-valued output, that is, for a multivariate observation, its output response lies on a manifold. Moreover, we propose a new regression model to deal with the presence of grossly corrupted manifold-valued responses, a bottleneck issue commonly encountered in practical scenarios. Our model first takes a correction step on the grossly corrupted responses via geodesic curves on the manifold, and then performs multivariate linear regression on the corrected data. This results in a nonconvex and nonsmooth optimization problem on manifolds. To this end, we propose a dedicated approach named PALMR, by utilizing and extending the proximal alternating linearized minimization techniques. Theoretically, we investigate its convergence property, where it is shown to converge to a critical point under mild conditions. Empirically, we test our model on both synthetic and real diffusion tensor imaging data, and show that our model outperforms other multivariate regression models when manifold-valued responses contain gross errors, and is effective in identifying gross errors. ",1,0,1,1,0,0 17610,Computing an Approximately Optimal Agreeable Set of Items," We study the problem of finding a small subset of items that is \emph{agreeable} to all agents, meaning that all agents value the subset at least as much as its complement. Previous work has shown worst-case bounds, over all instances with a given number of agents and items, on the number of items that may need to be included in such a subset. 
Our goal in this paper is to efficiently compute an agreeable subset whose size approximates the size of the smallest agreeable subset for a given instance. We consider three well-known models for representing the preferences of the agents: ordinal preferences on single items, the value oracle model, and additive utilities. In each of these models, we establish virtually tight bounds on the approximation ratio that can be obtained by algorithms running in polynomial time. ",1,0,0,0,0,0 17611,3D Sketching using Multi-View Deep Volumetric Prediction," Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides us with an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in a minimal modeling interface that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. 
The main strengths of our approach are its robustness to freehand bitmap drawings, its ability to adapt to different object categories, and the continuum it offers between single-view and multi-view sketch-based modeling. ",1,0,0,0,0,0 17612,Inductive Pairwise Ranking: Going Beyond the n log(n) Barrier," We study the problem of ranking a set of items from non-actively chosen pairwise preferences, where each item has associated feature information. We propose and characterize a very broad class of preference matrices giving rise to the Feature Low Rank (FLR) model, which subsumes several models ranging from the classic Bradley-Terry-Luce (BTL) (Bradley and Terry 1952) and Thurstone (Thurstone 1927) models to the recently proposed blade-chest (Chen and Joachims 2016) and generic low-rank preference (Rajkumar and Agarwal 2016) models. We use the technique of matrix completion in the presence of side information to develop the Inductive Pairwise Ranking (IPR) algorithm that provably learns a good ranking under the FLR model, in a sample-efficient manner. In practice, through systematic synthetic simulations, we confirm our theoretical findings regarding improvements in the sample complexity due to the use of feature information. Moreover, on popular real-world preference learning datasets, with as little as 10% sampling of the pairwise comparisons, our method recovers a good ranking. ",1,0,0,1,0,0 17613,SAGA and Restricted Strong Convexity," SAGA is a fast incremental gradient method for the finite sum problem and its effectiveness has been tested on a wide variety of applications. In this paper, we analyze SAGA on a class of non-strongly convex and non-convex statistical problems such as Lasso, group Lasso, logistic regression with $\ell_1$ regularization, linear regression with SCAD regularization, and Correct Lasso. We prove that SAGA enjoys a linear convergence rate up to the statistical estimation accuracy, under the assumption of restricted strong convexity (RSC). 
It significantly extends the applicability of SAGA in convex and non-convex optimization. ",0,0,0,1,0,0 17614,Characterization of Traps at Nitrided SiO$_2$/SiC Interfaces near the Conduction Band Edge by using Hall Effect Measurements," The effects of nitridation on the density of traps at SiO$_2$/SiC interfaces near the conduction band edge were qualitatively examined by a simple, newly developed characterization method that utilizes Hall effect measurements and split capacitance-voltage measurements. The results showed a significant reduction in the density of interface traps near the conduction band edge by nitridation, as well as the high density of interface traps that was not eliminated by nitridation. ",0,1,0,0,0,0 17615,Response theory of the ergodic many-body delocalized phase: Keldysh Finkel'stein sigma models and the 10-fold way," We derive the finite temperature Keldysh response theory for interacting fermions in the presence of quenched disorder, as applicable to any of the 10 Altland-Zirnbauer classes in an Anderson delocalized phase with at least a U(1) continuous symmetry. In this formulation of the interacting Finkel'stein nonlinear sigma model, the statistics of one-body wave functions are encoded by the constrained matrix field, while physical correlations follow from the hydrodynamic density or spin response field, which decouples the interactions. Integrating out the matrix field first, we obtain weak (anti)localization and Altshuler-Aronov quantum conductance corrections from the hydrodynamic response function. This procedure automatically incorporates the correct infrared physics, and in particular gives the Altshuler-Aronov-Khmelnitsky (AAK) equations for dephasing of weak (anti)localization due to electron-electron collisions. We explicate the method by deriving known quantum corrections in two dimensions for the symplectic metal class AII, as well as the spin-SU(2) invariant superconductor classes C and CI. 
We show that conductance corrections due to the special modes at zero energy in nonstandard classes are automatically cut off by temperature, as previously expected, while the Wigner-Dyson class Cooperon modes that persist to all energies are cut by dephasing. We also show that for short-ranged interactions, the standard self-consistent solution for the dephasing rate is equivalent to a diagrammatic summation via the self-consistent Born approximation. This should be compared to the AAK solution for long-ranged Coulomb interactions, which exploits the Markovian noise correlations induced by thermal fluctuations of the electromagnetic field. We discuss prospects for exploring the many-body localization transition from the ergodic side as a dephasing catastrophe in short-range interacting models. ",0,1,0,0,0,0 17616,Classification via Tensor Decompositions of Echo State Networks," This work introduces a tensor-based method to perform supervised classification on spatiotemporal data processed in an echo state network. Typically when performing supervised classification tasks on data processed in an echo state network, the entire collection of hidden layer node states from the training dataset is shaped into a matrix, allowing one to use standard linear algebra techniques to train the output layer. However, the collection of hidden layer states is multidimensional in nature, and representing it as a matrix may lead to undesirable numerical conditions or loss of spatial and temporal correlations in the data. This work proposes a tensor-based supervised classification method on echo state network data that preserves and exploits the multidimensional nature of the hidden layer states. The method, which is based on orthogonal Tucker decompositions of tensors, is compared with the standard linear output weight approach in several numerical experiments on both synthetic and natural data. 
The results show that the tensor-based approach tends to outperform the standard approach in terms of classification accuracy. ",1,0,0,1,0,0 17617,Inference on Breakdown Frontiers," Given a set of baseline assumptions, a breakdown frontier is the boundary between the set of assumptions which lead to a specific conclusion and those which do not. In a potential outcomes model with a binary treatment, we consider two conclusions: First, that ATE is at least a specific value (e.g., nonnegative) and second that the proportion of units who benefit from treatment is at least a specific value (e.g., at least 50\%). For these conclusions, we derive the breakdown frontier for two kinds of assumptions: one which indexes relaxations of the baseline random assignment of treatment assumption, and one which indexes relaxations of the baseline rank invariance assumption. These classes of assumptions nest both the point identifying assumptions of random assignment and rank invariance and the opposite end of no constraints on treatment selection or the dependence structure between potential outcomes. This frontier provides a quantitative measure of robustness of conclusions to relaxations of the baseline point identifying assumptions. We derive $\sqrt{N}$-consistent sample analog estimators for these frontiers. We then provide two asymptotically valid bootstrap procedures for constructing lower uniform confidence bands for the breakdown frontier. As a measure of robustness, estimated breakdown frontiers and their corresponding confidence bands can be presented alongside traditional point estimates and confidence intervals obtained under point identifying assumptions. We illustrate this approach in an empirical application to the effect of child soldiering on wages. 
We find that sufficiently weak conclusions are robust to simultaneous failures of rank invariance and random assignment, while some stronger conclusions are fairly robust to failures of rank invariance but not necessarily to relaxations of random assignment. ",0,0,0,1,0,0 17618,Sequential Detection of Three-Dimensional Signals under Dependent Noise," We study detection methods for multivariable signals under dependent noise. The main focus is on three-dimensional signals, i.e. on signals in the space-time domain. Examples for such signals are multifaceted. They include geographic and climatic data as well as image data, that are observed over a fixed time horizon. We assume that the signal is observed as a finite block of noisy samples whereby we are interested in detecting changes from a given reference signal. Our detector statistic is based on a sequential partial sum process, related to classical signal decomposition and reconstruction approaches applied to the sampled signal. We show that this detector process converges weakly under the no change null hypothesis that the signal coincides with the reference signal, provided that the spatial-temporal partial sum process associated to the random field of the noise terms disturbing the sampled signal con- verges to a Brownian motion. More generally, we also establish the limiting distribution under a wide class of local alternatives that allows for smooth as well as discontinuous changes. Our results also cover extensions to the case that the reference signal is unknown. We conclude with an extensive simulation study of the detection algorithm. ",0,0,1,1,0,0 17619,T-Branes at the Limits of Geometry," Singular limits of 6D F-theory compactifications are often captured by T-branes, namely a non-abelian configuration of intersecting 7-branes with a nilpotent matrix of normal deformations. 
The long distance approximation of such 7-branes is a Hitchin-like system in which simple and irregular poles emerge at marked points of the geometry. When multiple matter fields localize at the same point in the geometry, the associated Higgs field can exhibit irregular behavior, namely poles of order greater than one. This provides a geometric mechanism to engineer wild Higgs bundles. Physical constraints such as anomaly cancellation and consistent coupling to gravity also limit the order of such poles. Using this geometric formulation, we unify seemingly different wild Hitchin systems in a single framework in which orders of poles become adjustable parameters dictated by tuning gauge singlet moduli of the F-theory model. ",0,0,1,0,0,0 17620,Space-time crystal and space-time group," Crystal structures and the Bloch theorem play a fundamental role in condensed matter physics. We extend the static crystal to the dynamic ""space-time"" crystal characterized by the general intertwined space-time periodicities in $D+1$ dimensions, which include both the static crystal and the Floquet crystal as special cases. A new group structure dubbed ""space-time"" group is constructed to describe the discrete symmetries of space-time crystal. Compared to space and magnetic groups, space-time group is augmented by ""time-screw"" rotations and ""time-glide"" reflections involving fractional translations along the time direction. A complete classification of the 13 space-time groups in 1+1D is performed. The Kramers-type degeneracy can arise from the glide time-reversal symmetry without the half-integer spinor structure, which constrains the winding number patterns of spectral dispersions. In 2+1D, non-symmorphic space-time symmetries enforce spectral degeneracies, leading to protected Floquet semi-metal states. Our work provides a general framework for further studying topological properties of the $D+1$ dimensional space-time crystal. 
",0,1,0,0,0,0 17621,On the Performance of Zero-Forcing Processing in Multi-Way Massive MIMO Relay Networks," We consider a multi-way massive multiple-input multiple-output relay network with zero-forcing processing at the relay. By taking into account the time-division duplex protocol with channel estimation, we derive an analytical approximation of the spectral efficiency. This approximation is very tight and simple which enables us to analyze the system performance, as well as, to compare the spectral efficiency with zero-forcing and maximum-ratio processing. Our results show that by using a very large number of relay antennas and with the zero-forcing technique, we can simultaneously serve many active users in the same time-frequency resource, each with high spectral efficiency. ",1,0,1,0,0,0 17622,Motivic rational homotopy type," In this paper we introduce and study motives for rational homotopy types. ",0,0,1,0,0,0 17623,Preconditioner-free Wiener filtering with a dense noise matrix," This work extends the Elsner & Wandelt (2013) iterative method for efficient, preconditioner-free Wiener filtering to cases in which the noise covariance matrix is dense, but can be decomposed into a sum whose parts are sparse in convenient bases. The new method, which uses multiple messenger fields, reproduces Wiener filter solutions for test problems, and we apply it to a case beyond the reach of the Elsner & Wandelt (2013) method. We compute the Wiener filter solution for a simulated Cosmic Microwave Background map that contains spatially-varying, uncorrelated noise, isotropic $1/f$ noise, and large-scale horizontal stripes (like those caused by the atmospheric noise). We discuss simple extensions that can filter contaminated modes or inverse-noise filter the data. 
These techniques help to address complications in the noise properties of maps from current and future generations of ground-based Microwave Background experiments, like Advanced ACTPol, Simons Observatory, and CMB-S4. ",0,1,0,0,0,0 17624,"Order-unity argument for structure-generated ""extra"" expansion"," Self-consistent treatment of cosmological structure formation and expansion within the context of classical general relativity may lead to ""extra"" expansion above that expected in a structureless universe. We argue that in comparison to an early-epoch, extrapolated Einstein-de Sitter model, about 10-15% ""extra"" expansion is sufficient at the present to render superfluous the ""dark energy"" 68% contribution to the energy density budget, and that this is observationally realistic. ",0,1,0,0,0,0 17625,The least unramified prime which does not split completely," Let $K/F$ be a finite extension of number fields of degree $n \geq 2$. We establish effective field-uniform unconditional upper bounds for the least norm of a prime ideal of $F$ which is degree 1 over $\mathbb{Q}$ and does not ramify or split completely in $K$. We improve upon the previous best known general estimates due to X. Li when $F = \mathbb{Q}$ and Murty-Patankar when $K/F$ is Galois. Our bounds are the first when $K/F$ is not assumed to be Galois and $F \neq \mathbb{Q}$. ",0,0,1,0,0,0 17626,Crosscorrelation of Rudin-Shapiro-Like Polynomials," We consider the class of Rudin-Shapiro-like polynomials, whose $L^4$ norms on the complex unit circle were studied by Borwein and Mossinghoff. The polynomial $f(z)=f_0+f_1 z + \cdots + f_d z^d$ is identified with the sequence $(f_0,f_1,\ldots,f_d)$ of its coefficients. From the $L^4$ norm of a polynomial, one can easily calculate the autocorrelation merit factor of its associated sequence, and conversely. In this paper, we study the crosscorrelation properties of pairs of sequences associated to Rudin-Shapiro-like polynomials. 
We find an explicit formula for the crosscorrelation merit factor. A computer search is then used to find pairs of Rudin-Shapiro-like polynomials whose autocorrelation and crosscorrelation merit factors are simultaneously high. Pursley and Sarwate proved a bound that limits how good this combined autocorrelation and crosscorrelation performance can be. We find infinite families of polynomials whose performance approaches quite close to this fundamental limit. ",1,0,1,0,0,0 17627,An Application of Rubi: Series Expansion of the Quark Mass Renormalization Group Equation," We highlight how Rule-based Integration (Rubi) is an enhanced method of symbolic integration which allows for the integration of many difficult integrals not accomplished by other computer algebra systems. Using Rubi, many integration techniques become tractable. Integrals are approached using step-wise simplification, hence distilling an integral (if the solution is unknown) into composite integrals which highlight yet undiscovered integration rules. The motivating example we use is the derivation of the updated series expansion of the quark mass renormalization group equation (RGE) to five-loop order. This series provides the relation between a light quark mass in the modified minimal subtraction ($\overline{\text{MS}}$) scheme defined at some given scale, e.g. at the tau-lepton mass scale, and another chosen energy scale, $s$. This relation explicitly depicts the renormalization scheme dependence of the running quark mass on the scale parameter, $s$, and is important in accurately determining a light quark mass at a chosen scale. The five-loop QCD $\beta(a_s)$ and $\gamma(a_s)$ functions are used in this determination. 
",1,0,0,0,0,0 17628,On the Performance of Wireless Powered Communication With Non-linear Energy Harvesting," In this paper, we analyze the performance of a time-slotted multi-antenna wireless powered communication (WPC) system, where a wireless device first harvests radio frequency (RF) energy from a power station (PS) in the downlink to facilitate information transfer to an information receiving station (IRS) in the uplink. The main goal of this paper is to provide insights and guidelines for the design of practical WPC systems. To this end, we adopt a recently proposed parametric non-linear RF energy harvesting (EH) model, which has been shown to accurately model the end-to-end non-linearity of practical RF EH circuits. In order to enhance the RF power transfer efficiency, maximum ratio transmission is adopted at the PS to focus the energy signals on the wireless device. Furthermore, at the IRS, maximum ratio combining is used. We analyze the outage probability and the average throughput of information transfer, assuming Nakagami-$m$ fading uplink and downlink channels. Moreover, we study the system performance as a function of the number of PS transmit antennas, the number of IRS receive antennas, the transmit power of the PS, the fading severity, the transmission rate of the wireless device, and the EH time duration. In addition, we obtain a fixed point equation for the optimal transmission rate and the optimal EH time duration that maximize the asymptotic throughput for high PS transmit powers. All analytical results are corroborated by simulations. ",1,0,0,0,0,0 17629,Realizing polarization conversion and unidirectional transmission by using a uniaxial crystal plate," We show that polarization states of electromagnetic waves can be manipulated easily using a single thin uniaxial crystal plate. 
By performing a rotational transformation of the coordinates and controlling the thickness of the plate, we can achieve a complete polarization conversion between TE wave and TM wave in a spectral band. We show that the off-diagonal element of the permittivity is the key for polarization conversion. Our analysis can explain clearly the results found in experiments with metamaterials. Finally, we propose a simple device to realize unidirectional transmission based on polarization conversion and excitation of surface plasmon polaritons. ",0,1,0,0,0,0 17630,When Should You Adjust Standard Errors for Clustering?," In empirical work in economics it is common to report standard errors that account for clustering of units. Typically, the motivation given for the clustering adjustments is that unobserved components in outcomes for units within clusters are correlated. However, because correlation may occur across more than one dimension, this motivation makes it difficult to justify why researchers use clustering in some dimensions, such as geographic, but not others, such as age cohorts or gender. It also makes it difficult to explain why one should not cluster with data from a randomized experiment. In this paper, we argue that clustering is in essence a design problem, either a sampling design or an experimental design issue. It is a sampling design issue if sampling follows a two stage process where in the first stage, a subset of clusters were sampled randomly from a population of clusters, while in the second stage, units were sampled randomly from the sampled clusters. In this case the clustering adjustment is justified by the fact that there are clusters in the population that we do not see in the sample. Clustering is an experimental design issue if the assignment is correlated within the clusters. We take the view that this second perspective best fits the typical setting in economics where clustering adjustments are used. 
This perspective allows us to shed new light on three questions: (i) when should one adjust the standard errors for clustering, (ii) when is the conventional adjustment for clustering appropriate, and (iii) when does the conventional adjustment of the standard errors matter. ",0,0,1,1,0,0 17631,Approximate homomorphisms on lattices," We prove two results concerning an Ulam-type stability problem for homomorphisms between lattices. One of them involves estimates by quite general error functions; the other deals with approximate (join) homomorphisms in terms of certain systems of lattice neighborhoods. As a corollary, we obtain a stability result for approximately monotone functions. ",0,0,1,0,0,0 17632,Learning Latent Events from Network Message Logs: A Decomposition Based Approach," In this communication, we describe a novel technique for event mining using a decomposition based approach that combines non-parametric change-point detection with LDA. We prove theoretical guarantees about sample-complexity and consistency of the approach. In a companion paper, we will perform a thorough evaluation of our approach with detailed experiments. ",0,0,0,1,0,0 17633,Generating retinal flow maps from structural optical coherence tomography with artificial intelligence," Despite significant advances in artificial intelligence (AI) for computer vision, its application in medical imaging has been limited by the burden and limits of expert-generated labels. We used images from optical coherence tomography angiography (OCTA), a relatively new imaging modality that measures perfusion of the retinal vasculature, to train an AI algorithm to generate vasculature maps from standard structural optical coherence tomography (OCT) images of the same retinae, both exceeding the ability and bypassing the need for expert labeling. 
Deep learning was able to infer perfusion of microvasculature from structural OCT images with similar fidelity to OCTA and significantly better than expert clinicians (P < 0.00001). OCTA suffers from the need for specialized hardware, laborious acquisition protocols, and motion artifacts, whereas our model works directly from standard OCT images, which are ubiquitous and quick to obtain, and allows the unlocking of large volumes of previously collected standard OCT data both in existing clinical trials and clinical practice. This finding demonstrates a novel application of AI to medical imaging, whereby subtle regularities between different modalities used to image the same body part are exploited by AI to generate detailed and accurate inferences of tissue function from structural imaging. ",0,0,0,1,0,0 17634,Varieties with Ample Tangent Sheaves," This paper generalises Mori's famous theorem about ""Projective manifolds with ample tangent bundles"" to normal projective varieties in the following way: A normal projective variety over $\mathbb{C}$ with ample tangent sheaf is isomorphic to the complex projective space. ",0,0,1,0,0,0 17635,Water sub-diffusion in membranes for fuel cells," We investigate the dynamics of water confined in soft ionic nano-assemblies, an issue critical for a general understanding of the multi-scale structure-function interplay in advanced materials. We focus in particular on hydrated perfluoro-sulfonic acid compounds employed as electrolytes in fuel cells. These materials form phase-separated morphologies that show outstanding proton-conducting properties, directly related to the state and dynamics of the absorbed water. We have quantified water motion and ion transport by combining Quasi Elastic Neutron Scattering, Pulsed Field Gradient Nuclear Magnetic Resonance, and Molecular Dynamics computer simulation. 
Effective water and ion diffusion coefficients have been determined together with their variation upon hydration at the relevant atomic, nanoscopic and macroscopic scales, providing a complete picture of transport. We demonstrate that confinement at the nanoscale and direct interaction with the charged interfaces produce anomalous sub-diffusion, due to a heterogeneous space-dependent dynamics within the ionic nanochannels. This is irrespective of the details of the chemistry of the hydrophobic confining matrix, confirming the statistical significance of our conclusions. Our findings indicate interesting connections and possibilities of cross-fertilization with other domains, including biophysics. They also establish fruitful correspondences with advanced topics in statistical mechanics, resulting in new possibilities for the analysis of Neutron scattering data. ",0,1,0,0,0,0 17636,Global Strong Solution of a 2D coupled Parabolic-Hyperbolic Magnetohydrodynamic System," The main objective of this paper is to study the global strong solution of the parabolic-hyperbolic incompressible magnetohydrodynamic (MHD) model in two dimensional space. Based on Agmon, Douglis and Nirenberg's estimates for the stationary Stokes equation and Solonnikov's theorem on $L^p$-$L^q$-estimates for the evolution Stokes equation, it is shown that the mixed-type MHD equations admit a global strong solution. ",0,0,1,0,0,0 17637,Subcritical thermal convection of liquid metals in a rapidly rotating sphere," Planetary cores consist of liquid metals (low Prandtl number $Pr$) that convect as the core cools. Here we study nonlinear convection in a rotating (low Ekman number $Ek$) planetary core using a fully 3D direct numerical simulation. Near the critical thermal forcing (Rayleigh number $Ra$), convection onsets as thermal Rossby waves, but as $Ra$ increases, this state is superseded by one dominated by advection. 
At moderate rotation, these states (here called the weak branch and strong branch, respectively) are smoothly connected. As the planetary core rotates faster, the smooth transition is replaced by hysteresis cycles and subcriticality until the weak branch disappears entirely and the strong branch onsets in a turbulent state at $Ek < 10^{-6}$. Here the strong branch persists even as the thermal forcing drops well below the linear onset of convection ($Ra=0.7Ra_{crit}$ in this study). We highlight the importance of the Reynolds stress, which is required for convection to subsist below the linear onset. In addition, the Péclet number is consistently above 10 in the strong branch. We further note the presence of a strong zonal flow that is nonetheless unimportant to the convective state. Our study suggests that, in the asymptotic regime of rapid rotation relevant for planetary interiors, thermal convection of liquid metals in a sphere onsets through a subcritical bifurcation. ",0,1,0,0,0,0 17638,Decision-making processes underlying pedestrian behaviours at signalised crossings: Part 2. Do pedestrians show cultural herding behaviour ?," Followership is generally defined as a strategy that evolved to solve social coordination problems, and particularly those involved in group movement. Followership behaviour is particularly interesting in the context of road-crossing behaviour because it involves other principles such as risk-taking and evaluating the value of social information. This study sought to identify the cognitive mechanisms underlying decision-making by pedestrians who follow another person across the road at the green or at the red light in two different countries (France and Japan). We used agent-based modelling to simulate the road-crossing behaviours of pedestrians. This study showed that modelling is a reliable means to test different hypotheses and find the exact processes underlying decision-making when crossing the road. 
We found that two processes suffice to simulate pedestrian behaviours. Importantly, the study revealed differences between the two nationalities and between sexes in the decision to follow and cross at the green and at the red light. Japanese pedestrians are particularly attentive to the number of already departed pedestrians and the number of waiting pedestrians at the red light, whilst their French counterparts only consider the number of pedestrians that have already stepped off the kerb, thus showing the strong conformism of Japanese people. Finally, the simulations prove similar to observations, not only for the departure latencies but also for the number of crossing pedestrians and the rates of illegal crossings. The conclusion suggests new solutions for pedestrian safety in transportation research. ",0,0,0,0,1,0 17639,Driven by Excess? Climatic Implications of New Global Mapping of Near-Surface Water-Equivalent Hydrogen on Mars," We present improved Mars Odyssey Neutron Spectrometer (MONS) maps of near-surface Water-Equivalent Hydrogen (WEH) on Mars that have intriguing implications for the global distribution of ""excess"" ice, which occurs when the mass fraction of water ice exceeds the threshold amount needed to saturate the pore volume in normal soils. We have refined the crossover technique of Feldman et al. (2011) by using spatial deconvolution and Gaussian weighting to create the first globally self-consistent map of WEH. At low latitudes, our new maps indicate that WEH exceeds 15% in several near-equatorial regions, such as Arabia Terra, which has important implications for the types of hydrated minerals present at low latitudes. 
At high latitudes, we demonstrate that the disparate MONS and Phoenix Robotic Arm (RA) observations of near surface WEH can be reconciled by a three-layer model incorporating dry soil over fully saturated pore ice over pure excess ice: such a three-layer model can also potentially explain the strong anticorrelation of subsurface ice content and ice table depth observed at high latitudes. At moderate latitudes, we show that the distribution of recently formed impact craters is also consistent with our latest MONS results, as both the shallowest ice-exposing crater and deepest non-ice-exposing crater at each impact site are in good agreement with our predictions of near-surface WEH. Overall, we find that our new mapping is consistent with the widespread presence at mid-to-high Martian latitudes of recently deposited shallow excess ice reservoirs that are not yet in equilibrium with the atmosphere. ",0,1,0,0,0,0 17640,On distribution of points with conjugate algebraic integer coordinates close to planar curves," Let $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ be a continuously differentiable function on an interval $J\subset\mathbb{R}$ and let $\boldsymbol{\alpha}=(\alpha_1,\alpha_2)$ be a point with algebraic conjugate integer coordinates of degree $\leq n$ and of height $\leq Q$. Denote by $\tilde{M}^n_\varphi(Q,\gamma, J)$ the set of points $\boldsymbol{\alpha}$ such that $|\varphi(\alpha_1)-\alpha_2|\leq c_1 Q^{-\gamma}$. In this paper we show that for a real $0<\gamma<1$ and any sufficiently large $Q$ there exist positive values $c_2