arXiv:2210.01864
Title: Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints
Abstract: In this work, we focus on improving the accuracy-variance trade-off for state-of-the-art differentially private machine learning (DP ML) methods. First, we design a general framework that uses aggregates of intermediate checkpoints \emph{during training} to increase the accuracy of DP ML techniques. Specifically, we demonstrate that training over aggregates can provide significant gains in prediction accuracy over the existing state-of-the-art on the StackOverflow, CIFAR10, and CIFAR100 datasets. For instance, we improve the state-of-the-art DP StackOverflow accuracies to 22.74\% (+2.06\% relative) for $\epsilon=8.2$, and 23.90\% (+2.09\%) for $\epsilon=18.9$. Furthermore, these gains magnify in settings with periodically varying training data distributions. We also demonstrate that our methods achieve relative improvements of 0.54\% and 62.6\% in terms of utility and variance, respectively, on a proprietary, production-grade pCVR task. Lastly, we initiate an exploration into estimating the uncertainty (variance) that DP noise adds to the predictions of DP ML models. We prove that, under standard assumptions on the loss function, the sample variance of the last few checkpoints provides a good approximation of the variance of the final model of a DP run. Empirically, we show that the last few checkpoints can provide a reasonable lower bound on the variance of a converged DP model. Crucially, all the methods proposed in this paper operate on \emph{a single training run} of the DP ML technique, and thus incur no additional privacy cost.
Introduction \label{intro}
Machine learning models can unintentionally memorize sensitive information about the data they were trained on, which has led to numerous attacks that extract private information about the training data [Ateniese et al., 2015; Fredrikson et al., 2014; Fredrikson et al., 2015; Carlini et al., 2019; Shejwalkar et al., 2021; Carlini et al., 2021; Carlini et al., 2022].
For instance, membership inference attacks [Shokri et al., 2017] can infer whether a target sample was used to train a given ML model, while property inference attacks [Melis et al., 2019; Mahloujifar et al., 2022] can infer certain sensitive properties of the training data. To address such privacy risks, the literature has introduced various approaches to privacy-preserving ML [Nasr et al., 2018; Shejwalkar and Houmansadr, 2021; Tang et al., 2022]. In particular, iterative techniques like differentially private stochastic gradient descent (DP-SGD) [Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016; McMahan et al., 2018a]
and DP Follow The Regularized Leader (DP-FTRL) [Kairouz et al., 2021] have become the state-of-the-art for training DP neural networks.
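To make the iterative mechanism concrete, the following is a minimal sketch of a single DP-SGD update (per-example gradient clipping followed by Gaussian noise). The function and parameter names here (e.g., \texttt{clip\_norm}, \texttt{noise\_multiplier}) are ours for illustration; this is a sketch of the standard algorithm, not the exact training code used in our experiments.
\begin{verbatim}
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm,
                noise_multiplier, lr):
    """One illustrative DP-SGD update.

    per_example_grads: array of shape (batch_size, dim), one
    gradient per example; params: array of shape (dim,).
    """
    batch_size = per_example_grads.shape[0]
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(
        1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum the clipped gradients and add Gaussian noise whose scale
    # is calibrated to the clip norm (sigma = noise_multiplier * clip_norm).
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / batch_size
    return params - lr * noisy_mean
\end{verbatim}
Iterating such noisy updates produces the sequence of intermediate checkpoints that the methods in this paper operate on.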
For establishing benchmarks, prior works in DP ML [Abadi et al., 2016; McMahan et al., 2018a; McMahan et al., 2018b; Andrew et al., 2021; Erlingsson et al., 2019; Wang et al., 2019; Balle et al., 2020; Erlingsson et al., 2020; Papernot et al., 2021; Tramèr and Boneh, 2021; Kairouz et al., 2021; Amid et al., 2022; De et al., 2022; Feldman et al., 2021]
use only the final model output by the DP algorithm. This is also how DP models are deployed in practice [Ramaswamy et al., 2020]. However, the privacy analyses of these techniques allow releasing, and using, all of the intermediate training checkpoints. In this work, we comprehensively study various methods that leverage intermediate checkpoints to 1) improve the utility of DP training, and 2) quantify the uncertainty in DP ML models that is due to the DP noise. \mypar{Accuracy improvement using checkpoints} We propose two classes of aggregation methods, based on aggregating either the \emph{parameters} of checkpoints or their \emph{outputs}. We provide both theoretical and empirical analyses for our aggregation methods. Theoretically, we show that the excess empirical risk of the final checkpoint of DP-SGD is $\log(n)$ times larger than that of a weighted average of the past $k$ checkpoints, where $n$ is the size of the dataset.
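Schematically, and with notation introduced here only for illustration ($\mathcal{L}$: the empirical loss; $\theta_t$: the checkpoint after step $t$; $\theta^\ast$: an empirical risk minimizer; $w_t$: averaging weights; constants and the precise assumptions of our analysis suppressed), this comparison reads
\[
\mathcal{L}(\theta_T) - \mathcal{L}(\theta^\ast) \;=\; O(\log n)\cdot\Big(\mathcal{L}\Big(\textstyle\sum_{t=T-k+1}^{T} w_t\,\theta_t\Big) - \mathcal{L}(\theta^\ast)\Big).
\]
A minimal sketch of the corresponding parameter aggregation follows (uniform weights assumed for simplicity; the output-aggregation variant would instead average the checkpoints' predictions):
\begin{verbatim}
import numpy as np

def tail_average(checkpoints, k, weights=None):
    """Aggregate the parameters of the last k checkpoints.

    checkpoints: list of parameter vectors (np.ndarray), oldest
    first; weights: optional length-k weights, uniform by default.
    """
    tail = checkpoints[-k:]
    if weights is None:
        weights = np.full(len(tail), 1.0 / len(tail))
    return sum(w * theta for w, theta in zip(weights, tail))
\end{verbatim}
Since the aggregate is a post-processing of checkpoints from a single DP run, it incurs no additional privacy cost.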
Empirically, we demonstrate significant top-1 accuracy gains from our aggregations for an image classification task (CIFAR10) and a next-word prediction task (StackOverflow). Specifically, we show that our checkpoint aggregations achieve absolute (relative) prediction accuracy improvements of 3.79\% (7.2\%) at $\epsilon=1$ for CIFAR10 (DP-SGD), and 0.43\% (1.9\%) at $\epsilon=8.2$ for StackOverflow (DP-FTRLM), over the respective SOTA baselines. We also show that our aggregations significantly reduce the variance in the performance of DP models over training. Finally, we show that these benefits further magnify in more practical settings with periodically varying training data distributions. For instance, we note absolute (relative) accuracy gains of 17.4\% (28.6\%) at $\epsilon=8$ for CIFAR10 over the DP-SGD baseline in such a setting. \mypar{Uncertainty quantification using checkpoints} There are various sources of randomness in an ML training pipeline [Abdar et al., 2021], e.g., the choice of initial parameters, dataset, batching, etc. This randomness induces uncertainty in the predictions made using such ML models. In critical domains, e.g., medical diagnosis, self-driving cars, and financial market analysis, failing to capture the uncertainty in these predictions can have undesirable repercussions. DP learning adds an additional source of randomness by injecting noise at every training round. Hence, it is paramount to quantify the reliability of DP models, e.g., by quantifying the uncertainty in their predictions. To this end, we take the first steps in this work towards \emph{quantifying the uncertainty that DP noise adds} to DP ML training. In prior work, Karwa and Vadhan [2018] develop finite-sample confidence intervals, but for the simpler Gaussian mean estimation problem.
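As a concrete illustration of the checkpoint-based uncertainty estimate summarized in the abstract, the sketch below (function and variable names are ours) computes the sample variance of a model's prediction across the last $k$ checkpoints of a single DP run; since only checkpoints of one run are used, no additional privacy budget is consumed.
\begin{verbatim}
import numpy as np

def checkpoint_prediction_variance(checkpoints, predict_fn, x, k):
    """Sample variance of predict_fn(theta, x) over the last k
    checkpoints; an illustrative proxy for DP-noise uncertainty."""
    preds = np.array([predict_fn(theta, x)
                      for theta in checkpoints[-k:]])
    return preds.var(ddof=1)  # unbiased sample variance (needs k >= 2)
\end{verbatim}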
Various methods exist for uncertainty quantification in ML-based systems [Mitchell, 1980; Roy et al., 2018; Begoli et al., 2019; Hubschneider et al., 2019; McDermott and Wikle, 2019; Tagasovska and Lopez-Paz, 2019; Wang et al., 2019; Nair et al., 2018; Ferrando et al., 2022].
<|cite_end|> <|cite_start|> (Reference: Exploring Uncertainty Measures in Deep Networks for Multiple Sclerosis Lesion Detection and Segmentation: Deep learning (DL) networks have recently been shown to outperform other segmentation methods on various public, medical-image challenge datasets [3,11,16], especially for large pathologies. However, in the context of diseases such as Multiple Sclerosis (MS), monitoring all the focal lesions visible on MRI sequences, even very small ones, is essential for disease staging, prognosis, and evaluating treatment efficacy. Moreover, producing deterministic outputs hinders DL adoption into clinical routines. Uncertainty estimates for the predictions would permit subsequent revision by clinicians. We present the first exploration of multiple uncertainty estimates based on Monte Carlo (MC) dropout [4] in the context of deep networks for lesion detection and segmentation in medical images. Specifically, we develop a 3D MS lesion segmentation CNN, augmented to provide four different voxel-based uncertainty measures based on MC dropout. We train the network on a proprietary, large-scale, multi-site, multi-scanner, clinical MS dataset, and compute lesion-wise uncertainties by accumulating evidence from voxel-wise uncertainties within detected lesions. We analyze the performance of voxel-based segmentation and lesion-level detection by choosing operating points based on the uncertainty. Empirical evidence suggests that uncertainty measures consistently allow us to choose superior operating points compared only using the network's sigmoid output as a probability.) <|cite_end|> <|cite_start|> (Reference: Parametric Bootstrap for Differentially Private Confidence Intervals: The goal of this paper is to develop a practical and general-purpose approach to construct confidence intervals for differentially private parametric estimation. We find that the parametric bootstrap is a simple and effective solution. It cleanly reasons about variability of both the data sample and the randomized privacy mechanism and applies "out of the box" to a wide class of private estimation routines. It can also help correct bias caused by clipping data to limit sensitivity. We prove that the parametric bootstrap gives consistent confidence intervals in two broadly relevant settings, including a novel adaptation to linear regression that avoids accessing the covariate data multiple times. We demonstrate its effectiveness for a variety of estimators, and find that it provides confidence intervals with good coverage even at modest sample sizes and performs better than alternative approaches.) <|cite_end|>. However, these methods either use specialized (or simpler) model architectures to facilitate uncertainty quantification, or are not directly applicable for quantifying the uncertainty in DP deep learning due to the DP noise. For instance, the most common approach to uncertainty quantification <|cite_start|> (Reference: Differentially Private Significance Tests for Regression Coefficients: ABSTRACT Many data producers seek to provide users access to confidential data without unduly compromising data subjects’ privacy and confidentiality. One general strategy is to require users to do analyses without seeing the confidential data; for example, analysts only get access to synthetic data or query systems that provide disclosure-protected outputs of statistical models. With synthetic data or redacted outputs, the analyst never really knows how much to trust the resulting findings.
In particular, if the user did the same analysis on the confidential data, would regression coefficients of interest be statistically significant or not? We present algorithms for assessing this question that satisfy differential privacy. We describe conditions under which the algorithms should give accurate answers about statistical significance. We illustrate the properties of the proposed methods using artificial and genuine data. Supplementary materials for this article are available online.) <|cite_end|> <|cite_start|> (Reference: Smooth sensitivity and sampling in private data analysis: We introduce a new, generic framework for private data analysis. The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains. Our framework allows one to release functions f of the data with instance-specific additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also by the database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smooth sensitivity of f on the database x — a measure of variability of f in the neighborhood of the instance x. The new framework greatly expands the applicability of output perturbation, a technique for protecting individuals’ privacy by adding a small amount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-specific noise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely, to apply the framework one must compute or approximate the smooth sensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on many databases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known or when f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians.) <|cite_end|> <|cite_start|> (Reference: Statistically Valid Inferences from Privacy Protected Data: Unprecedented quantities of data that could help social scientists understand and ameliorate the challenges of human society are presently locked away inside companies, governments, and other organizations, in part because of privacy concerns. We address this problem with a general-purpose data access and analysis system with mathematical guarantees of privacy for research subjects, and statistical validity guarantees for researchers seeking social science insights. We build on the standard of “differential privacy,” correct for biases induced by the privacy-preserving procedures, provide a proper accounting of uncertainty, and impose minimal constraints on the choice of statistical methods and quantities estimated. We illustrate by replicating key analyses from two recent published articles and show how we can obtain approximately the same substantive results while simultaneously protecting privacy. Our approach is simple to use and computationally efficient; we also offer open-source software that implements all our methods.) 
<|cite_end|> <|cite_start|> (Reference: Bootstrap Inference and Differential Privacy: Standard Errors for Free ∗: The bootstrap is a common and powerful statistical tool for numerically computing the standard error of estimators, that is, a calculation of the uncertainty of functions computed on sample data so as to make an inference back to the original population from which the sample was drawn. Understanding uncertainty, and inferential questions, in the context of private data is an increasingly important task within the literature of differential privacy [7, 20, 15]. We show how to construct an implementation of the bootstrap within differential privacy. Most importantly, we show that, for a broad class of functions under zero concentrated differential privacy, the bootstrap can be implemented at no cost. That is, for a given choice of privacy parameter and associated expected error of some query, the bootstrap can be implemented for the exact same privacy guarantee, resulting in the same expected error (or sometimes less) in the desired query, but additionally provide the standard error of that query. In section 2 we provide a brief overview of differential privacy. Then to describe these results on bootstrap inference, in section 3 we describe some foundational results on the aggregation of repeated queries under contrasting privacy and composition definitions. This leads to a tangential result in section 4 on a low-noise Gaussian mechanism for pure differential privacy. Next we provide a brief foundation on the bootstrap algorithm in statistics in section 5, before showing our algorithmic construction of the bootstrap using the mechanisms of differential privacy in section 6. In section 7 we describe how to use the differentially private estimate of the standard error in the construction of confidence intervals and hypothesis tests, and then demonstrate this in section 8 with examples using published Census microdata in the style of privacy sensitive data.) <|cite_end|>, which we call the \emph{independent runs} method, requires $k$ independent (bootstrap) runs of the ML algorithm. However, repeating a DP ML algorithm multiple times can incur significant privacy and computation costs. To address the above issue, we propose to use the last $k$ checkpoints of a single run of a DP ML algorithm as a proxy for the final checkpoints of $k$ independent runs. This does not incur any additional privacy cost to the DP ML algorithm. Furthermore, it is readily useful in practice as it does not incur additional training computation and can work with any algorithm that produces intermediate checkpoints. Theoretically, we consider using the sample variance of a statistic $f(\theta)$ computed at the checkpoints $\theta_{t_1}, \ldots, \theta_{t_k}$ as an estimator of the variance of the statistic $f(\theta_{t_k})$, i.e., the statistic at the final checkpoint, and give a bound on the bias of this estimator. As expected, our bound on the bias decreases as both the ``\emph{burn-in}'' time $t_1$ and the time between checkpoints increase. Intuitively, our proof shows that (i) as the burn-in time increases, the marginal distribution of each $\theta_{t_i}$ approaches the distribution of $\theta_{t_k}$, and (ii) as the time between checkpoints increases, any pair $\theta_{t_i}, \theta_{t_j}$ approaches pairwise independence.
Both (i) and (ii) are proven via a mixing-time bound, which shows that, starting from any point distribution $\theta_0$, the Markov chain given by DP-SGD approaches its stationary distribution at a certain rate. On the empirical end, we show that our method consistently provides reasonable lower bounds on the uncertainty quantified using the more accurate (but privacy- and computation-intensive) independent runs method.
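To make the estimator concrete, the following is a minimal illustrative sketch (our own example code, not the authors' released implementation); the statistic \texttt{f} and the list of checkpoints are placeholders to be supplied by the user:

\begin{verbatim}
import numpy as np

def checkpoint_variance(checkpoints, f):
    """Estimate Var[f(theta_final)] from the last k checkpoints of ONE DP run.

    checkpoints: the last k parameter vectors theta_{t_1}, ..., theta_{t_k},
                 taken after a burn-in period and spaced apart in steps.
    f:           any scalar statistic of the parameters, e.g. a prediction.
    Returns the sample variance of f across the checkpoints, which serves as
    a proxy for the variance of f across k independent DP training runs.
    """
    values = np.array([f(theta) for theta in checkpoints])
    return values.var(ddof=1)  # unbiased sample variance
\end{verbatim}

Because all $k$ checkpoints come from a single training run, this estimate is free in terms of both privacy budget and additional training computation.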
\mypar{Related work on checkpoint aggregations} <|cite_start|> (Reference: Checkpoint Ensembles: Ensemble Methods from a Single Training Process: We present the checkpoint ensembles method that can learn ensemble models on a single training process. Although checkpoint ensembles can be applied to any parametric iterative learning technique, here we focus on neural networks. Neural networks' composable and simple neurons make it possible to capture many individual and interaction effects among features. However, small sample sizes and sampling noise may result in patterns in the training data that are not representative of the true relationship between the features and the outcome. As a solution, regularization during training is often used (e.g. dropout). However, regularization is no panacea -- it does not perfectly address overfitting. Even with methods like dropout, two methodologies are commonly used in practice. First is to utilize a validation set independent to the training set as a way to decide when to stop training. Second is to use ensemble methods to further reduce overfitting and take advantage of local optima (i.e. averaging over the predictions of several models). In this paper, we explore checkpoint ensembles -- a simple technique that combines these two ideas in one training process. Checkpoint ensembles improve performance by averaging the predictions from "checkpoints" of the best models within single training process. We use three real-world data sets -- text, image, and electronic health record data -- using three prediction models: a vanilla neural network, a convolutional neural network, and a long short term memory network to show that checkpoint ensembles outperform existing methods: a method that selects a model by minimum validation score, and two methods that average models by weights. Our results also show that checkpoint ensembles capture a portion of the performance gains that traditional ensembles provide.) <|cite_end|> <|cite_start|> (Reference: Averaging Weights Leads to Wider Optima and Better Generalization: Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much flatter solutions than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead.) <|cite_end|> explore checkpoint aggregation methods to improve performance in (non-DP) ML settings, but observe negligible performance gains. To our knowledge, <|cite_start|> (Reference: Unlocking High-Accuracy Differentially Private Image Classification through Scale: Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. Differentially Private Stochastic Gradient Descent (DP-SGD), the most popular DP training method for deep learning, realizes this protection by injecting noise during training. However previous works have found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyper-parameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we obtain a new SOTA without extra data on CIFAR-10 of 81.4% under (8, 10^{-5})-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve a remarkable 83.8% top-1 accuracy on ImageNet under (0.5, 8*10^{-7})-DP. Additionally, we also achieve 86.7% top-1 accuracy under (8, 8 \cdot 10^{-7})-DP, which is just 4.3% below the current non-private SOTA for this task. We believe our results are a significant step towards closing the accuracy gap between private and non-private image classification.) <|cite_end|> is the only work in the DP ML literature that uses intermediate checkpoints after training. They apply an exponential moving average (EMA) over the checkpoints of DP-SGD, and note non-trivial gains in performance. However, we propose various aggregation methods that outperform EMA on standard benchmarks. <|paper_end|>
[ "<|reference_start|> Membership Inference Attacks From First Principles: A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset. These attacks are currently evaluated using average-case \"accuracy\" metrics that fail to characterize whether the attack can confidently identify any members of the training set. We argue that attacks should instead be evaluated by computing their true-positive rate at low (e.g., <0.1%) false-positive rates, and find most prior attacks perform poorly when evaluated in this way. To address this we develop a Likelihood Ratio Attack (LiRA) that carefully combines multiple ideas from the literature. Our attack is 10x more powerful at low false-positive rates, and also strictly dominates prior attacks on existing metrics. <|reference_end|>", "<|reference_start|> Deep Learning with Differential Privacy: Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality. <|reference_end|>", "<|reference_start|> Privacy Amplification via Random Check-Ins: Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data. Two standard approaches, privacy amplification by subsampling, and privacy amplification by shuffling, permit adding lower noise in DP-SGD than via na\\\"{\\i}ve schemes. A key assumption in both these approaches is that the elements in the data set can be uniformly sampled, or be uniformly permuted -- constraints that may become prohibitive when the data is processed in a decentralized or distributed fashion. In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL) wherein the data is distributed among many devices (clients). Our main contribution is the \\emph{random check-in} distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client. It has privacy/accuracy trade-offs similar to privacy amplification by subsampling/shuffling. However, our method does not require server-initiated communication, or even knowledge of the population size. To our knowledge, this is the first privacy amplification tailored for a distributed learning framework, and it may have broader applicability beyond FL. Along the way, we extend privacy amplification by shuffling to incorporate $(\\epsilon,\\delta)$-DP local randomizers, and exponentially improve its guarantees. In practical regimes, this improvement allows for similar privacy and utility using data from an order of magnitude fewer users. 
<|reference_end|>", "<|reference_start|> {Calibrating Uncertainty Models for Steering Angle Estimation: Various approaches to end-to-end vehicle control using deep neural networks have been proposed recently, examining various architectures to predict steering angles based on raw sensor data. However, most of these approaches are only used as black boxes, which work well in most scenarios and drive vehicles in real traffic, but it is unclear when they will fail. In order to use such models in larger architectures used in autonomous vehicles, they need to reason about decisions or at least provide an additional measure of confidence that captures the uncertainty of the model. In this paper, we introduce and motivate different uncertainty models, comparing Monte Carlo dropout with network architectures based on the bootstrap ensembling method and a Gaussian mixture for the task of end-to-end vehicle control. Furthermore, we evaluate the presented uncertainty models regarding their driving performance as well as their model uncertainty calibration. The model calibration can be regarded as a measure of how well an uncertainty estimates fits to the expected performance. Well calibrated uncertainty estimates are crucial when embedding deep learning models into probabilistic models. <|reference_end|>" ]
[ 6, 18, 24, 39 ]
{"<|multi_cite_3_1|>": "arxiv-47133", "<|multi_cite_3_2|>": "ss-701379", "<|multi_cite_3_3|>": "ss-1254557", "<|multi_cite_3_4|>": "arxiv-149341", "<|multi_cite_3_5|>": "ss-791387", "<|multi_cite_3_6|>": "arxiv-310006", "<|multi_cite_3_7|>": "arxiv-385860", "<|cite_4|>": "arxiv-108160", "<|multi_cite_5_1|>": "arxiv-158029", "<|multi_cite_5_2|>": "arxiv-317327", "<|multi_cite_6_1|>": "arxiv-166048", "<|multi_cite_6_2|>": "arxiv-209932", "<|multi_cite_6_3|>": "ss-2115396", "<|multi_cite_7_1|>": "ss-1325508", "<|multi_cite_7_2|>": "ss-772573", "<|multi_cite_7_3|>": "arxiv-101277", "<|multi_cite_7_4|>": "arxiv-137632", "<|cite_8|>": "arxiv-323927", "<|multi_cite_9_1|>": "arxiv-101277", "<|multi_cite_9_2|>": "arxiv-137632", "<|multi_cite_9_3|>": "arxiv-184612", "<|multi_cite_9_4|>": "arxiv-203492", "<|multi_cite_9_5|>": "arxiv-182509", "<|multi_cite_9_6|>": "arxiv-167868", "<|multi_cite_9_8|>": "arxiv-278154", "<|multi_cite_9_9|>": "arxiv-242840", "<|multi_cite_9_10|>": "arxiv-281257", "<|multi_cite_9_11|>": "arxiv-305356", "<|multi_cite_9_12|>": "arxiv-203492", "<|multi_cite_9_13|>": "arxiv-323927", "<|multi_cite_9_14|>": "arxiv-384359", "<|multi_cite_9_15|>": "arxiv-416166", "<|multi_cite_9_16|>": "arxiv-311860", "<|multi_cite_10_1|>": "arxiv-291228", "<|cite_11|>": "ss-1182130", "<|cite_1|>": "arxiv-139751", "<|multi_cite_12_1|>": "ss-1840509", "<|multi_cite_12_2|>": "arxiv-155534", "<|multi_cite_12_3|>": "ss-771735", "<|multi_cite_12_4|>": "ss-1399585", "<|multi_cite_12_5|>": "arxiv-164021", "<|multi_cite_12_6|>": "arxiv-178724", "<|multi_cite_12_7|>": "arxiv-166466", "<|multi_cite_12_8|>": "arxiv-168206", "<|multi_cite_12_9|>": "arxiv-271683", "<|multi_cite_13_1|>": "ss-772609", "<|multi_cite_13_2|>": "ss-1265502", "<|multi_cite_13_3|>": "ss-1287345", "<|multi_cite_13_4|>": "ss-772612", "<|multi_cite_14_1|>": "arxiv-136833", "<|multi_cite_14_2|>": "arxiv-151596", "<|cite_2|>": "arxiv-416166"}
2205.07017-1
<|cite_start|> (Reference: Graph R-CNN for Scene Graph Generation: We propose a novel scene graph generation model called Graph R-CNN, that is both effective and efficient at detecting objects and their relations in images. Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics.) <|cite_end|>, <|cite_start|> (Reference: Scene Graph Generation from Objects, Phrases and Region Captions: Object detection, scene graph generation and region captioning, which are three scene understanding tasks at different semantic levels, are tied together: scene graphs are generated on top of objects detected in an image with their pairwise relationship predicted, while region captioning gives a language description of the objects, their attributes, relations, and other context information. In this work, to leverage the mutual connections across semantic levels, we propose a novel neural network model, termed as Multi-level Scene Description Network (denoted as MSDN), to solve the three vision tasks jointly in an end-to-end manner. Objects, phrases, and caption regions are first aligned with a dynamic graph based on their spatial and semantic connections. Then a feature refining structure is used to pass messages across the three levels of semantic tasks through the graph. We benchmark the learned model on three tasks, and show the joint learning across three tasks with our proposed method can bring mutual improvements over previous models. Particularly, on the scene graph generation task, our proposed method outperforms the state-of-art method with more than 3% margin.) <|cite_end|>, <|cite_start|> (Reference: GPS-Net: Graph Property Sensing Network for Scene Graph Generation: Scene graph generation (SGG) aims to detect objects in an image along with their pairwise relationships. There are three key properties of scene graph that have been underexplored in recent works: namely, the edge direction information, the difference in priority between nodes, and the long-tailed distribution of relationships. Accordingly, in this paper, we propose a Graph Property Sensing Network (GPS-Net) that fully explores these three properties for SGG. First, we propose a novel message passing module that augments the node feature with node-specific contextual information and encodes the edge direction information via a tri-linear model. Second, we introduce a node priority sensitive loss to reflect the difference in priority between nodes during training. This is achieved by designing a mapping function that adjusts the focusing parameter in the focal loss. Third, since the frequency of relationships is affected by the long-tailed distribution problem, we mitigate this issue by first softening the distribution and then enabling it to be adjusted for each subject-object pair according to their visual appearance. Systematic experiments demonstrate the effectiveness of the proposed techniques. Moreover, GPS-Net achieves state-of-the-art performance on three popular databases: VG, OI, and VRD by significant gains under various settings and metrics. 
The code and models are available at \url{https://github.com/taksau/GPS-Net}.) <|cite_end|>, <|cite_start|> (Reference: Bipartite Graph Network with Adaptive Message Passing for Unbiased Scene Graph Generation: Scene graph generation is an important visual understanding task with a broad range of vision applications. Despite recent tremendous progress, it remains challenging due to the intrinsic long-tailed class distribution and large intra-class variation. To address these issues, we introduce a novel confidence-aware bipartite graph neural network with adaptive message propagation mechanism for unbiased scene graph generation. In addition, we propose an efficient bi-level data resampling strategy to alleviate the imbalanced data distribution problem in training our graph network. Our approach achieves superior or competitive performance over previous methods on several challenging datasets, including Visual Genome, Open Images V4/V6, demonstrating its effectiveness and generality.) <|cite_end|> tend to rely on a unified MPNN-based MFVB methodology. Such a formulation generally employs the ELBO as the variational inference objective, and the resulting variational approximation often underestimates the underlying complex posterior <|cite_start|> (Reference: Importance Weighted Autoencoders: The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.) <|cite_end|>. In contrast, we use a tighter importance weighted lower bound as the variational inference objective in the proposed IWSL method, and solve the resulting constrained variational inference task via a generic entropic mirror descent strategy rather than the traditional message passing technique. Specifically, multiple samples drawn from a reparameterizable Gumbel-Softmax sampler <|cite_start|> (Reference: Categorical Reparameterization with Gumbel-Softmax: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution.
We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.) <|cite_end|>, <|cite_start|> (Reference: The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.) <|cite_end|> are used to compute the above importance weighted lower bound. This strictly tighter importance weighted lower bound was first introduced in the importance weighted autoencoder (IWAE) <|cite_start|> (Reference: Importance Weighted Autoencoders: The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.) <|cite_end|>, which is a generative model with the same architecture as the classical variational autoencoder (VAE). In particular, the recognition network in IWAE relies on multiple samples to approximate the posterior, which increases its flexibility to model complex posteriors that do not fit the VAE modeling assumptions.
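For concreteness, the $k$-sample importance weighted bound from IWAE has the standard form (notation ours, with $q_\phi$ the variational posterior and $p_\theta$ the joint model):
\[
\mathcal{L}_k(x) \;=\; \mathbb{E}_{z_1,\ldots,z_k \sim q_\phi(z\mid x)}\!\left[\log \frac{1}{k}\sum_{i=1}^{k}\frac{p_\theta(x, z_i)}{q_\phi(z_i\mid x)}\right] \;\le\; \log p_\theta(x),
\]
which recovers the ELBO at $k=1$ and becomes tighter as $k$ grows. A minimal, framework-agnostic sketch of the Gumbel-Softmax sampler used to draw the samples is given below (an illustration of the cited technique, not this paper's implementation):

\begin{verbatim}
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw a differentiable 'soft' one-hot sample from a categorical.

    logits: unnormalized log-probabilities of shape (..., num_classes).
    tau:    temperature; as tau -> 0 the samples approach one-hot vectors.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))           # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau            # perturb, then scale by temperature
    y = y - y.max(axis=-1, keepdims=True)  # numerically stable softmax
    exp_y = np.exp(y)
    return exp_y / exp_y.sum(axis=-1, keepdims=True)
\end{verbatim}

Inside an automatic-differentiation framework, every operation above is differentiable in \texttt{logits}, which is what makes a reparameterized gradient estimator possible.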
Moreover, because it is impossible to backpropagate through discrete samples, categorical latent variables are rarely employed in stochastic neural networks for SGG tasks. To this end, instead of producing non-differentiable samples from a categorical distribution, this paper uses the Gumbel-Softmax sampler to draw differentiable samples from the Gumbel-Softmax distribution <|cite_start|> (Reference: Categorical Reparameterization with Gumbel-Softmax: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.) <|cite_end|>, <|cite_start|> (Reference: The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.) <|cite_end|>. Thanks to this explicit reparameterization, an efficient gradient estimator is straightforward to construct. <|paper_end|>
[ "<|reference_start|> Graph R-CNN for Scene Graph Generation: We propose a novel scene graph generation model called Graph R-CNN, that is both effective and efficient at detecting objects and their relations in images. Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics. <|reference_end|>", "<|reference_start|> Importance Weighted Autoencoders: The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks. <|reference_end|>", "<|reference_start|> Categorical Reparameterization with Gumbel-Softmax: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification. <|reference_end|>", "<|reference_start|> The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. 
While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks. <|reference_end|>" ]
[ 0, 4, 5, 9 ]
{"<|cite_1|>": "arxiv-93845", "<|cite_2|>": "arxiv-111611", "<|cite_3|>": "arxiv-183483", "<|cite_4|>": "arxiv-106070", "<|cite_5|>": "arxiv-130256", "<|cite_6|>": "arxiv-183329", "<|cite_7|>": "ss-1035235", "<|cite_8|>": "ss-1349763", "<|cite_9|>": "arxiv-114041", "<|cite_10|>": "arxiv-130721", "<|cite_11|>": "arxiv-121370", "<|cite_12|>": "arxiv-180560", "<|cite_13|>": "arxiv-167909", "<|cite_14|>": "ss-822650", "<|cite_15|>": "arxiv-183335", "<|cite_16|>": "arxiv-331521", "<|cite_17|>": "ss-1035235", "<|cite_18|>": "ss-1349763", "<|cite_19|>": "ss-1035235", "<|cite_20|>": "ss-1349763", "<|cite_21|>": "arxiv-183335", "<|cite_22|>": "arxiv-331521", "<|cite_23|>": "arxiv-164231", "<|cite_24|>": "arxiv-194545", "<|cite_25|>": "arxiv-256141", "<|cite_26|>": "arxiv-195162", "<|cite_27|>": "arxiv-183335", "<|cite_28|>": "arxiv-331521", "<|cite_29|>": "arxiv-164231", "<|cite_30|>": "arxiv-194545", "<|cite_31|>": "arxiv-256141", "<|cite_32|>": "arxiv-140175", "<|cite_33|>": "arxiv-140175", "<|cite_34|>": "arxiv-83320", "<|cite_35|>": "arxiv-83320", "<|cite_36|>": "arxiv-140175", "<|cite_37|>": "arxiv-83320", "<|cite_38|>": "arxiv-109304", "<|cite_39|>": "arxiv-109215", "<|cite_40|>": "ss-799563", "<|cite_41|>": "arxiv-183483", "<|cite_42|>": "arxiv-121370", "<|cite_43|>": "arxiv-164231", "<|cite_44|>": "arxiv-181909", "<|cite_45|>": "arxiv-140464", "<|cite_46|>": "arxiv-181909", "<|cite_47|>": "arxiv-167909", "<|cite_48|>": "arxiv-183335", "<|cite_49|>": "arxiv-180560", "<|cite_50|>": "arxiv-256141", "<|cite_51|>": "arxiv-22115", "<|cite_52|>": "arxiv-89252", "<|cite_53|>": "arxiv-157135", "<|cite_54|>": "arxiv-218064", "<|cite_55|>": "arxiv-257036", "<|cite_56|>": "arxiv-331521", "<|cite_57|>": "arxiv-156291", "<|cite_58|>": "arxiv-237885", "<|cite_59|>": "arxiv-363694", "<|cite_60|>": "arxiv-210264", "<|cite_61|>": "arxiv-187800", "<|cite_62|>": "arxiv-250766", "<|cite_63|>": "arxiv-194344", "<|cite_64|>": "arxiv-140464", "<|cite_65|>": "arxiv-167909", "<|cite_66|>": "arxiv-130721", "<|cite_67|>": "arxiv-256141", "<|cite_68|>": "arxiv-331521", "<|cite_69|>": "arxiv-83320", "<|cite_70|>": "arxiv-109304", "<|cite_71|>": "arxiv-109215", "<|cite_72|>": "arxiv-83320", "<|cite_73|>": "arxiv-109304", "<|cite_74|>": "arxiv-109215"}
1901.01091
<|paper_start|> Title: Adaptive Density Estimation for Generative Models Abstract: Unsupervised learning of generative models has seen tremendous progress over recent years, in particular due to generative adversarial networks (GANs), variational autoencoders, and flow-based models. GANs have dramatically improved sample quality, but suffer from two drawbacks: (i) they mode-drop, i.e., do not cover the full support of the train data, and (ii) they do not allow for likelihood evaluations on held-out data. In contrast, likelihood-based training encourages models to cover the full support of the train data, but yields poorer samples. These mutual shortcomings can in principle be addressed by training generative latent variable models in a hybrid adversarial-likelihood manner. However, we show that commonly made parametric assumptions create a conflict between them, making successful hybrid models non-trivial. As a solution, we propose to use deep invertible transformations in the latent variable decoder. This approach allows for likelihood computations in image space, is more efficient than fully invertible models, and can take full advantage of adversarial training. We show that our model significantly improves over existing hybrid models: offering GAN-like samples, IS and FID scores that are competitive with fully adversarial models, and improved likelihood scores. Introduction Successful recent generative models of natural images can be divided into two broad families, which are trained in fundamentally different ways. The first is trained using likelihood-based criteria, which ensure that all training data points are well covered by the model. This category includes variational autoencoders (VAEs) <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|> <|cite_start|> (Reference: Improving variational autoencoders with inverse autoregressive flow: We propose a simple and scalable method for improving the flexibility of variational inference through a transformation with autoregressive neural networks. Autoregressive neural networks, such as RNNs or the PixelCNN, are very powerful models and potentially interesting for use as variational posterior approximation. However, ancestral sampling in such networks is a long sequential operation, and therefore typically very slow on modern parallel hardware, such as GPUs. We show that by inverting autoregressive neural networks we can obtain equally powerful posterior models from which we can sample efficiently on modern hardware.
We show that such data transformations, inverse autoregressive flows (IAF), can be used to transform a simple distribution over the latent variables into a much more flexible distribution, while still allowing us to compute the resulting variables' probability density function. The method is simple to implement, can be made arbitrarily flexible and, in contrast with previous work, is well applicable to models with high-dimensional latent spaces, such as convolutional generative models. The method is applied to a novel deep architecture of variational auto-encoders. In experiments with natural images, we demonstrate that autoregressive flow leads to significant performance gains.) <|cite_end|>, autoregressive models such as PixelCNNs <|cite_start|> (Reference: Pixel Recurrent Neural Networks: Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.) <|cite_end|> <|cite_start|> (Reference: PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications: PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.) <|cite_end|>, and flow-based models such as Real-NVP <|cite_start|> (Reference: Density estimation using Real NVP: Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.) <|cite_end|>. 
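As a brief illustration of how flow-based models obtain exact likelihoods, the following is a minimal sketch of a Real-NVP-style affine coupling layer (our own illustrative code, not the architecture used in this paper); \texttt{s\_fn} and \texttt{t\_fn} stand in for small learned networks:

\begin{verbatim}
import numpy as np

def coupling_forward(x, s_fn, t_fn):
    """One affine coupling layer: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).

    x: array of shape (batch, d) with d even.
    s_fn, t_fn: maps from (batch, d//2) to (batch, d//2), e.g. small MLPs.
    Returns the transformed batch and per-example log|det Jacobian|,
    which is simply the sum of the log-scales s(x1).
    """
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    s, t = s_fn(x1), t_fn(x1)
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=1)
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y, s_fn, t_fn):
    """Exact inverse of coupling_forward, enabling both sampling and
    likelihood evaluation via the change-of-variables formula."""
    d = y.shape[1] // 2
    y1, y2 = y[:, :d], y[:, d:]
    x2 = (y2 - t_fn(y1)) * np.exp(-s_fn(y1))
    return np.concatenate([y1, x2], axis=1)
\end{verbatim}

Stacking such layers, with the roles of the two halves alternating, yields an expressive yet exactly invertible map whose log-likelihood remains tractable.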
The second category is trained based on a signal that measures to what extent (statistics of) samples from the model can be distinguished from (statistics of) the training data, \ie, based on the quality of samples drawn from the model. This is the case for generative adversarial networks (GANs) <|cite_start|> (Reference: Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision. However, this model alone does not produce images...) <|cite_end|>, as well as moment matching methods <|cite_start|> (Reference: Generative Moment Matching Networks: We consider the problem of learning deep generative models from data. We formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database.) <|cite_end|>. \textbf{Motivation.} Despite recent progress, existing methods exhibit a number of drawbacks. Likelihood-based models are trained to put probability mass on all elements of the training set. However, covering all modes of the training distribution forces models to over-generalize and assign probability mass to non-realistic images due to the lack of flexibility, as illustrated in \fig{cdt}. Limiting factors in such models include the use of fully factorized decoders in variational autoencoders, and restriction to the class of fully invertible functions in Real-NVP. Addressing these limitations is key to improving the sample quality. Adversarial training, on the other hand, pushes samples to be indistinguishable from training images, at the expense of covering the full support of the training distribution. This phenomenon, known as ``mode collapse'' <|cite_start|> (Reference: Wasserstein Generative Adversarial Networks: We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.) <|cite_end|>, is illustrated in \fig{qdt}. Moreover, adversarial models have a low-dimensional support, so that held-out data typically has zero probability under the learned model. This, together with the lack of an inference mechanism, prevents the use of likelihood to assess coverage of held-out data, and thus complicates evaluation of GANs.
\begin{figure}[tb] \begin{tabular}{cc} \subfloat{\includegraphics[width=0.45\linewidth]{gaussian_cdt_crop.pdf}\label{fig:cdt}} & \hspace*{-0.4cm} \subfloat{\includegraphics[width=0.50\linewidth]{gaussian_qdt_crop.pdf}\label{fig:qdt}} \\ \end{tabular} \figvspaceOne \caption{Illustration of coverage-driven (\ie, maximum likelihood) and quality-driven (\ie, adversarial) training, in a one-dimensional setting. The former pulls probability mass towards points from regions of high density of the distribution underlying the data, while the latter pushes mass out of low-density regions. } \figvspaceTwo \label{fig:cdt_qdt} \end{figure} \textbf{Contribution.} Prior attempts have been made to leverage the complementarity of quality and coverage driven training using an inference network, for instance the VAE-GAN model <|cite_start|> (Reference: Autoencoding beyond pixels using a learned similarity metric: We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.) <|cite_end|>, and approaches that learn an inference network adversarially <|cite_start|> (Reference: Adversarially Learned Inference: We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.) <|cite_end|> <|cite_start|> (Reference: Adversarial Feature Learning: The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. 
We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.) <|cite_end|> <|cite_start|> (Reference: It Takes (Only) Two: Adversarial Generator-Encoder Networks: We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.) <|cite_end|>. In contrast to these approaches, our model is directly optimized on a valid measure of log-likelihood performance in the RGB space, which we then report on a held-out dataset. As illustrated in \fig{main_schema}, our model uses non-volume-preserving invertible transformations close to the output, optimized to increase the volume of data points. This relaxes naive independence assumptions on pixels given the latent variables, which are typical in VAEs. The invertibility of the feature map is a crucial difference from <|cite_start|> (Reference: Autoencoding beyond pixels using a learned similarity metric: We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.) <|cite_end|>, as it enables likelihood computations and ensures that separate data points cannot collapse in feature space. Experimental results show this extension to be beneficial for both the sample quality and the likelihood of held-out data. An adversarial loss is then used to explicitly optimize the sample quality. We experimentally validate our approach on the CIFAR-10 dataset. Using the same architecture, our proposed model yields substantially improved samples over VAE models, as measured by the IS and FID scores, and improved likelihoods compared to a modified GAN model. Our model significantly improves upon existing hybrid models, producing GAN-like samples, and IS and FID scores that are competitive with fully adversarial models, while offering likelihoods on held-out data comparable to recent likelihood-based methods.
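To spell out the likelihood computation that invertibility enables (a standard change-of-variables identity, written in our own notation rather than the paper's): if $g$ is the invertible transformation mapping decoder features $h$ to images $x = g(h)$, and $p_H$ is the density the model assigns to features, then
\[
\log p_X(x) \;=\; \log p_H\big(g^{-1}(x)\big) + \log\left|\det \frac{\partial g^{-1}(x)}{\partial x}\right| ,
\]
so maximum-likelihood training explicitly rewards transformations that expand volume around the training points, consistent with the description above.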
We further confirm these observations with qualitative and quantitative experimental results on the CelebA dataset, STL-10, ImageNet, and LSUN-Bedrooms. We are the first to report IS and FID scores together with held-out likelihoods on all five of these datasets. We also assess the performance of conditional versions of our models with the data-augmentation-based GAN evaluation procedure proposed in <|cite_start|> (Reference: How good is my GAN?: Generative adversarial networks (GANs) are one of the most popular methods for generating images today. While impressive results have been validated by visual inspection, a number of quantitative criteria have emerged only recently. We argue here that the existing ones are insufficient and need to be in adequation with the task at hand. In this paper we introduce two measures based on image classification---GAN-train and GAN-test, which approximate the recall (diversity) and precision (quality of the image) of GANs respectively. We evaluate a number of recent GAN approaches based on these two measures and demonstrate a clear difference in performance. Furthermore, we observe that the increasing difficulty of the dataset, from CIFAR10 over CIFAR100 to ImageNet, shows an inverse correlation with the quality of the GANs, as clearly evident from our measures.) <|cite_end|>. <|paper_end|>
[ "<|reference_start|> PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications: PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications. <|reference_end|>", "<|reference_start|> Wasserstein Generative Adversarial Networks: We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions. <|reference_end|>", "<|reference_start|> Autoencoding beyond pixels using a learned similarity metric: We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic. <|reference_end|>", "<|reference_start|> How good is my GAN?: Generative adversarial networks (GANs) are one of the most popular methods for generating images today. While impressive results have been validated by visual inspection, a number of quantitative criteria have emerged only recently. We argue here that the existing ones are insufficient and need to be in adequation with the task at hand. In this paper we introduce two measures based on image classification---GAN-train and GAN-test, which approximate the recall (diversity) and precision (quality of the image) of GANs respectively. We evaluate a number of recent GAN approaches based on these two measures and demonstrate a clear difference in performance. Furthermore, we observe that the increasing difficulty of the dataset, from CIFAR10 over CIFAR100 to ImageNet, shows an inverse correlation with the quality of the GANs, as clearly evident from our measures. <|reference_end|>" ]
[ 3, 7, 12, 13 ]
{"<|multi_cite_2_1|>": "arxiv-54350", "<|multi_cite_2_2|>": "ss-2273848", "<|multi_cite_3_1|>": "arxiv-91001", "<|multi_cite_3_2|>": "arxiv-114720", "<|cite_4|>": "arxiv-98839", "<|multi_cite_5_1|>": "ss-805363", "<|cite_6|>": "arxiv-72810", "<|cite_7|>": "ss-1258180", "<|cite_1|>": "arxiv-89799", "<|multi_cite_8_1|>": "arxiv-99181", "<|multi_cite_8_2|>": "arxiv-99044", "<|multi_cite_8_3|>": "arxiv-121143", "<|cite_9|>": "arxiv-89799", "<|cite_10|>": "arxiv-167108"}
2208.01003
<|paper_start|> Title: What Can Be Learnt With Wide Convolutional Neural Networks? Abstract: What Can Be Learnt With Wide Convolutional Neural Networks?: Understanding how convolutional neural networks (CNNs) can efficiently learn high-dimensional functions remains a fundamental challenge. A popular belief is that these models harness the local and hierarchical structure of natural data such as images. Yet, we lack a quantitative understanding of how such structure affects performance, e.g., the rate of decay of the generalisation error with the number of training samples. In this paper, we study infinitely-wide deep CNNs in the kernel regime. First, we show that the spectrum of the corresponding kernel inherits the hierarchical structure of the network, and we characterise its asymptotics. Then, we use this result together with generalisation bounds to prove that deep CNNs adapt to the spatial scale of the target function. In particular, we find that if the target function depends on low-dimensional subsets of adjacent input variables, then the decay of the error is controlled by the effective dimensionality of these subsets. Conversely, if the target function depends on the full set of input variables, then the error decay is controlled by the input dimension. We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN with randomly-initialised parameters. Interestingly, we find that, despite their hierarchical structure, the functions generated by infinitely-wide deep CNNs are too rich to be efficiently learnable in high dimension. Introduction Deep convolutional neural networks (CNNs) are particularly successful in certain tasks such as image classification. Such tasks generally entail the approximation of functions of a large number of variables, for instance, the number of pixels which determine the content of an image. Learning a generic high-dimensional function is plagued by the \emph{curse of dimensionality}: the rate at which the generalisation error $\epsilon$ decays with the number of training samples $n$ vanishes as the dimensionality $d$ of the input space grows, i.e., $\epsilon(n) \sim n^{-\beta}$ with $\beta=O(1/d)$ <|cite_start|> (Reference: High‐dimensional Statistics: A Non‐asymptotic Viewpoint, Martin J.Wainwright, Cambridge University Press, 2019, xvii 552 pages, £57.99, hardback ISBN: 978‐1‐1084‐9802‐9: ) <|cite_end|>. Therefore, the success of CNNs in classifying data whose dimension can be in the hundreds or more <|cite_start|> (Reference: Deep Learning Scaling is Predictable, Empirically: Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. 
Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents---the "steepness" of the learning curve---yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size. These scaling relationships have significant implications on deep learning research, practice, and systems. They can assist model debugging, setting accuracy targets, and decisions about data set growth. They can also guide computing system design and underscore the importance of continued computational scaling.) <|cite_end|> <|cite_start|> (Reference: Asymptotic learning curves of kernel methods: empirical data versus teacher--student paradigm: How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as n^{-β} where n is the number of training examples and β is an exponent that depends on both data and algorithm. In this work we measure β when applying kernel methods to real datasets. For MNIST we find β ≈ 0.4 and for CIFAR10 β ≈ 0.1, for both regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we study the teacher–student framework for kernels. In this scheme, a teacher generates data according to a Gaussian random field, and a student learns them via kernel regression. With a simplifying assumption—namely that the data are sampled from a regular lattice—we derive analytically β for translation invariant kernels, using previous results from the kriging literature. Provided that the student is not too sensitive to high frequencies, β depends only on the smoothness and dimension of the training data. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, the test error is found to be controlled by the magnitude of the projection of the true function on the kernel eigenvectors whose rank is larger than n. Using this idea we predict the exponent β from real data by performing kernel PCA, leading to β ≈ 0.36 for MNIST and β ≈ 0.07 for CIFAR10, in good agreement with observations. We argue that these rather large exponents are possible due to the small effective dimension of the data.) <|cite_end|> points to the existence of some underlying structure in the task that CNNs can leverage. Understanding the structure of learnable tasks is arguably one of the most fundamental problems in deep learning, and also one of central practical importance---as it determines how many examples are required to learn up to a certain error. A popular hypothesis is that learnable tasks are local and hierarchical: features at any scale are made of sub-features of smaller scales. Although many works have investigated this hypothesis <|cite_start|> (Reference: Recognition-by-components: A theory of human image understanding.: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones.
The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).) <|cite_end|> <|cite_start|> (Reference: Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review: ) <|cite_end|> <|cite_start|> (Reference: On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups: Convolutional neural networks have been extremely successful in the image recognition domain because they ensure equivariance to translations. There have been many recent attempts to generalize this framework to other domains, including graphs and data lying on manifolds. In this paper we give a rigorous, theoretical treatment of convolution and equivariance in neural networks with respect to not just translations, but the action of any compact group. Our main result is to prove that (given some natural constraints) convolutional structure is not just a sufficient, but also a necessary condition for equivariance to the action of a compact group. Our exposition makes use of concepts from representation theory and noncommutative harmonic analysis and derives new generalized convolution formulae.) <|cite_end|> <|cite_start|> (Reference: Building Bayesian Neural Networks with Blocks: On Structure, Interpretability and Uncertainty: We provide simple schemes to build Bayesian Neural Networks (BNNs), block by block, inspired by a recent idea of computation skeletons.
We show how by adjusting the types of blocks that are used within the computation skeleton, we can identify interesting relationships with Deep Gaussian Processes (DGPs), deep kernel learning (DKL), random features type approximation and other topics. We give strategies to approximate the posterior via doubly stochastic variational inference for such models which yield uncertainty estimates. We give a detailed theoretical analysis and point out extensions that may be of independent interest. As a special case, we instantiate our procedure to define a Bayesian {\em additive} Neural network -- a promising strategy to identify statistical interactions and has direct benefits for obtaining interpretable models.) <|cite_end|> <|cite_start|> (Reference: Hierarchically Compositional Tasks and Deep Convolutional Networks: The main success stories of deep learning, starting with ImageNet, depend on deep convolutional networks, which on certain tasks perform significantly better than traditional shallow classifiers, such as support vector machines, and also better than deep fully connected networks; but what is so special about deep convolutional networks? Recent results in approximation theory proved an exponential advantage of deep convolutional networks with or without shared weights in approximating functions with hierarchical locality in their compositional structure. More recently, the hierarchical structure was proved to be hard to learn from data, suggesting that it is a powerful prior embedded in the architecture of the network. These mathematical results, however, do not say which real-life tasks correspond to input-output functions with hierarchical locality. To evaluate this, we consider a set of visual tasks where we disrupt the local organization of images via "deterministic scrambling" to later perform a visual task on these images structurally-altered in the same way for training and testing. For object recognition we find, as expected, that scrambling does not affect the performance of shallow or deep fully connected networks contrary to the out-performance of convolutional networks. Not all tasks involving images are however affected. Texture perception and global color estimation are much less sensitive to deterministic scrambling showing that the underlying functions corresponding to these tasks are not hierarchically local; and also counter-intuitively showing that these tasks are better approximated by networks that are not deep (texture) nor convolutional (color). Altogether, these results shed light into the importance of matching a network architecture with its embedded prior of the task to be learned.) <|cite_end|> <|cite_start|> (Reference: On the rate of convergence of image classifiers based on convolutional neural networks: Image classifiers based on convolutional neural networks are defined, and the rate of convergence of the misclassification risk of the estimates towards the optimal misclassification risk is analyzed. Under suitable assumptions on the smoothness and structure of the aposteriori probability a rate of convergence is shown which is independent of the dimension of the image. This proves that in image classification it is possible to circumvent the curse of dimensionality by convolutional neural networks.) <|cite_end|> <|cite_start|> (Reference: Theoretical issues in deep networks: While deep learning is successful in a number of applications, it is not yet well understood theoretically. 
A theoretical characterization of deep learning should answer questions about their approximation power, the dynamics of optimization, and good out-of-sample performance, despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory both shallow and deep networks are known to approximate any continuous functions at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit norm constraint. In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum norm infima of the loss. This result is especially relevant because it has been recently shown that, for overparameterized models, selection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus our results imply that gradient descent in deep networks minimize the expected error.) <|cite_end|> <|cite_start|> (Reference: Nonparametric regression using deep neural networks with ReLU activation function: Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for ReLU activation function the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates.) <|cite_end|> <|cite_start|> (Reference: Posterior contraction for deep Gaussian process priors: We study posterior contraction rates for a class of deep Gaussian process priors applied to the nonparametric regression problem under a general composition assumption on the regression function. It is shown that the contraction rates can achieve the minimax convergence rate (up to $\log n$ factors), while being adaptive to the underlying structure and smoothness of the target function. The proposed framework extends the Bayesian nonparametric theory for Gaussian process priors.) 
<|cite_end|> <|cite_start|> (Reference: On the inability of Gaussian process regression to optimally learn compositional functions: We rigorously prove that deep Gaussian process priors can outperform Gaussian process priors if the target function has a compositional structure. To this end, we study information-theoretic lower bounds for posterior contraction rates for Gaussian process regression in a continuous regression model. We show that if the true function is a generalized additive function, then the posterior based on any mean-zero Gaussian process can only recover the truth at a rate that is strictly slower than the minimax rate by a factor that is polynomially suboptimal in the sample size $n$.) <|cite_end|>, there are no available predictions for the exponent $\beta$ for deep CNNs trained on tasks with a varying degree of locality or a truly hierarchical structure. In this paper, we perform such a computation in the overparameterised regime, where the width of the hidden layer of the neural networks diverges and the network output is rescaled so as to converge to that of a kernel method <|cite_start|> (Reference: Neural Tangent Kernel: Convergence and Generalization in Neural Networks: At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.) <|cite_end|> <|cite_start|> (Reference: Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent: A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. 
Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.) <|cite_end|>. Although the deep networks deployed in real scenarios do not generally operate in such a regime, the connection with the theory of kernel regression provides a recipe for computing the decay of the generalisation error with the number of training examples. Namely, given an infinitely wide neural network, its generalisation abilities depend on the spectrum of the corresponding kernel <|cite_start|> (Reference: Optimal Rates for the Regularized Least-Squares Algorithm: ) <|cite_end|> <|cite_start|> (Reference: Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks: We derive analytical expressions for the generalization performance of kernel regression as a function of the number of training samples using theoretical methods from Gaussian processes and statistical physics. Our expressions apply to wide neural networks due to an equivalence between training them and kernel regression with the Neural Tangent Kernel (NTK). By computing the decomposition of the total generalization error due to different spectral components of the kernel, we identify a new spectral principle: as the size of the training set grows, kernel machines and neural networks fit successively higher spectral modes of the target function. When data are sampled from a uniform distribution on a high-dimensional hypersphere, dot product kernels, including NTK, exhibit learning stages where different frequency modes of the target function are learned. We verify our theory with simulations on synthetic data and MNIST dataset.) <|cite_end|>: the main challenge is then to characterise this spectrum, especially for deep CNNs whose kernels are rather cumbersome and defined recursively <|cite_start|> (Reference: On Exact Computation with an Infinitely Wide Neural Net: How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its width --- namely, number of channels in convolutional layers, and number of nodes in fully-connected internal layers --- is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [Jacot et al., 2018] introduced the Neural Tangent Kernel (NTK) which captures the behavior of fully-connected deep nets in the infinite width limit trained by gradient descent; this object was implicit in some other recent papers. An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width. The current paper gives the first efficient exact algorithm for computing the extension of NTK to convolutional neural nets, which we call Convolutional NTK (CNTK), as well as an efficient GPU implementation of this algorithm.
This results in a significant new benchmark for the performance of a pure kernel-based method on CIFAR-10, being $10\%$ higher than the methods reported in [Novak et al., 2019], and only $6\%$ lower than the performance of the corresponding finite deep net architecture (once batch normalization, etc. are turned off). Theoretically, we also give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK.) <|cite_end|>. This characterisation is the main result of our paper, together with the ensuing study of generalisation in deep CNNs. \subsection{Our contributions} More specifically, this paper studies the generalisation properties of deep CNNs with non-overlapping patches and no pooling (defined in~\cref{sec:setup}, see~\cref{fig:main-msg} for an illustration), trained on a target function $f^*$ by empirical minimisation of the mean squared loss. We consider the infinite-width limit (\cref{sec:kernels}) where the model parameters change infinitesimally over training, thus the trained network coincides with the predictor of kernel regression with the Neural Tangent Kernel (NTK) of the network. Due to the equivalence with kernel methods, generalisation is fully characterised by the spectrum of the integral operator of the kernel: in simple terms, the projections on the eigenfunctions with larger eigenvalues can be learnt (up to a fixed generalisation error) with fewer training points (see, e.g., <|cite_start|> (Reference: Learning theory from first principles Lecture 1: Introduction to supervised learning: • The class will be organized in 9 three-hour sessions, each with a precise topic except the last one dedicated to recent learning theory results. • Validation: one written in-class exam, and (very) simple coding assignments (to illustrate convergence results). • Register online: https://forms.gle/f4nXh5u6VR98dGJAA • Ask questions! (chat or directly) • References: [1, 2, 3, 4, 5]. • Prerequisites: We will prove results in class so a good knowledge of undergraduate mathematics is important, as well as basic notions in probability. Having followed an introductory class on machine learning is beneficial. Good references for introduction to machine learning are [6, 7].) <|cite_end|>). \paragraph{Spectrum of deep hierarchical kernels (\cref{th:eig-scaling}).} Due to the network architecture, the hidden neurons of each layer depend only on a subset of the input variables, known as the receptive field of that neuron (highlighted by coloured boxes in~\cref{fig:main-msg}, left panel). We find that the eigenfunctions of the NTK of a hierarchical CNN of depth $L\,{+}\,1$ can be organised into sectors $l\,{=}\,1,\dots,L$ associated with the hidden layers of the network (\cref{th:eig-scaling}). The eigenfunctions of each sector depend only on the receptive fields of the neurons of the corresponding hidden layer: if we denote with $d_{\text{eff}}(l)$ the size of the receptive fields of neurons in the $l$-th layer, then the eigenfunctions of the $l$-th sector are effectively functions of $d_{\text{eff}}(l)$ variables. We characterise the asymptotic behaviour of the NTK eigenvalues with the degree of the corresponding eigenfunctions (\cref{th:eig-scaling}) and find that it is controlled by $d_{\text{eff}}(l)$. As a consequence, the eigenfunctions with the largest eigenvalues---the easiest to learn---are those which depend on small subsets of the input variables and have low polynomial degree. 
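The sector structure and eigenvalue decay described above can be probed numerically. The following minimal sketch (our own construction for illustration, not code from this work) implements the NNGP recursion of a ReLU CNN with non-overlapping patches, under the simplifying assumption of patch-wise normalised inputs, and estimates the kernel spectrum by eigendecomposing the Gram matrix on random inputs; we use the NNGP rather than the NTK recursion only to keep the sketch short.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def kappa1(t):
    # Normalised degree-1 arc-cosine kernel: the ReLU NNGP map on cosines.
    t = np.clip(t, -1.0, 1.0)
    return (np.sqrt(1.0 - t**2) + (np.pi - np.arccos(t)) * t) / np.pi

def hierarchical_kernel(X, Y, s=2, L=3):
    # NNGP kernel of a CNN with L hidden layers and non-overlapping patches
    # of size s; inputs are normalised patch-wise so only cosines enter.
    n, d = X.shape
    assert d == s ** L
    Xp = X.reshape(n, -1, s)
    Yp = Y.reshape(Y.shape[0], -1, s)
    Xp = Xp / np.linalg.norm(Xp, axis=2, keepdims=True)
    Yp = Yp / np.linalg.norm(Yp, axis=2, keepdims=True)
    T = np.einsum('ips,jps->ijp', Xp, Yp)    # patch-wise cosines
    for _ in range(L):                        # climb the hierarchy
        T = kappa1(T)                         # apply the ReLU nonlinearity
        if T.shape[2] > 1:                    # average sibling meta-patches
            T = T.reshape(T.shape[0], T.shape[1], -1, s).mean(axis=3)
    return T[:, :, 0]

n, s, L = 1000, 2, 3
X = rng.standard_normal((n, s ** L))
K = hierarchical_kernel(X, X, s=s, L=L)
evals = np.sort(np.linalg.eigvalsh(K))[::-1] / n   # empirical spectrum
print(evals[:10])
\end{verbatim}
The empirical decay of these eigenvalues can then be compared with the asymptotics predicted by \cref{th:eig-scaling}.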
This spectral characterisation is our main technical contribution, and all of our conclusions follow from it.\looseness=-1 \paragraph{Adaptivity to the spatial structure of the target (\cref{co:adaptivity}).} We use the above result to prove that deep CNNs can adapt to the spatial scale of the target function (\cref{sec:adaptivity}). More specifically, by using rigorous bounds from the theory of kernel ridge regression <|cite_start|> (Reference: Optimal Rates for the Regularized Least-Squares Algorithm: ) <|cite_end|> (reviewed in the first paragraph of~\cref{sec:adaptivity}), we show that when learning with the kernel of a CNN and optimal regularisation, the decay of the error depends on the effective dimensionality of the target $f^*$---i.e., if $f^*$ only depends on $d_{\text{eff}}$ adjacent coordinates of the $d$-dimensional input, then $\epsilon\sim n^{-\beta}$ with $\beta\geq O(1/d_{\text{eff}})$ (\cref{co:adaptivity}, see~\cref{fig:main-msg} for a pictorial representation). We find a similar picture in ridgeless regression by using non-rigorous results derived with the replica method <|cite_start|> (Reference: Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks: We derive analytical expressions for the generalization performance of kernel regression as a function of the number of training samples using theoretical methods from Gaussian processes and statistical physics. Our expressions apply to wide neural networks due to an equivalence between training them and kernel regression with the Neural Tangent Kernel (NTK). By computing the decomposition of the total generalization error due to different spectral components of the kernel, we identify a new spectral principle: as the size of the training set grows, kernel machines and neural networks fit successively higher spectral modes of the target function. When data are sampled from a uniform distribution on a high-dimensional hypersphere, dot product kernels, including NTK, exhibit learning stages where different frequency modes of the target function are learned. We verify our theory with simulations on synthetic data and MNIST dataset.) <|cite_end|> <|cite_start|> (Reference: Learning curves of generic features maps for realistic datasets with a teacher-student model: Teacher-student models provide a framework in which the typical-case performance of high-dimensional supervised learning can be described in closed form. The assumptions of Gaussian i.i.d. input data underlying the canonical teacher-student model may, however, be perceived as too restrictive to capture the behaviour of realistic data sets. In this paper, we introduce a Gaussian covariate generalisation of the model where the teacher and student can act on different spaces, generated with fixed, but generic feature maps. While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework. Our contribution is then two-fold: First, we prove a rigorous formula for the asymptotic training loss and generalisation error. Second, we present a number of situations where the learning curve of the model captures the one of a realistic data set learned with kernel regression and classification, with out-of-the-box feature maps such as random projections or scattering transforms, or with pre-learned ones - such as the features learned by training multi-layer neural networks. We discuss both the power and the limitations of the framework.)
<|cite_end|> (\cref{sec:examples}). Notice that, if $d_{\text{eff}}\,{\ll}\,d$, the rates achieved with deep CNNs are much closer to the Bayes-optimal rates---realised when the architecture is fine-tuned to the structure of the target---than $\beta=O(1/d)$ obtained with the kernel of a fully-connected network. Moreover, we find that hierarchical functions generated by the output of deep CNNs are too rich to be efficiently learnable in high dimensions (\cref{lemma:curse-hierarchical}). We confirm these results through extensive numerical studies and find them to hold even if the non-overlapping patches assumption is relaxed (\cref{app:extensions}).\looseness=-1 \begin{figure*} \centering \subfigure{\includegraphics[width=0.4\textwidth]{figures/main_msg_tree.pdf}} \hspace{1cm} \subfigure{\includegraphics[width=0.3\textwidth]{figures/main_msg_curves.pdf}} \caption{\textbf{Left:} Computational skeleton of a convolutional neural network of depth $L+1\,{=}\,4$ ($L\,{=}\,3$ hidden layers). The leaves of the graph (squares) correspond to input coordinates, and the root (empty circle) to the output. All other nodes represent (infinitely wide layers of) hidden neurons. We define as `meta-patches' (i.e., patches of patches) the sets of input variables that share a common ancestor node along the tree (such as the squares within each coloured rectangle). Each meta-patch coincides with the receptive field of the neuron represented by this common ancestor node, as indicated below the input coordinates. For each hidden layer $l\,{=}\,1,\dots,L$, there is a family of meta-patches having dimensionality $d_{\text{eff}}(l)$. \textbf{Right:} Sketches of learning curves $\epsilon(n)$ obtained by learning target functions of varying spatial scale with the network on the left. More specifically, the target is a function of a $3$-dimensional patch for the blue curve, a $6$-dimensional patch for the orange curve, and the full input for the green curve. We predict (and confirm empirically) that both the decay of $\epsilon$ with $n$ (full lines) and the rigorous upper bound (dashed lines) are controlled by the effective dimensionality of the target.} \label{fig:main-msg} \end{figure*} \subsection{Related work} \looseness=-1 The benefits of shallow CNNs in the kernel regime have been investigated by <|cite_start|> (Reference: Approximation and Learning with Deep Convolutional Models: a Kernel Perspective: The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this through the lens of kernel methods, by considering simple hierarchical kernels with two or three convolution and pooling layers, inspired by convolutional kernel networks. These achieve good empirical performance on standard vision datasets, while providing a precise description of their functional space that yields new insights on their inductive bias. We show that the RKHS consists of additive models of interaction terms between patches, and that its norm encourages spatial similarities between these terms through pooling layers. We then provide generalization bounds which illustrate how pooling and patches yield improved sample complexity guarantees when the target function presents such regularities.)
<|cite_end|> <|cite_start|> (Reference: Locality defeats the curse of dimensionality in convolutional teacher-student scenarios: Convolutional neural networks perform a local and translationally-invariant treatment of the data: quantifying which of these two aspects is central to their success remains a challenge. We study this problem within a teacher-student framework for kernel regression, using `convolutional' kernels inspired by the neural tangent kernel of simple convolutional architectures of given filter size. Using heuristic methods from physics, we find in the ridgeless case that locality is key in determining the learning curve exponent $\beta$ (that relates the test error $\epsilon_t\sim P^{-\beta}$ to the size of the training set $P$), whereas translational invariance is not. In particular, if the filter size of the teacher $t$ is smaller than that of the student $s$, $\beta$ is a function of $s$ only and does not depend on the input dimension. We confirm our predictions on $\beta$ empirically. We conclude by proving, using a natural universality assumption, that performing kernel regression with a ridge that decreases with the size of the training set leads to similar learning curve exponents to those we obtain in the ridgeless case.) <|cite_end|> <|cite_start|> (Reference: Learning with convolution and pooling operations in kernel methods: Recent empirical work has shown that hierarchical convolutional kernels inspired by convolutional neural networks (CNNs) significantly improve the performance of kernel methods in image classification tasks. A widely accepted explanation for their success is that these architectures encode hypothesis classes that are suitable for natural images. However, understanding the precise interplay between approximation and generalization in convolutional architectures remains a challenge. In this paper, we consider the stylized setting of covariates (image pixels) uniformly distributed on the hypercube, and characterize exactly the RKHS of kernels composed of single layers of convolution, pooling, and downsampling operations. We use this characterization to compute sharp asymptotics of the generalization error for any given function in high-dimension. In particular, we quantify the gain in sample complexity brought by enforcing locality with the convolution operation and approximate translation invariance with average pooling. Notably, these results provide a precise description of how convolution and pooling operations trade off approximation with generalization power in one layer convolutional kernels.) <|cite_end|> <|cite_start|> (Reference: Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm: Although learning in high dimensions is commonly believed to suffer from the curse of dimensionality, modern machine learning methods often exhibit an astonishing power to tackle a wide range of challenging real-world learning problems without using abundant amounts of data. How exactly these methods break this curse remains a fundamental open question in the theory of deep learning. While previous efforts have investigated this question by studying the data (D), model (M), and inference algorithm (I) as independent modules, in this paper, we analyze the triplet (D, M, I) as an integrated system and identify important synergies that help mitigate the curse of dimensionality. 
We first study the basic symmetries associated with various learning algorithms (M, I), focusing on four prototypical architectures in deep learning: fully-connected networks (FCN), locally-connected networks (LCN), and convolutional networks with and without pooling (GAP/VEC). We find that learning is most efficient when these symmetries are compatible with those of the data distribution and that performance significantly deteriorates when any member of the (D, M, I) triplet is inconsistent or suboptimal.) <|cite_end|> <|cite_start|> (Reference: Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks: Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. However, due to the highly complex nature of the problem, progress has been relatively slow. In this note, through the lens of infinite-width networks, a.k.a. neural kernels, we present one such principle resulting from hierarchical localities. It is well-known that the eigenstructure of infinite-width multilayer perceptrons (MLPs) depends solely on the concept frequency, which measures the order of interactions. We show that the topologies from deep convolutional networks (CNNs) restructure the associated eigenspaces into finer subspaces. In addition to frequency, the new structure also depends on the concept space, which measures the spatial distance among nonlinear interaction terms. The resulting fine-grained eigenstructure dramatically improves the network's learnability, empowering them to simultaneously model a much richer class of interactions, including Long-Range-Low-Frequency interactions, Short-Range-High-Frequency interactions, and various interpolations and extrapolations in-between. Additionally, model scaling can improve the resolutions of interpolations and extrapolations and, therefore, the network's learnability. Finally, we prove a sharp characterization of the generalization error for infinite-width CNNs of any depth in the high-dimensional setting. Two corollaries follow: (1) infinite-width deep CNNs can break the curse of dimensionality without losing their expressivity, and (2) scaling improves performance in both the finite and infinite data regimes.) <|cite_end|> <|cite_start|> (Reference: On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels: We study the properties of various over-parametrized convolutional neural architectures through their respective Gaussian process and neural tangent kernels. We prove that, with normalized multi-channel input and ReLU activation, the eigenfunctions of these kernels with the uniform measure are formed by products of spherical harmonics, defined over the channels of the different pixels. We next use hierarchical factorizable kernels to bound their respective eigenvalues. We show that the eigenvalues decay polynomially, quantify the rate of decay, and derive measures that reflect the composition of hierarchical features in these networks. Our results provide concrete quantitative characterization of over-parameterized convolutional network architectures.) <|cite_end|>. <|cite_start|> (Reference: Locality defeats the curse of dimensionality in convolutional teacher-student scenarios: Convolutional neural networks perform a local and translationally-invariant treatment of the data: quantifying which of these two aspects is central to their success remains a challenge. 
We study this problem within a teacher-student framework for kernel regression, using `convolutional' kernels inspired by the neural tangent kernel of simple convolutional architectures of given filter size. Using heuristic methods from physics, we find in the ridgeless case that locality is key in determining the learning curve exponent $\beta$ (that relates the test error $\epsilon_t\sim P^{-\beta}$ to the size of the training set $P$), whereas translational invariance is not. In particular, if the filter size of the teacher $t$ is smaller than that of the student $s$, $\beta$ is a function of $s$ only and does not depend on the input dimension. We confirm our predictions on $\beta$ empirically. We conclude by proving, using a natural universality assumption, that performing kernel regression with a ridge that decreases with the size of the training set leads to similar learning curve exponents to those we obtain in the ridgeless case.) <|cite_end|>, and later <|cite_start|> (Reference: Learning with convolution and pooling operations in kernel methods: Recent empirical work has shown that hierarchical convolutional kernels inspired by convolutional neural networks (CNNs) significantly improve the performance of kernel methods in image classification tasks. A widely accepted explanation for their success is that these architectures encode hypothesis classes that are suitable for natural images. However, understanding the precise interplay between approximation and generalization in convolutional architectures remains a challenge. In this paper, we consider the stylized setting of covariates (image pixels) uniformly distributed on the hypercube, and characterize exactly the RKHS of kernels composed of single layers of convolution, pooling, and downsampling operations. We use this characterization to compute sharp asymptotics of the generalization error for any given function in high-dimension. In particular, we quantify the gain in sample complexity brought by enforcing locality with the convolution operation and approximate translation invariance with average pooling. Notably, these results provide a precise description of how convolution and pooling operations trade off approximation with generalization power in one layer convolutional kernels.) <|cite_end|> <|cite_start|> (Reference: Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm: Although learning in high dimensions is commonly believed to suffer from the curse of dimensionality, modern machine learning methods often exhibit an astonishing power to tackle a wide range of challenging real-world learning problems without using abundant amounts of data. How exactly these methods break this curse remains a fundamental open question in the theory of deep learning. While previous efforts have investigated this question by studying the data (D), model (M), and inference algorithm (I) as independent modules, in this paper, we analyze the triplet (D, M, I) as an integrated system and identify important synergies that help mitigate the curse of dimensionality. We first study the basic symmetries associated with various learning algorithms (M, I), focusing on four prototypical architectures in deep learning: fully-connected networks (FCN), locally-connected networks (LCN), and convolutional networks with and without pooling (GAP/VEC). 
We find that learning is most efficient when these symmetries are compatible with those of the data distribution and that performance significantly deteriorates when any member of the (D, M, I) triplet is inconsistent or suboptimal.) <|cite_end|>, studied generalisation properties of shallow CNNs, finding that they are able to beat the curse of dimensionality on local target functions. However, these architectures can only approximate functions of single input patches or linear combinations thereof. <|cite_start|> (Reference: Approximation and Learning with Deep Convolutional Models: a Kernel Perspective: The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this through the lens of kernel methods, by considering simple hierarchical kernels with two or three convolution and pooling layers, inspired by convolutional kernel networks. These achieve good empirical performance on standard vision datasets, while providing a precise description of their functional space that yields new insights on their inductive bias. We show that the RKHS consists of additive models of interaction terms between patches, and that its norm encourages spatial similarities between these terms through pooling layers. We then provide generalization bounds which illustrate how pooling and patches yield improved sample complexity guarantees when the target function presents such regularities.) <|cite_end|>, in addition, includes generic pooling layers and begins considering the role of depth by studying the approximation properties of kernels which are integer powers of other kernels. We generalise this line of work by studying CNNs of any depth with nonanalytic (ReLU) activations: we find that the depth and nonanalyticity of the resulting kernel are crucial for understanding the inductive bias of deep CNNs. This result should also be contrasted with the spectrum of the kernels of deep fully-connected networks, whose asymptotics do not depend on depth <|cite_start|> (Reference: Deep Equals Shallow for ReLU Networks in Kernel Regimes: Deep networks are often considered to be more expressive than shallow ones in terms of approximation. Indeed, certain functions can be approximated by deep networks provably more efficiently than by shallow ones, however, no tractable algorithms are known for learning such deep models. Separately, a recent line of work has shown that deep networks trained with gradient descent may behave like (tractable) kernel methods in a certain over-parameterized regime, where the kernel is determined by the architecture and initialization, and this paper focuses on approximation for such kernels. We show that for ReLU activations, the kernels derived from deep fully-connected networks have essentially the same approximation properties as their shallow two-layer counterpart, namely the same eigenvalue decay for the corresponding integral operator. This highlights the limitations of the kernel framework for understanding the benefits of such deep architectures. Our main theoretical result relies on characterizing such eigenvalue decays through differentiability properties of the kernel function, which also easily applies to the study of other kernels defined on the sphere.) <|cite_end|>. 
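Such deep convolutional kernels can be instantiated exactly with existing software. The sketch below assumes the open-source neural-tangents library; the widths, depth, and input size are placeholders chosen for illustration. Setting the stride equal to the filter size enforces the non-overlapping patch structure studied here.
\begin{verbatim}
import jax.random as random
from neural_tangents import stax

# Deep CNN: three non-overlapping 2x2 convolutions (stride = filter size,
# 'VALID' padding, no pooling) followed by a linear readout. The channel
# count does not affect the infinite-width kernel.
layers = []
for _ in range(3):
    layers += [stax.Conv(256, (2, 2), strides=(2, 2), padding='VALID'),
               stax.Relu()]
_, _, cnn_kernel_fn = stax.serial(*layers, stax.Flatten(), stax.Dense(1))

# Fully-connected counterpart of the same depth.
_, _, fcn_kernel_fn = stax.serial(
    stax.Flatten(),
    stax.Dense(256), stax.Relu(),
    stax.Dense(256), stax.Relu(),
    stax.Dense(256), stax.Relu(),
    stax.Dense(1))

x = random.normal(random.PRNGKey(0), (128, 8, 8, 1))  # 8x8 inputs
K_cnn = cnn_kernel_fn(x, None, 'ntk')  # infinite-width NTK Gram matrix
K_fcn = fcn_kernel_fn(x, None, 'ntk')
\end{verbatim}
Comparing the eigenvalue decay of the two Gram matrices makes the contrast above tangible: depth reshapes the spectrum of the convolutional kernel, whereas for the fully-connected kernel the asymptotic decay is depth-independent, consistently with the result recalled above.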
Furthermore, we extend the analysis of generalisation to target functions that have a hierarchical structure similar to that of the networks themselves. <|cite_start|> (Reference: On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels: We study the properties of various over-parametrized convolutional neural architectures through their respective Gaussian process and neural tangent kernels. We prove that, with normalized multi-channel input and ReLU activation, the eigenfunctions of these kernels with the uniform measure are formed by products of spherical harmonics, defined over the channels of the different pixels. We next use hierarchical factorizable kernels to bound their respective eigenvalues. We show that the eigenvalues decay polynomially, quantify the rate of decay, and derive measures that reflect the composition of hierarchical features in these networks. Our results provide concrete quantitative characterization of over-parameterized convolutional network architectures.) <|cite_end|> derive bounds on the spectrum of the kernels of deep CNNs. However, they consider only filters of size one in the first layer and do not include a theoretical analysis of generalisation. Instead, we allow filters of general dimension and give tight estimates of the asymptotic behaviour of eigenvalues, which allow us to predict generalisation properties. <|cite_start|> (Reference: Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks: Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. However, due to the highly complex nature of the problem, progress has been relatively slow. In this note, through the lens of infinite-width networks, a.k.a. neural kernels, we present one such principle resulting from hierarchical localities. It is well-known that the eigenstructure of infinite-width multilayer perceptrons (MLPs) depends solely on the concept frequency, which measures the order of interactions. We show that the topologies from deep convolutional networks (CNNs) restructure the associated eigenspaces into finer subspaces. In addition to frequency, the new structure also depends on the concept space, which measures the spatial distance among nonlinear interaction terms. The resulting fine-grained eigenstructure dramatically improves the network's learnability, empowering them to simultaneously model a much richer class of interactions, including Long-Range-Low-Frequency interactions, Short-Range-High-Frequency interactions, and various interpolations and extrapolations in-between. Additionally, model scaling can improve the resolutions of interpolations and extrapolations and, therefore, the network's learnability. Finally, we prove a sharp characterization of the generalization error for infinite-width CNNs of any depth in the high-dimensional setting. Two corollaries follow: (1) infinite-width deep CNNs can break the curse of dimensionality without losing their expressivity, and (2) scaling improves performance in both the finite and infinite data regimes.) <|cite_end|> is the closest to our work, as it also investigates the spectral bias of deep CNNs in the kernel regime. However, it considers a different limit where both the input dimension and the number of training points diverge and does not characterise the asymptotic decay of generalisation error with the number of training samples. 
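In practice, the decay exponent at stake can be estimated directly from finite-sample experiments. The following self-contained sketch is our own illustration: a Laplace kernel stands in for the convolutional kernels analysed here, the target depends on $d_{\text{eff}}$ adjacent coordinates, and $\beta$ is fitted from the log-log learning curve of kernel ridge regression.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, d_eff, lam = 16, 2, 1e-6

def laplace_kernel(X, Y):
    # Isotropic Laplace kernel; any positive-definite kernel can be used.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-np.sqrt(np.maximum(d2, 0.0)))

def target(X):
    # Target depending only on d_eff adjacent input coordinates.
    return np.cos(X[:, :d_eff].sum(axis=1))

X_test = rng.standard_normal((2000, d))
y_test = target(X_test)

ns, errs = [128, 256, 512, 1024, 2048], []
for n in ns:
    X = rng.standard_normal((n, d))
    y = target(X)
    alpha = np.linalg.solve(laplace_kernel(X, X) + lam * np.eye(n), y)
    y_hat = laplace_kernel(X_test, X) @ alpha
    errs.append(np.mean((y_hat - y_test) ** 2))

beta = -np.polyfit(np.log(ns), np.log(errs), 1)[0]
print(f"estimated learning-curve exponent beta ~ {beta:.2f}")
\end{verbatim}
Replacing the Laplace kernel with a hierarchical convolutional kernel and varying $d_{\text{eff}}$ would produce learning curves of the kind sketched in \cref{fig:main-msg}.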
<|cite_start|> (Reference: How isotropic kernels perform on simple invariants: We investigate how the training curve of isotropic kernel methods depends on the symmetry of the task to be learned, in several settings. (i) We consider a regression task, where the target function is a Gaussian random field that depends only on $d_\parallel$ variables, fewer than the input dimension $d$. We compute the expected test error $\epsilon$ that follows $\epsilon\sim p^{-\beta}$ where $p$ is the size of the training set. We find that $\beta\sim 1/d$ independently of $d_\parallel$, supporting previous findings that the presence of invariants does not resolve the curse of dimensionality for kernel regression. (ii) Next we consider support-vector binary classification and introduce the stripe model where the data label depends on a single coordinate $y(\underline{x}) = y(x_1)$, corresponding to parallel decision boundaries separating labels of different signs, and consider that there is no margin at these interfaces. We argue and confirm numerically that for large bandwidth, $\beta = \frac{d-1+\xi}{3d-3+\xi}$, where $\xi\in (0,2)$ is the exponent characterizing the singularity of the kernel at the origin. This estimation improves classical bounds obtainable from Rademacher complexity. In this setting there is no curse of dimensionality since $\beta\rightarrow 1 / 3$ as $d\rightarrow\infty$. (iii) We confirm these findings for the spherical model for which $y(\underline{x}) = y(|\underline{x}|)$. (iv) In the stripe model, we show that if the data are compressed along their invariants by some factor $\lambda$ (an operation believed to take place in deep networks), the test error is reduced by a factor $\lambda^{-\frac{2(d-1)}{3d-3+\xi}}$.) <|cite_end|> <|cite_start|> (Reference: Computational Separation Between Convolutional and Fully-Connected Networks: Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks. Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent, but at the same time is hard to learn using a polynomial-size fully-connected network.) <|cite_end|> <|cite_start|> (Reference: The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks: It is currently known how to characterize functions that neural networks can learn with SGD for two extremal parameterizations: neural networks in the linear regime, and neural networks with no structural constraints. However, for the main parametrization of interest (non-linear but regular networks) no tight characterization has yet been achieved, despite significant developments. We take a step in this direction by considering depth-2 neural networks trained by SGD in the mean-field regime. We consider functions on binary inputs that depend on a latent low-dimensional subspace (i.e., small number of coordinates). This regime is of interest since it is poorly understood how neural networks routinely tackle high-dimensional datasets and adapt to latent low-dimensional structure without suffering from the curse of dimensionality. 
Accordingly, we study SGD-learnability with $O(d)$ sample complexity in a large ambient dimension $d$. Our main results characterize a hierarchical property, the "merged-staircase property", that is both necessary and nearly sufficient for learning in this setting. We further show that non-linear training is necessary: for this class of functions, linear methods on any feature map (e.g., the NTK) are not capable of learning efficiently. The key tools are a new "dimension-free" dynamics approximation result that applies to functions defined on a latent space of low-dimension, a proof of global convergence based on polynomial identity testing, and an improvement of lower bounds against linear methods for non-almost orthogonal functions.) <|cite_end|> use sparse target functions which depend only on a few of the input variables to prove sample complexity separation results between networks operating in the kernel regime and in the feature regime---where the change in parameters during training can be arbitrarily large. In this respect, our work shows that when the few relevant input variables are adjacent, i.e., the target function is spatially localised, deep CNNs achieve near-optimal performance even in the kernel regime. <|paper_end|>
[ "<|reference_start|> High‐dimensional Statistics: A Non‐asymptotic Viewpoint, Martin J.Wainwright, Cambridge University Press, 2019, xvii 552 pages, £57.99, hardback ISBN: 978‐1‐1084‐9802‐9: <|reference_end|>", "<|reference_start|> Learning with convolution and pooling operations in kernel methods: Recent empirical work has shown that hierarchical convolutional kernels inspired by convolutional neural networks (CNNs) significantly improve the performance of kernel methods in image classification tasks. A widely accepted explanation for their success is that these architectures encode hypothesis classes that are suitable for natural images. However, understanding the precise interplay between approximation and generalization in convolutional architectures remains a challenge. In this paper, we consider the stylized setting of covariates (image pixels) uniformly distributed on the hypercube, and characterize exactly the RKHS of kernels composed of single layers of convolution, pooling, and downsampling operations. We use this characterization to compute sharp asymptotics of the generalization error for any given function in high-dimension. In particular, we quantify the gain in sample complexity brought by enforcing locality with the convolution operation and approximate translation invariance with average pooling. Notably, these results provide a precise description of how convolution and pooling operations trade off approximation with generalization power in one layer convolutional kernels. <|reference_end|>", "<|reference_start|> On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels: We study the properties of various over-parametrized convolutional neural architectures through their respective Gaussian process and neural tangent kernels. We prove that, with normalized multi-channel input and ReLU activation, the eigenfunctions of these kernels with the uniform measure are formed by products of spherical harmonics, defined over the channels of the different pixels. We next use hierarchical factorizable kernels to bound their respective eigenvalues. We show that the eigenvalues decay polynomially, quantify the rate of decay, and derive measures that reflect the composition of hierarchical features in these networks. Our results provide concrete quantitative characterization of over-parameterized convolutional network architectures. <|reference_end|>", "<|reference_start|> The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks: It is currently known how to characterize functions that neural networks can learn with SGD for two extremal parameterizations: neural networks in the linear regime, and neural networks with no structural constraints. However, for the main parametrization of interest (non-linear but regular networks) no tight characterization has yet been achieved, despite significant developments. We take a step in this direction by considering depth-2 neural networks trained by SGD in the mean-field regime. We consider functions on binary inputs that depend on a latent low-dimensional subspace (i.e., small number of coordinates). This regime is of interest since it is poorly understood how neural networks routinely tackle high-dimensional datasets and adapt to latent low-dimensional structure without suffering from the curse of dimensionality. Accordingly, we study SGD-learnability with $O(d)$ sample complexity in a large ambient dimension $d$. 
Our main results characterize a hierarchical property, the \"merged-staircase property\", that is both necessary and nearly sufficient for learning in this setting. We further show that non-linear training is necessary: for this class of functions, linear methods on any feature map (e.g., the NTK) are not capable of learning efficiently. The key tools are a new \"dimension-free\" dynamics approximation result that applies to functions defined on a latent space of low-dimension, a proof of global convergence based on polynomial identity testing, and an improvement of lower bounds against linear methods for non-almost orthogonal functions. <|reference_end|>" ]
[ 0, 29, 33, 37 ]
{"<|cite_1|>": "ss-1253574", "<|multi_cite_2_1|>": "arxiv-141957", "<|multi_cite_2_2|>": "ss-1671293", "<|multi_cite_3_1|>": "ss-679284", "<|multi_cite_3_2|>": "ss-846118", "<|multi_cite_3_3|>": "arxiv-147911", "<|multi_cite_3_4|>": "arxiv-161901", "<|multi_cite_3_5|>": "arxiv-274290", "<|multi_cite_3_6|>": "arxiv-251765", "<|multi_cite_3_7|>": "ss-1539969", "<|multi_cite_3_8|>": "arxiv-132549", "<|multi_cite_3_9|>": "ss-1554671", "<|multi_cite_3_10|>": "arxiv-419831", "<|multi_cite_4_1|>": "arxiv-163159", "<|multi_cite_4_2|>": "arxiv-191959", "<|multi_cite_5_1|>": "ss-1525210", "<|multi_cite_5_2|>": "arxiv-246825", "<|cite_6|>": "arxiv-201682", "<|cite_11|>": "ss-2291073", "<|cite_7|>": "ss-1525210", "<|multi_cite_8_1|>": "arxiv-246825", "<|multi_cite_8_2|>": "arxiv-321541", "<|multi_cite_12_1|>": "arxiv-322392", "<|multi_cite_12_2|>": "arxiv-348761", "<|multi_cite_12_3|>": "arxiv-381299", "<|multi_cite_12_4|>": "arxiv-432845", "<|multi_cite_12_5|>": "arxiv-386666", "<|multi_cite_12_6|>": "arxiv-406331", "<|cite_13|>": "arxiv-348761", "<|multi_cite_9_1|>": "arxiv-381299", "<|multi_cite_9_2|>": "arxiv-432845", "<|cite_14|>": "arxiv-322392", "<|cite_10|>": "arxiv-292859", "<|cite_15|>": "arxiv-406331", "<|cite_16|>": "arxiv-386666", "<|multi_cite_17_1|>": "arxiv-272537", "<|multi_cite_17_2|>": "arxiv-293628", "<|multi_cite_17_3|>": "arxiv-399838"}
2407.20695
<|paper_start|> Title: Time Series Anomaly Detection with CNN for Environmental Sensors in Healthcare-IoT Abstract: Time Series Anomaly Detection with CNN for Environmental Sensors in Healthcare-IoT: This research develops a new method to detect anomalies in time series data using Convolutional Neural Networks (CNNs) in healthcare-IoT. The proposed method creates a Distributed Denial of Service (DDoS) attack using an IoT network simulator, Cooja, which emulates environmental sensors such as temperature and humidity. CNNs detect anomalies in time series data, resulting in a 92\% accuracy in identifying possible attacks. Introduction The Internet of Things (IoT) is a network of physical devices that communicate with each other through sensors, software, and connectivity <|cite_start|> (Reference: Machine Learning for Healthcare-IoT Security: A Review and Risk Mitigation: The Healthcare Internet-of-Things (H-IoT), commonly known as Digital Healthcare, is a data-driven infrastructure that highly relies on smart sensing devices (i.e., blood pressure monitors, temperature sensors, etc.) for faster response time, treatments, and diagnosis. However, with the evolving cyber threat landscape, IoT devices have become more vulnerable to the broader risk surface (e.g., risks associated with generative AI, 5G-IoT, etc.), which, if exploited, may lead to data breaches, unauthorized access, and lack of command and control and potential harm. This paper reviews the fundamentals of healthcare IoT, its privacy, and data security challenges associated with machine learning and H-IoT devices. The paper further emphasizes the importance of monitoring healthcare IoT layers such as perception, network, cloud, and application. Detecting and responding to anomalies involves various cyber-attacks and protocols such as Wi-Fi 6, Narrowband Internet of Things (NB-IoT), Bluetooth, ZigBee, LoRa, and 5G New Radio (5G NR). A robust authentication mechanism based on machine learning and deep learning techniques is required to protect and mitigate H-IoT devices from increasing cybersecurity vulnerabilities. Hence, in this review paper, security and privacy challenges and risk mitigation strategies for building resilience in H-IoT are explored and reported.) <|cite_end|>, <|cite_start|> (Reference: Industrial IoT, cyber threats, and standards landscape: Evaluation and roadmap: Industrial IoT (IIoT) is a novel concept of a fully connected, transparent, automated, and intelligent factory setup improving manufacturing processes and efficiency. To achieve this, existing hierarchical models must transition to a fully connected vertical model. Since IIoT is a novel approach, the environment is susceptible to cyber threat vectors, standardization, and interoperability issues, bridging the gaps at the IT/OT ICS (industrial control systems) level. IIoT M2M communication relies on new communication models (5G, TSN ethernet, self-driving networks, etc.) and technologies which require challenging approaches to achieve the desired levels of data security. Currently there are no methods to assess the vulnerabilities/risk impact which may be exploited by malicious actors through system gaps left due to improper implementation of security standards. The authors are currently working on an Industry 4.0 cybersecurity project and the insights provided in this paper are derived from the project. 
This research enables an understanding of converged/hybrid cybersecurity standards, reviews the best practices, and provides a roadmap for identifying, aligning, mapping, converging, and implementing the right cybersecurity standards and strategies for securing M2M communications in the IIoT.) <|cite_end|>, <|cite_start|> (Reference: IoT--Assets Taxonomy, Threats Assessment and Potential Solutions: Internet of Things (IoT) is a system of interconnected devices and networks that provides autonomous functioning capability. Increasing expansion rate of IoT system in diverse set of domains has resulted in inclination of associated risks as well. This research paper presents classification of assets which are part of IoT system, identify platform-independent threats, and consolidate potential solutions to secure the IoT system from threats. Various researchers have conducted studies which are focused on specific industry/domain. Therefore, there is an inevitable necessity to present generalized assessment of IoT system and associated threats, irrespective of the industry or domain. In addition, this research presents prioritization, in terms of criticality, of IoT assets and threats that will enable stakeholders to identify the items that are crucial and items that can be ignored considering the low priority. This would enable researchers and stakeholders of IoT system (such as policy makers, end-users, manufacturers, and security experts etc. to have a deeper understanding of the implications and have an insight on solutions/recommendations in a broad-spectrum. Survey findings showed that DDoS (distributed denial of service), privacy attack, and information modification are the top three threats in terms of criticality ranking. A generalized (industry-independent) set of recommendations have been consolidated based on literature analysis and survey findings in three categories, which are related to policies, organizational measures, and technical measures.) <|cite_end|>. Because Wireless Sensor Networks (WSNs) are used to monitor the environment in health facilities, precise and reliable data are essential. Aligning an organization's cybersecurity measures with this need is crucial for protecting patient privacy and safety and for ensuring the continuous, efficient provision of high-quality care <|cite_start|> (Reference: Cyber-Resilience, Principles, and Practices: ) <|cite_end|>. The massive influx of data makes it challenging to discern whether a reading reflects a malfunctioning sensor, an environmental change, or an abrupt temperature fluctuation. For example, patients' rooms with high humidity can create a breeding ground for bacteria, posing a heightened risk of infection, especially in patients with weakened immune systems <|cite_start|> (Reference: Effect of Heating, Ventilation, and Air Conditioning (HVAC) system on indoor air quality in a medical facility: Indoor Air Quality (IAQ) refers to the stationary air within an inhabited or occupied structure. Previously, there were fewer studies on indoor air quality in medical facilities in Malaysia especially in Terengganu. Most indoor air quality issues are caused by insufficient Heating, Ventilation, and Air Conditioning (HVAC) systems, which regulate three parameters. The purpose of this study is to assess the indoor air quality of a medical facility and determine if it complies with the Industry Code of Practice 2010 (ICOP 2010) and ASHRAE 170-2017. In this investigation, a total of 3 locations namely Administration Office, Surgical Outpatient Department (SOPD) waiting area and Ophthalmology Consultation Room in Hospital Sultanah Nur Zahirah (HSNZ) were evaluated. Walkthrough inspections were done at the locations before data collection to determine the IAQ. Two IAQ meters, notably VelociCalc and Testo, were used to collect data to assess the temperature, relative humidity, and air flow of the selected locations. Samples were taken every 2 hours for each location from 8 a.m. to 5 p.m. The data then were analysed. All three locations' temperatures were lower than ICOP 2010's acceptable limit (23-26°C), but still within ASHRAE 170-207's 21-24°C range, except for the SOPD waiting room. All three locations met ICOP 2010 and ASHRAE 170-207 relative humidity standards. Meanwhile, only the SOPD waiting room had an appropriate air flow of 0.16-0.17m/s per ICOP 2010. The study also revealed that there was a correlation between the number of occupancies and the performance of HVAC system with the indoor air quality level.) <|cite_end|>. As another example, fluctuations in temperature within an operating room can be detrimental to the patient's well-being <|cite_start|> (Reference: Subjective assessment of indoor air quality and thermal environment in patient rooms: A survey study of Polish hospitals: ) <|cite_end|>. Thus, ensuring the integrity of transmitted data is essential in the healthcare-IoT (H-IoT) environment. Any disruption can adversely affect patient care, including data delayed or prematurely accelerated by cyber threats <|cite_start|> (Reference: Machine Learning for Healthcare-IoT Security: A Review and Risk Mitigation: The Healthcare Internet-of-Things (H-IoT), commonly known as Digital Healthcare, is a data-driven infrastructure that highly relies on smart sensing devices (i.e., blood pressure monitors, temperature sensors, etc.) for faster response time, treatments, and diagnosis. However, with the evolving cyber threat landscape, IoT devices have become more vulnerable to the broader risk surface (e.g., risks associated with generative AI, 5G-IoT, etc.), which, if exploited, may lead to data breaches, unauthorized access, and lack of command and control and potential harm. This paper reviews the fundamentals of healthcare IoT, its privacy, and data security challenges associated with machine learning and H-IoT devices. The paper further emphasizes the importance of monitoring healthcare IoT layers such as perception, network, cloud, and application. Detecting and responding to anomalies involves various cyber-attacks and protocols such as Wi-Fi 6, Narrowband Internet of Things (NB-IoT), Bluetooth, ZigBee, LoRa, and 5G New Radio (5G NR). A robust authentication mechanism based on machine learning and deep learning techniques is required to protect and mitigate H-IoT devices from increasing cybersecurity vulnerabilities. Hence, in this review paper, security and privacy challenges and risk mitigation strategies for building resilience in H-IoT are explored and reported.) <|cite_end|>; detecting such anomalies early reduces disruptions that could adversely affect clinical results. This research investigates the accuracy of convolutional neural network (CNN) models in detecting anomalies in time series data and their adaptability to real-world constraints within healthcare-IoT, as simulated within the Cooja environment.
CNNs are widely used across applications, including the analysis of time series data, where they can achieve results competitive with or better than traditional time series analysis models such as the Autoregressive Integrated Moving Average (ARIMA) <|cite_start|> (Reference: Robust multi-step wind speed forecasting based on a graph-based data reconstruction deep learning method: ) <|cite_end|>. The primary goal is to detect anomalous readings from environmental sensors within the hospital's IoT ecosystem. Moreover, our dataset improves abnormal-reading detection, reduces risk, enhances efficiency, and elevates healthcare quality in H-IoT environments. <|paper_end|>
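As a concrete, hypothetical illustration of the window-based CNN classifier described in this record, the following Keras sketch builds a small 1D convolutional model over fixed-length windows of temperature and humidity readings; the window length, layer sizes, and placeholder data are assumptions introduced for illustration, not the paper's exact configuration.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 32  # assumed length of each sensor-reading window

def build_model(n_channels=2):
    # 1D convolutions slide over the time axis of each window of sensor
    # readings; the sigmoid head scores a window as normal (0) or anomalous (1).
    return keras.Sequential([
        keras.Input(shape=(WINDOW, n_channels)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1, activation="sigmoid"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: (num_windows, WINDOW, 2) sliding windows of temperature/humidity readings;
# y: (num_windows,) 0/1 labels, e.g., windows overlapping a simulated DDoS attack.
x = np.random.rand(256, WINDOW, 2).astype("float32")  # placeholder data
y = np.random.randint(0, 2, size=(256,))
model.fit(x, y, epochs=3, batch_size=32, verbose=0)

Sliding a fixed-length window over the sensor stream turns anomaly detection into binary classification, which is what makes a CNN directly comparable to forecasting-style baselines such as ARIMA.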
[ "<|reference_start|> Machine Learning for Healthcare-IoT Security: A Review and Risk Mitigation: The Healthcare Internet-of-Things (H-IoT), commonly known as Digital Healthcare, is a data-driven infrastructure that highly relies on smart sensing devices (i.e., blood pressure monitors, temperature sensors, etc.) for faster response time, treatments, and diagnosis. However, with the evolving cyber threat landscape, IoT devices have become more vulnerable to the broader risk surface (e.g., risks associated with generative AI, 5G-IoT, etc.), which, if exploited, may lead to data breaches, unauthorized access, and lack of command and control and potential harm. This paper reviews the fundamentals of healthcare IoT, its privacy, and data security challenges associated with machine learning and H-IoT devices. The paper further emphasizes the importance of monitoring healthcare IoT layers such as perception, network, cloud, and application. Detecting and responding to anomalies involves various cyber-attacks and protocols such as Wi-Fi 6, Narrowband Internet of Things (NB-IoT), Bluetooth, ZigBee, LoRa, and 5G New Radio (5G NR). A robust authentication mechanism based on machine learning and deep learning techniques is required to protect and mitigate H-IoT devices from increasing cybersecurity vulnerabilities. Hence, in this review paper, security and privacy challenges and risk mitigation strategies for building resilience in H-IoT are explored and reported. <|reference_end|>", "<|reference_start|> IoT--Assets Taxonomy, Threats Assessment and Potential Solutions: Internet of Things (IoT) is a system of interconnected devices and networks that provides autonomous functioning capability. Increasing expansion rate of IoT system in diverse set of domains has resulted in inclination of associated risks as well. This research paper presents classification of assets which are part of IoT system, identify platform-independent threats, and consolidate potential solutions to secure the IoT system from threats. Various researchers have conducted studies which are focused on specific industry/domain. Therefore, there is an inevitable necessity to present generalized assessment of IoT system and associated threats, irrespective of the industry or domain. In addition, this research presents prioritization, in terms of criticality, of IoT assets and threats that will enable stakeholders to identify the items that are crucial and items that can be ignored considering the low priority. This would enable researchers and stakeholders of IoT system (such as policy makers, end-users, manufacturers, and security experts etc. to have a deeper understanding of the implications and have an insight on solutions/recommendations in a broad-spectrum. Survey findings showed that DDoS (distributed denial of service), privacy attack, and information modification are the top three threats in terms of criticality ranking. A generalized (industry-independent) set of recommendations have been consolidated based on literature analysis and survey findings in three categories, which are related to policies, organizational measures, and technical measures. <|reference_end|>", "<|reference_start|> Cyber-Resilience, Principles, and Practices: <|reference_end|>", "<|reference_start|> Effect of Heating, Ventilation, and Air Conditioning (HVAC) system on indoor air quality in a medical facility: Indoor Air Quality (IAQ) refers to the stationary air within an inhabited or occupied structure. 
Previously, there were fewer studies on indoor air quality in medical facilities in Malaysia especially in Terengganu. Most indoor air quality issues are caused by insufficient Heating, Ventilation, and Air Conditioning (HVAC) systems, which regulate three parameters. The purpose of this study is to assess the indoor air quality of a medical facility and determine if it complies with the Industry Code of Practice 2010 (ICOP 2010) and ASHRAE 170-2017. In this investigation, a total of 3 locations namely Administration Office, Surgical Outpatient Department (SOPD) waiting area and Ophthalmology Consultation Room in Hospital Sultanah Nur Zahirah (HSNZ) were evaluated. Walkthrough inspections were done at the locations before data collection to determine the IAQ. Two IAQ meters, notably VelociCalc and Testo, were used to collect data to assess the temperature, relative humidity, and air flow of the selected locations. Samples were taken every 2 hours for each location from 8 a.m. to 5 p.m. The data then were analysed. All three locations' temperatures were lower than ICOP 2010's acceptable limit (23-26°C), but still within ASHRAE 170-207's 21-24°C range, except for the SOPD waiting room. All three locations met ICOP 2010 and ASHRAE 170-207 relative humidity standards. Meanwhile, only the SOPD waiting room had an appropriate air flow of 0.16-0.17m/s per ICOP 2010. The study also revealed that there was a correlation between the number of occupancies and the performance of HVAC system with the indoor air quality level. <|reference_end|>" ]
[ 0, 2, 3, 4 ]
{"<|cite_1|>": "arxiv-576314", "<|cite_2|>": "ss-2080104", "<|cite_3|>": "ss-2080105", "<|cite_4|>": "ss-2080106", "<|cite_5|>": "ss-2080107", "<|cite_6|>": "ss-2080108", "<|cite_7|>": "arxiv-576314", "<|cite_8|>": "ss-2080109"}
1812.05581
<|paper_start|> Title: Benchmark Dataset for Automatic Damaged Building Detection from Post-Hurricane Remotely Sensed Imagery Abstract: Benchmark Dataset for Automatic Damaged Building Detection from Post-Hurricane Remotely Sensed Imagery: Rapid damage assessment is of crucial importance to emergency responders during hurricane events; however, the evaluation process is often slow, labor-intensive, costly, and error-prone. New advances in computer vision and remote sensing open possibilities to observe the Earth at a different scale. However, substantial pre-processing work is still required in order to apply state-of-the-art methodology for emergency response. To enable the comparison of methods for automatic detection of damaged buildings from post-hurricane remote sensing imagery taken from both airborne and satellite sensors, this paper presents the development of benchmark datasets from publicly available data. The major contributions of this work include (1) a scalable framework for creating benchmark datasets of hurricane-damaged buildings and (2) public sharing of the resulting benchmark datasets for the Greater Houston area after Hurricane Harvey in 2017. The proposed approach can be used to build other hurricane-damaged building datasets on which researchers can train and test object detection models to automatically identify damaged buildings. Introduction \IEEEPARstart{E}{mergency} managers of today grapple with post-hurricane damage assessment that largely relies on field surveys and damage reports. The recent expansion of private and government satellite imaging operations and their push to share some of the acquired data present new opportunities for observing hurricane-affected areas <|cite_start|> (Reference: USGS remote sensing coordination for the 2010 Haiti earthquake: ) <|cite_end|>. New methods in processing aerial and satellite images have improved assessment efficiency, but the process still depends on human visual inspection <|cite_start|> (Reference: Combining human computing and machine learning to make sense of big (aerial) data for disaster response: Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images.
In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response.) <|cite_end|> <|cite_start|> (Reference: Crowdsourcing earthquake damage assessment using remote sensing imagery: This paper describes the evolution of recent work on using crowdsourced analysis of remote sensing imagery, particularly high-resolution aerial imagery, to provide rapid, reliable assessments of damage caused by earthquakes and potentially other disasters. The initial effort examined online imagery taken after the 2008 Wenchuan, China, earthquake. A more recent response to the 2010 Haiti earthquake led to the formation of an international consortium: the Global Earth Observation Catastrophe Assessment Network (GEO-CAN). The success of GEO-CAN in contributing to the official damage assessments made by the Government of Haiti, the United Nations, and the World Bank led to further development of a web-based interface. A current initiative in Christchurch, New Zealand, is underway where remote sensing experts are analyzing satellite imagery, geotechnical engineers are marking liquefaction areas, and structural engineers are identifying building damage. The current site includes online training to improve the accuracy of the assessments and make it possible for even novice users to contribute to the crowdsourced solution. The paper discusses lessons learned from these initiatives and presents a way forward for using crowdsourced remote sensing as a tool for rapid assessment of damage caused by natural disasters around the world.) <|cite_end|>. In the aftermath of Hurricane Irma in 2017, analysts at the U.S. National Geospatial-Intelligence Agency sifted through hundreds of satellite images each day for damage assessment. These labor-intensive approaches are expensive and inefficient <|cite_start|> (Reference: Machine learning for aerial image labeling: Information extracted from aerial photographs has found applications in a wide range of areas including urban planning, crop and forest management, disaster relief, and climate modeling. At present, much of the extraction is still performed by human experts, making the process slow, costly, and error prone. The goal of this thesis is to develop methods for automatically extracting the locations of objects such as roads, buildings, and trees directly from aerial images. We investigate the use of machine learning methods trained on aligned aerial images and possibly outdated maps for labeling the pixels of an aerial image with semantic labels. 
We show how deep neural networks implemented on modern GPUs can be used to efficiently learn highly discriminative image features. We then introduce new loss functions for training neural networks that are partially robust to incomplete and poorly registered target maps. Finally, we propose two ways of improving the predictions of our system by introducing structure into the outputs of the neural networks. We evaluate our system on the largest and most-challenging road and building detection datasets considered in the literature and show that it works reliably under a wide variety of conditions. Furthermore, we are releasing the first large-scale road and building detection datasets to the public in order to facilitate future comparisons with other methods.) <|cite_end|>. Further, delayed assessment slows down urban search and rescue response times. Although various disaster-relevant public data are available, they are not always in a format that is easy to access, integrate, and process. This paper presents an important first step towards the automatic detection of damaged buildings on post-hurricane remote sensing imagery taken from both airborne and satellite sensors. In our work, we propose a scalable framework to create benchmark datasets of hurricane-damaged buildings from terabytes of data. We also publicly share the resulting benchmark datasets for the Greater Houston area after Hurricane Harvey in 2017. The benchmark datasets are suitable for training and testing of state-of-the-art \textit{object detection} models which have already been successful in detecting objects from various categories in other domains. Such benchmark development efforts are called for by machine learning researchers in the remote sensing domain <|cite_start|> (Reference: A Survey on Object Detection in Optical Remote Sensing Images: Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and is receiving significant attention in recent years. While enormous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as building and road, we concentrate on more generic object categories including, but are not limited to, road, building, tree, vehicle, ship, airport, urban-area. Covering about 270 publications we survey 1) template matching-based object detection methods, 2) knowledge-based object detection methods, 3) object-based image analysis (OBIA)-based object detection methods, 4) machine learning-based object detection methods, and 5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will be beneficial for the researchers to have better understanding of this research field.) <|cite_end|>.
For example, benchmark datasets for aerial scene classification are widely used <|cite_start|> (Reference: AID: A benchmark data set for performance evaluation of aerial scene classification: Aerial scene classification, which aims to automatically label an aerial image with a specific semantic category, is a fundamental problem for understanding high-resolution remote sensing imagery. In recent years, it has become an active task in the remote sensing area, and numerous algorithms have been proposed for this task, including many machine learning and data-driven approaches. However, the existing data sets for aerial scene classification, such as UC-Merced data set and WHU-RS19, contain relatively small sizes, and the results on them are already saturated. This largely limits the development of scene classification algorithms. This paper describes the Aerial Image data set (AID): a large-scale data set for aerial scene classification. The goal of AID is to advance the state of the arts in scene classification of remote sensing images. For creating AID, we collect and annotate more than 10000 aerial scene images. In addition, a comprehensive review of the existing aerial scene classification techniques as well as recent widely used deep learning methods is given. Finally, we provide a performance analysis of typical aerial scene classification and deep learning approaches on AID, which can be served as the baseline results on this benchmark.) <|cite_end|>. A benchmark dataset for damaged-building \textit{classification} was also developed recently <|cite_start|> (Reference: Detecting damaged buildings on post-hurricane satellite imagery based on customized convolutional neural networks: After a hurricane, damage assessment is critical to emergency managers and first responders so that resources can be planned and allocated appropriately. One way to gauge the damage extent is to detect and quantify the number of damaged buildings, which is traditionally done through driving around the affected area. This process can be labor intensive and time-consuming. In this paper, utilizing the availability and readiness of satellite imagery, we propose to improve the efficiency and accuracy of damage detection via image classification algorithms. From the building coordinates, we extract their aerial-view windows of appropriate size and classify whether a building is damaged or not. We demonstrate the result of our method in the case study of 2017 Hurricane Harvey.) <|cite_end|> and is distinct from this work because classification data cannot be used for object detection, which requires localizing an object of interest in addition to classifying it into the correct category. Our benchmark datasets consist of raster (satellite and aerial imagery) and vector data (auxiliary building damage information), which together provide the necessary components to train a machine learning model. The vector data, including crowdsourced damage annotations from the TOMNOD project (\url{https://www.tomnod.com/}), flood damage estimates by the U.S. Federal Emergency Management Agency (FEMA), and bounding boxes, are shared publicly (see Appendix). The raw raster data (on the order of terabytes), shared by DigitalGlobe and the U.S. National Oceanic and Atmospheric Administration (NOAA), are available through the stable URLs of the original data sources as described later. The data contains RGB bands only. The remainder of this paper is organized as follows.
Section~\ref{sec:background} summarizes existing work on disaster damage assessment using satellite imagery. Section~\ref{sec:data} details the process of creating the benchmark dataset. Section~\ref{sec:conclusion} concludes the paper with remarks on future research directions. Related Work \label{sec:background} Current damage assessment methods for emergency managers consist largely of field or windshield surveys and damage reports <|cite_start|> (Reference: Hierarchical disaster image classification for situation report enhancement: In this paper, a hierarchical disaster image classification (HDIC) framework based on multi-source data fusion (MSDF) and multiple correspondence analysis (MCA) is proposed to aid emergency managers in disaster response situations. The HDIC framework classifies images into different disaster categories and sub-categories using a pre-defined semantic hierarchy. In order to effectively fuse different sources (visual and text) of information, a weighting scheme is presented to assign different weights to each data resource depending on the hierarchical structure. The experimental analysis demonstrates that the proposed approach can effectively classify disaster images at each logical layer. In addition, the paper also presents an iPad application developed for situation report management using the proposed HDIC framework.) <|cite_end|> <|cite_start|> (Reference: Survey of data management and analysis in disaster situations: ) <|cite_end|>. Interviews with emergency managers reveal that this practice requires significant information integration resources. Aerial imagery is becoming increasingly pervasive in damage assessment practice since it can be captured and processed within hours, whereas satellite imagery can take days <|cite_start|> (Reference: Combining human computing and machine learning to make sense of big (aerial) data for disaster response: Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015.
The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response.) <|cite_end|>. A few studies directly compare aerial and satellite imagery for assessment reliability, finding satellite imagery to be useful for damage pattern recognition <|cite_start|> (Reference: Comparison of damage assessment maps derived from very high spatial resolution satellite and aerial imagery produced for the Haiti 2010 earthquake: Following the devastating M7.2 earthquake that affected Haiti on 12 January 2010 two types of building damage assessment maps were produced: 1) area-based damage assessments using pre- and post-event satellite imagery and 2) detailed building-by-building damage assessments using post-event aerial photography. In this paper, we compare the reliability and the usability of area-based damage assessment maps from satellite imagery with respect to the detailed damage assessment from aerial data. The main objective is to better understand how cooperative rapid mapping can steer the more detailed assessments that are typical in determining postdisaster recovery and reconstruction efforts. The results of these experiments indicate that damage assessment maps based on satellite data are capable of capturing the damage pattern, mainly in areas with a high level of damaged and many collapsed structures. However, these maps cannot provide the level of information needed for the quantification of damage intensity.) <|cite_end|>, and aerial imagery to be helpful for estimation of the intensity of building damages <|cite_start|> (Reference: Intercomparison and validation of building damage assessments based on post-Haiti 2010 earthquake imagery using multi-source reference data: Abstract. The Haiti 2010 earthquake is one of the first major disasters in which very high resolution satellite and airborne imagery was embraced to delineate the event impact. Several rapid mapping initiatives exploited post-earthquake satellite and airborne imagery to produce independent point feature sets marking the damage grade of affected buildings. Despite the obvious potential of the satellite remote sensing technology in providing damage figures, the scale and complexity of the urban structures in Port-au-Prince cause overall figures and patterns of the damage assessments to yield a rather poor representation of the true damage extent. The higher detail airborne imagery performs much better as confirmed by different validation studies carried out in the last two years. In this paper, in addition to the review and analysis of the different validation works, we investigate the quality of damage assessment derived by different activities through a simple intercomparison and a validation using a complete building ground survey. The results show that the identification of building damage from aerial imagery provides a realistic estimate of the spatial pattern and intensity of building damage.) <|cite_end|>. 
However, satellite data collection capabilities and data availability are improving. International organizations (e.g. United Nations Platform for Space-based Information for Disaster Management and Emergency Response, International Charter on Space and Major Disasters) and national agencies (e.g. NASA, USGS, NOAA) are sharing satellite imagery to aid damage assessment \cite{tralli2005satellite, duda2011usgs}. Commercial satellite imagery companies (e.g. DigitalGlobe, Planet Labs) are releasing pre- and post-event satellite imagery <|cite_start|> (Reference: Obstacles and Lessons Implementing a Differential Privacy-Based Open Data Program: ) <|cite_end|>, and other organizations are releasing real-time satellite imagery in the US and Europe <|cite_start|> (Reference: Post-Disaster Image Processing for Damage Analysis Using GENESI-DR, WPS and Grid Computing: The goal of the two year Ground European Network for Earth Science Interoperations-Digital Repositories (GENESI-DR) project was to build an open and seamless access service to Earth science digital repositories for European and world-wide science users. In order to showcase GENESI-DR, one of the developed technology demonstrators focused on fast search, discovery, and access to remotely sensed imagery in the context of post-disaster building damage assessment. This paper describes the scenario and implementation details of the technology demonstrator, which was developed to support post-disaster damage assessment analyst activities. Once a disaster alert has been issued, response time is critical to providing relevant damage information to analysts and/or stakeholders. The presented technology demonstrator validates the GENESI-DR project data search, discovery and security infrastructure and integrates the rapid urban area mapping and the near real-time orthorectification web processing services to support a post-disaster damage needs assessment analysis scenario. It also demonstrates how the GENESI-DR SOA can be linked to web processing services that access grid computing resources for fast image processing and use secure communication to ensure confidentiality of information.) <|cite_end|>. The use of automatic damage detection systems that take satellite imagery as input is uneven across different types of natural disasters. While automatic methods for earthquake damage assessment are relatively well-established <|cite_start|> (Reference: Object-based classification of earthquake damage from high-resolution optical imagery using machine learning: Abstract. Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes.
In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.) <|cite_end|>, they are less so for hurricane damage assessment. Within the domain of hurricanes, flood detection remains the focus of existing methods, which leaves other types of damages, such as wind-induced ones, neglected. Synthetic-aperture radar (SAR) images are typically used for this task <|cite_start|> (Reference: In Colorado, a global flood observatory keeps a close watch on Harvey’s torrents: ) <|cite_end|>. Segmentation has been used to automatically annotate flooded roads where pre- and post-event satellite imagery is available. Other flood detection methods utilize certain spectral bands, namely near-infrared in optical sensor images <|cite_start|> (Reference: Combining Time-Series Variation Modeling and Fuzzy Spatiotemporal Feature Fusion: A Novel Approach for Unsupervised Flood Mapping Using Dual-Polarized Sentinel-1 SAR Images: Due to the impact of climate change, the frequency of flood events has increased in recent years, which puts forward an urgent need for timely and accurate flood mapping for emergency response. As the synthetic aperture radar (SAR) enables all-time monitoring regardless of bad weather conditions, it fits far better than passive optical sensors to delineate submerged areas during flood events. However, the universal, rapid, and accurate detection of flood extent remains a challenge. Drawing inspiration from the analysis of time-series variation in representative ground objects caused by flood events, as observed in a dual-polarized SAR time series over a hydrological year, we construct a novel window-based variation model. This model can be used to capture both long-term trends and short-term fluctuations of flood features across different polarization modes. Subsequently, we introduce an unsupervised flood-mapping framework that integrates spatiotemporal flood features extracted by fuzzy-based methods. Given the distinct backscatter value of short vegetation, a flooded short vegetation (FSV) activation model is designed and performed to enhance flood-mapping accuracy in complex regions. The proposed method, tested on the 2020 East Dongting Lake flood in China, surpasses three unsupervised flood-mapping methods and two deep-learning methods in terms of quantitative evaluation and visual performance. The uncertainty of our proposed framework is tested through parameter sensitivity analyses, comparisons with flood-mapping results from other sensor images, and extensive experiments on floods at different locations and times, thereby demonstrating its effectiveness, stability, and universality.) <|cite_end|> to detect impure water, a proxy for a flooded area. These models rely on a selected threshold that is dependent on factors such as time of day and geographical characteristics. 
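As an illustration of the thresholding recipe just described, a common approach computes a normalized difference water index (NDWI) from the green and near-infrared bands and flags pixels above a fixed cutoff; the index itself is standard, but the threshold value below is an assumed placeholder, and the cited methods may use different bands or indices.

import numpy as np

def ndwi_flood_mask(green, nir, threshold=0.2):
    # NDWI = (green - NIR) / (green + NIR); water absorbs near-infrared light,
    # so high NDWI values act as a proxy for (possibly flooded) water pixels.
    ndwi = (green - nir) / (green + nir + 1e-9)  # epsilon avoids divide-by-zero
    return ndwi > threshold  # boolean mask; the cutoff is scene-dependent

The scene dependence of that cutoff (illumination, time of day, land cover) is exactly the limitation discussed next.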
The reliance on such thresholds limits the generalizability of these models to new events <|cite_start|> (Reference: Infrared Colorization Using Deep Convolutional Neural Networks: This paper proposes a method for transferring the RGB color spectrum to near-infrared (NIR) images using deep multi-scale convolutional neural networks. A direct and integrated transfer between NIR and RGB pixels is trained. The trained model does not require any user guidance or a reference image database in the recall phase to produce images with a natural appearance. To preserve the rich details of the NIR image, its high frequency features are transferred to the estimated RGB image. The presented approach is trained and evaluated on a real-world dataset containing a large amount of road scene images in summer. The dataset was captured by a multi-CCD NIR/RGB camera, which ensures a perfect pixel to pixel registration.) <|cite_end|>. The most prominent work for both earthquake and hurricane damage assessment is the Advanced Rapid Imaging and Analysis (ARIA) project, which uses SAR sensor outputs based on a physics-based understanding of the way damage appears on SAR images. In contrast to the ARIA project, this work focuses on creating a dataset and preparing it for statistical machine learning of damage of any type recognizable by humans on pansharpened satellite images from optical sensors. Many existing (semi-)automated damage assessment methods using satellite imagery take either physics-based or rule-based approaches <|cite_start|> (Reference: A comprehensive review of earthquake-induced building damage detection with remote sensing techniques: ) <|cite_end|>. Methods for optical images extract and use various properties of damage visible in the images <|cite_start|> (Reference: Damage patterns from satellite images of the 2003 Bam, Iran, earthquake: High-resolution (0.6m) commercial satellite images contain a wealth of information for mapping earthquake damage. Satellite images of the city of Bam, acquired on 30 September 2003 (pre-earthquake) and 03 January 2004 (post-earthquake), were obtained and used to distinguish damage patterns across the city. Comparisons between pre- and post-earthquake images clearly show structural damage and collapse. Using spectral (color) and textural information from the post-earthquake image, regions of damage were identified using a semi-automated computer-based algorithm. This analysis indicates that the damage within the city of Bam was concentrated in the eastern sections of the city. The extent of damage in some sections of the city reached 100%. The results from this study not only provide information regarding damage patterns for the city of Bam, but they also illustrate the potential for using satellite images to understand and document earthquake effects during future earthquakes.) <|cite_end|>. These are fine-tuned to a particular event; although they appear effective for a past event, they are not applicable to other events <|cite_start|> (Reference: Objects textural features sensitivity for earthquake damage mapping: The availability of very high resolution (VHR) optical sensors, can provide satellite images reaching less than one meter of ground resolution per pixel. It speeds up the development of new techniques addressing change detection applications, in particular, aiming at the damage mapping purpose. The present work is focused on the earthquake of Bam, occurred on 26 December 2003.
A pair of VHR optical images, acquired by QuickBird satellite, has been exploited to study the sensitivity of objects textural features with respect to damage levels. In particular, the damage level at single building scale has been considered and different textural parameters have been compared to ground truth data, based on the European Macroseismic Scale 1998 (EMS98). The preliminary results are presented.) <|cite_end|>. Some methods require pre-event imagery for comparison with post-event imagery. These, too, are less generalizable to other events, especially to those in regions where pre-event imagery is not available <|cite_start|> (Reference: An improved approach of information extraction for earthquake-damaged buildings using high-resolution imagery: The development of remote sensing technology, especially the availability of high-resolution satellite imagery, has been applied to building recognition, hazard investigation and rapid pre-evaluation in post-earthquake management. Existing pixel-oriented approaches which are commonly used for satellite high-resolution imagery have limitations in information extraction, ground object classification, and processing speed. This paper presents an object-oriented method to extract earthquake-damaged building information using high-resolution remote sensing imagery of the 5.12 Wenchuan Earthquake. This method segmented the whole image into non-intersecting pieces of image objects, and then classified these pieces to extract damaged/undamaged buildings using image features such as spectral characters, textures, shapes, and their contexts. The results show a higher-precision classification than conventional methods.) <|cite_end|>. Methods using SAR imagery have even more limited generalizability than those using optical imagery due to the small archive of SAR imagery that is available <|cite_start|> (Reference: Earthquake damage assessment of buildings using VHR optical and SAR imagery: Rapid damage assessment after natural disasters (e.g., earthquakes) and violent conflicts (e.g., war-related destruction) is crucial for initiating effective emergency response actions. Remote-sensing satellites equipped with very high spatial resolution (VHR) multispectral and synthetic aperture radar (SAR) imaging sensors can provide vital information due to their ability to map the affected areas with high geometric precision and in an uncensored manner. In this paper, we present a novel method that detects buildings destroyed in an earthquake using pre-event VHR optical and post-event detected VHR SAR imagery. The method operates at the level of individual buildings and assumes that they have a rectangular footprint and are isolated. First, the 3-D parameters of a building are estimated from the pre-event optical imagery. Second, the building information and the acquisition parameters of the VHR SAR scene are used to predict the expected signature of the building in the post-event SAR scene assuming that it is not affected by the event. Third, the similarity between the predicted image and the actual SAR image is analyzed. If the similarity is high, the building is likely to be still intact, whereas a low similarity indicates that the building is destroyed. A similarity threshold is used to classify the buildings. We demonstrate the feasibility and the effectiveness of the method for a subset of the town of Yingxiu, China, which was heavily damaged in the Sichuan earthquake of May 12, 2008. 
For the experiment, we use QuickBird and WorldView-1 optical imagery, and TerraSAR-X and COSMO-SkyMed SAR data.) <|cite_end|>. Classification has also been used for damage assessment, determining whether damaged buildings appear in satellite imagery <|cite_start|> (Reference: Detecting damaged buildings on post-hurricane satellite imagery based on customized convolutional neural networks: After a hurricane, damage assessment is critical to emergency managers and first responders so that resources can be planned and allocated appropriately. One way to gauge the damage extent is to detect and quantify the number of damaged buildings, which is traditionally done through driving around the affected area. This process can be labor intensive and time-consuming. In this paper, utilizing the availability and readiness of satellite imagery, we propose to improve the efficiency and accuracy of damage detection via image classification algorithms. From the building coordinates, we extract their aerial-view windows of appropriate size and classify whether a building is damaged or not. We demonstrate the result of our method in the case study of 2017 Hurricane Harvey.) <|cite_end|>. Object detection allows for the identification and localization of multiple object classes, such as the 60 classes (e.g. passenger vehicle, fixed-wing aircraft, building) defined in the xView dataset, one of the largest publicly available overhead imagery object detection datasets. According to Lam et al. (2018), ``several object detection datasets exist in the natural imagery space, but there are few for overhead satellite imagery'' <|cite_start|> (Reference: xView: Objects in Context in Overhead Imagery: We introduce a new large-scale dataset for the advancement of object detection techniques and overhead object detection research. This satellite imagery dataset enables research progress pertaining to four key computer vision frontiers. We utilize a novel process for geospatial category detection and bounding box annotation with three stages of quality control. Our data is collected from WorldView-3 satellites at 0.3m ground sample distance, providing higher resolution imagery than most public satellite imagery datasets. We compare xView to other object detection datasets in both natural and overhead imagery domains and then provide a baseline analysis using the Single Shot MultiBox Detector. xView is one of the largest and most diverse publicly available object-detection datasets to date, with over 1 million objects across 60 classes in over 1,400 km^2 of imagery.) <|cite_end|>. Planet Labs has also recently developed a training dataset using crowdsourced annotations on satellite images, which were then chipped and visually inspected. The dataset includes an ontology of objects found in disaster-affected regions, prepared for object detection. This dataset has not been shared publicly. Building on the momentum of the public xView dataset, this paper discusses the preparation of a public dataset using post-event satellite imagery from optical sensors, as well as aerial imagery. This dataset was developed for training object detection models.
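As an aside on the mechanics of this preparation, the sketch below illustrates the annotation-join step of the pipeline shown in the figure that follows: mapping a damage annotation's geographic bounding box to pixel coordinates within a north-up image tile. The function, its parameters, and the simple affine geotransform are hypothetical simplifications for illustration, not the released tooling.
\begin{verbatim}
def geo_to_pixel_bbox(lon_min, lat_min, lon_max, lat_max,
                      origin_lon, origin_lat, pixel_size, tile_size):
    """Map a geographic bounding box to pixel coordinates in a
    north-up tile whose top-left corner is (origin_lon, origin_lat)
    and whose ground sample distance is pixel_size (degrees, or
    meters in a projected CRS). Returns (x_min, y_min, x_max,
    y_max) clipped to the tile, or None if the box misses it."""
    x_min = int((lon_min - origin_lon) / pixel_size)
    x_max = int((lon_max - origin_lon) / pixel_size)
    # Image rows grow downward while latitude grows upward, so the
    # northern edge (lat_max) maps to the smaller row index.
    y_min = int((origin_lat - lat_max) / pixel_size)
    y_max = int((origin_lat - lat_min) / pixel_size)
    x_min, y_min = max(x_min, 0), max(y_min, 0)
    x_max, y_max = min(x_max, tile_size), min(y_max, tile_size)
    if x_min >= x_max or y_min >= y_max:
        return None  # annotation does not intersect this tile
    return (x_min, y_min, x_max, y_max)
\end{verbatim}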
\begin{figure*} \centering \includegraphics[width=1\textwidth]{images/pipelineflowchart.jpg} \caption{\textbf{Benchmark Dataset Preparation Process.} In the above diagram we describe the steps of creating a benchmark dataset: the first row indicates the preprocessing steps which are required to convert the large raw datasets to a more manageable tiled format; the second row describes how the damage annotation vector data is joined with the raster data to obtain corresponding bounding boxes; the last row illustrates a traditional workflow that a machine learning practitioner will take to train object detection algorithms on the resulting benchmark dataset.} \end{figure*} <|paper_end|>
[ "<|reference_start|> Combining human computing and machine learning to make sense of big (aerial) data for disaster response: Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response. <|reference_end|>", "<|reference_start|> Comparison of damage assessment maps derived from very high spatial resolution satellite and aerial imagery produced for the Haiti 2010 earthquake: Following the devastating M7.2 earthquake that affected Haiti on 12 January 2010 two types of building damage assessment maps were produced: 1) area-based damage assessments using pre- and post-event satellite imagery and 2) detailed building-by-building damage assessments using post-event aerial photography. In this paper, we compare the reliability and the usability of area-based damage assessment maps from satellite imagery with respect to the detailed damage assessment from aerial data. The main objective is to better understand how cooperative rapid mapping can steer the more detailed assessments that are typical in determining postdisaster recovery and reconstruction efforts. The results of these experiments indicate that damage assessment maps based on satellite data are capable of capturing the damage pattern, mainly in areas with a high level of damaged and many collapsed structures. However, these maps cannot provide the level of information needed for the quantification of damage intensity. 
<|reference_end|>", "<|reference_start|> Damage patterns from satellite images of the 2003 Bam, Iran, earthquake: High-resolution (0.6m) commercial satellite images contain a wealth of information for mapping earthquake damage. Satellite images of the city of Bam, acquired on 30 September 2003 (pre-earthquake) and 03 January 2004 (post-earthquake), were obtained and used to distinguish damage patterns across the city. Comparisons between pre- and post-earthquake images clearly show structural damage and collapse. Using spectral (color) and textural information from the post-earthquake image, regions of damage were identified using a semi-automated computer-based algorithm. This analysis indicates that the damage within the city of Bam was concentrated in the eastern sections of the city. The extent of damage in some sections of the city reached 100%. The results from this study not only provide information regarding damage patterns for the city of Bam, but they also illustrate the potential for using satellite images to understand and document earthquake effects during future earthquakes. <|reference_end|>", "<|reference_start|> xView: Objects in Context in Overhead Imagery: We introduce a new large-scale dataset for the advancement of object detection techniques and overhead object detection research. This satellite imagery dataset enables research progress pertaining to four key computer vision frontiers. We utilize a novel process for geospatial category detection and bounding box annotation with three stages of quality control. Our data is collected from WorldView-3 satellites at 0.3m ground sample distance, providing higher resolution imagery than most public satellite imagery datasets. We compare xView to other object detection datasets in both natural and overhead imagery domains and then provide a baseline analysis using the Single Shot MultiBox Detector. xView is one of the largest and most diverse publicly available object-detection datasets to date, with over 1 million objects across 60 classes in over 1,400 km^2 of imagery. <|reference_end|>" ]
[ 9, 10, 19, 24 ]
{"<|cite_2|>": "ss-1826672", "<|multi_cite_3_1|>": "ss-1085268", "<|multi_cite_3_2|>": "ss-1168279", "<|cite_5|>": "ss-1377720", "<|cite_7|>": "arxiv-94286", "<|cite_8|>": "ss-1256342", "<|cite_9|>": "ss-1826673", "<|multi_cite_10_2|>": "ss-1826674", "<|multi_cite_10_3|>": "ss-2433538", "<|cite_11|>": "ss-1085268", "<|cite_12|>": "ss-1826675", "<|cite_13|>": "ss-2166594", "<|multi_cite_14_1|>": "ss-1826676", "<|multi_cite_15_1|>": "ss-1826677", "<|cite_16|>": "ss-1826678", "<|multi_cite_17_2|>": "ss-1826679", "<|multi_cite_19_1|>": "ss-1378765", "<|cite_20|>": "arxiv-95511", "<|cite_22|>": "ss-1168282", "<|cite_23|>": "ss-1826680", "<|cite_24|>": "ss-1826681", "<|cite_25|>": "ss-1826682", "<|cite_26|>": "ss-1101231", "<|cite_27|>": "ss-1826673", "<|cite_28|>": "arxiv-149229"}
2402.06249
<|paper_start|> Title: Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks Abstract: Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks: Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems. However, existing research primarily focuses on image pre-processing defenses, which often result in reduced classification accuracy for clean images and fail to effectively counter physically feasible attacks. In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information and leverage this insight to develop a robust defense strategy. Our proposed defense mechanism utilizes a clustering-based technique called DBSCAN to isolate anomalous image segments, which is carried out by a three-stage pipeline consisting of Segmenting, Isolating, and Blocking phases to identify and mitigate adversarial noise. Upon identifying adversarial components, we neutralize them by replacing them with the mean pixel value, surpassing alternative replacement options. Our model-agnostic defense mechanism is evaluated across multiple models and datasets, demonstrating its effectiveness in countering various adversarial patch attacks in image classification tasks. Our proposed approach significantly improves accuracy, increasing from 38.8\% without the defense to 67.1\% with the defense against LaVAN and GoogleAp attacks, surpassing prominent state-of-the-art methods such as LGS (53.86\%) and Jujutsu (60\%). Introduction \label{intro} Adversarial manipulations pose a significant challenge to the resilience and effectiveness of well-trained deep neural network (DNN) architectures <|cite_start|> (Reference: Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook: In this paper, we present a comprehensive survey of the current trends focusing specifically on physical adversarial attacks. We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features. Furthermore, we explore the specific requirements and challenges associated with executing attacks in the physical world. Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications, including classification, detection, face recognition, semantic segmentation and depth estimation. We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness. We examine how each technique strives to ensure the successful manipulation of DNNs while mitigating the risk of detection and withstanding real-world distortions. Lastly, we discuss the current challenges and outline potential future research directions in the field of physical adversarial attacks. We highlight the need for enhanced defense mechanisms, the exploration of novel attack strategies, the evaluation of attacks in different application domains, and the establishment of standardized benchmarks and evaluation criteria for physical adversarial attacks. Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.)
<|cite_end|> <|cite_start|> (Reference: Towards Evaluating the Robustness of Neural Networks: Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input $x$ and any target classification $t$, it is possible to find a new input $x'$ that is similar to $x$ but classified as $t$. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from $95\%$ to $0.5\%$. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with $100\%$ probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.) <|cite_end|> <|cite_start|> (Reference: {Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings: When learning, such as classification, is used in adversarial settings, such as intrusion detection, intelligent adversaries will attempt to evade the resulting policies. The literature on adversarial machine learning aims to develop learning algorithms which are robust to such adversarial evasion, but exhibits two significant limitations: a) failure to account for operational constraints and b) a restriction that decisions are deterministic. To overcome these limitations, we introduce a conceptual separation between learning, used to infer attacker preferences, and operational decisions, which account for adversarial evasion, enforce operational constraints, and naturally admit randomization. Our approach gives rise to an intractably large linear program. To overcome scalability limitations, we introduce a novel method for estimating a compact parity basis representation for the operational decision function. Additionally, we develop an iterative constraint generation approach which embeds adversary’s best response calculation, to arrive at a scalable algorithm for computing near-optimal randomized operational decisions. Extensive experiments demonstrate the efficacy of our approach.) <|cite_end|> <|cite_start|> (Reference: Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. 
This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.) <|cite_end|> <|cite_start|> (Reference: Curse of Dimensionality in Adversarial Examples: While machine learning and deep neural networks in particular, have undergone massive progress in the past years, this ubiquitous paradigm faces a relatively newly discovered challenge, adversarial attacks. An adversary can leverage a plethora of attacking algorithms to severely reduce the performance of existing models, therefore threatening the use of AI in many safety-critical applications. Several attempts have been made to try and understand the root cause behind the generation of adversarial examples. In this paper, we try to relate the geometry of the high-dimensional space in which the model operates and optimizes, and the properties and problems therein, to such adversarial attacks. We present the mathematical background, the intuition behind the existence of adversarial examples and substantiate them with empirical results from our experiments.) <|cite_end|> <|cite_start|> (Reference: Robustness Against Adversarial Attacks Using Dimensionality: ) <|cite_end|>. In such scenarios, adversaries strategically introduce perturbations to test samples, leading to noticeable disruptions in the model's ability to accurately predict outcomes. A notable form of these attacks involves the insertion of localized patches into test images, exploiting vulnerabilities and causing the DNN model to err in crucial tasks such as image classification or object detection <|cite_start|> (Reference: DAP: A Dynamic Adversarial Patch for Evading Person Detectors: Patch-based adversarial attacks were proven to compromise the robustness and reliability of computer vision systems. However, their conspicuous and easily detectable nature challenge their practicality in real-world setting. To address this, recent work has proposed using Generative Adversarial Networks (GANs) to generate naturalistic patches that may not attract human attention. However, such approaches suffer from a limited latent space making it challenging to produce a patch that is efficient, stealthy, and robust to multiple real-world transformations. This paper introduces a novel approach that produces a Dynamic Adversarial Patch (DAP) designed to overcome these limitations. DAP maintains a naturalistic appearance while optimizing attack efficiency and robustness to real-world transformations. The approach involves redefining the optimization problem and introducing a novel objective function that incorporates a similarity metric to guide the patch's creation. Unlike GAN-based techniques, the DAP directly modifies pixel values within the patch, providing increased flexibility and adaptability to multiple transformations. Furthermore, most clothing-based physical attacks assume static objects and ignore the possible transformations caused by non-rigid deformation due to changes in a person's pose. To address this limitation, a 'Creases Transformation' (CT) block is introduced, enhancing the patch's resilience to a variety of real-world distortions. 
Experimental results demonstrate that the proposed approach outperforms state-of-the-art attacks, achieving a success rate of up to 82.28% in the digital world when targeting the YOLOv7 detector and 65% in the physical world when targeting YOLOv3tiny detector deployed in edge-based smart cameras.) <|cite_end|> <|cite_start|> (Reference: AdvART: Adversarial Art for Camouflaged Object Detection Attacks: Physical adversarial attacks pose a significant practical threat as it deceives deep learning systems operating in the real world by producing prominent and maliciously designed physical perturbations. Emphasizing the evaluation of naturalness is crucial in such attacks, as humans can readily detect and eliminate unnatural manipulations. To overcome this limitation, recent work has proposed leveraging generative adversarial networks (GANs) to generate naturalistic patches, which may not catch human's attention. However, these approaches suffer from a limited latent space which leads to an inevitable trade-off between naturalness and attack efficiency. In this paper, we propose a novel approach to generate naturalistic and inconspicuous adversarial patches. Specifically, we redefine the optimization problem by introducing an additional loss term to the cost function. This term works as a semantic constraint to ensure that the generated camouflage pattern holds semantic meaning rather than arbitrary patterns. The additional term leverages similarity metrics to construct a similarity loss that we optimize within the global objective function. Our technique is based on directly manipulating the pixel values in the patch, which gives higher flexibility and larger space compared to the GAN-based techniques that are based on indirectly optimizing the patch by modifying the latent vector. Our attack achieves superior success rate of up to 91.19\% and 72\%, respectively, in the digital world and when deployed in smart cameras at the edge compared to the GAN-based technique.) <|cite_end|> <|cite_start|> (Reference: IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021, Montreal, QC, Canada, October 11-17, 2021: ) <|cite_end|>. Patch-based attacks are recognized as a practical form of adversarial manipulation, valued for their adaptability, especially in scenarios with limited accessibility <|cite_start|> (Reference: Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook: In this paper, we present a comprehensive survey of the current trends focusing specifically on physical adversarial attacks. We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features. Furthermore, we explore the specific requirements and challenges associated with executing attacks in the physical world. Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications, including classification, detection, face recognition, semantic segmentation and depth estimation. We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness. We examine how each technique strives to ensure the successful manipulation of DNNs while mitigating the risk of detection and withstanding real-world distortions. 
Lastly, we discuss the current challenges and outline potential future research directions in the field of physical adversarial attacks. We highlight the need for enhanced defense mechanisms, the exploration of novel attack strategies, the evaluation of attacks in different application domains, and the establishment of standardized benchmarks and evaluation criteria for physical adversarial attacks. Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.) <|cite_end|> <|cite_start|> (Reference: SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation: In this paper, we investigate the vulnerability of MDE to adversarial patches. We propose a novel \underline{S}tealthy \underline{A}dversarial \underline{A}ttacks on \underline{M}DE (SAAM) that compromises MDE by either corrupting the estimated distance or causing an object to seamlessly blend into its surroundings. Our experiments, demonstrate that the designed stealthy patch successfully causes a DNN-based MDE to misestimate the depth of objects. In fact, our proposed adversarial patch achieves a significant 60\% depth error with 99\% ratio of the affected region. Importantly, despite its adversarial nature, the patch maintains a naturalistic appearance, making it inconspicuous to human observers. We believe that this work sheds light on the threat of adversarial attacks in the context of MDE on edge devices. We hope it raises awareness within the community about the potential real-life harm of such attacks and encourages further research into developing more robust and adaptive defense mechanisms.) <|cite_end|> <|cite_start|> (Reference: APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation: In recent times, monocular depth estimation (MDE) has experienced significant advancements in performance, largely attributed to the integration of innovative architectures, i.e., convolutional neural networks (CNNs) and Transformers. Nevertheless, the susceptibility of these models to adversarial attacks has emerged as a noteworthy concern, especially in domains where safety and security are paramount. This concern holds particular weight for MDE due to its critical role in applications like autonomous driving and robotic navigation, where accurate scene understanding is pivotal. To assess the vulnerability of CNN-based depth prediction methods, recent work tries to design adversarial patches against MDE. However, the existing approaches fall short of inducing a comprehensive and substantially disruptive impact on the vision system. Instead, their influence is partial and confined to specific local areas. These methods lead to erroneous depth predictions only within the overlapping region with the input image, without considering the characteristics of the target object, such as its size, shape, and position. In this paper, we introduce a novel adversarial patch named APARATE. This patch possesses the ability to selectively undermine MDE in two distinct ways: by distorting the estimated distances or by creating the illusion of an object disappearing from the perspective of the autonomous system. Notably, APARATE is designed to be sensitive to the shape and scale of the target object, and its influence extends beyond immediate proximity. 
APARATE, results in a mean depth estimation error surpassing $0.5$, significantly impacting as much as $99\%$ of the targeted region when applied to CNN-based MDE models. Furthermore, it yields a significant error of $0.34$ and exerts substantial influence over $94\%$ of the target region in the context of Transformer-based MDE.) <|cite_end|>. Unlike traditional adversarial methods that require extensive perturbations spanning the entire target object, patch-based attacks exhibit a localized nature. These attacks function like discrete stickers, making them easy to apply to potential targets, reflecting real-world situations where adversaries may face resource or access constraints. The subtle attributes of patch-based attacks contribute to their elusive nature, emphasizing the urgent need for the rapid deployment of robust defense mechanisms. However, existing defenses are prone to generating false positives <|cite_start|> (Reference: Local Gradients Smoothing: Defense against localized adversarial attacks: Deep neural networks (DNNs) have shown vulnerability to adversarial attacks, i.e., carefully perturbed inputs designed to mislead the network at inference time. Recently introduced localized attacks, Localized and Visible Adversarial Noise (LaVAN) and Adversarial patch, pose a new challenge to deep learning security by adding adversarial noise only within a specific region without affecting the salient objects in an image. Driven by the observation that such attacks introduce concentrated high-frequency changes at a particular image location, we have developed an effective method to estimate noise location in gradient domain and transform those high activation regions caused by adversarial noise in image domain while having minimal effect on the salient object that is important for correct classification. Our proposed Local Gradients Smoothing (LGS) scheme achieves this by regularizing gradients in the estimated noisy region before feeding the image to DNN for inference. We have shown the effectiveness of our method in comparison to other defense methods including Digital Watermarking, JPEG compression, Total Variance Minimization (TVM) and Feature squeezing on ImageNet dataset. In addition, we systematically study the robustness of the proposed defense mechanism against Back Pass Differentiable Approximation (BPDA), a state of the art attack recently developed to break defenses that transform an input sample to minimize the adversarial effect. Compared to other defense mechanisms, LGS is by far the most resistant to BPDA in localized adversarial attack setting.) <|cite_end|> and face challenges in accurately distinguishing between adversarial and clean samples. Additionally, in certain instances, these defenses may inadvertently remove or alter crucial features <|cite_start|> (Reference: (De) Randomized Smoothing for Certifiable Defense against Patch Attacks: Patch adversarial attacks on images, in which the attacker can distort pixels within a region of bounded size, are an important threat model since they provide a quantitative model for physical adversarial attacks. In this paper, we introduce a certifiable defense against patch attacks that guarantees for a given image and patch attack size, no patch adversarial examples exist. Our method is related to the broad class of randomized smoothing robustness schemes which provide high-confidence probabilistic robustness certificates.
By exploiting the fact that patch attacks are more constrained than general sparse attacks, we derive meaningfully large robustness certificates. Additionally, the algorithm we propose is de-randomized, providing deterministic certificates. To the best of our knowledge, there exists only one prior method for certifiable defense against patch attacks, which relies on interval bound propagation. While this sole existing method performs well on MNIST, it has several limitations: it requires computationally expensive training, does not scale to ImageNet, and performs poorly on CIFAR-10. In contrast, our proposed method effectively addresses all of these issues: our classifier can be trained quickly, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at the ImageNet scale. For example, for a 5*5 patch attack on CIFAR-10, our method achieves up to around 57.8% certified accuracy (with a classifier around 83.9% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy), effectively establishing a new state-of-the-art. Code is available at this https URL.) <|cite_end|>, resulting in the degradation of the model's performance even on benign samples. Adversarial patches exhibit characteristics of outliers or anomalies within the distribution of input images. The adversarial noise embedded within these patches diverges significantly from the signal or information present in the rest of the sample. Leveraging advanced anomaly detection techniques facilitates the identification and segregation of these patches in instances where they deviate from the broader image distribution. This is particularly useful in developing practical adversarial defenses against such patch-based attacks. \subsection{Contribution} The primary contributions of this paper are as follows: \begin{itemize} \item We introduce a novel defense mechanism against adversarial patch attacks. Our approach involves isolating the region in the image containing the patch as an anomaly and subsequently blocking the adversarial information. \item We demonstrate the distinctive informational disparities inherent in adversarial patches, offering invaluable insights crucial for the development of resilient defense strategies against adversarial patch attacks. \item We propose a three-step pipeline for implementing our defense mechanism, comprising a Segmenting phase, an Isolating phase, and a Blocking phase. Initially, the Segmenting phase divides the image into parts, which are then subjected to a clustering algorithm (DBSCAN) <|cite_start|> (Reference: {A density-based algorithm for discovering clusters in large spatial databases with noise: Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it.
We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLARANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency.) <|cite_end|> to identify segments containing adversarial noise. Subsequently, we replace these identified segments with the mean pixel value to neutralize the adversarial patch (a minimal code sketch of this pipeline is given below). \item Our defense mechanism is model-agnostic and demonstrates impressive performance, achieving up to 85\% recovery on adversarial samples in image classification tasks across various datasets, adversarial patches, and neural architectures. \end{itemize} \begin{figure*}[ht!] \centerline{\includegraphics[width=2\columnwidth]{figures/AU_methodology.pptx.pdf}} \caption{Detailed diagram of our proposed methodology.} \label{fig:methodology} \end{figure*} Related Work \label{related} Defenses against adversarial patch-based attacks can be broadly classified into two categories: certified defenses and empirical defenses. \textit{Certified defenses:} \textbf{De-randomized Smoothing (DS)} <|cite_start|> (Reference: (De) Randomized Smoothing for Certifiable Defense against Patch Attacks: Patch adversarial attacks on images, in which the attacker can distort pixels within a region of bounded size, are an important threat model since they provide a quantitative model for physical adversarial attacks. In this paper, we introduce a certifiable defense against patch attacks that guarantees for a given image and patch attack size, no patch adversarial examples exist. Our method is related to the broad class of randomized smoothing robustness schemes which provide high-confidence probabilistic robustness certificates. By exploiting the fact that patch attacks are more constrained than general sparse attacks, we derive meaningfully large robustness certificates. Additionally, the algorithm we propose is de-randomized, providing deterministic certificates. To the best of our knowledge, there exists only one prior method for certifiable defense against patch attacks, which relies on interval bound propagation. While this sole existing method performs well on MNIST, it has several limitations: it requires computationally expensive training, does not scale to ImageNet, and performs poorly on CIFAR-10. In contrast, our proposed method effectively addresses all of these issues: our classifier can be trained quickly, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at the ImageNet scale. For example, for a 5*5 patch attack on CIFAR-10, our method achieves up to around 57.8% certified accuracy (with a classifier around 83.9% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy), effectively establishing a new state-of-the-art. Code is available at this https URL.) <|cite_end|> introduces a certified defense technique by building a smoothed classifier through ensembling local predictions made on pixel patches. \textbf{PatchGuard} employs a small receptive field within deep neural networks (DNNs) and secures feature aggregation by masking out regions with the highest sum of class evidence.
\textit{Empirical defenses:} \textbf{Localized Gradient Smoothing (LGS)} <|cite_start|> (Reference: Local Gradients Smoothing: Defense against localized adversarial attacks: Deep neural networks (DNNs) have shown vulnerability to adversarial attacks, i.e., carefully perturbed inputs designed to mislead the network at inference time. Recently introduced localized attacks, Localized and Visible Adversarial Noise (LaVAN) and Adversarial patch, pose a new challenge to deep learning security by adding adversarial noise only within a specific region without affecting the salient objects in an image. Driven by the observation that such attacks introduce concentrated high-frequency changes at a particular image location, we have developed an effective method to estimate noise location in gradient domain and transform those high activation regions caused by adversarial noise in image domain while having minimal effect on the salient object that is important for correct classification. Our proposed Local Gradients Smoothing (LGS) scheme achieves this by regularizing gradients in the estimated noisy region before feeding the image to DNN for inference. We have shown the effectiveness of our method in comparison to other defense methods including Digital Watermarking, JPEG compression, Total Variance Minimization (TVM) and Feature squeezing on ImageNet dataset. In addition, we systematically study the robustness of the proposed defense mechanism against Back Pass Differentiable Approximation (BPDA), a state of the art attack recently developed to break defenses that transform an input sample to minimize the adversarial effect. Compared to other defense mechanisms, LGS is by far the most resistant to BPDA in localized adversarial attack setting.) <|cite_end|> normalizes gradient values and utilizes a moving window to identify high-density regions based on specific thresholds. \textbf{Jujutsu} <|cite_start|> (Reference: Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks: Adversarial patch attacks create adversarial examples by injecting arbitrary distortions within a bounded region of the input to fool deep neural networks (DNNs). These attacks are robust (i.e., physically-realizable) and universally malicious, and hence represent a severe security threat to real-world DNN-based systems. We propose Jujutsu, a two-stage technique to detect and mitigate robust and universal adversarial patch attacks. We first observe that adversarial patches are crafted as localized features that yield large influence on the prediction output, and continue to dominate the prediction on any input. Jujutsu leverages this observation for accurate attack detection with low false positives. Patch attacks corrupt only a localized region of the input, while the majority of the input remains unperturbed. Therefore, Jujutsu leverages generative adversarial networks (GAN) to perform localized attack recovery by synthesizing the semantic contents of the input that are corrupted by the attacks, and reconstructs a ``clean'' input for correct prediction. We evaluate Jujutsu on four diverse datasets spanning 8 different DNN models, and find that it achieves superior performance and significantly outperforms four existing defenses. We further evaluate Jujutsu against physical-world attacks, as well as adaptive attacks.) <|cite_end|> focuses on localizing adversarial patches and distinguishing them from benign samples. 
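To make the contrast with these pre-processing and detection-based defenses concrete, the following minimal sketch illustrates the Segment-Isolate-Block pipeline outlined in the contributions above, assuming square non-overlapping segments, simple per-segment color statistics as features, and scikit-learn's DBSCAN; the feature construction, the (eps, min_samples) values, and the rule of flagging DBSCAN noise points as the adversarial region are illustrative assumptions rather than the tuned configuration of the proposed defense.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def segment_isolate_block(image, seg=8, eps=0.15, min_samples=5):
    """Sketch of the three-stage defense on an HxWx3 float image
    scaled to [0, 1].
    Segmenting: tile the image into seg x seg blocks.
    Isolating:  cluster per-block statistics with DBSCAN; blocks
                labeled -1 (noise) are treated as anomalous,
                i.e., likely covered by the adversarial patch.
    Blocking:   overwrite anomalous blocks with the mean pixel."""
    h, w, _ = image.shape
    feats, coords = [], []
    for y in range(0, h - seg + 1, seg):
        for x in range(0, w - seg + 1, seg):
            block = image[y:y + seg, x:x + seg]
            # Illustrative features: per-channel mean and std.
            feats.append(np.concatenate([block.mean(axis=(0, 1)),
                                         block.std(axis=(0, 1))]))
            coords.append((y, x))
    labels = DBSCAN(eps=eps,
                    min_samples=min_samples).fit(np.array(feats)).labels_
    cleaned = image.copy()
    mean_pixel = image.mean(axis=(0, 1))  # blocking value
    for (y, x), label in zip(coords, labels):
        if label == -1:  # DBSCAN marks low-density outliers as noise
            cleaned[y:y + seg, x:x + seg] = mean_pixel
    return cleaned
\end{verbatim}
Flagging DBSCAN's noise points as the patch region encodes the anomaly view motivating this work: patch statistics are expected to be low-density outliers relative to natural image segments.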
While the defenses surveyed above offer valuable contributions, they are not without limitations, including high false-positive rates and poor detection rates. Additionally, a significant challenge lies in effectively mitigating the adversarial impact while ensuring that deep neural networks (DNNs) make accurate inferences on clean examples. <|paper_end|>
[ "<|reference_start|> Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset. <|reference_end|>", "<|reference_start|> Robustness Against Adversarial Attacks Using Dimensionality: <|reference_end|>", "<|reference_start|> DAP: A Dynamic Adversarial Patch for Evading Person Detectors: Patch-based adversarial attacks were proven to compromise the robustness and reliability of computer vision systems. However, their conspicuous and easily detectable nature challenge their practicality in real-world setting. To address this, recent work has proposed using Generative Adversarial Networks (GANs) to generate naturalistic patches that may not attract human attention. However, such approaches suffer from a limited latent space making it challenging to produce a patch that is efficient, stealthy, and robust to multiple real-world transformations. This paper introduces a novel approach that produces a Dynamic Adversarial Patch (DAP) designed to overcome these limitations. DAP maintains a naturalistic appearance while optimizing attack efficiency and robustness to real-world transformations. The approach involves redefining the optimization problem and introducing a novel objective function that incorporates a similarity metric to guide the patch's creation. Unlike GAN-based techniques, the DAP directly modifies pixel values within the patch, providing increased flexibility and adaptability to multiple transformations. Furthermore, most clothing-based physical attacks assume static objects and ignore the possible transformations caused by non-rigid deformation due to changes in a person's pose. To address this limitation, a 'Creases Transformation' (CT) block is introduced, enhancing the patch's resilience to a variety of real-world distortions. Experimental results demonstrate that the proposed approach outperforms state-of-the-art attacks, achieving a success rate of up to 82.28% in the digital world when targeting the YOLOv7 detector and 65% in the physical world when targeting YOLOv3tiny detector deployed in edge-based smart cameras. <|reference_end|>", "<|reference_start|> Local Gradients Smoothing: Defense against localized adversarial attacks: Deep neural networks (DNNs) have shown vulnerability to adversarial attacks, i.e., carefully perturbed inputs designed to mislead the network at inference time. Recently introduced localized attacks, Localized and Visible Adversarial Noise (LaVAN) and Adversarial patch, pose a new challenge to deep learning security by adding adversarial noise only within a specific region without affecting the salient objects in an image. 
Driven by the observation that such attacks introduce concentrated high-frequency changes at a particular image location, we have developed an effective method to estimate noise location in gradient domain and transform those high activation regions caused by adversarial noise in image domain while having minimal effect on the salient object that is important for correct classification. Our proposed Local Gradients Smoothing (LGS) scheme achieves this by regularizing gradients in the estimated noisy region before feeding the image to DNN for inference. We have shown the effectiveness of our method in comparison to other defense methods including Digital Watermarking, JPEG compression, Total Variance Minimization (TVM) and Feature squeezing on ImageNet dataset. In addition, we systematically study the robustness of the proposed defense mechanism against Back Pass Differentiable Approximation (BPDA), a state of the art attack recently developed to break defenses that transform an input sample to minimize the adversarial effect. Compared to other defense mechanisms, LGS is by far the most resistant to BPDA in localized adversarial attack setting. <|reference_end|>" ]
[ 3, 5, 6, 16 ]
{"<|multi_cite_1_1|>": "arxiv-530502", "<|multi_cite_1_2|>": "arxiv-104040", "<|multi_cite_1_3|>": "ss-1101443", "<|multi_cite_1_4|>": "arxiv-70555", "<|multi_cite_1_5|>": "ss-1312765", "<|multi_cite_1_6|>": "ss-2126298", "<|multi_cite_2_1|>": "arxiv-506479", "<|multi_cite_2_2|>": "arxiv-485845", "<|multi_cite_2_3|>": "ss-682260", "<|multi_cite_3_1|>": "arxiv-530502", "<|multi_cite_3_2|>": "arxiv-529030", "<|multi_cite_3_3|>": "arxiv-485661", "<|cite_4|>": "arxiv-164636", "<|multi_cite_5_2|>": "ss-768978", "<|multi_cite_6_1|>": "ss-988437", "<|cite_7|>": "ss-768978", "<|cite_9|>": "arxiv-164636", "<|cite_10|>": "arxiv-360364"}
2010.12746
<|paper_start|> Title: LCFI: A Fault Injection Tool for Studying Lossy Compression Error Propagation in HPC Programs Abstract: LCFI: A Fault Injection Tool for Studying Lossy Compression Error Propagation in HPC Programs: Error-bounded lossy compression is becoming more and more important to today's extreme-scale HPC applications because of the ever-increasing volume of data they generate; it has been widely used in in-situ visualization, data stream intensity reduction, storage reduction, I/O performance improvement, checkpoint/restart acceleration, memory footprint reduction, etc. Although many works have optimized ratio, quality, and performance for different error-bounded lossy compressors, no existing work has attempted to systematically understand the impact of lossy compression errors on HPC applications due to error propagation. In this paper, we propose and develop a lossy compression fault injection tool, called LCFI. To the best of our knowledge, this is the first fault injection tool that helps both lossy compressor developers and users to systematically and comprehensively understand the impact of lossy compression errors on HPC programs. The contributions of this work are threefold: (1) We propose an efficient approach to inject lossy compression errors according to a statistical analysis of compression errors for different state-of-the-art compressors. (2) We build a fault injector that is highly applicable, customizable, and easy to use in generating comprehensive top-down results, and we demonstrate the use of LCFI. (3) We evaluate LCFI on four representative HPC benchmarks with different abstracted fault models and make several observations about error propagation and its impact on program outputs. Introduction \label{sec:intro} Today's HPC simulations and advanced instruments produce vast volumes of scientific data, which may cause many serious issues including a huge storage burden <|cite_start|> (Reference: Evaluating lossy data compression on climate simulation data within a large ensemble: Abstract. High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression.
In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.) <|cite_end|> <|cite_start|> (Reference: A Methodology for Evaluating the Impact of Data Compression on Climate Simulation Data: High-resolution climate simulations require tremendous computing resources and can generate massive datasets. At present, preserving the data from these simulations consumes vast storage resources at institutions such as the National Center for Atmospheric Research (NCAR). The historical data generation trends are economically unsustainable, and storage resources are already beginning to limit science objectives. To mitigate this problem, we investigate the use of data compression techniques on climate simulation data from the Community Earth System Model. Ultimately, to convince climate scientists to compress their simulation data, we must be able to demonstrate that the reconstructed data reveals the same mean climate as the original data, and this paper is a first step toward that goal. To that end, we develop an approach for verifying the climate data and use it to evaluate several compression algorithms. We find that the diversity of the climate data requires the individual treatment of variables, and, in doing so, the reconstructed data can fall within the natural variability of the system, while achieving compression rates of up to 5:1.) <|cite_end|> <|cite_start|> (Reference: Error-Controlled Lossy Compression Optimized for High Compression Ratios of Scientific Datasets: Today’s scientific simulations require a significant reduction of the data size because of extremely large volumes of data they produce and the limitation of storage bandwidth and space. If the compression is set to reach a high compression ratio, however, the reconstructed data are often distorted too much to tolerate. In this paper, we explore a new compression strategy that can effectively control the data distortion when significantly reducing the data size. The contribution is threefold. (1) We propose an adaptive compression framework to select either our improved Lorenzo prediction method or our optimized linear regression method dynamically in different regions of the dataset. (2) We explore how to select them accurately based on the data features in each block to obtain the best compression quality. (3) We analyze the effectiveness of our solution in details using four real-world scientific datasets with 100+ fields. Evaluation results confirm that our new adaptive solution can significantly improve the rate distortion for the lossy compression with fairly high compression ratios. 
The compression ratio of our compressor is 1.5X~8X as high as that of two other leading lossy compressors (SZ and ZFP) with the same peak single-to-noise ratio (PSNR), in the high-compression cases. Parallel experiments with 8,192 cores and 24 TB of data shows that our solution obtains 1.86X dumping performance and 1.95X loading performance compared with the second-best lossy compressor, respectively.) <|cite_end|> <|cite_start|> (Reference: Optimizing Lossy Compression with Adjacent Snapshots for N-body Simulation Data: Today’s N-body simulations are producing extremely large amounts of data. The Hardware/Hybrid Accelerated Cosmology Code (HACC), for example, may simulate trillions of particles, producing tens of petabytes of data to store in a parallel file system, according to the HACC users. In this paper, we design and implement an efficient, in situ error-bounded lossy compressor to significantly reduce the data size for N-body simulations. Not only can our compressor save significant storage space for N-body simulation researchers, but it can also improve the I/O performance considerably with limited memory and computation overhead. Our contribution is threefold. (1) We propose an efficient data compression model by leveraging the consecutiveness of the cosmological data in both space and time dimensions as well as the physical correlation across different fields. (2) We propose a lightweight, efficient alignment mechanism to align the disordered particles across adjacent snapshots in the simulation, which is a fundamental step in the whole compression procedure. We also optimize the compression quality by exploring best-fit data prediction strategies and optimizing the frequencies of the space-based compression vs. time-based compression. (3) We evaluate our compressor using both a cosmological simulation package and molecular dynamics simulation data—two major categories in the N-body simulation domain. Experiments show that under the same distortion of data, our solution produces up to 43% higher compression ratios on the velocity field and up to 300% higher on the position field than do other state-of-the-art compressors (including SZ, ZFP, NUMARCK, and decimation). With our compressor, the overall I/O time on HACC data is reduced by up to 20% compared with the second-best compressor.) <|cite_end|>, I/O bottlenecks compared with fast stream processing <|cite_start|> (Reference: Use cases of lossy compression for floating-point data in scientific data sets: Architectural and technological trends of systems used for scientific computing call for a significant reduction of scientific data sets that are composed mainly of floating-point data. This article surveys and presents experimental results of currently identified use cases of generic lossy compression to address the different limitations of scientific computing systems. The article shows from a collection of experiments run on parallel systems of a leadership facility that lossy data compression not only can reduce the footprint of scientific data sets on storage but also can reduce I/O and checkpoint/restart times, accelerate computation, and even allow significantly larger problems to be run than without lossy compression. These results suggest that lossy compression will become an important technology in many aspects of high performance scientific computing. 
Because the constraints for each use case are different and often conflicting, this collection of results also indicates the need for more specialization of the compression pipelines.) <|cite_end|>, and insufficient memory issues <|cite_start|> (Reference: Memory-Efficient Quantum Circuit Simulation by Using Lossy Data Compression: In order to evaluate, validate, and refine the design of new quantum algorithms or quantum computers, researchers and developers need methods to assess their correctness and fidelity. This requires the capabilities of quantum circuit simulations. However, the number of quantum state amplitudes increases exponentially with the number of qubits, leading to the exponential growth of the memory requirement for the simulations. In this work, we present our memory-efficient quantum circuit simulation by using lossy data compression. Our empirical data shows that we reduce the memory requirement to 16.5% and 2.24E-06 of the original requirement for QFT and Grover's search, respectively. This finding further suggests that we can simulate deep quantum circuits up to 63 qubits with 0.8 petabytes memory.) <|cite_end|>. For example, the Hardware/Hybrid Accelerated Cosmology Code (HACC) <|cite_start|> (Reference: HACC: Supercomputing is evolving toward hybrid and accelerator-based architectures with millions of cores. The Hardware/Hybrid Accelerated Cosmology Code (HACC) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. In this Research Highlight, we demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining very high levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.) <|cite_end|> (twice a finalist for the ACM Gordon Bell Prize) can produce 20 petabytes of data to store when simulating up to 3.5 trillion particles over 300 timesteps. Even assuming a sustained bandwidth of 1 TB/s, the I/O time would still exceed 5 hours, which is prohibitive. Thus, researchers generally output the data by decimation, that is, storing one snapshot every several timesteps in the simulation. This process degrades the temporal resolution of the simulation and loses valuable information for post-analysis. Another typical example is instrument data generated for materials science research. Advanced instruments (such as the Advanced Photon Source at Argonne) can produce data at extremely high rates, such as 500 GB/s (expected to increase by at least two orders of magnitude with the coming upgrades <|cite_start|> (Reference: Advanced Photon Source Upgrade Project preliminary design report: ) <|cite_end|>), so that thousands of disks would be required to sustain the data production rate without compression support.
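To make these scales concrete, a back-of-the-envelope calculation (our illustration; the assumed per-disk sustained write bandwidth of 250 MB/s is a figure we introduce here, not one from the cited reports) recovers both numbers:
\[
t_{\mathrm{I/O}} = \frac{20\,\mathrm{PB}}{1\,\mathrm{TB/s}} = 2\times10^{4}\,\mathrm{s} \approx 5.6\,\mathrm{h},
\qquad
n_{\mathrm{disks}} \geq \frac{500\,\mathrm{GB/s}}{0.25\,\mathrm{GB/s\;per\;disk}} = 2000,
\]
matching the multi-hour I/O time and the thousands of disks mentioned above.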
To mitigate the significant storage burden and I/O bottleneck, researchers have employed a variety of data compressors. Lossless compressors such as Gzip <|cite_start|> (Reference: Gzip: Every user stores a huge amount of information on their computer; accordingly, it would be reasonable to compress data to save hard-disk space, as well as to store several files in a single archive for subsequent transfer over the Internet. This is especially relevant for mobile platforms and for users with slow Internet connections. This article discusses the creation of an application that implements GZip compression using the Deflate algorithm and packs files into an archive.) <|cite_end|>, Zstd, Blosc, and FPC <|cite_start|> (Reference: Fpc: A high-speed compressor for double-precision floating-point data: Many scientific programs exchange large quantities of double-precision data between processing nodes and with mass storage devices. Data compression can reduce the number of bytes that need to be transferred and stored. However, data compression is only likely to be employed in high-end computing environments if it does not impede the throughput. This paper describes and evaluates FPC, a fast lossless compression algorithm for linear streams of 64-bit floating-point data. FPC works well on hard-to-compress scientific data sets and meets the throughput demands of high-performance systems. A comparison with five lossless compression schemes, BZIP2, DFCM, FSD, GZIP, and PLMI, on 4 architectures and 13 data sets shows that FPC compresses and decompresses one to two orders of magnitude faster than the other algorithms at the same geometric-mean compression ratio. Moreover, FPC provides a guaranteed throughput as long as the prediction tables fit into the L1 data cache. For example, on a 1.6-GHz Itanium 2 server, the throughput is 670 Mbytes/s regardless of what data are being compressed.) <|cite_end|> suffer from low compression ratios (around 2:1 <|cite_start|> (Reference: Data compression for the exascale computing era-survey: While periodic checkpointing has been an important mechanism for tolerating faults in high performance computing HPC systems, it is cost-prohibitive as the HPC system approaches exascale. Applying compression techniques is one common way to mitigate such burdens by reducing the data size, but they are often found to be less effective for scientific datasets. Traditional lossless compression techniques that look for repeated patterns are ineffective for scientific data in which high-precision data is used and hence common patterns are rare to find. In this paper, we present a comparison of several lossless and lossy data compression algorithms and discuss their methodology under the exascale environment. As data volume increases, we discover an increasing trend of new domain-driven algorithms that exploit the inherent characteristics exhibited in many scientific dataset, such as relatively small changes in data values from one simulation iteration to the next or among neighboring data. In particular, significant data reduction has been observed in lossy compression. This paper also discusses how the errors introduced by lossy compressions are controlled and the tradeoffs with the compression ratio.) <|cite_end|>) on scientific data because of the high randomness of the trailing mantissa bits in floating-point representations.
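To illustrate why trailing-bit randomness defeats lossless compressors, the following self-contained Python sketch (our own illustration, not taken from the paper) compresses uniformly random doubles, a worst case in which the mantissa bits carry no repeated patterns; the achieved ratio stays near 1, whereas real scientific fields with some smoothness reach roughly the 2:1 cited above:

\begin{verbatim}
import random, struct, zlib

random.seed(0)
# 100,000 doubles drawn uniformly from [0, 1): the exponent bytes are
# highly repetitive, but the trailing mantissa bits are effectively random.
values = [random.random() for _ in range(100_000)]
raw = struct.pack(f"{len(values)}d", *values)
for level in (1, 9):
    ratio = len(raw) / len(zlib.compress(raw, level))
    print(f"zlib level {level}: compression ratio = {ratio:.2f}")
# Both ratios come out close to 1, far below the ~2:1 achievable on
# smoother real scientific data.
\end{verbatim}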
Accordingly, error-bounded lossy compression has been treated as one of the best approaches to solve this big scientific data issue <|cite_start|> (Reference: Fast error-bounded lossy HPC data compression with {SZ: Today's HPC applications are producing extremely large amounts of data, thus it is necessary to use an efficient compression before storing them to parallel file systems. In this paper, we optimize the error-bounded HPC data compression, by proposing a novel HPC data compression method that works very effectively on compressing large-scale HPC data sets. The compression method starts by linearizing multi-dimensional snapshot data. The key idea is to fit/predict the successive data points with the bestfit selection of curve fitting models. The data that can be predicted precisely will be replaced by the code of the corresponding curve-fitting model. As for the unpredictable data that cannot be approximated by curve-fitting models, we perform an optimized lossy compression via a binary representation analysis. We evaluate our proposed solution using 13 real-world HPC applications across different scientific domains, and compare it to many other state-of-the-art compression methods (including Gzip, FPC, ISABELA, NUMARCK, ZFP, FPZIP, etc.). Experiments show that the compression ratio of our compressor ranges in 3.3/1 - 436/1, which is higher than the second-best solution ZFP by as little as 2x and as much as an order of magnitude for most cases. The compression time of SZ is comparable to other solutions', while its decompression time is less than the second best one by 50%-90%. On an extreme-scale use case, experiments show that the compression ratio of SZ exceeds that of ZFP by 80%.) <|cite_end|> <|cite_start|> (Reference: Optimizing Lossy Compression with Adjacent Snapshots for N-body Simulation Data: Today’s N-body simulations are producing extremely large amounts of data. The Hardware/Hybrid Accelerated Cosmology Code (HACC), for example, may simulate trillions of particles, producing tens of petabytes of data to store in a parallel file system, according to the HACC users. In this paper, we design and implement an efficient, in situ error-bounded lossy compressor to significantly reduce the data size for N-body simulations. Not only can our compressor save significant storage space for N-body simulation researchers, but it can also improve the I/O performance considerably with limited memory and computation overhead. Our contribution is threefold. (1) We propose an efficient data compression model by leveraging the consecutiveness of the cosmological data in both space and time dimensions as well as the physical correlation across different fields. (2) We propose a lightweight, efficient alignment mechanism to align the disordered particles across adjacent snapshots in the simulation, which is a fundamental step in the whole compression procedure. We also optimize the compression quality by exploring best-fit data prediction strategies and optimizing the frequencies of the space-based compression vs. time-based compression. (3) We evaluate our compressor using both a cosmological simulation package and molecular dynamics simulation data—two major categories in the N-body simulation domain. Experiments show that under the same distortion of data, our solution produces up to 43% higher compression ratios on the velocity field and up to 300% higher on the position field than do other state-of-the-art compressors (including SZ, ZFP, NUMARCK, and decimation). 
With our compressor, the overall I/O time on HACC data is reduced by up to 20% compared with the second-best compressor.) <|cite_end|>. Although existing error-bounded lossy compressors such as SZ <|cite_start|> (Reference: Fast error-bounded lossy HPC data compression with {SZ: Today's HPC applications are producing extremely large amounts of data, thus it is necessary to use an efficient compression before storing them to parallel file systems. In this paper, we optimize the error-bounded HPC data compression, by proposing a novel HPC data compression method that works very effectively on compressing large-scale HPC data sets. The compression method starts by linearizing multi-dimensional snapshot data. The key idea is to fit/predict the successive data points with the bestfit selection of curve fitting models. The data that can be predicted precisely will be replaced by the code of the corresponding curve-fitting model. As for the unpredictable data that cannot be approximated by curve-fitting models, we perform an optimized lossy compression via a binary representation analysis. We evaluate our proposed solution using 13 real-world HPC applications across different scientific domains, and compare it to many other state-of-the-art compression methods (including Gzip, FPC, ISABELA, NUMARCK, ZFP, FPZIP, etc.). Experiments show that the compression ratio of our compressor ranges in 3.3/1 - 436/1, which is higher than the second-best solution ZFP by as little as 2x and as much as an order of magnitude for most cases. The compression time of SZ is comparable to other solutions', while its decompression time is less than the second best one by 50%-90%. On an extreme-scale use case, experiments show that the compression ratio of SZ exceeds that of ZFP by 80%.) <|cite_end|> <|cite_start|> (Reference: Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization: Today's HPC applications are producing extremely large amounts of data, such that data storage and analysis are becoming more challenging for scientific research. In this work, we design a new error-controlled lossy compression algorithm for large-scale scientific data. Our key contribution is significantly improving the prediction hitting rate (or prediction accuracy) for each data point based on its nearby data values along multiple dimensions. We derive a series of multilayer prediction formulas and their unified formula in the context of data compression. One serious challenge is that the data prediction has to be performed based on the preceding decompressed values during the compression in order to guarantee the error bounds, which may degrade the prediction accuracy in turn. We explore the best layer for the prediction by considering the impact of compression errors on the prediction accuracy. Moreover, we propose an adaptive error-controlled quantization encoder, which can further improve the prediction hitting rate considerably. The data size can be reduced significantly after performing the variable-length encoding because of the uneven distribution produced by our quantization encoder. We evaluate the new compressor on production scientific data sets and compare it with many other state-of-the-art compressors: GZIP, FPZIP, ZFP, SZ-1.1, and ISABELA. Experiments show that our compressor is the best in class, especially with regard to compression factors (or bit-rates) and compression errors (including RMSE, NRMSE, and PSNR). 
Our solution is better than the second-best solution by more than a 2x increase in the compression factor and 3.8x reduction in the normalized root mean squared error on average, with reasonable error bounds and user-desired bit-rates.) <|cite_end|> <|cite_start|> (Reference: Error-Controlled Lossy Compression Optimized for High Compression Ratios of Scientific Datasets: Today’s scientific simulations require a significant reduction of the data size because of extremely large volumes of data they produce and the limitation of storage bandwidth and space. If the compression is set to reach a high compression ratio, however, the reconstructed data are often distorted too much to tolerate. In this paper, we explore a new compression strategy that can effectively control the data distortion when significantly reducing the data size. The contribution is threefold. (1) We propose an adaptive compression framework to select either our improved Lorenzo prediction method or our optimized linear regression method dynamically in different regions of the dataset. (2) We explore how to select them accurately based on the data features in each block to obtain the best compression quality. (3) We analyze the effectiveness of our solution in details using four real-world scientific datasets with 100+ fields. Evaluation results confirm that our new adaptive solution can significantly improve the rate distortion for the lossy compression with fairly high compression ratios. The compression ratio of our compressor is 1.5X~8X as high as that of two other leading lossy compressors (SZ and ZFP) with the same peak single-to-noise ratio (PSNR), in the high-compression cases. Parallel experiments with 8,192 cores and 24 TB of data shows that our solution obtains 1.86X dumping performance and 1.95X loading performance compared with the second-best lossy compressor, respectively.) <|cite_end|> and ZFP <|cite_start|> (Reference: Fixed-rate compressed floating-point arrays: Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.) 
<|cite_end|> can strictly control the compression error of each data point, a significant gap still remains in understanding the impact of compression errors on program output. In other words, the propagation of compression errors in HPC programs has not been well studied and understood. Therefore, current lossy compression methods may lead to unacceptably inaccurate results for scientific discovery <|cite_start|> (Reference: Exploration of lossy compression for application-level checkpoint/restart: The scale of high performance computing (HPC) systems is exponentially growing, potentially causing prohibitive shrinkage of mean time between failures (MTBF) while the overall increase in the I/O performance of parallel file systems will be far behind the increase in scale. As such, there have been various attempts to decrease the checkpoint overhead, one of which is to employ compression techniques to the checkpoint files. While most of the existing techniques focus on lossless compression, their compression rates and thus effectiveness remain rather limited. Instead, we propose a loss compression technique based on wavelet transformation for checkpoints, and explore its impact to application results. Experimental application of our loss compression technique to a production climate application, NICAM, shows that the overall checkpoint time including compression is reduced by 81%, while relative error remains fairly constant at approximately 1.2% on overall average of all variables of compressed physical quantities compared to original checkpoint without compression.) <|cite_end|> <|cite_start|> (Reference: Exploring the feasibility of lossy compression for pde simulations: Checkpoint restart plays an important role in high-performance computing (HPC) applications, allowing simulation runtime to extend beyond a single job allocation and facilitating recovery from hardware failure. Yet, as machines grow in size and in complexity, traditional approaches to checkpoint restart are becoming prohibitive. Current methods store a subset of the application’s state and exploit the memory hierarchy in the machine. However, as the energy cost of data movement continues to dominate, further reductions in checkpoint size are needed. Lossy compression, which can significantly reduce checkpoint sizes, offers a potential to reduce computational cost in checkpoint restart. This article investigates the use of numerical properties of partial differential equation (PDE) simulations, such as bounds on the truncation error, to evaluate the feasibility of using lossy compression in checkpointing PDE simulations. Restart from a checkpoint with lossy compression is considered for a fail-stop error in two time-dependent HPC application codes: PlasComCM and Nek5000. Results show that error in application variables due to a restart from a lossy compressed checkpoint can be masked by the numerical error in the discretization, leading to increased efficiency in checkpoint restart without influencing overall accuracy in the simulation.) <|cite_end|> <|cite_start|> (Reference: Analyzing the Performance and Accuracy of Lossy Checkpointing on Sub-iteration of NWChem: Future exascale systems are expected to be characterized by more frequent failures than current petascale systems. This places increased importance on the application to minimize the amount of time wasted due to recompution when recovering from a checkpoint. Typically HPC application checkpoint at iteration boundaries. 
However, for applications that have a high per-iteration cost, checkpointing inside the iteration limits the amount of re-computation. This paper analyzes the performance and accuracy of using lossy compressed check-pointing in the computational chemistry application NWChem. Our results indicate that lossy compression is an effective tool for reducing the sub-iteration checkpoint size. Moreover, compression error tolerances that yield acceptable deviation in accuracy and iteration count are quantified.) <|cite_end|> based on the corrupted program output. Fault Injection (FI) is a widely used technique to evaluate the resilience of software applications to faults. While FI has been extensively used in general-purpose applications, to the best of our knowledge, no FI tool exists for lossy compression errors. The main challenges in developing such a fault injector lie in (1) designing a proper abstraction of the compression fault model and (2) integrating the fault model at a level where program-level error propagation analysis can also be conducted (a minimal sketch of such an injection is given below). Our contributions are as follows. \begin{itemize} \item We propose a systematic approach for efficient lossy compression fault injection to help compressor developers and users understand the impact of compression errors on their HPC applications of interest. \item We build a fault injector (called \textsc{LCFI}) to inject lossy compression errors into any given HPC program. The tool is highly applicable, customizable, easy to use, and able to generate comprehensive top-down results. We also demonstrate the use of \textsc{LCFI} on a simple example program. \item We evaluate \textsc{LCFI} on four representative HPC benchmark programs with different lossy compression errors to understand how different compressors affect those programs' outputs. The experimental results provide several important insights that help users apply lossy compression strategically so as to avoid corrupting program output. \end{itemize} The rest of the paper is organized as follows. In Section~\ref{sec:background}, we discuss the background and our research motivation. In Section~\ref{sec:model}, we present our fault model for lossy compression errors. In Section~\ref{sec:design}, we present the design and implementation details of our FI tool \textsc{LCFI}. In Section~\ref{sec:usage}, we describe the use of \textsc{LCFI} in detail. In Section~\ref{sec:evaluation}, we present our evaluation results. In Section~\ref{sec:conclusion}, we conclude and discuss future work. <|paper_end|>
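As a concrete illustration of the fault model sketched in the introduction above, the following Python snippet injects an error-bounded perturbation into a program variable; it is a minimal sketch under our own assumptions (a uniform error distribution within the bound, a common approximation for SZ-like predictive compressors, plus a crude truncated-Gaussian alternative), not LCFI's actual implementation:

\begin{verbatim}
import random

def inject_lossy_error(data, error_bound, mode="abs", dist="uniform", seed=None):
    """Perturb each value as if it had passed through an error-bounded
    lossy compressor and back. All names and defaults are illustrative."""
    rng = random.Random(seed)
    perturbed = []
    for x in data:
        bound = error_bound if mode == "abs" else abs(x) * error_bound
        if dist == "uniform":
            err = rng.uniform(-bound, bound)   # uniform within the bound
        else:
            # bell-shaped error, truncated so the bound is never violated
            err = max(-bound, min(bound, rng.gauss(0.0, bound / 3.0)))
        perturbed.append(x + err)
    return perturbed

# Usage: inject errors with absolute bound 1e-3 into one field, rerun the
# downstream computation on `noisy`, and diff its output against the clean
# run to observe how the error propagates.
field = [0.1 * i for i in range(10)]
noisy = inject_lossy_error(field, 1e-3, seed=42)
\end{verbatim}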
[ "<|reference_start|> Error-Controlled Lossy Compression Optimized for High Compression Ratios of Scientific Datasets: Today’s scientific simulations require a significant reduction of the data size because of extremely large volumes of data they produce and the limitation of storage bandwidth and space. If the compression is set to reach a high compression ratio, however, the reconstructed data are often distorted too much to tolerate. In this paper, we explore a new compression strategy that can effectively control the data distortion when significantly reducing the data size. The contribution is threefold. (1) We propose an adaptive compression framework to select either our improved Lorenzo prediction method or our optimized linear regression method dynamically in different regions of the dataset. (2) We explore how to select them accurately based on the data features in each block to obtain the best compression quality. (3) We analyze the effectiveness of our solution in details using four real-world scientific datasets with 100+ fields. Evaluation results confirm that our new adaptive solution can significantly improve the rate distortion for the lossy compression with fairly high compression ratios. The compression ratio of our compressor is 1.5X~8X as high as that of two other leading lossy compressors (SZ and ZFP) with the same peak single-to-noise ratio (PSNR), in the high-compression cases. Parallel experiments with 8,192 cores and 24 TB of data shows that our solution obtains 1.86X dumping performance and 1.95X loading performance compared with the second-best lossy compressor, respectively. <|reference_end|>", "<|reference_start|> Optimizing Lossy Compression with Adjacent Snapshots for N-body Simulation Data: Today’s N-body simulations are producing extremely large amounts of data. The Hardware/Hybrid Accelerated Cosmology Code (HACC), for example, may simulate trillions of particles, producing tens of petabytes of data to store in a parallel file system, according to the HACC users. In this paper, we design and implement an efficient, in situ error-bounded lossy compressor to significantly reduce the data size for N-body simulations. Not only can our compressor save significant storage space for N-body simulation researchers, but it can also improve the I/O performance considerably with limited memory and computation overhead. Our contribution is threefold. (1) We propose an efficient data compression model by leveraging the consecutiveness of the cosmological data in both space and time dimensions as well as the physical correlation across different fields. (2) We propose a lightweight, efficient alignment mechanism to align the disordered particles across adjacent snapshots in the simulation, which is a fundamental step in the whole compression procedure. We also optimize the compression quality by exploring best-fit data prediction strategies and optimizing the frequencies of the space-based compression vs. time-based compression. (3) We evaluate our compressor using both a cosmological simulation package and molecular dynamics simulation data—two major categories in the N-body simulation domain. Experiments show that under the same distortion of data, our solution produces up to 43% higher compression ratios on the velocity field and up to 300% higher on the position field than do other state-of-the-art compressors (including SZ, ZFP, NUMARCK, and decimation). 
With our compressor, the overall I/O time on HACC data is reduced by up to 20% compared with the second-best compressor. <|reference_end|>", "<|reference_start|> Memory-Efficient Quantum Circuit Simulation by Using Lossy Data Compression: In order to evaluate, validate, and refine the design of new quantum algorithms or quantum computers, researchers and developers need methods to assess their correctness and fidelity. This requires the capabilities of quantum circuit simulations. However, the number of quantum state amplitudes increases exponentially with the number of qubits, leading to the exponential growth of the memory requirement for the simulations. In this work, we present our memory-efficient quantum circuit simulation by using lossy data compression. Our empirical data shows that we reduce the memory requirement to 16.5% and 2.24E-06 of the original requirement for QFT and Grover's search, respectively. This finding further suggests that we can simulate deep quantum circuits up to 63 qubits with 0.8 petabytes memory. <|reference_end|>", "<|reference_start|> Exploration of lossy compression for application-level checkpoint/restart: The scale of high performance computing (HPC) systems is exponentially growing, potentially causing prohibitive shrinkage of mean time between failures (MTBF) while the overall increase in the I/O performance of parallel file systems will be far behind the increase in scale. As such, there have been various attempts to decrease the checkpoint overhead, one of which is to employ compression techniques to the checkpoint files. While most of the existing techniques focus on lossless compression, their compression rates and thus effectiveness remain rather limited. Instead, we propose a loss compression technique based on wavelet transformation for checkpoints, and explore its impact to application results. Experimental application of our loss compression technique to a production climate application, NICAM, shows that the overall checkpoint time including compression is reduced by 81%, while relative error remains fairly constant at approximately 1.2% on overall average of all variables of compressed physical quantities compared to original checkpoint without compression. <|reference_end|>" ]
[ 2, 3, 5, 17 ]
{"<|multi_cite_1_1|>": "ss-1960111", "<|multi_cite_1_2|>": "ss-838756", "<|multi_cite_1_3|>": "ss-753388", "<|multi_cite_1_4|>": "ss-1960112", "<|cite_2|>": "ss-1367695", "<|cite_3|>": "arxiv-180331", "<|cite_4|>": "ss-1960113", "<|cite_5|>": "ss-681360", "<|cite_6|>": "ss-885889", "<|cite_9|>": "ss-845849", "<|cite_10|>": "ss-1863332", "<|multi_cite_12_1|>": "ss-1960114", "<|multi_cite_12_2|>": "ss-1960112", "<|multi_cite_13_1|>": "ss-1960114", "<|multi_cite_13_2|>": "arxiv-126597", "<|multi_cite_13_3|>": "ss-753388", "<|cite_14|>": "ss-1287574", "<|multi_cite_15_1|>": "ss-2276917", "<|multi_cite_15_2|>": "ss-1442178", "<|multi_cite_15_3|>": "ss-1960115"}
2309.06810-0
<|paper_start|> Title: Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly Abstract: Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly: Shape assembly aims to reassemble parts (or fragments) into a complete object, which is a common task in our daily life. Different from the semantic part assembly (e.g., assembling a chair's semantic parts like legs into a whole chair), geometric part assembly (e.g., assembling bowl fragments into a complete bowl) is an emerging task in computer vision and robotics. Instead of semantic information, this task focuses on geometric information of parts. As the both geometric and pose space of fractured parts are exceptionally large, shape pose disentanglement of part representations is beneficial to geometric shape assembly. In our paper, we propose to leverage SE(3) equivariance for such shape pose disentanglement. Moreover, while previous works in vision and robotics only consider SE(3) equivariance for the representations of single objects, we move a step forward and propose leveraging SE(3) equivariance for representations considering multi-part correlations, which further boosts the performance of the multi-part assembly. Experiments demonstrate the significance of SE(3) equivariance and our proposed method for geometric shape assembly. Project page: https://crtie.github.io/SE-3-part-assembly/ Introduction Shape assembly aims to compose the parts or fragments of an object into a complete shape. It is a common task in the human-built world, from furniture assembly <|cite_start|> (Reference: IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks: The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks. The environment is designed to advance reinforcement learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment supports over 80 different furniture models, Sawyer and Baxter robot simulation, and domain randomization. The IKEA Furniture Assembly Environment is a testbed for methods aiming to solve complex manipulation tasks. The environment is publicly available at https://clvrai.com/furniture) <|cite_end|> <|cite_start|> (Reference: Generative 3D Part Assembly via Dynamic Graph Learning: Autonomous part assembly is a challenging yet crucial task in 3D computer vision and robotics. Analogous to buying an IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation. In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry. Essentially, the task of generative 3D part assembly is to predict a 6-DoF part pose, including a rigid rotation and translation, for each input part that assembles a single 3D shape as the final output. To tackle this problem, we propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone. It explicitly conducts sequential part assembly refinements in a coarse-to-fine manner, exploits a pair of part relation reasoning module and part aggregation module for dynamically adjusting both part features and their relations in the part graph. 
We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach.) <|cite_end|> (\emph{e.g.}, assembling chair parts like legs and handles into a whole chair) to fractured object reassembly <|cite_start|> (Reference: Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve across-category generalization. In this paper, we introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem. Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together. We couple the training of NSM with an implicit shape reconstruction task to make NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset that consists of 200K shape mating pairs from numerous object meshes with diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline. Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm. Additional material is available at: https://neural-shape-mating.github.io/) <|cite_end|> <|cite_start|> (Reference: Breaking Bad: A Dataset for Geometric Fracture and Reassembly: We introduce Breaking Bad, a large-scale dataset of fractured objects. Our dataset consists of over one million fractured objects simulated from ten thousand base models. The fracture simulation is powered by a recent physically based algorithm that efficiently generates a variety of fracture modes of an object. Existing shape assembly datasets decompose objects according to semantically meaningful parts, effectively modeling the construction process. In contrast, Breaking Bad models the destruction process of how a geometric object naturally breaks into fragments. Our dataset serves as a benchmark that enables the study of fractured object reassembly and presents new challenges for geometric shape understanding. We analyze our dataset with several geometry measurements and benchmark three state-of-the-art shape assembly deep learning methods under various settings. Extensive experimental results demonstrate the difficulty of our dataset, calling on future research in model designs specifically for the geometric shape assembly task. We host our dataset at https://breaking-bad-dataset.github.io/.) <|cite_end|> (\emph{e.g.}, assembling bowl fragments into a whole bowl). When trying to complete an object from parts, we focus on their \textbf{\textit{geometric}} and \textbf{\textit{semantic}} information.
There is a vast literature in both the computer vision and robotics fields studying the shape assembly problem, especially for applications like furniture assembly and object assembly <|cite_start|> (Reference: Designing Effective Step-by-step Assembly Instructions: We present design principles for creating effective assembly instructions and a system that is based on these principles. The principles are drawn from cognitive psychology research which investigated people's conceptual models of assembly and effective methods to visually communicate assembly information. Our system is inspired by earlier work in robotics on assembly planning and in visualization on automated presentation design. Although other systems have considered presentation and planning independently, we believe it is necessary to address the two problems simultaneously in order to create effective assembly instructions. We describe the algorithmic techniques used to produce assembly instructions given object geometry, orientation, and optional grouping and ordering constraints on the object's parts. Our results demonstrate that it is possible to produce aesthetically pleasing and easy to follow instructions for many everyday objects.) <|cite_end|> <|cite_start|> (Reference: IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks: The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks. The environment is designed to advance reinforcement learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment supports over 80 different furniture models, Sawyer and Baxter robot simulation, and domain randomization. The IKEA Furniture Assembly Environment is a testbed for methods aiming to solve complex manipulation tasks. The environment is publicly available at https://clvrai.com/furniture) <|cite_end|> <|cite_start|> (Reference: Learning 3D Part Assembly from a Single Image: Autonomous assembly is a crucial capability for robots in many applications. For this task, several problems such as obstacle avoidance, motion planning, and actuator control have been extensively studied in robotics. However, when it comes to task specification, the space of possibilities remains underexplored. Towards this end, we introduce a novel problem, single-image-guided 3D part assembly, along with a learning-based solution. We study this problem in the setting of furniture assembly from a given complete set of parts and a single image depicting the entire assembled object. Multiple challenges exist in this setting, including handling ambiguity among parts (e.g., slats in a chair back and leg stretchers) and 3D pose prediction for parts and part subassemblies, whether visible or occluded. We address these issues by proposing a two-module pipeline that leverages strong 2D-3D correspondences and assembly-oriented graph message-passing to infer part relationships. In experiments with a PartNet-based synthetic benchmark, we demonstrate the effectiveness of our framework as compared with three baseline approaches.) <|cite_end|> <|cite_start|> (Reference: Generative 3D Part Assembly via Dynamic Graph Learning: Autonomous part assembly is a challenging yet crucial task in 3D computer vision and robotics.
Analogous to buying an IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation. In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry. Essentially, the task of generative 3D part assembly is to predict a 6-DoF part pose, including a rigid rotation and translation, for each input part that assembles a single 3D shape as the final output. To tackle this problem, we propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone. It explicitly conducts sequential part assembly refinements in a coarse-to-fine manner, exploits a pair of part relation reasoning module and part aggregation module for dynamically adjusting both part features and their relations in the part graph. We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach.) <|cite_end|>. Imagine we want to assemble a simple table from four wooden sticks and a flat board: we can infer that the sticks are the table legs, so they should be vertically placed, while the board is the table top and should be horizontally placed. Here, we not only use geometric clues to infer the parts' functions but also use semantic information to predict the parts' poses. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/teaser_v2.pdf} \caption{ \textbf{Geometric Shape Assembly} aims to assemble different fractured parts into a whole shape. We propose to leverage \textbf{SE(3) Equivariance} for learning Geometric Shape Assembly, which disentangles poses and shapes of fractured parts, and performs better than networks without SE(3)-equivariant representations. } \label{fig_teaser} \end{figure} Recently, a two-part geometric mating dataset was proposed in NSM <|cite_start|> (Reference: Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve across-category generalization. In this paper, we introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem. Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together. We couple the training of NSM with an implicit shape reconstruction task to make NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset that consists of 200K shape mating pairs from numerous object meshes with diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline.
Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm. Additional material is available at: https://neural-shape-mating.github.io/) <|cite_end|>, which considers shape assembly from a pure geometric perspective, without relying on semantic information. This work randomly cuts an object into two parts and studies how to mate the resulting pair back into the original shape. Such a design is practical in applications such as object kitting <|cite_start|> (Reference: Kit-Net: Self-Supervised Learning to Kit Novel 3D Objects into Novel 3D Cavities: In industrial part kitting, 3D objects are inserted into cavities for transportation or subsequent assembly. Kitting is a critical step as it can decrease downstream processing and handling times and enable lower storage and shipping costs. We present Kit-Net, a framework for kitting previously unseen 3D objects into cavities given depth images of both the target cavity and an object held by a gripper in an unknown initial orientation. Kit-Net uses self-supervised deep learning and data augmentation to train a convolutional neural network (CNN) to robustly estimate 3D rotations between objects and matching concave or convex cavities using a large training dataset of simulated depth images pairs. Kit-Net then uses the trained CNN to implement a controller to orient and position novel objects for insertion into novel prismatic and conformal 3D cavities. Experiments in simulation suggest that Kit-Net can orient objects to have a 98.9% average intersection volume between the object mesh and that of the target cavity. Physical experiments with industrial objects succeed in 18% of trials using a baseline method and in 63% of trials with Kit-Net. Video, code, and data are available at https://github.com/BerkeleyAutomation/Kit-Net.) <|cite_end|> <|cite_start|> (Reference: Scene Editing as Teleoperation: A Case Study in 6DoF Kit Assembly: Studies in robot teleoperation have been centered around action specifications -- from continuous joint control to discrete end-effector pose control. However, these robot-centric interfaces often require skilled operators with extensive robotics expertise. To make teleoperation accessible to non-expert users, we propose the framework "Scene Editing as Teleoperation" (SEaT), where the key idea is to transform the traditional "robot-centric" interface into a "scene-centric" interface -- instead of controlling the robot, users focus on specifying the task's goal by manipulating digital twins of the real-world objects. As a result, a user can perform teleoperation without any expert knowledge of the robot hardware. To achieve this goal, we utilize a category-agnostic scene-completion algorithm that translates the real-world workspace (with unknown objects) into a manipulable virtual scene representation and an action-snapping algorithm that refines the user input before generating the robot's action plan. To train the algorithms, we procedurally generated a large-scale, diverse kit-assembly dataset that contains object-kit pairs that mimic real-world object-kitting tasks. Our experiments in simulation and on a real-world system demonstrate that our framework improves both the efficiency and success rate for 6DoF kit-assembly tasks. A user study demonstrates that SEaT framework participants achieve a higher task success rate and report a lower subjective workload compared to an alternative robot-centric interface.
Video can be found at https://www.youtube.com/watch?v=-NdR3mkPbQQ .) <|cite_end|>, form fitting <|cite_start|> (Reference: Form2Fit: Learning Shape Priors for Generalizable Assembly from Disassembly: Is it possible to learn policies for robotic assembly that can generalize to new objects? We explore this idea in the context of the kit assembly task. Since classic methods rely heavily on object pose estimation, they often struggle to generalize to new objects without 3D CAD models or task-specific training data. In this work, we propose to formulate the kit assembly task as a shape matching problem, where the goal is to learn a shape descriptor that establishes geometric correspondences between object surfaces and their target placement locations from visual input. This formulation enables the model to acquire a broader understanding of how shapes and surfaces fit together for assembly -- allowing it to generalize to new objects and kits. To obtain training data for our model, we present a self-supervised data-collection pipeline that obtains ground truth object-to-placement correspondences by disassembling complete kits. Our resulting real-world system, Form2Fit, learns effective pick and place strategies for assembling objects into a variety of kits -- achieving $90\%$ average success rates under different initial conditions (e.g. varying object and kit poses), $94\%$ success under new configurations of multiple kits, and over $86\%$ success with completely new objects and kits.) <|cite_end|>, and protein binding <|cite_start|> (Reference: Fast end-to-end learning on protein surfaces: Proteins’ biological functions are defined by the geometric and chemical structure of their 3D molecular surfaces. Recent works have shown that geometric deep learning can be used on mesh-based representations of proteins to identify potential functional sites, such as binding targets for potential drugs. Unfortunately though, the use of meshes as the underlying representation for protein structure has multiple drawbacks including the need to pre-compute the input features and mesh connectivities. This becomes a bottleneck for many important tasks in protein science. In this paper, we present a new framework for deep learning on protein structures that addresses these limitations. Among the key advantages of our method are the computation and sampling of the molecular surface on-the-fly from the underlying atomic point cloud and a novel efficient geometric convolutional layer. As a result, we are able to process large collections of proteins in an end-to-end fashion, taking as the sole input the raw 3D coordinates and chemical types of their atoms, eliminating the need for any hand-crafted pre-computed features. To showcase the performance of our approach, we test it on two tasks in the field of protein structural bioinformatics: the identification of interaction sites and the prediction of protein-protein interactions. On both tasks, we achieve state-of-the-art performance with much faster run times and fewer parameters than previous models. These results will considerably ease the deployment of deep learning methods in protein science and open the door for end-to-end differentiable approaches in protein modeling tasks such as function prediction and design.) <|cite_end|>. In these tasks, the semantic information can hardly be acquired from the fragment shapes, and thus it is nearly impossible to predict fragments' poses relying on semantic information (e.g. 
a part acting as a leg should be vertically placed). Instead, such geometric mating tasks must be accomplished by relying on geometric cues. Furthermore, when the pairwise assembly task is extended to multi-part assembly, the pose space grows much larger. Recent work <|cite_start|> (Reference: Breaking Bad: A Dataset for Geometric Fracture and Reassembly: We introduce Breaking Bad, a large-scale dataset of fractured objects. Our dataset consists of over one million fractured objects simulated from ten thousand base models. The fracture simulation is powered by a recent physically based algorithm that efficiently generates a variety of fracture modes of an object. Existing shape assembly datasets decompose objects according to semantically meaningful parts, effectively modeling the construction process. In contrast, Breaking Bad models the destruction process of how a geometric object naturally breaks into fragments. Our dataset serves as a benchmark that enables the study of fractured object reassembly and presents new challenges for geometric shape understanding. We analyze our dataset with several geometry measurements and benchmark three state-of-the-art shape assembly deep learning methods under various settings. Extensive experimental results demonstrate the difficulty of our dataset, calling on future research in model designs specifically for the geometric shape assembly task. We host our dataset at https://breaking-bad-dataset.github.io/.) <|cite_end|> proposes a large-scale dataset named Breaking Bad, which models the destruction process of how an object breaks into fragments. For each object, there are multiple broken fragments with varied and complex geometry, making geometric shape understanding and assembly much more challenging. Therefore, how to reduce the pose space and effectively assemble multiple fragments that are non-semantic but geometrically diverse remains an open problem. Compared to furniture assembly, which relies on both part semantics and geometry, geometric assembly of diverse fractures focuses mainly on geometric information, while the spaces of part poses and geometries are much larger in this task. Therefore, \textbf{shape pose disentanglement} plays a significant role in boosting the performance of geometric shape assembly. Recently, achieving SE(3) equivariance for object representations has been attracting much attention in 3D computer vision and robotics. Many works have studied SE(3)-equivariant architectures <|cite_start|> (Reference: Equivariant Point Network for 3D Point Cloud Analysis: Features that are equivariant to a larger group of symmetries have been shown to be more discriminative and powerful in recent studies. However, higher-order equivariant features often come with an exponentially-growing computational cost. Furthermore, it remains relatively less explored how rotation-equivariant features can be leveraged to tackle 3D shape alignment tasks. While many past approaches have been based on either non-equivariant or invariant descriptors to align 3D shapes, we argue that such tasks may benefit greatly from an equivariant framework. In this paper, we propose an effective and practical SE(3) (3D translation and rotation) equivariant network for point cloud analysis that addresses both problems. First, we present SE(3) separable point convolution, a novel framework that breaks down the 6D convolution into two separable convolutional operators alternatively performed in the 3D Euclidean and SO(3) spaces.
This significantly reduces the computational cost without compromising the performance. Second, we introduce an attention layer to effectively harness the expressiveness of the equivariant features. While jointly trained with the network, the attention layer implicitly derives the intrinsic local frame in the feature space and generates attention vectors that can be integrated into different alignment tasks. We evaluate our approach through extensive studies and visual interpretations. The empirical results demonstrate that our proposed model outperforms strong baselines in a variety of benchmarks) <|cite_end|> <|cite_start|> (Reference: 3D Equivariant Graph Implicit Functions: In recent years, neural implicit representations have made remarkable progress in modeling of 3D shapes with arbitrary topology. In this work, we address two key limitations of such representations, in failing to capture local 3D geometric fine details, and to learn from and generalize to shapes with unseen 3D transformations. To this end, we introduce a novel family of graph implicit functions with equivariant layers that facilitates modeling fine local details and guaranteed robustness to various groups of geometric transformations, through local $k$-NN graph embeddings with sparse point set observations at multiple resolutions. Our method improves over the existing rotation-equivariant implicit function from 0.69 to 0.89 (IoU) on the ShapeNet reconstruction task. We also show that our equivariant implicit function can be extended to other types of similarity transformations and generalizes to unseen translations and scaling.) <|cite_end|> <|cite_start|> (Reference: Vector neurons: A general framework for so (3)-equivariant networks: Invariance and equivariance to the rotation group have been widely discussed in the 3D deep learning community for pointclouds. Yet most proposed methods either use complex mathematical tools that may limit their accessibility, or are tied to specific input data types and network architectures. In this paper, we introduce a general framework built on top of what we call Vector Neuron representations for creating SO (3) -equivariant neural networks for pointcloud processing. Extending neurons from 1D scalars to 3D vectors, our vector neurons enable a simple mapping of SO (3) actions to latent spaces thereby providing a framework for building equivariance in common neural operations – including linear layers, non-linearities, pooling, and normalizations. Due to their simplicity, vector neurons are versatile and, as we demonstrate, can be incorporated into diverse network architecture backbones, allowing them to process geometry inputs in arbitrary poses. Despite its simplicity, our method performs comparably well in accuracy and generalization with other more complex and specialized state-of-the-art methods on classification and segmentation tasks. We also show for the first time a rotation equivariant reconstruction network. Source code is available at https://github.com/FlyingGiraffe/vnn.) <|cite_end|> <|cite_start|> (Reference: Se (3)-transformers: 3d roto-translation equivariant attention networks: We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. 
A positive corollary of equivariance is increased weight-tying within the model, leading to fewer trainable parameters and thus decreased sample complexity (i.e. we need less training data). The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy $N$-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.) <|cite_end|> <|cite_start|> (Reference: Shape-Pose Disentanglement using SE (3)-equivariant Vector Neurons: We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose. Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant, while the extracted rotation and translation are able to semantically align different input shapes of the same class to a common canonical pose. Specifically, we design an auto-encoder based on Vector Neuron Networks, a rotation-equivariant neural network, whose layers we extend to provide translation-equivariance in addition to rotation-equivariance only. The resulting encoder produces pose-invariant shape encoding by construction, enabling our approach to focus on learning a consistent canonical pose for a class of objects. Quantitative and qualitative experiments validate the superior stability and consistency of our approach.) <|cite_end|> <|cite_start|> (Reference: {Tensor field networks: Rotation-and translation-equivariant neural networks for 3D point clouds: We introduce tensor field neural networks, which are locally equivariant to 3D rotations, translations, and permutations of points at every layer. 3D rotation equivariance removes the need for data augmentation to identify features in arbitrary orientations. Our network uses filters built from spherical harmonics; due to the mathematical consequences of this filter choice, each layer accepts as input (and guarantees as output) scalars, vectors, and higher-order tensors, in the geometric sense of these terms. We demonstrate the capabilities of tensor field networks with tasks in geometry, physics, and chemistry.) <|cite_end|> <|cite_start|> (Reference: Dynamic Graph CNN for Learning on Point Clouds: Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. 
Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS.) <|cite_end|> <|cite_start|> (Reference: 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data: We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.) <|cite_end|> <|cite_start|> (Reference: Quaternion Equivariant Capsule Networks for 3D Point Clouds: We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points. The operator receives a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end transformation equivariance through a novel dynamic routing procedure on quaternions. Further, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving \emph{iterative re-weighted least squares} (IRLS) problems with provable convergence properties. It is shown that such group dynamic routing can be interpreted as robust IRLS rotation averaging on capsule votes, where information is routed based on the final inlier scores. Based on our operator, we build a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space. Our architecture allows joint object classification and orientation estimation without explicit supervision of rotations. We validate our algorithm empirically on common benchmark datasets.) <|cite_end|>and leveraged SE(3) equivariance in object pose estimation <|cite_start|> (Reference: Leveraging SE(3) Equivariance for Self-Supervised Category-Level Object Pose Estimation: Category-level object pose estimation aims to find 6D object poses of previously unseen object instances from known categories without access to object CAD models. To reduce the huge amount of pose annotations needed for category-level learning, we propose for the first time a self-supervised learning framework to estimate category-level 6D object pose from single 3D point clouds.During training, our method assumes no ground-truth pose annotations, no CAD models, and no multi-view supervision. 
The key to our method is to disentangle shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, empowered by SE(3) equivariant point cloud networks.The invariant shape reconstruction module learns to perform aligned reconstructions, yielding a category-level reference frame without using any annotations. In addition, the equivariant pose estimation module achieves category-level pose estimation accuracy that is comparable to some fully supervised methods. Extensive experiments demonstrate the effectiveness of our approach on both complete and partial depth point clouds from the ModelNet40 benchmark, and on real depth point clouds from the NOCS-REAL 275 dataset. The project page with code and visualizations can be found at: https://dragonlong.github.io/equi-pose.) <|cite_end|> <|cite_start|> (Reference: Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance: Category-level articulated object pose estimation aims to estimate a hierarchy of articulation-aware object poses of an unseen articulated object from a known category. To reduce the heavy annotations needed for supervised learning methods, we present a novel self-supervised strategy that solves this problem without any human labels. Our key idea is to factorize canonical shapes and articulated object poses from input articulated shapes through part-level equivariant shape analysis. Specifically, we first introduce the concept of part-level SE(3) equivariance and devise a network to learn features of such property. Then, through a carefully designed fine-grained pose-shape disentanglement strategy, we expect that canonical spaces to support pose estimation could be induced automatically. Thus, we could further predict articulated object poses as per-part rigid transformations describing how parts transform from their canonical part spaces to the camera space. Extensive experiments demonstrate the effectiveness of our method on both complete and partial point clouds from synthetic and real articulated object datasets.) <|cite_end|>or robotic object manipulation <|cite_start|> (Reference: SE (2)-Equivariant Pushing Dynamics Models for Tabletop Object Manipulations: : For tabletop object manipulation tasks, learning an accurate pushing dynamics model, which predicts the objects’ motions when a robot pushes an object, is very important. In this work, we claim that an ideal pushing dynamics model should have the SE(2) - equivariance property, i.e., if tabletop objects’ poses and pushing action are transformed by some same planar rigid-body transformation, then the resulting motion should also be the result of the same transformation. Existing state-of-the-art data-driven approaches do not have this equivariance property, resulting in less-than-desirable learning performances. In this paper, we propose a new neural network architecture that by construction has the above equivariance property. Through extensive empirical validations, we show that the proposed model shows significantly improved learning performances over the existing methods. Also, we verify that our pushing dynamics model can be used for various downstream pushing manipulation tasks such as the object moving, singulation, and grasping in both simulation and real robot experiments. Code is available at https://github.com/seungyeon-k/SQPDNet-public.) 
<|cite_end|> <|cite_start|> (Reference: Equivariant descriptor fields: Se (3)-equivariant energy-based models for end-to-end visual robotic manipulation learning: End-to-end learning for visual robotic manipulation is known to suffer from sample inefficiency, requiring large numbers of demonstrations. The spatial roto-translation equivariance, or the SE(3)-equivariance can be exploited to improve the sample efficiency for learning robotic manipulation. In this paper, we present SE(3)-equivariant models for visual robotic manipulation from point clouds that can be trained fully end-to-end. By utilizing the representation theory of the Lie group, we construct novel SE(3)-equivariant energy-based models that allow highly sample efficient end-to-end learning. We show that our models can learn from scratch without prior knowledge and yet are highly sample efficient (5~10 demonstrations are enough). Furthermore, we show that our models can generalize to tasks with (i) previously unseen target object poses, (ii) previously unseen target object instances of the category, and (iii) previously unseen visual distractors. We experiment with 6-DoF robotic manipulation tasks to validate our models' sample efficiency and generalizability. Codes are available at: https://github.com/tomato1mule/edf) <|cite_end|> <|cite_start|> (Reference: Neural Descriptor Fields: SE (3)-Equivariant Object Representations for Manipulation: We present Neural Descriptor Fields (NDFs), an object representation that encodes both points and relative poses between an object and a target (such as a robot gripper or a rack used for hanging) via category-level descriptors. We employ this representation for object manipulation, where given a task demonstration, we want to repeat the same task on a new object instance from the same category. We propose to achieve this objective by searching (via optimization) for the pose whose descriptor matches that observed in the demonstration. NDFs are conveniently trained in a self-supervised fashion via a 3D auto-encoding task that does not rely on expert-labeled keypoints. Further, NDFs are SE(3)-equivariant, guaranteeing performance that generalizes across all possible 3D object translations and rotations. We demonstrate learning of manipulation tasks from few (∼5-10) demonstrations both in simulation and on a real robot. Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors. Project website: https://yilundu.github.io/ndf/) <|cite_end|> <|cite_start|> (Reference: USEEK: Unsupervised SE (3)-Equivariant 3D Keypoints for Generalizable Manipulation: Can a robot manipulate intra-category unseen objects in arbitrary poses with the help of a mere demonstration of grasping pose on a single object instance? In this paper, we try to address this intriguing challenge by using USEEK, an unsupervised SE(3)-equivariant keypoints method that enjoys alignment across instances in a category, to perform generaliz-able manipulation. USEEK follows a teacher-student structure to decouple the unsupervised keypoint discovery and SE(3)-equivariant keypoint detection.
With USEEK in hand, the robot can infer the category-level task-relevant object frames in an efficient and explainable manner, enabling manipulation of any intra-category objects from and to any poses. Through extensive experiments, we demonstrate that the keypoints produced by USEEK possess rich semantics, thus successfully transferring the functional knowledge from the demonstration object to the novel ones. Compared with other object representations for manipulation, USEEK is more adaptive in the face of large intra-category shape variance, more robust with limited demonstrations, and more efficient at inference time. Project website: https://sites.google.com/view/useek/.) <|cite_end|>. SE(3) equivariance is thus well suited to disentangling the shapes and poses of parts in geometric shape assembly. Specifically, like previous works <|cite_start|> (Reference: Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve across-category generalization. In this paper, we introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem. Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together. We couple the training of NSM with an implicit shape reconstruction task to make NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset that consists of 200K shape mating pairs from numerous object meshes with diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline. Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm. Additional material is available at: https://neural-shape-mating.github.io/) <|cite_end|> <|cite_start|> (Reference: Breaking Bad: A Dataset for Geometric Fracture and Reassembly: We introduce Breaking Bad, a large-scale dataset of fractured objects. Our dataset consists of over one million fractured objects simulated from ten thousand base models. The fracture simulation is powered by a recent physically based algorithm that efficiently generates a variety of fracture modes of an object. Existing shape assembly datasets decompose objects according to semantically meaningful parts, effectively modeling the construction process. In contrast, Breaking Bad models the destruction process of how a geometric object naturally breaks into fragments. Our dataset serves as a benchmark that enables the study of fractured object reassembly and presents new challenges for geometric shape understanding. We analyze our dataset with several geometry measurements and benchmark three state-of-the-art shape assembly deep learning methods under various settings.
Extensive experimental results demonstrate the difficulty of our dataset, calling on future research in model designs specifically for the geometric shape assembly task. We host our dataset at https://breaking-bad-dataset.github.io/.) <|cite_end|>, we formulate the shape assembly task as a pose prediction problem, where the target is to predict the canonical SE(3) pose of each given fragment so that the fragments compose a whole shape. For every single fragment, the predicted pose transformation should be \textit{equivariant} to its original pose, while being \textit{invariant} to other fragments' poses. Accordingly, the learned representations have two main features: \textit{consistency} and \textit{stability}. \textit{Consistency} means that parts with the same geometry but different poses should have \textit{equivariant} representations, while \textit{stability} means that the representation of a specific part should be \textit{invariant} to all other parts' poses and depend only on their geometric characteristics. Leveraging such properties, the network can reduce the large pose space of the complex geometric shape assembly task and thus focus on the fragments' geometric information. While most previous works in vision and robotics leverage SE(3)-equivariant representations only on a single shape, our geometric shape assembly task involves multiple complex fractured parts, and extracting other parts' geometric information is essential for successful reassembly. How to leverage SE(3)-equivariant representations for multi-part shape assembly is not a trivial problem, as learned part representations should not only encode the part itself, but also capture correlations with other parts (\emph{e.g.}, whether the notches of two parts match each other), while preserving the equivariance property. We propose to utilize both equivariant and invariant representations of single parts to compose equivariant part representations that include part correlations. To the best of our knowledge, we are the first to leverage the SE(3) equivariance property among multiple objects. In summary, we make the following contributions: \begin{itemize} \item We propose to leverage SE(3) equivariance that disentangles the shapes and poses of fractured parts for geometric shape assembly. \item Utilizing both SE(3)-equivariant and -invariant representations, we learn SE(3)-equivariant part representations with part correlations for multi-part assembly. \item Experiments on representative benchmarks, including both two-part and multi-part 3D geometric shape assembly, demonstrate the superiority of SE(3) equivariance and our proposed method. \end{itemize} Related Work \subsection{3D Shape Assembly} Shape assembly is a long-standing problem with a rich literature. Many works have investigated how to construct a complete shape from given parts <|cite_start|> (Reference: Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve across-category generalization. In this paper, we introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem.
Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together. We couple the training of NSM with an implicit shape reconstruction task to make NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset that consists of 200K shape mating pairs from numerous object meshes with diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline. Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm. Additional material is available at: https://neural-shape-mating.github.io/) <|cite_end|> <|cite_start|> (Reference: Learning how to match fresco fragments: One of the main problems faced during reconstruction of fractured archaeological artifacts is sorting through a large number of candidate matches between fragments to find the relatively few that are correct. Previous computer methods for this task provided scoring functions based on a variety of properties of potential matches, including color and geometric compatibility across fracture surfaces. However, they usually consider only one or at most a few properties at once, and therefore provide match predictions with very low precision. In this article, we investigate a machine learning approach that computes the probability that a match is correct based on the combination of many features. We explore this machine learning approach for ranking matches in three different sets of fresco fragments, finding that classifiers based on many match properties can be significantly more effective at ranking proposed matches than scores based on any single property alone. Our results suggest that it is possible to train a classifier on match properties in one dataset and then use it to rank predicted matches in another dataset effectively. We believe that this approach could be helpful in a variety of cultural heritage reconstruction systems.) <|cite_end|> <|cite_start|> (Reference: AutoMate: A Dataset and Learning Approach for Automatic Mating of CAD Assemblies: Assembly modeling is a core task of computer aided design (CAD), comprising around one third of the work in a CAD workflow. Optimizing this process therefore represents a huge opportunity in the design of a CAD system, but current research of assembly based modeling is not directly applicable to modern CAD systems because it eschews the dominant data structure of modern CAD: parametric boundary representations (BREPs). CAD assembly modeling defines assemblies as a system of pairwise constraints, called mates, between parts, which are defined relative to BREP topology rather than in world coordinates common to existing work. We propose SB-GCN, a representation learning scheme on BREPs that retains the topological structure of parts, and use these learned representations to predict CAD type mates. To train our system, we compiled the first large scale dataset of BREP CAD assemblies, which we are releasing along with benchmark mate prediction tasks. 
Finally, we demonstrate the compatibility of our model with an existing commercial CAD system by building a tool that assists users in mate creation by suggesting mate completions, with 72.2% accuracy.) <|cite_end|> <|cite_start|> (Reference: IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks: The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks. The environment is designed to advance reinforcement learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment supports over 80 different furniture models, Sawyer and Baxter robot simulation, and domain randomization. The IKEA Furniture Assembly Environment is a testbed for methods aiming to solve complex manipulation tasks. The environment is publicly available at https://clvrai.com/furniture) <|cite_end|> <|cite_start|> (Reference: Learning 3D Part Assembly from a Single Image: Autonomous assembly is a crucial capability for robots in many applications. For this task, several problems such as obstacle avoidance, motion planning, and actuator control have been extensively studied in robotics. However, when it comes to task specification, the space of possibilities remains underexplored. Towards this end, we introduce a novel problem, single-image-guided 3D part assembly, along with a learningbased solution. We study this problem in the setting of furniture assembly from a given complete set of parts and a single image depicting the entire assembled object. Multiple challenges exist in this setting, including handling ambiguity among parts (e.g., slats in a chair back and leg stretchers) and 3D pose prediction for parts and part subassemblies, whether visible or occluded. We address these issues by proposing a two-module pipeline that leverages strong 2D-3D correspondences and assembly-oriented graph message-passing to infer part relationships. In experiments with a PartNet-based synthetic benchmark, we demonstrate the effectiveness of our framework as compared with three baseline approaches.) <|cite_end|> <|cite_start|> (Reference: RGL-NET: A Recurrent Graph Learning framework for Progressive Part Assembly: Autonomous assembly of objects is an essential task in robotics and 3D computer vision. It has been studied extensively in robotics as a problem of motion planning, actuator control and obstacle avoidance. However, the task of developing a generalized framework for assembly robust to structural variants remains relatively unexplored. In this work, we tackle this problem using a recurrent graph learning framework considering inter-part relations and the progressive update of the part pose. Our network can learn more plausible predictions of shape structure by accounting for priorly assembled parts. Compared to the current state-of-the-art, our network yields up to 10% improvement in part accuracy and up to 15% improvement in connectivity accuracy on the PartNet dataset. Moreover, our resulting latent space facilitates exciting applications such as shape recovery from the point-cloud components. We conduct extensive experiments to justify our design choices and demonstrate the effectiveness of the proposed framework.) <|cite_end|> <|cite_start|> (Reference: JoinABLe: Learning Bottom-up Assembly of Parametric CAD Joints: Physical products are often complex assemblies combining a multitude of 3D parts modeled in computer-aided design (CAD) software. 
CAD designers build up these assemblies by aligning individual parts to one another using constraints called joints. In this paper we introduce JoinABLe, a learning-based method that assembles parts together to form joints. JoinABLe uses the weak supervision available in standard parametric CAD files without the help of object class labels or human guidance. Our results show that by making network predictions over a graph representation of solid models we can outperform multiple baseline methods with an accuracy (79.53%) that approaches human performance (80%). Finally, to support future research we release the Fusion 360 Gallery assembly dataset, containing assemblies with rich information on joints, contact surfaces, holes, and the underlying assembly graph structure.) <|cite_end|> <|cite_start|> (Reference: COALESCE: Component Assembly by Learning to Synthesize Connections: We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections. To handle geometric and topological mismatches between parts, we remove the mismatched portions via erosion, and rely on a joint synthesis step, which is learned from data, to fill the gap and arrive at a natural and plausible part joint. Given a set of input parts extracted from different objects, COALESCE automatically aligns them and synthesizes plausible joints to connect the parts into a coherent 3D object represented by a mesh. The joint synthesis network, designed to focus on joint regions, reconstructs the surface between the parts by predicting an implicit shape representation that agrees with existing parts, while generating a smooth and topologically meaningful connection. We employ test-time optimization to further ensure that the synthesized joint region closely aligns with the input parts to create realistic component assemblies from diverse input parts. We demonstrate that our method significantly outperforms prior approaches including baseline deep models for 3D shape synthesis, as well as state-of-the-art methods for shape completion.) <|cite_end|> <|cite_start|> (Reference: Generative 3D Part Assembly via Dynamic Graph Learning: Autonomous part assembly is a challenging yet crucial task in 3D computer vision and robotics. Analogous to buying an IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation. In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry. Essentially, the task of generative 3D part assembly is to predict a 6-DoF part pose, including a rigid rotation and translation, for each input part that assembles a single 3D shape as the final output. To tackle this problem, we propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone. It explicitly conducts sequential part assembly refinements in a coarse-to-fine manner, exploits a pair of part relation reasoning module and part aggregation module for dynamically adjusting both part features and their relations in the part graph. We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach.) 
<|cite_end|>, especially in application-specific domains. Based on PartNet, a large-scale dataset that contains diverse 3D objects with fine-grained part information, previous works propose a dynamic graph learning method <|cite_start|> (Reference: Generative 3D Part Assembly via Dynamic Graph Learning: Autonomous part assembly is a challenging yet crucial task in 3D computer vision and robotics. Analogous to buying an IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation. In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry. Essentially, the task of generative 3D part assembly is to predict a 6-DoF part pose, including a rigid rotation and translation, for each input part that assembles a single 3D shape as the final output. To tackle this problem, we propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone. It explicitly conducts sequential part assembly refinements in a coarse-to-fine manner, exploits a pair of part relation reasoning module and part aggregation module for dynamically adjusting both part features and their relations in the part graph. We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach.) <|cite_end|>to predict 6-DoF poses for each input part (\emph{e.g.}, the back, legs and bars of a chair) and then assemble them into a single shape as output, or study how to assemble a 3D shape given a single image depicting the complete shape <|cite_start|> (Reference: Learning 3D Part Assembly from a Single Image: Autonomous assembly is a crucial capability for robots in many applications. For this task, several problems such as obstacle avoidance, motion planning, and actuator control have been extensively studied in robotics. However, when it comes to task specification, the space of possibilities remains underexplored. Towards this end, we introduce a novel problem, single-image-guided 3D part assembly, along with a learningbased solution. We study this problem in the setting of furniture assembly from a given complete set of parts and a single image depicting the entire assembled object. Multiple challenges exist in this setting, including handling ambiguity among parts (e.g., slats in a chair back and leg stretchers) and 3D pose prediction for parts and part subassemblies, whether visible or occluded. We address these issues by proposing a two-module pipeline that leverages strong 2D-3D correspondences and assembly-oriented graph message-passing to infer part relationships. In experiments with a PartNet-based synthetic benchmark, we demonstrate the effectiveness of our framework as compared with three baseline approaches.) <|cite_end|>. Besides, many works study the shape assembly problem for specific applications such as furniture assembly <|cite_start|> (Reference: IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks: The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.
The environment is designed to advance reinforcement learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment supports over 80 different furniture models, Sawyer and Baxter robot simulation, and domain randomization. The IKEA Furniture Assembly Environment is a testbed for methods aiming to solve complex manipulation tasks. The environment is publicly available at https://clvrai.com/furniture) <|cite_end|>, or the unique needs of CAD workflows <|cite_start|> (Reference: AutoMate: A Dataset and Learning Approach for Automatic Mating of CAD Assemblies: Assembly modeling is a core task of computer aided design (CAD), comprising around one third of the work in a CAD workflow. Optimizing this process therefore represents a huge opportunity in the design of a CAD system, but current research of assembly based modeling is not directly applicable to modern CAD systems because it eschews the dominant data structure of modern CAD: parametric boundary representations (BREPs). CAD assembly modeling defines assemblies as a system of pairwise constraints, called mates, between parts, which are defined relative to BREP topology rather than in world coordinates common to existing work. We propose SB-GCN, a representation learning scheme on BREPs that retains the topological structure of parts, and use these learned representations to predict CAD type mates. To train our system, we compiled the first large scale dataset of BREP CAD assemblies, which we are releasing along with benchmark mate prediction tasks. Finally, we demonstrate the compatibility of our model with an existing commercial CAD system by building a tool that assists users in mate creation by suggesting mate completions, with 72.2% accuracy.) <|cite_end|>. However, most previous works rely heavily on the semantic information of object parts, sometimes bypassing geometric cues. Turning to geometric cues, a recent work, NSM <|cite_start|> (Reference: Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve across-category generalization. In this paper, we introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem. Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together. We couple the training of NSM with an implicit shape reconstruction task to make NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset that consists of 200K shape mating pairs from numerous object meshes with diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline. Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm.
Additional material is available at: https://neural-shape-mating.github.io/) <|cite_end|>, tackles the two-part mating problem by focusing mainly on shape geometry rather than on particular semantic information. Besides, a new dataset, Breaking Bad <|cite_start|> (Reference: Breaking Bad: A Dataset for Geometric Fracture and Reassembly: We introduce Breaking Bad, a large-scale dataset of fractured objects. Our dataset consists of over one million fractured objects simulated from ten thousand base models. The fracture simulation is powered by a recent physically based algorithm that efficiently generates a variety of fracture modes of an object. Existing shape assembly datasets decompose objects according to semantically meaningful parts, effectively modeling the construction process. In contrast, Breaking Bad models the destruction process of how a geometric object naturally breaks into fragments. Our dataset serves as a benchmark that enables the study of fractured object reassembly and presents new challenges for geometric shape understanding. We analyze our dataset with several geometry measurements and benchmark three state-of-the-art shape assembly deep learning methods under various settings. Extensive experimental results demonstrate the difficulty of our dataset, calling on future research in model designs specifically for the geometric shape assembly task. We host our dataset at https://breaking-bad-dataset.github.io/.) <|cite_end|>, poses the new challenge of assembling multiple non-semantic fragments into a complete shape. This work demonstrates that fractured shape reassembly remains a largely open problem. Following these two works, we focus on geometric information and tackle the purely geometric shape assembly problem. \subsection{SE(3)-Equivariant Representations} Recently, achieving SE(3) equivariance has attracted a lot of attention, and many SE(3)-equivariant architectures have emerged <|cite_start|> (Reference: Equivariant Point Network for 3D Point Cloud Analysis: Features that are equivariant to a larger group of symmetries have been shown to be more discriminative and powerful in recent studies. However, higher-order equivariant features often come with an exponentially-growing computational cost. Furthermore, it remains relatively less explored how rotation-equivariant features can be leveraged to tackle 3D shape alignment tasks. While many past approaches have been based on either non-equivariant or invariant descriptors to align 3D shapes, we argue that such tasks may benefit greatly from an equivariant framework. In this paper, we propose an effective and practical SE(3) (3D translation and rotation) equivariant network for point cloud analysis that addresses both problems. First, we present SE(3) separable point convolution, a novel framework that breaks down the 6D convolution into two separable convolutional operators alternatively performed in the 3D Euclidean and SO(3) spaces. This significantly reduces the computational cost without compromising the performance. Second, we introduce an attention layer to effectively harness the expressiveness of the equivariant features. While jointly trained with the network, the attention layer implicitly derives the intrinsic local frame in the feature space and generates attention vectors that can be integrated into different alignment tasks. We evaluate our approach through extensive studies and visual interpretations.
The empirical results demonstrate that our proposed model outperforms strong baselines in a variety of benchmarks) <|cite_end|> <|cite_start|> (Reference: 3D Equivariant Graph Implicit Functions: In recent years, neural implicit representations have made remarkable progress in modeling of 3D shapes with arbitrary topology. In this work, we address two key limitations of such representations, in failing to capture local 3D geometric fine details, and to learn from and generalize to shapes with unseen 3D transformations. To this end, we introduce a novel family of graph implicit functions with equivariant layers that facilitates modeling fine local details and guaranteed robustness to various groups of geometric transformations, through local $k$-NN graph embeddings with sparse point set observations at multiple resolutions. Our method improves over the existing rotation-equivariant implicit function from 0.69 to 0.89 (IoU) on the ShapeNet reconstruction task. We also show that our equivariant implicit function can be extended to other types of similarity transformations and generalizes to unseen translations and scaling.) <|cite_end|> <|cite_start|> (Reference: Se (3)-transformers: 3d roto-translation equivariant attention networks: We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equivariance is increased weight-tying within the model, leading to fewer trainable parameters and thus decreased sample complexity (i.e. we need less training data). The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy $N$-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.) <|cite_end|> <|cite_start|> (Reference: Shape-Pose Disentanglement using SE (3)-equivariant Vector Neurons: We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose. Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant, while the extracted rotation and translation are able to semantically align different input shapes of the same class to a common canonical pose. Specifically, we design an auto-encoder based on Vector Neuron Networks, a rotation-equivariant neural network, whose layers we extend to provide translation-equivariance in addition to rotation-equivariance only. The resulting encoder produces pose-invariant shape encoding by construction, enabling our approach to focus on learning a consistent canonical pose for a class of objects. Quantitative and qualitative experiments validate the superior stability and consistency of our approach.) <|cite_end|> <|cite_start|> (Reference: {Tensor field networks: Rotation-and translation-equivariant neural networks for 3D point clouds: We introduce tensor field neural networks, which are locally equivariant to 3D rotations, translations, and permutations of points at every layer. 
3D rotation equivariance removes the need for data augmentation to identify features in arbitrary orientations. Our network uses filters built from spherical harmonics; due to the mathematical consequences of this filter choice, each layer accepts as input (and guarantees as output) scalars, vectors, and higher-order tensors, in the geometric sense of these terms. We demonstrate the capabilities of tensor field networks with tasks in geometry, physics, and chemistry.) <|cite_end|> <|cite_start|> (Reference: Dynamic Graph CNN for Learning on Point Clouds: Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS.) <|cite_end|> <|cite_start|> (Reference: 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data: We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.) <|cite_end|> <|cite_start|> (Reference: Quaternion Equivariant Capsule Networks for 3D Point Clouds: We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points. The operator receives a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end transformation equivariance through a novel dynamic routing procedure on quaternions. Further, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving \emph{iterative re-weighted least squares} (IRLS) problems with provable convergence properties. 
It is shown that such group dynamic routing can be interpreted as robust IRLS rotation averaging on capsule votes, where information is routed based on the final inlier scores. Based on our operator, we build a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space. Our architecture allows joint object classification and orientation estimation without explicit supervision of rotations. We validate our algorithm empirically on common benchmark datasets.) <|cite_end|>. Thomas \emph{et al.} <|cite_start|> (Reference: {Tensor field networks: Rotation-and translation-equivariant neural networks for 3D point clouds: We introduce tensor field neural networks, which are locally equivariant to 3D rotations, translations, and permutations of points at every layer. 3D rotation equivariance removes the need for data augmentation to identify features in arbitrary orientations. Our network uses filters built from spherical harmonics; due to the mathematical consequences of this filter choice, each layer accepts as input (and guarantees as output) scalars, vectors, and higher-order tensors, in the geometric sense of these terms. We demonstrate the capabilities of tensor field networks with tasks in geometry, physics, and chemistry.) <|cite_end|>propose tensor field networks, which use filters built from spherical harmonics, and Deng \emph{et al.} <|cite_start|> (Reference: Vector neurons: A general framework for so (3)-equivariant networks: Invariance and equivariance to the rotation group have been widely discussed in the 3D deep learning community for pointclouds. Yet most proposed methods either use complex mathematical tools that may limit their accessibility, or are tied to specific input data types and network architectures. In this paper, we introduce a general framework built on top of what we call Vector Neuron representations for creating SO (3) -equivariant neural networks for pointcloud processing. Extending neurons from 1D scalars to 3D vectors, our vector neurons enable a simple mapping of SO (3) actions to latent spaces thereby providing a framework for building equivariance in common neural operations – including linear layers, non-linearities, pooling, and normalizations. Due to their simplicity, vector neurons are versatile and, as we demonstrate, can be incorporated into diverse network architecture backbones, allowing them to process geometry inputs in arbitrary poses. Despite its simplicity, our method performs comparably well in accuracy and generalization with other more complex and specialized state-of-the-art methods on classification and segmentation tasks. We also show for the first time a rotation equivariant reconstruction network. Source code is available at https://github.com/FlyingGiraffe/vnn.) <|cite_end|>introduce Vector Neurons, which enable rotation-equivariant neural networks by extending neurons from 1D scalars to 3D vectors. We follow Vector Neurons <|cite_start|> (Reference: Vector neurons: A general framework for so (3)-equivariant networks: Invariance and equivariance to the rotation group have been widely discussed in the 3D deep learning community for pointclouds. Yet most proposed methods either use complex mathematical tools that may limit their accessibility, or are tied to specific input data types and network architectures.
In this paper, we introduce a general framework built on top of what we call Vector Neuron representations for creating SO (3) -equivariant neural networks for pointcloud processing. Extending neurons from 1D scalars to 3D vectors, our vector neurons enable a simple mapping of SO (3) actions to latent spaces thereby providing a framework for building equivariance in common neural operations – including linear layers, non-linearities, pooling, and normalizations. Due to their simplicity, vector neurons are versatile and, as we demonstrate, can be incorporated into diverse network architecture backbones, allowing them to process geometry inputs in arbitrary poses. Despite its simplicity, our method performs comparably well in accuracy and generalization with other more complex and specialized state-of-the-art methods on classification and segmentation tasks. We also show for the first time a rotation equivariant reconstruction network. Source code is available at https://github.com/FlyingGiraffe/vnn.) <|cite_end|>and apply the Vector Neuron version of the DGCNN <|cite_start|> (Reference: Dynamic Graph CNN for Learning on Point Clouds: Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS.) <|cite_end|>model in our pipeline. Meanwhile, many recent works have utilized equivariant models for point cloud registration <|cite_start|> (Reference: Coarse-to-fine point cloud registration with se (3)-equivariant representations: Point cloud registration is a crucial problem in computer vision and robotics. Existing methods either rely on matching local geometric features, which are sensitive to the pose differences, or leverage global shapes, which leads to inconsistency when facing distribution variances such as partial overlapping. Combining the advantages of both types of methods, we adopt a coarse-to-fine pipeline that concurrently handles both issues. We first reduce the pose differences between input point clouds by aligning global features; then we match the local features to further refine the inaccurate alignments resulting from distribution variances.
As global feature alignment requires the features to preserve the poses of input point clouds and local feature matching expects the features to be invariant to these poses, we propose an SE(3)-equivariant feature extractor to simultaneously generate two types of features. In this feature extractor, representations that preserve the poses are first encoded by our novel SE(3)-equivariant network and then converted into pose-invariant ones by a pose-detaching module. Experiments demonstrate that our proposed method increases the recall rate by 20% compared to state-of-the-art methods when facing both pose differences and distribution variances.) <|cite_end|>, object detection <|cite_start|> (Reference: Rotationally Equivariant 3D Object Detection: Rotation equivariance has recently become a strongly desired property in the 3D deep learning community. Yet most existing methods focus on equivariance regarding a global input rotation while ignoring the fact that rotation symmetry has its own spatial support. Specifically, we consider the object detection problem in 3D scenes, where an object bounding box should be equivariant regarding the object pose, independent of the scene motion. This suggests a new desired property we call object-level rotation equivariance. To incorporate object-level rotation equivariance into 3D object detectors, we need a mechanism to extract equivariant features with local object-level spatial support while being able to model cross-object context information. To this end, we propose Equivariant Object detection Network (EON) with a rotation equivariance suspension design to achieve object-level equivariance. EON can be applied to modern point cloud object detectors, such as VoteNet and PointRCNN, enabling them to exploit object rotation symmetry in scene-scale inputs. Our experiments on both indoor scene and autonomous driving datasets show that significant improvements are obtained by plugging our EON design into existing state-of-the-art 3D object detectors.) <|cite_end|>, pose estimation
[ "<|reference_start|> IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks: The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks. The environment is designed to advance reinforcement learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment supports over 80 different furniture models, Sawyer and Baxter robot simulation, and domain randomization. The IKEA Furniture Assembly Environment is a testbed for methods aiming to solve complex manipulation tasks. The environment is publicly available at https://clvrai.com/furniture <|reference_end|>", "<|reference_start|> Equivariant descriptor fields: Se (3)-equivariant energy-based models for end-to-end visual robotic manipulation learning: End-to-end learning for visual robotic manipulation is known to suffer from sample inefficiency, requiring large numbers of demonstrations. The spatial roto-translation equivariance, or the SE(3)-equivariance can be exploited to improve the sample efficiency for learning robotic manipulation. In this paper, we present SE(3)-equivariant models for visual robotic manipulation from point clouds that can be trained fully end-to-end. By utilizing the representation theory of the Lie group, we construct novel SE(3)-equivariant energy-based models that allow highly sample efficient end-to-end learning. We show that our models can learn from scratch without prior knowledge and yet are highly sample efficient (5~10 demonstrations are enough). Furthermore, we show that our models can generalize to tasks with (i) previously unseen target object poses, (ii) previously unseen target object instances of the category, and (iii) previously unseen visual distractors. We experiment with 6-DoF robotic manipulation tasks to validate our models' sample efficiency and generalizability. Codes are available at: https://github.com/tomato1mule/edf <|reference_end|>", "<|reference_start|> Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. While the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve across-category generalization. In this paper, we introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem. Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together. We couple the training of NSM with an implicit shape reconstruction task to make NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset that consists of 200K shape mating pairs from numerous object meshes with diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline. 
Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm. Additional material is available at: https://neural-shape-mating.github.io/ <|reference_end|>", "<|reference_start|> Generative 3D Part Assembly via Dynamic Graph Learning: Autonomous part assembly is a challenging yet crucial task in 3D computer vision and robotics. Analogous to buying an IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation. In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry. Essentially, the task of generative 3D part assembly is to predict a 6-DoF part pose, including a rigid rotation and translation, for each input part that assembles a single 3D shape as the final output. To tackle this problem, we propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone. It explicitly conducts sequential part assembly refinements in a coarse-to-fine manner, exploits a pair of part relation reasoning module and part aggregation module for dynamically adjusting both part features and their relations in the part graph. We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach. <|reference_end|>" ]
[ 0, 26, 30, 41 ]
{"<|multi_cite_1_1|>": "arxiv-234596", "<|multi_cite_1_2|>": "arxiv-271700", "<|multi_cite_2_1|>": "arxiv-423302", "<|multi_cite_2_2|>": "arxiv-455759", "<|multi_cite_3_1|>": "ss-1323002", "<|multi_cite_3_2|>": "arxiv-234596", "<|multi_cite_3_3|>": "arxiv-254922", "<|multi_cite_3_4|>": "arxiv-271700", "<|cite_4|>": "arxiv-423302", "<|multi_cite_5_1|>": "arxiv-354628", "<|multi_cite_5_2|>": "arxiv-372741", "<|cite_6|>": "arxiv-231555", "<|cite_7|>": "ss-1204247", "<|cite_8|>": "arxiv-455759", "<|multi_cite_9_1|>": "arxiv-329991", "<|multi_cite_9_2|>": "arxiv-410012", "<|multi_cite_9_3|>": "ss-777480", "<|multi_cite_9_4|>": "ss-963438", "<|multi_cite_9_5|>": "ss-964392", "<|multi_cite_9_6|>": "ss-1271169", "<|multi_cite_9_7|>": "arxiv-146225", "<|multi_cite_9_8|>": "arxiv-165033", "<|multi_cite_9_9|>": "arxiv-241098", "<|multi_cite_10_1|>": "ss-925178", "<|multi_cite_10_2|>": "ss-925179", "<|multi_cite_11_1|>": "ss-1488292", "<|multi_cite_11_2|>": "ss-834682", "<|multi_cite_11_3|>": "ss-692008", "<|multi_cite_11_4|>": "ss-1536064", "<|multi_cite_11_5|>": "ss-934845", "<|multi_cite_12_1|>": "arxiv-423302", "<|multi_cite_12_2|>": "arxiv-455759", "<|multi_cite_13_1|>": "arxiv-423302", "<|multi_cite_13_2|>": "ss-2354592", "<|multi_cite_13_3|>": "arxiv-343255", "<|multi_cite_13_4|>": "arxiv-234596", "<|multi_cite_13_5|>": "arxiv-254922", "<|multi_cite_13_6|>": "arxiv-357466", "<|multi_cite_13_7|>": "arxiv-383128", "<|multi_cite_13_8|>": "arxiv-282788", "<|multi_cite_13_9|>": "arxiv-271700", "<|cite_14|>": "arxiv-271700", "<|cite_15|>": "arxiv-254922", "<|cite_16|>": "arxiv-234596", "<|cite_17|>": "arxiv-343255", "<|cite_18|>": "arxiv-423302", "<|cite_19|>": "arxiv-455759", "<|multi_cite_20_1|>": "arxiv-329991", "<|multi_cite_20_2|>": "arxiv-410012", "<|multi_cite_20_3|>": "ss-963438", "<|multi_cite_20_4|>": "ss-964392", "<|multi_cite_20_5|>": "ss-1271169", "<|multi_cite_20_6|>": "arxiv-146225", "<|multi_cite_20_7|>": "arxiv-165033", "<|multi_cite_20_8|>": "arxiv-241098", "<|cite_21|>": "ss-1271169", "<|cite_22|>": "ss-777480", "<|cite_23|>": "ss-777480", "<|cite_24|>": "arxiv-146225", "<|cite_25|>": "ss-930240", "<|cite_26|>": "arxiv-416158", "<|multi_cite_27_1|>": "ss-925178", "<|multi_cite_27_2|>": "ss-925179", "<|multi_cite_28_1|>": "ss-1488292", "<|multi_cite_28_2|>": "ss-834682", "<|multi_cite_28_3|>": "ss-692008", "<|multi_cite_28_4|>": "ss-1536064", "<|multi_cite_28_5|>": "ss-934845"}
1812.02134
<|paper_start|> Title: An Unpaired Shape Transforming Method for Image Translation and Cross-Domain Retrieval Abstract: An Unpaired Shape Transforming Method for Image Translation and Cross-Domain Retrieval: We address the problem of unpaired geometric image-to-image translation. Rather than transferring the style of an image as a whole, our goal is to translate the geometry of an object as depicted in different domains while preserving its appearance characteristics. Our model is trained in an unpaired fashion, i.e. without the need of paired images during training. It performs all steps of the shape transfer within a single model and without additional post-processing stages. Extensive experiments on the VITON, CMU-Multi-PIE and our own FashionStyle datasets show the effectiveness of the method. In addition, we show that despite their low-dimensionality, the features learned by our model are useful to the item retrieval task. Introduction \label{intro} Image-to-image translation (I2I) refers to the process of generating a novel image, which is similar to the original input image yet different in some aspects. Typically, the input and output images belong to different {\em domains}, with images in the same domain sharing a common characteristic, e.g. going from photographs to paintings <|cite_start|> (Reference: Neural Style Transfer: A Review: The seminal work of Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at: https://osf.io/f8tu4/.) <|cite_end|>, from greyscale to color images <|cite_start|> (Reference: Unsupervised Diverse Colorization via Generative Adversarial Networks: Colorization of grayscale images has been a hot topic in computer vision. Previous research mainly focuses on producing a colored image to match the original one. However, since many colors share the same gray value, an input grayscale image could be diversely colored while maintaining its reality. In this paper, we design a novel solution for unsupervised diverse colorization. Specifically, we leverage conditional generative adversarial networks to model the distribution of real-world item colors, in which we develop a fully convolutional generator with multi-layer noise to enhance diversity, with multi-layer condition concatenation to maintain reality, and with stride 1 to keep spatial information. With such a novel network architecture, the model yields highly competitive performance on the open LSUN bedroom dataset. The Turing test of 80 humans further indicates our generated color schemes are highly convincible.) 
<|cite_end|>, or from virtual (synthetic) to real images <|cite_start|> (Reference: t: 840 newly hatched Lingnan yellow-feathered broiler chicks were selected and divided into 5 groups. During a 56-day feeding period, the diets of groups 1, 3, 4 and 5 were supplemented with 0, 400, 800 and 1000 mg/kg, respectively, of the tonic Chinese herbal extract 'Dikesu', while the diet of group 2 was supplemented with feed-grade chlortetracycline (50 mg/kg). The results showed that 'Dikesu' significantly improved the production performance and the serum CD4+/CD8+ ratio of 56-day-old yellow-feathered broilers (P<0.05) and significantly reduced the serum urea nitrogen concentration (P<0.05), outperforming chlortetracycline.) <|cite_end|>. Apart from direct applications <|cite_start|> (Reference: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.) <|cite_end|>, I2I has proven valuable as a tool for data augmentation or to learn a representation for cross-domain image retrieval <|cite_start|> (Reference: Sketch-Based Image Retrieval using Generative Adversarial Networks: For sketch-based image retrieval (SBIR), we propose a generative adversarial network trained on a large number of sketches and their corresponding real images. To imitate the human search process, we attempt to match candidate images with the imaginary image in the user's mind instead of the sketch query, i.e., not only the shape information of sketches but their possible content information are considered in SBIR. Specifically, a conditional generative adversarial network (cGAN) is employed to enrich the content information of sketches and recover the imaginary images, and two VGG-based encoders, which work on real and imaginary images respectively, are used to constrain their perceptual consistency from the view of feature representations. During SBIR, we first generate an imaginary image from a given sketch via cGAN, and then take the output of the learned encoder for imaginary images as the feature of the query sketch. Finally, we build an interactive SBIR system that shows encouraging performance.) <|cite_end|>.
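To make this retrieval use case concrete, the sketch below shows how features from a translation model can drive cross-domain retrieval by ranking a gallery with cosine similarity (a minimal illustration with stand-in features; the encoder output, feature dimension and data are hypothetical, not taken from any of the cited works):
\begin{verbatim}
import numpy as np

def cosine_retrieval(query_feat, gallery_feats, k=5):
    # L2-normalize, then rank gallery entries by cosine similarity.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return np.argsort(-(g @ q))[:k]

# Stand-in encoder features: 100 catalog items and one street-photo query.
gallery = np.random.randn(100, 128)
query = np.random.randn(128)
top5 = cosine_retrieval(query, gallery)  # indices of the 5 closest items
\end{verbatim}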
Traditionally, each image domain is characterized by a different appearance or {\em style}, and I2I is therefore sometimes referred to as {\em style transfer} <|cite_start|> (Reference: Neural Style Transfer: A Review: The seminal work of Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at: https://osf.io/f8tu4/.) <|cite_end|>. While the translation process may drastically change the appearance or style of the input image, the image semantics are to be preserved, i.e. both input and output should represent the same objects and scene. Moreover, in most works the image geometry, i.e. the shape of the objects and the global image composition, is also preserved. We refer to this as the image {\em content}. Most methods for I2I build on top of Generative Adversarial Networks (GANs) <|cite_start|> (Reference: Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision applications. However, this model alone does not produce images...) <|cite_end|> <|cite_start|> (Reference: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.) <|cite_end|> <|cite_start|> (Reference: Least Squares Generative Adversarial Networks: Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process.
To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $\chi^2$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on five scene datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.) <|cite_end|> <|cite_start|> (Reference: Wasserstein Generative Adversarial Networks: We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.) <|cite_end|> and are data-driven. They learn a translation model from example images of the two domains. While most methods require paired examples <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) <|cite_end|> <|cite_start|> (Reference: Pixel-Level Domain Transfer: We present an image-conditional image generation model. The model transfers an input domain to a target domain in semantic level, and generates the target image in pixel level. To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets, but also introduce a novel domain-discriminator to make the generated image relevant to the input image. We verify our model through a challenging task of generating a piece of clothing from an input image of a dressed person. We present a high quality clothing dataset containing the two domains, and succeed in demonstrating decent results.) <|cite_end|> <|cite_start|> (Reference: Towards Pose Invariant Face Recognition in the Wild: Pose variation is one key challenge in face recognition. 
As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a "learning to learn" strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.) <|cite_end|>, some recent methods do not <|cite_start|> (Reference: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. 
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|> <|cite_start|> (Reference: Unsupervised one-to-many image translation: ) <|cite_end|>. To constrain the complexity of the problem, the training data is often restricted to a specific setting, e.g. close-ups of faces <|cite_start|> (Reference: Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis: Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.) <|cite_end|> <|cite_start|> (Reference: Towards Pose Invariant Face Recognition in the Wild: Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a "learning to learn" strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.) 
<|cite_end|>, people <|cite_start|> (Reference: Pose Guided Person Image Generation: This paper proposes the novel Pose Guided Person Generation Network (PG$^2$) that allows synthesizing person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG$^2$ utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128$\times$64 re-identification images and 256$\times$256 fashion photos show that our model generates high-quality person images with convincing details.) <|cite_end|> <|cite_start|> (Reference: Disentangled Person Image Generation: Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel person images at the same time. First, a multi-branched reconstruction network is proposed to disentangle and encode the three factors into embedding features, which are then combined to re-compose the input image itself. Second, three corresponding mapping functions are learned in an adversarial manner in order to map Gaussian noise to the learned embedding feature space, for each factor respectively. Using the proposed framework, we can manipulate the foreground, background and pose of the input image, and also sample new embedding features to generate such targeted manipulations, that provide more control over the generation process. Experiments on Market-1501 and Deepfashion datasets show that our model does not only generate realistic person images with new foregrounds, backgrounds and poses, but also manipulates the generated factors and interpolates the in-between states. Another set of experiments on Market-1501 shows that our model can also be beneficial for the person re-identification task.) <|cite_end|>, traffic scenes, etc. We refer to these as different {\em domains}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{imgs/Teaser_v6.pdf} \caption{ \footnotesize Translating a clothing item from a ``catalog'' image domain to a domain of individuals wearing the indicated item (try-on task, top), and vice versa (take-off task, bottom). Notice how for both tasks the appearance details of the clothing items are preserved while their shape is effectively translated. } \label{fig:TeaserImg} \vspace{-6mm} \end{figure} In contrast to the traditional setting <|cite_start|> (Reference: Disentangled Person Image Generation: Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel person images at the same time.
First, a multi-branched reconstruction network is proposed to disentangle and encode the three factors into embedding features, which are then combined to re-compose the input image itself. Second, three corresponding mapping functions are learned in an adversarial manner in order to map Gaussian noise to the learned embedding feature space, for each factor respectively. Using the proposed framework, we can manipulate the foreground, background and pose of the input image, and also sample new embedding features to generate such targeted manipulations, that provide more control over the generation process. Experiments on Market-1501 and Deepfashion datasets show that our model does not only generate realistic person images with new foregrounds, backgrounds and poses, but also manipulates the generated factors and interpolates the in-between states. Another set of experiments on Market-1501 shows that our model can also be beneficial for the person re-identification task.) <|cite_end|> <|cite_start|> (Reference: Pixel-Level Domain Transfer: We present an image-conditional image generation model. The model transfers an input domain to a target domain in semantic level, and generates the target image in pixel level. To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets, but also introduce a novel domain-discriminator to make the generated image relevant to the input image. We verify our model through a challenging task of generating a piece of clothing from an input image of a dressed person. We present a high quality clothing dataset containing the two domains, and succeed in demonstrating decent results.) <|cite_end|> <|cite_start|> (Reference: Towards Pose Invariant Face Recognition in the Wild: Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a "learning to learn" strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.) <|cite_end|>, we focus on the challenge where input and output do {\em not} belong to domains that share the same geometrical information. Instead, we work with one object-centric domain with standard shape and one that is more contextualized with large shape variation (using a reference image to provide the right context). 
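In interface terms, each translation direction is a conditional generator that consumes both an object-centric image and a contextualized reference. The sketch below illustrates this for the try-on direction (a minimal PyTorch-style sketch; the module name \texttt{TryOnGenerator}, the layer choices and the feature sizes are illustrative placeholders, not the architecture presented in Sec.~\ref{sec:methodology}):
\begin{verbatim}
import torch
import torch.nn as nn

class TryOnGenerator(nn.Module):
    # Catalog item image + person reference -> image of the person
    # wearing the item (shape is translated, appearance is preserved).
    def __init__(self, ch=64):
        super().__init__()
        self.enc_item = nn.Conv2d(3, ch, 4, stride=2, padding=1)  # appearance
        self.enc_ref = nn.Conv2d(3, ch, 4, stride=2, padding=1)   # shape/context
        self.dec = nn.ConvTranspose2d(2 * ch, 3, 4, stride=2, padding=1)

    def forward(self, item, reference):
        feats = torch.cat([self.enc_item(item), self.enc_ref(reference)], dim=1)
        return torch.tanh(self.dec(feats))

item = torch.randn(1, 3, 128, 128)       # object-centric domain, standard shape
reference = torch.randn(1, 3, 128, 128)  # contextualized domain, provides pose
output = TryOnGenerator()(item, reference)  # translated image, (1, 3, 128, 128)
\end{verbatim}
The take-off direction inverts this interface, mapping a contextualized image back to the object-centric domain.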
For instance, we go from a single piece of clothing to a person wearing that same item; or from a frontal face crop to a wider shot with arbitrary viewpoint of that same person (see Fig.~\ref{fig:TeaserImg} \& \ref{fig:faceQuality}). This setting is significantly more challenging, as the image geometry changes. At the same time, the image semantics (e.g. the clothing pattern or face identity) should be preserved. Analogous to the term style transfer, we refer to this as {\em shape transfer}. While a couple of recent works have looked into this setting <|cite_start|> (Reference: Disentangled Person Image Generation: Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel person images at the same time. First, a multi-branched reconstruction network is proposed to disentangle and encode the three factors into embedding features, which are then combined to re-compose the input image itself. Second, three corresponding mapping functions are learned in an adversarial manner in order to map Gaussian noise to the learned embedding feature space, for each factor respectively. Using the proposed framework, we can manipulate the foreground, background and pose of the input image, and also sample new embedding features to generate such targeted manipulations, that provide more control over the generation process. Experiments on Market-1501 and Deepfashion datasets show that our model does not only generate realistic person images with new foregrounds, backgrounds and poses, but also manipulates the generated factors and interpolates the in-between states. Another set of experiments on Market-1501 shows that our model can also be beneficial for the person re-identification task.) <|cite_end|> <|cite_start|> (Reference: Pixel-Level Domain Transfer: We present an image-conditional image generation model. The model transfers an input domain to a target domain in semantic level, and generates the target image in pixel level. To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets, but also introduce a novel domain-discriminator to make the generated image relevant to the input image. We verify our model through a challenging task of generating a piece of clothing from an input image of a dressed person. We present a high quality clothing dataset containing the two domains, and succeed in demonstrating decent results.) <|cite_end|> <|cite_start|> (Reference: Towards Pose Invariant Face Recognition in the Wild: Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. 
Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a "learning to learn" strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.) <|cite_end|>, to the best of our knowledge we are the first to propose a solution that does {\em not} require paired data, across different domains, for model training. This is important, as collecting paired data is cumbersome or even impossible. Either way, it limits the amount of data that can be used for training, while access to large amounts of data is crucial for the quality of the results. Methods working with unpaired training data have been proposed for style transfer <|cite_start|> (Reference: Multimodal Unsupervised Image-to-Image Translation: Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT) <|cite_end|> <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). 
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>, relying on low-level local transformations. However, these are not suited for the more challenging shape transfer setting, as clearly illustrated in Fig.~\ref{fig:baselineComparsion_intro}. Translating shapes in a unsupervised way is an unsolved task that is of interest for several reasons. First, it can be considered an alternative formulation of the novel-view synthesis problem, in the 2D image space, using only a single image as input. Second, shape translation can recover missing/occluded characteristics of an object instance which can help other tasks, such as recognition or tracking. The main contributions of this paper are four-fold: i) We analyze the task of \textit{\textbf{unsupervised shape translation}}. To the best of our knowledge, we are the first doing this from an unsupervised perspective. ii) We propose a method called Unsupervised Shape Transformer (UST), which does not need any paired data or refinement post-processing. In one stream, an object with standard shape is transformed to a contextualized domain with arbitrary shape, and vice versa in the other stream. iii) We achieve a one-to-many mapping by utilizing context and structure information guidance. iv) We show the potential of the features learned by our model on the cross-domain item retrieval task. This paper is organized as follows: Sec.~\ref{sec:relatedWork} positions our work in the literature. In Sec.~\ref{sec:methodology} we present the details of the proposed method. This is followed by an extensive evaluation in Sec.~\ref{sec:experiment}. Finally, we draw conclusions in Sec.~\ref{sec:conclusion}. Related Work \label{sec:relatedWork} Isola~\etal <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) <|cite_end|> first formulate the image-to-image translation problem with a conditional GAN model which learns a mapping from the source image distribution to the output image distribution using a U-Net neural network in an adversarial way. 
Zhu~\etal <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|> propose cycle-consistency to solve the I2I problem with unpaired data, which enables many applications since it is usually expensive or even impossible to collect paired data for many tasks. Liu~\etal <|cite_start|> (Reference: Unsupervised Image-to-Image Translation Networks: Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available at https://github.com/mingyuliutw/unit.) <|cite_end|> assume that there exists a shared latent space for the two related domains and propose a weight-sharing-based framework to enforce this constraint. These methods learn a one-to-one mapping function, \ie the input image is mapped to a deterministic output image. <|cite_start|> (Reference: Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data: Learning inter-domain mappings from unpaired data can improve performance in structured prediction tasks, such as image segmentation, by reducing the need for paired data. CycleGAN was recently proposed for this problem, but critically assumes the underlying inter-domain mapping is approximately deterministic and one-to-one. This assumption renders the model ineffective for tasks requiring flexible, many-to-many mappings. We propose a new model, called Augmented CycleGAN, which learns many-to-many mappings between domains. We examine Augmented CycleGAN qualitatively and quantitatively on several image datasets.)
<|cite_end|> <|cite_start|> (Reference: Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency: Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process.) <|cite_end|> <|cite_start|> (Reference: Multimodal Unsupervised Image-to-Image Translation: Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT) <|cite_end|> <|cite_start|> (Reference: Diverse Image-to-Image Translation via Disentangled Representations: Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: 1) the lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. 
To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time. To handle unpaired training data, we introduce a novel cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative comparisons, we measure realism with user study and diversity with a perceptual distance metric. We apply the proposed model to domain adaptation and show competitive performance when compared to the state-of-the-art on the MNIST-M and the LineMod datasets.) <|cite_end|> propose unpaired multimodal methods which either sample multiple styles from a Gaussian space or capture the styles from exemplar images to generate diverse outputs. \begin{figure} \centering \includegraphics[width=0.44\textwidth]{imgs/iccv_baseline_comparison_v2.pdf} \caption{\footnotesize Comparisons with CycleGAN <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|> and MUNIT <|cite_start|> (Reference: Multimodal Unsupervised Image-to-Image Translation: Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. 
Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT) <|cite_end|> for try-on (left) and take-off (right) on the FashionStyle dataset. } \label{fig:baselineComparsion_intro} \vspace{-6mm} \end{figure} All the above methods focus on appearance transfer where the content depicted in the input and output images has an aligned geometric structure. <|cite_start|> (Reference: Pose Guided Person Image Generation: This paper proposes the novel Pose Guided Person Generation Network (PG$^2$) that allows to synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG$^2$ utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128$\times$64 re-identification images and 256$\times$256 fashion photos show that our model generates high-quality person images with convincing details.) <|cite_end|> <|cite_start|> (Reference: Disentangled Person Image Generation: Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel person images at the same time. First, a multi-branched reconstruction network is proposed to disentangle and encode the three factors into embedding features, which are then combined to re-compose the input image itself. Second, three corresponding mapping functions are learned in an adversarial manner in order to map Gaussian noise to the learned embedding feature space, for each factor respectively. Using the proposed framework, we can manipulate the foreground, background and pose of the input image, and also sample new embedding features to generate such targeted manipulations, that provide more control over the generation process. Experiments on Market-1501 and Deepfashion datasets show that our model does not only generate realistic person images with new foregrounds, backgrounds and poses, but also manipulates the generated factors and interpolates the in-between states. Another set of experiments on Market-1501 shows that our model can also be beneficial for the person re-identification task.) <|cite_end|> <|cite_start|> (Reference: Synthesizing Images of Humans in Unseen Poses: We address the computational problem of novel human pose synthesis. Given an image of a person and a desired pose, we produce a depiction of that person in that pose, retaining the appearance of both the person and background. We present a modular generative neural network that synthesizes unseen poses using training pairs of images and poses taken from human action videos.
Our network separates a scene into different body part and background layers, moves body parts to new locations and refines their appearances, and composites the new foreground with a hole-filled background. These subtasks, implemented with separate modules, are trained jointly using only a single target image as a supervised label. We use an adversarial discriminator to force our network to synthesize realistic details conditioned on pose. We demonstrate image synthesis results on three action classes: golf, yoga/workouts and tennis, and show that our method produces accurate results within action classes as well as across action classes. Given a sequence of desired poses, we also produce coherent videos of actions.) <|cite_end|> <|cite_start|> (Reference: SwapNet: Garment Transfer in Single View Images: ) <|cite_end|> <|cite_start|> (Reference: Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis: Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.) <|cite_end|> <|cite_start|> (Reference: Towards Pose Invariant Face Recognition in the Wild: Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a "learning to learn" strategy for high-fidelity and identity-preserving frontal view synthesis. 
Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.) <|cite_end|> aim at the case where the geometry itself is to be transferred. However, these methods focus on within-domain tasks (\eg person-to-person and face-to-face), which exhibit reduced variability when compared to their cross-domain counterparts (\eg person-to-clothing). Yoo~\etal <|cite_start|> (Reference: Pixel-Level Domain Transfer: We present an image-conditional image generation model. The model transfers an input domain to a target domain in semantic level, and generates the target image in pixel level. To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets, but also introduce a novel domain-discriminator to make the generated image relevant to the input image. We verify our model through a challenging task of generating a piece of clothing from an input image of a dressed person. We present a high quality clothing dataset containing the two domains, and succeed in demonstrating decent results.) <|cite_end|> propose one of the first methods addressing cross-domain pixel-level translation. Their method semantically transfers a natural image depicting a person (source domain) to a clothing-item image corresponding to the clothing worn by that person on the upper body (target domain), and vice versa. Recently, <|cite_start|> (Reference: VITON: An Image-based Virtual Try-on Network: We present an image-based VIirtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. Conditioned upon a new clothing-agnostic yet descriptive person representation, our framework first generates a coarse synthesized image with the target clothing item overlaid on that same person in the same pose. We further enhance the initial blurry clothing area with a refinement network. The network is trained to learn how much detail to utilize from the target clothing item, and where to apply to the person in order to synthesize a photo-realistic image in which the target item deforms naturally with clear visual patterns. Experiments on our newly collected Zalando dataset demonstrate its promise in the image-based virtual try-on task over state-of-the-art generative models.) <|cite_end|> <|cite_start|> (Reference: Toward Characteristic-Preserving Image-based Virtual Try-On Network: Image-based virtual try-on systems for fitting new in-shop clothes into a person image have attracted increasing research attention, yet is still challenging. A desirable pipeline should not only transform the target clothes into the most fitting shape seamlessly but also preserve well the clothes identity in the generated image, that is, the key characteristics (e.g. texture, logo, embroidery) that depict the original clothes. However, previous image-conditioned generation works fail to meet these critical requirements towards the plausible virtual try-on performance since they fail to handle large spatial misalignment between the input image and target clothes.
Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. In this work, we propose a new fully-learnable Characteristic-Preserving Virtual Try-On Network(CP-VTON) for addressing all real-world challenges in this task. First, CP-VTON learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a new Geometric Matching Module (GMM) rather than computing correspondences of interest points as prior works did. Second, to alleviate boundary artifacts of warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes and the rendered image to ensure smoothness. Extensive experiments on a fashion dataset demonstrate our CP-VTON achieves the state-of-the-art virtual try-on performance both qualitatively and quantitatively.) <|cite_end|> propose two-stage warping-based methods aimed at virtual try-on of clothing items. These methods focus on learning a thin-plate spline (TPS) operation to transfer the pixel information directly. They rely on paired data to learn to transfer the shape in a first stage and then refine it in a second stage. In contrast, we propose a more general method that utilizes the context and shape guidance to perform translation across different domains without any paired data. In addition, different from previous works which divide the translation process into multiple stages, our method is able to handle the full appearance-preserving translation, in both directions, within a single model. Outside of the I2I literature, <|cite_start|> (Reference: Spatial Transformer Networks: Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.) <|cite_end|> proposes a spatial transformer network (STN) which also aims at object-level transformations. Different from our method, which learns plausible transformations from data and allows for user-suggested transformations through the use of "desired" target images, STNs start from a predefined set of possible transformations. In addition, STNs apply the same transformation to every pixel. In contrast, our method implicitly allows for deformable objects since different pixel-level transformations are possible as depicted in the training data. Finally, STNs make no distinction between content and style information. \setlength{\abovedisplayskip}{6pt} \setlength{\belowdisplayskip}{6pt} \setlength{\abovedisplayshortskip}{6pt} \setlength{\belowdisplayshortskip}{6pt} \setlength{\jot}{2pt} <|paper_end|>
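To make the STN contrast that closes the related-work excerpt above concrete, here is a minimal PyTorch sketch (our own illustration, not code from the paper; the function name affine_stn is hypothetical): a single affine parameter matrix theta is estimated per image, and the induced sampling grid applies that one transform identically at every pixel, which is precisely the uniform, predefined-family behaviour the text contrasts with learned, per-pixel deformations.
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def affine_stn(image, theta):
    # image: (N, C, H, W); theta: (N, 2, 3) affine matrices.
    # One transform per image, applied identically at every pixel,
    # which is the STN property contrasted with per-pixel warps above.
    grid = F.affine_grid(theta, list(image.shape), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

# Usage: rotate a random image by 30 degrees about its centre.
a = math.radians(30.0)
theta = torch.tensor([[[math.cos(a), -math.sin(a), 0.0],
                       [math.sin(a),  math.cos(a), 0.0]]])
warped = affine_stn(torch.rand(1, 3, 64, 64), theta)
\end{verbatim}
A deformable, appearance-preserving translation of the kind proposed in the paper would instead require the sampling grid itself to vary per pixel and to be learned from data.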
[ "<|reference_start|> Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision. However, this model alone does not produce images... <|reference_end|>", "<|reference_start|> Towards Pose Invariant Face Recognition in the Wild: Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a \"learning to learn\" strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts. <|reference_end|>", "<|reference_start|> Multimodal Unsupervised Image-to-Image Translation: Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT <|reference_end|>", "<|reference_start|> Multimodal Unsupervised Image-to-Image Translation: Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. 
While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT <|reference_end|>" ]
[ 6, 12, 26, 36 ]
{"<|cite_1|>": "ss-790133", "<|cite_2|>": "arxiv-117152", "<|cite_3|>": "ss-727034", "<|cite_4|>": "ss-958169", "<|cite_6|>": "ss-1228072", "<|cite_7|>": "ss-790133", "<|multi_cite_8_1|>": "ss-805363", "<|multi_cite_8_2|>": "arxiv-87648", "<|multi_cite_8_3|>": "arxiv-109984", "<|multi_cite_8_4|>": "ss-1258180", "<|multi_cite_9_1|>": "arxiv-110679", "<|multi_cite_9_2|>": "arxiv-94540", "<|multi_cite_9_3|>": "ss-1111788", "<|multi_cite_10_1|>": "arxiv-87648", "<|multi_cite_10_2|>": "arxiv-120450", "<|multi_cite_10_3|>": "ss-1822948", "<|multi_cite_11_1|>": "arxiv-121607", "<|multi_cite_11_2|>": "ss-1111788", "<|multi_cite_12_1|>": "arxiv-125170", "<|multi_cite_12_2|>": "arxiv-142531", "<|multi_cite_14_1|>": "arxiv-142531", "<|multi_cite_14_2|>": "arxiv-94540", "<|multi_cite_14_3|>": "ss-1111788", "<|multi_cite_15_1|>": "arxiv-142531", "<|multi_cite_15_2|>": "arxiv-94540", "<|multi_cite_15_3|>": "ss-1111788", "<|multi_cite_16_1|>": "arxiv-154821", "<|multi_cite_16_2|>": "arxiv-120450", "<|cite_17|>": "arxiv-110679", "<|cite_18|>": "arxiv-120450", "<|cite_19|>": "arxiv-118041", "<|multi_cite_20_1|>": "arxiv-149905", "<|multi_cite_20_2|>": "arxiv-160349", "<|multi_cite_20_3|>": "arxiv-154821", "<|multi_cite_20_4|>": "arxiv-168124", "<|cite_21|>": "arxiv-120450", "<|cite_22|>": "arxiv-154821", "<|multi_cite_23_1|>": "arxiv-125170", "<|multi_cite_23_2|>": "arxiv-142531", "<|multi_cite_23_3|>": "arxiv-155746", "<|multi_cite_23_4|>": "ss-1822949", "<|multi_cite_23_5|>": "arxiv-121607", "<|multi_cite_23_6|>": "ss-1111788", "<|cite_24|>": "arxiv-94540", "<|multi_cite_25_1|>": "arxiv-140986", "<|multi_cite_25_2|>": "arxiv-166550", "<|cite_26|>": "arxiv-78899"}
2102.08707-0
<|paper_start|> Title: Safety Analysis for Laser-based Optical Wireless Communications: A Tutorial Abstract: Safety Analysis for Laser-based Optical Wireless Communications: A Tutorial: Light amplification by stimulated emission of radiation (laser) sources have many advantages for use in high data rate optical wireless communications. In particular, the low cost and high-bandwidth properties of laser sources such as vertical-cavity surface-emitting lasers (VCSELs) make them attractive for future indoor optical wireless communications. In order to be integrated into future indoor networks, such lasers should conform to eye safety regulations determined by the International Electrotechnical Commission (IEC) standards for laser safety. In this paper, we provide a detailed study of beam propagation to evaluate the received power of various laser sources. Based on this study, together with the maximum permissible exposure (MPE) defined by the IEC 60825-1:2014 standard, we establish a comprehensive framework for eye safety analyses. This framework allows us to calculate the maximum allowable transmit power, which is crucial in the design of a reliable and safe laser-based wireless communication system. Initially, we consider a single-mode Gaussian beam and calculate the maximum permissible transmit power. Subsequently, we generalize this approach to higher-mode beams. It is shown that the M-squared-based approach for the analysis of multimode lasers ensures compliance with the IEC eye safety limits; however, in some scenarios, it can be too conservative compared to the precise beam decomposition method. Laser safety analyses that take optical elements such as lenses and diffusers into account, as well as VCSEL arrays, are also presented. Skin safety, another significant aspect of laser safety, is also investigated in this paper. We study the impact of various parameters, such as the wavelength, exposure duration and divergence angle of the laser source, on the safety analysis and present insightful results. Introduction \label{Section1} The number of mobile users is increasing rapidly and is expected to exceed $5.7$ billion, generating an anticipated $79$\% of the global data traffic. It is forecast that smartphones, laptops, tablets and wireless sensors will struggle to get their required share of the radio frequency (RF) spectrum without a significant capacity increase. In the 5th generation ($5$G) of cellular networks and beyond, a $1000$-fold capacity growth is expected with respect to 4th generation ($4$G) long-term evolution (LTE) wireless networks <|cite_start|> (Reference: 5G Network Capacity: Key Elements and Technologies: It has been projected that, in the next decade, a mobile traffic increase on the order of 1,000 times is expected compared to what we experience today. To meet that dramatic traffic growth, next-generation mobile networks are also expected to achieve a 1,000-fold capacity increase compared to the current generation of wireless network deployments. In this article, we discuss how such capacity growth could be achieved in a ten-year time frame. We discuss the techniques that we expect to have the highest opportunity for increasing the system capacity and estimate their gains based on analysis and simulation. We observe that the main driver of capacity growth is expected to come from network architecture advancements, with heterogeneous networks and convergence of information and communication technology being two of the key techniques.
We also estimate that the air-interface evolution would focus not only on improving the link and system spectrum efficiency but also on facilitating the required network efficiency improvements. This article provides insights into the communication technology evolution and can be used as a guideline for technology development toward the fifth generation (5G).) <|cite_end|>. Therefore, the RF spectrum is becoming congested and wireless devices have to cope with increased co-channel interference. As a result, the data throughput of the devices is severely affected, and the established connection may be poor. Optical wireless communication (OWC) enables wireless connectivity harnessing a huge spectrum, which is available in infrared, visible or ultraviolet bands <|cite_start|> (Reference: \{: إن قدرة النص الأدبي خارقة في البحث عن أشكال قرائية جديدة باستمرار لفك أسراره . لذلك تشعبت الرؤى والتصورات والمناهج في ارتباط وثيق ومتعدد المقاربات بعدد من العلوم والحقول المعرفية والنظريات. وقد حقق الحقل السيميائي ، في التصورات الكبرى التأسيسية أو في الاجتهادات الموالية ، طفرة ملموسة في النقد وفي صوغ المفاهيم وفي استجلاء دلالات النصوص الأدبية . كما عرف الدرس السيميائي في النقد العربي اجتهادات حقيقة بالتتبع خصوصا على المستوى الأكاديمي الجامعي . ففي حالة المغرب مثلا ، شكلت جهود الباحثين السيميائيين : محمد مفتاح ، سعيد بنكراد ، عبد اللطيف محفوظ ، عبد المجيد نوسي ..أكثر من مدخل لتقديم مفاهيم ورؤى في هذا الحقل بالإضافة إلى تحليلات متقدمة في الشعر والقصة والرواية والخطاب عموما ... إنها مسألة متعلقة في الدرس السيميائي بالمغرب بالنص والخطاب والمرجعية النظرية والبناء المنهجي وسبل التوظيف أثناء القراءة والتأويل مما أسهم في تحقيق أدوات إجرائية موسعة تتسلح بعلوم ومعارف للنفاذ إلى تشعبات الخطابات .) <|cite_end|>. Moreover, OWC is license-free, thus leading to a cost-effective service <|cite_start|> (Reference: 16th International Conference on Transparent Optical Networks, ICTON 2014, Graz, Austria, July 6-10, 2014: ) <|cite_end|>. It is a solution to alleviate RF spectrum congestion for both indoor and outdoor scenarios <|cite_start|> (Reference: Optical Wireless Communication: Data is the new currency impacting everybody's lives. As the modern world receives & sends millions of Terabytes of data every day, the present-day wireless data communication technologies comprising of Wi-Fi & 4G-LTE is on the verge of becoming partially inept for information exchange as they suffer from spectrum congestion in both controlled and uncontrolled environments. Li-Fi, also known as light fidelity, is a full duplex communication network enabling transmittal of data. The potency of bidirectional Visible Light Communication allows us to build an ideal medium, independent of congested radio frequencies and interference from electromagnetic waves, thus, resulting in faster data transfer. Inception of LED technology for lighting in 90's paved the way for high growth trajectory for LED Lighting industry which we have witnessed from the last 2 decades. As semiconductors, LEDs were poised to develop much bigger applications like integrated sensors apart from normal dimming and ambient lighting. Li-Fi is a technology which creates a bridge between the world of data communication & LED Lighting. Multiple forward & backward integration are poised to happen in coming years when lighting players will develop enterprise communication enabled lighting products. Even system integrators will look forward to Li-Fi enabled luminaires for establishing wireless networks.
Li-Fi is being seen as a big step forward in enabling 5G telecommunication networks. Security benefits and outdoor long-range communication capabilities Li-Fi a potential technology for Defence & Smart Cities applications. Li-Fi uses the visible and invisible frequency band (380nm - 1500nm) which is 10,000 times broader than usable RF frequency band. The property of light spectrum to be unlicensed and free from any health regulations makes it even more desirable for us. Its applications can extend in areas where the RF technology lacks its presence like aircrafts and hospitals (operation theatres), power plants and various other areas, where electromagnetic (Radio) interference is of great concern for safety and security of equipment's and people. Since there is no potential health hazard associated with light, it can be used safely in such locations or areas. Li-Fi / OWC has applications in both indoor (≅) and outdoor ( ) scenarios.) <|cite_end|> <|cite_start|> (Reference: {Indoor Optical Wireless Systems: Technology, Trends, and Applications: Indoor wireless traffic is evolving at a staggering pace, and is quickly depleting radio spectrum resources. Optical wireless communication (OWC) offers powerful solutions for resolving this imminent capacity crunch of radio-based wireless networks. OWC is not intended to fully replace radio wireless techniques such as WiFi, but to complement these and offload their high traffic loads. After discussing OWC's application domains, this paper gives a tutorial overview of two major directions in OWC: wide-coverage visible light communication which builds on LED illumination techniques and shares capacity among multiple devices, and communication with narrow 2-D steered infrared beams which offers unshared high capacity to devices individually. In addition, supporting techniques for wide field-of-view receivers, device localization, bidirectional hybrid optical/radio networks, and bidirectional all-optical wireless networks are discussed.) <|cite_end|>. OWC enables the creation of smaller communication cells in the network, known as attocells <|cite_start|> (Reference: {What Is LiFi?: Light-Fidelity (LiFi) takes visible light communication (VLC) further by using light emitting diodes (LEDs) to realise fully networked wireless systems. Synergies are harnessed as lights become LiFi attocells resulting in enhanced wireless capacity for the Internet-of-Things (IoT), 5G and beyond.) <|cite_end|>, which are a vital key to unlocking exponential capacity growth in indoor scenarios <|cite_start|> (Reference: {Indoor optical wireless communication: potential and state-of-the-art: In recent years, interest in optical wireless (OW) as a promising complementary technology for RF technology has gained new momentum fueled by significant deployments in solid state lighting technology. This article aims at reviewing and summarizing recent advancements in OW communication, with the main focus on indoor deployment scenarios. This includes a discussion of challenges, potential applications, state of the art, and prospects. Related issues covered in this article are duplex transmission, multiple access, MAC protocols, and link capacity improvements.) <|cite_end|>. Indoor OWC can offload heavy traffic from congested RF wireless networks, thereby making room for extremely low-capacity streams such as those in the Internet of Things (IoT).
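As a rough, back-of-the-envelope illustration of the densification argument above (a sketch under assumed, not measured, numbers): because light does not propagate through walls, every optical attocell can reuse the entire modulation bandwidth, so the aggregate indoor capacity grows with the number of cells deployed in a room.
\begin{verbatim}
import math

# Illustrative attocell capacity scaling; B and SNR are assumptions.
B = 500e6          # Hz, modulation bandwidth reused by every attocell
snr_db = 20.0      # dB, assumed per-user electrical SNR
snr = 10.0 ** (snr_db / 10.0)

per_cell = B * math.log2(1.0 + snr)   # Shannon capacity of one attocell
for n_cells in (1, 4, 16):
    agg = n_cells * per_cell
    print(f"{n_cells:2d} attocells -> {agg / 1e9:.1f} Gb/s aggregate")
\end{verbatim}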
It can ensure higher physical-layer security than its RF counterpart because optical signals do not penetrate walls or opaque objects <|cite_start|> (Reference: Wireless in-house data communication via diffuse infrared radiation: ) <|cite_end|>. Outdoor applications of OWC, which are widely referred to as free-space optical (FSO) communications, include satellite-to-satellite communications, inter-building connections as well as satellite-to-plane communications <|cite_start|> (Reference: Optical Communication in Space: Challenges and Mitigation Techniques: In recent years, free space optical communication has gained significant importance owing to its unique features: large bandwidth, license-free spectrum, high data rate, easy and quick deployability, less power and low mass requirements. FSO communication uses the optical carrier in the near infrared band to establish either terrestrial links within the Earth's atmosphere or inter-satellite or deep space links or ground-to-satellite or satellite-to-ground links. However, despite the great potential of FSO communication, its performance is limited by the adverse effects viz., absorption, scattering, and turbulence of the atmospheric channel. This paper presents a comprehensive survey on various challenges faced by FSO communication system for ground-to-satellite or satellite-to-ground and inter-satellite links. It also provides details of various performance mitigation techniques in order to have high link availability and reliability. The first part of the paper will focus on various types of impairments that pose a serious challenge to the performance of optical communication system for ground-to-satellite or satellite-to-ground and inter-satellite links. The latter part of the paper will provide the reader with an exhaustive review of various techniques both at physical layer as well as at the other layers i.e., link, network or transport layer to combat the adverse effects of the atmosphere. It also uniquely presents a recently developed technique using orbital angular momentum for utilizing the high capacity advantage of the optical carrier in case of space-based and near-Earth optical communication links. This survey provides the reader with comprehensive details on the use of space-based optical backhaul links in order to provide high-capacity and low-cost backhaul solutions.) <|cite_end|>. Underwater OWC is another application which has recently gained much attention since it is able to provide a much higher transmission bandwidth and, as a result, a much higher data rate than its acoustic and RF counterparts <|cite_start|> (Reference: {Underwater Optical Wireless Communication: Underwater wireless information transfer is of great interest to the military, industry, and the scientific community, as it plays an important role in tactical surveillance, pollution monitoring, oil control and maintenance, offshore explorations, climate change monitoring, and oceanography research. In order to facilitate all these activities, there is an increase in the number of unmanned vehicles or devices deployed underwater, which require high bandwidth and high capacity for information transfer underwater. Although tremendous progress has been made in the field of acoustic communication underwater, however, it is limited by bandwidth.
All this has led to the proliferation of underwater optical wireless communication (UOWC), as it provides higher data rates than the traditional acoustic communication systems with significantly lower power consumption and simpler computational complexities for short-range wireless links. UOWC has many potential applications ranging from deep oceans to coastal waters. However, the biggest challenge for underwater wireless communication originates from the fundamental characteristics of ocean or sea water; addressing these challenges requires a thorough understanding of complex physio-chemical biological systems. In this paper, the main focus is to understand the feasibility and the reliability of high data rate underwater optical links due to various propagation phenomena that impact the performance of the system. This paper provides an exhaustive overview of recent advances in UOWC. Channel characterization, modulation schemes, coding techniques, and various sources of noise which are specific to UOWC are discussed. This paper not only provides exhaustive research in underwater optical communication but also aims to provide the development of new ideas that would help in the growth of future underwater communication. A hybrid approach to an acousto-optic communication system is presented that complements the existing acoustic system, resulting in high data rates, low latency, and an energy-efficient system.) <|cite_end|> <|cite_start|> (Reference: A Survey of Underwater Optical Wireless Communications: Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.) <|cite_end|>. Vehicle-to-vehicle (V2V) communication via the optical spectrum is another recent area of focus, where experimental results have confirmed its reliability even under heavy-fog conditions <|cite_start|> (Reference: {Experimental Demonstration of VLC-Based Vehicle-to-Vehicle Communications Under Fog Conditions: Vehicle-to-vehicle (V2V) communication using visible light communication (VLC) technology under fog conditions is presented. Fog is known as one of the most detrimental atmospheric conditions that causes outdoor optical wireless communications to be unreliable. The effect of the fog conditions is experimentally analyzed in the VLC-based V2V system. Recognizing the least attenuation coefficient and a taillight color of a vehicle, a red light-emitting diode (LED) was employed in the experiment. 
In addition, a Fresnel lens and multiple photodiodes are utilized to efficiently counteract the impairment caused by fog. The experimental results demonstrate that the proposed VLC-based V2V system offers a reliable V2V data transmission over the fog-impaired optical channel with a relatively high signal-to-noise ratio (SNR), even under a heavy-fog condition.) <|cite_end|>. All the aforementioned applications of OWC and its capability to offer low-cost, high-speed, secure and reliable communication links have made OWC an indispensable part of future generations of communications. Compared to light emitting diodes (LEDs), light amplification by stimulated emission of radiation (laser) diodes provide a larger modulation bandwidth that can achieve multi-Gb/s data rates, making them appealing for use in the Tb/s indoor networks <|cite_start|> (Reference: {Experimental Demonstration of Indoor Infrared Optical Wireless Communications With a Silicon Photonic Integrated Circuit: The optical wireless technology has great potential in realizing high-speed wireless communications in indoor applications, and the silicon photonics platform has been widely investigated to provide photonic integrations using advanced CMOS facilities. In this paper, the silicon integration of key beam steering function in high-speed infrared indoor optical wireless communication systems is proposed and investigated. The beam steering function is realized through edge couplers based silicon integrated optical phased array to achieve both wide operation bandwidth and high power efficiency. A 1 × 4 integrated phased array is designed and fabricated, and up to 12.5-Gb/s data transmission using the silicon integrated beam steering device through over 1.4 m free-space distance is experimentally demonstrated. Results show that error-free data transmission can be achieved with limited mobility provided to users, and the power penalty of the silicon integrated device is negligible. The outcomes successfully demonstrate the feasibility of using silicon photonic integrations in indoor optical wireless communication systems to realize compact and low-cost solutions.) <|cite_end|> <|cite_start|> (Reference: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications: ) <|cite_end|> <|cite_start|> (Reference: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications: ) <|cite_end|> required for the next generations of wireless networks. A data rate of $4~\times~12.5$~Gbps has been successfully demonstrated by experiments in <|cite_start|> (Reference: {4$\,\times\,$ 12.5 Gb/s WDM Optical Wireless Communication System for Indoor Applications: A novel high-speed optical wireless communication system incorporating wavelength division multiplexing technology for indoor personal area networking applications is proposed in this paper. Even with the simplest single wide field-of-view (45°) nonimaging receiver, bit rate as high as 4 × 12.5 Gb/s has been successfully demonstrated by experiments. It is shown that error-free reception (bit error rate <; 10-9) can be achieved over the entire beam footprint of about 1 m, so limited mobility can be provided to subscribers. In addition, a new localization system based on this optical wireless communication system has also been proposed and verified by experiments. When these two systems are incorporated together, high-speed optical wireless communication with mobility feature can be provided to users over the entire room.)
<|cite_end|> over an indoor optical wireless link, with error-free reception across the entire beam footprint of about 1 m. Among the different types of laser diodes, vertical cavity surface emitting lasers (VCSELs) are one of the strongest candidates to fulfil this role due to several outstanding features such as <|cite_start|> (Reference: {Surface-Emitting Laser-Its Birth and Generation of New Optoelectronics Field: The surface-emitting laser (SEL) is considered one of the most important devices for optical interconnects and LANs, enabling ultra parallel information transmission in lightwave and computer systems. We introduce its history, fabrication technology, and discuss the advantages. Then, we review the progress of the surface emitting laser and the vertical-cavity surface-emitting laser (VCSEL), covering the spectral band from infrared to ultraviolet by featuring its physics, materials, fabrication technology, and performances, such as threshold, output powers, polarizations, line-width, modulation, reliability, and so on.) <|cite_end|>: high-speed modulation (bandwidths over $28$ GHz) <|cite_start|> (Reference: {High-Speed 850-nm VCSELs with 28 GHz Modulation Bandwidth for Short Reach Communication: We present results from our new generation of high performance 850 nm oxide confined vertical cavity surface-emitting lasers (VCSELs). With devices optimized for high-speed operation under direct modulation, we achieve record high 3dB modulation bandwidths of 28 GHz for ~4 μm oxide aperture diameter VCSELs, and 27 GHz for devices with a ~7 μm oxide aperture diameter. Combined with a high-speed photoreceiver, the ~7 μm VCSEL enables error-free transmission at data rates up to 47 Gbit/s at room temperature, and up to 40 Gbit/s at 85°C.) <|cite_end|> <|cite_start|> (Reference: 2013 Conference on Lasers and Electro-Optics Europe and International Quantum Electronics Conference CLEO EUROPE/IQEC: ) <|cite_end|> <|cite_start|> (Reference: {30 GHz Bandwidth 850 nm VCSEL with sub-100 fJ/bit Energy Dissipation at 25--50 Gbit/s: A high-speed and energy-efficient oxide-confined 850 nm vertical-cavity surface-emitting laser (VCSEL) for optical interconnects is presented. A record-high modulation bandwidth of 30 GHz is reached for a 3.5 mu m oxide aperture VCSEL, with 25 GHz bandwidth already at a bias current of 1.8 mA. The high bandwidth at low currents enables energy-efficient transmission with a dissipated heat energy in the VCSEL of <100 fJ/bit at 25, 40 and 50 Gbit/s.) <|cite_end|>, high power conversion efficiency and low cost. Furthermore, high aggregate bit-rates exceeding $100$ Gbit/s have been confirmed by means of VCSEL arrays through experiments in many studies <|cite_start|> (Reference: {400-Gb/s PDM-4PAM WDM System Using a Monolithic 2×4 VCSEL Array and Coherent Detection: We generate a 400-Gb/s line rate signal using a directly modulated 2 × 4 monolithic vertical-cavity-surface-emitting-laser array. The signal consists of four wavelength-division-multiplexed channels at a 100-GHz channel spacing and each channel carries a 100-Gb/s polarization-division-multiplexed four-level pulse-amplitude-modulation signal. Using digital coherent detection, we successfully transmit the 400-Gb/s signal over 5 × 80-km standard-single-mode-fiber spans with erbium-doped fiber amplifiers at 20% overhead soft-decision forward-error correction, achieving a net information bit rate of 333 Gb/s.)
<|cite_end|> <|cite_start|> (Reference: Biogenic Hydrogen Conversion of De-Oiled Jatropha Waste via Anaerobic Sequencing Batch Reactor Operation: Process Performance, Microbial Insights, and CO2 Reduction Efficiency: We report the semicontinuous, direct (anaerobic sequencing batch reactor operation) hydrogen fermentation of de-oiled jatropha waste (DJW). The effect of hydraulic retention time (HRT) was studied and results show that the stable and peak hydrogen production rate of 1.48 L/L∗d and hydrogen yield of 8.7 mL H2/g volatile solid added were attained when the reactor was operated at HRT 2 days (d) with a DJW concentration of 200 g/L, temperature 55°C, and pH 6.5. Reduced HRT enhanced the production performance until 1.75 d. Further reduction has lowered the process efficiency in terms of biogas production and hydrogen gas content. The effluent from hydrogen fermentor was utilized for methane fermentation in batch reactors using pig slurry and cow dung as seed sources. The results revealed that pig slurry was a feasible seed source for methane generation. Peak methane production rate of 0.43 L CH4/L∗d and methane yield of 20.5 mL CH4/g COD were observed at substrate concentration of 10 g COD/L, temperature 30°C, and pH 7.0. PCR-DGGE analysis revealed that combination of celluloytic and fermentative bacteria were present in the hydrogen producing ASBR.) <|cite_end|> <|cite_start|> (Reference: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications: ) <|cite_end|> <|cite_start|> (Reference: 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications: ) <|cite_end|>. These attributes make VCSELs noteworthy for many applications, particularly for high-speed indoor networks <|cite_start|> (Reference: {Vertical-Cavity Surface-Emitting Lasers for Data Communication and Sensing: Vertical-cavity surface-emitting lasers (VCSELs) are the ideal optical sources for data communication and sensing. In data communication, large data rates combined with excellent energy efficiency and temperature stability have been achieved based on advanced device design and modulation formats. VCSELs are also promising sources for photonic integrated circuits due to their small footprint and low power consumption. Also, VCSELs are commonly used for a wide variety of applications in the consumer electronics market. These applications range from laser mice to three-dimensional (3D) sensing and imaging, including various 3D movement detections, such as gesture recognition or face recognition. Novel VCSEL types will include metastructures, exhibiting additional unique properties, of largest importance for next-generation data communication, sensing, and photonic integrated circuits.) <|cite_end|>. However, one of the major challenges of utilizing laser sources as transmitters is to meet eye safety regulations. In the following section, we review the relevant standards, research papers and handbooks on laser safety. 
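To make the eye-safety workflow outlined in the abstract concrete before turning to the standards themselves, the following minimal Python sketch estimates the maximum allowable transmit power of a single-mode Gaussian beam: it propagates the beam waist, evaluates the fraction of optical power that a 7 mm pupil can collect at the most hazardous accessible distance, and caps the transmit power so that the collected power never exceeds the accessible emission limit (AEL = MPE multiplied by the limiting-aperture area). All numerical values here (850 nm wavelength, 5 um waist, a CW ocular MPE of about 20 W/m^2, a 100 mm closest viewing distance) are illustrative assumptions for this sketch, not values quoted from the IEC text.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions, not values from the standard).
wavelength = 850e-9   # m, typical VCSEL wavelength
w0 = 5e-6             # m, beam waist radius at the source
MPE = 20.0            # W/m^2, assumed CW ocular MPE near 850 nm
r_pupil = 3.5e-3      # m, 7 mm limiting aperture (pupil) radius
z_min = 0.1           # m, closest accessible viewing distance

z_R = np.pi * w0**2 / wavelength          # Rayleigh range

def beam_radius(z):
    """1/e^2 Gaussian beam radius after propagating a distance z."""
    return w0 * np.sqrt(1.0 + (z / z_R)**2)

def pupil_fraction(z):
    """Fraction of total beam power entering a centred circular pupil."""
    return 1.0 - np.exp(-2.0 * r_pupil**2 / beam_radius(z)**2)

# Worst case: the accessible distance where the pupil collects most power.
z = np.linspace(z_min, 5.0, 10_000)
eta_max = pupil_fraction(z).max()

AEL = MPE * np.pi * r_pupil**2            # accessible emission limit, W
P_tx_max = AEL / eta_max                  # maximum safe transmit power, W
print(f"collected fraction <= {eta_max:.3f}; P_tx_max = {1e3*P_tx_max:.2f} mW")
\end{verbatim}
For such a rapidly diverging single-mode beam the collected fraction is largest at the closest accessible distance; the full standard additionally accounts for exposure duration, angular subtense and, for multimode beams, the M-squared or beam-decomposition analyses discussed later in the paper.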
\begin{figure}[!th] \begin{center} \begin{tikzpicture} \tikzstyle{Box} = [rectangle, rounded corners, minimum height=1.2cm,text centered, draw=black, fill=white] \node (B1) [Box,draw=black, fill=blue!20]{IEC 60825}; \node (B2) [Box,draw=black, fill=blue!20,right of=B1, yshift=4.5cm, xshift=2.2cm,text width=2.1cm]{Part 1}; \node (B3) [Box,draw=black, fill=blue!20, below of=B2, yshift=-0.3cm,text width=2.1cm]{Part 2}; \node (B4) [Box,draw=black, fill=blue!20, below of=B3, yshift=-0.3cm,text width=2.1cm]{Part 3}; \node (B5) [Box,draw=black, fill=blue!20, below of=B4, yshift=-0.3cm,text width=2.1cm]{Part 4}; \node (B6) [Box,draw=black, fill=blue!20, below of=B5, yshift=-0.3cm,text width=2.1cm]{Part 5}; \node (B7) [Box,draw=black, fill=blue!20, below of=B6, yshift=-0.3cm,text width=2.1cm]{Part 8}; \node (B8) [Box,draw=black, fill=blue!20, below of=B7, yshift=-0.3cm,text width=2.1cm]{Part 12}; \node (B9) [Box,draw=black, fill=blue!20, below of=B8, yshift=-0.3cm,text width=2.1cm]{Part 13}; \node (B10) [Box,draw=black, fill=blue!20, below of=B9, yshift=-0.3cm,text width=2.1cm]{Part 14}; \node (B10-1) [Box,draw=black, fill=blue!20, below of=B10, yshift=-0.3cm,text width=2.1cm]{Part 17}; \draw [->] (B1) |- (B2); \draw [->] (B1) |- (B3); \draw [->] (B1) |- (B4); \draw [->] (B1) |- (B5); \draw [->] (B1) |- (B6); \draw [->] (B1) |- (B7); \draw [->] (B1) |- (B8); \draw [->] (B1) |- (B9); \draw [->] (B1) |- (B10); \draw [->] (B1) |- (B10-1); \node (B11) [Box,draw=black, fill=blue!20,right of=B2, xshift=5.85cm, text width=9cm]{General laser safety in the wavelength range $180$ nm to $1$ mm}; \node (B12) [Box,draw=black, fill=blue!20,right of=B3, xshift=5.85cm, text width=9cm ]{Guidance for the safe operation and maintenance of optical fiber}; \node (B13) [Box,draw=black, fill=blue!20,right of=B4, xshift=5.85cm, text width=9cm ]{Guidance on the design, set-up and conduct of laser displays which use high power lasers}; \node (B14) [Box,draw=black, fill=blue!20,,right of=B5, xshift=5.85cm, text width=9cm ]{Specifies the requirements for laser guards}; \node (B15) [Box,draw=black, fill=blue!20,,right of=B6, xshift=5.85cm, text width=9cm]{Ensures that each new or modified design complies with the requirements of IEC 60825-1:2014}; \node (B16) [Box,draw=black, fill=blue!20,,right of=B7, xshift=5.85cm, text width=9cm]{Guidance for operators on the safe use of lasers classified as class 3B or class 4}; \node (B17) [Box,draw=black, fill=blue!20,,right of=B8, xshift=5.85cm, text width=9cm]{Safety of point-to-point or point-to-multipoint free space optical data transmission (180 nm to 1 mm)}; \node (B18) [Box,draw=black, fill=blue!20,,right of=B9, xshift=5.85cm, text width=9cm]{Guidance on safe radiometric measurements and information about calculating AELs and MPEs}; \node (B19) [Box,draw=black, fill=blue!20,,right of=B10, xshift=5.85cm, text width=9cm]{Guidance on best practice in the safe use of laser products that conform to IEC 60825-1:2014}; \node (B20) [Box,draw=black, fill=blue!20,,right of=B10-1, xshift=5.85cm, text width=9cm]{Safety measures against passive optical components and optical cables used in high power OFCS}; \draw [->] (B2) -- (B11); \draw [->] (B3) -- (B12); \draw [->] (B4) -- (B13); \draw [->] (B5) -- (B14); \draw [->] (B6) -- (B15); \draw [->] (B7) -- (B16); \draw [->] (B8) -- (B17); \draw [->] (B9) -- (B18); \draw [->] (B10) -- (B19); \draw [->] (B10-1) -- (B20); \end{tikzpicture} \end{center} \caption{IEC 60825 standard series on the safety of laser products along with their main focus.} 
\label{Fig-IEC} \end{figure} \begin{figure}[t!] \begin{center} \begin{tikzpicture} \tikzstyle{Box} = [rectangle, rounded corners, minimum height=1.2cm,text centered, draw=black, fill=white] \node (B1) [Box,draw=black, fill=red!20]{ANSI}; \node (B2) [Box,draw=black, fill=red!20,right of=B1, yshift=4.5cm, xshift=1.6cm]{ANSI Z136.1 <|cite_start|> (Reference: Procedure for the computation of hazards from diffusely scattering surfaces under the Z136.1-2000 American National Standard for Safe Use of Lasers: The current national consensus standard for laser safety in the United States is the American National Standard for Safe Use of Lasers (ANSI Z136.1). The most recent standard, Z136.1-2000, incorporates a wealth of recent bioeffects data and established a number of new maximum permissible exposure (MPE) limits for laser safety. The standard also includes recent procedures for the computation of MPE values from large or extended diffusely scattering sources, which must be understood by health physicists, laser safety officers, and others in the field of occupational safety. Here we present the fourth in a series of tutorial articles, written to clarify laser safety analysis procedures under this standard, with an emphasis on the MPE computation methods related to extended sources, and the determination of nominal hazard zones.) <|cite_end|>}; \node (B3) [Box,draw=black, fill=red!20, below of=B2, yshift=-0.3cm]{ANSI Z136.2 <|cite_start|> (Reference: American national standard for the safe use of optical fiber communications systems utilizing laser diodes and LED sources, ANSI Z136.1-1997: The 1989 American Standard for the Safe Use of Optical Fiber Communications Systems Utilizing Laser Diodes and LED Sources, ANSI Z136.2-1989, was recently updated to address changes in laser safety criteria and technology. The revised standard provides practical guidance for personnel installing and servicing optical fiber communications systems (OFCS). Such systems are, by definition, Class 1 except during service or installation and, therefore, the concept of “service group” (SG) instead of “class” is retained to as an indicator potential of risk. Factors such as the divergence of the energy emitted from the end of an optical fiber or connector, and anticipated realistic viewing conditions are included in the accessible emission limits (AELs) that define the different SGs. Consequently, the AELs and the measurement distances, limiting aperture diameters and exposure durations are different from the corresponding values used to classify conventional lasers and laser systems. Where appropriate, changes were made to harmonize with other standards, e.g., relaxation of the maximum permissible exposure (MPE) values in the IR. The rationale and innovative features of the revised standard are described below.) <|cite_end|>}; \node (B4) [Box,draw=black, fill=red!20, below of=B3, yshift=-0.3cm]{ANSI Z136.3 <|cite_start|> (Reference: OP-TEC national center for optics and photonics education and ANSI Z136.5 American National Standard for the safe use of lasers in educational institutions – How they will work together to improve laser safety in educational institutions: A consortium of two-year colleges, high schools, universities, national laboratories, industry partners, and professional societies created OP-TEC. This ATE-NSF program is committed to join forces in creating a secondary-to-postsecondary “pipeline” of highly qualified and strongly motivated students and empowering community colleges to meet the urgent need for technicians in optics and photonics. Part of OP-TEC is to act in an advisory role for high schools, colleges and universities to develop safe laser laboratories as they infuse photonics into their programs. Dr. Fred Seeber who is a Co-Investigator for OP-TEC also chairs the ANSI Z136.5 subcommittee for the Safe Use of Lasers in Educational Institutions. This presentation will discuss how the ANSI Z136.5 standard will guide OP-TEC when it advises educational institutions on how to safely instruct and create laser laboratories. The ANSI Z136.5 published in 2000 and nearing the end of a revision which will come out with the next addition in the beginning of 2009.)
<|cite_end|>}; \node (B5) [Box,draw=black, fill=red!20, below of=B4, yshift=-0.3cm]{ANSI Z136.4 <|cite_start|> (Reference: Update on the IEC 60825-13 technical report on laser radiation measurements and the ANSI Z136.4 recommended practice for laser safety measurements for hazard evaluation: ) <|cite_end|>}; \node (B6) [Box,draw=black, fill=red!20, below of=B5, yshift=-0.3cm]{ANSI Z136.5 <|cite_start|> (Reference: OP-TEC national center for optics and photonics education and ANSI Z136.5 American National Standard for the safe use of lasers in educational institutions – How they will work together to improve laser safety in educational institutions: A consortium of two-year colleges, high schools, universities, national laboratories, industry partners, and professional societies created OP-TEC. This ATE-NSF program is committed to join forces in creating a secondary-to-postsecondary “pipeline” of highly qualified and strongly motivated students and empowering community colleges to meet the urgent need for technicians in optics and photonics. Part of OP-TEC is to act in an advisory role for high schools, colleges and universities to develop safe laser laboratories as they infuse photonics into their programs. Dr. Fred Seeber who is a Co-Investigator for OP-TEC also chairs the ANSI Z136.5 subcommittee for the Safe Use of Lasers in Educational Institutions. This presentation will discuss how the ANSI Z136.5 standard will guide OP-TEC when it advises educational institutions on how to safely instruct and create laser laboratories. The ANSI Z136.5 published in 2000 and nearing the end of a revision which will come out with the next addition in the beginning of 2009.A consortium of two-year colleges, high schools, universities, national laboratories, industry partners, and professional societies created OP-TEC. This ATE-NSF program is committed to join forces in creating a secondary-to-postsecondary “pipeline” of highly qualified and strongly motivated students and empowering community colleges to meet the urgent need for technicians in optics and photonics. Part of OP-TEC is to act in an advisory role for high schools, colleges and universities to develop safe laser laboratories as they infuse photonics into their programs. Dr. Fred Seeber who is a Co-Investigator for OP-TEC also chairs the ANSI Z136.5 subcommittee for the Safe Use of Lasers in Educational Institutions. This presentation will discuss how the ANSI Z136.5 standard will guide OP-TEC when it advises educational institutions on how to safely instruct and create laser laboratories. The ANSI Z136.5 published in 2000 and nearing the end of a revision which will come out with the next addition in the beginni...) <|cite_end|>}; \node (B7) [Box,draw=black, fill=red!20, below of=B6, yshift=-0.3cm]{ANSI Z136.6}; \node (B8) [Box,draw=black, fill=red!20, below of=B7, yshift=-0.3cm]{ANSI Z136.7}; \node (B9) [Box,draw=black, fill=red!20, below of=B8, yshift=-0.3cm]{ANSI Z136.8}; \node (B10) [Box,draw=black, fill=red!20, below of=B9, yshift=-0.3cm]{ANSI Z136.9 <|cite_start|> (Reference: OP-TEC national center for optics and photonics education and ANSI Z136.5 American National Standard for the safe use of lasers in educational institutions – How they will work together to improve laser safety in educational institutions: A consortium of two-year colleges, high schools, universities, national laboratories, industry partners, and professional societies created OP-TEC. 
This ATE-NSF program is committed to join forces in creating a secondary-to-postsecondary “pipeline” of highly qualified and strongly motivated students and empowering community colleges to meet the urgent need for technicians in optics and photonics. Part of OP-TEC is to act in an advisory role for high schools, colleges and universities to develop safe laser laboratories as they infuse photonics into their programs. Dr. Fred Seeber who is a Co-Investigator for OP-TEC also chairs the ANSI Z136.5 subcommittee for the Safe Use of Lasers in Educational Institutions. This presentation will discuss how the ANSI Z136.5 standard will guide OP-TEC when it advises educational institutions on how to safely instruct and create laser laboratories. The ANSI Z136.5 published in 2000 and nearing the end of a revision which will come out with the next addition in the beginning of 2009.A consortium of two-year colleges, high schools, universities, national laboratories, industry partners, and professional societies created OP-TEC. This ATE-NSF program is committed to join forces in creating a secondary-to-postsecondary “pipeline” of highly qualified and strongly motivated students and empowering community colleges to meet the urgent need for technicians in optics and photonics. Part of OP-TEC is to act in an advisory role for high schools, colleges and universities to develop safe laser laboratories as they infuse photonics into their programs. Dr. Fred Seeber who is a Co-Investigator for OP-TEC also chairs the ANSI Z136.5 subcommittee for the Safe Use of Lasers in Educational Institutions. This presentation will discuss how the ANSI Z136.5 standard will guide OP-TEC when it advises educational institutions on how to safely instruct and create laser laboratories. The ANSI Z136.5 published in 2000 and nearing the end of a revision which will come out with the next addition in the beginni...) 
<|cite_end|>}; \draw [->] (B1) |- (B2); \draw [->] (B1) |- (B3); \draw [->] (B1) |- (B4); \draw [->] (B1) |- (B5); \draw [->] (B1) |- (B6); \draw [->] (B1) |- (B7); \draw [->] (B1) |- (B8); \draw [->] (B1) |- (B9); \draw [->] (B1) |- (B10); \node (B11) [Box,draw=black, fill=red!20, right of=B2, xshift=5.85cm, text width=9cm]{General safe use of lasers for industry, military, research and development}; \node (B12) [Box,draw=black, fill=red!20, right of=B3, xshift=5.85cm, text width=9cm]{Safe use of optical communication systems utilizing laser diode and LED sources}; \node (B13) [Box,draw=black, fill=red!20, right of=B4, xshift=5.85cm, text width=9cm]{Safe use of high power lasers (class 3B and class 4) in health care}; \node (B14) [Box,draw=black, fill=red!20, right of=B5, xshift=5.85cm, text width=9cm]{Guidance on laser safety measurements for classification and hazard evaluation}; \node (B15) [Box,draw=black, fill=red!20, right of=B6, xshift=5.85cm, text width=9cm]{Safe use of lasers in educational institutions}; \node (B16) [Box,draw=black, fill=red!20, right of=B7, xshift=5.85cm, text width=9cm]{Safe use of lasers outdoors, e.g., construction, laser light shows and military}; \node (B17) [Box,draw=black, fill=red!20, right of=B8, xshift=5.85cm, text width=9cm]{Guidance on the test methods and protocols used to provide eye protective equipment}; \node (B18) [Box,draw=black, fill=red!20, right of=B9, xshift=5.85cm, text width=9cm]{Guidance on the safe use of lasers in research, development, or testing}; \node (B19) [Box,draw=black, fill=red!20, right of=B10, xshift=5.85cm, text width=9cm]{Policies to ensure laser safety in both public and private manufacturing environments}; \draw [->] (B2) -- (B11); \draw [->] (B3) -- (B12); \draw [->] (B4) -- (B13); \draw [->] (B5) -- (B14); \draw [->] (B6) -- (B15); \draw [->] (B7) -- (B16); \draw [->] (B8) -- (B17); \draw [->] (B9) -- (B18); \draw [->] (B10) -- (B19); \end{tikzpicture} \end{center} \caption{Classification of ANSI standards on laser safety and their main focus.} \label{Fig-ANSI} \end{figure} \subsection{Literature Review of Laser Safety} With the increased use of lasers in OWC for high-speed links, the safety of lasers\footnote{We note that the safety of optical fiber is outside the focus of this paper. The IEC has released a standard for the safety of optical fiber communication systems (OFCS) (part 2 of IEC 60825).} has emerged as a topic of interest. Several standards address the safety of lasers. IEC 60825 consists of 10 parts\footnote{EN 60825 is the European requirements document, which follows the IEC versions, and BS EN 60825 is the UK national requirements document. The USA has always had its own regulation on lasers, known as 21 CFR 1040.10. This is a US government regulation rather than a standard; it has not been updated for about 30 years and is consequently very outdated compared with the IEC/EN standard.}, under the general title of safety of laser products. Fig.~\ref{Fig-IEC} illustrates the principal target of each part. The parts most relevant to OWC design with lasers are parts 1 and 12, where general laser safety and the safety of free-space optical communications are specified, respectively. In addition, the American National Standards Institute (ANSI) has published a 9-part ANSI Z-136 series of standards. This series, along with an outline of the primary focus of each part, is presented in Fig.~\ref{Fig-ANSI}.
Parts 1 and 2 are the most relevant with regard to the design of safe optical wireless links, where the safe use of lasers and the safety of fixed terrestrial point-to-point free-space links are discussed, respectively. The International Commission on Non-Ionizing Radiation Protection (ICNIRP) has specified the maximum level of exposure to coherent laser sources for wavelengths between $180$ nm and $1$ mm <|cite_start|> (Reference: ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines).) <|cite_end|>\footnote{This includes the ICNIRP revision of the maximum levels of exposure to laser radiation for wavelengths between 400 nm and 1.4 $\mu$m, which was released in October 2000.}. Table~\ref{table:standards} summarizes the most relevant standards on eye safety for laser products. It is worth mentioning that ICNIRP has determined exposure limits for the whole electromagnetic spectrum, including static magnetic fields (SMF) <|cite_start|> (Reference: ICNIRP Guidelines GUIDELINES ON LIMITS OF EXPOSURE TO STATIC MAGNETIC FIELDS: THE RAPID development of technologies in industry and medicine using static magnetic fields has resulted in an increase in human exposure to these fields and has led to a number of scientific studies of their possible health effects. The World Health Organization (WHO) recently developed a health criteria document on static electric and magnetic fields within the Environmental Health Criteria Program (WHO 2006). The document contains a review of biological effects reported from exposure to static fields and, together with other recent publications [mainly International Commission on Non-Ionizing Radiation Protection (ICNIRP) 2003, McKinlay et al. 2004, and Noble et al. 2005], serves as the scientific database for the development of the rationale for the guidelines described in the current document, which supersede those published by ICNIRP in 1994 (ICNIRP 1994).) <|cite_end|>, static electric fields (SEF), low frequencies (LF) from $1$ Hz to $100$ kHz <|cite_start|> (Reference: GUIDELINES FOR LIMITING EXPOSURE TO TIME-VARYING ELECTRIC AND MAGNETIC FIELDS (1 Hz TO 100 kHz): IN THIS document, guidelines are established for the protection of humans exposed to electric and magnetic fields in the low-frequency range of the electromagnetic spectrum. The general principles for the development of ICNIRP guidelines are published elsewhere (ICNIRP 2002). For the purpose of this document, the low-frequency range extends from 1 Hz to 100 kHz. Above 100 kHz, effects such as heating need to be considered, which are covered by other ICNIRP guidelines. However, in the frequency range from 100 kHz up to approximately 10 MHz protection from both, low frequency effects on the nervous system as well as high frequency effects need to be considered depending on exposure conditions. Therefore, some guidance in this document is extended to 10 MHz to cover the nervous system effects in this frequency range. Guidelines for static magnetic fields have been issued in a separate document (ICNIRP 2009).
Guidelines applicable to movement-induced electric fields or time-varying magnetic fields up to 1 Hz will be published separately. This publication replaces the low-frequency part of the 1998 guidelines (ICNIRP 1998). ICNIRP is currently revising the guidelines for the high-frequency portion of the spectrum (above 100 kHz).) <|cite_end|>, RF electromagnetic fields (EMF) from $100$ kHz to $300$ GHz <|cite_start|> (Reference: Guidelines for Limiting Exposure to Electromagnetic Fields (100 kHz to 300 GHz).: Radiofrequency electromagnetic fields (EMFs) are used to enable a number of modern devices, including mobile telecommunications infrastructure and phones, Wi-Fi, and Bluetooth. As radiofrequency EMFs at sufficiently high power levels can adversely affect health, ICNIRP published Guidelines in 1998 for human exposure to time-varying EMFs up to 300 GHz, which included the radiofrequency EMF spectrum. Since that time, there has been a considerable body of science further addressing the relation between radiofrequency EMFs and adverse health outcomes, as well as significant developments in the technologies that use radiofrequency EMFs. Accordingly, ICNIRP has updated the radiofrequency EMF part of the 1998 Guidelines. This document presents these revised Guidelines, which provide protection for humans from exposure to EMFs from 100 kHz to 300 GHz.) <|cite_end|>, Infrared from $780$ nm to $1$ mm <|cite_start|> (Reference: {ICNIRP Guidelines on Limits of Exposure to Incoherent Visible and Infrared Radiation: ABSTRACT Guidelines for exposure to visible and infrared radiation were first proposed by ICNIRP in 1997. Related guidelines on limits of exposure to ultraviolet radiation (UVR) and laser radiation have been published. This document presents a revision of the guidelines for broadband incoherent radiation.) <|cite_end|> <|cite_start|> (Reference: ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines).) <|cite_end|>, visible spectrum from $380$ nm to $780$ nm <|cite_start|> (Reference: {ICNIRP Guidelines on Limits of Exposure to Incoherent Visible and Infrared Radiation: ABSTRACT Guidelines for exposure to visible and infrared radiation were first proposed by ICNIRP in 1997. Related guidelines on limits of exposure to ultraviolet radiation (UVR) and laser radiation have been published. This document presents a revision of the guidelines for broadband incoherent radiation.) <|cite_end|> <|cite_start|> (Reference: ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines).) 
<|cite_end|>and ultraviolet (UV) from $100$ nm to $400$ nm <|cite_start|> (Reference: Guidelines on limits of exposure to ultraviolet radiation of wavelengths between 180 nm and 400 nm (incoherent optical radiation).: Guidelines on limits of exposure to ultraviolet radiation of wavelengths between 180 nm and 400 nm (incoherent optical radiation)) <|cite_end|>. Fig.~\ref{Fig-ICNIRP} illustrates the ICNIRP classification on limits of exposure for the electromagnetic spectrum. \begin{table*}[t] \caption{Related standards on eye safety for laser products.} \label{table:standards} \centering \begin{tabular}{|l|l|l|} \hline \multicolumn{1}{|c|}{Year} & \multicolumn{1}{|c|}{Standard} & \multicolumn{1}{|c|}{Remark} \\ \hline \hline 2014 & IEC 60825-1 & General laser safety \\ \hline 2019 & IEC 60825-12 & Free space optical communications\\ \hline 2014 & ANSI Z-136.1 <|cite_start|> (Reference: Procedure for the computation of hazards from diffusely scattering surfaces under the Z136.1-2000 American National Standard for Safe Use of Lasers: The current national consensus standard for laser safety in the United States is the American National Standard for Safe Use of Lasers (ANSI Z136.1). The most recent standard, Z136.1-2000, incorporates a wealth of recent bioeffects data and established a number of new maximum permissible exposure (MPE) limits for laser safety. The standard also includes recent procedures for the computation of MPE values from large or extended diffusely scattering sources, which must be understood by health physicists, laser safety officers, and others in the field of occupational safety. Here we present the fourth in a series of tutorial articles, written to clarify laser safety analysis procedures under this standard, with an emphasis on the MPE computation methods related to extended sources, and the determination of nominal hazard zones.) <|cite_end|>& Safe use of lasers \\ \hline 2012 & ANSI Z-136.2 <|cite_start|> (Reference: American national standard for the safe use of optical fiber communications systems utilizing laser diodes and LED sources, ANSI Z136.1-1997: The 1989 American Standard for the Safe Use of Optical Fiber Communications Systems Utilizing Laser Diodes and LED Sources, ANSI Z136.2-1989, was recently updated to address changes in laser safety criteria and technology. The revised standard provides practical guidance for personnel installing and servicing optical fiber communications systems (OFCS). Such systems are, by definition, Class 1 except during service or installation and, therefore, the concept of “service group” (SG) instead of “class” is retained to as an indicator potential of risk. Factors such as the divergence of the energy emitted from the end of an optical fiber or connector, and anticipated realistic viewing conditions are included in the accessible emission limits (AELs) that define the different SGs. Consequently, the AELs and the measurement distances, limiting aperture diameters and exposure durations are different from the corresponding values used to classify conventional lasers and laser systems. Where appropriate, changes were made to harmonize with other standards, e.g., relaxation of the maximum permissible exposure (MPE) values in the IR. 
The rationale and innovative features of the revised standard are described below.The 1989 American Standard for the Safe Use of Optical Fiber Communications Systems Utilizing Laser Diodes and LED Sources, ANSI Z136.2-1989, was recently updated to address changes in laser safety criteria and technology. The revised standard provides practical guidance for personnel installing and servicing optical fiber communications systems (OFCS). Such systems are, by definition, Class 1 except during service or installation and, therefore, the concept of “service group” (SG) instead of “class” is retained to as an indicator potential of risk. Factors such as the divergence of the energy emitted from the end of an optical fiber or connector, and anticipated realistic viewing conditions are included in the accessible emission limits (AELs) that define the different SGs. Consequently, the AELs and the measurement distances, limiting aperture diameters and exposure durations are different from the corresponding values used to classify conventional lasers and laser systems. Where appropriate, changes we...) <|cite_end|>& \begin{tabular}[c]{@{}l@{}} Safe use of optical communications systems utilizing laser\\ diodes or LEDs, including end-to-end optical fiber based\\ links, fixed terrestrial point-to-point free-space links, or a\\ combination of both.\end{tabular} \\ \hline 2013 & ICNIRP <|cite_start|> (Reference: ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines).) <|cite_end|>& \begin{tabular}[c]{@{}l@{}}Maximum levels of exposure to laser radiation for wavelengths\\ between 180 nm and 1000 $\mu$m.\end{tabular}\\ \hline \end{tabular} \end{table*} \begin{figure}[!th] \begin{center} \begin{tikzpicture} \tikzstyle{startstop} = [rectangle, rounded corners, minimum height=1cm,text centered, draw=black, fill=white] \node (ICNIRP) [startstop,draw=white, fill=white]{}; \node (B0) [startstop,xshift=-3 cm,fill=green!40] {ICNIRP}; \node (B1) [startstop, below of=ICNIRP, yshift=-1cm, xshift=-10cm, text width=2.2cm, fill=green!20] {SMF (0 Hz) SEF (0 Hz) \newline <|cite_start|> (Reference: ICNIRP Guidelines GUIDELINES ON LIMITS OF EXPOSURE TO STATIC MAGNETIC FIELDS: THE RAPID development of technologies in industry and medicine using static magnetic fields has resulted in an increase in human exposure to these fields and has led to a number of scientific studies of their possible health effects. The World Health Organization (WHO) recently developed a health criteria document on static electric and magnetic fields within the Environmental Health Criteria Program (WHO 2006). The document contains a review of biological effects reported from exposure to static fields and, together with other recent publications [mainly International Commission on Non-Ionizing Radiation Protection (ICNIRP) 2003, McKinlay et al. 2004, and Noble et al. 
2005], serves as the scientific database for the development of the rationale for the guidelines described in the current document, which supersede those published by ICNIRP in 1994 (ICNIRP 1994).) <|cite_end|>}; \node (B2) [startstop, below of=ICNIRP, yshift=-1cm, xshift=-7 cm, text width=2.8cm, fill=green!20] {\hspace{1.1cm}{LF} \newline (1 Hz-100 kHz) \newline <|cite_start|> (Reference: GUIDELINES FOR LIMITING EXPOSURE TO TIME-VARYING ELECTRIC AND MAGNETIC FIELDS (1 Hz TO 100 kHz): IN THIS document, guidelines are established for the protection of humans exposed to electric and magnetic fields in the low-frequency range of the electromagnetic spectrum. The general principles for the development of ICNIRP guidelines are published elsewhere (ICNIRP 2002). For the purpose of this document, the low-frequency range extends from 1 Hz to 100 kHz. Above 100 kHz, effects such as heating need to be considered, which are covered by other ICNIRP guidelines. However, in the frequency range from 100 kHz up to approximately 10 MHz protection from both, low frequency effects on the nervous system as well as high frequency effects need to be considered depending on exposure conditions. Therefore, some guidance in this document is extended to 10 MHz to cover the nervous system effects in this frequency range. Guidelines for static magnetic fields have been issued in a separate document (ICNIRP 2009). Guidelines applicable to movement-induced electric fields or time-varying magnetic fields up to 1 Hz will be published separately. This publication replaces the low-frequency part of the 1998 guidelines (ICNIRP 1998). ICNIRP is currently revising the guidelines for the high-frequency portion of the spectrum (above 100 kHz).) <|cite_end|>}; \node (B3) [startstop, below of=ICNIRP, yshift=-1cm, xshift=-4 cm, text width=2.3cm, fill=green!20] {\hspace{0.4cm}{RF EMF} \newline (100 kHz- 300 GHz) \newline <|cite_start|> (Reference: Guidelines for Limiting Exposure to Electromagnetic Fields (100 kHz to 300 GHz).: Radiofrequency electromagnetic fields (EMFs) are used to enable a number of modern devices, including mobile telecommunications infrastructure and phones, Wi-Fi, and Bluetooth. As radiofrequency EMFs at sufficiently high power levels can adversely affect health, ICNIRP published Guidelines in 1998 for human exposure to time-varying EMFs up to 300 GHz, which included the radiofrequency EMF spectrum. Since that time, there has been a considerable body of science further addressing the relation between radiofrequency EMFs and adverse health outcomes, as well as significant developments in the technologies that use radiofrequency EMFs. Accordingly, ICNIRP has updated the radiofrequency EMF part of the 1998 Guidelines. This document presents these revised Guidelines, which provide protection for humans from exposure to EMFs from 100 kHz to 300 GHz.) <|cite_end|>}; \node (B4) [startstop, below of=ICNIRP, yshift=-1cm, xshift=-1.5 cm, text width=1.85cm, fill=green!20] {\hspace{0.2cm}{Infrared} \newline (780 nm- 1 mm)\newline <|cite_start|> (Reference: {ICNIRP Guidelines on Limits of Exposure to Incoherent Visible and Infrared Radiation: ABSTRACT Guidelines for exposure to visible and infrared radiation were first proposed by ICNIRP in 1997. Related guidelines on limits of exposure to ultraviolet radiation (UVR) and laser radiation have been published. This document presents a revision of the guidelines for broadband incoherent radiation.) 
<|cite_end|> <|cite_start|> (Reference: ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines).) <|cite_end|>}; \node (B5) [startstop, below of=ICNIRP, yshift=-1cm, xshift=0.78 cm, text width=1.8cm, fill=green!20] {\hspace{0.3cm}{Visible} \newline (380- \\780 nm) \newline <|cite_start|> (Reference: {ICNIRP Guidelines on Limits of Exposure to Incoherent Visible and Infrared Radiation: ABSTRACT Guidelines for exposure to visible and infrared radiation were first proposed by ICNIRP in 1997. Related guidelines on limits of exposure to ultraviolet radiation (UVR) and laser radiation have been published. This document presents a revision of the guidelines for broadband incoherent radiation.) <|cite_end|> <|cite_start|> (Reference: ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines).) <|cite_end|>}; \node (B6) [startstop, below of=ICNIRP, yshift=-1cm, xshift=3.05 cm, text width=1.8cm, fill=green!20] {\hspace{0.6cm}{UV} \newline (100-\\ 400 nm)\newline <|cite_start|> (Reference: Guidelines on limits of exposure to ultraviolet radiation of wavelengths between 180 nm and 400 nm (incoherent optical radiation).: Guidelines on limits of exposure to ultraviolet radiation of wavelengths between 180 nm and 400 nm (incoherent optical radiation)) <|cite_end|>}; \draw [->] (B0) -| (B1); \draw [->] (B0) -| (B2); \draw [->] (B0) -| (B3); \draw [->] (B0) -| (B4); \draw [->] (B0) -| (B5); \draw [->] (B0) -| (B6); \end{tikzpicture} \end{center} \caption{Classification of ICNIRP guidelines on limits of exposure.} \label{Fig-ICNIRP} \end{figure} Laser safety has been discussed in numerous research papers since the late 1960s, across applications such as industry, medicine, the military, and education <|cite_start|> (Reference: {Effects of Lasers on the Human Eye: In dealing with the relationship between human vision and lasers, this largely theoretical paper places particular emphasis upon the use of lasers within the normal operating range of the visual system, and upon the mechanisms by which laser radiation can cause threshold damage to the eye. Parallel but subordinate sections present some fundamentals of laser radiation, of the relevant aspects of the visual system, and of unit systems for the specification of laser output. A new approach to understanding laser radiation damage to the eye is developed by means of a model limited to conditions existing only at the threshold of damage.
It is shown that such threshold damage to the visual system is primarily due to the effects of heat alone, but that photochemical effects and acoustic shockwaves can potentially be a cause of the threshold damage that cannot be entirely rejected under all conditions. A theoretical estimate of retinal irradiance for threshold damage is made and shown to be consistent with empirical findings. A survey of empirically determined damage thresholds is presented. A valid method of computing retinal irradiance from a laser is given, and the direction and magnitude of errors in earlier formulations are pointed out.) <|cite_end|> <|cite_start|> (Reference: Experimental determination of maximum permissible exposure to laser radiation of 1.54 μ wavelength: An experimental determination was made of the maximum permissible, from the point of view of safety, exposure HMPE of the human eye to single laser pulses of 1.54 μ wavelength. For pulses of τ = 40 nsec duration the value of this exposure, measured in terms of the energy density, was HMPE=0.16 J/cm2 and for τ = 10−3 sec pulses it was HMPE=0.3 J/cm2.) <|cite_end|>
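The quoted exposure limits lend themselves to a simple worked check. The following is an illustrative sketch only, not a substitute for the measurement and classification procedures prescribed by the standards above: it compares the single-pulse fluence at the limiting aperture against a tabulated MPE. The $0.16$ J/cm$^2$ limit is the $1.54$ $\mu$m, $40$ ns value reported in the experimental study cited above, while the pulse energy and beam diameter are hypothetical values chosen for illustration.
\begin{verbatim}
import math

# Illustrative single-pulse MPE check at 1.54 um. H_MPE = 0.16 J/cm^2 is
# the 40 ns value reported in the cited experimental study; the pulse
# energy and beam diameter are assumed example values, not taken from
# any standard.
H_MPE = 0.16              # J/cm^2 for tau = 40 ns at 1.54 um
pulse_energy = 2e-3       # J (assumed)
beam_diameter = 0.5       # cm at the limiting aperture (assumed)

area = math.pi * (beam_diameter / 2.0) ** 2   # aperture area in cm^2
fluence = pulse_energy / area                 # J/cm^2

print(f"fluence = {fluence:.4f} J/cm^2 (MPE = {H_MPE} J/cm^2)")
print("within MPE" if fluence <= H_MPE else "exceeds MPE")
\end{verbatim}
A real hazard evaluation must additionally account for factors such as repetitive pulses, extended sources, and nominal hazard zones, as covered by the ANSI Z136.1 computation procedures discussed earlier.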
[ "<|reference_start|> 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications: <|reference_end|>", "<|reference_start|> 2013 Conference on Lasers and Electro-Optics Europe and International Quantum Electronics Conference CLEO EUROPE/IQEC: <|reference_end|>", "<|reference_start|> {Vertical-Cavity Surface-Emitting Lasers for Data Communication and Sensing: Vertical-cavity surface-emitting lasers (VCSELs) are the ideal optical sources for data communication and sensing. In data communication, large data rates combined with excellent energy efficiency and temperature stability have been achieved based on advanced device design and modulation formats. VCSELs are also promising sources for photonic integrated circuits due to their small footprint and low power consumption. Also, VCSELs are commonly used for a wide variety of applications in the consumer electronics market. These applications range from laser mice to three-dimensional (3D) sensing and imaging, including various 3D movement detections, such as gesture recognition or face recognition. Novel VCSEL types will include metastructures, exhibiting additional unique properties, of largest importance for next-generation data communication, sensing, and photonic integrated circuits. <|reference_end|>", "<|reference_start|> ICNIRP Guidelines on Limits of Exposure to Laser Radiation of Wavelengths between 180 nm and 1,000 μm.: Since the publication of the ICNIRP Revision of the Guidelines on Limits of Exposure to Laser Radiation (), further research supports amending the retinal thermal exposure limits in terms of spot size dependence, pulse duration dependence for short pulses and wavelength dependence between 1,200 nm and 1,400 nm. A detailed discussion of the rational for the changes is presented in the Appendix of these Guidelines (Rationale for updating the Guidelines). <|reference_end|>" ]
[ 13, 18, 24, 31 ]
{"<|cite_3|>": "ss-1001226", "<|cite_4|>": "ss-701706", "<|cite_5|>": "ss-1086786", "<|multi_cite_6_1|>": "ss-1067958", "<|multi_cite_6_2|>": "ss-1915221", "<|cite_7|>": "ss-711079", "<|cite_8|>": "ss-1073275", "<|cite_9|>": "ss-969847", "<|cite_10|>": "arxiv-125522", "<|multi_cite_11_1|>": "ss-1915222", "<|multi_cite_11_2|>": "ss-1544677", "<|cite_12|>": "ss-1915223", "<|multi_cite_13_1|>": "ss-1915224", "<|multi_cite_13_2|>": "ss-718624", "<|multi_cite_13_3|>": "ss-718624", "<|cite_14|>": "ss-1915225", "<|cite_15|>": "ss-1915226", "<|multi_cite_16_1|>": "ss-1915227", "<|multi_cite_16_2|>": "ss-1915228", "<|multi_cite_16_3|>": "ss-1915229", "<|multi_cite_17_2|>": "ss-1915230", "<|multi_cite_17_3|>": "ss-723884", "<|multi_cite_17_4|>": "ss-718624", "<|multi_cite_17_5|>": "ss-718624", "<|cite_18|>": "ss-1915231", "<|cite_30|>": "ss-1915232", "<|cite_31|>": "ss-1915233", "<|cite_32|>": "ss-1915234", "<|cite_33|>": "ss-1915235", "<|cite_34|>": "ss-1915234", "<|cite_38|>": "ss-1915234", "<|cite_44|>": "ss-1915236", "<|cite_45|>": "ss-1915237", "<|cite_47|>": "ss-1915238", "<|cite_48|>": "ss-809363", "<|multi_cite_49_1|>": "ss-1915239", "<|multi_cite_49_2|>": "ss-1915236", "<|multi_cite_50_1|>": "ss-1915239", "<|multi_cite_50_2|>": "ss-1915236", "<|multi_cite_51_1|>": "ss-1915240", "<|cite_54|>": "ss-1915232", "<|cite_55|>": "ss-1915233", "<|cite_56|>": "ss-1915236", "<|multi_cite_57_1|>": "ss-1915237", "<|cite_58|>": "ss-1915238", "<|cite_59|>": "ss-809363", "<|multi_cite_60_1|>": "ss-1915239", "<|multi_cite_60_2|>": "ss-1915236", "<|multi_cite_61_1|>": "ss-1915239", "<|multi_cite_61_2|>": "ss-1915236", "<|multi_cite_62_1|>": "ss-1915240", "<|multi_cite_63_1|>": "ss-1915241", "<|multi_cite_63_2|>": "ss-1915242", "<|multi_cite_63_3|>": "ss-1915243", "<|multi_cite_63_4|>": "ss-1915244", "<|multi_cite_63_5|>": "ss-1915245", "<|multi_cite_63_6|>": "ss-1915246", "<|multi_cite_63_7|>": "ss-1915247", "<|multi_cite_63_8|>": "ss-1915248", "<|multi_cite_63_9|>": "ss-1915249", "<|multi_cite_63_10|>": "ss-1915250", "<|multi_cite_63_11|>": "ss-1915251", "<|multi_cite_63_12|>": "ss-1915252", "<|multi_cite_63_13|>": "ss-1915253", "<|multi_cite_63_14|>": "ss-1915254", "<|cite_64|>": "ss-1915241", "<|cite_65|>": "ss-1915242", "<|cite_66|>": "ss-1915243", "<|cite_67|>": "ss-1915244", "<|cite_68|>": "ss-1915245", "<|cite_69|>": "ss-1915246", "<|cite_70|>": "ss-1915247", "<|cite_71|>": "ss-1915248", "<|cite_72|>": "ss-1915249", "<|cite_73|>": "ss-1915250", "<|cite_74|>": "ss-1915251", "<|cite_75|>": "ss-1915252", "<|cite_76|>": "ss-1915253", "<|cite_77|>": "ss-1915254", "<|cite_78|>": "ss-1915255", "<|cite_79|>": "ss-1915256", "<|cite_80|>": "ss-1915257", "<|cite_81|>": "ss-1915258", "<|cite_82|>": "ss-1915259"}
2205.06359
<|paper_start|> Title: Deep Learning for Prawn Farming: Forecasting and Anomaly Detection Abstract: Deep Learning for Prawn Farming: Forecasting and Anomaly Detection: We present a decision support system for managing water quality in prawn ponds. The system uses various sources of data and deep learning models in a novel way to provide 24-hour forecasting and anomaly detection of water quality parameters. It provides prawn farmers with tools to proactively avoid a poor growing environment, thereby optimising growth and reducing the risk of losing stock. This is a major shift for farmers who are forced to manage ponds by reactively correcting poor water quality conditions. To our knowledge, we are the first to apply Transformer as an anomaly detection model, and the first to apply anomaly detection in general to this aquaculture problem. Our technical contributions include adapting ForecastNet for multivariate data and adapting Transformer and the Attention model to incorporate weather forecast data into their decoders. We attain an average mean absolute percentage error of 12% for dissolved oxygen forecasts, and we demonstrate two anomaly detection case studies. The system is successfully running in its second year of deployment on a commercial prawn farm. Introduction The global trade of prawn (shrimp) is estimated at 28 billion US dollars per annum, and this market continues to grow at a rate faster than that of any other aquaculture species. The main challenge in prawn farming is to manage the highly variable water quality in prawn ponds to optimise prawn health and growth <|cite_start|> (Reference: Pond Aquaculture Water Quality Management: ) <|cite_end|>. Dissolved oxygen (DO) is generally accepted as the most important water quality parameter in aquaculture <|cite_start|> (Reference: Australian prawn farming manual: health management for profit: This manual is an easy to read guide to best management practises, with a focus on health management for Australian prawn farmers. Using the combined knowledge of Australia's leading scientists, prawn health specialists, prawn farmers and extensionists this manual captures what is known about managing prawn health and maximizing the farm's productivity and profitability. Funded by the Australian Center for International Agricultural Research and developed in collaboration with the Australian Prawn Farmers' Association, The Queensland Department of Primary Industries and Fisheries and the New South Wales Department of Primary Industries, the manual draws on extensive research conducted across the Australasia region. The contents reflect the knowledge of a wide array of internationally recognized researchers and the wisdom and research gained through the efforts of the Australian prawn farming industry.) <|cite_end|>. Excessively low values in the diurnal cycle of DO (commonly referred to as a ``DO crash'') can cause the prawn to experience hypoxia, anoxia, or death. An entire crop (typically 8 to 12 tons of prawn) can be lost in a matter of hours <|cite_start|> (Reference: Australian prawn farming manual: health management for profit: This manual is an easy to read guide to best management practises, with a focus on health management for Australian prawn farmers. Using the combined knowledge of Australia's leading scientists, prawn health specialists, prawn farmers and extensionists this manual captures what is known about managing prawn health and maximizing the farm's productivity and profitability.
Funded by the Australian Center for International Agricultural Research and developed in collaboration with the Australian Prawn Farmers' Association, The Queensland Department of Primary Industries and Fisheries and the New South Wales Department of Primary Industries, the manual draws on extensive research conducted across the Australasia region. The contents reflect the knowledge of a wide array of internationally recognized researchers and the wisdom and research gained through the efforts of the Australian prawn farming industry.) <|cite_end|>. Farmers typically monitor water quality parameters using sensors which, especially under continuous monitoring conditions, can be subject to high levels of biofouling and harsh conditions. Biofouling can reduce a sensor's accuracy or damage it (such sensors can cost over US\$25,000). Maintaining water quality sensors can thus be a challenging task. This work is impactful as it provides forecasting and anomaly detection tools to assist a farmer in taking a \textit{proactive} pond management approach rather than a \textit{reactive} approach of correcting poor conditions. Forecasting gives an indication of how the temporal dynamics of a variable are expected to evolve into the future. Anomaly detection provides a means to identify any changes in the dynamics of a variable that are unusual, such as DO crashes and biofouling. With better control over water quality, animal stress can be reduced to improve growth, survivability, and consequently, production <|cite_start|> (Reference: Pond Aquaculture Water Quality Management: ) <|cite_end|>. The novelty of this study lies in the way we combine forecasting and anomaly detection to provide decision support for this particular domain. Our novel technical contributions include (1) applying Transformer <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> for anomaly detection for the first time in the literature, (2) extending ForecastNet <|cite_start|> (Reference: ForecastNet: A Time-Variant Deep Feed-Forward Neural Network Architecture for Multi-Step-Ahead Time-Series Forecasting: Recurrent and convolutional neural networks are the most common architectures used for time series forecasting in deep learning literature. These networks use parameter sharing by repeating a set of fixed architectures with fixed parameters over time or space.
The result is that the overall architecture is time-invariant (shift-invariant in the spatial domain) or stationary. We argue that time-invariance can reduce the capacity to perform multi-step-ahead forecasting, where modelling the dynamics at a range of scales and resolutions is required. We propose ForecastNet which uses a deep feed-forward architecture to provide a time-variant model. An additional novelty of ForecastNet is interleaved outputs, which we show assist in mitigating vanishing gradients. ForecastNet is demonstrated to outperform statistical and deep learning benchmark models on several datasets.) <|cite_end|> into a multivariate model, and (3) proposing a novel approach to incorporate weather forecast data into the decoders of the Transformer <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> and Attention <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|> models in a forecasting context. This work additionally provides insight into the aquaculture domain and its challenges. 
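To make contribution (3) concrete, the following is a minimal sketch of one way in which known-future weather covariates can be fed into a standard Transformer decoder: the weather forecast covering the prediction horizon is concatenated with the lagged target sequence before the decoder input projection. The layer sizes, tensor shapes, and concatenation strategy are illustrative assumptions rather than the exact architecture used in this work, and the same idea carries over to the decoder of the Attention model.
\begin{verbatim}
import torch
import torch.nn as nn

class WeatherConditionedTransformer(nn.Module):
    """Sketch of conditioning a Transformer decoder on weather forecasts
    by concatenating known-future covariates with the decoder inputs.
    All sizes and the concatenation strategy are illustrative
    assumptions, not the paper's exact architecture."""

    def __init__(self, n_targets=4, n_weather=3, d_model=64):
        super().__init__()
        self.enc_in = nn.Linear(n_targets + n_weather, d_model)
        self.dec_in = nn.Linear(n_targets + n_weather, d_model)
        self.core = nn.Transformer(d_model=d_model, nhead=4,
                                   num_encoder_layers=2,
                                   num_decoder_layers=2,
                                   batch_first=True)
        self.head = nn.Linear(d_model, n_targets)

    def forward(self, past_obs, past_weather, dec_targets, weather_fc):
        # Encoder sees past sensor readings and past weather.
        src = self.enc_in(torch.cat([past_obs, past_weather], dim=-1))
        # Decoder sees lagged targets concatenated with the weather
        # forecast over the horizon, which is known at prediction time.
        tgt = self.dec_in(torch.cat([dec_targets, weather_fc], dim=-1))
        mask = self.core.generate_square_subsequent_mask(tgt.size(1))
        return self.head(self.core(src, tgt, tgt_mask=mask))

# Hypothetical shapes: 48 past steps, 24-step (24-hour) horizon.
model = WeatherConditionedTransformer()
out = model(torch.randn(8, 48, 4), torch.randn(8, 48, 3),
            torch.randn(8, 24, 4), torch.randn(8, 24, 3))
print(out.shape)  # torch.Size([8, 24, 4])
\end{verbatim}
The point being illustrated is that, unlike the targets, weather forecasts are available over the entire horizon at prediction time, so they can condition every decoder step directly.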
Although this work is demonstrated with prawn farming, it is applicable to other domains, such as other aquaculture industries (e.g., fish, molluscs, and other crustaceans) and the monitoring of reservoirs, lakes, rivers, coastal waters, and sewers. Related Work \label{sec:relatedWork} There are many challenges in precision agriculture, which have attracted various decision support tools, of which water quality decision tools are the most common. These tools may provide decision support for determining optimal ranges of water quality <|cite_start|> (Reference: 2019 5th International Conference on New Media Studies (CONMEDIA): ) <|cite_end|> or provide sensing infrastructure <|cite_start|> (Reference: 2017 Wireless Telecommunications Symposium, WTS 2017, Chicago, IL, USA, April 26-28, 2017: ) <|cite_end|>. Various physical or chemical models have also been developed, often for scenario analysis <|cite_start|> (Reference: AquaFarm: simulation and decision support for aquaculture facility design and management planning: ) <|cite_end|>. Various water quality forecasting approaches have also been developed <|cite_start|> (Reference: State Space Models for Forecasting Water Quality Variables: An Application in Aquaculture Prawn Farming: A novel approach to deterministic modelling of diurnal water quality parameters in aquaculture prawn ponds is presented. The purpose is to provide assistance to prawn pond farmers in monitoring pond water quality with limited data. Obtaining sufficient water quality data is generally a challenge in commercial prawn farming applications. Farmers can sustain large losses in their crop if water quality is not well managed. The model presented provides a means for modelling and forecasting various water quality parameters. It is inspired by data dynamics and does not rely on physical ecosystem modelling. The model is constructed within the Bayesian filtering framework. The Kalman filter and the unscented Kalman filer are applied for inference. The results demonstrate generalisability to both variables and environments. The ability for short term forecasting with mean absolute percentage errors between 0.5% and 11% is demonstrated.) <|cite_end|>. Anomaly detection has been applied in water quality applications (e.g., <|cite_start|> (Reference: A survey of machine learning methods applied to anomaly detection on drinking-water quality data: ABSTRACT Traditional machine learning (ML) techniques such as support vector machine, logistic regression, and artificial neural network have been applied most frequently in water quality anomaly detection tasks. This paper presents a review of progress and advances made in detecting anomalies in water quality data using ML techniques. The review encompasses both traditional ML and deep learning (DL) approaches. Our findings indicate that: 1) Generally, DL approaches outperform traditional ML techniques in terms of feature learning accuracy and fewer false positive rates. However, it is difficult to make a fair comparison between studies because of different datasets, models and parameters employed. 2) We notice that despite advances made and the advantages of the extreme learning machine (ELM), its application is sparsely exploited in this domain. This study also proposes a hybrid DL-ELM framework as a possible solution that could be investigated further and used to detect anomalies in water quality data.) <|cite_end|>). However, to our knowledge, it has not been applied to aquaculture.
Our system makes use of deep learning models for forecasting and anomaly detection. Temporal deep learning models are usually based on recurrent neural networks (RNNs) and convolutional neural networks (CNNs). These include the sequence-to-sequence (seq2seq) model <|cite_start|> (Reference: Sequence to Sequence Learning with Neural Networks: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.) <|cite_end|>, the attention model <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) 
<|cite_end|>, and DeepAnT <|cite_start|> (Reference: DeepAnT: A Deep Learning Approach for Unsupervised Anomaly Detection in Time Series: Traditional distance and density-based anomaly detection techniques are unable to detect periodic and seasonality related point anomalies which occur commonly in streaming data, leaving a big gap in time series anomaly detection in the current era of the IoT. To address this problem, we present a novel deep learning-based anomaly detection approach (DeepAnT) for time series data, which is equally applicable to the non-streaming cases. DeepAnT is capable of detecting a wide range of anomalies, i.e., point anomalies, contextual anomalies, and discords in time series data. In contrast to the anomaly detection methods where anomalies are learned, DeepAnT uses unlabeled data to capture and learn the data distribution that is used to forecast the normal behavior of a time series. DeepAnT consists of two modules: time series predictor and anomaly detector. The time series predictor module uses deep convolutional neural network (CNN) to predict the next time stamp on the defined horizon. This module takes a window of time series (used as a context) and attempts to predict the next time stamp. The predicted value is then passed to the anomaly detector module, which is responsible for tagging the corresponding time stamp as normal or abnormal. DeepAnT can be trained even without removing the anomalies from the given data set. Generally, in deep learning-based approaches, a lot of data are required to train a model. Whereas in DeepAnT, a model can be trained on relatively small data set while achieving good generalization capabilities due to the effective parameter sharing of the CNN. As the anomaly detection in DeepAnT is unsupervised, it does not rely on anomaly labels at the time of model generation. Therefore, this approach can be directly applied to real-life scenarios where it is practically impossible to label a big stream of data coming from heterogeneous sensors comprising of both normal as well as anomalous points. We have performed a detailed evaluation of 15 algorithms on 10 anomaly detection benchmarks, which contain a total of 433 real and synthetic time series. Experiments show that DeepAnT outperforms the state-of-the-art anomaly detection methods in most of the cases, while performing on par with others.) <|cite_end|>. However, there are also models that are not based on RNNs or CNNs, such as the Transformer model <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> and ForecastNet <|cite_start|> (Reference: ForecastNet: A Time-Variant Deep Feed-Forward Neural Network Architecture for Multi-Step-Ahead Time-Series Forecasting: Recurrent and convolutional neural networks are the most common architectures used for time series forecasting in deep learning literature. These networks use parameter sharing by repeating a set of fixed architectures with fixed parameters over time or space. The result is that the overall architecture is time-invariant (shift-invariant in the spatial domain) or stationary. We argue that time-invariance can reduce the capacity to perform multi-step-ahead forecasting, where modelling the dynamics at a range of scales and resolutions is required. We propose ForecastNet which uses a deep feed-forward architecture to provide a time-variant model. An additional novelty of ForecastNet is interleaved outputs, which we show assist in mitigating vanishing gradients. ForecastNet is demonstrated to outperform statistical and deep learning benchmark models on several datasets.) <|cite_end|>. To our knowledge, Transformer has not been applied for anomaly detection before. Within the taxonomy of a recent review on anomaly detection <|cite_start|> (Reference: Deep Learning for Anomaly Detection: A Review: Anomaly detection, a.k.a. outlier detection or novelty detection, has been a lasting yet active research area in various research communities for several decades. There are still some unique problem complexities and challenges that require advanced approaches. In recent years, deep learning enabled anomaly detection, i.e., deep anomaly detection, has emerged as a critical direction. This paper surveys the research of deep anomaly detection with a comprehensive taxonomy, covering advancements in three high-level categories and 11 fine-grained categories of the methods. We review their key intuitions, objective functions, underlying assumptions, advantages and disadvantages, and discuss how they address the aforementioned challenges. We further discuss a set of possible future opportunities and new perspectives on addressing the challenges.) <|cite_end|>, the approaches we consider fall under ``generic normality feature learning'': a model of normal behaviour is learned, and deviations from its predictions are flagged as anomalies (a minimal sketch of this scheme is given below).
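A minimal numerical sketch of this predict-then-threshold scheme follows. The seasonal-persistence forecaster and the fixed four-sigma threshold are illustrative stand-ins, not the deep forecasting models or the calibration used in the deployed system.
\begin{verbatim}
import numpy as np

# Minimal sketch of forecast-based anomaly detection: predict, then
# threshold the forecast error. The forecaster and threshold here are
# illustrative assumptions only.
rng = np.random.default_rng(0)
t = np.arange(1000)
period = 96  # e.g., 15-minute samples over a 24-hour diurnal cycle
do = 7.0 + 2.0 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.1, t.size)
do[700:720] -= 3.0  # injected "DO crash"

forecast = np.roll(do, period)                 # value one cycle earlier
err = np.abs(do[period:] - forecast[period:])  # forecast error
threshold = err[:500].mean() + 4 * err[:500].std()  # fit on normal data
anomalies = np.where(err > threshold)[0] + period
print(anomalies[:5])  # first detections fall inside the injected crash
\end{verbatim}
<|paper_end|>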
[ "<|reference_start|> Australian prawn farming manual: health management for profit: This manual is an easy to read guide to best management practises, with a focus on health management for Australian prawn farmers. Using the combined knowledge of Australia's leading scientists, prawn health specialists, prawn farmers and extensionists this manual captures what is known about managing prawn health and maximizing the farm's productivity and profitability. \n \nFunded by the Australian Center for International Agricultural Research and developed in collaboration with the Australian Prawn Farmers' Association, The Queensland Department of Primary Industries and Fisheries and the \nNew South Wales Department of Primary Industries, the manual draws on extensive research conducted across the Australasia region. The contents reflect the knowledge of a wide array of internationally recognized researchers and the wisdom and research gained through the efforts of the Australian prawn farming industry. <|reference_end|>", "<|reference_start|> AquaFarm: simulation and decision support for aquaculture facility design and management planning: <|reference_end|>", "<|reference_start|> Sequence to Sequence Learning with Neural Networks: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier. <|reference_end|>", "<|reference_start|> Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. 
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data. <|reference_end|>" ]
[ 2, 10, 13, 16 ]
{"<|cite_2|>": "ss-2057222", "<|cite_3|>": "ss-2057223", "<|cite_4|>": "ss-2057223", "<|cite_5|>": "ss-2057222", "<|cite_6|>": "arxiv-126595", "<|cite_7|>": "arxiv-247538", "<|cite_8|>": "arxiv-126595", "<|cite_9|>": "arxiv-65503", "<|cite_10|>": "ss-1204630", "<|cite_11|>": "ss-2057224", "<|cite_12|>": "ss-2057225", "<|cite_13|>": "ss-2057226", "<|cite_14|>": "ss-2057227", "<|cite_15|>": "arxiv-65933", "<|cite_16|>": "arxiv-65503", "<|cite_17|>": "ss-1183948", "<|cite_18|>": "arxiv-126595", "<|cite_19|>": "arxiv-247538", "<|cite_20|>": "arxiv-276515"}
1709.01630
<|paper_start|> Title: Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention Abstract: Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention: We present a first-person method for cooperative basketball intention prediction: we predict with whom the camera wearer will cooperate in the near future from unlabeled first-person images. This is a challenging task that requires inferring the camera wearer's visual attention and decoding the social cues of other players. Our key observation is that a first-person view provides strong cues to infer the camera wearer's momentary visual attention and his/her intentions. We exploit this observation by proposing a new cross-model EgoSupervision learning scheme that allows us to predict with whom the camera wearer will cooperate in the near future, without using manually annotated intention labels. Our cross-model EgoSupervision operates by transforming the outputs of a pretrained pose-estimation network into pseudo ground truth labels, which are then used as a supervisory signal to train a new network for a cooperative intention task. We evaluate our method and show that it achieves similar or even better accuracy than fully supervised methods. Introduction Consider a dynamic scene such as Figure~\ref{task_fig}, where you, as the camera wearer, are playing basketball. You need to decide with whom you will cooperate to maximize the overall benefit for your team. Looking ahead at your teammates, you make a conscious decision and then 2-3 seconds afterwards you perform a cooperative action such as passing the ball. In a team sport such as basketball, effective cooperation among teammates is essential. Thus, in this paper, we aim to investigate whether we can use a single first-person image to infer with whom the camera wearer will cooperate 2-3 seconds from now. This is a challenging task because predicting the camera wearer's cooperative intention requires 1) inferring his/her momentary visual attention, 2) decoding dominant social signals expressed by other players who want to cooperate, and 3) knowing who your teammates are when the players are not wearing any team-specific uniforms. \begin{figure} \centering \includegraphics[width=1\linewidth]{./paper_figures/task_figure/task_fig.pdf} \captionsetup{labelformat=default} \setcounter{figure}{0} \caption{With whom will I cooperate after 2-3 seconds? Given an \textbf{unlabeled} set of first-person basketball images, we predict with whom the camera wearer will cooperate 2 seconds from now. We refer to this problem as cooperative basketball intention prediction.\vspace{-0.6cm}} \label{task_fig} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{./paper_figures/arch/train_arch5.pdf} \end{center} \vspace{-0.4cm} \caption{The illustration of our cross-model EgoSupervision training scheme. As our base model we use a multi-person pose estimation network from <|cite_start|> (Reference: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields: We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image.
The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.) <|cite_end|>, which predicts 1) pose estimates of all people in a given first-person image and 2) the bounding boxes around each person. Next, we feed these outputs to an EgoTransformer, which transforms them such that the transformed output would approximately capture the camera wearer's attention and intentions. Then, we use this transformed output as a supervisory signal to train the network for our cooperative basketball intention task.\vspace{-0.5cm}} \label{fig:train_arch} \end{figure*} To make this problem even more challenging, we ask: ``Can we infer cooperative basketball intention without manually labeled first-person data?'' Building an unsupervised learning framework is important because manually collecting basketball intention labels is a costly and time-consuming process. In the context of a cooperative basketball intention task, an annotator needs to have highly specific basketball domain knowledge. Such a requirement limits the scalability of the annotation process because such annotators are difficult to find and costly to employ. However, we conjecture that we can learn cooperative basketball intention in an unsupervised fashion by exploiting the signal provided by the first-person camera. What people see reflects how they are going to act. A first-person camera placed on a basketball player's head allows us to indirectly tap into that person's mind and reason about his/her internal state based on what the camera wearer sees. To do so, we propose a novel cross-model EgoSupervision learning scheme, which allows us to learn the camera wearer's intention without manually labeled intention data. Our cross-model EgoSupervision scheme works as follows. First, we transform the output of a pretrained pose-estimation network such that it would approximately reflect the camera wearer's internal state such as his/her visual attention and intentions. Then, we use this transformed output as a supervisory signal to train another network for our cooperative basketball intention task. We show that such a learning scheme allows us to train our model without manually annotated intention labels, and achieve similar or even better results than fully supervised methods. Related Work \textbf{First-Person Vision.} In the past, most first-person methods have focused on first-person object detection <|cite_start|> (Reference: Predicting Important Objects for Egocentric Video Summarization: We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video---such as the nearness to hands, gaze, and frequency of occurrence---and learn a regressor to predict the relative importance of any new region based on these cues.
Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.) <|cite_end|> <|cite_start|> (Reference: {You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video: We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publically available .) <|cite_end|> <|cite_start|> (Reference: Figure-ground Segmentation Improves Handled Object Recognition in Egocentric Video: Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. 
We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33% to 60%, and that of a latent-HOG system from 64% to 86%.) <|cite_end|> <|cite_start|> (Reference: {Learning to recognize objects in egocentric activities: This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.) <|cite_end|> <|cite_start|> (Reference: First-person action-object detection with egonet: Unlike traditional third-person cameras mounted on robots, a first-person camera, captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power, by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions.) <|cite_end|>, or activity recognition <|cite_start|> (Reference: Action Recognition in the Presence of One Egocentric and Multiple Static Cameras: ) <|cite_end|> <|cite_start|> (Reference: {First Person Action Recognition Using Deep Learned Descriptors: We focus on the problem of wearer's action recognition in first person a.k.a. egocentric videos. This problem is more challenging than third person activity recognition due to unavailability of wearer's pose and sharp movements in the videos caused by the natural head motion of the wearer. Carefully crafted features based on hands and objects cues for the problem have been shown to be successful for limited targeted datasets. We propose convolutional neural networks (CNNs) for end to end learning and classification of wearer's actions. 
The proposed network makes use of egocentric cues by capturing hand pose, head motion and saliency map. It is compact. It can also be trained from relatively small number of labeled egocentric videos that are available. We show that the proposed network can generalize and give state of the art performance on various disparate egocentric action datasets.) <|cite_end|> <|cite_start|> (Reference: Detecting activities of daily living in first-person camera views: We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”) <|cite_end|> <|cite_start|> (Reference: Delving into egocentric actions: We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.) <|cite_end|> <|cite_start|> (Reference: Going Deeper into First-Person Activity Recognition: We bring together ideas from recent work on feature design for egocentric action recognition under one framework by exploring the use of deep convolutional neural networks (CNN). Recent work has shown that features such as hand appearance, object attributes, local hand motion and camera ego-motion are important for characterizing first-person actions. To integrate these ideas under one framework, we propose a twin stream network architecture, where one stream analyzes appearance information and the other stream analyzes motion information. Our appearance stream encodes prior knowledge of the egocentric paradigm by explicitly training the network to segment hands and localize objects. By visualizing certain neuron activation of our network, we show that our proposed architecture naturally learns features that capture object attributes and hand-object configurations. 
Our extensive experiments on benchmark egocentric action datasets show that our deep architecture enables recognition rates that significantly outperform state-of-the-art techniques -- an average $6.6\%$ increase in accuracy over all datasets. Furthermore, by learning to recognize objects, actions and activities jointly, the performance of individual recognition tasks also increase by $30\%$ (actions) and $14\%$ (objects). We also include the results of extensive ablative analysis to highlight the importance of network design decisions.) <|cite_end|> <|cite_start|> (Reference: Understanding Egocentric Activities: We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.) <|cite_end|>. Several methods have employed first-person videos for video summarization <|cite_start|> (Reference: Predicting Important Objects for Egocentric Video Summarization: We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video---such as the nearness to hands, gaze, and frequency of occurrence---and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.) <|cite_end|> <|cite_start|> (Reference: {Story-Driven Summarization for Egocentric Video: We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event "leads to" another---which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.) <|cite_end|>, while recently the work in <|cite_start|> (Reference: Detecting Engagement in Egocentric Video: In a wearable camera video, we see what the camera wearer sees. While this makes it easy to know roughly what he chose to look at, it does not immediately reveal when he was engaged with the environment. Specifically, at what moments did his focus linger, as he paused to gather more information about something he saw? Knowing this answer would benefit various applications in video summarization and augmented reality, yet prior work focuses solely on the "what" question (estimating saliency, gaze) without considering the "when" (engagement). We propose a learning-based approach that uses long-term egomotion cues to detect engagement, specifically in browsing scenarios where one frequently takes in new visual information (e.g., shopping, touring). We introduce a large, richly annotated dataset for ego-engagement that is the first of its kind. Our approach outperforms a wide array of existing methods. We show engagement can be detected well independent of both scene appearance and the camera wearer's identity.) <|cite_end|> proposed to detect the camera wearer's engagement from first-person videos. The work in <|cite_start|> (Reference: Social Interactions: A First-Person Perspective: This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of a social event, like a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The roles and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first-person can provide additional useful cues as to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured from multiple days of experience in amusement parks.) <|cite_end|> used a group of people wearing first-person cameras to infer their social interactions such as monologues, dialogues, or discussions. The method in <|cite_start|> (Reference: Force from Motion: Decoding Physical Sensation in a First Person Video: A first-person video can generate powerful physical sensations of action in an observer. In this paper, we focus on a problem of Force from Motion - decoding the sensation of 1) passive forces such as the gravity, 2) the physical scale of the motion (speed) and space, and 3) active forces exerted by the observer such as pedaling a bike or banking on a ski turn. The sensation of gravity can be observed in a natural image. We learn this image cue for predicting a gravity direction in a 2D image and integrate the prediction across images to estimate the 3D gravity direction using structure from motion. The sense of physical scale is revealed to us when the body is in a dynamically balanced state. We compute the unknown physical scale of 3D reconstructed camera motion by leveraging the torque equilibrium at a banked turn that relates the centripetal force, gravity, and the body leaning angle. The active force and torque governs 3D egomotion through the physics of rigid body dynamics. Using an inverse dynamics optimization, we directly minimize 2D reprojection error (in video) with respect to 3D world structure, active forces, and additional passive forces such as air drag and friction force. We use structure from motion with the physical scale and gravity direction as an initialization of our bundle adjustment for force estimation. Our method shows quantitatively equivalent reconstruction comparing to IMU measurements in terms of gravity and scale recovery and outperforms method based on 2D optical flow for an active action recognition task. We apply our method to first person videos of mountain biking, urban bike racing, skiing, speedflying with parachute, and wingsuit flying where inertial measurements are not accessible.) <|cite_end|> predicted physical forces experienced by the camera wearer, while the work in <|cite_start|> (Reference: Fast Unsupervised Ego-action Learning for First-person Sports Videos: Portable high-quality sports cameras (e.g. head or helmet mounted) built for recording dynamic first-person video footage are becoming a common item among many sports enthusiasts. We address the novel task of discovering first-person action categories (which we call ego-actions) which can be useful for such tasks as video indexing and retrieval. In order to learn ego-action categories, we investigate the use of motion-based histograms and unsupervised learning algorithms to quickly cluster video content. Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented and the number of ego-action categories are unknown. In our proposed framework we show that a stacked Dirichlet process mixture model can be used to automatically learn a motion histogram codebook and the set of ego-action categories. We quantitatively evaluate our approach on both in-house and public YouTube videos and demonstrate robust ego-action categorization across several sports genres. Comparative analysis shows that our approach outperforms other state-of-the-art topic models with respect to both classification accuracy and computational speed. Preliminary results indicate that on average, the categorical content of a 10 minute video sequence can be indexed in under 5 seconds.) <|cite_end|> recognized the activities performed in various extreme sports. Several recent methods <|cite_start|> (Reference: Egocentric future localization: We present a method for future localization: to predict plausible future trajectories of ego-motion in egocentric stereo images. Our paths avoid obstacles, move between objects, even turn around a corner into space behind objects. As a byproduct of the predicted trajectories, we discover the empty space occluded by foreground objects. One key innovation is the creation of an EgoRetinal map, akin to an illustrated tourist map, that 'rearranges' pixels taking into account depth information, the ground plane, and body motion direction, so that it allows motion planning and perception of objects on one image space. We learn to plan trajectories directly on this EgoRetinal map using first person experience of walking around in a variety of scenes. In a testing phase, given a novel scene, we find multiple hypotheses of future trajectories from the learned experience. We refine them by minimizing a cost function that describes compatibility between the obstacles in the EgoRetinal map and trajectories. We quantitatively evaluate our method to show predictive validity and apply to various real world daily activities including walking, shopping, and social interactions.) <|cite_end|> <|cite_start|> (Reference: Predicting behaviors of basketball players from first person videos: This paper presents a method to predict the future movements (location and gaze direction) of basketball players as a whole from their first person videos. The predicted behaviors reflect an individual physical space that affords to take the next actions while conforming to social behaviors by engaging to joint attention. Our key innovation is to use the 3D reconstruction of multiple first person cameras to automatically annotate each other's visual semantics of social configurations. We leverage two learning signals uniquely embedded in first person videos. Individually, a first person video records the visual semantics of a spatial and social layout around a person that allows associating with past similar situations. Collectively, first person videos follow joint attention that can link the individuals to a group. We learn the egocentric visual semantics of group movements using a Siamese neural network to retrieve future trajectories. We consolidate the retrieved trajectories from all players by maximizing a measure of social compatibility---the gaze alignment towards joint attention predicted by their social formation, where the dynamics of joint attention is learned by a long-term recurrent convolutional network. This allows us to characterize which social configuration is more plausible and predict future group trajectories.) <|cite_end|> also predicted the camera wearer's movement trajectories. Finally, first-person cameras have also been used for various robotics applications <|cite_start|> (Reference: Robot-centric activity prediction from first-person videos: What will they do to me?: In this paper, we present a core technology to enable robot recognition of human activities during human-robot interactions. In particular, we propose a methodology for early recognition of activities from robot-centric videos (i.e., first-person videos) obtained from a robot's viewpoint during its interaction with humans. Early recognition, which is also known as activity prediction, is an ability to infer an ongoing activity at its early stage. We present an algorithm to recognize human activities targeting the camera from streaming videos, enabling the robot to predict intended activities of the interacting person as early as possible and take fast reactions to such activities (e.g., avoiding harmful events targeting itself before they actually occur). We introduce the novel concept of 'onset' that efficiently summarizes pre-activity observations, and design a recognition approach to consider event history in addition to visual features from first-person videos. We propose to represent an onset using a cascade histogram of time series gradients, and we describe a novel algorithmic setup to take advantage of such onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better/earlier recognition of human activities from first-person videos collected with a robot. Categories and Subject Descriptors I.2.10 [Artificial Intelligence]: Vision and Scene Understanding–video analysis; I.4.8 [Image Processing and Computer Vision]: Scene Analysis-motion; I.2.9 [Artificial Intelligence]: Robotics–sensors) <|cite_end|> <|cite_start|> (Reference: Building unified human descriptors for multi-type activity recognition: Activity recognition is an important as well as a difficult task in computer vision. In the past years many types of activities -- single actions, two persons interactions or ego-centric activities to name a few -- have been analyzed. Nevertheless, researchers have always treated such types of activities separately. In this paper, we propose a new problem: labeling a complex scene where activities of different types happen in sequence or concurrently. We first present a new unified descriptor, called Relation History Image (RHI), which can be extracted from all the activity types we are interested in. We then propose a new method to recognize the activities and at the same time associate them to the humans who are performing them. Next, we evaluate our approach on a newly recorded dataset which is representative of the problem we are considering. Finally, we show the efficacy of the RHI descriptor on publicly available datasets performing extensive evaluations.) <|cite_end|>. In comparison to these prior methods, we propose a novel cooperative basketball intention prediction task that allows us to study cooperative behaviors of basketball players. Furthermore, we note that these prior first-person methods (except <|cite_start|> (Reference: Fast Unsupervised Ego-action Learning for First-person Sports Videos: Portable high-quality sports cameras (e.g. head or helmet mounted) built for recording dynamic first-person video footage are becoming a common item among many sports enthusiasts. We address the novel task of discovering first-person action categories (which we call ego-actions) which can be useful for such tasks as video indexing and retrieval. In order to learn ego-action categories, we investigate the use of motion-based histograms and unsupervised learning algorithms to quickly cluster video content. Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented and the number of ego-action categories are unknown. In our proposed framework we show that a stacked Dirichlet process mixture model can be used to automatically learn a motion histogram codebook and the set of ego-action categories. We quantitatively evaluate our approach on both in-house and public YouTube videos and demonstrate robust ego-action categorization across several sports genres. Comparative analysis shows that our approach outperforms other state-of-the-art topic models with respect to both classification accuracy and computational speed. Preliminary results indicate that on average, the categorical content of a 10 minute video sequence can be indexed in under 5 seconds.) <|cite_end|>) rely on manually annotated labels for their respective tasks, whether object detection, activity recognition, intention prediction, or some other task. Instead, in this work, we demonstrate that we can solve a challenging cooperative basketball intention prediction task without using annotated first-person intention labels, which are time-consuming and costly to obtain. \textbf{Knowledge Transfer across Models.} With the introduction of supervised CNN models <|cite_start|> (Reference: ImageNet classification with deep convolutional neural networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|>, there has been a lot of interest in adapting a generic set of features <|cite_start|> (Reference: DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition: We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.) <|cite_end|> for different tasks at hand <|cite_start|> (Reference: LSDA: Large Scale Detection Through Adaptation: A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images.
Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at) <|cite_end|> <|cite_start|> (Reference: DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection: Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection.) <|cite_end|> <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. 
Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|> <|cite_start|> (Reference: Holistically-Nested Edge Detection: We develop a new edge detection algorithm that tackles two important issues in this long-standing vision problem: (1) holistic image training and prediction; and (2) multi-scale and multi-level feature learning. Our proposed method, holistically-nested edge detection (HED), performs image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than some recent CNN-based edge detection algorithms.) <|cite_end|> <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|> <|cite_start|> (Reference: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.) <|cite_end|>. Recently, generic image classification features were successfully used for tasks such as edge detection <|cite_start|> (Reference: DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection: Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection. We achieve this goal by means of a multi-scale deep network that consists of five convolutional layers and a bifurcated fully-connected sub-network. The section from the input layer to the fifth convolutional layer is fixed and directly lifted from a pre-trained network optimized over a large-scale object classification task. This section of the network is applied to four different scales of the image input. These four parallel and identical streams are then attached to a bifurcated sub-network consisting of two independently-trained branches. One branch learns to predict the contour likelihood (with a classification objective) whereas the other branch is trained to learn the fraction of human labelers agreeing about the contour presence at a given point (with a regression criterion). We show that without any feature engineering our multi-scale deep learning approach achieves state-of-the-art results in contour detection.) <|cite_end|> <|cite_start|> (Reference: Holistically-Nested Edge Detection: We develop a new edge detection algorithm that tackles two important issues in this long-standing vision problem: (1) holistic image training and prediction; and (2) multi-scale and multi-level feature learning. Our proposed method, holistically-nested edge detection (HED), performs image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection.
We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than some recent CNN-based edge detection algorithms.) <|cite_end|>, object detection <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|> <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|> <|cite_start|> (Reference: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. 
We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.) <|cite_end|>, and semantic segmentation <|cite_start|> (Reference: Semantic Segmentation with Boundary Neural Fields: The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use color-based pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively.) <|cite_end|> <|cite_start|> (Reference: Efficient piecewise training of deep structured models for semantic segmentation: Recent advances in semantic image segmentation have mostly been achieved by training deep convolutional neural networks (CNNs). We show how to improve semantic segmentation through the use of contextual information; specifically, we explore `patch-patch' context between image regions, and `patch-background' context. For learning from the patch-patch context, we formulate Conditional Random Fields (CRFs) with CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied to avoid repeated expensive CRF inference for back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image input and sliding pyramid pooling is effective for improving performance. 
Our experimental results set new state-of-the-art performance on a number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC 2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an intersection-over-union score of 78.0 on the challenging PASCAL VOC 2012 dataset.) <|cite_end|> <|cite_start|> (Reference: Fully Convolutional Networks for Semantic Segmentation: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.) <|cite_end|> <|cite_start|> (Reference: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.) <|cite_end|>. More related to our work, a recent line of research investigated how to transfer knowledge across different models by a combination of parameter updates <|cite_start|> (Reference: Tabula Rasa: Model transfer for object category detection: Our objective is transfer training of a discriminatively trained object category detector, in order to reduce the number of training images required. To this end we propose three transfer learning formulations where a template learnt previously for other categories is used to regularize the training of a new category. 
All the formulations result in convex optimization problems. Experiments (on PASCAL VOC) demonstrate significant performance gains by transfer learning from one class to another (e.g. motorbike to bicycle), including one-shot learning, specialization from class to a subordinate class (e.g. from quadruped to horse) and transfer using multiple components. In the case of multiple training samples it is shown that a detection performance approaching that of the state of the art can be achieved with substantially fewer training samples.) <|cite_end|> <|cite_start|> (Reference: Learning with Augmented Features for Heterogeneous Domain Adaptation: We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source domain and the target domain are represented by heterogeneous features with different dimensions. Using two different projection matrices, we first transform the data from two domains into a common subspace in order to measure the similarity between the data from two domains. We then propose two new feature mapping functions to augment the transformed data with their original features and zeros. The existing learning methods (e.g., SVM and SVR) can be readily incorporated with our newly proposed augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss function in SVM as an example, we introduce the detailed objective function in our method called Heterogeneous Feature Augmentation (HFA) for a linear case and also describe its kernelization in order to efficiently cope with the data with very high dimensions. Moreover, we also develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in our HFA method. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms the existing HDA methods.) <|cite_end|> <|cite_start|> (Reference: Efficient Learning of Domain-invariant Image Representations: We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.) <|cite_end|>, transformation learning <|cite_start|> (Reference: What you saw is not what you get: Domain adaptation using asymmetric kernel transforms: In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. 
Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks.) <|cite_end|> <|cite_start|> (Reference: Geodesic flow kernel for unsupervised domain adaptation: In real-world applications of visual recognition, many factors - such as pose, illumination, or image quality - can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.) <|cite_end|>, network distillation <|cite_start|> (Reference: Distilling the Knowledge in a Neural Network: A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.) 
<|cite_end|> or cross-model supervision <|cite_start|> (Reference: Learning with Side Information Through Modality Hallucination: We present a modality hallucination architecture for training an RGB object detection model which incorporates depth side information at training time. Our convolutional hallucination network learns a new and complementary RGB image representation which is taught to mimic convolutional mid-level features from a depth network. At test time images are processed jointly through the RGB and hallucination networks to produce improved detection performance. Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart. We present results on the standard NYUDv2 dataset and report improvement on the RGB detection task.) <|cite_end|> <|cite_start|> (Reference: Cross Modal Distillation for Supervision Transfer: In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers. Code, data and pre-trained models are available at https://github.com/s-gupta/fast-rcnn/tree/distillation) <|cite_end|>. Most similar to our work are the methods in <|cite_start|> (Reference: Learning with Side Information Through Modality Hallucination: We present a modality hallucination architecture for training an RGB object detection model which incorporates depth side information at training time. Our convolutional hallucination network learns a new and complementary RGB image representation which is taught to mimic convolutional mid-level features from a depth network. At test time images are processed jointly through the RGB and hallucination networks to produce improved detection performance. Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart. We present results on the standard NYUDv2 dataset and report improvement on the RGB detection task.) <|cite_end|> <|cite_start|> (Reference: Cross Modal Distillation for Supervision Transfer: In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers. Code, data and pre-trained models are available at https://github.com/s-gupta/fast-rcnn/tree/distillation) <|cite_end|> that use cross-model supervision to transfer knowledge from one model to another. All of the above methods focus on third-person data.
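In essence, these cross-model supervision schemes freeze a teacher trained on one modality and train a student on a paired modality to mimic the teacher's features. A minimal sketch of this idea follows (the networks, dimensions, and optimizer are hypothetical placeholders, not the cited implementations):

\begin{verbatim}
import torch
import torch.nn as nn

# Cross-model supervision transfer, sketched: a frozen teacher trained on
# one modality (e.g., RGB) supplies feature targets that a student on a
# paired modality (e.g., depth) learns to match -- no labels required.
teacher = nn.Sequential(nn.Linear(128, 64), nn.ReLU()).eval()  # hypothetical
student = nn.Sequential(nn.Linear(64, 64), nn.ReLU())          # hypothetical
optimizer = torch.optim.SGD(student.parameters(), lr=1e-2)

rgb, depth = torch.randn(32, 128), torch.randn(32, 64)  # paired, unlabeled
with torch.no_grad():
    target = teacher(rgb)                # supervisory signal from teacher
loss = nn.functional.mse_loss(student(depth), target)
loss.backward()
optimizer.step()
\end{verbatim}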
In contrast, we show how to exploit the first-person view to solve a novel cooperative intention prediction task for the camera wearer, without using manually labeled first-person data. <|paper_end|>
[ "<|reference_start|> {You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video: We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publically available . <|reference_end|>", "<|reference_start|> Robot-centric activity prediction from first-person videos: What will they do to me?: In this paper, we present a core technology to enable robot recognition of human activities during human-robot interactions. In particular, we propose a methodology for early recognition of activities from robot-centric videos (i.e., first-person videos) obtained from a robot's viewpoint during its interaction with humans. Early recognition, which is also known as activity prediction, is an ability to infer an ongoing activity at its early stage. We present an algorithm to recognize human activities targeting the camera from streaming videos, enabling the robot to predict intended activities of the interacting person as early as possible and take fast reactions to such activities (e.g., avoiding harmful events targeting itself before they actually occur). We introduce the novel concept of'onset' that efficiently summarizes pre-activity observations, and design a recognition approach to consider event history in addition to visual features from first-person videos. We propose to represent an onset using a cascade histogram of time series gradients, and we describe a novel algorithmic setup to take advantage of such onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better/earlier recognition of human activities from first-person videos collected with a robot. Categories and Subject Descriptors I.2.10 [Artificial Intelligence]: Vision and Scene Understanding–video analysis; I.4.8 [Image Processing and Computer Vision]: Scene Analysis-motion; I.2.9 [Artificial Intelligence]: Robotics–sensors <|reference_end|>", "<|reference_start|> Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. 
An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. <|reference_end|>", "<|reference_start|> Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. <|reference_end|>" ]
[ 2, 20, 29, 34 ]
{"<|cite_1|>": "arxiv-110915", "<|multi_cite_2_1|>": "arxiv-77935", "<|multi_cite_2_2|>": "ss-779186", "<|multi_cite_2_3|>": "ss-1628959", "<|multi_cite_2_4|>": "ss-984088", "<|multi_cite_2_5|>": "ss-1079907", "<|multi_cite_3_1|>": "ss-2103357", "<|multi_cite_3_2|>": "ss-779184", "<|multi_cite_3_3|>": "ss-678043", "<|multi_cite_3_4|>": "ss-1525328", "<|multi_cite_3_5|>": "arxiv-97797", "<|multi_cite_3_6|>": "ss-1270437", "<|multi_cite_4_1|>": "arxiv-77935", "<|multi_cite_4_2|>": "ss-1193897", "<|cite_5|>": "arxiv-95240", "<|cite_6|>": "ss-977568", "<|cite_7|>": "ss-1084037", "<|cite_8|>": "ss-1628961", "<|multi_cite_9_1|>": "ss-1294384", "<|multi_cite_9_2|>": "ss-1961436", "<|multi_cite_10_1|>": "ss-1197563", "<|multi_cite_10_2|>": "ss-1084036", "<|cite_11|>": "ss-1628961", "<|cite_12|>": "ss-690198", "<|cite_13|>": "arxiv-51159", "<|multi_cite_14_1|>": "arxiv-63718", "<|multi_cite_14_2|>": "arxiv-69610", "<|multi_cite_14_3|>": "arxiv-52559", "<|multi_cite_14_4|>": "arxiv-76603", "<|multi_cite_14_5|>": "arxiv-78819", "<|multi_cite_14_6|>": "arxiv-54395", "<|multi_cite_15_1|>": "arxiv-69610", "<|multi_cite_15_2|>": "arxiv-76603", "<|multi_cite_16_1|>": "arxiv-52559", "<|multi_cite_16_2|>": "arxiv-78819", "<|multi_cite_16_3|>": "arxiv-54395", "<|multi_cite_17_1|>": "arxiv-86827", "<|multi_cite_17_2|>": "arxiv-75585", "<|multi_cite_17_3|>": "arxiv-68791", "<|multi_cite_17_4|>": "arxiv-70691", "<|multi_cite_18_1|>": "ss-1643527", "<|multi_cite_18_2|>": "arxiv-32994", "<|multi_cite_18_3|>": "arxiv-40294", "<|multi_cite_19_1|>": "ss-1264272", "<|multi_cite_19_2|>": "ss-1467125", "<|cite_20|>": "arxiv-74282", "<|multi_cite_21_1|>": "ss-867735", "<|multi_cite_21_2|>": "arxiv-80344", "<|multi_cite_22_1|>": "ss-867735", "<|multi_cite_22_2|>": "arxiv-80344"}
2010.08218
<|paper_start|> Title: Deep-HOSeq: Deep Higher Order Sequence Fusion for Multimodal Sentiment Analysis Abstract: Deep-HOSeq: Deep Higher Order Sequence Fusion for Multimodal Sentiment Analysis: Multimodal sentiment analysis utilizes multiple heterogeneous modalities for sentiment classification. The recent multimodal fusion schemes customize LSTMs to discover intra-modal dynamics and design sophisticated attention mechanisms to discover the inter-modal dynamics from multimodal sequences. Although powerful, these schemes completely rely on attention mechanisms, which is problematic due to two major drawbacks: 1) deceptive attention masks, and 2) training dynamics. Moreover, strenuous efforts are required to optimize the hyperparameters of these consolidated architectures, in particular their custom-designed LSTMs constrained by attention schemes. In this research, we first propose a common network to discover both intra-modal and inter-modal dynamics by utilizing basic LSTMs and tensor-based convolution networks. We then propose unique networks to encapsulate the temporal-granularity among the modalities, which is essential while extracting information within asynchronous sequences. We then integrate these two kinds of information via a fusion layer and call our novel multimodal fusion scheme Deep-HOSeq (Deep network with higher order Common and Unique Sequence information). The proposed Deep-HOSeq efficiently discovers all-important information from multimodal sequences, and the effectiveness of utilizing both types of information is empirically demonstrated on the CMU-MOSEI and CMU-MOSI benchmark datasets. The source code of our proposed Deep-HOSeq is available at https://github.com/sverma88/Deep-HOSeq--ICDM-2020. Introduction Opinionated videos are increasingly shared on social media platforms such as YouTube and Facebook, where the speaker's sentiments are available via multiple heterogeneous forms of information such as language (spoken words), visual-gestures, and acoustic (voice). While there has been significant development in utilizing language for sentiment analysis, a core research challenge for this domain is the efficient utilization of multimodal representations such as voice and visual gestures for sentiment prediction <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides.
In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds.) <|cite_end|> <|cite_start|> (Reference: Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.) <|cite_end|>. Utilizing cues from these interacting modalities often presents a more complete view of the underlying phenomenon and thus enhances the generalization performance for sentiment prediction <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides.
<|cite_end|> <|cite_start|> (Reference: Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.) <|cite_end|>. Performing multimodal fusion for sentiment prediction is itself a challenging task due to multiple recurrent issues such as missing values in the visual and acoustic modalities, misalignment, etc. <|cite_start|> (Reference: Multimodal Sentiment Analysis: Addressing Key Issues and Setting up the Baselines: We compile baselines, along with dataset split, for multimodal sentiment analysis. In this paper, we explore three different deep-learning based architectures for multimodal sentiment classification, each improving upon the previous. Further, we evaluate these architectures with multiple datasets with fixed train/test partition. We also discuss some major issues, frequently ignored in multimodal sentiment analysis research, e.g., role of speaker-exclusive models, importance of different modalities, and generalizability. This framework illustrates the different facets of analysis to be considered while performing multimodal sentiment analysis and, hence, serves as a new benchmark for future research in this emerging field.) <|cite_end|> <|cite_start|> (Reference: DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis: Multimodal sentiment analysis combines information available from visual, textual, and acoustic representations for sentiment prediction. The recent multimodal fusion schemes combine multiple modalities as a tensor and obtain either; the common information by utilizing neural networks, or the unique information by modeling low-rank representation of the tensor. However, both of these information are essential as they render inter-modal and intra-modal relationships of the data. In this research, we first propose a novel deep architecture to extract the common information from the multi-mode representations. Furthermore, we propose unique networks to obtain the modality-specific information that enhances the generalization performance of our multimodal system. Finally, we integrate these two aspects of information via a fusion layer and propose a novel multimodal data fusion architecture, which we call DeepCU (Deep network with both Common and Unique latent information).
The proposed DeepCU consolidates the two networks for joint utilization and discovery of all-important latent information. Comprehensive experiments are conducted to demonstrate the effectiveness of utilizing both common and unique information discovered by DeepCU on multiple real-world datasets. The source code of proposed DeepCU is available at https://github.com/sverma88/DeepCU-IJCAI19.) <|cite_end|>. The challenge is exacerbated when the fusion is required in the temporal domain, as the multimodal temporal-interaction is double-edged: it promises rich data-granularity but also conceals ambiguity. A motivating example for this scenario is presented in Fig.~\ref{MM}, where the speakers in both the sequences utilize the same words to express their sentiments differently. \begin{figure}[t] \begin{center} \captionsetup{justification=justified} \includegraphics[width = 0.47\textwidth]{figures/Sequence_Example.pdf} \setlength{\belowcaptionskip}{-0.2cm} \caption{A typical scenario illustrating different sentiments expressed with the same spoken utterance but different visual gestures and vocal intonations. The asynchronous visual-gesture (occurring after the end of spoken words) at time $t_{3}$ critically aids the identification of the speaker's sentiment in the two sequences. Efficient processing of such asynchronous (and synchronous) temporal-interactions is a necessity for sentiment analysis through multimodal fusion.} \label{MM} \end{center} \end{figure} The speakers in both the sequences of Fig.~\ref{MM} utilize the same utterances (spoken words) to express their sentiments. Although both the sequences contain the same spoken words, the interactions between facial expressions and vocal intonations asynchronously occurring with each spoken word unveil critical information that necessitates the disparate labeling of the sequences. In particular, the facial expression at time $t_3$ drives the identification of the speaker's sentiment in the sequences. Therefore, discarding such temporal-granularity will result in the loss of critical information that substantially helps in identifying the speaker's true sentiment. These interactions can occur in the form of synchronous and asynchronous\footnote{Visual-Gesture occurring at the end of the spoken words.} multimodal interactions, and hence combining these temporal-cues will enhance the robustness of sentiment prediction with multimodal temporal sequences. In this regard, to enhance the predictive power by utilizing such temporal-cues, recent multimodal approaches such as MARN (Multi-Attention Recurrent Network) <|cite_start|> (Reference: Multi-attention Recurrent Network for Human Communication Comprehension: Human face-to-face communication is a complex multimodal signal. We use words (language modality), gestures (vision modality) and changes in tone (acoustic modality) to convey our intentions. Humans easily process and understand face-to-face communication, however, comprehending this form of communication remains a significant challenge for Artificial Intelligence (AI). AI must understand each modality and the interactions between them that shape human communication. In this paper, we present a novel neural architecture for understanding human communication called the Multi-attention Recurrent Network (MARN).
The main strength of our model comes from discovering interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and storing them in the hybrid memory of a recurrent component called the Long-short Term Hybrid Memory (LSTHM). We perform extensive comparisons on six publicly available datasets for multimodal sentiment analysis, speaker trait recognition and emotion recognition. MARN shows state-of-the-art performance on all the datasets.) <|cite_end|> and MFN (Memory Fusion Network) <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|> combine both inter-modal and intra-modal interactions while performing multimodal fusion. These schemes utilize a series of LSTMs to obtain intra-modal dynamics and constrain them with sophisticated attention schemes (multi-attention head in MARN and delta-attention memory in MFN) to exploit the inter-modal temporal-interactions. Although both of these techniques unanimously conclude that the utilization of both these types of information positively impacts multimodal sentiment analysis, these schemes entirely rely on the attention mechanism to discover inter-modal information and amalgamate it with intra-modal information. Their complete reliance on the attention scheme is problematic due to two reasons: 1) they have deceptive attention masks (in MFN), and hence it is obscure whether the gain in prediction is attributable to inter-modal interactions <|cite_start|> (Reference: Learning to Deceive with Attention-Based Explanations: Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders. We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks. Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions.
Across multiple models and tasks, our approach manipulates attention weights while paying surprisingly little cost in accuracy. Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender. Consequently, our results cast doubt on attention's reliability as a tool for auditing algorithms in the context of fairness and accountability.) <|cite_end|> and 2) the role of training dynamics (in MARN) instead of multiple-heads <|cite_start|> (Reference: Are Sixteen Heads Really Better than One?: Attention is a powerful and ubiquitous mechanism for allowing neural models to focus on particular salient pieces of information by taking their weighted average when making predictions. In particular, multi-headed attention is a driving force behind many recent state-of-the-art NLP models such as Transformer-based MT models and BERT. These models apply multiple attention mechanisms in parallel, with each attention "head" potentially focusing on different parts of the input, which makes it possible to express sophisticated functions beyond the simple weighted average. In this paper we make the surprising observation that even if models have been trained using multiple heads, in practice, a large percentage of attention heads can be removed at test time without significantly impacting performance. In fact, some layers can even be reduced to a single head. We further examine greedy algorithms for pruning down models, and the potential speed, memory efficiency, and accuracy improvements obtainable therefrom. Finally, we analyze the results with respect to which parts of the model are more reliant on having multiple heads, and provide precursory evidence that training dynamics play a role in the gains provided by multi-head attention.) <|cite_end|>. Moreover, both these schemes require substantial efforts to optimize the hyperparameters of their consolidated architectures to perform multimodal sequence fusion efficiently. To alleviate these drawbacks, we propose \textit{Deep-HOSeq} to perform multimodal fusion, in particular when the modalities are available as temporal sequences. The \textit{Deep-HOSeq} performs multimodal fusion by extracting two kinds of contrasting information from multimodal temporal sequences. The first kind of information is the amalgamation of both inter-modal and intra-modal information and can be perceived as the common\footnote{\label{note1}It should be noted that the terms `common' and `unique' information are also utilized in DeepCU <|cite_start|> (Reference: DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis: Multimodal sentiment analysis combines information available from visual, textual, and acoustic representations for sentiment prediction. The recent multimodal fusion schemes combine multiple modalities as a tensor and obtain either; the common information by utilizing neural networks, or the unique information by modeling low-rank representation of the tensor. However, both of these information are essential as they render inter-modal and intra-modal relationships of the data. In this research, we first propose a novel deep architecture to extract the common information from the multi-mode representations. Furthermore, we propose unique networks to obtain the modality-specific information that enhances the generalization performance of our multimodal system.
Finally, we integrate these two aspects of information via a fusion layer and propose a novel multimodal data fusion architecture, which we call DeepCU (Deep network with both Common and Unique latent information). The proposed DeepCU consolidates the two networks for joint utilization and discovery of all-important latent information. Comprehensive experiments are conducted to demonstrate the effectiveness of utilizing both common and unique information discovered by DeepCU on multiple real-world datasets. The source code of proposed DeepCU is available at https://github.com/sverma88/DeepCU-IJCAI19.) <|cite_end|> to refer to a different but related concept. The concept of common information in DeepCU is limited to inter-modal information, whereas in \textit{Deep-HOSeq}, the common information comprises both inter-modal and intra-modal information. Besides, the unique information in DeepCU comprises factorized information from each unimodality, integrated by late fusion. In contrast, the unique information in \textit{Deep-HOSeq} refers to the information present via asynchronous and synchronous temporal occurrence among modalities.} information extracted from the modality interaction. The second type of information exploits the temporal-granularity (synchronous and asynchronous interactions within modalities, as shown in Fig.~\ref{MM}) among the multimodal sequences and is derived as unique information while performing multimodal fusion. To aid the understanding of the proposed \textit{Deep-HOSeq}, we illustrate its workflow in Fig.~\ref{DCUSeq}. To extract these two kinds of information, we design a common network that first utilizes basic LSTMs to obtain the intra-modal information from each unimodality. Then the obtained intra-modal information from each modality is amalgamated into a multi-mode tensor by taking their outer product. The elements within this multi-mode tensor reflect the strength of inter-modal interactions as correlations <|cite_start|> (Reference: Deep multimodal multilinear fusion with high-order polynomial pooling: Tensor-based multimodal fusion techniques have exhibited great predictive performance. However, one limitation is that existing approaches only consider bilinear or trilinear pooling, which fails to unleash the complete expressive power of multilinear fusion with restricted orders of interactions. More importantly, simply fusing features all at once ignores the complex local intercorrelations, leading to the deterioration of prediction. In this work, we first propose a polynomial tensor pooling (PTP) block for integrating multimodal features by considering high-order moments, followed by a tensorized fully connected layer. Treating PTP as a building block, we further establish a hierarchical polynomial fusion network (HPFN) to recursively transmit local correlations into global ones. By stacking multiple PTPs, the expressivity capacity of HPFN enjoys an exponential growth w.r.t. the number of layers, which is shown by the equivalence to a very deep convolutional arithmetic circuits. Various experiments demonstrate that it can achieve the state-of-the-art performance.) <|cite_end|>, and this rich inter-modal information is finally captured by utilizing convolution kernels followed by fully connected layers. On the other hand, we also design a unique network for leveraging the temporal-granularity among multimodal sequences.
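Before detailing the unique network, the common network's computation can be sketched in a few lines (PyTorch is used purely for illustration; all dimensions are hypothetical and the released implementation may differ):

\begin{verbatim}
import torch
import torch.nn as nn

class CommonNetwork(nn.Module):
    """Sketch: basic LSTMs give intra-modal dynamics, an outer product
    builds the multi-mode tensor, and convolution + fully connected
    layers capture the inter-modal dynamics."""
    def __init__(self, in_dims=(300, 74, 35), hid=16):
        super().__init__()
        self.lstms = nn.ModuleList(nn.LSTM(d, hid, batch_first=True)
                                   for d in in_dims)
        self.conv = nn.Conv3d(1, 4, kernel_size=3, padding=1)
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(4 * hid ** 3, 32), nn.ReLU())

    def forward(self, language, visual, acoustic):
        # Intra-modal dynamics: last hidden state of each basic LSTM.
        h = [lstm(x)[0][:, -1] for lstm, x in
             zip(self.lstms, (language, visual, acoustic))]
        # Inter-modal dynamics: the outer product yields a multi-mode
        # tensor whose entries act as cross-modal correlations.
        t = torch.einsum('bi,bj,bk->bijk', *h).unsqueeze(1)
        return self.fc(torch.relu(self.conv(t)))
\end{verbatim}

A call such as CommonNetwork()(torch.randn(8, 20, 300), torch.randn(8, 20, 74), torch.randn(8, 20, 35)) returns one fused representation per sequence.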
The unique network, in turn, first obtains latent features from each unimodality by utilizing feed-forward layers to increase their discriminative power. We then obtain higher-order interactions within the modalities at each temporal-step, followed by feature extraction with convolution layers and fully connected layers (as in the common network). We finally unify the information from all the temporal-steps with a pooling operation, which encapsulates the temporal-granularity as the unique information in \textit{Deep-HOSeq}. Although one may argue that our choice of unification scheme is not sophisticated, it efficiently captures the temporal-dynamics within multimodal sequences, as demonstrated in the results section. \begin{table}[t] \scriptsize \captionsetup{justification=centering} \caption{Comparison of various multimodal fusion schemes} \centering \begin{tabular}{@{}l|ccccc@{}} \toprule[1pt] \multicolumn{1}{l}{\begin{tabular}[c]{@{}c@{}}Fusion \\ Schemes\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Inter \\ Modal \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Intra \\ Modal\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Attention \\ Reliance\end{tabular}} & \multicolumn{1}{c}{Convolution} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Multimode\\ Representation\end{tabular} } \\ \midrule TFN & \checkmark & $\times$ & $\times$ & $\times$ & \checkmark \\ \midrule LMF & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ \\ \midrule DeepCU & \checkmark & $\times$ & $\times$ & \checkmark & \checkmark \\ \midrule MFN & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ \\ \midrule MARN & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ \\ \midrule \textit{Deep-HOSeq} & \checkmark & \checkmark & $\times$ & \checkmark & \checkmark \\ \bottomrule[1pt] \end{tabular} \label{comparet} \end{table} We finally integrate both these kinds of information with a fusion layer to perform multimodal sentiment prediction and call our novel multimodal fusion scheme \textit{Deep-HOSeq} (Deep Higher-Order Sequence Fusion). An important characteristic of our \textit{Deep-HOSeq} is that it does not rely on attention-based schemes and hence does not face the same critiques as state-of-the-art (SOTA) techniques such as MARN and MFN. Its superiority lies in simple but careful design choices that enable joint discovery and utilization of all-essential information to perform multimodal fusion. To aid the understanding of our technique, we summarize the similarities and differences between \textit{Deep-HOSeq} and SOTA techniques in Table~\ref{comparet}. Besides, our major contributions in this work are summarized below: \begin{enumerate} \item We design a common network to extract both intra-modal and inter-modal information in a cascaded framework for multimodal fusion. Conceptually, the information obtained by our common network is more expressive than the SOTA as we utilize convolution on multi-mode tensors, which efficiently captures all-essential inter-modal interactions. Besides, the use of basic LSTMs efficiently discovers the underlying intra-modal dynamics and does not require strenuous efforts for parameter optimization. \item We design a unique network that encapsulates the temporal-granularity from multimodal sequences. This enhances \textit{Deep-HOSeq}'s robustness with multimodal synchronous and asynchronous interactions.
\item We design a deep consolidated network for joint discovery and utilization of both common and unique information from multimodal temporal sequences, which we call \textit{Deep-HOSeq}. \item We perform comprehensive experiments on the multimodal CMU-MOSEI and CMU-MOSI datasets and demonstrate the effectiveness of utilizing both common and unique information in comparison to other techniques. \end{enumerate} The rest of the paper is organized as follows: Sec.~\ref{PW} presents a literature review of existing multimodal fusion techniques, followed by details of our proposed \textit{Deep-HOSeq} in Sec.~\ref{PM}. The experimental setup and results are described in Sec.~\ref{exp} and Sec.~\ref{result}, respectively. We finally conclude our work and discuss its possible future directions in Sec.~\ref{FW}. Related Work \label{PW} We focus our review on techniques performing neural-based fusion of multimodal sequences, where arguably the simplest deep architecture performing fusion of heterogeneous data is Deep Multimodal Fusion (DMF) <|cite_start|> (Reference: Deep Multimodal Fusion for Persuasiveness Prediction: Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.) <|cite_end|>. The DMF is a successor to Early Fusion (EF) <|cite_start|> (Reference: Towards multimodal sentiment analysis: harvesting opinions from the web: With more than 10,000 new videos posted online every day on social websites such as YouTube and Facebook, the internet is becoming an almost infinite source of information. One crucial challenge for the coming decade is to be able to harvest relevant information from this constant flow of multimodal data. This paper addresses the task of multimodal sentiment analysis, and conducts proof-of-concept experiments that demonstrate that a joint model that integrates visual, audio, and textual features can be effectively used to identify sentiment in Web videos. This paper makes three important contributions. First, it addresses for the first time the task of tri-modal sentiment analysis, and shows that it is a feasible task that can benefit from the joint exploitation of visual, audio and textual modalities. Second, it identifies a subset of audio-visual features relevant to sentiment analysis and present guidelines on how to integrate these features. Finally, it introduces a new dataset consisting of real online data, which will be useful for future research in this area.) <|cite_end|>, which is one of the most utilized non-neural techniques for multimodal data fusion. The DMF is designed to perform both a) EF: combine raw (or latent) features by concatenating them; and b) late fusion: process each modality with a deep network and then synthesize their decisions.
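The two fusion modes can be contrasted in a few lines; the sketch below uses hypothetical feature dimensions and simple linear heads:

\begin{verbatim}
import torch
import torch.nn as nn

def early_fusion(features, shared_head):
    # Concatenate per-modality features, then apply one shared predictor.
    return shared_head(torch.cat(features, dim=-1))

def late_fusion(features, heads):
    # Predict per modality, then synthesize decisions (here: averaging).
    return torch.stack([h(f) for h, f in zip(heads, features)]).mean(dim=0)

lang, vis, ac = (torch.randn(8, d) for d in (300, 74, 35))
shared = nn.Linear(300 + 74 + 35, 1)
per_modality = [nn.Linear(d, 1) for d in (300, 74, 35)]
print(early_fusion([lang, vis, ac], shared).shape)       # torch.Size([8, 1])
print(late_fusion([lang, vis, ac], per_modality).shape)  # torch.Size([8, 1])
\end{verbatim}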
Although powerful, the DMF (and EF) is a basic technique and assumes that a modality (for example, visual) does not share any relevant information within itself. In other words, it cannot leverage the intra-modal information a particular modality might offer. Hence, it is limited to expressing only the inter-modal interactions and thus faces the same limitation as EF <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|>. We now review SOTA that leverages both inter-modal and intra-modal relationships while performing multimodal fusion. \paragraph{Memory Fusion Network (MFN)} <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|> is a recurrent model that consists of three sub-modules: a) System of LSTMs, to obtain intra-modal dynamics from each unimodality; b) Delta-memory Attention Network, which discovers inter-modal dynamics; and c) Multi-view Gated Memory, responsible for integrating the intra-modal and inter-modal dynamics.
The final input for fusion in MFN comprises the concatenated intra-modal information from the LSTMs and the final state of the Multi-view Gated Memory, and hence it can be viewed as a sophisticated EF system. Albeit powerful, the MFN relies entirely on the attention network and the Multi-view Gated Memory to obtain inter-modal dynamics while performing multimodal fusion. This complete reliance on the attention mechanism is problematic as the MFN assumes synchronous inputs, which are hard to achieve in real-world scenarios, and more importantly, the reliability of attention-memory to discover inter-modal interactions is questionable, as shown with deceptive attention masks in <|cite_start|> (Reference: Learning to Deceive with Attention-Based Explanations: Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders. We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks. Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions. Across multiple models and tasks, our approach manipulates attention weights while paying surprisingly little cost in accuracy. Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender. Consequently, our results cast doubt on attention's reliability as a tool for auditing algorithms in the context of fairness and accountability.) <|cite_end|>. \paragraph{Multi-attention Recurrent Network (MARN)} <|cite_start|> (Reference: Multi-attention Recurrent Network for Human Communication Comprehension: Human face-to-face communication is a complex multimodal signal. We use words (language modality), gestures (vision modality) and changes in tone (acoustic modality) to convey our intentions. Humans easily process and understand face-to-face communication, however, comprehending this form of communication remains a significant challenge for Artificial Intelligence (AI). AI must understand each modality and the interactions between them that shape human communication. In this paper, we present a novel neural architecture for understanding human communication called the Multi-attention Recurrent Network (MARN). The main strength of our model comes from discovering interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and storing them in the hybrid memory of a recurrent component called the Long-short Term Hybrid Memory (LSTHM). We perform extensive comparisons on six publicly available datasets for multimodal sentiment analysis, speaker trait recognition and emotion recognition. MARN shows state-of-the-art performance on all the datasets.) <|cite_end|> is also a recurrent model and consists of two sub-modules: a) Long-short Term Hybrid Memory (LSTHM), which amalgamates intra-modal dynamics and inter-modal temporal-dynamics by explicitly augmenting the LSTM with a hybrid memory; and b) Multi-attention Block (MAB), which discovers the inter-modal dynamics and successively updates the hybrid memory of the LSTHMs.
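The shared idea behind the DMAN and the MAB can be caricatured as attention-based reweighting of the concatenated per-modality hidden states; the sketch below is illustrative only, not the exact module from either paper (dimensions hypothetical):

\begin{verbatim}
import torch
import torch.nn as nn

hid, batch = 16, 8
# Concatenated hidden states from three per-modality recurrent units.
states = torch.cat([torch.randn(batch, hid) for _ in range(3)], dim=-1)

scorer = nn.Linear(3 * hid, 3 * hid)  # scores one coefficient per unit
gate = nn.Linear(3 * hid, hid)        # simplified memory-update map

attn = torch.softmax(scorer(states), dim=-1)  # attention coefficients
attended = attn * states                      # reweighted cross-modal state
memory = torch.tanh(gate(attended))           # sketch of a memory update
\end{verbatim}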
Similar to MFN, the MARN also relies completely on the attention scheme to obtain the inter-modal dynamics. The key difference between the two is that the former utilizes basic LSTMs <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|>, whereas the latter augments the LSTMs with a hybrid memory. Besides, the MARN attributes its gains in predictive performance to the use of multiple attentions, but it remains unclear whether these gains arise from the discovery of inter-modal information or from the training dynamics of the MAB (and LSTHM, which are far more strenuous to train than the MFN) <|cite_start|> (Reference: Are Sixteen Heads Really Better than One?: Attention is a powerful and ubiquitous mechanism for allowing neural models to focus on particular salient pieces of information by taking their weighted average when making predictions. In particular, multi-headed attention is a driving force behind many recent state-of-the-art NLP models such as Transformer-based MT models and BERT. These models apply multiple attention mechanisms in parallel, with each attention "head" potentially focusing on different parts of the input, which makes it possible to express sophisticated functions beyond the simple weighted average. In this paper we make the surprising observation that even if models have been trained using multiple heads, in practice, a large percentage of attention heads can be removed at test time without significantly impacting performance. In fact, some layers can even be reduced to a single head. We further examine greedy algorithms for pruning down models, and the potential speed, memory efficiency, and accuracy improvements obtainable therefrom. Finally, we analyze the results with respect to which parts of the model are more reliant on having multiple heads, and provide precursory evidence that training dynamics play a role in the gains provided by multi-head attention.) <|cite_end|>.
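To make the preceding critique concrete, the following minimal PyTorch sketch illustrates an attention-reliant fusion design in the spirit of MFN/MARN. It is an illustrative approximation under our own assumptions (module names, dimensions, and the single feature-wise attention are ours), not the authors' implementations. Note that concatenating per-step hidden states forces the modalities to share aligned time steps, which is precisely the synchrony assumption criticized above.

```python
import torch
import torch.nn as nn

class AttentionReliantFusion(nn.Module):
    """Illustrative MFN/MARN-style fusion: per-modality LSTMs capture
    intra-modal dynamics, while an attention network gates the concatenated
    hidden states that update a shared cross-modal memory."""

    def __init__(self, input_dims, hidden=32, mem=64):
        super().__init__()
        self.lstms = nn.ModuleList(
            nn.LSTM(d, hidden, batch_first=True) for d in input_dims)
        cat_dim = hidden * len(input_dims)
        # Feature-wise attention over the concatenated per-step states.
        self.attend = nn.Sequential(nn.Linear(cat_dim, cat_dim), nn.Softmax(dim=-1))
        self.memory_update = nn.GRUCell(cat_dim, mem)
        self.mem = mem

    def forward(self, sequences):
        # sequences: list of (batch, time, dim_m) tensors, one per modality;
        # all must share the same number of time steps (synchronous inputs).
        states = [lstm(x)[0] for lstm, x in zip(self.lstms, sequences)]
        z = torch.cat(states, dim=-1)                 # (batch, time, cat_dim)
        memory = z.new_zeros(z.size(0), self.mem)
        for t in range(z.size(1)):                    # attention-gated memory update
            memory = self.memory_update(self.attend(z[:, t]) * z[:, t], memory)
        # Final fusion input: last intra-modal states plus the memory state.
        return torch.cat([z[:, -1], memory], dim=-1)
```

Under such a design, every inter-modal interaction must pass through the learned attention weights, which is why unreliable or deceptive attention directly undermines the fused representation.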
Differently from the above, a few notable multimodal fusion techniques that do not rely on sophisticated attention mechanisms are Tensor Fusion Networks (TFN) <|cite_start|> (Reference: Tensor Fusion Network for Multimodal Sentiment Analysis: Multimodal sentiment analysis is an increasingly popular research area, which extends the conventional language-based definition of sentiment analysis to a multimodal setup where other relevant modalities accompany language. In this paper, we pose the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. We introduce a novel model, termed Tensor Fusion Network, which learns both such dynamics end-to-end. The proposed approach is tailored for the volatile nature of spoken language in online videos as well as accompanying gestures and voice. In the experiments, our model outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis.) <|cite_end|>, Low-rank Multimodal Fusion (LMF) <|cite_start|> (Reference: Efficient Low-rank Multimodal Fusion with Modality-Specific Factors: Multimodal research is an emerging field of artificial intelligence, and one of the main research problems in this field is multimodal fusion. The fusion of multimodal data is the process of integrating multiple unimodal representations into one compact multimodal representation. Previous research in this field has exploited the expressiveness of tensors for multimodal representation. However, these methods often suffer from exponential increase in dimensions and in computational complexity introduced by transformation of input into tensor. In this paper, we propose the Low-rank Multimodal Fusion method, which performs multimodal fusion using low-rank tensors to improve efficiency. We evaluate our model on three different tasks: multimodal sentiment analysis, speaker trait analysis, and emotion recognition. Our model achieves competitive results on all these tasks while drastically reducing computational complexity. Additional experiments also show that our model can perform robustly for a wide range of low-rank settings, and is indeed much more efficient in both training and inference compared to other methods that utilize tensor representations.) <|cite_end|>, and DeepCU <|cite_start|> (Reference: DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis: Multimodal sentiment analysis combines information available from visual, textual, and acoustic representations for sentiment prediction. The recent multimodal fusion schemes combine multiple modalities as a tensor and obtain either; the common information by utilizing neural networks, or the unique information by modeling low-rank representation of the tensor. However, both of these information are essential as they render inter-modal and intra-modal relationships of the data. In this research, we first propose a novel deep architecture to extract the common information from the multi-mode representations. Furthermore, we propose unique networks to obtain the modality-specific information that enhances the generalization performance of our multimodal system. Finally, we integrate these two aspects of information via a fusion layer and propose a novel multimodal data fusion architecture, which we call DeepCU (Deep network with both Common and Unique latent information). The proposed DeepCU consolidates the two networks for joint utilization and discovery of all-important latent information. Comprehensive experiments are conducted to demonstrate the effectiveness of utilizing both common and unique information discovered by DeepCU on multiple real-world datasets. The source code of proposed DeepCU is available at https://github.com/sverma88/DeepCU-IJCAI19.) <|cite_end|>. These techniques perform multimodal fusion by summarizing the information within the visual (and acoustic) modality as its average, although this discards the sequential information present in the form of visual and acoustic interactions. They compensate for this information loss by modelling multiple combinations of inter-modal interactions, either as tensors or as their low-rank factorized representations (a minimal sketch of this outer-product construction is given at the end of this section). Our proposed \textit{Deep-HOSeq} is similar to the above in that it also aims to exploit both the inter-modal and intra-modal relationships while performing multimodal fusion, but it differs substantially from them for the following reasons: \renewcommand{\labelenumi}{\Roman{enumi}.} \begin{enumerate} \item The common network in \textit{Deep-HOSeq} extracts information from inter-modal tensors obtained by modelling the intra-modal information in multimodal sequences. Since the elements of this tensor signify the correlation strength between the fused modalities, the information obtained is not obscure or deceptive, as it can be in MARN and MFN. \item Obtaining inter-modal temporal granularity independently with the unique network is a distinctive characteristic of \textit{Deep-HOSeq}, and the inclusion of this information enhances \textit{Deep-HOSeq}'s capability when dealing with asynchronous (and synchronous) multimodal sequences. \item The fusion layer integrates both the common and the unique information to perform multimodal sentiment analysis. It is worth mentioning that this layer uses averaging and hence does not introduce extra model parameters. More importantly, it also prevents the common network from influencing the parameters of the unique sub-network and vice versa. This restriction allows the sub-networks to obtain complementary information and hence increases the diversity during fusion. \end{enumerate} \emph{Although all the techniques mentioned above are fundamentally different from the proposed Deep-HOSeq, one must not assume the equivalence of Deep-HOSeq to DeepCU in particular based on the terms common and unique. The notions of common and unique information in the two techniques are disparate and are explained in detail in footnote$^{\ref{note1}}$. Furthermore, the feature-dissection process is also distinct: the former is proposed to perform multimodal fusion from asynchronous (and synchronous) interactions within temporal sequences, whereas the latter is proposed to perform fusion of independent data units.}
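As referenced above, here is a minimal, hedged sketch of the outer-product tensor construction on which TFN builds its fusion (LMF replaces the explicit tensor with low-rank, modality-specific factors). The function name and dimensions are our own illustrative choices, not code from the cited papers.

```python
import torch

def outer_product_fusion(z_language, z_visual, z_acoustic):
    """TFN-style fusion sketch for a single sample (1-D embeddings):
    appending a constant 1 to each modality embedding makes the 3-way outer
    product retain unimodal and bimodal interaction terms alongside the
    trimodal ones."""
    one = torch.ones(1)
    zl = torch.cat([z_language, one])
    zv = torch.cat([z_visual, one])
    za = torch.cat([z_acoustic, one])
    fused = torch.einsum('i,j,k->ijk', zl, zv, za)   # (dl+1, dv+1, da+1)
    return fused.flatten()  # flattened tensor fed to a downstream classifier
```

Because the fused tensor grows multiplicatively with the embedding sizes, low-rank factorizations such as LMF become attractive in practice. <|paper_end|>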
[ "<|reference_start|> {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds. <|reference_end|>", "<|reference_start|> Are Sixteen Heads Really Better than One?: Attention is a powerful and ubiquitous mechanism for allowing neural models to focus on particular salient pieces of information by taking their weighted average when making predictions. In particular, multi-headed attention is a driving force behind many recent state-of-the-art NLP models such as Transformer-based MT models and BERT. These models apply multiple attention mechanisms in parallel, with each attention \"head\" potentially focusing on different parts of the input, which makes it possible to express sophisticated functions beyond the simple weighted average. In this paper we make the surprising observation that even if models have been trained using multiple heads, in practice, a large percentage of attention heads can be removed at test time without significantly impacting performance. In fact, some layers can even be reduced to a single head. We further examine greedy algorithms for pruning down models, and the potential speed, memory efficiency, and accuracy improvements obtainable therefrom. Finally, we analyze the results with respect to which parts of the model are more reliant on having multiple heads, and provide precursory evidence that training dynamics play a role in the gains provided by multi-head attention. <|reference_end|>", "<|reference_start|> Deep Multimodal Fusion for Persuasiveness Prediction: Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. 
To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches. <|reference_end|>", "<|reference_start|> DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis: Multimodal sentiment analysis combines information available from visual, textual, and acoustic representations for sentiment prediction. The recent multimodal fusion schemes combine multiple modalities as a tensor and obtain either; the common information by utilizing neural networks, or the unique information by modeling low-rank representation of the tensor. However, both of these information are essential as they render inter-modal and intra-modal relationships of the data. In this research, we first propose a novel deep architecture to extract the common information from the multi-mode representations. Furthermore, we propose unique networks to obtain the modality-specific information that enhances the generalization performance of our multimodal system. Finally, we integrate these two aspects of information via a fusion layer and propose a novel multimodal data fusion architecture, which we call DeepCU (Deep network with both Common and Unique latent information). The proposed DeepCU consolidates the two networks for joint utilization and discovery of all-important latent information. Comprehensive experiments are conducted to demonstrate the effectiveness of utilizing both common and unique information discovered by DeepCU on multiple real-world datasets. The source code of proposed DeepCU is available at https://github.com/sverma88/DeepCU-IJCAI19. <|reference_end|>" ]
[ 0, 9, 12, 22 ]
{"<|multi_cite_1_1|>": "ss-1238004", "<|multi_cite_1_2|>": "arxiv-125183", "<|multi_cite_2_1|>": "ss-1238004", "<|multi_cite_2_2|>": "arxiv-125183", "<|multi_cite_3_1|>": "arxiv-152191", "<|multi_cite_3_2|>": "ss-1975018", "<|cite_4|>": "arxiv-147128", "<|cite_5|>": "arxiv-147131", "<|cite_6|>": "arxiv-224199", "<|cite_7|>": "arxiv-205919", "<|cite_8|>": "ss-1975018", "<|cite_9|>": "ss-1268202", "<|cite_10|>": "ss-2377710", "<|cite_11|>": "ss-855006", "<|cite_12|>": "arxiv-147131", "<|cite_13|>": "arxiv-147131", "<|cite_14|>": "arxiv-224199", "<|cite_15|>": "arxiv-147128", "<|cite_16|>": "ss-710343", "<|cite_17|>": "arxiv-205919", "<|cite_18|>": "arxiv-130067", "<|cite_19|>": "arxiv-160825", "<|cite_20|>": "ss-1975018"}
2406.18212-1
The methods in <|cite_start|> (Reference: Two-Stage Convolutional Neural Network for Breast Cancer Histology Image Classification: This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first "patch-wise" network acts as an auto-encoder that extracts the most salient features of image patches while the second "image-wise" network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95 % accuracy on the validation set compared to previously reported 77 % accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018) <|cite_end|> <|cite_start|> (Reference: The whole slide breast histopathology image detection based on a fused model and heatmaps: ) <|cite_end|> <|cite_start|> (Reference: Data-efficient and weakly supervised computational pathology on whole-slide images: ) <|cite_end|> first extract patches from the WSI and then use pre-trained models to extract the most salient features from the patches, so that similar patches yield similar features. Other methods instead select patches with high nuclear density <|cite_start|> (Reference: Classification of Breast Cancer Histology using Deep Learning: Breast Cancer is a major cause of death worldwide among women. Hematoxylin and Eosin (H&E) stained breast tissue samples from biopsies are observed under microscopes for the primary diagnosis of breast cancer. In this paper, we propose a deep learning-based method for classification of H&E stained breast tissue images released for BACH challenge 2018 by fine-tuning Inception-v3 convolutional neural network (CNN) proposed by Szegedy et al. These images are to be classified into four classes namely, i) normal tissue, ii) benign tumor, iii) in-situ carcinoma and iv) invasive carcinoma. Our strategy is to extract patches based on nuclei density instead of random or grid sampling, along with rejection of patches that are not rich in nuclei (non-epithelial) regions for training and testing. Every patch (nuclei-dense region) in an image is classified in one of the four above mentioned categories. The class of the entire image is determined using majority voting over the nuclear classes. We obtained an average four class accuracy of 85% and an average two class (non-cancer vs. carcinoma) accuracy of 93%, which improves upon a previous benchmark by Araujo et al.) <|cite_end|> or downsample the WSI <|cite_start|> (Reference: A Deep Learning Study on Osteosarcoma Detection from Histological Images: In the U.S, 5-10\% of new pediatric cases of cancer are primary bone tumors. The most common type of primary malignant bone tumor is osteosarcoma. The intention of the present work is to improve the detection and diagnosis of osteosarcoma using computer-aided detection (CAD) and diagnosis (CADx). Such tools as convolutional neural networks (CNNs) can significantly decrease the surgeon's workload and make a better prognosis of patient conditions. CNNs need to be trained on a large amount of data in order to achieve a more trustworthy performance. In this study, transfer learning techniques, pre-trained CNNs, are adapted to a public dataset on osteosarcoma histological images to detect necrotic images from non-necrotic and healthy tissues. First, the dataset was preprocessed, and different classifications are applied. Then, Transfer learning models including VGG19 and Inception V3 are used and trained on Whole Slide Images (WSI) with no patches, to improve the accuracy of the outputs. Finally, the models are applied to different classification problems, including binary and multi-class classifiers. Experimental results show that the accuracy of the VGG19 has the highest, 96\%, performance amongst all binary classes and multiclass classification. Our fine-tuned model demonstrates state-of-the-art performance on detecting malignancy of Osteosarcoma based on histologic images.) <|cite_end|>, likewise leveraging the representations learned by pre-trained models. Consequently, feature extraction via pre-trained models <|cite_start|> (Reference: Two-Stage Convolutional Neural Network for Breast Cancer Histology Image Classification: This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first "patch-wise" network acts as an auto-encoder that extracts the most salient features of image patches while the second "image-wise" network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95 % accuracy on the validation set compared to previously reported 77 % accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018) <|cite_end|> <|cite_start|> (Reference: Accumulated bispectral image-based respiratory sound signal classification using deep learning: ) <|cite_end|> <|cite_start|> (Reference: The whole slide breast histopathology image detection based on a fused model and heatmaps: ) <|cite_end|> <|cite_start|> (Reference: Data-efficient and weakly supervised computational pathology on whole-slide images: ) <|cite_end|> has become a crucial component in handling large and complex datasets. \noindent\textbf{Frequency Analysis of WSI:} The spatial information of a WSI is mostly used for both supervised and weakly supervised learning; however, the use of the frequency domain alongside the spatial domain has not been thoroughly investigated, especially for WSI analysis. For natural image classification, Kai Xu \emph{et al.} <|cite_start|> (Reference: Learning in the Frequency Domain: Deep neural networks have achieved remarkable success in computer vision tasks. Existing neural networks mainly operate in the spatial domain with fixed input sizes. For practical applications, images are usually large and have to be downsampled to the predetermined input size of neural networks. Even though the downsampling operations reduce computation and the required communication bandwidth, it removes both redundant and salient information obliviously, which results in accuracy degradation. Inspired by digital signal processing theories, we analyze the spectral bias from the frequency perspective and propose a learning-based frequency selection method to identify the trivial frequency components which can be removed without accuracy loss. The proposed method of learning in the frequency domain leverages identical structures of the well-known neural networks, such as ResNet-50, MobileNetV2, and Mask R-CNN, while accepting the frequency-domain information as the input. Experiment results show that learning in the frequency domain with static channel selection can achieve higher accuracy than the conventional spatial downsampling approach and meanwhile further reduce the input data size. Specifically for ImageNet classification with the same input size, the proposed method achieves 1.41% and 0.66% top-1 accuracy improvements on ResNet-50 and MobileNetV2, respectively. Even with half input size, the proposed method still improves the top-1 accuracy on ResNet-50 by 1%. In addition, we observe a 0.8% average precision improvement on Mask R-CNN for instance segmentation on the COCO dataset.) <|cite_end|> used frequency-domain information to perform classification tasks. W. Luo \emph{et al.} <|cite_start|> (Reference: Frequency-Based Convolutional Neural Network for Efficient Segmentation of Histopathology Whole Slide Images: ) <|cite_end|> proposed a lightweight CNN-based architecture for WSI segmentation using frequency information; compared to a common CNN-based method operating on spatial information, their model reduced the CPU-GPU transmission bandwidth requirement, using 96\% fewer parameters and 98\% fewer floating-point operations. Abdullah-Al Nahid \emph{et al.} <|cite_start|> (Reference: Histopathological breast-image classification using local and frequency domains by convolutional neural network: Identification of the malignancy of tissues from Histopathological images has always been an issue of concern to doctors and radiologists. This task is time-consuming, tedious and moreover very challenging. Success in finding malignancy from Histopathological images primarily depends on long-term experience, though sometimes experts disagree on their decisions. However, Computer Aided Diagnosis (CAD) techniques help the radiologist to give a second opinion that can increase the reliability of the radiologist’s decision. Among the different image analysis techniques, classification of the images has always been a challenging task. Due to the intense complexity of biomedical images, it is always very challenging to provide a reliable decision about an image. The state-of-the-art Convolutional Neural Network (CNN) technique has had great success in natural image classification. Utilizing advanced engineering techniques along with the CNN, in this paper, we have classified a set of Histopathological Breast-Cancer (BC) images utilizing a state-of-the-art CNN model containing a residual block. Conventional CNN operation takes raw images as input and extracts the global features; however, the object oriented local features also contain significant information—for example, the Local Binary Pattern (LBP) represents the effective textural information, Histogram represent the pixel strength distribution, Contourlet Transform (CT) gives much detailed information about the smoothness about the edges, and Discrete Fourier Transform (DFT) derives frequency-domain information from the image. Utilizing these advantages, along with our proposed novel CNN model, we have examined the performance of the novel CNN model as Histopathological image classifier. To do so, we have introduced five cases: (a) Convolutional Neural Network Raw Image (CNN-I); (b) Convolutional Neural Network CT Histogram (CNN-CH); (c) Convolutional Neural Network CT LBP (CNN-CL); (d) Convolutional Neural Network Discrete Fourier Transform (CNN-DF); (e) Convolutional Neural Network Discrete Cosine Transform (CNN-DC). We have performed our experiments on the BreakHis image dataset. The best performance is achieved when we utilize the CNN-CH model on a 200× dataset that provides Accuracy, Sensitivity, False Positive Rate, False Negative Rate, Recall Value, Precision and F-measure of 92.19%, 94.94%, 5.07%, 1.70%, 98.20%, 98.00% and 98.00%, respectively.) <|cite_end|> proposed a CNN for images of size $700 \times 460$ that learns from hand-crafted and frequency-domain information for histopathological breast-image (benign vs. malignant) classification. For spatial feature extraction, they used the Contourlet Transform (CT), histogram information, and the Local Binary Pattern (LBP); for frequency analysis, they used the Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT). The experiments were performed using the BreakHis image dataset <|cite_start|> (Reference: A Dataset for Breast Cancer Histopathological Image Classification: Today, medical image analysis papers require solid experiments to prove the usefulness of proposed methods. However, experiments are often performed on data selected by the researchers, which may come from different institutions, scanners, and populations. Different evaluation measures may be used, making it difficult to compare the methods. In this paper, we introduce a dataset of 7909 breast cancer histopathology images acquired on 82 patients, which is now publicly available from http://web.inf.ufpr.br/vri/breast-cancer-database. The dataset includes both benign and malignant images. The task associated with this dataset is the automated classification of these images in two classes, which would be a valuable computer-aided diagnosis tool for the clinician. In order to assess the difficulty of this task, we show some preliminary results obtained with state-of-the-art image classification systems. The accuracy ranges from 80% to 85%, showing room for improvement is left. By providing this dataset and a standardized evaluation protocol to the scientific community, we hope to gather researchers in both the medical and the machine learning field to advance toward this clinical application.) <|cite_end|>.
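As a concrete illustration of such DCT-based frequency descriptors, the sketch below computes a 2-D DCT of a grayscale patch and keeps a low-frequency block of coefficients. It is a minimal example under our own assumptions (the function name, the grayscale patch format, and the block size k are illustrative), not the cited implementation.

```python
import numpy as np
from scipy.fftpack import dct

def dct_frequency_features(patch, k=8):
    """Return the top-left k x k block of 2-D DCT coefficients of a
    grayscale patch as a compact low-frequency descriptor."""
    x = patch.astype(np.float32)
    # Separable 2-D DCT: transform rows, then columns.
    coeffs = dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:k, :k].flatten()

# Such a descriptor can be concatenated with spatial features
# (e.g., CNN activations or LBP histograms) before classification.
```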
Although the combination of multiple sources has shown improved performance compared to any single domain features <|cite_start|> (Reference: Covid-19 detection using spectral and statistical features of cough and breath sounds: The pandemic situation due to Corona Virus Disease of 2019 (COVID-19) is significant public health risk around the world. The infected people can spread this virus very quickly. Due to this reason, the early detection is essential to reduce its spread. This research effort aims to develop a method for diagnosis of COVID-19 based on the recording of cough and breath sounds. In this paper, a convolutional neural network (CNN) classifier is applied after train and test splitting for cough and breath sound features. The present work show that the combination of MFCC and cepstrum-based statistical features along with ZCR improve the accuracy of detection to the great extent. It shows great potential in the development of automatic COVID-19 detection tool.) <|cite_end|> <|cite_start|> (Reference: COVID-19 Respiratory Sound Signal Detection Using HOS-Based Linear Frequency Cepstral Coefficients and Deep Learning: ) <|cite_end|>. In addition to the above-mentioned approaches, the previous approaches for WSI classification are based on feature attention learning. <|cite_start|> (Reference: Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification: Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN.) <|cite_end|> <|cite_start|> (Reference: Deep convolutional activation features for large scale brain tumor histopathology image classification and segmentation: We propose a simple, efficient and effective method using deep convolutional activation features (CNNs) to achieve stat- of-the-art classification and segmentation for the MICCAI 2014 Brain Tumor Digital Pathology Challenge. Common traits of such medical image challenges are characterized by large image dimensions (up to the gigabyte size of an image), a limited amount of training data, and significant clinical feature representations. 
To tackle these challenges, we transfer the features extracted from CNNs trained with a very large general image database to the medical image challenge. In this paper, we used CNN activations trained by ImageNet to extract features (4096 neurons, 13.3% active). In addition, feature selection, feature pooling, and data augmentation are used in our work. Our system obtained 97.5% accuracy on classification and 84% accuracy on segmentation, demonstrating a significant performance gain over other participating teams.) <|cite_end|> <|cite_start|> (Reference: Multiple Instance Learning with Center Embeddings for Histopathology Classification: ) <|cite_end|> <|cite_start|> (Reference: Data-efficient and weakly supervised computational pathology on whole-slide images: ) <|cite_end|>. The attention methods are divided into two types: Instance-based and embedding-based approaches.} These approaches include binary class <|cite_start|> (Reference: Predicting Axillary Lymph Node Metastasis in Early Breast Cancer Using Deep Learning on Primary Tumor Biopsy Slides: Objectives: To develop and validate a deep learning (DL)-based primary tumor biopsy signature for predicting axillary lymph node (ALN) metastasis preoperatively in early breast cancer (EBC) patients with clinically negative ALN. Methods: A total of 1,058 EBC patients with pathologically confirmed ALN status were enrolled from May 2010 to August 2020. A DL core-needle biopsy (DL-CNB) model was built on the attention-based multiple instance-learning (AMIL) framework to predict ALN status utilizing the DL features, which were extracted from the cancer areas of digitized whole-slide images (WSIs) of breast CNB specimens annotated by two pathologists. Accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curves, and areas under the ROC curve (AUCs) were analyzed to evaluate our model. Results: The best-performing DL-CNB model with VGG16_BN as the feature extractor achieved an AUC of 0.816 (95% confidence interval (CI): 0.758, 0.865) in predicting positive ALN metastasis in the independent test cohort. Furthermore, our model incorporating the clinical data, which was called DL-CNB+C, yielded the best accuracy of 0.831 (95%CI: 0.775, 0.878), especially for patients younger than 50 years (AUC: 0.918, 95%CI: 0.825, 0.971). The interpretation of DL-CNB model showed that the top signatures most predictive of ALN metastasis were characterized by the nucleus features including density ($p$ = 0.015), circumference ($p$ = 0.009), circularity ($p$ = 0.010), and orientation ($p$ = 0.012). Conclusion: Our study provides a novel DL-based biomarker on primary tumor CNB slides to predict the metastatic status of ALN preoperatively for patients with EBC. The codes and dataset are available at https://github.com/bupt-ai-cz/BALNMP) <|cite_end|> <|cite_start|> (Reference: Deep learning-enabled breast cancer hormonal receptor status determination from base-level H&E stains: ) <|cite_end|>or multi-class <|cite_start|> (Reference: Two-Stage Convolutional Neural Network for Breast Cancer Histology Image Classification: This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. 
Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first "patch-wise" network acts as an auto-encoder that extracts the most salient features of image patches while the second "image-wise" network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95 % accuracy on the validation set compared to previously reported 77 % accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018) <|cite_end|> <|cite_start|> (Reference: Context-Aware Convolutional Neural Network for Grading of Colorectal Cancer Histology Images: Digital histology images are amenable to the application of convolutional neural network (CNN) for analysis due to the sheer size of pixel data present in them. CNNs are generally used for representation learning from small image patches (e.g. 224x224) extracted from digital histology images due to computational and memory constraints. However, this approach does not incorporate high-resolution contextual information in histology images. We propose a novel way to incorporate larger context by a context-aware neural network based on images with a dimension of 1,792x1,792 pixels. The proposed framework first encodes the local representation of a histology image into high dimensional features then aggregates the features by considering their spatial organization to make a final prediction. The proposed method is evaluated for colorectal cancer grading and breast cancer classification. A comprehensive analysis of some variants of the proposed method is presented. Our method outperformed the traditional patch-based approaches, problem-specific methods, and existing context-based methods quantitatively by a margin of 3.61%. Code and dataset related information is available at this link: https://tia-lab.github.io/Context-Aware-CNN) <|cite_end|> <|cite_start|> (Reference: TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification: Multiple instance learning (MIL) is a powerful tool to solve the weakly supervised classification in whole slide image (WSI) based pathology diagnosis. However, the current MIL methods are usually based on independent and identical distribution hypothesis, thus neglect the correlation among different instances. To address this problem, we proposed a new framework, called correlated MIL, and provided a proof for convergence. Based on this framework, we devised a Transformer based MIL (TransMIL), which explored both morphological and spatial information. The proposed TransMIL can effectively deal with unbalanced/balanced and binary/multiple classification with great visualization and interpretability. We conducted various experiments for three different computational pathology problems and achieved better performance and faster convergence compared with state-of-the-art methods. The test AUC for the binary tumor classification can be up to 93.09% over CAMELYON16 dataset. And the AUC over the cancer subtypes classification can be up to 96.03% and 98.82% over TCGA-NSCLC dataset and TCGA-RCC dataset, respectively. Implementation is available at: https://github.com/szc19990412/TransMIL.) 
<|cite_end|> <|cite_start|> (Reference: Deep learning for semantic segmentation vs. classification in computational pathology: application to mitosis analysis in breast cancer grading: Existing computational approaches have not yet resulted in effective and efficient computer-aided tools that are used in pathologists' daily practice. Focusing on a computer-based qualification for breast cancer diagnosis, the present study proposes two deep learning architectures to efficiently and effectively detect and classify mitosis in a histopathological tissue sample. The first method consists of two parts, entailing a preprocessing of the digital histological image and a free-handcrafted-feature Convolutional Neural Network (CNN) used for binary classification. Results show that the methodology proposed can achieve 95% accuracy in testing, with an F1-score of 94.35%. This result is higher than the results using classical image processing techniques and also higher than the approaches combining CCNs with handcrafted features. The second approach is an end-to-end methodology using semantic segmentation. Results showed that this algorithm can achieve an accuracy higher than 95% in testing and an average Dice index of 0.6, higher than the existing results using CNNs (0.9 F1-score). Additionally, due to the semantic properties of the deep learning approach, an end-to-end deep learning framework is viable to perform both tasks: detection and classification of mitosis. The results show the potential of deep learning in the analysis of Whole Slide Images (WSI) and its integration to computer-aided systems. The extension of this work to whole slide images is also addressed in the last sections; as well as, some computational key points that are useful when constructing a computer-aided-system inspired by the proposed technology.) <|cite_end|>frameworks. In contrast to the research works mentioned earlier, in our paper, we classify the six essential BC indicates bio-markers and factors (ER, PR, HER2, ALN, HG, MS) by explicitly combining spatial and frequency information { without using any clinical information for early BC diagnosis. Instead of choosing a fixed number of patches from the WSI, we tackle a variable number of malignant ROIs from the image and use the MRL mechanism for effective patch integration to improve the classification.} \begin{figure*}[ht] \centering \includegraphics[width=0.98\textwidth]{BNCB_dataset.pdf} \caption{Examples of different WSIs and their malignant ROIs with class labels from Breast Cancer Core Needle Biopsy (BNCB) dataset: The shapes, sizes, and numbers of malignant ROIs from all the WSIs are different. The class of Estrogen receptor (ER), Progesterone receptor (PR), Human epidermal growth factor receptor 2 (HER2) gene, Histological grade (HG), Auxiliary lymph node (ALN) status, and Molecular subtype (MS) are also given.} \label{fig:dataset_show} \end{figure*} <|paper_end|>
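To illustrate how a variable number of ROI features can be aggregated into a single slide-level representation, the sketch below shows a generic embedding-level attention pooling in the spirit of attention-based multiple-instance learning. It is a hedged illustration under our own assumptions (class name and dimensions are ours), not the MRL mechanism itself.

```python
import torch
import torch.nn as nn

class AttentionROIPooling(nn.Module):
    """Generic embedding-level attention pooling over a variable number of
    ROI feature vectors, producing a fixed-size slide-level embedding."""

    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, rois):
        # rois: (num_rois, dim); num_rois varies from WSI to WSI.
        weights = torch.softmax(self.score(rois), dim=0)   # (num_rois, 1)
        return (weights * rois).sum(dim=0)                 # (dim,)
```
<|paper_end|>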
[ "<|reference_start|> Two-Stage Convolutional Neural Network for Breast Cancer Histology Image Classification: This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first \"patch-wise\" network acts as an auto-encoder that extracts the most salient features of image patches while the second \"image-wise\" network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95 % accuracy on the validation set compared to previously reported 77 % accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018 <|reference_end|>", "<|reference_start|> The whole slide breast histopathology image detection based on a fused model and heatmaps: <|reference_end|>", "<|reference_start|> Data-efficient and weakly supervised computational pathology on whole-slide images: <|reference_end|>", "<|reference_start|> Deep learning for semantic segmentation vs. classification in computational pathology: application to mitosis analysis in breast cancer grading: Existing computational approaches have not yet resulted in effective and efficient computer-aided tools that are used in pathologists' daily practice. Focusing on a computer-based qualification for breast cancer diagnosis, the present study proposes two deep learning architectures to efficiently and effectively detect and classify mitosis in a histopathological tissue sample. The first method consists of two parts, entailing a preprocessing of the digital histological image and a free-handcrafted-feature Convolutional Neural Network (CNN) used for binary classification. Results show that the methodology proposed can achieve 95% accuracy in testing, with an F1-score of 94.35%. This result is higher than the results using classical image processing techniques and also higher than the approaches combining CCNs with handcrafted features. The second approach is an end-to-end methodology using semantic segmentation. Results showed that this algorithm can achieve an accuracy higher than 95% in testing and an average Dice index of 0.6, higher than the existing results using CNNs (0.9 F1-score). Additionally, due to the semantic properties of the deep learning approach, an end-to-end deep learning framework is viable to perform both tasks: detection and classification of mitosis. The results show the potential of deep learning in the analysis of Whole Slide Images (WSI) and its integration to computer-aided systems. The extension of this work to whole slide images is also addressed in the last sections; as well as, some computational key points that are useful when constructing a computer-aided-system inspired by the proposed technology. <|reference_end|>" ]
[ 0, 7, 8, 24 ]
{"<|cite_1|>": "ss-1184439", "<|cite_3|>": "ss-1372058", "<|cite_4|>": "ss-1372059", "<|cite_5|>": "ss-1372060", "<|cite_6|>": "ss-1372061", "<|multi_cite_7_1|>": "ss-1361182", "<|multi_cite_7_2|>": "ss-1372062", "<|cite_8|>": "ss-1361182", "<|cite_9|>": "ss-1372062", "<|cite_10|>": "arxiv-292764", "<|cite_11|>": "ss-1862870", "<|cite_12|>": "ss-1372063", "<|multi_cite_13_1|>": "ss-783786", "<|multi_cite_13_2|>": "arxiv-215540", "<|multi_cite_13_3|>": "arxiv-385253", "<|multi_cite_14_1|>": "ss-1372064", "<|multi_cite_14_2|>": "arxiv-170170", "<|multi_cite_14_3|>": "arxiv-215540", "<|multi_cite_14_4|>": "ss-1372065", "<|cite_15|>": "arxiv-385253", "<|cite_16|>": "ss-1372066", "<|cite_17|>": "ss-783786", "<|multi_cite_18_1|>": "ss-1334207", "<|multi_cite_18_2|>": "arxiv-149306", "<|multi_cite_18_3|>": "ss-756041", "<|cite_19|>": "arxiv-215540", "<|multi_cite_20_1|>": "ss-1372064", "<|multi_cite_20_2|>": "arxiv-170170", "<|multi_cite_20_3|>": "arxiv-215540", "<|multi_cite_20_4|>": "arxiv-385253", "<|multi_cite_20_5|>": "ss-783786", "<|multi_cite_21_1|>": "ss-1372067", "<|multi_cite_21_2|>": "ss-1372068", "<|cite_22|>": "ss-1372069", "<|multi_cite_23_1|>": "ss-1372064", "<|multi_cite_23_2|>": "arxiv-170170", "<|multi_cite_23_3|>": "arxiv-215540", "<|multi_cite_23_4|>": "arxiv-385253", "<|cite_24|>": "arxiv-149306", "<|cite_25|>": "arxiv-88377", "<|cite_26|>": "arxiv-169053", "<|cite_27|>": "arxiv-301001", "<|cite_28|>": "arxiv-65675", "<|cite_29|>": "ss-1372070", "<|cite_30|>": "arxiv-151207", "<|cite_31|>": "arxiv-149306", "<|cite_32|>": "ss-1372071", "<|cite_33|>": "ss-1197300", "<|cite_34|>": "arxiv-437473", "<|cite_35|>": "arxiv-385253", "<|cite_36|>": "arxiv-148247", "<|cite_37|>": "ss-756041", "<|cite_38|>": "arxiv-148247", "<|cite_39|>": "ss-940175", "<|cite_40|>": "ss-1372068", "<|multi_cite_41_1|>": "arxiv-151207", "<|multi_cite_41_2|>": "ss-1372068", "<|multi_cite_41_3|>": "ss-756041", "<|cite_42|>": "arxiv-149306", "<|cite_43|>": "arxiv-301001", "<|multi_cite_44_1|>": "arxiv-151207", "<|multi_cite_44_2|>": "ss-1372072", "<|multi_cite_44_3|>": "ss-1372068", "<|multi_cite_44_4|>": "ss-756041", "<|cite_45|>": "arxiv-250929", "<|cite_46|>": "ss-1372073", "<|cite_47|>": "ss-1372074", "<|cite_48|>": "ss-1197300", "<|multi_cite_49_1|>": "ss-1372075", "<|multi_cite_49_2|>": "ss-1372076", "<|multi_cite_50_1|>": "arxiv-76928", "<|multi_cite_50_2|>": "ss-2272445", "<|multi_cite_50_3|>": "ss-1525173", "<|multi_cite_50_4|>": "ss-756041", "<|multi_cite_51_1|>": "arxiv-385253", "<|multi_cite_51_2|>": "ss-783786", "<|multi_cite_52_1|>": "arxiv-151207", "<|multi_cite_52_2|>": "arxiv-215540", "<|multi_cite_52_3|>": "arxiv-345032", "<|multi_cite_52_4|>": "ss-1372064"}
2206.08802-0
<|paper_start|> Title: Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets Abstract: Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets: Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance. Recent studies found that directly training with out-of-distribution data (i.e., open-set samples) in a semi-supervised manner would harm the generalization performance. In this work, we theoretically show that out-of-distribution data can still be leveraged to augment the minority classes from a Bayesian perspective. Based on this motivation, we propose a novel method called Open-sampling, which utilizes open-set noisy labels to re-balance the class priors of the training dataset. For each open-set instance, the label is sampled from our pre-defined distribution that is complementary to the distribution of original class priors. We empirically show that Open-sampling not only re-balances the class priors but also encourages the neural network to learn separable representations. Extensive experiments demonstrate that our proposed method significantly outperforms existing data re-balancing methods and can boost the performance of existing state-of-the-art methods. Introduction The success of deep neural networks (DNNs) heavily relies on large-scale datasets with balanced distribution <|cite_start|> (Reference: {Learning Multiple Layers of Features From Tiny Images: Groups at MIT and NYU have collected a dataset of millions of tiny colour images from the web. It is, in principle, an excellent dataset for unsupervised training of deep generative models, but previous researchers who have tried this have found it difficult to learn a good set of filters from the images. We show how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex. Using a novel parallelization algorithm to distribute the work among multiple machines connected on a network, we show how training such a model can be done in reasonable time. A second problematic aspect of the tiny images dataset is that there are no reliable class labels which makes it hard to use for object recognition experiments. We created two sets of reliable labels. The CIFAR-10 set has 6000 examples of each of 10 classes and the CIFAR-100 set has 600 examples of each of 100 non-overlapping classes. Using these labels, we show that object recognition is significantly improved by pre-training a layer of features on a large set of unlabeled tiny images.) <|cite_end|> <|cite_start|> (Reference: ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy.
We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.) <|cite_end|>. However, in real-world applications like autonomous driving and medical diagnosis, large-scale datasets naturally exhibit imbalanced and long-tailed distributions, i.e., a few classes (majority classes) occupy most of the data while most classes (minority classes) are under-represented <|cite_start|> (Reference: Places: A 10 Million Image Database for Scene Recognition: The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems.) <|cite_end|> <|cite_start|> (Reference: The iNaturalist Species Classification and Detection Dataset: Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble based methods achieve only 67% top one classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples suggesting more attention is needed in low-shot learning.) <|cite_end|> <|cite_start|> (Reference: Microsoft COCO: Common Objects in Context: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. 
With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.) <|cite_end|>. It has been shown that training on long-tailed datasets leads to poor generalization performance, especially on the minority classes <|cite_start|> (Reference: BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition: Our work focuses on tackling the challenging but natural visual recognition task of long-tailed data distribution (i.e., a few classes occupy most of the data, while most classes have rarely few samples). In the literature, class re-balancing strategies (e.g., re-weighting and re-sampling) are the prominent and effective methods proposed to alleviate the extreme imbalance for dealing with long-tailed problems. In this paper, we firstly discover that these re-balancing methods achieving satisfactory recognition accuracy owe to that they could significantly promote the classifier learning of deep networks. However, at the same time, they will unexpectedly damage the representative ability of the learned deep features to some extent. Therefore, we propose a unified Bilateral-Branch Network (BBN) to take care of both representation learning and classifier learning simultaneously, where each branch does perform its own duty separately. In particular, our BBN model is further equipped with a novel cumulative learning strategy, which is designed to first learn the universal patterns and then pay attention to the tail data gradually. Extensive experiments on four benchmark datasets, including the large-scale iNaturalist ones, justify that the proposed BBN can significantly outperform state-of-the-art methods. Furthermore, validation experiments can demonstrate both our preliminary discovery and effectiveness of tailored designs in BBN for long-tailed problems. Our method won the first place in the iNaturalist 2019 large scale species classification competition, and our code is open-source and available at https://github.com/Megvii-Nanjing/BBN.) <|cite_end|> <|cite_start|> (Reference: Large-Scale Long-Tailed Recognition in an Open World: Real world data often have a long-tailed and open-ended distribution. A practical recognition system must classify among majority and minority classes, generalize from a few known instances, and acknowledge novelty upon a never seen instance. We define Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimizing the classification accuracy over a balanced test set which include head, tail, and open classes. OLTR must handle imbalanced classification, few-shot learning, and open-set recognition in one integrated algorithm, whereas existing classification approaches focus only on one aspect and deliver poorly over the entire class spectrum. The key challenges are how to share visual knowledge between head and tail classes and how to reduce confusion between tail and open classes. 
We develop an integrated OLTR algorithm that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world. Our so-called dynamic meta-embedding combines a direct image feature and an associated memory feature, with the feature norm indicating the familiarity to known classes. On three large-scale OLTR datasets we curate from object-centric ImageNet, scene-centric Places, and face-centric MS1M data, our method consistently outperforms the state-of-the-art. Our code, datasets, and models enable future OLTR research and are publicly available at https://liuziwei7.github.io/projects/LongTail.html.) <|cite_end|> <|cite_start|> (Reference: Decoupling Representation and Classifier for Long-Tailed Recognition: The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.) <|cite_end|>. Thus, designing effective algorithms to handle class imbalance is of great practical importance. In the literature, a popular direction in long-tailed learning is to re-balance the data distribution by data re-sampling <|cite_start|> (Reference: The Class Imbalance Problem: A Systematic Study: In machine learning problems, differences in prior class probabilities -- or class imbalances -- have been reported to hinder the performance of some standard classifiers, such as decision trees. This paper presents a systematic study aimed at answering three different questions. First, we attempt to understand the nature of the class imbalance problem by establishing a relationship between concept complexity, size of the training set and class imbalance level. Second, we discuss several basic re-sampling or cost-modifying methods previously proposed to deal with the class imbalance problem and compare their effectiveness. The results obtained by such methods on artificial domains are linked to results in real-world domains. Finally, we investigate the assumption that the class imbalance problem does not only affect decision tree systems but also affects other classification systems such as Neural Networks and Support Vector Machines.) 
<|cite_end|> <|cite_start|> (Reference: Learning From Imbalanced Data: With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry. The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions for learning from imbalanced data.) <|cite_end|>. For example, Over-sampling <|cite_start|> (Reference: What is the Effect of Importance Weighting in Deep Learning?: Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning. While the effect of importance weighting is well-characterized for low-capacity misspecified models, little is known about how it impacts over-parameterized, deep neural networks. This work is inspired by recent theoretical results showing that on (linearly) separable data, deep linear networks optimized by SGD learn weight-agnostic solutions, prompting us to ask, for realistic deep networks, for which many practical datasets are separable, what is the effect of importance weighting? We present the surprising finding that while importance weighting impacts models early in training, its effect diminishes over successive epochs. Moreover, while L2 regularization and batch normalization (but not dropout), restore some of the impact of importance weighting, they express the effect via (seemingly) the wrong abstraction: why should practitioners tweak the L2 regularization, and by how much, to produce the correct weighting effect? Our experiments confirm these findings across a range of architectures and datasets.) <|cite_end|> <|cite_start|> (Reference: Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks: Learning deeper convolutional neural networks becomes a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, that encourages the propagation of effective information through the network in training stage. 
By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture. Our models will be available to the research community later.) <|cite_end|> repeats samples from under-represented classes, but it usually causes over-fitting to the minority classes. To alleviate the over-fitting issue, synthesized novel samples are introduced to augment the minority classes without repetition <|cite_start|> (Reference: SMOTE: Synthetic Minority Over-sampling Technique: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.) <|cite_end|>. As a result, the model is still error-prone due to noise in the synthesized samples <|cite_start|> (Reference: Class-Balanced Loss Based on Effective Number of Samples: With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist.
Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.) <|cite_end|>. A recent work <|cite_start|> (Reference: Rethinking the Value of Labels for Improving Class-Imbalanced Learning: Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterparts; on the other hand, heavily imbalanced data naturally incurs "label bias" in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged with the extra data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively however, we argue that imbalanced labels are not useful always: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over previous state-of-the-arts. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self.) <|cite_end|> introduced unlabeled in-distribution data to compensate for the lack of training samples, and showed that directly adding unlabeled data from mismatched classes (\emph{i.e.}, out-of-distribution data) by semi-supervised learning hurts the generalization performance. These data augmentation methods normally require in-distribution data with precise labels for selected classes. However, such data is extremely hard to collect in real-world scenarios due to the expensive labeling cost. This critical limitation of previous methods motivates us to explore the possibility of using \textit{out-of-distribution} (OOD) data for long-tailed imbalanced learning. In this paper, we theoretically show that out-of-distribution data (\emph{i.e.}, open-set samples) could be leveraged to augment the minority classes from a Bayesian perspective. Based on this motivation, we propose a simple yet effective method called Open-sampling, which uses open-set noisy labels to re-balance the label priors of the training dataset. For each OOD instance, the label is sampled from our pre-defined distribution that is complementary to the original class priors. To alleviate the over-fitting issue on the minority classes, a class-dependent weight is used in the training objective to provide stronger regularization on the minority classes than the majority classes. In this way, the open-set noisy labels could be used to re-balance the class priors while retaining their non-toxicity. To provide a comprehensive understanding, we conduct a series of analyses to illustrate the properties of the proposed Open-sampling method; before summarizing them, we first sketch the label-sampling step below.
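To make the label-sampling step concrete, the following is a minimal Python sketch. It encodes one natural instantiation of the complementary distribution, namely that pseudo-labels for OOD instances are drawn so that the combined label prior of labeled and pseudo-labeled data becomes roughly uniform; this specific form, as well as the helper names, are our illustrative assumptions rather than the paper's released implementation.

\begin{verbatim}
import numpy as np

def complementary_distribution(class_counts):
    # Assumed instantiation: the probability of class k is proportional to
    # (max_j n_j - n_k), i.e. how many samples class k is "missing" relative
    # to the largest class, so minority classes receive most pseudo-labels.
    counts = np.asarray(class_counts, dtype=np.float64)
    gap = counts.max() - counts
    if gap.sum() == 0:  # already balanced: fall back to a uniform prior
        return np.full(len(counts), 1.0 / len(counts))
    return gap / gap.sum()

def sample_open_set_labels(num_ood, class_counts, seed=0):
    # Draw one pseudo-label per OOD instance from the complementary distribution.
    rng = np.random.default_rng(seed)
    probs = complementary_distribution(class_counts)
    return rng.choice(len(class_counts), size=num_ood, p=probs)
\end{verbatim}

In training, the pseudo-labeled OOD instances would then be mixed into the mini-batches and weighted by the class-dependent factor described above, so that the minority classes receive the stronger regularization.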
From these empirical analyses, we show that: 1) the Complementary Distribution is superior to the commonly used Class Balanced distribution (CB) as the former is closer to the uniform distribution, which reduces the harmfulness of the open-set noisy labels; 2) real-world datasets with large sample size are the best choices for the open-set auxiliary dataset in Open-sampling and the diversity (i.e., number of classes) is not an important factor in the method; 3) the Open-sampling method not only re-balances the class prior, but also promotes the neural network to learn more separable representations. To the best of our knowledge, we are the first to explore the benefits of OOD instances in learning from long-tailed datasets. To verify the effectiveness of our method, we conduct experiments on four long-tailed image classification benchmark datasets, including long-tailed CIFAR10/100 <|cite_start|> (Reference: {Learning Multiple Layers of Features From Tiny Images: Groups at MIT and NYU have collected a dataset of millions of tiny colour images from the web. It is, in principle, an excellent dataset for unsupervised training of deep generative models, but previous researchers who have tried this have found it dicult to learn a good set of lters from the images. We show how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex. Using a novel parallelization algorithm to distribute the work among multiple machines connected on a network, we show how training such a model can be done in reasonable time. A second problematic aspect of the tiny images dataset is that there are no reliable class labels which makes it hard to use for object recognition experiments. We created two sets of reliable labels. The CIFAR-10 set has 6000 examples of each of 10 classes and the CIFAR-100 set has 600 examples of each of 100 non-overlapping classes. Using these labels, we show that object recognition is signicantly improved by pre-training a layer of features on a large set of unlabeled tiny images.) <|cite_end|>, CelebA-5 <|cite_start|> (Reference: Deep Learning Face Attributes in the Wild: Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. 
Each attribute can be well explained with a sparse linear combination of these concepts.) <|cite_end|> <|cite_start|> (Reference: M2m: Imbalanced Classification via Major-to-minor Translation: In most real-world scenarios, labeled training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion. In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples (e.g., images) from more-frequent classes. This simple approach enables a classifier to learn more generalizable features of minority classes, by transferring and leveraging the diversity of the majority information. Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods. The performance of our method even surpasses those of previous state-of-the-art methods for the imbalanced classification.) <|cite_end|>, and Places-LT <|cite_start|> (Reference: Places: A 10 Million Image Database for Scene Recognition: The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems.) <|cite_end|>. Empirical results show that our method can be easily incorporated into existing state-of-the-art methods to enhance their performance on long-tailed imbalanced classification tasks. Furthermore, experimental results validate that our method could also achieve impressive performance in detecting OOD examples under the class-imbalanced setting. Code and data are publicly available at \url{https://github.com/hongxin001/open-sampling}. Related Work \textbf{Re-sampling.} Re-sampling methods aim to re-balance the class priors of the training dataset. Under-sampling methods remove examples from the majority classes, which is infeasible under extremely data-imbalanced settings <|cite_start|> (Reference: Learning From Imbalanced Data: With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry.
The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions for learning from imbalanced data.) <|cite_end|> <|cite_start|> (Reference: The Class Imbalance Problem: A Systematic Study: In machine learning problems, differences in prior class probabilities -- or class imbalances -- have been reported to hinder the performance of some standard classifiers, such as decision trees. This paper presents a systematic study aimed at answering three different questions. First, we attempt to understand the nature of the class imbalance problem by establishing a relationship between concept complexity, size of the training set and class imbalance level. Second, we discuss several basic re-sampling or cost-modifying methods previously proposed to deal with the class imbalance problem and compare their effectiveness. The results obtained by such methods on artificial domains are linked to results in real-world domains. Finally, we investigate the assumption that the class imbalance problem does not only affect decision tree systems but also affects other classification systems such as Neural Networks and Support Vector Machines.) <|cite_end|>. The over-sampling method adds repeated samples for the minority classes, usually causing over-fitting to the minority classes <|cite_start|> (Reference: A systematic study of the class imbalance problem in convolutional neural networks: In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since overall accuracy metric is associated with notable difficulties in the context of imbalanced data. 
Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest.) <|cite_end|> <|cite_start|> (Reference: What is the Effect of Importance Weighting in Deep Learning?: Importance-weighted risk minimization is a key ingredient in many machine learning algorithms for causal inference, domain adaptation, class imbalance, and off-policy reinforcement learning. While the effect of importance weighting is well-characterized for low-capacity misspecified models, little is known about how it impacts over-parameterized, deep neural networks. This work is inspired by recent theoretical results showing that on (linearly) separable data, deep linear networks optimized by SGD learn weight-agnostic solutions, prompting us to ask, for realistic deep networks, for which many practical datasets are separable, what is the effect of importance weighting? We present the surprising finding that while importance weighting impacts models early in training, its effect diminishes over successive epochs. Moreover, while L2 regularization and batch normalization (but not dropout), restore some of the impact of importance weighting, they express the effect via (seemingly) the wrong abstraction: why should practitioners tweak the L2 regularization, and by how much, to produce the correct weighting effect? Our experiments confirm these findings across a range of architectures and datasets.) <|cite_end|> <|cite_start|> (Reference: Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks: Learning deeper convolutional neural networks becomes a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, that encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture. Our models will be available to the research community later.) <|cite_end|>. Some methods utilize synthesized in-distribution samples to alleviate the over-fitting issue but introduce extra noise <|cite_start|> (Reference: SMOTE: Synthetic Minority Over-sampling Technique: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. 
It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.) <|cite_end|> <|cite_start|> (Reference: ADASYN: Adaptive synthetic sampling approach for imbalanced learning: This paper presents a novel adaptive synthetic (ADASYN) sampling approach for learning from imbalanced data sets. The essential idea of ADASYN is to use a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn compared to those minority examples that are easier to learn. As a result, the ADASYN approach improves learning with respect to the data distributions in two ways: (1) reducing the bias introduced by the class imbalance, and (2) adaptively shifting the classification decision boundary toward the difficult examples. Simulation analyses on several machine learning data sets show the effectiveness of this method across five evaluation metrics.) <|cite_end|> <|cite_start|> (Reference: M2m: Imbalanced Classification via Major-to-minor Translation: In most real-world scenarios, labeled training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion. In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples (e.g., images) from more-frequent classes. This simple approach enables a classifier to learn more generalizable features of minority classes, by transferring and leveraging the diversity of the majority information. Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods. The performance of our method even surpasses those of previous state-of-the-art methods for the imbalanced classification.) <|cite_end|>. In contrast to in-distribution samples used in existing methods, our approach exploits OOD instances to re-balance the class priors of the training dataset. \textbf{Re-weighting.} Re-weighting methods propose to assign adaptive weights for different classes or samples. 
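As a compact reference for the schemes discussed next, the sketch below summarizes two representative per-class weights: the inverse class frequency of the vanilla scheme, and the inverse effective number $(1-\beta^{n})/(1-\beta)$ of the class-balanced loss. This is our illustrative summary of the formulas quoted in this section, not code from the cited works, and the normalization choices are assumptions.

\begin{verbatim}
import numpy as np

def inverse_frequency_weights(class_counts):
    # Vanilla scheme: weight each class by the inverse of its frequency,
    # normalized here so that the weights average to one.
    counts = np.asarray(class_counts, dtype=np.float64)
    return counts.sum() / (len(counts) * counts)

def class_balanced_weights(class_counts, beta=0.999):
    # Class-balanced loss: the effective number of samples for a class with
    # n samples is (1 - beta^n) / (1 - beta); the weight is its inverse,
    # normalized to sum to the number of classes (beta=0.999 is a common choice).
    counts = np.asarray(class_counts, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(counts) / weights.sum()
\end{verbatim}

Either weight vector can then be passed as the per-class weight of a standard cross-entropy loss.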
Generally, the vanilla scheme re-weights classes proportionally to the inverse of their frequency <|cite_start|> (Reference: {Learning deep representation for imbalanced classification: Data in vision domain often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary classification methods based on deep convolutional neural network (CNN) typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain both intercluster and inter-class margins. This tighter constraint effectively reduces the class imbalance inherent in the local data neighborhood. We show that the margins can be easily deployed in standard deep learning framework through quintuplet instance sampling and the associated triple-header hinge loss. The representation learned by our approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high-and low-level vision classification tasks that exhibit imbalanced class distribution.) <|cite_end|>. Focal loss <|cite_start|> (Reference: Focal Loss for Dense Object Detection: The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.) <|cite_end|> assigns low weights to the well-classified examples. Class-balanced loss <|cite_start|> (Reference: Class-Balanced Loss Based on Effective Number of Samples: With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish.
We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.) <|cite_end|> proposes to re-weight by the inverse effective number of samples. However, these re-weighting methods tend to make the optimization of DNNs difficult under extremely data-imbalanced settings <|cite_start|> (Reference: Learning to {{Model}} the {{Tail}}: We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn accurate "few-shot'' models for classes in the tail of the class distribution, for which little data is available. We cast this problem as transfer learning, where knowledge from the data-rich classes in the head of the distribution is transferred to the data-poor classes in the tail. Our key insights are as follows. First, we propose to transfer meta-knowledge about learning-to-learn from the head classes. This knowledge is encoded with a meta-network that operates on the space of model parameters, that is trained to predict many-shot model parameters from few-shot model parameters. Second, we transfer this meta-knowledge in a progressive manner, from classes in the head to the "body'', and from the "body'' to the tail. That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. This allows our final network to capture a notion of model dynamics, that predicts how model parameters are likely to change as more training data is gradually added. We demonstrate results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting, that significantly outperform common heuristics, such as data resampling or reweighting.) <|cite_end|> <|cite_start|> (Reference: Distributed Representations of Words and Phrases and their Compositionality: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada".
Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.) <|cite_end|>. \textbf{Other methods for long-tailed datasets}. In addition to the data re-balancing approaches, some other solutions are also applied for class-imbalanced learning, including transfer-learning based methods <|cite_start|> (Reference: Feature Transfer Learning for Face Recognition With Under-Represented Data: Despite the large volume of face recognition datasets, there is a significant portion of subjects, of which the samples are insufficient and thus under-represented. Ignoring such significant portion results in insufficient training data. Training with under-represented data leads to biased classifiers in conventionally-trained deep networks. In this paper, we propose a center-based feature transfer framework to augment the feature space of under-represented subjects from the regular subjects that have sufficiently diverse samples. A Gaussian prior of the variance is assumed across all subjects and the variance from regular ones are transferred to the under-represented ones. This encourages the under-represented distribution to be closer to the regular distribution. Further, an alternating training regimen is proposed to simultaneously achieve less biased classifiers and a more discriminative feature representation. We conduct ablative study to mimic the under-represented datasets by varying the portion of under-represented classes on the MS-Celeb-1M dataset. Advantageous results on LFW, IJB-A and MS-Celeb-1M demonstrate the effectiveness of our feature transfer and training strategy, compared to both general baselines and state-of-the-art methods. Moreover, our feature transfer successfully presents smooth visual interpolation, which conducts disentanglement to preserve identity of a class while augmenting its feature space with non-identity variations such as pose and lighting.) <|cite_end|> <|cite_start|> (Reference: Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective: Object frequency in the real world often follows a power law, leading to a mismatch between datasets with long-tailed class distributions seen by a machine learning model and our expectation of the model to perform well on all classes. We analyze this mismatch from a domain adaptation point of view. First of all, we connect existing class-balanced methods for long-tailed classification to target shift, a well-studied scenario in domain adaptation. The connection reveals that these methods implicitly assume that the training data and test data share the same class-conditioned distribution, which does not hold in general and especially for the tail classes. While a head class could contain abundant and diverse training examples that well represent the expected data at inference time, the tail classes are often short of representative training data. To this end, we propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach. We validate our approach with six benchmark datasets and three loss functions.) 
<|cite_end|>, two-stage training methods <|cite_start|> (Reference: Decoupling Representation and Classifier for Long-Tailed Recognition: The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.) <|cite_end|> <|cite_start|> (Reference: Improving Calibration for Long-Tailed Recognition: Deep neural networks may perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods decouple representation learning and classifier learning to improve performance. But there is still the vital issue of miscalibration. To address it, we design two methods to improve calibration and performance in such scenarios. Motivated by the fact that predicted probability distributions of classes are highly related to the numbers of class instances, we propose label-aware smoothing to deal with different degrees of over-confidence for classes and improve classifier learning. For dataset bias between these two stages due to different samplers, we further propose shifted batch normalization in the decoupling framework. Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets, including CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. Code will be available at https://github.com/Jia-Research-Lab/MiSLAS.) <|cite_end|> <|cite_start|> (Reference: Distribution Alignment: A Unified Framework for Long-tail Visual Recognition: Despite the recent success of deep neural networks, it remains challenging to effectively model the long-tail class distribution in visual recognition tasks. To address this problem, we first investigate the performance bottleneck of the two-stage learning framework via ablative study. Motivated by our discovery, we propose a unified distribution alignment strategy for long-tail visual recognition. Specifically, we develop an adaptive calibration function that enables us to adjust the classification scores for each data point. We then introduce a generalized re-weight method in the two-stage learning to balance the class prior, which provides a flexible and unified solution to diverse scenarios in visual recognition tasks. 
We validate our method by extensive experiments on four tasks, including image classification, semantic segmentation, object detection, and instance segmentation. Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework. The code and models will be made publicly available at: https://github.com/Megvii-BaseDetection/DisAlign) <|cite_end|> <|cite_start|> (Reference: Large-Scale Long-Tailed Recognition in an Open World: Real world data often have a long-tailed and open-ended distribution. A practical recognition system must classify among majority and minority classes, generalize from a few known instances, and acknowledge novelty upon a never seen instance. We define Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimizing the classification accuracy over a balanced test set which include head, tail, and open classes. OLTR must handle imbalanced classification, few-shot learning, and open-set recognition in one integrated algorithm, whereas existing classification approaches focus only on one aspect and deliver poorly over the entire class spectrum. The key challenges are how to share visual knowledge between head and tail classes and how to reduce confusion between tail and open classes. We develop an integrated OLTR algorithm that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world. Our so-called dynamic meta-embedding combines a direct image feature and an associated memory feature, with the feature norm indicating the familiarity to known classes. On three large-scale OLTR datasets we curate from object-centric ImageNet, scene-centric Places, and face-centric MS1M data, our method consistently outperforms the state-of-the-art. Our code, datasets, and models enable future OLTR research and are publicly available at https://liuziwei7.github.io/projects/LongTail.html.) <|cite_end|>, training objective based methods <|cite_start|> (Reference: Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss: Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains.) <|cite_end|> <|cite_start|> (Reference: Balanced Meta-Softmax for Long-Tailed Visual Recognition: Deep classifiers have achieved great success in visual recognition. 
However, real-world data is long-tailed by nature, leading to the mismatch between training and testing distributions. In this paper, we show that the Softmax function, though used in most classification tasks, gives a biased gradient estimation under the long-tailed setup. This paper presents Balanced Softmax, an elegant unbiased extension of Softmax, to accommodate the label distribution shift between training and testing. Theoretically, we derive the generalization bound for multiclass Softmax regression and show our loss minimizes the bound. In addition, we introduce Balanced Meta-Softmax, applying a complementary Meta Sampler to estimate the optimal class sample rate and further improve long-tailed learning. In our experiments, we demonstrate that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and instance segmentation tasks.) <|cite_end|> <|cite_start|> (Reference: Disentangling Label Distribution for Long-tailed Visual Recognition: The current evaluation protocol of long-tailed visual recognition trains the classification model on the long-tailed source label distribution and evaluates its performance on the uniform target label distribution. Such protocol has questionable practicality since the target may also be long-tailed. Therefore, we formulate long-tailed visual recognition as a label shift problem where the target and source label distributions are different. One of the significant hurdles in dealing with the label shift problem is the entanglement between the source label distribution and the model prediction. In this paper, we focus on disentangling the source label distribution from the model prediction. We first introduce a simple but overlooked baseline method that matches the target label distribution by post-processing the model prediction trained by the cross-entropy loss and the Softmax function. Although this method surpasses state-of-the-art methods on benchmark datasets, it can be further improved by directly disentangling the source label distribution from the model prediction in the training phase. Thus, we propose a novel method, LAbel distribution DisEntangling (LADE) loss based on the optimal bound of Donsker-Varadhan representation. LADE achieves state-of-the-art performance on benchmark datasets such as CIFAR-100-LT, Places-LT, ImageNet-LT, and iNaturalist 2018. Moreover, LADE outperforms existing methods on various shifted target label distributions, showing the general adaptability of our proposed method.) <|cite_end|>, expert methods <|cite_start|> (Reference: Long-tailed Recognition by Routing Diverse Distribution-Aware Experts: Natural data are often long-tail distributed over semantic classes. Existing recognition methods tackle this imbalanced classification by placing more emphasis on the tail data, through class re-balancing/re-weighting or ensembling over different data groups, resulting in increased tail accuracies but reduced head accuracies. We take a dynamic view of the training data and provide a principled model bias and variance analysis as the training data fluctuates: Existing long-tail classifiers invariably increase the model variance and the head-tail model bias gap remains large, due to more and larger confusion with hard negatives for the tail. We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE). 
It reduces the model variance with multiple experts, reduces the model bias with a distribution-aware diversity loss, reduces the computational cost with a dynamic expert routing module. RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks. It is also a universal framework that is applicable to various backbone networks, long-tailed algorithms, and training mechanisms for consistent performance gains. Our code is available at: https://github.com/frank-xwang/RIDE-LongTailRecognition.) <|cite_end|> and self-supervised learning methods <|cite_start|> (Reference: Rethinking the Value of Labels for Improving Class-Imbalanced Learning: Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterparts; on the other hand, heavily imbalanced data naturally incurs "label bias" in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged with the extra data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively however, we argue that imbalanced labels are not useful always: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over previous state-of-the-arts. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self.) <|cite_end|> <|cite_start|> (Reference: Parametric Contrastive Learning: In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe supervised contrastive loss tends to bias on high-frequency classes and thus increases the difficulty of imbalanced learning. We introduce a set of parametric class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers and benefit hard example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 manifest the new state-of-the-art for long-tailed recognition. On full ImageNet, models trained with PaCo loss surpass supervised contrastive learning across various ResNet backbones, e.g., our ResNet-200 achieves 81.8% top-1 accuracy. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.) <|cite_end|>.
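Of these families, the training-objective based methods admit a particularly compact illustration. The sketch below is our hedged paraphrase of the Balanced Softmax objective cited above, which shifts each logit by the log of its class prior to correct for the long-tailed label shift; it is a reading of the published formula, not the authors' released code.

\begin{verbatim}
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, targets, class_counts):
    # class_counts: 1-D tensor with the number of training samples per class.
    # Adding the log class prior to the logits makes the cross-entropy
    # equivalent to -log( n_y * exp(z_y) / sum_k n_k * exp(z_k) ).
    log_prior = torch.log(class_counts.float() / class_counts.float().sum())
    return F.cross_entropy(logits + log_prior, targets)
\end{verbatim}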
For example, transfer-learning based methods address the class-imbalance issue by transferring features learned from head classes with abundant training instances to under-represented tail classes <|cite_start|> (Reference: Feature Transfer Learning for Face Recognition With Under-Represented Data: Despite the large volume of face recognition datasets, there is a significant portion of subjects, of which the samples are insufficient and thus under-represented. Ignoring such significant portion results in insufficient training data. Training with under-represented data leads to biased classifiers in conventionally-trained deep networks. In this paper, we propose a center-based feature transfer framework to augment the feature space of under-represented subjects from the regular subjects that have sufficiently diverse samples. A Gaussian prior of the variance is assumed across all subjects and the variance from regular ones are transferred to the under-represented ones. This encourages the under-represented distribution to be closer to the regular distribution. Further, an alternating training regimen is proposed to simultaneously achieve less biased classifiers and a more discriminative feature representation. We conduct ablative study to mimic the under-represented datasets by varying the portion of under-represented classes on the MS-Celeb-1M dataset. Advantageous results on LFW, IJB-A and MS-Celeb-1M demonstrate the effectiveness of our feature transfer and training strategy, compared to both general baselines and state-of-the-art methods. Moreover, our feature transfer successfully presents smooth visual interpolation, which conducts disentanglement to preserve identity of a class while augmenting its feature space with non-identity variations such as pose and lighting.) <|cite_end|> <|cite_start|> (Reference: Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective: Object frequency in the real world often follows a power law, leading to a mismatch between datasets with long-tailed class distributions seen by a machine learning model and our expectation of the model to perform well on all classes. We analyze this mismatch from a domain adaptation point of view. First of all, we connect existing class-balanced methods for long-tailed classification to target shift, a well-studied scenario in domain adaptation. The connection reveals that these methods implicitly assume that the training data and test data share the same class-conditioned distribution, which does not hold in general and especially for the tail classes. While a head class could contain abundant and diverse training examples that well represent the expected data at inference time, the tail classes are often short of representative training data. To this end, we propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach. We validate our approach with six benchmark datasets and three loss functions.) <|cite_end|>. Two-stage training methods apply decoupled training, where the classifier is re-balanced during the fine-tuning stage <|cite_start|> (Reference: Decoupling Representation and Classifier for Long-Tailed Recognition: The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem.
Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.) <|cite_end|> <|cite_start|> (Reference: Improving Calibration for Long-Tailed Recognition: Deep neural networks may perform poorly when training datasets are heavily class-imbalanced. Recently, two-stage methods decouple representation learning and classifier learning to improve performance. But there is still the vital issue of miscalibration. To address it, we design two methods to improve calibration and performance in such scenarios. Motivated by the fact that predicted probability distributions of classes are highly related to the numbers of class instances, we propose label-aware smoothing to deal with different degrees of over-confidence for classes and improve classifier learning. For dataset bias between these two stages due to different samplers, we further propose shifted batch normalization in the decoupling framework. Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets, including CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. Code will be available at https://github.com/Jia-Research-Lab/MiSLAS.) <|cite_end|> <|cite_start|> (Reference: Distribution Alignment: A Unified Framework for Long-tail Visual Recognition: Despite the recent success of deep neural networks, it remains challenging to effectively model the long-tail class distribution in visual recognition tasks. To address this problem, we first investigate the performance bottleneck of the two-stage learning framework via ablative study. Motivated by our discovery, we propose a unified distribution alignment strategy for long-tail visual recognition. Specifically, we develop an adaptive calibration function that enables us to adjust the classification scores for each data point. We then introduce a generalized re-weight method in the two-stage learning to balance the class prior, which provides a flexible and unified solution to diverse scenarios in visual recognition tasks. We validate our method by extensive experiments on four tasks, including image classification, semantic segmentation, object detection, and instance segmentation. Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework. 
The code and models will be made publicly available at: https://github.com/Megvii-BaseDetection/DisAlign) <|cite_end|> <|cite_start|> (Reference: Large-Scale Long-Tailed Recognition in an Open World: Real world data often have a long-tailed and open-ended distribution. A practical recognition system must classify among majority and minority classes, generalize from a few known instances, and acknowledge novelty upon a never seen instance. We define Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimizing the classification accuracy over a balanced test set which include head, tail, and open classes. OLTR must handle imbalanced classification, few-shot learning, and open-set recognition in one integrated algorithm, whereas existing classification approaches focus only on one aspect and deliver poorly over the entire class spectrum. The key challenges are how to share visual knowledge between head and tail classes and how to reduce confusion between tail and open classes. We develop an integrated OLTR algorithm that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world. Our so-called dynamic meta-embedding combines a direct image feature and an associated memory feature, with the feature norm indicating the familiarity to known classes. On three large-scale OLTR datasets we curate from object-centric ImageNet, scene-centric Places, and face-centric MS1M data, our method consistently outperforms the state-of-the-art. Our code, datasets, and models enable future OLTR research and are publicly available at https://liuziwei7.github.io/projects/LongTail.html.) <|cite_end|>. Generally, our proposed method is complementary to these existing methods and can further improve their performance, as explicitly shown in our experiments. \textbf{Utilizing auxiliary datasets}. In the deep learning community, auxiliary datasets are utilized in various contexts, e.g., adversarial machine learning <|cite_start|> (Reference: Towards Deep Learning Models Resistant to Adversarial Attacks: Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples---inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
Code and pre-trained models are available at https://github.com/MadryLab/mnist_challenge and https://github.com/MadryLab/cifar10_challenge.) <|cite_end|> <|cite_start|> (Reference: Removing Undesirable Feature Contributions Using Out-of-Distribution Data: Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of availability of UID data and dependence of algorithms on pseudo-labels. Herein, we propose a data augmentation method to improve generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are devoid of the abovementioned issues. We show how to improve generalization theoretically using OOD data in each learning scenario and complement our theoretical analysis with experiments on CIFAR-10, CIFAR-100, and a subset of ImageNet. The results indicate that undesirable features are shared even among image data that seem to have little correlation from a human point of view. We also present the advantages of the proposed method through comparison with other data augmentation methods, which can be used in the absence of UID data. Furthermore, we demonstrate that the proposed method can further improve the existing state-of-the-art adversarial training.) <|cite_end|>and weakly supervised learning <|cite_start|> (Reference: Combating noisy labels by agreement: A joint training method with co-regularization: Deep Learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both two networks simultaneously. Trained by the joint loss, these two networks would be more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.) <|cite_end|> <|cite_start|> (Reference: MetaInfoNet: Learning Task-Guided Information for Sample Reweighting: Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance. Meta-learning algorithms are commonly designed to alleviate this issue in the form of sample reweighting, by learning a meta weighting network that takes training losses as inputs to generate sample weights. In this paper, we advocate that choosing proper inputs for the meta weighting network is crucial for desired sample weights in a specific task, while training loss is not always the correct answer. In view of this, we propose a novel meta-learning algorithm, MetaInfoNet, which automatically learns effective representations as inputs for the meta weighting network by emphasizing task-related information with an information bottleneck strategy. 
Extensive experimental results on benchmark datasets with label noise or class imbalance validate that MetaInfoNet is superior to many state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: A Second-Order Approach to Learning with Instance-Dependent Label Noise: The presence of label noise often misleads the training of deep neural networks. Departing from the recent literature which largely assumes the label noise rate is only determined by the true label class, the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks, resulting in settings with instance-dependent label noise. We first provide evidences that the heterogeneous instance-dependent label noise is effectively down-weighting the examples with higher noise rates in a non-uniform way and thus causes imbalances, rendering the strategy of directly applying methods for class-dependent label noise questionable. Built on a recent work peer loss [24], we then propose and study the potentials of a second-order approach that leverages the estimation of several covariance terms defined between the instance-dependent noise rates and the Bayes optimal label. We show that this set of second-order statistics successfully captures the induced imbalances. We further proceed to show that with the help of the estimated second-order statistics, we identify a new loss function whose expected risk of a classifier under instance-dependent label noise is equivalent to a new problem with only class-dependent label noise. This fact allows us to apply existing solutions to handle this better-studied setting. We provide an efficient procedure to estimate these second-order statistics without accessing either ground truth labels or prior knowledge of the noise rates. Experiments on CIFAR10 and CIFAR100 with synthetic instance-dependent label noise and Clothing1M with real-world human label noise verify our approach. Our implementation is available at https://github.com/UCSC-REAL/CAL.) <|cite_end|> <|cite_start|> (Reference: Learning with Instance-Dependent Label Noise: A Sample Sieve Approach: Human-annotated labels are often prone to noise, and the presence of such noise will degrade the performance of the resulting deep neural network (DNN) models. Much of the literature (with several recent exceptions) of learning with noisy labels focuses on the case when the label noise is independent of features. Practically, annotations errors tend to be instance-dependent and often depend on the difficulty levels of recognizing a certain task. Applying existing results from instance-independent settings would require a significant amount of estimation of noise rates. Therefore, providing theoretically rigorous solutions for learning with instance-dependent label noise remains a challenge. In this paper, we propose CORES$^{2}$ (COnfidence REgularized Sample Sieve), which progressively sieves out corrupted examples. The implementation of CORES$^{2}$ does not require specifying noise rates and yet we are able to provide theoretical guarantees of CORES$^{2}$ in filtering out the corrupted examples. This high-quality sample sieve allows us to treat clean examples and the corrupted ones separately in training a DNN solution, and such a separation is shown to be advantageous in the instance-dependent noise setting. We demonstrate the performance of CORES$^{2}$ on CIFAR10 and CIFAR100 datasets with synthetic instance-dependent label noise and Clothing1M with real-world human noise. 
As of independent interests, our sample sieve provides a generic machinery for anatomizing noisy datasets and provides a flexible interface for various robust training techniques to further improve the performance. Code is available at https://github.com/UCSC-REAL/cores.) <|cite_end|> <|cite_start|> (Reference: Policy Learning Using Weak Supervision: Most existing policy learning solutions require the learning agents to receive high-quality supervision signals such as well-designed rewards in reinforcement learning (RL) or high-quality expert demonstrations in behavioral cloning (BC). These quality supervisions are usually infeasible or prohibitively expensive to obtain in practice. We aim for a unified framework that leverages the available cheap weak supervisions to perform policy learning efficiently. To handle this problem, we treat the "weak supervision" as imperfect information coming from a peer agent, and evaluate the learning agent's policy based on a "correlated agreement" with the peer agent's policy (instead of simple agreements). Our approach explicitly punishes a policy for overfitting to the weak supervision. In addition to theoretical guarantees, extensive evaluations on tasks including RL with noisy rewards, BC with weak demonstrations, and standard policy co-training show that our method leads to substantial performance improvements, especially when the complexity or the noise of the learning environments is high.) <|cite_end|> <|cite_start|> (Reference: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels: The label noise transition matrix, characterizing the probabilities of a training instance being wrongly annotated, is crucial to designing popular solutions to learning with noisy labels. Existing works heavily rely on finding "anchor points" or their approximates, defined as instances belonging to a particular class almost surely. Nonetheless, finding anchor points remains a non-trivial task, and the estimation accuracy is also often throttled by the number of available anchor points. In this paper, we propose an alternative option to the above task. Our main contribution is the discovery of an efficient estimation procedure based on a clusterability condition. We prove that with clusterable representations of features, using up to third-order consensuses of noisy labels among neighbor representations is sufficient to estimate a unique transition matrix. Compared with methods using anchor points, our approach uses substantially more instances and benefits from a much better sample complexity. We demonstrate the estimation accuracy and advantages of our estimates using both synthetic noisy labels (on CIFAR-10/100) and real human-level noisy labels (on Clothing1M and our self-collected human-annotated CIFAR-10). Our code and human-level noisy CIFAR-10 labels are available at https://github.com/UCSC-REAL/HOC.) <|cite_end|>
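As a minimal sketch of the two-stage decoupled training recipe surveyed in this excerpt — representation learning with instance-balanced sampling, followed by re-balancing only the classifier on a frozen backbone — the following PyTorch snippet runs on toy long-tailed data. The network sizes, sampler, and hyperparameters are illustrative assumptions rather than any cited paper's exact settings.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy long-tailed data: three classes with 900 / 90 / 10 training samples.
counts = torch.tensor([900, 90, 10])
y = torch.repeat_interleave(torch.arange(3), counts)
x = torch.randn(len(y), 32) + y.float().unsqueeze(1)     # class-shifted features

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # stand-in encoder
classifier = nn.Linear(64, 3)

def run_epoch(loader, params):
    opt = torch.optim.SGD(params, lr=0.1)
    for xb, yb in loader:
        loss = nn.functional.cross_entropy(classifier(backbone(xb)), yb)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 1: instance-balanced (uniform) sampling trains backbone + classifier,
# so the representation sees the natural long-tailed distribution.
stage1 = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)
for _ in range(5):
    run_epoch(stage1, list(backbone.parameters()) + list(classifier.parameters()))

# Stage 2: freeze the backbone and re-train only the classifier with
# class-balanced sampling, so tail classes are seen as often as head classes.
for p in backbone.parameters():
    p.requires_grad_(False)
sample_w = (1.0 / counts.float())[y]                     # inverse class frequency
sampler = WeightedRandomSampler(sample_w, num_samples=len(y))
stage2 = DataLoader(TensorDataset(x, y), batch_size=64, sampler=sampler)
for _ in range(5):
    run_epoch(stage2, list(classifier.parameters()))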
[ "<|reference_start|> The Class Imbalance Problem: A Systematic Study: In machine learning problems, differences in prior class probabilities -- or class imbalances -- have been reported to hinder the performance of some standard classifiers, such as decision trees. This paper presents a systematic study aimed at answering three different questions. First, we attempt to understand the nature of the class imbalance problem by establishing a relationship between concept complexity, size of the training set and class imbalance level. Second, we discuss several basic re-sampling or cost-modifying methods previously proposed to deal with the class imbalance problem and compare their effectiveness. The results obtained by such methods on artificial domains are linked to results in real-world domains. Finally, we investigate the assumption that the class imbalance problem does not only affect decision tree systems but also affects other classification systems such as Neural Networks and Support Vector Machines. <|reference_end|>", "<|reference_start|> Rethinking the Value of Labels for Improving Class-Imbalanced Learning: Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterparts; on the other hand, heavily imbalanced data naturally incurs \"label bias\" in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged with the extra data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively however, we argue that imbalanced labels are not useful always: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over previous state-of-the-arts. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self. <|reference_end|>", "<|reference_start|> ADASYN: Adaptive synthetic sampling approach for imbalanced learning: This paper presents a novel adaptive synthetic (ADASYN) sampling approach for learning from imbalanced data sets. The essential idea of ADASYN is to use a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn compared to those minority examples that are easier to learn. As a result, the ADASYN approach improves learning with respect to the data distributions in two ways: (1) reducing the bias introduced by the class imbalance, and (2) adaptively shifting the classification decision boundary toward the difficult examples. 
Simulation analyses on several machine learning data sets show the effectiveness of this method across five evaluation metrics. <|reference_end|>", "<|reference_start|> Balanced Meta-Softmax for Long-Tailed Visual Recognition: Deep classifiers have achieved great success in visual recognition. However, real-world data is long-tailed by nature, leading to the mismatch between training and testing distributions. In this paper, we show that the Softmax function, though used in most classification tasks, gives a biased gradient estimation under the long-tailed setup. This paper presents Balanced Softmax, an elegant unbiased extension of Softmax, to accommodate the label distribution shift between training and testing. Theoretically, we derive the generalization bound for multiclass Softmax regression and show our loss minimizes the bound. In addition, we introduce Balanced Meta-Softmax, applying a complementary Meta Sampler to estimate the optimal class sample rate and further improve long-tailed learning. In our experiments, we demonstrate that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and instance segmentation tasks. <|reference_end|>" ]
[ 8, 14, 25, 39 ]
{"<|multi_cite_1_1|>": "ss-779980", "<|multi_cite_1_2|>": "arxiv-65515", "<|multi_cite_2_1|>": "ss-1370143", "<|multi_cite_2_2|>": "arxiv-129893", "<|multi_cite_2_3|>": "arxiv-60292", "<|multi_cite_3_1|>": "arxiv-237885", "<|multi_cite_3_2|>": "arxiv-199278", "<|multi_cite_3_3|>": "arxiv-229837", "<|multi_cite_4_1|>": "ss-681207", "<|multi_cite_4_2|>": "ss-1374399", "<|multi_cite_5_1|>": "arxiv-183807", "<|multi_cite_5_2|>": "arxiv-89252", "<|cite_6|>": "arxiv-22115", "<|cite_7|>": "arxiv-187800", "<|cite_8|>": "arxiv-271579", "<|cite_9|>": "ss-779980", "<|multi_cite_10_1|>": "arxiv-69417", "<|multi_cite_10_2|>": "arxiv-256871", "<|cite_11|>": "ss-1370143", "<|multi_cite_12_1|>": "ss-1374399", "<|multi_cite_12_2|>": "ss-681207", "<|multi_cite_13_1|>": "arxiv-137312", "<|multi_cite_13_2|>": "arxiv-183807", "<|multi_cite_13_3|>": "arxiv-89252", "<|multi_cite_14_1|>": "arxiv-22115", "<|multi_cite_14_2|>": "ss-682394", "<|multi_cite_14_3|>": "arxiv-256871", "<|cite_15|>": "ss-728819", "<|cite_16|>": "arxiv-131338", "<|cite_17|>": "arxiv-187800", "<|multi_cite_18_1|>": "ss-1332470", "<|multi_cite_18_2|>": "arxiv-51600", "<|multi_cite_19_1|>": "ss-1096383", "<|multi_cite_19_2|>": "arxiv-255323", "<|multi_cite_20_1|>": "arxiv-229837", "<|multi_cite_20_2|>": "arxiv-331588", "<|multi_cite_20_3|>": "arxiv-331013", "<|multi_cite_20_4|>": "arxiv-199278", "<|multi_cite_21_1|>": "arxiv-210264", "<|multi_cite_21_2|>": "arxiv-279934", "<|multi_cite_21_3|>": "arxiv-306882", "<|cite_22|>": "arxiv-293859", "<|multi_cite_23_1|>": "arxiv-271579", "<|multi_cite_23_2|>": "arxiv-357132", "<|multi_cite_24_1|>": "ss-1096383", "<|multi_cite_24_2|>": "arxiv-255323", "<|multi_cite_25_1|>": "arxiv-229837", "<|multi_cite_25_2|>": "arxiv-331588", "<|multi_cite_25_3|>": "arxiv-331013", "<|multi_cite_25_4|>": "arxiv-199278", "<|multi_cite_26_1|>": "arxiv-127148", "<|multi_cite_26_2|>": "arxiv-315526", "<|multi_cite_27_1|>": "arxiv-252208", "<|multi_cite_27_2|>": "arxiv-308961", "<|multi_cite_27_3|>": "arxiv-311500", "<|multi_cite_27_4|>": "arxiv-294079", "<|multi_cite_27_5|>": "arxiv-293831", "<|multi_cite_27_6|>": "arxiv-320243", "<|multi_cite_27_7|>": "arxiv-376253", "<|multi_cite_27_8|>": "arxiv-373588", "<|multi_cite_27_9|>": "arxiv-396561", "<|cite_28|>": "arxiv-184156", "<|cite_29|>": "arxiv-315526", "<|cite_30|>": "arxiv-349790", "<|cite_31|>": "arxiv-271579", "<|cite_32|>": "arxiv-331675", "<|multi_cite_33_1|>": "arxiv-107403", "<|multi_cite_33_2|>": "arxiv-184156", "<|multi_cite_33_3|>": "arxiv-294724", "<|multi_cite_33_4|>": "arxiv-420515", "<|cite_34|>": "arxiv-107403", "<|cite_35|>": "arxiv-184463", "<|cite_36|>": "arxiv-62064", "<|multi_cite_37_1|>": "arxiv-184156", "<|multi_cite_37_2|>": "arxiv-141224", "<|cite_38|>": "arxiv-294724"}
2209.06203
<|paper_start|> Title: Normalizing Flows for Interventional Density Estimation Abstract: Normalizing Flows for Interventional Density Estimation: Existing machine learning methods for causal inference usually estimate quantities expressed via the mean of potential outcomes (e.g., average treatment effect). However, such quantities do not capture the full information about the distribution of potential outcomes. In this work, we estimate the density of potential outcomes after interventions from observational data. For this, we propose a novel, fully-parametric deep learning method called Interventional Normalizing Flows. Specifically, we combine two normalizing flows, namely (i) a nuisance flow for estimating nuisance parameters and (ii) a target flow for parametric estimation of the density of potential outcomes. We further develop a tractable optimization objective based on a one-step bias correction for efficient and doubly robust estimation of the target flow parameters. As a result, our Interventional Normalizing Flows offer a properly normalized density estimator. Across various experiments, we demonstrate that our Interventional Normalizing Flows are expressive and highly effective, and scale well with both sample size and high-dimensional confounding. To the best of our knowledge, our Interventional Normalizing Flows are the first proper fully-parametric, deep learning method for density estimation of potential outcomes. Introduction Causal inference increasingly makes use of machine learning methods to estimate treatment effects from observational data \citep[e.g.,][]{van2011targeted,kunzel2019metalearners,curth2021nonparametric,kennedy2022semiparametric}. This is relevant for various fields including medicine \citep[e.g.,][]{bica2021real}, marketing \citep[e.g.,][]{yang2020targeting}, and policy-making \citep[e.g.,][]{huenermund2021causal}. Here, causal inference from observational data promises great value, especially when experiments for determining treatment effects are costly or even unethical. The vast majority of the machine learning methods for causal inference estimate \emph{averaged} quantities expressed by the (conditional) mean of potential outcomes. Examples of such quantities are the average treatment effect (ATE) \citep[e.g.,][]{shi2019adapting, hatt2021estimating}, the conditional average treatment effect (CATE) \citep[e.g.,][]{shalit2017estimating, hassanpour2019learning, zhang2020learning}, and treatment-response curves \citep[e.g.,][]{bica2020estimating, nie2021vcnet}. Importantly, these estimates only describe averages \emph{without} distributional properties. However, making decisions based on averaged causal quantities can be misleading and, in some applications, even dangerous <|cite_start|> (Reference: Risk and uncertainty communication: This review briefly examines the vast range of techniques used to communicate risk assessments arising from statistical analysis. After discussing essential psychological and sociological issues, I focus on individual health risks and relevant research on communicating numbers, verbal expressions, graphics, and conveying deeper uncertainty. I then consider practice in a selection of diverse case studies, including gambling, the benefits and risks of pharmaceuticals, weather forecasting, natural hazards, climate change, environmental exposures, security and intelligence, industrial reliability, and catastrophic national and global risks. 
There are some tentative final conclusions, but the primary message is to acknowledge expert guidance, be clear about objectives, and work closely with intended audiences.) <|cite_end|> <|cite_start|> (Reference: Communicating uncertainty about facts, numbers and science: Uncertainty is an inherent part of knowledge, and yet in an era of contested expertise, many shy away from openly communicating their uncertainty about what they know, fearful of their audience's reaction. But what effect does communication of such epistemic uncertainty have? Empirical research is widely scattered across many disciplines. This interdisciplinary review structures and summarizes current practice and research across domains, combining a statistical and psychological perspective. This informs a framework for uncertainty communication in which we identify three objects of uncertainty—facts, numbers and science—and two levels of uncertainty: direct and indirect. An examination of current practices provides a scale of nine expressions of direct uncertainty. We discuss attempts to codify indirect uncertainty in terms of quality of the underlying evidence. We review the limited literature about the effects of communicating epistemic uncertainty on cognition, affect, trust and decision-making. While there is some evidence that communicating epistemic uncertainty does not necessarily affect audiences negatively, impact can vary between individuals and communication formats. Case studies in economic statistics and climate change illustrate our framework in action. We conclude with advice to guide both communicators and future researchers in this important but so far rather neglected field.) <|cite_end|>. On the one hand, if potential outcomes have different variances or numbers of modes, relying on the average quantities provides incomplete information about potential outcomes, and may inadvertently lead to local -- and not global -- optima during decision-making. On the other hand, distributional knowledge is needed to account for uncertainty in potential outcomes and thus informs how likely a certain outcome is. For example, in medicine, knowing the distribution of potential outcomes is highly important <|cite_start|> (Reference: Beyond the Mean: A Flexible Framework for Studying Causal Effects Using Linear Models: ) <|cite_end|>: it gives the probability that the potential outcome lies in a desired range, and thus defines the probability of treatment success or failure.\footnote{{For example, patients with prediabetes are oftentimes treated with metformin monotherapy, which reduces blood sugar (HbA1c) by an \emph{average} of 1.1\% (95\% confidence interval: 0.9 to 1.3\%) <|cite_start|> (Reference: Quantifying the effect of metformin treatment and dose on glycemic control: OBJECTIVE Metformin is the first-line oral medication recommended for glycemic control in patients with type 2 diabetes. We reviewed the literature to quantify the effect of metformin treatment on glycated hemoglobin (HbA1c) levels in all types of diabetes and examine the impact of differing doses on glycemic control. RESEARCH DESIGN AND METHODS MEDLINE, EMBASE, and the Cochrane Library were searched from 1950 to June 2010 for trials of at least 12 weeks’ duration in which diabetic patients were treated with either metformin monotherapy or as an add-on therapy. Data on change in HbA1c were pooled in a meta-analysis. Data from dose-comparison trials were separately pooled.
RESULTS A total of 35 trials were identified for the main analysis and 7 for the dose-comparison analysis. Metformin monotherapy lowered HbA1c by 1.12% (95% CI 0.92–1.32; I2 = 80%) versus placebo, metformin added to oral therapy lowered HbA1c by 0.95% (0.77–1.13; I2 = 77%) versus placebo added to oral therapy, and metformin added to insulin therapy lowered HbA1c by 0.60% (0.30–0.91; I2 = 79.8%) versus insulin only. There was a significantly greater reduction in HbA1c using higher doses of metformin compared with lower doses of metformin with no significant increase in side effects. CONCLUSIONS Evidence supports the effectiveness of metformin therapy in a clinically important lowering of HbA1c used as monotherapy and in combination with other therapeutic agents. There is potential for using higher doses of metformin to maximize glycemic control in diabetic patients without increasing gastrointestinal effects.) <|cite_end|>. Yet, there is often large \emph{skewness} in the potential outcome. While metformin monotherapy is highly effective for some individuals, it fails to achieve glycemic targets for 50\% of the patients <|cite_start|> (Reference: Second-line Glucose-Lowering Therapy in Type 2 Diabetes Mellitus: ) <|cite_end|>. In such cases, a second-line anti-diabetes drug is indicated. Crucially, standard confidence intervals cannot disclose that metformin is harmful to some patients, while densities can.}} Motivated by this, we aim to estimate the \textbf{\emph{density}} of potential outcomes. \begin{figure*}[tbp] \vspace{-0.1cm} \begin{center} \begin{minipage}{.59\textwidth} \includegraphics[width=\textwidth]{figures/cond-inter-counter} \end{minipage} \hskip 0.05in \begin{minipage}{.2\textwidth} \tiny \vspace{-0.2cm} \begin{align*} X := & U_X; \quad U_X \sim \text{Mixture}\big(0.5 N(0, 1) + 0.5 N(b, 1) \big) \\ \pi(x) & = \frac{N(X; 0, 1)}{N(X; 0, 1) + N(X; b, 1)} \\ A := & \begin{cases} 1, & -U_A < \log \big( \pi(x) / (1 - \pi(x))\big)\\ 0, & \text{otherwise} \end{cases}; U_A \sim \text{Logistic}(0, 1) \\ Y := & U_Y + \begin{cases} X^2 -1.82 X + 2, & A = 1 \\ <|paper_end|>
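To make the excerpt's core building block concrete: a normalizing flow yields a properly normalized density by pushing a simple base density through an invertible map and applying the change-of-variables formula p_Y(y) = p_Z(f^{-1}(y)) |d f^{-1}(y)/dy|. The NumPy sketch below checks this identity for the illustrative map f(z) = exp(z); it demonstrates the flow principle only and is not the paper's Interventional Normalizing Flows.

import numpy as np

def base_logpdf(z):                       # standard-normal base density
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi)

def flow_logpdf(y):                       # density of Y = exp(Z) via change of variables
    z = np.log(y)                         # f^{-1}(y)
    log_jac = -np.log(y)                  # log |d f^{-1}(y) / dy|
    return base_logpdf(z) + log_jac

# Monte Carlo check: histogram of pushed-forward samples vs. the analytic density.
rng = np.random.default_rng(0)
samples = np.exp(rng.standard_normal(200_000))
hist, edges = np.histogram(samples, bins=400, range=(0.01, 20.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - np.exp(flow_logpdf(centers)))))  # small (order 1e-2)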
[ "<|reference_start|> Risk and uncertainty communication: This review briefly examines the vast range of techniques used to communicate risk assessments arising from statistical analysis. After discussing essential psychological and sociological issues, I focus on individual health risks and relevant research on communicating numbers, verbal expressions, graphics, and conveying deeper uncertainty. I then consider practice in a selection of diverse case studies, including gambling, the benefits and risks of pharmaceuticals, weather forecasting, natural hazards, climate change, environmental exposures, security and intelligence, industrial reliability, and catastrophic national and global risks. There are some tentative final conclusions, but the primary message is to acknowledge expert guidance, be clear about objectives, and work closely with intended audiences. <|reference_end|>", "<|reference_start|> Communicating uncertainty about facts, numbers and science: Uncertainty is an inherent part of knowledge, and yet in an era of contested expertise, many shy away from openly communicating their uncertainty about what they know, fearful of their audience's reaction. But what effect does communication of such epistemic uncertainty have? Empirical research is widely scattered across many disciplines. This interdisciplinary review structures and summarizes current practice and research across domains, combining a statistical and psychological perspective. This informs a framework for uncertainty communication in which we identify three objects of uncertainty—facts, numbers and science—and two levels of uncertainty: direct and indirect. An examination of current practices provides a scale of nine expressions of direct uncertainty. We discuss attempts to codify indirect uncertainty in terms of quality of the underlying evidence. We review the limited literature about the effects of communicating epistemic uncertainty on cognition, affect, trust and decision-making. While there is some evidence that communicating epistemic uncertainty does not necessarily affect audiences negatively, impact can vary between individuals and communication formats. Case studies in economic statistics and climate change illustrate our framework in action. We conclude with advice to guide both communicators and future researchers in this important but so far rather neglected field. <|reference_end|>", "<|reference_start|> Beyond the Mean: A Flexible Framework for Studying Causal Effects Using Linear Models: <|reference_end|>", "<|reference_start|> Second-line Glucose-Lowering Therapy in Type 2 Diabetes Mellitus: <|reference_end|>" ]
[ 0, 1, 2, 4 ]
{"<|multi_cite_2_1|>": "ss-950779", "<|multi_cite_2_2|>": "ss-950780", "<|cite_3|>": "ss-950781", "<|cite_4|>": "ss-950782", "<|cite_1|>": "ss-950778"}
2012.02948
<|paper_start|> Title: A Mechanical System Inspired Microscopic Traffic Model: Modeling, Analysis, and Validation Abstract: A Mechanical System Inspired Microscopic Traffic Model: Modeling, Analysis, and Validation: In this paper, we develop a mechanical system inspired microscopic traffic model to characterize the longitudinal interaction dynamics among a chain of vehicles. In particular, we extend our prior work on a mass-spring-damper-clutch based car-following model between two vehicles to the multi-vehicle scenario. This model can naturally capture the driver's tendency to maintain the same speed as the vehicle ahead while keeping a (speed-dependent) desired spacing. It is also capable of characterizing the impact of the following vehicle on the preceding vehicle, which is generally neglected in existing models. A new string stability criterion is defined for the considered multi-vehicle dynamics, and stability analysis is performed on the system parameters and time delays. An efficient online parameter identification algorithm, sequential recursive least squares with inverse QR decomposition (SRLS-IQR), is developed to estimate the driving-related model parameters. These real-time estimated parameters can be employed in advanced longitudinal control systems to enable accurate prediction of vehicle trajectories for improved safety and fuel efficiency. The proposed model and the parameter identification algorithm are validated on NGSIM, a naturalistic driving dataset, as well as our own connected vehicle driving data. Promising performance is demonstrated. Introduction Rising traffic congestion has become an increasingly frustrating societal problem, especially in large metropolitan areas across the globe. It has led to a variety of issues including great losses of time and money <|cite_start|> (Reference: INRIX Global Traffic Scorecard: ) <|cite_end|>, elevated stress and frustration in drivers <|cite_start|> (Reference: The relationship between traffic congestion, driver stress and direct versus indirect coping behaviours: Drivers experiencing rush hour congestion were interviewed using cellular telephones to study stress and coping responses. Measures were taken of each driver's predisposition to stress (trait stress) as well as their reactions to the experience of either low or high traffic congestion (state stress). Two interviews were conducted during the trip when drivers experienced both low and high congestion conditions. Although state stress was greatest for all drivers experiencing the high congestion condition, a trait X situation interaction was obtained, indicating that stress levels were highest for high trait stress drivers experiencing the congested roadway. In terms of trait coping behaviours, participants indicated a preference for direct over indirect behaviours. A greater variety of direct and indirect behaviours were reported in high congestion. Reports of aggressive behaviours showed the greatest increase from low to high congestion. Comments on the use of cellular telephones in methodology are offered.) <|cite_end|>, and intensified air pollution <|cite_start|> (Reference: Air pollution and health risks due to vehicle traffic.: ) <|cite_end|>. Based on a recent report from INRIX <|cite_start|> (Reference: INRIX Global Traffic Scorecard: ) <|cite_end|>, traffic congestion cost the U.S. more than \$300 billion, and drivers in big cities spent more than 100 hours in congestion in 2017 alone.
A number of traffic control technologies have thus been pursued to mitigate the congestion, including ramp metering <|cite_start|> (Reference: Local Ramp Metering in the Presence of a Distant Downstream Bottleneck: Theoretical Analysis and Simulation Study: The well-known feedback ramp metering algorithm ALINEA can be applied for local ramp metering or used as a key component in a coordinated ramp metering system. ALINEA uses real-time occupancy measurements from the ramp-flow merging area that may be at most few hundred meters downstream of the metered on-ramp nose. In many practical cases, however, bottlenecks with smaller capacity than the merging area may exist further downstream for various reasons, which suggests using measurements from those further downstream bottlenecks rather than from the merging area. This paper addresses the local ramp metering problem in such a downstream bottleneck case. Theoretical analysis indicates that ALINEA may lead to a poorly damped closed-loop behavior in this case, but PI-ALINEA, which is a suitable proportional-integral (PI) extension of ALINEA, can lead to satisfactory control performance. The stability of the closed-loop ramp metering system with PI-ALINEA is rigorously proved by Lyapunov stability arguments. The root locus method is also employed to analyze the linearized closed-loop system performance of ALINEA and PI-ALINEA with and without a downstream bottleneck to provide insights on both controllers' performance. Simulation studies are conducted using a macroscopic traffic flow model to demonstrate that the ramp metering performance of ALINEA indeed deteriorates in the distant downstream bottleneck case, whereas a significant improvement is obtained using PI-ALINEA. Moreover, with its control parameters appropriately tuned, PI-ALINEA is found to be universally applicable to a range of distances between the on-ramp and downstream bottlenecks. This indicates that little fine-tuning would be necessary in field applications.) <|cite_end|> <|cite_start|> (Reference: Traffic-Responsive Linked Ramp-Metering Control: A new traffic-responsive ramp-metering strategy is presented that coordinates local ramp-metering actions, thus enabling the linked control of the inflow from two (or more) consecutive on-ramps to the freeway mainstream. The proposed linked ramp-metering scheme is simple and utterly reactive, i.e., based on readily available real-time measurements without any need for real-time model calculations or external disturbance prediction. The well-known feedback strategy, known as Asservissement LINeaire d'Entree Autoroutiere (ALINEA), is used at a local level. Simulation results are presented for a hypothetical freeway axis with two successive on-ramps. Some pitfalls and misapplications of the local ramp metering are also illustrated via appropriately designed simulation scenarios. The proposed linked strategy is demonstrated to outperform the uncoordinated local ramp metering and, thus, to increase the achievable control benefit over the no-control case. In fact, the new strategy is shown to reach the efficiency of sophisticated proactive optimal control schemes.) <|cite_end|>, dynamic speed limits <|cite_start|> (Reference: Microsimulation Analysis of Practical Aspects of Traffic Control with Variable Speed Limits: Mainstream traffic flow control (MTFC) with variable speed limits (VSLs) is a freeway traffic control method that aims to maximize throughput by regulating the mainstream flow upstream from a bottleneck. 
Previous studies in a macroscopic simulator have shown optimal and feedback MTFC potential to improve traffic conditions. In this paper, local feedback MTFC is applied in microscopic simulation for an on-ramp merge bottleneck. Traffic behavior reveals important aspects that had not been previously captured in macroscopic simulation. Mainly, the more realistic VSL application at specific points instead of along an entire freeway section produces a slower traffic response to speed limit changes. In addition, the nonlinear capacity flow/speed limit relation observed in the microscopic model is more pronounced than what was observed at the macroscopic level. After appropriate modifications in the control law, significant improvements in traffic conditions are obtained.) <|cite_end|> <|cite_start|> (Reference: Optimal coordination of variable speed limits to suppress shock waves: A model predictive control (MPC) approach is presented to optimally coordinate variable speed limits for highway traffic. A safety constraint incorporated in the controller is formulated that prevents drivers from encountering speed limit drops larger than, say, 10 km/h. The control objective is to minimize the total time that vehicles spend in the network. This approach results in dynamic speed limits that reduce or even eliminate shock waves. To predict the evolution of the traffic flows in the network, which is required by MPC, an adapted version of the METANET model is used that takes the variable speed limits into account. The performance of the discrete-valued and safety-constrained controllers is compared with the performance of the continuous-valued unconstrained controller. It is found that both types of controllers result in a network with less congestion, a higher outflow, and hence a lower total time spent for drivers. For the benchmark problem, the performance of the discrete controller with safety constraints is comparable with the continuous controller without constraints.) <|cite_end|>, vehicle platooning <|cite_start|> (Reference: Vehicle Platooning: A Brief Survey and Categorization: In this paper, the vehicle platooning literature published between 1994 and 2010 is categorized and discussed. The paper includes a general introduction and overview of vehicle platooning and a technical description of the methodology. Recent trends in Vehicle Platooning are presented and discussed. The results are reviewed and the vehicle platooning literature is categorized into subcategories within the broader division of application focused and theory focused results. Issues and challenges faced in platooning are discussed.Copyright © 2011 by ASME) <|cite_end|> <|cite_start|> (Reference: Experimental evaluation of decentralized cooperative cruise control for heavy-duty vehicle platooning: ) <|cite_end|>, and active traffic light control <|cite_start|> (Reference: Intelligent Traffic Light Controlling Algorithms Using Vehicular Networks: In this paper, we propose an intelligent traffic light controlling (ITLC) algorithm. ITLC is intended to schedule the phases of each isolated traffic light efficiently. This algorithm considers the real-time traffic characteristics of the competing traffic flows at the signalized road intersection. Moreover, we have adopted the ITLC algorithm to design a traffic scheduling algorithm for an arterial street scenario; we have thus proposed an arterial traffic light (ATL) controlling algorithm. 
In the ATL controlling algorithm, the intelligent traffic lights installed at each road intersection coordinate with each other to generate an efficient traffic schedule for the entire road network. We report on the performance of ITLC and ATL algorithms for several scenarios using NS-2. From the experimental results, we infer that the ITLC algorithm reduces, at each isolated traffic light, the queuing delay and increases the traffic fluency by 30% compared with the online algorithm (OAF) traffic light scheduling algorithm. The latter algorithm achieved the best performance when compared with the OAF traffic light scheduling algorithm. On the other hand, the ATL controlling algorithm increases the traffic fluency of traveling vehicles at arterial street coordinations by 70% more than the random and separate traffic light scheduling system. Furthermore, compared with the previously introduced traffic scheduling ART-SYS, the ATL controlling algorithm decreases the average delay at each traffic light by 10%.) <|cite_end|> <|cite_start|> (Reference: Adaptive Quasi-Dynamic Traffic Light Control: We consider the traffic light control problem for a single intersection modeled as a stochastic hybrid system. We study a quasi-dynamic policy based on partial state information defined by detecting whether vehicle backlogs are above or below certain thresholds. The policy is parameterized by green and red cycle lengths as well as the road content thresholds. Using infinitesimal perturbation analysis, we derive online gradient estimators of a cost metric with respect to the controllable light cycles and threshold parameters and use these estimators to iteratively adjust all the controllable parameters through an online gradient-based algorithm so as to improve the overall system performance under various traffic conditions. The results obtained by applying this methodology to a simulated urban setting are also included.) <|cite_end|> <|cite_start|> (Reference: Multi-Agent Deep Reinforcement Learning for Large-scale Traffic Signal Control: Reinforcement learning (RL) is a promising data-driven approach for adaptive traffic signal control (ATSC) in complex urban traffic networks, and deep neural networks further enhance its learning power. However, centralized RL is infeasible for large-scale ATSC due to the extremely high dimension of the joint action space. Multi-agent RL (MARL) overcomes the scalability issue by distributing the global control to each local RL agent, but it introduces new challenges: now the environment becomes partially observable from the viewpoint of each local agent due to limited communication among agents. Most existing studies in MARL focus on designing efficient communication and coordination among traditional Q-learning agents. This paper presents, for the first time, a fully scalable and decentralized MARL algorithm for the state-of-the-art deep RL agent: advantage actor critic (A2C), within the context of ATSC. In particular, two methods are proposed to stabilize the learning procedure, by improving the observability and reducing the learning difficulty of each local agent. The proposed multi-agent A2C is compared against independent A2C and independent Q-learning algorithms, in both a large synthetic traffic grid and a large real-world traffic network of Monaco city, under simulated peak-hour traffic dynamics. Results demonstrate its optimality, robustness, and sample efficiency over other state-of-the-art decentralized MARL algorithms.) <|cite_end|>. 
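For concreteness, the ramp-metering references above revolve around the ALINEA integral feedback law r(k) = r(k-1) + K_R [o_target - o(k)], which steers the metered on-ramp flow toward a target downstream occupancy. The sketch below closes this loop against a crude toy plant; the gain, target, bounds, and plant dynamics are illustrative assumptions, not values from the cited studies.

def alinea_step(r_prev, o_meas, o_target=25.0, K_R=70.0,
                r_min=200.0, r_max=1800.0):
    """One ALINEA update; r is the on-ramp flow [veh/h], occupancy in percent."""
    r = r_prev + K_R * (o_target - o_meas)
    return min(max(r, r_min), r_max)      # clip to admissible metering rates

# Toy closed loop: downstream occupancy relaxes toward r/60, a crude
# stand-in for the real merge-area dynamics.
o, r = 32.0, 900.0
for k in range(15):
    r = alinea_step(r, o)
    o += 0.3 * (r / 60.0 - o)
    print(f"k={k:2d}  ramp flow = {r:7.1f} veh/h  occupancy = {o:5.2f} %")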
It is worth noting that all those technologies require accurate estimation and prediction of real-time traffic, which creates a critical need to have a good understanding of the traffic dynamics and flow. Therefore, a multitude of traffic models have been investigated to study traffic characteristics and flow evolution. These models can generally be classified into two categories: macroscopic and microscopic. Inspired by continuum fluid flow theories, macroscopic models focus on the study of macroscopic traffic characteristics such as flow speed, traffic density, and traffic volume <|cite_start|> (Reference: THE CELL TRANSMISSION MODEL, PART II: NETWORK TRAFFIC: ) <|cite_end|>. These models can be further classified, based on different assumptions made, as kinematic flow models <|cite_start|> (Reference: Shock Waves on the Highway: A simple theory of traffic flow is developed by replacing individual vehicles with a continuous “fluid” density and applying an empirical relation between speed and density. Characteristic features of the resulting theory are a simple “graph-shearing” process for following the development of traffic waves in time and the frequent appearance of shock waves. The effect of a traffic signal on traffic streams is studied and found to exhibit a threshold effect wherein the disturbances are minor for light traffic but suddenly build to large values when a critical density is exceeded.) <|cite_end|> <|cite_start|> (Reference: A finite difference approximation of the kinematic wave model of traffic flow: ) <|cite_end|> <|cite_start|> (Reference: MASTER: macroscopic traffic simulation based on a gas-kinetic, non-local traffic model: ) <|cite_end|>, dynamic flow models <|cite_start|> (Reference: Cluster effect in initially homogeneous traffic flow: This paper presents the nonlinear cluster effect in initially homogeneous traffic flow. It is shown that, in an initially homogeneous traffic flow, a region of high density and low average velocity of cars can spontaneously appear, if the density of cars in the flow exceeds some critical value. This region, a cluster of cars, can move with constant velocity in the opposite direction or in the direction of the flow, depending on the selected parameters and the initial conditions of the traffic flow. Based on numerical simulations, the kinetics of cluster formation and the shape of stationary moving clusters are found. These results can explain the spontaneous appearance of a traffic congestion, or phantom traffic jam, that appears in real traffic flow without obvious reasons.) <|cite_end|> <|cite_start|> (Reference: Dynamic Model for estimating the Macroscopic Fundamental Diagram: ) <|cite_end|>, and lattice hydrodynamic models <|cite_start|> (Reference: Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles: ) <|cite_end|> <|cite_start|> (Reference: The theoretical analysis of the lattice hydrodynamic models for traffic flow theory: ) <|cite_end|>. On the other hand, microscopic traffic models deal with local vehicle interactions in terms of relative spacing, speed, and acceleration for individual vehicles. There are two main types of microscopic models: cellular automata (CA) models and car-following (CF) models. 
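As a concrete instance of the kinematic flow models cited in this paragraph, the classical Lighthill-Whitham-Richards formulation conserves vehicles and closes the model with an equilibrium speed-density relation; with the illustrative Greenshields closure it reads
\[
\frac{\partial \rho}{\partial t} + \frac{\partial \big(\rho\, v(\rho)\big)}{\partial x} = 0,
\qquad
v(\rho) = v_f \Big(1 - \frac{\rho}{\rho_j}\Big),
\]
where $\rho$ is the traffic density, $v_f$ the free-flow speed, and $\rho_j$ the jam density. The Greenshields closure is one common choice among several and is not necessarily the one adopted by the cited works.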
The CA models characterize human driving behaviors using stochastic discrete event systems, which are capable of modeling lane change behaviors <|cite_start|> (Reference: A behavioural car-following model for computer simulation: ) <|cite_end|>, whereas the CF models are concerned with the following vehicle's interaction with its lead vehicle in a single-lane setting <|cite_start|> (Reference: An operational analysis of traffic dynamics: The dynamics of a line of traffic composed of n vehicles is studied mathematically. It is postulated that the movements of the several vehicles are controlled by an idealized ``law of separation.'' The law considered in the analysis specifies that each vehicle must maintain a certain prescribed ``following distance'' from the preceding vehicle. This distance is the sum of a distance proportional to the velocity of the following vehicle and a certain given minimum distance of separation when the vehicles are at rest. By the application of this postulated law to the motion of the column of vehicles, the differential equations governing the dynamic state of the system are obtained. The solution of the dynamical equations for several assumed types of motion of the leading vehicle is effected by the operational or Laplace transform method and the velocities and accelerations of the various vehicles are thus obtained.) <|cite_end|> <|cite_start|> (Reference: Nonlinear follow-the-leader models of traffic flow: A variety of nonlinear follow-the-leader models of traffic flow are discussed in the light of available observational and experimental data. Emphasis is placed on steady-state flow equations. Some trends regarding the advantages of certain follow-the-leader functionals over others are established. However, it is found from extensive correlation studies that more data are needed before one can establish the unequivocal superiority of one particular model. A discussion is given of some ideas concerning the possible reasons for the existence of a bimodal flow versus concentration curve especially for multilane highways.) <|cite_end|>. As the CF models underpin the important design principles of Advanced Driver Assistant Systems (ADAS) such as adaptive cruise control <|cite_start|> (Reference: Car-following: a historical review: ) <|cite_end|>, they will be the focus of this paper. Many CF models have been developed since the 1950s <|cite_start|> (Reference: An operational analysis of traffic dynamics: The dynamics of a line of traffic composed of n vehicles is studied mathematically. It is postulated that the movements of the several vehicles are controlled by an idealized ``law of separation.'' The law considered in the analysis specifies that each vehicle must maintain a certain prescribed ``following distance'' from the preceding vehicle. This distance is the sum of a distance proportional to the velocity of the following vehicle and a certain given minimum distance of separation when the vehicles are at rest. By the application of this postulated law to the motion of the column of vehicles, the differential equations governing the dynamic state of the system are obtained. The solution of the dynamical equations for several assumed types of motion of the leading vehicle is effected by the operational or Laplace transform method and the velocities and accelerations of the various vehicles are thus obtained.
Consideration is given to the use of an electrical analog computer for studying the dynamical e...) <|cite_end|> <|cite_start|> (Reference: Nonlinear follow-the-leader models of traffic flow: A variety of nonlinear follow-the-leader models of traffic flow are discussed in the light of available observational and experimental data. Emphasis is placed on steady-state flow equations. Some trends regarding the advantages of certain follow-the-leader functionals over others are established. However, it is found from extensive correlation studies that more data are needed before one can establish the unequivocal superiority of one particular model. A discussion is given of some ideas concerning the possible reasons for the existence of a bimodal flow versus concentration curve especially for multilane highways.) <|cite_end|> <|cite_start|> (Reference: Non integer car following models: AN INVESTIGATION WAS MADE OF A CONTINUUM OF NON-INTEGER CAR FOLLOWING MODELS FOR THE DEVELOPMENT OF DETERMINISTIC FLOW MODELS, WHICH DESCRIBE INTERRELATIONSHIPS BETWEEN FLOW CHARACTERISTICS. GAZIS AND OTHERS HAVE DEVELOPED THE GENERALIZED CAR FOLLOWING EQUATION. GAZIS AND DREW HAVE SHOWN THAT THERE IS A RELATION BETWEEN CAR FOLLOWING MODELS AND MACROSCOPIC MODELS. THROUGH SUCH INTERRELATIONSHIPS, MICROSCOPIC AND MACROSCOPIC MODELS CAN BE MUTUALLY COMPARED. THEORETICAL AND EMPIRICAL MODELS WERE TESTED AGAINST A DATA SET FROM THE EISENHOWER EXPRESSWAY IN CHICAGO. THE PROCEDURE OF EVALUATION USED LINEAR REGRESSION ANALYSIS TO DETERMINE ESTIMATED SPEED-DENSITY CURVES. THE CURVES WERE EVALUATED BY A STATISTICAL METHOD /MINIMIZATION OF THE MEAN DEVIATIONS OF THE DATA POINTS FROM THE REGRESSION CURVE/ AND THE APPLICATION OF TRAFFIC FLOW CHARACTERISTICS AS EVALUATION CRITERIA. MODELS OF BEST FIT HAVE BEEN OBTAINED FROM /CONTINUUM/ PLANE IN WHICH CONTOUR MAPS OF MEAN DEVIATION AND CRITICAL FLOW CHARACTERISTIC LEVELS WERE GRAPHICALLY SUPERIMPOSED. THE SUPERPOSITION THEN LIMITS AREA COMBINATIONS WHICH FULFILL THE CRITERIA OF EVALUATION. THE STUDY STRESSES THE IMPORTANT INFLUENCE OF THE FLOW CHARACTERISTICS ON THE EVALUATION. MAINLY, THE CRITICAL LEVEL OF THE FREE SPEED, JAM DENSITY, AND MAXIMUM FLOW IS VERY SIGNIFICANT AND IMPOSES GREAT RESTRICTIONS ON THE SELECTION OF APPROPRIATE MODELS.) <|cite_end|> <|cite_start|> (Reference: CAR FOLLOWING IN AN URBAN NETWORK: SIMULATION AND EXPERIMENTS: ) <|cite_end|> <|cite_start|> (Reference: A parameter identification of a car following model: It has been observed on Japanese motorways recently that traffic congestion occurs at bottlenecks, such as sags and tunnels on uninterrupted sections. These bottleneck phenomena have not been studied in detail partly because of the difficulty in acquisition of observed data from real traffic flow. In order to study the bottleneck phenomenon and car-following behavior on sags, an observation is made using a kite balloon at a sag bottleneck on Tomei Expressway. Parameter identification of a new car-following model suggested by Koshi is conducted using the observed data for each vehicle by the kite balloon. The identification is formulated for each observed vehicle by an output error least squares method (OELS) in terms of output errors in spacing and speed, and is solved through a complex search algorithm. It is found that the OELS formulation using the performance of spacing gives better results than that using speed. Both simulated speed and spacing are close to the observed data using the estimated parameters of the car-following model. 
The car-following model is found to be able to simulate the real traffic flow to a certain acceptable extent through the car-following simulation. Parameter analyses including probabilistic distribution and sensitivity for each parameter of the model are also discussed.) <|cite_end|> <|cite_start|> (Reference: Car-following: a historical review: ) <|cite_end|>, among which the Gazis-Herman-Rothery (GHR) model is arguably the most popular CF model. It was developed in the late 1950s by the General Motors research lab <|cite_start|> (Reference: Nonlinear follow-the-leader models of traffic flow: A variety of nonlinear follow-the-leader models of traffic flow are discussed in the light of available observational and experimental data. Emphasis is placed on steady-state flow equations. Some trends regarding the advantages of certain follow-the-leader functionals over others are established. However, it is found from extensive correlation studies that more data are needed before one can establish the unequivocal superiority of one particular model. A discussion is given of some ideas concerning the possible reasons for the existence of a bimodal flow versus concentration curve especially for multilane highways.) <|cite_end|> with the underlying hypothesis that the instantaneous acceleration of the ego vehicle is directly proportional and inversely proportional, respectively, to the relative speed and the relative distance from the lead vehicle, evaluated at time $\tau$ earlier (i.e., delay due to reaction time). Model parameters, including the polynomial orders of the speed and relative distance terms as well as a gain term, were calibrated using on-road driving data from wire-linked vehicles. A great number of GHR model variants have been developed since then, proposing different ``optimal'' parameter combinations based on driving data from various experimental setups <|cite_start|> (Reference: Non integer car following models: AN INVESTIGATION WAS MADE OF A CONTINUUM OF NON-INTEGER CAR FOLLOWING MODELS FOR THE DEVELOPMENT OF DETERMINISTIC FLOW MODELS, WHICH DESCRIBE INTERRELATIONSHIPS BETWEEN FLOW CHARACTERISTICS. GAZIS AND OTHERS HAVE DEVELOPED THE GENERALIZED CAR FOLLOWING EQUATION. GAZIS AND DREW HAVE SHOWN THAT THERE IS A RELATION BETWEEN CAR FOLLOWING MODELS AND MACROSCOPIC MODELS. THROUGH SUCH INTERRELATIONSHIPS, MICROSCOPIC AND MACROSCOPIC MODELS CAN BE MUTUALLY COMPARED. THEORETICAL AND EMPIRICAL MODELS WERE TESTED AGAINST A DATA SET FROM THE EISENHOWER EXPRESSWAY IN CHICAGO. THE PROCEDURE OF EVALUATION USED LINEAR REGRESSION ANALYSIS TO DETERMINE ESTIMATED SPEED-DENSITY CURVES. THE CURVES WERE EVALUATED BY A STATISTICAL METHOD /MINIMIZATION OF THE MEAN DEVIATIONS OF THE DATA POINTS FROM THE REGRESSION CURVE/ AND THE APPLICATION OF TRAFFIC FLOW CHARACTERISTICS AS EVALUATION CRITERIA. MODELS OF BEST FIT HAVE BEEN OBTAINED FROM /CONTINUUM/ PLANE IN WHICH CONTOUR MAPS OF MEAN DEVIATION AND CRITICAL FLOW CHARACTERISTIC LEVELS WERE GRAPHICALLY SUPERIMPOSED. THE SUPERPOSITION THEN LIMITS AREA COMBINATIONS WHICH FULFILL THE CRITERIA OF EVALUATION. THE STUDY STRESSES THE IMPORTANT INFLUENCE OF THE FLOW CHARACTERISTICS ON THE EVALUATION. MAINLY, THE CRITICAL LEVEL OF THE FREE SPEED, JAM DENSITY, AND MAXIMUM FLOW IS VERY SIGNIFICANT AND IMPOSES GREAT RESTRICTIONS ON THE SELECTION OF APPROPRIATE MODELS.) <|cite_end|> <|cite_start|> (Reference: REACTION AND ANTICIPATION IN THE CAR-FOLLOWING BEHAVIOR.: ) <|cite_end|>.
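For concreteness, this hypothesis is commonly written as the following stimulus-response equation (a standard textbook statement of the GHR model; the symbols below follow conventional notation rather than any one of the cited calibrations):
\begin{equation*}
a_n(t) = c\,v_n^{m}(t)\,\frac{\Delta v(t-\tau)}{\Delta x^{\,l}(t-\tau)},
\end{equation*}
where $a_n$ and $v_n$ are the acceleration and speed of the ego (following) vehicle, $\Delta v$ and $\Delta x$ are the relative speed and spacing with respect to the lead vehicle, $\tau$ is the reaction delay, $c$ is the gain, and the exponents $m$ and $l$ are the polynomial orders that the GHR variants calibrate differently.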
Another popular class of CF models is the Helly models (also known as the optimal velocity model), which introduce the idea of a desired spacing dependent on speed and/or acceleration and explicitly consider an error term. Different experimental setups were later proposed, and several variants have since been developed based on various experimental datasets <|cite_start|> (Reference: CAR FOLLOWING IN AN URBAN NETWORK: SIMULATION AND EXPERIMENTS: ) <|cite_end|> <|cite_start|> (Reference: A parameter identification of a car following model: It has been observed on Japanese motorways recently that traffic congestion occurs at bottlenecks, such as sags and tunnels on uninterrupted sections. These bottleneck phenomena have not been studied in detail partly because of the difficulty in acquisition of observed data from real traffic flow. In order to study the bottleneck phenomenon and car-following behavior on sags, an observation is made using a kite balloon at a sag bottleneck on Tomei Expressway. Parameter identification of a new car-following model suggested by Koshi is conducted using the observed data for each vehicle by the kite balloon. The identification is formulated for each observed vehicle by an output error least squares method (OELS) in terms of output errors in spacing and speed, and is solved through a complex search algorithm. It is found that the OELS formulation using the performance of spacing gives better results than that using speed. Both simulated speed and spacing are close to the observed data using the estimated parameters of the car-following model. The car-following model is found to be able to simulate the real traffic flow to a certain acceptable extent through the car-following simulation. Parameter analyses including probabilistic distribution and sensitivity for each parameter of the model are also discussed.) <|cite_end|>. Other types of models also exist, including fuzzy logic-based models <|cite_start|> (Reference: CAR-FOLLOWING MODEL BASED ON FUZZY INFERENCE SYSTEM: Car-following theory has been receiving renewed attention for its use in the analysis of traffic flow characteristics and vehicle separation control under the IVHS. A car-following model that uses the fuzzy inference system, which consists of many straightforward natural language-based driving rules, is proposed. It predicts the reaction of the driver of the following vehicle (acceleration-deceleration rates) given the action of the leading vehicle. A range of possible reaction is predicted and expressed by the fuzzy membership function. The model is applied to the analysis of traffic stability and speed-density relationship. For traffic stability, the results are compared with those derived from the deterministic approach. The speed-density relationship derived from the model is compared with a set of actual flow data. The predicted range is found to be reasonable. The proposed fuzzy approach helps explain the scatter of the actual data as possibility rather than random variation.) <|cite_end|>, collision avoidance models <|cite_start|> (Reference: A behavioural car-following model for computer simulation: ) <|cite_end|>, and psychophysical models <|cite_start|> (Reference: Perceptual Factors in Car-Following: ) <|cite_end|> <|cite_start|> (Reference: EXPERIMENTAL MEASUREMENTS OF PERCEPTUAL THRESHOLDS IN CAR-FOLLOWING: A SERIES OF EXPERIMENTS WAS CARRIED OUT TO MEASURE PERCEPTUAL THRESHOLDS OF DRIVERS IN CAR-FOLLOWING.
EARLY RESULTS FROM PILOT EXPERIMENTS REVEALED 2 BASIC DIFFICULTIES ASSOCIATED WITH PREVIOUS ATTEMPTS REPORTED IN THE LITERATURE TO MEASURE RELATIVE MOTION THRESHOLDS IN CAR- FOLLOWING: FIRST, IT WAS SOMETIMES POSSIBLE FOR THE SUBJECT TO PERCEIVE THE PITCHING OF THE LEAD CAR IN RESPONSE TO THE INITIATION OF AN ACCELERATION OR DECELERATION MANEUVER AND THEREBY INFER IMMEDIATELY THAT A CHANGE HAD OCCURRED. SECOND, PERMITTING THE SUBJECTS TO RESPOND WHEN THEY WERE SUFFICIENTLY CONFIDENT THAT THEY HAD DETECTED A CHANGE INTRODUCED AN UNMEASURABLE VARIABLE. THIS UNMEASURABLE VARIABLE ARISES BECAUSE A SUBJECT WISHING TO MAKE NO ERRORS MIGHT WAIT FOR LARGER STIMULI THAN ONE WISHING TO REGISTER QUICK RESPONSES, EVEN THOUGH BOTH MIGHT HAVE THE SAME SENSITIVITY. AN EXPERIMENT WAS DESIGNED TO CIRCUMVENT THESE DIFFICULTIES. BY MEANS OF AN OCCLUSION DEVICE, SUBJECTS SEATED AS PASSENGERS IN A FOLLWOING CAR TRAVELING AT 45 MPH WERE GIVEN CONTROLLED LOOKS, NORMALLY OF 4-SEC DURATION, AT A LEAD CAR MOVING AT A CONSTANT SPEED. FOR EACH EXPOSURE THE SUBJECTS INDICATED WHETHER THEY PERCEIVED NEGATIVE (THAT IS, THE CARS CAME CLOSER) OR POSITIVE RELATIVE MOTION. THE RESULTS INDICATE THAT (A) THE DOMINANT CUE USED TO JUDGE THE SIGN OF RELATIVE MOTION IS THE AVERAGE VALUE OF RELATIVE SPEED DIVIDED BY SPACING; (B) THERE IS RESPONSE BIAS IN FAVOR OF INDICATING NEGATIVE RATHER THAN POSTIVE RELATIVE MOTION; AND (C) THERE IS A HIGH LEVEL OF SENSITIVITY TO RELATIVE MOTION. FOR EXAMPLE, IF A LEAD CAR WERE CLOSING ON A FOLLOWING CAR AT 3 MPH, THE FOLLOWING DRIVER'S PROBABILITY OF CORRECTLY IDENTIFYING THE SIGN OF RELATIVE MOTION AS NEGATIVE RATHER THAN POSITIVE AFTER A 4-SEC OBSERVATION IS 0.99 WHEN THE SPACING IS 200 FT. /AUTHOR/) <|cite_end|>. The readers are referred to <|cite_start|> (Reference: Car-following: a historical review: ) <|cite_end|> for a comprehensive review of the CF models. Despite the large number of existing CF models, it is pointed out in <|cite_start|> (Reference: Car-following: a historical review: ) <|cite_end|> that the available relationships are still not rigorously understood and proven. \begin{figure*}[!t] \centering \includegraphics[width=0.98\textwidth]{figs/Traffic.pdf} \caption{\small A mechanical system inspired traffic model. The car-following behavior of a driver is characterized by a spring, a damper, a force scaling factor, and a clutch (delay).}\label{fig:traffic} \vspace{-10pt} \end{figure*} To obtain a CF model with better physical interpretability, in our prior work <|cite_start|> (Reference: A New Microscopic Traffic Model Using a Spring-Mass-Damper-Clutch System: Microscopic traffic models describe how cars interact with their neighbors in an uninterrupted traffic flow and are frequently used for reference in advanced vehicle control design. In this paper, we propose a novel mechanical system inspired microscopic traffic model using a mass-spring-damper-clutch system. This model naturally captures the ego vehicle's resistance to large relative speed and deviation from a (driver and speed dependent) desired relative distance when following the lead vehicle. Comparing to existing car following (CF) models, this model offers physically interpretable insights on the underlying CF dynamics, and is able to characterize the impact of the ego vehicle on the lead vehicle, which is neglected in existing CF models. 
Thanks to the nonlinear wave propagation analysis techniques for mechanical systems, the proposed model therefore has great scalability so that multiple mass-spring-damper-clutch system can be chained to study the macroscopic traffic flow. We investigate the stability of the proposed model on the system parameters and the time delay using spectral element method. We also develop a parallel recursive least square with inverse QR decomposition (PRLS-IQR) algorithm to identify the model parameters online. These real-time estimated parameters can be used to predict the driving trajectory that can be incorporated in advanced vehicle longitudinal control systems for improved safety and fuel efficiency. The PRLS-IQR is computationally efficient and numerically stable so it is suitable for online implementation. The traffic model and the parameter identification algorithm are validated on both simulations and naturalistic driving data from multiple drivers. Promising performance is demonstrated.) <|cite_end|>, we developed a mass-spring-damper-clutch traffic model that captures many natural driving behaviors in car-following dynamics. Specifically, a mechanical spring between two masses (the lead and ego vehicles) represents the ego vehicle's tendency to accelerate/decelerate when the relative distance to the preceding vehicle is too large/small; a mechanical damper characterizes the ego vehicle's tendency to follow a speed similar to that of the preceding vehicle; and a mechanical clutch system models the driver's delayed actions due to the reaction time. The model is validated using naturalistic driving data <|cite_start|> (Reference: A New Microscopic Traffic Model Using a Spring-Mass-Damper-Clutch System: Microscopic traffic models describe how cars interact with their neighbors in an uninterrupted traffic flow and are frequently used for reference in advanced vehicle control design. In this paper, we propose a novel mechanical system inspired microscopic traffic model using a mass-spring-damper-clutch system. This model naturally captures the ego vehicle's resistance to large relative speed and deviation from a (driver and speed dependent) desired relative distance when following the lead vehicle. Comparing to existing car following (CF) models, this model offers physically interpretable insights on the underlying CF dynamics, and is able to characterize the impact of the ego vehicle on the lead vehicle, which is neglected in existing CF models. Thanks to the nonlinear wave propagation analysis techniques for mechanical systems, the proposed model therefore has great scalability so that multiple mass-spring-damper-clutch system can be chained to study the macroscopic traffic flow. We investigate the stability of the proposed model on the system parameters and the time delay using spectral element method. We also develop a parallel recursive least square with inverse QR decomposition (PRLS-IQR) algorithm to identify the model parameters online. These real-time estimated parameters can be used to predict the driving trajectory that can be incorporated in advanced vehicle longitudinal control systems for improved safety and fuel efficiency. The PRLS-IQR is computationally efficient and numerically stable so it is suitable for online implementation. The traffic model and the parameter identification algorithm are validated on both simulations and naturalistic driving data from multiple drivers. Promising performance is demonstrated.) <|cite_end|>.
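As a minimal sketch of this construction (an editorial illustration assuming a linear spring of stiffness $k$, a linear damper with coefficient $c$, a constant desired gap $s_0$, and an engaged clutch introducing the reaction delay $\tau$; the exact formulation in the cited work may differ), the ego vehicle of mass $m$ obeys
\begin{equation*}
m\,\ddot{x}_e(t) = k\big[x_l(t-\tau) - x_e(t-\tau) - s_0\big] + c\big[\dot{x}_l(t-\tau) - \dot{x}_e(t-\tau)\big],
\end{equation*}
where $x_l$ and $x_e$ denote the positions of the lead and ego vehicles: the spring term penalizes deviation from the desired spacing, while the damper term penalizes the relative speed, mirroring the two tendencies described above.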
Based on our preliminary work <|cite_start|> (Reference: A New Microscopic Traffic Model Using a Spring-Mass-Damper-Clutch System: Microscopic traffic models describe how cars interact with their neighbors in an uninterrupted traffic flow and are frequently used for reference in advanced vehicle control design. In this paper, we propose a novel mechanical system inspired microscopic traffic model using a mass-spring-damper-clutch system. This model naturally captures the ego vehicle's resistance to large relative speed and deviation from a (driver and speed dependent) desired relative distance when following the lead vehicle. Comparing to existing car following (CF) models, this model offers physically interpretable insights on the underlying CF dynamics, and is able to characterize the impact of the ego vehicle on the lead vehicle, which is neglected in existing CF models. Thanks to the nonlinear wave propagation analysis techniques for mechanical systems, the proposed model therefore has great scalability so that multiple mass-spring-damper-clutch system can be chained to study the macroscopic traffic flow. We investigate the stability of the proposed model on the system parameters and the time delay using spectral element method. We also develop a parallel recursive least square with inverse QR decomposition (PRLS-IQR) algorithm to identify the model parameters online. These real-time estimated parameters can be used to predict the driving trajectory that can be incorporated in advanced vehicle longitudinal control systems for improved safety and fuel efficiency. The PRLS-IQR is computationally efficient and numerically stable so it is suitable for online implementation. The traffic model and the parameter identification algorithm are validated on both simulations and naturalistic driving data from multiple drivers. Promising performance is demonstrated.) <|cite_end|>, in this paper, we extend the mass-spring-damper-clutch model from two vehicles to multiple vehicles. In particular, we model the interactions of multiple vehicles in a single lane as a chain of masses interconnected with springs and dampers between neighboring vehicles. This extended mechanical system-inspired model retains the physical interpretability and can capture the impact of the following vehicle on the leading vehicle, which is neglected in existing CF models. Due to the coupled dynamics resulting from the reaction forces between the masses (representing the vehicles), we define a new string stability criterion and analyze it with respect to the system parameters and reaction delays. Furthermore, real-time prediction of driving trajectories has been shown to be a key enabling technology in ADAS for achieving improved fuel economy and road safety <|cite_start|> (Reference: Connected cruise control among human-driven vehicles: Experiment-based parameter estimation and optimal control design: ) <|cite_end|>. Towards this end, we develop an efficient online parameter identification algorithm that exploits inverse QR decomposition <|cite_start|> (Reference: A Method for Recursive Least Squares Filtering Based Upon an Inverse QR Decomposition: A new computationally efficient algorithm for recursive least squares filtering is derived, which is based upon an inverse QR decomposition. The method solves directly for the time-recursive least squares filter vector, while avoiding the highly serial backsubstitution step required in previously derived direct QR approaches.
Furthermore, the method employs orthogonal rotation operations to recursively update the filter, and thus preserves the inherent stability properties of QR approaches to recursive least squares filtering. The results of simulations over extremely long data sets are also presented, which suggest stability of the new time-recursive algorithm. Finally, parallel implementation of the resulting method is briefly discussed, and computational wavefronts are displayed.) <|cite_end|> to identify the model parameters in real time. With the identified driving-related parameters, the vehicle trajectories can be predicted accordingly. The algorithm is computationally efficient and numerically stable, making it suitable for real-time implementations. Furthermore, we validate the proposed model and the parameter identification framework on the real-world NGSIM driving dataset, as well as on data from our own connected vehicle studies. Promising performance is demonstrated. The contributions of this paper include the following. First, we develop a mechanical system inspired microscopic traffic model for multiple vehicles using a chained mass-spring-damper system. The new model has great physical interpretability and can characterize the impact of the following vehicle on the preceding vehicle, which is ignored in existing models <|cite_start|> (Reference: Nonlinear follow-the-leader models of traffic flow: A variety of nonlinear follow-the-leader models of traffic flow are discussed in the light of available observational and experimental data. Emphasis is placed on steady-state flow equations. Some trends regarding the advantages of certain follow-the-leader functionals over others are established. However, it is found from extensive correlation studies that more data are needed before one can establish the unequivocal superiority of one particular model. A discussion is given of some ideas concerning the possible reasons for the existence of a bimodal flow versus concentration curve especially for multilane highways.) <|cite_end|> <|cite_start|> (Reference: CAR FOLLOWING IN AN URBAN NETWORK: SIMULATION AND EXPERIMENTS: ) <|cite_end|>. Second, based on the proposed model, we define a new string stability criterion and analyze string stability with respect to different system parameters and time delays. Last but not least, we develop an online parameter identification algorithm based on recursive least squares with inverse QR decomposition to estimate the model parameters in real time with great computational efficiency and numerical stability. The proposed models and the online parameter identification algorithm are validated on two naturalistic driving datasets, NGSIM and one collected from our own connected vehicle study, with promising results demonstrated. The remainder of this paper is organized as follows. We present our mechanical system inspired traffic model in Section~\ref{sec:2}, followed by the string stability definition and analysis of the model in Section~III. In Section~IV, the online parameter identification algorithm is described, whereas the validation of the model and parameter identification framework is presented in Section~V. Finally, concluding remarks and future work are discussed in Section~VI. <|paper_end|>
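As an illustrative sketch of the chained model and the string stability notion above (an editorial addition; the notation $m_i$, $k$, $c$, $s_0$, $\alpha$, $\tau_i$ is assumed for exposition and is not taken verbatim from the paper), vehicle $i$ in the chain is coupled to both of its neighbors through spring-damper pairs, with the reaction force from its follower attenuated by the scaling factor $\alpha \in [0,1]$ of Fig.~\ref{fig:traffic}:
\begin{equation*}
m_i\,\ddot{x}_i(t) = F_{i-1,i}(t-\tau_i) - \alpha\,F_{i,i+1}(t-\tau_i), \qquad F_{j,j+1}(t) = k\big[x_j(t)-x_{j+1}(t)-s_0\big] + c\big[\dot{x}_j(t)-\dot{x}_{j+1}(t)\big].
\end{equation*}
The second (reaction) term is what allows a follower to influence its leader. In the usual frequency-domain sense, if $G_i(j\omega)$ denotes the transfer function from the speed perturbation of vehicle $i-1$ to that of vehicle $i$, string stability requires $\lvert G_i(j\omega)\rvert \le 1$ for all $\omega$, so that disturbances attenuate as they propagate along the platoon.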
[ "<|reference_start|> Intelligent Traffic Light Controlling Algorithms Using Vehicular Networks: In this paper, we propose an intelligent traffic light controlling (ITLC) algorithm. ITLC is intended to schedule the phases of each isolated traffic light efficiently. This algorithm considers the real-time traffic characteristics of the competing traffic flows at the signalized road intersection. Moreover, we have adopted the ITLC algorithm to design a traffic scheduling algorithm for an arterial street scenario; we have thus proposed an arterial traffic light (ATL) controlling algorithm. In the ATL controlling algorithm, the intelligent traffic lights installed at each road intersection coordinate with each other to generate an efficient traffic schedule for the entire road network. We report on the performance of ITLC and ATL algorithms for several scenarios using NS-2. From the experimental results, we infer that the ITLC algorithm reduces, at each isolated traffic light, the queuing delay and increases the traffic fluency by 30% compared with the online algorithm (OAF) traffic light scheduling algorithm. The latter algorithm achieved the best performance when compared with the OAF traffic light scheduling algorithm. On the other hand, the ATL controlling algorithm increases the traffic fluency of traveling vehicles at arterial street coordinations by 70% more than the random and separate traffic light scheduling system. Furthermore, compared with the previously introduced traffic scheduling ART-SYS, the ATL controlling algorithm decreases the average delay at each traffic light by 10%. <|reference_end|>", "<|reference_start|> Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles: <|reference_end|>", "<|reference_start|> CAR-FOLLOWING MODEL BASED ON FUZZY INFERENCE SYSTEM: Car-following theory has been receiving renewed attention for its use in the analysis of traffic flow characteristics and vehicle separation control under the IVHS. A car-following model that uses the fuzzy inference system, which consists of many straightforward natural language-based driving rules, is proposed. It predicts the reaction of the driver of the following vehicle (acceleration-deceleration rates) given the action of the leading vehicle. A range of possible reaction is predicted and expressed by the fuzzy membership function. The model is applied to the analysis of traffic stability and speed-density relationship. For traffic stability, the results are compared with those derived from the deterministic approach. The speed-density relationship derived from the model is compared with a set of actual flow data. The predicted range is found to be reasonable. The proposed fuzzy approach helps explain the scatter of the actual data as possibility rather than random variation. <|reference_end|>", "<|reference_start|> A New Microscopic Traffic Model Using a Spring-Mass-Damper-Clutch System: Microscopic traffic models describe how cars interact with their neighbors in an uninterrupted traffic flow and are frequently used for reference in advanced vehicle control design. In this paper, we propose a novel mechanical system inspired microscopic traffic model using a mass-spring-damper-clutch system. This model naturally captures the ego vehicle's resistance to large relative speed and deviation from a (driver and speed dependent) desired relative distance when following the lead vehicle. 
Comparing to existing car following (CF) models, this model offers physically interpretable insights on the underlying CF dynamics, and is able to characterize the impact of the ego vehicle on the lead vehicle, which is neglected in existing CF models. Thanks to the nonlinear wave propagation analysis techniques for mechanical systems, the proposed model therefore has great scalability so that multiple mass-spring-damper-clutch system can be chained to study the macroscopic traffic flow. We investigate the stability of the proposed model on the system parameters and the time delay using spectral element method. We also develop a parallel recursive least square with inverse QR decomposition (PRLS-IQR) algorithm to identify the model parameters online. These real-time estimated parameters can be used to predict the driving trajectory that can be incorporated in advanced vehicle longitudinal control systems for improved safety and fuel efficiency. The PRLS-IQR is computationally efficient and numerically stable so it is suitable for online implementation. The traffic model and the parameter identification algorithm are validated on both simulations and naturalistic driving data from multiple drivers. Promising performance is demonstrated. <|reference_end|>" ]
[ 10, 19, 36, 44 ]
{"<|cite_1|>": "ss-1451527", "<|cite_2|>": "ss-2332646", "<|cite_3|>": "ss-751453", "<|cite_4|>": "ss-1451527", "<|multi_cite_5_1|>": "ss-930699", "<|multi_cite_5_2|>": "ss-1051020", "<|multi_cite_6_1|>": "ss-1196532", "<|multi_cite_6_2|>": "ss-1138507", "<|multi_cite_7_1|>": "ss-1712784", "<|multi_cite_7_2|>": "ss-2007688", "<|multi_cite_8_1|>": "ss-1051021", "<|multi_cite_8_2|>": "ss-1051022", "<|multi_cite_8_3|>": "arxiv-194924", "<|cite_9|>": "ss-749211", "<|multi_cite_10_1|>": "ss-1171746", "<|multi_cite_10_2|>": "ss-2332647", "<|multi_cite_10_3|>": "ss-1524467", "<|multi_cite_11_1|>": "ss-2332648", "<|multi_cite_11_2|>": "ss-1051023", "<|multi_cite_12_1|>": "ss-2332649", "<|multi_cite_12_2|>": "ss-2332650", "<|multi_cite_13_2|>": "ss-1147483", "<|multi_cite_14_1|>": "ss-1201974", "<|multi_cite_14_2|>": "ss-1076925", "<|cite_15|>": "ss-1700894", "<|multi_cite_16_1|>": "ss-1201974", "<|multi_cite_16_2|>": "ss-1076925", "<|multi_cite_16_4|>": "ss-2332651", "<|multi_cite_16_5|>": "ss-2332653", "<|multi_cite_16_6|>": "ss-2332654", "<|multi_cite_16_7|>": "ss-1700894", "<|cite_17|>": "ss-1076925", "<|multi_cite_18_2|>": "ss-2332651", "<|multi_cite_18_3|>": "ss-2332652", "<|multi_cite_20_1|>": "ss-2332653", "<|multi_cite_20_2|>": "ss-2332654", "<|cite_21|>": "ss-2332657", "<|multi_cite_22_2|>": "ss-1147483", "<|multi_cite_23_1|>": "ss-2332655", "<|multi_cite_23_2|>": "ss-2332656", "<|cite_24|>": "ss-1700894", "<|cite_25|>": "ss-1700894", "<|cite_26|>": "arxiv-194910", "<|cite_27|>": "arxiv-194910", "<|cite_28|>": "arxiv-194910", "<|cite_29|>": "ss-1804862", "<|cite_30|>": "ss-1051024", "<|multi_cite_31_1|>": "ss-1076925", "<|multi_cite_31_4|>": "ss-2332653"}
2110.12484
<|paper_start|> Title: Enabling Large Batch Size Training for DNN Models Beyond the Memory Limit While Maintaining Performance Abstract: Enabling Large Batch Size Training for DNN Models Beyond the Memory Limit While Maintaining Performance: Recent deep learning models are difficult to train using a large batch size, because commodity machines may not have enough memory to accommodate both the model and a large data batch. The batch size is one of the hyper-parameters used in the training model, and it is dependent on and limited by the target machine's memory capacity, because the batch can only occupy the memory remaining after the model is uploaded. Moreover, the data item size is also an important factor: if each data item is larger, then the batch size that can fit into the remaining memory becomes smaller. This paper proposes a method called Micro-Batch Processing (MBP) to address this problem. This method helps deep learning models train by providing a batch processing method that splits a batch into smaller batches that fit in the remaining memory and processes them sequentially. After processing the small batches individually, a loss normalization algorithm based on gradient accumulation is used to maintain performance. The purpose of our method is to allow deep learning models to train using larger batch sizes that exceed the memory capacity of a system, without increasing the memory size or using multiple devices (GPUs). Introduction \label{introduction} Recently, much research has used heavy Deep Neural Network (DNN) models that require a large amount of memory to execute. In addition, data sizes are increasing, which makes it difficult to increase the mini-batch size used for training such models. The mini-batch size is an important hyper-parameter that determines the number of dataset items used in one iteration of the training process, and it may affect the overall performance of the DNN, as shown in <|cite_start|> (Reference: MegDet: A Large Mini-Batch Object Detector: The improvements in recent CNN-based object detection works, from R-CNN [11], Fast/Faster R-CNN [10, 31] to recent Mask R-CNN [14] and RetinaNet [24], mainly come from new network, new framework, or novel loss design. But mini-batch size, a key factor in the training, has not been well studied. In this paper, we propose a Large MiniBatch Object Detector (MegDet) to enable the training with much larger mini-batch size than before (e.g. from 16 to 256), so that we can effectively utilize multiple GPUs (up to 128 in our experiments) to significantly shorten the training time. Technically, we suggest a learning rate policy and Cross-GPU Batch Normalization, which together allow us to successfully train a large mini-batch detector in much less time (e.g., from 33 hours to 4 hours), and achieve even better accuracy. The MegDet is the backbone of our submission (mmAP 52.5%) to COCO 2017 Challenge, where we won the 1st place of Detection task.) <|cite_end|>. Table \ref{tab:motivation} compares a large mini-batch and a small mini-batch on the image classification and semantic segmentation problems. With higher-resolution images, using a large mini-batch yields results 21.88\% and 2.01\% higher than a small mini-batch on the image classification and semantic segmentation problems, respectively. The image size may also affect model performance, because the image size may limit the mini-batch size.
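To make this memory constraint explicit (a back-of-the-envelope formulation added for illustration, not a formula from the paper): if $M_{\text{dev}}$ denotes the device memory, $M_{\text{model}}$ the memory occupied by the model (parameters, gradients, and optimizer state), and $M_{\text{item}}$ the per-item footprint of a data item together with its activations, then the largest feasible mini-batch size is roughly
\begin{equation*}
B_{\max} \approx \left\lfloor \frac{M_{\text{dev}} - M_{\text{model}}}{M_{\text{item}}} \right\rfloor,
\end{equation*}
which shows directly why larger (higher-resolution) data items shrink the feasible mini-batch size.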
In addition, higher-resolution images (i.e., larger image sizes) contain more information about objects. Table \ref{tab:motivation} also compares the results obtained with higher-resolution and lower-resolution images on the image classification and semantic segmentation problems. With a large mini-batch, using the higher-resolution image data yields results 21.64\% and 2.01\% higher than the lower-resolution image data for image classification and semantic segmentation, respectively. A similar trend holds when using a small mini-batch size. \begin{table}[t] \caption{ The effect of batch size and image size for an image classification model (ResNet-50) using the Flower-102 dataset and a semantic segmentation model (U-Net) using the Carvana dataset. } \label{tab:motivation} \centering \begin{tabular}{c|c|cc|cc} \toprule \multicolumn{2}{c|}{Model} & \multicolumn{2}{c|}{ResNet-50} & \multicolumn{2}{c}{U-Net} \\ \midrule \multicolumn{2}{c|}{Metric} & \multicolumn{2}{c|}{Max. acc (\%)} & \multicolumn{2}{c}{Max. IoU (\%)} \\ \midrule \multicolumn{2}{c|}{Image size} & 32\(\times\)32 & 224\(\times\)224 & 96\(\times\)96 & 384\(\times\)384 \\ \midrule Batch & 2 & 48.66 & 61.86 & 92.30 & 93.61 \\ Size & 16 & 62.10 & \textbf{83.74} & 93.61 & \textbf{95.62} \\ \bottomrule \end{tabular} \end{table} The total memory size of a mini-batch can only grow up to the memory remaining after the model is loaded. Thus, there is a limit on the size of the mini-batch and on the number of data items it can contain. If the memory requirement of a given mini-batch size is larger than the free remaining memory, the mini-batch cannot be allocated in GPU memory and the model cannot be trained. If the optimal mini-batch size for a dataset of a particular resolution exceeds what the device memory allows, then the mini-batch size must be reduced when increasing the image resolution, or low-resolution image data must be used to keep the mini-batch size large. Therefore, the significant increase in image size makes it more challenging to train DNN models. Many researchers have tried various techniques, such as data parallelism and/or model parallelism, to alleviate the problems that deep learning methods face. Data parallelism (e.g., <|cite_start|> (Reference: More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server: We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes.)
<|cite_end|> <|cite_start|> (Reference: Decoupled Parallel Backpropagation with Convergence Guarantee: Backpropagation algorithm is indispensable for the training of feedforward neural networks. It requires propagating error gradients sequentially from the output layer all the way back to the input layer. The backward locking in backpropagation algorithm constrains us from updating network layers in parallel and fully leveraging the computing resources. Recently, several algorithms have been proposed for breaking the backward locking. However, their performances degrade seriously when networks are deep. In this paper, we propose decoupled parallel backpropagation algorithm for deep learning optimization with convergence guarantee. Firstly, we decouple the backpropagation algorithm using delayed gradients, and show that the backward locking is removed when we split the networks into multiple modules. Then, we utilize decoupled parallel backpropagation in two stochastic methods and prove that our method guarantees convergence to critical points for the non-convex problem. Finally, we perform experiments for training deep convolutional neural networks on benchmark datasets. The experimental results not only confirm our theoretical analysis, but also demonstrate that the proposed method can achieve significant speedup without loss of accuracy.) <|cite_end|> <|cite_start|> (Reference: Beyond Data and Model Parallelism for Deep Neural Networks.: The computational requirements for training deep neural networks (DNNs) have grown to the point that it is now standard practice to parallelize training. Existing deep learning systems commonly use data or model parallelism, but unfortunately, these strategies often result in suboptimal parallelization performance. In this paper, we define a more comprehensive search space of parallelization strategies for DNNs called SOAP, which includes strategies to parallelize a DNN in the Sample, Operation, Attribute, and Parameter dimensions. We also propose FlexFlow, a deep learning framework that uses guided randomized search of the SOAP space to find a fast parallelization strategy for a specific parallel machine. To accelerate this search, FlexFlow introduces a novel execution simulator that can accurately predict a parallelization strategy's performance and is three orders of magnitude faster than prior approaches that have to execute each strategy. We evaluate FlexFlow with six real-world DNN benchmarks on two GPU clusters and show that FlexFlow can increase training throughput by up to 3.8x over state-of-the-art approaches, even when including its search time, and also improves scalability.) <|cite_end|>) is usually used when the mini-batch size is too large to fit into a single device's memory. A mini-batch is partitioned for computation and scattered across multiple devices, each of which holds a full copy of the learning model. When all data within a mini-batch has been processed, the weights are updated across devices through communication. Model parallelism partitions the learning model into cells and distributes the cells to multiple devices. It is usually used when the model is too large to fit into a device's memory (e.g., <|cite_start|> (Reference: Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis: Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design.
In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.) <|cite_end|> <|cite_start|> (Reference: Large Scale Distributed Deep Networks: Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.) <|cite_end|> <|cite_start|> (Reference: On Model Parallelization and Scheduling Strategies for Distributed Machine Learning: Distributed machine learning has typically been approached from a data parallel perspective, where big data are partitioned to multiple workers and an algorithm is executed concurrently over different data subsets under various synchronization schemes to ensure speed-up and/or correctness. A sibling problem that has received relatively less attention is how to ensure efficient and correct model parallel execution of ML algorithms, where parameters of an ML program are partitioned to different workers and undergone concurrent iterative updates. We argue that model and data parallelisms impose rather different challenges for system design, algorithmic adjustment, and theoretical analysis. In this paper, we develop a system for model-parallelism, STRADS, that provides a programming abstraction for scheduling parameter updates by discovering and leveraging changing structural properties of ML programs. STRADS enables a flexible tradeoff between scheduling efficiency and fidelity to intrinsic dependencies within the models, and improves memory efficiency of distributed ML. 
We demonstrate the efficacy of model-parallel algorithms implemented on STRADS versus popular implementations for topic modeling, matrix factorization, and Lasso.) <|cite_end|>). Other methods that use pipeline parallelism have also been proposed (e.g., <|cite_start|> (Reference: GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism: Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models.) <|cite_end|> <|cite_start|> (Reference: PipeDream: Generalized Pipeline Parallelism for DNN Training: DNN training is extremely time-consuming, necessitating efficient multi-accelerator parallelization. Current approaches to parallelizing training primarily use intra-batch parallelization, where a single iteration of training is split over the available workers, but suffer from diminishing returns at higher worker counts. We present PipeDream, a system that adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. Unlike traditional pipelining, DNN training is bi-directional, where a forward pass through the computation graph is followed by a backward pass that uses state and intermediate data computed during the forward pass. Naïve pipelining can thus result in mismatches in state versions used in the forward and backward passes, or excessive pipeline flushes and lower hardware efficiency. To address these challenges, PipeDream versions model parameters for numerically correct gradient computations, and schedules forward and backward passes of different minibatches concurrently on different workers with minimal pipeline stalls. PipeDream also automatically partitions DNN layers among workers to balance work and minimize communication. Extensive experimentation with a range of DNN tasks, models, and hardware configurations shows that PipeDream trains models to high accuracy up to 5.3X faster than commonly used intra-batch parallelism techniques.) <|cite_end|>).
These methods employ Data-Parallel Synchronous Stochastic Gradient Descent (SGD), which distributes mini-batches across many machines in units of micro-batches and executes them in a pipelined fashion. Although all of these works improve the training and performance of the models, they still suffer from the mini-batch size being limited by the device memory size. This paper proposes Micro-Batch Processing (MBP), a method that fetches a large batch of data using a stream-based pipeline scheme so that models can be trained even if the batch cannot fit into memory, without increasing the device memory or using multiple devices (GPUs). This allows researchers to experiment with large mini-batch sizes on a single device. The idea is to split a mini-batch into \(n\) micro-batches (<|cite_start|> (Reference: GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism: Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models.) <|cite_end|> <|cite_start|> (Reference: PipeDream: Generalized Pipeline Parallelism for DNN Training: DNN training is extremely time-consuming, necessitating efficient multi-accelerator parallelization. Current approaches to parallelizing training primarily use intra-batch parallelization, where a single iteration of training is split over the available workers, but suffer from diminishing returns at higher worker counts. We present PipeDream, a system that adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. Unlike traditional pipelining, DNN training is bi-directional, where a forward pass through the computation graph is followed by a backward pass that uses state and intermediate data computed during the forward pass. Naïve pipelining can thus result in mismatches in state versions used in the forward and backward passes, or excessive pipeline flushes and lower hardware efficiency.
To address these challenges, PipeDream versions model parameters for numerically correct gradient computations, and schedules forward and backward passes of different minibatches concurrently on different workers with minimal pipeline stalls. PipeDream also automatically partitions DNN layers among workers to balance work and minimize communication. Extensive experimentation with a range of DNN tasks, models, and hardware configurations shows that PipeDream trains models to high accuracy up to 5.3X faster than commonly used intra-batch parallelism techniques.) <|cite_end|>) and stream them sequentially to a GPU. Results show that MBP can increase the training batch size to the full size of the training set regardless of the model type, dataset, and data size. To maintain performance, MBP computes the gradient for a large batch using loss normalization based on micro-batch gradient accumulation, i.e., by accumulating the gradients of multiple micro-batches. This paper is organized as follows. Related work is described in Section \ref{section:related_work}. Section \ref{section:MBS} introduces MBP, describes the terminology used throughout this paper, and presents the loss normalization algorithm used to maintain performance. Section \ref{section:evaluation} presents the results. Finally, Section \ref{section:conclusion} concludes the paper. <|paper_end|>
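The loss-normalization idea described above can be sketched in a few lines of PyTorch-style code (an editorial sketch under assumed names, not the authors' released implementation): the mini-batch is split into micro-batches that are moved to the GPU and processed one at a time, each micro-batch loss is scaled by its share of the full mini-batch before the backward pass, and the optimizer steps once per mini-batch, so that (for a criterion that averages over its batch dimension) the accumulated gradient matches what a single large batch would have produced.
\begin{verbatim}
import torch

def train_step_micro_batched(model, optimizer, criterion,
                             inputs, targets, micro_batch_size, device):
    # `inputs`/`targets` stay in host memory; micro-batches are
    # transferred to the GPU and processed sequentially.
    optimizer.zero_grad()
    batch_size = inputs.size(0)
    running_loss = 0.0
    for start in range(0, batch_size, micro_batch_size):
        x = inputs[start:start + micro_batch_size].to(device)
        y = targets[start:start + micro_batch_size].to(device)
        loss = criterion(model(x), y)     # mean loss over this micro-batch
        weight = x.size(0) / batch_size   # loss normalization factor
        (loss * weight).backward()        # gradients accumulate in .grad
        running_loss += loss.item() * x.size(0)
    optimizer.step()                      # one update per full mini-batch
    return running_loss / batch_size
\end{verbatim}
Only one micro-batch's activations are alive at a time, which is what frees memory for larger logical batch sizes; the overlap of host-to-device copies with compute (the streaming/pipelining aspect) is omitted here for brevity.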
[ "<|reference_start|> More Effective Distributed ML via a Stale Synchronous Parallel Parameter\nServer: We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. <|reference_end|>", "<|reference_start|> Decoupled Parallel Backpropagation with Convergence Guarantee: Backpropagation algorithm is indispensable for the training of feedforward neural networks. It requires propagating error gradients sequentially from the output layer all the way back to the input layer. The backward locking in backpropagation algorithm constrains us from updating network layers in parallel and fully leveraging the computing resources. Recently, several algorithms have been proposed for breaking the backward locking. However, their performances degrade seriously when networks are deep. In this paper, we propose decoupled parallel backpropagation algorithm for deep learning optimization with convergence guarantee. Firstly, we decouple the backpropagation algorithm using delayed gradients, and show that the backward locking is removed when we split the networks into multiple modules. Then, we utilize decoupled parallel backpropagation in two stochastic methods and prove that our method guarantees convergence to critical points for the non-convex problem. Finally, we perform experiments for training deep convolutional neural networks on benchmark datasets. The experimental results not only confirm our theoretical analysis, but also demonstrate that the proposed method can achieve significant speedup without loss of accuracy. <|reference_end|>", "<|reference_start|> GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism: Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. 
We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models. <|reference_end|>", "<|reference_start|> PipeDream: Generalized Pipeline Parallelism for DNN Training: DNN training is extremely time-consuming, necessitating efficient multi-accelerator parallelization. Current approaches to parallelizing training primarily use intra-batch parallelization, where a single iteration of training is split over the available workers, but suffer from diminishing returns at higher worker counts. We present PipeDream, a system that adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. Unlike traditional pipelining, DNN training is bi-directional, where a forward pass through the computation graph is followed by a backward pass that uses state and intermediate data computed during the forward pass. Naïve pipelining can thus result in mismatches in state versions used in the forward and backward passes, or excessive pipeline flushes and lower hardware efficiency. To address these challenges, PipeDream versions model parameters for numerically correct gradient computations, and schedules forward and backward passes of different minibatches concurrently on different workers with minimal pipeline stalls. PipeDream also automatically partitions DNN layers among workers to balance work and minimize communication. Extensive experimentation with a range of DNN tasks, models, and hardware configurations shows that PipeDream trains models to high accuracy up to 5.3X faster than commonly used intra-batch parallelism techniques. <|reference_end|>" ]
[ 1, 2, 7, 8 ]
{"<|cite_1|>": "arxiv-140634", "<|multi_cite_2_1|>": "ss-1093362", "<|multi_cite_2_2|>": "arxiv-156594", "<|multi_cite_2_3|>": "ss-1320267", "<|multi_cite_3_1|>": "arxiv-149848", "<|multi_cite_3_2|>": "ss-1017681", "<|multi_cite_3_3|>": "ss-1716600", "<|multi_cite_4_1|>": "arxiv-180728", "<|multi_cite_4_2|>": "ss-1279738", "<|multi_cite_5_1|>": "arxiv-180728", "<|multi_cite_5_2|>": "ss-1279738"}
2106.09748
<|paper_start|> Title: DeepLab2: A TensorFlow Library for Deep Labeling Abstract: DeepLab2: A TensorFlow Library for Deep Labeling: DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a state-of-the-art and easy-to-use TensorFlow codebase for general dense pixel prediction problems in computer vision. DeepLab2 includes all our recently developed DeepLab model variants with pretrained checkpoints as well as model training and evaluation code, allowing the community to reproduce and further improve upon the state-of-the-art systems. To showcase the effectiveness of DeepLab2, our Panoptic-DeepLab employing Axial-SWideRNet as network backbone achieves 68.0% PQ or 83.5% mIoU on the Cityscapes validation set, with only single-scale inference and ImageNet-1K pretrained checkpoints. We hope that publicly sharing our library could facilitate future research on dense pixel labeling tasks and envision new applications of this technology. Code is made publicly available at \url{https://github.com/google-research/deeplab2}. Introduction Deep labeling refers to solving certain computer vision problems by assigning a predicted value for each pixel (\ie, label each pixel) in an image or video with a deep neural network <|cite_start|> (Reference: gradient-based learning applied to document recognition: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.) <|cite_end|> <|cite_start|> (Reference: Fully Convolutional Networks for Semantic Segmentation: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models.
We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.) <|cite_end|> <|cite_start|> (Reference: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.) <|cite_end|>. Typical dense prediction problems include, but are not limited to, semantic segmentation <|cite_start|> (Reference: {Multiscale Conditional Random Fields for Image Labeling: We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.) <|cite_end|> <|cite_start|> (Reference: Associative Hierarchical CRFs for Object Class Image Segmentation: Most methods for object class segmentation are formulated as a labelling problem over a single choice of quantisation of an image space - pixels, segments or group of segments. It is well known that each quantisation has its fair share of pros and cons; and the existence of a common optimal quantisation level suitable for all object categories is highly unlikely.
Motivated by this observation, we propose a hierarchical random field model, that allows integration of features computed at different levels of the quantisation hierarchy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalises much of the previous work based on pixels or segments. We evaluate its efficiency on some of the most challenging data-sets for object class segmentation, and show it obtains state-of-the-art results.) <|cite_end|> <|cite_start|> (Reference: International Journal of Computer Vision manuscript No. (will be inserted by the editor) The PASCAL Visual Object Classes (VOC) Challenge: ) <|cite_end|>, instance segmentation <|cite_start|> (Reference: Simultaneous Detection and Segmentation: We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.) <|cite_end|> <|cite_start|> (Reference: Microsoft COCO: Common Objects in Context: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.) <|cite_end|>, panoptic segmentation <|cite_start|> (Reference: Panoptic Segmentation: We propose and study a task we name panoptic segmentation (PS). Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance). The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to lack of appropriate metrics or associated recognition challenges. 
To address this, we propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. The aim of our work is to revive the interest of the community in a more unified view of image segmentation.) <|cite_end|> <|cite_start|> (Reference: The mapillary vistas dataset for semantic understanding of street scenes.: The Mapillary Vistas Dataset is a novel, large-scale street-level image dataset containing 25000 high-resolution images annotated into 66 object categories with additional, instance-specific labels for 37 classes. Annotation is performed in a dense and fine-grained style by using polygons for delineating individual objects. Our dataset is 5× larger than the total amount of fine annotations for Cityscapes and contains images from all around the world, captured at various conditions regarding weather, season and daytime. Images come from different imaging devices (mobile phones, tablets, action cameras, professional capturing rigs) and differently experienced photographers. In such a way, our dataset has been designed and compiled to cover diversity, richness of detail and geographic extent. As default benchmark tasks, we define semantic image segmentation and instance-specific image segmentation, aiming to significantly further the development of state-of-the-art methods for visual road-scene understanding.) <|cite_end|>, depth estimation <|cite_start|> (Reference: Indoor Segmentation and Support Inference from RGBD Images: ) <|cite_end|> <|cite_start|> (Reference: Vision meets robotics: The KITTI dataset: We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10–100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.) <|cite_end|>, video panoptic segmentation <|cite_start|> (Reference: Video Panoptic Segmentation: Panoptic segmentation has become a new standard of visual recognition task by unifying previous semantic segmentation and instance segmentation tasks in concert. In this paper, we propose and explore a new video extension of this task, called video panoptic segmentation. The task requires generating consistent panoptic segmentation as well as an association of instance ids across video frames. To invigorate research on this new task, we present two types of video panoptic datasets. The first is a re-organization of the synthetic VIPER dataset into the video panoptic format to exploit its large-scale pixel annotations. The second is a temporal extension on the Cityscapes val. set, by providing new video panoptic annotations (Cityscapes-VPS). 
Moreover, we propose a novel video panoptic segmentation network (VPSNet) which jointly predicts object classes, bounding boxes, masks, instance id tracking, and semantic segmentation in video frames. To provide appropriate metrics for this task, we propose a video panoptic quality (VPQ) metric and evaluate our method and several other baselines. Experimental results demonstrate the effectiveness of the presented two datasets. We achieve state-of-the-art results in image PQ on Cityscapes and also in VPQ on Cityscapes-VPS and VIPER datasets. The datasets and code are made publicly available.) <|cite_end|> <|cite_start|> (Reference: STEP: Segmenting and Tracking Every Pixel: The task of assigning semantic classes and track identities to every pixel in a video is called video panoptic segmentation. Our work is the first that targets this task in a real-world setting requiring dense interpretation in both spatial and temporal domains. As the ground-truth for this task is difficult and expensive to obtain, existing datasets are either constructed synthetically or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark encompassing two datasets, KITTI-STEP, and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test-bed for studying long-term pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric Segmentation and Tracking Quality (STQ) that fairly balances semantic and tracking aspects of this task and is more appropriate for evaluating sequences of arbitrary length. Finally, we provide several baselines to evaluate the status of existing methods on this new challenging dataset. We have made our datasets, metric, benchmark servers, and baselines publicly available, and hope this will inspire future research.) <|cite_end|>, and depth-aware video panoptic segmentation <|cite_start|> (Reference: ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation: In this paper, we present ViP-DeepLab, a unified model attempting to tackle the long-standing and challenging inverse projection problem in vision, which we model as restoring the point clouds from perspective image sequences while providing each point with instance-level semantic interpretations. Solving this problem requires the vision models to predict the spatial location, semantic class, and temporally consistent instance label for each 3D point. ViP-DeepLab approaches it by jointly performing monocular depth estimation and video panoptic segmentation. We name this joint task as Depth-aware Video Panoptic Segmentation, and propose a new evaluation metric along with two derived datasets for it, which will be made available to the public. On the individual sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The datasets and the evaluation codes are made publicly available.) <|cite_end|>. 
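For reference, the panoptic quality (PQ) metric cited above has a compact closed form. Restating the standard definition from the cited panoptic segmentation work (with $TP$, $FP$, and $FN$ denoting matched, spurious, and missed segments, and matches requiring $\mathrm{IoU} > 0.5$):
\[
\mathrm{PQ} = \frac{\sum_{(p,g) \in TP} \mathrm{IoU}(p,g)}{|TP| + \frac{1}{2}|FP| + \frac{1}{2}|FN|}
= \underbrace{\frac{\sum_{(p,g) \in TP} \mathrm{IoU}(p,g)}{|TP|}}_{\text{segmentation quality (SQ)}} \times \underbrace{\frac{|TP|}{|TP| + \frac{1}{2}|FP| + \frac{1}{2}|FN|}}_{\text{recognition quality (RQ)}}
\]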
Going beyond our previous open source library\footnote{\url{https://github.com/tensorflow/models/tree/master/research/deeplab}} in 2018 (which could only tackle image semantic segmentation with the first few DeepLab model variants <|cite_start|> (Reference: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.) <|cite_end|> <|cite_start|> (Reference: Assisted Landscape Design Model Based on Deep Lab v3+ and Computer Vision: ) <|cite_end|> <|cite_start|> (Reference: Rethinking Atrous Convolution for Semantic Image Segmentation: In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed `DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.) <|cite_end|> <|cite_start|> (Reference: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation: Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. 
Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0\% and 82.1\% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at \url{https://github.com/tensorflow/models/tree/master/research/deeplab}.) <|cite_end|>), we introduce DeepLab2, a modern TensorFlow library <|cite_start|> (Reference: TensorFlow: A system for large-scale machine learning: TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with particularly strong support for training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model in contrast to existing systems, and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.) <|cite_end|> for deep labeling, aiming to provide a unified and easy-to-use TensorFlow codebase for general dense pixel labeling tasks. \textit{Re-implemented} in TensorFlow2, this release includes \textit{all} our recently developed DeepLab model variants <|cite_start|> (Reference: Panoptic-DeepLab: We present Panoptic-DeepLab, a bottom-up and single-shot approach for panoptic segmentation. Our Panoptic-DeepLab is conceptually simple and delivers state-of-the-art results. In particular, we adopt the dual-ASPP and dual-decoder structures specific to semantic, and instance segmentation, respectively. The semantic segmentation branch is the same as the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression. Our single Panoptic-DeepLab sets the new state-of-art at all three Cityscapes benchmarks, reaching 84.2% mIoU, 39.0% AP, and 65.5% PQ on test set, and advances results on the other challenging Mapillary Vistas.) <|cite_end|> <|cite_start|> (Reference: Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation: Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. 
Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes.) <|cite_end|> <|cite_start|> (Reference: MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers: We present MaX-DeepLab, the first end-to-end model for panoptic segmentation. Our approach simplifies the current pipeline that depends heavily on surrogate sub-tasks and hand-designed components, such as box detection, non-maximum suppression, thing-stuff merging, etc. Although these sub-tasks are tackled by area experts, they fail to comprehensively solve the target task. By contrast, our MaX-DeepLab directly predicts class-labeled masks with a mask transformer, and is trained with a panoptic quality inspired loss via bipartite matching. Our mask transformer employs a dual-path architecture that introduces a global memory path in addition to a CNN path, allowing direct communication with any CNN layers. As a result, MaX-DeepLab shows a significant 7.1% PQ gain in the box-free regime on the challenging COCO dataset, closing the gap between box-based and box-free methods for the first time. A small variant of MaX-DeepLab improves 3.0% PQ over DETR with similar parameters and M-Adds. Furthermore, MaX-DeepLab, without test time augmentation, achieves new state-of-the-art 51.3% PQ on COCO test-dev set. Code is available at https://github.com/google-research/deeplab2.) <|cite_end|> <|cite_start|> (Reference: STEP: Segmenting and Tracking Every Pixel: The task of assigning semantic classes and track identities to every pixel in a video is called video panoptic segmentation. Our work is the first that targets this task in a real-world setting requiring dense interpretation in both spatial and temporal domains. As the ground-truth for this task is difficult and expensive to obtain, existing datasets are either constructed synthetically or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark encompassing two datasets, KITTI-STEP, and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test-bed for studying long-term pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric Segmentation and Tracking Quality (STQ) that fairly balances semantic and tracking aspects of this task and is more appropriate for evaluating sequences of arbitrary length. Finally, we provide several baselines to evaluate the status of existing methods on this new challenging dataset. 
We have made our datasets, metric, benchmark servers, and baselines publicly available, and hope this will inspire future research.) <|cite_end|> <|cite_start|> (Reference: ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation: In this paper, we present ViP-DeepLab, a unified model attempting to tackle the long-standing and challenging inverse projection problem in vision, which we model as restoring the point clouds from perspective image sequences while providing each point with instance-level semantic interpretations. Solving this problem requires the vision models to predict the spatial location, semantic class, and temporally consistent instance label for each 3D point. ViP-DeepLab approaches it by jointly performing monocular depth estimation and video panoptic segmentation. We name this joint task as Depth-aware Video Panoptic Segmentation, and propose a new evaluation metric along with two derived datasets for it, which will be made available to the public. On the individual sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The datasets and the evaluation codes are made publicly available.) <|cite_end|>, model training and evaluation code, and several pretrained checkpoints, allowing the community to reproduce and further improve upon the state-of-the-art systems. We hope that the open-source DeepLab2 will facilitate future research on dense pixel labeling tasks, and anticipate novel breakthroughs and new applications that adopt this technology. In the following sections, we detail a few popular dense prediction tasks as well as the provided state-of-the-art models in the DeepLab2 library. <|paper_end|>
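As an aside on the "deep labeling" notion defined in this paper's introduction, the following is a minimal per-pixel prediction sketch (our own illustration with an assumed class count and toy layer sizes; this is not the DeepLab2 API):

```python
import tensorflow as tf

NUM_CLASSES = 19  # e.g., Cityscapes semantic classes (an assumption for illustration)

# A toy fully convolutional network: every output location carries class logits,
# so the model assigns a predicted value to each pixel ("deep labeling").
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(NUM_CLASSES, 1, padding="same"),  # per-pixel logits
])

images = tf.random.uniform([2, 128, 128, 3])  # [batch, height, width, RGB]
logits = model(images)                        # [2, 128, 128, NUM_CLASSES]
labels = tf.argmax(logits, axis=-1)           # one class label per pixel
print(labels.shape)                           # (2, 128, 128)
```

A real system such as DeepLab2 builds encoder backbones, decoders, and instance or panoptic heads on top of this basic per-pixel-logits pattern.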
[ "<|reference_start|> Associative Hierarchical CRFs for Object Class Image Segmentation: Most methods for object class segmentation are formulated as a labelling problem over a single choice of quantisation of an image space - pixels, segments or group of segments. It is well known that each quantisation has its fair share of pros and cons; and the existence of a common optimal quantisation level suitable for all object categories is highly unlikely. Motivated by this observation, we propose a hierarchical random field model, that allows integration of features computed at different levels of the quantisation hierarchy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalises much of the previous work based on pixels or segments. We evaluate its efficiency on some of the most challenging data-sets for object class segmentation, and show it obtains state-of-the-art results. <|reference_end|>", "<|reference_start|> International Journal of Computer Vision manuscript No. (will be inserted by the editor) The PASCAL Visual Object Classes (VOC) Challenge: <|reference_end|>", "<|reference_start|> Indoor Segmentation and Support Inference from RGBD Images: <|reference_end|>", "<|reference_start|> Rethinking Atrous Convolution for Semantic Image Segmentation: In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed `DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark. <|reference_end|>" ]
[ 4, 5, 10, 17 ]
{"<|multi_cite_1_1|>": "ss-1056505", "<|multi_cite_1_2|>": "arxiv-68791", "<|multi_cite_1_3|>": "arxiv-70691", "<|multi_cite_2_1|>": "ss-691568", "<|multi_cite_2_2|>": "ss-792895", "<|multi_cite_2_3|>": "ss-757917", "<|multi_cite_3_1|>": "arxiv-63209", "<|multi_cite_3_2|>": "arxiv-60292", "<|multi_cite_4_1|>": "arxiv-144485", "<|multi_cite_4_2|>": "ss-683240", "<|multi_cite_5_1|>": "ss-783679", "<|multi_cite_5_2|>": "ss-682917", "<|multi_cite_6_1|>": "arxiv-273197", "<|multi_cite_6_2|>": "arxiv-323167", "<|cite_7|>": "arxiv-308954", "<|multi_cite_8_1|>": "arxiv-70691", "<|multi_cite_8_2|>": "ss-1519957", "<|multi_cite_8_3|>": "arxiv-127025", "<|multi_cite_8_4|>": "arxiv-147571", "<|cite_9|>": "arxiv-98825", "<|multi_cite_10_1|>": "arxiv-228184", "<|multi_cite_10_2|>": "arxiv-254187", "<|multi_cite_10_3|>": "arxiv-307051", "<|multi_cite_10_4|>": "arxiv-323167", "<|multi_cite_10_5|>": "arxiv-308954"}
1902.08588
<|paper_start|> Title: Towards Neural Mixture Recommender for Long Range Dependent User Sequences Abstract: Towards Neural Mixture Recommender for Long Range Dependent User Sequences: Understanding temporal dynamics has proved to be highly valuable for accurate recommendation. Sequential recommenders have been successful in modeling the dynamics of users and items over time. However, while different model architectures excel at capturing various temporal ranges or dynamics, distinct application contexts require adapting to diverse behaviors. In this paper we examine how to build a model that can make use of different temporal ranges and dynamics depending on the request context. We begin with the analysis of an anonymized YouTube dataset comprising millions of user sequences. We quantify the degree of long-range dependence in these sequences and demonstrate that both short-term and long-term dependent behavioral patterns co-exist. We then propose a neural Multi-temporal-range Mixture Model (M3) as a tailored solution to deal with both short-term and long-term dependencies. Our approach employs a mixture of models, each with a different temporal range. These models are combined by a learned gating mechanism capable of exerting different model combinations given different contextual information. In empirical evaluations on a public dataset and our own anonymized YouTube dataset, M3 consistently outperforms state-of-the-art sequential recommendation methods. Introduction \label{sec:intro} Across the web and mobile applications, recommender systems are relied upon to surface the right items to users at the right time. Some of their success can be attributed to advances in modeling as well as the ingenuity of applied researchers in adopting and inventing new techniques to solve this important problem <|cite_start|> (Reference: {Item-based Collaborative Filtering Recommendation Algorithms: Recommender systems apply knowledge discovery techniques to the problem of making personalized recommendations for information, products or services during a live interaction. These systems, especially the k-nearest neighbor collaborative filtering based ones, are achieving widespread success on the Web. The tremendous growth in the amount of available information and the number of visitors to Web sites in recent years poses some key challenges for recommender systems. These are: producing high quality recommendations, performing many recommendations per second for millions of users and items and achieving high coverage in the face of data sparsity. In traditional collaborative filtering systems the amount of work increases with the number of participants in the system. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very large-scale problems. To address these issues we have explored item-based collaborative filtering techniques. Item-based techniques first analyze the user-item matrix to identify relationships between different items, and then use these relationships to indirectly compute recommendations for users. In this paper we analyze different item-based recommendation generation algorithms. We look into different techniques for computing item-item similarities (e.g., item-item correlation vs. cosine similarities between item vectors) and different techniques for obtaining recommendations from them (e.g., weighted sum vs. regression model). Finally, we experimentally evaluate our results and compare them to the basic k-nearest neighbor approach.
Our experiments suggest that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.) <|cite_end|> <|cite_start|> (Reference: MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER SYSTEMS: As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.) <|cite_end|> <|cite_start|> (Reference: BPR: Bayesian Personalized Ranking from Implicit Feedback: Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.) <|cite_end|> <|cite_start|> (Reference: Neural Collaborative Filtering: In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.) <|cite_end|>.
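Among the cited modeling advances, BPR optimizes a pairwise ranking objective over implicit feedback rather than a pointwise reconstruction error. The following is a minimal sketch of that objective (our own illustration with assumed factor matrices and toy sizes, not the cited paper's code):

```python
import numpy as np

def bpr_loss(U, V, triples, reg=0.01):
    """BPR objective over (user, positive item, negative item) triples.

    U, V: latent user/item factor matrices. Each triple says user u
    interacted with item i but not item j, so the model should score
    i above j.
    """
    loss = 0.0
    for u, i, j in triples:
        x_uij = U[u] @ (V[i] - V[j])      # score margin of i over j
        loss += np.log1p(np.exp(-x_uij))  # -log sigmoid(x_uij), numerically stable form
    return loss + reg * (np.sum(U ** 2) + np.sum(V ** 2))

rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((100, 8))
V = 0.1 * rng.standard_normal((50, 8))
print(bpr_loss(U, V, [(0, 3, 7), (1, 4, 2)]))
```

In practice the sum is optimized by stochastic gradient descent with bootstrap-sampled triples, as the cited paper describes.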
Fundamentally, recommenders match users in a particular context with the best personalized items that they will engage with <|cite_start|> (Reference: Amazon.com Recommendations: Item-to-Item Collaborative Filtering: Recommendation algorithms are best known for their use on e-commerce Web sites, where they use input about a customer's interests to generate a list of recommended items. Many applications use only the items that customers purchase and explicitly rate to represent their interests, but they can also use other attributes, including items viewed, demographic data, subject interests, and favorite artists. At Amazon.com, we use recommendation algorithms to personalize the online store for each customer. The store radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother. The click-through and conversion rates — two important measures of Web-based and email advertising effectiveness — vastly exceed those of untargeted content such as banner advertisements and top-seller lists. E-commerce recommendation algorithms often operate in a challenging environment. For example:) <|cite_end|>. In order to do this effectively, recommenders need to understand the users, typically based on their previous actions, and to understand items, most often based on the users who previously interacted with them. This presents a fundamental challenge: users' preferences and the perception of items are continuously changing over time, and the recommender system needs to understand these dynamics. A significant amount of research has recognized forms of this problem. Sequence information has been generally shown to improve recommender performance <|cite_start|> (Reference: Fusing Similarity Models with Markov Chains for Sparse Sequential Recommendation: Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long- and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets. In this paper, we propose to fuse such methods with Markov Chains to make personalized sequential recommendations. We evaluate our method, Fossil, on a variety of large, real-world datasets. We show quantitatively that Fossil outperforms alternative algorithms, especially on sparse datasets, and qualitatively that it captures personalized dynamics and is able to make meaningful recommendations.) <|cite_end|> <|cite_start|> (Reference: {Factorizing Personalized Markov Chains for Next-basket Recommendation: Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences.
On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization.) <|cite_end|>. <|cite_start|> (Reference: {Collaborative filtering with temporal dynamics: Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.) <|cite_end|> identified multiple user and item dynamics in the Netflix Prize competition, and incorporated these dynamics as biases in a collaborative filtering model. <|cite_start|> (Reference: Recurrent recommender networks: Recommender systems traditionally assume that user profiles and movie attributes are static. Temporal dynamics are purely reactive, that is, they are inferred after they are observed, e.g. after a user's taste has changed or based on hand-engineered temporal bias corrections for movies. We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization. 
On multiple real-world datasets, our model offers excellent prediction accuracy and it is very compact, since we need not learn latent state but rather just the state transition function.) <|cite_end|> <|cite_start|> (Reference: {Latent cross: Making use of context in recurrent recommender systems: The success of recommender systems often depends on their ability to understand and make use of the context of the recommendation request. Significant research has focused on how time, location, interfaces, and a plethora of other contextual features affect recommendations. However, in using deep neural networks for recommender systems, researchers often ignore these contexts or incorporate them as ordinary features in the model. In this paper, we study how to effectively treat contextual data in neural recommender systems. We begin with an empirical analysis of the conventional approach to context as features in feed-forward recommenders and demonstrate that this approach is inefficient in capturing common feature crosses. We apply this insight to design a state-of-the-art RNN recommender system. We first describe our RNN-based recommender system in use at YouTube. Next, we offer "Latent Cross," an easy-to-use technique to incorporate contextual data in the RNN by embedding the context feature first and then performing an element-wise product of the context embedding with model's hidden states. We demonstrate the improvement in performance by using this Latent Cross technique in multiple experimental settings.) <|cite_end|> demonstrated that Recurrent Neural Networks (RNNs) could learn many of these patterns, and likewise <|cite_start|> (Reference: Session-based Recommendations with Recurrent Neural Networks: We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.) <|cite_end|> demonstrated that RNNs can learn patterns in individual sessions. Despite these successes, RNNs are known to have difficulties learning long-range dependent temporal patterns <|cite_start|> (Reference: Factorized Recurrent Neural Architectures for Longer Range Dependence: The ability to capture Long Range Dependence (LRD) in a stochastic process is of prime importance in the context of predictive models. A sequential model with a longer-term memory is better able contextualize recent observations. In this article, we apply the theory of LRD stochastic processes to modern recurrent architectures, such as LSTMs and GRUs, and prove they do not provide LRD under assumptions sufficient for gradients to vanish. 
Motivated by an information-theoretic analysis, we provide a modified recurrent neural architecture that mitigates the issue of faulty memory through redundancy while keeping the compute time constant. Experimental results on a synthetic copy task, the Youtube-8m video classification task and a recommender system show that we enable better memorization and longer-term memory.) <|cite_end|>. We observe and study an open challenge for such sequential recommender systems: \emph{while different applications and contexts require different temporal ranges and patterns, model architectures are typically designed to capture a particular temporal dynamic.} For example, when a user comes to the Amazon home page they may be looking for something new to buy or watch, but on an item-specific page they may be looking for other items that are closely related to recently browsed items. \emph{How can we design a model that works, simultaneously, across all of these contexts and temporal ranges?} \textbf{Contributions:} We address the issue of providing a single model adapted to the diversity of contexts and scales of temporal dependencies in sequential recommendations through data analysis and the design of a Multi-temporal-range Mixture Model, or \emph{M3} for short. We make the following contributions to this problem: \begin{itemize} \item \textbf{Data-driven design:} We demonstrate that in real-world recommendation tasks there are significant long-range temporal dependencies in user sequence data, and that previous approaches are limited in their ability to capture those dynamics. M3's design is informed by this quantitative analysis. \item \textbf{Multi-range Model:} We offer a single model, M3, which is a mixture model consisting of three sub-models (each with a distinct manually designed architecture) that specialize in capturing different ranges of temporal dependencies. M3 can learn how to dynamically choose to focus on different temporal dynamics and ranges depending on the application context. \item \textbf{Empirical Benefits and Interpretability:} We show on both public academic and private data that our approach provides significantly better recommendations. Further, using its interpretable design, we analyze how M3 dynamically switches between patterns present at different temporal ranges for different contexts, thus showing the value in enabling context-specific multi-range modeling. Our private dataset consists of anonymized user sequences from YouTube. To the best of our knowledge, this paper is the first to focus on sequential patterns in such a setting. \end{itemize} Related Work \label{sec:related} Before we describe our sequential recommendation problem and provide the quantitative insights orienting the design of a novel sequential neural model based on a mixture of models, we briefly introduce the reader to some key pre-existing related work. Matrix factorization <|cite_start|> (Reference: MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER SYSTEMS: As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.) <|cite_end|> is among the most popular techniques used in classic recommender research, in which a similarity score for each user-item pair is learned by building latent user and item representations to recover historical user-item interactions.
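To make the preceding description concrete, here is a minimal matrix-factorization sketch (our own illustration with assumed dimensions and a squared-error objective, not any cited paper's implementation): the score for a user-item pair is the dot product of their latent representations, fit to observed interactions by SGD.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, lr, reg = 100, 50, 8, 0.05, 0.01
U = 0.1 * rng.standard_normal((n_users, dim))  # latent user representations
V = 0.1 * rng.standard_normal((n_items, dim))  # latent item representations

def sgd_step(u, i, r):
    """One SGD update on the squared error for an observed interaction r."""
    err = r - U[u] @ V[i]       # the similarity score is the dot product
    u_old = U[u].copy()         # keep pre-update factors for the item update
    U[u] += lr * (err * V[i] - reg * U[u])
    V[i] += lr * (err * u_old - reg * V[i])

for u, i, r in [(0, 3, 1.0), (0, 7, 0.0), (1, 3, 1.0)]:
    sgd_step(u, i, r)

scores = U[0] @ V.T             # predicted relatedness of user 0 to every item
print(np.argsort(-scores)[:5])  # top-5 recommendations for user 0
```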
The predicted similarity score is then used to indicate the \emph{relatedness} and find the most relevant items to recommend to a user. Follow-up work introducing auxiliary sources of information beyond user-item interactions has proven successful <|cite_start|> (Reference: Deep neural Networks for Youtube Recommendations: YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.) <|cite_end|>, especially for cold-start problems. <|cite_start|> (Reference: Content-Based Recommendation Systems: ) <|cite_end|> use item content (\emph{e.g.}, product images, a video's visual/audio content) to provide a better item representation. \textbf{Neural Recommender Systems.} Deep neural networks have achieved tremendous success in the fields of Computer Vision <|cite_start|> (Reference: {Large-Scale Video Classification with Convolutional Neural Networks: Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).) <|cite_end|> <|cite_start|> (Reference: ImageNet classification with deep convolutional neural networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective.
We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|> and Natural Language Processing <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|> <|cite_start|> (Reference: Recurrent neural network based language model: A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition) <|cite_end|>. In recommender research, we have witnessed growing interest in using deep neural networks to model complex contextual interactions between users and items, surpassing classic factorization-based methods <|cite_start|> (Reference: MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER SYSTEMS: As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.) <|cite_end|> <|cite_start|> (Reference: {Factorization Machines: In this paper, we introduce Factorization Machines (FM) which are a new model class that combines the advantages of Support Vector Machines (SVM) with factorization models. Like SVMs, FMs are a general predictor working with any real valued feature vector. In contrast to SVMs, FMs model all interactions between variables using factorized parameters.
Thus they are able to estimate interactions even in problems with huge sparsity (like recommender systems) where SVMs fail. We show that the model equation of FMs can be calculated in linear time and thus FMs can be optimized directly. So unlike nonlinear SVMs, a transformation in the dual form is not necessary and the model parameters can be estimated directly without the need of any support vector in the solution. We show the relationship to SVMs and the advantages of FMs for parameter estimation in sparse settings. On the other hand there are many different factorization models like matrix factorization, parallel factor analysis or specialized models like SVD++, PITF or FPMC. The drawback of these models is that they are not applicable for general prediction tasks but work only with special input data. Furthermore their model equations and optimization algorithms are derived individually for each task. We show that FMs can mimic these models just by specifying the input data (i.e. the feature vectors). This makes FMs easily applicable even for users without expert knowledge in factorization models.) <|cite_end|>. Auto-encoders <|cite_start|> (Reference: Autorec: Autoencoders meet collaborative filtering: This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.) <|cite_end|> <|cite_start|> (Reference: Collaborative denoising auto-Encoders for top-N recommender systems: Most real-world recommender services measure their performance based on the top-N results shown to the end users. Thus, advances in top-N recommendation have far-ranging consequences in practical applications. In this paper, we present a novel method, called Collaborative Denoising Auto-Encoder (CDAE), for top-N recommendation that utilizes the idea of Denoising Auto-Encoders. We demonstrate that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components. Thorough experiments are conducted to understand the performance of CDAE under various component settings. Furthermore, experimental results on several public datasets demonstrate that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.) <|cite_end|> <|cite_start|> (Reference: Variational Autoencoders for Collaborative Filtering: We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models which still largely dominate collaborative filtering research.We introduce a generative model with multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm has information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. 
Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently-proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.) <|cite_end|> constitute an early example of success for a framework based on neural networks to better infer unobserved user/item affinities in a recommendation problem. <|cite_start|> (Reference: Neural Collaborative Filtering: In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering -- the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.) <|cite_end|> also proved that traditional Collaborative Filtering methods can be effectively generalized by a deep neural network. Besides, for the specific problem of sequential recommendation using neural networks, RNNs <|cite_start|> (Reference: Session-based Recommendations with Recurrent Neural Networks: We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem.
Experimental results on two data-sets show marked improvements over widely used approaches.) <|cite_end|> <|cite_start|> (Reference: Recurrent recommender networks: Recommender systems traditionally assume that user profiles and movie attributes are static. Temporal dynamics are purely reactive, that is, they are inferred after they are observed, e.g. after a user's taste has changed or based on hand-engineered temporal bias corrections for movies. We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization. On multiple real-world datasets, our model offers excellent prediction accuracy and it is very compact, since we need not learn latent state but rather just the state transition function.) <|cite_end|> have become a common choice. Other methods based on Convolutional Neural Networks (CNNs) <|cite_start|> (Reference: Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding: Top-$N$ sequential recommendation models each user as a sequence of items interacted in the past and aims to predict top-$N$ ranked items that a user will likely interact in a `near future'. The order of interaction implies that sequential patterns play an important role where more recent items in a sequence have a larger impact on the next item. In this paper, we propose a Convolutional Sequence Embedding Recommendation Model (\emph{Caser}) as a solution to address this requirement. The idea is to embed a sequence of recent items into an `image' in the time and latent spaces and learn sequential patterns as local features of the image using convolutional filters. This approach provides a unified and flexible network structure for capturing both general preferences and sequential patterns. The experiments on public datasets demonstrated that Caser consistently outperforms state-of-the-art sequential recommendation methods on a variety of common evaluation metrics.) <|cite_end|> <|cite_start|> (Reference: A Simple Convolutional Generative Network for Next Item Recommendation: Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) are embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representation from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of \emph{holed} convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. 
It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback.) <|cite_end|>, Attention Models <|cite_start|> (Reference: Deep Interest Network for Click-Through Rate Prediction: Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.) <|cite_end|> have also been explored. While most existing methods developed for sequential recommendation perform well <|cite_start|> (Reference: Fusing Similarity Models with Markov Chains for Sparse Sequential Recommendation: Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long- and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets.
<|cite_end|> <|cite_start|> (Reference: {Factorizing Personalized Markov Chains for Next-basket Recommendation: Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization.) <|cite_end|> <|cite_start|> (Reference: Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding: Top-$N$ sequential recommendation models each user as a sequence of items interacted in the past and aims to predict top-$N$ ranked items that a user will likely interact in a `near future'. The order of interaction implies that sequential patterns play an important role where more recent items in a sequence have a larger impact on the next item. In this paper, we propose a Convolutional Sequence Embedding Recommendation Model (\emph{Caser}) as a solution to address this requirement. The idea is to embed a sequence of recent items into an `image' in the time and latent spaces and learn sequential patterns as local features of the image using convolutional filters. This approach provides a unified and flexible network structure for capturing both general preferences and sequential patterns. The experiments on public datasets demonstrated that Caser consistently outperforms state-of-the-art sequential recommendation methods on a variety of common evaluation metrics.) <|cite_end|> <|cite_start|> (Reference: Session-based Recommendations with Recurrent Neural Networks: We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. 
Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.) <|cite_end|> <|cite_start|> (Reference: Contextual Sequence Modeling for Recommendation with Recurrent Neural Networks: Recommendations can greatly benefit from good representations of the user state at recommendation time. Recent approaches that leverage Recurrent Neural Networks (RNNs) for session-based recommendations have shown that Deep Learning models can provide useful user representations for recommendation. However, current RNN modeling approaches summarize the user state by only taking into account the sequence of items that the user has interacted with in the past, without taking into account other essential types of context information such as the associated types of user-item interactions, the time gaps between events and the time of day for each interaction. To address this, we propose a new class of Contextual Recurrent Neural Networks for Recommendation (CRNNs) that can take into account the contextual information both in the input and output layers and modifying the behavior of the RNN by combining the context embedding with the item embedding and more explicitly, in the model dynamics, by parametrizing the hidden unit transitions as a function of context information. We compare our CRNNs approach with RNNs and non-sequential baselines and show good improvements on the next event prediction task.) <|cite_end|>, they still have some limitations when dealing with long user sequences found in production recommender systems. As we shall discuss in Section~\ref{sec:pre}, such approaches do not scale well to very long sequences. \textbf{Mixture of Models.} Despite being simpler and more elegant, monolithic models are in general less effective than mixtures of models at taking advantage of different model capacities and architectural biases. <|cite_start|> (Reference: Convolutional Sequence to Sequence Learning: The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.) <|cite_end|> used an RNN in combination with an attention model for neural machine translation, which provided a substantial performance gain. <|cite_start|> (Reference: Recurrent Convolutional Neural Networks for Scene Labeling: The goal of the scene labeling task is to assign a class label to each pixel in an image. To ensure a good visual coherence and a high class accuracy, it is essential for a model to capture long range (pixel) label dependencies in images. In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch, around each pixel to be labeled.
We propose an approach that consists of a recurrent convolutional neural network which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation technique nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.) <|cite_end|> proposed to combine a CNN with an RNN for scene labeling. In the field of sequential recommendation, an earlier work mixing a Latent Factor Model~(LFM) with a Factorized Markov Chain~(FMC) was shown to offer performance superior to each individual model <|cite_start|> (Reference: {Factorizing Personalized Markov Chains for Next-basket Recommendation: Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization.) <|cite_end|>. A similar trend was observed in <|cite_start|> (Reference: Fusing Similarity Models with Markov Chains for Sparse Sequential Recommendation: Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long- and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets.
In this paper, we propose to fuse such methods with Markov Chains to make personalized sequential recommendations. We evaluate our method, Fossil, on a variety of large, real-world datasets. We show quantitatively that Fossil outperforms alternative algorithms, especially on sparse datasets, and qualitatively that it captures personalized dynamics and is able to make meaningful recommendations.) <|cite_end|> <|cite_start|> (Reference: Deep Interest Evolution Network for Click-Through Rate Prediction: Click-through rate~(CTR) prediction, whose goal is to estimate the probability of the user clicks, has become one of the core tasks in advertising systems. For CTR prediction model, it is necessary to capture the latent user interest behind the user behavior data. Besides, considering the changing of the external environment and the internal cognition, user interest evolves over time dynamically. There are several CTR prediction methods for interest modeling, while most of them regard the representation of behavior as the interest directly, and lack specially modeling for latent interest behind the concrete behavior. Moreover, few work consider the changing trend of interest. In this paper, we propose a novel model, named Deep Interest Evolution Network~(DIEN), for CTR prediction. Specifically, we design interest extractor layer to capture temporal interests from history behavior sequence. At this layer, we introduce an auxiliary loss to supervise interest extracting at each step. As user interests are diverse, especially in the e-commerce system, we propose interest evolving layer to capture interest evolving process that is relative to the target item. At interest evolving layer, attention mechanism is embedded into the sequential structure novelly, and the effects of relative interests are strengthened during interest evolution. In the experiments on both public and industrial datasets, DIEN significantly outperforms the state-of-the-art solutions. Notably, DIEN has been deployed in the display advertisement system of Taobao, and obtained 20.7\% improvement on CTR.) <|cite_end|>. While sharing a similar spirit with these aforementioned methods, we designed our mixture of models with the goal of modeling varying ranges of dependence in long user sequences found in real production systems. Unlike model ensembles <|cite_start|> (Reference: Ensemble methods in Machine Learning: Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.) <|cite_end|> <|cite_start|> (Reference: Ensembling neural networks: Many could be better than all: ) <|cite_end|> <|cite_start|> (Reference: A Brand-level Ranking System with the Customized Attention-GRU Model: In e-commerce websites like Taobao, brand is playing a more important role in influencing users' decision of click/purchase, partly because users are now attaching more importance to the quality of products and brand is an indicator of quality. However, existing ranking systems are not specifically designed to satisfy this kind of demand.
Some design tricks may partially alleviate this problem, but still cannot provide satisfactory results or may create additional interaction cost. In this paper, we design the first brand-level ranking system to address this problem. The key challenge of this system is how to sufficiently exploit users' rich behavior in e-commerce websites to rank the brands. In our solution, we firstly conduct the feature engineering specifically tailored for the personalized brand ranking problem and then rank the brands by an adapted Attention-GRU model containing three important modifications. Note that our proposed modifications can also apply to many other machine learning models on various tasks. We conduct a series of experiments to evaluate the effectiveness of our proposed ranking model and test the response to the brand-level ranking system from real users on a large-scale e-commerce platform, i.e. Taobao.) <|cite_end|> that learn individual models separately prior to ensembling them, a mixture of models learns individual models as well as combination logic simultaneously. <|paper_end|>
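As a rough sketch of that distinction, the snippet below combines sub-model outputs through a context-dependent softmax gate of the kind that would be trained jointly with the sub-models by backpropagation; only the forward combination is shown. The shapes, the linear sub-models, and the gating form are illustrative assumptions, not the M3 architecture itself:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_forward(context, sub_models, gate_weights):
    """Weight each sub-model's item scores by a context-dependent gate."""
    preds = np.stack([m(context) for m in sub_models])  # (M, n_items)
    gate = softmax(gate_weights @ context)              # (M,) mixture weights
    return gate @ preds                                 # gated combination

# Toy usage: three hypothetical linear sub-models over 5 items, 4-dim context.
rng = np.random.default_rng(1)
sub_models = [lambda c, W=rng.normal(size=(5, 4)): W @ c for _ in range(3)]
gate_weights = rng.normal(size=(3, 4))
print(mixture_forward(rng.normal(size=4), sub_models, gate_weights))
```

Because the gate and the sub-models would share one training objective, the combination logic is learned rather than fixed, which is what separates a mixture from a post-hoc ensemble.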
[ "<|reference_start|> Deep neural Networks for Youtube Recommendations: YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact. <|reference_end|>", "<|reference_start|> {Factorizing Personalized Markov Chains for Next-basket Recommendation: Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization. <|reference_end|>", "<|reference_start|> Ensemble methods in Machine Learning: Ensemble methods are learning algorithms that construct a set of classiiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging , but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classiier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overrt rapidly. <|reference_end|>", "<|reference_start|> Ensembling neural networks: Many could be better than all: <|reference_end|>" ]
[ 13, 37, 40, 41 ]
{"<|multi_cite_1_1|>": "ss-862258", "<|multi_cite_1_2|>": "ss-678252", "<|multi_cite_1_3|>": "arxiv-31691", "<|multi_cite_1_4|>": "arxiv-132115", "<|multi_cite_2_1|>": "ss-1204622", "<|multi_cite_3_1|>": "arxiv-106812", "<|multi_cite_3_2|>": "ss-696734", "<|cite_20|>": "ss-692527", "<|multi_cite_4_1|>": "ss-1269221", "<|multi_cite_4_2|>": "ss-1396488", "<|cite_5|>": "arxiv-87783", "<|cite_6|>": "ss-1877559", "<|cite_7|>": "ss-678252", "<|cite_8|>": "ss-1221553", "<|cite_21|>": "ss-754049", "<|multi_cite_9_1|>": "ss-974412", "<|multi_cite_9_2|>": "ss-690198", "<|multi_cite_10_1|>": "arxiv-65503", "<|multi_cite_10_2|>": "ss-1931808", "<|multi_cite_11_1|>": "ss-678252", "<|multi_cite_11_2|>": "ss-956011", "<|multi_cite_12_1|>": "ss-1258523", "<|multi_cite_12_2|>": "ss-804296", "<|multi_cite_12_3|>": "arxiv-148561", "<|cite_22|>": "arxiv-132115", "<|multi_cite_13_1|>": "arxiv-87783", "<|multi_cite_13_2|>": "ss-1269221", "<|multi_cite_14_1|>": "arxiv-173335", "<|multi_cite_14_2|>": "arxiv-169323", "<|cite_15|>": "arxiv-127373", "<|multi_cite_16_1|>": "arxiv-106812", "<|multi_cite_16_2|>": "ss-696734", "<|multi_cite_16_3|>": "arxiv-173335", "<|multi_cite_16_4|>": "arxiv-87783", "<|multi_cite_16_5|>": "arxiv-127520", "<|cite_23|>": "arxiv-123607", "<|cite_24|>": "ss-1262944", "<|cite_17|>": "ss-696734", "<|multi_cite_18_1|>": "arxiv-106812", "<|multi_cite_18_2|>": "arxiv-172170", "<|multi_cite_19_1|>": "ss-1038069", "<|multi_cite_19_2|>": "ss-847325", "<|multi_cite_19_3|>": "arxiv-159627"}
0809.0124
<|paper_start|> Title: A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations Abstract: A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations: Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology. Introduction A pair of words (petrify:stone) is \emph{analogous} to another pair (vaporize:gas) when the semantic relations between the words in the first pair are highly similar to the relations in the second pair. Two words (levied and imposed) are \emph{synonymous} in a context (levied a tax) when they can be interchanged (imposed a tax), they are \emph{antonymous} when they have opposite meanings (black and white), and they are \emph{associated} when they tend to co-occur (doctor and hospital). On the surface, it appears that these are four distinct semantic classes, requiring distinct NLP algorithms, but we propose a uniform approach to all four. We subsume synonyms, antonyms, and associations under analogies. In essence, we say that $X$ and $Y$ are antonyms when the pair $X$:$Y$ is analogous to the pair black:white, $X$ and $Y$ are synonyms when they are analogous to the pair levied:imposed, and $X$ and $Y$ are associated when they are analogous to the pair doctor:hospital. There is past work on recognizing analogies, synonyms <|cite_start|> (Reference: A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge.: How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.) <|cite_end|>, antonyms <|cite_start|> (Reference: Identifying Synonyms among Distributionally Similar Words: There have been many proposals to compute similarities between words based on their distributions in contexts. However, these approaches do not distinguish between synonyms and antonyms.
We present two methods for identifying synonyms among distributionally similar words.) <|cite_end|>, and associations <|cite_start|> (Reference: Word-word associations in document retrieval systems: The SMART automatic document retrieval system is used to study association procedures for automatic content analysis. The effect of word frequency and other parameters on the association process is investigated through examination of related pairs and through retrieval experiments. Associated pairs of words usually reflect localized word meanings, and true synonyms cannot readily be found from first or second order relationships in our document collections. There is little overlap between word relationships found through associations and those used in thesaurus construction, and the effects of word associations and a thesaurus in retrieval are independent. The use of associations in retrieval experiments improves not only recall, by permitting new matches between requests and documents, but also precision, by reinforcing existing matches. In our experiments, the precision effect is responsible for most of the improvement possible with associations. A properly constructed thesaurus, however, offers better performance than statistical association methods.) <|cite_end|>, but each of these four tasks has been examined separately, in isolation from the others. As far as we know, the algorithm proposed here is the first attempt to deal with all four tasks using a uniform approach. We believe that it is important to seek NLP algorithms that can handle a broad range of semantic phenomena, because developing a specialized algorithm for each phenomenon is a very inefficient research strategy. It might seem that a lexicon, such as WordNet <|cite_start|> (Reference: Wordnet: An Electronic Lexical Database: Part 1 The lexical database: nouns in WordNet, George A. Miller modifiers in WordNet, Katherine J. Miller a semantic network of English verbs, Christiane Fellbaum design and implementation of the WordNet lexical database and searching software, Randee I. Tengi. Part 2: automated discovery of WordNet relations, Marti A. Hearst representing verb alterations in WordNet, Karen T. Kohl et al the formalization of WordNet by methods of relational concept analysis, Uta E. Priss. Part 3 Applications of WordNet: building semantic concordances, Shari Landes et al performance and confidence in a semantic annotation task, Christiane Fellbaum et al WordNet and class-based probabilities, Philip Resnik combining local context and WordNet similarity for word sense identification, Claudia Leacock and Martin Chodorow using WordNet for text retrieval, Ellen M. Voorhees lexical chains as representations of context for the detection and correction of malapropisms, Graeme Hirst and David St-Onge temporal indexing through lexical chaining, Reem Al-Halimi and Rick Kazman COLOR-X - using knowledge from WordNet for conceptual modelling, J.F.M. Burg and R.P. van de Riet knowledge processing on an extended WordNet, Sanda M. Harabagiu and Dan I Moldovan appendix - obtaining and using WordNet.) <|cite_end|>, contains all the information we need to handle these four tasks. However, we prefer to take a corpus-based approach to semantics. Veale \shortcite{veale04} used WordNet to answer 374 multiple-choice SAT analogy questions, achieving an accuracy of 43\%, but the best corpus-based approach attains an accuracy of 56\% <|cite_start|> (Reference: Similarity of Semantic Relations: There are at least two kinds of similarity. 
Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.) <|cite_end|>. Another reason to prefer a corpus-based approach to a lexicon-based approach is that the former requires less human labour, and thus it is easier to extend to other languages. In Section~\ref{sec:analogy-perception}, we describe our algorithm for recognizing analogies. We use a standard supervised machine learning approach, with feature vectors based on the frequencies of patterns in a large corpus. We use a support vector machine (SVM) to learn how to classify the feature vectors <|cite_start|> (Reference: Fast Training of Support Vector Machines Using Sequential Minimal Optimization: This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.) <|cite_end|> <|cite_start|> (Reference: Data mining: practical machine learning tools and techniques with Java implementations: ) <|cite_end|>. Section~\ref{sec:experiments} presents four sets of experiments. We apply our algorithm for recognizing analogies to multiple-choice analogy questions from the SAT college entrance test, multiple-choice synonym questions from the TOEFL (test of English as a foreign language), ESL (English as a second language) practice questions for distinguishing synonyms and antonyms, and a set of word pairs that are labeled \emph{similar}, \emph{associated}, and \emph{both}, developed for experiments in cognitive psychology. We discuss the results of the experiments in Section~\ref{sec:discussion}. The accuracy of the algorithm is competitive with other systems, but the strength of the algorithm is that it is able to handle all four tasks, with no tuning of the learning parameters to the particular task. It performs well, although it is competing against specialized algorithms, developed for single tasks. Related work is examined in Section~\ref{sec:related} and limitations and future work are considered in Section~\ref{sec:limitations}. We conclude in Section~\ref{sec:conclusion}. Related Work \label{sec:related} One of the first papers using supervised machine learning to classify word pairs was Rosario and Hearst's \shortcite{rosario01} paper on classifying noun-modifier pairs in the medical domain. For example, the noun-modifier expression \emph{brain biopsy} was classified as \emph{Procedure}. Rosario and Hearst \shortcite{rosario01} constructed feature vectors for each noun-modifier pair using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources. They then trained a neural network to distinguish 13 classes of semantic relations, such as \emph{Cause}, \emph{Location}, \emph{Measure}, and \emph{Instrument}. Nastase and Szpakowicz \shortcite{nastase03} explored a similar approach to classifying general-domain noun-modifier pairs, using WordNet and Roget's Thesaurus as lexical resources. Turney and Littman \shortcite{turneylittman05} used corpus-based features for classifying noun-modifier pairs. Their features were based on 128 hand-coded patterns. They used a nearest-neighbour learning algorithm to classify general-domain noun-modifier pairs into 30 different classes of semantic relations. Turney \shortcite{turney06} later addressed the same problem using 8000 automatically generated patterns. One of the tasks in SemEval 2007 was the classification of semantic relations between nominals <|cite_start|> (Reference: Semeval-2007 task 04: Classification of semantic relations between nominals: The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.) <|cite_end|>.
The problem is to classify semantic relations between nouns and noun compounds in the context of a sentence. The task attracted 14 teams who created 15 systems, all of which used supervised machine learning with features that were lexicon-based, corpus-based, or both. PairClass is most similar to the algorithm of Turney \shortcite{turney06}, but it differs in the following ways: \begin{myitemize} \item PairClass does not use a lexicon to find synonyms for the input word pairs. One of our goals in this paper is to show that a pure corpus-based algorithm can handle synonyms without a lexicon. This considerably simplifies the algorithm. \item PairClass uses a support vector machine (SVM) instead of a nearest neighbour (NN) learning algorithm. \item PairClass does not use the singular value decomposition (SVD) to smooth the feature vectors. It has been our experience that SVD is not necessary with SVMs. \item PairClass generates probability estimates, whereas Turney \shortcite{turney06} uses a cosine measure of similarity. Probability estimates can be readily used in further downstream processing, but cosines are less useful. \item The automatically generated patterns in PairClass are slightly more general than the patterns of Turney \shortcite{turney06}. \item The morphological processing in PairClass <|cite_start|> (Reference: Applied morphological processing of {English}: We describe two newly developed computational tools for morphological processing: a program for analysis of English inflectional morphology, and a morphological generator, automatically derived from the analyser. The tools are fast, being based on finite-state techniques, have wide coverage, incorporating data from various corpora and machine readable dictionaries, and are robust, in that they are able to deal effectively with unknown words. The tools are freely available. We evaluate the accuracy and speed of both tools and discuss a number of practical applications in which they have been put to use.) <|cite_end|> is more sophisticated than in Turney \shortcite{turney06}. \end{myitemize} \noindent However, we believe that the main contribution of this paper is not PairClass itself, but the extension of supervised word pair classification beyond the classification of noun-modifier pairs and semantic relations between nominals, to analogies, synonyms, antonyms, and associations. As far as we know, this has not been done before. <|paper_end|>
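To make the recipe surveyed in this record concrete (a word pair represented by corpus frequencies of joining patterns, fed to a supervised classifier), here is a minimal sketch. The patterns, counts, labels, and the use of scikit-learn's SVC are all invented for illustration; PairClass itself uses automatically generated patterns and its own SVM setup as described above:

```python
from sklearn.svm import SVC

# Hypothetical joining patterns; each feature column below is the corpus
# frequency of one pattern for the word pair X:Y. All counts are invented.
patterns = ["X and Y", "X but not Y", "X such as Y", "X is a Y"]

train_X = [
    [40, 1, 0, 0],   # levied:imposed (synonym-like frequency profile)
    [55, 2, 1, 0],   # big:large
    [38, 3, 0, 0],   # quick:fast
    [30, 25, 0, 0],  # black:white (antonym-like frequency profile)
    [28, 30, 0, 1],  # hot:cold
    [33, 27, 0, 0],  # up:down
]
train_y = ["synonym", "synonym", "synonym", "antonym", "antonym", "antonym"]
assert all(len(row) == len(patterns) for row in train_X)

clf = SVC().fit(train_X, train_y)
print(clf.predict([[45, 2, 0, 0]]))   # expect a synonym-like prediction
print(clf.predict([[29, 28, 0, 0]]))  # expect an antonym-like prediction
```

The same mechanism extends to any semantic class that can be anchored by labeled example pairs, which is the sense in which analogies subsume the other three phenomena.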
[ "<|reference_start|> A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge.: How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched. <|reference_end|>", "<|reference_start|> Wordnet: An Electronic Lexical Database: Part 1 The lexical database: nouns in WordNet, George A. Miller modifiers in WordNet, Katherine J. Miller a semantic network of English verbs, Christiane Fellbaum design and implementation of the WordNet lexical database and searching software, Randee I. Tengi. Part 2: automated discovery of WordNet relations, Marti A. Hearst representing verb alterations in WordNet, Karen T. Kohl et al the formalization of WordNet by methods of relational concept analysis, Uta E. Priss. Part 3 Applications of WordNet: building semantic concordances, Shari Landes et al performance and confidence in a semantic annotation task, Christiane Fellbaum et al WordNet and class-based probabilities, Philip Resnik combining local context and WordNet similarity for word sense identification, Claudia Leacock and Martin Chodorow using WordNet for text retrieval, Ellen M. Voorhees lexical chains as representations of context for the detection and correction of malapropisms, Graeme Hirst and David St-Onge temporal indexing through lexical chaining, Reem Al-Halimi and Rick Kazman COLOR-X - using knowledge from WordNet for conceptual modelling, J.F.M. Burg and R.P. van de Riet knowledge processing on an extended WordNet, Sanda M. Harabagiu and Dan I Moldovan appendix - obtaining and using WordNet. <|reference_end|>", "<|reference_start|> Fast Training of Support Vector Machines Using Sequential Minimal Optimization: This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. 
SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm. <|reference_end|>", "<|reference_start|> Semeval-2007 task 04: Classification of semantic relations between nominals: The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems. <|reference_end|>" ]
[ 0, 3, 5, 7 ]
{"<|cite_2|>": "ss-1028849", "<|cite_3|>": "ss-1505941", "<|cite_4|>": "ss-1291268", "<|cite_5|>": "ss-995558", "<|cite_6|>": "arxiv-674680", "<|multi_cite_7_1|>": "ss-688321", "<|multi_cite_7_2|>": "ss-687753", "<|cite_8|>": "ss-835893", "<|cite_9|>": "ss-808298"}
2207.09457
<|paper_start|> Title: A Deep Learning Framework for Wind Turbine Repair Action Prediction Using Alarm Sequences and Long Short Term Memory Algorithms Abstract: A Deep Learning Framework for Wind Turbine Repair Action Prediction Using Alarm Sequences and Long Short Term Memory Algorithms: With an increasing emphasis on driving down the costs of Operations and Maintenance (O&M) in the Offshore Wind (OSW) sector comes the requirement to explore new methodology and applications of Deep Learning (DL) in the domain. Condition-based monitoring (CBM) has been at the forefront of recent research developing alarm-based systems and data-driven decision making. This paper provides a brief insight into the research being conducted in this area, with a specific focus on alarm sequence modelling and the associated challenges faced in its implementation. The paper proposes a novel idea to predict a set of relevant repair actions from an input sequence of alarms, comparing Long Short-term Memory (LSTM) and Bidirectional LSTM (biLSTM) models. Achieving training accuracy results of up to 80.23% and test accuracy results of up to 76.01% with biLSTM gives a strong indication of the potential benefits of the proposed approach, which can be furthered in future research. The paper introduces a framework that integrates the proposed approach into O$\&$M procedures and discusses the potential benefits, which include the reduction of a confusing plethora of alarms, as well as unnecessary vessel transfers to the turbines for fault diagnosis and correction. Introduction O$\&$M is currently the second largest sub-sector market within OSW and is projected to rise to the largest sub-sector by 2050. This development has led to increased research interest, both in driving down the costs associated with O$\&$M and in improving the safety of alarm-based systems.\\ Currently, O$\&$M consists of three major methods: preventative, failure-based and condition-based monitoring <|cite_start|> (Reference: An opportunistic condition-based maintenance strategy for offshore wind farm based on predictive analytics: ) <|cite_end|>. The latter of these is the method most relevant to the success of alarm systems and alarm sequencing prediction. CBM relies on the alarm systems currently in place within turbines as well as SCADA systems <|cite_start|> (Reference: Using scada data for wind turbine condition monitoring: A systematic literature review: Operation and maintenance (O&M) activities represent a significant share of the total expenditure of a wind farm. Of these expenses, costs associated with unexpected failures account for the highest percentage. Therefore, it is clear that early detection of wind turbine (WT) failures, which can be achieved through appropriate condition monitoring (CM), is critical to reduce O&M costs. The use of Supervisory Control and Data Acquisition (SCADA) data has recently been recognized as an effective solution for CM since most modern WTs record large amounts of parameters using their SCADA systems. Artificial intelligence (AI) techniques can convert SCADA data into information that can be used for early detection of WT failures. This work presents a systematic literature review (SLR) with the aim to assess the use of SCADA data and AI for CM of WTs. To this end, we formulated four research questions as follows: (i) What are the current challenges of WT CM? (ii) What are the WT components to which CM has been applied? (iii) What are the SCADA variables used? and (iv) What AI techniques are currently under research? Further to answering the research questions, we identify the lack of accessible WT SCADA data towards research and the need for its standardization. Our SLR was developed by reviewing more than 95 scientific articles published in the last three years.) <|cite_end|>. SCADA has become a standard installation in larger offshore turbines in recent years, which means that the volume of collected data is increasing rapidly, creating the potential for new performance benchmarks for machine learning (ML) models applied in this area <|cite_start|> (Reference: Scada-compatible and scaleable visualization tool for corrosion monitoring of offshore wind turbine structures: The exploitation of offshore windfarms (WFs) goes hand in hand with large capital expenditures (CAPEX) and operational expenditures (OPEX), as these mechanical installations operate continuously for multiple decades in harsh, saline conditions. OPEX can account for up to 30% of the levelised cost of energy (LCoE) for a deployed offshore wind farm. To maintain the cost-competitiveness of deployed offshore WFs versus other renewable energy sources, their LCoE has to be kept in check, both by minimising the OPEX and optimising the offshore wind energy production. As corrosion, in particular uniform corrosion, is a major cause of failure of offshore wind turbine structures, there is an urgent need for corrosion management systems for deployed offshore wind turbine structures (WTs). Despite the fact that initial corrosion protection solutions are already integrated on some critical structural components such as WT towers, WT transition pieces or WT sub-structure (fixed or floating platforms), these components can still be harshly damaged by the corrosive environmental offshore conditions. The traditional preventive maintenance strategy, in which regular manual inspections by experts are necessary, is widely implemented nowadays in wind farm applications. Unfortunately, for such challenging operating environments, regular human inspections have a significant cost, which eventually increase the OPEX. To minimise the OPEX, remote corrosion monitoring solutions combined with supporting software (SW) tools are thus necessary. This paper focuses on the development of a software (SW) tool for the visualisation of corrosion measurement data. To this end, criteria for efficient structural corrosion analysis were identified, namely a scaleable, SCADA-compatible, secure, web accessible tool that can visualise 3D relationships. In order to be effective, the SW tool requires a tight integration with decision support tools. This paper provides three insights: Firstly, through a literature study and non-exhaustive market study, it is shown that a combined visualisation and decision SW tool is currently non-existing in the market. This gap motivates a need for the development of a custom SW tool. Secondly, the capabilities of the developed custom software tool, consisting of a backend layer and visualisation browser designed for this task are demonstrated and discussed in this paper. This indicates that a SCADA-compatible visualisation software tool is possible, and can be a major stepping stone towards a semi-automated decision support toolchain for offshore wind turbine corrosion monitoring.) <|cite_end|>.
The alarms linked to typical SCADA systems for OSW turbines allow monitoring of almost all sub-components <|cite_start|> (Reference: A parameter selection method for wind turbine health management through scada data: Wind turbine anomaly or failure detection using machine learning techniques through supervisory control and data acquisition (SCADA) system is drawing wide attention from academia and industry. While parameter selection is important for modelling a wind turbine’s condition, only a few papers have been published focusing on this issue and in those papers interconnections among sub-components in a wind turbine are used to address this problem. However, merely the interconnections for decision making sometimes is too general to provide a parameter list considering the differences of each SCADA dataset. In this paper, a method is proposed to provide more detailed suggestions on parameter selection based on mutual information. First, the copula is proven to be capable of simplifying the estimation of mutual information. Then an empirical copula-based mutual information estimation method (ECMI) is introduced for application. After that, a real SCADA dataset is adopted to test the method, and the results show the effectiveness of the ECMI in providing parameter selection suggestions when physical knowledge is not accurate enough.) <|cite_end|>. Interpreting these alarms, however, is not an easy task, for a number of reasons. Firstly, there is a certain risk of alarm flooding, especially since alarms may cascade during disturbances, as one symptom of the disturbance follows another. Alarm flooding describes bursts of closely related alarms and is defined as “10 or more annunciated alarms within a 10-minute period per operator”. When one alarm sounds, it is likely to trigger other alarms due to the close relationships between components’ behaviour and overall performance. Multiple activated alarms can often distract from the original fault, leading to more downtime on the site whilst diagnostic reports are produced <|cite_start|> (Reference: Wind turbine fault diagnosis by the approach of scada alarms analysis: Wind farm operators are overwhelmed by a large amount of supervisory control and data acquisition (SCADA) alarms when faults occur. This paper presents an online root fault identification method for SCADA alarms to assist operators in wind turbine fault diagnosis. The proposed method is based on the similarity analysis between an unknown alarm vector and the feature vectors of known faults. The alarm vector is obtained from segmented alarm lists, which are filtered and simplified. The feature vector, which is a unique signature representing the occurrence of a fault, is extracted from the alarm lists belonging to the same fault. To mine the coupling correspondence between alarms and faults, we define the weights of the alarms in each fault. The similarities is measured by the weighted Euclidean distance and the weighted Hamming distance, respectively. One year of SCADA alarms and maintenance records are used to verify the proposed method. The results show that the performance of the weighted Hamming distance is better than that of the weighted Euclidean distance; 84.1% of alarm lists are labeled with the right root fault.) <|cite_end|>. Secondly, systems can generate false alarms, i.e., alarms caused by sensor failures rather than by process disturbances. In addition to cascading and false alarms, systems also produce alarms during maintenance.
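As an illustration of how the flood definition above can be operationalised (a minimal sketch under assumed inputs; the per-operator timestamp format and parameter names are ours, not from any standard implementation), a sliding window over alarm timestamps flags flood episodes:

\begin{verbatim}
# Flag alarm floods: >= 10 annunciated alarms within any 10-minute window.
from collections import deque

def detect_floods(alarm_times_s, window_s=600, threshold=10):
    """Return start times of windows holding >= threshold alarms."""
    window, floods = deque(), []
    for t in sorted(alarm_times_s):
        window.append(t)
        while t - window[0] > window_s:   # evict alarms older than 10 min
            window.popleft()
        if len(window) >= threshold:
            floods.append(window[0])
    return floods

# Ten alarms in 4.5 minutes -> one flood episode is reported.
print(detect_floods(range(0, 300, 30)))
\end{verbatim}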
False and maintenance alarms are not only confusing for operators who oversee the health and safety of the turbine, but can also confuse automated ML algorithms trained on these data, e.g. for the purposes of fault isolation or generation of repair actions <|cite_start|> (Reference: Offshore wind turbine fault alarm prediction: ) <|cite_end|>. Current standards across industries, including EEMUA-191 and ANSI/ISA-18.2 <|cite_start|> (Reference: Ergonomics Analysis of Alarm Systems and Alarm Management in Process Industries: ) <|cite_end|>, detail the design, management and procurement of alarm systems as well as the alarm management specific to process industries <|cite_start|> (Reference: Process alarm prediction using deep learning and word embedding methods.: ) <|cite_end|>. These standards are used as a foundation for improving alarm processing and predicting likely repair schedules, but they do not prescribe or enforce specific techniques that address the significant problems mentioned above. To address these issues and to achieve appropriate fault isolation and, ultimately, repair action prediction, in this paper we propose a novel approach utilising alarm sequences to predict repair actions accurately and efficiently. Our contributions are: \begin{itemize} \item A DL-based approach to predict repair actions from a sequence of alarms. The paper experiments with both LSTM and biLSTM algorithms for comparison of performance on this problem (an illustrative model sketch is given below). \item A conceptual framework to integrate the idea of repair action prediction into OSW farm O$\&$M procedures. \item The proposed use of reinforcement learning in a human-in-the-loop procedure to improve the accuracy of the DL model over time. \end{itemize} In Section 2, we discuss the research question. In Section 3, we detail our methodology and compare it to other approaches within the domain. The methodology section discusses the pre-processing of data, the design of the neural network, and experiments. Section 4 discusses results and application to industry, and conclusions follow in Section 5. <|paper_end|>
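As a minimal sketch of the kind of model the paper compares (our illustration, not the authors' code; the vocabulary size, sequence length, number of repair classes and Keras usage are assumptions), a biLSTM classifier over padded alarm-code sequences could look as follows; swapping the Bidirectional wrapper for a plain LSTM layer gives the unidirectional baseline:

\begin{verbatim}
# Toy biLSTM mapping padded alarm-code sequences to repair actions.
import numpy as np
from tensorflow.keras import layers, models

n_alarms, seq_len, n_actions = 200, 50, 8   # assumed sizes

model = models.Sequential([
    layers.Embedding(input_dim=n_alarms, output_dim=32, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64)),  # use layers.LSTM(64) alone
                                            # for the unidirectional model
    layers.Dense(n_actions, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.randint(1, n_alarms, size=(256, seq_len))  # toy sequences
y = np.random.randint(0, n_actions, size=256)            # toy labels
model.fit(X, y, epochs=2, verbose=0)
\end{verbatim}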
[ "<|reference_start|> Using scada data for wind turbine condition monitoring: A systematic literature review: Operation and maintenance (O&M) activities represent a significant share of the total expenditure of a wind farm. Of these expenses, costs associated with unexpected failures account for the highest percentage. Therefore, it is clear that early detection of wind turbine (WT) failures, which can be achieved through appropriate condition monitoring (CM), is critical to reduce O&M costs. The use of Supervisory Control and Data Acquisition (SCADA) data has recently been recognized as an effective solution for CM since most modern WTs record large amounts of parameters using their SCADA systems. Artificial intelligence (AI) techniques can convert SCADA data into information that can be used for early detection of WT failures. This work presents a systematic literature review (SLR) with the aim to assess the use of SCADA data and AI for CM of WTs. To this end, we formulated four research questions as follows: (i) What are the current challenges of WT CM? (ii) What are the WT components to which CM has been applied? (iii) What are the SCADA variables used? and (iv) What AI techniques are currently under research? Further to answering the research questions, we identify the lack of accessible WT SCADA data towards research and the need for its standardization. Our SLR was developed by reviewing more than 95 scientific articles published in the last three years. <|reference_end|>", "<|reference_start|> Wind turbine fault diagnosis by the approach of scada alarms analysis: Wind farm operators are overwhelmed by a large amount of supervisory control and data acquisition (SCADA) alarms when faults occur. This paper presents an online root fault identification method for SCADA alarms to assist operators in wind turbine fault diagnosis. The proposed method is based on the similarity analysis between an unknown alarm vector and the feature vectors of known faults. The alarm vector is obtained from segmented alarm lists, which are filtered and simplified. The feature vector, which is a unique signature representing the occurrence of a fault, is extracted from the alarm lists belonging to the same fault. To mine the coupling correspondence between alarms and faults, we define the weights of the alarms in each fault. The similarities is measured by the weighted Euclidean distance and the weighted Hamming distance, respectively. One year of SCADA alarms and maintenance records are used to verify the proposed method. The results show that the performance of the weighted Hamming distance is better than that of the weighted Euclidean distance; 84.1% of alarm lists are labeled with the right root fault. <|reference_end|>", "<|reference_start|> Offshore wind turbine fault alarm prediction: <|reference_end|>", "<|reference_start|> Process alarm prediction using deep learning and word embedding methods.: <|reference_end|>" ]
[ 1, 4, 5, 7 ]
{"<|cite_3|>": "ss-786862", "<|cite_4|>": "ss-786863", "<|cite_5|>": "ss-786864", "<|cite_6|>": "ss-786865", "<|cite_8|>": "ss-786866", "<|cite_9|>": "ss-786867", "<|cite_11|>": "ss-786868", "<|cite_12|>": "ss-2176568"}
1701.05596
<|paper_start|> Title: The Parallel Distributed Image Search Engine (ParaDISE) Abstract: The Parallel Distributed Image Search Engine (ParaDISE): Image retrieval is a complex task that differs according to the context and the user requirements in any specific field, for example in a medical environment. Search by text is often not possible or optimal and retrieval by the visual content does not always succeed in modelling high-level concepts that a user is looking for. Modern image retrieval techniques consist of multiple steps and aim to retrieve information from large--scale datasets and not only based on global image appearance but local features and if possible in a connection between visual features and text or semantics. This paper presents the Parallel Distributed Image Search Engine (ParaDISE), an image retrieval system that combines visual search with text--based retrieval and that is available as open source and free of charge. The main design concepts of ParaDISE are flexibility, expandability, scalability and interoperability. These concepts constitute the system, able to be used both in real-world applications and as an image retrieval research platform. Apart from the architecture and the implementation of the system, two use cases are described, an application of ParaDISE in retrieval of images from the medical literature and a visual feature evaluation for medical image retrieval. Future steps include the creation of an open source community that will contribute and expand this platform based on the existing parts. Introduction Image retrieval just like general information retrieval is a popular and frequent activity in many fields such as journalism <|cite_start|> (Reference: Searching for photos -- journalists' practices in pictorial {IR}: This paper reports the results of a field study on journalists’ practices in requesting, searching for and selecting photos in the course of their daily work. The study addresses different types of search topics common in journalistic illustration tasks, journalists’ searching behaviour and the criteria they apply in selecting photos. Data were collected by observing journalists in their work and interviewing them. A sample of requests received by the archive was also analysed. The results indicate that specific needs dominate the use of newspaper photo archives. Photos of objects, themes, or abstract topics expressed in general terms were also needed, but finding them and formulating queries in these cases especially was considered problematic. The results suggest that browsing is an essential strategy in accessing digital photo archives. Journalists tend to browse but the present archive systems support browsing poorly. The paper concludes with suggestions for the improvement of end-user access to photo archives. The possible applications of current feature-based indexing and retrieval methods in the newspaper photo archive are discussed in the light of the results.) <|cite_end|> and medicine <|cite_start|> (Reference: A survey on visual information search behavior and requirements of radiologists: Summary Objectives: The main objective of this study is to learn more on the image use and search requirements of radiologists. These requirements will then be taken into account to develop a new search system for images and associated meta data search in the Khresmoi project. 
Methods: Observations of the radiology workflow, case discussions and a literature review were performed to construct a survey form that was given online and in paper form to radiologists. Eye tracking was performed on a radiology viewing station to analyze typical tasks and to complement the survey. Results: In total 34 radiologists answered the survey online or on paper. Image search was mentioned as a frequent and common task, particularly for finding cases of interest for differential diagnosis. Sources of information besides the Internet are books and discussions with colleagues. Search for images is unsuccessful in around 25% of the cases, stopping the search after around 10 minutes. The most common reason for failure is that target images are considered rare. Important additions for search requested in the survey are filtering by pathology and modality, as well as search for visually similar images and cases. Few radiologists are familiar with visual retrieval but they desire the option to upload images for searching similar ones. Conclusions: Image search is common in radiology but few radiologists are fully aware of visual information retrieval. Taking into account the many unsuccessful searches and time spent for this, a good image search could improve the situation and help in clinical practice.) <|cite_end|>. In certain cases, describing with keywords the images to retrieve is often not possible or optimal. Content--based image retrieval (CBIR) is an alternative approach to image search that uses the visual content of the image to find similar images. Querying by image example can be really time efficient, especially with the use of user interaction techniques such as relevance feedback <|cite_start|> (Reference: Relevance feedback in information retrieval: ) <|cite_end|>, which allows quick query refinement by marking relevant results. However, due to the use of low--level visual characteristics, such as color, shape and texture, by CBIR in order to represent an image, it is difficult to describe high--level concepts, e.g. a pathology found in an X--ray. This is particularly important in difficult cases, e.g. medical image retrieval where abnormalities and pathologies may be found in small areas of the image. Multi--modal approaches are one way to cope with this ``semantic gap'', combining text and visual information to determine relevancy to the query <|cite_start|> (Reference: Multimodal fusion for multimedia analysis: a survey: ) <|cite_end|>. Research on CBIR has been carried out in several fields such as object and scene retrieval <|cite_start|> (Reference: Video Google: A Text Retrieval Approach to Object Matching in Videos: We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames/shots in the manner of Google. 
The method is illustrated for matching in two full length feature films.) <|cite_end|> and remote sensing <|cite_start|> (Reference: Information mining in remote sensing image archives: System concepts: In this paper, we demonstrate the concepts of a prototype of a knowledge-driven content-based information mining system produced to manage and explore large volumes of remote sensing image data. The system consists of a computationally intensive offline part and an online interface. The offline part aims at the extraction of primitive image features, their compression, and data reduction, the generation of a completely unsupervised image content-index, and the ingestion of the catalogue entry in the database management system. Then, the user's interests-semantic interpretations of the image content-are linked with Bayesian networks to the content-index. Since this calculation is only based on a few training samples, the link can be computed online, and the complete image archive can be searched for images that contain the defined cover type. Practical applications exemplified with different remote sensing datasets show the potential of the system.) <|cite_end|>. In the early years, mathematical models where used to represent the visual content of the image in a holistic manner <|cite_start|> (Reference: Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope: ) <|cite_end|>. Later, local descriptors <|cite_start|> (Reference: Distinctive image features from scale--invariant keypoints: This paper presents a method for extracting distinctive inv ar ant features from images that can be used to perform reliable matching between diff rent views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a a substantial r ange of affine distortion, change in 3D viewpoint, addition of noise, and chan ge in illumination. The features are highly distinctive, in the sense that a sing le feature can be correctly matched with high probability against a large databa se of features from many images. This paper also describes an approach to using t hese features for object recognition. The recognition proceeds by matchi ng individual features to a database of features from known objects using a fas t ne rest-neighbor algorithm, followed by a Hough transform to identify cluste r belonging to a single object, and finally performing verification through leas t-squares solution for consistent pose parameters. This approach to recognition c an robustly identify objects among clutter and occlusion while achieving near re al-time performance. Accepted for publication in the International Journal of Computer Vision, 2004.) <|cite_end|> modelling the information around specific points or ROIs were shown to outperform global descriptors in several tasks <|cite_start|> (Reference: A Performance evaluation of local descriptors: In this paper we compare the performance of interest point descriptors. Many different descriptors have been proposed in the literature. However, it is unclear which descriptors are more appropriate and how their performance depends on the interest point detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the point detector. Our evaluation uses as criterion detection rate with respect to false positive rate and is carried out for different image transformations. 
We compare SIFT descriptors (Lowe, 1999), steerable filters (Freeman and Adelson, 1991), differential invariants (Koenderink ad van Doorn, 1987), complex filters (Schaffalitzky and Zisserman, 2002), moment invariants (Van Gool et al., 1996) and cross-correlation for different types of interest points. In this evaluation, we observe that the ranking of the descriptors does not depend on the point detector and that SIFT descriptors perform best. Steerable filters come second ; they can be considered a good choice given the low dimensionality.) <|cite_end|> <|cite_start|> (Reference: Features for Image Retrieval: A Quantitative Comparison: ) <|cite_end|>. While local descriptors allowed for partial matching of images and showed scale and rotation invariance, they were inefficient for search within large--scale image collections. For this reason, more compact representations inspired from text--based information retrieval such as Bag--of--Visual-Words (BoVW) <|cite_start|> (Reference: Video Google: A Text Retrieval Approach to Object Matching in Videos: We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.) <|cite_end|> have been developed. Efficient indexing structures such as the Inverted Index have also been employed to allow for fast real--time search. Several projects have already been realized in the field of information retrieval and made systems available as open source. Among them is the Viper project <|cite_start|> (Reference: Design and evaluation of a content--based image retrieval system: The growth in size and accessibility of multimedia databases have changed our approach to information retrieval. Classical text-based systems show their limitations in the context of multimedia retrieval. In this chapter, we address the problem of conceiving and evaluating a content-based image retrieval system. First, we investigate the use of the query-by-example (QBE) paradigm as a base paradigm for the development of a content-based image retrieval system (CBIRS). We show that it should be considered as a complement to the classical textual-based paradigms. We then evaluate the capabilities of the most up-to-date computer vision techniques in contributing to the realisation of such a system. Further, beyond the necessity of accurate image understanding techniques, we show that the amount of data involved by the process of describing image content should also be considered as an important issue. This aspect of our study is largely based on the experience acquired by the text retrieval (TR) community, which we adapt to the context of CBIR. 
Similarly, the text retrieval community has also developed a significant experience in evaluating retrieval systems, where judgements include subjectivity and context dependency. Extending this experience, we study a coherent framework for performing the evaluation of a CBIRS. As a practical example, we use our Viper CBIR system, using a novel communication protocol called MRML to pinpoint the importance of the sharing of resource in facilitating the evaluation and therefore the development of CBIRS.) <|cite_end|>, the outcome of which was the GNU Image--Finding Tool (GIFT), a CBIR system that enables users to perform ``Query By Example'' search operations and improve the quality of results using relevance feedback. The system contained a relatively small bank of outdated visual features which was hard to modify and expand. Another noteworthy project is Lucene Image Retrieval (LIRe) <|cite_start|> (Reference: Lire: Lucene Image Retrieval: An Extensible Java CBIR Library: LIRe (Lucene Image Retrieval) is a light weight open source Java library for content based image retrieval. It provides common and state of the art global image features and offers means for indexing and retrieval. Due to the fact that it is based on a light weight embedded text search engine, it can be integrated easily in applications without relying on a database server.) <|cite_end|>, a library based on the Lucene text retrieval software, which provides various visual features. The system uses purely visual search and provides little support for several state--of--the--art representations (such as spatial pyramid matching or bag--of--colors), indexing parallelization or flexible index structuring. Flexible Image Retrieval Engine (FIRE) is another example of a CBIR system <|cite_start|> (Reference: Features for image retrieval: an experimental comparison: ) <|cite_end|>, also used in medical image retrieval evaluation apart from other applications. The system allows also combination with text queries. Being developed before 2007, the system does not support state--of--the--art mid--level representations (such as BoVW or Vectors of locally aggregated descriptors (VLAD) <|cite_start|> (Reference: Aggregating local descriptors into a compact image representation: We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.) <|cite_end|>). No parallelization schema is mentioned for indexing large scale datasets, either. In <|cite_start|> (Reference: NIR: Content based image retrieval on cloud computing: NIR is an open source cloud computing enabled content based image retrieval system. With the development and popularization of cloud computing, more and more researchers from different research areas do research with the help of cloud computing. 
Nowadays content based image retrieval as one of the challenging and emerging technologies is high computation task because of the algorithm computation complexity and big amount of data. As based on cloud computing infrastructure, NIR is easy to extent and flexible for deployment. As an open source project, NIR can be improved on demand and integrated to other existing systems. This paper presents our ideas, findings, design and the system from our work of NIR.) <|cite_end|> a CBIR system, NIR, Nutch <|cite_start|> (Reference: Nutch: A flexible and scalable open-source web search engine: is an open-source Web search engine that can be used at global, local, and even personal scale. Its initial design goal was to enable a transparent alternative for global Web search in the public interest — one of its signature features is the ability to "explain" its result rankings. Recent work has emphasized how it can also be used for intranets; by local communities with richer data models, such as the Creative Commons metadata-enabled search for licensed content; on a personal scale to index a user's files, email, and web-surfing history; and we also report on several other research projects built on Nutch. In this paper, we present how the architecture of the Nutch system enables it to be more flexible and scalable than other comparable systems today.) <|cite_end|> and LIRe is presented. It uses Hadoop <|cite_start|> (Reference: Hadoop: the definitive guide: Hadoop: The Definitive Guide helps you harness the power of your data. Ideal for processing large datasets, the Apache Hadoop framework is an open source implementation of the MapReduce algorithm on which Google built its empire. This comprehensive resource demonstrates how to use Hadoop to build reliable, scalable, distributed systems: programmers will find details for analyzing large datasets, and administrators will learn how to set up and run Hadoop clusters. Complete with case studies that illustrate how Hadoop solves specific problems, this book helps you: Use the Hadoop Distributed File System (HDFS) for storing large datasets, and run distributed computations over those datasets using MapReduce Become familiar with Hadoop's data and I/O building blocks for compression, data integrity, serialization, and persistence Discover common pitfalls and advanced features for writing real-world MapReduce programs Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud Use Pig, a high-level query language for large-scale data processing Take advantage of HBase, Hadoop's database for structured and semi-structured data Learn ZooKeeper, a toolkit of coordination primitives for building distributed systems If you have lots of data -- whether it's gigabytes or petabytes -- Hadoop is the perfect solution. Hadoop: The Definitive Guide is the most thorough book available on the subject. "Now you have the opportunity to learn about Hadoop from a master-not only of the technology, but also of common sense and plain talk." -- Doug Cutting, Hadoop Founder, Yahoo!) <|cite_end|>, which is an implementation of the MapReduce framework <|cite_start|> (Reference: MapReduce: Simplified data processing on large clusters: MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. 
Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.) <|cite_end|>, for parallel computing. A small bank of outdated features is used to demonstrate the system using Hadoop. MapReduce was also used for the online processes even though this is not advised <|cite_start|> (Reference: Parallel data processing with MapReduce: a survey: A prominent parallel data processing tool MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, efficiency per node, and simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce its optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce.) <|cite_end|>. The indexing and retrieval times were demonstrated in a relatively small database. Another system called Distributed Image Retrieval System (DIRS) is described in <|cite_start|> (Reference: DIRS: Distributed image retrieval system based on MapReduce: With information technology developing rapidly, variety and quantity of image data is increasing fast. How to retrieve desired images among massive images storage is getting to be an urgent problem. In this paper, we established a Distributed Image Retrieval System (DIRS), in which images are retrieved in a content-based way, and the retrieval among massive image data storage is speeded up by utilizing MapReduce distributed computing model. Moreover, fault tolerance, ability to run in a heterogeneous environment and scalability are supported in our system. Experiments are carried out to verify the improvement of performance when MapReduce model is utilized. Results have shown that image storage and image retrieval based on MapReduce outperform that in centralized way greatly when total number of images is large.) <|cite_end|> using LIRe and HBase(\footnote{\texttt{http://hbase.apache.org/}}). Data sets of up to 100,000 images are used for testing the query times. When using datasets above 20,000 images, the retrieval times reported are restrictive for online use even though they are faster than without Hadoop use. This study presents the Parallel Distributed Image Search Engine (ParaDISE). ParaDISE is an image retrieval system that combines CBIR and text--based retrieval. 
The design of the system was based on the difficult use case of medical image retrieval, after a survey on radiologists image search information needs <|cite_start|> (Reference: A survey on visual information search behavior and requirements of radiologists: Summary Objectives: The main objective of this study is to learn more on the image use and search requirements of radiologists. These requirements will then be taken into account to develop a new search system for images and associated meta data search in the Khresmoi project. Methods: Observations of the radiology workflow, case discussions and a literature review were performed to construct a survey form that was given online and in paper form to radiologists. Eye tracking was performed on a radiology viewing station to analyze typical tasks and to complement the survey. Results: In total 34 radiologists answered the survey online or on paper. Image search was mentioned as a frequent and common task, particularly for finding cases of interest for differential diagnosis. Sources of information besides the Internet are books and discussions with colleagues. Search for images is unsuccessful in around 25% of the cases, stopping the search after around 10 minutes. The most common reason for failure is that target images are considered rare. Important additions for search requested in the survey are filtering by pathology and modality, as well as search for visually similar images and cases. Few radiologists are familiar with visual retrieval but they desire the option to upload images for searching similar ones. Conclusions: Image search is common in radiology but few radiologists are fully aware of visual information retrieval. Taking into account the many unsuccessful searches and time spent for this, a good image search could improve the situation and help in clinical practice.) <|cite_end|>. The design concepts are, however, relevant to any image retrieval field. ParaDISE constitutes a platform that could be used both in research, for CBIR and multi--modal image retrieval, but also in large--scale applications. The design and implementation of ParaDISE is described in Section~\ref{sec:system}. Two use cases demonstrating the applications of ParaDISE are presented in Section~\ref{sec:usecases}. The system design concepts and implementation choices are discussed in Section~\ref{sec:discussion}. <|paper_end|>
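To illustrate the bag-of-visual-words representation and inverted-file indexing that systems of this kind rely on (a simplified sketch with synthetic descriptors, not ParaDISE code; the vocabulary size and library calls are assumptions):

\begin{verbatim}
# BoVW in miniature: codebook, histograms, inverted index.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-ins for 128-d SIFT-like local descriptors per image.
images = [rng.random((60, 128)) for _ in range(5)]

codebook = KMeans(n_clusters=16, n_init=10,
                  random_state=0).fit(np.vstack(images))

def bovw(descriptors):            # image -> visual-word histogram
    return np.bincount(codebook.predict(descriptors), minlength=16)

inverted = defaultdict(list)      # visual word -> posting list
for img_id, desc in enumerate(images):
    hist = bovw(desc)
    for word in np.flatnonzero(hist):
        inverted[word].append((img_id, int(hist[word])))

query = bovw(images[2])           # query by example
candidates = {doc for w in np.flatnonzero(query) for doc, _ in inverted[w]}
print(sorted(candidates))
\end{verbatim}

The inverted index restricts scoring to images that share at least one visual word with the query, which is what makes real--time search over large collections feasible.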
[ "<|reference_start|> Relevance feedback in information retrieval: <|reference_end|>", "<|reference_start|> Multimodal fusion for multimedia analysis: a survey: <|reference_end|>", "<|reference_start|> Information mining in remote sensing image archives: System concepts: In this paper, we demonstrate the concepts of a prototype of a knowledge-driven content-based information mining system produced to manage and explore large volumes of remote sensing image data. The system consists of a computationally intensive offline part and an online interface. The offline part aims at the extraction of primitive image features, their compression, and data reduction, the generation of a completely unsupervised image content-index, and the ingestion of the catalogue entry in the database management system. Then, the user's interests-semantic interpretations of the image content-are linked with Bayesian networks to the content-index. Since this calculation is only based on a few training samples, the link can be computed online, and the complete image archive can be searched for images that contain the defined cover type. Practical applications exemplified with different remote sensing datasets show the potential of the system. <|reference_end|>", "<|reference_start|> Design and evaluation of a content--based image retrieval system: The growth in size and accessibility of multimedia databases have changed our approach to information retrieval. Classical text-based systems show their limitations in the context of multimedia retrieval. In this chapter, we address the problem of conceiving and evaluating a content-based image retrieval system. First, we investigate the use of the query-by-example (QBE) paradigm as a base paradigm for the development of a content-based image retrieval system (CBIRS). We show that it should be considered as a complement to the classical textual-based paradigms. We then evaluate the capabilities of the most up-to-date computer vision techniques in contributing to the realisation of such a system. Further, beyond the necessity of accurate image understanding techniques, we show that the amount of data involved by the process of describing image content should also be considered as an important issue. This aspect of our study is largely based on the experience acquired by the text retrieval (TR) community, which we adapt to the context of CBIR. Similarly, the text retrieval community has also developed a significant experience in evaluating retrieval systems, where judgements include subjectivity and context dependency. Extending this experience, we study a coherent framework for performing the evaluation of a CBIRS. As a practical example, we use our Viper CBIR system, using a novel communication protocol called MRML to pinpoint the importance of the sharing of resource in facilitating the evaluation and therefore the development of CBIRS. <|reference_end|>" ]
[ 2, 3, 5, 11 ]
{"<|cite_1|>": "ss-2001393", "<|cite_2|>": "ss-2001394", "<|cite_3|>": "ss-1513021", "<|cite_4|>": "ss-1062886", "<|cite_5|>": "ss-1213862", "<|cite_6|>": "ss-1145405", "<|cite_7|>": "ss-976815", "<|cite_8|>": "ss-1063541", "<|multi_cite_9_1|>": "ss-793137", "<|multi_cite_9_2|>": "ss-2001395", "<|cite_10|>": "ss-1213862", "<|cite_12|>": "ss-2001396", "<|cite_13|>": "ss-1896052", "<|cite_14|>": "ss-2001397", "<|cite_15|>": "ss-1091343", "<|cite_16|>": "ss-2001398", "<|cite_17|>": "ss-976169", "<|cite_18|>": "ss-994185", "<|cite_19|>": "ss-1107884", "<|cite_20|>": "ss-914973", "<|cite_21|>": "ss-2001399", "<|cite_22|>": "ss-2001394"}
1003.4369
<|paper_start|> Title: A Modal Logic for Termgraph Rewriting Abstract: A Modal Logic for Termgraph Rewriting: We propose a modal logic tailored to describe graph transformations and discuss some of its properties. We focus on a particular class of graphs called termgraphs. They are first-order terms augmented with sharing and cycles. Termgraphs allow one to describe classical data-structures (possibly with pointers) such as doubly-linked lists, circular lists etc. We show how the proposed logic can faithfully describe (i) termgraphs as well as (ii) the application of a termgraph rewrite rule (i.e. matching and replacement) and (iii) the computation of normal forms with respect to a given rewrite system. We also show how the proposed logic, which is more expressive than propositional dynamic logic, can be used to specify shapes of classical data-structures (e.g. binary trees, circular lists etc.). Introduction Graphs are common structures widely used in several areas in computer science and discrete mathematics. Their transformation constitutes a domain of research per se with a large number of potential applications <|cite_start|> (Reference: Handbook of Graph Grammars and Computing by Graph Transformations, Volume 1: Foundations: A graph program consists of declarations of conditional graph transformation rules G. Rozenberg, editors: Handbook of Graph Grammars and Computing. We introduce s-graph grammars, a new grammar formalism for computing Handbook of Graph Grammars and Computing by Graph Transformation, pp. The double-pushout approach to graph transformation, which was invented in the early 1970's, is Handbook of Graph Grammars and Computing by Graph.) <|cite_end|> <|cite_start|> (Reference: Handbook of graph grammars and computing by graph transformation: vol. 2: applications, languages, and tools: ) <|cite_end|> <|cite_start|> (Reference: Handbook of graph grammars and computing by graph transformation: vol. 3: concurrency, parallelism, and distribution: ) <|cite_end|>. There are many different ways to define graphs and graph transformation. We consider in this paper structures known as \emph{termgraphs} and their transformation via rewrite rules <|cite_start|> (Reference: Term Graph Rewriting: ) <|cite_end|> <|cite_start|> (Reference: Term Graph Rewriting: ) <|cite_end|>. Roughly speaking, a termgraph is a first-order term with possible sharing (of sub-terms) and cycles. Below we depict three examples of termgraphs: $G_0$ is a classical first-order term. $G_1$ represents the same expression as $G_0$ but argument $x$ is shared. $G_1$ is often used to define the doubling function, $double(x) = G_1$. The third termgraph, $G_3$, represents a circular list of two ``records'' (represented here by operator $cons$) sharing the same content $G_1$. $$ \xymatrix@R=1pc@C=1pc {& +\ar[dl]\ar[dr] \\ x & & x \\ & \\ & G_0 } \hspace{1cm} \xymatrix@R=1pc@C=1pc {+\ar@/_/[d]\ar@/^/[d] \\ x \\ \\ G_1 }\hspace{1cm} \xymatrix@R=1pc@C=1pc { cons \ar[dr]\ar[rr] & & cons \ar[dl] \ar@/_/[ll] \\ & + \ar@/_/[d]\ar@/^/[d] & \\ & x & \\ & G_3 }$$ Termgraphs allow one to represent real-world data structures (with pointers) such as circular lists, doubly-linked lists, etc. <|cite_start|> (Reference: Inductively Sequential Term-Graph Rewrite Systems: ) <|cite_end|>, and rewriting allows such graphs to be processed efficiently. They are thus a suitable framework for declarative languages dealing with such complex data structures.
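To make the examples above concrete in code, the following minimal sketch (our own illustration, not part of the paper's formalism; the dictionary encoding and traversal are assumptions) represents termgraphs as labelled nodes with ordered successor lists, so that sharing and cycles amount to reusing node identifiers:

\begin{verbatim}
# Termgraphs as labelled nodes; sharing/cycles = reused node ids.
G = {                                  # node id -> (label, successors)
    "n1": ("+",    ["n2", "n2"]),      # G_1: double, argument x shared
    "n2": ("x",    []),
    "c1": ("cons", ["n1", "c2"]),      # G_3: circular list of two cells
    "c2": ("cons", ["n1", "c1"]),      # sharing the same content n1
}

def labels_reachable(g, root, seen=None):
    """Labels reachable from root; 'seen' guards against cycles."""
    seen = set() if seen is None else seen
    if root in seen:
        return set()
    seen.add(root)
    label, succs = g[root]
    out = {label}
    for s in succs:
        out |= labels_reachable(g, s, seen)
    return out

print(labels_reachable(G, "c1"))       # {'cons', '+', 'x'}
\end{verbatim}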
However, while there exist rewriting-based proof methods for first-order terms, there is a lack of appropriate termgraph rewriting proof methods, thus diminishing their operational benefits. Indeed, equational logic provides a logical setting for first-order term rewriting <|cite_start|> (Reference: Term rewriting and all that: Preface 1. Motivating examples 2. Abstract reduction systems 3. Universal algebra 4. Equational problems 5. Termination 6. Confluence 7. Completion 8. Grobner bases and Buchberger's algorithm 9. Combination problems 10. Equational unification 11. Extensions Appendix 1. Ordered sets Appendix 2. A bluffer's guide to ML Bibliography Index.) <|cite_end|>, and many theorem provers use rewrite techniques in order to efficiently achieve equational reasoning. In <|cite_start|> (Reference: A term-graph clausal logic: Completeness and incompleteness results: A clausal logic allowing to handle term-graphs is defined. Term-graphs are a generalization of terms (in the usual sense) possibly containing shared subterms and cycles. The satisfiability problem for this logic is shown to be undecidable (not even semi-decidable), but some fragments are identified for which it is semi-decidable. A complete (w.r.t validity) calculus for these fragments is proposed. Some simple examples give a taste of this calculus at work.) <|cite_end|> an extension of first-order (clausal) logic dealing with termgraphs has been proposed to give a logic counterpart of termgraph rewriting. In such a logic, operations are interpreted as continuous functions and bisimilar graphs cannot be distinguished (two termgraphs are bisimilar if and only if they represent the same rational term). As a result, reasoning on termgraphs is unfortunately much trickier than in first-order classical logic. For example, equational theories on termgraphs are not recursively enumerable, whereas equational theories on terms are r.e. In this paper, we investigate a modal logic with possible worlds semantics which better fits the operational features of termgraph rewriting systems. Termgraphs can easily be interpreted within the framework of possible worlds semantics, where nodes are considered as worlds and edges as modalities. Based on this observation, we investigate a new modal logic which has been tailored to fit termgraph rewriting. We show how termgraphs as well as rewrite rules can be specified by means of modal formulae. In particular, we show how a rewrite step can be defined by means of a modal formula which encodes termgraph matching (graph homomorphism) and termgraph replacement (graph construction and modification). We also show how to define properties on such structures, such as being a list, a circular list, a tree, or a binary tree. The computation of termgraph normal forms is formulated in this new logic. In addition, we formulate invariant preservation by rewriting rules and discuss subclasses for which validity is decidable. The next two sections introduce respectively the considered class of termgraph rewrite systems and the proposed modal logic. In section~\ref{sec-definability} we briefly discuss the expressive power of the modal logic and show in particular how graph homomorphisms can be encoded. In section~\ref{sec-logic-transformation} we show how elementary graph transformations can be expressed as modal logic formulae, whereas section~\ref{sec-traduction} shows how termgraph rewriting can be specified as modal formulae. Section~\ref{conclusion} gives some concluding remarks. <|paper_end|>
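As a hint of the matching half of a rewrite step, the following sketch (an illustrative simplification using the node-dictionary encoding above, not the paper's algorithm or its modal encoding) performs rooted matching, i.e. the homomorphism test: variable nodes carry the label None and match any subject node, and the environment check respects sharing.

\begin{verbatim}
# Rooted matching of a pattern termgraph against a subject termgraph.
def match(pat, sub, p, s, env=None):
    """Map pattern node p onto subject node s; return env or None."""
    env = {} if env is None else env
    if p in env:                       # shared pattern node: map once
        return env if env[p] == s else None
    plabel, psuccs = pat[p]
    if plabel is not None:             # None acts as a variable
        slabel, ssuccs = sub[s]
        if plabel != slabel or len(psuccs) != len(ssuccs):
            return None
    env[p] = s
    for pc, sc in zip(psuccs, sub[s][1]):
        env = match(pat, sub, pc, sc, env)
        if env is None:
            return None
    return env

PAT = {"p": ("+", ["v", "v"]), "v": (None, [])}   # pattern +(x, x)
SUB = {"n1": ("+", ["n2", "n2"]), "n2": ("x", [])}
print(match(PAT, SUB, "p", "n1"))      # {'p': 'n1', 'v': 'n2'}
\end{verbatim}

Replacement, the second half of a rewrite step, would then rebuild or redirect nodes in the subject graph under the computed environment.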
[ "<|reference_start|> Handbook of Graph Grammars and Computing by Graph Transformations,\nVolume 1: Foundations: A graph program consists of declarations of conditional graph transformation rules G. Rozenberg, editors: Handbook of Graph Grammars and Computing. We introduce s-graph grammars, a new grammar formalism for computing Handbook of Graph Grammars and Computing by Graph Transformation, pp. The double-pushout approach to graph transformation, which was invented in the early 1970's, is Handbook of Graph Grammars and Computing by Graph. <|reference_end|>", "<|reference_start|> Handbook of graph grammars and computing by graph transformation: vol. 3: concurrency, parallelism, and distribution: <|reference_end|>", "<|reference_start|> Term Graph Rewriting: <|reference_end|>", "<|reference_start|> Term rewriting and all that: Preface 1. Motivating examples 2. Abstract reduction systems 3. Universal algebra 4. Equational problems 5. Termination 6. Confluence 7. Completion 8. Grobner bases and Buchberger's algorithm 9. Combination problems 10. Equational unification 11. Extensions Appendix 1. Ordered sets Appendix 2. A bluffer's guide to ML Bibliography Index. <|reference_end|>" ]
[ 0, 2, 4, 6 ]
{"<|multi_cite_1_1|>": "ss-1283771", "<|multi_cite_1_2|>": "ss-991264", "<|multi_cite_1_3|>": "ss-991265", "<|multi_cite_2_1|>": "ss-1361691", "<|multi_cite_2_2|>": "ss-1361691", "<|cite_3|>": "ss-1029323", "<|cite_4|>": "ss-2281340", "<|cite_5|>": "ss-1029324"}
2402.13464-1
<|cite_start|> (Reference: Comparing the effects of paper and digital checklists on team performance in time-critical work: This mixed-methods study examines the effects of a tablet-based checklist system on team performance during a dynamic and safety-critical process of trauma resuscitation. We compared team performance from 47 resuscitations that used a paper checklist to that from 47 cases with a digital checklist to determine if digitizing a checklist led to improvements in task completion rates and in how fast the tasks were initiated for 18 most critical assessment and treatment tasks. We also compared if the checklist compliance increased with the digital design. We found that using the digital checklist led to more frequent completions of the initial airway assessment task but fewer completions of ear and lower extremities exams. We did not observe any significant differences in time to task performance, but found increased compliance with the checklist. Although improvements in team performance with the digital checklist were minor, our findings are important because they showed no adverse effects as a result of the digital checklist introduction. We conclude by discussing the takeaways and implications of these results for effective digitization of medical work.) <|cite_end|> <|cite_start|> (Reference: Exploring design opportunities for a context-adaptive medical checklist through technology probe approach: This paper explores the workflow and use of an interactive medical checklist for trauma resuscitation--an emerging technology developed for trauma team leaders to support decision making and task coordination among team members. We used a technology probe approach and ethnographic methods, including video review, interviews, and content analysis of checklist logs, to examine how team leaders use the checklist probe during live resuscitations. We found that team leaders of various experience levels use the technology differently. Some leaders frequently glance at the checklist and take notes during task performance, while others place the checklist on a stand and only interact with the checklist when checking items. We compared checklist timestamps to task activities and found that most items are checked off after tasks are performed. We conclude by discussing design implications and new design opportunities for a future dynamic, adaptive checklist.) <|cite_end|> <|cite_start|> (Reference: Supporting Awareness of Dynamic Data: Approaches to Designing and Capturing Data within Interactive Clinical Checklists: Automatically integrating data within interactive clinical checklists allows for enhanced dynamic displays, while also providing information needed for checklist adaptation to the context of the medical event. In this mixed-methods study, we used user-centered design sessions with clinicians to design a checklist interface that automatically captures and displays dynamic patient data. We compared the manual and automatic checklist versions during video-guided simulation sessions, evaluating the effects of automatic capture on clinicians’ interactions with dynamic data and their situation awareness. Despite clinicians’ concerns that automatic data capture would affect situation awareness, we found no significant difference in awareness scores. Participants preferred the automatic version, highlighting its improved accuracy and completeness. 
From our findings, we propose a framework for capturing dynamic data and designing dynamic data interfaces within interactive checklists. We conclude by discussing barriers and design opportunities for supporting awareness of data trends through checklists.) <|cite_end|>), treatment recommendations (e.g., providing clinical guidelines for pneumonia <|cite_start|> (Reference: {CDS in a learning health care system: Identifying physicians' reasons for rejection of best-practice recommendations in pneumonia through computerized clinical decision support: Abstract Background Local implementation of guidelines for pneumonia care is strongly recommended, but the context of care that affects implementation is poorly understood. In a learning health care system, computerized clinical decision support (CDS) provides an opportunity to both improve and track practice, providing insights into the implementation process. Objectives This article examines physician interactions with a CDS to identify reasons for rejection of guideline recommendations. Methods We implemented a multicenter bedside CDS for the emergency department management of pneumonia that integrated patient data with guideline-based recommendations. We examined the frequency of adoption versus rejection of recommendations for site-of-care and antibiotic selection. We analyzed free-text responses provided by physicians explaining their clinical reasoning for rejection, using concept mapping and thematic analysis. Results Among 1,722 patient episodes, physicians rejected recommendations to send a patient home in 24%, leaving text in 53%; reasons for rejection of the recommendations included additional or alternative diagnoses beyond pneumonia, and comorbidities or signs of physiologic derangement contributing to risk of outpatient failure that were not processed by the CDS. Physicians rejected broad-spectrum antibiotic recommendations in 10%, leaving text in 76%; differences in pathogen risk assessment, additional patient information, concern about antibiotic properties, and admitting physician preferences were given as reasons for rejection. Conclusion While adoption of CDS recommendations for pneumonia was high, physicians rejecting recommendations frequently provided feedback, reporting alternative diagnoses, additional individual patient characteristics, and provider preferences as major reasons for rejection. CDS that collects user feedback is feasible and can contribute to a learning health system.) <|cite_end|>, sepsis <|cite_start|> (Reference: Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care: Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases. We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. 
Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term "negotiation." These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.) <|cite_end|>, and cancer <|cite_start|> (Reference: Clinical decision support for therapeutic decision-making in cancer: A systematic review: ) <|cite_end|>), and guideline-specific dashboards (e.g., displaying a list of patients that qualify for a protocol <|cite_start|> (Reference: Clinical impact of an electronic dashboard and alert system for sedation minimization and ventilator liberation: a before-after study: Supplemental Digital Content is available in the text.) <|cite_end|>). While helpful in operationalizing broad best-practice guidelines, these approaches are often limited at the bedside as they do not account for patient-level variation <|cite_start|> (Reference: Current sepsis mandates are overly prescriptive, and some aspects may be harmful: There is no debate that sepsis is a major public health problem that merits the vigorous attention of the medical and public health community. We strongly support efforts to increase sepsis awareness and to operationalize best practices in all hospitals. We are concerned, however, that some aspects of the Centers for Medicare and Medicaid Services’(CMS) Severe Sepsis and Septic Shock Early Management Bundle (SEP-1) mandatory reporting requirement and similar state-based mandates may paradoxically harm some patients by pressuring clinicians to provide aggressive, rapid, rigid, and reflexive care that is not suitable for all patients. The complexity and subjectivity of sepsis diagnosis make forced, fixed treatment rules dangerous and data collection onerous yet unreliable. We believe patients will be better served by allowing clinicians more discretion to determine which patients need sepsis care, requiring a narrower set of more evidence-based interventions, and incentivizing objective measurement strategies so that hospitals and policy makers can reliably assess the impact of their efforts. SEP-1 and most state-based mandates are modeled upon Surviving Sepsis Campaign guidelines (1). Clinicians are required to check lactate levels, draw blood cultures, administer broad-spectrum antibiotics for all patients, and infuse greater than or equal to 30 cc/kg of crystalloid fluids for patients that are hypotensive or have lactate levels greater than or equal to 4 mmol/L, all within 3 hours of time zero. Clinicians must also start vasopressors for patients with persistent hypotension, reassess volume status, and recheck lactate if the initial level was elevated, all within 6 hours of time zero. Time zero is defined as the first moment when the patient has documentation of suspected sepsis, or suspected infection plus greater than or equal to 2 systemic inflammatory response syndrome criteria and evidence of organ dysfunction within a 6-hour window. This bundle includes a mix of measures that are well supported by data and others that are not. Timely antibiotics have repeatedly been associated with better outcomes in critically ill patients with serious infections. Retrospective analyses of almost 50,000 patients treated for sepsis in New York State and 35,000 patients treated for sepsis in Northern California, for example, reported strong associations between time-to-antibiotics and in-hospital mortality (2, 3). 
A Bayesian hierarchical analysis of 37 studies of early goal-directed therapy found that the only aspects of goal-directed therapy associated with lower mortality were the timing and appropriateness of antibiotics (4). The data on lactate and volume resuscitation, however, are more equivocal (5). Lactate levels and lactate clearance rates correlate well with outcomes, but an elevated lactate level is not specific for sepsis, and there are very little data indicating that checking lactate levels (particularly serial levels) improves outcomes (6–9). Of greater concern, large volume infusions may be harmful to patients who are hypervolemic or euvolemic at presentation, as well as to patients with cardiomyopathy, renal dysfunction, limited pulmonary reserve, or malnutrition. Notably, the New York State analysis found no association between time-to-completion of the initial 30 cc/kg fluid bolus and mortality in patients with hypotension or lactate greater than or equal to 4 mmol/L (2). More concerningly, a growing body of literature associates larger volume resuscitation and positive daily fluid balances with higher mortality rates (10–14). Even the data on time to antibiotics, however, are more nuanced than SEP-1 allows (15). Timely antibiotics appear to matter most for patients with septic shock. The association between time-to-antibiotics and mortality in New York State and Northern California was clearest only in the subset of patients who required vasopressors (2, 3). A randomized trial of prehospital antibiotics versus antibiotics in the emergency department in a population of patients without shock found no difference in mortality rates despite a 90-minute difference in time-to-antibiotics (16). Over and above the strengths and limitations of each SEP-1 component, it is unclear whether the bundle as a whole leads [...]) <|cite_end|>. Recent work has highlighted the potential of AI technologies to combine clinical guidelines with patient-level variables to deliver more specific and personalized treatment recommendations <|cite_start|> (Reference: Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care: Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases.
We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term "negotiation." These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.) <|cite_end|>. We build on this line of research to understand information needs for WUB and to discover opportunities for EHR-based and AI-based interventions that make it easier to consider and follow clinical guidelines.} \subsection{\textcolor{black}{AI Systems in Healthcare and ICU}} \textcolor{black}{A large body of research has explored data-driven and AI applications in healthcare, often in the form of clinical decision support systems (CDSS) (e.g., <|cite_start|> (Reference: Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes: Clinical decision support tools (DST) promise improved healthcare outcomes by offering data-driven insights. While effective in lab settings, almost all DSTs have failed in practice. Empirical research diagnosed poor contextual fit as the cause. This paper describes the design and field evaluation of a radically new form of DST. It automatically generates slides for clinicians' decision meetings with subtly embedded machine prognostics. This design took inspiration from the notion of "Unremarkable Computing", that by augmenting the users' routines technology/AI can have significant importance for the users yet remain unobtrusive. Our field evaluation suggests clinicians are more likely to encounter and embrace such a DST. Drawing on their responses, we discuss the importance and intricacies of finding the right level of unremarkableness in DST design, and share lessons learned in prototyping critical AI systems as a situated experience.) <|cite_end|> <|cite_start|> (Reference: "Hello AI": uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making: Although rapid advances in machine learning have made it increasingly applicable to expert decision-making, the delivery of accurate algorithmic predictions alone is insufficient for effective human-AI collaboration. In this work, we investigate the key types of information medical experts desire when they are first introduced to a diagnostic AI assistant. In a qualitative lab study, we interviewed 21 pathologists before, during, and after being presented deep neural network (DNN) predictions for prostate cancer diagnosis, to learn the types of information that they desired about the AI assistant. Our findings reveal that, far beyond understanding the local, case-specific reasoning behind any model decision, clinicians desired upfront information about basic, global properties of the model, such as its known strengths and limitations, its subjective point-of-view, and its overall design objective--what it's designed to be optimized for. Participants compared these information needs to the collaborative mental models they develop of their medical colleagues when seeking a second opinion: the medical perspectives and standards that those colleagues embody, and the compatibility of those perspectives with their own diagnostic patterns.
These findings broaden and enrich discussions surrounding AI transparency for collaborative decision-making, providing a richer understanding of what experts find important in their introduction to AI assistants before integrating them into routine practice.) <|cite_end|> <|cite_start|> (Reference: Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care: Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases. We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term "negotiation." These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.) <|cite_end|>). Within the context of the ICU, the majority of AI applications have focused on automating documentation-related tasks to save time (e.g., transcribing ICU rounds <|cite_start|> (Reference: A voice-based digital assistant for intelligent prompting of evidence-based practices during ICU rounds: ) <|cite_end|>) or providing diagnostic or prognostic insights to help with decision making (e.g., predicting the onset of conditions such as sepsis <|cite_start|> (Reference: {An interpretable machine learning model for accurate prediction of sepsis in the ICU: Objectives: Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Design: Observational cohort study. Setting: Academic medical center from January 2013 to December 2015. Patients: Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. Interventions: None. Measurements and Main Results: High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) were calculated on hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the proceeding T hours (where T = 12, 8, 6, or 4).
Artificial Intelligence Sepsis Expert was used to predict onset of sepsis in the proceeding T hours and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved area under the receiver operating characteristic in the range of 0.83–0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Conclusions: Using data available in the ICU in real-time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4–12 hours prior to clinical recognition. A prospective study is necessary to determine the clinical utility of the proposed sepsis prediction model.) <|cite_end|> <|cite_start|> (Reference: Human–machine teaming is key to AI adoption: clinicians’ experiences with a deployed machine learning system: ) <|cite_end|>, tachycardia <|cite_start|> (Reference: TOP-Net prediction model using bidirectional long short-term memory and medical-grade wearable multisensor system for tachycardia onset: algorithm development study: Background Without timely diagnosis and treatment, tachycardia, also called tachyarrhythmia, can cause serious complications such as heart failure, cardiac arrest, and even death. The predictive performance of conventional clinical diagnostic procedures needs improvement in order to assist physicians in detecting risk early on. Objective We aimed to develop a deep tachycardia onset prediction (TOP-Net) model based on deep learning (ie, bidirectional long short-term memory) for early tachycardia diagnosis with easily accessible data. Methods TOP-Net leverages 2 easily accessible data sources: vital signs, including heart rate, respiratory rate, and blood oxygen saturation (SpO2) acquired continuously by wearable embedded systems, and electronic health records, containing age, gender, admission type, first care unit, and cardiovascular disease history. The model was trained with a large data set from an intensive care unit and then transferred to a real-world scenario in the general ward. In this study, 3 experiments incorporated merging patients’ personal information, temporal memory, and different feature combinations. Six metrics (area under the receiver operating characteristic curve [AUROC], sensitivity, specificity, accuracy, F1 score, and precision) were used to evaluate predictive performance. Results TOP-Net outperformed the baseline models on the large critical care data set (AUROC 0.796, 95% CI 0.768-0.824; sensitivity 0.753, 95% CI 0.663-0.793; specificity 0.720, 95% CI 0.645-0.758; accuracy 0.721; F1 score 0.718; precision 0.686) when predicting tachycardia onset 6 hours in advance. When predicting tachycardia onset 2 hours in advance with data acquired from our hospital using the transferred TOP-Net, the 6 metrics were 0.965, 0.955, 0.881, 0.937, 0.793, and 0.680, respectively. The best performance was achieved using comprehensive vital signs (heart rate, respiratory rate, and SpO2) statistical information. Conclusions TOP-Net is an early tachycardia prediction model that uses 8 types of data from wearable sensors and electronic health records. When validated in clinical scenarios, the model achieved a prediction performance that outperformed baseline models 0 to 6 hours before tachycardia onset in the intensive care unit and 2 hours before tachycardia onset in the general ward. 
Because of the model’s implementation and use of easily accessible data from wearable sensors, the model can assist physicians with early discovery of patients at risk in general wards and houses.) <|cite_end|> or hypotension <|cite_start|> (Reference: Prediction of hypotension events with physiologic vital sign signatures in the intensive care unit: ) <|cite_end|>). Recently, with the availability of high-density EHR data on large numbers of mechanically ventilated ICU patients (e.g., MIMIC <|cite_start|> (Reference: MIMIC-IV: Objective: Given the high incidence and mortality of sepsis, early identification of high-risk patients and timely intervention are critical, yet existing mortality risk prediction models fall short in usability, applicability, and prediction of long-term outcomes. This study aimed to explore risk factors for death in patients with sepsis and to construct short-term and long-term mortality risk prediction models. Methods: Patients meeting the Sepsis-3.0 diagnostic criteria were selected from the Medical Information Mart for Intensive Care-IV (MIMIC-IV) database and randomly divided into modeling and validation groups at a 7:3 ratio; baseline characteristics were analyzed. Univariate Cox regression and best-subset regression were used to identify risk factors for death in patients with sepsis and to select variables for the prediction models. Time-dependent area under the curve (AUC), calibration curves, and decision curves were used to evaluate the models' discrimination, calibration, and clinical utility. Results: A total of 14,240 patients with sepsis were included; the 28-day and 1-year mortality rates were 21.45% (3,054 cases) and 36.50% (5,198 cases), respectively. Advanced age, female sex, high sepsis-related organ failure assessment (SOFA) score, high simplified acute physiology score II (SAPS II), rapid heart rate, rapid respiratory rate, septic shock, congestive heart failure, chronic obstructive pulmonary disease, liver disease, kidney disease, diabetes, malignancy, high white blood cell count (WBC), prolonged prothrombin time (PT), and high serum creatinine (SCr) were all risk factors for sepsis mortality (all P<0.05). A model built from eight variables (PT, respiratory rate, temperature, comorbid malignancy, comorbid liver disease, septic shock, SAPS II, and age) achieved AUCs of 0.717 (95% CI 0.710-0.724) and 0.716 (95% CI 0.707-0.725) for 28-day and 1-year survival, respectively. Calibration and decision curves indicated good calibration and clinical applicability. Conclusion: The short-term and long-term mortality risk prediction models for patients with sepsis built on MIMIC-IV showed good discriminative ability and offer clinical reference value for prognostic risk assessment and treatment.) <|cite_end|>), researchers have created AI systems that predict whether a patient will need a ventilator <|cite_start|> (Reference: Clinical Intervention Prediction and Understanding using Deep Networks: Real-time prediction of clinical interventions remains a challenge within intensive care units (ICUs). This task is complicated by data sources that are noisy, sparse, heterogeneous and outcomes that are imbalanced. In this paper, we integrate data from all available ICU sources (vitals, labs, notes, demographics) and focus on learning rich representations of this data to predict onset and weaning of multiple invasive interventions. In particular, we compare both long short-term memory networks (LSTM) and convolutional neural networks (CNN) for prediction of five intervention tasks: invasive ventilation, non-invasive ventilation, vasopressors, colloid boluses, and crystalloid boluses. Our predictions are done in a forward-facing manner to enable "real-time" performance, and predictions are made with a six hour gap time to support clinically actionable planning. We achieve state-of-the-art results on our predictive tasks using deep architectures. We explore the use of feature occlusion to interpret LSTM models, and compare this to the interpretability gained from examining inputs that maximally activate CNN outputs. We show that our models are able to significantly outperform baselines in intervention prediction, and provide insight into model learning, which is crucial for the adoption of such models in practice.)
<|cite_end|>, predict optimal ventilator settings for a patient <|cite_start|> (Reference: Development and validation of a reinforcement learning algorithm to dynamically optimize mechanical ventilation in critical care: ) <|cite_end|>, and predict the risk of patient extubation failure <|cite_start|> (Reference: Development and validation of a machine-learning model for prediction of extubation failure in intensive care units: Background: Extubation failure (EF) can lead to an increased chance of ventilator-associated pneumonia, longer hospital stays, and a higher mortality rate. This study aimed to develop and validate an accurate machine-learning model to predict EF in intensive care units (ICUs). Methods: Patients who underwent extubation in the Medical Information Mart for Intensive Care (MIMIC)-IV database were included. EF was defined as the need for ventilatory support (non-invasive ventilation or reintubation) or death within 48 h following extubation. A machine-learning model called Categorical Boosting (CatBoost) was developed based on 89 clinical and laboratory variables. SHapley Additive exPlanations (SHAP) values were calculated to evaluate feature importance and the recursive feature elimination (RFE) algorithm was used to select key features. Hyperparameter optimization was conducted using an automated machine-learning toolkit (Neural Network Intelligence). The final model was trained based on key features and compared with 10 other models. The model was then prospectively validated in patients enrolled in the Cardiac Surgical ICU of Zhongshan Hospital, Fudan University. In addition, a web-based tool was developed to help clinicians use our model. Results: Of 16,189 patients included in the MIMIC-IV cohort, 2,756 (17.0%) had EF. Nineteen key features were selected using the RFE algorithm, including age, body mass index, stroke, heart rate, respiratory rate, mean arterial pressure, peripheral oxygen saturation, temperature, pH, central venous pressure, tidal volume, positive end-expiratory pressure, mean airway pressure, pressure support ventilation (PSV) level, mechanical ventilation (MV) durations, spontaneous breathing trial success times, urine output, crystalloid amount, and antibiotic types. After hyperparameter optimization, our model had the greatest area under the receiver operating characteristic (AUROC: 0.835) in internal validation. Significant differences in mortality, reintubation rates, and NIV rates were shown between patients with a high predicted risk and those with a low predicted risk. In the prospective validation, the superiority of our model was also observed (AUROC: 0.803). According to the SHAP values, MV duration and PSV level were the most important features for prediction. Conclusions: In conclusion, this study developed and prospectively validated a CatBoost model, which better predicted EF in ICUs than other models.) 
<|cite_end|> <|cite_start|> (Reference: Prediction of extubation outcome in critically ill patients: a systematic review and meta-analysis: ) <|cite_end|>.} \textcolor{black}{While proof-of-concept predictive models showcase initial feasibility, AI systems often fail when moving from research labs to clinical practice <|cite_start|> (Reference: Identifying challenges and opportunities in human-AI collaboration in healthcare: The proposed workshop will identify research questions that will enable the field to uncover the types of work, labor relations, and social impacts that should be considered when designing AI-based healthcare technology. The workshop aims to outline key challenges, guidelines, and future agendas for the field, and provide collaboration opportunities for CSCW researchers, social scientists, AI researchers, clinicians, and relevant stakeholders in healthcare, to share their perspectives and co-create sociotechnical approaches to tackle timely issues related to AI and automation in healthcare work.) <|cite_end|> <|cite_start|> (Reference: Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help: Clinical decision support tools (DSTs) are computational systems that aid healthcare decision-making. While effective in labs, almost all these systems failed when they moved into clinical practice. Healthcare researchers speculated it is most likely due to a lack of user-centered HCI considerations in the design of these systems. This paper describes a field study investigating how clinicians make a heart pump implant decision with a focus on how to best integrate an intelligent DST into their work process. Our findings reveal a lack of perceived need for and trust of machine intelligence, as well as many barriers to computer use at the point of clinical decision-making. These findings suggest an alternative perspective to the traditional use models, in which clinicians engage with DSTs at the point of making a decision. We identify situations across patients' healthcare trajectories when decision supports would help, and we discuss new forms it might take in these situations.) <|cite_end|> <|cite_start|> (Reference: Randomized trial of automated, electronic monitoring to facilitate early detection of sepsis in the intensive care unit: Objective:To determine whether automated identification with physician notification of the systemic inflammatory response syndrome in medical intensive care unit patients expedites early administration of new antibiotics or improvement of other patient outcomes in patients with sepsis. Design:A prospective randomized, controlled, single center study. Setting:Medical intensive care unit of an academic, tertiary care medical center. Patients:Four hundred forty-two consecutive patients admitted over a 4-month period who met modified systemic inflammatory response syndrome criteria in a medical intensive care unit. Intervention:Patients were randomized to monitoring by an electronic “Listening Application” to detect modified (systemic inflammatory response syndrome) criteria vs. usual care. The listening application notified physicians in real time when modified systemic inflammatory response syndrome criteria were detected, but did not provide management recommendations. Measurements and Main Results:The median time to new antibiotics was similar between the intervention and usual care groups when comparing among all patients (6.0 hr vs. 6.1 hr, p = .95), patients with sepsis (5.3 hr vs. 
5.1 hr; p = .90), patients on antibiotics at enrollment (5.2 hr vs. 7.0 hr, p = .27), or patients not on antibiotics at enrollment (5.2 hr vs. 5.1 hr, p = .85). The amount of fluid administered following detection of modified systemic inflammatory response syndrome criteria was similar between groups whether comparing all patients or only patients who were hypotensive at enrollment. Other clinical outcomes including intensive care unit length of stay, hospital length of stay, and mortality were not shown to be different between patients in the intervention and control groups. Conclusions:Realtime alerts of modified systemic inflammatory response syndrome criteria to physicians in one tertiary care medical intensive care unit were feasible and safe but did not influence measured therapeutic interventions for sepsis or significantly alter clinical outcomes.) <|cite_end|> <|cite_start|> (Reference: Automated, electronic alerts for acute kidney injury: a single-blind, parallel-group, randomised controlled trial: ) <|cite_end|>. HCI researchers point out that the clinical utility and actionability – the specific actions clinicians can take based on a prediction – of predictive models often remain unclear <|cite_start|> (Reference: Framing Machine Learning Opportunities for Hypotension Prediction in Perioperative Care: A Socio-Technical Perspective: Hypotension during perioperative care, if undetected or uncontrolled, can lead to serious clinical complications. Predictive machine learning models, based on routinely collected EHR data, offer potential for early warning of hypotension to enable proactive clinical intervention. However, while research has demonstrated the feasibility of such machine learning models, little effort is made to ground their formulation and development in socio-technical context of perioperative care work. To address this, we present a study of collaborative work practices of clinical teams during and after surgery with specific emphasis on the organisation of hypotension management. The findings highlight where predictive insights could be usefully deployed to reconfigure care and facilitate more proactive management of hypotension. We further explore how the socio-technical insights help define key parameters of machine learning prediction tasks to align with the demands of collaborative clinical practice. We discuss more general implications for the design of predictive machine learning in hospital care.) <|cite_end|> <|cite_start|> (Reference: Inclusion of clinicians in the development and evaluation of clinical artificial intelligence tools: a systematic literature review: The application of machine learning (ML) and artificial intelligence (AI) in healthcare domains has received much attention in recent years, yet significant questions remain about how these new tools integrate into frontline user workflow, and how their design will impact implementation. Lack of acceptance among clinicians is a major barrier to the translation of healthcare innovations into clinical practice. In this systematic review, we examine when and how clinicians are consulted about their needs and desires for clinical AI tools. Forty-five articles met criteria for inclusion, of which 24 were considered design studies. The design studies used a variety of methods to solicit and gather user feedback, with interviews, surveys, and user evaluations. 
Our findings show that tool designers consult clinicians at various but inconsistent points during the design process, and most typically at later stages in the design cycle (82%, 19/24 design studies). We also observed a smaller amount of studies adopting a human-centered approach and where clinician input was solicited throughout the design process (22%, 5/24). A third (15/45) of all studies reported on clinician trust in clinical AI algorithms and tools. The surveyed articles did not universally report validation against the “gold standard” of clinical expertise or provide detailed descriptions of the algorithms or computational methods used in their work. To realize the full potential of AI tools within healthcare settings, our review suggests there are opportunities to more thoroughly integrate frontline users’ needs and feedback in the design process.) <|cite_end|> <|cite_start|> (Reference: Technical Feasibility, Financial Viability, and Clinician Acceptance: On the Many Challenges to AI in Clinical Practice.: Artificial intelligence (AI) applications in healthcare offer the promise of improved decision making for clinicians, and better healthcare outcomes for patients. While technical AI advances in healthcare showcase impressive performances in lab settings, they seem to fail when moving to clinical practice. In this position paper, we reflect on our experiences of designing for AI acceptance and discuss three interrelated challenges to AI in clinical practice: technical feasibility, fi-nancial viability, and clinician acceptance. We discuss each challenge and their implications for future research in clinical AI. We encourage the research community to take on these lenses in collaboratively tackling the challenges of moving AI systems into real-world healthcare applications.) <|cite_end|>; and that seamless integration into current workflows is critical for clinician acceptance <|cite_start|> (Reference: Realizing AI in healthcare: challenges appearing in the wild: The last several years have shown a strong growth of Artificial Intelligence (AI) technologies with promising results for many areas of healthcare. HCI has contributed to these discussions, mainly with studies on explainability of advanced algorithms. However, there are only few AI-systems based on machine learning algorithms that make it to the real world and everyday care. This challenging move has been named the “last mile” of AI in healthcare, emphasizing the sociotechnical uncertainties and unforeseen learnings from involving users in the design or use of AI-based systems. The aim of this workshop is to set the stage for a new wave of HCI research that accounts for and begins to develop new insights, concepts, and methods, for transitioning from development to implementation and use of AI in healthcare. Participants are invited to collaboratively define an HCI research agenda focused on healthcare AI in the wild, which will require examining end-user engagements and questioning underlying concepts of AI in healthcare.) <|cite_end|> <|cite_start|> (Reference: Designing human-centered AI for mental health: Developing clinically relevant applications for online CBT treatment: Recent advances in AI and machine learning (ML) promise significant transformations in the future delivery of healthcare. Despite a surge in research and development, few works have moved beyond demonstrations of technical feasibility and algorithmic performance. 
However, to realize many of the ambitious visions for how AI can contribute to clinical impact requires the closer design and study of AI tools or interventions within specific health and care contexts. This article outlines our collaborative, human-centered approach to developing an AI application that predicts treatment outcomes for patients who are receiving human-supported, internet-delivered Cognitive Behavioral Therapy (iCBT) for symptoms of depression and anxiety. Intersecting the fields of HCI, AI, and healthcare, we describe how we addressed the specific challenges of (1) identifying clinically relevant AI applications; and (2) designing AI applications for sensitive use contexts like mental health. Aiming to better assist the work practices of iCBT supporters, we share how learnings from an interview study with 15 iCBT supporters surfaced their practices and information needs and revealed new opportunities for the use of AI. Combined with insights from the clinical literature and technical feasibility constraints, this led to the development of two clinical outcome prediction models. To clarify their potential utility for use in practice, we conducted 13 design sessions with iCBT supporters that utilized interface mock-ups to concretize the AI output and derive additional design requirements. Our findings demonstrate how design choices can impact interpretations of the AI predictions as well as supporter motivation and sense of agency. We detail how this analysis and the design principles derived from it enabled the integration of the prediction models into a production interface. Reporting on identified risks of over-reliance on AI outputs and needs for balanced information assessment and preservation of a focus on individualized care, we discuss and reflect on what constitutes a responsible, human-centered approach to AI design in this healthcare context.) <|cite_end|> <|cite_start|> (Reference: Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help: Clinical decision support tools (DSTs) are computational systems that aid healthcare decision-making. While effective in labs, almost all these systems failed when they moved into clinical practice. Healthcare researchers speculated it is most likely due to a lack of user-centered HCI considerations in the design of these systems. This paper describes a field study investigating how clinicians make a heart pump implant decision with a focus on how to best integrate an intelligent DST into their work process. Our findings reveal a lack of perceived need for and trust of machine intelligence, as well as many barriers to computer use at the point of clinical decision-making. These findings suggest an alternative perspective to the traditional use models, in which clinicians engage with DSTs at the point of making a decision. We identify situations across patients' healthcare trajectories when decision supports would help, and we discuss new forms it might take in these situations.) <|cite_end|>. In response, an increasing body of HCI literature has called for socio-technical, participatory approaches for understanding clinical workflows and engaging healthcare stakeholders early in the AI system development <|cite_start|> (Reference: Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens: Major depressive disorder is a debilitating disease affecting 264 million people worldwide. 
While many antidepressant medications are available, few clinical guidelines support choosing among them. Decision support tools (DSTs) embodying machine learning models may help improve the treatment selection process, but often fail in clinical practice due to poor system integration. We use an iterative, co-design process to investigate clinicians' perceptions of using DSTs in antidepressant treatment decisions. We identify ways in which DSTs need to engage with the healthcare sociotechnical system, including clinical processes, patient preferences, resource constraints, and domain knowledge. Our results suggest that clinical DSTs should be designed as multi-user systems that support patient-provider collaboration and offer on-demand explanations that address discrepancies between predictions and current standards of care. Through this work, we demonstrate how current trends in explainable AI may be inappropriate for clinical environments and consider paths towards designing these tools for real-world medical systems.) <|cite_end|> <|cite_start|> (Reference: Onboarding Materials as Cross-functional Boundary Objects for Developing AI Assistants: Deep neural networks (DNNs) routinely achieve state-of-the-art performance in a wide range of tasks, but it can often be challenging for them to meet end-user needs in practice. This case study reports on the development of human-AI onboarding materials (i.e., training materials for users prior to using an AI) for a DNN-based medical AI Assistant to aid in the grading of prostate cancer. Specifically, we describe how the process of developing these materials changed the team’s understanding of end-user requirements, contributing to modifications in the development and assessment of the underlying machine learning model. Importantly, we discovered that onboarding materials served as a useful boundary object for cross-functional teams, uncovering a new way to assess the ML model and specify its end-user requirements. We also present evidence of the utility of the onboarding materials by describing how it affected user strategies and decision-making with AI in a study deployment to pathologists.) <|cite_end|> <|cite_start|> (Reference: Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help: Clinical decision support tools (DSTs) are computational systems that aid healthcare decision-making. While effective in labs, almost all these systems failed when they moved into clinical practice. Healthcare researchers speculated it is most likely due to a lack of user-centered HCI considerations in the design of these systems. This paper describes a field study investigating how clinicians make a heart pump implant decision with a focus on how to best integrate an intelligent DST into their work process. Our findings reveal a lack of perceived need for and trust of machine intelligence, as well as many barriers to computer use at the point of clinical decision-making. These findings suggest an alternative perspective to the traditional use models, in which clinicians engage with DSTs at the point of making a decision. We identify situations across patients' healthcare trajectories when decision supports would help, and we discuss new forms it might take in these situations.) 
<|cite_end|> <|cite_start|> (Reference: Framing Machine Learning Opportunities for Hypotension Prediction in Perioperative Care: A Socio-Technical Perspective: Hypotension during perioperative care, if undetected or uncontrolled, can lead to serious clinical complications. Predictive machine learning models, based on routinely collected EHR data, offer potential for early warning of hypotension to enable proactive clinical intervention. However, while research has demonstrated the feasibility of such machine learning models, little effort is made to ground their formulation and development in socio-technical context of perioperative care work. To address this, we present a study of collaborative work practices of clinical teams during and after surgery with specific emphasis on the organisation of hypotension management. The findings highlight where predictive insights could be usefully deployed to reconfigure care and facilitate more proactive management of hypotension. We further explore how the socio-technical insights help define key parameters of machine learning prediction tasks to align with the demands of collaborative clinical practice. We discuss more general implications for the design of predictive machine learning in hospital care.) <|cite_end|> <|cite_start|> (Reference: Investigating How Practitioners Use Human-AI Guidelines: A Case Study on the People+ AI Guidebook: Artificial intelligence (AI) presents new challenges for the user experience (UX) of products and services. Recently, practitioner-facing resources and design guidelines have become available to ease some of these challenges. However, little research has investigated if and how these guidelines are used, and how they impact practice. In this paper, we investigated how industry practitioners use the People + AI Guidebook. We conducted interviews with 31 practitioners (i.e., designers, product managers) to understand how they use human-AI guidelines when designing AI-enabled products. Our findings revealed that practitioners use the guidebook not only for addressing AI’s design challenges, but also for education, cross-functional communication, and for developing internal resources. We uncovered that practitioners desire more support for early phase ideation and problem formulation to avoid AI product failures. We discuss the implications for future resources aiming to help practitioners in designing AI products.) <|cite_end|> <|cite_start|> (Reference: What If I Don't Like Any Of The Choices? The Limits of Preference Elicitation for Participatory Algorithm Design: Emerging methods for participatory algorithm design have proposed collecting and aggregating individual stakeholder preferences to create algorithmic systems that account for those stakeholders' values. Using algorithmic student assignment as a case study, we argue that optimizing for individual preference satisfaction in the distribution of limited resources may actually inhibit progress towards social and distributive justice. Individual preferences can be a useful signal but should be expanded to support more expressive and inclusive forms of democratic participation.) <|cite_end|> <|cite_start|> (Reference: Introduction to the Special Issue on Human-Centred AI in Healthcare: Challenges Appearing in the Wild: Concepts:) <|cite_end|>. 
In the context of intensive care, a recent interview study explored \textit{what predictions would be useful} for ICU physicians and nurses <|cite_start|> (Reference: Tell me something interesting: Clinical utility of machine learning prediction models in the ICU: ) <|cite_end|>. Interestingly, clinicians expressed a desire for predictions about patient trajectory and prioritization, mainly to reduce the high cognitive load of tracking the status of multiple highly dynamic patients, rather than to aid decision making.} \textcolor{black}{Our research builds on this line of work by investigating current workflows for mechanically ventilated patient care, with an eye toward clinically relevant AI prediction tasks that could support the use of WUB in clinical practice.} <|paper_end|>
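The extubation-failure reference above sketches a concrete modeling recipe: a gradient-boosted classifier over 89 clinical variables, recursive feature elimination down to 19 features, and AUROC-based evaluation. A minimal sketch of that style of pipeline, assuming synthetic stand-in data and scikit-learn's gradient boosting rather than the CatBoost implementation the cited study used:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 89 clinical/laboratory variables; only the
# first two columns carry signal here.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 89))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Recursive feature elimination down to 19 features, mirroring the cited
# setup (step=10 just keeps this toy run fast).
selector = RFE(GradientBoostingClassifier(random_state=0),
               n_features_to_select=19, step=10)
selector.fit(X_train, y_train)

# Refit on the selected features and report discrimination (AUROC).
model = GradientBoostingClassifier(random_state=0)
model.fit(selector.transform(X_train), y_train)
probs = model.predict_proba(selector.transform(X_test))[:, 1]
print(f"AUROC on held-out split: {roc_auc_score(y_test, probs):.3f}")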
[ "<|reference_start|> {CDS in a learning health care system: Identifying physicians' reasons for rejection of best-practice recommendations in pneumonia through computerized clinical decision support: Abstract Background Local implementation of guidelines for pneumonia care is strongly recommended, but the context of care that affects implementation is poorly understood. In a learning health care system, computerized clinical decision support (CDS) provides an opportunity to both improve and track practice, providing insights into the implementation process. Objectives This article examines physician interactions with a CDS to identify reasons for rejection of guideline recommendations. Methods We implemented a multicenter bedside CDS for the emergency department management of pneumonia that integrated patient data with guideline-based recommendations. We examined the frequency of adoption versus rejection of recommendations for site-of-care and antibiotic selection. We analyzed free-text responses provided by physicians explaining their clinical reasoning for rejection, using concept mapping and thematic analysis. Results Among 1,722 patient episodes, physicians rejected recommendations to send a patient home in 24%, leaving text in 53%; reasons for rejection of the recommendations included additional or alternative diagnoses beyond pneumonia, and comorbidities or signs of physiologic derangement contributing to risk of outpatient failure that were not processed by the CDS. Physicians rejected broad-spectrum antibiotic recommendations in 10%, leaving text in 76%; differences in pathogen risk assessment, additional patient information, concern about antibiotic properties, and admitting physician preferences were given as reasons for rejection. Conclusion While adoption of CDS recommendations for pneumonia was high, physicians rejecting recommendations frequently provided feedback, reporting alternative diagnoses, additional individual patient characteristics, and provider preferences as major reasons for rejection. CDS that collects user feedback is feasible and can contribute to a learning health system. <|reference_end|>", "<|reference_start|> Clinical impact of an electronic dashboard and alert system for sedation minimization and ventilator liberation: a before-after study: Supplemental Digital Content is available in the text. 
<|reference_end|>", "<|reference_start|> Mimic-iv: 目的 鉴于脓毒症的高发病率和高病死率,早期识别高风险患者并及时干预至关重要,而现有死亡风险预测模型在操作、适用性和预测长期预后等方面均存在不足。本研究旨在探讨脓毒症患者死亡的危险因素,构建近期和远期死亡风险预测模型。 方法 从美国重症监护医学信息数据库IV(Medical Information Mart for Intensive Care-IV,MIMIC-IV)中选取符合脓毒症3.0诊断标准的人群,按7꞉3的比例随机分为建模组和验证组,分析患者的基线资料。采用单因素Cox回归分析和全子集回归确定脓毒症患者死亡的危险因素并筛选出构建预测模型的变量。分别用时间依赖性曲线下面积(area under the curve,AUC)、校准曲线和决策曲线评估模型的区分度、校准度和临床实用性。 结果 共纳入14 240例脓毒症患者,28 d和1年病死率分别为21.45%(3 054例)和36.50%(5 198例)。高龄、女性、高感染相关器官衰竭评分(sepsis-related organ failure assessment,SOFA)、高简明急性生理学评分(simplified acute physiology score II,SAPS II)、心率快、呼吸频率快、脓毒症休克、充血性心力衰竭、慢性阻塞性肺疾病、肝脏疾病、肾脏疾病、糖尿病、恶性肿瘤、高白细胞计数(white blood cell count,WBC)、长凝血酶原时间(prothrombin time,PT)、高血肌酐(serum creatinine,SCr)水平均为脓毒症死亡的危险因素(均P<0.05)。由PT、呼吸频率、体温、合并恶性肿瘤、合并肝脏疾病、脓毒症休克、SAPS II及年龄8个变量构建的模型,其28 d和1年生存的AUC分别为0.717(95% CI 0.710~0.724)和0.716(95% CI 0.707~0.725)。校准曲线和决策曲线表明该模型具有良好的校准度及较好的临床应用价值。 结论 基于MIMIC-IV建立的脓毒症患者近期和远期死亡风险预测模型有较好的识别能力,对患者预后风险评估及干预治疗具有一定的临床参考意义。 <|reference_end|>", "<|reference_start|> Randomized trial of automated, electronic monitoring to facilitate early detection of sepsis in the intensive care unit: Objective:To determine whether automated identification with physician notification of the systemic inflammatory response syndrome in medical intensive care unit patients expedites early administration of new antibiotics or improvement of other patient outcomes in patients with sepsis. Design:A prospective randomized, controlled, single center study. Setting:Medical intensive care unit of an academic, tertiary care medical center. Patients:Four hundred forty-two consecutive patients admitted over a 4-month period who met modified systemic inflammatory response syndrome criteria in a medical intensive care unit. Intervention:Patients were randomized to monitoring by an electronic “Listening Application” to detect modified (systemic inflammatory response syndrome) criteria vs. usual care. The listening application notified physicians in real time when modified systemic inflammatory response syndrome criteria were detected, but did not provide management recommendations. Measurements and Main Results:The median time to new antibiotics was similar between the intervention and usual care groups when comparing among all patients (6.0 hr vs. 6.1 hr, p = .95), patients with sepsis (5.3 hr vs. 5.1 hr; p = .90), patients on antibiotics at enrollment (5.2 hr vs. 7.0 hr, p = .27), or patients not on antibiotics at enrollment (5.2 hr vs. 5.1 hr, p = .85). The amount of fluid administered following detection of modified systemic inflammatory response syndrome criteria was similar between groups whether comparing all patients or only patients who were hypotensive at enrollment. Other clinical outcomes including intensive care unit length of stay, hospital length of stay, and mortality were not shown to be different between patients in the intervention and control groups. Conclusions:Realtime alerts of modified systemic inflammatory response syndrome criteria to physicians in one tertiary care medical intensive care unit were feasible and safe but did not influence measured therapeutic interventions for sepsis or significantly alter clinical outcomes. <|reference_end|>" ]
[ 3, 6, 17, 24 ]
{"<|multi_cite_1_1|>": "ss-1745563", "<|multi_cite_1_2|>": "ss-2130637", "<|cite_2|>": "ss-2130637", "<|cite_3|>": "ss-2130638", "<|cite_4|>": "ss-2130639", "<|cite_5|>": "ss-1940294", "<|cite_6|>": "ss-2130640", "<|multi_cite_8_1|>": "ss-2130640", "<|multi_cite_8_2|>": "ss-2130641", "<|cite_9|>": "ss-2130642", "<|multi_cite_10_1|>": "ss-1916987", "<|multi_cite_10_2|>": "ss-2124418", "<|cite_11|>": "ss-1173738", "<|cite_12|>": "ss-2130643", "<|cite_13|>": "ss-1544154", "<|multi_cite_14_1|>": "ss-2130644", "<|multi_cite_14_2|>": "ss-2124419", "<|cite_15|>": "ss-2130645", "<|cite_16|>": "ss-1916990", "<|multi_cite_17_1|>": "ss-2130646", "<|multi_cite_17_2|>": "ss-2130647", "<|cite_18|>": "ss-1139217", "<|multi_cite_19_1|>": "ss-2130648", "<|multi_cite_19_2|>": "ss-1916982", "<|multi_cite_19_3|>": "ss-2130647", "<|cite_20|>": "ss-2130649", "<|cite_21|>": "ss-2130650", "<|cite_22|>": "ss-1173738", "<|multi_cite_23_1|>": "ss-2130646", "<|multi_cite_23_2|>": "ss-2130651", "<|multi_cite_24_1|>": "ss-1916992", "<|multi_cite_24_2|>": "ss-2130652", "<|multi_cite_24_3|>": "ss-2173592", "<|multi_cite_24_4|>": "ss-2130653", "<|multi_cite_24_5|>": "ss-2130654", "<|multi_cite_24_6|>": "ss-2130655", "<|multi_cite_25_1|>": "ss-2124417", "<|multi_cite_25_2|>": "ss-2124418", "<|multi_cite_25_3|>": "ss-2130642", "<|multi_cite_25_4|>": "ss-2130656", "<|multi_cite_26_1|>": "ss-2130657", "<|multi_cite_26_2|>": "ss-2130658", "<|multi_cite_26_3|>": "ss-2130659", "<|cite_27|>": "ss-1139217", "<|multi_cite_28_1|>": "ss-2130660", "<|multi_cite_28_2|>": "ss-2130661", "<|multi_cite_28_3|>": "ss-2130662", "<|cite_29|>": "ss-2130659", "<|cite_30|>": "ss-2130663", "<|multi_cite_31_1|>": "ss-1745563", "<|multi_cite_31_2|>": "ss-2130637", "<|cite_32|>": "ss-1745563", "<|cite_33|>": "ss-2130664", "<|cite_34|>": "ss-2130665", "<|multi_cite_35_1|>": "ss-2130666", "<|multi_cite_35_2|>": "ss-2130667", "<|multi_cite_35_3|>": "ss-2130668", "<|multi_cite_36_1|>": "ss-1745563", "<|multi_cite_36_2|>": "ss-2130668", "<|cite_37|>": "ss-2130637", "<|cite_38|>": "ss-1745563", "<|multi_cite_39_1|>": "ss-2130669", "<|multi_cite_39_2|>": "ss-2130670", "<|multi_cite_40_1|>": "ss-2130668", "<|multi_cite_40_2|>": "ss-2130671", "<|cite_41|>": "ss-2130668", "<|multi_cite_42_1|>": "ss-2130672", "<|multi_cite_42_2|>": "ss-2130673", "<|multi_cite_43_1|>": "ss-1231176", "<|multi_cite_43_2|>": "ss-2130674", "<|multi_cite_43_3|>": "ss-2130675", "<|cite_44|>": "ss-1745567", "<|cite_45|>": "arxiv-478200", "<|cite_46|>": "ss-1745568", "<|cite_47|>": "ss-2130639", "<|cite_48|>": "ss-2130676", "<|cite_49|>": "arxiv-478200", "<|multi_cite_50_1|>": "arxiv-200915", "<|multi_cite_50_2|>": "ss-683874", "<|multi_cite_50_3|>": "arxiv-478200", "<|cite_51|>": "ss-2124397", "<|multi_cite_52_1|>": "ss-1051895", "<|multi_cite_52_2|>": "ss-1178893", "<|cite_53|>": "ss-2130677", "<|cite_54|>": "ss-2130678", "<|cite_55|>": "ss-791118", "<|cite_56|>": "arxiv-124948", "<|cite_57|>": "ss-740297", "<|multi_cite_58_1|>": "ss-2130679", "<|multi_cite_58_2|>": "ss-2130680", "<|multi_cite_59_1|>": "ss-1617558", "<|multi_cite_59_2|>": "ss-1122508", "<|multi_cite_59_3|>": "ss-2130681", "<|multi_cite_59_4|>": "ss-2130682", "<|multi_cite_60_1|>": "ss-2110575", "<|multi_cite_60_2|>": "ss-1166208", "<|multi_cite_60_3|>": "ss-2290586", "<|multi_cite_61_1|>": "ss-1166206", "<|multi_cite_61_2|>": "ss-1166217", "<|multi_cite_61_3|>": "ss-1122508", "<|multi_cite_62_1|>": "arxiv-318238", "<|multi_cite_62_2|>": "ss-1166219", "<|multi_cite_62_3|>": "ss-1122508", "<|multi_cite_62_4|>": 
"ss-2110575", "<|multi_cite_62_5|>": "ss-2289658", "<|multi_cite_62_6|>": "arxiv-278214", "<|multi_cite_62_7|>": "ss-2110576", "<|cite_63|>": "ss-731776"}
2403.07202
<|paper_start|> Title: SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser Abstract: SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser: Structural priming is a widely used psycholinguistic paradigm to study human sentence representations. In this work we propose a framework for using empirical priming patterns to build a theory characterizing the structural representations humans construct when processing sentences. This framework uses a new cognitively motivated parser, SPAWN, to generate quantitative priming predictions from theoretical syntax and evaluate these predictions with empirical human behavior. As a case study, we apply this framework to study reduced relative clause representations in English. We use SPAWN to generate priming predictions from two theoretical accounts which make different assumptions about the structure of relative clauses. We find that the predictions from only one of these theories (Participial-Phase) align with empirical priming patterns, thus highlighting which assumptions about relative clause better capture human sentence representations. Introduction \setlength{\Exlabelwidth}{0.25em} \setlength{\SubExleftmargin}{1.3em} Structural priming <|cite_start|> (Reference: An experimental approach to linguistic representation: Abstract Within the cognitive sciences, most researchers assume that it is the job of linguists to investigate how language is represented, and that they do so largely by building theories based on explicit judgments about patterns of acceptability – whereas it is the task of psychologists to determine how language is processed, and that in doing so, they do not typically question the linguists' representational assumptions. We challenge this division of labor by arguing that structural priming provides an implicit method of investigating linguistic representations that should end the current reliance on acceptability judgments. Moreover, structural priming has now reached sufficient methodological maturity to provide substantial evidence about such representations. We argue that evidence from speakers' tendency to repeat their own and others' structural choices supports a linguistic architecture involving a single shallow level of syntax connected to a semantic level containing information about quantification, thematic relations, and information structure, as well as to a phonological level. Many of the linguistic distinctions often used to support complex (or multilevel) syntactic structure are instead captured by semantics; however, the syntactic level includes some specification of “missing” elements that are not realized at the phonological level. We also show that structural priming provides evidence about the consistency of representations across languages and about language development. In sum, we propose that structural priming provides a new basis for understanding the nature of language.) <|cite_end|> is a widely used paradigm in psycholinguistics to study the structural representations that people construct when processing sentences. In this paradigm, researchers measure the extent to which the production or processing of \textit{target} sentences is facilitated (or \textit{primed}) by preceding \textit{prime} sentences, and then use the pattern of priming behavior to draw inferences about the representations people construct. For example, consider a \textit{target sentence} like \ref{ex:po1}. \vspace{-0.25em} \ex. \label{ex:po1} The boy threw the ball to the dog. 
\vspace{-0.5em} Prior work found that targets like \ref{ex:po1} were produced more often, and were processed more rapidly, when they were preceded by primes like \ref{ex:po2}, which have the same structure, than when they were preceded by primes like \ref{ex:do2}, which, while describing the same transfer event as \ref{ex:po2}, have a different structure. \vspace{-0.25em} \ex. \label{ex:po2} The lawyer sent the letter to the client. \vspace{-0.5em} \ex. \label{ex:do2} The lawyer sent the client the letter. \vspace{-0.5em} From this result, <|cite_start|> (Reference: Syntactic priming: Investigating the mental representation of language: ) <|cite_end|> inferred that participants' mental representation of \ref{ex:po1} is more similar to that of \ref{ex:po2} than to that of \ref{ex:do2}. <|cite_start|> (Reference: An experimental approach to linguistic representation: Abstract Within the cognitive sciences, most researchers assume that it is the job of linguists to investigate how language is represented, and that they do so largely by building theories based on explicit judgments about patterns of acceptability – whereas it is the task of psychologists to determine how language is processed, and that in doing so, they do not typically question the linguists' representational assumptions. We challenge this division of labor by arguing that structural priming provides an implicit method of investigating linguistic representations that should end the current reliance on acceptability judgments. Moreover, structural priming has now reached sufficient methodological maturity to provide substantial evidence about such representations. We argue that evidence from speakers' tendency to repeat their own and others' structural choices supports a linguistic architecture involving a single shallow level of syntax connected to a semantic level containing information about quantification, thematic relations, and information structure, as well as to a phonological level. Many of the linguistic distinctions often used to support complex (or multilevel) syntactic structure are instead captured by semantics; however, the syntactic level includes some specification of “missing” elements that are not realized at the phonological level. We also show that structural priming provides evidence about the consistency of representations across languages and about language development. In sum, we propose that structural priming provides a new basis for understanding the nature of language.) <|cite_end|> propose that by carefully studying which sentences prime each other we can build a theory of human structural representations. Building such a theory requires us to generate hypotheses about which prime-target pairs are interesting to compare. Insights from theoretical syntax, a field that has spent decades studying the structure of sentences, can help constrain this hypothesis space <|cite_start|> (Reference: Total word count : 1104 The logic of syntactic priming and acceptability judgments: word count: 53 Main text word count: 993 References word count: 58 Total word count: 1104 The logic of syntactic priming and acceptability judgments Phoebe Gaston, Nick Huang, and Colin Phillips University of Maryland Mailing address: 1401 Marie Mount Hall, College Park, MD 20742 Phone: +1 301 405 3082 Emails: [email protected], [email protected], [email protected] Homepage URLs: https://phoebegaston.wordpress.com/, http://ling.umd.edu/~znhuang, http://colinphillips.net) <|cite_end|>.
In this work we introduce a new parser, the Serial Parser in ACT-R With Null elements (SPAWN), which can generate quantitative priming predictions from theories in syntax. By comparing SPAWN predictions from competing theoretical accounts, we can study which theoretical differences result in differing priming predictions, and therefore might be meaningful for psycholinguistics. Then, by comparing these predictions to empirical priming behavior in humans, we can study which theoretical assumptions are consistent with the representations that humans construct when processing sentences. SPAWN is a cognitively motivated parser in which all of the parsing decisions are based on the computational principles proposed by a general purpose cognitive architecture, Adaptive Control of Thought-Rational (ACT-R; <|cite_start|> (Reference: An integrated theory of the mind.: Adaptive control of thought-rational (ACT-R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT-R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes. A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert.) <|cite_end|>). Thus, SPAWN not only describes the computations underlying human parsing (Marr's \textit{computational} level), but also specifies the cognitive processes involved (Marr's \textit{algorithmic} level). This level of specification is necessary to explain \textit{why} a sentence A is primed more by a sentence B than by a sentence C, and is therefore necessary to generate quantitative behavioral priming predictions from syntactic theories. Existing algorithmic models of parsing <|cite_start|> (Reference: An activation-based model of sentence processing as skilled memory retrieval: We present a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension. The theory is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing. The resulting theory construes sentence processing as a series of skilled associative memory retrievals modulated by similarity-based interference and fluctuating activation. The cognitive principles are formalized in computational form in the Adaptive Control of Thought-Rational (ACT-R) architecture, and our process model is realized in ACT-R. We present the results of 6 sets of simulations: 5 simulation sets provide quantitative accounts of the effects of length and structural interference on both unambiguous and garden-path structures. A final simulation set provides a graded taxonomy of double center embeddings ranging from relatively easy to extremely difficult.
The explanation of center-embedding difficulty is a novel one that derives from the model's complete reliance on discriminating retrieval cues in the absence of an explicit representation of serial order information. All fits were obtained with only 1 free scaling parameter fixed across the simulations; all other parameters were ACT-R defaults. The modeling results support the hypothesis that fluctuating activation and similarity-based interference are the key factors shaping working memory in sentence processing. We contrast the theory and empirical predictions with several related accounts of sentence-processing complexity.) <|cite_end|> can account for only a limited set of linguistic phenomena and theoretical assumptions. SPAWN can model a wider range of phenomena and assumptions because it uses a more flexible grammar formalism and specifies mechanisms to handle covert lexical items. As a case study, we use SPAWN to study the mental representations of sentences with relative clauses (RCs) such as \ref{ex:rrc1} and \ref{ex:frc1}. \vspace{-0.25em} \ex. \label{ex:rrc1} The cat examined by the doctor was skittish. \vspace{-0.5em} \ex. \label{ex:frc1} The cat which was examined by the doctor was skittish. \vspace{-0.5em} We generate priming predictions from two competing syntactic theories: Whiz-Deletion, which assumes that the structure of \ref{ex:rrc1} is identical to the structure of \ref{ex:frc1}, and Participial-Phase <|cite_start|> (Reference: Reduced Relatives and Extended Phases: A Phase-Based Analysis of the Inflectional Restrictions on English Reduced Relative Clauses: This article aims to provide an analysis for a curious fact about reduced relative clauses in Standard English: while full relative clauses permit all forms of inflection, reduced relative clauses are restricted to passive and progressive inflections. This puzzle is explained by claiming that, while full relative clauses are comprised of both phases of the clausal spine, reduced relative clauses are comprised solely of the clause‐internal phase. Following the claim that the clause‐internal phase in English in fact extends as far as the progressive aspectual layer (Harwood 2013, 2015; Wurmbrand 2013, 2014; Ramchand & Svenonius 2014; Aelbrecht & Harwood 2015) this fully accounts for the inflectional restrictions on reduced relative clauses in Standard English.) <|cite_end|>, which assumes that \ref{ex:rrc1} and \ref{ex:frc1} have different structures. We describe these theories in more detail in \S~\ref{sec:syntactic-theory}. We generate six sets of predictions from the two theories by modulating the strength of prior knowledge and the specific reanalysis mechanism used. Then, we compare the predictions from these two theories to empirical human data we collected using a novel web-based comprehension-to-production priming paradigm. We found that the predictions from the Whiz-Deletion theory never aligned with the qualitative pattern of human priming behavior, whereas the predictions from the Participial-Phase theory aligned with the empirical pattern when the predictions were generated from a SPAWN model with very weak to no prior knowledge; this observation held for both variations of the reanalysis mechanism we tried. Taken together, these results suggest that the Participial-Phase account better characterizes human sentence representations, and more broadly highlight how SPAWN can be used to adjudicate between competing theoretical assumptions.
\begin{figure} \centering \includegraphics[width=\linewidth]{figs/SPAWN-ACL-decision-tree.pdf} \caption{How is SPAWN different from other models?} \label{fig:enter-label} \end{figure} Related Work \subsection{ACT-R framework} ACT-R is a modular cognitive architecture designed to explain general cognition through a small set of computational principles and mechanisms that apply across a wide range of tasks and domains. One mechanism that is particularly relevant to SPAWN is the retrieval of information from memory. The specific computational principles and algorithms that guide retrieval in ACT-R are outlined in \S~\ref{sec:retrieval}. Crucially, since ACT-R is intended to be a general-purpose cognitive architecture, most of the hyperparameters involved in this algorithm are already fixed based on data from a wide range of experimental paradigms and cognitive phenomena. This is useful because it restricts the degrees of freedom, thereby constraining the space of predictions that can be generated from any given theory. \subsection{Prior models of parsing} In most existing symbolic and neural-network-based parsers, the parsing decisions are not driven by specific cognitive principles such as the ones proposed by ACT-R. Therefore, generating predictions about observable human behavior (e.g., reading times) from these parsers requires making some additional \textit{linking hypotheses}. Most prior hypotheses that link parsing decisions to human behavior have focused on notions of processing effort, such as the number of parse states explored <|cite_start|> (Reference: What a rational parser would do: This article examines cognitive process models of human sentence comprehension based on the idea of informed search. These models are rational in the sense that they strive to find a good syntactic analysis quickly. Informed search derives a new account of garden pathing that handles traditional counterexamples. It supports a symbolic explanation for local coherence as well as an algorithmic account of entropy reduction. The models are expressed in a broad framework for theories of human sentence comprehension.) <|cite_end|>, the maximum number of items on the stack at any given point <|cite_start|> (Reference: Processing crossed and nested dependencies: An automation perspective on the psycholinguistic results: Abstract The clause-final verbal clusters in Dutch and German (and, in general, in West Germanic languages) have been studied extensively in different syntactic theories. Standard Dutch prefers crossed dependencies (between verbs and their arguments), whereas Standard German prefers nested dependencies. Recently, Bach, Brown, and Marslen-Wilson (1986) investigated the consequences of these differences between Dutch and German for the processing complexity of sentences, containing either crossed or nested dependencies. Stated very simply, their results show that Dutch is “easier” than German, thus showing that the push-down automaton (PDA) cannot be the universal basis for the human parsing mechanism. They provide an explanation for the inadequacy of PDA in terms of the kinds of partial interpretations the dependencies allow the listener to construct. Motivated by their results and their discussion of these results, we introduce a principle of partial interpretation (PPI) and present an automaton, embedded...)
<|cite_end|>, or the maximum amount of time a node stays in memory <|cite_start|> (Reference: A Minimalist Approach to Facilitatory Effects in Stacked Relative Clauses: A top-down parser for Minimalist grammars (MGs; Stabler, 2013) can successfully predict a variety of off-line processing preferences, via metrics linking parsing behavior to memory load (Kobele et al., 2013; Gerth, 2015; Graf et al., 2017). The increasing empirical coverage of this model is intriguing, given its close association to modern minimalist syntax. Recently however, Zhang (2017) has argued that this framework is unable to account for a set of complexity profiles reported for English and Mandarin Chinese stacked relative clauses. Based on these observations, this paper proposes extensions to this model implementing a notion of memory reactivation, in the form of memory metrics sensitive to repetitions of movement features. We then show how these metrics derive the correct predictions for the stacked RC processing contrasts.) <|cite_end|>. These hypotheses cannot be used to generate priming predictions because they do not specify a mechanism by which a prime sentence might facilitate the processing of a target sentence. One notable exception is the ACT-R-based left-corner repair parser proposed by <|cite_start|> (Reference: An activation-based model of sentence processing as skilled memory retrieval: We present a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension. The theory is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing. The resulting theory construes sentence processing as a series of skilled associative memory retrievals modulated by similarity-based interference and fluctuating activation. The cognitive principles are formalized in computational form in the Adaptive Control of Thought-Rational (ACT-R) architecture, and our process model is realized in ACT-R. We present the results of 6 sets of simulations: 5 simulation sets provide quantitative accounts of the effects of length and structural interference on both unambiguous and garden-path structures. A final simulation set provides a graded taxonomy of double center embeddings ranging from relatively easy to extremely difficult. The explanation of center-embedding difficulty is a novel one that derives from the model's complete reliance on discriminating retrieval cues in the absence of an explicit representation of serial order information. All fits were obtained with only 1 free scaling parameter fixed across the simulations; all other parameters were ACT-R defaults. The modeling results support the hypothesis that fluctuating activation and similarity-based interference are the key factors shaping working memory in sentence processing. We contrast the theory and empirical predictions with several related accounts of sentence-processing complexity.) <|cite_end|>, in which parsing decisions are made based on the activation of different \textit{chunks} in memory (such as words or grammar rules). The activation of chunks in this model can capture notions of both processing difficulty and priming.
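To make this concrete, the sketch below implements the two standard ACT-R equations that such activation-based parsers build on: base-level learning, $B_i = \ln\big(\sum_k t_k^{-d}\big)$, where $t_k$ is the time elapsed since the $k$-th use of chunk $i$, and retrieval latency, $T_i = F e^{-A_i}$, here with the activation $A_i$ reduced to its base-level term. This is an illustrative toy under conventional ACT-R default parameters, not SPAWN's actual implementation (which additionally involves spreading activation and noise), and the function names are ours.
\begin{verbatim}
import math

D = 0.5  # base-level decay rate (conventional ACT-R default)
F = 1.0  # latency scaling factor, in seconds (assumed here)

def base_level_activation(use_times, now):
    # B_i = ln(sum_k t_k^-d): chunks used recently or often
    # have higher base-level activation.
    return math.log(sum((now - t) ** -D for t in use_times))

def retrieval_latency(activation):
    # T_i = F * exp(-A_i): higher activation -> faster retrieval.
    return F * math.exp(-activation)

# A grammar-rule chunk used 1s ago (e.g., while parsing a prime
# sentence) is retrieved ten times faster than the same chunk
# last used 100s ago:
recent = base_level_activation([0.0], now=1.0)    # B =  0.0
stale  = base_level_activation([0.0], now=100.0)  # B = -2.3
print(retrieval_latency(recent))  # ~1.0 s
print(retrieval_latency(stale))   # ~10.0 s
\end{verbatim}
On this view, a prime sentence speeds up a structurally matched target simply because the chunks needed to parse the target were used moments earlier and are therefore retrieved faster.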
However, the Lewis and Vasishth parser assumes a strong dissociation between the grammar and the lexicon and therefore cannot be adopted directly to generate predictions from lexicalized grammar formalisms such as Minimalist Grammar <|cite_start|> (Reference: Derivational Minimalism: ) <|cite_end|>, Combinatory Grammar <|cite_start|> (Reference: Combinators and Grammars: ) <|cite_end|>, Lexical-Functional Grammar <|cite_start|> (Reference: Lexical-functional grammar: A formal system for grammatical representation: In learning their native language, children develop a remarkable set of capabilities. They acquire knowledge and skills that enable them to produce and comprehend an indefinite number of novel utterances and to make quite subtle judgments about certain of their properties. The major goal of psycholinguistic research is to devise an explanatory account of the mental operations that underlie these linguistic abilities. In pursuing this goal we have adopted what we call the Competence Hypothesis as a methodological principle. We assume that an explanatory model of human language performance will incorporate a theoretically justified representation of the native speaker's linguistic knowledge (a grammar) as a component separate both from the computational mechanisms that operate on it (a processor) and from other nongrammatical processing parameters that might influence the processor's behavior. To a certain extent the various components that we postulate can be studied independently, guided where appropriate by the well-established methods and evaluation standards of linguistics, computer science, and experimental psychology. However, the requirement that the various components ultimately must fit together in a consistent and coherent model imposes even stronger constraints on their structure and operation.) <|cite_end|> or Head-Driven Phrase Structure Grammar <|cite_start|> (Reference: Head-driven Phrase Structure Grammar: Developed in the 1980s as a successor of GPSG; main publications Pollard and Sag, 1987, 1994, with many contributions since then. A syntactic theory used in language typology and in computational linguistics and grammar development for many languages (German, English, French, Norwegian, Japanese, Spanish, Persian, Maltese, Danish, Polish, Mandarin Chinese, ...). Phonology, morphology, syntax, semantics, and pragmatics (information structure) are covered. Web pages: http://hpsg.stanford.edu/ and http://hpsg.fu-berlin.de/HPSG-Bib/) <|cite_end|>. The goal of this work is to develop a framework that can be used to generate priming predictions from \textit{any} theory of syntax that can generate parse trees. \subsection{Prior models of priming} <|cite_start|> (Reference: A computational cognitive model of syntactic priming: The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively.
Our model simulates incremental language production and in a series of modeling studies, we show that it accounts for (a) the inverse frequency interaction; (b) the absence of a decay in long-term priming; and (c) the cumulativity of long-term adaptation. The model also explains the lexical boost effect and the fact that it only applies to short-term priming. We also present corpus data that verify a prediction of the model, that is, that the lexical boost affects all lexical material, rather than just heads.) <|cite_end|> proposed an ACT-R based model of priming in which lexical and grammatical knowledge are more strongly interconnected than in the parser proposed by <|cite_start|> (Reference: An activation-based model of sentence processing as skilled memory retrieval: We present a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension. The theory is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing. The resulting theory construes sentence processing as a series of skilled associative memory retrievals modulated by similarity-based interference and fluctuating activation. The cognitive principles are formalized in computational form in the Adaptive Control of Thought-Rational (ACT-R) architecture, and our process model is realized in ACT-R. We present the results of 6 sets of simulations: 5 simulation sets provide quantitative accounts of the effects of length and structural interference on both unambiguous and garden-path structures. A final simulation set provides a graded taxonomy of double center embeddings ranging from relatively easy to extremely difficult. The explanation of center-embedding difficulty is a novel one that derives from the model' complete reliance on discriminating retrieval cues in the absence of an explicit representation of serial order information. All fits were obtained with only 1 free scaling parameter fixed across the simulations; all other parameters were ACT-R defaults. The modeling results support the hypothesis that fluctuating activation and similarity-based interference are the key factors shaping working memory in sentence processing. We contrast the theory and empirical predictions with several related accounts of sentence-processing complexity.) <|cite_end|>. However, this model can only generate sentences given a semantic description, and therefore can only be used to model sentence production and not sentence processing. The models of priming that \textit{can} model processing either do not explicitly model syntactic structure <|cite_start|> (Reference: Becoming syntactic.: Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.) 
<|cite_end|> <|cite_start|> (Reference: Dynamics of structural priming: This thesis is about how our syntactic choice changes with linguistic experience. Studies on syntactic priming show that our decisions are influenced by sentences that we have recently heard or recently spoken. They also show that not all sentences have an equal amount of influence; that repetition of verbs increases priming (the lexical-boost effect) and that some verbs are more susceptible to priming than others. This thesis explores how and why syntactic decisions change with time and what these observations tell us about the cognitive mechanism of speaking. Specifically, we set out to develop a theoretical account of syntactic priming. Theoretical accounts require mathematical models and this thesis develops a sequence of mathematical models for understanding various aspects of syntactic priming. Cognitive processes are modelled as dynamical systems that can change their behaviour when they process information. We use these dynamical systems to investigate how each episode of language comprehension or production affects syntactic decisions. We also use these systems to investigate how long priming persists, how groups of consecutive sentences affect structural decisions, why repeating words leads to greater syntactic priming and what this tells us about how words, concepts and syntax are cognitively represented. We obtain two kinds of results by simulating these mathematical models. The first kind of results reveal how syntactic priming evolves over time. We find that structural priming itself shows a gradual decay with time but the lexical enhancement of priming decays catastrophically – a result consistent with experimental observations. We also find that consecutive episodes of language processing add up nonlinearly in memory, which challenges the design of some existing psycholinguistic experiments. The second kind of results reveal how our syntax module might be connected to other cognitive modules. We find that the lexical enhancement of syntactic priming might be a consequence of how the modules of attention and working memory influence syntactic decisions. These models suggest a mechanism of priming that is in contrast to a previous prediction-based account. This prediction-based account proposes that we actively predict what we hear and structural priming is due to error-correction whenever our predictions do not match the stimuli. In contrast, our account embodies syntactic priming in cognitive processes of attention, working memory and long-term memory. It asserts that our linguistic decisions are not based solely on abstract rules but also depend on the cognitive implementation of each module. Our investigations also contribute a novel theoretical framework for studying syntactic priming. Previous studies analyse priming using error-correction or Hebbian learning algorithms. We introduce the formalism of dynamical systems. This formalism allows us to trace the effect of information processing through time. It explains how residual activation from a previous episode might play a role in structural decisions, thereby enriching our understanding of syntactic priming. Since these dynamical systems are also used to model neural processes, this theoretical framework brings our understanding of priming one step closer to its biological implementation, bridging the gap between neural processes and abstract thoughts.) 
<|cite_end|> <|cite_start|> (Reference: Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models: Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure. Drawing on the syntactic priming paradigm from psycholinguistics, we propose a novel technique to analyze the representations that enable such success. By establishing a gradient similarity metric between structures, this technique allows us to reconstruct the organization of the LMs' syntactic representational space. We use this technique to demonstrate that LSTM LMs' representations of different types of sentences with relative clauses are organized hierarchically in a linguistically interpretable manner, suggesting that the LMs track abstract properties of the sentence.) <|cite_end|> <|cite_start|> (Reference: Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations: We investigate the extent to which modern, neural language models are susceptible to structural priming, the phenomenon whereby the structure of a sentence makes the same structure more probable in a follow-up sentence. We explore how priming can be used to study the potential of these models to learn abstract structural information, which is a prerequisite for good performance on tasks that require natural language understanding skills. We introduce a novel metric and release Prime-LM, a large corpus where we control for various linguistic factors which interact with priming strength. We find that Transformer models indeed show evidence of structural priming, but also that the generalisations they learned are to some extent modulated by semantic information. Our experiments also show that the representations acquired by the models may not only encode abstract sequential structure but involve certain level of hierarchical syntactic information. More generally, our study shows that the priming paradigm is a useful, additional tool for gaining insights into the capacities of language models and opens the door to future priming-based investigations that probe the model's internal states.) <|cite_end|> or do not explicitly implement the priming mechanism <|cite_start|> (Reference: Similarity and structural priming: The increasing evidence that language processing is sensitive to lexical and structural co-occurrences at different levels of granularity and abstraction (Jurafsky, Bell, Gregory, & Raymond, 2001; Bybee, 2006) has led to the hypothesis that lexical and structural processing may be unified (MacDonald, Pearlmutter, & Seidenberg, 1994; Jurafsky, 1996). This paper examines the specific hypothesis that structural priming and lexical priming may be due to the same underlying mechanism. Lexical priming is known to exhibit sensitivity to the similarity between the prime and the target: the more similar the prime and target words, the greater the magnitude of the priming effect (Ratcliff & McKoon, 1981). Two corpus studies show evidence of an effect of similarity on structural priming. Structural and semantic similarity of the prime and target structures are modeled using a database of ditransitives extracted from the Switchboard corpus and a nearest-neighbor similarity metric. More similar prime and target structures are found to be more likely to occur in the same construction. This effect is in addition to the known similarity effect of verb identity (Pickering & Branigan, 1998), which is controlled through simultaneous multiple regression and model comparison. This suggests that lexical and structural priming could be the same process. Implications for models of representation and processing are discussed.) <|cite_end|>. Hence, the goal of this work is to bridge this gap and develop a model that explicitly models both syntactic structure and the priming mechanism, which can be used to generate behavioral processing predictions. <|paper_end|>
[ "<|reference_start|> An experimental approach to linguistic representation: Abstract Within the cognitive sciences, most researchers assume that it is the job of linguists to investigate how language is represented, and that they do so largely by building theories based on explicit judgments about patterns of acceptability – whereas it is the task of psychologists to determine how language is processed, and that in doing so, they do not typically question the linguists' representational assumptions. We challenge this division of labor by arguing that structural priming provides an implicit method of investigating linguistic representations that should end the current reliance on acceptability judgments. Moreover, structural priming has now reached sufficient methodological maturity to provide substantial evidence about such representations. We argue that evidence from speakers' tendency to repeat their own and others' structural choices supports a linguistic architecture involving a single shallow level of syntax connected to a semantic level containing information about quantification, thematic relations, and information structure, as well as to a phonological level. Many of the linguistic distinctions often used to support complex (or multilevel) syntactic structure are instead captured by semantics; however, the syntactic level includes some specification of “missing” elements that are not realized at the phonological level. We also show that structural priming provides evidence about the consistency of representations across languages and about language development. In sum, we propose that structural priming provides a new basis for understanding the nature of language. <|reference_end|>", "<|reference_start|> An experimental approach to linguistic representation: Abstract Within the cognitive sciences, most researchers assume that it is the job of linguists to investigate how language is represented, and that they do so largely by building theories based on explicit judgments about patterns of acceptability – whereas it is the task of psychologists to determine how language is processed, and that in doing so, they do not typically question the linguists' representational assumptions. We challenge this division of labor by arguing that structural priming provides an implicit method of investigating linguistic representations that should end the current reliance on acceptability judgments. Moreover, structural priming has now reached sufficient methodological maturity to provide substantial evidence about such representations. We argue that evidence from speakers' tendency to repeat their own and others' structural choices supports a linguistic architecture involving a single shallow level of syntax connected to a semantic level containing information about quantification, thematic relations, and information structure, as well as to a phonological level. Many of the linguistic distinctions often used to support complex (or multilevel) syntactic structure are instead captured by semantics; however, the syntactic level includes some specification of “missing” elements that are not realized at the phonological level. We also show that structural priming provides evidence about the consistency of representations across languages and about language development. In sum, we propose that structural priming provides a new basis for understanding the nature of language. 
<|reference_end|>", "<|reference_start|> Total word count : 1104 The logic of syntactic priming and acceptability judgments: word count: 53 Main text word count: 993 References word count: 58 Total word count: 1104 The logic of syntactic priming and acceptability judgments Phoebe Gaston, Nick Huang, and Colin Phillips University of Maryland Mailing address: 1401 Marie Mount Hall, College Park, MD 20742 Phone: +1 301 405 3082 Emails: [email protected], [email protected], [email protected] Homepage URLs: https://phoebegaston.wordpress.com/, http://ling.umd.edu/~znhuang, http://colinphillips.net <|reference_end|>", "<|reference_start|> Becoming syntactic.: Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax. <|reference_end|>" ]
[ 0, 2, 3, 17 ]
{"<|cite_10|>": "ss-2510128", "<|cite_15|>": "ss-1607076", "<|cite_16|>": "ss-2510128", "<|cite_1|>": "ss-1607068", "<|cite_20|>": "ss-684271", "<|cite_2|>": "ss-1022658", "<|cite_12|>": "ss-1607073", "<|cite_3|>": "ss-800048", "<|cite_4|>": "ss-1607069", "<|cite_5|>": "ss-1607070", "<|cite_17|>": "ss-1022658", "<|cite_6|>": "ss-1607071", "<|cite_7|>": "ss-1607072", "<|cite_8|>": "ss-1379446", "<|cite_9|>": "ss-1252723", "<|cite_18|>": "ss-1522523", "<|cite_19|>": "ss-1022658", "<|multi_cite_13_1|>": "ss-1454289", "<|multi_cite_13_2|>": "ss-1607074", "<|multi_cite_13_3|>": "arxiv-225179", "<|multi_cite_13_4|>": "arxiv-370526", "<|cite_14|>": "ss-1607075"}
2010.00679
<|paper_start|> Title: Implicit Rank-Minimizing Autoencoder Abstract: Implicit Rank-Minimizing Autoencoder: An important component of autoencoders is the method by which the information capacity of the latent representation is minimized or limited. In this work, the rank of the covariance matrix of the codes is implicitly minimized by relying on the fact that gradient descent learning in multi-layer linear networks leads to minimum-rank solutions. By inserting a number of extra linear layers between the encoder and the decoder, the system spontaneously learns representations with a low effective dimension. The model, dubbed Implicit Rank-Minimizing Autoencoder (IRMAE), is simple, deterministic, and learns compact latent spaces. We demonstrate the validity of the method on several image generation and representation learning tasks. Introduction Optimizing a {\em linear} multi-layer neural network through gradient descent leads to a low-rank solution. This phenomenon is known as implicit regularization and has been extensively studied in the context of matrix factorization <|cite_start|> (Reference: Implicit Regularization in Matrix Factorization: We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix $X$ with gradient descent on a factorization of $X$. We conjecture and provide empirical and theoretical evidence that with small enough step sizes and initialization close enough to the origin, gradient descent on a full dimensional factorization converges to the minimum nuclear norm solution.) <|cite_end|> <|cite_start|> (Reference: Implicit Regularization in Deep Matrix Factorization: Efforts to understand the generalization mystery in deep learning have led to the belief that gradient-based optimization induces a form of implicit regularization, a bias towards models of low "complexity." We study the implicit regularization of gradient descent over deep linear neural networks for matrix completion and sensing, a model referred to as deep matrix factorization. Our first finding, supported by theory and experiments, is that adding depth to a matrix factorization enhances an implicit tendency towards low-rank solutions, oftentimes leading to more accurate recovery. Secondly, we present theoretical and empirical arguments questioning a nascent view by which implicit regularization in matrix factorization can be captured using simple mathematical norms. Our results point to the possibility that the language of standard regularizers may not be rich enough to fully encompass the implicit regularization brought forth by gradient-based optimization.) <|cite_end|> <|cite_start|> (Reference: Implicit Regularization in Deep Learning May Not Be Explainable by Norms: Mathematically characterizing the implicit regularization induced by gradient-based optimization is a longstanding pursuit in the theory of deep learning. A widespread hope is that a characterization based on minimization of norms may apply, and a standard test-bed for studying this prospect is matrix factorization (matrix completion via linear neural networks). It is an open question whether norms can explain the implicit regularization in matrix factorization. The current paper resolves this open question in the negative, by proving that there exist natural matrix factorization problems on which the implicit regularization drives all norms (and quasi-norms) towards infinity.
Our results suggest that, rather than perceiving the implicit regularization via norms, a potentially more useful interpretation is minimization of rank. We demonstrate empirically that this interpretation extends to a certain class of non-linear neural networks, and hypothesize that it may be key to explaining generalization in deep learning.) <|cite_end|>, linear regression <|cite_start|> (Reference: A mathematical theory of semantic development in deep neural networks: An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.) <|cite_end|> <|cite_start|> (Reference: Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks: When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error. In such cases, the choice of the optimization algorithm and its respective hyper-parameters introduces biases that will lead to convergence to specific minimizers of the objective. Consequently, this choice can be considered as an implicit regularization for the training of over-parametrized models. In this work, we push this idea further by studying the discrete gradient dynamics of the training of a two-layer linear network with the least-squares loss. Using a time rescaling, we show that, with a vanishing initialization and a small enough step size, this dynamics sequentially learns the solutions of a reduced-rank regression with a gradually increasing rank.) <|cite_end|>, logistic regression <|cite_start|> (Reference: The Implicit Bias of Gradient Descent on Separable Data: We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show the predictor converges to the direction of the max-margin (hard margin SVM) solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. 
This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.) <|cite_end|>, and linear convolutional neural networks <|cite_start|> (Reference: Implicit Bias of Gradient Descent on Linear Convolutional Networks: We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linearly fully connected networks, where gradient descent converges to the hard margin linear support vector machine solution, regardless of depth.) <|cite_end|>. The main goal of these prior works was to understand the generalization ability of deep neural networks. By contrast, the goal of the present work is to design an architecture that takes advantage of this phenomenon to improve the quality of learned representations. Learning good representations remains a core issue in AI <|cite_start|> (Reference: Representation Learning: A Review and New Perspectives: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.) <|cite_end|>. Representations learned in a self-supervised (or unsupervised) manner can be used for downstream tasks such as generation and classification. Autoencoders (AE) are a popular class of methods for learning representations without requiring labeled data. The internal representation of an AE must have a limited information capacity to prevent the AE from learning a trivial identity function. Variants of AEs differ in how they impose this limitation. Bottleneck AE (sometimes called "Diabolo networks") simply use low-dimensional codes <|cite_start|> (Reference: Learning internal representations by error propagation: ) <|cite_end|>, noisy AE, such as variational AE, add noise to the codes while limiting the variance of their distribution <|cite_start|> (Reference: Sparse Coding of Natural Images Using an Overcomplete Set of Limited Capacity Units: It has been suggested that the primary goal of the sensory system is to represent input in such a way as to reduce the high degree of redundancy. Given a noisy neural representation, however, solely reducing redundancy is not desirable, since redundancy is the only clue to reduce the effects of noise. Here we propose a model that best balances redundancy reduction and redundant representation.
Like previous models, our model accounts for the localized and oriented structure of simple cells, but it also predicts a different organization for the population. With noisy, limited-capacity units, the optimal representation becomes an overcomplete, multi-scale representation, which, compared to previous models, is in closer agreement with physiological data. These results offer a new perspective on the expansion of the number of neurons from retina to V1 and provide a theoretical model of incorporating useful redundancy into efficient neural representations.) <|cite_end|> <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|>, quantizing AE (such as VQ-VAE) quantize the codes into discrete clusters <|cite_start|> (Reference: Neural Discrete Representation Learning: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.) <|cite_end|>, sparse AE impose a sparsity penalty on the code <|cite_start|> (Reference: Research on denoising sparse autoencoder: ) <|cite_end|> <|cite_start|> (Reference: Sparse Feature Learning for Deep Belief Networks: Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g. low dimension, sparsity, etc). Others are based on approximating density by stochastically reconstructing the input from the representation. 
We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine trained probabilistically, namely a Restricted Boltzmann Machine. We propose a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation. We demonstrate this method by extracting features from a dataset of handwritten numerals, and from a dataset of natural image patches. We show that by stacking multiple levels of such machines and by training sequentially, high-order dependencies between the input observed variables can be captured.) <|cite_end|>, contracting and saturating AE minimize the curvature of the network function in directions outside the data manifold <|cite_start|> (Reference: Contractive Auto-encoders: Explicit invariance during feature extraction: We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pretraining.) <|cite_end|> <|cite_start|> (Reference: Saturating Auto-Encoders: We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE's ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.) <|cite_end|>, and denoising AE are trained to produce large reconstruction error for corrupted samples <|cite_start|> (Reference: {Extracting and Composing Robust Features with Denoising Autoencoders: Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. 
We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.) <|cite_end|>. In this work, we propose a new method to implicitly minimize the rank/dimensionality of the latent code of an autoencoder. We call this model Implicit Rank-Minimizing Autoencoder (IRMAE). This method consists of inserting extra linear layers between the encoder and the decoder of a standard autoencoder. This additional linear network is trained jointly with the rest of the autoencoder through classical backpropagation. As a result, the system spontaneously learns representations with a low effective dimensionality. Like other regularization methods, this extra linear neural network does not appear at inference time, as the linear matrices collapse into one. Thus, the encoder and decoder architectures are identical to those of the original model. In practice, we fold the collapsed linear matrices into the last layer of the encoder at inference time. We empirically demonstrate IRMAE's regularization behavior on a synthetic dataset and show that it learns good representations with a much smaller latent dimension. We then demonstrate superior representation learning performance of our method against a standard deterministic autoencoder, and comparable performance to a variational autoencoder, on the MNIST and CelebA datasets through a variety of generative tasks, including interpolation, sample generation from noise, PCA interpolation in low dimension, and a downstream classification task. We also conduct an ablation study to verify that the advantage of implicit regularization comes from gradient descent learning dynamics. We summarize our contributions as follows: \begin{itemize} \item We propose a method of inserting extra linear layers in deep neural networks for rank regularization; \item We propose a simple, deterministic rank-minimizing autoencoder that learns low-dimensional representations; \item We demonstrate superior performance of our method compared to a standard deterministic autoencoder and a variational autoencoder on a variety of generative and downstream classification tasks. \end{itemize} Related Work The implicit regularization provided by gradient descent optimization is widely believed to be one of the keys to deep neural networks' generalization ability. Many works focus on linear cases to study this behavior empirically and theoretically. Soudry et al. <|cite_start|> (Reference: The Implicit Bias of Gradient Descent on Separable Data: We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show the predictor converges to the direction of the max-margin (hard margin SVM) solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting.
Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.) <|cite_end|> show that implicit bias helps to learn logistic regression. Saxe et al. <|cite_start|> (Reference: A mathematical theory of semantic development in deep neural networks: An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.) <|cite_end|> study a 2-layer linear regression and theoretically demonstrate that continuous gradient descent can lead to a low-rank solution. Gidel et al. <|cite_start|> (Reference: Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks: When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error. In such cases, the choice of the optimization algorithm and its respective hyper-parameters introduces biases that will lead to convergence to specific minimizers of the objective. Consequently, this choice can be considered as an implicit regularization for the training of over-parametrized models. In this work, we push this idea further by studying the discrete gradient dynamics of the training of a two-layer linear network with the least-squares loss. Using a time rescaling, we show that, with a vanishing initialization and a small enough step size, this dynamics sequentially learns the solutions of a reduced-rank regression with a gradually increasing rank.) <|cite_end|> extend this theory to the discrete case for linear regression problems. In the field of matrix factorization, Gunasekar et al. <|cite_start|> (Reference: Implicit Regularization in Matrix Factorization: We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix $X$ with gradient descent on a factorization of $X$.
We conjecture and provide empirical and theoretical evidence that with small enough step sizes and initialization close enough to the origin, gradient descent on a full dimensional factorization converges to the minimum nuclear norm solution.) <|cite_end|> theoretically prove that gradient descent can derive the minimal nuclear norm solution. Arora et al. <|cite_start|> (Reference: Implicit Regularization in Deep Matrix Factorization: Efforts to understand the generalization mystery in deep learning have led to the belief that gradient-based optimization induces a form of implicit regularization, a bias towards models of low "complexity." We study the implicit regularization of gradient descent over deep linear neural networks for matrix completion and sensing, a model referred to as deep matrix factorization. Our first finding, supported by theory and experiments, is that adding depth to a matrix factorization enhances an implicit tendency towards low-rank solutions, oftentimes leading to more accurate recovery. Secondly, we present theoretical and empirical arguments questioning a nascent view by which implicit regularization in matrix factorization can be captured using simple mathematical norms. Our results point to the possibility that the language of standard regularizers may not be rich enough to fully encompass the implicit regularization brought forth by gradient-based optimization.) <|cite_end|> extend this concept to the deep linear network case by theoretically and empirically demonstrating that a deep linear network can derive low-rank solutions. Gunasekar et al. <|cite_start|> (Reference: Implicit Bias of Gradient Descent on Linear Convolutional Networks: We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linearly fully connected networks, where gradient descent converges to the hard margin linear support vector machine solution, regardless of depth.) <|cite_end|> prove that gradient descent has a regularization effect in linear convolutional networks. All these works aim to understand why gradient descent helps generalization in existing approaches. By contrast, we take advantage of this phenomenon to develop better algorithms. Moreover, existing implicit regularization analyses require small step sizes and vanishing initialization, while our method is more general and can be used with complicated optimizers such as Adam <|cite_start|> (Reference: Adam: A Method for Stochastic Optimization: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.) <|cite_end|> and allows combination with more complicated components. Autoencoders are popular for representation learning. It is important to limit the latent capacity so that the data are embedded in a lower-dimensional space. A large family of them is based on variational autoencoders <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|>, such as beta-VAE <|cite_start|> (Reference: beta-VAE: LEARNING BASIC VISUAL CONCEPTS WITH A CONSTRAINED VARIATIONAL FRAMEWORK: an) <|cite_end|>. These methods tend to generate blurry images due to their intrinsic probabilistic nature. On the other hand, a naive deterministic autoencoder is considered to fail at generative tasks and has ``holes'' in its latent space, due to the absence of an explicit constraint on the latent distribution. Many deterministic autoencoder methods have been proposed to solve this problem, such as RAE <|cite_start|> (Reference: From Variational to Deterministic Autoencoders: Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of VAEs. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without forcing it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data, we introduce an ex-post density estimation step that can be readily applied also to existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules.
\footnote{An implementation is available at: \url{https://github.com/ParthaEth/Regularized_autoencoders-RAE-}}) <|cite_end|>, WAE <|cite_start|> (Reference: Wasserstein Auto-Encoders: We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.) <|cite_end|>, VQ-VAE <|cite_start|> (Reference: Neural Discrete Representation Learning: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.) <|cite_end|>. <|paper_end|>
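To make the IRMAE construction described in the paper above concrete, here is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the authors' released code: the encoder and decoder are taken as given modules, and the latent width `latent_dim` and the number of inner linear layers `l` are placeholder hyperparameters.

```python
import torch
import torch.nn as nn

class IRMAE(nn.Module):
    """Implicit Rank-Minimizing Autoencoder sketch: a standard AE with l
    extra bias-free linear layers between encoder and decoder. The linear
    chain is a training-time device only; at inference it collapses into a
    single matrix that can be folded into the encoder's last layer."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 latent_dim: int = 128, l: int = 4):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        # l square linear maps, trained jointly by ordinary backpropagation.
        self.linears = nn.Sequential(
            *[nn.Linear(latent_dim, latent_dim, bias=False) for _ in range(l)]
        )

    def forward(self, x):
        z = self.linears(self.encoder(x))  # code with low effective rank
        return self.decoder(z), z

    @torch.no_grad()
    def collapsed_matrix(self) -> torch.Tensor:
        """Product W_l ... W_1 of the inner layers: the single linear map
        applied at inference, foldable into the encoder's final layer."""
        W = torch.eye(self.linears[0].in_features)
        for layer in self.linears:
            W = layer.weight @ W
        return W
```

Training uses the ordinary reconstruction loss (e.g. `F.mse_loss(x_hat, x)`). At deployment, one can replace the encoder's final linear weight `W_enc` with `model.collapsed_matrix() @ W_enc`, so the served model has exactly the original encoder-decoder architecture, and the effective dimensionality of the code can be inspected via `torch.linalg.svdvals(model.collapsed_matrix())`.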
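The implicit low-rank bias surveyed in the paper's related work is also easy to observe numerically. Below is a self-contained NumPy sketch (our illustration; the sizes, depth, initialization scale, step size, and iteration count are arbitrary choices that may need tuning) fitting a depth-3 linear factorization to a rank-2 target by gradient descent from small initialization; the trailing singular values of the learned product typically decay toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank, depth = 20, 2, 3
lr, steps = 0.1, 20000

# Rank-2 target, rescaled to unit spectral norm for stable step sizes.
target = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, n))
target /= np.linalg.norm(target, 2)

# Product parameterization P = W3 @ W2 @ W1, started near zero
# (small initialization matters for the low-rank bias).
Ws = [1e-2 * rng.normal(size=(n, n)) for _ in range(depth)]

def product(ws):
    P = np.eye(n)
    for W in ws:
        P = W @ P
    return P

for _ in range(steps):
    G = product(Ws) - target  # dL/dP for L = 0.5 * ||P - target||_F^2
    # dL/dW_i = (W_d ... W_{i+1})^T G (W_{i-1} ... W_1)^T
    grads = [product(Ws[i + 1:]).T @ G @ product(Ws[:i]).T
             for i in range(depth)]
    for W, g in zip(Ws, grads):
        W -= lr * g

# Leading singular values of the learned product; the tail is near zero.
print(np.round(np.linalg.svd(product(Ws), compute_uv=False)[:5], 3))
```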
[ "<|reference_start|> Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks: When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error. In such cases, the choice of the optimization algorithm and its respective hyper-parameters introduces biases that will lead to convergence to specific minimizers of the objective. Consequently, this choice can be considered as an implicit regularization for the training of over-parametrized models. In this work, we push this idea further by studying the discrete gradient dynamics of the training of a two-layer linear network with the least-squares loss. Using a time rescaling, we show that, with a vanishing initialization and a small enough step size, this dynamics sequentially learns the solutions of a reduced-rank regression with a gradually increasing rank. <|reference_end|>", "<|reference_start|> Learning internal representations by error propagation: <|reference_end|>", "<|reference_start|> Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks: When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error. In such cases, the choice of the optimization algorithm and its respective hyper-parameters introduces biases that will lead to convergence to specific minimizers of the objective. Consequently, this choice can be considered as an implicit regularization for the training of over-parametrized models. In this work, we push this idea further by studying the discrete gradient dynamics of the training of a two-layer linear network with the least-squares loss. Using a time rescaling, we show that, with a vanishing initialization and a small enough step size, this dynamics sequentially learns the solutions of a reduced-rank regression with a gradually increasing rank. <|reference_end|>", "<|reference_start|> From Variational to Deterministic Autoencoders: Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of VAEs. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without forcing it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data, we introduce an ex-post density estimation step that can be readily applied also to existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules. \\footnote{An implementation is available at: \\url{https://github.com/ParthaEth/Regularized_autoencoders-RAE-}} <|reference_end|>" ]
[ 4, 8, 19, 26 ]
{"<|multi_cite_1_1|>": "arxiv-125147", "<|multi_cite_1_2|>": "arxiv-207237", "<|multi_cite_1_3|>": "arxiv-265220", "<|multi_cite_2_1|>": "arxiv-177484", "<|multi_cite_2_2|>": "arxiv-202151", "<|cite_3|>": "arxiv-138411", "<|cite_4|>": "arxiv-160943", "<|cite_5|>": "arxiv-33186", "<|cite_6|>": "ss-844053", "<|multi_cite_7_1|>": "ss-1968414", "<|multi_cite_7_2|>": "arxiv-54350", "<|cite_8|>": "arxiv-139013", "<|multi_cite_9_1|>": "ss-2274949", "<|multi_cite_9_2|>": "ss-741770", "<|multi_cite_10_1|>": "ss-1006716", "<|multi_cite_10_2|>": "arxiv-40353", "<|cite_11|>": "ss-779190", "<|cite_12|>": "arxiv-138411", "<|cite_13|>": "arxiv-177484", "<|cite_14|>": "arxiv-202151", "<|cite_15|>": "arxiv-125147", "<|cite_16|>": "arxiv-207237", "<|cite_17|>": "arxiv-160943", "<|cite_18|>": "arxiv-70669", "<|cite_19|>": "arxiv-54350", "<|cite_20|>": "ss-709389", "<|cite_21|>": "arxiv-197295", "<|cite_22|>": "arxiv-139177", "<|cite_23|>": "arxiv-139013"}
2402.02207
<|paper_start|> Title: Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models Abstract: Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models: Current vision large language models (VLLMs) exhibit remarkable capabilities yet are prone to generate harmful content and are vulnerable to even the simplest jailbreaking attacks. Our initial analysis finds that this is due to the presence of harmful data during vision-language instruction fine-tuning, and that VLLM fine-tuning can cause forgetting of safety alignment previously learned by the underpinning LLM. To address this issue, we first curate a vision-language safe instruction-following dataset VLGuard covering various harmful categories. Our experiments demonstrate that integrating this dataset into standard vision-language fine-tuning or utilizing it for post-hoc fine-tuning effectively safety aligns VLLMs. This alignment is achieved with minimal impact on, or even enhancement of, the models' helpfulness. The versatility of our safety fine-tuning dataset makes it a valuable resource for safety-testing existing VLLMs, training new models or safeguarding pre-trained VLLMs. Empirical results demonstrate that fine-tuned VLLMs effectively reject unsafe instructions and substantially reduce the success rates of several black-box adversarial attacks, which approach zero in many cases. The code and dataset are available at https://github.com/ys-zong/VLGuard. Introduction \label{sec:intro} Vision Large Language Models (VLLMs) <|cite_start|> (Reference: A Survey on Multimodal Large Language Models: Recently, Multimodal Large Language Model (MLLM) represented by GPT-4V has been a new rising research hotspot, which uses powerful Large Language Models (LLMs) as a brain to perform multimodal tasks. The surprising emergent capabilities of MLLM, such as writing stories based on images and OCR-free math reasoning, are rare in traditional multimodal methods, suggesting a potential path to artificial general intelligence. To this end, both academia and industry have endeavored to develop MLLMs that can compete with or even better than GPT-4V, pushing the limit of research at a surprising speed. In this paper, we aim to trace and summarize the recent progress of MLLMs. First of all, we present the basic formulation of MLLM and delineate its related concepts, including architecture, training strategy and data, as well as evaluation. Then, we introduce research topics about how MLLMs can be extended to support more granularity, modalities, languages, and scenarios. We continue with multimodal hallucination and extended techniques, including Multimodal ICL (M-ICL), Multimodal CoT (M-CoT), and LLM-Aided Visual Reasoning (LAVR). To conclude the paper, we discuss existing challenges and point out promising research directions. In light of the fact that the era of MLLM has only just begun, we will keep updating this survey and hope it can inspire more research. An associated GitHub link collecting the latest papers is available at https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models.) <|cite_end|> <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. 
While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) <|cite_end|> <|cite_start|> (Reference: Visual Instruction Tuning: Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.Our early experiments show that LLaVA demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.) <|cite_end|>, building on top of large language models (LLMs), have attracted significant attention for their remarkable multi-modal capabilities. However, as the adoption of VLLMs accelerates, emerging studies reveal a critical challenge: these models are susceptible to generating harmful content and are vulnerable to adversarial attacks <|cite_start|> (Reference: Are aligned neural networks adversarially aligned?: Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. 
We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.) <|cite_end|> <|cite_start|> (Reference: Figstep: Jailbreaking large vision-language models via typographic visual prompts.: Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the artificial intelligence (AI) community, and the safety concerns associated with Large Language Models (LLMs) have been widely investigated. Recently, large vision-language models (VLMs) represent an unprecedented revolution, as they are built upon LLMs but can incorporate additional modalities (e.g., images). However, the safety of VLMs lacks systematic evaluation, and there may be an overconfidence in the safety guarantees provided by their underlying LLMs. In this paper, to demonstrate that introducing additional modality modules leads to unforeseen AI safety issues, we propose FigStep, a straightforward yet effective jailbreaking algorithm against VLMs. Instead of feeding textual harmful instructions directly, FigStep converts the harmful content into images through typography to bypass the safety alignment within the textual module of the VLMs, inducing VLMs to output unsafe responses that violate common AI safety policies. In our evaluation, we manually review 46,500 model responses generated by 3 families of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total of 6 VLMs). The experimental results show that FigStep can achieve an average attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which already leverages an OCR detector to filter harmful queries. Above all, our work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights the necessity of novel safety alignments between visual and textual modalities.) <|cite_end|> <|cite_start|> (Reference: Visual Adversarial Examples Jailbreak Large Language Models: for) <|cite_end|>. This vulnerability poses a significant concern for their deployment in practical settings, where there is a risk of malicious users attacking VLLMs to elicit desired harmful outputs, hijack model behaviors, obtain information for illegal activities, etc. \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{imgs/teaser.pdf}} \caption{Training vision large language models usually consists of fine-tuning previously aligned LLMs, which breaks their established alignment and leads the trained VLLMs to exhibit worse safety than their LLMs. To analyze and address this issue, we construct~\dataset~for VLLMs safety fine-tuning and evaluation.} \label{fig:teaser} \end{center} \vskip -0.2in \end{figure} There has been tremendous interest in ``jailbreaking'' or ``red-teaming'' LLMs and VLLMs in both academia <|cite_start|> (Reference: Jailbroken: How Does LLM Safety Training Fail?: Large language models trained for safety and harmlessness remain susceptible to adversarial misuse, as evidenced by the prevalence of "jailbreak" attacks on early releases of ChatGPT that elicit undesired behavior. Going beyond recognition of the issue, we investigate why such attacks succeed and how they can be created. We hypothesize two failure modes of safety training: competing objectives and mismatched generalization. 
Competing objectives arise when a model's capabilities and safety goals conflict, while mismatched generalization occurs when safety training fails to generalize to a domain for which capabilities exist. We use these failure modes to guide jailbreak design and then evaluate state-of-the-art models, including OpenAI's GPT-4 and Anthropic's Claude v1.3, against both existing and newly designed attacks. We find that vulnerabilities persist despite the extensive red-teaming and safety-training efforts behind these models. Notably, new attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests from the models' red-teaming evaluation sets and outperform existing ad hoc jailbreaks. Our analysis emphasizes the need for safety-capability parity -- that safety mechanisms should be as sophisticated as the underlying model -- and argues against the idea that scaling alone can resolve these safety failure modes.) <|cite_end|> <|cite_start|> (Reference: Are aligned neural networks adversarially aligned?: Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.) <|cite_end|> <|cite_start|> (Reference: Visual Adversarial Examples Jailbreak Large Language Models: for) <|cite_end|> <|cite_start|> (Reference: Figstep: Jailbreaking large vision-language models via typographic visual prompts.: Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the artificial intelligence (AI) community, and the safety concerns associated with Large Language Models (LLMs) have been widely investigated. Recently, large vision-language models (VLMs) represent an unprecedented revolution, as they are built upon LLMs but can incorporate additional modalities (e.g., images). However, the safety of VLMs lacks systematic evaluation, and there may be an overconfidence in the safety guarantees provided by their underlying LLMs. In this paper, to demonstrate that introducing additional modality modules leads to unforeseen AI safety issues, we propose FigStep, a straightforward yet effective jailbreaking algorithm against VLMs. 
Instead of feeding textual harmful instructions directly, FigStep converts the harmful content into images through typography to bypass the safety alignment within the textual module of the VLMs, inducing VLMs to output unsafe responses that violate common AI safety policies. In our evaluation, we manually review 46,500 model responses generated by 3 families of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total of 6 VLMs). The experimental results show that FigStep can achieve an average attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which already leverages an OCR detector to filter harmful queries. Above all, our work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights the necessity of novel safety alignments between visual and textual modalities.) <|cite_end|> <|cite_start|> (Reference: "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models: The misuse of large language models (LLMs) has drawn significant attention from the general public and LLM vendors. One particular type of adversarial prompt, known as jailbreak prompt, has emerged as the main attack vector to bypass the safeguards and elicit harmful content from LLMs. In this paper, employing our new framework JailbreakHub, we conduct a comprehensive analysis of 1,405 jailbreak prompts spanning from December 2022 to December 2023. We identify 131 jailbreak communities and discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts increasingly shift from online Web communities to prompt-aggregation websites and 28 user accounts have consistently optimized jailbreak prompts over 100 days. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 107,250 samples across 13 forbidden scenarios. Leveraging this dataset, our experiments on six popular LLMs show that their safeguards cannot adequately defend jailbreak prompts in all scenarios. Particularly, we identify five highly effective jailbreak prompts that achieve 0.95 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and the earliest one has persisted online for over 240 days. We hope that our study can facilitate the research community and LLM vendors in promoting safer and regulated LLMs.) <|cite_end|> and social media. In response, researchers have proposed various methods to safeguard LLMs, such as Reinforcement Learning from Human Feedback (RLHF) <|cite_start|> (Reference: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. 
Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.) <|cite_end|> <|cite_start|> (Reference: Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned.: We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM); an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF). We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models.) <|cite_end|>. These efforts, often termed \textit{alignment}, focus on ensuring that LLMs remain ``helpful and harmless'', aiming to align their outputs with ethical and legal standards. VLLMs suffer greater vulnerability than LLMs due to potential attacks from two fronts: (1) text-only inputs, where we shall see that VLLMs are often more susceptible than LLMs because visual instruction-following fine-tuning breaks the LLMs' alignment, and (2) vision-language inputs, where the addition of the visual modality introduces new risk factors. Consequently, directly adapting text-only LLM safety techniques to VLLMs is not straightforward and there is currently \textit{no} existing safeguarding strategy for VLLMs. In light of these challenges, we propose a simple yet effective safety fine-tuning strategy for safeguarding VLLMs. We first collect and curate a safety instruction-following dataset~\dataset~consisting of vision-language data. We show that fine-tuning existing VLLMs on our dataset achieves significant improvement in safety while resulting in negligible or no helpfulness degradation, achieving a good balance in the helpfulness-harmlessness tradeoff. To summarize, our contributions are: \begin{itemize} \item We analyze existing VLLMs and their underpinning LLMs and show how popular VLLM instruction-following protocols make VLLMs substantially more susceptible to jailbreak attacks than the corresponding LLMs. \item To the best of our knowledge, we build the first safety fine-tuning dataset~\dataset~for VLLMs.~\dataset~also comes with a test suite for evaluation. \item We propose two strategies for VLLM safety alignment: post-hoc and mixed fine-tuning.
Experimental results with state-of-the-art open-source VLLMs show that our fine-tuning strategy and data significantly reduce the initial safety risks and also add robustness to several black-box attacks while not hurting helpfulness. \end{itemize} Related Work \subsection{Safety Concerns of LLMs and VLLMs} The rising use of LLMs and VLLMs has spurred interest in probing their safety vulnerabilities through jailbreaking methods, which can be broadly categorized into white-box and black-box attacks. In black-box attacks, where attackers have no access to the model's internals and interact only through interfaces like APIs, strategies like prompt engineering (e.g., role play) <|cite_start|> (Reference: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study: Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse. Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts. Initially, we develop a classification model to analyze the distribution of existing prompts, identifying ten distinct patterns and three categories of jailbreak prompts. Subsequently, we assess the jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a dataset of 3,120 jailbreak questions across eight prohibited scenarios. Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, finding that the prompts can consistently evade the restrictions in 40 use-case scenarios. The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.) <|cite_end|> <|cite_start|> (Reference: "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models: The misuse of large language models (LLMs) has drawn significant attention from the general public and LLM vendors. One particular type of adversarial prompt, known as jailbreak prompt, has emerged as the main attack vector to bypass the safeguards and elicit harmful content from LLMs. In this paper, employing our new framework JailbreakHub, we conduct a comprehensive analysis of 1,405 jailbreak prompts spanning from December 2022 to December 2023. We identify 131 jailbreak communities and discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts increasingly shift from online Web communities to prompt-aggregation websites and 28 user accounts have consistently optimized jailbreak prompts over 100 days. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 107,250 samples across 13 forbidden scenarios. Leveraging this dataset, our experiments on six popular LLMs show that their safeguards cannot adequately defend jailbreak prompts in all scenarios. Particularly, we identify five highly effective jailbreak prompts that achieve 0.95 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and the earliest one has persisted online for over 240 days. We hope that our study can facilitate the research community and LLM vendors in promoting safer and regulated LLMs.) 
<|cite_end|> <|cite_start|> (Reference: Jailbroken: How Does LLM Safety Training Fail?: Large language models trained for safety and harmlessness remain susceptible to adversarial misuse, as evidenced by the prevalence of "jailbreak" attacks on early releases of ChatGPT that elicit undesired behavior. Going beyond recognition of the issue, we investigate why such attacks succeed and how they can be created. We hypothesize two failure modes of safety training: competing objectives and mismatched generalization. Competing objectives arise when a model's capabilities and safety goals conflict, while mismatched generalization occurs when safety training fails to generalize to a domain for which capabilities exist. We use these failure modes to guide jailbreak design and then evaluate state-of-the-art models, including OpenAI's GPT-4 and Anthropic's Claude v1.3, against both existing and newly designed attacks. We find that vulnerabilities persist despite the extensive red-teaming and safety-training efforts behind these models. Notably, new attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests from the models' red-teaming evaluation sets and outperform existing ad hoc jailbreaks. Our analysis emphasizes the need for safety-capability parity -- that safety mechanisms should be as sophisticated as the underlying model -- and argues against the idea that scaling alone can resolve these safety failure modes.) <|cite_end|> or using additional attacker LLMs <|cite_start|> (Reference: Jailbreaking Black Box Large Language Models in Twenty Queries: There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding inherent weaknesses and preventing future misuse. To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an algorithm that generates semantic jailbreaks with only black-box access to an LLM. PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention. In this way, the attacker LLM iteratively queries the target LLM to update and refine a candidate jailbreak. Empirically, PAIR often requires fewer than twenty queries to produce a jailbreak, which is orders of magnitude more efficient than existing algorithms. PAIR also achieves competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and Gemini.) <|cite_end|> have been explored. For VLLMs, it has been demonstrated that inputting harmful instruction screenshots <|cite_start|> (Reference: Figstep: Jailbreaking large vision-language models via typographic visual prompts.: Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the artificial intelligence (AI) community, and the safety concerns associated with Large Language Models (LLMs) have been widely investigated. Recently, large vision-language models (VLMs) represent an unprecedented revolution, as they are built upon LLMs but can incorporate additional modalities (e.g., images). However, the safety of VLMs lacks systematic evaluation, and there may be an overconfidence in the safety guarantees provided by their underlying LLMs. 
In this paper, to demonstrate that introducing additional modality modules leads to unforeseen AI safety issues, we propose FigStep, a straightforward yet effective jailbreaking algorithm against VLMs. Instead of feeding textual harmful instructions directly, FigStep converts the harmful content into images through typography to bypass the safety alignment within the textual module of the VLMs, inducing VLMs to output unsafe responses that violate common AI safety policies. In our evaluation, we manually review 46,500 model responses generated by 3 families of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total of 6 VLMs). The experimental results show that FigStep can achieve an average attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which already leverages an OCR detector to filter harmful queries. Above all, our work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights the necessity of novel safety alignments between visual and textual modalities.) <|cite_end|> or related images <|cite_start|> (Reference: Query-Relevant Images Jailbreak Large Multi-Modal Models: Warning: This paper contains examples of harmful language and images, and reader discretion is recommended. The security concerns surrounding Large Language Models (LLMs) have been extensively explored, yet the safety of Large Multi-Modal Models (LMMs) remains understudied. In our study, we present a novel visual prompt attack that exploits query-relevant images to jailbreak the open-source LMMs. Our method creates a composite image from one image generated by diffusion models and another that displays the text as typography, based on keywords extracted from a malicious query. We show LLMs can be easily at-tacked by our approach, even if the employed Large Language Models are safely aligned. To evaluate the extent of this vulnerability in open-source LMMs, we have compiled a substantial dataset encompassing 13 scenarios with a total of 5,040 text-image pairs, using our presented attack technique. Our evaluation of 12 cutting-edge LMMs using this dataset shows the vulnerability of existing multi-modal models on adversarial attacks. This finding under-scores the need for a concerted effort to strengthen and enhance the safety measures of open-source LMMs against potential malicious exploits. The resource is available at https://github.com/isXinLiu/MM-SafetyBench.) <|cite_end|> can effectively jailbreak these models. White-box attacks, on the other hand, involve gradient-based searches for adversarial text <|cite_start|> (Reference: Universal and Transferable Adversarial Attacks on Aligned Language Models: Because "out-of-the-box" large language models are capable of generating a great deal of objectionable content, recent work has focused on aligning these models in an attempt to prevent undesirable generation. While there has been some success at circumventing these measures -- so-called "jailbreaks" against LLMs -- these attacks have required significant human ingenuity and are brittle in practice. In this paper, we propose a simple and effective attack method that causes aligned language models to generate objectionable behaviors. Specifically, our approach finds a suffix that, when attached to a wide range of queries for an LLM to produce objectionable content, aims to maximize the probability that the model produces an affirmative response (rather than refusing to answer). 
However, instead of relying on manual engineering, our approach automatically produces these adversarial suffixes by a combination of greedy and gradient-based search techniques, and also improves over past automatic prompt generation methods. Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable, including to black-box, publicly released LLMs. Specifically, we train an adversarial attack suffix on multiple prompts (i.e., queries asking for many different types of objectionable content), as well as multiple models (in our case, Vicuna-7B and 13B). When doing so, the resulting attack suffix is able to induce objectionable content in the public interfaces to ChatGPT, Bard, and Claude, as well as open source LLMs such as LLaMA-2-Chat, Pythia, Falcon, and others. In total, this work significantly advances the state-of-the-art in adversarial attacks against aligned language models, raising important questions about how such systems can be prevented from producing objectionable information. Code is available at github.com/llm-attacks/llm-attacks.) <|cite_end|> <|cite_start|> (Reference: Are aligned neural networks adversarially aligned?: Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.) <|cite_end|> or image input <|cite_start|> (Reference: Visual Adversarial Examples Jailbreak Large Language Models: for) <|cite_end|> <|cite_start|> (Reference: Are aligned neural networks adversarially aligned?: Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. 
We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.) <|cite_end|> that make the model produce harmful content. This paper focuses on safeguarding VLLMs against black-box attacks. As we have shown, VLLMs can be easily broken even by the most straightforward prompts, without the need for gradient-based search. This is also a practical consideration for models deployed as web services, where users lack access to internal model information, as in the case of GPT-4. \subsection{Safeguarding LLMs} Researchers have also explored methods to safeguard LLMs through techniques like reinforcement learning from human feedback (RLHF) <|cite_start|> (Reference: Deep reinforcement learning from human preferences: For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.) <|cite_end|> <|cite_start|> (Reference: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.)
<|cite_end|> <|cite_start|> (Reference: Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned.: We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM); an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF). We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models.) <|cite_end|>. However, RLHF is resource-intensive, as it requires considerable human annotation and is challenging to train. The work most closely related to ours is <|cite_start|> (Reference: Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions.: Training large language models to follow instructions makes them perform better on a wide range of tasks and generally become more helpful. However, a perfectly helpful model will follow even the most malicious instructions and readily generate harmful content. In this paper, we raise concerns over the safety of models that only emphasize helpfulness, not harmlessness, in their instruction-tuning. We show that several popular instruction-tuned models are highly unsafe. Moreover, we show that adding just 3% safety examples (a few hundred demonstrations) when fine-tuning a model like LLaMA can substantially improve its safety. Our safety-tuning does not make models significantly less capable or helpful as measured by standard benchmarks. However, we do find exaggerated safety behaviours, where too much safety-tuning makes models refuse perfectly safe prompts if they superficially resemble unsafe ones. As a whole, our results illustrate trade-offs in training LLMs to be helpful and training them to be safe.) <|cite_end|>, which involves fine-tuning \textit{text-only} LLMs for safety. However, this approach does not extend to the visual modality. To the best of our knowledge, there is no existing dataset or method for safeguarding VLLMs. Our contribution is to introduce the first dataset and fine-tuning strategy to enhance the safety of VLLMs. <|paper_end|>
[ "<|reference_start|> Are aligned neural networks adversarially aligned?: Large language models are now tuned to align with the goals of their creators, namely to be \"helpful and harmless.\" These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models. <|reference_end|>", "<|reference_start|> Are aligned neural networks adversarially aligned?: Large language models are now tuned to align with the goals of their creators, namely to be \"helpful and harmless.\" These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs. However the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models. <|reference_end|>", "<|reference_start|> Query-Relevant Images Jailbreak Large Multi-Modal Models: Warning: This paper contains examples of harmful language and images, and reader discretion is recommended. The security concerns surrounding Large Language Models (LLMs) have been extensively explored, yet the safety of Large Multi-Modal Models (LMMs) remains understudied. In our study, we present a novel visual prompt attack that exploits query-relevant images to jailbreak the open-source LMMs. 
Our method creates a composite image from one image generated by diffusion models and another that displays the text as typography, based on keywords extracted from a malicious query. We show LLMs can be easily at-tacked by our approach, even if the employed Large Language Models are safely aligned. To evaluate the extent of this vulnerability in open-source LMMs, we have compiled a substantial dataset encompassing 13 scenarios with a total of 5,040 text-image pairs, using our presented attack technique. Our evaluation of 12 cutting-edge LMMs using this dataset shows the vulnerability of existing multi-modal models on adversarial attacks. This finding under-scores the need for a concerted effort to strengthen and enhance the safety measures of open-source LMMs against potential malicious exploits. The resource is available at https://github.com/isXinLiu/MM-SafetyBench. <|reference_end|>", "<|reference_start|> Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned.: We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM); an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF). We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models. <|reference_end|>" ]
[ 3, 7, 18, 25 ]
{"<|multi_cite_2_1|>": "arxiv-518078", "<|multi_cite_2_2|>": "arxiv-489148", "<|multi_cite_2_4|>": "arxiv-497716", "<|multi_cite_3_1|>": "arxiv-518946", "<|multi_cite_3_2|>": "ss-2118264", "<|multi_cite_3_3|>": "ss-1836586", "<|multi_cite_4_1|>": "arxiv-521098", "<|multi_cite_4_2|>": "arxiv-518946", "<|multi_cite_4_3|>": "ss-1836586", "<|multi_cite_4_4|>": "ss-2118264", "<|multi_cite_4_5|>": "arxiv-529392", "<|multi_cite_1_1|>": "arxiv-412682", "<|multi_cite_1_2|>": "ss-1834246", "<|multi_cite_6_1|>": "arxiv-507754", "<|multi_cite_6_2|>": "arxiv-529392", "<|multi_cite_6_3|>": "arxiv-521098", "<|cite_7|>": "arxiv-548465", "<|cite_8|>": "ss-2118264", "<|cite_9|>": "ss-1355227", "<|multi_cite_10_1|>": "arxiv-526801", "<|multi_cite_10_2|>": "arxiv-518946", "<|multi_cite_11_1|>": "ss-1836586", "<|multi_cite_11_2|>": "arxiv-518946", "<|multi_cite_12_1|>": "arxiv-126589", "<|multi_cite_12_2|>": "arxiv-412682", "<|multi_cite_12_3|>": "ss-1834246", "<|cite_13|>": "ss-2118265"}
2308.05620
<|paper_start|> Title: A Robust and Rapidly Deployable Waypoint Navigation Architecture for Long-Duration Operations in GPS-Denied Environments Abstract: A Robust and Rapidly Deployable Waypoint Navigation Architecture for Long-Duration Operations in GPS-Denied Environments: For long-duration operations in GPS-denied environments, accurate and repeatable waypoint navigation is an essential capability. While simultaneous localization and mapping (SLAM) works well for single-session operations, repeated, multi-session operations require robots to navigate to the same spot(s) accurately and precisely each and every time. Localization and navigation errors can build up from one session to the next if they are not accounted for. Localization using a global reference map works well, but there are no publicly available packages for quickly building maps and navigating with them. We propose a new architecture using a combination of two publicly available packages with a newly released package to create a fully functional multi-session navigation system for ground vehicles. The system takes just a few hours from the beginning of the first manual scan to perform autonomous waypoint navigation. Introduction Mobile robots are often programmed for repeatable tasks, and each instance typically requires the same code. However, repeatable tasks require consistency between attempts, and localization is an important contributing factor to this consistency. For unmanned ground vehicles (UGVs) like Clearpath's Jackal seen in Fig. \ref{fig:jackal}, reliable navigation to specified waypoints can facilitate a wide range of repeatable tasks. For example, mobile manipulator platforms would be able to perform repeatable grasping and manipulation tasks at specified locations. However, there are currently no simple, publicly available localization methods and implementations compatible with repeated waypoint navigation that incorporate fast map construction from scratch. Simultaneous localization and mapping (SLAM) has become an essential tool for mobile robotics, and successful approaches allow vehicles to navigate around a previously unknown environment with confidence. However, many repeatable tasks occur in indoor environments that undergo few, if any, changes to their structure. Creating a new map on each attempt is unnecessary and time consuming for simple tasks. Additionally, new maps may not have the same orientation or origin as prior maps, resulting in different outcomes of repeated tasks. While key locations in a map can sometimes be identified semantically, the added complexity of doing so may sometimes be undesirable on embedded systems. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{images/jackal_hilltop_2.jpg} \caption{The Clearpath Jackal unmanned ground vehicle (UGV) used in this research, equipped with a Velodyne VLP-16 ``puck" lidar.} \label{fig:jackal} \end{figure} Localization using a prior map will ensure waypoints are placed at the same global location every time. Some common packages exist for SLAM within the Robot Operating System (ROS) <|cite_start|> (Reference: {{ROS: 小鼠体内发育的囊胚、扩张囊胚和从1-细胞期取出在CzB培养液中培养的囊胚、扩张囊胚的染色体数目分别是39.00,38.43和42.47,39.56.经统计分析,体内与体外CzB培养的染色体数目无显著性差异.体外CZB培养的不同时期添加外源性H2O2,发育至囊胚、扩张囊胚阶段,制作胚胎标本,观察胚胎的染色体数目.结果显示:扩张囊胚的0~72 h组的染色体数目与对照组的染色体数目有显著性差异(P<0.05),而其它各发育阶段的各处理组染色体数目之间均无显著性差异.说明经H2O2处理之后存活下来的胚胎与正常体外发育的胚胎染色体数目没有差异.) <|cite_end|>. 
Currently there are no packages for localization using a prior map that also include navigation; therefore, we have implemented a fast approach for building and using a prior map with a localization package for waypoint navigation in ROS, which is described in detail below. Related Work Mobile robot navigation has been researched extensively to open a pathway for more advanced tasks. However, navigation requires both localization and environmental data to make informed decisions. SLAM has been successful at satisfying both of those requirements. Most SLAM algorithms define a map origin based on the initial position of the robot when it begins its operations. To perform multi-session tasks, we need to ensure a common origin is used for each session. \subsection{LiDAR-aided Pose Estimation} Autonomous ground vehicles rely on accurate pose estimation for navigation <|cite_start|> (Reference: Head Pose Estimation with Uncertainty: ) <|cite_end|>. While there are many methods for performing pose estimation <|cite_start|> (Reference: Multisensor data fusion for robust pose estimation of a six-legged walking robot: For autonomous navigation tasks it is important that the robot always has a good estimate of its current pose with respect to its starting position and - in terms of orientation - with respect to the gravity vector. For this, the robot should make use of all available information and be robust against the failure of single sensors. In this paper a multisensor data fusion algorithm for the six-legged walking robot DLR Crawler is presented. The algorithm is based on an indirect feedback information filter that fuses measurements from an inertial measurement unit (IMU) with relative 3D leg odometry measurements and relative 3D visual odometry measurements from a stereo camera. Errors of the visual odometry are computed and considered in the filtering process in order to achieve accurate pose estimates which are robust against visual odometry failure. The algorithm was successfully tested and results are presented.) <|cite_end|>, simplifying the task down to a common sensor type reduces the complexity. LiDAR sensors provide enough data for localization with comparative algorithms. Managing thousands of data points from each scan can be daunting from a computational complexity standpoint; however, there are methods to reduce the complexity, such as semantic labeling. This approach labels and groups a subset of point cloud data together to appear as a single entity, thereby significantly reducing the overall number of comparisons. Using semantics and Random Sample Consensus (RANSAC), <|cite_start|> (Reference: A robust registration method for autonomous driving pose estimation in urban dynamic environment using LiDAR: The registration of point clouds in urban environments faces problems such as dynamic vehicles and pedestrians, changeable road environments, and GPS inaccuracies. The state-of-the-art methodologies have usually combined the dynamic object tracking and/or static feature extraction data into a point cloud towards the solution of these problems. However, there is the occurrence of minor initial position errors due to these methodologies. In this paper, the authors propose a fast and robust registration method that exhibits no need for the detection of any dynamic and/or static objects. This proposed methodology may be able to adapt to higher initial errors.
The initial steps of this methodology involved the optimization of the object segmentation under the application of a series of constraints. Based on this algorithm, a novel multi-layer nested RANSAC algorithmic framework is proposed to iteratively update the registration results. The robustness and efficiency of this algorithm is demonstrated on several high dynamic scenes of both short and long time intervals with varying initial offsets. A LiDAR odometry experiment was performed on the KITTI data set and our extracted urban data-set with a high dynamic urban road, and the average of the horizontal position errors was compared to the distance traveled that resulted in 0.45% and 0.55% respectively.) <|cite_end|> performs stable and accurate pose estimation while eliminating dynamic obstacles from comparisons. Semantic modeling can be a powerful tool for pose estimation as proved in <|cite_start|> (Reference: OneShot Global Localization: Instant LiDAR-Visual Pose Estimation: Globally localizing in a given map is a crucial ability for robots to perform a wide range of autonomous navigation tasks. This paper presents OneShot - a global localization algorithm that uses only a single 3D LiDAR scan at a time, while outperforming approaches based on integrating a sequence of point clouds. Our approach, which does not require the robot to move, relies on learning-based descriptors of point cloud segments and computes the full 6 degree-of-freedom pose in a map. The segments are extracted from the current LiDAR scan and are matched against a database using the computed descriptors. Candidate matches are then verified with a geometric consistency test. We additionally present a strategy to further improve the performance of the segment descriptors by augmenting them with visual information provided by a camera. For this purpose, a custom-tailored neural network architecture is proposed. We demonstrate that our LiDAR-only approach outperforms a state-of-the-art baseline on a sequence of the KITTI dataset and also evaluate its performance on the challenging NCLT dataset. Finally, we show that fusing in visual information boosts segment retrieval rates by up to 26% compared to LiDAR-only description.) <|cite_end|>, which solved global localization by registering a single LiDAR scan overlapped with a camera to a reference map using segmentation and neural network training. Localization can be performed by focusing on one semantic object class while attaining high accuracy in handling the surrounding data <|cite_start|> (Reference: RO-LOAM: 3D Reference Object-based Trajectory and Map Optimization in LiDAR Odometry and Mapping: We propose an extension to the LiDAR Odometry and Mapping framework (LOAM) that enables reference object-based trajectory and map optimization. Our approach assumes that the location and geometry of a large reference object are known, e.g., as a CAD model from Building Information Modeling (BIM) or a previously captured dense point cloud model. We do not expect the reference object to be present in every LiDAR scan. Our approach uses the poses of the LOAM algorithm as an initial guess to refine them with scan-to-model alignment. To evaluate if the alignment was accurate, an EKF-based motion prior filtering step is employed. Subsequently, the past trajectory is optimized by adding the model-aligned pose as a pose graph constraint and the map of the LOAM algorithm is corrected to improve future localization and mapping. 
We evaluate our approach with data captured in a visual airplane inspection scenario inside an aircraft hangar. A 3D LiDAR sensor is mounted via a gimbal on an Unmanned Aerial Vehicle (UAV) and is continuously actuated. We compare the localization accuracy of the LOAM and R-LOAM algorithms when enabling or disabling our proposed reference object-based trajectory and map optimization extension. For three recorded datasets, enabling the proposed extension yields a reduction in Absolute Pose Error compared to conventional LOAM and R-LOAM, while being able to run online. This reduces drift and improves map quality.) <|cite_end|>. \subsection{Multi-Session SLAM} Two forms of multi-session SLAM persist in mobile robotics research. The first expands the boundaries of a previously defined map <|cite_start|> (Reference: Online Global Loop Closure Detection for Large-Scale Multi-Session Graph-Based SLAM: For large-scale and long-term simultaneous localization and mapping (SLAM), a robot has to deal with unknown initial positioning caused by either the kidnapped robot problem or multi-session mapping. This paper addresses these problems by tying the SLAM system with a global loop closure detection approach, which intrinsically handles these situations. However, online processing for global loop closure detection approaches is generally influenced by the size of the environment. The proposed graph-based SLAM system uses a memory management approach that only consider portions of the map to satisfy online processing requirements. The approach is tested and demonstrated using five indoor mapping sessions of a building using a robot equipped with a laser rangefinder and a Kinect.) <|cite_end|>, while the second revisits the same location from a previous map, where the environment may have changed <|cite_start|> (Reference: Real-time 6-DOF multi-session visual SLAM over large-scale environments: ) <|cite_end|>. Real world environments are not static, which requires maps to be updated from time to time for accurate localization. Therefore, there needs to be some tolerance for robots to use prior maps with outdated information, by either having enough static points of reference on a prior map or updating a prior map during each session. One example of enough reference data is the work produced by Labbe et al. <|cite_start|> (Reference: Multi-Session Visual SLAM for Illumination-Invariant Re-Localization in Indoor Environments: For robots navigating using only a camera, illumination changes in indoor environments can cause re-localization failures during autonomous navigation. In this paper, we present a multi-session visual SLAM approach to create a map made of multiple variations of the same locations in different illumination conditions. The multi-session map can then be used at any hour of the day for improved re-localization capability. The approach presented is independent of the visual features used, and this is demonstrated by comparing re-localization performance between multi-session maps created using the RTAB-Map library with SURF, SIFT, BRIEF, BRISK, KAZE, DAISY, and SuperPoint visual features. The approach is tested on six mapping and six localization sessions recorded at 30 min intervals during sunset using a Google Tango phone in a real apartment.) <|cite_end|> in illumination invariant visual SLAM, where distinctly different visual references are capable of localizing successfully. 
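To make the shared-origin requirement noted earlier concrete, the following fragment (an illustrative sketch, not code from any of the cited systems) expresses a waypoint stored once in the global map frame in the robot's current body frame, given the SE(2) pose estimate produced by localization; the numeric values are placeholders.
\begin{verbatim}
import math

def se2_inv(p):
    # Inverse of an SE(2) pose p = (x, y, theta).
    x, y, th = p
    c, s = math.cos(th), math.sin(th)
    return (-c * x - s * y, s * x - c * y, -th)

def se2_apply(p, pt):
    # Apply pose p = (x, y, theta) to a 2D point pt = (px, py).
    x, y, th = p
    px, py = pt
    c, s = math.cos(th), math.sin(th)
    return (x + c * px - s * py, y + s * px + c * py)

# Robot pose in the shared map frame (from localization against the
# prior map) and a waypoint stored once in that same frame.
T_map_base = (2.0, 1.0, math.pi / 2)   # placeholder pose estimate
w_map = (3.0, 4.0)                     # placeholder waypoint
w_base = se2_apply(se2_inv(T_map_base), w_map)  # goal relative to robot
\end{verbatim}
Because the waypoint is defined in the fixed frame of the prior map, every session that localizes against that map recovers the same physical goal location, regardless of where the robot was powered on.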
Another method separates changes between the current map and the prior map <|cite_start|> (Reference: Posemap: Lifelong, multi-environment 3D lidar localization: Reliable long-term localization is key for robotic systems in dynamic environments. In this paper, we propose a novel approach for long-term localization using 3D LiDARs, coined PoseMap. In essence, we extract distinctive features from range measurements and bundle these into local views along with observation poses. The sensor's trajectory is then estimated in a sliding window fashion by matching current and old features and minimizing the distances in-between. The map representation facilitates finding a suitable set of old features, by selecting the closest local map(s) for matching. Similarly to a visibility analysis, this procedure provides a suitable set of features for localization but at a fraction of the computational cost. PoseMap also allows for updates and extensions of the map at any time by replacing and adding local maps when necessary. We evaluate our approach using two platforms both equipped with a 3D LiDAR and an IMU, demonstrating localization at 8 Hz and robustness to changes in the environment such as moving vehicles and changing vegetation. PoseMap was implemented on an autonomous vehicle allowing it to drive autonomously over a period of 18 months through a mix of industrial and unstructured off-road environments, covering more than 100 kms without a single localization failure.) <|cite_end|>. Other researchers, such as Zhao et al. <|cite_start|> (Reference: A General Framework for Lifelong Localization and Mapping in Changing Environment: The environment of most real-world scenarios such as malls and supermarkets changes at all times. A pre-built map that does not account for these changes becomes out-of-date easily. Therefore, it is necessary to have an up-to-date model of the environment to facilitate long-term operation of a robot. To this end, this paper presents a general lifelong simultaneous localization and mapping (SLAM) framework. Our framework uses a multiple session map representation, and exploits an efficient map updating strategy that includes map building, pose graph refinement and sparsification. To mitigate the unbounded increase of memory usage, we propose a map-trimming method based on the Chow-Liu maximum-mutual-information spanning tree. The proposed SLAM framework has been comprehensively validated by over a month of robot deployment in real supermarket environment. Furthermore, we release the dataset collected from the indoor and outdoor changing environment with the hope to accelerate lifelong SLAM research in the community. Our dataset is available at https://github.com/sanduan168/lifelong-SLAM-dataset.) <|cite_end|>, prefer to update the global reference map; they acknowledge that active environment changes, such as new stores within a mall, should be manageable. \subsection{Repeated Navigation} While consistent localization through pose estimation is required for multi-session waypoint navigation, environments are rarely static. One method to handle this is ignoring data unrelated to localization. Indoor artificial landmarks such as fiducial markers can be placed in an environment where other features may change <|cite_start|> (Reference: Realtime 2D code based localization for indoor robot navigation: In this paper ceiling affixed ARToolKitPlus 2D code artificial landmarks are evaluated for purposes of robot localization and navigation.
Ceiling affixed codes rarely come in contact with people, equipment or robots, and for this reason they are more likely to stay detectable over a longer period of time. Multi threshold averaging, light gradient compensation and neighbourhood search techniques further enhanced AR-ToolKitPlus performance. Multi threshold averaging collects positioning results at each gray scale threshold level. After all of the threshold levels are analyzed, the related results are averaged into a final result. Light gradient compensation eliminates effects of uneven lighting in the neighbourhood of a 2D code. Neighbourhood search for a 2D code requires fewer computational resources than a global search. Further repeatability improvements are achieved by means of averaging localizations. Localization performance is evaluated at varying distances of the 2D code from the camera. Experimental results show substantial improvements in repeatability and reliability over the baseline ARToolKitPlus performance. Improved performance will allow for realtime 2D code based localization for indoor robot navigation.) <|cite_end|>. Visual teach and repeat methods <|cite_start|> (Reference: An Efficient Locally Reactive Controller for Safe Navigation in Visual Teach and Repeat Missions: To achieve successful field autonomy, mobile robots need to freely adapt to changes in their environment. Visual navigation systems such as Visual Teach and Repeat (VT&R) often assume the space around the reference trajectory is free, but if the environment is obstructed path tracking can fail or the robot could collide with a previously unseen obstacle. In this work, we present a locally reactive controller for a VT&R system that allows a robot to navigate safely despite physical changes to the environment. Our controller uses a local elevation map to compute vector representations and outputs twist commands for navigation at 10 Hz. They are combined in a Riemannian Motion Policies (RMP) controller that requires <2 ms to run on a CPU. We integrated our controller with a VT&R system onboard an ANYmal C robot and tested it in indoor cluttered spaces and a large-scale underground mine. We demonstrate that our locally reactive controller keeps the robot safe when physical occlusions or loss of visual tracking occur such as when walking close to walls, crossing doorways, or traversing narrow corridors. Video: https://youtu.be/G_AwNec5AwU) <|cite_end|> <|cite_start|> (Reference: Navigation without localisation: reliable teach and repeat based on the convergence theorem: We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model, which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Rather than that, a mobile robot which repeats a previously taught path can simply `replay' the learned velocities, while using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot, which traverses a taught path by only correcting its heading. Then, we outline a mathematical proof which shows that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, it does not require camera calibration, and it can learn and autonomously traverse arbitrarily-shaped paths. 
In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav.) <|cite_end|> allow for repeated navigation without localization, and can function even under seasonal environmental changes <|cite_start|> (Reference: A survey on Visual-Based Localization: On the benefit of heterogeneous data: ) <|cite_end|> <|cite_start|> (Reference: Image features for visual teach-and-repeat navigation in changing environments: ) <|cite_end|>. However, robust localization methods can sometimes handle large static changes in the environment. \subsection{Model Localization} Prior maps can be built from many different data sources. One helpful source is 3D models constructed using computer-aided design (CAD). Building Information Models can also be used to generate maps for both geometric and semantic localization <|cite_start|> (Reference: Semantic localization in BIM using a 3D LiDAR sensor: Conventional sensor-based localization relies on high-precision maps. These maps are generally built using specialized mapping techniques, which involve high labor and computational costs. While in the architectural, engineering and construction industry, building information models (BIMs) are available and can provide informative descriptions of environments. This paper explores an effective way to localize a mobile 3D LiDAR sensor in BIM considering both geometric and semantic properties. Specifically, we first convert original BIM to semantic maps using categories and locations of BIM elements. After that, a coarse-to-fine semantic localization is performed to align laser points to the map via iterative closest point registration. The experimental results show that the semantic localization can track the pose with only scan matching and present centimeter-level errors over 340 meters traveling, thus demonstrating the feasibility of the proposed mapping-free localization framework. The results also show that using semantic information can help reduce localization errors in BIM.) <|cite_end|>. Not many real-world structures have complete 3D models; however, a 3D mesh can be approximately generated from a 2D floorplan for precise robot localization <|cite_start|> (Reference: Precise Robot Localization in Architectural 3D Plans: This paper presents a localization system for mobile robots enabling precise localization in inaccurate building models. The approach leverages local referencing to counteract inherent deviations between as-planned and as-built data for locally accurate registration. We further fuse a novel image-based robust outlier detector with LiDAR data to reject a wide range of outlier measurements from clutter, dynamic objects, and sensor failures. We evaluate the proposed approach on a mobile robot in a challenging real world building construction site. It consistently outperforms the traditional ICP-based alignment, reducing localization error by at least 30%.) <|cite_end|>. 3D models of real-world environments work well in scenarios where the models already exist. For scenarios that begin with a completely unknown environment, the goal of this paper is to provide an easy-to-deploy alternative solution.
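As a minimal illustration of how globally referenced waypoints can be dispatched on a ROS-based UGV, the sketch below sends 2D goals through the standard move_base action interface; the node name and waypoint values are illustrative placeholders, and this listing is not the released package described in the contributions below.
\begin{verbatim}
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

# (x, y, yaw) goals expressed in the fixed "map" frame of the prior
# map, so every session navigates to the same global locations.
WAYPOINTS = [(1.0, 0.5, 0.0), (3.2, -1.1, 1.57)]  # placeholder values

def main():
    rospy.init_node("waypoint_distributor")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y, yaw in WAYPOINTS:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"  # global reference frame
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
        goal.target_pose.pose.orientation.x = qx
        goal.target_pose.pose.orientation.y = qy
        goal.target_pose.pose.orientation.z = qz
        goal.target_pose.pose.orientation.w = qw
        client.send_goal(goal)
        client.wait_for_result()

if __name__ == "__main__":
    main()
\end{verbatim}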
The main contributions of this work are: \begin{itemize} \item Simple to use and quick to implement prior map localization with globally referenced waypoints, for repeated mobile robot navigation tasks. \item An autonomous waypoint distributor package publishing 2D waypoint locations for repeatable navigation. \item A 3D localization package robust enough to handle sources of error caused by differences in current LiDAR data and global reference maps, due to dynamic objects, displaced static objects and occlusions. \item A recommended package for building a global reference map quickly and accurately. \end{itemize} This unique framework will enable researchers to minimize time spent building custom solutions for repeated waypoint navigation tasks. We hope our implementation can be a stepping stone for work addressing highly complex tasks. <|paper_end|>
[ "<|reference_start|> {{ROS: 小鼠体内发育的囊胚、扩张囊胚和从1-细胞期取出在CzB培养液中培养的囊胚、扩张囊胚的染色体数目分别是39.00,38.43和42.47,39.56.经统计分析,体内与体外CzB培养的染色体数目无显著性差异.体外CZB培养的不同时期添加外源性H2O2,发育至囊胚、扩张囊胚阶段,制作胚胎标本,观察胚胎的染色体数目.结果显示:扩张囊胚的0~72 h组的染色体数目与对照组的染色体数目有显著性差异(P<0.05),而其它各发育阶段的各处理组染色体数目之间均无显著性差异.说明经H2O2处理之后存活下来的胚胎与正常体外发育的胚胎染色体数目没有差异. <|reference_end|>", "<|reference_start|> RO-LOAM: 3D Reference Object-based Trajectory and Map Optimization in LiDAR Odometry and Mapping: We propose an extension to the LiDAR Odometry and Mapping framework (LOAM) that enables reference object-based trajectory and map optimization. Our approach assumes that the location and geometry of a large reference object are known, e.g., as a CAD model from Building Information Modeling (BIM) or a previously captured dense point cloud model. We do not expect the reference object to be present in every LiDAR scan. Our approach uses the poses of the LOAM algorithm as an initial guess to refine them with scan-to-model alignment. To evaluate if the alignment was accurate, an EKF-based motion prior filtering step is employed. Subsequently, the past trajectory is optimized by adding the model-aligned pose as a pose graph constraint and the map of the LOAM algorithm is corrected to improve future localization and mapping. We evaluate our approach with data captured in a visual airplane inspection scenario inside an aircraft hangar. A 3D LiDAR sensor is mounted via a gimbal on an Unmanned Aerial Vehicle (UAV) and is continuously actuated. We compare the localization accuracy of the LOAM and R-LOAM algorithms when enabling or disabling our proposed reference object-based trajectory and map optimization extension. For three recorded datasets, enabling the proposed extension yields a reduction in Absolute Pose Error compared to conventional LOAM and R-LOAM, while being able to run online. This reduces drift and improves map quality. <|reference_end|>", "<|reference_start|> Online Global Loop Closure Detection for Large-Scale Multi-Session Graph-Based SLAM: For large-scale and long-term simultaneous localization and mapping (SLAM), a robot has to deal with unknown initial positioning caused by either the kidnapped robot problem or multi-session mapping. This paper addresses these problems by tying the SLAM system with a global loop closure detection approach, which intrinsically handles these situations. However, online processing for global loop closure detection approaches is generally influenced by the size of the environment. The proposed graph-based SLAM system uses a memory management approach that only consider portions of the map to satisfy online processing requirements. The approach is tested and demonstrated using five indoor mapping sessions of a building using a robot equipped with a laser rangefinder and a Kinect. <|reference_end|>", "<|reference_start|> Navigation without localisation: reliable teach and repeat based on the convergence theorem: We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model, which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Rather than that, a mobile robot which repeats a previously taught path can simply `replay' the learned velocities, while using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot, which traverses a taught path by only correcting its heading. 
Then, we outline a mathematical proof which shows that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, it does not require camera calibration, and it can learn and autonomously traverse arbitrarily-shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav. <|reference_end|>" ]
[ 0, 5, 6, 13 ]
{"<|cite_1|>": "ss-986679", "<|multi_cite_2_1|>": "ss-851275", "<|cite_3|>": "ss-2003430", "<|cite_4|>": "ss-1206169", "<|cite_5|>": "arxiv-256268", "<|cite_6|>": "ss-1736722", "<|cite_7|>": "arxiv-641291", "<|cite_8|>": "ss-1038894", "<|cite_9|>": "ss-1473930", "<|cite_10|>": "ss-1188744", "<|cite_11|>": "arxiv-382404", "<|cite_12|>": "ss-1736723", "<|multi_cite_13_1|>": "arxiv-392057", "<|multi_cite_13_2|>": "arxiv-140101", "<|multi_cite_14_1|>": "ss-845046", "<|multi_cite_14_2|>": "ss-1092160", "<|cite_15|>": "ss-1736724", "<|cite_16|>": "arxiv-270504"}
1307.8083
<|paper_start|> Title: TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes Abstract: TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes: Our paper presents solutions using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller segments) together to significantly improve the delay performance of uploading and downloading data in and out of cloud storage. TOFEC is a strategy that helps front-end proxy adapt to level of workload by treating scalable cloud storage (e.g. Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks with increased size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and preventing queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests. Introduction \label{sec:intro} Cloud storage has been gaining popularity rapidly as an economic, flexible and reliable data storage service that many cloud-based applications nowadays are implemented on. Typical cloud storage systems are implemented as key-value stores in which data objects are stored and retrieved via their unique keys. To provide high degree of availability, scalability, and data durability, each object is replicated several times within the internal distributed file system and sometimes also further protected by erasure codes to more efficiently use the storage capacity while attaining very high durability guarantees <|cite_start|> (Reference: Erasure Coding in {Windows Azure Storage}: Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere, at any time, and only pay for what they use and store. To provide durability for that data and to keep the cost of storage low, WAS uses erasure coding. In this paper we introduce a new set of codes for erasure coding called Local Reconstruction Codes (LRC). LRC reduces the number of erasure coding fragments that need to be read when reconstructing data fragments that are offline, while still keeping the storage overhead low. The important benefits of LRC are that it reduces the bandwidth and I/Os required for repair reads over prior codes, while still allowing a significant reduction in storage overhead. We describe how LRC is used in WAS to provide low overhead durable storage with consistently low read latencies.) <|cite_end|>. Cloud storage providers usually implement a variety of optimization mechanisms such as load balancing and caching/prefetching internally to improve performance. 
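As a brief aside on the erasure-coding arithmetic above (a standard illustration, with parameters chosen for exposition rather than taken from any particular production system): an $(n,k)$ MDS code stores $n$ coded fragments of which any $k$ suffice to reconstruct the object, so it tolerates any $n-k$ simultaneous losses at a storage overhead of $n/k$. A $(9,6)$ code, for instance, survives $3$ losses while storing only $1.5\times$ the original data, whereas $3\times$ replication survives just $2$ losses at twice that overhead.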
Despite all such efforts, evaluations of large-scale systems still indicate that there is a high degree of randomness in delay performance <|cite_start|> (Reference: An evaluation of Amazon's grid computing services: EC2, S3 and SQS: Amazon.com's Elastic Compute Cloud (EC2), Simple Storage Service (S3) and Simple Queue Service (SQS) offer enterprise-class computing, storage and coordination facilities to any organization or individual in the world with a valid credit card. This paper details our experience working with these commodity grid computing services between November 2006 and May 2007, including an analysis of the overall system's API and ease-of-use; an analysis of EC2's management and security facilities; an end-to-end performance analysis of S3's throughput and latency as observed from Amazon's EC2 cluster and other locations on the Internet; and an analysis of the SQS operation and performance. We conclude with a report of our experience moving a large-scale research application from dedicated hardware to the Amazon offering. We find that this collection of Amazon Web Services (AWS) has great promise but are hobbled by service consistency problems, the lack of a Service Level Agreement (SLA), and a problematic Web Services Licensing Agreement (WSLA).) <|cite_end|>. Thus, services that require more robust and predictable Quality of Service (QoS) must deploy their own external solutions, such as sending multiple/redundant requests (in parallel or sequentially), chunking large objects into smaller ones and reading/writing each chunk through parallel connections, replicating the same object using multiple distinct keys, etc. In this paper, we present \ourproposal ~-- a strategy that can provide much better throughput-delay performance for file access on cloud storage by utilizing erasure coding. Although we base our analysis and evaluation on the Amazon S3 service and present \ourproposal as an external solution, \ourproposal can be applied to many other cloud storage systems, both externally and internally, with small modifications. \begin{figure}[!t] \centering \includegraphics[width = \onewidth]{fixedDelays} \vspace{-7pt} \caption{Delay for downloading 3MB files using fixed MDS codes} \label{fig:fixedDelays} \vspace{\shrinkaftercaption} \end{figure} \subsection{State of the Art} Among the vast amount of research on improving cloud storage systems' delay performance that has emerged in the past few years, two groups are particularly closely related to the work presented in this paper: {\bf Erasure Coding with Redundant Requests:} As proposed by the authors of <|cite_start|> (Reference: FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x).
However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.) <|cite_end|> <|cite_start|> (Reference: Codes Can Reduce Queueing Delay in Data Centers: In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single content, the methodology developed can be applied to more general settings with multiple contents. The results also offer insightful guidance to the design of storage systems in data centers and content distribution networks.) <|cite_end|> <|cite_start|> (Reference: The mds queue: Analysing latency performance of codes and redundant requests: In order to scale economically, data centers are increasingly evolving their data storage methods from the use of simple data replication to the use of more powerful erasure codes, which provide the same level of reliability as replication-based methods at a significantly lower storage cost. In particular, it is well known that MaximumDistance-Separable (MDS) codes, such as Reed-Solomon codes, provide the maximum storage efficiency. While the use of codes for providing improved reliability in archival storage systems, where the data is less frequently accessed (or so-called “cold data”), is well understood, the role of codes in the storage of more frequently accessed and active “hot data”, where latency is the key metric, is less clear. In this paper, we study data storage systems based on MDS codes through the lens of queueing theory, and term this the “MDS queue.” We analytically characterize the latency performance of MDS queues, for which we present insightful scheduling policies that form upper and lower bounds to performance, and show that they are quite tight. Extensive simulations using Monte Carlo methods are also provided and used to validate our theoretical analysis. 
As a side note, our lower-bound analytical method based on the so-called MDS-Reservation(t) queue, represents an elegant practical scheme that requires the maintenance of considerably smaller state, depending on the parameter t, than that of the full-fledged MDS queue (which corresponds to t =∞), and may be of independent interest in practical systems. Comparisons with replication-based systems reveal that codes provide a superior latency-performance (by up to 70%) than replication. The second part of the paper considers an alternative method of (potentially) reducing latency in data centers, that of sending redundant requests. Here, a request is sent to more servers than required, and is deemed served when any requisite number of servers complete service. Several recent works provide empirical evidence of the benefits of redundant requests in various settings, and in this paper, we aim to analytically characterize the situations when can redundant requests actually help. We show that under the MDS queue model (with exponential service times and negligible costs of cancelling jobs), in a replication-based system, the average latency strictly reduces with more redundancy in the requests, and that under a general MDS code, the average latency is minimized when requests are sent to all servers. To the best of our knowledge, these are the first analytical results that prove the benefits of sending redundant requests.) <|cite_end|>, files are divided into a {\em pre-determined} number $k$ of chunks, each of which is $1/k$ the size of the original file, and encoded into $n>k$ ``coded chunks'' using an $(n,k)$ Forward Error Correction (FEC) code or, more generally, a Maximum Distance Separable (MDS) code. Downloading/uploading of the original file is accomplished by downloading/uploading $n$ coded chunks using parallel connections simultaneously, and the request is deemed served when the download/upload of any $k$ coded chunks completes. Such mechanisms significantly improve the delay performance under light workload. However, as shown in our previous work <|cite_start|> (Reference: FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds.
The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.) <|cite_end|> and later reconfirmed by <|cite_start|> (Reference: The mds queue: Analysing latency performance of codes and redundant requests: In order to scale economically, data centers are increasingly evolving their data storage methods from the use of simple data replication to the use of more powerful erasure codes, which provide the same level of reliability as replication-based methods at a significantly lower storage cost. In particular, it is well known that MaximumDistance-Separable (MDS) codes, such as Reed-Solomon codes, provide the maximum storage efficiency. While the use of codes for providing improved reliability in archival storage systems, where the data is less frequently accessed (or so-called “cold data”), is well understood, the role of codes in the storage of more frequently accessed and active “hot data”, where latency is the key metric, is less clear. In this paper, we study data storage systems based on MDS codes through the lens of queueing theory, and term this the “MDS queue.” We analytically characterize the latency performance of MDS queues, for which we present insightful scheduling policies that form upper and lower bounds to performance, and show that they are quite tight. Extensive simulations using Monte Carlo methods are also provided and used to validate our theoretical analysis. As a side note, our lower-bound analytical method based on the so-called MDS-Reservation(t) queue, represents an elegant practical scheme that requires the maintenance of considerably smaller state, depending on the parameter t, than that of the full-fledged MDS queue (which corresponds to t =∞), and may be of independent interest in practical systems. Comparisons with replication-based systems reveal that codes provide a superior latency-performance (by up to 70%) than replication. The second part of the paper considers an alternative method of (potentially) reducing latency in data centers, that of sending redundant requests. Here, a request is sent to more servers than required, and is deemed served when any requisite number of servers complete service. Several recent works provide empirical evidence of the benefits of redundant requests in various settings, and in this paper, we aim to analytically characterize the situations when can redundant requests actually help. We show that under the MDS queue model (with exponential service times and negligible costs of cancelling jobs), in a replication-based system, the average latency strictly reduces with more redundancy in the requests, and that under a general MDS code, the average latency is minimized when requests are sent to all servers. To the best of our knowledge, these are the first analytical results that prove the benefits of sending redundant requests.) <|cite_end|>, system capacity is reduced due to the overhead for using smaller chunks and redundant requests. 
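To see where both effects come from, consider an intentionally simplified model (ours, for exposition; the cited works rely on measured traces and queueing analysis). If each chunk transfer time were an independent exponential with rate $\mu$, the service time of an $(n,k)$ request would be the $k$-th order statistic of $n$ i.i.d. exponentials, with mean $\mathbb{E}\left[T_{(k:n)}\right] = \frac{1}{\mu}\sum_{i=n-k+1}^{n} \frac{1}{i}$, which shrinks as $n$ grows beyond $k$, while the work issued per request grows as $n/k$ chunks and thus erodes capacity. The sketch below makes this trade-off concrete under a shifted-exponential per-chunk delay with a fixed startup overhead; all parameter values are illustrative, not S3 measurements.
\begin{verbatim}
import random

def chunk_delay(chunk_mb, delta=0.02, tau_per_mb=0.05):
    # Toy per-chunk delay: fixed startup overhead plus an exponential
    # transmission tail scaling with chunk size (illustrative values).
    return delta + random.expovariate(1.0 / (tau_per_mb * chunk_mb))

def mds_service_delay(file_mb, n, k):
    # An (n,k) request launches n parallel transfers of size file_mb/k
    # each and completes when the k-th fastest chunk finishes.
    delays = sorted(chunk_delay(file_mb / k) for _ in range(n))
    return delays[k - 1]

def mean_delay(file_mb, n, k, trials=50000):
    return sum(mds_service_delay(file_mb, n, k)
               for _ in range(trials)) / trials

if __name__ == "__main__":
    for n, k in [(1, 1), (4, 2), (6, 3)]:
        print("(%d,%d): mean delay %.4f s, work per request %.2fx"
              % (n, k, mean_delay(3, n, k), n / k))
\end{verbatim}
Under this toy model, the $(6,3)$ code substantially reduces mean service delay relative to $(1,1)$ while issuing twice as many chunk transfers per request, qualitatively matching the light-load gain and capacity loss visible in Fig.\ref{fig:fixedDelays}.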
This phenomenon is illustrated in Fig.\ref{fig:fixedDelays}, where we plot the delay-throughput trade-off of different MDS codes from our simulations using delay traces collected on Amazon S3. Codes with different $k$ are grouped in different colors. Using a code with a high level of chunking and redundancy, in this case a $(6,3)$ code, delivers a $2\times$ gain in delay at light workloads but reduces system capacity to only $30\%$ of that of the basic strategy without chunking or redundancy, i.e., the $(1,1)$ code! This problem is partially addressed in <|cite_start|> (Reference: FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.) <|cite_end|> where we present strategies that adjust $n$ according to the workload level so as to achieve the near-optimal throughput-delay trade-off for the {\em predetermined} $k$. For example, if $k=3$ is used, the strategies in <|cite_start|> (Reference: FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys.
We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.) <|cite_end|> will achieve the lower envelope of the red curves in Fig.\ref{fig:fixedDelays}. Yet, it still suffers from an almost 60\% loss in system capacity. {\bf Dynamic Job Sizing:} It has been observed in <|cite_start|> (Reference: An evaluation of Amazon’s grid computing services: EC2, S3 and SQS: Amazon.com’s Elastic Compute Cloud (EC2), Simple Storage Service (S3) and Simple Queue Service (SQS) offer enterprise-class computing, storage and coordination facilities to any organization or individual in the world with a valid credit card. This paper details our experience working with these commodity grid computing services between November 2006 and May 2007, including an analysis of the overall system’s API and ease-of-use; an analysis of EC2’s management and security facilities; an end-to-end performance analysis of S3’s throughput and latency as observed from Amazon’s EC2 cluster and other locations on the Internet; and an analysis of the SQS operation and performance. We conclude with a report of our experience moving a large-scale research application from dedicated hardware to the Amazon offering. We find that this collection of Amazon Web Services (AWS) has great promise but are hobbled by service consistency problems, the lack of a Service Level Agreement (SLA), and a problematic Web Services Licensing Agreement (WSLA).) <|cite_end|> <|cite_start|> (Reference: Stout: an adaptive interface to scalable cloud storage: Many of today's applications are delivered as scalable, multi-tier services deployed in large data centers. These services frequently leverage shared, scale-out, key-value storage layers that can deliver low latency under light workloads, but may exhibit significant queuing delay and even dropped requests under high load. Stout is a system that helps these applications adapt to variation in storage-layer performance by treating scalable key-value storage as a shared resource requiring congestion control.
Under light workloads, applications using Stout send requests to the store immediately, minimizing delay. Under heavy workloads, Stout automatically batches the application's requests together before sending them to the store, resulting in higher throughput and preventing queuing delay. We show experimentally that Stout's adaptation algorithm converges to an appropriate batch size for workloads that require the batch size to vary by over two orders of magnitude. Compared to a non-adaptive strategy optimized for throughput, Stout delivers over 34× lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, Stout can scale to over 3× as many requests.) <|cite_end|> that in key-value storage systems such as Amazon S3 and Microsoft's Azure Storage, throughput is dramatically higher when they receive a small number of storage access requests for large jobs (or objects) than when they receive a large number of requests for small jobs (or objects), because each storage request incurs overheads such as networking delay, protocol processing, lock acquisitions, transaction log commits, etc. Authors of <|cite_start|> (Reference: Stout: an adaptive interface to scalable cloud storage: Many of today's applications are delivered as scalable, multi-tier services deployed in large data centers. These services frequently leverage shared, scale-out, key-value storage layers that can deliver low latency under light workloads, but may exhibit significant queuing delay and even dropped requests under high load. Stout is a system that helps these applications adapt to variation in storage-layer performance by treating scalable key-value storage as a shared resource requiring congestion control. Under light workloads, applications using Stout send requests to the store immediately, minimizing delay. Under heavy workloads, Stout automatically batches the application's requests together before sending them to the store, resulting in higher throughput and preventing queuing delay. We show experimentally that Stout's adaptation algorithm converges to an appropriate batch size for workloads that require the batch size to vary by over two orders of magnitude. Compared to a non-adaptive strategy optimized for throughput, Stout delivers over 34× lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, Stout can scale to over 3× as many requests.) <|cite_end|> developed Stout, in which requests are dynamically batched to improve the throughput-delay trade-off of key-value storage systems. Based on the observed congestion, Stout increases or decreases the batch size. Thus, at high congestion, a larger batch size is used to improve the throughput, while at low congestion a smaller batch size is adopted to reduce the delay. \subsection{Main Contribution} We introduce an adaptive strategy for accessing cloud storage systems via erasure coding, called \ourproposal (Throughput Optimal FEC Cloud), that implements dynamic adjustment of chunking and redundancy levels to provide the optimal throughput-delay trade-off. In other words, \ourproposal achieves the lower envelope of the curves in all colors in Fig.\ref{fig:fixedDelays}. The primary novelty of \ourproposal is its backlog-based adaptive algorithm for dynamically adjusting the chunk size as well as the number of redundant requests issued to fulfill storage access requests. This algorithm of variable chunk sizing can be viewed as a novel integration of prior observations from the two bodies of work discussed above; a simplified sketch of the adaptation loop is given below.
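The following is a simplified sketch of the kind of backlog-driven code selection \ourproposal performs. The candidate code set and the activation thresholds below are hypothetical placeholders chosen for illustration, not the off-line computed thresholds of the actual system, which additionally varies the chunk size.
\begin{verbatim}
# Candidate codes, ordered from cheapest (highest capacity) to most
# aggressive (lowest light-load delay). The (n, k) pairs and backlog
# thresholds are illustrative placeholders only.
CODES = [
    # (n, k, min_backlog): usable while backlog >= min_backlog
    (1, 1, 8),   # no chunking or redundancy: maximum capacity
    (4, 3, 3),   # moderate chunking and redundancy
    (6, 3, 0),   # aggressive redundancy: best delay, lowest capacity
]

def pick_code(backlog):
    """Return the lowest-delay (n, k) whose threshold is met: a long
    queue forces the lean (1,1) strategy, while an idle queue permits
    the delay-optimal (6,3) strategy."""
    for n, k, min_backlog in CODES:
        if backlog >= min_backlog:
            return n, k
    return CODES[-1][0], CODES[-1][1]  # unreachable: last threshold is 0

# The chosen code shifts with the instantaneous backlog:
for backlog in (0, 2, 5, 12):
    print(backlog, "->", pick_code(backlog))
\end{verbatim}
Because the decision uses only the locally observed backlog, such a loop can run entirely at the front-end storage client, which is what makes the scheme deployable against unmodified cloud storage back-ends.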
Based on the observed backlog level as an indicator of the workload, \ourproposal increases or reduces the chunk size, as well as the number of redundant requests. In our trace-driven simulation evaluation, we demonstrate that: (1) \ourproposal successfully adapts to the full range of workloads, delivering $3\times$ lower average delay than the basic static strategy without chunking under light workloads, and under heavy workloads over $3\times$ the throughput of a static strategy with high chunking and redundancy levels optimized for service delay; and (2) \ourproposal provides good QoS guarantees as it delivers low delay variations. \ourproposal works without any explicit information from the back-end cloud storage implementation: its adaptation strategy is implemented solely at the front-end application server (the storage client) and is based exclusively on the measured latency from unmodified cloud storage systems. This allows \ourproposal to be more easily deployed, as individual cloud applications can adopt \ourproposal without being tied to any particular cloud storage system, as long as a small number of APIs are provided by the storage system. Related Work \label{sec:related} FEC in connection with multiple paths and/or multiple servers is a well-investigated topic in the literature <|cite_start|> (Reference: MPLOT: A transport protocol exploiting multipath diversity using erasure codes: In this paper, we propose a novel transport protocol that effectively utilizes available bandwidth and diversity gains provided by heterogeneous, highly lossy paths. Our Multi-Path LOss-Tolerant (MPLOT) protocol can be used to provide significant gains in the goodput of wireless mesh networks, subject to bursty, correlated losses with average loss-rates as high as 50%, and random outage events. MPLOT makes intelligent use of erasure codes to guard against packets losses, and a Hybrid-ARQ/FEC scheme to reduce packet recovery latency, where the redundancy is adaptively provisioned into both proactive and reactive FECs. MPLOT uses dynamic packet mapping based on current path characteristics, and does not require packets to be delivered in sequence to ensure reliability. We present a theoretical analysis of the different design choices of MPLOT and show that MPLOT makes an optimal trade-off between goodput and delay constraints. We test MPLOT, through simulations, under a variety of test scenarios and show that it effectively exploits path diversity in addition to aggregating path bandwidths. We also show that MPLOT is fair to single-path protocols like TCP-SACK.) <|cite_end|> <|cite_start|> (Reference: Fault-Tolerant Real-Time Streaming with FEC thanks to Capillary Multi-Path Routing: Erasure resilient FEC codes in off-line packetized streaming rely on time diversity. This requires unrestricted buffering time at the receiver. In real-time streaming the playback buffering time must be very short. Path diversity is an orthogonal strategy. However, the large number of long paths increases the number of underlying links and consecutively the overall link failure rate. This may increase the overall requirement in redundant FEC packets for combating the link failures. We introduce the Redundancy Overall Requirement (ROR) metric, a routing coefficient specifying the total number of FEC packets required for compensation of all underlying link failures. We present a capillary routing algorithm for constructing layer by layer steadily diversifying multi-path routing patterns.
By measuring the ROR coefficients of a dozen of routing layers on hundreds of network samples, we show that the number of required FEC packets decreases substantially when the path diversity is increased by the capillary routing construction algorithm.) <|cite_end|> <|cite_start|> (Reference: Accessing multiple mirror sites in parallel: Using tornado codes to speed up downloads: Mirror sites enable client requests to be serviced by any of a number of servers, reducing load at individual servers and dispersing network load. Typically, a client requests service from a single mirror site. We consider enabling a client to access a file from multiple mirror sites in parallel to speed up the download. To eliminate complex client-server negotiations that a straightforward implementation of this approach would require, we develop a feedback-free protocol based on erasure codes. We demonstrate that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. This scalable solution extends naturally to allow multiple clients to access data from multiple mirror sites simultaneously. The approach applies naturally to wireless networks and satellite networks as well.) <|cite_end|> <|cite_start|> (Reference: Evaluating Forward Error Correction performance in BitTorrent protocol: BitTorrent is probably the most famous file-sharing protocol used in the Internet currently. It represents more than half of the P2P traffic. Various applications are using BitTorrent-like protocols to deliver the resource and implement techniques to perform a reliable data transmission. Forward Error Correction (FEC) is an efficient mechanism used for this goal. This paper proposes a performance evaluation of FEC implemented on BitTorrent protocol. A simulation framework has been developed to evaluate the improvement depending on many factors like the leeches/seeds number and capacities, the network nature (homogeneous or heterogeneous), the resource size, and the FEC redundancy ratio. The completion time metric shows that FEC is a method that accelerates the data access in some specific network configurations. On the contrary, this technique can also disrupt the system in some cases since it introduces an overhead.) <|cite_end|>. However, very little attention has been devoted to queueing delays in this line of work. FEC in the context of network coding or coded scheduling has also been a popular topic from the perspectives of throughput (or network utility) maximization and throughput vs. service delay trade-offs <|cite_start|> (Reference: On the Delay and Throughput Gains of Coding in Unreliable Networks: In an unreliable packet network setting, we study the performance gains of optimal transmission strategies in the presence and absence of coding capability at the transmitter, where performance is measured in delay and throughput. Although our results apply to a large class of coding strategies including maximum-distance separable (MDS) and Digital Fountain codes, we use random network codes in our discussions because these codes have a greater applicability for complex network topologies. To that end, after introducing a key setting in which performance analysis and comparison can be carried out, we provide closed-form as well as asymptotic expressions for the delay performance with and without network coding.
We show that the network coding capability can lead to arbitrarily better delay performance as the system parameters scale when compared to traditional transmission strategies without coding. We further develop a joint scheduling and random-access scheme to extend our results to general wireless network topologies.) <|cite_end|> <|cite_start|> (Reference: Minimizing delay for multicast-streaming in wireless networks with network coding: Network coding is a method that promises to achieve the min-cut capacity in multicasts. However, pushing towards this gain in throughput comes with two sacrifices. Delay suffers as the decoding procedure requires buffering and is performed in batches of coded packets, and unfairness prevails in terms of delay increases between receivers with worse channel conditions and those with better channel conditions. In this paper, we focus on optimizing the delay performance in reliably multicasting a data stream to a set of one-hop receivers from the receiver perspective. We analyze the system based on queueing theory using semi-Markov chains from both the system-wide and receiver perspectives. We find that the average delay per received packet at the receivers' end can be minimized by appropriate scheduling of data packets and appropriate size of the coding buffer, which depends on the rate of incoming data stream and capacities of the receivers. To circumvent unduly computational complexities, we design a heuristic scheme which can achieve significant performance gain when compared to an existing method. Our scheme readily adapts the coding size to the dynamics of the system, and schedules data packets to be coded via some strict priority measure for optimized delay performance. We show through extensive simulations that our scheme gives low average delay at high streaming rates and narrows the performance gap between receivers with bad and good channel conditions.) <|cite_end|> <|cite_start|> (Reference: On the Delay of Network Coding over Line Networks: We analyze a simple network where a source and a receiver are connected by a line of erasure channels of different reliabilities. Recent prior work has shown that random linear network coding can achieve the min-cut capacity and therefore the asymptotic rate is determined by the worst link of the line network. In this paper we investigate the delay for transmitting a batch of packets, which is a function of all the erasure probabilities and the number of packets in the batch. We show a monotonicity result on the delay function and derive simple expressions which characterize the expected delay behavior of line networks. Further, we use a martingale bounded differences argument to show that the actual delay is tightly concentrated around its expectation.) <|cite_end|> <|cite_start|> (Reference: On the throughput capacity of opportunistic multicasting with erasure codes: In this paper, we concentrate on opportunistic scheduling for multicast information. We pose the problem as a multicast throughput optimization problem. As a solution we present how one can jointly utilize fixed-rate and rateless erasure coding along with simple rate adaptation techniques in order to achieve the optimal multicast throughput per user. We first investigate the performance of the proposed system under i.i.d. channel conditions. Our analysis shows a linear gain for the multicast capacity over i.i.d. Rayleigh fading channels with respect to the number of users. 
Since the established results require coding over large number of blocks and hence induce large decoding delays, we extend our analysis to the cases where we code over shorter block lengths and thus quantify the delay-capacity tradeoffs under a simple setting. We further look into non-i.i.d. channel conditions and show achievable gains by modifying a scheduling heuristic whose fairness is well-established for opportunistic scheduling of unicast flows. Our overall evaluations demonstrate that under both i.i.d. and non-i.i.d. channel conditions, opportunistic multicasting with erasure coding can significantly improve the performance over the traditional techniques used in today's communication systems.) <|cite_end|>. Although some incorporate queueing delay analysis, the treatment is largely for broadcast wireless channels with quite different system characteristics and constraints. FEC has also been extensively studied in the context of distributed storage, from the standpoint of achieving high durability and availability while attaining high storage efficiency <|cite_start|> (Reference: Network Coding for Distributed Storage Systems: Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download \emph{functions} of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.) <|cite_end|> <|cite_start|> (Reference: High Availability in DHTs: Erasure Coding vs. Replication: ) <|cite_end|> <|cite_start|> (Reference: Tree-structured Data Regeneration in Distributed Storage Systems with Regenerating Codes: Distributed storage systems provide large-scale reliable data storage by storing a certain degree of redundancy in a decentralized fashion on a group of storage nodes. To recover from data losses due to the instability of these nodes, whenever a node leaves the system, additional redundancy should be regenerated to compensate such losses. In this context, the general objective is to minimize the volume of actual network traffic caused by such regenerations. A class of codes, called regenerating codes, has been proposed to achieve an optimal trade-off curve between the amount of storage space required for storing redundancy and the network traffic during the regeneration.
In this paper, we jointly consider the choices of regenerating codes and network topologies. We propose a new design, referred to as RCTREE, that combines the advantage of regenerating codes with a tree-structured regeneration topology. Our focus is the efficient utilization of network links, in addition to the reduction of the regeneration traffic. With the extensive analysis and quantitative evaluations, we show that RCTREE is able to achieve a both fast and stable regeneration, even with departures of storage nodes during the regeneration.) <|cite_end|>. Authors of <|cite_start|> (Reference: Codes Can Reduce Queueing Delay in Data Centers: In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single content, the methodology developed can be applied to more general settings with multiple contents. The results also offer insightful guidance to the design of storage systems in data centers and content distribution networks.) <|cite_end|> conducted a theoretical study of cloud storage systems using FEC in a fashion similar to our work <|cite_start|> (Reference: FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.) <|cite_end|>.
Given that exact mathematical analysis of the general case is very difficult, authors of <|cite_start|> (Reference: Codes Can Reduce Queueing Delay in Data Centers: In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single content, the methodology developed can be applied to more general settings with multiple contents. The results also offer insightful guidance to the design of storage systems in data centers and content distribution networks.) <|cite_end|> considered a very simple case with a fixed code of $k=2$ tasks. Shah et al. <|cite_start|> (Reference: The mds queue: Analysing latency performance of codes and redundant requests: In order to scale economically, data centers are increasingly evolving their data storage methods from the use of simple data replication to the use of more powerful erasure codes, which provide the same level of reliability as replication-based methods at a significantly lower storage cost. In particular, it is well known that MaximumDistance-Separable (MDS) codes, such as Reed-Solomon codes, provide the maximum storage efficiency. While the use of codes for providing improved reliability in archival storage systems, where the data is less frequently accessed (or so-called “cold data”), is well understood, the role of codes in the storage of more frequently accessed and active “hot data”, where latency is the key metric, is less clear. In this paper, we study data storage systems based on MDS codes through the lens of queueing theory, and term this the “MDS queue.” We analytically characterize the latency performance of MDS queues, for which we present insightful scheduling policies that form upper and lower bounds to performance, and show that they are quite tight. Extensive simulations using Monte Carlo methods are also provided and used to validate our theoretical analysis. As a side note, our lower-bound analytical method based on the so-called MDS-Reservation(t) queue, represents an elegant practical scheme that requires the maintenance of considerably smaller state, depending on the parameter t, than that of the full-fledged MDS queue (which corresponds to t =∞), and may be of independent interest in practical systems. Comparisons with replication-based systems reveal that codes provide a superior latency-performance (by up to 70%) than replication. The second part of the paper considers an alternative method of (potentially) reducing latency in data centers, that of sending redundant requests. Here, a request is sent to more servers than required, and is deemed served when any requisite number of servers complete service. Several recent works provide empirical evidence of the benefits of redundant requests in various settings, and in this paper, we aim to analytically characterize the situations when can redundant requests actually help. We show that under the MDS queue model (with exponential service times and negligible costs of cancelling jobs), in a replication-based system, the average latency strictly reduces with more redundancy in the requests, and that under a general MDS code, the average latency is minimized when requests are sent to all servers. 
To the best of our knowledge, these are the first analytical results that prove the benefits of sending redundant requests.) <|cite_end|> generalize the results from <|cite_start|> (Reference: Codes Can Reduce Queueing Delay in Data Centers: In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single content, the methodology developed can be applied to more general settings with multiple contents. The results also offer insightful guidance to the design of storage systems in data centers and content distribution networks.) <|cite_end|> to $k>2$. Both works rely on the assumption of exponential task delays, which hardly captures reality. Therefore, some of their theoretical results cannot be applied in practice. For example, under the assumption of exponential task delays, Shah et al. proved that using a larger $n$ will not reduce system capacity and will always improve delay, contradicting simulation results using real-world measurements in <|cite_start|> (Reference: FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.) <|cite_end|> and this paper. <|paper_end|>
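The gap between the exponential model and measured behavior is easy to see with a toy calculation (ours, for illustration; it is not taken from either paper): with memoryless task times and free cancellation, redundant copies consume no extra expected server time, but adding a fixed startup overhead of the kind visible in real traces makes the consumed work, and hence the capacity loss, grow with $n$. The constants delta and mu below are illustrative assumptions.
\begin{verbatim}
import random

def replicated_request(n, delta, mu=1.0, trials=200_000):
    """Issue n redundant copies of one request; the request finishes
    when the first copy completes and the rest are cancelled for free.
    Returns (mean latency, mean total server-seconds consumed)."""
    lat_sum = work_sum = 0.0
    for _ in range(trials):
        finish = min(delta + random.expovariate(mu) for _ in range(n))
        lat_sum += finish
        work_sum += n * finish  # all n servers busy until cancellation
    return lat_sum / trials, work_sum / trials

random.seed(1)
for delta in (0.0, 0.2):  # delta = 0 is the pure exponential model
    for n in (1, 2, 4):
        lat, work = replicated_request(n, delta)
        print(f"delta={delta:.1f} n={n}: latency={lat:.3f} work={work:.3f}")
\end{verbatim}
With delta = 0 the printed work stays at $1/\mu$ for every $n$, so redundancy looks free, matching the exponential-model conclusion; with delta = 0.2 it grows roughly as $n\delta + 1/\mu$, which is the capacity erosion that the trace-driven simulations expose.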
[ "<|reference_start|> An evaluation of AmazonÕs grid computing services: EC2, S3 and SQS: Amazon.com’s Elastic Compute Cloud (EC2), Simple Storage Service (S3) and Simple Queue Service (SQS) offer enterprise-class computing, storage and coordination facilities to any organization or individual in the world with a valid credit card. This paper details our experience working with these commodity grid computing services between November 2006 and May 2007, including an analysis of the overall system’s API and ease-of-use; an analysis of EC2’s management and security facilities; an end-to-end performance analysis of S3’s throughput and latency as observed from Amazon’s EC2 cluster and other locations on the Internet; and an analysis of the SQS operation and performance. We conclude with a report of our experience moving a large-scale research application from dedicated hardware to the Amazon offering. We find that this collection of AmazonWeb Services (AWS) has great promise but are hobbled by service consistency problems, the lack of a Service Level Agreement (SLA), and a problematic Web Services Licensing Agreement (WSLA). <|reference_end|>", "<|reference_start|> Stout: an adaptive interface to scalable cloud storage: Many of today's applications are delivered as scalable, multi-tier services deployed in large data centers. These services frequently leverage shared, scale-out, key-value storage layers that can deliver low latency under light workloads, but may exhibit significant queuing delay and even dropped requests under high load. \n \nStout is a system that helps these applications adapt to variation in storage-layer performance by treating scalable key-value storage as a shared resource requiring congestion control. Under light workloads, applications using Stout send requests to the store immediately, minimizing delay. Under heavy workloads, Stout automatically batches the application's requests together before sending them to the store, resulting in higher throughput and preventing queuing delay. We show experimentally that Stout's adaptation algorithm converges to an appropriate batch size for workloads that require the batch size to vary by over two orders of magnitude. Compared to a non-adaptive strategy optimized for throughput, Stout delivers over 34× lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, Stout can scale to over 3× as many requests. <|reference_end|>", "<|reference_start|> On the Delay of Network Coding over Line Networks: We analyze a simple network where a source and a receiver are connected by a line of erasure channels of different reliabilities. Recent prior work has shown that random linear network coding can achieve the min-cut capacity and therefore the asymptotic rate is determined by the worst link of the line network. In this paper we investigate the delay for transmitting a batch of packets, which is a function of all the erasure probabilities and the number of packets in the batch. We show a monotonicity result on the delay function and derive simple expressions which characterize the expected delay behavior of line networks. Further, we use a martingale bounded differences argument to show that the actual delay is tightly concentrated around its expectation. <|reference_end|>", "<|reference_start|> Codes Can Reduce Queueing Delay in Data Centers: In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. 
By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single content, the methodology developed can be applied to more general settings with multiple contents. The results also offer insightful guidance to the design of storage systems in data centers and content distribution networks. <|reference_end|>" ]
[ 9, 10, 18, 23 ]
{"<|cite_1|>": "ss-850255", "<|cite_2|>": "ss-970013", "<|multi_cite_3_1|>": "arxiv-39976", "<|multi_cite_3_2|>": "arxiv-28473", "<|multi_cite_3_3|>": "ss-1691004", "<|cite_4|>": "arxiv-39976", "<|cite_5|>": "ss-1691004", "<|cite_6|>": "arxiv-39976", "<|cite_7|>": "arxiv-39976", "<|multi_cite_8_1|>": "ss-970013", "<|multi_cite_8_2|>": "ss-1691005", "<|cite_9|>": "ss-1691005", "<|multi_cite_10_1|>": "ss-1380157", "<|multi_cite_10_2|>": "ss-1691006", "<|multi_cite_10_3|>": "ss-1157928", "<|multi_cite_10_4|>": "ss-1691007", "<|multi_cite_11_1|>": "ss-1647532", "<|multi_cite_11_2|>": "ss-1144050", "<|multi_cite_11_3|>": "arxiv-9653", "<|multi_cite_11_4|>": "ss-1691008", "<|multi_cite_12_1|>": "arxiv-2961", "<|multi_cite_12_2|>": "ss-1000058", "<|multi_cite_12_3|>": "ss-1693210", "<|cite_13|>": "arxiv-28473", "<|cite_14|>": "arxiv-39976", "<|cite_15|>": "arxiv-28473", "<|cite_16|>": "ss-1691004", "<|cite_17|>": "arxiv-28473", "<|cite_18|>": "arxiv-39976"}
1708.04073-1
<|cite_start|> (Reference: Object location using path separators: We study a novel separator property called k-path separable. Roughly speaking, a k-path separable graph can be recursively separated into smaller components by sequentially removing k shortest paths. Our main result is that every minor free weighted graph is k-path separable. We then show that k-path separable graphs can be used to solve several object location problems: (1) a small-worldization with an average poly-logarithmic number of hops; (2) an (1 + ε)-approximate distance labeling scheme with O(log n) space labels; (3) a stretch-(1 + ε) compact routing scheme with tables of poly-logarithmic space; (4) an (1 + ε)-approximate distance oracle with O(n log n) space and O(log n) query time. Our results generalizes to much wider classes of weighted graphs, namely to bounded-dimension isometric sparable graphs.) <|cite_end|>, and nearest neighbor search <|cite_start|> (Reference: Approximate nearest neighbor search in metrics of planar graphs: We investigate the problem of approximate Nearest-Neighbor Search (NNS) in graphical metrics: The task is to preprocess an edge-weighted graph G=(V,E) on m vertices and a small "dataset" D \subset V of size n << m, so that given a query point q \in V, one can quickly approximate dist(q,D) (the distance from q to its closest vertex in D) and find a vertex a \in D within this approximated distance. We assume the query algorithm has access to a distance oracle, that quickly evaluates the exact distance between any pair of vertices. For planar graphs G with maximum degree Delta, we show how to efficiently construct a compact data structure -- of size ~O(n(Delta+1/epsilon)) -- that answers (1+epsilon)-NNS queries in time ~O(Delta+1/epsilon). Thus, as far as NNS applications are concerned, metrics derived from bounded-degree planar graphs behave as low-dimensional metrics, even though planar metrics do not necessarily have a low doubling dimension, nor can they be embedded with low distortion into l_2. We complement our algorithmic result by lower bounds showing that the access to an exact distance oracle (rather than an approximate one) and the dependency on Delta (in query time) are both essential.) <|cite_end|>. However, to the best of our knowledge, this is the first time it has been used directly for low-distortion embeddings into normed spaces. In a follow-up paper, <|cite_start|> (Reference: A face cover perspective to $\ell_1$ embeddings of planar graphs: It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into $\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best upper bound on the distortion is only $O(\sqrt{\log n})$, by Rao [SoCG99]. In this paper we study the case where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\ell_1$ with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into $\ell_1$. The more general case, where the set of terminals can be covered by $\gamma$ faces, was studied by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. The state of the art is an upper bound of $O(\log \gamma)$ by Krauthgamer, Lee and Rika [SODA19]. Our contribution is a further improvement on the upper bound to $O(\sqrt{\log\gamma})$.
Since every planar graph has at most $O(n)$ faces, any further improvement on this result, will be a major breakthrough, directly improving upon Rao's long standing upper bound. Moreover, it is well known that the flow-cut gap equals to the distortion of the best embedding into $\ell_1$. Therefore, our result provides a polynomial time $O(\sqrt{\log \gamma})$-approximation to the sparsest cut problem on planar graphs, for the case where all the demand pairs can be covered by $\gamma$ faces.) <|cite_end|> (the second author) generalized our definition of \SPD to partial-\SPD (allowing the lower level in the partition hierarchy to be a general subgraph rather than only a shortest path). Given a weighted planar graph $G=(V,E,w)$ with a subset of terminals $K$, a face cover is a subset of faces such that every terminal lies on some face from the cover. Given a face cover of size $\gamma$, using our embedding result for \SPD, <|cite_start|> (Reference: A face cover perspective to $\ell_1$ embeddings of planar graphs: It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into $\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best upper bound on the distortion is only $O(\sqrt{\log n})$, by Rao [SoCG99]. In this paper we study the case where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\ell_1$ with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into $\ell_1$. The more general case, where the set of terminals can be covered by $\gamma$ faces, was studied by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. The state of the art is an upper bound of $O(\log \gamma)$ by Krauthgamer, Lee and Rika [SODA19]. Our contribution is a further improvement on the upper bound to $O(\sqrt{\log\gamma})$. Since every planar graph has at most $O(n)$ faces, any further improvement on this result, will be a major breakthrough, directly improving upon Rao's long standing upper bound. Moreover, it is well known that the flow-cut gap equals to the distortion of the best embedding into $\ell_1$. Therefore, our result provides a polynomial time $O(\sqrt{\log \gamma})$-approximation to the sparsest cut problem on planar graphs, for the case where all the demand pairs can be covered by $\gamma$ faces.) <|cite_end|> shows that the terminal set $K$ can be embedded into $\ell_1$ with distortion $O(\sqrt{\log \gamma})$. <|paper_end|>
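For readability, the cited face-cover bound can be stated compactly; the following is a paraphrase under one standard normalization (a non-expansive embedding), not a verbatim statement from the cited work:
\[
  \exists\, f : K \to \ell_1 \quad \text{such that} \quad
  \frac{d_G(u,v)}{O(\sqrt{\log \gamma})} \;\le\; \lVert f(u) - f(v) \rVert_1 \;\le\; d_G(u,v)
  \qquad \text{for all } u, v \in K,
\]
where $G$ is a weighted planar graph whose terminal set $K$ is covered by $\gamma$ faces. By the flow-cut gap correspondence mentioned in the cited abstract, this in turn gives a polynomial-time $O(\sqrt{\log \gamma})$-approximation for sparsest cut on planar instances whose demand pairs are covered by $\gamma$ faces.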
[ "<|reference_start|> Object location using path separators: We study a novel separator property called <i>k-path separable</i>. Roughly speaking, a <i>k-path separable</i> graph can be recursively separated into smaller components by sequentially removing <i>k</i> shortest paths. Our main result is that every minor free weighted graph is <i>k</i>-path separable. We then show that <i>k</i>-path separable graphs can be used to solve several object location problems: (1) a small-worldization with an average poly-logarithmic number of hops; (2) an (1 + ε)-approximate distance labeling scheme with <i>O</i>(log <i>n</i>) space labels; (3) a stretch-(1 + ε) compact routing scheme with tables of poly-logarithmic space; (4) an (1 + ε)-approximate distance oracle with <i>O</i>(<i>n</i> log <i>n</i>) space and <i>O</i>(log <i>n</i>) query time. Our results generalizes to much wider classes of weighted graphs, namely to bounded-dimension isometric sparable graphs. <|reference_end|>", "<|reference_start|> Approximate nearest neighbor search in metrics of planar graphs: We investigate the problem of approximate Nearest-Neighbor Search (NNS) in graphical metrics: The task is to preprocess an edge-weighted graph G=(V,E) on m vertices and a small \"dataset\" D \\subset V of size n << m, so that given a query point q \\in V, one can quickly approximate dist(q,D) (the distance from q to its closest vertex in D) and find a vertex a \\in D within this approximated distance. We assume the query algorithm has access to a distance oracle, that quickly evaluates the exact distance between any pair of vertices. \n \nFor planar graphs G with maximum degree Delta, we show how to efficiently construct a compact data structure -- of size ~O(n(Delta+1/epsilon)) -- that answers (1+epsilon)-NNS queries in time ~O(Delta+1/epsilon). Thus, as far as NNS applications are concerned, metrics derived from bounded-degree planar graphs behave as low-dimensional metrics, even though planar metrics do not necessarily have a low doubling dimension, nor can they be embedded with low distortion into l_2. We complement our algorithmic result by lower bounds showing that the access to an exact distance oracle (rather than an approximate one) and the dependency on Delta (in query time) are both essential. <|reference_end|>", "<|reference_start|> A face cover perspective to $\\ell_1$ embeddings of planar graphs: It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into $\\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best upper bound on the distortion is only $O(\\sqrt{\\log n})$, by Rao [SoCG99]. In this paper we study the case where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\\ell_1$ with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into $\\ell_1$. The more general case, where the set of terminals can be covered by $\\gamma$ faces, was studied by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. The state of the art is an upper bound of $O(\\log \\gamma)$ by Krauthgamer, Lee and Rika [SODA19]. Our contribution is a further improvement on the upper bound to $O(\\sqrt{\\log\\gamma})$. Since every planar graph has at most $O(n)$ faces, any further improvement on this result, will be a major breakthrough, directly improving upon Rao's long standing upper bound. 
Moreover, it is well known that the flow-cut gap equals to the distortion of the best embedding into $\\ell_1$. Therefore, our result provides a polynomial time $O(\\sqrt{\\log \\gamma})$-approximation to the sparsest cut problem on planar graphs, for the case where all the demand pairs can be covered by $\\gamma$ faces. <|reference_end|>", "<|reference_start|> A face cover perspective to $\\ell_1$ embeddings of planar graphs: It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into $\\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best upper bound on the distortion is only $O(\\sqrt{\\log n})$, by Rao [SoCG99]. In this paper we study the case where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\\ell_1$ with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into $\\ell_1$. The more general case, where the set of terminals can be covered by $\\gamma$ faces, was studied by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. The state of the art is an upper bound of $O(\\log \\gamma)$ by Krauthgamer, Lee and Rika [SODA19]. Our contribution is a further improvement on the upper bound to $O(\\sqrt{\\log\\gamma})$. Since every planar graph has at most $O(n)$ faces, any further improvement on this result, will be a major breakthrough, directly improving upon Rao's long standing upper bound. Moreover, it is well known that the flow-cut gap equals to the distortion of the best embedding into $\\ell_1$. Therefore, our result provides a polynomial time $O(\\sqrt{\\log \\gamma})$-approximation to the sparsest cut problem on planar graphs, for the case where all the demand pairs can be covered by $\\gamma$ faces. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "ss-761016", "<|cite_2|>": "arxiv-25264", "<|cite_3|>": "ss-921674", "<|cite_4|>": "ss-1377265", "<|multi_cite_5_1|>": "ss-761016", "<|cite_6|>": "ss-678997", "<|cite_8|>": "ss-1014445", "<|cite_9|>": "ss-1014446", "<|cite_10|>": "arxiv-9454", "<|cite_13|>": "ss-1513144", "<|cite_14|>": "arxiv-9454", "<|cite_16|>": "ss-761016", "<|cite_17|>": "ss-1014447", "<|cite_19|>": "arxiv-672367", "<|cite_20|>": "arxiv-76757", "<|cite_21|>": "arxiv-9454", "<|cite_22|>": "arxiv-672367", "<|cite_23|>": "arxiv-76757", "<|cite_24|>": "ss-1014448", "<|cite_25|>": "arxiv-52645", "<|cite_26|>": "arxiv-672367", "<|multi_cite_27_1|>": "ss-1014449", "<|multi_cite_27_3|>": "ss-1014450", "<|cite_28|>": "ss-1014449", "<|cite_30|>": "ss-1014450", "<|multi_cite_31_1|>": "ss-1014451", "<|multi_cite_31_2|>": "ss-1014452", "<|multi_cite_32_1|>": "ss-1014449", "<|cite_33|>": "ss-1014450", "<|multi_cite_35_1|>": "ss-1014453", "<|multi_cite_35_2|>": "ss-1014448", "<|multi_cite_35_3|>": "arxiv-672367", "<|multi_cite_35_4|>": "ss-1514614", "<|cite_36|>": "ss-1014453", "<|cite_37|>": "ss-1014448", "<|cite_38|>": "ss-1014454", "<|cite_39|>": "arxiv-52645", "<|cite_41|>": "ss-1014448", "<|cite_42|>": "arxiv-672367", "<|cite_43|>": "ss-2278498", "<|multi_cite_44_1|>": "arxiv-52645", "<|multi_cite_44_2|>": "arxiv-672367", "<|multi_cite_45_1|>": "ss-1014454", "<|multi_cite_45_2|>": "ss-811815", "<|cite_46|>": "ss-849858", "<|cite_47|>": "ss-767603", "<|cite_48|>": "ss-1511844", "<|cite_49|>": "ss-1014455", "<|cite_50|>": "arxiv-194352", "<|cite_51|>": "arxiv-194352"}
2311.09141-0
<|paper_start|> Title: Prophet Inequalities Require Only a Constant Number of Samples Abstract: Prophet Inequalities Require Only a Constant Number of Samples: In a prophet inequality problem, $n$ independent random variables are presented to a gambler one by one. The gambler decides when to stop the sequence and obtains the most recent value as reward. We evaluate a stopping rule by the worst-case ratio between its expected reward and the expectation of the maximum variable. In the classic setting, the order is fixed, and the optimal ratio is known to be 1/2. Three variants of this problem have been extensively studied: the prophet-secretary model, where variables arrive in uniformly random order; the free-order model, where the gambler chooses the arrival order; and the i.i.d. model, where the distributions are all the same, rendering the arrival order irrelevant. Most of the literature assumes that distributions are known to the gambler. Recent work has considered the question of what is achievable when the gambler has access only to a few samples per distribution. Surprisingly, in the fixed-order case, a single sample from each distribution is enough to approximate the optimal ratio, but this is not the case in any of the three variants. We provide a unified proof that for all three variants of the problem, a constant number of samples (independent of n) for each distribution is good enough to approximate the optimal ratios. Prior to our work, this was known to be the case only in the i.i.d. variant. We complement our result showing that our algorithms can be implemented in polynomial time. A key ingredient in our proof is an existential result based on a minimax argument, which states that there must exist an algorithm that attains the optimal ratio and does not rely on the knowledge of the upper tail of the distributions. A second key ingredient is a refined sample-based version of a decomposition of the instance into "small" and "large" variables, first introduced by Liu et al. [EC'21]. Introduction The \textit{Prophet Inequality} is a fundamental problem in optimal stopping theory, in which a gambler is successively presented with $n$ realizations of positive independent random variables and has to pick one of them. The gambler knows in advance the order and the distribution of each variable but upon observing each realization must decide irrevocably whether to pick it. A classic result by Krengel and Sucheston <|cite_start|> (Reference: Semiamarts and finite values: ) <|cite_end|> asserts that the gambler can get at least half of the expected maximum of the variables, and that this is the best possible guarantee that is independent of the variables' distributions. Remarkably, Samuel-Cahn <|cite_start|> (Reference: Comparison of Threshold Stop Rules and Maximum for Independent Nonnegative Random Variables: ) <|cite_end|> proved this can be achieved using a very simple rule: pick any variable that is above the median of the distribution of the maximum. In the last decade, due to its connections with mechanism design and posted price mechanisms <|cite_start|> (Reference: Automated online mechanism design and prophet inequalities: Recent work on online auctions for digital goods has explored the role of optimal stopping theory -- particularly secretary problems -- in the design of approximately optimal online mechanisms.
This work generally assumes that the size of the market (number of bidders) is known a priori, but that the mechanism designer has no knowledge of the distribution of bid values. However, in many real-world applications (such as online ticket sales), the opposite is true: the seller has distributional knowledge of the bid values (e.g., via the history of past transactions in the market), but there is uncertainty about market size. Adopting the perspective of automated mechanism design, introduced by Conitzer and Sandholm, we develop algorithms that compute an optimal, or approximately optimal, online auction mechanism given access to this distributional knowledge. Our main results are twofold. First, we show that when the seller does not know the market size, no constant-approximation to the optimum efficiency or revenue is achievable in the worst case, even under the very strong assumption that bid values are i.i.d. samples from a distribution known to the seller. Second, we show that when the seller has distributional knowledge of the market size as well as the bid values, one can do well in several senses. Perhaps most interestingly, by combining dynamic programming with prophet inequalities (a technique from optimal stopping theory) we are able to design and analyze online mechanisms which are temporally strategyproof (even with respect to arrival and departure times) and approximately efficiency (revenue)-maximizing. In exploring the interplay between automated mechanism design and prophet inequalities, we prove new prophet inequalities motivated by the auction setting.) <|cite_end|> <|cite_start|> (Reference: Multi-Parameter Mechanism Design and Sequential Posted Pricing: We study the classic mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [20]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [1], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [25]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. We prove that these mechanisms are approximately optimal in single-dimensional settings. These posted-price mechanisms avoid many of the properties of optimal mechanisms that make the latter impractical. Furthermore, these mechanisms generalize naturally to multi-dimensional settings where they give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time. 
This work can be viewed as an extension and improvement of the single-agent algorithmic pricing work of [9] to the setting of multiple agents where the designer has combinatorial feasibility constraints on which agents can simultaneously obtain each service.) <|cite_end|> <|cite_start|> (Reference: From pricing to prophets, and back!: ) <|cite_end|>, the prophet inequality and its many variants have become an intensely studied topic and a staple framework to study online selection problems beyond worst-case analysis. Three variants of this problem have been extensively studied. First, the \textit{i.i.d. problem}, in which variables have i.i.d. distributions. There, the optimal ratio is $1/\beta \simeq 0.745$, where $\beta$ is the unique solution of $\int_0^1 \frac{1}{y(1-\ln(y))+(\beta-1)} dy =1$. The upper bound was shown in <|cite_start|> (Reference: Comparisons of Stop Rule and Supremum Expectations of I.I.D. Random Variables: Implicitly defined (and easily approximated) universal constants $1.1 < a_n < 1.6$, $n = 2, 3, \dots$, are found so that if $X_1, X_2, \dots$ are i.i.d. nonnegative random variables and if $T_n$ is the set of stop rules for $X_1, \dots, X_n$, then $E(\max\{X_1, \dots, X_n\}) \le a_n \sup\{EX_t : t \in T_n\}$, and the bound $a_n$ is best possible. Similar universal constants $0 < b_n < 1/4$ are found so that if the $\{X_i\}$ are i.i.d. random variables taking values only in $[a, b]$, then $E(\max\{X_1, \dots, X_n\}) \le \sup\{EX_t : t \in T_n\} + b_n(b-a)$, where again the bound $b_n$ is best possible. In both situations, extremal distributions for which equality is attained (or nearly attained) are given in implicit form.) <|cite_end|> <|cite_start|> (Reference: Stop rule and supremum expectations of i.i.d. random variables: a complete comparison by conjugate duality: ) <|cite_end|>, and the lower bound in <|cite_start|> (Reference: Posted price mechanisms for a random stream of customers: Posted price mechanisms constitute a widely used way of selling items to strategic consumers. Although suboptimal, the attractiveness of these mechanisms comes from their simplicity and easy implementation. In this paper, we investigate the performance of posted price mechanisms when customers arrive in an unknown random order. We compare the expected revenue of these mechanisms to the expected revenue of the optimal auction in two different settings. Namely, the nonadaptive setting in which all offers are sent to the customers beforehand, and the adaptive setting in which an offer is made when a consumer arrives. For the nonadaptive case, we obtain a strategy achieving an expected revenue within at least a 1-1/e fraction of that of the optimal auction. We also show that this bound is tight, even if the customers have i.i.d. valuations for the item. For the adaptive case, we exhibit a posted price mechanism that achieves a factor 0.745 of the optimal revenue, when the customers have i.i.d. valuations for the item. Furthermore, we prove that our results extend to the prophet inequality setting and in particular our result for i.i.d. random valuations resolves a problem posed by Hill and Kertz. [13]) <|cite_end|>.
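As a sanity check on this constant, the equation can be solved numerically. The following short Python sketch is our own illustration (not taken from any cited work); the integration grid and the bisection bracket are arbitrary choices:

\begin{verbatim}
import math

def kertz_integral(b, m=20000):
    # Midpoint rule for I(b) = int_0^1 dy / (y*(1 - ln y) + (b - 1)).
    # For b > 1 the integrand is bounded by 1/(b - 1), so the rule converges.
    h = 1.0 / m
    return sum(
        h / (((i + 0.5) * h) * (1.0 - math.log((i + 0.5) * h)) + (b - 1.0))
        for i in range(m)
    )

# I(b) decreases in b, blows up as b -> 1+ and vanishes as b -> infinity,
# so the unique root of I(b) = 1 can be located by bisection.
lo, hi = 1.001, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if kertz_integral(mid) > 1.0:
        lo = mid   # integral too large: b must increase
    else:
        hi = mid
beta = (lo + hi) / 2
print(f"beta   = {beta:.3f}")      # roughly 1.34
print(f"1/beta = {1 / beta:.3f}")  # roughly 0.745, the optimal i.i.d. ratio
\end{verbatim}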
Second, the \textit{Prophet Secretary problem}, in which variables appear in uniformly random order. Esfandiari et al. <|cite_start|> (Reference: Prophet Secretary: Optimal stopping theory is a powerful tool for analyzing scenarios such as online auctions in which we generally require optimizing an objective function over the space of stopping rules for an allocation process under uncertainty. Perhaps the most classic problems of stopping theory are the prophet inequality problem and the secretary problem. The classical prophet inequality states that by choosing the same threshold OPT/2 for every step, one can achieve the tight competitive ratio of 0.5. On the other hand, for the basic secretary problem, the optimal strategy achieves the tight competitive ratio of 1/e. In this paper, we introduce Prophet Secretary, a natural combination of the prophet inequality and the secretary problems. An example motivation for our problem is as follows. Consider a seller that has an item to sell on the market to a set of arriving customers. The seller knows the types of customers that may be interested in the item and he has a price distribution for each type: the price offered by a customer of a type is anticipated to be drawn from the corresponding distribution. However, the customers arrive in a random order. Upon the arrival of a customer, the seller makes an irrevocable decision whether to sell the item at the offered price. We address the question of finding a strategy for selling the item at a high price. We show that by using a uniform threshold one cannot break the 0.5 barrier. However, we show that i) using n distinct non-adaptive thresholds one can obtain a competitive ratio that goes to (1-1/e) as n grows; and ii) no online algorithm can achieve a competitive ratio better than 0.75. Our results improve the (asymptotic) approximation guarantee of single-item sequential posted pricing mechanisms from 0.5 to (1-1/e) when the order of agents (customers) is chosen randomly.) <|cite_end|>initiated the study of this variant, showing that the gambler can guarantee a factor of $1-1/e$, and later Ehsani et al. <|cite_start|> (Reference: Prophet Secretary for Combinatorial Auctions and Matroids: The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work in generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (extends bipartite matching). Kleinberg-Weinberg \cite{KW-STOC12} and Feldman et al. \cite{feldman2015combinatorial} show that for adversarial arrival order of random variables the optimal prophet inequalities give a $1/2$-approximation. For many settings, however, it's conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the $1/2$-approximation and obtain $(1-1/e)$-approximation prophet inequalities for both matroids and combinatorial auctions. This also gives improvements to the results of Yan \cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet} who worked in the special cases where we can fully control the arrival order or when there is only a single item. Our techniques are threshold based. We convert our discrete problem into a continuous setting and then give a generic template on how to dynamically adjust these thresholds to lower bound the expected total welfare.) <|cite_end|>showed this can be achieved with a single-threshold rule. Azar et al. <|cite_start|> (Reference: Prophet secretary: Surpassing the 1-1/e barrier: In the Prophet Secretary problem, samples from a known set of probability distributions arrive one by one in a uniformly random order, and an algorithm must irrevocably pick one of the samples as soon as it arrives. The goal is to maximize the expected value of the sample picked relative to the expected maximum of the distributions.
This is one of the most simple and fundamental problems in online decision making that models the process selling one item to a sequence of costumers. For a closely related problem called the Prophet Inequality where the order of the random variables is adversarial, it is known that one can achieve in expectation 1/2 of the expected maximum, and no better ratio is possible. For the Prophet Secretary problem, that is, when the variables arrive in a random order, Esfandiari et al. (2015) showed that one can actually get 1-1/e of the maximum. The 1-1/e bound was recently extended to more general settings by Ehsani et al. (2018). Given these results, one might be tempted to believe that 1-1/e is the correct bound. We show that this is not the case by providing an algorithm for the Prophet Secretary problem that beats the 1-1/e bound and achieves 1-1/e+1/400 times the expected maximum. We also prove a hardness result on the performance of algorithms under a natural restriction which we call deterministic distribution-insensitivity.) <|cite_end|>slightly improved the $1-1/e$ factor by using a multi-threshold algorithm, and then Correa et al. <|cite_start|> (Reference: Prophet Secretary Through Blind Strategies: In the classic prophet inequality, samples from independent random variables arrive online. A gambler that knows the distributions must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Gairing (1977) established that this worst case ratio is a universal constant equal to 1/2. In the last decade prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A very interesting variant is the Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms achieve a constant of 1-1/e and very recently this barrier was slightly improved. This paper analyzes strategies that set a nonincreasing sequence of thresholds to be applied at different times. The gambler stops the first time a sample surpasses the corresponding threshold. Specifically we consider a class of strategies called blind quantile strategies. They consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that they can achieve a constant of 0.665, improving upon the best known result of Azar et al. (2018), and on Beyhaghi et al. (2018) (order selection). Our proof analyzes precisely the underlying stopping time distribution, relying on Schur-convexity theory. We further prove that blind strategies cannot achieve better than 0.675. Finally we prove that no algorithm for the gambler can achieve better than 0.732.) <|cite_end|>proved the optimal factor lies in $[0.669,0.732]$. The current known best upper bound is $0.724$ <|cite_start|> (Reference: Prophet Inequalities: Separating Random Order from Order Selection: Prophet inequalities are a central object of study in optimal stopping theory. 
A gambler is sent values in an online fashion, sampled from an instance of independent distributions, in an adversarial, random or selected order, depending on the model. When observing each value, the gambler either accepts it as a reward or irrevocably rejects it and proceeds to observe the next value. The goal of the gambler, who cannot see the future, is maximising the expected value of the reward while competing against the expectation of a prophet (the offline maximum). In other words, one seeks to maximise the gambler-to-prophet ratio of the expectations. The model, in which the gambler selects the arrival order first, and then observes the values, is known as Order Selection. In this model a ratio of $0.7251$ is attainable for any instance. Recently, this has been improved up to $0.7258$ by Bubna and Chiplunkar (2023). If the gambler chooses the arrival order (uniformly) at random, we obtain the Random Order model. The worst case ratio over all possible instances has been extensively studied for at least $40$ years. Through simulations, Bubna and Chiplunkar (2023) also showed that this ratio is at most $0.7254$ for the Random Order model, thus establishing for the first time that carefully choosing the order, instead of simply taking it at random, benefits the gambler. We give an alternative, non-simulation-assisted proof of this fact, by showing mathematically that in the Random Order model, no algorithm can achieve a ratio larger than $0.7235$. This sets a new state-of-the-art hardness for this model, and establishes more formally that there is a real benefit in choosing the order.) <|cite_end|> <|cite_start|> (Reference: Prophet Inequality: Order selection beats random order: In the prophet inequality problem, a gambler faces a sequence of items arriving online with values drawn independently from known distributions. On seeing an item, the gambler must choose whether to accept its value as her reward and quit the game, or reject it and continue. The gambler's aim is to maximize her expected reward relative to the expected maximum of the values of all items. Since the seventies, a tight bound of 1/2 has been known for this competitive ratio in the setting where the items arrive in an adversarial order (Krengel and Sucheston, 1977, 1978). However, the optimum ratio still remains unknown in the order selection setting, where the gambler selects the arrival order, as well as in prophet secretary, where the items arrive in a random order. Moreover, it is not even known whether a separation exists between the two settings. In this paper, we show that the power of order selection allows the gambler to guarantee a strictly better competitive ratio than if the items arrive randomly. For the order selection setting, we identify an instance for which Peng and Tang's (FOCS'22) state-of-the-art algorithm performs no better than their claimed competitive ratio of (approximately) 0.7251, thus illustrating the need for an improved approach. We therefore extend their design and provide a more general algorithm design framework, using which we show that their ratio can be beaten, by designing a 0.7258-competitive algorithm. For the random order setting, we improve upon Correa, Saona and Ziliotto's (SODA'19) 0.732-hardness result to show a hardness of 0.7254 for general algorithms - even in the setting where the gambler knows the arrival order beforehand, thus establishing a separation between the order selection and random order settings.) 
<|cite_end|>, and it remains one of the most important open problems in the area to close this gap. Last, in the \textit{Free-order problem}, variables are ordered by the gambler. The best-known upper bound is the i.i.d. model ratio $1/\beta$. Lower bounds have been successively obtained by <|cite_start|> (Reference: Prophet secretary: Surpassing the 1-1/e barrier: In the Prophet Secretary problem, samples from a known set of probability distributions arrive one by one in a uniformly random order, and an algorithm must irrevocably pick one of the samples as soon as it arrives. The goal is to maximize the expected value of the sample picked relative to the expected maximum of the distributions. This is one of the most simple and fundamental problems in online decision making that models the process selling one item to a sequence of costumers. For a closely related problem called the Prophet Inequality where the order of the random variables is adversarial, it is known that one can achieve in expectation 1/2 of the expected maximum, and no better ratio is possible. For the Prophet Secretary problem, that is, when the variables arrive in a random order, Esfandiari et al. (2015) showed that one can actually get 1-1/e of the maximum. The 1-1/e bound was recently extended to more general settings by Ehsani et al. (2018). Given these results, one might be tempted to believe that 1-1/e is the correct bound. We show that this is not the case by providing an algorithm for the Prophet Secretary problem that beats the 1-1/e bound and achieves 1-1/e+1/400 times the expected maximum. We also prove a hardness result on the performance of algorithms under a natural restriction which we call deterministic distribution-insensitivity.) <|cite_end|> <|cite_start|> (Reference: Multi-Parameter Mechanism Design and Sequential Posted Pricing: We study the classic mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [20]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [1], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [25]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. We prove that these mechanisms are approximately optimal in single-dimensional settings. These posted-price mechanisms avoid many of the properties of optimal mechanisms that make the latter impractical. Furthermore, these mechanisms generalize naturally to multi-dimensional settings where they give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time. 
This work can be viewed as an extension and improvement of the single-agent algorithmic pricing work of [9] to the setting of multiple agents where the designer has combinatorial feasibility constraints on which agents can simultaneously obtain each service.) <|cite_end|> <|cite_start|> (Reference: Prophet Secretary Through Blind Strategies: In the classic prophet inequality, samples from independent random variables arrive online. A gambler that knows the distributions must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Gairing (1977) established that this worst case ratio is a universal constant equal to 1/2. In the last decade prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A very interesting variant is the Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms achieve a constant of 1-1/e and very recently this barrier was slightly improved. This paper analyzes strategies that set a nonincreasing sequence of thresholds to be applied at different times. The gambler stops the first time a sample surpasses the corresponding threshold. Specifically we consider a class of strategies called blind quantile strategies. They consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that they can achieve a constant of 0.665, improving upon the best known result of Azar et al. (2018), and on Beyhaghi et al. (2018) (order selection). Our proof analyzes precisely the underlying stopping time distribution, relying on Schur-convexity theory. We further prove that blind strategies cannot achieve better than 0.675. Finally we prove that no algorithm for the gambler can achieve better than 0.732.) <|cite_end|>, and huge progress was made quite recently by Peng and Tang <|cite_start|> (Reference: Order Selection Prophet Inequality: From Threshold Optimization to Arrival Time Design: In the classical prophet inequality, a gambler faces a sequence of items, whose values are drawn independently from known distributions. Upon the arrival of each item, its value is realized and the gambler either accepts it and the game ends, or irrevocably rejects it and continues to the next item. The goal is to maximize the value of the selected item and compete against the expected maximum value of all items. A tight competitive ratio of $\frac{1}{2}$ is established in the classical setting and various relaxations have been proposed to surpass the barrier, including the i.i.d. model, the order selection model, and the random order model. In this paper, we advance the study of the order selection prophet inequality, in which the gambler is given the extra power for selecting the arrival order of the items. Our main result is a $0.725$-competitive algorithm, that substantially improves the state-of-the-art $0.669$ ratio by Correa, Saona and Ziliotto~(Math. Program. 2021), achieved in the harder random order model. 
Recently, Agrawal, Sethuraman and Zhang~(EC 2021) proved that the task of selecting the optimal order is NP-hard. Despite this fact, we introduce a novel algorithm design framework that translates the discrete order selection problem into a continuous arrival time design problem. From this perspective, we can focus on the arrival time design without worrying about the threshold optimization afterwards. As a side result, we achieve the optimal $0.745$ competitive ratio by applying our algorithm to the i.i.d. model.) <|cite_end|>, who established a lower bound of $0.724$, which was later improved to $0.725$ <|cite_start|> (Reference: Prophet Inequality: Order selection beats random order: In the prophet inequality problem, a gambler faces a sequence of items arriving online with values drawn independently from known distributions. On seeing an item, the gambler must choose whether to accept its value as her reward and quit the game, or reject it and continue. The gambler's aim is to maximize her expected reward relative to the expected maximum of the values of all items. Since the seventies, a tight bound of 1/2 has been known for this competitive ratio in the setting where the items arrive in an adversarial order (Krengel and Sucheston, 1977, 1978). However, the optimum ratio still remains unknown in the order selection setting, where the gambler selects the arrival order, as well as in prophet secretary, where the items arrive in a random order. Moreover, it is not even known whether a separation exists between the two settings. In this paper, we show that the power of order selection allows the gambler to guarantee a strictly better competitive ratio than if the items arrive randomly. For the order selection setting, we identify an instance for which Peng and Tang's (FOCS'22) state-of-the-art algorithm performs no better than their claimed competitive ratio of (approximately) 0.7251, thus illustrating the need for an improved approach. We therefore extend their design and provide a more general algorithm design framework, using which we show that their ratio can be beaten, by designing a 0.7258-competitive algorithm. For the random order setting, we improve upon Correa, Saona and Ziliotto's (SODA'19) 0.732-hardness result to show a hardness of 0.7254 for general algorithms - even in the setting where the gambler knows the arrival order beforehand, thus establishing a separation between the order selection and random order settings.) <|cite_end|>. In parallel, an exciting recent line of work has considered the more realistic case where the gambler does not have full access to the distributions, but instead observes samples from past data beforehand. Rubinstein, Wang and Weinberg <|cite_start|> (Reference: Optimal Single-Choice Prophet Inequalities from Samples: We study the single-choice Prophet Inequality problem when the gambler is given access to samples. We show that the optimal competitive ratio of $1/2$ can be achieved with a single sample from each distribution. When the distributions are identical, we show that for any constant $\varepsilon > 0$, $O(n)$ samples from the distribution suffice to achieve the optimal competitive ratio ($\approx 0.745$) within $(1+\varepsilon)$, resolving an open problem of Correa, D\"utting, Fischer, and Schewior.) <|cite_end|>showed that a single sample per distribution is enough to achieve the best possible factor of $1/2$ in the classic prophet inequality. Moreover, they prove that in the i.i.d. 
case, $O(1/\varepsilon^6)$ samples per variable are enough to achieve the best possible guarantee of $0.745-\varepsilon$. Recently, Correa et al. <|cite_start|> (Reference: Sample-driven optimal stopping: From the secretary problem to the iid prophet inequality: We take a unifying approach to single selection optimal stopping problems with random arrival order and independent sampling of items. In the problem we consider, a decision maker (DM) initially gets to sample each of $N$ items independently with probability $p$, and can observe the relative rankings of these sampled items. Then, the DM faces the remaining items in an online fashion, observing the relative rankings of all revealed items. While scanning the sequence the DM makes irrevocable stop/continue decisions and her reward for stopping the sequence facing the item with rank $i$ is $Y_i$. The goal of the DM is to maximize her reward. We start by studying the case in which the values $Y_i$ are known to the DM, and then move to the case in which these values are adversarial. For the former case, we write the natural linear program that captures the performance of an algorithm, and take its continuous limit. We prove a structural result about this continuous limit, which allows us to reduce the problem to a relatively simple real optimization problem. We establish that the optimal algorithm is given by a sequence of thresholds $t_1\le t_2\le\cdots$ such that the DM should stop if seeing an item with current ranking $i$ after time $t_i$. Additionally we are able to recover several classic results in the area such as those for secretary problem and the minimum ranking problem. For the adversarial case, we obtain a similar linear program with an additional stochastic dominance constraint. Using the same machinery we are able to pin down the optimal competitive ratios for all values of $p$. Notably, we prove that as $p$ approaches 1, our guarantee converges linearly to 0.745, matching that of the i.i.d.~prophet inequality. Also interesting is the case $p=1/2$, where our bound evaluates to $0.671$, which improves upon the state of the art.) <|cite_end|>showed that $O(1/\varepsilon)$ samples per variable are enough to guarantee $0.745-\varepsilon$. Correa et al. <|cite_start|> (Reference: The two-sided game of googol: The secretary problem or game of Googol are classic models for online selection problems. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary non-negative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives: maximizing the probability of selecting the maximum hidden value, and maximizing the expectation of the selected hidden value. For the former objective we obtain a simple 0.45292-competitive algorithm. For the latter, we obtain a 0.63518-competitive algorithm. Our main contribution is to set up a model allowing to transform probabilistic optimal stopping problems into purely combinatorial ones. For instance, we can apply our results to obtain lower bounds for the single sample prophet secretary problem.)
<|cite_end|>showed that in the prophet secretary problem, one sample per distribution is enough to guarantee a factor of $0.635$. The focus of our work is on sample-based versions of the Prophet Secretary problem and of the Free-Order problem. In both models, our main question is what fraction of the expected maximum can be guaranteed using a constant (independent of $n$) number of samples per distribution. \subsection{Our result and technical highlights} Let $C_S$ be the optimal fraction of the expected maximum that can be guaranteed in the prophet secretary problem. We prove that for any $\varepsilon>0$, it is possible to guarantee a $C_S-\varepsilon$ factor in the sample-based prophet secretary problem, using no more than $O(1/\varepsilon^5)$ samples from each distribution. The exact same result holds for the sample-based free-order problem, with the corresponding optimal ratio. Our proof is ``universal'', in the sense that it deals simultaneously with both models, and also works for the i.i.d. model. Analogous results for the prophet inequality and the i.i.d. prophet inequality rely on either converting an existing algorithm with the optimal guarantee into a sample-based one, or on constructing a sample-based algorithm and showing it matches the best-possible guarantee. Remarkably, since the best-possible guarantees for the prophet secretary problem and the free-order problem are unknown, such approaches cannot be used to show our result, and instead, we establish new properties of the problem. Moreover, the optimal algorithms for the classic and the i.i.d. variants use no more than $n$ thresholds, one for each variable. In contrast, in the random order case, the optimal algorithm uses an exponential number of thresholds, one for each variable and each possible arrival order. Similarly, the optimal algorithm for the free-order model has to choose among the exponentially many arrival orders. \\ Before describing the main lines of the proof, let us highlight the difficulty of proving the result with an example in the prophet-secretary variant. First, consider the instance $(X_1,\dots,X_{n})$, such that $X_1,\dots,X_{n-1}$ are i.i.d. and equal to $n$ with probability $n^{-2}$, and 0 otherwise. The variable $X_n$ is deterministic, equal to $\sqrt{3}-1$. Assume that the gambler knows the distributions. This corresponds to the example in <|cite_start|> (Reference: Prophet Secretary Through Blind Strategies: In the classic prophet inequality, samples from independent random variables arrive online. A gambler that knows the distributions must decide at each point in time whether to stop and pick the current sample or to continue and lose that sample forever. The goal of the gambler is to maximize the expected value of what she picks and the performance measure is the worst case ratio between the expected value the gambler gets and what a prophet, that sees all the realizations in advance, gets. In the late seventies, Krengel and Sucheston, and Gairing (1977) established that this worst case ratio is a universal constant equal to 1/2. In the last decade prophet inequalities has resurged as an important problem due to its connections to posted price mechanisms, frequently used in online sales. A very interesting variant is the Prophet Secretary problem, in which the only difference is that the samples arrive in a uniformly random order. For this variant several algorithms achieve a constant of 1-1/e and very recently this barrier was slightly improved.
This paper analyzes strategies that set a nonincreasing sequence of thresholds to be applied at different times. The gambler stops the first time a sample surpasses the corresponding threshold. Specifically we consider a class of strategies called blind quantile strategies. They consist in fixing a function which is used to define a sequence of thresholds once the instance is revealed. Our main result shows that they can achieve a constant of 0.665, improving upon the best known result of Azar et al. (2018), and on Beyhaghi et al. (2018) (order selection). Our proof analyzes precisely the underlying stopping time distribution, relying on Schur-convexity theory. We further prove that blind strategies cannot achieve better than 0.675. Finally we prove that no algorithm for the gambler can achieve better than 0.732.) <|cite_end|>, where it is shown that the gambler cannot guarantee a ratio better than $\sqrt{3}-1+o(1)$, which proves that $C_S \leq \sqrt{3}-1$. Now, consider the following related problem: given a positive number $a$, $X_1,\dots,X_{n-1}$ are i.i.d. and equal to $a \cdot n$ with probability $n^{-2}$, and 0 otherwise. The variable $X_n$ is deterministic, equal to $\sqrt{3}-1$. The number $a$ is unknown to the gambler, who has access to a constant number of samples of each distribution. For $n$ large, with probability at least $1-O(1/n)$, the samples of $X_1,\dots,X_{n-1}$ are all equal to 0, hence uninformative. This problem is therefore seemingly much harder than the previous one, and one may expect the ratio guaranteed by the gambler to drop well below $\sqrt{3}-1$, possibly below $C_S$. Our result shows that this is not the case: the gambler can still guarantee $C_S$. Surprisingly, one of the proof steps shows that the gambler can even guarantee $\sqrt{3}-1$: hence, when $a$ is adversarially chosen, knowing $a$ or not knowing $a$ does not change the guarantee.
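The following toy simulation, which is purely our own illustration (with $a=1$, $n=100$ and $k=5$ samples per variable chosen arbitrarily), exhibits both phenomena: the expected maximum is close to $\sqrt{3}$, yet the samples are almost always all zero:

\begin{verbatim}
import math
import random

random.seed(1)
n, a, k, trials = 100, 1.0, 5, 50000
p = 1.0 / n**2              # P(X_i = a * n) for i = 1, ..., n - 1
det = math.sqrt(3) - 1      # the deterministic value of X_n

max_sum, all_zero = 0.0, 0
for _ in range(trials):
    # X_1, ..., X_{n-1} are a*n with probability p (and 0 otherwise).
    hit = any(random.random() < p for _ in range(n - 1))
    max_sum += a * n if hit else det
    # k samples from each of the n - 1 nondeterministic variables.
    all_zero += 0 if any(random.random() < p
                         for _ in range((n - 1) * k)) else 1

print(f"E[max]              ~= {max_sum / trials:.3f}"
      f"  (-> sqrt(3) ~= 1.732 as n grows)")
print(f"P(all samples zero) ~= {all_zero / trials:.3f}"
      f"  (>= 1 - k(n-1)/n^2 = {1 - k * (n - 1) / n**2:.3f})")
\end{verbatim}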
Our proof consists of three main steps, which are, to some extent, important facts about the prophet-secretary and the free-order variants by themselves. \textbf{Step 1} of our proof is to show that essentially we do not need to know the upper tails of the distributions in order to achieve the best-possible guarantee. This alleviates a heavy burden on the design of sample-based algorithms, as the upper tails potentially contribute most of the expectation of the maximum, and precisely estimating them might require an arbitrarily high number of samples. The proof of this fact is based on a minimax argument: if, by observing the upper tails of the distributions, we can design an algorithm that guarantees the optimal constant, then, by choosing a randomized algorithm, we can also guarantee the optimal constant against an adversary that decides how much the upper tail of each distribution contributes to the expected maximum. \textbf{Step 2} relies on the notion of $\varepsilon$-small distributions, introduced by Liu et al. <|cite_start|> (Reference: Variable Decomposition for Prophet Inequalities and Optimal Ordering: We introduce a new decomposition technique for random variables that maps a generic instance of the prophet inequalities problem to a new instance where all but a constant number of variables have a tractable structure that we refer to as $(\varepsilon, \delta)$-smallness. Using this technique, we make progress on several outstanding problems in the area: - We show that, even in the case of non-identical distributions, it is possible to achieve (arbitrarily close to) the optimal approximation ratio of $\beta \approx 0.745$ as long as we are allowed to remove a small constant number of distributions. - We show that for frequent instances of prophet inequalities (where each distribution reoccurs some number of times), it is possible to achieve the optimal approximation ratio of $\beta$ (improving over the previous best-known bound of $0.738$). - We give a new, simpler proof of Kertz's optimal approximation guarantee of $\beta \approx 0.745$ for prophet inequalities with i.i.d. distributions. The proof is primal-dual and simultaneously produces upper and lower bounds. - Using this decomposition in combination with a novel convex programming formulation, we construct the first Efficient PTAS for the Optimal Ordering problem.) <|cite_end|>. A variable is $\varepsilon$-small if the probability that it is larger than zero is at most $\varepsilon$. Liu et al. show that in the prophet secretary problem, if all variables are $\varepsilon$-small, it is possible to guarantee a fraction of $0.745$ of the expected maximum, which is also the best possible guarantee when the variables are i.i.d. Our result in this step is to show that if a large proportion of the variables are $\varepsilon$-small, then we can pretend those variables are i.i.d. by losing only an $\varepsilon$ fraction of the expected maximum. The main idea is to show that, for a fixed algorithm, if we replace the $\varepsilon$-small variables with i.i.d. variables in a way that does not change the distribution of the maximum, then the algorithm stops the sequence only earlier; moreover, conditional on stopping at an $\varepsilon$-small variable, the expected reward is almost the same as if the $\varepsilon$-small variables were i.i.d. In \textbf{Step 3}, we show how to actually use the samples to construct the algorithm. We further divide Step 3 into Steps 3(a) and 3(b). In Step 3(a) we show that, using a constant number of samples per distribution, we can split the set of variables into two sets, one containing at least $n-O\left( (1/\varepsilon)\log(1/\varepsilon) \right)$ $\varepsilon$-small variables. Because of Step 2, we can replace this large set of variables with i.i.d. variables. In Step 3(b), we show that, using a constant number of samples per distribution, we can estimate very well the distribution of the auxiliary i.i.d. variables, as well as the distributions of the constantly many variables that are not $\varepsilon$-small, except for their upper tails. Finally, note that these three steps alone only guarantee the existence of a sample-based algorithm. In fact, Step 1 uses a minimax argument that is non-constructive. We complement this by describing in \textbf{Step 4} a procedure that finds such an algorithm and runs in polynomial time. The starting point is a linear program of exponential size that captures the algorithm from Step 1. We show how to reduce the linear program to one of polynomial size by leveraging the fact that we are only interested in solving instances where all variables have supports of polynomial size, and most of them are i.i.d.
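To give a flavor of Steps 3(a) and 3(b), here is a schematic Python sketch. It is entirely illustrative: the toy instance, the classification rule, and all thresholds are our own simplifications, and the paper's actual procedure and its guarantees are considerably more delicate:

\begin{verbatim}
import random

random.seed(2)
eps, k, n = 0.1, 40, 50

def draw(i):
    # Toy instance: most variables are rarely nonzero; the last few are not.
    if i < n - 3:
        return 5.0 if random.random() < 0.01 else 0.0
    return random.uniform(0, 1 + i)

samples = {i: [draw(i) for _ in range(k)] for i in range(n)}

# Step 3(a) flavor: declare a variable eps-small when its samples are rarely
# nonzero (an illustrative rule only; the paper's split is more careful).
small = [i for i in range(n) if sum(s > 0 for s in samples[i]) <= eps * k]
large = [i for i in range(n) if i not in small]

# Step 3(b) flavor: empirical CDFs for the constantly many large variables,
# trustworthy everywhere except on the (unestimated) upper tails.
def empirical_cdf(vals):
    return lambda t: sum(v <= t for v in vals) / len(vals)

cdfs = {i: empirical_cdf(samples[i]) for i in large}
print(f"{len(small)} variables classified as eps-small, {len(large)} as large")
\end{verbatim}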
\subsection{Further related work} The framework of the prophet inequality has been generalized to a wide variety of online selection problems beyond single selection. Important generalizations include prophet inequalities for $k$-selection <|cite_start|> (Reference: Static pricing for multi-unit prophet inequalities: We study a pricing problem where a seller has $k$ identical copies of a product, buyers arrive sequentially, and the seller prices the items aiming to maximize social welfare. When $k=1$, this is the so called "prophet inequality" problem for which there is a simple pricing scheme achieving a competitive ratio of $1/2$. On the other end of the spectrum, as $k$ goes to infinity, the asymptotic performance of both static and adaptive pricing is well understood. We provide a static pricing scheme for the small-supply regime: where $k$ is small but larger than $1$. Prior to our work, the best competitive ratio known for this setting was the $1/2$ that follows from the single-unit prophet inequality. Our pricing scheme is easy to describe as well as practical -- it is anonymous, non-adaptive, and order-oblivious. We pick a single price that equalizes the expected fraction of items sold and the probability that the supply does not sell out before all customers are served; this price is then offered to each customer while supply lasts. This extends an approach introduced by Samuel-Cahn for the case of $k=1$. This pricing scheme achieves a competitive ratio that increases gradually with the supply. Subsequent work by Jiang, Ma, and Zhang shows that our pricing scheme is the optimal static pricing for every value of $k$.) <|cite_end|> <|cite_start|> (Reference: Tight Guarantees for Multi-unit Prophet Inequalities and Online Stochastic Knapsack: Prophet inequalities are a useful tool for designing online allocation procedures and comparing their performance to the optimal offline allocation. In the basic setting of $k$-unit prophet inequalities, the well-known procedure of Alaei (2011) with its celebrated performance guarantee of $1-\frac{1}{\sqrt{k+3}}$ has found widespread adoption in mechanism design and general online allocation problems in online advertising, healthcare scheduling, and revenue management. Despite being commonly used to derive approximately-optimal algorithms for multi-resource allocation problems, the tightness of Alaei's guarantee has remained unknown. In this paper characterize the tight guarantee in Alaei's setting, which we show is in fact strictly greater than $1-\frac{1}{\sqrt{k+3}}$ for all $k>1$. We also consider the more general online stochastic knapsack problem where each individual allocation can consume an arbitrary fraction of the initial capacity. Here we introduce a new ``best-fit'' procedure with a performance guarantee of $\frac{1}{3+e^{-2}}\approx0.319$, which we also show is tight with respect to the standard LP relaxation. This improves the previously best-known guarantee of 0.2 for online knapsack. Our analysis differs from existing ones by eschewing the need to split items into ``large'' or ``small'' based on capacity consumption, using instead an invariant for the overall utilization on different sample paths. Finally, we refine our technique for the unit-density special case of knapsack, and improve the guarantee from 0.321 to 0.3557 in the multi-resource appointment scheduling application of Stein et al. (2020).) <|cite_end|>, matroid and matroid intersection <|cite_start|> (Reference: Matroid Prophet Inequalities: Consider a gambler who observes a sequence of independent, non-negative random numbers and is allowed to stop the sequence at any time, claiming a reward equal to the most recent observation.
The famous prophet inequality of Krengel, Sucheston, and Garling asserts that a gambler who knows the distribution of each random variable can achieve at least half as much reward, in expectation, as a "prophet" who knows the sampled values of each random variable and can choose the largest one. We generalize this result to the setting in which the gambler and the prophet are allowed to make more than one selection, subject to a matroid constraint. We show that the gambler can still achieve at least half as much reward as the prophet; this result is the best possible, since it is known that the ratio cannot be improved even in the original prophet inequality, which corresponds to the special case of rank-one matroids. Generalizing the result still further, we show that under an intersection of p matroid constraints, the prophet's reward exceeds the gambler's by a factor of at most O(p), and this factor is also tight. Beyond their interest as theorems about pure online algorithms or optimal stopping rules, these results also have applications to mechanism design. Our results imply improved bounds on the ability of sequential posted-price mechanisms to approximate Bayesian optimal mechanisms in both single-parameter and multi-parameter settings. In particular, our results imply the first efficiently computable constant-factor approximations to the Bayesian optimal revenue in certain multi-parameter settings.) <|cite_end|> <|cite_start|> (Reference: An Improved Lower Bound for Matroid Intersection Prophet Inequalities: We consider prophet inequalities subject to feasibility constraints that are the intersection of $q$ matroids. The best-known algorithms achieve a $\Theta(q)$-approximation, even when restricted to instances that are the intersection of $q$ partition matroids, and with i.i.d.~Bernoulli random variables. The previous best-known lower bound is $\Theta(\sqrt{q})$ due to a simple construction of [Kleinberg-Weinberg STOC 2012] (which uses i.i.d.~Bernoulli random variables, and writes the construction as the intersection of partition matroids). We establish an improved lower bound of $q^{1/2+\Omega(1/\log \log q)}$ by writing the construction of [Kleinberg-Weinberg STOC 2012] as the intersection of asymptotically fewer partition matroids. We accomplish this via an improved upper bound on the product dimension of a graph with $p^p$ disjoint cliques of size $p$, using recent techniques developed in [Alon-Alweiss European Journal of Combinatorics 2020].) <|cite_end|>, matching <|cite_start|> (Reference: Online Prophet-Inequality Matching with Applications to Ad Allocation: We study the problem of online prophet-inequality matching in bipartite graphs. There is a static set of bidders and an online stream of items. We represent the interest of bidders in items by a weighted bipartite graph. Each bidder has a capacity, i.e., an upper bound on the number of items that can be allocated to her. The weight of a matching is the total weight of edges matched to the bidders. Upon the arrival of an item, the online algorithm should either allocate it to a bidder or discard it. The objective is to maximize the weight of the resulting matching. We consider this model in a stochastic setting where we know the distribution of the incoming items in advance. Furthermore, we allow the items to be drawn from different distributions, i.e., we may assume that the tth item is drawn from distribution Dt. In contrast to i.i.d. 
model, this allows us to model the change in the distribution of items throughout the time. We call this setting the Prophet-Inequality Matching because of the possibility of having a different distribution for each time. We generalize the classic prophet inequality by presenting an algorithm with the approximation ratio of 1--1/√k+3 where k is the minimum capacity. In case of k=2, the algorithm gives a tight ratio of 1/2 which is a different proof of the prophet inequality. We also consider a model in which the bidders do not have a capacity, instead each bidder has a budget. The weight of a matching is the minimum of the budget of each vertex and the total weight of edges matched to it, when summed over all bidders. We show that if the bid to the budget ratio of every bidder is at most 1/k then a natural randomized online algorithm has an approximation ratio of 1-kk/ekk! H 1--1/√2πk compared to the optimal offline (in which the ratio goes to 1 as k becomes large). We also present the applications of this model in Adword Allocation, Display Ad Allocation, and AdCell Model.) <|cite_end|> <|cite_start|> (Reference: Prophet inequality for bipartite matching: merits of being simple and non adaptive: We consider Bayesian online selection problem of a matching in bipartite graphs, i.e., online weighted matching problem with edge arrivals where online algorithm knows distributions of weights, that corresponds to the intersection of two matroids in [Kleinberg and Wienberg STOC 12] model. We consider a simple class of non adaptive vertex-additive policies that assign static prices to all vertices in the graph and accept each edge only if its weight exceeds the sum of the prices of the edge's endpoints. We show existence of a vertex-additive policy with the expected payoff of at least one third of the prophet's payoff and present gradient decent type algorithm that quickly converges to the desired vector of vertex prices. This improves the adaptive online policy of [Kleinberg and Wienberg STOC 12] for the intersection of two matroids in two ways: our policy is non adaptive and has better approximation guarantee of $3$ instead of previous guarantee of $5.82$ against the prophet. We give a complementary lower bound of $2.25$ for any online algorithm in the bipartite matching setting.) <|cite_end|>, and online combinatorial auctions <|cite_start|> (Reference: Combinatorial Auctions via Posted Prices: We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. 
Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity.) <|cite_end|> <|cite_start|> (Reference: A Constant Factor Prophet Inequality for Online Combinatorial Auctions: In online combinatorial auctions m indivisible items are to be allocated to n agents who arrive online. Agents have random valuations for the different subsets of items and the goal is to allocate the items on the fly so as to maximize the total value of the assignment. A prophet inequality in this setting refers to the existence of an online algorithm guaranteed to obtain, in expectation, a certain fraction of the expected value obtained by an optimal solution in hindsight. The study of prophet inequalities for online combinatorial auctions has been an intensive area of research in recent years, and constant factor prophet inequalities are known when the agents’ valuation functions are submodular or fractionally subadditive. Despite many efforts, for the more general case of subadditive valuations, the best known prophet inequality has an approximation guarantee of O(loglogm). In this paper, we prove the existence of a constant factor prophet inequality for the subadditive case, resolving a central open problem in the area. Our prophet inequality is achieved by a novel, but elementary, sampling idea which we call the Mirror Lemma. This lemma is essentially concerned with understanding online algorithms for which the set of items that are allocated and those that are not, distribute equally. The other main ingredient is a nonstandard application of Kakutani’s fixed point theorem. Finally, we note that our prophet inequality works against an almighty adversary and even can be implemented in an incentive compatible way.) <|cite_end|>. In these generalizations, the gambler can select multiple variables under some combinatorial constraint on the selected set, instead of just one. Pioneered by Azar, Kleinberg and Weinberg <|cite_start|> (Reference: Prophet Inequalities with Limited Information: In the classical prophet inequality, a gambler observes a sequence of stochastic rewards $V_1,...,V_n$ and must decide, for each reward $V_i$, whether to keep it and stop the game or to forfeit the reward forever and reveal the next value $V_i$. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. Recently, prophet inequalities have been generalized to settings where the gambler can choose $k$ items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards $V_1,...,V_n$ are drawn. The assumption that the gambler knows the distribution from which $V_1,...,V_n$ are drawn is very strong. Instead, we work with the much simpler assumption that the gambler only knows a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, \emph{even with full knowledge of the distribution}. Specifically, we provide a novel single-sample algorithm when the gambler can choose any $k$ elements whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related \emph{secretary problem} to single-sample prophet inequalities, and apply it to several existing algorithms. 
Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. We apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders.) <|cite_end|>, several recent works study the question of what guarantees are possible in prophet inequality models under limited sample access to the distributions. Azar et al. <|cite_start|> (Reference: Prophet Inequalities with Limited Information: In the classical prophet inequality, a gambler observes a sequence of stochastic rewards $V_1,...,V_n$ and must decide, for each reward $V_i$, whether to keep it and stop the game or to forfeit the reward forever and reveal the next value $V_i$. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. Recently, prophet inequalities have been generalized to settings where the gambler can choose $k$ items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards $V_1,...,V_n$ are drawn. The assumption that the gambler knows the distribution from which $V_1,...,V_n$ are drawn is very strong. Instead, we work with the much simpler assumption that the gambler only knows a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, \emph{even with full knowledge of the distribution}. Specifically, we provide a novel single-sample algorithm when the gambler can choose any $k$ elements whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related \emph{secretary problem} to single-sample prophet inequalities, and apply it to several existing algorithms. Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. We apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders.) <|cite_end|>showed that there was a connection between this model and the secretary problem, as many algorithms for the secretary problem can be adapted to obtain constant-factor sample-based prophet inequalities. Caramanis et al. <|cite_start|> (Reference: Single-Sample Prophet Inequalities via Greedy-Ordered Selection: We study single-sample prophet inequalities (SSPIs), i.e., prophet inequalities where only a single sample from each prior distribution is available. Besides a direct, and optimal, SSPI for the basic single choice problem [Rubinstein et al., 2020], most existing SSPI results were obtained via an elegant, but inherently lossy, reduction to order-oblivious secretary (OOS) policies [Azar et al., 2014]. Motivated by this discrepancy, we develop an intuitive and versatile greedy-based technique that yields SSPIs directly rather than through the reduction to OOSs. Our results can be seen as generalizing and unifying a number of existing results in the area of prophet and secretary problems. 
Our algorithms significantly improve on the competitive guarantees for a number of interesting scenarios (including general matching with edge arrivals, bipartite matching with vertex arrivals, and certain matroids), and capture new settings (such as budget additive combinatorial auctions). Complementing our algorithmic results, we also consider mechanism design variants. Finally, we analyze the power and limitations of different SSPI approaches by providing a partial converse to the reduction from SSPI to OOS given by Azar et al.) <|cite_end|>consider sample-based greedy algorithms, which are, in a sense, a refinement of the framework of Azar et al <|cite_start|> (Reference: Prophet Inequalities with Limited Information: In the classical prophet inequality, a gambler observes a sequence of stochastic rewards $V_1,...,V_n$ and must decide, for each reward $V_i$, whether to keep it and stop the game or to forfeit the reward forever and reveal the next value $V_i$. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. Recently, prophet inequalities have been generalized to settings where the gambler can choose $k$ items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards $V_1,...,V_n$ are drawn. The assumption that the gambler knows the distribution from which $V_1,...,V_n$ are drawn is very strong. Instead, we work with the much simpler assumption that the gambler only knows a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, \emph{even with full knowledge of the distribution}. Specifically, we provide a novel single-sample algorithm when the gambler can choose any $k$ elements whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related \emph{secretary problem} to single-sample prophet inequalities, and apply it to several existing algorithms. Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. We apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders.) <|cite_end|>. With this framework, they obtained improved factors for various classes of matroids. For the case of selecting a matching on a graph, where edges have random weights, Duetting et al. <|cite_start|> (Reference: Prophet Inequalities for Matching with a Single Sample: We consider the prophet inequality problem for (not necessarily bipartite) matching problems with independent edge values, under both edge arrivals and vertex arrivals. We show constant-factor prophet inequalities for the case where the online algorithm has only limited access to the value distributions through samples. First, we give a $16$-approximate prophet inequality for matching in general graphs under edge arrivals that uses only a single sample from each value distribution as prior information. Then, for bipartite matching and (one-sided) vertex arrivals, we show an improved bound of $8$ that also uses just a single sample from each distribution. 
Finally, we show how to turn our $16$-approximate single-sample prophet inequality into a truthful single-sample mechanism for online bipartite matching with vertex arrivals.) <|cite_end|>and Kaplan, Naori and Raz <|cite_start|> (Reference: Online Weighted Matching with a Sample: We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and P\'al for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive-ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.) <|cite_end|>recently considered the case where the gambler has a single sample of each edge beforehand and showed constant-factor approximations in edge-arrival and vertex-arrival models. For the case of combinatorial auctions, where the gambler is a seller with a set of items for sale and the random variables correspond to the valuation functions of buyers, Feldman et al. <|cite_start|> (Reference: Combinatorial Auctions via Posted Prices: We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. 
As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity.) <|cite_end|> and Correa et al. <|cite_start|> (Reference: Optimal item pricing in online combinatorial auctions: ) <|cite_end|>, besides showing approximation factors for the full-information case, gave sample-based versions, using polynomially many samples per distribution and assuming bounded supports. Gravin et al. <|cite_start|> (Reference: Optimal Prophet Inequality with Less than One Sample: ) <|cite_end|> recently studied the prophet inequality with less than one sample per distribution, i.e., we have a sample from each distribution with probability $p$ independently, in the classic fixed-order version. They showed that this model smoothly interpolates between a guarantee of $0$ if there are no samples, and the guarantee of $1/2$ if we have one sample per distribution. Similarly, Correa et al. <|cite_start|> (Reference: Sample-driven optimal stopping: From the secretary problem to the iid prophet inequality: We take a unifying approach to single selection optimal stopping problems with random arrival order and independent sampling of items. In the problem we consider, a decision maker (DM) initially gets to sample each of $N$ items independently with probability $p$, and can observe the relative rankings of these sampled items. Then, the DM faces the remaining items in an online fashion, observing the relative rankings of all revealed items. While scanning the sequence the DM makes irrevocable stop/continue decisions and her reward for stopping the sequence facing the item with rank $i$ is $Y_i$. The goal of the DM is to maximize her reward. We start by studying the case in which the values $Y_i$ are known to the DM, and then move to the case in which these values are adversarial. For the former case, we write the natural linear program that captures the performance of an algorithm, and take its continuous limit. We prove a structural result about this continuous limit, which allows us to reduce the problem to a relatively simple real optimization problem. We establish that the optimal algorithm is given by a sequence of thresholds $t_1\le t_2\le\cdots$ such that the DM should stop if seeing an item with current ranking $i$ after time $t_i$. Additionally we are able to recover several classic results in the area such as those for secretary problem and the minimum ranking problem. For the adversarial case, we obtain a similar linear program with an additional stochastic dominance constraint. Using the same machinery we are able to pin down the optimal competitive ratios for all values of $p$. Notably, we prove that as $p$ approaches 1, our guarantee converges linearly to 0.745, matching that of the i.i.d.~prophet inequality. Also interesting is the case $p=1/2$, where our bound evaluates to $0.671$, which improves upon the state of the art.) <|cite_end|>
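To make the single-sample regime above concrete, the following is an editorial sketch (not part of this paper's text) of the threshold rule behind the basic single-choice result recalled in the Caramanis et al. abstract above, attributed there to Rubinstein et al., 2020. Given one sample $S_i \sim D_i$ from each distribution, the gambler uses the maximum sample as a threshold:
\[
T = \max_{1 \le i \le n} S_i, \qquad \text{accept the first arriving } V_i \text{ such that } V_i \ge T,
\]
which guarantees
\[
\mathbb{E}\big[V_{\text{accepted}}\big] \;\ge\; \tfrac{1}{2}\, \mathbb{E}\Big[\max_{1 \le i \le n} V_i\Big].
\]
In the Gravin et al. model above, where each sample is only available independently with probability $p$, the achievable guarantee interpolates between $0$ at $p = 0$ and this $1/2$ at $p = 1$.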
[ "<|reference_start|> Static pricing for multi-unit prophet inequalities: We study a pricing problem where a seller has $k$ identical copies of a product, buyers arrive sequentially, and the seller prices the items aiming to maximize social welfare. When $k=1$, this is the so called \"prophet inequality\" problem for which there is a simple pricing scheme achieving a competitive ratio of $1/2$. On the other end of the spectrum, as $k$ goes to infinity, the asymptotic performance of both static and adaptive pricing is well understood. We provide a static pricing scheme for the small-supply regime: where $k$ is small but larger than $1$. Prior to our work, the best competitive ratio known for this setting was the $1/2$ that follows from the single-unit prophet inequality. Our pricing scheme is easy to describe as well as practical -- it is anonymous, non-adaptive, and order-oblivious. We pick a single price that equalizes the expected fraction of items sold and the probability that the supply does not sell out before all customers are served; this price is then offered to each customer while supply lasts. This extends an approach introduced by Samuel-Cahn for the case of $k=1$. This pricing scheme achieves a competitive ratio that increases gradually with the supply. Subsequent work by Jiang, Ma, and Zhang shows that our pricing scheme is the optimal static pricing for every value of $k$. <|reference_end|>", "<|reference_start|> A Constant Factor Prophet Inequality for Online Combinatorial Auctions: In online combinatorial auctions m indivisible items are to be allocated to n agents who arrive online. Agents have random valuations for the different subsets of items and the goal is to allocate the items on the fly so as to maximize the total value of the assignment. A prophet inequality in this setting refers to the existence of an online algorithm guaranteed to obtain, in expectation, a certain fraction of the expected value obtained by an optimal solution in hindsight. The study of prophet inequalities for online combinatorial auctions has been an intensive area of research in recent years, and constant factor prophet inequalities are known when the agents’ valuation functions are submodular or fractionally subadditive. Despite many efforts, for the more general case of subadditive valuations, the best known prophet inequality has an approximation guarantee of O(loglogm). In this paper, we prove the existence of a constant factor prophet inequality for the subadditive case, resolving a central open problem in the area. Our prophet inequality is achieved by a novel, but elementary, sampling idea which we call the Mirror Lemma. This lemma is essentially concerned with understanding online algorithms for which the set of items that are allocated and those that are not, distribute equally. The other main ingredient is a nonstandard application of Kakutani’s fixed point theorem. Finally, we note that our prophet inequality works against an almighty adversary and even can be implemented in an incentive compatible way. <|reference_end|>", "<|reference_start|> Prophet Inequalities with Limited Information: In the classical prophet inequality, a gambler observes a sequence of stochastic rewards $V_1,...,V_n$ and must decide, for each reward $V_i$, whether to keep it and stop the game or to forfeit the reward forever and reveal the next value $V_i$. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. 
Recently, prophet inequalities have been generalized to settings where the gambler can choose $k$ items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards $V_1,...,V_n$ are drawn. The assumption that the gambler knows the distribution from which $V_1,...,V_n$ are drawn is very strong. Instead, we work with the much simpler assumption that the gambler only knows a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, \\emph{even with full knowledge of the distribution}. Specifically, we provide a novel single-sample algorithm when the gambler can choose any $k$ elements whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related \\emph{secretary problem} to single-sample prophet inequalities, and apply it to several existing algorithms. Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. We apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders. <|reference_end|>", "<|reference_start|> Prophet Inequalities with Limited Information: In the classical prophet inequality, a gambler observes a sequence of stochastic rewards $V_1,...,V_n$ and must decide, for each reward $V_i$, whether to keep it and stop the game or to forfeit the reward forever and reveal the next value $V_i$. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. Recently, prophet inequalities have been generalized to settings where the gambler can choose $k$ items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards $V_1,...,V_n$ are drawn. The assumption that the gambler knows the distribution from which $V_1,...,V_n$ are drawn is very strong. Instead, we work with the much simpler assumption that the gambler only knows a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, \\emph{even with full knowledge of the distribution}. Specifically, we provide a novel single-sample algorithm when the gambler can choose any $k$ elements whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related \\emph{secretary problem} to single-sample prophet inequalities, and apply it to several existing algorithms. Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. We apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders. <|reference_end|>" ]
[ 24, 31, 33, 35 ]
{"<|cite_1|>": "ss-1511581", "<|cite_2|>": "ss-1283119", "<|multi_cite_3_1|>": "ss-1283127", "<|multi_cite_3_2|>": "ss-1283121", "<|multi_cite_3_3|>": "ss-1511577", "<|multi_cite_4_1|>": "ss-1283128", "<|multi_cite_4_2|>": "ss-1343972", "<|cite_5|>": "ss-1283129", "<|cite_6|>": "arxiv-80471", "<|cite_7|>": "arxiv-138651", "<|cite_8|>": "ss-1267322", "<|cite_9|>": "arxiv-166497", "<|multi_cite_10_1|>": "arxiv-495571", "<|multi_cite_10_2|>": "arxiv-460505", "<|multi_cite_11_1|>": "ss-1267322", "<|multi_cite_11_2|>": "ss-1283121", "<|multi_cite_11_3|>": "arxiv-166497", "<|cite_12|>": "arxiv-410686", "<|cite_13|>": "arxiv-460505", "<|cite_14|>": "arxiv-234914", "<|cite_15|>": "arxiv-303275", "<|cite_16|>": "ss-1221071", "<|cite_17|>": "arxiv-166497", "<|cite_18|>": "arxiv-260706", "<|multi_cite_19_1|>": "arxiv-278726", "<|multi_cite_19_2|>": "arxiv-353032", "<|multi_cite_20_1|>": "arxiv-28044", "<|multi_cite_20_2|>": "arxiv-445745", "<|multi_cite_21_1|>": "ss-1283122", "<|multi_cite_21_2|>": "arxiv-191800", "<|multi_cite_22_1|>": "arxiv-68951", "<|multi_cite_22_2|>": "ss-1343976", "<|cite_23|>": "arxiv-48031", "<|cite_24|>": "arxiv-48031", "<|cite_25|>": "arxiv-379197", "<|cite_26|>": "arxiv-48031", "<|cite_27|>": "arxiv-332304", "<|cite_28|>": "arxiv-333954", "<|cite_29|>": "arxiv-68951", "<|cite_30|>": "ss-1364065", "<|cite_31|>": "ss-869620", "<|cite_32|>": "arxiv-303275"}
2311.08707
<|paper_start|> Title: K-BMPC: Derivative-based Koopman Bilinear Model Predictive Control for Tractor-Trailer Trajectory Tracking with Unknown Parameters Abstract: K-BMPC: Derivative-based Koopman Bilinear Model Predictive Control for Tractor-Trailer Trajectory Tracking with Unknown Parameters: Nonlinear dynamics bring difficulties to controller design for control-affine systems such as tractor-trailer vehicles, especially when the parameters in the dynamics are unknown. To address this challenge, we propose a derivative-based lifting function construction method and show that the corresponding infinite-dimensional Koopman bilinear model over the lifting functions is equivalent to the original control-affine system. Further, we analyze the propagation and bounds of the state prediction errors caused by the truncation in derivative order. The identified finite-dimensional Koopman bilinear model then serves as the predictive model in the next step. Koopman Bilinear Model Predictive Control (K-BMPC) is proposed to solve the trajectory tracking problem. We linearize the bilinear model around the estimate of the lifted state and control input. The bilinear Model Predictive Control problem is thereby approximated by a quadratic programming problem. Further, the estimate is updated at each iteration until convergence is reached. Moreover, we implement our algorithm on a tractor-trailer system, taking into account the longitudinal and side-slip effects. The open-loop simulation shows that the proposed Koopman bilinear model captures the dynamics with unknown parameters and has good prediction performance. Closed-loop tracking results show that the proposed K-BMPC exhibits elevated tracking precision with commendable computational efficiency. The experimental results demonstrate the feasibility of K-BMPC. Introduction Tractor-trailer vehicles are widely used nowadays, particularly in fields such as agriculture and logistics, due to their large cargo capacity, high transport efficiency, and low fuel consumption <|cite_start|> (Reference: Factors influencing the energy consumption of road freight transport: Abstract Key factors that influence the energy consumption of heavy goods vehicles are investigated. These factors include engine efficiency, aerodynamic drag and rolling resistance, vehicle configuration (number of vehicle units), traffic congestion, speed, payload factors, and the use of regenerative braking. An accurate, validated model of the fuel consumption of a 38 tonne tractor-semitrailer vehicle is used as a basis to derive fuel consumption models of a number of other vehicle configurations. These models included a rigid four-axle truck with maximum gross vehicle mass (GVM) of 26 tonnes; a six-axle tractor semitrailer with GVM of 44 tonnes, with and without regenerative braking; a ‘B-double’ with GVM of 60 tonnes; and an ‘A-double’ with GVM of 82 tonnes. These vehicle models were driven over a simple hypothetical drive cycle with a fixed maximum speed and varying numbers of stops in a 10 km stretch of road.
It is concluded that: (a) improving engine efficiency, unladen mass, rolling resistance, and aerodynamic drag can yield relatively small improvements in fuel consumption, compared with other factors; (b) larger vehicles are always significantly more energy-efficient than smaller ones when fully loaded; (c) transferring freight from articulated vehicles to smaller rigid vehicles for urban deliveries typically increases fuel consumption by approximately 35 per cent; (d) running vehicles partially loaded can increase the energy per unit freight task by up to 65 per cent; and (e) under urban start—stop conditions, the use of regenerative braking systems can reduce heavy vehicle fuel consumption by 25–35 per cent.) <|cite_end|> <|cite_start|> (Reference: A novel EPT autonomous motion control framework for an off-axle hitching tractor-trailer system with drawbar: A tractor-trailer system is an underactuated system that is subject to nonholonomic constraints and exhibits extremely nonlinear behaviors. Its trajectory planning and control issues remain widely researched. In this paper, on the basis of establishing a precise kinematic model, a novel method is proposed to realize autonomous motion control of an off-axle hitching tractor-trailer system with a drawbar. The proposed method is roughly divided into three steps: EXTRACTING (E) a virtual subsystem, trajectory PLANNING (P) for the original full system, and trajectory TRACKING (T) by feedback control techniques, thus the “EPT method”. In the E step, two alternative strategies are considered to construct a virtual subsystem by extracting a part of an original towed aircraft system. Then, in the P step, a two-layer optimal control-based method is developed to generate a reference trajectory. A trajectory for the virtual subsystem is generated in the first layer, where the Reeds-Shepp curve with artificial expertise in selecting intermediate nodes is introduced to provide initial guesses. The results obtained in the first layer are utilized to initialize partial state and control variables in the trajectory planning for the original full system in the second layer. Finally, in the T step, a receding horizon controller is designed to drive the carrier aircraft to track the reference trajectory under various external disturbances. Numerical simulations demonstrate that the EPT method is applicable and highly efficient. The proposed method is also applicable for generalized tractor-trailer systems and other chain-link systems.) <|cite_end|>. Despite their numerous benefits, achieving high-accuracy tractor-trailer tracking control is challenging, particularly for optimization-based trajectory planning methods <|cite_start|> (Reference: Trajectory planning for a tractor with multiple trailers in extremely narrow environments: A unified approach: Trajectory planning for a tractor-trailer vehicle is challenging because the vehicle kinematics consists of underactuated and nonholonomic constraints that are highly coupled. Prevalent sampling-based or search-based planners suitable for rigid-body vehicles are not capable of handling the tractor-trailer vehicle cases. This work aims to deal with generic n-trailer cases in the tiny environments. To this end, an optimal control problem is formulated, which is beneficial in being accurate, straightforward, and unified. An adaptively homotopic warm-starting approach is proposed to facilitate the numerical solution process of the formulated optimal control problem. 
Compared with the existing sequential warm starting strategies, our proposal can adaptively define the subproblems with the purpose of making the gaps between adjacent subproblems “pleasant” for the solver. Unification and efficiency of the proposed adaptively homotopic warm-starting approach have been investigated in several extremely tiny scenarios. Our planner finds solutions that other existing planners cannot. Online planning opportunities are briefly discussed as well.) <|cite_end|>- <|cite_start|> (Reference: Optimization-based maneuver planning for a tractor-trailer vehicle in a curvy tunnel: A weak reliance on sampling and search: This study is focused on the maneuver planning problem for a tractor-trailer vehicle in a curvy and tiny tunnel. Due to the curse of dimensionality, the prevalent sampling-and-search-based planners used to handle a rigid-body vehicle well become less efficient when the trailer number grows or when the tunnel narrows. This fact also has impacts on an optimization-based planner if it counts on a sampling-and-search-based initial guess to warm-start. We propose an optimization-based maneuver planner that weakly relies on the sampling and search, hoping to get rid of the curse of dimensionality and thus find optima rapidly. The proposed planner comprises three stages: stage 1 identifies the homotopy class via A* search in a 2D grid map; stage 2 recovers the kinematic feasibility with softened intermediate problems iteratively solved; stage 3 finds an optimum that strictly satisfies the nominal collision-avoidance constraints. Optimization-based planners are commonly known to run slowly, but this work shows that they have obvious advantages over the prevalent sampling-and-search-based planners when the solution space dimension is high and/or the constraints are harsh.) <|cite_end|>. These methods discretize the kinematic constraints with a large sampling period to reduce the number of optimization variables. Nevertheless, the resulting trajectories can violate the tractor-trailer dynamics to a great extent. Therefore, a controller is needed for tractor-trailer vehicles to follow such trajectories with low tracking error and high computational efficiency. Model Predictive Control (MPC) presents an attractive approach to trajectory tracking control, due to its adaptability to performance metrics and constraints <|cite_start|> (Reference: Experimental Validation of Linear and Nonlinear MPC on an Articulated Unmanned Ground Vehicle: This paper focuses on the trajectory tracking control problem for an articulated unmanned ground vehicle. We propose and compare two approaches in terms of performance and computational complexity. The first uses a nonlinear mathematical model derived from first principles and combines a nonlinear model predictive controller (NMPC) with a nonlinear moving horizon estimator (NMHE) to produce a control strategy. The second is based on an input-state linearization (ISL) of the original model followed by linear model predictive control (LMPC). A fast real-time iteration scheme is proposed, implemented for the NMHE-NMPC framework and benchmarked against the ISL-LMPC framework, which is a traditional and cheap method. The experimental results for a time-based trajectory show that the NMHE-NMPC framework with the proposed real-time iteration scheme gives better trajectory tracking performance than the ISL-LMPC framework and the required computation time is feasible for real-time applications.
Moreover, the ISL-LMPC produces results of a quality comparable to the NMHE-NMPC framework at a significantly reduced computational cost.) <|cite_end|>. However, the MPC problem becomes difficult to solve in real time because of the nonlinear terms in the model dynamics and long prediction horizons. Compared to nonlinear models, locally linearized models carry advantages in computational efficiency. However, their accuracy declines when the vehicle states move away from the point of linearization <|cite_start|> (Reference: From linear to nonlinear MPC: bridging the gap via the real-time iteration: Linear model predictive control (MPC) can be currently deployed at outstanding speeds, thanks to recent progress in algorithms for solving online the underlying structured quadratic programs. In contrast, nonlinear MPC (NMPC) requires the deployment of more elaborate algorithms, which require longer computation times than linear MPC. Nonetheless, computational speeds for NMPC comparable to those of MPC are now regularly reported, provided that the adequate algorithms are used. In this paper, we aim at clarifying the similarities and differences between linear MPC and NMPC. In particular, we focus our analysis on NMPC based on the real-time iteration (RTI) scheme, as this technique has been successfully tested and, in some applications, requires computational times that are only marginally larger than linear MPC. The goal of the paper is to promote the understanding of RTI-based NMPC within the linear MPC community.) <|cite_end|>. In addition to the locally linearized models, the Koopman operator has been gaining attention for its ability to predict the flow of nonlinear dynamics using an infinite-dimensional linear model <|cite_start|> (Reference: Hamiltonian Systems and Transformation in Hilbert Space.: ) <|cite_end|>. Extended Dynamic Mode Decomposition (EDMD) and Dynamic Mode Decomposition (DMD) are data-driven tools to identify finite-dimensional approximations of the Koopman operator and hence are applied to approximate a variety of nonlinear dynamics <|cite_start|> (Reference: Dynamic mode decomposition of numerical and experimental data: The description of coherent features of fluid flow is essential to our understanding of fluid-dynamical and transport processes. A method is introduced that is able to extract dynamic information from flow fields that are either generated by a (direct) numerical simulation or visualized/measured in a physical experiment. The extracted dynamic modes, which can be interpreted as a generalization of global stability modes, can be used to describe the underlying physical mechanisms captured in the data sequence or to project large-scale problems onto a dynamical system of significantly fewer degrees of freedom. The concentration on subdomains of the flow field where relevant dynamics is expected allows the dissection of a complex flow into regions of localized instability phenomena and further illustrates the flexibility of the method, as does the description of the dynamics within a spatial framework. Demonstrations of the method are presented consisting of a plane channel flow, flow over a two-dimensional cavity, wake flow behind a flexible membrane and a jet passing between two cylinders.)
<|cite_end|>- <|cite_start|> (Reference: Derivative-Based Koopman Operators for Real-Time Control of Robotic Systems: This paper presents a generalizable methodology for data-driven identification of nonlinear dynamics that bounds the model error in terms of the prediction horizon and the magnitude of the derivatives of the system states. Using higher-order derivatives of general nonlinear dynamics that need not be known, we construct a Koopman operator-based linear representation and utilize Taylor series accuracy analysis to derive an error bound. The resulting error formula is used to choose the order of derivatives in the basis functions and obtain a data-driven Koopman model using a closed-form expression that can be computed in real time. Using the inverted pendulum system, we illustrate the robustness of the error bounds given noisy measurements of unknown dynamics, where the derivatives are estimated numerically. When combined with control, the Koopman representation of the nonlinear system has marginally better performance than competing nonlinear modeling methods, such as SINDy and NARX. In addition, as a linear model, the Koopman approach lends itself readily to efficient control design tools, such as LQR, whereas the other modeling approaches require nonlinear control methods. The efficacy of the approach is further demonstrated with simulation and experimental results on the control of a tail-actuated robotic fish. Experimental results show that the proposed data-driven control approach outperforms a tuned PID (Proportional Integral Derivative) controller and that updating the data-driven model online significantly improves performance in the presence of unmodeled fluid disturbance. This paper is complemented with a video: https://youtu.be/9_wx0tdDta0.) <|cite_end|>. However, the lifting functions used to construct EDMD- and DMD-based Koopman models generally depend on expert selection <|cite_start|> (Reference: Data-driven identification of vehicle dynamics using koopman operator: This paper presents the results of identification of vehicle dynamics using the Koopman operator. The basic idea is to transform the state space of a nonlinear system (a car in our case) to a higher-dimensional space, using so-called basis functions, where the system dynamics is linear. The selection of basis functions is crucial and there is no general approach on how to select them, this paper gives some discussion on this topic. Two distinct approaches for selecting the basis functions are presented. The first approach, based on Extended Dynamic Mode Decomposition, relies heavily on expert basis selection and is completely data-driven. The second approach utilizes the knowledge of the nonlinear dynamics, which is used to construct eigenfunctions of the Koopman operator which are known by definition to evolve linearly along the nonlinear system trajectory. The eigenfunctions are then used as basis functions for prediction. Each approach is presented with a numerical example and discussion on the feasibility of the approach for a nonlinear vehicle system.) <|cite_end|>. This entails considerable tuning in practice. To address this, deep neural networks have been employed to overcome the difficulties in lifting function construction <|cite_start|> (Reference: Learning Deep Neural Network Representations for Koopman Operators of Nonlinear Dynamical Systems: The Koopman operator has recently garnered much attention for its value in dynamical systems analysis and data-driven model discovery.
However, its application has been hindered by the computational complexity of extended dynamic mode decomposition; this requires a combinatorially large basis set to adequately describe many nonlinear systems of interest, e.g. cyber-physical infrastructure systems, biological networks, social systems, and fluid dynamics. Often the dictionaries generated for these problems are manually curated, requiring domain-specific knowledge and painstaking tuning. In this paper we introduce a deep learning framework for learning Koopman operators of nonlinear dynamical systems. We show that this novel method automatically selects efficient deep dictionaries, outperforming state-of-the-art methods. We benchmark this method on partially observed nonlinear systems, including the glycolytic oscillator and show it is able to predict quantitatively 100 steps into the future, using only a single timepoint, and qualitative oscillatory behavior 400 steps into the future.) <|cite_end|>- <|cite_start|> (Reference: Deep Koopman with Control: Spectral Analysis of Soft Robot Dynamics: Soft robots are challenging to model and control as inherent non-linearities (e.g., elasticity and deformation), often requires complex explicit physics-based analytical modeling (e.g., a priori geometric definitions). While machine learning can be used to learn non-linear control models in a data-driven approach, these models often lack an intuitive internal physical interpretation and representation, limiting dynamical analysis. To address this, this paper presents an approach using Koopman operator theory and deep neural networks to provide a global linear description of the non-linear control systems. Specifically, by globally linearising dynamics, the Koopman operator is analyzed using spectral decomposition to characterises important physics-based interpretations, such as functional growths and oscillations. Experiments in this paper demonstrate this approach for controlling non-linear soft robotics, and shows model outputs are interpretable in the context of spectral analysis.) <|cite_end|>; however, the interpretability of such models is limited. Prior work on Koopman-based MPC design primarily focuses on combining linear MPC with linear lifted models to increase computational speed; however, the accuracy of the lifted linear model is not guaranteed when the original states and controls are coupled. Koopman bilinear models have been considered to balance accuracy and computational speed, and characteristics such as bilinearizability and reachability are proved in <|cite_start|> (Reference: Bilinearization, reachability, and optimal control of control-affine nonlinear systems: A Koopman spectral approach: This article considers the problem of bilinearization and optimal control of a control-affine nonlinear system by projecting the system dynamics onto the Koopman eigenspace. Although there are linearization techniques like Carleman linearization for embedding a finite-dimensional nonlinear system into an infinite-dimensional space, they depend on the analytic property of the vector fields and work only on polynomial space. The proposed method utilizes the Koopman canonical transform, specifically the Koopman eigenfunctions of the drift vector field, to transform the dynamics into a bilinear system under certain assumptions. While the bilinearization is exact, if there exists a Koopman-invariant finite-dimensional subspace for the drift vector field, sometimes this condition is too conservative.
An approximate approach is to minimize an $\mathcal {L}^2$ norm on the truncated state-space. The approximation can be carried out from time-series data without explicit knowledge of the drift vector field. Controllability of the bilinear system is analyzed using the Myhill semigroup method and Lie algebraic structures. Pontryagin’s principle is applied to the bilinear system to yield a two-point boundary-value problem for the optimal control design. A single shooting method solves the boundary value problem in order to determine the control signal. Alternatively, a gradient-based method is also outlined to find the optimal control which exploits the bilinear structure. Several examples of control-affine nonlinear systems numerically illustrate the bilinearization and optimal control design, assuming a cost function quadratic in the states and control input.) <|cite_end|>. However, the bilinear term brings difficulties in controller design. To address these difficulties, a few works try to linearize the bilinear models based on the current state of the system <|cite_start|> (Reference: Advantages of Bilinear Koopman Realizations for the Modeling and Control of Systems with Unknown Dynamics: Nonlinear dynamical systems can be made easier to control by lifting them into the space of observable functions, where their evolution is described by the linear Koopman operator. This paper describes how the Koopman operator can be used to generate approximate linear, bilinear, and nonlinear model realizations from data, and argues in favor of bilinear realizations for characterizing systems with unknown dynamics. Necessary and sufficient conditions for a dynamical system to have a valid linear or bilinear realization over a given set of observable functions are presented and used to show that every control-affine system admits an infinite-dimensional bilinear realization, but does not necessarily admit a linear one. Therefore, approximate bilinear realizations constructed from generic sets of basis functions tend to improve as the number of basis functions increases, whereas approximate linear realizations may not. To demonstrate the advantages of bilinear Koopman realizations for control, a linear, bilinear, and nonlinear Koopman model realization of a simulated robot arm are constructed from data. In a trajectory following task, the bilinear realization exceeds the prediction accuracy of the linear realization and the computational efficiency of the nonlinear realization when incorporated into a model predictive control framework.) <|cite_end|> <|cite_start|> (Reference: Autonomous Driving using Linear Model Predictive Control with a Koopman Operator based Bilinear Vehicle Model: ) <|cite_end|>, but this again suffers from the drawbacks of local linearization. Folkestad et al. solve Koopman nonlinear MPC using sequential quadratic programming; however, the Hessian of the Lagrangian cannot be computed directly from the bilinear structure <|cite_start|> (Reference: Koopman NMPC: Koopman-based Learning and Nonlinear Model Predictive Control of Control-affine Systems: Koopman-based learning methods can potentially be practical and powerful tools for dynamical robotic systems. However, common methods to construct Koopman representations seek to learn lifted linear models that cannot capture nonlinear actuation effects inherent in many robotic systems. This paper presents a learning and control methodology that is a first step towards overcoming this limitation.
Using the Koopman canonical transform, control-affine dynamics can be expressed by a lifted bilinear model. The learned model is used for nonlinear model predictive control (NMPC) design where the bilinear structure can be exploited to improve computational efficiency. The benefits for control-affine dynamics compared to existing Koopman-based methods are highlighted through an example of a simulated planar quadrotor. Prediction error is greatly reduced and closed loop performance similar to NMPC with full model knowledge is achieved.) <|cite_end|> <|cite_start|> (Reference: KoopNet: Joint Learning of Koopman Bilinear Models and Function Dictionaries with Application to Quadrotor Trajectory Tracking: Nonlinear dynamical effects are crucial to the operation of many agile robotic systems. Koopman-based model learning methods can capture these nonlinear dynamical system effects in higher dimensional lifted bilinear models that are amenable to optimal control. However, standard methods that lift the system state using a fixed function dictionary before model learning result in high dimensional models that are intractable for real time control. This paper presents a novel method that jointly learns a function dictionary and lifted bilinear model purely from data by incorporating the Koopman model in a neural network architecture. Nonlinear MPC design utilizing the learned model can be performed readily. We experimentally realized this method on a multirotor drone for agile trajectory tracking at low altitudes where the aerodynamic ground effect influences the system's behavior. Experimental results demonstrate that the learning-based controller achieves similar performance as a nonlinear MPC based on a nominal dynamics model in medium altitude. However, our learning-based system can reliably track trajectories in near-ground flight regimes while the nominal controller crashes due to unmodeled dynamical effects that are captured by our method.) <|cite_end|>. The challenge in tractor-trailer trajectory tracking using MPC is the nonlinearity in the dynamics. To address the challenge, we generalize derivative-based Koopman operators <|cite_start|> (Reference: Derivative-Based Koopman Operators for Real-Time Control of Robotic Systems: This paper presents a generalizable methodology for data-driven identification of nonlinear dynamics that bounds the model error in terms of the prediction horizon and the magnitude of the derivatives of the system states. Using higher-order derivatives of general nonlinear dynamics that need not be known, we construct a Koopman operator-based linear representation and utilize Taylor series accuracy analysis to derive an error bound. The resulting error formula is used to choose the order of derivatives in the basis functions and obtain a data-driven Koopman model using a closed-form expression that can be computed in real time. Using the inverted pendulum system, we illustrate the robustness of the error bounds given noisy measurements of unknown dynamics, where the derivatives are estimated numerically. When combined with control, the Koopman representation of the nonlinear system has marginally better performance than competing nonlinear modeling methods, such as SINDy and NARX. In addition, as a linear model, the Koopman approach lends itself readily to efficient control design tools, such as LQR, whereas the other modeling approaches require nonlinear control methods. 
The efficacy of the approach is further demonstrated with simulation and experimental results on the control of a tail-actuated robotic fish. Experimental results show that the proposed data-driven control approach outperforms a tuned PID (Proportional Integral Derivative) controller and that updating the data-driven model online significantly improves performance in the presence of unmodeled fluid disturbance. This paper is complemented with a video: https://youtu.be/9_wx0tdDta0.) <|cite_end|> to Koopman bilinear models and transform the tractor-trailer dynamics into a bilinear model. Then, we propose an iterative strategy to solve the Koopman bilinear MPC problem. Simulation and experimental results demonstrate the strengths of our method. Our contributions are twofold. \begin{itemize} \item We propose a lifting function construction method based on the derivatives of the dynamics, and show that the corresponding infinite-dimensional Koopman bilinear model is equivalent to the original control-affine system (a schematic form of this lifted model is sketched after this excerpt). Moreover, when the derivative order is truncated, we analyze the propagation of the state prediction error and its bounds. Open-loop simulation shows that the proposed Koopman bilinear model captures the unknown parameters in the dynamics with high prediction precision. \item We propose a Koopman bilinear MPC framework (K-BMPC) to solve the bilinear MPC problem using iterative quadratic programming (a code sketch of this iteration also follows this excerpt). In K-BMPC, the bilinear model is linearized around the estimate of the state and control input. The original MPC problem is then transformed into a quadratic programming (QP) problem, and the estimate is updated at each iteration until convergence is reached. Closed-loop simulation results show that the proposed K-BMPC exhibits elevated tracking precision along with commendable computational efficiency. Further, a real-world experiment shows the feasibility of the proposed method. \end{itemize} <|paper_end|>
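As an editorial aside, the lifted model referred to in the first contribution can be sketched as follows (schematic only; the paper's exact lifting construction, truncation analysis, and error bounds are in the full text). For a control-affine system, lifting the state through a dictionary $\Phi$ containing the state and its higher-order (Lie) derivatives along the drift, truncated at some order,
\[
\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)\, u_i, \qquad z = \Phi(x),
\]
yields lifted dynamics of the bilinear form
\[
\dot{z} = A z + \sum_{i=1}^{m} u_i\, B_i z,
\]
exactly in the infinite-dimensional case and approximately after truncation. A finite-dimensional $(A, B_1, \dots, B_m)$ can then be identified from trajectory data by an EDMD-style least-squares fit,
\[
\min_{A,\, B_i} \; \sum_{k} \Big\| \dot{z}_k - A z_k - \sum_{i=1}^{m} u_{k,i}\, B_i z_k \Big\|^2 .
\]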
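A minimal code sketch of the linearize-then-QP iteration described in the second contribution follows, in discrete time with a single input and without inequality constraints, so the QP reduces to regularized least squares. All names (k_bmpc_iteration, the matrices A, B, H, the weights) are illustrative assumptions, not the paper's notation or implementation.

import numpy as np

def k_bmpc_iteration(A, B, H, z0, z_ref, N, lam=1e-2, n_iter=20, tol=1e-6):
    """Sketch of the K-BMPC loop for a discrete-time lifted bilinear model
        z_{k+1} = A z_k + B u_k + u_k * (H @ z_k)   (single input, for brevity).
    Each iteration (i) rolls the bilinear model out under the current control
    estimate, (ii) linearizes the bilinear term around that trajectory, and
    (iii) solves the resulting time-varying linear-quadratic tracking problem
    in batch least-squares form, repeating until the estimate converges."""
    nz = z0.size
    u_bar = np.zeros(N)                              # initial control estimate
    for _ in range(n_iter):
        # (i) Roll out the bilinear model under u_bar.
        z_bar = np.empty((N + 1, nz))
        z_bar[0] = z0
        for k in range(N):
            z_bar[k + 1] = A @ z_bar[k] + B * u_bar[k] + u_bar[k] * (H @ z_bar[k])
        # (ii) Linearize around (z_bar, u_bar):
        #   z_{k+1} ~ (A + u_bar_k H) z_k + (B + H z_bar_k) u_k - u_bar_k (H z_bar_k),
        # and express each predicted state affinely in u: z_k = S_k u + c_k.
        S = np.zeros((N + 1, nz, N))
        c = np.zeros((N + 1, nz))
        c[0] = z0
        for k in range(N):
            Ak = A + u_bar[k] * H
            Bk = B + H @ z_bar[k]
            dk = -u_bar[k] * (H @ z_bar[k])
            S[k + 1] = Ak @ S[k]
            S[k + 1][:, k] += Bk
            c[k + 1] = Ak @ c[k] + dk
        # (iii) min_u sum_k ||z_k - z_ref_k||^2 + lam ||u||^2 as least squares.
        M = S[1:].reshape(N * nz, N)
        b = (z_ref[1:] - c[1:]).reshape(N * nz)
        M_aug = np.vstack([M, np.sqrt(lam) * np.eye(N)])
        b_aug = np.concatenate([b, np.zeros(N)])
        u_new = np.linalg.lstsq(M_aug, b_aug, rcond=None)[0]
        converged = np.linalg.norm(u_new - u_bar) < tol
        u_bar = u_new
        if converged:
            break
    return u_bar  # apply u_bar[0], shift, and re-solve (receding horizon)

In a receding-horizon loop only the first control is applied before re-solving; adding state or input constraints would turn step (iii) into a genuine QP for an off-the-shelf solver.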
[ "<|reference_start|> A novel EPT autonomous motion control framework for an off-axle hitching tractor-trailer system with drawbar: A tractor-trailer system is an underactuated system that is subject to nonholonomic constraints and exhibits extremely nonlinear behaviors. Its trajectory planning and control issues remain widely researched. In this paper, on the basis of establishing a precise kinematic model, a novel method is proposed to realize autonomous motion control of an off-axle hitching tractor-trailer system with a drawbar. The proposed method is roughly divided into three steps: EXTRACTING (E) a virtual subsystem, trajectory PLANNING (P) for the original full system, and trajectory TRACKING (T) by feedback control techniques, thus the “EPT method”. In the E step, two alternative strategies are considered to construct a virtual subsystem by extracting a part of an original towed aircraft system. Then, in the P step, a two-layer optimal control-based method is developed to generate a reference trajectory. A trajectory for the virtual subsystem is generated in the first layer, where the Reeds-Shepp curve with artificial expertise in selecting intermediate nodes is introduced to provide initial guesses. The results obtained in the first layer are utilized to initialize partial state and control variables in the trajectory planning for the original full system in the second layer. Finally, in the T step, a receding horizon controller is designed to drive the carrier aircraft to track the reference trajectory under various external disturbances. Numerical simulations demonstrate that the EPT method is applicable and highly efficient. The proposed method is also applicable for generalized tractor-trailer systems and other chain-link systems. <|reference_end|>", "<|reference_start|> Data-driven identification of vehicle dynamics using koopman operator: This paper presents the results of identification of vehicle dynamics using the Koopman operator. The basic idea is to transform the state space of a nonlinear system (a car in our case) to a higher-dimensional space, using so-called basis functions, where the system dynamics is linear. The selection of basis functions is crucial and there is no general approach on how to select them, this paper gives some discussion on this topic. Two distinct approaches for selecting the basis functions are presented. The first approach, based on Extended Dynamic Mode Decomposition, relies heavily on expert basis selection and is completely data-driven. The second approach utilizes the knowledge of the nonlinear dynamics, which is used to construct eigenfunctions of the Koopman operator which are known by definition to evolve linearly along the nonlinear system trajectory. The eigenfunctions are then used as basis functions for prediction. Each approach is presented with a numerical example and discussion on the feasibility of the approach for a nonlinear vehicle system. <|reference_end|>", "<|reference_start|> Bilinearization, reachability, and optimal control of control-affine nonlinear systems: A Koopman spectral approach: This article considers the problem of bilinearization and optimal control of a control-affine nonlinear system by projecting the system dynamics onto the Koopman eigenspace. Although there are linearization techniques like Carleman linearization for embedding a finite-dimensional nonlinear system into an infinite-dimensional space, they depend on the analytic property of the vector fields and work only on polynomial space. 
The proposed method utilizes the Koopman canonical transform, specifically the Koopman eigenfunctions of the drift vector field, to transform the dynamics into a bilinear system under certain assumptions. While the bilinearization is exact, if there exists a Koopman-invariant finite-dimensional subspace for the drift vector field, sometimes this condition is too conservative. An approximate approach is to minimize an $\\mathcal {L}^2$ norm on the truncated state-space. The approximation can be carried out from time-series data without explicit knowledge of the drift vector field. Controllability of the bilinear system is analyzed using the Myhill semigroup method and Lie algebraic structures. Pontryagin’s principle is applied to the bilinear system to yield a two-point boundary-value problem for the optimal control design. A single shooting method solves the boundary value problem in order to determine the control signal. Alternatively, a gradient-based method is also outlined to find the optimal control which exploits the bilinear structure. Several examples of control-affine nonlinear systems numerically illustrate the bilinearization and optimal control design, assuming a cost function quadratic in the states and control input. <|reference_end|>", "<|reference_start|> Advantages of Bilinear Koopman Realizations for the Modeling and Control of Systems with Unknown Dynamics: Nonlinear dynamical systems can be made easier to control by lifting them into the space of observable functions, where their evolution is described by the linear Koopman operator. This paper describes how the Koopman operator can be used to generate approximate linear, bilinear, and nonlinear model realizations from data, and argues in favor of bilinear realizations for characterizing systems with unknown dynamics. Necessary and sufficient conditions for a dynamical system to have a valid linear or bilinear realization over a given set of observable functions are presented and used to show that every control-affine system admits an infinite-dimensional bilinear realization, but does not necessarily admit a linear one. Therefore, approximate bilinear realizations constructed from generic sets of basis functions tend to improve as the number of basis functions increases, whereas approximate linear realizations may not. To demonstrate the advantages of bilinear Koopman realizations for control, a linear, bilinear, and nonlinear Koopman model realization of a simulated robot arm are constructed from data. In a trajectory following task, the bilinear realization exceeds the prediction accuracy of the linear realization and the computational efficiency of the nonlinear realization when incorporated into a model predictive control framework. <|reference_end|>" ]
[ 1, 9, 12, 13 ]
{"<|cite_1|>": "ss-2501560", "<|cite_2|>": "ss-2501561", "<|cite_3|>": "ss-2386727", "<|cite_4|>": "ss-2501562", "<|cite_5|>": "arxiv-329826", "<|cite_6|>": "ss-687167", "<|cite_7|>": "ss-680615", "<|cite_8|>": "ss-744047", "<|cite_9|>": "arxiv-295653", "<|cite_10|>": "ss-1677069", "<|cite_11|>": "arxiv-132608", "<|cite_12|>": "arxiv-453894", "<|cite_13|>": "ss-792021", "<|cite_14|>": "arxiv-297527", "<|cite_15|>": "ss-809786", "<|cite_16|>": "arxiv-341502", "<|cite_17|>": "ss-1566137", "<|cite_18|>": "arxiv-295653"}
2403.10569
<|paper_start|> Title: Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs in Resource-Constrained Edge Environment Abstract: Achieving Pareto Optimality using Efficient Parameter Reduction for DNNs in Resource-Constrained Edge Environment: This paper proposes an optimization of an existing Deep Neural Network (DNN) that improves its hardware utilization and facilitates on-device training for resource-constrained edge environments. We implement efficient parameter reduction strategies on Xception that shrink the model size without sacrificing accuracy, thus decreasing memory utilization during training. We evaluate our model in two experiments, Caltech-101 image classification and PCB defect detection, and compare its performance against the original Xception and lightweight models, EfficientNetV2B1 and MobileNetV2. The results of the Caltech-101 image classification show that our model has a better test accuracy (76.21%) than Xception (75.89%), uses less memory on average (847.9MB) than Xception (874.6MB), and has faster training and inference times. The lightweight models overfit, with EfficientNetV2B1 having a 30.52% test accuracy and MobileNetV2 a 58.11% test accuracy. Both lightweight models have better memory usage than our model and Xception. On the PCB defect detection task, our model has the best test accuracy (90.30%), compared to Xception (88.10%), EfficientNetV2B1 (55.25%), and MobileNetV2 (50.50%). MobileNetV2 has the least average memory usage (849.4MB), followed by our model (865.8MB), then EfficientNetV2B1 (874.8MB), and Xception has the highest (893.6MB). We further experiment with pre-trained weights and observe that memory usage decreases, thereby showing the benefits of transfer learning. A Pareto analysis of the models' performance shows that our optimized model architecture satisfies both the accuracy and the low memory utilization objectives (a small worked check of this analysis appears at the end of this excerpt). Introduction Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have boosted the proliferation of many AI-based applications and services. It is undeniable that new ML models such as large language models <|cite_start|> (Reference: A Survey of Large Language Models: Language is essentially a complex, intricate system of human expressions governed by grammatical rules. It poses a significant challenge to develop capable AI algorithms for comprehending and grasping a language. As a major approach, language modeling has been widely studied for language understanding and generation in the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling can lead to performance improvement, they further study the scaling effect by increasing the model size to an even larger size. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve a significant performance improvement but also show some special abilities that are not present in small-scale language models. To discriminate the difference in parameter scale, the research community has coined the term large language models (LLM) for the PLMs of significant size. Recently, the research on LLMs has been largely advanced by both academia and industry, and a remarkable progress is the launch of ChatGPT, which has attracted widespread attention from society.
The technical evolution of LLMs has been making an important impact on the entire AI community, which would revolutionize the way how we develop and use AI algorithms. In this survey, we review the recent advances of LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs, namely pre-training, adaptation tuning, utilization, and capacity evaluation. Besides, we also summarize the available resources for developing LLMs and discuss the remaining issues for future directions.) <|cite_end|> and diffusion models <|cite_start|> (Reference: Diffusion Models in Vision: A Survey: Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked at recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens, i.e. low speeds due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.) <|cite_end|> are changing our lifestyles. However, these AI models require intensive computational resources such as CPU, GPU, memory, and network that only the cloud can offer. Cloud computing has been a key enabler of many new technologies <|cite_start|> (Reference: Fostering new Vertical and Horizontal IoT Applications with Intelligence Everywhere: Intelligence Everywhere is predicated on the seamless integration of IoT networks transporting a vast amount of data streams through many computing resources across an edge-to-cloud continuum, relying on the orchestration of distributed machine learning models. The result is an interconnected and collective intelligent ecosystem where devices, systems, services, and users work together to support IoT applications. This paper discusses the state-of-the-art research and the principles of the Intelligence Everywhere framework for enhancing IoT applications in vertical sectors such as Digital Health, Infrastructure, and Transportation/Mobility in the context of intelligent society (Society 5.0). It also introduces a novel perspective for the development of horizontal IoT applications, capable of running across various IoT networks while fostering collective intelligence across diverse sectors. 
Finally, this paper provides comprehensive insights into the challenges and opportunities for harnessing collective knowledge from real-time insights, leading to optimised processes and better overall collaboration across different IoT sectors.) <|cite_end|>, such as IoT and AR/VR, by providing virtually unlimited resources, including on-demand storage and high computing power. By leveraging these advantages, many powerful AI models such as Segment Anything (SAM) <|cite_start|> (Reference: Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.) <|cite_end|> are trained and deployed in the cloud. However, there are several drawbacks to ML models that rely completely on the cloud. For example, ML models running in the cloud depend entirely on external infrastructure, leading to potential service interruptions and downtime if the cloud service provider experiences outages or technical issues. Moreover, cloud-based ML models may face latency and performance issues since their quality of service is closely tied to network quality. Unstable connectivity, bandwidth limitations, and network delays can cause AI service interruptions. These challenges motivate the need to rely on edge computing for training ML models. The rapid development of mobile chipsets and hardware accelerators has improved edge devices' computing power significantly <|cite_start|> (Reference: Communication-Efficient Edge AI: Algorithms and Systems: Artificial intelligence (AI) has achieved remarkable breakthroughs in a wide range of fields, ranging from speech processing, image classification to drug discovery. This is driven by the explosive growth of data, advances in machine learning (especially deep learning), and easy access to vastly powerful computing resources. Particularly, the wide scale deployment of edge devices (e.g., IoT devices) generates an unprecedented scale of data, which provides the opportunity to derive accurate models and develop various intelligent applications at the network edge. However, such enormous data cannot all be sent from end devices to the cloud for processing, due to the varying channel quality, traffic congestion and/or privacy concerns. By pushing inference and training processes of AI models to edge nodes, edge AI has emerged as a promising alternative. AI at the edge requires close cooperation among edge devices, such as smart phones and smart vehicles, and edge servers at the wireless access points and base stations, which however result in heavy communication overheads. In this paper, we present a comprehensive survey of the recent developments in various techniques for overcoming these communication challenges. Specifically, we first identify key communication challenges in edge AI systems.
We then introduce communication-efficient techniques, from both algorithmic and system perspectives for training and inference tasks at the network edge. Potential future research directions are also highlighted.) <|cite_end|>. This has led to a shift from deploying models in the cloud to the edge, where AI functionalities are diffused, converged, and embedded into resource-constrained devices in physical proximity to the users, such as micro data centers, cloudlets, edge nodes, routers, and smart gateways. However, this shift is still only partially realized and has not yet taken full advantage of the power of edge computing. The literature highlights this gap: existing solutions typically deploy only inference models at the edge <|cite_start|> (Reference: A CNN-Based Smart Waste Management System Using TensorFlow Lite and LoRa-GPS Shield in Internet of Things Environment: Urban areas are facing challenges in waste management systems due to the rapid growth of population in cities, causing huge amount of waste generation. As traditional waste management system is highly inefficient and costly, the waste of resources can be utilized efficiently with the integration of the internet of things (IoT) and deep learning model. The main purpose of this research is to develop a smart waste management system using the deep learning model that improves the waste segregation process and enables monitoring of bin status in an IoT environment. The SSD MobileNetV2 Quantized is used and trained with the dataset that consists of paper, cardboard, glass, metal, and plastic for waste classification and categorization. By integrating the trained model on TensorFlow Lite and Raspberry Pi 4, the camera module detects the waste and the servo motor, connected to a plastic board, categorizes the waste into the respective waste compartment. The ultrasonic sensor monitors the waste fill percentage, and a GPS module obtains the real-time latitude and longitude. The LoRa module on the smart bin sends the status of the bin to the LoRa receiver at 915 MHz. The electronic components of the smart bin are protected with RFID based locker, where only the registered RFID tag can be used to unlock for maintenance or upgrading purposes.) <|cite_end|> <|cite_start|> (Reference: A Neural Network-Based On-device Learning Anomaly Detector for Edge Devices: Semi-supervised anomaly detection is an approach to identify anomalies by learning the distribution of normal data. Backpropagation neural networks (i.e., BP-NNs) based approaches have recently drawn attention because of their good generalization capability. In a typical situation, BP-NN-based models are iteratively optimized in server machines with input data gathered from edge devices. However, (1) the iterative optimization often requires significant efforts to follow changes in the distribution of normal data (i.e., concept drift), and (2) data transfers between edge and server impose additional latency and energy consumption. To address these issues, we propose ONLAD and its IP core, named ONLAD Core. ONLAD is highly optimized to perform fast sequential learning to follow concept drift in less than one millisecond. ONLAD Core realizes on-device learning for edge devices at low power consumption, which realizes standalone execution where data transfers between edge and server are not required. Experiments show that ONLAD has favorable anomaly detection capability in an environment that simulates concept drift.
Evaluations of ONLAD Core confirm that the training latency is 1.95x~6.58x faster than the other software implementations. Also, the runtime power consumption of ONLAD Core implemented on PYNQ-Z1 board, a small FPGA/CPU SoC platform, is 5.0x~25.4x lower than them.) <|cite_end|> <|cite_start|> (Reference: Edge-Cloud Intelligence in Self-Diagnostic of Land Mobile Radio Systems: IIoT sensors are usually deployed on a massive scale with stringent scalability, modularity, and interoperability requirements. It is indisputable that they produce a large amount of high-speed and heterogeneous data streams that pose many challenges to perform management, processing, and analytical tasks. This paper proposes an integrated edge-cloud continuum platform that can harvest IIoT data streams from a variety of sensors deployed at a remote RF site; and can harmonize different machine learning models for diagnosing problems that enhance infrastructure monitoring and long-term structural resilience. A real-world experiment was carried out to evaluate the proposed platform for supporting a self-diagnostic process for intelligent maintenance of Land Mobile Radio (LMR) infrastructures.) <|cite_end|>. Training on the edge can prove beneficial when there are variations between training and deployment environments, and it can also address the viewpoint problem <|cite_start|> (Reference: Training on the Edge: The why and the how: Edge computing is the natural progression from Cloud computing, where, instead of collecting all data and processing it centrally, like in a cloud computing environment, we distribute the computing power and try to do as much processing as possible, close to the source of the data. There are various reasons this model is being adopted quickly, including privacy, and reduced power and bandwidth requirements on the Edge nodes. While it is common to see inference being done on Edge nodes today, it is much less common to do training on the Edge. The reasons for this range from computational limitations, to it not being advantageous in reducing communications between the Edge nodes. In this paper, we explore some scenarios where it is advantageous to do training on the Edge, as well as the use of checkpointing strategies to save memory.) <|cite_end|>. However, the main challenge of training on the edge is the availability of computing resources, as modern deep learning architectures are inherently computationally intensive. Although various lightweight deep learning models <|cite_start|> (Reference: EfficientNetV2: Smaller Models and Faster Training: This paper introduces EfficientNetV2, a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop this family of models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv. Our experiments show that EfficientNetV2 models train much faster than state-of-the-art models while being up to 6.8x smaller. Our training can be further sped up by progressively increasing the image size during training, but it often causes a drop in accuracy. To compensate for this accuracy drop, we propose to adaptively adjust regularization (e.g., dropout and data augmentation) as well, such that we can achieve both fast training and good accuracy.
With progressive learning, our EfficientNetV2 significantly outperforms previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources. Code will be available at https://github.com/google/automl/tree/master/efficientnetv2.) <|cite_end|> <|cite_start|> (Reference: MobileNetV2: Inverted Residuals and Linear Bottlenecks: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters) <|cite_end|> have been proposed, they do not perform as well as their heavyweight counterparts. This leaves a research gap in developing deep learning architectures suitable for training on resource-constrained edge devices. We therefore aim to answer the following research question: \textit{``Can deep learning models be optimized to facilitate training at the edge with limited resources while maintaining high accuracy and reducing resource consumption?"} In this paper, we optimize an existing deep neural network architecture with state-of-the-art performance to improve its on-device training in an edge environment. We adopt strategies described by Iandola et al. <|cite_start|> (Reference: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size: Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). The SqueezeNet architecture is available for download here: this https URL) <|cite_end|> that enable low model size while preserving high accuracy. For the existing deep learning model, we choose Xception <|cite_start|> (Reference: Xception: Deep Learning with Depthwise Separable Convolutions: We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.) <|cite_end|> as a backbone to integrate the strategies and implement two experiments to evaluate its training performance. We compare the results against the original Xception as a baseline and also against the lightweight models EfficientNetV2B1 and MobileNetV2. The main contributions of this work are as follows: \begin{enumerate} \item We present an optimization of existing deep neural networks, which facilitates efficient hardware utilization for training in resource-constrained edge environments. \item We implement this optimization on the Xception architecture and evaluate its performance in terms of accuracy, memory usage, and inference latency on Caltech-101 and a PCB defect detection task. \item We explore the benefits of transfer learning on the resource utilization of models by comparing the performance of pretrained models versus non-pretrained models. \end{enumerate} The remainder of this paper is structured as follows: Section 1 introduces the background of Edge AI, its challenges, and the problem to be solved. Section 2 discusses relevant related work involving model optimization and machine learning with edge devices. Section 3 describes our implementation of a memory-efficient optimization using efficient parameter reduction. In Section 4, we implement our proposed architecture on an edge device, experiment on Caltech-101 image classification and a PCB defect detection task, and present our findings. We analyze the results of our experiments and conclude the paper in Section 5. Related Work This section presents a summary of the related works in this research area. We first discuss relevant literature on model optimization techniques and then proceed to explore literature on machine learning with edge devices. \subsection{Model Optimization using Post-Training Quantization} The most common model optimization methods involve the use of compression techniques for improved hardware performance. One such method is post-training quantization (PTQ), which includes techniques to reduce hardware utilization and model size.
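To make this concrete before reviewing the literature, the sketch below shows what a minimal PTQ workflow looks like with the TensorFlow Lite converter. It is an illustration only, not the method proposed in this paper: the untrained Xception instance and the random calibration generator are placeholder assumptions, and a real deployment would start from a trained model and calibrate on representative task data.

\begin{verbatim}
# Minimal post-training quantization sketch with TensorFlow Lite.
# The model and calibration inputs are illustrative placeholders.
import tensorflow as tf

model = tf.keras.applications.Xception(weights=None, classes=101)

def representative_dataset():
    # A few typical input batches let the converter estimate
    # activation ranges for the fixed-point conversion.
    for _ in range(100):
        yield [tf.random.uniform([1, 299, 299, 3])]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open("xception_quantized.tflite", "wb") as f:
    f.write(tflite_model)
\end{verbatim}

Note that the converted model is meant for inference only; the training pipeline itself is untouched, which is the limitation that leads us to set PTQ aside later in this section.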
Post-training quantization converts a pre-trained FP32 network into a fixed-point network through various quantization methods while omitting the original training pipeline <|cite_start|> (Reference: A White Paper on Neural Network Quantization: While neural networks have advanced the frontiers in many applications, they often come at a high computational cost. Reducing the power and latency of neural network inference is key if we want to integrate modern networks into edge devices with strict power and compute requirements. Neural network quantization is one of the most effective ways of achieving these savings but the additional noise it induces can lead to accuracy degradation. In this white paper, we introduce state-of-the-art algorithms for mitigating the impact of quantization noise on the network's performance while maintaining low-bit weights and activations. We start with a hardware motivated introduction to quantization and then consider two main classes of algorithms: Post-Training Quantization (PTQ) and Quantization-Aware-Training (QAT). PTQ requires no re-training or labelled data and is thus a lightweight push-button approach to quantization. In most cases, PTQ is sufficient for achieving 8-bit quantization with close to floating-point accuracy. QAT requires fine-tuning and access to labeled training data but enables lower bit quantization with competitive results. For both solutions, we provide tested pipelines based on existing literature and extensive experimentation that lead to state-of-the-art performance for common deep learning models and tasks.) <|cite_end|>. Post-training quantization has been widely used as a model compression technique. Habi et al. <|cite_start|> (Reference: HPTQ: Hardware-Friendly Post Training Quantization: Neural network quantization enables the deployment of models on edge devices. An essential requirement for their hardware efficiency is that the quantizers are hardware-friendly: uniform, symmetric, and with power-of-two thresholds. To the best of our knowledge, current post-training quantization methods do not support all of these constraints simultaneously. In this work, we introduce a hardware-friendly post training quantization (HPTQ) framework, which addresses this problem by synergistically combining several known quantization methods. We perform a large-scale study on four tasks: classification, object detection, semantic segmentation and pose estimation over a wide variety of network architectures. Our extensive experiments show that competitive results can be obtained under hardware-friendly constraints.) <|cite_end|> proposed a hardware-friendly post-training quantization (HPTQ) framework that achieves hardware efficiency by combining several quantization techniques such as channel equalization, threshold selection, per-channel quantization, shift negative correction, and bias correction. They achieve a peak accuracy of 75.018\% on ImageNet with a quantized ResNet50. Banner et al. <|cite_start|> (Reference: Post Training 4-Bit Quantization of Convolutional Networks for Rapid-Deployment: Convolutional neural networks require significant memory bandwidth and storage for intermediate computations, apart from substantial computing resources. Neural network quantization has significant benefits in reducing the amount of intermediate results, but it often requires the full datasets and time-consuming fine tuning to recover the accuracy lost after quantization.
This paper introduces the first practical 4-bit post training quantization approach: it does not involve training the quantized model (fine-tuning), nor it requires the availability of the full dataset. We target the quantization of both activations and weights and suggest three complementary methods for minimizing quantization error at the tensor level, two of whom obtain a closed-form analytical solution. Combining these methods, our approach achieves accuracy that is just a few percents less the state-of-the-art baseline across a wide range of convolutional models. The source code to replicate all experiments is available on GitHub: \url{this https URL}.) <|cite_end|> proposed a 4-bit PTQ approach that targets both weight and activation quantization, along with methods for minimizing quantization error. Wu et al. <|cite_start|> (Reference: Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation: Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large.) <|cite_end|> proposed an 8-bit quantization approach that maintains accuracy comparable to the FP32 baseline on hard-to-quantize networks such as MobileNets and BERT. Several other PTQ methods have been proposed, such as loss-aware post-training quantization <|cite_start|> (Reference: Loss Aware Post-training Quantization: Neural network quantization enables the deployment of large models on resource-constrained devices. Current post-training quantization methods fall short in terms of accuracy for INT4 (or lower) but provide reasonable accuracy for INT8 (or above). In this work, we study the effect of quantization on the structure of the loss landscape. Additionally, we show that the structure is flat and separable for mild quantization, enabling straightforward post-training quantization methods to achieve good results. We show that with more aggressive quantization, the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters more challenging. Armed with this understanding, we design a method that quantizes the layer parameters jointly, enabling significant accuracy improvement over current post-training quantization methods. Reference implementation is available at https://github.com/ynahshan/nn-quantization-pytorch/tree/master/lapq) <|cite_end|>, post-training piecewise linear quantization <|cite_start|> (Reference: Post-Training Piecewise Linear Quantization for Deep Neural Networks: Quantization plays an important role in the energy-efficient deployment of deep neural networks on resource-limited devices. Post-training quantization is highly desirable since it does not require retraining or access to the full training dataset.
The well-established uniform scheme for post-training quantization achieves satisfactory results by converting neural networks from full-precision to 8-bit fixed-point integers. However, it suffers from significant performance degradation when quantizing to lower bit-widths. In this paper, we propose a piecewise linear quantization (PWLQ) scheme to enable accurate approximation for tensor values that have bell-shaped distributions with long tails. Our approach breaks the entire quantization range into non-overlapping regions for each tensor, with each region being assigned an equal number of quantization levels. Optimal breakpoints that divide the entire range are found by minimizing the quantization error. Compared to state-of-the-art post-training quantization methods, experimental results show that our proposed method achieves superior performance on image classification, semantic segmentation, and object detection with minor overhead.) <|cite_end|>, and adaptive rounding for post-training quantization. The main challenge with these compression techniques lies in the iterative training process, which makes it difficult to apply complex optimization algorithms. As such, model compression techniques are typically used for inference, as they are difficult to exploit for speeding up training <|cite_start|> (Reference: Model compression and hardware acceleration for neural networks: A comprehensive survey: Domain-specific hardware is becoming a promising topic in the backdrop of improvement slow down for general-purpose processors due to the foreseeable end of Moore’s Law. Machine learning, especially deep neural networks (DNNs), has become the most dazzling domain witnessing successful applications in a wide spectrum of artificial intelligence (AI) tasks. The incomparable accuracy of DNNs is achieved by paying the cost of hungry memory consumption and high computational complexity, which greatly impedes their deployment in embedded systems. Therefore, the DNN compression concept was naturally proposed and widely used for memory saving and compute acceleration. In the past few years, a tremendous number of compression techniques have sprung up to pursue a satisfactory tradeoff between processing efficiency and application accuracy. Recently, this wave has spread to the design of neural network accelerators for gaining extremely high performance. However, the amount of related works is incredibly huge and the reported approaches are quite divergent. This research chaos motivates us to provide a comprehensive survey on the recent advances toward the goal of efficient compression and execution of DNNs without significantly compromising accuracy, involving both the high-level algorithms and their applications in hardware design. In this article, we review the mainstream compression approaches such as compact model, tensor decomposition, data quantization, and network sparsification. We explain their compression principles, evaluation metrics, sensitivity analysis, and joint-way use. Then, we answer the question of how to leverage these methods in the design of neural network accelerators and present the state-of-the-art hardware architectures. In the end, we discuss several existing issues such as fair comparison, testing workloads, automatic compression, influence on security, and framework/hardware-level support, and give promising topics in this field and the possible challenges as well.
This article attempts to enable readers to quickly build up a big picture of neural network compression and acceleration, clearly evaluate various methods, and confidently get started in the right way.) <|cite_end|>, and we therefore exclude PTQ from our approach. \subsection{Model Optimization with Neural Architecture Search} Model optimization can also be framed as a neural architecture search (NAS) process, in which a controller searches for the best architecture for a given task according to objectives such as accuracy, latency, and resource utilization. Various NAS approaches exist, such as NAS-RL <|cite_start|> (Reference: Neural Architecture Search with Reinforcement Learning: Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.) <|cite_end|>, ENAS <|cite_start|> (Reference: Efficient Neural Architecture Search via Parameter Sharing: We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (Zoph et al., 2018), whose test error is 2.65%.) <|cite_end|>, DARTS <|cite_start|> (Reference: DARTS: Differentiable Architecture Search: This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.
Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.) <|cite_end|>, efficient architecture search <|cite_start|> (Reference: Efficient Architecture Search by Network Transformation: Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23\% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters.) <|cite_end|>, and PNAS <|cite_start|> (Reference: Progressive Neural Architecture Search: We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.) <|cite_end|>. Although NAS often yields successful results, the search process is usually long and resource-intensive.
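To illustrate where this cost comes from, the toy sketch below shows the skeleton of a multi-objective search. The search space, the objective weights, and the placeholder validation accuracy are all illustrative assumptions, and random sampling stands in for the reinforcement-learning, evolutionary, or gradient-based controllers cited above.

\begin{verbatim}
# Toy multi-objective NAS skeleton: sample candidate architectures,
# score each on accuracy and parameter count, keep the best trade-off.
import random
import tensorflow as tf

def build_candidate(num_blocks, filters):
    # One point in a tiny search space: depth and width are the
    # searchable hyperparameters.
    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = inputs
    for _ in range(num_blocks):
        x = tf.keras.layers.SeparableConv2D(
            filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

def score(val_acc, num_params, cost_weight=1e-7):
    # Scalarized multi-objective: reward accuracy, penalize size.
    return val_acc - cost_weight * num_params

best = None
for _ in range(20):
    num_blocks = random.choice([2, 3, 4])
    filters = random.choice([32, 64, 128])
    model = build_candidate(num_blocks, filters)
    # In a real search, val_acc comes from training and evaluating
    # each candidate; that per-candidate training is what makes NAS
    # expensive. A constant stands in to keep the sketch runnable.
    val_acc = 0.5
    s = score(val_acc, model.count_params())
    if best is None or s > best[0]:
        best = (s, num_blocks, filters)

print("best (score, blocks, filters):", best)
\end{verbatim}

Constraining this space with prior architectural knowledge, as discussed next, is one way to keep such a loop tractable on resource-constrained hardware.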
Searching from scratch fails to take advantage of existing neural architectures and overlooks the architecture design expertise that already exists <|cite_start|> (Reference: A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions: Deep learning has made breakthroughs and substantial in many fields due to its powerful automatic representation capabilities. It has been proven that neural architecture design is crucial to the feature representation of data and the final performance. However, the design of the neural architecture heavily relies on the researchers' prior knowledge and experience. And due to the limitations of human' inherent knowledge, it is difficult for people to jump out of their original thinking paradigm and design an optimal model. Therefore, an intuitive idea would be to reduce human intervention as much as possible and let the algorithm automatically design the neural architecture. Neural Architecture Search (NAS) is just such a revolutionary algorithm, and the related research work is complicated and rich. Therefore, a comprehensive and systematic survey on the NAS is essential. Previously related surveys have begun to classify existing work mainly based on the key components of NAS: search space, search strategy, and evaluation strategy. While this classification method is more intuitive, it is difficult for readers to grasp the challenges and the landmark work involved. Therefore, in this survey, we provide a new perspective: beginning with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early NAS algorithms, and then providing solutions for subsequent related research work. Besides, we conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we provide some possible future research directions.) <|cite_end|>. As such, NAS has been explored by using existing model architectures as baselines that define the search space. Li et al. <|cite_start|> (Reference: Pareto Optimization of CNN Models via Hardware-Aware Neural Architecture Search for Drainage Crossing Classification on Resource-Limited Devices: Embedded devices, constrained by limited memory and processors, require deep learning models to be tailored to their specifications. This research explores customized model architectures for classifying drainage crossing images. Building on the foundational ResNet-18, this paper aims to maximize prediction accuracy, reduce memory size, and minimize inference latency. Various configurations were systematically probed by leveraging hardware-aware neural architecture search, accumulating 1,717 experimental results over six benchmarking variants. The experimental data analysis, enhanced by nn-Meter, provided a comprehensive understanding of inference latency across four different predictors. Significantly, a Pareto front analysis with three objectives of accuracy, latency, and memory resulted in five non-dominated solutions. These standout models showcased efficiency while retaining accuracy, offering a compelling alternative to the conventional ResNet-18 when deployed in resource-constrained environments. The paper concludes by highlighting insights drawn from the results and suggesting avenues for future exploration.) <|cite_end|> used a ResNet-18 backbone for their search process, while Lyu et al.
<|cite_start|> (Reference: Resource-constrained neural architecture search on edge devices: The performance requirement of deep learning inevitably brings up with the expense of high computational complexity and memory requirements, to make it problematic for the deployment on resource-constrained devices. Edge computing, which distributedly organizes the computing node close to the data source and end-device, provides a feasible way to tackle the high-efficiency demand and substantial computational load. Whereas given edge device is resource-constrained and energy-sensitive, designing effective neural network architecture for specific edge device is urgent in the sense that deploys the deep learning application by the edge computing solution. Undoubtedly manually design the high-performing neural architectures is burdensome, let alone taking account of the resource-constraint for the specific platform. Fortunately, the success of Neural Architecture Search techniques come up with hope recently. This paper dedicates to directly employ multi-objective NAS on the resource-constrained edge devices. We first propose the framework of multi-objective NAS on edge device, which comprehensively considers the performance and real-world efficiency. Our improved MobileNet-V2 search space also strikes the scalability and practicality, so that a series of Pareto-optimal architectures are received. Benefits from the directness and specialization during search procedure, our experiment on JETSON NANO shows the comparable result with the state-of-the-art models on ImageNet.) <|cite_end|> used MobileNetV2. The limitation of these works is their choice of lightweight models, which already have a significantly lower accuracy than heavyweight models. By re-designing a heavyweight architecture, we explore new ways of improving the resource efficiency of the architecture without losing significant accuracy. Pre-defining search patterns can help guide the process towards better architectural decisions and therefore shorten the search by constraining the search space. Shortening the search process is a suitable trade-off to consider for resource-constrained devices. Establishing these patterns, however, requires prior knowledge of architectural design to guarantee the success of the pattern. In this paper, we focus on establishing the success of an optimization method (which could become a search pattern) and therefore do not adopt NAS outright. \subsection{AI Models at the Edge} Many studies have implemented artificial intelligence on the edge. Nikouei et al. <|cite_start|> (Reference: Real-Time Human Detection as an Edge Service Enabled by a Lightweight CNN: Edge computing allows more computing tasks to take place on the decentralized nodes at the edge of networks. Today many delay sensitive, mission-critical applications can leverage these edge devices to reduce the time delay or even to enable real time, online decision making thanks to their onsite presence. Human objects detection, behavior recognition and prediction in smart surveillance fall into that category, where a transition of a huge volume of video streaming data can take valuable time and place heavy pressure on communication networks. It is widely recognized that video processing and object detection are computing intensive and too expensive to be handled by resource limited edge devices.
Inspired by the depthwise separable convolution and Single Shot Multi-Box Detector (SSD), a lightweight Convolutional Neural Network (LCNN) is introduced in this paper. By narrowing down the classifier's searching space to focus on human objects in surveillance video frames, the proposed LCNN algorithm is able to detect pedestrians with an affordable computation workload to an edge device. A prototype has been implemented on an edge node (Raspberry PI 3) using openCV libraries, and satisfactory performance is achieved using real world surveillance video streams. The experimental study has validated the design of LCNN and shown it is a promising approach to computing intensive applications at the edge.) <|cite_end|> developed a lightweight CNN (L-CNN) using depthwise separable convolution and a Single Shot Multi-Box Detector (SSD) for human object detection and deployed the model on an edge device; Sallang et al. <|cite_start|> (Reference: A CNN-Based Smart Waste Management System Using TensorFlow Lite and LoRa-GPS Shield in Internet of Things Environment: Urban areas are facing challenges in waste management systems due to the rapid growth of population in cities, causing huge amount of waste generation. As traditional waste management system is highly inefficient and costly, the waste of resources can be utilized efficiently with the integration of the internet of things (IoT) and deep learning model. The main purpose of this research is to develop a smart waste management system using the deep learning model that improves the waste segregation process and enables monitoring of bin status in an IoT environment. The SSD MobileNetV2 Quantized is used and trained with the dataset that consists of paper, cardboard, glass, metal, and plastic for waste classification and categorization. By integrating the trained model on TensorFlow Lite and Raspberry Pi 4, the camera module detects the waste and the servo motor, connected to a plastic board, categorizes the waste into the respective waste compartment. The ultrasonic sensor monitors the waste fill percentage, and a GPS module obtains the real-time latitude and longitude. The LoRa module on the smart bin sends the status of the bin to the LoRa receiver at 915 MHz. The electronic components of the smart bin are protected with RFID based locker, where only the registered RFID tag can be used to unlock for maintenance or upgrading purposes.) <|cite_end|> deployed a MobileNetV2-based SSD on a Raspberry Pi 4 for smart waste management; and Sreekumar et al. <|cite_start|> (Reference: Real-time traffic pattern collection and analysis model for intelligent traffic intersection: The traffic congestion hits most big cities in the world - threatening long delays and serious reductions in air quality. City and local government officials continue to face challenges in optimizing crowd flow, synchronizing traffic and mitigating threats or dangerous situations. One of the major challenges faced by city planners and traffic engineers is developing a robust traffic controller that eliminates traffic congestion and imbalanced traffic flow at intersections. Ensuring that traffic moves smoothly and minimizing the waiting time in intersections requires automated vehicle detection techniques for controlling the traffic light automatically, which are still challenging problems.
In this paper, we propose an intelligent traffic pattern collection and analysis model, named TPCAM, based on traffic cameras to help in smooth vehicular movement on junctions and set to reduce the traffic congestion. Our traffic detection and pattern analysis model aims at detecting and calculating the traffic flux of vehicles and pedestrians at intersections in real-time. Our system can utilize one camera to capture all the traffic flows in one intersection instead of multiple cameras, which will reduce the infrastructure requirement and potential for easy deployment. We propose a new deep learning model based on YOLOv2 and adapt the model for the traffic detection scenarios. To reduce the network burdens and eliminate the deployment of network backbone at the intersections, we propose to process the traffic video data at the network edge without transmitting the big data back to the cloud. To improve the processing frame rate at the edge, we further propose deep object tracking algorithm leveraging adaptive multi-modal models and make it robust to object occlusions and varying lighting conditions. Based on the deep learning based detection and tracking, we can achieve pseudo-30FPS via adaptive key frame selection.) <|cite_end|> designed a real-time traffic pattern collection method using YOLOv2 deployed on an edge device. Beyond the use of edge devices for deploying machine learning models, other authors explored performing on-device training. Kukreja et al. <|cite_start|> (Reference: Training on the Edge: The why and the how: Edge computing is the natural progression from Cloud computing, where, instead of collecting all data and processing it centrally, like in a cloud computing environment, we distribute the computing power and try to do as much processing as possible, close to the source of the data. There are various reasons this model is being adopted quickly, including privacy, and reduced power and bandwidth requirements on the Edge nodes. While it is common to see inference being done on Edge nodes today, it is much less common to do training on the Edge. The reasons for this range from computational limitations, to it not being advantageous in reducing communications between the Edge nodes. In this paper, we explore some scenarios where it is advantageous to do training on the Edge, as well as the use of checkpointing strategies to save memory.) <|cite_end|> proposed using a student-teacher model for training, where a teacher model is trained on an object and used to update the dataset with different viewpoints on which student models are trained. They also discuss the use of checkpointing to reduce the memory consumption of the training process. Tsukada et al. <|cite_start|> (Reference: A Neural Network-Based On-device Learning Anomaly Detector for Edge Devices: Semi-supervised anomaly detection is an approach to identify anomalies by learning the distribution of normal data. Backpropagation neural networks (i.e., BP-NNs) based approaches have recently drawn attention because of their good generalization capability. In a typical situation, BP-NN-based models are iteratively optimized in server machines with input data gathered from edge devices. However, (1) the iterative optimization often requires significant efforts to follow changes in the distribution of normal data (i.e., concept drift), and (2) data transfers between edge and server impose additional latency and energy consumption. To address these issues, we propose ONLAD and its IP core, named ONLAD Core. 
ONLAD is highly optimized to perform fast sequential learning to follow concept drift in less than one millisecond. ONLAD Core realizes on-device learning for edge devices at low power consumption, which realizes standalone execution where data transfers between edge and server are not required. Experiments show that ONLAD has favorable anomaly detection capability in an environment that simulates concept drift. Evaluations of ONLAD Core confirm that the training latency is 1.95x~6.58x faster than the other software implementations. Also, the runtime power consumption of ONLAD Core implemented on PYNQ-Z1 board, a small FPGA/CPU SoC platform, is 5.0x~25.4x lower than them.) <|cite_end|> proposed an On-device Learning Anomaly Detector (ONLAD), which combines sequential learning with semi-supervision and an autoencoder to reduce computational cost. They developed a hardware implementation of their method called ONLAD Core, on which they performed on-device training. Similar to these works, we emphasize on-device training by implementing and training our models on the edge. However, we differ from these approaches by optimizing a deep neural network to make it lightweight and computationally efficient enough to train on the edge. This enables us to leverage a well-proven architecture to obtain higher accuracy than other lightweight models. <|paper_end|>
1211.1343-1
<|cite_start|> (Reference: On a functional contraction method: Methods for proving functional limit laws are developed for sequences of stochastic processes which allow a recursive distributional decomposition either in time or space. Our approach is an extension of the so-called contraction method to the space $\mathcal{C}[0,1]$ of continuous functions endowed with uniform topology and the space $\mathcal{D}[0,1]$ of c\`{a}dl\`{a}g functions with the Skorokhod topology. The contraction method originated from the probabilistic analysis of algorithms and random trees where characteristics satisfy natural distributional recurrences. It is based on stochastic fixed-point equations, where probability metrics can be used to obtain contraction properties and allow the application of Banach's fixed-point theorem. We develop the use of the Zolotarev metrics on the spaces $\mathcal{C}[0,1]$ and $\mathcal{D}[0,1]$ in this context. Applications are given, in particular, a short proof of Donsker's functional limit theorem is derived and recurrences arising in the probabilistic analysis of algorithms are discussed.) <|cite_end|> for a recent development in function spaces. A somewhat similar approach towards functional convergence results, relying on first establishing one-dimensional convergence at a specific point, can be found in the context of the Quicksort algorithm in <|cite_start|> (Reference: Almost sure convergence to the Quicksort process: ) <|cite_end|>. \subsection{Related work on random laminations of the disk} The work of <|cite_start|> (Reference: Random recursive triangulations of the disk via fragmentation theory: We introduce and study an infinite random triangulation of the unit disk that arises as the limit of several recursive models. This triangulation is generated by throwing chords uniformly at random in the unit disk and keeping only those chords that do not intersect the previous ones. After throwing infinitely many chords and taking the closure of the resulting set, one gets a random compact subset of the unit disk whose complement is a countable union of triangles. We show that this limiting random set has Hausdorff dimension $\beta^*+1$, where $\beta^*=(\sqrt{17}-3)/2$, and that it can be described as the geodesic lamination coded by a random continuous function which is H\"{o}lder continuous with exponent $\beta^*-\varepsilon$, for every $\varepsilon>0$. We also discuss recursive constructions of triangulations of the $n$-gon that give rise to the same continuous limit when $n$ tends to infinity.) <|cite_end|> was motivated by the pioneering work of <|cite_start|> (Reference: Triangulating the circle, at random: $c_m = (2m-2)!/((m-1)!\,m!)$ One of the interesting aspects of Polya's paper is that it exposed readers to his newly developed theory of "figurate series". We wish to consider the idea of letting $n \to \infty$ and studying triangulations of the $\infty$-gon, i.e. the circle. This question doesn't make much sense as combinatorics, but we can shift viewpoint and consider random triangulations of the $n$-gon, in which each of the $c_{n-1}$ possible triangulations is equally likely. The purpose of this paper is to show that there exists an object "the random triangulation of a circle" which is in a natural sense the $n \to \infty$ limit of the random triangulation of the $n$-gon. As with Polya [9], the exposition takes readers into some newly developed theory of the author. Let's start by recalling a precise definition.
A triangulation of a finite set S is a collection of nonintersecting line segments with endpoints from S such that the convex hull of S is partitioned into triangular regions. We shall be concerned only with the cases $S_n$ consisting of the vertices of the regular $n$-gon inscribed in a fixed circle.) <|cite_end|> <|cite_start|> (Reference: Recursive self-similarity for random trees, random triangulations and Brownian excursion: Recursive self-similarity for a random object is the property of being decomposable into independent rescaled copies of the original object. Certain random combinatorial objects -- trees and triangulations -- possess approximate versions of recursive self-similarity, and then their continuous limits possess exact recursive self-similarity. In particular, since the limit continuum random tree can be identified with Brownian excursion, we get a nonobvious recursive self-similarity property for Brownian excursion.) <|cite_end|> who studied \emph{uniform random triangulations} of the disk which arise as limiting objects for uniform triangulations of regular $n$-gons as $n \to \infty$. In the case of uniform random triangulations, the process which encodes the limit triangulation is the Brownian excursion, and the scaling limit of the sequence of dual trees is the Brownian continuum random tree introduced in <|cite_start|> (Reference: The continuum random tree. I: Exact and asymptotic results for the uniform random labelled tree on n vertices have been studied extensively by combinatorialists. Here we treat asymptotics from a modern stochastic process viewpoint. There are three limit processes. One is an infinite discrete tree. The other two are most naturally represented as continuous two-dimensional fractal tree-like sub-sets of the infinite-dimensional space $l^1$. One is compact; the other is unbounded and self-similar. The proofs are based upon a simple algorithm for generating the finite random tree and upon weak convergence arguments. Distributional properties of these limit processes will be discussed in a sequel.) <|cite_end|> <|cite_start|> (Reference: Stochastic Analysis: The Continuum random tree II: an overview: Many different models of random trees have arisen in a variety of applied settings, and there is a large but scattered literature on exact and asymptotic results for particular models. For several years I have been interested in what kinds of "general theory" (as opposed to ad hoc analysis of particular models) might be useful in studying asymptotics of random trees. In this paper, aimed at theoretical probabilists, I discuss aspects of this incipient general theory which are most closely related to topics of current interest in theoretical stochastic processes. No prior knowledge of this subject is assumed: the paper is intended as an introduction and survey. To give the really big picture in a paragraph, consider a tree on n vertices. View the vertices as points in abstract (rather than d-dimensional) space, but let the edges have length (= 1, as a default) so that there is metric structure: the distance between two vertices is the length of the path between them. Consider the average distance between pairs of vertices. As $n \to \infty$ this average distance could stay bounded or could grow as order n, but almost all natural random trees fall into one of two categories. In the first (and larger) category, the average distance grows as order log n.
This category includes supercritical branching processes, and most "Markovian growth" models such as those occurring in the analysis of algorithms. This paper is concerned with the second category, in which the average distance grows as order $n^{1/2}$. This occurs with Galton-Watson branching processes conditioned on total population size = n (in brief, CBP(n)). At first sight that seems an unnatural model, but it turns out to coincide (see section 2.1) with various combinatorial models, and is similar to more general models of critical branching processes conditioned to be large (in any reasonable way). The fundamental fact is that, by scaling edges to have length $n^{-1/2}$, these random trees converge in distribution as $n \to \infty$ to a limit we call the CCRT (for compact continuum random tree). This was treated explicitly in Aldous [2] in a special case and in Aldous [3] in the natural general case, though (as we shall see) many related results are implicit in recent literature. Thus asymptotic distributions for these models of discrete random trees can be obtained immediately from distributions associated with the limit tree. …) <|cite_end|> <|cite_start|> (Reference: The continuum random tree {III}: Let ($R(k)$, $k \ge 1$) be random trees with k leaves, satisfying a consistency condition: Removing a random leaf from R(k) gives R(k - 1). Then under an extra condition, this family determines a random continuum tree $\mathcal{S}$, which it is convenient to represent as a random subset of $l^1$. This leads to an abstract notion of convergence in distribution, as $n \to \infty$, of (rescaled) random trees $\mathcal{T}_n$ on n vertices to a limit continuum random tree $\mathcal{S}$. The notion is based upon the assumption that, for fixed k, the subtrees of $\mathcal{T}_n$ determined by k randomly chosen vertices converge to R(k). As our main example, under mild conditions on the offspring distribution, the family tree of a Galton-Watson branching process, conditioned on total population size equal to n, can be rescaled to converge to a limit continuum random tree which can be constructed from Brownian excursion. One can think abstractly about a projective limit R($\infty$), but our goal is to give a concrete representation of a limit.) <|cite_end|>. Among the recent work on laminations of the disk, one can mention <|cite_start|> (Reference: Random non-crossing plane configurations: A conditioned Galton--Watson tree approach: We study various models of random non‐crossing configurations consisting of diagonals of convex polygons, and focus in particular on uniform dissections and non‐crossing trees. For both these models, we prove convergence in distribution towards Aldous’ Brownian triangulation of the disk. In the case of dissections, we also refine the study of the maximal vertex degree and validate a conjecture of Bernasconi, Panagiotou and Steger. Our main tool is the use of an underlying Galton‐Watson tree structure. © 2014 Wiley Periodicals, Inc. Random Struct. Alg., 45, 236–260, 2014) <|cite_end|> where Curien and Kortchemski showed that the Brownian triangulation is also the scaling limit of other random subsets of the disk, in particular non-crossing trees (sets of non-crossing chords which form a tree) <|cite_start|> (Reference: Noncrossing trees are almost conditioned Galton--Watson trees: A noncrossing tree (NC‐tree) is a tree drawn on the plane having as vertices a set of points on the boundary of a circle, and whose edges are straight line segments that do not cross.
In this article, we show that NC‐trees with size n are conditioned Galton–Watson trees. As corollaries, we give the limit law of depth‐first traversal processes and the limit profile of NC‐trees. © 2002 John Wiley & Sons, Inc. Random Struct. Alg., 20, 115–125, 2002) <|cite_end|>, and dissections (non-crossing sets of chords) under the uniform distribution. By sampling tessellations according to a Boltzmann weight depending on the degree of the faces, <|cite_start|> (Reference: Random stable laminations of the disk: We study large random dissections of polygons. We consider random dissections of a regular polygon with n sides, which are chosen according to Boltzmann weights in the domain of attraction of a stable law of index $\theta \in (1,2]$. As n goes to infinity, we prove that these random dissections converge in distribution towards a random compact set, called the random stable lamination. If $\theta = 2$, we recover Aldous’ Brownian triangulation. However, if $\theta \in (1,2)$, large faces remain in the limit and a different random compact set appears. We show that the random stable lamination can be coded by the continuous-time height function associated to the normalized excursion of a strictly stable spectrally positive L\'evy process of index $\theta$. Using this coding, we establish that the Hausdorff dimension of the stable random lamination is almost surely $2 - 1/\theta$.) <|cite_end|> obtained limit laminations which are not triangulations and are encoded by excursions of stable spectrally positive L\'evy processes (with L\'evy measure concentrated on $[0,\infty)$). Finally, <|cite_start|> (Reference: The Markovian hyperbolic triangulation: We construct and study the unique random tiling of the hyperbolic plane into ideal hyperbolic triangles (with the three corners located on the boundary) that is invariant (in law) with respect to Mobius transformations, and possesses a natural spatial Markov property that can be roughly described as the conditional independence of the two parts of the triangulation on the two sides of the edge of one of its triangles.) <|cite_end|> have studied geodesic laminations of the Poincar\'e disk. They construct and study the unique random tiling of the hyperbolic plane into triangles with vertices on the boundary whose distribution is invariant under M\"obius transformations and satisfies a certain spatial Markov property. \medskip \noindent \textbf{Plan of the paper}.\ In Section \ref{sec:limit} we give our construction of a continuous solution $Z$ of \eqref{eq:fixchord} with $\Ec{Z(s)} = \kappa(s(1-s))^\beta$. (Recall that $Z=\sM$ almost surely.) The construction guarantees finiteness of all moments of the supremum $\|Z\|$, which is essential for our approach. Here, we also prove the characterization of $Z$ as a solution of \eqref{eq:fixchord} under additional conditions. In Section~\ref{sec:conv} we prove the uniform convergence of $n^{-\beta/2} C_n$ to $Z$. We also obtain an upper bound on the rate of convergence in the $L^m$ distance, $m\ge 1$, which yields the almost sure convergence in Theorem~\ref{thm:main}. Here, we also show how our results simplify the arguments to deduce convergence of the lamination. Section \ref{sec:limith} is devoted to the proof of Theorem~\ref{thm:main2}, which covers the homogeneous case $\alpha = 0$. Finally, in Section~\ref{sec:prop_dual} we prove some properties of the dual tree $\cT_Z=\cT_\sM$, in particular its fractal dimension.
Our proof of Theorem~\ref{thm:unif} is based on generating functions and is given in Appendix~\ref{sec:unif} to keep the body of the paper more focused. <|paper_end|>
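The recursive construction quoted above from Curien and Le Gall (throw chords uniformly at random in the disk and keep only those that avoid all previously kept ones) is easy to simulate. Below is a minimal Python sketch of that rejection process; it is an illustration of the cited construction, not code from any of these papers, and the crossing test uses the standard interleaving criterion for chords of a circle.

\begin{verbatim}
import random

def crosses(a, b, c, d):
    """Chords with endpoints at angles a, b and c, d (as fractions of the
    circle) intersect inside the disk iff exactly one of c, d lies on the
    arc strictly between a and b."""
    lo, hi = min(a, b), max(a, b)
    return (lo < c < hi) != (lo < d < hi)

def random_lamination(n_chords, seed=0):
    """Throw uniform chords, keeping only those that cross none of the
    previously kept chords; the closure of the limit set is the random
    recursive lamination of Hausdorff dimension beta*+1 described in the
    abstract quoted above."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n_chords:
        a, b = rng.random(), rng.random()
        if all(not crosses(a, b, c, d) for c, d in kept):
            kept.append((a, b))
    return kept

print(random_lamination(5))
\end{verbatim}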
[ "<|reference_start|> Almost sure convergence to the Quicksort process: <|reference_end|>", "<|reference_start|> Recursive self-similarity for random trees, random triangulations and Brownian excursion: Recursive self-similarity for a random object is the property of being de-composable into independent rescaled copies of the original object. Certain random combinatorial objects-trees and triangulations-possess approximate versions of recursive self-similarity, and then their continuous limits possess exact recursive self-similarity. In particular, since the limit continuum random tree can be identified with Brownian excursion, we get a nonobvious recursive self-similarity property for Brownian excursion. <|reference_end|>", "<|reference_start|> Noncrossing trees are almost conditioned Galton--Watson trees: A noncrossing tree (NC‐tree) is a tree drawn on the plane having as vertices a set of points on the boundary of a circle, and whose edges are straight line segments that do not cross. In this article, we show that NC‐trees with size n are conditioned Galton–Watson trees. As corollaries, we give the limit law of depth‐first traversal processes and the limit profile of NC‐trees. © 2002 John Wiley & Sons, Inc. Random Struct. Alg., 20, 115–125, 2002 <|reference_end|>", "<|reference_start|> Random stable laminations of the disk: We study large random dissections of polygons. We consider random dissections of a regular polygon with n sides, which are chosen according to Boltzmann weights in the domain of attraction of a stable law of index 2 (1;2]. As n goes to infinity, we prove that these random dissections converge in distribution towards a random compact set, called the random stable lamination. If = 2, we recover Aldous’ Brownian triangulation. However, if 2 (1;2), large faces remain in the limit and a dierent random compact set appears. We show that the random stable lamination can be coded by the continuous-time height function associated to the normalized excursion of a strictly stable spectrally positive Levy process of index . Using this coding, we establish that the Hausdor dimension of the stable random lamination is almost surely 2 1= . <|reference_end|>" ]
[ 1, 4, 9, 10 ]
{"<|cite_1|>": "ss-2379828", "<|cite_2|>": "ss-2379828", "<|cite_3|>": "ss-2379829", "<|cite_29|>": "ss-2379828", "<|cite_30|>": "ss-2379828", "<|multi_cite_31_1|>": "ss-2379829", "<|multi_cite_31_2|>": "ss-2379842", "<|multi_cite_4_1|>": "ss-2379830", "<|multi_cite_4_2|>": "ss-2379828", "<|multi_cite_4_3|>": "ss-2379831", "<|multi_cite_5_1|>": "ss-2011177", "<|multi_cite_5_2|>": "ss-1078459", "<|multi_cite_5_3|>": "ss-1060360", "<|multi_cite_5_4|>": "ss-2001706", "<|cite_6|>": "ss-2379832", "<|cite_32|>": "ss-2379828", "<|cite_7|>": "ss-2379828", "<|multi_cite_8_1|>": "ss-2379833", "<|multi_cite_8_2|>": "ss-2379834", "<|multi_cite_9_1|>": "ss-2017106", "<|multi_cite_9_2|>": "ss-2274851", "<|multi_cite_9_3|>": "ss-2274853", "<|cite_10|>": "ss-2379835", "<|cite_12|>": "ss-2379828", "<|cite_13|>": "ss-2379828", "<|cite_14|>": "ss-2379828", "<|cite_33|>": "ss-2379828", "<|cite_15|>": "ss-2379836", "<|multi_cite_16_1|>": "ss-1983423", "<|multi_cite_16_2|>": "ss-2379837", "<|multi_cite_16_3|>": "ss-1078459", "<|multi_cite_17_1|>": "ss-2379838", "<|cite_34|>": "ss-2379843", "<|cite_18|>": "ss-2379839", "<|cite_36|>": "ss-2379828", "<|cite_37|>": "ss-2379844", "<|cite_19|>": "arxiv-22946", "<|cite_20|>": "ss-2379832", "<|cite_21|>": "ss-2379828", "<|cite_22|>": "ss-2379828", "<|cite_23|>": "ss-2379828", "<|multi_cite_24_1|>": "ss-1046382", "<|multi_cite_24_2|>": "ss-2274857", "<|multi_cite_24_3|>": "ss-2274859", "<|cite_25|>": "arxiv-28476", "<|cite_38|>": "ss-1005272", "<|cite_39|>": "ss-2379828", "<|multi_cite_40_1|>": "ss-2379829", "<|multi_cite_40_2|>": "ss-2379842", "<|multi_cite_26_1|>": "ss-1983423", "<|multi_cite_26_2|>": "ss-2379837", "<|multi_cite_26_3|>": "ss-1078459", "<|cite_27|>": "ss-2379840", "<|cite_28|>": "ss-2379841", "<|cite_41|>": "ss-2379831", "<|cite_42|>": "ss-2379845"}
2302.12246-1
<|cite_start|> (Reference: Large Language Models Can Self-Improve: Large Language Models (LLMs) have achieved excellent performances in various tasks. However, fine-tuning an LLM requires extensive supervision. Human, on the other hand, may improve their reasoning abilities by self-thinking without external inputs. In this work, we demonstrate that an LLM is also capable of self-improving with only unlabeled datasets. We use a pre-trained LLM to generate "high-confidence" rationale-augmented answers for unlabeled questions using Chain-of-Thought prompting and self-consistency, and fine-tune the LLM using those self-generated solutions as target outputs. We show that our approach improves the general reasoning ability of a 540B-parameter LLM (74.4%->82.1% on GSM8K, 78.2%->83.0% on DROP, 90.0%->94.4% on OpenBookQA, and 63.4%->67.9% on ANLI-A3) and achieves state-of-the-art-level performance, without any ground truth label. We conduct ablation studies and show that fine-tuning on reasoning is critical for self-improvement.) <|cite_end|>, and verifier <|cite_start|> (Reference: On the Advance of Making Language Models Better Reasoners: Large language models such as GPT-3 and PaLM have shown remarkable performance in few-shot learning. However, they still struggle with reasoning tasks such as the arithmetic benchmark GSM8K. Recent advances deliberately guide the language model to generate a chain of reasoning steps before producing the final answer, successfully boosting the GSM8K benchmark from 17.9% to 58.1% in terms of problem solving rate. In this paper, we propose a new approach, DiVeRSe (Diverse Verifier on Reasoning Step), to further advance their reasoning capability. DiVeRSe first explores different prompts to enhance the diversity in reasoning paths. Second, DiVeRSe introduces a verifier to distinguish good answers from bad answers for a better weighted voting. Finally, DiVeRSe verifies the correctness of each single step rather than all the steps in a whole. We conduct extensive experiments using the latest language model code-davinci-002 and demonstrate that DiVeRSe can achieve new state-of-the-art performance on six out of eight reasoning benchmarks (e.g., GSM8K 74.4% → 83.2%), outperforming the PaLM model with 540B parameters.) <|cite_end|>. These studies greatly improve CoT performance on complex tasks, but they are limited to a fixed set of exemplars. Compared with them, we propose annotating the most important task-specific questions for easy adaptation. The only exception is Auto-CoT <|cite_start|> (Reference: Automatic Chain of Thought Prompting in Large Language Models: Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like "Let's think step by step" to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the "Let's think step by step" prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one.
However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot) <|cite_end|>, which divides the test questions into different clusters, takes one question from each cluster for better diversity, and generates the answers via zero-shot prompting. However, this setting needs to traverse the test dataset in advance for clustering, and our experiments demonstrate that our method outperforms Auto-CoT. Note that both diversity and uncertainty are useful for selecting the most informative questions, and they are complementary. We consider the combination of diversity and uncertainty as an important future direction. \subsection{Active Learning} Our work is also relevant to active learning <|cite_start|> (Reference: Active Learning with Statistical Models: For many types of machine learning algorithms, one can compute the statistically `optimal' way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance.) <|cite_end|> <|cite_start|> (Reference: A literature survey of active machine learning in the context of natural language processing: Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as, information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing.)
<|cite_end|> <|cite_start|> (Reference: Active Learning Literature Survey: The most time consuming and expensive task in machine learning is the gathering of labeled data to train the model or to estimate its parameters. In the real-world scenario, the availability of labeled data is scarce and we have limited resources to label the abundantly available unlabeled data. Hence it makes sense to pick only the most informative instances from the unlabeled data and request an expert to provide the label for that instance. Active learning algorithms aim at minimizing the amount of labeled data required to achieve the goal of the machine learning task in hand by strategically selecting the data instance to be labeled by the expert. A lot of research has been conducted in this area over the past two decades leading to great improvements in performance of several existing machine learning algorithms and has also been applied to diverse fields like text classification, information retrieval, computer vision and bioinformatics, to name a few. This survey aims at providing an insight into the research in this area and categorizes the diverse algorithms proposed based on main characteristics. We also provide a desk where different active learning algorithms can be compared by evaluation on benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Multi-task Active Learning for Pre-trained Transformer-based Models: Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models.) <|cite_end|>, which aims to improve the data labeling efficiency by finding the most helpful unlabeled data to annotate with reasonable budgets. Recent studies <|cite_start|> (Reference: Revisiting Uncertainty-based Query Strategies for Active Learning with Transformers: Active learning is the iterative construction of a classification model through targeted labeling, enabling significant labeling cost savings. As most research on active learning has been carried out before transformer-based language models ("transformers") became popular, despite its practical importance, comparably few papers have investigated how transformers can be combined with active learning to date. This can be attributed to the fact that using state-of-the-art query strategies for transformers induces a prohibitive runtime overhead, which effectively nullifies, or even outweighs the desired cost savings. For this reason, we revisit uncertainty-based query strategies, which had been largely outperformed before, but are particularly suited in the context of fine-tuning transformers.
In an extensive evaluation, we connect transformers to experiments from previous research, assessing their performance on five widely used text classification benchmarks. For active learning with transformers, several other uncertainty-based approaches outperform the well-known prediction entropy query strategy, thereby challenging its status as most popular uncertainty baseline in active learning for text classification.) <|cite_end|> <|cite_start|> (Reference: MEAL: Stable and Active Learning for Few-Shot Prompting: Few-shot classification has made great strides due to foundation models that, through priming and prompting, are highly effective few-shot learners. However, this approach has high variance both across different sets of few shots (data selection) and across different finetuning runs (run variability). This is problematic not only because it impedes the fair comparison of different approaches, but especially because it makes few-shot learning too unreliable for many real-world applications. To alleviate these issues, we make two contributions for more stable and effective few-shot learning: First, we propose novel ensembling methods and show that they substantially reduce run variability. Second, we introduce a new active learning (AL) criterion for data selection and present the first AL-based approach specifically tailored towards prompt-based learning. In our experiments, we show that our combined method, MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning), improves overall performance of prompt-based finetuning by 2.3 points on five diverse tasks. We publicly share our code and data splits in https://github.com/akoksal/MEAL.) <|cite_end|> demonstrate the benefits of active-learning-based approaches for fine-tuning large language models on classification tasks. Following this, we incorporate the max-entropy and least-confidence <|cite_start|> (Reference: Reducing labeling effort for structured prediction tasks: A common obstacle preventing the rapid deployment of supervised machine learning algorithms is the lack of labeled training data. This is particularly expensive to obtain for structured prediction tasks, where each training instance may have multiple, interacting labels, all of which must be correctly annotated for the instance to be of use to the learner. Traditional active learning addresses this problem by optimizing the order in which the examples are labeled to increase learning efficiency. However, this approach does not consider the difficulty of labeling each example, which can vary widely in structured prediction tasks. For example, the labeling predicted by a partially trained system may be easier to correct for some instances than for others. We propose a new active learning paradigm which reduces not only how many instances the annotator must label, but also how difficult each instance is to annotate. The system also leverages information from partially correct predictions to efficiently solicit annotations from the user. We validate this active learning framework in an interactive information extraction system, reducing the total number of annotation actions by 22%.) <|cite_end|> algorithms into in-context learning scenarios, and we especially verify the effectiveness of chain-of-thought prompting for complex reasoning tasks. <|paper_end|>
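As a concrete illustration of the max-entropy and least-confidence criteria just mentioned, the sketch below ranks questions by the disagreement among k answers sampled from the model. The `sampler` callable is a hypothetical stand-in for querying the LLM k times with chain-of-thought prompting; nothing here is taken from the paper's actual implementation.

\begin{verbatim}
import math
import random
from collections import Counter

def answer_entropy(answers):
    """Max-entropy criterion: empirical entropy of the k sampled answers;
    higher entropy means the model disagrees with itself more."""
    k = len(answers)
    return -sum(c / k * math.log(c / k) for c in Counter(answers).values())

def least_confidence(answers):
    """Least-confidence criterion: one minus the frequency of the most
    common answer among the k samples."""
    return 1.0 - max(Counter(answers).values()) / len(answers)

def select_for_annotation(questions, sampler, n_select, k=10,
                          score=answer_entropy):
    """Return the n_select most uncertain questions; sampler(q, k) is a
    hypothetical helper that returns k sampled final answers for q."""
    return sorted(questions, key=lambda q: score(sampler(q, k)),
                  reverse=True)[:n_select]

# Tiny demo with a fake sampler standing in for an LLM.
fake = lambda q, k: [random.choice(["A", "B"]) for _ in range(k)]
print(select_for_annotation(["q1", "q2", "q3"], fake, n_select=2))
\end{verbatim}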
[ "<|reference_start|> Automatic Chain of Thought Prompting in Large Language Models: Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like \"Let's think step by step\" to facilitate step-by-step thinking before answering a question. The other uses a few manual demonstrations one by one, each composed of a question and a reasoning chain that leads to an answer. The superior performance of the second paradigm hinges on the hand-crafting of task-specific demonstrations one by one. We show that such manual efforts may be eliminated by leveraging LLMs with the \"Let's think step by step\" prompt to generate reasoning chains for demonstrations one by one, i.e., let's think not just step by step, but also one by one. However, these generated chains often come with mistakes. To mitigate the effect of such mistakes, we find that diversity matters for automatically constructing demonstrations. We propose an automatic CoT prompting method: Auto-CoT. It samples questions with diversity and generates reasoning chains to construct demonstrations. On ten public benchmark reasoning tasks with GPT-3, Auto-CoT consistently matches or exceeds the performance of the CoT paradigm that requires manual designs of demonstrations. Code is available at https://github.com/amazon-research/auto-cot <|reference_end|>", "<|reference_start|> Active {{Learning Literature Survey}}: The most time consuming and expensive task in machine learning is the gathering of labeled data to train the model or to estimate its parameters. In the real-world scenario, the availability of labeled data is scarce and we have limited resources to label the abundantly available unlabeled data. Hence it makes sense to pick only the most informative instances from the unlabeled data and request an expert to provide the label for that instance. Active learning algorithms aim at minimizing the amount of labeled data required to achieve the goal of the machine learning task in hand by strategically selecting the data instance to be labeled by the expert. A lot of research has been conducted in this area over the past two decades leading to great improvements in performance of several existing machine learning algorithms and has also been applied to diverse fields like text classification, information retrieval, computer vision and bioinformatics, to name a few. This survey aims at providing an insight into the research in this area and categorizes the diverse algorithms proposed based on main characteristics. We also provides a desk where different active learning algorithms can be compared by evaluation on benchmark datasets. <|reference_end|>", "<|reference_start|> Multi-task Active Learning for Pre-trained Transformer-based Models: Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. 
Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models. <|reference_end|>", "<|reference_start|> Revisiting Uncertainty-based Query Strategies for Active Learning with Transformers: Active learning is the iterative construction of a classification model through targeted labeling, enabling significant labeling cost savings. As most research on active learning has been carried out before transformer-based language models (\"transformers\") became popular, despite its practical importance, comparably few papers have investigated how transformers can be combined with active learning to date. This can be attributed to the fact that using state-of-the-art query strategies for transformers induces a prohibitive runtime overhead, which effectively nullifies, or even outweighs the desired cost savings. For this reason, we revisit uncertainty-based query strategies, which had been largely outperformed before, but are particularly suited in the context of fine-tuning transformers. In an extensive evaluation, we connect transformers to experiments from previous research, assessing their performance on five widely used text classification benchmarks. For active learning with transformers, several other uncertainty-based approaches outperform the well-known prediction entropy query strategy, thereby challenging its status as most popular uncertainty baseline in active learning for text classification. <|reference_end|>" ]
[ 2, 5, 6, 7 ]
{"<|multi_cite_1_1|>": "ss-942217", "<|multi_cite_1_2|>": "ss-685914", "<|multi_cite_1_3|>": "arxiv-411079", "<|multi_cite_1_4|>": "arxiv-416926", "<|multi_cite_1_5|>": "ss-683337", "<|multi_cite_1_6|>": "arxiv-460885", "<|multi_cite_1_7|>": "arxiv-451338", "<|multi_cite_1_8|>": "arxiv-395389", "<|cite_2|>": "ss-685914", "<|multi_cite_3_1|>": "arxiv-389035", "<|multi_cite_3_2|>": "arxiv-462706", "<|multi_cite_3_3|>": "arxiv-427372", "<|multi_cite_4_1|>": "ss-753465", "<|multi_cite_4_2|>": "arxiv-407230", "<|multi_cite_4_3|>": "arxiv-421182", "<|cite_5|>": "ss-753465", "<|cite_6|>": "arxiv-398419", "<|cite_7|>": "ss-753465", "<|multi_cite_8_1|>": "ss-815267", "<|multi_cite_8_2|>": "ss-1466266", "<|multi_cite_9_1|>": "arxiv-169354", "<|multi_cite_9_2|>": "arxiv-178740", "<|multi_cite_9_3|>": "arxiv-218903", "<|cite_10|>": "arxiv-193528", "<|multi_cite_11_1|>": "ss-1446689", "<|multi_cite_11_2|>": "arxiv-103373", "<|multi_cite_11_3|>": "arxiv-351940", "<|multi_cite_11_4|>": "arxiv-377137", "<|cite_12|>": "arxiv-247605", "<|cite_13|>": "arxiv-341328", "<|multi_cite_14_1|>": "arxiv-159188", "<|multi_cite_14_2|>": "arxiv-204077", "<|multi_cite_14_3|>": "arxiv-262970", "<|multi_cite_14_4|>": "ss-946973", "<|multi_cite_14_5|>": "arxiv-339658", "<|multi_cite_15_1|>": "arxiv-366105", "<|multi_cite_15_2|>": "arxiv-260701", "<|multi_cite_15_3|>": "arxiv-395159", "<|multi_cite_16_1|>": "ss-753465", "<|multi_cite_16_2|>": "arxiv-407230", "<|multi_cite_16_3|>": "arxiv-421182", "<|multi_cite_16_4|>": "arxiv-451857", "<|cite_17|>": "ss-685914", "<|multi_cite_18_1|>": "ss-1104386", "<|multi_cite_18_2|>": "ss-1252782", "<|multi_cite_18_3|>": "arxiv-236700", "<|multi_cite_18_4|>": "arxiv-350090", "<|multi_cite_18_5|>": "ss-728652", "<|multi_cite_18_6|>": "ss-841839", "<|multi_cite_18_7|>": "ss-680075", "<|multi_cite_18_8|>": "arxiv-221387", "<|multi_cite_18_9|>": "arxiv-444305", "<|multi_cite_18_10|>": "arxiv-393915", "<|multi_cite_19_1|>": "ss-784973", "<|multi_cite_19_2|>": "ss-784973", "<|multi_cite_19_3|>": "ss-841839", "<|multi_cite_19_4|>": "arxiv-328337", "<|multi_cite_19_5|>": "arxiv-342865", "<|multi_cite_19_6|>": "ss-841839", "<|multi_cite_20_1|>": "arxiv-190048", "<|multi_cite_20_2|>": "arxiv-278657", "<|multi_cite_20_3|>": "arxiv-232149", "<|multi_cite_20_4|>": "ss-882784", "<|cite_32|>": "ss-753465", "<|cite_33|>": "ss-753465", "<|cite_21|>": "arxiv-407230", "<|cite_22|>": "arxiv-421182", "<|cite_23|>": "arxiv-449878", "<|cite_25|>": "arxiv-455829", "<|cite_26|>": "ss-752306", "<|cite_27|>": "arxiv-451857", "<|multi_cite_28_1|>": "arxiv-675958", "<|multi_cite_28_2|>": "ss-1115178", "<|multi_cite_28_3|>": "ss-815267", "<|multi_cite_28_4|>": "arxiv-439476", "<|multi_cite_29_1|>": "arxiv-354580", "<|multi_cite_29_2|>": "arxiv-462353", "<|cite_31|>": "ss-1466266"}
1905.05946
<|paper_start|> Title: Depth map estimation methodology for detecting free-obstacle navigation areas Abstract: Depth map estimation methodology for detecting free-obstacle navigation areas: This paper presents a vision-based methodology which makes use of a stereo camera rig and a one-dimensional LiDAR to estimate obstacle-free areas for quadrotor navigation. The presented approach fuses the information provided by the depth map from a stereo camera rig with the range measured by the 1D-LiDAR. Once the depth map is filtered with a Weighted Least Squares (WLS) filter, the information is fused through a Kalman filter algorithm. To determine if there is a free space large enough for the quadrotor to pass through, our approach marks an area inside the disparity map by using the Kalman Filter output information. The whole process is implemented on an embedded Jetson TX2 computer and coded in the Robot Operating System (ROS). Experiments demonstrate the effectiveness of our approach. Introduction Recently, navigation of mobile robots in unknown environments has been an area of interest for researchers <|cite_start|> (Reference: A Vision and GPS-Based Real-Time Trajectory Planning for a MAV in Unknown and Low-Sunlight Environments: ) <|cite_end|>, due to the increasing number of applications and the need to maneuver autonomously and efficiently. An important research issue is obstacle and object detection using vision techniques <|cite_start|> (Reference: A vision and GPS-based real-time trajectory planning for MAV in unknown urban environments: This paper addresses the issue of real-time optimal trajectory generation of a micro Air Vehicle (MAV) in unknown urban environments. The MAV is required to navigate from an initial and outdoor position to a final position inside a building. To achieve this objective, we develop a safe path planning method using the information provided by the GPS and a consumer depth camera. With the purpose to develop a safe path planning with obstacle avoidance capabilities, a model predictive control approach is developed, which uses the environment information acquired by the navigation system.) <|cite_end|>. A number of methods and algorithms have been developed; among the most common are those based on sensors such as 3D-LiDAR, RGB-D cameras, monocular cameras, and stereo cameras <|cite_start|> (Reference: Vision-Based Window Estimation for MAV in Unknown Urban Environments*: This paper addresses the issue of window estimation of a micro Air Vehicle (MAV) in unknown urban environments. The MAV is required to navigate from an initial and outdoor position to a final position inside a building. This paper develops two vision-based methods using the information provided by the onboard vision system. To effectively identify the target and estimate the distance between the camera carrier and target, firstly a stereo camera system is applied. Besides, we propose another approach using point cloud captured by a RGB-D camera.) <|cite_end|>. Each approach has its strengths and weaknesses, and several algorithms have been developed in the last few years in order to reduce the depth estimation error. In this paper, we present a simple approach that improves the depth estimate given by a stereo camera rig by combining it with a 1D-LiDAR. With this information, a window is defined in which a quadrotor can navigate freely.
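As a rough sketch of the disparity estimation and WLS filtering step just described, the following assumes OpenCV with the opencv-contrib ximgproc module; all parameter values are illustrative rather than the ones used on the actual platform.

\begin{verbatim}
import cv2

def filtered_disparity(img_left, img_right):
    """Compute an SGBM disparity map from rectified left/right views and
    smooth it with the WLS disparity filter, mirroring the depth-map step
    described above; requires opencv-contrib (cv2.ximgproc)."""
    left_matcher = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=64, blockSize=5,
        P1=8 * 3 * 5 ** 2, P2=32 * 3 * 5 ** 2)
    right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
    wls = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
    wls.setLambda(8000.0)   # regularization strength (illustrative)
    wls.setSigmaColor(1.5)  # edge sensitivity (illustrative)
    disp_left = left_matcher.compute(img_left, img_right)
    disp_right = right_matcher.compute(img_right, img_left)
    return wls.filter(disp_left, img_left, None, disp_right)
\end{verbatim}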
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{imagenes/DSC_0226labeled.pdf} \caption{Quadrotor UAV used in this paper endowed with a Jetson TX2, stereo camera rig (ZED camera), and Lidar Lite sensor.} \label{drone} \end{figure} \subsection{Previous work} The first issue to solve is choosing a stereo matching algorithm, which is crucial to obtaining a good disparity map. The Middlebury Stereo Evaluation (Version 2) website lists more than 150 stereo matching algorithms, ranked by the average percentage of bad pixels obtained by comparing the computed disparity map with the ground truth. However, the best reference for comparing stereo matching algorithms is Version 3 of the same evaluation site, which is based on the paper by D. Scharstein <|cite_start|> (Reference: A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms: ) <|cite_end|> where several stereo matching parameters are defined for comparison purposes. This list includes the latest stereo matching algorithms, and the most accurate ones are predominantly based on neural networks, superpixel methods, or a mixture of the two. Among the more accurate methods is <|cite_start|> (Reference: Pmsc: Patchmatch-based superpixel cut for accurate stereo matching: Estimating the disparity and normal direction of one pixel simultaneously, instead of only disparity, also known as 3D label methods, can achieve much higher subpixel accuracy in the stereo matching problem. However, it is extremely difficult to assign an appropriate 3D label to each pixel from the continuous label space $\mathbb {R}^{3}$ while maintaining global consistency because of the infinite parameter space. In this paper, we propose a novel algorithm called PatchMatch-based superpixel cut to assign 3D labels of an image more accurately. In order to achieve robust and precise stereo matching between local windows, we develop a bilayer matching cost, where a bottom–up scheme is exploited to design the two layers. The bottom layer is employed to measure the similarity between small square patches locally by exploiting a pretrained convolutional neural network, and then, the top layer is developed to assemble the local matching costs in large irregular windows induced by the tangent planes of object surfaces. To optimize the spatial smoothness of local assignments, we propose a novel strategy to update 3D labels. In the procedure of optimization, both segmentation information and random refinement of PatchMatch are exploited to update candidate 3D label set for each pixel with high probability of achieving lower loss. Since pairwise energy of general candidate label sets violates the submodular property of graph cut, we propose a novel multilayer superpixel structure to group candidate label sets into candidate assignments, which thereby can be efficiently fused by $\alpha$-expansion graph cut. Extensive experiments demonstrate that our method can achieve higher subpixel accuracy in different data sets, and currently ranks first on the new challenging Middlebury 3.0 benchmark among all the existing methods.) <|cite_end|>, which proposes labeling an image with superpixels and then applying a bilayer matching cost in which a neural network compares the similarity between layers. This kind of approach reduces disparity map noise, but the computation time increases significantly. Meanwhile, H.
Hirschmüller proposes the Semi-Global Block Matching (SGBM) method, which works faster but with less precision. Regarding obstacle avoidance algorithms, A. Stanoev et al. <|cite_start|> (Reference: Real-time stereo vision for collision detection on autonomous uavs: Collision detection is an important unsolved problem in the domain of modern UAV, which would enable safe navigation in unknown environments. Stereo vision provides a compact, lightweight and low-power solution. This paper describes an adaptive system for achieving real-time stereo vision for collision detection on an embedded GPU. Several optimisations are described including using sensor fusion with an ultrasonic sensor to better filter noise, organising the computations to take advantage of the platform's heterogeneous architecture and using GPU textures to benefit from caching. A discussion of the hardware features is provided, followed by the algorithm and implementation details for disparity calculations and finally a method for identifying objects from a disparity map. The system was implemented on an NVIDIA Tegra X1, achieving 48 FPS on a 320×240 image.) <|cite_end|> establish a threshold in the depth map: close objects appear white and are labeled as obstacles, while farther ones appear black and are ignored; if the robot moves quickly, the threshold decreases. In some cases it is necessary to distinguish obstacles on a flat surface; here it is useful to build a V-disparity map <|cite_start|> (Reference: Real time obstacle detection in stereovision on non flat road geometry through "v-disparity" representation: Presents a road obstacle detection method able to cope with uphill and downhill gradients and dynamic pitching of the vehicle. Our approach is based on the construction and investigation of the "v-disparity" image which provides a good representation of the geometric content of the road scene. The advantage of this image is that it provides semi-global matching and is able to perform robust obstacle detection even in the case of partial occlusion or errors committed during the matching process. Furthermore, this detection is performed without any explicit extraction of coherent structures. This paper explains the construction of the "v-disparity" image, its main properties, and the obstacle detection method. The longitudinal profile of the road is estimated and the objects located above the road surface are then extracted as potential obstacles; subsequently, the accurate detection of road obstacles, in particular the position of tyre-road contact points is computed in a precise manner. The whole process is performed at frame rate with a current-day PC. Our experimental findings and comparisons with the results obtained using a flat geometry hypothesis show the benefits of our approach.) <|cite_end|>, a function of the disparity map that accumulates, for each image row, the disparities along that row, so that the abscissa corresponds to the disparity value. This approach can be used for vehicle navigation on a road. B. Lopez <|cite_start|> (Reference: Aggressive 3-d collision avoidance for high-speed navigation: Autonomous robot navigation through unknown, cluttered environments at high-speeds is still an open problem. Quadrotor platforms with this capability have only begun to emerge with the advancements in light-weight, small form factor sensing and computing.
Many of the existing platforms, however, require excessive computation time to perform collision avoidance, which ultimately limits the vehicle's top speed. This work presents an efficient perception and planning approach that significantly reduces the computation time by using instantaneous perception data for collision avoidance. Minimum-time, state and input constrained motion primitives are generated by sampling terminal states until a collision-free path is found. The worst case performance of the Triple Integrator Planner (TIP) is nearly an order of magnitude faster than the state-of-the-art. Experimental results demonstrate the algorithm's ability to plan and execute aggressive collision avoidance maneuvers in highly cluttered environments.) <|cite_end|> proposes a perception and planning approach that significantly reduces the computation time using instantaneous perception for obstacle avoidance. Aman <|cite_start|> (Reference: A sensor fusion methodology for obstacle avoidance robot: Obstacle detection and navigation of dynamic environments is a challenge in mobile robotics. To address this challenge, this paper presents an efficient sensor fusion methodology to detect the size and location of obstacles and navigate the mobile robot with high accuracy. This is done by leveraging upon the unique advantages of accuracy in both ultrasonic sensor and a Kinect sensor for near-field and far-fields respectively. Further, an efficient Kalman filter is implemented to reduce the systematic errors in encoder data to track robot pose of the robot in real-time and reach the destination with high accuracy. Implemented on differential drive-based mobile robot, the proposed system has been validated with a high efficiency of detecting obstacles and reaching the destination with an accuracy of 5cm.) <|cite_end|> proposes a methodology to fuse ultrasonic sensor measurements with the depth map from a Kinect sensor. M. Ki et al. <|cite_start|> (Reference: Detect and avoid system based on multi sensor fusion for uav: With the rapid growth of the personal drone market, manufacturers of unmanned aerial vehicles(UAV) mainly used in the existing military field are expanding its application in various fields such as leisure, industrial, and public as well. In addition, the autonomous flight of UAV is being developed through combination with various sensors and signal processing technologies. Obstacle detect and avoidance is the most important function to ensure safety of autonomous flight mode. In this paper, we propose an obstacle detect and avoid technique that combines a 2D LiDAR with a stereo camera for safe navigation in autonomous flight of a multi-copter UAV.) <|cite_end|> propose a framework that mounts a stereo camera and a 2D-LiDAR on a UAV; however, the 2D-LiDAR is the only obstacle detector, and the camera is used just for monitoring. H. Song <|cite_start|> (Reference: Depth-aided robust localization approach for relative navigation using RGB-depth camera and lidar sensor: This paper describes a robust localization approach for a moving target based on RGB-depth (RGB-D) camera and 2D light detection and ranging (LiDAR) sensor measurements. In the proposed approach, the 3D and 2D position information of a target measured by RGB-D camera and LiDAR sensor, respectively are utilized to find location of target by incorporating visual tracking algorithms, depth information of the structured light sensor and vision-LiDAR low-level fusion algorithm (e.g., extrinsic calibration).
For robustness of localization, a novel approach making use of Kalman prediction and filtering with intermittent observations which are identified from depth image segmentation is proposed. The proposed depth-aided localization algorithm shows robust tracking results even if visual tracking using RGB camera fails. The experimental verification results are compared to position data from VICON motion capture as a ground truth, and the results show the performance superiority and robustness of the proposed approach.) <|cite_end|> proposes fusing RGB-D and 2D-LiDAR data for tracking purposes. Roopa et al. <|cite_start|> (Reference: Image sensor data fusion using factorized kalman filter: This paper presents image sensor data fusion strategy using factorized Kalman filter algorithm which has wide range of aerospace applications. This involves locating the target from the images obtained from the two sensors using Centroid tracking Factorized Kalman filter and then fusing the sensor data to get much better information of the target position and velocity. Factorized Kalman filter or UD filter (UDF) is used for predicting the upcoming position and other variables of the target. Fusion is used to reduce the error that occurs due to clutters in image data taken from sensors. Performance of two fusion algorithms that is measurement or data level fusion and state vector fusion are carried out and good results are obtained regarding the position and velocity estimation of the target. Image sensor data fusion (ISDF) is realized using MATLAB tool. The sensor images are synthesized and added with different noise levels in order to represent sensor data obtained in the presence of different atmospheric clutter. Segmentation process and nearest neighbor technique is used to extract the target details from the sensor images.) <|cite_end|> fuse images using a Kalman Filter (KF) to obtain more information about the localization of a target; the approach is applied to different cameras and different localizations. In <|cite_start|> (Reference: Sensor fusion for prediction of orientation and position from obstacle using multiple ir sensors an approach based on kalman filter: Kalman filters have gained immense research attention in robotics, throughout the last decades. Among the applications, localization of robots through Kalman filters proved promising results. This paper presents an application of sensor fusion for prediction of orientation and depth to wall/obstacle by fusing the inputs from three IR range finders. The experimental result demonstrates the capability of Kalman filter to predict the parameters precisely, from noisy sensor inputs. The technique find application in determining the position and orientation from wall which will be helpful in obstacle avoidance decision making, automatic parking of automobiles etc.) <|cite_end|> the authors fuse three distance sensors with a KF to obtain the distance and orientation with respect to a wall. In <|cite_start|> (Reference: High-Precision Depth Estimation with the 3D LiDAR and Stereo Fusion: We present a deep convolutional neural network (CNN) architecture for high-precision depth estimation by jointly utilizing sparse 3D LiDAR and dense stereo depth information. In this network, the complementary characteristics of sparse 3D LiDAR and dense stereo depth are simultaneously encoded in a boosting manner.
Tailored to the LiDAR and stereo fusion problem, the proposed network differs from previous CNNs in the incorporation of a compact convolution module, which can be deployed with the constraints of mobile devices. As training data for the LiDAR and stereo fusion is rather limited, we introduce a simple yet effective approach for reproducing the raw KITTI dataset. The raw LiDAR scans are augmented by adapting an off-the-shelf stereo algorithm and a confidence measure. We evaluate the proposed network on the KITTI benchmark and data collected by our multi-sensor acquisition system. Experiments demonstrate that the proposed network generalizes across datasets and is significantly more accurate than various baseline approaches.) <|cite_end|> K. Park et al. present a high-precision depth map using a high-cost 3D-LiDAR; however, the implementation cost is considerably higher than that of the approach presented in this paper. \subsection{Main contribution} One of the key points in UAV autonomous navigation is the obstacle avoidance problem. In this work, we address the problem of identifying free navigation areas instead of detecting a particular obstacle. We have chosen such an approach due to the high complexity of recognizing a broad class of objects when dealing with an object-classifier approach <|cite_start|> (Reference: Tracking a Ground Moving Target with a Quadrotor Using Switching Control: ) <|cite_end|>, <|cite_start|> (Reference: Real-time object detection and pose estimation using stereo vision. An application for a Quadrotor MAV: This paper presents a novel strategy for object detection applied on a Quadrotor micro aerial vehicle (MAV) navigating in unknown urban environments. The Quadrotor is required to fly across a window and complete a transferring flight between an outdoor position to a final point inside a building. To achieve this goal, three main tasks must be accomplished; the first one involves the identification of the object of interest, in this case a window; the second task involves the pose estimation of the MAV w.r.t the window; and finally generating a trajectory needed to cross the window starting from a given initial point. To identify the window, a feature-based cascade classifier is implemented, which provides an extremely fast and robust method for window identification. We develop a safe path-planning method using the information provided by the GPS and the on-board inertial and stereo vision sensors. Therefore, the stereo vision system estimates the relative position w.r.t. the Quadrotor and offers egomotion estimation of the MAV for subsequent position control. Preliminary experimental results of the identification of the window and pose estimation is demonstrated through some video sequences collected from the experimental platform.) <|cite_end|>. To that end, we use information from two low-cost devices: a stereo camera rig and a 1D-LiDAR. With the stereo camera we estimate a disparity map, and with the 1D-LiDAR we measure the distance in front of the quadrotor. Both measurements are then fused in a KF to obtain a better estimate of the distance between the front of the quadrotor and a predefined area where the UAV can navigate, as long as such a distance is free of obstacles. A minimal sketch of this fusion step is given below.
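As an illustration of the fusion step only (with a deliberately simplified model; the state-space formulation actually used in Section \ref{sec:methods} may differ), consider a scalar Kalman filter over the frontal distance $d$, assuming a nearly-constant-distance process with noise variance $Q$, a stereo-derived distance measurement $z_s$ with variance $\sigma_s^2$, and a 1D-LiDAR measurement $z_\ell$ with variance $\sigma_\ell^2$. The prediction step and the two sequential measurement updates read
\begin{align}
\hat{d}^{-}_{k} &= \hat{d}_{k-1}, \qquad P^{-}_{k} = P_{k-1} + Q, \\
K_{s} &= \frac{P^{-}_{k}}{P^{-}_{k} + \sigma_{s}^{2}}, \qquad \hat{d}'_{k} = \hat{d}^{-}_{k} + K_{s}\big(z_{s} - \hat{d}^{-}_{k}\big), \qquad P'_{k} = (1 - K_{s})\,P^{-}_{k}, \\
K_{\ell} &= \frac{P'_{k}}{P'_{k} + \sigma_{\ell}^{2}}, \qquad \hat{d}_{k} = \hat{d}'_{k} + K_{\ell}\big(z_{\ell} - \hat{d}'_{k}\big), \qquad P_{k} = (1 - K_{\ell})\,P'_{k}.
\end{align}
Processing the two measurements sequentially is equivalent to a single joint update, and the noisier of the two sensors automatically contributes less to the fused estimate $\hat{d}_{k}$.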
\subsection{Organization of the rest of the paper} In Section \ref{sec:problem} we present the problem formulation in detail. In Section \ref{sec:methods} the proposed methodology is described, covering the system overview, the depth-map estimation algorithm, and the KF. Section \ref{sec:exp} presents experimental results. Finally, in Section \ref{sec:conc} we give some concluding remarks and outline future research. <|paper_end|>
[ "<|reference_start|> A Vision and GPS-Based Real-Time Trajectory Planning for a MAV in Unknown and Low-Sunlight Environments: <|reference_end|>", "<|reference_start|> A vision and GPS-based real-time trajectory planning for MAV in unknown urban environments: This paper addresses the issue of real-time optimal trajectory generation of a micro Air Vehicle (MAV) in unknown urban environments. The MAV is required to navigate from an initial and outdoor position to a final position inside a building. To achieve this objective, we develop a safe path planning method using the information provided by the GPS and a consumer depth camera. With the purpose to develop a safe path planning with obstacle avoidance capabilities, a model predictive control approach is developed, which uses the environment information acquired by the navigation system. <|reference_end|>", "<|reference_start|> A sensor fusion methodology for obstacle avoidance robot: Obstacle detection and navigation of dynamic environments is a challenge in mobile robotics. To address this challenge, this paper presents an efficient sensor fusion methodology to detect the size and location of obstacles and navigate the mobile robot with high accuracy. This is done by leveraging upon the unique advantages of accuracy in both ultrasonic sensor and a Kinect sensor for near-field and far-fields respectively. Further, an efficient Kalman filter is implemented to reduce the systematic errors in encoder data to track robot pose of the robot in real-time and reach the destination with high accuracy. Implemented on differential drive-based mobile robot, the proposed system has been validated with a high efficiency of detecting obstacles and reaching the destination with an accuracy of 5cm. <|reference_end|>", "<|reference_start|> Detect and avoid system based on multi sensor fusion for uav: With the rapid growth of the personal drone market, manufacturers of unmanned aerial vehicles(UAV) mainly used in the existing military field are expanding its application in various fields such as leisure, industrial, and public as well. In addition, the autonomous flight of UAV is being developed through combination with various sensors and signal processing technologies. Obstacle detect and avoidance is the most important function to ensure safety of autonomous flight mode. In this paper, we propose an obstacle detect and avoid technique that combines a 2D LiDAR with a stereo camera for safe navigation in autonomous flight of a multi-copter UAV. <|reference_end|>" ]
[ 0, 1, 8, 9 ]
{"<|cite_1|>": "ss-850820", "<|cite_2|>": "ss-850821", "<|cite_3|>": "ss-850822", "<|cite_6|>": "ss-1133705", "<|cite_7|>": "ss-2065204", "<|cite_9|>": "ss-850823", "<|cite_10|>": "ss-867571", "<|cite_11|>": "ss-1288536", "<|cite_12|>": "ss-850824", "<|cite_13|>": "ss-850825", "<|cite_14|>": "ss-850826", "<|cite_15|>": "ss-850827", "<|cite_16|>": "ss-850828", "<|cite_17|>": "ss-1121730", "<|cite_18|>": "ss-850829", "<|cite_19|>": "ss-850830"}
1801.04726
<|paper_start|> Title: An Interpretable Reasoning Network for Multi-Relation Question Answering Abstract: An Interpretable Reasoning Network for Multi-Relation Question Answering: Multi-relation Question Answering is a challenging task, due to the requirement of elaborated analysis on questions and reasoning over multiple fact triples in knowledge base. In this paper, we present a novel model called Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual manipulation in predicting the final answer. Introduction \label{intro} \blfootnote{ \hspace{-0.65cm} This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: \url{http://creativecommons.org/licenses/by/4.0/} } Open-domain Question Answering (QA) has always been a hot topic in AI and this task has recently been facilitated by large-scale Knowledge Bases~(KBs) such as Freebase <|cite_start|> (Reference: Freebase: A Collaboratively Created Graph Database for Structuring Human knowledge: Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.) <|cite_end|>. However, due to the variety and complexity of language and knowledge, open-domain question answering over knowledge bases (KBQA) is still a challenging task. Question answering over knowledge bases falls into two types, namely single-relation QA and multi-relation QA, as argued by Yin et al.~\shortcite{QACNN}. Single-relation questions, such as { \em ``How old is Obama?"}, can be answered by finding one fact triple in KB, and this task has been widely studied <|cite_start|> (Reference: Large-scale Simple Question Answering with Memory Networks: Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. 
We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.) <|cite_end|> <|cite_start|> (Reference: Question Answering on Freebase via Relation Extraction and Textual Evidence: Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Evinets: Neural networks for combining evidence signals for factoid question answering: A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.) <|cite_end|>. In comparison, reasoning over multiple fact triples is required to answer multi-relation questions such as {\em ``Name a soccer player who plays at forward position at the club Borussia Dortmund."} where more than one entity and relation are mentioned. Compared to single-relation QA, multi-relation QA is yet to be addressed. Previous studies on QA over knowledge bases can be roughly categorized into two lines: semantic parsing and embedding-based models. Semantic parsing models <|cite_start|> (Reference: Semantic Parsing for Single-Relation Question Answering: We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. 
When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.) <|cite_end|> <|cite_start|> (Reference: The value of semantic parse labeling for knowledge base question answering: We demonstrate the value of collecting semantic parse labels for knowledge base question answering. In particular, (1) unlike previous studies on small-scale datasets, we show that learning from labeled semantic parses significantly improves overall performance, resulting in absolute 5 point gain compared to learning from answers, (2) we show that with an appropriate user interface, one can obtain semantic parses with high accuracy and at a cost comparable or lower than obtaining just answers, and (3) we have created and shared the largest semantic-parse labeled dataset to date in order to advance research in question answering.) <|cite_end|> obtain competitive performance at the cost of hand-crafted features and manual annotations, but lack the ability to generalize to other domains. In contrast, embedding-based models <|cite_start|> (Reference: Open Question Answering with Weakly Supervised Embedding Models: Building computers able to answer questions on any subject is a long standing goal of artificial intelligence. Promising progress has recently been achieved by methods that learn to map questions to logical forms or database queries. Such approaches can be effective but at the cost of either large amounts of human-labeled data or by defining lexicons and grammars tailored by practitioners. In this paper, we instead take the radical approach of learning to map questions to vectorial feature representations. By mapping answers into the same space one can query any knowledge base independent of its schema, without requiring any grammar or lexicon. Our method is trained with a new optimization procedure combining stochastic gradient descent followed by a fine-tuning step using the weak supervision provided by blending automatically and collaboratively generated resources. We empirically demonstrate that our model can capture meaningful signals from its noisy supervision leading to major improvements over paralex, the only existing method able to be trained on similar weakly labeled data.) <|cite_end|> <|cite_start|> (Reference: An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge: With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. 
As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: Recovering question answering errors via query revision: The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes. In this work, we propose to crosscheck the corresponding KB relations behind the predicted answers and identify potential inconsistencies. Instead of developing a new model that accepts evidences collected from these relations, we choose to plug them back to the original questions directly and check if the revised question makes sense or not. A bidirectional LSTM is applied to encode revised questions. We develop a scoring mechanism over the revised question encodings to refine the predictions of a base QA system. This approach can improve the F1 score of STAGG (Yih et al., 2015), one of the leading QA systems, from 52.5% to 53.9% on WEBQUESTIONS data.) <|cite_end|> can be trained end-to-end with weak supervision, but existing methods are not adequate to handle multi-relation QA due to the lack of reasoning ability. Recent reasoning models <|cite_start|> (Reference: Key-Value Memory Networks for Directly Reading Documents: Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark.) <|cite_end|> <|cite_start|> (Reference: Gated self-matching networks for reading comprehension and question answering: In this paper, we present the gated self-matching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model.) 
<|cite_end|> mainly concentrate on Reading Comprehension (RC), which requires answering questions according to a given document. However, transferring existing RC methods to KBQA is not trivial. For one thing, the focus of reasoning in RC is usually on understanding the document rather than parsing questions. For another, existing reasoning networks are usually designed in a black-box style, making the models less interpretable. In multi-relation question answering, by contrast, we believe that an interpretable reasoning process is essential. In this paper, we propose a novel Interpretable Reasoning Network (IRN) to equip QA systems with the reasoning ability to answer multi-relation questions. Our central idea is to design an interpretable reasoning process for a complex question: the reasoning module decides which part of an input question should be analyzed at each hop, and predicts a KB relation that corresponds to the current parsed results. The predicted relation is used to update the question representation as well as the state of the reasoning module, and drives the next-hop reasoning. At each hop, an entity is predicted based on the current state of the reasoning module (a minimal sketch of this hop-by-hop loop is given at the end of the related-work section below). Different from previous models, our model is {\em \textbf{interpretable}} in that the predicted relation and entity at each hop are {\em \textbf{traceable and observable}}. At each hop our model has a specific aim to find an appropriate relation based on the iterative analysis of a question, and the intermediate output at each hop can be interpreted by the corresponding linked entity. In this manner, IRN offers the ability to visualize {\em \textbf{a complete reasoning path}} for a complex question, which facilitates reasoning analysis and failure diagnosis, thereby allowing manual manipulation in answer prediction as detailed in our experiments. The contributions of this paper are twofold: \begin{enumerate} \item We design an Interpretable Reasoning Network which can reason over multi-relation questions involving multiple triples in the KB. Results show that our model obtains state-of-the-art performance. \item Our model is more interpretable than existing reasoning networks in that the intermediate entities and relations predicted by the hop-by-hop reasoning process construct traceable reasoning paths to clearly reveal how the answer is derived. \end{enumerate} Related Work \label{sect:related works} Recent works on QA can be roughly classified into two types: one is semantic-parsing-based and the other is embedding-based. Semantic parsing approaches map questions to logical form queries <|cite_start|> (Reference: Compositional Semantic Parsing on Semi-Structured Tables: Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. While existing work trades off one aspect for another, this paper simultaneously makes progress on both fronts through a new task: answering complex questions on semi-structured tables using question-answer pairs as supervision. The central challenge arises from two compounding factors: the broader domain results in an open-ended set of relations, and the deeper compositionality results in a combinatorial explosion in the space of logical forms. We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines.
For evaluation, we created a new dataset of 22,033 complex questions on Wikipedia tables, which is made publicly available.) <|cite_end|> <|cite_start|> (Reference: The value of semantic parse labeling for knowledge base question answering: We demonstrate the value of collecting semantic parse labels for knowledge base question answering. In particular, (1) unlike previous studies on small-scale datasets, we show that learning from labeled semantic parses significantly improves overall performance, resulting in absolute 5 point gain compared to learning from answers, (2) we show that with an appropriate user interface, one can obtain semantic parses with high accuracy and at a cost comparable or lower than obtaining just answers, and (3) we have created and shared the largest semantic-parse labeled dataset to date in order to advance research in question answering.) <|cite_end|> <|cite_start|> (Reference: {QUINT: Interpretable Question Answering over Knowledge Bases: We present QUINT, a live system for question answering over knowledge bases. QUINT automatically learns role-aligned utterance-query templates from user questions paired with their answers. When QUINT answers a question, it visualizes the complete derivation sequence from the natural language utterance to the final answer. The derivation provides an explanation of how the syntactic structure of the question was used to derive the structure of a SPARQL query, and how the phrases in the question were used to instantiate different parts of the query. When an answer seems unsatisfactory, the derivation provides valuable insights towards reformulating the question.) <|cite_end|>. These systems are effective, but at the cost of heavy data annotation and pattern/grammar engineering. What's more, parsing systems are often constrained to a specific domain and break down when executing logical queries on incomplete KBs. Our work follows the line of embedding-based models <|cite_start|> (Reference: Open Question Answering with Weakly Supervised Embedding Models: Building computers able to answer questions on any subject is a long standing goal of artificial intelligence. Promising progress has recently been achieved by methods that learn to map questions to logical forms or database queries. Such approaches can be effective but at the cost of either large amounts of human-labeled data or by defining lexicons and grammars tailored by practitioners. In this paper, we instead take the radical approach of learning to map questions to vectorial feature representations. By mapping answers into the same space one can query any knowledge base independent of its schema, without requiring any grammar or lexicon. Our method is trained with a new optimization procedure combining stochastic gradient descent followed by a fine-tuning step using the weak supervision provided by blending automatically and collaboratively generated resources. We empirically demonstrate that our model can capture meaningful signals from its noisy supervision leading to major improvements over paralex, the only existing method able to be trained on similar weakly labeled data.) <|cite_end|> <|cite_start|> (Reference: Question Answering over Freebase with Multi-Column Convolutional Neural Networks: Answering natural language questions over a knowledge base is an important and challenging task. Most of existing systems typically rely on hand-crafted features and rules to conduct question understanding and/or answer ranking.
In this paper, we introduce multi-column convolutional neural networks (MCCNNs) to understand questions from three different aspects (namely, answer path, answer context, and answer type) and learn their distributed representations. Meanwhile, we jointly learn low-dimensional embeddings of entities and relations in the knowledge base. Question-answer pairs are used to train the model to rank candidate answers. We also leverage question paraphrases to train the column networks in a multi-task learning manner. We use FREEBASE as the knowledge base and conduct extensive experiments on the WEBQUESTIONS dataset. Experimental results show that our method achieves better or comparable performance compared with baseline systems. In addition, we develop a method to compute the salience scores of question words in different column networks. The results help us intuitively understand what MCCNNs learn.) <|cite_end|> <|cite_start|> (Reference: Question Answering on Freebase via Relation Extraction and Textual Evidence: Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge: With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: Recovering question answering errors via query revision: The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes. In this work, we propose to crosscheck the corresponding KB relations behind the predicted answers and identify potential inconsistencies. 
Instead of developing a new model that accepts evidences collected from these relations, we choose to plug them back to the original questions directly and check if the revised question makes sense or not. A bidirectional LSTM is applied to encode revised questions. We develop a scoring mechanism over the revised question encodings to refine the predictions of a base QA system. This approach can improve the F1 score of STAGG (Yih et al., 2015), one of the leading QA systems, from 52.5% to 53.9% on WEBQUESTIONS data.) <|cite_end|> which have recently been introduced into the QA community, where questions and KB entities are represented by distributed vectors, and QA is formulated as a problem of matching between vectors of questions and answer entities. These models need fewer grammars and less annotated data, and are more flexible in dealing with incomplete KBs. To achieve better matching, subgraphs of an entity in the KB <|cite_start|> (Reference: Question Answering with Subgraph Embeddings: This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a competitive benchmark of the literature.) <|cite_end|>, answer aspects <|cite_start|> (Reference: Question Answering over Freebase with Multi-Column Convolutional Neural Networks: Answering natural language questions over a knowledge base is an important and challenging task. Most of existing systems typically rely on hand-crafted features and rules to conduct question understanding and/or answer ranking. In this paper, we introduce multi-column convolutional neural networks (MCCNNs) to understand questions from three different aspects (namely, answer path, answer context, and answer type) and learn their distributed representations. Meanwhile, we jointly learn low-dimensional embeddings of entities and relations in the knowledge base. Question-answer pairs are used to train the model to rank candidate answers. We also leverage question paraphrases to train the column networks in a multi-task learning manner. We use FREEBASE as the knowledge base and conduct extensive experiments on the WEBQUESTIONS dataset. Experimental results show that our method achieves better or comparable performance compared with baseline systems. In addition, we develop a method to compute the salience scores of question words in different column networks. The results help us intuitively understand what MCCNNs learn.) <|cite_end|> <|cite_start|> (Reference: An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge: With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers.
This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.) <|cite_end|> and external contexts <|cite_start|> (Reference: Question Answering on Freebase via Relation Extraction and Textual Evidence: Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.) <|cite_end|> can be used to enrich the representation of an answer entity. Though these methods are successful at handling simple questions, answering multi-relation questions or other complex questions is far from solved, since such a task requires reasoning or other elaborate processing. Our work is also related to recent reasoning models which focus on Reading Comprehension, where memory modules are designed to comprehend documents. State-of-the-art memory-based Reading Comprehension models <|cite_start|> (Reference: End-To-End Memory Networks: We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (Weston et al., 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.) <|cite_end|> <|cite_start|> (Reference: Ask Me Anything: Dynamic Memory Networks for Natural Language Processing: Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers.
Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets.) <|cite_end|> <|cite_start|> (Reference: ReasoNet: Learning to Stop Reading in Machine Comprehension: Teaching a computer to read and answer general questions pertaining to a document is a challenging yet unsolved problem. In this paper, we describe a novel neural network architecture called the Reasoning Network (ReasoNet) for machine comprehension tasks. ReasoNets make use of multiple turns to effectively exploit and then reason over the relation among queries, documents, and answers. Different from previous approaches using a fixed number of turns during inference, ReasoNets introduce a termination state to relax this constraint on the reasoning depth. With the use of reinforcement learning, ReasoNets can dynamically determine whether to continue the comprehension process after digesting intermediate results, or to terminate reading when it concludes that existing information is adequate to produce an answer. ReasoNets have achieved exceptional performance in machine comprehension datasets, including unstructured CNN and Daily Mail datasets, the Stanford SQuAD dataset, and a structured Graph Reachability dataset.) <|cite_end|> <|cite_start|> (Reference: Gated self-matching networks for reading comprehension and question answering: In this paper, we present the gated self-matching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model.) <|cite_end|> <|cite_start|> (Reference: Scaffolding networks for teaching and learning to comprehend: In scaffolding teaching, students are gradually asked questions to build background knowledge, clear up confusions, learn to be attentive, and improve comprehension. Inspired by this approach, we explore methods for teaching machines to learn to reason over text documents through asking questions about the past information. 
We address three key challenges in teaching and learning to reason: 1) the need for an effective architecture that learns from the information in text and keeps it in memory; 2) the difficulty of self-assessing what is learned at any given point and what is left to be learned; 3) the difficulty of teaching reasoning in a scalable way. To address the first challenge, we present the Scaffolding Network, an attention-based neural network agent that can reason over a dynamic memory. It learns a policy using reinforcement learning to incrementally register new information about concepts and their relations. For the second challenge, we describe a question simulator as part of the scaffolding network that learns to continuously question the agent about the information processed so far. Through questioning, the agent learns to correctly answer as many questions as possible. For the last challenge, we explore training with reduced annotated data. We evaluate on synthetic and real datasets, demonstrating that our model competes well with the state-of-the-art methods, especially when less supervision is used.) <|cite_end|> perform multi-hop interactions between a question and the corresponding document during reasoning. MemNN <|cite_start|> (Reference: Memory Networks: We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.) <|cite_end|>, KVMemN2N <|cite_start|> (Reference: Key-Value Memory Networks for Directly Reading Documents: Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark.) <|cite_end|> and EviNet <|cite_start|> (Reference: Evinets: Neural networks for combining evidence signals for factoid question answering: A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate.
This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.) <|cite_end|> transferred the reading comprehension framework to QA, where a set of triples is treated as a document and a similar reasoning process can be applied. However, reading comprehension performs reasoning over documents rather than parsing the questions. Other studies applying hop-by-hop inference to QA can be seen in Neural Programmer <|cite_start|> (Reference: Neural Programmer: Inducing Latent Programs with Gradient Descent: Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy.) <|cite_end|> <|cite_start|> (Reference: Neural Programmer: Inducing Latent Programs with Gradient Descent: Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations.
For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy.) <|cite_end|> and Neural Enquirer <|cite_start|> (Reference: Neural Enquirer: Learning to Query Tables with Natural Language: We proposed Neural Enquirer as a neural network architecture to execute a natural language (NL) query on a knowledge-base (KB) for answers. Basically, Neural Enquirer finds the distributed representation of a query and then executes it on knowledge-base tables to obtain the answer as one of the values in the tables. Unlike similar efforts in end-to-end training of semantic parsers, Neural Enquirer is fully "neuralized": it not only gives distributional representation of the query and the knowledge-base, but also realizes the execution of compositional queries as a series of differentiable operations, with intermediate results (consisting of annotations of the tables at different levels) saved on multiple layers of memory. Neural Enquirer can be trained with gradient descent, with which not only the parameters of the controlling components and semantic parsing component, but also the embeddings of the tables and query words can be learned from scratch. The training can be done in an end-to-end fashion, but it can take stronger guidance, e.g., the step-by-step supervision for complicated queries, and benefit from it. Neural Enquirer is one step towards building neural network systems which seek to understand language by executing it on real-world. Our experiments show that Neural Enquirer can learn to execute fairly complicated NL queries on tables with rich structures.) <|cite_end|>, where deep networks are proposed to parse a question and execute a query on tables. However, Neural Programmer needs to predefine symbolic operations, while Neural Enquirer lacks explicit interpretation. Mou et al.~\shortcite{Mou2016Coupling} proposed a model coupling distributed and symbolic execution with the REINFORCE algorithm; however, training such a model is challenging. Neural Module Network <|cite_start|> (Reference: Neural Module Networks: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions.
We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.) <|cite_end|> <|cite_start|> (Reference: Learning to Compose Neural Networks for Question Answering: We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.) <|cite_end|> customized network architectures for different patterns of reasoning, making the reasoning network interpretable. However, a dependency parser and the REINFORCE algorithm are required.
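To make the hop-by-hop reasoning loop described in the introduction concrete, the following is a minimal, illustrative sketch in plain Python/NumPy. All names, shapes, and update rules here are simplifications assumed for exposition rather than the exact IRN equations; in the actual model the parameters are learned end-to-end and the relation choice is soft rather than an argmax.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_words, n_rel, n_hops = 8, 6, 5, 3

# Stand-ins for learned parameters.
W_att = rng.normal(size=(d, d))      # state -> attention query
R = rng.normal(size=(n_rel, d))      # relation embeddings
W_state = rng.normal(size=(d, d))    # state-transition weights

question = rng.normal(size=(n_words, d))  # question word embeddings
state = np.zeros(d)                       # reasoning state

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for hop in range(n_hops):
    # 1) Decide which part of the question to analyze at this hop.
    attn = softmax(question @ (W_att @ state))
    q_hop = attn @ question
    # 2) Predict the relation that matches the current parsed result.
    rel_id = int(np.argmax(R @ q_hop))
    # 3) Use the relation to update the question representation
    #    (removing the analyzed part) and the reasoning state.
    question = question - np.outer(attn, R[rel_id])
    state = np.tanh(W_state @ state + R[rel_id])
    # 4) An entity would be predicted from `state` at each hop,
    #    e.g., by nearest neighbour over entity embeddings (omitted).
    print(f"hop {hop}: attended word {int(np.argmax(attn))}, relation {rel_id}")
\end{verbatim}
Because a relation (and, in the full model, an entity) is produced explicitly at every hop, the intermediate results of such a loop can be inspected directly, which is what makes the reasoning path traceable.
<|paper_end|>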
[ "<|reference_start|> Evinets: Neural networks for combining evidence signals for factoid question answering: A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering. <|reference_end|>", "<|reference_start|> Key-Value Memory Networks for Directly Reading Documents: Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark. <|reference_end|>", "<|reference_start|> Gated self-matching networks for reading comprehension and question answering: In this paper, we present the gated self-matching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model. 
<|reference_end|>", "<|reference_start|> Neural Programmer: Inducing Latent Programs with Gradient Descent: Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy. <|reference_end|>" ]
[ 3, 9, 10, 31 ]
{"<|cite_1|>": "ss-1264499", "<|multi_cite_2_1|>": "arxiv-78908", "<|multi_cite_2_2|>": "arxiv-93308", "<|multi_cite_2_3|>": "ss-1495400", "<|multi_cite_3_1|>": "ss-1966169", "<|multi_cite_3_2|>": "ss-1276661", "<|multi_cite_4_1|>": "arxiv-59558", "<|multi_cite_4_2|>": "ss-690633", "<|multi_cite_4_3|>": "ss-1495401", "<|multi_cite_5_1|>": "arxiv-99775", "<|multi_cite_5_2|>": "ss-1937521", "<|multi_cite_6_1|>": "arxiv-81958", "<|multi_cite_6_2|>": "ss-1276661", "<|multi_cite_6_3|>": "ss-1117451", "<|multi_cite_7_1|>": "arxiv-59558", "<|multi_cite_7_2|>": "ss-975429", "<|multi_cite_7_3|>": "arxiv-93308", "<|multi_cite_7_4|>": "ss-690633", "<|multi_cite_7_5|>": "ss-1495401", "<|cite_8|>": "arxiv-62242", "<|multi_cite_9_1|>": "ss-975429", "<|multi_cite_9_2|>": "ss-690633", "<|cite_10|>": "arxiv-93308", "<|multi_cite_11_1|>": "arxiv-75391", "<|multi_cite_11_2|>": "arxiv-79940", "<|multi_cite_11_3|>": "arxiv-105996", "<|multi_cite_11_4|>": "ss-1937521", "<|multi_cite_11_5|>": "ss-1495402", "<|cite_12|>": "arxiv-67359", "<|cite_13|>": "arxiv-99775", "<|cite_14|>": "ss-1495400", "<|multi_cite_15_1|>": "arxiv-87250", "<|multi_cite_15_2|>": "arxiv-87250", "<|cite_16|>": "arxiv-88432", "<|multi_cite_17_1|>": "arxiv-86843", "<|multi_cite_17_2|>": "arxiv-90115"}
2009.13818
<|paper_start|> Title: A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation Abstract: A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation: Adversarial training has been shown effective at endowing the learned representations with stronger generalization ability. However, it typically requires expensive computation to determine the direction of the injected perturbations. In this paper, we introduce a set of simple yet effective data augmentation strategies dubbed cutoff, where part of the information within an input sentence is erased to yield its restricted views (during the fine-tuning stage). Notably, this process relies merely on stochastic sampling and thus adds little computational overhead. A Jensen-Shannon Divergence consistency loss is further utilized to incorporate these augmented samples into the training objective in a principled manner. To verify the effectiveness of the proposed strategies, we apply cutoff to both natural language understanding and generation problems. On the GLUE benchmark, it is demonstrated that cutoff, in spite of its simplicity, performs on par or better than several competitive adversarial-based approaches. We further extend cutoff to machine translation and observe significant gains in BLEU scores (based upon the Transformer Base model). Moreover, cutoff consistently outperforms adversarial training and achieves state-of-the-art results on the IWSLT2014 German-English dataset. Introduction Large-scale language models (LMs) pre-trained with massive unlabeled text corpora, in a self-supervised manner, have brought impressive performance gains across a wide range of natural language processing tasks <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|> <|cite_start|> (Reference: RoBERTa: A Robustly Optimized BERT Pretraining Approach: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size.
We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.) <|cite_end|> <|cite_start|> (Reference: XLNet: Generalized Autoregressive Pretraining for Language Understanding: With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.) <|cite_end|> <|cite_start|> (Reference: SpanBERT: Improving Pre-training by Representing and Predicting Spans: We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0, respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6\% F1), strong performance on the TACRED relation extraction benchmark, and even show gains on GLUE.) <|cite_end|> <|cite_start|> (Reference: ERNIE: Enhanced Representation through Knowledge Integration: We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the masking strategy of BERT, ERNIE is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words.Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit.Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. We also demonstrate that ERNIE has more powerful knowledge inference capacity on a cloze test.) 
<|cite_end|> <|cite_start|> (Reference: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators: Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.) <|cite_end|> <|cite_start|> (Reference: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.) <|cite_end|> <|cite_start|> (Reference: UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training: We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM). 
Given an input text with masked tokens, we rely on conventional masks to learn inter-relations between corrupted tokens and context via autoencoding, and pseudo masks to learn intra-relations between masked spans via partially autoregressive modeling. With well-designed position embeddings and self-attention masks, the context encodings are reused to avoid redundant computation. Moreover, conventional masks used for autoencoding provide global masking information, so that all the position embeddings are accessible in partially autoregressive language modeling. In addition, the two tasks pre-train a unified language model as a bidirectional encoder and a sequence-to-sequence decoder, respectively. Our experiments show that the unified language models pre-trained using PMLM achieve new state-of-the-art results on a wide range of natural language understanding and generation tasks across several widely used benchmarks.) <|cite_end|> <|cite_start|> (Reference: DeBERTa: Decoding-enhanced BERT with Disentangled Attention: Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models' generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural langauge generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transform layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, out performing the human baseline by a decent margin (90.3 versus 89.8).) <|cite_end|>. Significant research efforts have focused on exploring various pre-training recipes to yield more informative LMs. However, given the imbalance between the huge number of model parameters and the limited amount of task-specific data, how to leverage and unlock the knowledge from large-scale LMs (during the fine-tuning stage) remains a challenging issue.
It has been observed that the representations from pre-trained models, after being fine-tuned on specific downstream tasks, tend to degrade and become less generalizable <|cite_start|> (Reference: FreeLB: Enhanced Adversarial Training for Natural Language Understanding: Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44\% and 67.75\% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well. Code is available at \url{https://github.com/zhuchen03/FreeLB .) <|cite_end|> <|cite_start|> (Reference: SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization: Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the data of downstream tasks and forget the knowledge of the pre-trained model. To address the above issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning for pre-trained language models. Specifically, our proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the capacity of the model; 2. Bregman proximal point optimization, which is a class of trust-region methods and can prevent knowledge forgetting. Our experiments demonstrate that our proposed method achieves the state-of-the-art performance on multiple NLP benchmarks.) <|cite_end|> <|cite_start|> (Reference: Better Fine-Tuning by Reducing Representational Collapse: Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods. In this paper, we present a simplified and efficient method rooted in trust region theory that replaces previously used adversarial objectives with parametric noise (sampling from either a normal or uniform distribution), thereby discouraging representation change during fine-tuning when possible without hurting performance. 
We also introduce a new analysis to motivate the use of trust region methods more generally, by studying representational collapse; the degradation of generalizable representations from pre-trained models as they are fine-tuned for a specific end task. Extensive experiments show that our fine-tuning method matches or exceeds the performance of previous trust region methods on a range of understanding and generation tasks (including DailyMail/CNN, Gigaword, Reddit TIFU, and the GLUE benchmark), while also being much faster. We also show that it is less prone to representation collapse; the pre-trained models maintain more generalizable representations every time they are fine-tuned.) <|cite_end|>. To alleviate this issue, adversarial training objectives have been proposed to regularize the learned representations during the fine-tuning stage <|cite_start|> (Reference: FreeLB: Enhanced Adversarial Training for Natural Language Understanding: Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44\% and 67.75\% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well. Code is available at \url{https://github.com/zhuchen03/FreeLB .) <|cite_end|> <|cite_start|> (Reference: Adversarial Training for Large Neural Language Models: Generalization and robustness are both key desiderata for designing machine learning methods. Adversarial training can enhance robustness, but past work often finds it hurts generalization. In natural language processing (NLP), pre-training large neural language models such as BERT have demonstrated impressive gain in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. However, these models are still vulnerable to adversarial attacks. In this paper, we show that adversarial pre-training can improve both generalization and robustness. We propose a general algorithm ALUM (Adversarial training for large neural LangUage Models), which regularizes the training objective by applying perturbations in the embedding space that maximizes the adversarial loss. We present the first comprehensive study of adversarial training in all stages, including pre-training from scratch, continual pre-training on a well-trained model, and task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide range of NLP tasks, in both regular and adversarial scenarios. 
Even for models that have been well trained on extremely large text corpora, such as RoBERTa, ALUM can still produce significant gains from continual pre-training, whereas conventional non-adversarial methods can not. ALUM can be further combined with task-specific fine-tuning to attain additional gains. The ALUM code is publicly available at https://github.com/namisan/mt-dnn.) <|cite_end|> <|cite_start|> (Reference: SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization: Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the data of downstream tasks and forget the knowledge of the pre-trained model. To address the above issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning for pre-trained language models. Specifically, our proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the capacity of the model; 2. Bregman proximal point optimization, which is a class of trust-region methods and can prevent knowledge forgetting. Our experiments demonstrate that our proposed method achieves the state-of-the-art performance on multiple NLP benchmarks.) <|cite_end|>. Specifically, label-preserving perturbations are performed on the word embedding layer, and the model is encouraged to make consistent predictions regardless of such noise. Although the model's robustness can be improved with these perturbed examples, adversarial-based methods typically require additional backward passes to decide the direction of the injected perturbations. As a result, these methods give rise to significantly more computational and memory overhead (relative to standard SGD training). In this paper, we introduce a set of simple yet efficient data augmentation strategies. They are inspired by the consensus principle in multi-view learning <|cite_start|> (Reference: A Survey on Multi-view Learning: In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace.
Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning.) <|cite_end|> <|cite_start|> (Reference: Semi-Supervised Sequence Modeling with Cross-View Training: Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from task-specific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multi-task learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.) <|cite_end|>, which states that maximizing the agreement/consensus between two different views of data can lead to a lower error rate. Specifically, we propose to erase/remove part of the information within a training instance to produce multiple perturbed samples. To ensure that the model cannot utilize the information from the removed input at all, the erasing process happens at the input embedding layer. In contrast to Dropout, which zeroes out individual elements of the word embedding matrix, we propose to erase entire vectors along either the token or the feature dimension. As a result, either multiple tokens or embedding dimensions are converted to vectors of all zeros, yielding partial views of the input matrix in a structured manner. To make the augmented samples more challenging, inspired by <|cite_start|> (Reference: SpanBERT: Improving Pre-training by Representing and Predicting Spans: We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0, respectively.
We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6\% F1), strong performance on the TACRED relation extraction benchmark, and even show gains on GLUE.) <|cite_end|>, we further introduce an approach to derive restricted views by removing a contiguous span within an input sequence. The model is fine-tuned with the constraint of making consistent predictions on these augmented data (even with partial views of the original input). Intuitively, the resulting representations tend to have a stronger ability to \emph{fully} abstract various semantic features from a sentence, since the model cannot merely utilize the most salient ones (which may not be available in partial views) to make the corresponding predictions. To capture the intrinsic relationship among these stochastic and diverse augmented examples, we propose a specially-designed consistency regularization objective. Particularly, in addition to the cross-entropy loss typically employed in data augmentation, a Jensen-Shannon Divergence consistency loss is further introduced to match the predictions between different partial views of a given input. One advantage is that this loss is able to naturally maximize the consensus between multiple (more than $2$) views in a more principled and stable manner. We evaluate the effectiveness of the proposed data augmentation strategies on a wide range of natural language understanding (NLU) tasks from the GLUE benchmark. RoBERTa <|cite_start|> (Reference: RoBERTa: A Robustly Optimized BERT Pretraining Approach: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.) <|cite_end|> is employed as the testbed model in our experiments. However, the augmentation methods proposed here can be easily extended to other large-scale pretrained models. Despite its simplicity, our method consistently gives rise to significant performance gains. More importantly, \emph{cutoff} outperforms several competitive adversarial-based approaches, while being much more computationally efficient. We further extend \emph{cutoff} to the text generation scenario and verify it on a machine translation task. The proposed methods greatly outperform adversarial training on both WMT2014 English-to-German and IWSLT2014 German-to-English tasks. In addition, when combining \emph{cutoff} with a Transformer base model, we achieve a state-of-the-art test result on the IWSLT2014 German-to-English dataset, with a BLEU score of $37.6$.
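To make the procedure described above concrete, a minimal PyTorch-style sketch of the three cutoff variants and the Jensen-Shannon consistency objective follows. This is an illustrative reconstruction under stated assumptions, not the authors' released implementation: the function names (\texttt{token\_cutoff}, \texttt{feature\_cutoff}, \texttt{span\_cutoff}, \texttt{js\_consistency}) and the HuggingFace-style \texttt{inputs\_embeds} classification interface are assumptions introduced here for exposition.
\begin{verbatim}
import torch
import torch.nn.functional as F

def token_cutoff(embed, ratio=0.1):
    # Zero out whole embedding vectors at randomly chosen token positions.
    b, l, h = embed.shape
    keep = (torch.rand(b, l, 1, device=embed.device) > ratio).float()
    return embed * keep

def feature_cutoff(embed, ratio=0.1):
    # Zero out randomly chosen embedding dimensions across all tokens.
    b, l, h = embed.shape
    keep = (torch.rand(b, 1, h, device=embed.device) > ratio).float()
    return embed * keep

def span_cutoff(embed, ratio=0.1):
    # Zero out one contiguous span of token positions per example.
    b, l, _ = embed.shape
    out = embed.clone()
    span = max(1, int(l * ratio))
    for i in range(b):
        start = torch.randint(0, l - span + 1, (1,)).item()
        out[i, start:start + span, :] = 0.0
    return out

def js_consistency(logits_list):
    # Jensen-Shannon consistency across N views:
    # mean_i KL(p_i || p_bar), where p_bar is the average distribution.
    probs = [F.softmax(lg, dim=-1) for lg in logits_list]
    p_bar = torch.stack(probs, dim=0).mean(dim=0)
    return sum(F.kl_div(p_bar.log(), p, reduction="batchmean")
               for p in probs) / len(probs)

def cutoff_loss(model, embed, labels, alpha=1.0, cutoff=span_cutoff, views=2):
    # Cross-entropy on the original and augmented views, plus the
    # consistency term that couples their predictions.
    inputs = [embed] + [cutoff(embed) for _ in range(views)]
    logits = [model(inputs_embeds=x).logits for x in inputs]
    ce = sum(F.cross_entropy(lg, labels) for lg in logits) / len(logits)
    return ce + alpha * js_consistency(logits)
\end{verbatim}
In practice one would likely also mask out padding positions before sampling the cutoff; that detail is omitted here for brevity.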
\vspace{-0.5mm} Related Work \vspace{-1mm} \paragraph{Adversarial Training} Adversarial training has its origins in attacks on neural-network-based models, in which small perturbations applied to the input cause incorrect predictions <|cite_start|> (Reference: Intriguing properties of neural networks: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.) <|cite_end|>. Thereafter, several adversarial-based approaches, including adversarial examples <|cite_start|> (Reference: Explaining and Harnessing Adversarial Examples: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.) <|cite_end|>, PGD <|cite_start|> (Reference: Towards Deep Learning Models Resistant to Adversarial Attacks: Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples---inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary.
These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at https://github.com/MadryLab/mnist_challenge and https://github.com/MadryLab/cifar10_challenge.) <|cite_end|>, \emph{etc}, have been introduced. It has been demonstrated that these methods can improve the robustness and generalization ability of a model by augmenting the original training instances with the perturbed examples. Recently, adversarial-based approaches have emerged as a popular research trend in NLP and have been successfully applied to a wide variety of NLU tasks, including sentence classification, machine reading comprehension (MRC), and natural language inference (NLI), among others. Despite their success, computational overhead is typically required to calculate the perturbation directions. Several research efforts have been devoted to accelerating adversarial training <|cite_start|> (Reference: Adversarial Training for Free!: Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high cost of generating strong adversarial examples makes standard adversarial training impractical on large-scale problems like ImageNet. We present an algorithm that eliminates the overhead cost of generating adversarial examples by recycling the gradient information computed when updating model parameters. Our "free" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training, and can be 7 to 30 times faster than other strong adversarial training methods. Using a single workstation with 4 P100 GPUs and 2 days of runtime, we can train a robust model for the large-scale ImageNet classification task that maintains 40% accuracy against PGD attacks. The code is available at https://github.com/ashafahi/free_adv_train.) <|cite_end|> <|cite_start|> (Reference: You only propagate once: Painless adversarial training using maximal principle: Deep learning achieves state-of-the-art results in many areas. However recent works have shown that deep networks can be vulnerable to adversarial perturbations which slightly changes the input but leads to incorrect prediction. Adversarial training is an effective way of improving the robustness to the adversarial examples, typically formulated as a robust optimization problem for network training. To solve it, previous works directly run gradient descent on the “adversarial loss”, i.e. replacing the input data with the corresponding adversaries. A major drawback of this approach is the computational overhead of adversary generation, which is much larger than network updating and leads to inconvenience in adversarial defense. To address this issue, we fully exploit structure of deep neural networks and propose a novel strategy to decouple the adversary update with the gradient back propagation. To achieve this goal, we follow the research line considering training deep neural network as an optimal control problem. We formulate the robust optimization as a differential game.
This allows us to figure out the necessary conditions for optimality. In the way, we train the neural network via solving the Pontryagin’s Maximum Principle (PMP). The adversary is only coupled with the first layer weight in PMP. It inspires us to split the adversary computation from the back propagation gradient computation. As a result, our proposed YOPO (You Only Propagate Once) avoids forward and backward propagating the data too many times in one iteration, and restricts core descent directions computation to the first layer of the network, thus speeding up every iteration significantly. For adversarial example defense, our experiment shows that YOPO can achieve comparable defense accuracy using around 1/5 GPU time of the original projected gradient descent training. Our codes are available at https://github.com/a1600012888/YOPO-You-Only-Propagate-Once) <|cite_end|>. However, additional forward-backward passes are still needed for adversarial training. Our proposed cutoff methods are much more computationally efficient from this perspective. Besides, the connection between adversarial training and data-augmentation-based approaches has not previously been well-established. Our work bridges this gap by unifying the two types of methods under the consistency training framework (a minimal sketch contrasting the per-step cost of the two paradigms is given below). \nocite{Chen2020MixTextLI} \vspace{-2mm} \paragraph{Multi-view Learning} The main idea of multi-view learning is to produce distinct subsets (views) of features corresponding to the same data, and the predictions by the model according to different views are encouraged to be consistent <|cite_start|> (Reference: A Survey on Multi-view Learning: In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning.) <|cite_end|>.
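As the point of contrast promised in the adversarial-training paragraph above, the following hypothetical sketch shows a single FGSM-style perturbation step on the embedding layer. The function name and the \texttt{inputs\_embeds} interface are illustrative assumptions rather than any specific method's released code; the point is the extra backward pass (\texttt{torch.autograd.grad}) needed just to obtain the perturbation direction, which the stochastic cutoff operations avoid.
\begin{verbatim}
import torch
import torch.nn.functional as F

def adversarial_embedding_loss(model, embed, labels, eps=1e-3):
    # One FGSM-style adversarial step on the input embeddings. Computing
    # the perturbation direction requires an extra backward pass through
    # the whole model, on top of the usual training pass.
    embed = embed.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(inputs_embeds=embed).logits, labels)
    grad, = torch.autograd.grad(clean_loss, embed,
                                retain_graph=True)   # extra backward pass
    adv = (embed + eps * grad.sign()).detach()       # label-preserving noise
    adv_loss = F.cross_entropy(model(inputs_embeds=adv).logits, labels)
    return clean_loss + adv_loss
\end{verbatim}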
Our approach is slightly different from such algorithms, \emph{e.g.}, co-training and co-regularization <|cite_start|> (Reference: A Co-regularization approach to semi-supervised learning with multiple views: The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.) <|cite_end|>, in the sense that the multiple views from \emph{cutoff} have certain overlaps, rather than being entirely independent. The intuition of our method bears resemblance to cross-view training (CVT) <|cite_start|> (Reference: Semi-Supervised Sequence Modeling with Cross-View Training: Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from task-specific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multi-task learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.) <|cite_end|>, which also proposes to improve sentence representations by encouraging consistent predictions across different views of the input. However, there are several key differences that make our work unique (beyond the fact that CVT focuses on a semi-supervised setting, rather than the supervised one considered here): \emph{\romannumeral1}) CVT generates partial views on top of latent representations, while \emph{cutoff} operates at the input embedding layer. As a result, our method is more generic and model-agnostic; \emph{\romannumeral2}) CVT adds an auxiliary prediction module during the training stage, while span cutoff requires no changes to the original model at all; \emph{\romannumeral3}) we leverage a Jensen-Shannon Divergence consistency loss to match the predictions across various views, which maximizes their consensus in a more natural and stable manner (also more efficient than the multiple KL divergence terms used in CVT). \vspace{-1mm} <|paper_end|>
[ "<|reference_start|> BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance. <|reference_end|>", "<|reference_start|> FreeLB: Enhanced Adversarial Training for Natural Language Understanding: Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44\\% and 67.75\\% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well. Code is available at \\url{https://github.com/zhuchen03/FreeLB . <|reference_end|>", "<|reference_start|> Adversarial Training for Free!: Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high cost of generating strong adversarial examples makes standard adversarial training impractical on large-scale problems like ImageNet. We present an algorithm that eliminates the overhead cost of generating adversarial examples by recycling the gradient information computed when updating model parameters. 
Our \"free\" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training, and can be 7 to 30 times faster than other strong adversarial training methods. Using a single workstation with 4 P100 GPUs and 2 days of runtime, we can train a robust model for the large-scale ImageNet classification task that maintains 40% accuracy against PGD attacks. The code is available at https://github.com/ashafahi/free_adv_train. <|reference_end|>", "<|reference_start|> A Co-regularization approach to semi-supervised learning with multiple views: The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach. <|reference_end|>" ]
[ 6, 9, 22, 25 ]
{"<|multi_cite_1_1|>": "arxiv-175879", "<|multi_cite_1_2|>": "arxiv-216284", "<|multi_cite_1_3|>": "arxiv-210557", "<|multi_cite_1_4|>": "arxiv-215907", "<|multi_cite_1_5|>": "arxiv-200766", "<|multi_cite_1_6|>": "arxiv-255236", "<|multi_cite_1_7|>": "arxiv-231476", "<|multi_cite_1_8|>": "arxiv-251076", "<|multi_cite_1_9|>": "arxiv-269801", "<|multi_cite_2_1|>": "arxiv-225619", "<|multi_cite_2_2|>": "arxiv-233155", "<|multi_cite_2_3|>": "arxiv-283270", "<|multi_cite_3_1|>": "arxiv-225619", "<|multi_cite_3_2|>": "arxiv-260248", "<|multi_cite_3_3|>": "arxiv-233155", "<|multi_cite_4_2|>": "arxiv-44869", "<|multi_cite_4_3|>": "arxiv-173625", "<|cite_5|>": "arxiv-215907", "<|cite_6|>": "arxiv-216284", "<|cite_7|>": "arxiv-54384", "<|cite_8|>": "arxiv-70555", "<|cite_9|>": "arxiv-127148", "<|multi_cite_10_1|>": "arxiv-202009", "<|multi_cite_10_2|>": "ss-1293346", "<|cite_11|>": "arxiv-44869", "<|cite_13|>": "ss-1031721", "<|cite_14|>": "arxiv-173625"}
1707.06943
<|paper_start|> Title: Securing Visible Light Communication Systems by Beamforming in the Presence of Randomly Distributed Eavesdroppers Abstract: Securing Visible Light Communication Systems by Beamforming in the Presence of Randomly Distributed Eavesdroppers: This paper considers secrecy enhancement mechanisms in visible light communication (VLC) systems with spatially distributed passive eavesdroppers (EDs) under the assumption that there are multiple LED transmitters and one legitimate receiver (UE). Based on certain amplitude constraints, we propose an optimal beamforming scheme to optimize secrecy performance. Contrary to the case where null-steering is made possible by using knowledge of the ED locations, we show that the optimal solution when only statistical information about ED locations is available directs the transmission along a particular eigenmode related to the intensity of the ED process and the intended channel. Then, a sub-optimal LED selection scheme is provided to reduce the secrecy outage probability (SOP). An approximate closed-form expression for the SOP is derived by using secrecy capacity bounds. All analysis is numerically verified by Monte Carlo simulations. The analysis shows that the optimal beamformer yields superior performance to LED selection. However, LED selection is still a highly efficient suboptimal scheme due to the complexity associated with the use of multiple transmitters in the full beamforming approach. These performance trends and exact relations between system parameters can be used to develop a secure VLC system in the presence of randomly distributed EDs. Introduction \label{sec:1} \IEEEPARstart{D}{ue} to the rapid proliferation of mobile communication devices and the associated difficulties in adequately allocating spectra to support new services, visible light communication (VLC) has become an increasingly interesting topic of research in academia and industry. The VLC medium does not interfere with RF systems, and VLC spectrum can be easily reused (spatially) since light can be confined to a certain indoor area. Moreover, VLC uses unregulated spectrum with a wide bandwidth (428 to 750 THz) and is capable of exploiting existing LED light infrastructure for communication. Compared to RF channels, VLC exploits line-of-sight (LoS) propagation and has relatively good signal confinement properties. However, the VLC channel is still of a broadcast nature. Therefore, securing VLC transmissions is an important issue, particularly for deployments in open places such as public libraries, offices, and shopping malls. To cope with the security issue in RF systems, the focus on physical layer security (PLS), which is based on the information theoretic notion of employing coding to achieve secure communication, has accelerated since Wyner's seminal work <|cite_start|> (Reference: The Wire-tap Channel: We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc). Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper.
In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.) <|cite_end|>. Due to the broadcast nature of RF communications, both the legitimate receiver, or user equipment (UE), and eavesdroppers (EDs) may receive data from the source. However, the principle of PLS states that if the capacity of the intended data transmission channel is higher than that of the eavesdropping channel, the data can be transmitted at a rate close to the difference in their capacities, the so-called \emph{secrecy capacity}, so that only the intended receiver can successfully decode the data. It is difficult to obtain knowledge of passive ED locations. Yet, the analysis of secrecy capacity in spatial networks inherently depends upon these geometric properties. The mathematical theory of stochastic geometry is a powerful tool for dealing with spatial uncertainty <|cite_start|> (Reference: The Secrecy Graph and Some of its Properties: A new random geometric graph model, the so-called secrecy graph, is introduced and studied. The graph represents a wireless network and includes only edges over which secure communication in the presence of eavesdroppers is possible. The underlying point process models considered are lattices and Poisson point processes. In the lattice case, analogies to standard bond and site percolation can be exploited to determine percolation thresholds. In the Poisson case, the node degrees are determined and percolation is studied using analytical bounds and simulations. It turns out that a small density of eavesdroppers already has a drastic impact on the connectivity of the secrecy graph.) <|cite_end|> <|cite_start|> (Reference: Physical-layer security in stochastic wireless networks: Motivated by recent developments in physical-layer security and stochastic geometry, we aim to characterize the fundamental limits of secure communication in wireless networks. Based on a general model in which legitimate nodes and potential eavesdroppers are randomly scattered in space, we define the secure communication graph (s-graph) from the point of view of information-theoretic security. For the Poisson s-graph, we provide conclusive results for: (a) the in-degree and out-degree of a node; (b) the isolation probability; and (c) the secrecy capacity between a node and each of its neighbours. Our analysis reveals the innate connections between information-theoretic security and the spatial densities of legitimate and eavesdropper nodes.) <|cite_end|>. Using stochastic geometric methods, the impact of random ED locations on secrecy performance for RF communications has been investigated in recent years <|cite_start|> (Reference: On the Throughput Cost of Physical Layer Security in Decentralized Wireless Networks: This paper studies the throughput of large-scale decentralized wireless networks with physical layer security constraints. In particular, we are interested in the question of how much throughput needs to be sacrificed for achieving a certain level of security. We consider random networks where the legitimate nodes and the eavesdroppers are distributed according to independent two-dimensional Poisson point processes.
The transmission capacity framework is used to characterize the area spectral efficiency of secure transmissions with constraints on both the quality of service (QoS) and the level of security. This framework illustrates the dependence of the network throughput on key system parameters, such as the densities of legitimate nodes and eavesdroppers, as well as the QoS and security constraints. One important finding is that the throughput cost of achieving a moderate level of security is quite low, while throughput must be significantly sacrificed to realize a highly secure network. We also study the use of a secrecy guard zone, which is shown to give a significant improvement on the throughput of networks with high security requirements.) <|cite_end|> <|cite_start|> (Reference: Secrecy Rates in the Broadcast Channel with Confidential Messages and External Eavesdroppers: In this paper, we consider the broadcast channel with confidential messages and external eavesdroppers (BCCE), where a multi-antenna base station simultaneously communicates to multiple potentially malicious users, in the presence of randomly located external eavesdroppers. Using the proposed model, we study the secrecy rates achievable by regularized channel inversion (RCI) precoding by performing a large-system analysis that combines tools from stochastic geometry and random matrix theory. We obtain explicit expressions for the probability of secrecy outage and an upper bound on the rate loss due to the presence of external eavesdroppers. We show that both these quantities scale as $\frac{\lambda_e}{\sqrt{N}}$, where $N$ is the number of transmit antennas and $\lambda_e$ is the density of external eavesdroppers, irrespective of their collusion strategy. Furthermore, we derive a practical rule for the choice of the regularization parameter, which is agnostic of channel state information and location of eavesdroppers, and yet provides close to optimal performance.) <|cite_end|> <|cite_start|> (Reference: On transmission secrecy outage of a multi-antenna system with randomly located eavesdroppers: This letter studies the physical-layer security of a multi-antenna transmission system in the presence of Poisson distributed eavesdroppers. The transmission secrecy outage probability (TSOP) is adopted to evaluate the security. We derive an accurate integral expression as well as a closed-form upper bound on TSOP for the noncolluding eavesdroppers' case and a closed-form solution for the colluding eavesdroppers' case, respectively. Based on these, we define a novel concept of security region to intuitively illustrate the security from a spatial perspective. We further analyze the impacts of various factors on the security, such as the number of transmit antennas, the node intensity, and the target secrecy rate.) <|cite_end|> <|cite_start|> (Reference: Secrecy Outage Analysis for Downlink Transmissions in the Presence of Randomly Located Eavesdroppers: We analyze the secrecy outage probability in the downlink for wireless networks with spatially (Poisson) distributed eavesdroppers (EDs) under the assumption that the base station employs transmit antenna selection (TAS) to enhance secrecy performance. We compare the cases where the receiving user equipment (UE) operates in half-duplex (HD) mode and full-duplex (FD) mode. In the latter case, the UE simultaneously receives the intended downlink message and transmits a jamming signal to strengthen secrecy. 
We investigate two models of (semi)passive eavesdropping: (1) EDs act independently and (2) EDs collude to intercept the transmitted message. For both of these models, we obtain expressions for the secrecy outage probability in the downlink for HD and FD UE operation. The expressions for HD systems have very accurate approximate or exact forms in terms of elementary and/or special functions for all path loss exponents. Those related to the FD systems have exact integral forms for general path loss exponents, while exact closed forms are given for specific exponents. A closed-form approximation is also derived for the FD case with colluding EDs. The resulting analysis shows that the reduction in the secrecy outage probability is logarithmic in the number of antennas used for TAS and identifies conditions under which HD operation should be used instead of FD jamming at the UE. These performance trends and exact relations between system parameters can be used to develop adaptive power allocation and duplex operation methods in practice. Examples of such techniques are alluded to herein.) <|cite_end|>. The location distribution of EDs can be modeled as a Poisson point process (PPP) or a binomial point process (BPP). In <|cite_start|> (Reference: On the Throughput Cost of Physical Layer Security in Decentralized Wireless Networks: This paper studies the throughput of large-scale decentralized wireless networks with physical layer security constraints. In particular, we are interested in the question of how much throughput needs to be sacrificed for achieving a certain level of security. We consider random networks where the legitimate nodes and the eavesdroppers are distributed according to independent two-dimensional Poisson point processes. The transmission capacity framework is used to characterize the area spectral efficiency of secure transmissions with constraints on both the quality of service (QoS) and the level of security. This framework illustrates the dependence of the network throughput on key system parameters, such as the densities of legitimate nodes and eavesdroppers, as well as the QoS and security constraints. One important finding is that the throughput cost of achieving a moderate level of security is quite low, while throughput must be significantly sacrificed to realize a highly secure network. We also study the use of a secrecy guard zone, which is shown to give a significant improvement on the throughput of networks with high security requirements.) <|cite_end|>, the locations of multiple legitimate pairs and EDs were represented as independent two-dimensional PPPs, and the average secrecy throughput in such a wireless network was studied. Multiple-input multiple-output (MIMO) transmission with beamforming was considered later in <|cite_start|> (Reference: Secrecy Rates in the Broadcast Channel with Confidential Messages and External Eavesdroppers: In this paper, we consider the broadcast channel with confidential messages and external eavesdroppers (BCCE), where a multi-antenna base station simultaneously communicates to multiple potentially malicious users, in the presence of randomly located external eavesdroppers. Using the proposed model, we study the secrecy rates achievable by regularized channel inversion (RCI) precoding by performing a large-system analysis that combines tools from stochastic geometry and random matrix theory. 
We obtain explicit expressions for the probability of secrecy outage and an upper bound on the rate loss due to the presence of external eavesdroppers. We show that both these quantities scale as $\frac{\lambda_e}{\sqrt{N}}$, where $N$ is the number of transmit antennas and $\lambda_e$ is the density of external eavesdroppers, irrespective of their collusion strategy. Furthermore, we derive a practical rule for the choice of the regularization parameter, which is agnostic of channel state information and location of eavesdroppers, and yet provides close to optimal performance.) <|cite_end|> <|cite_start|> (Reference: On transmission secrecy outage of a multi-antenna system with randomly located eavesdroppers: This letter studies the physical-layer security of a multi-antenna transmission system in the presence of Poisson distributed eavesdroppers. The transmission secrecy outage probability (TSOP) is adopted to evaluate the security. We derive an accurate integral expression as well as a closed-form upper bound on TSOP for the noncolluding eavesdroppers' case and a closed-form solution for the colluding eavesdroppers' case, respectively. Based on these, we define a novel concept of security region to intuitively illustrate the security from a spatial perspective. We further analyze the impacts of various factors on the security, such as the number of transmit antennas, the node intensity, and the target secrecy rate.) <|cite_end|> to enhance secrecy performance. Transmit antenna selection and full-duplex schemes have also been used to enhance secrecy performance with randomly located EDs <|cite_start|> (Reference: Secrecy Outage Analysis for Downlink Transmissions in the Presence of Randomly Located Eavesdroppers: We analyze the secrecy outage probability in the downlink for wireless networks with spatially (Poisson) distributed eavesdroppers (EDs) under the assumption that the base station employs transmit antenna selection (TAS) to enhance secrecy performance. We compare the cases where the receiving user equipment (UE) operates in half-duplex (HD) mode and full-duplex (FD) mode. In the latter case, the UE simultaneously receives the intended downlink message and transmits a jamming signal to strengthen secrecy. We investigate two models of (semi)passive eavesdropping: (1) EDs act independently and (2) EDs collude to intercept the transmitted message. For both of these models, we obtain expressions for the secrecy outage probability in the downlink for HD and FD UE operation. The expressions for HD systems have very accurate approximate or exact forms in terms of elementary and/or special functions for all path loss exponents. Those related to the FD systems have exact integral forms for general path loss exponents, while exact closed forms are given for specific exponents. A closed-form approximation is also derived for the FD case with colluding EDs. The resulting analysis shows that the reduction in the secrecy outage probability is logarithmic in the number of antennas used for TAS and identifies conditions under which HD operation should be used instead of FD jamming at the UE. These performance trends and exact relations between system parameters can be used to develop adaptive power allocation and duplex operation methods in practice. Examples of such techniques are alluded to herein.) <|cite_end|>. 
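Before turning to VLC, it is worth noting how directly such a PPP model can be simulated. The Python sketch below draws ED positions from a homogeneous PPP on a room's work plane and estimates, by Monte Carlo, the probability that the strongest ED channel beats the legitimate channel under a simplified Lambertian line-of-sight gain of the kind used in VLC. All numerical values (room size, intensity, Lambertian order) are illustrative placeholders rather than parameters taken from the works cited above. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, W, Z = 5.0, 5.0, 2.15   # room size and LED-to-plane height (placeholders)
lam_E = 0.05               # ED intensity in EDs per square meter (placeholder)
m = 1.0                    # Lambertian order for a 60-degree half-illuminance angle

def los_gain(xy, led=np.array([2.5, 2.5])):
    # Simplified Lambertian LoS gain with constant factors dropped;
    # the LED points down and the detector up, so both angles share cos.
    d2 = np.sum((xy - led) ** 2, axis=-1) + Z ** 2
    cos = Z / np.sqrt(d2)
    return cos ** (m + 1) / d2

def secrecy_outage_mc(ue_xy, trials=20000):
    # Zero-rate outage proxy: the best ED channel gain exceeds the UE's.
    g_ue = los_gain(np.asarray(ue_xy, dtype=float))
    outages = 0
    for _ in range(trials):
        n = rng.poisson(lam_E * L * W)            # PPP point count in the room
        if n == 0:
            continue                              # no ED present: no outage
        eds = rng.uniform([0.0, 0.0], [L, W], size=(n, 2))
        outages += los_gain(eds).max() >= g_ue
    return outages / trials

print(secrecy_outage_mc(ue_xy=[1.0, 1.0]))
\end{verbatim} With equal receiver noise at the UE and the EDs, the gain comparison above is equivalent to comparing channel capacities, so the estimate is a zero-rate secrecy outage probability in this simplified setting.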
Motivated by the advantage of PLS, a recent topic of interest in the research community has been the investigation of PLS applied to VLC systems using various transmission methods, e.g., beamforming and jamming. Recently, Lampe et~al. analyzed the achievable secrecy rate for single-input single-output (SISO) and multiple-input single-output (MISO) scenarios and proposed a variety of beamforming schemes such as zero-forcing (null-steering), artificial noise generation, friendly jamming, and robust beamforming <|cite_start|> (Reference: Physical-Layer Security for MISO Visible Light Communication Channels: This paper considers improving the confidentiality of visible light communication (VLC) links within the framework of physical-layer security. We study a VLC scenario with one transmitter, one legitimate receiver, and one eavesdropper. The transmitter has multiple light sources, while the legitimate and unauthorized receivers have a single photodetector, each. We characterize secrecy rates achievable via transmit beamforming over the multiple-input, single-output (MISO) VLC wiretap channel. For VLC systems, intensity modulation (IM) via light-emitting diodes (LEDs) is the most practical transmission scheme. Because of the limited dynamic range of typical LEDs, the modulating signal must satisfy certain amplitude constraints. Hence, we begin with deriving lower and upper bounds on the secrecy capacity of the scalar Gaussian wiretap channel subject to amplitude constraints. Then, we utilize beamforming to obtain a closed-form secrecy rate expression for the MISO wiretap channel. Finally, we propose a robust beamforming scheme to consider the scenario wherein information about the eavesdropper's channel is imperfect due to location uncertainty. A typical application of the proposed scheme is to secure the communication link when the eavesdropper is expected to exist within a specified area. The performance is measured in terms of the worst-case secrecy rate guaranteed under all admissible realizations of the eavesdropper's channel.) <|cite_end|> <|cite_start|> (Reference: Physical-layer security for indoor visible light communications: This paper considers secure transmission over the visible light communication (VLC) channel by the means of physical-layer security techniques. In particular, we consider achievable secrecy rates of the multiple-input, single-output (MISO) wiretap VLC channel. The VLC channel is modeled as a deterministic and real-valued Gaussian channel subject to amplitude constraints. We utilize null-steering and artificial noise strategies to achieve positive secrecy rates when the eavesdropper's channel state information (CSI) is perfectly known and entirely unknown to the transmitter, respectively. In both scenarios, the legitimate receiver's CSI is available to the transmitter. We numerically evaluate achievable secrecy rates under typical VLC scenarios and show that simple precoding techniques can significantly improve the confidentiality of VLC links.) <|cite_end|> <|cite_start|> (Reference: Securing visible light communications via friendly jamming: Despite offering higher security than radio frequency (RF) channels, the broadcast nature of the visible light communication (VLC) channel makes VLC links inherently susceptible to eavesdropping by unauthorized users. In this work, we consider the physical-layer security of VLC links aided by friendly jamming. The jammer has multiple light sources, but does not have access to the data transmitted.
The eavesdropper's reception is degraded by a jamming signal that causes no interference to the legitimate receiver. Due to the limited dynamic range of typical light-emitting diodes (LEDs), both the data and jamming signals are subject to amplitude constraints. Therefore, we begin with deriving a closed-form secrecy rate expression for the corresponding wiretap channel, and adopt secrecy rate as the performance measure. Then, we formulate a linear programming problem to maximize the secrecy rate when the eavesdropper's channel is accurately known to the jammer. Finally, we consider robust beamforming to maximize the worst-case secrecy rate when information about the eavesdropper's channel is uncertain due to location uncertainty. The robust scheme makes use of simple linear programming, making real-time implementation feasible in a variety of real-world scenarios.) <|cite_end|>. Additionally, Alouini et~al. proposed the truncated normal input distribution and the truncated generalized normal input distribution to increase the secrecy rate under constraints on the input signal amplitude <|cite_start|> (Reference: Improved achievable secrecy rate of visible light communication with cooperative jamming: In this paper we study the problem of securing a visible light communication (VLC) link against passive eavesdropping, with the help of a (friendly) jammer. Differently from radio frequency (RF) communications, VLC imposes a peak amplitude constraint on the input distribution which renders Gaussian inputs not admissible. We provide an achievable secrecy rate that improves upon a recently established one in a concurrent work by Mostafa and Lampe. Our scheme follows from both the secrecy capacity result by Wyner and the artificial noise scheme by Goel and Negi, but using truncated Gaussian input distributions instead of uniform ones. Via numerical results, we show that our secrecy rate outperforms the concurrent rate in different settings.) <|cite_end|> <|cite_start|> (Reference: On the Secrecy Capacity of MISO Visible Light Communication Channels: We study the secrecy capacity of the multiple-input single-output (MISO) Gaussian wiretap visible light communication (VLC) channel. We study a typical VLC scenario with one transmitter, one legitimate receiver, and one eavesdropper. Specifically, we compute the achievable secrecy rate for various input signaling distributions, including the truncated generalized normal (TGN) and uniform distributions. The transmitter is equipped with multiple light sources, while the legitimate and unauthorized receivers are each equipped with a single photodetector. We analyze the achievable secrecy rates via transmit beamforming and artificial noise. In addition, both zero-forcing beamforming and robust beamforming are considered. In the former case, the location of the eavesdropper is assumed to be known, whereas in the latter case, the location of the eavesdropper is unknown. Our numerical results show that the secrecy rate achieved by the TGN distribution is significantly improved as compared to those achieved by the truncated Gaussian and uniform distributions, for both zero-forcing beamforming and robust beamforming. We also derive an upper bound on the achievable secrecy capacity that we used to assess the closeness of the achievable secrecy rates to the derived bound.) <|cite_end|>.
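The eigenmode structure alluded to in the abstract can be prototyped in a few lines. The sketch below is a hedged illustration, not the paper's exact optimization: it discretizes the ED intensity over a grid, builds an ED correlation matrix $R_E = \sum_x \lambda_E(x)\,\mathbf{h}(x)\mathbf{h}(x)^T$, maximizes the Rayleigh-quotient surrogate $(\mathbf{h}_{UE}^T\mathbf{w})^2/(\mathbf{w}^T R_E \mathbf{w})$ (whose maximizer is $\mathbf{w} \propto R_E^{-1}\mathbf{h}_{UE}$), and then rescales to satisfy a per-LED amplitude constraint. \begin{verbatim}
import numpy as np

def ed_correlation(gains, lam):
    # gains: (G, N) LoS gains from N LEDs to G candidate ED grid points;
    # lam: (G,) ED intensity times cell area at each grid point.
    return (gains * lam[:, None]).T @ gains

def amplitude_constrained_beamformer(h_ue, R_E, eps=1e-9):
    # The maximizer of (h^T w)^2 / (w^T R_E w) is the generalized
    # eigenmode w ~ R_E^{-1} h; scale so the largest LED amplitude is one.
    w = np.linalg.solve(R_E + eps * np.eye(len(h_ue)), h_ue)
    return w / np.abs(w).max()

# Toy instance: 4 LEDs, 100 grid points, uniform ED intensity (placeholders).
rng = np.random.default_rng(1)
gains = rng.uniform(0.1, 1.0, size=(100, 4))
lam = np.full(100, 0.05 * 0.25)
h_ue = rng.uniform(0.1, 1.0, size=4)
w = amplitude_constrained_beamformer(h_ue, ed_correlation(gains, lam))
print(w, h_ue @ w)
\end{verbatim} When a single entry of $\mathbf{h}_{UE}$ dominates, the scaled $\mathbf{w}$ concentrates its mass on that LED, which is precisely the regime in which the LED selection scheme discussed below can approximate full beamforming.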
It is important to note, however, that these contributions assumed a small number of EDs are present in the system and either the channel state information (CSI) or the locations of the EDs are known. In practice, it might be impossible to obtain ED CSI or locations. Inspired by the aforementioned contributions exploiting stochastic geometry in RF communications, our previous work <|cite_start|> (Reference: Secrecy analysis in visible light communication systems with randomly located eavesdroppers: We investigate the secrecy connectivity in visible light communication in the presence of randomly located eavesdroppers. We apply spatial point processes to characterize the unknown eavesdropper locations. The closed-form of the secrecy outage probability is derived as a function of the density of eavesdroppers. The analysis is verified by Monte Carlo simulations. Furthermore, we suggest an LED transmitter selection scheme based on the location of a legitimate user. It is verified that the proposed transmission scheme can significantly improve the secrecy performance as a function of the number of LED transmitters.) <|cite_end|> first developed an analogous approach to modeling ED locations in VLC systems. In this paper, we use this model to further analyze system performance and propose new MISO beamforming solutions. The contributions of this paper can be summarized as follows:\begin{itemize} \item we propose a MISO beamforming solution that optimizes secrecy performance measures (e.g., SNR and secrecy capacity bounds) subject to a signal amplitude constraint for VLC systems when only information about the ED intensity measure is available at the transmitter; \item we demonstrate that the proposed beamforming method is well approximated by a simple LED selection scheme when the distance between the UE and one of the transmitting LEDs is small; \item we obtain closed-form bounds on the secrecy outage probability (SOP) when LED selection is adopted. \end{itemize} The rest of this paper is organized as follows\footnote{The notation and symbols used in the paper are listed in Table~\ref{tb:1}.}. Section II begins with the system model describing the modulation and beamforming schemes in VLC and providing various performance measures. In Section III, the optimal beamformer maximizing secrecy performance is investigated. In Section IV, LED selection is proposed, and closed-form upper and lower bounds on the SOP are calculated. Section V gives numerical results that support our analysis. Section VI concludes the paper.
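As a bridge to the notation collected in Table~\ref{tb:1} below, we recall the standard Lambertian line-of-sight gain from the VLC literature, which presumably underlies the system model of Section II; the expression is the textbook form rather than a contribution of this paper, and the optical filter gain (not listed in the table) is omitted: \[ h = \begin{cases} \dfrac{(m+1)A_{PD}}{2\pi d^{2}}\,\cos^{m}(\phi)\,g(\psi)\cos(\psi), & 0 \le \psi \le \Psi_{c},\\ 0, & \psi > \Psi_{c}, \end{cases} \qquad m = \frac{-\ln 2}{\ln\left(\cos\phi_{1/2}\right)}, \qquad g(\psi) = \frac{\kappa^{2}}{\sin^{2}\Psi_{c}}, \] where $d$ is the LED-to-photodiode distance and $g(\psi)$ is the gain of the optical concentrator; the received photocurrent then scales with the responsivity $R$ and the modulation index $\alpha$.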
\begin{table}[!t] \centering \caption{Notation and Symbols Used in the Paper} \small \begin{tabular}{|c|l|} \hline Symbol & Definition/Explanation \\ \hline $L$ & the length of a room \\ $W$ & the width of a room \\ $Z$ & the height from the ceiling to the work plane \\ $N$ & number of transmitters \\ $\Phi_{E}$ & Poisson point process of EDs \\ $\lambda_E$ & ED intensity function \\ $I_{DC}$ & fixed bias current \\ $R$ & photodetector's responsivity \\ $\alpha$ & modulation index \\ $\phi_{1/2}$ & half illuminance angle \\ $A_{PD}$ & physical area of a photodiode \\ $\phi$ & angle of irradiance \\ $\psi$ & angle of incidence \\ $\kappa$ & refractive index of an optical concentrator \\ $\Psi_{c}$ & received field of view of a photodiode \\ $\mathbb R$ & set of real numbers \\ $\mathbb R^+$ & set of non-negative real numbers \\ $\mathbbm{1}$ & all-ones column vector \\ $\mathbf{0}$ & all-zeros column vector \\ $\mathbb E[\cdot]$ & expectation operator \\ $\mathbb{P}(\cdot)$ & probability operator \\ $[\cdot]^T$ & transpose operator \\ $\Gamma(x,y)$ & upper incomplete gamma function \\ \hline \end{tabular} \label{tb:1} \end{table} <|paper_end|>
[ "<|reference_start|> The Secrecy Graph and Some of its Properties: A new random geometric graph model, the so-called secrecy graph, is introduced and studied. The graph represents a wireless network and includes only edges over which secure communication in the presence of eavesdroppers is possible. The underlying point process models considered are lattices and Poisson point processes. In the lattice case, analogies to standard bond and site percolation can be exploited to determine percolation thresholds. In the Poisson case, the node degrees are determined and percolation is studied using analytical bounds and simulations. It turns out that a small density of eavesdroppers already has a drastic impact on the connectivity of the secrecy graph. <|reference_end|>", "<|reference_start|> Physical-layer security in stochastic wireless networks: Motivated by recent developments in physical-layer security and stochastic geometry, we aim to characterize the fundamental limits of secure communication in wireless networks. Based on a general model in which legitimate nodes and potential eavesdroppers are randomly scattered in space, we define the secure communication graph (s-graph) from the point of view of information-theoretic security. For the Poisson s-graph, we provide conclusive results for: (a) the in-degree and out-degree of a node; (b) the isolation probability; and (c) the secrecy capacity between a node and each of its neighbours. Our analysis reveals the innate connections between information-theoretic security and the spatial densities of legitimate and eavesdropper nodes. <|reference_end|>", "<|reference_start|> On transmission secrecy outage of a multi-antenna system with randomly located eavesdroppers: This letter studies the physical-layer security of a multi-antenna transmission system in the presence of Poisson distributed eavesdroppers. The transmission secrecy outage probability (TSOP) is adopted to evaluate the security. We derive an accurate integral expression as well as a closed-form upper bound on TSOP for the noncolluding eavesdroppers' case and a closed-form solution for the colluding eavesdroppers' case, respectively. Based on these, we define a novel concept of security region to intuitively illustrate the security from a spatial perspective. We further analyze the impacts of various factors on the security, such as the number of transmit antennas, the node intensity, and the target secrecy rate. <|reference_end|>", "<|reference_start|> Securing visible light communications via friendly jamming: Despite offering higher security than radio frequency (RF) channels, the broadcast nature of the visible light communication (VLC) channel makes VLC links inherently susceptible to eavesdropping by unauthorized users. In this work, we consider the physical-layer security of VLC links aided by friendly jamming. The jammer has multiple light sources, but does not have access to the data transmitted. The eavesdropper's reception is degraded by a jamming signal that causes no interference to the legitimate receiver. Due to the limited dynamic range of typical light-emitting diodes (LEDs), both the data and jamming signals are subject to amplitude constraints. Therefore, we begin with deriving a closed-form secrecy rate expression for the corresponding wiretap channel, and adopt secrecy rate as the performance measure. Then, we formulate a linear programming problem to maximize the secrecy rate when the eavesdropper's channel is accurately known to the jammer. 
Finally, we consider robust beamforming to maximize the worst-case secrecy rate when information about the eavesdropper's channel is uncertain due to location uncertainty. The robust scheme makes use of simple linear programming, making real-time implementation feasible in a variety of real-world scenarios. <|reference_end|>" ]
[ 1, 2, 9, 13 ]
{"<|cite_2|>": "ss-993086", "<|multi_cite_3_1|>": "arxiv-3353", "<|multi_cite_3_2|>": "ss-1998048", "<|multi_cite_4_1|>": "arxiv-18128", "<|multi_cite_4_2|>": "ss-1998049", "<|multi_cite_4_3|>": "ss-1051855", "<|multi_cite_4_4|>": "arxiv-113735", "<|cite_5|>": "arxiv-18128", "<|multi_cite_6_1|>": "ss-1998049", "<|multi_cite_6_2|>": "ss-1051855", "<|cite_7|>": "arxiv-113735", "<|multi_cite_8_1|>": "ss-1297285", "<|multi_cite_8_2|>": "ss-799608", "<|multi_cite_8_3|>": "ss-1639449", "<|multi_cite_9_1|>": "ss-1639450", "<|multi_cite_9_2|>": "ss-2518129", "<|cite_10|>": "ss-1639453"}
2011.07932
<|paper_start|> Title: Combating the Instability of Mutual Information-based Losses via Regularization Abstract: Combating the Instability of Mutual Information-based Losses via Regularization: Notable progress has been made in numerous fields of machine learning based on neural network-driven mutual information (MI) bounds. However, utilizing the conventional MI-based losses is often challenging due to their practical and mathematical limitations. In this work, we first identify the symptoms behind their instability: (1) the neural network not converging even after the loss seemed to converge, and (2) saturating neural network outputs causing the loss to diverge. We mitigate both issues by adding a novel regularization term to the existing losses. We theoretically and experimentally demonstrate that added regularization stabilizes training. Finally, we present a novel benchmark that evaluates MI-based losses on both the MI estimation power and its capability on the downstream tasks, closely following the pre-existing supervised and contrastive learning settings. We evaluate six different MI-based losses and their regularized counterparts on multiple benchmarks to show that our approach is simple yet effective. Introduction \label{sec:intro} Identifying a relationship between two variables of interest is one of the key problems in mathematics, statistics, and machine learning <|cite_start|> (Reference: Generative Adversarial Nets: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision tasks. However, this model alone does not produce images...) <|cite_end|> <|cite_start|> (Reference: Faster R-CNN: To address the missed and duplicate detections that arise in object detection, this paper proposes an improved Faster R-CNN algorithm based on dual-threshold non-maximum suppression. The algorithm first extracts multi-layer convolutional features of targets with a deep convolutional network architecture, then applies the proposed dual-threshold non-maximum suppression (DT-NMS) algorithm to extract deep information about candidate target regions at the RPN stage, and finally replaces the nearest-neighbor interpolation in the original RoI pooling layer with bilinear interpolation, making target localization on detection datasets more accurate. Experimental results show that the DT-NMS algorithm both effectively balances the missed-detection and false-detection behavior of single-threshold algorithms and specifically reduces the probability that the same target is detected multiple times. Compared with the soft-NMS algorithm, the proposed algorithm lowers the duplicate detection rate on PASCAL VOC2007 by 2.4% and the misclassification rate of repeatedly detected targets by 2%. Compared with Faster R-CNN, the proposed algorithm reaches 74.7% detection accuracy on PASCAL VOC2007, a 1.5% improvement, and a 1.4% improvement on MSCOCO. The algorithm also achieves a relatively fast detection speed of 16 FPS.) <|cite_end|> <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.)
<|cite_end|> <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>. One of the fundamental approaches is information theory-based measurement, namely the measure of mutual information (MI). Due to its mathematical soundness and the rise of deep learning, many have designed differentiable MI-based losses for neural networks. Some utilize the MI-based losses to bridge the gap between latent variables and representations in generative adversarial networks <|cite_start|> (Reference: f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization: Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.) <|cite_end|> <|cite_start|> (Reference: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets: This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. 
It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.) <|cite_end|> <|cite_start|> (Reference: Mutual Information Neural Estimation: We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.) <|cite_end|> <|cite_start|> (Reference: Representation Learning with Contrastive Predictive Coding: While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.) <|cite_end|> <|cite_start|> (Reference: Learning deep representations by mutual information estimation and maximization: In this work, we perform unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality of the input to the objective can greatly influence a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and competes with fully-supervised learning on several classification tasks. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation-learning objectives for specific end-goals.) <|cite_end|>, where others introduce MI-based methodologies identifying the relationship between input, output, and hidden variables <|cite_start|> (Reference: Deep Learning and the Information Bottleneck Principle: Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. 
We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information theoretic limits of the DNN and obtain finite sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that both the optimal architecture, number of layers and features/connections at each layer, are related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations at the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.) <|cite_end|> <|cite_start|> (Reference: Opening the Black Box of Deep Neural Networks via Information: Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the \textit{Information Plane}; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on {\emph compression} of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.) <|cite_end|> <|cite_start|> (Reference: ON THE INFORMATION BOTTLENECK THEORY OF DEEP LEARNING: The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. 
Here we show that none of these claims hold true in the general case, and instead reflect assumptions made to compute a finite mutual information metric in deterministic networks. When computed using simple binning, we demonstrate through a combination of analytical results and simulation that the information plane trajectory observed in prior work is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.) <|cite_end|>. Furthermore, recent self-supervised losses use contrastive losses, whose origins can be traced back to MI-based losses <|cite_start|> (Reference: CLUB: In the summer of 2000, the final of the "Sina Olympic Couple Envoys" event was broadcast on Hunan TV's "Date with Roses". With their spirited and outstanding performances in events such as the water fitness competition, the Olympic knowledge quiz, and the individual talent show, Chang Yingjie and Zhang Lei of Beijing drew admiration and deservedly became the "Sina Olympic Couple Envoys".) <|cite_end|> <|cite_start|> (Reference: Data-Efficient Image Recognition with Contrastive Predictive Coding: Human observers can learn to recognize new categories of images from a handful of examples, yet doing so with artificial ones remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable. We therefore revisit and improve Contrastive Predictive Coding, an unsupervised objective for learning such representations. This new implementation produces features which support state-of-the-art linear classification accuracy on the ImageNet dataset. When used as input for non-linear classification with deep neural networks, this representation allows us to use 2-5x less labels than classifiers trained directly on image pixels. Finally, this unsupervised representation substantially improves transfer learning to object detection on the PASCAL VOC dataset, surpassing fully supervised pre-trained ImageNet classifiers.) <|cite_end|> <|cite_start|> (Reference: Debiased Contrastive Learning: A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples. Without access to labels, dissimilar (negative) points are typically taken to be randomly sampled datapoints, implicitly accepting that these points may, in reality, actually have the same label. Perhaps unsurprisingly, we observe that sampling negative examples from truly different labels improves performance, in a synthetic setting where labels are available. Motivated by this observation, we develop a debiased contrastive objective that corrects for the sampling of same-label datapoints, even without knowledge of the true labels.
Empirically, the proposed objective consistently outperforms the state-of-the-art for representation learning in vision, language, and reinforcement learning benchmarks. Theoretically, we establish generalization bounds for the downstream classification task.) <|cite_end|>. Although many have shown computational tractability and usefulness of MI-based losses, others still struggle with their instability during optimization. Contrastive learning methods with MI-based losses such as <|cite_start|> (Reference: A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.) <|cite_end|> <|cite_start|> (Reference: Momentum Contrast for Unsupervised Visual Representation Learning: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.) <|cite_end|> use huge batch sizes to reduce the variance of losses. <|cite_start|> (Reference: VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning: Recent self-supervised methods for image representation learning are based on maximizing the agreement between embedding vectors from different views of the same image. A trivial solution is obtained when the encoder outputs constant vectors. This collapse problem is often avoided through implicit biases in the learning architecture, that often lack a clear justification or interpretation.
In this paper, we introduce VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with a simple regularization term on the variance of the embeddings along each dimension individually. VICReg combines the variance term with a decorrelation mechanism based on redundancy reduction and covariance regularization, and achieves results on par with the state of the art on several downstream tasks. In addition, we show that incorporating our new variance term into other methods helps stabilize the training and leads to performance improvements.) <|cite_end|> adds a regularization term to the neural network embeddings to stabilize the training. <|cite_start|> (Reference: Formal Limitations on the Measurement of Mutual Information: Measuring mutual information from finite data is difficult. Recent work has considered variational methods maximizing a lower bound. In this paper, we prove that serious statistical limitations are inherent to any method of measuring mutual information. More specifically, we show that any distribution-free high-confidence lower bound on mutual information estimated from N samples cannot be larger than O(ln N).) <|cite_end|> and <|cite_start|> (Reference: Understanding the Limitations of Variational Mutual Information Estimators: Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results on standard benchmark tasks demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.) <|cite_end|> further provide theoretical limitations of variational MI estimators, arguing that the limited batch size induces an MI estimation variance too large to handle. We argue that mitigating the variance of MI-based losses is critical for stabilizing training, where it is well known that more stable optimization of neural networks yields better predictive performance on the downstream tasks <|cite_start|> (Reference: ProMP: Proximal Meta-Policy Search: Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood. Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively. This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies. This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL. Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients. By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm endows efficient and stable meta-learning.
Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.) <|cite_end|> <|cite_start|> (Reference: Loss Functions Modulate the Optimal Bias-Variance Trade-off: Prediction problems vary in the extent to which accuracy is rewarded and inaccuracy is penalized—i.e., in their loss functions. Here, we focus on a particular feature of loss functions that controls how much large errors are penalized relative to how much precise correctness is rewarded: convexity. We show that prediction problems with convex loss functions (i.e., those in which large errors are particularly harmful) favor simpler models that tend to be biased, but exhibit low variability. Conversely, problems with concave loss functions (in which precise correctness is particularly rewarded) favor more complex models that are less biased, but exhibit higher variability. We discuss how this relationship between the bias-variance trade-off and the shape of the loss function may help explain features of human psychology, such as dual-process psychology and fast versus slow learning strategies, and inform statistical inference.) <|cite_end|> <|cite_start|> (Reference: Reducing Noise in GAN Training with Variance Reduced Extragradient: We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges. We address this issue with a novel stochastic variance-reduced extragradient (SVRE) optimization algorithm, which for a large class of games improves upon the previous convergence rates proposed in the literature. We observe empirically that SVRE performs similarly to a batch method on MNIST while being computationally cheaper, and that SVRE yields more stable GAN training on standard datasets.) <|cite_end|> <|cite_start|> (Reference: VarGrad: A Low-Variance Gradient Estimator for Variational Inference: We analyse the properties of an unbiased gradient estimator of the ELBO for variational inference, based on the score function method with leave-one-out control variates. We show that this gradient estimator can be obtained using a new loss, defined as the variance of the log-ratio between the exact posterior and the variational approximation, which we call the $\textit{log-variance loss}$. Under certain conditions, the gradient of the log-variance loss equals the gradient of the (negative) ELBO. We show theoretically that this gradient estimator, which we call $\textit{VarGrad}$ due to its connection to the log-variance loss, exhibits lower variance than the score function method in certain settings, and that the leave-one-out control variate coefficients are close to the optimal ones. We empirically demonstrate that VarGrad offers a favourable variance versus computation trade-off compared to other state-of-the-art estimators on a discrete VAE.) <|cite_end|> <|cite_start|> (Reference: Visual Navigation with Asynchronous Proximal Policy Optimization in Artificial Agents: Vanilla policy gradient methods suffer from high variance, leading to unstable policies during training, where the policy’s performance fluctuates drastically between iterations. To address this issue, we analyze the policy optimization process of the navigation method based on deep reinforcement learning (DRL) that uses asynchronous gradient descent for optimization. 
A variant navigation (asynchronous proximal policy optimization navigation, appoNav) is presented that can guarantee the policy monotonic improvement during the process of policy optimization. Our experiments are tested in DeepMind Lab, and the experimental results show that the artificial agents with appoNav perform better than the compared algorithm.) <|cite_end|> <|cite_start|> (Reference: A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations: Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification, style transfer and sentence generation, among others. The existent dominant approaches in the context of text data either rely on training an adversary (discriminator) that aims at making attribute values difficult to be inferred from the latent code or rely on minimising variational bounds of the mutual information between latent code and the value attribute. However, the available methods suffer of the impossibility to provide a fine-grained control of the degree (or force) of disentanglement. In contrast to adversarial methods, which are remarkably simple, although the adversary seems to be performing perfectly well during the training phase, after it is completed a fair amount of information about the undesired attribute still remains. This paper introduces a novel variational upper bound to the mutual information between an attribute and the latent code of an encoder. Our bound aims at controlling the approximation error via the Renyi's divergence, leading to both better disentangled representations and in particular, a precise control of the desirable degree of disentanglement than state-of-the-art methods proposed for textual data. Furthermore, it does not suffer from the degeneracy of other losses in multi-class scenarios. We show the superiority of this method on fair classification and on textual style transfer tasks. Additionally, we provide new insights illustrating various trade-offs in style transfer when attempting to learn disentangled representations and quality of the generated sentence.) <|cite_end|>. In this paper, we concentrate on identifying the cause behind the instability of MI-based losses and propose a simple yet effective regularization method that can be applied to various MI-based losses. We start by analyzing the behaviors of two MI estimators: the MI Neural Estimator (MINE) loss <|cite_start|> (Reference: Mutual Information Neural Estimation: We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.) <|cite_end|> and the Nguyen-Wainwright-Jordan (NWJ) loss <|cite_start|> (Reference: Estimating divergence functionals and the likelihood ratio by convex risk minimization: We develop and analyze $M$-estimation methods for divergence functionals and the likelihood ratios of two probability distributions.
Our method is based on a non-asymptotic variational characterization of $f$-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.) <|cite_end|>. We identify two distinctive behaviors that induce instability during training: drifting and exploding neural network outputs. Based on these observations, we design two novel dual representations of the KL-divergence called the Regularized Donsker-Varadhan representation (ReDV) and the Regularized NWJ representation (ReNWJ). We show theoretically and experimentally that adding our regularizer term suppresses both the drifting and exploding behaviors, avoiding instability during training. Finally, we design a novel benchmark that bridges the gap between variational MI estimators and real-world tasks, whereas previous works either do not directly show the MI estimation performance or evaluate only on toy problems. We reformulate both the supervised and the contrastive learning problem <|cite_start|> (Reference: A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.) <|cite_end|> <|cite_start|> (Reference: Momentum Contrast for Unsupervised Visual Representation Learning: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification.
More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.) <|cite_end|> <|cite_start|> (Reference: Supervised Contrastive Learning: Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at https://t.ly/supcon.) <|cite_end|> as MI estimation problems and show that our regularization yields improvements from both perspectives: downstream-task performance and MI estimation performance. <|paper_end|>
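To make the stabilization idea concrete, below is a minimal sketch of a MINE-style Donsker-Varadhan lower bound with an illustrative penalty on the critic's outputs. The penalty (a second-moment term on the scores under the product of marginals, which discourages both a drifting mean and exploding magnitudes) is an assumption standing in for the paper's ReDV/ReNWJ regularizers, whose exact form is not reproduced in this excerpt; the names `Critic`, `regularized_dv_loss`, and `lam` are likewise illustrative.

```python
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scalar statistics network T(x, y) for the Donsker-Varadhan bound."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def regularized_dv_loss(critic, x, y, lam=0.1):
    """Negative DV lower bound plus an illustrative output regularizer."""
    t_joint = critic(x, y)                          # scores on joint samples
    t_marg = critic(x, y[torch.randperm(len(y))])   # shuffled: marginal samples
    # DV lower bound on I(X;Y): E_p[T] - log E_q[exp(T)].
    dv = t_joint.mean() - (torch.logsumexp(t_marg, dim=0) - math.log(len(t_marg)))
    # Second-moment penalty on the marginal scores; keeps the critic's
    # outputs from drifting or blowing up during training.
    penalty = lam * t_marg.pow(2).mean()
    return -dv + penalty
```

Minimizing this loss over minibatches tightens the bound, while the penalty keeps the critic scores in a bounded range; that is the qualitative behavior the regularized dual representations are designed to enforce.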
[ "<|reference_start|> Deep Learning and the Information Bottleneck Principle: Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information theoretic limits of the DNN and obtain finite sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that both the optimal architecture, number of layers and features/connections at each layer, are related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations at the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms. <|reference_end|>", "<|reference_start|> VarGrad: A Low-Variance Gradient Estimator for Variational Inference: We analyse the properties of an unbiased gradient estimator of the ELBO for variational inference, based on the score function method with leave-one-out control variates. We show that this gradient estimator can be obtained using a new loss, defined as the variance of the log-ratio between the exact posterior and the variational approximation, which we call the $\\textit{log-variance loss}$. Under certain conditions, the gradient of the log-variance loss equals the gradient of the (negative) ELBO. We show theoretically that this gradient estimator, which we call $\\textit{VarGrad}$ due to its connection to the log-variance loss, exhibits lower variance than the score function method in certain settings, and that the leave-one-out control variate coefficients are close to the optimal ones. We empirically demonstrate that VarGrad offers a favourable variance versus computation trade-off compared to other state-of-the-art estimators on a discrete VAE. <|reference_end|>", "<|reference_start|> A Simple Framework for Contrastive Learning of Visual Representations: This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. 
When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels. <|reference_end|>", "<|reference_start|> Momentum Contrast for Unsupervised Visual Representation Learning: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks. <|reference_end|>" ]
[ 9, 23, 28, 29 ]
{"<|multi_cite_5_1|>": "ss-805363", "<|multi_cite_5_2|>": "ss-949521", "<|multi_cite_5_3|>": "arxiv-88870", "<|multi_cite_5_4|>": "arxiv-126595", "<|multi_cite_6_1|>": "arxiv-99182", "<|multi_cite_6_2|>": "arxiv-99905", "<|multi_cite_6_3|>": "ss-774931", "<|multi_cite_6_4|>": "arxiv-165446", "<|multi_cite_6_5|>": "arxiv-169741", "<|multi_cite_7_1|>": "arxiv-74262", "<|multi_cite_7_2|>": "arxiv-118029", "<|multi_cite_7_3|>": "ss-742659", "<|multi_cite_8_1|>": "ss-1852979", "<|multi_cite_8_2|>": "arxiv-205375", "<|multi_cite_8_3|>": "arxiv-275579", "<|multi_cite_1_1|>": "arxiv-248169", "<|multi_cite_1_2|>": "arxiv-234041", "<|cite_2|>": "arxiv-340197", "<|cite_3|>": "arxiv-179891", "<|cite_4|>": "arxiv-228764", "<|multi_cite_9_1|>": "arxiv-176397", "<|multi_cite_9_2|>": "ss-1297360", "<|multi_cite_9_3|>": "ss-1255871", "<|multi_cite_9_4|>": "ss-1297361", "<|multi_cite_9_5|>": "ss-1297362", "<|multi_cite_9_6|>": "arxiv-339229", "<|cite_10|>": "ss-774931", "<|cite_11|>": "arxiv-4756", "<|multi_cite_12_1|>": "arxiv-248169", "<|multi_cite_12_2|>": "arxiv-234041", "<|multi_cite_12_3|>": "arxiv-261145"}
1807.09929
<|paper_start|> Title: Jupyter as Common Technology Platform for Interactive HPC Services Abstract: Jupyter as Common Technology Platform for Interactive HPC Services: The Minnesota Supercomputing Institute has implemented Jupyterhub and the Jupyter notebook server as a general-purpose point-of-entry to interactive high performance computing services. This mode of operation runs counter to traditional job-oriented HPC operations, but offers significant advantages for ease-of-use, data exploration, prototyping, and workflow development. From the user perspective, these features bring the computing cluster nearer to parity with emerging cloud computing options. On the other hand, retreating from fully-scheduled, job-based resource allocation poses challenges for resource availability and utilization efficiency, and can involve tools and technologies outside the typical core competencies of a supercomputing center's operations staff. MSI has attempted to mitigate these challenges by adopting Jupyter as a common technology platform for interactive services, capable of providing command-line, graphical, and workflow-oriented access to HPC resources while still integrating with job scheduling systems and using existing compute resources. This paper will describe the mechanisms that MSI has put in place, advantages for research and instructional uses, and lessons learned. Introduction Users of academic research computing services display a wide range of familiarity with high-performance computing technology, but the large majority are focused on scientific or learning outcomes rather than computational practice. As a result, a significant amount of software development effort associated with a high-performance computing center focuses on interfaces--particularly web interfaces--enhancing the usability of computing systems and accessibility of technical information. Different challenges result depending on the type and location of these software development activities. They may be undertaken by users, acting alone or in groups, or by dedicated center staff, who may be dedicated software developers or operational staff engaging in development activities as a secondary task. In the case of user-driven development, center staff is challenged to either support diverse software packages or support users attempting to deploy such packages with the limited access permitted to unprivileged users. When development is staff-driven, the option exists to build around common base technologies and reduce the cognitive load on both development and support staff. Moreover, if the staff-deployed usability technologies are sufficiently flexible, they may displace some demand for user-selected software by substituting domain-general platforms for domain-specific gateways <|cite_start|> (Reference: Sandstone HPC: A Domain-General Gateway for New HPC Users: The complexity of high-performance computing (HPC) resources poses many challenges to new users. A number of science gateways have been developed to increase the productivity of novice users by hiding the underlying infrastructure, however these solutions tend not to teach HPC skills that transfer easily outside of the gateway. In this paper we introduce a domain-general gateway, Sandstone HPC, that represents the HPC environment more naturally to novice users by abstracting the command-line interface and providing contextual help. 
We assess the degree to which Sandstone HPC improves upon the usability of the command-line interface by analyzing the results of a usability study conducted on both environments. We will also detail how the architecture promotes long-term sustainability and a community-development model.) <|cite_end|>. The Minnesota Supercomputing Institute (MSI) at the University of Minnesota has adopted a goal of supporting Interactive HPC as a first-class service. The availability of interactive services can provide significant benefits for data visualization and exploration <|cite_start|> (Reference: Vizic: A Jupyter-based interactive visualization tool for astronomical catalogs: ) <|cite_end|>, workflow prototyping <|cite_start|> (Reference: Sandstone HPC: A Domain-General Gateway for New HPC Users: The complexity of high-performance computing (HPC) resources poses many challenges to new users. A number of science gateways have been developed to increase the productivity of novice users by hiding the underlying infrastructure, however these solutions tend not to teach HPC skills that transfer easily outside of the gateway. In this paper we introduce a domain-general gateway, Sandstone HPC, that represents the HPC environment more naturally to novice users by abstracting the command-line interface and providing contextual help. We assess the degree to which Sandstone HPC improves upon the usability of the command-line interface by analyzing the results of a usability study conducted on both environments. We will also detail how the architecture promotes long-term sustainability and a community-development model.) <|cite_end|>, and training <|cite_start|> (Reference: Incorporating interactive compute environments into web-based training materials using the Cornell job runner service: Online training materials, such as the Cornell Virtual WorkshopSM have many advantages, the foremost being that they are always available as a 24x7 option for learners who want to study a topic on demand and at their own pace. It can be challenging to create online materials that are engaging and provide a realistic learning environment. Traditionally, training materials and compute environments have been separate entities. Even in the HPC environment, students learn from online materials in one window, then log into a new machine or session to try out new skills or concepts. Accessing this second environment can impose obstacles such as gaining access to the appropriate computer and learning to navigate a computer-specific login environment and file system. In an effort to circumvent these obstacles, the Cornell University Center for Advanced Computing (CAC) developed the Cornell Job Runner ServiceSM (CJRS), along with a general-purpose toolkit for using the CJRS to embed a computing environment directly into web pages, backed by real or virtual compute resources. This implementation provides the learner immediate access to a compute environment that looks and feels like a typical HPC login node or batch job, allowing incorporation of on-demand learning experiences interspersed with general training content. With CJRS, students can try out commands and run jobs without obtaining an account or leaving the learning environment to sign in to a remote machine. 
This paper explores the use of the CJRS toolkit to provide three different interactive modes for learners: a Linux console configured as a general login node, a form element that launches a pre-defined SLURM job, and a guided session which allows the user to walk through pre-planned steps of compiling, fixing, and running MPI code.) <|cite_end|>. This mode of operation is strongly desired by currently-existing users, who are routinely willing to sacrifice performance (by computing on local resources) or cost (by purchasing access to external cloud computing resources) to achieve flexibility not offered by traditional HPC. At present MSI supports several interactive modes of operation, including traditional command line interface (CLI) tools, graphical remote desktop sessions, and web-based services. These features bring MSI service offerings closer to parity with both local and emerging cloud computing options. In addition MSI employs a dedicated core of staff software developers, primarily supported by grants and contracts from the research community, focused on application development that strongly leverages the availability of high-performance computing resources. In practice, nearly all of the development projects supported in this way include a web application component supporting usability or accessibility. In this paper, we describe efforts at MSI to provide both interactive HPC services, and robust application development support of usability and accessibility technologies, using components of the Jupyter software ecosystem as a common technology platform. MSI has implemented a public JupyterHub service that permits users to seamlessly run the interactive Jupyter Notebook web application using normal batch-scheduled clustered computing resources. By exploiting existing extension points in Jupyter and JupyterHub, MSI application developers have used these components to deploy project-specific customized web portals providing access to particular workflows and data for research and training purposes. These efforts benefit greatly from the community-supported open source nature of the tools in the Jupyter software ecosystem. <|paper_end|>
[ "<|reference_start|> Sandstone HPC: A Domain-General Gateway for New HPC Users: The complexity of high-performance computing (HPC) resources poses many challenges to new users. A number of science gateways have been developed to increase the productivity of novice users by hiding the underlying infrastructure, however these solutions tend not to teach HPC skills that transfer easily outside of the gateway. In this paper we introduce a domain-general gateway, Sandstone HPC, that represents the HPC environment more naturally to novice users by abstracting the command-line interface and providing contextual help. We assess the degree to which Sandstone HPC improves upon the usability of the command-line interface by analyzing the results of a usability study conducted on both environments. We will also detail how the architecture promotes long-term sustainability and a community-development model. <|reference_end|>", "<|reference_start|> Vizic: A Jupyter-based interactive visualization tool for astronomical catalogs: <|reference_end|>", "<|reference_start|> Sandstone HPC: A Domain-General Gateway for New HPC Users: The complexity of high-performance computing (HPC) resources poses many challenges to new users. A number of science gateways have been developed to increase the productivity of novice users by hiding the underlying infrastructure, however these solutions tend not to teach HPC skills that transfer easily outside of the gateway. In this paper we introduce a domain-general gateway, Sandstone HPC, that represents the HPC environment more naturally to novice users by abstracting the command-line interface and providing contextual help. We assess the degree to which Sandstone HPC improves upon the usability of the command-line interface by analyzing the results of a usability study conducted on both environments. We will also detail how the architecture promotes long-term sustainability and a community-development model. <|reference_end|>", "<|reference_start|> Incorporating interactive compute environments into web-based training materials using the Cornell job runner service: Online training materials, such as the Cornell Virtual WorkshopSM have many advantages, the foremost being that they are always available as a 24x7 option for learners who want to study a topic on demand and at their own pace. It can be challenging to create online materials that are engaging and provide a realistic learning environment. Traditionally, training materials and compute environments have been separate entities. Even in the HPC environment, students learn from online materials in one window, then log into a new machine or session to try out new skills or concepts. Accessing this second environment can impose obstacles such as gaining access to the appropriate computer and learning to navigate a computer-specific login environment and file system. In an effort to circumvent these obstacles, the Cornell University Center for Advanced Computing (CAC) developed the Cornell Job Runner ServiceSM (CJRS), along with a general-purpose toolkit for using the CJRS to embed a computing environment directly into web pages, backed by real or virtual compute resources. This implementation provides the learner immediate access to a compute environment that looks and feels like a typical HPC login node or batch job, allowing incorporation of on-demand learning experiences interspersed with general training content. 
With CJRS, students can try out commands and run jobs without obtaining an account or leaving the learning environment to sign in to a remote machine. This paper explores the use of the CJRS toolkit to provide three different interactive modes for learners: a Linux console configured as a general login node, a form element that launches a pre-defined SLURM job, and a guided session which allows the user to walk through pre-planned steps of compiling, fixing, and running MPI code. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_2|>": "ss-1378024", "<|cite_3|>": "ss-1378025", "<|cite_4|>": "ss-1378024", "<|cite_5|>": "ss-1378026"}
2208.11744
<|paper_start|> Title: Enforcing Delayed-Impact Fairness Guarantees Abstract: Enforcing Delayed-Impact Fairness Guarantees: Recent research has shown that seemingly fair machine learning models, when used to inform decisions that have an impact on peoples' lives or well-being (e.g., applications involving education, employment, and lending), can inadvertently increase social inequality in the long term. This is because prior fairness-aware algorithms only consider static fairness constraints, such as equal opportunity or demographic parity. However, enforcing constraints of this type may result in models that have negative long-term impact on disadvantaged individuals and communities. We introduce ELF (Enforcing Long-term Fairness), the first classification algorithm that provides high-confidence fairness guarantees in terms of long-term, or delayed, impact. We prove that the probability that ELF returns an unfair solution is less than a user-specified tolerance and that (under mild assumptions), given sufficient training data, ELF is able to find and return a fair solution if one exists. We show experimentally that our algorithm can successfully mitigate long-term unfairness. Introduction \label{sec: introduction} Using machine learning (ML) for high-stakes applications, such as lending, hiring, and criminal sentencing, may potentially harm historically disadvantaged communities <|cite_start|> (Reference: Ethnic and gender discrimination in the rental housing market: Evidence from a meta-analysis of correspondence tests, 2006–2017: ) <|cite_end|> <|cite_start|> (Reference: Algorithmic Advertising Discrimination: The ability of social media companies to precisely target advertisements to individual users based on those users' characteristics is changing how job opportunities are advertised. Companies like Facebook use machine learning to place their ads, and machine learning systems present risks of discrimination, which current legal doctrines are not designed to deal with. This Note will explain why it is difficult to ensure such systems do not learn discriminatory functions and why it is hard to discern what they have learned as long as they appear to be performing well on their assigned task. This Note then shows how litigation might adapt to these new systems to provide a remedy to individual plaintiffs but explains why deterrence is ill-suited in this context to prevent this discrimination from occurring in the first place. Preventing machine learning systems from learning to discriminate requires training those systems on broad, representative datasets that include protected characteristics—data that the corporations training these systems may not have. The Note proposes a proactive solution, which would involve a third party safeguarding a rich, large, nationally representative dataset of real people's information. This third party could allow corporations like Facebook to train their machine learning systems on a representative dataset, while keeping the private data themselves out of those corporations' hands.
) <|cite_end|> <|cite_start|> (Reference: Consumer-Lending Discrimination in the Fintech Era: Abstract U.S. fair-lending law prohibits lenders from making credit determinations that disparately affect minority borrowers if those determinations are based on characteristics unrelated to creditworthiness. Using an identification under this rule, we show risk-equivalent Latinx/Black borrowers pay significantly higher interest rates on GSE-securitized and FHA-insured loans, particularly in high-minority-share neighborhoods. We estimate these rate differences cost minority borrowers over $450 million yearly. FinTech lenders’ rate disparities were similar to those of non-Fintech lenders for GSE mortgages, but lower for FHA mortgages issued in 2009–2015 and for FHA refi mortgages issued in 2018–2019.) <|cite_end|>. For example, software meant to guide lending decisions has been shown to exhibit racial bias <|cite_start|> (Reference: Consumer-Lending Discrimination in the Fintech Era: Abstract U.S.
fair-lending law prohibits lenders from making credit determinations that disparately affect minority borrowers if those determinations are based on characteristics unrelated to creditworthiness. Using an identification under this rule, we show risk-equivalent Latinx/Black borrowers pay significantly higher interest rates on GSE-securitized and FHA-insured loans, particularly in high-minority-share neighborhoods. We estimate these rate differences cost minority borrowers over $450 million yearly. FinTech lenders’ rate disparities were similar to those of non-Fintech lenders for GSE mortgages, but lower for FHA mortgages issued in 2009–2015 and for FHA refi mortgages issued in 2018–2019.) <|cite_end|>. Extensive research has been devoted to algorithmic approaches that promote fairness and ameliorate concerns of bias and discrimination for socially impactful applications. The bulk of this research has focused on the classification setting, in which an ML model must make predictions given information about a person or community. Most fairness definitions studied in the classification setting are \emph{static} <|cite_start|> (Reference: Delayed Impact of Fair Machine Learning: Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.) <|cite_end|> in that they do not consider how a classifier's predictions impact the long-term well-being of a community. In their seminal paper, <|cite_start|> (Reference: Delayed Impact of Fair Machine Learning: Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. 
Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.) <|cite_end|>~ show that classifiers' predictions that appear fair with respect to static fairness criteria can nevertheless negatively impact the long-term well-being of the communities those criteria aim to protect. Importantly, they assume that the precise analytical relationship between a classifier's prediction and its long-term impact, or \emph{delayed impact} (DI), is known. This is also the case in other fairness-related work <|cite_start|> (Reference: Towards Long-term Fairness in Recommendation: As Recommender Systems (RS) influence more and more people in their daily life, the issue of fairness in recommendation is becoming more and more important. Most of the prior approaches to fairness-aware recommendation have been situated in a static or one-shot setting, where the protected groups of items are fixed, and the model provides a one-time fairness solution based on fairness-constrained optimization. This fails to consider the dynamic nature of the recommender systems, where attributes such as item popularity may change over time due to the recommendation policy and user engagement. For example, products that were once popular may become no longer popular, and vice versa. As a result, the system that aims to maintain long-term fairness on the item exposure in different popularity groups must accommodate this change in a timely fashion. Novel to this work, we explore the problem of long-term fairness in recommendation and accomplish the problem through dynamic fairness learning. We focus on the fairness of exposure of items in different groups, while the division of the groups is based on item popularity, which dynamically changes over time in the recommendation process. We tackle this problem by proposing a fairness-constrained reinforcement learning algorithm for recommendation, which models the recommendation problem as a Constrained Markov Decision Process (CMDP), so that the model can dynamically adjust its recommendation policy to make sure the fairness requirement is always satisfied when the environment changes. Experiments on several real-world datasets verify our framework's superiority in terms of recommendation performance, short-term fairness, and long-term fairness.) <|cite_end|> <|cite_start|> (Reference: Achieving Long-Term Fairness in Sequential Decision Making: In this paper, we propose a framework for achieving long-term fair sequential decision making. By conducting both the hard and soft interventions, we propose to take path-specific effects on the time-lagged causal graph as a quantitative tool for measuring long-term fairness. The problem of fair sequential decision making is then formulated as a constrained optimization problem with the utility as the objective and the long-term and short-term fairness as constraints. We show that such an optimization problem can be converted to a performative risk optimization. Finally, repeated risk minimization (RRM) is used for model training, and the convergence of RRM is theoretically analyzed. The empirical evaluation shows the effectiveness of the proposed algorithm on synthetic and semi-synthetic temporal datasets.) <|cite_end|>.
\emph{Designing classification algorithms that mitigate negative delayed impact when this relationship is not known has remained an open problem.} In this work, we introduce \algname (Enforcing Long-term Fairness), the first classification algorithm that solves this open problem. \algname does not require access to an analytic model of the delayed impact of a classifier's predictions. Instead, it works under the less strict assumption that the algorithm has access to historical data containing observations of the delayed impact that results from the predictions of an existing classifier. We illustrate this setting below with an example. \noindent\textbf{Loan repayment example.} As a running example, consider a bank that wishes to increase its profit by maximizing successful loan repayments. The bank's decisions are informed by a classifier that predicts repayment success. These decisions may affect the long-term financial well-being of loan applicants, such as their savings rate or debt-to-income ratio two years after a lending decision is made. Taking this delayed impact into account is important: when a subset of the population is disadvantaged, the bank may want (or be required by law) to maximize profit subject to a fairness constraint that considers the disadvantaged group's long-term well-being. Unfortunately, existing methods that address this problem can only be used if analytical models of how repayment predictions affect long-term financial well-being are available. Constructing such models is challenging: many complex factors influence how different demographic groups in a \mbox{given community are affected by financial decisions.} \algname, by contrast, can ensure delayed-impact fairness with high confidence as long as the bank can collect data about the long-term financial well-being of loan applicants, following decisions based on an existing classifier. As an example, the bank might access information about the savings rate of an applicant two years after a lending decision is made. Here, delayed impact could be defined as the real-valued savings rate. However, we emphasize that our approach works with any metric of delayed impact that can be observed and quantified, including more holistic metrics than savings rate. As one motivating use case, this work provides the algorithmic tools to responsibly apply ML for this task.\footnote{Notice that if used by adversaries, our method could be used to enforce bias instead of minimizing it.} \noindent \textbf{Contributions.} We present \algname, the first method capable of enforcing DI fairness when the analytical relationship between predictions and DI is not known \emph{a priori}. To accomplish this, we simultaneously formulate the fair classification problem as both a classification and a reinforcement learning problem---classification for optimizing the primary objective (a measure of classification loss) and reinforcement learning when considering DI. We prove that \textbf{1)} the probability that \algname returns a model that is unfair (in terms of DI) is at most $\delta$, where $\delta$ is a hyperparameter that can be set appropriately for the application at hand; and \textbf{2)} given sufficient training data, \algname is able to find and return a solution that is fair if one exists. We provide an empirical analysis of \algname's performance while varying both the amount of training data and the influence that a classifier's predictions have on DI. 
\noindent \textbf{Limitations and future work.} \algname's high probability fairness guarantees only hold if the world has not changed between the time training data was collected and the time the trained classifier is deployed. While this (stationarity) assumption is common in ML, it may be unnatural in this setting since gathering data that includes measures of long-term impact requires that a correspondingly long duration of time has passed, and so nonstationarity of the data-generating distribution could compound over time to make this assumption unreasonable. For example, the way a group of people was affected by model predictions a decade ago may not be reflective of the present. While providing guarantees when nonstationarity occurs is important future work, in this paper we focus on the important first step of providing the first classification algorithm that provides DI fairness guarantees in the stationary setting. Additionally, notice that some applications may require long-term \emph{and} static fairness to be simultaneously satisfied. Appendix~\ref{app: extensions} shows how \algname can be used in conjunction with prior methods <|cite_start|> (Reference: Offline Contextual Bandits with High Probability Fairness Guarantees: We present RobinHood, an offline contextual bandit algorithm designed to satisfy a broad family of fairness constraints. Our algorithm accepts multiple fairness definitions and allows users to construct their own unique fairness definitions for the problem at hand. We provide a theoretical analysis of RobinHood, which includes a proof that it will not return an unfair solution with probability greater than a user-specified threshold. We validate our algorithm on three applications: a tutoring system in which we conduct a user study and consider multiple unique fairness definitions; a loan approval setting (using the Statlog German credit data set) in which well-known fairness definitions are applied; and criminal recidivism (using data released by ProPublica). In each setting, our algorithm is able to produce fair policies that achieve performance competitive with other offline and online contextual bandit algorithms.) <|cite_end|> to enforce a broader set of fairness definitions that includes common static fairness definitions. We leave an empirical analysis of these settings to future work. \looseness=-1 <|paper_end|>
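To give the flavor of the high-confidence guarantee described above, the following is a minimal sketch of a Seldonian-style safety test for a delayed-impact constraint. Every concrete choice in it is an assumption made for illustration: the constraint $g(\theta) = \tau - \mathbb{E}[\text{DI}] \leq 0$ (expected delayed impact, e.g., savings rate, must exceed a threshold $\tau$), the one-sided Student's t bound, and all function and variable names. ELF's actual test statistic, importance-weighting machinery, and data partitioning are not reproduced in this excerpt.

```python
# A minimal sketch of a Seldonian-style high-confidence safety test.
# Assumed constraint: g(theta) = tau - E[DI(theta)] <= 0, i.e., the expected
# delayed impact of the candidate classifier must exceed a threshold tau.
import numpy as np
from scipy import stats

def passes_safety_test(di_samples, tau, delta):
    """Return True only if we are (1 - delta)-confident that g(theta) <= 0."""
    g = tau - np.asarray(di_samples)   # per-sample constraint values
    n = len(g)
    mean = g.mean()
    sem = g.std(ddof=1) / np.sqrt(n)
    # One-sided Student's t upper confidence bound on E[g].
    upper = mean + sem * stats.t.ppf(1.0 - delta, df=n - 1)
    return bool(upper <= 0.0)

# Hypothetical usage: delayed-impact observations under a candidate model,
# threshold tau = 0.3, tolerance delta = 0.05.
rng = np.random.default_rng(0)
di = rng.normal(loc=0.35, scale=0.05, size=500)
print(passes_safety_test(di, tau=0.3, delta=0.05))
```

If the test fails, a Seldonian-style algorithm returns "No Solution Found" rather than deploying a model whose fairness cannot be certified, which is how the probability of returning an unfair solution is kept below $\delta$.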
[ "<|reference_start|> Delayed Impact of Fair Machine Learning: Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs. <|reference_end|>", "<|reference_start|> Delayed Impact of Fair Machine Learning: Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs. <|reference_end|>", "<|reference_start|> Towards Long-term Fairness in Recommendation: As Recommender Systems (RS) influence more and more people in their daily life, the issue of fairness in recommendation is becoming more and more important. Most of the prior approaches to fairness-aware recommendation have been situated in a static or one-shot setting, where the protected groups of items are fixed, and the model provides a one-time fairness solution based on fairness-constrained optimization. This fails to consider the dynamic nature of the recommender systems, where attributes such as item popularity may change over time due to the recommendation policy and user engagement. For example, products that were once popular may become no longer popular, and vice versa. As a result, the system that aims to maintain long-term fairness on the item exposure in different popularity groups must accommodate this change in a timely fashion. Novel to this work, we explore the problem of long-term fairness in recommendation and accomplish the problem through dynamic fairness learning. 
We focus on the fairness of exposure of items in different groups, while the division of the groups is based on item popularity, which dynamically changes over time in the recommendation process. We tackle this problem by proposing a fairness-constrained reinforcement learning algorithm for recommendation, which models the recommendation problem as a Constrained Markov Decision Process (CMDP), so that the model can dynamically adjust its recommendation policy to make sure the fairness requirement is always satisfied when the environment changes. Experiments on several real-world datasets verify our framework's superiority in terms of recommendation performance, short-term fairness, and long-term fairness. <|reference_end|>", "<|reference_start|> Achieving Long-Term Fairness in Sequential Decision Making: In this paper, we propose a framework for achieving long-term fair sequential decision making. By conducting both the hard and soft interventions, we propose to take path-specific effects on the time-lagged causal graph as a quantitative tool for measuring long-term fairness. The problem of fair sequential decision making is then formulated as a constrained optimization problem with the utility as the objective and the long-term and short-term fairness as constraints. We show that such an optimization problem can be converted to a performative risk optimization. Finally, repeated risk minimization (RRM) is used for model training, and the convergence of RRM is theoretically analyzed. The empirical evaluation shows the effectiveness of the proposed algorithm on synthetic and semi-synthetic temporal datasets. <|reference_end|>" ]
[ 4, 5, 6, 7 ]
{"<|multi_cite_1_1|>": "ss-1776219", "<|multi_cite_1_2|>": "ss-1776220", "<|multi_cite_1_3|>": "ss-955478", "<|cite_2|>": "ss-955478", "<|cite_3|>": "arxiv-151311", "<|cite_6|>": "arxiv-151311", "<|multi_cite_4_2|>": "arxiv-314388", "<|multi_cite_4_3|>": "arxiv-410860", "<|cite_5|>": "ss-1263556"}
2001.08162
<|paper_start|> Title: Cross Layer Design for Maximizing Network Utility in Multiple Gateways Wireless Mesh Networks Abstract: Cross Layer Design for Maximizing Network Utility in Multiple Gateways Wireless Mesh Networks: We investigate the problem of network utility maximization in multiple gateways wireless mesh networks by considering Signal to Interference plus Noise Ratio (SINR) as the interference model. The aim is a cross-layer design that considers joint rate control, traffic splitting, routing, scheduling, link rate allocation and power control to formulate the network utility maximization problem. As this problem is computationally complex, we propose the Joint dynamic Gateway selection, link Rate allocation and Power control (JGRP) algorithm based on the differential backlog as a sub-optimal solution. This algorithm first constructs the initial network topology, and then in each time slot, simultaneously determines the generation rate and destination gateway of each traffic flow. The other main task of this algorithm is joint routing, scheduling, link rate allocation and node power allocation in each time slot. Moreover, for improving fairness, we propose some new parameters to replace the differential backlog in the JGRP algorithm. Simulation results show that using the proposed parameters in the JGRP algorithm improves fairness from the throughput and delay points of view. Introduction We study the network utility maximization problem by jointly considering rate control, traffic splitting among gateways, routing, scheduling, link rate allocation and power control in multiple gateways wireless mesh networks. Over the past two decades, the mesh structure has been considered an appropriate solution to increase the coverage area and capacity of wireless networks <|cite_start|> (Reference: A Survey of Network Design Problems and Joint Design Approaches in Wireless Mesh Networks: Over the last decade, the paradigm of Wireless Mesh Networks (WMNs) has matured to a reasonably commonly understood one, and there has been extensive research on various areas related to WMNs such as design, deployment, protocols, performance, etc. The quantity of research being conducted in the area of wireless mesh design has dramatically increased in the past few years, due to increasing interest in this paradigm as its potential for the "last few miles", and the possibility of significant wireless services in metropolitan area networks. This recent work has focused increasingly on joint design problems, together with studies in designing specific aspects of the WMN such as routing, power control etc. in isolation. While excellent surveys and tutorials pertaining to WMNs exist in literature, the explosive growth of research in the area of specific design issues, and especially joint design, has left them behind. Our objective in this paper is to identify the fundamental WMN design problems of interference modeling, power control, topology control, link scheduling, and routing, and provide brief overviews, together with a survey of the recent research on these topics, with special stress on joint design methods. We believe this paper will fulfill an outstanding need in informing the interested student and researcher in getting familiar with this abundant recent research area, and starting research.) <|cite_end|>. Important features of wireless mesh networks include low-cost deployment, distributed communication and robustness.
However, the performance of these networks can be degraded, mainly due to poorly designed network protocols <|cite_start|> (Reference: Wireless mesh networks design—A survey: With the advances in wireless technologies and the explosive growth of the Internet, wireless networks, especially Wireless Mesh Networks (WMNs), are going through an important evolution. Designing efficient WMNs has become a major task for network operators. Over the last few years, a plethora of studies has been carried out to improve the efficiency of wireless networks. However, only a few studies are related to WMNs design and are mainly concerned with protocol design and routing metrics optimization. In this paper, we survey different aspects of WMNs design and examine various methods that have been proposed either to improve the performance of an already deployed network or to improve its performance by a careful planning of its deployment.) <|cite_end|> <|cite_start|> (Reference: Wireless mesh networks: a survey: In this paper, a survey on architectures, applications and design issues of wireless mesh networks (WMNs) is conducted. Wireless mesh network is a type of distributed, self-organizing, self-configuring and self-healing network. When access points in wireless local area networks (WLANs) start to communicate and get networked in an ad hoc fashion to relay packets for their neighbors, a wireless mesh network comes into being. Therefore, WLANs and ad hoc networks play significant roles during the development of wireless mesh networks. There are three types of architectures for wireless mesh networks, which are backbone WMN, client WMN and hybrid WMN, respectively. Among them, backbone WMN is the most common type and hybrid WMN is the most applicable type. For a better understanding of characteristics of wireless mesh networks, in this paper, several common features and main differences between wireless mesh networks and ad hoc networks are discussed. Moreover, some open issues existing in wireless mesh networks are investigated to provide some directions for further research. Keywords - Wireless mesh networks, Backbone WMN, Cross-layer design) <|cite_end|>. In recent years, various approaches have been proposed to improve the performance of wireless mesh networks; among them is cross-layer design, which can target various aims such as improving throughput, delay and other network parameters. Another approach is using multiple gateways in these networks. In the following, we briefly review some related works according to these approaches. First, we review some research on cross-layer design in wireless mesh networks. In <|cite_start|> (Reference: Fault-tolerant interference-aware topology control in multi-radio multi-channel wireless mesh networks: ) <|cite_end|>, the authors investigated joint routing, channel assignment, power control and rate adaptation to improve throughput, load balancing and fault tolerance in multi-radio multi-channel wireless mesh networks. As this problem is NP-hard, they proposed a heuristic algorithm with two levels. In the first level, a $K$-connectivity network topology is created using channel assignment and routing. In the second level, power control, rate adaptation and scheduling are jointly considered for maximizing the throughput while the $K$-connectivity network topology is preserved.
In order to maximize the capacity of multi-radio multi-channel wireless mesh networks, the authors in <|cite_start|> (Reference: Joint link rate allocation, routing and channel assignment in multi-rate multi-channel wireless networks: ) <|cite_end|> considered link rate allocation, routing and channel assignment. In <|cite_start|> (Reference: Joint routing and scheduling in multi-Tx/Rx wireless mesh networks with random demands: ) <|cite_end|>, scheduling and routing are designed jointly with the aim of minimizing the superframe length to support any random demand in multi-Tx/Rx wireless mesh networks. The authors of <|cite_start|> (Reference: Novel joint routing and scheduling algorithms for minimizing end-to-end delays in multi Tx-Rx wireless mesh networks: ) <|cite_end|> considered joint scheduling and routing in multi-Tx/Rx wireless mesh networks for minimizing end-to-end delays and the superframe length. In <|cite_start|> (Reference: Joint topology control and routing for multi-radio multi-channel WMNs under SINR model using bio-inspired techniques: ) <|cite_end|>, joint optimization of channel assignment, power control and routing is investigated under the Signal to Interference plus Noise Ratio (SINR) model with the aim of increasing the network capacity. As this joint optimization problem is NP-hard, genetic and particle swarm optimization algorithms are employed in <|cite_start|> (Reference: Joint topology control and routing for multi-radio multi-channel WMNs under SINR model using bio-inspired techniques: ) <|cite_end|> to optimize channel assignment and power control; then, according to the optimal values obtained by these two algorithms, optimal routing is achieved by solving an LP problem. In <|cite_start|> (Reference: Joint routing, scheduling and power control for large interference wireless networks: We consider the problem of joint routing, scheduling and power control in multi-hop wireless networks. We use a linear relation between link capacity and signal to interference noise ratio in our formulation. In a previous work, using a duality approach, the optimal link scheduling and power control that minimizes the total average transmission power is found. We formulate this problem as a linear programming problem with exponential number of constraints. To cope with the exponential number of constraints, we propose an iterative algorithm based on the cutting plane method. The separation oracle for the cutting plane algorithm turns out to be an element-wise concave optimization problem that can be effectively solved using branch and bound algorithm. We extend the same method to find the optimal routing scheduling and power control. Simulation results show that this methodology is more efficient and scalable compared to the previously proposed algorithm.) <|cite_end|>, the authors designed a joint routing and power control mechanism for reducing the power consumption in large wireless mesh networks. The authors of <|cite_start|> (Reference: Joint multicast routing and channel assignment for multi-radio multi-channel wireless mesh networks with hybrid traffic: ) <|cite_end|> considered joint routing and channel assignment to perform routing for multiple multicast sessions and showed that this design increases the network throughput.
In <|cite_start|> (Reference: Joint Design of Routing and Power Control Over Unreliable Links in Multi-Hop Wireless Networks With Energy-Delay Tradeoff: Energy consumption and delay in end-to-end transmission, two significant cost metrics for guiding the design of routing protocols, are often relevant and even conflicting with each other. Therefore, it naturally becomes important to identify the tradeoff between them. In this paper, we investigate the joint design of routing and power control over unreliable communication links in multi-hop wireless networks with energy-delay tradeoff. Two popular retransmission schemes, hop-by-hop (HBH) scheme and end-to-end (E2E) scheme, are adopted to achieve the reliable transmissions over unreliable links. We model the process of packet transmission in HBH or E2E scheme as a random walk and then calculate the expected energy consumption and the expected delay needed to forward a packet. An expected cost function is defined to capture the tradeoff between energy consumption and delay. Due to the correlation between cost function and transmit power assigned on each link, power control technique is necessary to be integrated into the minimum expected cost routing (MECR) for each retransmission scheme. We prove that the optimal power assignment for each link in a candidate path, which minimizes the expected cost for packet transmission, uniquely exists. Then, the MECR algorithms are proposed for both HBH scheme and E2E scheme to find the optimal routing paths. Simulations illustrate that our proposed protocols not only attain the tradeoff between energy consumption and delay, but also outperform the existing protocols in terms of energy-efficiency and delay.) <|cite_end|>, joint routing and power control are considered to make a trade-off between delay and energy consumption in wireless mesh networks. In <|cite_start|> (Reference: Cooperative channel allocation and scheduling in multi-interface wireless mesh networks: ) <|cite_end|>, the authors considered joint scheduling and channel assignment to increase the throughput and improve the load balancing of multi-radio multi-channel wireless mesh networks. The authors in <|cite_start|> (Reference: Joint rate control and scheduling in multihop wireless networks: We study the joint problem of allocating data rates and finding a stabilizing scheduling policy in a multihop wireless network. We propose a dual optimization based approach through which the rate control problem and the scheduling problem can be decomposed. We demonstrate via both analytical and numerical results that the proposed mechanism can fully utilize the capacity of the network, maintain fairness, and improve the quality of service to the users.) <|cite_end|> <|cite_start|> (Reference: Fairness and optimal stochastic control for heterogeneous networks: We consider optimal control for general networks with both wireless and wireline components and time varying channels. A dynamic strategy is developed to support all traffic whenever possible, and to make optimally fair decisions about which data to serve when inputs exceed network capacity. The strategy is decoupled into separate algorithms for flow control, routing, and resource allocation, and allows each user to make decisions independent of the actions of others. The combined strategy is shown to yield data rates that are arbitrarily close to the optimal operating point achieved when all network controllers are coordinated and have perfect knowledge of future events.
The cost of approaching this fair operating point is an end-to-end delay increase for data that is served by the network.) <|cite_end|> have proposed joint rate control and scheduling for increasing the network utility. In <|cite_start|> (Reference: Performance-aware cross-layer design in wireless multihop networks via a weighted backpressure approach: In this paper, we study, analyze, and evaluate a performance-aware cross-layer design approach for wireless multihop networks. Through network utility maximization (NUM) and weighted network graph modeling, a cross-layer algorithm for performing jointly routing, scheduling, and congestion control is introduced. The performance awareness is achieved by both the appropriate definition of the link weights for the corresponding application's requirements and the introduction of a weighted backpressure (BP) routing/scheduling. Contrary to the conventional BP, the proposed algorithm scales the congestion gradients with the appropriately defined per-pair (link, destination) weights. We analytically prove the queue stability achieved by the proposed cross-layer scheme, while its convergence to a close neighborhood of the optimal source rates' values is proven via an ε-subgradient approach. The issue of the weights' assignment based on various quality-of-service (QoS) metrics is also investigated. Through modeling and simulation, we demonstrate the performance improvements that can be achieved by the proposed approach-when compared against existing methodologies in the literature-for two different examples with diverse application requirements, emphasizing respectively on delay and trustworthiness.) <|cite_end|>, for improving the quality of service parameters such as reliability and end-to-end delay, the authors proposed a joint scheduling and routing algorithm. In <|cite_start|> (Reference: Channel assignment, link scheduling, routing, and rate control for multi-channel wireless mesh networks with directional antennas: The wireless mesh network (WMN) has attracted significant interests as a broadband wireless network to provide ubiquitous wireless access for broadband services. Especially with incorporating multiple orthogonal channels and multiple directional antennas into the WMN, each node can communicate with its neighbor nodes simultaneously without interference between them. However, as we allow more freedom, we need a more sophisticated algorithm to fully utilize it and developing such an algorithm is not easy in general. In this paper, we study a joint channel assignment, link scheduling, routing, and rate control problem for the WMN with multiple orthogonal channels and multiple directional antennas. This problem is inherently hard to solve, since the problem is formulated as a mixed integer nonlinear problem (MINLP). However, despite of its inherent difficulty, we develop an algorithm to solve the problem by using the generalized Benders decomposition approach [2]. The simulation results show the proposed algorithm provides the optimal solution to maximize the network utility, which is defined as the sum of utilities of all sessions.) <|cite_end|>, the authors considered joint rate control, routing, channel assignment and scheduling to maximize the network utility of the multi-radio multi-channel wireless mesh networks with directional antennas. 
As the problem considered in <|cite_start|> (Reference: Channel assignment, link scheduling, routing, and rate control for multi-channel wireless mesh networks with directional antennas: The wireless mesh network (WMN) has attracted significant interests as a broadband wireless network to provide ubiquitous wireless access for broadband services. Especially with incorporating multiple orthogonal channels and multiple directional antennas into the WMN, each node can communicate with its neighbor nodes simultaneously without interference between them. However, as we allow more freedom, we need a more sophisticated algorithm to fully utilize it and developing such an algorithm is not easy in general. In this paper, we study a joint channel assignment, link scheduling, routing, and rate control problem for the WMN with multiple orthogonal channels and multiple directional antennas. This problem is inherently hard to solve, since the problem is formulated as a mixed integer nonlinear problem (MINLP). However, despite of its inherent difficulty, we develop an algorithm to solve the problem by using the generalized Benders decomposition approach [2]. The simulation results show the proposed algorithm provides the optimal solution to maximize the network utility, which is defined as the sum of utilities of all sessions.) <|cite_end|> is a mixed integer nonlinear problem (MINLP), the authors used the generalized Benders decomposition approach to solve it. In <|cite_start|> (Reference: Traffic engineering in cognitive mesh networks: Joint link-channel selection and power allocation: ) <|cite_end|>, the authors investigated joint power allocation and channel assignment to maximize the aggregate throughput of cognitive wireless mesh networks. In <|cite_start|> (Reference: Cross-layer design and performance analysis for maximizing the network utilization of wireless mesh networks in cloud computing: ) <|cite_end|>, resource allocation, scheduling and routing are jointly determined to maximize the network utility of wireless mesh networks in cloud computing. In <|cite_start|> (Reference: Joint Topology Control and Channel Assignment Employing Partially Overlapping Channels in Multirate Wireless Mesh Backbone: ) <|cite_end|>, the authors considered joint topology control and partially overlapping channel assignment to improve the capacity of multi-radio multi-channel wireless mesh networks. As mentioned before, one solution to improve the performance of wireless mesh networks is to deploy multiple gateways in these networks. In <|cite_start|> (Reference: PLASMA: A new routing paradigm for wireless multihop networks: In this paper we present a new routing paradigm for wireless multihop networks. In plasma routing, each packet is delivered over the best available path to one of the gateways. The choice of the path and gateway for each packet is not made beforehand by the source node, but rather on-the-fly by the mesh routers as the packet traverses the network. We propose a distributed routing algorithm to jointly optimize the transmission rate and the set of gateways each node should use. A load balancing technique is also proposed to disperse the network traffic among multiple gateways. We validate our proposal with simulations and show that plasma routing outperforms the state-of-the-art multirate anypath routing paradigm, with a 98% throughput gain and a 2.2x delay decrease. Finally, we also show that the load can be evenly distributed among gateways with a similar routing cost, resulting in a further 63% throughput gain.) <|cite_end|>, a heuristic routing algorithm is proposed to increase the network throughput. This algorithm determines the transmission rate and destination gateway of each flow. In <|cite_start|> (Reference: Multi-rate multicast routing in multi-gateway multi-channel multi-radio wireless mesh networks: ) <|cite_end|>, the authors considered multi-rate multicast routing in multi-gateway multi-radio multi-channel wireless mesh networks to maximize the throughput. They then split this NP-hard problem into three phases: gateway selection, channel assignment and rate allocation. In <|cite_start|> (Reference: Cost-effective multicast routings in wireless mesh networks with multiple gateways: ) <|cite_end|>, considering multiple gateways, the authors proposed a multicast routing algorithm which constructs a multicast tree by maximizing the multicast-tree transmission ratio, and they showed that this algorithm improves the average delay and delivery ratio. In <|cite_start|> (Reference: An optimization framework for multicasting in MCMR wireless mesh network with partially overlapping channels: ) <|cite_end|>, the authors considered the problem of multicast routing with multiple gateways and partially overlapped channels, and they showed that these techniques reduce the interference among links. The authors of <|cite_start|> (Reference: Joint traffic splitting, rate control, routing, and scheduling algorithm for maximizing network utility in wireless mesh networks: The existence of multiple gateways, as is a common case in wireless mesh networks (WMNs), brings the possibility to improve network performance. However, previous studies, including both heuristic-based works and theory-driven cross-layer design works, cannot guarantee an optimal exploitation of multiple gateways. In this paper, we focus on exploiting multiple gateways optimally to achieve maximum network utility. We first extend the current framework of cross-layer design and formulate a network utility maximization (NUM) problem under WMNs with multiple gateways as a constrained optimization problem. Then, by solving this optimization problem, we propose a novel joint traffic splitting, rate control, routing, and scheduling algorithm called cross-layer control with dynamic gateway selection (CLC_DGS), which splits and distributes network traffic into multiple gateways in an optimal way. We prove that CLC_DGS can achieve maximum network utility. Finally, we run extensive simulations to demonstrate that, compared with the previous methods, CLC_DGS significantly improves the performance of WMNs under various network environments, including gateway heterogeneity, link heterogeneity, and different interference models.) <|cite_end|> employed both cross-layer design and multiple gateways to improve the performance of wireless mesh networks. They considered joint rate control, traffic splitting, routing and scheduling under the one-hop interference model to maximize the network utility of multi-gateway wireless mesh networks, and showed that combining cross-layer design with multiple gateways considerably improves the throughput and fairness.
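For concreteness, the network utility maximization problem underlying these cross-layer designs typically takes the following generic form; this is a sketch with placeholder symbols $x_s$, $f_l^s$ and $c_l$, not the exact formulation of any cited work:
\[
\max_{x \ge 0,\, f \ge 0} \; \sum_{s} U_s(x_s) \quad \text{subject to} \quad \sum_{s} f_l^s \le c_l \;\; \forall l, \qquad \text{flow conservation of each } f^s,
\]
where $x_s$ is the injection rate of flow $s$, $f_l^s$ is the portion of flow $s$ routed over link $l$, $c_l$ is the capacity of link $l$ (power-dependent under the SINR model), and each $U_s$ is a concave utility; the common choice $U_s(x) = \log x$ yields proportional fairness, which is why utility captures throughput and fairness at once.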
In this paper, we consider joint rate control, traffic splitting, scheduling, routing, link rate allocation and power control under the SINR interference model in a multi-gateway wireless mesh network. By adopting the SINR model, we investigate a more realistic scenario than <|cite_start|> (Reference: Joint traffic splitting, rate control, routing, and scheduling algorithm for maximizing network utility in wireless mesh networks: The existence of multiple gateways, as is a common case in wireless mesh networks (WMNs), brings the possibility to improve network performance. However, previous studies, including both heuristic-based works and theory-driven cross-layer design works, cannot guarantee an optimal exploitation of multiple gateways. In this paper, we focus on exploiting multiple gateways optimally to achieve maximum network utility. We first extend the current framework of cross-layer design and formulate a network utility maximization (NUM) problem under WMNs with multiple gateways as a constrained optimization problem. Then, by solving this optimization problem, we propose a novel joint traffic splitting, rate control, routing, and scheduling algorithm called cross-layer control with dynamic gateway selection (CLC_DGS), which splits and distributes network traffic into multiple gateways in an optimal way. We prove that CLC_DGS can achieve maximum network utility. Finally, we run extensive simulations to demonstrate that, compared with the previous methods, CLC_DGS significantly improves the performance of WMNs under various network environments, including gateway heterogeneity, link heterogeneity, and different interference models.) <|cite_end|>, which considered the one-hop interference model. In addition, beyond the rate control, traffic splitting, routing and scheduling considered in <|cite_start|> (Reference: Joint traffic splitting, rate control, routing, and scheduling algorithm for maximizing network utility in wireless mesh networks: The existence of multiple gateways, as is a common case in wireless mesh networks (WMNs), brings the possibility to improve network performance. However, previous studies, including both heuristic-based works and theory-driven cross-layer design works, cannot guarantee an optimal exploitation of multiple gateways. In this paper, we focus on exploiting multiple gateways optimally to achieve maximum network utility. We first extend the current framework of cross-layer design and formulate a network utility maximization (NUM) problem under WMNs with multiple gateways as a constrained optimization problem. Then, by solving this optimization problem, we propose a novel joint traffic splitting, rate control, routing, and scheduling algorithm called cross-layer control with dynamic gateway selection (CLC_DGS), which splits and distributes network traffic into multiple gateways in an optimal way. We prove that CLC_DGS can achieve maximum network utility. Finally, we run extensive simulations to demonstrate that, compared with the previous methods, CLC_DGS significantly improves the performance of WMNs under various network environments, including gateway heterogeneity, link heterogeneity, and different interference models.) <|cite_end|>, we also include link rate allocation and power control in our cross-layer design, as these tools play important roles under the SINR model.
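The JGRP algorithm introduced next is driven by differential backlogs. As a rough illustration of the classic backpressure weighting that such designs build on, consider the following minimal sketch; the queue layout, function names and the candidate activation set are assumptions made for the example, not the authors' implementation:
\begin{verbatim}
# Sketch of differential-backlog (backpressure) link weighting.
# queues[node][flow] holds the backlog of each flow at each node.

def link_weight(queues, tx, rx):
    """Weight of link (tx, rx): the largest per-flow backlog differential."""
    best_flow, best_diff = None, 0.0
    for flow, q_tx in queues[tx].items():
        diff = q_tx - queues.get(rx, {}).get(flow, 0.0)
        if diff > best_diff:
            best_flow, best_diff = flow, diff
    return best_flow, best_diff

def schedule(queues, candidate_patterns):
    """Pick the interference-feasible activation pattern (a dict mapping
    links (tx, rx) to rates) that maximizes the weight-rate product."""
    def score(pattern):
        return sum(link_weight(queues, tx, rx)[1] * rate
                   for (tx, rx), rate in pattern.items())
    return max(candidate_patterns, key=score)

# Example: one flow, two SINR-feasible patterns on the same link.
queues = {"a": {"f1": 10.0}, "b": {"f1": 2.0}}
patterns = [{("a", "b"): 1.0}, {("a", "b"): 0.5}]
print(schedule(queues, patterns))  # -> {('a', 'b'): 1.0}
\end{verbatim}
Exhaustively searching the SINR-feasible activation patterns is itself intractable, which is consistent with the paper's use of a sub-optimal search method in the third part of the algorithm described below.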
Similar to <|cite_start|> (Reference: Joint traffic splitting, rate control, routing, and scheduling algorithm for maximizing network utility in wireless mesh networks: The existence of multiple gateways, as is a common case in wireless mesh networks (WMNs), brings the possibility to improve network performance. However, previous studies, including both heuristic-based works and theory-driven cross-layer design works, cannot guarantee an optimal exploitation of multiple gateways. In this paper, we focus on exploiting multiple gateways optimally to achieve maximum network utility. We first extend the current framework of cross-layer design and formulate a network utility maximization (NUM) problem under WMNs with multiple gateways as a constrained optimization problem. Then, by solving this optimization problem, we propose a novel joint traffic splitting, rate control, routing, and scheduling algorithm called cross-layer control with dynamic gateway selection (CLC_DGS), which splits and distributes network traffic into multiple gateways in an optimal way. We prove that CLC_DGS can achieve maximum network utility. Finally, we run extensive simulations to demonstrate that, compared with the previous methods, CLC_DGS significantly improves the performance of WMNs under various network environments, including gateway heterogeneity, link heterogeneity, and different interference models.) <|cite_end|>, our aim is to maximize the network utility, a widely used performance metric that captures both the aggregate throughput and the fairness of the network. We propose the Joint dynamic Gateway selection, link Rate allocation and Power control (JGRP) algorithm, based on the differential backlog, as a sub-optimal solution to the network utility maximization problem. This algorithm has three parts: in the first part, the network topology is formed by pruning the full mesh network to reduce the complexity of the other parts; in the second part, the rate control and traffic splitting mechanisms are obtained jointly; and in the third part, joint scheduling, routing, rate allocation to links and power allocation to nodes are obtained through a sub-optimal search method that we present. Moreover, we propose new parameters to replace the differential backlog and improve the fairness of the JGRP algorithm. The rest of the paper is organized as follows: Section \ref{sect2} describes the network model. Section \ref{sect3} formulates the network utility maximization problem. Section \ref{sect4} describes the proposed JGRP algorithm as a sub-optimal solution to the network utility maximization problem. Section \ref{sect5} improves the fairness by defining new parameters. Section \ref{sect6} provides simulation results, and finally Section \ref{sect7} concludes the paper. <|paper_end|>
[ "<|reference_start|> Joint topology control and routing for multi-radio multi-channel WMNs under SINR model using bio-inspired techniques: <|reference_end|>", "<|reference_start|> Traffic engineering in cognitive mesh networks: Joint link-channel selection and power allocation: <|reference_end|>", "<|reference_start|> Cross-layer design and performance analysis for maximizing the network utilization of wireless mesh networks in cloud computing: <|reference_end|>", "<|reference_start|> PLASMA: A new routing paradigm for wireless multihop networks: In this paper we present a new routing paradigm for wireless multihop networks. In plasma routing, each packet is delivered over the best available path to one of the gateways. The choice of the path and gateway for each packet is not made beforehand by the source node, but rather on-the-fly by the mesh routers as the packet traverses the network. We propose a distributed routing algorithm to jointly optimize the transmission rate and the set of gateways each node should use. A load balancing technique is also proposed to disperse the network traffic among multiple gateways. We validate our proposal with simulations and show that plasma routing outperforms the state-of-the-art multirate anypath routing paradigm, with a 98% throughput gain and a 2.2x delay decrease. Finally, we also show that the load can be evenly distributed among gateways with a similar routing cost, resulting in a further 63% throughput gain. <|reference_end|>" ]
[ 8, 18, 19, 21 ]
{"<|cite_1|>": "ss-2059898", "<|multi_cite_2_1|>": "ss-1650285", "<|multi_cite_2_2|>": "ss-709153", "<|cite_3|>": "ss-2385505", "<|cite_4|>": "ss-2385506", "<|cite_5|>": "ss-2385507", "<|cite_6|>": "ss-2385508", "<|cite_7|>": "ss-2385509", "<|cite_8|>": "ss-2385509", "<|cite_9|>": "ss-2385510", "<|cite_10|>": "ss-2385511", "<|cite_11|>": "ss-2385512", "<|cite_12|>": "ss-2385513", "<|multi_cite_13_1|>": "ss-2385514", "<|multi_cite_13_2|>": "ss-1061079", "<|cite_14|>": "ss-2385515", "<|cite_15|>": "ss-2385516", "<|cite_16|>": "ss-2385516", "<|cite_17|>": "ss-2385517", "<|cite_18|>": "ss-2385518", "<|cite_19|>": "ss-2385519", "<|cite_20|>": "ss-2385520", "<|cite_21|>": "ss-2385521", "<|cite_22|>": "ss-2385522", "<|cite_23|>": "ss-2385523", "<|cite_24|>": "ss-2385524", "<|cite_25|>": "ss-2385524", "<|cite_26|>": "ss-2385524", "<|cite_27|>": "ss-2385524"}
2204.02287
<|paper_start|> Title: Rethinking Visual Geo-localization for Large-Scale Applications Abstract: Rethinking Visual Geo-localization for Large-Scale Applications: Visual Geo-localization (VG) is the task of estimating the position where a given photo was taken by comparing it with a large database of images of known locations. To investigate how existing techniques would perform on a real-world city-wide VG application, we build San Francisco eXtra Large, a new dataset covering a whole city and providing a wide range of challenging cases, with a size 30x bigger than the previous largest dataset for visual geo-localization. We find that current methods fail to scale to such large datasets, therefore we design a new highly scalable training technique, called CosPlace, which casts the training as a classification problem avoiding the expensive mining needed by the commonly used contrastive learning. We achieve state-of-the-art performance on a wide range of datasets and find that CosPlace is robust to heavy domain changes. Moreover, we show that, compared to the previous state-of-the-art, CosPlace requires roughly 80% less GPU memory at train time, and it achieves better results with 8x smaller descriptors, paving the way for city-wide real-world visual geo-localization. Dataset, code and trained models are available for research purposes at https://github.com/gmberton/CosPlace. Introduction \label{sec:introduction} Visual geo-localization (VG), also known as visual place recognition or image localization <|cite_start|> (Reference: Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization: This paper tackles the problem of large-scale image-based localization (IBL) where the spatial location of a query image is determined by finding out the most similar reference images in a large database. For solving this problem, a critical task is to learn discriminative image representation that captures informative information relevant for localization. We propose a novel representation learning method having higher location-discriminating power. It provides the following contributions: 1) we represent a place (location) as a set of exemplar images depicting the same landmarks and aim to maximize similarities among intra-place images while minimizing similarities among inter-place images; 2) we model a similarity measure as a probability distribution on L_2-metric distances between intra-place and inter-place image representations; 3) we propose a new Stochastic Attraction and Repulsion Embedding (SARE) loss function minimizing the KL divergence between the learned and the actual probability distributions; 4) we give theoretical comparisons between SARE, triplet ranking and contrastive losses. It provides insights into why SARE is better by analyzing gradients. Our SARE loss is easy to implement and pluggable to any CNN. Experiments show that our proposed method improves the localization performance on standard benchmarks by a large margin. Demonstrating the broad applicability of our method, we obtained the third place out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our code and model are available at https://github.com/Liumouliu/deepIBL.) <|cite_end|>, is a staple of computer vision <|cite_start|> (Reference: Visual Place Recognition with Repetitive Structures: Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. 
Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.) <|cite_end|> <|cite_start|> (Reference: 24/7 place recognition by view synthesis: We address the problem of large-scale visual place recognition for situations where the scene undergoes a major change in appearance, for example, due to illumination (day/night), change of seasons, aging, or structural modifications over time such as buildings built or destroyed. Such situations represent a major challenge for current large-scale place recognition methods. This work has the following three principal contributions. First, we demonstrate that matching across large changes in the scene appearance becomes much easier when both the query image and the database image depict the scene from approximately the same viewpoint. Second, based on this observation, we develop a new place recognition approach that combines (i) an efficient synthesis of novel views with (ii) a compact indexable image representation. Third, we introduce a new challenging dataset of 1,125 camera-phone query images of Tokyo that contain major changes in illumination (day, sunset, night) as well as structural changes in the scene. We demonstrate that the proposed approach significantly outperforms other large-scale place recognition techniques on this challenging data.) <|cite_end|> <|cite_start|> (Reference: Scalable Place Recognition Under Appearance Change for Autonomous Driving: A major challenge in place recognition for autonomous driving is to be robust against appearance changes due to short-term (e.g., weather, lighting) and long-term (seasons, vegetation growth, etc.) environmental variations. A promising solution is to continuously accumulate images to maintain an adequate sample of the conditions and incorporate new changes into the place recognition decision. However, this demands a place recognition technique that is scalable on an ever growing dataset. To this end, we propose a novel place recognition technique that can be efficiently retrained and compressed, such that the recognition of new queries can exploit all available data (including recent changes) without suffering from visible growth in computational cost. Underpinning our method is a novel temporal image matching technique based on Hidden Markov Models. Our experiments show that, compared to state-of-the-art techniques, our method has much greater potential for large-scale place recognition for autonomous driving.) 
<|cite_end|> <|cite_start|> (Reference: Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition: Visual Place Recognition is a challenging task for robotics and autonomous systems, which must deal with the twin problems of appearance and viewpoint change in an always changing world. This paper introduces Patch-NetVLAD, which provides a novel formulation for combining the advantages of both local and global descriptor methods by deriving patch-level features from NetVLAD residuals. Unlike the fixed spatial neighborhood regime of existing local keypoint features, our method enables aggregation and matching of deep-learned local features defined over the feature-space grid. We further introduce a multi-scale fusion of patch features that have complementary scales (i.e. patch sizes) via an integral feature space and show that the fused features are highly invariant to both condition (season, structure, and illumination) and viewpoint (translation and rotation) changes. Patch-NetVLAD outperforms both global and local feature descriptor-based methods with comparable compute, achieving state-of-the-art visual place recognition results on a range of challenging real-world datasets, including winning the Facebook Mapillary Visual Place Recognition Challenge at ECCV2020. It is also adaptable to user requirements, with a speed-optimised version operating over an order of magnitude faster than the state-of-the-art. By combining superior performance with improved computational efficiency in a configurable framework, Patch-NetVLAD is well suited to enhance both stand-alone place recognition capabilities and the overall performance of SLAM systems.) <|cite_end|> <|cite_start|> (Reference: Are Large-Scale 3D models really necessary for accurate visual localization?: Accurate visual localization is a key technology for autonomous navigation. 3D structure-based methods employ 3D models of the scene to estimate the full 6DOF pose of a camera very accurately. However, constructing (and extending) large-scale 3D models is still a significant challenge. In contrast, 2D image retrieval-based methods only require a database of geo-tagged images, which is trivial to construct and to maintain. They are often considered inaccurate since they only approximate the positions of the cameras. Yet, the exact camera pose can theoretically be recovered when enough relevant database images are retrieved. In this paper, we demonstrate experimentally that large-scale 3D models are not strictly necessary for accurate visual localization. We create reference poses for a large and challenging urban dataset. Using these poses, we show that combining image-based methods with local reconstructions results in a pose accuracy similar to the state-of-the-art structure-based methods. Our results suggest that we might want to reconsider the current approach for accurate large-scale localization.) <|cite_end|> <|cite_start|> (Reference: VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change: ) <|cite_end|> and robotics research <|cite_start|> (Reference: 2017: Within the framework of the cetacean stranding registration and monitoring network, data were obtained on 225 animals found on the Black Sea coast of Crimea or in its coastal waters. Monitoring of a stretch of the coast of South-Eastern Crimea revealed 8 animals. 172 reports on 217 animals (10 of which were found alive) were received from local residents and vacationers. Among these 217 animals, harbour porpoises predominated (75.1%); bottlenose dolphins (11.5%) and common dolphins (6.5%) were also found. The largest numbers of animals were recorded in the Sevastopol district (42%) and in South-Eastern Crimea (21%). In our opinion, the reasons for this are the high concentration of vacationers in the habitats of local cetacean populations and intensive fishing in these areas during the summer season. The peak in discoveries of stranded animals occurred in July. Among the harbour porpoises found (163 animals in total), almost 30% were calves. Intensive fishing in harbour porpoise habitats increases the probability that female-calf pairs die in fishing gear during the critical season of nursing newborns, June-July. To reduce this mortality, an assessment of the distribution of local herds and restrictions on fishing in the animals' habitats during this period are required. Integrating the data on 10 dead animals found on the coast and in the waters of the Sudak district with data on the operation of fishing vessels, taking temporal indicators into account, suggests that the animals died in trawls. We propose the hypothesis that in a number of cases of cetaceans caught in trawls there are no external signs of death by bycatch. Further development of the RIMS network (registration - research - monitoring - rescue) in Crimea will be most effective if the components of registering reports from the public and monitoring of control sites are combined; both components should be accompanied by comprehensive studies of the stranded animals.) <|cite_end|> <|cite_start|> (Reference: 2017: Within the framework of the cetacean stranding registration and monitoring network, data were obtained on 225 animals found on the Black Sea coast of Crimea or in its coastal waters. Monitoring of a stretch of the coast of South-Eastern Crimea revealed 8 animals. 172 reports on 217 animals (10 of which were found alive) were received from local residents and vacationers. Among these 217 animals, harbour porpoises predominated (75.1%); bottlenose dolphins (11.5%) and common dolphins (6.5%) were also found. The largest numbers of animals were recorded in the Sevastopol district (42%) and in South-Eastern Crimea (21%). In our opinion, the reasons for this are the high concentration of vacationers in the habitats of local cetacean populations and intensive fishing in these areas during the summer season. The peak in discoveries of stranded animals occurred in July. Among the harbour porpoises found (163 animals in total), almost 30% were calves. Intensive fishing in harbour porpoise habitats increases the probability that female-calf pairs die in fishing gear during the critical season of nursing newborns, June-July. To reduce this mortality, an assessment of the distribution of local herds and restrictions on fishing in the animals' habitats during this period are required. Integrating the data on 10 dead animals found on the coast and in the waters of the Sudak district with data on the operation of fishing vessels, taking temporal indicators into account, suggests that the animals died in trawls. We propose the hypothesis that in a number of cases of cetaceans caught in trawls there are no external signs of death by bycatch. Further development of the RIMS network (registration - research - monitoring - rescue) in Crimea will be most effective if the components of registering reports from the public and monitoring of control sites are combined; both components should be accompanied by comprehensive studies of the stranded animals.) <|cite_end|> <|cite_start|> (Reference: Learning Context Flexible Attention Model for Long-Term Visual Place Recognition: Identifying regions of interest in an image has long been of great importance in a wide range of tasks, including place recognition. In this letter, we propose a novel attention mechanism with flexible context, which can be incorporated into existing feedforward network architecture to learn image representations for long-term place recognition. In particular, in order to focus on regions that contribute positively to place recognition, we introduce a multiscale context-flexible network to estimate the importance of each spatial region in the feature map. Our model is trained end-to-end for place recognition and can detect regions of interest of arbitrary shape. Extensive experiments have been conducted to verify the effectiveness of our approach and the results demonstrate that our model can achieve consistently better performance than the state of the art on standard benchmark datasets. Finally, we visualize the learned attention maps to generate insights into what attention the network has learned.) <|cite_end|> <|cite_start|> (Reference: Semantic–geometric visual place recognition: a new perspective for reconciling opposing views: Human drivers are capable of recognizing places from a previous journey even when viewing them from the opposite direction during the return trip under radically different environmental conditions, without needing to look back or employ a 360 ° camera or LIDAR sensor. Such navigation capabilities are attributed in large part to the robust semantic scene understanding capabilities of humans. However, for an autonomous robot or vehicle, achieving such human-like visual place recognition capability presents three major challenges: (1) dealing with a limited amount of commonly observable visual content when viewing the same place from the opposite direction; (2) dealing with significant lateral viewpoint changes caused by opposing directions of travel taking place on opposite sides of the road; and (3) dealing with a radically changed scene appearance due to environmental conditions such as time of day, season, and weather. Current state-of-the-art place recognition systems have only addressed these three challenges in isolation or in pairs, typically relying on appearance-based, deep-learnt place representations. In this paper, we present a novel, semantics-based system that for the first time solves all three challenges simultaneously. We propose a hybrid image descriptor that semantically aggregates salient visual information, complemented by appearance-based description, and augment a conventional coarse-to-fine recognition pipeline with keypoint correspondences extracted from within the convolutional feature maps of a pre-trained network. Finally, we introduce descriptor normalization and local score enhancement strategies for improving the robustness of the system. Using both existing benchmark datasets and extensive new datasets that for the first time combine the three challenges of opposing viewpoints, lateral viewpoint shifts, and extreme appearance change, we show that our system can achieve practical place recognition performance where existing state-of-the-art methods fail.)
<|cite_end|> <|cite_start|> (Reference: A Holistic Visual Place Recognition Approach Using Lightweight CNNs for Significant ViewPoint and Appearance Changes: This article presents a lightweight visual place recognition approach, capable of achieving high performance with low computational cost, and feasible for mobile robotics under significant viewpoint and appearance changes. Results on several benchmark datasets confirm an average boost of 13% in accuracy, and 12x average speedup relative to state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Multi-Process Fusion: Visual Place Recognition Using Multiple Image Processing Methods: Typical attempts to improve the capability of visual place recognition techniques include the use of multi-sensor fusion and integration of information over time from image sequences. These approaches can improve performance but have disadvantages including the need for multiple physical sensors and calibration processes, both for multiple sensors and for tuning the image matching sequence length. In this paper we address these shortcomings with a novel "multi-sensor" fusion approach applied to multiple image processing methods for a single visual image stream, combined with a dynamic sequence matching length technique and an automatic processing method weighting scheme. In contrast to conventional single method approaches, our approach reduces the performance requirements of a single image processing methodology, instead requiring that within the suite of image processing methods, at least one performs well in any particular environment. In comparison to static sequence length techniques, the dynamic sequence matching technique enables reduced localization latencies through analysis of recognition quality metrics when re-entering familiar locations. We evaluate our approach on multiple challenging benchmark datasets, achieving superior performance to two state-of-the-art visual place recognition systems across environmental changes including winter to summer, afternoon to morning and night to day. Across the four benchmark datasets our proposed approach achieves an average F1 score of 0.96, compared to 0.78 for NetVLAD and 0.49 for SeqSLAM. We provide source code for the multi-fusion method and present analysis explaining how superior performance is achieved despite the multiple, disparate, image processing methods all being applied to a single source of imagery, rather than to multiple separate sensors.) <|cite_end|> and it is defined as the task of coarsely recognizing the geographical location where a photo was taken, usually with a tolerance of a few meters <|cite_start|> (Reference: Learned Contextual Feature Reweighting for Image Geo-Localization: We address the problem of large scale image geo-localization where the location of an image is estimated by identifying geo-tagged reference images depicting the same place. We propose a novel model for learning image representations that integrates context-aware feature reweighting in order to effectively focus on regions that positively contribute to geo-localization. In particular, we introduce a Contextual Reweighting Network (CRN) that predicts the importance of each region in the feature map based on the image context. Our model is learned end-to-end for the image geo-localization task, and requires no annotation other than image geo-tags for training.
In experimental results, the proposed approach significantly outperforms the previous state-of-the-art on the standard geo-localization benchmark datasets. We also demonstrate that our CRN discovers task-relevant contexts without any additional supervision.) <|cite_end|> <|cite_start|> (Reference: Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization: This paper tackles the problem of large-scale image-based localization (IBL) where the spatial location of a query image is determined by finding out the most similar reference images in a large database. For solving this problem, a critical task is to learn discriminative image representation that captures informative information relevant for localization. We propose a novel representation learning method having higher location-discriminating power. It provides the following contributions: 1) we represent a place (location) as a set of exemplar images depicting the same landmarks and aim to maximize similarities among intra-place images while minimizing similarities among inter-place images; 2) we model a similarity measure as a probability distribution on L_2-metric distances between intra-place and inter-place image representations; 3) we propose a new Stochastic Attraction and Repulsion Embedding (SARE) loss function minimizing the KL divergence between the learned and the actual probability distributions; 4) we give theoretical comparisons between SARE, triplet ranking and contrastive losses. It provides insights into why SARE is better by analyzing gradients. Our SARE loss is easy to implement and pluggable to any CNN. Experiments show that our proposed method improves the localization performance on standard benchmarks by a large margin. Demonstrating the broad applicability of our method, we obtained the third place out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our code and model are available at https://github.com/Liumouliu/deepIBL.) <|cite_end|> <|cite_start|> (Reference: Self-supervising Fine-grained Region Similarities for Large-scale Image Localization: The task of large-scale retrieval-based image localization is to estimate the geographical location of a query image by recognizing its nearest reference images from a city-scale dataset. However, the general public benchmarks only provide noisy GPS labels associated with the training images, which act as weak supervisions for learning image-to-image similarities. Such label noise prevents deep neural networks from learning discriminative features for accurate localization. To tackle this challenge, we propose to self-supervise image-to-region similarities in order to fully explore the potential of difficult positive images alongside their sub-regions. The estimated image-to-region similarities can serve as extra training supervision for improving the network in generations, which could in turn gradually refine the fine-grained similarities to achieve optimal performance. Our proposed self-enhanced image-to-region similarity labels effectively deal with the training bottleneck in the state-of-the-art pipelines without any additional parameters or manual annotations in both training and inference. Our method outperforms state-of-the-arts on the standard localization benchmarks by noticeable margins and shows excellent generalization capability on multiple image retrieval datasets.) 
<|cite_end|> <|cite_start|> (Reference: Adaptive-Attentive Geolocalization from few queries: a hybrid approach: We address the task of cross-domain visual place recognition, where the goal is to geolocalize a given query image against a labeled gallery, in the case where the query and the gallery belong to different visual domains. To achieve this, we focus on building a domain robust deep network by leveraging over an attention mechanism combined with few-shot unsupervised domain adaptation techniques, where we use a small number of unlabeled target domain images to learn about the target distribution. With our method, we are able to outperform the current state of the art while using two orders of magnitude less target domain images. Finally we propose a new large-scale dataset for cross-domain visual place recognition, called SVOX. The pytorch code is available at https://github.com/valeriopaolicelli/AdAGeo .) <|cite_end|> <|cite_start|> (Reference: Inside Out Visual Place Recognition: Visual Place Recognition (VPR) is generally concerned with localizing outdoor images. However, localizing indoor scenes that contain part of an outdoor scene can be of large value for a wide range of applications. In this paper, we introduce Inside Out Visual Place Recognition (IOVPR), a task aiming to localize images based on outdoor scenes visible through windows. For this task we present the new large-scale dataset Amsterdam-XXXL, with images taken in Amsterdam, that consists of 6.4 million panoramic street-view images and 1000 user-generated indoor queries. Additionally, we introduce a new training protocol Inside Out Data Augmentation to adapt Visual Place Recognition methods for localizing indoor images, demonstrating the potential of Inside Out Visual Place Recognition. We empirically show the benefits of our proposed data augmentation scheme on a smaller scale, whilst demonstrating the difficulty of this large-scale dataset for existing methods. With this new task we aim to encourage development of methods for IOVPR. The dataset and code are available for research purposes at https://github.com/saibr/IOVPR) <|cite_end|> <|cite_start|> (Reference: Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition: Lifelong place recognition is an essential and challenging task in computer vision with vast applications in robust localization and efficient large-scale 3D reconstruction. Progress is currently hindered by a lack of large, diverse, publicly available datasets. We contribute with Mapillary Street-Level Sequences (SLS), a large dataset for urban and suburban place recognition from image sequences. It contains more than 1.6 million images curated from the Mapillary collaborative mapping platform. The dataset is orders of magnitude larger than current data sources, and is designed to reflect the diversities of true lifelong learning. It features images from 30 major cities across six continents, hundreds of distinct cameras, and substantially different viewpoints and capture times, spanning all seasons over a nine year period. All images are geo-located with GPS and compass, and feature high-level attributes such as road type. We propose a set of benchmark tasks designed to push state-of-the-art performance and provide baseline studies. We show that current state-of-the-art methods still have a long way to go, and that the lack of diversity in existing datasets have prevented generalization to new environments. The dataset and benchmarks are available for academic research.) 
<|cite_end|>. This task is commonly approached as an image retrieval problem where the query to be localized is compared to a database of geo-tagged images: the most similar images retrieved from the database, together with their metadata, represent the hypotheses of the query's geographical location. In particular, all recent VG methods are learning-based and use a neural network to project the images into an embedding space that well represents the similarity of their locations, and that can be used for the retrieval. So far, research on VG has focused on recognizing the location of images in moderately sized geographical areas (\eg, a neighborhood). However, real-world applications of this technology, such as autonomous driving <|cite_start|> (Reference: Scalable Place Recognition Under Appearance Change for Autonomous Driving: A major challenge in place recognition for autonomous driving is to be robust against appearance changes due to short-term (e.g., weather, lighting) and long-term (seasons, vegetation growth, etc.) environmental variations. A promising solution is to continuously accumulate images to maintain an adequate sample of the conditions and incorporate new changes into the place recognition decision. However, this demands a place recognition technique that is scalable on an ever growing dataset. To this end, we propose a novel place recognition technique that can be efficiently retrained and compressed, such that the recognition of new queries can exploit all available data (including recent changes) without suffering from visible growth in computational cost. Underpinning our method is a novel temporal image matching technique based on Hidden Markov Models. Our experiments show that, compared to state-of-the-art techniques, our method has much greater potential for large-scale place recognition for autonomous driving.) <|cite_end|> and assistive devices <|cite_start|> (Reference: Unifying Visual Localization and Scene Recognition for People With Visual Impairment: With the development of computer vision and mobile computing, assistive navigation for people with visual impairment arouses the attention of research communities. As two key challenges of assistive navigation, “Where am I?” and “What are the surroundings?” are still to be resolved by taking advantage of visual information. In this paper, we leverage the prevailing compact network as the backbone to build a unified network featuring two branches that implement scene description and scene recognition separately. Based on the unified network, the proposed pipeline performs scene recognition and visual localization simultaneously in the scenario of assistive navigation. The visual localization pipeline involves image retrieval and sequence matching. In the experiments, different configurations of the proposed pipeline are tested on public datasets to search for the optimal parameters. Moreover, on the real-world datasets captured by the wearable assistive device, the proposed assistive navigation pipeline is proved to achieve satisfactory performance. On the challenging dataset, the top-5 precision of scene recognition is more than 80%, and the visual localization precision is over 60% under a recall of 60%. The related codes and datasets are open-source online at https://github.com/chengricky/ScenePlaceRecognition.) <|cite_end|>, are poised to operate at a much larger scale (\eg, cities or metropolitan areas), thus requiring massive databases of geo-tagged images to execute the retrieval.
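As a minimal illustration of the retrieval formulation just described, the sketch below matches a query descriptor against L2-normalized database descriptors; the embedding network is left abstract, and the array names and the 25 m correctness radius are assumptions for the example (25 m being a threshold commonly used in VG evaluation):
\begin{verbatim}
import numpy as np

def localize(query_descriptor, db_descriptors, db_coords, k=5):
    """Return the geo-tags (and similarities) of the k most similar
    database images; db_descriptors is (N, D) and L2-normalized,
    db_coords is (N, 2) positions of the database images."""
    q = query_descriptor / np.linalg.norm(query_descriptor)
    sims = db_descriptors @ q          # cosine similarity for unit vectors
    top_k = np.argsort(-sims)[:k]      # indices of the k best matches
    return db_coords[top_k], sims[top_k]

def is_correct(pred_coords, gt_coord, radius_m=25.0):
    """A query counts as localized if any top-k hypothesis falls within
    radius_m of the ground-truth position (the usual recall@k metric)."""
    return bool((np.linalg.norm(pred_coords - gt_coord, axis=1)
                 <= radius_m).any())

# Toy usage with random unit descriptors standing in for a trained model.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 256))
db /= np.linalg.norm(db, axis=1, keepdims=True)
coords = rng.uniform(0, 5000, size=(1000, 2))
preds, _ = localize(db[0] + 0.01 * rng.normal(size=256), db, coords)
print(is_correct(preds, coords[0]))    # almost surely True
\end{verbatim}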
Having access to such massive databases, it would be advisable to also use them to train the model, rather than just to execute the retrieval (inference). This idea requires us to rethink VG, addressing the following two limitations. \myparagraph{Non-representative datasets.} The current datasets for VG are not representative of realistic large-scale applications, because they are either too small in geographical coverage <|cite_start|> (Reference: 24/7 place recognition by view synthesis: We address the problem of large-scale visual place recognition for situations where the scene undergoes a major change in appearance, for example, due to illumination (day/night), change of seasons, aging, or structural modifications over time such as buildings built or destroyed. Such situations represent a major challenge for current large-scale place recognition methods. This work has the following three principal contributions. First, we demonstrate that matching across large changes in the scene appearance becomes much easier when both the query image and the database image depict the scene from approximately the same viewpoint. Second, based on this observation, we develop a new place recognition approach that combines (i) an efficient synthesis of novel views with (ii) a compact indexable image representation. Third, we introduce a new challenging dataset of 1,125 camera-phone query images of Tokyo that contain major changes in illumination (day, sunset, night) as well as structural changes in the scene. We demonstrate that the proposed approach significantly outperforms other large-scale place recognition techniques on this challenging data.) <|cite_end|> <|cite_start|> (Reference: Mapping a suburb with a single camera using a biologically inspired SLAM system: This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.) <|cite_end|> <|cite_start|> (Reference: Vision meets robotics: The KITTI dataset: We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10–100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences.
Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.) <|cite_end|> <|cite_start|> (Reference: University of Michigan North Campus long-term vision and lidar dataset: This paper documents a large scale, long-term autonomy dataset for robotics research collected on the University of Michigan’s North Campus. The dataset consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry collected using a Segway robot. The dataset was collected to facilitate research focusing on long-term autonomous operation in changing environments. The dataset is composed of 27 sessions spaced approximately biweekly over the course of 15 months. The sessions repeatedly explore the campus, both indoors and outdoors, on varying trajectories, and at different times of the day across all four seasons. This allows the dataset to capture many challenging elements including: moving obstacles (e.g. pedestrians, bicyclists and cars), changing lighting, varying viewpoint, seasonal and weather changes (e.g. falling leaves and snow), and long-term structural changes caused by construction projects. To further facilitate research, we also provide ground-truth pose for all sessions in a single frame of reference.) <|cite_end|> or too sparse <|cite_start|> (Reference: Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition: Lifelong place recognition is an essential and challenging task in computer vision with vast applications in robust localization and efficient large-scale 3D reconstruction. Progress is currently hindered by a lack of large, diverse, publicly available datasets. We contribute with Mapillary Street-Level Sequences (SLS), a large dataset for urban and suburban place recognition from image sequences. It contains more than 1.6 million images curated from the Mapillary collaborative mapping platform. The dataset is orders of magnitude larger than current data sources, and is designed to reflect the diversities of true lifelong learning. It features images from 30 major cities across six continents, hundreds of distinct cameras, and substantially different viewpoints and capture times, spanning all seasons over a nine year period. All images are geo-located with GPS and compass, and feature high-level attributes such as road type. We propose a set of benchmark tasks designed to push state-of-the-art performance and provide baseline studies. We show that current state-of-the-art methods still have a long way to go, and that the lack of diversity in existing datasets have prevented generalization to new environments. The dataset and benchmarks are available for academic research.) <|cite_end|> <|cite_start|> (Reference: Highly scalable appearance-only SLAM - FAB-MAP 2.0: We describe a new formulation of appearance-only SLAM suitable for very large scale navigation. The system navigates in the space of appearance, assigning each new observation to either a new or previously visited location, without reference to metric position. The system is demonstrated performing reliable online appearance mapping and loop closure detection over a 1,000 km trajectory, with mean filter update times of 14 ms. The 1,000 km experiment is more than an order of magnitude larger than any previously reported result. 
The scalability of the system is achieved by defining a sparse approximation to the FAB-MAP model suitable for implementation using an inverted index. Our formulation of the problem is fully probabilistic and naturally incorporates robustness against perceptual aliasing. The 1,000 km data set comprising almost a terabyte of omni-directional and stereo imagery is available for use, and we hope that it will serve as a benchmark for future systems.) <|cite_end|> <|cite_start|> (Reference: Understanding how camera configuration and environmental conditions affect appearance-based localization: Localization is a central problem for intelligent vehicles. Visual localization can supplement or replace GPS-based localization approaches in situations where GPS is unavailable or inaccurate. Although visual localization has been demonstrated in a variety of algorithms and systems, the problem of how to best configure such a system remains largely an open question. Design choices, such as “where should the camera be placed?” and “how should it be oriented?” can have substantial effect on the cost and robustness of a fielded intelligent vehicle. This paper analyzes how different sensor configuration parameters and environmental conditions affect visual localization performance with the goal of understanding what causes certain configurations to perform better than others and providing general principles for configuring systems for visual localization. We ground the investigation using extensive field testing of a visual localization algorithm, and the data sets used for the analysis are made available for comparative evaluation.) <|cite_end|> (see \cref{fig:map} for an example of these limitations). Moreover, current datasets follow the common practice of splitting the collected images into geographically disjoint sets for training and inference. However, this practice has no counterpart in the real world, where one would likely opt to use images from the target geographical area to train the model. Considering the cost of collecting the images, it would also be advisable to use the whole database for training. \myparagraph{Scalability of training.} Having access to a massive amount of data raises the question of how to use it effectively for training. All the recent state-of-the-art methods in VG use contrastive learning <|cite_start|> (Reference: Learned Contextual Feature Reweighting for Image Geo-Localization: We address the problem of large scale image geo-localization where the location of an image is estimated by identifying geo-tagged reference images depicting the same place. We propose a novel model for learning image representations that integrates context-aware feature reweighting in order to effectively focus on regions that positively contribute to geo-localization. In particular, we introduce a Contextual Reweighting Network (CRN) that predicts the importance of each region in the feature map based on the image context. Our model is learned end-to-end for the image geo-localization task, and requires no annotation other than image geo-tags for training. In experimental results, the proposed approach significantly outperforms the previous state-of-the-art on the standard geo-localization benchmark datasets. We also demonstrate that our CRN discovers task-relevant contexts without any additional supervision.)
<|cite_end|> <|cite_start|> (Reference: Self-supervising Fine-grained Region Similarities for Large-scale Image Localization: The task of large-scale retrieval-based image localization is to estimate the geographical location of a query image by recognizing its nearest reference images from a city-scale dataset. However, the general public benchmarks only provide noisy GPS labels associated with the training images, which act as weak supervisions for learning image-to-image similarities. Such label noise prevents deep neural networks from learning discriminative features for accurate localization. To tackle this challenge, we propose to self-supervise image-to-region similarities in order to fully explore the potential of difficult positive images alongside their sub-regions. The estimated image-to-region similarities can serve as extra training supervision for improving the network in generations, which could in turn gradually refine the fine-grained similarities to achieve optimal performance. Our proposed self-enhanced image-to-region similarity labels effectively deal with the training bottleneck in the state-of-the-art pipelines without any additional parameters or manual annotations in both training and inference. Our method outperforms state-of-the-arts on the standard localization benchmarks by noticeable margins and shows excellent generalization capability on multiple image retrieval datasets.) <|cite_end|> <|cite_start|> (Reference: Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization: This paper tackles the problem of large-scale image-based localization (IBL) where the spatial location of a query image is determined by finding out the most similar reference images in a large database. For solving this problem, a critical task is to learn discriminative image representation that captures informative information relevant for localization. We propose a novel representation learning method having higher location-discriminating power. It provides the following contributions: 1) we represent a place (location) as a set of exemplar images depicting the same landmarks and aim to maximize similarities among intra-place images while minimizing similarities among inter-place images; 2) we model a similarity measure as a probability distribution on L_2-metric distances between intra-place and inter-place image representations; 3) we propose a new Stochastic Attraction and Repulsion Embedding (SARE) loss function minimizing the KL divergence between the learned and the actual probability distributions; 4) we give theoretical comparisons between SARE, triplet ranking and contrastive losses. It provides insights into why SARE is better by analyzing gradients. Our SARE loss is easy to implement and pluggable to any CNN. Experiments show that our proposed method improves the localization performance on standard benchmarks by a large margin. Demonstrating the broad applicability of our method, we obtained the third place out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our code and model are available at https://github.com/Liumouliu/deepIBL.) <|cite_end|> <|cite_start|> (Reference: Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition: Lifelong place recognition is an essential and challenging task in computer vision with vast applications in robust localization and efficient large-scale 3D reconstruction. Progress is currently hindered by a lack of large, diverse, publicly available datasets. 
We contribute with Mapillary Street-Level Sequences (SLS), a large dataset for urban and suburban place recognition from image sequences. It contains more than 1.6 million images curated from the Mapillary collaborative mapping platform. The dataset is orders of magnitude larger than current data sources, and is designed to reflect the diversities of true lifelong learning. It features images from 30 major cities across six continents, hundreds of distinct cameras, and substantially different viewpoints and capture times, spanning all seasons over a nine year period. All images are geo-located with GPS and compass, and feature high-level attributes such as road type. We propose a set of benchmark tasks designed to push state-of-the-art performance and provide baseline studies. We show that current state-of-the-art methods still have a long way to go, and that the lack of diversity in existing datasets have prevented generalization to new environments. The dataset and benchmarks are available for academic research.) <|cite_end|> <|cite_start|> (Reference: Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition: Visual Place Recognition is a challenging task for robotics and autonomous systems, which must deal with the twin problems of appearance and viewpoint change in an always changing world. This paper introduces Patch-NetVLAD, which provides a novel formulation for combining the advantages of both local and global descriptor methods by deriving patch-level features from NetVLAD residuals. Unlike the fixed spatial neighborhood regime of existing local keypoint features, our method enables aggregation and matching of deep-learned local features defined over the feature-space grid. We further introduce a multi-scale fusion of patch features that have complementary scales (i.e. patch sizes) via an integral feature space and show that the fused features are highly invariant to both condition (season, structure, and illumination) and viewpoint (translation and rotation) changes. Patch-NetVLAD outperforms both global and local feature descriptor-based methods with comparable compute, achieving state-of-the-art visual place recognition results on a range of challenging real-world datasets, including winning the Facebook Mapillary Visual Place Recognition Challenge at ECCV2020. It is also adaptable to user requirements, with a speed-optimised version operating over an order of magnitude faster than the state-of-the-art. By combining superior performance with improved computational efficiency in a configurable framework, Patch-NetVLAD is well suited to enhance both stand-alone place recognition capabilities and the overall performance of SLAM systems.) <|cite_end|> <|cite_start|> (Reference: Attentional pyramid pooling of salient visual residuals for place recognition: The core of visual place recognition (VPR) lies in how to identify task-relevant visual cues and embed them into dis- criminative representations. Focusing on these two points, we propose a novel encoding strategy named Attentional Pyramid Pooling of Salient Visual Residuals (APPSVR). It incorporates three types of attention modules to model the saliency of local features in individual, spatial and cluster dimensions respectively. 
(1) To inhibit task-irrelevant local features, a semantic-reinforced local weighting scheme is employed for local feature refinement; (2) To leverage the spatial context, an attentional pyramid structure is constructed to adaptively encode regional features according to their relative spatial saliency; (3) To distinguish the different importance of visual clusters to the task, a parametric normalization is proposed to adjust their contribution to image descriptor generation. Experiments demonstrate APPSVR outperforms the existing techniques and achieves a new state-of-the-art performance on VPR benchmark datasets. The visualization shows the saliency map learned in a weakly supervised manner is largely consistent with human cognition.) <|cite_end|> <|cite_start|> (Reference: Semantic Reinforced Attention Learning for Visual Place Recognition: Large-scale visual place recognition (VPR) is inherently challenging because not all visual cues in the image are beneficial to the task. In order to highlight the task-relevant visual cues in the feature embedding, the existing attention mechanisms are either based on artificial rules or trained in a thorough data-driven manner. To fill the gap between the two types, we propose a novel Semantic Reinforced Attention Learning Network (SRALNet), in which the inferred attention can benefit from both semantic priors and data-driven fine-tuning. The contribution lies in two-folds. (1) To suppress misleading local features, an interpretable local weighting scheme is proposed based on hierarchical feature distribution. (2) By exploiting the interpretability of the local weighting scheme, a semantic constrained initialization is proposed so that the local attention can be reinforced by semantic priors. Experiments demonstrate that our method outperforms state-of-the-art techniques on city-scale VPR benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Adaptive-Attentive Geolocalization from few queries: a hybrid approach: We address the task of cross-domain visual place recognition, where the goal is to geolocalize a given query image against a labeled gallery, in the case where the query and the gallery belong to different visual domains. To achieve this, we focus on building a domain robust deep network by leveraging over an attention mechanism combined with few-shot unsupervised domain adaptation techniques, where we use a small number of unlabeled target domain images to learn about the target distribution. With our method, we are able to outperform the current state of the art while using two orders of magnitude less target domain images. Finally we propose a new large-scale dataset for cross-domain visual place recognition, called SVOX. The pytorch code is available at https://github.com/valeriopaolicelli/AdAGeo .) <|cite_end|> <|cite_start|> (Reference: Inside Out Visual Place Recognition: Visual Place Recognition (VPR) is generally concerned with localizing outdoor images. However, localizing indoor scenes that contain part of an outdoor scene can be of large value for a wide range of applications. In this paper, we introduce Inside Out Visual Place Recognition (IOVPR), a task aiming to localize images based on outdoor scenes visible through windows. For this task we present the new large-scale dataset Amsterdam-XXXL, with images taken in Amsterdam, that consists of 6.4 million panoramic street-view images and 1000 user-generated indoor queries. 
Additionally, we introduce a new training protocol Inside Out Data Augmentation to adapt Visual Place Recognition methods for localizing indoor images, demonstrating the potential of Inside Out Visual Place Recognition. We empirically show the benefits of our proposed data augmentation scheme on a smaller scale, whilst demonstrating the difficulty of this large-scale dataset for existing methods. With this new task we aim to encourage development of methods for IOVPR. The dataset and code are available for research purposes at https://github.com/saibr/IOVPR) <|cite_end|> (mostly relying on a triplet loss), which heavily depends on mining of negative examples across the training database. This operation is expensive, and it becomes prohibitive when the database is very large. Lightweight mining strategies that explore only a small pool of samples can reduce the duration of the mining phase <|cite_start|> (Reference: Mapillary Street-Level Sequences: A Dataset for Lifelong Place Recognition: Lifelong place recognition is an essential and challenging task in computer vision with vast applications in robust localization and efficient large-scale 3D reconstruction. Progress is currently hindered by a lack of large, diverse, publicly available datasets. We contribute with Mapillary Street-Level Sequences (SLS), a large dataset for urban and suburban place recognition from image sequences. It contains more than 1.6 million images curated from the Mapillary collaborative mapping platform. The dataset is orders of magnitude larger than current data sources, and is designed to reflect the diversities of true lifelong learning. It features images from 30 major cities across six continents, hundreds of distinct cameras, and substantially different viewpoints and capture times, spanning all seasons over a nine year period. All images are geo-located with GPS and compass, and feature high-level attributes such as road type. We propose a set of benchmark tasks designed to push state-of-the-art performance and provide baseline studies. We show that current state-of-the-art methods still have a long way to go, and that the lack of diversity in existing datasets have prevented generalization to new environments. The dataset and benchmarks are available for academic research.) <|cite_end|>, but they still result in slow convergence and possibly less effective use of the data. \myparagraph{Contributions.} In this paper, we address these two limitations with the following contributions: \begin{itemize} \item A new large-scale and dense dataset, called San Francisco eXtra Large ({\ourD}), that is roughly 30x bigger than what is currently available (see \cref{fig:map}). The dataset includes crowd-sourced (\ie, multi-domain) queries that make for a challenging problem. \item A procedure that uses a classification task as a proxy to train the model that is used at inference to extract discriminative descriptors for the retrieval. We call this method {\our}. {\our} is remarkably simple: it does not require mining negative examples, and it can effectively learn from massive collections of data. \end{itemize} Through extensive experimental validation, we demonstrate not only that {\our} requires roughly 80\% less GPU memory at training time than the current SOTA, but also that a simple model trained with {\our} on {\ourD} surpasses the SOTA while using 8x smaller embeddings. Additionally, we show that this model generalizes far better to other datasets.
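To make the scalability contrast concrete, the following minimal PyTorch-style sketch juxtaposes triplet training with database-wide hard-negative mining against the classification proxy; names such as `embed_model`, `database` and `place_labels` are hypothetical, geographic constraints on mining are omitted, and this is an illustration rather than the actual implementation of {\our}.

```python
import torch
import torch.nn.functional as F

def triplet_step_with_mining(embed_model, anchors, positives, database, margin=0.1):
    # Mining pass: embed the ENTIRE database to find, for each anchor, the most
    # similar (hardest) negative. This O(|database|) sweep is what becomes
    # prohibitive on city-scale collections (in practice it is done in chunks).
    with torch.no_grad():
        db_emb = F.normalize(embed_model(database), dim=1)
        a_emb = F.normalize(embed_model(anchors), dim=1)
    hardest = (a_emb @ db_emb.t()).argmax(dim=1)
    negatives = database[hardest]
    # Training pass on the mined triplets.
    a, p, n = embed_model(anchors), embed_model(positives), embed_model(negatives)
    return F.triplet_margin_loss(a, p, n, margin=margin)

def classification_proxy_step(embed_model, classifier, images, place_labels):
    # Classification-as-proxy training: geographic cells act as classes, so a
    # single cross-entropy step replaces mining entirely; the classifier head
    # is discarded at inference, where the embeddings are used for retrieval.
    logits = classifier(embed_model(images))
    return F.cross_entropy(logits, place_labels)
```

The key difference is that the mining pass scales with the database size, whereas the proxy step scales only with the batch. <|paper_end|>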
[ "<|reference_start|> Are Large-Scale 3D models really necessary for accurate visual localization?: Accurate visual localization is a key technology for autonomous navigation. 3D structure-based methods employ 3D models of the scene to estimate the full 6DOF pose of a camera very accurately. However, constructing (and extending) large-scale 3D models is still a significant challenge. In contrast, 2D image retrieval-based methods only require a database of geo-tagged images, which is trivial to construct and to maintain. They are often considered inaccurate since they only approximate the positions of the cameras. Yet, the exact camera pose can theoretically be recovered when enough relevant database images are retrieved. In this paper, we demonstrate experimentally that large-scale 3D models are not strictly necessary for accurate visual localization. We create reference poses for a large and challenging urban dataset. Using these poses, we show that combining image-based methods with local reconstructions results in a pose accuracy similar to the state-of-the-art structure-based methods. Our results suggest that we might want to reconsider the current approach for accurate large-scale localization. <|reference_end|>", "<|reference_start|> VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change: <|reference_end|>", "<|reference_start|> Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization: This paper tackles the problem of large-scale image-based localization (IBL) where the spatial location of a query image is determined by finding out the most similar reference images in a large database. For solving this problem, a critical task is to learn discriminative image representation that captures informative information relevant for localization. We propose a novel representation learning method having higher location-discriminating power. It provides the following contributions: 1) we represent a place (location) as a set of exemplar images depicting the same landmarks and aim to maximize similarities among intra-place images while minimizing similarities among inter-place images; 2) we model a similarity measure as a probability distribution on L_2-metric distances between intra-place and inter-place image representations; 3) we propose a new Stochastic Attraction and Repulsion Embedding (SARE) loss function minimizing the KL divergence between the learned and the actual probability distributions; 4) we give theoretical comparisons between SARE, triplet ranking and contrastive losses. It provides insights into why SARE is better by analyzing gradients. Our SARE loss is easy to implement and pluggable to any CNN. Experiments show that our proposed method improves the localization performance on standard benchmarks by a large margin. Demonstrating the broad applicability of our method, we obtained the third place out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our code and model are available at https://github.com/Liumouliu/deepIBL. <|reference_end|>", "<|reference_start|> Attentional pyramid pooling of salient visual residuals for place recognition: The core of visual place recognition (VPR) lies in how to identify task-relevant visual cues and embed them into dis- criminative representations. Focusing on these two points, we propose a novel encoding strategy named Attentional Pyramid Pooling of Salient Visual Residuals (APPSVR). 
It incorporates three types of attention modules to model the saliency of local features in individual, spatial and cluster dimensions respectively. (1) To inhibit task-irrelevant local features, a semantic-reinforced local weighting scheme is employed for local feature refinement; (2) To leverage the spatial context, an attentional pyramid structure is constructed to adaptively encode regional features according to their relative spatial saliency; (3) To distinguish the different importance of visual clusters to the task, a parametric normalization is proposed to adjust their contribution to image descriptor generation. Experiments demonstrate APPSVR outperforms the existing techniques and achieves a new state-of-the-art performance on VPR benchmark datasets. The visualization shows the saliency map learned in a weakly supervised manner is largely consistent with human cognition. <|reference_end|>" ]
[ 5, 6, 14, 33 ]
{"<|cite_2|>": "arxiv-170354", "<|multi_cite_3_1|>": "ss-1268379", "<|multi_cite_3_3|>": "ss-1522604", "<|multi_cite_3_4|>": "arxiv-217027", "<|multi_cite_3_5|>": "arxiv-324587", "<|multi_cite_3_6|>": "ss-1236945", "<|multi_cite_3_7|>": "ss-1482114", "<|multi_cite_4_1|>": "ss-1941386", "<|multi_cite_4_2|>": "ss-1941386", "<|multi_cite_4_3|>": "ss-1260030", "<|multi_cite_4_4|>": "ss-709870", "<|multi_cite_4_5|>": "ss-2071826", "<|multi_cite_4_6|>": "arxiv-194535", "<|multi_cite_5_2|>": "ss-1302036", "<|multi_cite_5_3|>": "arxiv-170354", "<|multi_cite_5_4|>": "arxiv-269927", "<|multi_cite_5_5|>": "arxiv-296125", "<|multi_cite_5_6|>": "arxiv-383453", "<|multi_cite_5_7|>": "ss-1523193", "<|cite_6|>": "arxiv-217027", "<|cite_7|>": "ss-1284748", "<|multi_cite_8_2|>": "ss-1522604", "<|multi_cite_8_3|>": "ss-1514495", "<|multi_cite_8_4|>": "ss-682917", "<|multi_cite_8_5|>": "ss-1218950", "<|multi_cite_9_1|>": "ss-1523193", "<|multi_cite_9_2|>": "ss-1218339", "<|multi_cite_9_4|>": "ss-1218951", "<|multi_cite_10_2|>": "ss-1302036", "<|multi_cite_10_3|>": "arxiv-269927", "<|multi_cite_10_4|>": "arxiv-170354", "<|multi_cite_10_5|>": "ss-1523193", "<|multi_cite_10_6|>": "arxiv-324587", "<|multi_cite_10_7|>": "ss-1347634", "<|multi_cite_10_8|>": "arxiv-361718", "<|multi_cite_10_9|>": "arxiv-296125", "<|multi_cite_10_10|>": "arxiv-383453", "<|cite_12|>": "ss-1523193"}
2212.01703
<|paper_start|> Title: Active learning using adaptable task-based prioritisation Abstract: Active learning using adaptable task-based prioritisation: Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures the priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for the novel class of kidney, unseen in training, using between approximately 40\% and 60\% of labels otherwise required with other heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers a performance improvement of 22.6\% and 10.2\% in Dice score, for tasks of kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies. Introduction \label{sec:introduction} Medical imaging tasks are increasingly being automated using machine learning by utilising expert-annotated data <|cite_start|> (Reference: Deep learning in medical imaging: general overview: The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potentially to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.) <|cite_end|> <|cite_start|> (Reference: Machine Learning for Medical Imaging: Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied.
Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017.) <|cite_end|>. Supervised learning using expert annotations allows for reliable predictions from the trained model; however, this expert annotation is often expensive. Applications such as complex surgical planning thus become challenging to develop, due to the need for many structures to be annotated at the voxel level, and to the different regions of interest (ROIs) required by subsequent procedures mandated by local expertise and protocols. This is further complicated by the now well-known problem of generalisation of deep models across different institutions, all of which are often under data size constraints. Active learning (AL) aims to directly address the expensive data labelling by prioritising a subset of available unlabelled data for annotation, such that the machine learning models trained with these annotated data reach a predefined, or the same, performance level with fewer labelled samples than models trained with all data labelled. The efficiency of this performance convergence, in terms of the quantity of annotated data, i.e. the required number of \textit{AL iterations}, measures the performance of AL methods, and is often compared against random sampling without prioritisation <|cite_start|> (Reference: A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis: Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in the clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future Prospective and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.) <|cite_end|> <|cite_start|> (Reference: Active {{Learning Literature Survey}}: The most time consuming and expensive task in machine learning is the gathering of labeled data to train the model or to estimate its parameters. In the real-world scenario, the availability of labeled data is scarce and we have limited resources to label the abundantly available unlabeled data. Hence it makes sense to pick only the most informative instances from the unlabeled data and request an expert to provide the label for that instance. Active learning algorithms aim at minimizing the amount of labeled data required to achieve the goal of the machine learning task in hand by strategically selecting the data instance to be labeled by the expert. A lot of research has been conducted in this area over the past two decades leading to great improvements in performance of several existing machine learning algorithms and has also been applied to diverse fields like text classification, information retrieval, computer vision and bioinformatics, to name a few. This survey aims at providing an insight into the research in this area and categorizes the diverse algorithms proposed based on main characteristics. We also provides a desk where different active learning algorithms can be compared by evaluation on benchmark datasets.) <|cite_end|>.
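To fix ideas, the following minimal Python sketch illustrates the generic batch-mode AL loop described above; `score`, `annotate` and `train` are hypothetical placeholders for a prioritisation metric, an expert-labelling oracle and a model-fitting routine, respectively, rather than any specific method from the cited literature.

```python
def active_learning_loop(unlabelled, score, annotate, train,
                         n_iterations, batch_size):
    # Generic batch-mode AL: at each iteration, rank the unlabelled pool with a
    # prioritisation metric, send the top batch to the expert, and retrain.
    labelled, model = [], None
    for _ in range(n_iterations):
        # score(None, x) is assumed to fall back to a data-only heuristic
        # (e.g. random) before the first model has been trained.
        ranked = sorted(unlabelled, key=lambda x: score(model, x), reverse=True)
        batch, unlabelled = ranked[:batch_size], ranked[batch_size:]
        labelled += [annotate(x) for x in batch]  # expert labels the batch
        model = train(labelled)                   # retrain on all labels so far
    return model
```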
Therefore, metrics that evaluate how data samples affect AL convergence (hereinafter referred to as prioritisation metrics) are the key to the goal of fast convergence, i.e. using as few labelled samples as possible. Informativeness and representativeness are regarded as the main criteria in existing prioritisation metrics <|cite_start|> (Reference: A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis: Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain.
We evaluate four key areas that we consider vital for deep learning in the clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future Prospective and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.) <|cite_end|>. Informativeness estimates the information gained if a particular labelled sample is added to training. Uncertainty with respect to the given samples is often used to quantify the informativeness, as it measures the amount of uncertain, and therefore likely unknown, information that could be learnt by including the samples. For tasks like image segmentation, a summation of the lowest class probabilities over all pixels can be used <|cite_start|> (Reference: {A mathematical theory of communication: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.) <|cite_end|> <|cite_start|> (Reference: A Sequential Algorithm for Training Text Classifiers: The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.) <|cite_end|>, while high class probabilities are assumed to indicate high prediction confidence. An ensemble of multiple models was proposed for quantifying uncertainty <|cite_start|> (Reference: Active {{Learning Literature Survey}}: The most time consuming and expensive task in machine learning is the gathering of labeled data to train the model or to estimate its parameters. In the real-world scenario, the availability of labeled data is scarce and we have limited resources to label the abundantly available unlabeled data. Hence it makes sense to pick only the most informative instances from the unlabeled data and request an expert to provide the label for that instance.
Active learning algorithms aim at minimizing the amount of labeled data required to achieve the goal of the machine learning task in hand by strategically selecting the data instance to be labeled by the expert. A lot of research has been conducted in this area over the past two decades leading to great improvements in performance of several existing machine learning algorithms and has also been applied to diverse fields like text classification, information retrieval, computer vision and bioinformatics, to name a few. This survey aims at providing an insight into the research in this area and categorizes the diverse algorithms proposed based on main characteristics. We also provides a desk where different active learning algorithms can be compared by evaluation on benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Is segmentation uncertainty useful?: Probabilistic image segmentation encodes varying prediction confidence and inherent ambiguity in the segmentation problem. While different probabilistic segmentation models are designed to capture different aspects of segmentation uncertainty and ambiguity, these modelling differences are rarely discussed in the context of applications of uncertainty. We consider two common use cases of segmentation uncertainty, namely assessment of segmentation quality and active learning. We consider four established strategies for probabilistic segmentation, discuss their modelling capabilities, and investigate their performance in these two tasks. We find that for all models and both tasks, returned uncertainty correlates positively with segmentation error, but does not prove to be useful for active learning.) <|cite_end|>. Monte-Carlo Dropout-based uncertainty estimation <|cite_start|> (Reference: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning: Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.) <|cite_end|> was also proposed and may be viewed as a special case of ensemble methods. 
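For concreteness, the sketch below gives minimal PyTorch-style implementations of two such informativeness scores: a least-confidence score summed over voxels, and a Monte-Carlo Dropout disagreement score. Here `model` and `volume` are hypothetical placeholders, and this is an illustrative approximation rather than the exact formulation of any cited work.

```python
import torch
import torch.nn.functional as F

def least_confidence_score(model, volume):
    # Sum of (1 - max class probability) over all voxels: large when the model
    # is unsure about many voxels of this unlabelled image.
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(volume), dim=1)  # (1, C, D, H, W)
    return (1.0 - probs.max(dim=1).values).sum().item()

def mc_dropout_score(model, volume, n_samples=10):
    # Average per-voxel variance of predicted probabilities across stochastic
    # forward passes with dropout kept active. Note: .train() also switches
    # normalisation layers to train mode; in practice only the dropout layers
    # would be left active.
    model.train()
    with torch.no_grad():
        samples = torch.stack([F.softmax(model(volume), dim=1)
                               for _ in range(n_samples)])
    return samples.var(dim=0).mean().item()
```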
Representativeness measures the similarity between data samples, such that an effective AL strategy can be designed to prioritise those samples that efficiently represent many others <|cite_start|> (Reference: A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis: Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in the clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future Prospective and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.) <|cite_end|>. Distances between images have been proposed, for example, based on features extracted from a model trained for a different, usually unsupervised, task such as self-reconstruction <|cite_start|> (Reference: Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation: Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc), to utilize deep learning on a new application, it usually needs a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: With limited effort (e.g., time) for annotation, what instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional network (FCN) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation.
Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions by our method, state-of-the-art segmentation performance can be achieved by using only 50% of training data.) <|cite_end|> <|cite_start|> (Reference: MedAL: Accurate and robust deep active learning for medical image analysis: Deep learning models have been successfully used in medical image analysis problems but they require a large amount of labeled images to obtain good performance. However, such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on 3 medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling.) <|cite_end|> <|cite_start|> (Reference: Active Learning for Segmentation by Optimizing Content Information for Maximal Entropy: Segmentation is essential for medical image analysis tasks such as intervention planning, therapy guidance, diagnosis, treatment decisions. Deep learning is becoming increasingly prominent for segmentation, where the lack of annotations, however, often becomes the main limitation. Due to privacy concerns and ethical considerations, most medical datasets are created, curated, and allow access only locally. Furthermore, current deep learning methods are often suboptimal in translating anatomical knowledge between different medical imaging modalities. Active learning can be used to select an informed set of image samples to request for manual annotation, in order to best utilize the limited annotation time of clinical experts for optimal outcomes, which we focus on in this work. Our contributions herein are two fold: (1) we enforce domain-representativeness of selected samples using a proposed penalization scheme to maximize information at the network abstraction layer, and (2) we propose a Borda-count based sample querying scheme for selecting samples for segmentation. Comparative experiments with baseline approaches show that the samples queried with our proposed method, where both above contributions are combined, result in significantly improved segmentation performance for this active learning task.) <|cite_end|>. Representativeness can also be combined with informativeness measures <|cite_start|> (Reference: Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation: Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. 
However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc), to utilize deep learning on a new application, it usually needs a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: With limited effort (e.g., time) for annotation, what instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional network (FCN) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions by our method, state-of-the-art segmentation performance can be achieved by using only 50% of training data.) <|cite_end|> <|cite_start|> (Reference: MedAL: Accurate and robust deep active learning for medical image analysis: Deep learning models have been successfully used in medical image analysis problems but they require a large amount of labeled images to obtain good performance. However, such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on 3 medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling.) <|cite_end|> <|cite_start|> (Reference: Active Learning for Segmentation by Optimizing Content Information for Maximal Entropy: Segmentation is essential for medical image analysis tasks such as intervention planning, therapy guidance, diagnosis, treatment decisions. Deep learning is becoming increasingly prominent for segmentation, where the lack of annotations, however, often becomes the main limitation. Due to privacy concerns and ethical considerations, most medical datasets are created, curated, and allow access only locally. Furthermore, current deep learning methods are often suboptimal in translating anatomical knowledge between different medical imaging modalities. Active learning can be used to select an informed set of image samples to request for manual annotation, in order to best utilize the limited annotation time of clinical experts for optimal outcomes, which we focus on in this work. 
Our contributions herein are two fold: (1) we enforce domain-representativeness of selected samples using a proposed penalization scheme to maximize information at the network abstraction layer, and (2) we propose a Borda-count based sample querying scheme for selecting samples for segmentation. Comparative experiments with baseline approaches show that the samples queried with our proposed method, where both above contributions are combined, result in significantly improved segmentation performance for this active learning task.) <|cite_end|>.
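As a deliberately simplified illustration of such distance-based representativeness scores, the following sketch ranks unlabelled images by their mean feature-space distance to the already-labelled set; `feature_extractor` is a hypothetical stand-in for, e.g., the encoder of a self-reconstruction model, and the scoring rule is generic rather than that of any cited method.

```python
import torch
import torch.nn.functional as F

def representativeness_scores(feature_extractor, unlabelled, labelled):
    # Embed both pools with a model trained on an unsupervised task (e.g.
    # self-reconstruction), then score each unlabelled image by its mean
    # distance to the labelled set: far-away samples add new coverage.
    with torch.no_grad():
        u = F.normalize(feature_extractor(unlabelled), dim=1)  # (N_u, d)
        l = F.normalize(feature_extractor(labelled), dim=1)    # (N_l, d)
    return torch.cdist(u, l).mean(dim=1)  # higher = less represented so far
```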
However, general prioritisation metrics, such as Monte-Carlo Dropout and ensemble, have shown nearly equivalent performance to random sampling <|cite_start|> (Reference: Is segmentation uncertainty useful?: Probabilistic image segmentation encodes varying prediction confidence and inherent ambiguity in the segmentation problem. While different probabilistic segmentation models are designed to capture different aspects of segmentation uncertainty and ambiguity, these modelling differences are rarely discussed in the context of applications of uncertainty. We consider two common use cases of segmentation uncertainty, namely assessment of segmentation quality and active learning. We consider four established strategies for probabilistic segmentation, discuss their modelling capabilities, and investigate their performance in these two tasks. We find that for all models and both tasks, returned uncertainty correlates positively with segmentation error, but does not prove to be useful for active learning.) <|cite_end|>. The fixed and non-adaptive nature of these metrics could lead to adverse consequences. For example, high uncertainty in samples may in fact be a result of label error or inconsistency, due to manual annotation difficulty <|cite_start|> (Reference: Is segmentation uncertainty useful?: Probabilistic image segmentation encodes varying prediction confidence and inherent ambiguity in the segmentation problem. While different probabilistic segmentation models are designed to capture different aspects of segmentation uncertainty and ambiguity, these modelling differences are rarely discussed in the context of applications of uncertainty. We consider two common use cases of segmentation uncertainty, namely assessment of segmentation quality and active learning.) <|cite_end|>. It has been speculated that not accounting for the impact of annotated samples, \textit{post annotation}, and assuming that annotations are unambiguous and noise-free have led to ineffective prioritisation metrics <|cite_start|> (Reference: Is segmentation uncertainty useful?: Probabilistic image segmentation encodes varying prediction confidence and inherent ambiguity in the segmentation problem. While different probabilistic segmentation models are designed to capture different aspects of segmentation uncertainty and ambiguity, these modelling differences are rarely discussed in the context of applications of uncertainty. We consider two common use cases of segmentation uncertainty, namely assessment of segmentation quality and active learning. We consider four established strategies for probabilistic segmentation, discuss their modelling capabilities, and investigate their performance in these two tasks. We find that for all models and both tasks, returned uncertainty correlates positively with segmentation error, but does not prove to be useful for active learning.) <|cite_end|> <|cite_start|> (Reference: Learning how to Active Learn: A Deep Reinforcement Learning Approach: Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods, however the effectiveness of such methods is limited and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing the active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning.) <|cite_end|>. This is consistent with our preliminary results in a kidney segmentation task on 3D CT images (summarised in Fig. \ref{fig:ablat}, with further details discussed in Sec. \ref{sec:exp}). In contrast, task-based prioritisation can utilise task-specific feedback in formulating the prioritisation, such as the performance of a trained model for the subsequent task. This task-based feedback enables post-annotation impact to be measured during model training <|cite_start|> (Reference: Learning how to Active Learn: A Deep Reinforcement Learning Approach: Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods, however the effectiveness of such methods is limited and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing the active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning.) <|cite_end|> <|cite_start|> (Reference: Active One-shot Learning: Recent advances in one-shot learning have produced models that can learn from a handful of labeled examples, for passive classification and regression tasks. This paper combines reinforcement learning with one-shot learning, allowing the model to decide, during classification, which examples are worth labeling. We introduce a classification task in which a stream of images are presented and, on each time step, a decision must be made to either predict a label or pay to receive the correct label. We present a recurrent neural network based action-value function, and demonstrate its ability to learn how and when to request labels.
Through the choice of reward function, the model can achieve a higher prediction accuracy than a similar model on a purely supervised task, or trade prediction accuracy for fewer label requests.) <|cite_end|> and may alleviate the discussed limitations for individual tasks. In this work, we focus on organ segmentation on 3D abdominal CT images. Multiorgan segmentation has a number of clinical applications <|cite_start|> (Reference: A review of deep learning based methods for medical image multi-organ segmentation.: ) <|cite_end|> <|cite_start|> (Reference: {Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks: Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.) <|cite_end|>. Planning laparoscopic liver resection or liver surgery in general is one such example, in which localising the liver, liver vessels and surrounding anatomy is necessary for existing inter-modality image registration <|cite_start|> (Reference: Deep hashing for global registration of untracked 2D laparoscopic ultrasound to CT: ) <|cite_end|> and useful for subsequent navigation <|cite_start|> (Reference: A novel ultrasound-based registration for image-guided laparoscopic liver ablation: Background. Patient-to-image registration is a core process of image-guided surgery (IGS) systems. We present a novel registration approach for application in laparoscopic liver surgery, which reconstructs in real time an intraoperative volume of the underlying intrahepatic vessels through an ultrasound (US) sweep process. Methods. An existing IGS system for an open liver procedure was adapted, with suitable instrument tracking for laparoscopic equipment. Registration accuracy was evaluated on a realistic phantom by computing the target registration error (TRE) for 5 intrahepatic tumors. The registration work flow was evaluated by computing the time required for performing the registration. Additionally, a scheme for intraoperative accuracy assessment by visual overlay of the US image with preoperative image data was evaluated. Results. 
The proposed registration method achieved an average TRE of 7.2 mm in the left lobe and 9.7 mm in the right lobe. The average time required for performing the registration was 12 minutes. A positive correlation was found between the intraoperative accuracy assessment and the obtained TREs. Conclusions. The registration accuracy of the proposed method is adequate for laparoscopic intrahepatic tumor targeting. The presented approach is feasible and fast and may, therefore, not be disruptive to the current surgical work flow.) <|cite_end|> during the procedure. Moreover, AL will greatly benefit the development of automatic segmentation models for different clinical requirements, because of the potentially diverging protocol-specific needs, such as the types of vessels and/or organs required for different registration algorithms and changing local image-navigating procedures. We thus identify two aspects for a desirable AL approach in this application: 1) prioritising CT images to be annotated for the required ROI types (organs or anatomical structures), potentially new and unseen in developing such prioritisation strategy, and 2) the ability to adapt or generalise such prioritisation to image data from a different and novel institute. We first propose a prioritisation metric based on direct feedback from the segmentation task using annotated samples, which is learnt using reinforcement learning (RL) based meta-learning. Second, we outline a mechanism, using the proposed meta-RL, to allow for the metric to be adapted to new data distributions including data from new institutes and for segmenting new ROI classes i.e. organs or structures unseen in training. In our formulation, task-based feedback for AL is delivered by means of a reward signal in the RL algorithm, in order to learn a prioritisation metric function. The reward signal is computed by measuring performance of a partially trained model on a set of samples for which annotations are available. The meta-RL further enables such prioritisation function to be useful across wider domains than with ``simple'' RL <|cite_start|> (Reference: RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning: Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. 
On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems.) <|cite_end|> <|cite_start|> (Reference: Learning to reinforcement learn: In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning, Fast and Slow: ) <|cite_end|>. It is important to highlight the difference between the proposed prioritisation metrics for AL and few-shot learning, which requires a small number of annotated samples from the novel classes and/or institutions during adaptation, e.g. <|cite_start|> (Reference: Few-shot image segmentation for cross-institution male pelvic organs using registration-assisted prototypical learning: The ability to adapt medical image segmentation networks for a novel class such as an unseen anatomical or pathological structure, when only a few labelled examples of this class are available from local healthcare providers, is sought-after. This potentially addresses two widely recognised limitations in deploying modern deep learning models to clinical practice, expertise-and-labour-intensive labelling and cross-institution generalisation. This work presents the first 3D few-shot interclass segmentation network for medical images, using a labelled multi-institution dataset from prostate cancer patients with eight regions of interest. We propose an image alignment module registering the predicted segmentation of both query and support data, in a standard prototypical learning algorithm, to a reference atlas space. The built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects, regardless whether they are from the same institution or not. Experimental results demonstrated that the proposed registration-assisted prototypical learning significantly improved segmentation accuracy (p-values<0.01) on query data from a holdout institution, with varying availability of support data from multiple institutions. We also report the additional benefits of the proposed 3D networks with 75% fewer parameters and an arguably simpler implementation, compared with existing 2D few-shot approaches that segment 2D slices of volumetric medical images.)
<|cite_end|> <|cite_start|> (Reference: Semi-supervised few-shot learning for medical image segmentation: Recent years have witnessed the great progress of deep neural networks on semantic segmentation, particularly in medical imaging. Nevertheless, training high-performing models require large amounts of pixel-level ground truth masks, which can be prohibitive to obtain in the medical domain. Furthermore, training such models in a low-data regime highly increases the risk of overfitting. Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm, which addresses this shortcoming by learning a novel class from only a few labeled examples. In this context, a segmentation model is trained on episodes, which represent different segmentation problems, each of them trained with a very small labeled dataset. In this work, we propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode. To handle this new learning paradigm, we propose to include surrogate tasks that can leverage very powerful supervisory signals --derived from the data itself-- for semantic feature learning. We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations, which ultimately results in better generability to unseen tasks. We demonstrate the efficiency of our method in the task of skin lesion segmentation in two publicly available datasets. Furthermore, our approach is general and model-agnostic, which can be combined with different deep architectures.) <|cite_end|>. It is also interesting to compare our proposed methods with recent image quality assessment approaches. For example, although aiming for a distinct objective of prioritising data to label, the proposed prioritisation metrics share technical similarities with previous work that quantifies task amenability of samples using direct feedback from a clinical task such as organ segmentation <|cite_start|> (Reference: Learning image quality assessment by reinforcing task amenable data selection: In this paper, we consider a type of image quality assessment as a task-specific measurement, which can be used to select images that are more amenable to a given target task, such as image classification or segmentation. We propose to train simultaneously two neural networks for image selection and a target task using reinforcement learning. A controller network learns an image selection policy by maximising an accumulated reward based on the target task performance on the controller-selected validation set, whilst the target task predictor is optimised using the training set. The trained controller is therefore able to reject those images that lead to poor accuracy in the target task. In this work, we show that the controller-predicted image quality can be significantly different from the task-specific image quality labels that are manually defined by humans. Furthermore, we demonstrate that it is possible to learn effective image quality assessment without using a ``clean'' validation set, thereby avoiding the requirement for human labelling of images with respect to their amenability for the task. 
Using $6712$, labelled and segmented, clinical ultrasound images from $259$ patients, experimental results on holdout data show that the proposed image quality assessment achieved a mean classification accuracy of $0.94\pm0.01$ and a mean segmentation Dice of $0.89\pm0.02$, by discarding $5\%$ and $15\%$ of the acquired images, respectively. The significantly improved performance was observed for both tested tasks, compared with the respective $0.90\pm0.01$ and $0.82\pm0.02$ from networks without considering task amenability. This enables image quality feedback during real-time ultrasound acquisition among many other medical imaging applications.) <|cite_end|> <|cite_start|> (Reference: Image quality assessment for machine learning tasks using meta-reinforcement learning: In this paper, we consider image quality assessment (IQA) as a measure of how images are amenable with respect to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability for both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach, using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.) <|cite_end|> <|cite_start|> (Reference: Adaptable image quality assessment using meta-reinforcement learning of task amenability: The performance of many medical image analysis tasks are strongly associated with image data quality. When developing modern deep learning algorithms, rather than relying on subjective (human-based) image quality assessment (IQA), task amenability potentially provides an objective measure of task-specific image quality. To predict task amenability, an IQA agent is trained using reinforcement learning (RL) with a simultaneously optimised task predictor, such as a classification or segmentation neural network. In this work, we develop transfer learning or adaptation strategies to increase the adaptability of both the IQA agent and the task predictor so that they are less dependent on high-quality, expert-labelled training data. The proposed transfer learning strategy re-formulates the original RL problem for task amenability in a meta-reinforcement learning (meta-RL) framework. The resulting algorithm facilitates efficient adaptation of the agent to different definitions of image quality, each with its own Markov decision process environment including different images, labels and an adaptable task predictor. Our work demonstrates that the IQA agents pre-trained on non-expert task labels can be adapted to predict task amenability as defined by expert task labels, using only a small set of expert labels. 
Using 6644 clinical ultrasound images from 249 prostate cancer patients, our results for image classification and segmentation tasks show that the proposed IQA method can be adapted using data with as few as respective 19.7% and 29.6% expert-reviewed consensus labels and still achieve comparable IQA and task performance, which would otherwise require a training dataset with 100% expert labels.) <|cite_end|> <|cite_start|> (Reference: Image quality assessment by overlapping task-specific and task-agnostic measures: application to prostate multiparametric MR images for cancer segmentation: Image quality assessment (IQA) in medical imaging can be used to ensure that downstream clinical tasks can be reliably performed. Quantifying the impact of an image on the specific target tasks, also named as task amenability, is needed. A task-specific IQA has recently been proposed to learn an image-amenability-predicting controller simultaneously with a target task predictor. This allows for the trained IQA controller to measure the impact an image has on the target task performance, when this task is performed using the predictor, e.g. segmentation and classification neural networks in modern clinical applications. In this work, we propose an extension to this task-specific IQA approach, by adding a task-agnostic IQA based on auto-encoding as the target task. Analysing the intersection between low-quality images, deemed by both the task-specific and task-agnostic IQA, may help to differentiate the underpinning factors that caused the poor target task performance. For example, common imaging artefacts may not adversely affect the target task, which would lead to a low task-agnostic quality and a high task-specific quality, whilst individual cases considered clinically challenging, which can not be improved by better imaging equipment or protocols, is likely to result in a high task-agnostic quality but a low task-specific quality. We first describe a flexible reward shaping strategy which allows for the adjustment of weighting between task-agnostic and task-specific quality scoring. Furthermore, we evaluate the proposed algorithm using a clinically challenging target task of prostate tumour segmentation on multiparametric magnetic resonance (mpMR) images, from 850 patients. The proposed reward shaping strategy, with appropriately weighted task-specific and task-agnostic qualities, successfully identified samples that need re-acquisition due to defected imaging process.) <|cite_end|>. Moreover, the proposed AL strategy is designed for medical images, as opposed to the language data used in previously proposed AL approaches that also utilised RL, e.g. Meng et al. <|cite_start|> (Reference: Learning how to Active Learn: A Deep Reinforcement Learning Approach: Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods, however the effectiveness of such methods is limited and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing the active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. 
We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning.) <|cite_end|> and Woodward et al. <|cite_start|> (Reference: Active One-shot Learning: Recent advances in one-shot learning have produced models that can learn from a handful of labeled examples, for passive classification and regression tasks. This paper combines reinforcement learning with one-shot learning, allowing the model to decide, during classification, which examples are worth labeling. We introduce a classification task in which a stream of images are presented and, on each time step, a decision must be made to either predict a label or pay to receive the correct label. We present a recurrent neural network based action-value function, and demonstrate its ability to learn how and when to request labels. Through the choice of reward function, the model can achieve a higher prediction accuracy than a similar model on a purely supervised task, or trade prediction accuracy for fewer label requests.) <|cite_end|>, with algorithmic differences including problem definition, labelled-example requirement, reward formulation and training methodology. The contributions of this work are summarised as follows: 1) We proposed a task-based AL metric with task-specific feedback from the targeted segmentation task; 2) We proposed to learn the prioritisation metric using meta-RL with adaptability over different imaging institutes and organ segmentation tasks; 3) We evaluated our proposed framework using real patient CT images, covering segmentation tasks for anatomical structures such as liver, pancreas, spleen, liver vessels, gallbladder, adrenal glands (left and right), major vessels (aorta, vena cava and portal vein) and stomach; subsequently, the trained system was evaluated for AL on holdout tasks, segmenting liver vessels and kidneys, using data from new institutes. <|paper_end|>
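As a concrete illustration of the reward-driven prioritisation loop described in the paper above, the following minimal Python sketch is appended here for the reader; it is not the authors' code. The linear Controller, the Dice-based reward and the REINFORCE-style update are simplifying assumptions, the segmentation model is abstracted behind a caller-supplied train_fn, and the without-replacement sampling correction and the meta-RL adaptation across institutes are omitted.

import numpy as np

rng = np.random.default_rng(0)

def dice(pred, gt, eps=1e-6):
    # Dice overlap between binary prediction and ground-truth masks.
    inter = np.sum(pred * gt)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(gt) + eps)

class Controller:
    # Hypothetical linear scoring policy over d-dimensional image features.
    def __init__(self, d):
        self.w = np.zeros(d)
        self.baseline = 0.0  # running reward baseline for variance reduction

    def probs(self, feats):
        # Softmax selection probabilities over the unlabelled pool; feats: (n, d).
        s = feats @ self.w
        e = np.exp(s - s.max())
        return e / e.sum()

    def reinforce(self, feats, picked, reward, lr=0.1):
        # Policy-gradient step, treating the k picks as independent draws.
        p = self.probs(feats)
        grad = feats[picked].sum(axis=0) - len(picked) * (p[:, None] * feats).sum(axis=0)
        self.w += lr * (reward - self.baseline) * grad
        self.baseline = 0.9 * self.baseline + 0.1 * reward

def al_round(controller, feats, train_fn, val_pairs, k=4):
    # One active-learning round: pick k samples to annotate, retrain the
    # segmenter on them, and feed validation Dice back as the RL reward.
    p = controller.probs(feats)
    picked = rng.choice(len(feats), size=k, replace=False, p=p)
    predict = train_fn(picked)  # caller retrains the segmenter, returns a predictor
    reward = float(np.mean([dice(predict(x), y) for x, y in val_pairs]))
    controller.reinforce(feats, picked, reward)
    return picked, reward

The baseline subtraction mirrors standard variance-reduction practice in policy-gradient methods; the validation Dice plays the role of the task-based reward signal that distinguishes this scheme from fixed uncertainty heuristics.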
[ "<|reference_start|> MedAL: Accurate and robust deep active learning for medical image analysis: Deep learning models have been successfully used in medical image analysis problems but they require a large amount of labeled images to obtain good performance. However, such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on 3 medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling. <|reference_end|>", "<|reference_start|> {Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks: Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures. <|reference_end|>", "<|reference_start|> RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning: Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a \"fast\" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. 
In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (\"slow\") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the \"fast\" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems. <|reference_end|>", "<|reference_start|> Learning how to Active Learn: A Deep Reinforcement Learning Approach: Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate. This is usually done using heuristic selection methods, however the effectiveness of such methods is limited and moreover, the performance of heuristics varies between datasets. To address these shortcomings, we introduce a novel formulation by reframing the active learning as a reinforcement learning problem and explicitly learning a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows the selection policy learned using simulation on one language to be transferred to other languages. We demonstrate our method using cross-lingual named entity recognition, observing uniform improvements over traditional active learning. <|reference_end|>" ]
[ 15, 24, 27, 36 ]
{"<|multi_cite_1_1|>": "ss-1516592", "<|multi_cite_1_2|>": "ss-1844812", "<|multi_cite_2_1|>": "arxiv-227547", "<|multi_cite_2_2|>": "ss-815267", "<|cite_3|>": "arxiv-227547", "<|multi_cite_4_1|>": "ss-677957", "<|multi_cite_4_2|>": "arxiv-668391", "<|multi_cite_5_1|>": "ss-815267", "<|multi_cite_5_2|>": "arxiv-330967", "<|cite_6|>": "arxiv-78927", "<|cite_7|>": "arxiv-227547", "<|multi_cite_8_1|>": "arxiv-126833", "<|multi_cite_8_2|>": "ss-1427979", "<|multi_cite_8_3|>": "arxiv-166374", "<|multi_cite_9_1|>": "arxiv-126833", "<|multi_cite_9_2|>": "ss-1427979", "<|multi_cite_9_3|>": "arxiv-166374", "<|cite_10|>": "arxiv-330967", "<|cite_11|>": "arxiv-330967", "<|multi_cite_12_1|>": "arxiv-330967", "<|multi_cite_12_2|>": "arxiv-131442", "<|multi_cite_13_1|>": "arxiv-131442", "<|multi_cite_13_2|>": "arxiv-117130", "<|multi_cite_14_1|>": "ss-1334488", "<|multi_cite_14_2|>": "ss-1271548", "<|cite_15|>": "ss-1201534", "<|cite_16|>": "ss-1565883", "<|multi_cite_17_1|>": "arxiv-109711", "<|multi_cite_17_2|>": "arxiv-110382", "<|multi_cite_17_3|>": "ss-1254598", "<|multi_cite_18_1|>": "arxiv-393024", "<|multi_cite_18_2|>": "arxiv-254430", "<|multi_cite_19_1|>": "arxiv-321292", "<|multi_cite_19_2|>": "arxiv-408622", "<|multi_cite_19_3|>": "arxiv-360060", "<|multi_cite_19_4|>": "arxiv-400348", "<|cite_20|>": "arxiv-131442", "<|cite_21|>": "arxiv-117130"}
2306.08424
<|paper_start|> Title: Selective Concept Models: Permitting Stakeholder Customisation at Test-Time Abstract: Selective Concept Models: Permitting Stakeholder Customisation at Test-Time: Concept-based models perform prediction using a set of concepts that are interpretable to stakeholders. However, such models often involve a fixed, large number of concepts, which may place a substantial cognitive load on stakeholders. We propose Selective COncept Models (SCOMs) which make predictions using only a subset of concepts and can be customised by stakeholders at test-time according to their preferences. We show that SCOMs only require a fraction of the total concepts to achieve optimal accuracy on multiple real-world datasets. Further, we collect and release a new dataset, CUB-Sel, consisting of human concept set selections for 900 bird images from the popular CUB dataset. Using CUB-Sel, we show that humans have unique individual preferences for the choice of concepts they prefer to reason about, and struggle to identify the most theoretically informative concepts. The customisation and concept selection provided by SCOM improves the efficiency of interpretation and intervention for stakeholders. Introduction Humans can reason about a limited number of concepts at once when making decisions <|cite_start|> (Reference: {The magical number seven, plus or minus two: Some limits on our capacity for processing information.: First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence or chunks, we manage to break (or at least stretch) this informational bottleneck. Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. Third, the concepts and measures provided by the theory of information provide a quantitative way of getting at some of these questions. The theory provides us with a yardstick for calibrating our stimulus materials and for measuring the performance of our subjects. In the interests of communication I have suppressed the technical details of information measurement and have tried to express the ideas in more familiar terms; I hope this paraphrase will not lead you to think they are not useful in research. Informational concepts have already proved valuable in the study of discrimination and of language; they promise a great deal in the study of learning and memory; and it has even been proposed that they can be useful in the study of concept formation. A lot of questions that seemed fruitless twenty or thirty years ago may now be worth another look. 
In fact, I feel that my story here must stop just as it begins to get really interesting. And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven levels of hell, the seven primary colors, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory? For the present I propose to withhold judgment. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence.) <|cite_end|> <|cite_start|> (Reference: The capacity of visual working memory for features and conjunctions: ) <|cite_end|> <|cite_start|> (Reference: The magical number 4 in short-term memory: A reconsideration of mental storage capacity: Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.) <|cite_end|>. While concept-based methods such as Concept Bottleneck Models (\cbms) <|cite_start|> (Reference: Concept Bottleneck Models: We seek to learn models that we can interact with using high-level concepts: if the model did not think there was a bone spur in the x-ray, would it still predict severe arthritis? State-of-the-art models today do not typically support the manipulation of concepts like "the existence of bone spurs", as they are trained end-to-end to go directly from raw input (e.g., pixels) to output (e.g., arthritis severity). We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label. 
By construction, we can intervene on these concept bottleneck models by editing their predicted concept values and propagating these changes to the final prediction. On x-ray grading and bird identification, concept bottleneck models achieve competitive accuracy with standard end-to-end models, while enabling interpretation in terms of high-level clinical concepts ("bone spurs") or bird attributes ("wing color"). These models also allow for richer human-model interaction: accuracy improves significantly if we can correct model mistakes on concepts at test time.) <|cite_end|> have been proposed to support human interpretability and intervenability in machine learning (ML) systems, such models typically involve dozens of concepts, well beyond the number of concepts stakeholders can process at any given time <|cite_start|> (Reference: Bayesian modeling of human concept learning: I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task oflearning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning.) <|cite_end|> <|cite_start|> (Reference: Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability: Concept-based interpretability methods aim to explain deep neural network model predictions using a predefined set of semantic concepts. These methods evaluate a trained model on a new, “probe” dataset and correlate model predictions with the visual concepts labeled in that dataset. Despite their popularity, they suffer from limitations that are not well-understood and articulated by the literature. In this work, we analyze three commonly overlooked factors in concept-based explanations. First, the choice of the probe dataset has a profound impact on the generated explanations. Our analysis reveals that different probe datasets may lead to very different explanations, and suggests that the explanations are not generalizable outside the probe dataset. Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations. We argue that only visually salient concepts should be used in concept-based explanations. Finally, while existing methods use hundreds or even thousands of concepts, our human studies reveal a much stricter upper bound of 32 concepts or less, beyond which the explanations are much less practically useful. Finally, we make suggestions for future development and analysis of concept-based interpretability methods. Code for our analysis and user interface can be found at https://github.com/ princetonvisualai/OverlookedFactors) <|cite_end|>. To reduce the cognitive load of reasoning about many concepts, we propose Selective COncept Models (\scoms). 
\scoms provide a streamlined extension of \cbms by selecting the concepts that are most pertinent to any given task from a larger set of available concepts. This enables a stakeholder to reason with a reduced set of concepts without compromising task accuracy. Unlike \cbms which require a fixed concept set, \scoms make predictions using an arbitrary concept subset which can be customised at inference-time \textit{ without retraining}. For example, one might wish to prohibit consideration of sensitive attributes such as biological sex and subjective attractiveness during prediction. For \scoms, withdrawing certain concepts is trivial, whereas conventional \cbms make such exclusion difficult. \scoms enable flexible customisation according to a stakeholder's preferences for the number of concepts to use and their personal trade-off between cognitive load and predictive accuracy <|cite_start|> (Reference: Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability: Concept-based interpretability methods aim to explain deep neural network model predictions using a predefined set of semantic concepts. These methods evaluate a trained model on a new, “probe” dataset and correlate model predictions with the visual concepts labeled in that dataset. Despite their popularity, they suffer from limitations that are not well-understood and articulated by the literature. In this work, we analyze three commonly overlooked factors in concept-based explanations. First, the choice of the probe dataset has a profound impact on the generated explanations. Our analysis reveals that different probe datasets may lead to very different explanations, and suggests that the explanations are not generalizable outside the probe dataset. Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations. We argue that only visually salient concepts should be used in concept-based explanations. Finally, while existing methods use hundreds or even thousands of concepts, our human studies reveal a much stricter upper bound of 32 concepts or less, beyond which the explanations are much less practically useful. Finally, we make suggestions for future development and analysis of concept-based interpretability methods. Code for our analysis and user interface can be found at https://github.com/ princetonvisualai/OverlookedFactors) <|cite_end|>. On the task of bird species recognition, \scoms require only 6 out of 28 concepts to achieve optimal prediction accuracy. Smaller concept sets decrease the human cost of interventions, and increase the impact of each intervention. Thus this work is complementary to research aimed at designing better intervention policies over a given set of concepts, for example CooP <|cite_start|> (Reference: Interactive Concept Bottleneck Models: Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions. We extend CBMs to interactive prediction settings where the model can query a human collaborator for the label to some concepts. We develop an interaction policy that, at prediction time, chooses which concepts to request a label for so as to maximally improve the final prediction. 
We demonstrate that a simple policy combining concept prediction uncertainty and influence of the concept on the final prediction achieves strong performance and outperforms static approaches as well as active feature acquisition methods proposed in the literature. We show that the interactive CBM can achieve accuracy gains of 5-10% with only 5 interactions over competitive baselines on the Caltech-UCSD Birds, CheXpert and OAI datasets.) <|cite_end|>. Since \scoms place no restrictions on the exact output network architecture, they provide a simple extension to existing models used by practitioners. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figures/figure1.pdf} \caption{In \cbms, human-interpretable concepts are predicted from the input $X$ and used to infer the output $Y$. \scoms extend \cbms by selecting the most relevant concepts which maximise the mutual information between the selected concepts ($C$) and the output ($Y$). The output model learns to make predictions using an augmented concept vector, which contains the mask used to select concepts. This allows the number of concepts ($k$) and the specific concepts selected to be customised at inference time by the stakeholder without retraining the model.} \label{fig:figure1} \end{figure*} <|paper_end|>
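To make the mask-augmented prediction of Figure 1 concrete, here is a minimal Python sketch, not the authors' implementation: a linear output model receives the masked concept vector concatenated with the mask itself and is trained under random masks, so that an arbitrary concept subset can be supplied by a stakeholder at test time without retraining. The function names and the random-mask training scheme are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def forward(W, b, concepts, mask):
    # Augmented concept vector [c * m, m], as in the figure caption above.
    x = np.concatenate([concepts * mask, mask])
    return W @ x + b  # class logits

def train_step(W, b, concepts, y_onehot, lr=0.05):
    # Sample a random binary mask so the model learns to predict from
    # arbitrary concept subsets.
    k = concepts.shape[0]
    mask = (rng.random(k) < rng.random()).astype(float)
    logits = forward(W, b, concepts, mask)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad = p - y_onehot  # softmax cross-entropy gradient w.r.t. the logits
    x = np.concatenate([concepts * mask, mask])
    W -= lr * np.outer(grad, x)
    b -= lr * grad
    return W, b

# At test time the stakeholder simply chooses the mask; no retraining:
#   mask = np.zeros(k); mask[chosen_concept_indices] = 1.0
#   logits = forward(W, b, predicted_concepts, mask)

Because the mask is an input rather than a fixed architectural choice, withdrawing a sensitive concept amounts to zeroing its mask entry at inference time.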
[ "<|reference_start|> {The magical number seven, plus or minus two: Some limits on our capacity for processing information.: First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence or chunks, we manage to break (or at least stretch) this informational bottleneck. Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. Third, the concepts and measures provided by the theory of information provide a quantitative way of getting at some of these questions. The theory provides us with a yardstick for calibrating our stimulus materials and for measuring the performance of our subjects. In the interests of communication I have suppressed the technical details of information measurement and have tried to express the ideas in more familiar terms; I hope this paraphrase will not lead you to think they are not useful in research. Informational concepts have already proved valuable in the study of discrimination and of language; they promise a great deal in the study of learning and memory; and it has even been proposed that they can be useful in the study of concept formation. A lot of questions that seemed fruitless twenty or thirty years ago may now be worth another look. In fact, I feel that my story here must stop just as it begins to get really interesting. And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven levels of hell, the seven primary colors, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory? For the present I propose to withhold judgment. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence. <|reference_end|>", "<|reference_start|> The magical number 4 in short-term memory: A reconsideration of mental storage capacity: Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. 
The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed. <|reference_end|>", "<|reference_start|> Bayesian modeling of human concept learning: I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task oflearning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning. <|reference_end|>", "<|reference_start|> Interactive Concept Bottleneck Models: Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions. We extend CBMs to interactive prediction settings where the model can query a human collaborator for the label to some concepts. We develop an interaction policy that, at prediction time, chooses which concepts to request a label for so as to maximally improve the final prediction. We demonstrate that a simple policy combining concept prediction uncertainty and influence of the concept on the final prediction achieves strong performance and outperforms static approaches as well as active feature acquisition methods proposed in the literature. We show that the interactive CBM can achieve accuracy gains of 5-10% with only 5 interactions over competitive baselines on the Caltech-UCSD Birds, CheXpert and OAI datasets. <|reference_end|>" ]
[ 0, 2, 4, 7 ]
{"<|multi_cite_1_1|>": "ss-713858", "<|multi_cite_1_2|>": "ss-2404303", "<|multi_cite_1_3|>": "ss-2022020", "<|cite_2|>": "arxiv-277352", "<|multi_cite_3_1|>": "ss-1513279", "<|multi_cite_3_2|>": "ss-822988", "<|cite_4|>": "ss-822988", "<|cite_5|>": "arxiv-469539"}
1008.2297
<|paper_start|> Title: An MGF-based Unified Framework to Determine the Joint Statistics of Partial Sums of Ordered Random Variables Abstract: An MGF-based Unified Framework to Determine the Joint Statistics of Partial Sums of Ordered Random Variables: Order statistics find applications in various areas of communications and signal processing. In this paper, we introduce a unified analytical framework to determine the joint statistics of partial sums of ordered random variables (RVs). With the proposed approach, we can systematically derive the joint statistics of any partial sums of ordered statistics, in terms of the moment generating function (MGF) and the probability density function (PDF). Our MGF-based approach applies not only when all the K ordered RVs are involved but also when only the Ks (Ks < K) best RVs are considered. In addition, we present the closed-form expressions for the exponential RV special case. These results apply to the performance analysis of various wireless communication systems over fading channels. Introduction The subject of order statistics deals with the properties and distributions of the ordered random variables (RVs) and their functions. It has found applications in many areas of statistical theory and practice <|cite_start|> (Reference: Order Statistics: The webpage cited in the text contains all of the large datasets presented, a table of contents, answers to selected exercises, miscellaneous links, and the computer code used in Appendix C. I found the material reasonable, but would have liked to have seen the R code for additional case studies worked out to the same extent as the case study in Appendix C. I felt there were other techniques used sufficiently often throughout the text that some exemplary code, either on the webpage or the Appendix, would have been appropriate. In particular, the EM algorithm is used repeatedly in the latter half of the book, but no code implementing EM methods is provided. Granted, the EM steps are spelled out in some detail in numerous places in the book, but some typical code would be useful. In an ambitious work such as this, it is easy to find ways in which the material could be reorganized, and suggestions for reorganization necessarily depend on what the user has in mind. With that caveat, I think the material in the very last chapter could be incorporated, or at least integrated, into either Chapter 6 or Chapter 11. The transition between the discussion of Bayesian decision analysis in Chapter 23 and posterior inference in those other chapters is essentially seamless and could be easily incorporated into the authors’ analysis rubric presented in Chapter 11. This is a book that challenges the user in its sophisticated approach toward data analysis in general and Bayesian methods in particular. I am thoroughly excited to have this book in hand to supplement course material and to offer research collaborators and clients at our consulting lab more sophisticated methods to solve their research problems.) <|cite_end|>, with examples including life-testing, quality control, signal and image processing <|cite_start|> (Reference: Handbook of Statistics 17: Order Statistics: Applications: Topics are all conventional enough. There are five kriging chapters: simple, ordinary, universal, block, and cokriging. There are the attendant chapter topics for modeling: normalization, semivariogram, crossvalidation, drift, and residuals.
Extension topics are stochastic simulation, reliability (confidence intervals), cumulative distribution estimators, and regionalized classification. Application occurs only through the exercises. Software discussion is brief and limited only to basic software such as Deutsch and Journel (1998), reported by Ziegel (1998). Altogether this is not a book that I would choose for learning geostatistics. The updated book by Hohn (1999), reported by Ziegel (2000), is a much better place to begin.) <|cite_end|> <|cite_start|> (Reference: WIRELESS COMMUNICATIONS: "Professor Andreas F. Molisch, renowned researcher and educator, has put together the comprehensive book, Wireless Communications. The second edition, which includes a wealth of new material on important topics, ensures the role of the text as the key resource for every student, researcher, and practitioner in the field." Professor Moe Win, MIT, USA. Wireless communications has grown rapidly over the past decade from a niche market into one of the most important, fast moving industries. Fully updated to incorporate the latest research and developments, Wireless Communications, Second Edition provides an authoritative overview of the principles and applications of mobile communication technology. The author provides an in-depth analysis of current treatment of the area, addressing both the traditional elements, such as Rayleigh fading, BER in flat fading channels, and equalisation, and more recently emerging topics such as multi-user detection in CDMA systems, MIMO systems, and cognitive radio. The dominant wireless standards; including cellular, cordless and wireless LANs; are discussed. Topics featured include: wireless propagation channels, transceivers and signal processing, multiple access and advanced transceiver schemes, and standardised wireless systems. Combines mathematical descriptions with intuitive explanations of the physical facts, enabling readers to acquire a deep understanding of the subject. Includes new chapters on cognitive radio, cooperative communications and relaying, video coding, 3GPP Long Term Evolution, and WiMax; plus significant new sections on multi-user MIMO, 802.11n, and information theory. Companion website featuring: supplementary material on 'DECT', solutions manual and presentation slides for instructors, appendices, list of abbreviations and other useful resources.) <|cite_end|>. Recently, order statistics has made a growing number of appearances in the analysis and design of wireless communication systems (see for example <|cite_start|> (Reference: Digital Communications Over Generalized Fading Channels: ) <|cite_end|> <|cite_start|> (Reference: New results on ordered statistics and analysis of minimum-selection generalized selection combining (GSC): Diversity combining techniques improve the performance of wireless communication systems at the cost of increased power consumption. Minimum-selection generalized selection combining (MS-GSC) scheme has been proposed as a power saving implementation of conventional generalized selection combining (GSC) scheme. In this paper, noting that previous analytical results on the error rate of MS-GSC are approximate, we carry out a thorough and exact analysis for MS-GSC. In particular, based on a new result on order statistics, we obtain the statistics of the combined SNR with MS-GSC and we then apply these results to analyze the performance of MS-GSC over fading channels.
We derive the closed-form expressions of important performance measures, including outage probability and average error rate, for the Rayleigh fading scenario. In addition, we investigate the average number of active MRC branches with MS-GSC, as a quantification of the power saving) <|cite_end|> <|cite_start|> (Reference: Adaptive modulation and diversity combining based on output-threshold {MRC}: In this paper, we propose a combined adaptive modulation transmission and output-threshold MRC diversity reception scheme for high spectral and power efficient wireless transceiver design. The modulation constellation size and the number of combined diversity paths are jointly determined based on the fading channel conditions and the required error rate performance. We derive the statistics of the resulting output signal-to-noise ratio (SNR) and then use it to analyze the proposed system in terms of performance, complexity, and spectral efficiency. Some selected numerical examples are finally presented to illustrate the mathematical formulism) <|cite_end|> <|cite_start|> (Reference: Finger replacement method for RAKE receivers in the soft handover region: We propose and analyze a new finger replacement technique that is applicable for RAKE receivers in the soft handover (SHO) region. More specifically, the receiver uses in the SHO region by default the strongest paths from the serving base station (BS) and only when the combined signal-to-noise ratio falls below a certain pre-determined threshold, the receiver uses more resolvable paths from the target BS to improve the performance. Instead of changing the configuration for all fingers, the receiver just compares the sum of the weakest paths out of the currently connected paths from the serving BS with the sum of the strongest paths from the target BS and selects the better group. Using accurate statistical analysis, we investigate in this letter the tradeoff between error performance, average number of required path comparisons, and SHO overhead offered by this newly proposed scheme.) <|cite_end|> <|cite_start|> (Reference: Sum-rate analysis of MIMO broadcast channel with random unitary beamforming: In this paper, we present the exact sum-rate analysis of the multiuser MIMO system with random unitary beamforming (RUB). Specifically, with the derived statistics of ordered beam signal to interference and noise ratio, we obtain the analytical expression of system sum rate for arbitrary number of users. The effects of system parameters, including the number of users, the number of transmit antennas and the signal to noise ratio (SNR), on the sum rate are examined through numerical methods. Besides, we develop an upper bound for the sum rate, which reveals that the sum rate with RUB benefits mainly from multiuser directional diversity gain when SNR is high.) <|cite_end|> <|cite_start|> (Reference: An MGF-based performance analysis of generalized selection combining over Rayleigh fading channels: Using the notion of the "spacing" between ordered exponential random variables, a performance analysis of the generalized selection combining (GSC) diversity scheme over Rayleigh fading channels is presented and compared with that of the conventional maximal-ratio combining and selection combining schemes. 
Starting with the moment generating function (MGF) of the GSC output signal-to-noise ratio (SNR), we derive closed-form expressions for the average combined SNR, outage probability, and average error probability of a wide variety of modulation schemes operating over independently, identically distributed (i.i.d.) diversity paths. Because of their simple form, these expressions readily allow numerical evaluation for cases of practical interest. The results are also extended to the case of non-i.i.d. diversity paths.) <|cite_end|> <|cite_start|> (Reference: Unified error probability analysis for generalized selection combining in Nakagami fading channels: We study generalized selection combining (GSC) schemes in independent Nakagami fading channels, where N diversity branches with the largest instantaneous signal-to-noise ratios (SNRs) are selected from the total of L (N/spl les/L) branches and then coherently or noncoherently combined. We propose two different techniques to derive the moment generating function (MGF) expressions for the GSC output SNR in generalized Nakagami fading channels, where there are distinct and noninteger fading severity parameters, as well as different average SNRs in different diversity branches. For arbitrary fading severity parameter m/sub k/, k=1, /spl middot//spl middot//spl middot/L, the MGF expression is given in a summation of N-dimensional definite integrals with the limits independent of SNR or channel parameters, and therefore can be evaluated very efficiently with numerical methods. Furthermore, for integer m/sub k/ closed-form MGF expressions are derived. Specializations of our results to Rayleigh channels and independent identically distributed (i.i.d.) Nakagami channels are presented, which are either new or equivalent to previously published results. Using the newly derived MGF expression, we provide a unified error probability analysis for many coherent and noncoherent modulation/detection schemes.) <|cite_end|> <|cite_start|> (Reference: Virtual branch analysis of symbol error probability for hybrid selection/maximal-ratio combining in Rayleigh fading: We derive analytical expressions for the symbol error probability (SEP) for a hybrid selection/maximal-ratio combining (H-S/MRC) diversity system in multipath-fading wireless environments. With H-S/MRC, L out of N diversity branches are selected and combined using maximal-ratio combining (MRC). We consider coherent detection of M-ary phase-shift keying (MPSK) and quadrature amplitude modulation (MQAM) using H-S/MRC for the case of independent Rayleigh fading with equal signal-to-noise ratio averaged over the fading. The proposed problem is made analytically tractable by transforming the ordered physical diversity branches, which are correlated, into independent and identically distributed (i.i.d.) "virtual branches," which results in a simple derivation of the SEP for arbitrary L and N. We further obtain a canonical structure for the SEP of H-S/MRC as a weighted sum of the elementary SEPs, which are the SEPs using MRC with i.i.d. diversity branches in Rayleigh fading, or equivalently the SEPs of the nondiversity (single-branch) system in Nakagami fading, whose closed-form expressions are well-known. We present numerical examples illustrating that H-S/MRC, even with L/spl Lt/N, can achieve a performance close to that of N-branch MRC.) 
<|cite_end|> <|cite_start|> (Reference: Analysis of hybrid selection/maximal-ratio diversity combiners with Gaussian errors: The paper examines the impact of Gaussian distributed weighting errors (in the channel gain estimates used for coherent combination) on both the output statistics of a hybrid selection/maximal-ratio (SC/MRC) receiver and the degradation of the average symbol-error rate (ASER) performance as compared with the ideal case. New expressions are derived for the probability density function, cumulative distribution function and moment generating function (MGF) of the coherent hybrid SC/MRC combiner output signal-to-noise ratio (SNR). The MGF is then used to derive exact, closed-form, ASER expressions for binary and M-ary modulations in conjunction with a nonideal hybrid SC/MRC receiver in a Rayleigh fading environment. Results for both selection combining (SC) and maximal-ratio combining (MRC) are obtained as limiting cases. Additionally, the effect of the weighting errors on both the outage rate of error probability and the average combined SNR is investigated. These analytical results provide insights into the tradeoff between diversity gain and combination losses, in concert with increasing orders of diversity branches in an energy-sharing communication system.) <|cite_end|> <|cite_start|> (Reference: Minimum selection GSC in independent Rayleigh fading: We analyze the error performance of minimum selection generalized selection combining (MS-GSC), in which the minimum number of diversity branches are selected such that their combined signal-to-noise ratio (SNR) is above a given threshold. A flat Rayleigh fading channel with independent and distinctly distributed branch SNR is considered. By transforming the ordered instantaneous branch SNR to their differences, we derive the distribution of the number of selected branches in closed-form. We then modify the derivation of this distribution to get the characteristic function (c.f.) of the combiner output SNR. This c.f. is used to obtain the symbol error probability for different coherent digital modulation schemes.) <|cite_end|> <|cite_start|> (Reference: Minimum selection GSC in independent Rayleigh fading: We analyze the error performance of minimum selection generalized selection combining (MS-GSC), in which the minimum number of diversity branches are selected such that their combined signal-to-noise ratio (SNR) is above a given threshold. A flat Rayleigh fading channel with independent and distinctly distributed branch SNR is considered. By transforming the ordered instantaneous branch SNR to their differences, we derive the distribution of the number of selected branches in closed-form. We then modify the derivation of this distribution to get the characteristic function (c.f.) of the combiner output SNR. This c.f. is used to obtain the symbol error probability for different coherent digital modulation schemes.) <|cite_end|>). For example, diversity techniques have been used over the past fifty years to mitigate the effects of fading on wireless communication systems. These techniques improve the performance of wireless systems over fading channels by generating and combining multiple replicas of the same information-bearing signal at the receiver. The analysis of low-complexity selection combining schemes, which select the best replica, requires some basic results of order statistics, i.e. the distribution functions of the largest one among several random variables.
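As a brief illustration of this basic fact (a standard textbook identity, recalled here for context rather than taken from the paper itself): if the $K$ branch SNRs $\gamma_1,\dots,\gamma_K$ are i.i.d. with common CDF $F_\gamma$, the best branch $\gamma_{(1)}=\max_k \gamma_k$ satisfies
$$ F_{\gamma_{(1)}}(x)=\big[F_\gamma(x)\big]^K, $$
so that under Rayleigh fading, where each branch SNR is exponential with mean $\bar\gamma$,
$$ F_{\gamma_{(1)}}(x)=\big(1-e^{-x/\bar\gamma}\big)^K, $$
and the outage probability of selection combining at threshold $\gamma_{\rm th}$ is simply $F_{\gamma_{(1)}}(\gamma_{\rm th})$.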
More recently, the design and analysis of adaptive diversity combining techniques and multiuser scheduling strategies call for some further results on order statistics <|cite_start|> (Reference: New results on ordered statistics and analysis of minimum-selection generalized selection combining (GSC): Diversity combining techniques improve the performance of wireless communication systems at the cost of increased power consumption. Minimum-selection generalized selection combining (MS-GSC) scheme has been proposed as a power saving implementation of conventional generalized selection combining (GSC) scheme. In this paper, noting that previous analytical results on the error rate of MS-GSC are approximate, we carry out a thorough and exact analysis for MS-GSC. In particular, based on a new result on order statistics, we obtain the statistics of the combined SNR with MS-GSC and we then apply these results to analyze the performance of MS-GSC over fading channels. We derive the closed-form expressions of important performance measures, including outage probability and average error rate, for the Rayleigh fading scenario. In addition, we investigate the average number of active MRC branches with MS-GSC, as a quantification of the power saving) <|cite_end|> <|cite_start|> (Reference: Adaptive modulation and diversity combining based on output-threshold {MRC}: In this paper, we propose a combined adaptive modulation transmission and output-threshold MRC diversity reception scheme for high spectral and power efficient wireless transceiver design. The modulation constellation size and the number of combined diversity paths are jointly determined based on the fading channel conditions and the required error rate performance. We derive the statistics of the resulting output signal-to-noise ratio (SNR) and then use it to analyze the proposed system in terms of performance, complexity, and spectral efficiency. Some selected numerical examples are finally presented to illustrate the mathematical formulism) <|cite_end|>. In particular, the joint statistics of partial sums of ordered RVs are often necessary for the accurate characterization of system performance <|cite_start|> (Reference: Finger replacement method for RAKE receivers in the soft handover region: We propose and analyze a new finger replacement technique that is applicable for RAKE receivers in the soft handover (SHO) region. More specifically, the receiver uses in the SHO region by default the strongest paths from the serving base station (BS) and only when the combined signal-to-noise ratio falls below a certain pre-determined threshold, the receiver uses more resolvable paths from the target BS to improve the performance. Instead of changing the configuration for all fingers, the receiver just compares the sum of the weakest paths out of the currently connected paths from the serving BS with the sum of the strongest paths from the target BS and selects the better group. Using accurate statistical analysis, we investigate in this letter the tradeoff between error performance, average number of required path comparisons, and SHO overhead offered by this newly proposed scheme.) <|cite_end|> <|cite_start|> (Reference: Minimum selection GSC in independent Rayleigh fading: We analyze the error performance of minimum selection generalized selection combining (MS-GSC), in which the minimum number of diversity branches are selected such that their combined signal-to-noise ratio (SNR) is above a given threshold. 
A flat Rayleigh fading channel with independent and distinctly distributed branch SNR is considered. By transforming the ordered instantaneous branch SNR to their differences, we derive the distribution of the number of selected branches in closed-form. We then modify the derivation of this distribution to get the characteristic function (c.f.) of the combiner output SNR. This c.f. is used to obtain the symbol error probability for different coherent digital modulation schemes.) <|cite_end|>. The major difficulty in obtaining the statistics of partial sums of ordered RVs resides in the fact that even if the original unordered RVs are independently distributed, their ordered versions are necessarily dependent due to the inequality relations among them. Recently, the co-author has applied a successive conditioning approach to convert dependent ordered random variables to independent unordered ones <|cite_start|> (Reference: New results on ordered statistics and analysis of minimum-selection generalized selection combining (GSC): Diversity combining techniques improve the performance of wireless communication systems at the cost of increased power consumption. Minimum-selection generalized selection combining (MS-GSC) scheme has been proposed as a power saving implementation of conventional generalized selection combining (GSC) scheme. In this paper, noting that previous analytical results on the error rate of MS-GSC are approximate, we carry out a thorough and exact analysis for MS-GSC. In particular, based on a new result on order statistics, we obtain the statistics of the combined SNR with MS-GSC and we then apply these results to analyze the performance of MS-GSC over fading channels. We derive the closed-form expressions of important performance measures, including outage probability and average error rate, for the Rayleigh fading scenario. In addition, we investigate the average number of active MRC branches with MS-GSC, as a quantification of the power saving) <|cite_end|> <|cite_start|> (Reference: Adaptive modulation and diversity combining based on output-threshold {MRC}: In this paper, we propose a combined adaptive modulation transmission and output-threshold MRC diversity reception scheme for high spectral and power efficient wireless transceiver design. The modulation constellation size and the number of combined diversity paths are jointly determined based on the fading channel conditions and the required error rate performance. We derive the statistics of the resulting output signal-to-noise ratio (SNR) and then use it to analyze the proposed system in terms of performance, complexity, and spectral efficiency. Some selected numerical examples are finally presented to illustrate the mathematical formalism) <|cite_end|>. That approach, however, requires some case-specific manipulations, which may not always be generalizable. In this paper, we present a unified analytical framework to determine the joint statistics of partial sums of ordered RVs using a moment generating function (MGF)-based approach. More specifically, we extend the results in <|cite_start|> (Reference: Joint probability density function of selected order statistics and the sum of the remaining random variables: Abstract : A set of N independent, identically distributed random variables $\{X_n\}$, with common probability density function p(x), are ordered into a new set of dependent random variables $\{X'_n\}$, each with a different probability density function.
From this latter set, the $n_1$-th largest random variable through the $n_{M-1}$-th largest random variable are selected. Then, the sum of the remaining N+1-M random variables is computed, giving a total of M dependent random variables. The joint probability density function of these M random variables is derived in a form involving a single Bromwich contour integral in the moment-generating function domain. The integral is most easily numerically evaluated by locating (approximately) the real saddlepoint of the integrand and passing the contour through this point. Very high accuracy in the probability density function evaluation is available by using numerical integration instead of a saddlepoint approximation.) <|cite_end|> <|cite_start|> (Reference: Joint distributions for two useful classes of statistics, with applications to classification and hypothesis testing: Abstract : In this paper, we analyze the statistics of two general classes of statistics. The first class is "M quadratic and linear forms of correlated Gaussian random variables". Examples include both cyclic and non-cyclic autocorrelation function (ACF) estimates of a correlated Gaussian process or the magnitude-squared of the output samples of a filtered Gaussian process. The second class consists of a subset of order statistics together with a remainder term. An example is the largest M - 1 bins of a discrete Fourier transform (DFT) or discrete wavelet transform (DWT), together with the sum of the remaining energies, forming an M-dimensional statistic. Both classes of statistics are useful in classification and detection of signals. In this paper, we solve for the joint probability density functions (PDFs) of both classes. Using the PDF projection method, these results can be used to transform the feature PDFs into the corresponding high-dimensional PDFs of the raw input data.) <|cite_end|>, which only derive the joint MGF of the selected individual order statistics and the sum of the remaining ones, and systematically solve for the joint statistics of arbitrary partial sums of ordered RVs. The main advantage of the proposed MGF-based unified framework is that it applies not only to the cases when all the $K$ ordered RVs are considered but also to those cases when only the $K_s$ $(K_s < K)$ best RVs are involved. After considering several illustrative examples, we focus on the exponential RV special case and derive the closed-form expressions of the joint statistics. These statistical results can apply to the performance analysis of various wireless communication systems over generalized fading channels. The remainder of this paper is organized as follows. In section II, we summarize the main idea behind the proposed unified analytical framework, including the general idea and some special considerations. We then introduce some common functions and useful relations in section III, which will help make the results in later sections more compact. In sections IV and V, we present some selected examples on the derivation of the joint PDF based on our proposed approach. Following this, we show in section VI some closed-form expressions for the selected examples presented in previous sections under i.i.d. Rayleigh fading conditions. Finally, we discuss some useful applications of these results in section VII. <|paper_end|>
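As background for the exponential special case emphasized in the paper above, the following standard spacings identity (due to Sukhatme and Rényi; recalled here for reference, and not to be read as the paper's general MGF framework) shows why i.i.d. exponential RVs admit closed forms. Let $\gamma_1,\dots,\gamma_K$ be i.i.d. exponential with mean $\bar\gamma$, ordered as $\gamma_{(1)}\ge\dots\ge\gamma_{(K)}$. Then the normalized spacings
$$ V_k=k\big(\gamma_{(k)}-\gamma_{(k+1)}\big),\qquad k=1,\dots,K,\qquad \gamma_{(K+1)}:=0, $$
are again i.i.d. exponential with mean $\bar\gamma$. Since $\gamma_{(k)}=\sum_{j=k}^{K}V_j/j$, the sum of the $K_s$ largest RVs decomposes into independent terms,
$$ \sum_{k=1}^{K_s}\gamma_{(k)}=\sum_{j=1}^{K_s}V_j+K_s\sum_{j=K_s+1}^{K}\frac{V_j}{j}, $$
so its MGF factors as
$$ M(s)=(1-s\bar\gamma)^{-K_s}\prod_{j=K_s+1}^{K}\Big(1-\frac{K_s\,s\,\bar\gamma}{j}\Big)^{-1}, $$
which is the familiar closed form for the GSC output SNR cited in the introduction above.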
[ "<|reference_start|> New results on ordered statistics and analysis of minimum-selection generalized selection combining (GSC): Diversity combining techniques improve the performance of wireless communication systems at the cost of increased power consumption. Minimum-selection generalized selection combining (MS-GSC) scheme has been proposed as a power saving implementation of conventional generalized selection combining (GSC) scheme. In this paper, noting that previous analytical results on the error rate of MS-GSC are approximate, we carry out a thorough and exact analysis for MS-GSC. In particular, based on a new result on order statistics, we obtain the statistics of the combined SNR with MS-GSC and we then apply these results to analyze the performance of MS-GSC over fading channels. We derive the closed-form expressions of important performance measures, including outage probability and average error rate, for the Rayleigh fading scenario. In addition, we investigate the average number of active MRC branches with MS-GSC, as a quantification of the power saving <|reference_end|>", "<|reference_start|> Analysis of hybrid selection/maximal-ratio diversity combiners with Gaussian errors: The paper examines the impact of Gaussian distributed weighting errors (in the channel gain estimates used for coherent combination) on both the output statistics of a hybrid selection/maximal-ratio (SC/MRC) receiver and the degradation of the average symbol-error rate (ASER) performance as compared with the ideal case. New expressions are derived for the probability density function, cumulative distribution function and moment generating function (MGF) of the coherent hybrid SC/MRC combiner output signal-to-noise ratio (SNR). The MGF is then used to derive exact, closed-form, ASER expressions for binary and M-ary modulations in conjunction a nonideal hybrid SC/MRC receiver in a Rayleigh fading environment. Results for both selection combining (SC) and maximal-ratio combining (MRC) are obtained as limiting cases. Additionally, the effect of the weighting errors on both the outage rate of error probability and the average combined SNR is investigated. These analytical results provide insights into the tradeoff between diversity gain and combination losses, in concert with increasing orders of diversity branches in an energy-sharing communication system. <|reference_end|>", "<|reference_start|> Finger replacement method for RAKE receivers in the soft handover region: We propose and analyze a new finger replacement technique that is applicable for RAKE receivers in the soft handover (SHO) region. More specifically, the receiver uses in the SHO region by default the strongest paths from the serving base station (BS) and only when the combined signal-to-noise ratio falls below a certain pre-determined threshold, the receiver uses more resolvable paths from the target BS to improve the performance. Instead of changing the configuration for all fingers, the receiver just compares the sum of the weakest paths out of the currently connected paths from the serving BS with the sum of the strongest paths from the target BS and selects the better group. Using accurate statistical analysis, we investigate in this letter the tradeoff between error performance, average number of required path comparisons, and SHO overhead offered by this newly proposed scheme. 
<|reference_end|>", "<|reference_start|> Adaptive modulation and diversity combining based on output-threshold {MRC}: In this paper, we propose a combined adaptive modulation transmission and output-threshold MRC diversity reception scheme for high spectral and power efficient wireless transceiver design. The modulation constellation size and the number of combined diversity paths are jointly determined based on the fading channel conditions and the required error rate performance. We derive the statistics of the resulting output signal-to-noise ratio (SNR) and then use it to analyze the proposed system in terms of performance, complexity, and spectral efficiency. Some selected numerical examples are finally presented to illustrate the mathematical formulism <|reference_end|>" ]
[ 4, 11, 16, 19 ]
{"<|cite_1|>": "ss-1485945", "<|multi_cite_2_1|>": "ss-1689978", "<|multi_cite_2_2|>": "ss-1013616", "<|multi_cite_3_1|>": "ss-1689984", "<|multi_cite_3_2|>": "ss-1932827", "<|multi_cite_3_3|>": "ss-1689985", "<|multi_cite_3_4|>": "ss-1932823", "<|multi_cite_3_5|>": "ss-1689986", "<|multi_cite_3_6|>": "ss-1689987", "<|multi_cite_3_7|>": "ss-1689988", "<|multi_cite_3_8|>": "ss-1689989", "<|multi_cite_3_9|>": "ss-1689990", "<|multi_cite_3_10|>": "ss-1689991", "<|multi_cite_3_11|>": "ss-1689991", "<|multi_cite_4_1|>": "ss-1932827", "<|multi_cite_4_2|>": "ss-1689985", "<|multi_cite_5_1|>": "ss-1932823", "<|multi_cite_5_2|>": "ss-1689991", "<|multi_cite_6_1|>": "ss-1932827", "<|multi_cite_6_2|>": "ss-1689985", "<|multi_cite_7_2|>": "ss-1689998", "<|multi_cite_7_3|>": "ss-1689979"}
1802.00157
<|paper_start|> Title: Optimal LRC codes for all lengths n <= q Abstract: Optimal LRC codes for all lengths n <= q: A family of distance-optimal LRC codes from certain subcodes of $q$-ary Reed-Solomon codes, proposed by I.~Tamo and A.~Barg in 2014, assumes that the code length $n$ is a multiple of $r+1.$ By shortening codes from this family, we show that it is possible to lift this assumption, still obtaining distance-optimal codes. Introduction Let $\cC$ be a $q$-ary code of length $n$ and cardinality $q^k$. We say that $\cC$ has locality $r$ if for every $i=1,\dots,n$ there exists a subset $I_i\subset\{1,\dots,n\}\backslash\{i\}, |I_i|=r$ such that for every codeword $\bfc=(c_1,\dots,c_n)$ and every $i=1,\dots,n$ the coordinate $c_i$ is a function of the coordinates $\{c_j,\, j\in I_i\}.$ We call $\cC$ an $(n,k,r)$ LRC code, and call the subsets $A_i:=I_i\cup\{i\}$ {\em repair groups}. Codes with the locality property were introduced in <|cite_start|> (Reference: On the Locality of Codeword Symbols: Consider a linear [n,k,d]_q code C. We say that the i-th coordinate of C has locality r, if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codeword length, worst-case distance and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst case distance for optimal codes. Using our structure theorem, we obtain a tight bound for the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality for parity checks and the ability to correct erasures beyond the minimum distance.) <|cite_end|>, which also proved the following upper bound on the minimum distance of an $(n,k,r)$ LRC code: \begin{equation}\label{eq:sb} d_{\text{min}}(\cC)\le n-k-\Big\lceil \frac kr\Big\rceil+2. \end{equation} We call an LRC code {\em optimal} if its distance is the largest possible given the other parameters. Several constructions of optimal LRC codes were proposed in the literature, among them <|cite_start|> (Reference: Optimal Linear Codes with a Local-Error-Correction Property: Motivated by applications to distributed storage, Gopalan \textit{et al.} recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation associated with small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such "local" parity. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols.
We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.) <|cite_end|> <|cite_start|> (Reference: Optimal Locally Repairable Codes via Rank-Metric Codes: This paper presents a new explicit construction for locally repairable codes (LRCs) for distributed storage systems which possess all-symbols locality and maximal possible minimum distance, or equivalently, can tolerate the maximal number of node failures. This construction, based on maximum rank distance (MRD) Gabidulin codes, provides new optimal vector and scalar LRCs. In addition, the paper also discusses mechanisms by which codes obtained using this construction can be used to construct LRCs with efficient repair of failed nodes by combination of LRC with regenerating codes.) <|cite_end|> <|cite_start|> (Reference: Optimal Locally Repairable Codes and Connections to Matroid Theory: Petabyte-scale distributed storage systems are currently transitioning to erasure codes to achieve higher storage efficiency. Classical codes like Reed-Solomon are highly sub-optimal for distributed environments due to their high overhead in single-failure events. Locally Repairable Codes (LRCs) form a new family of codes that are repair efficient. In particular, LRCs minimize the number of nodes participating in single node repairs during which they generate small network traffic. Two large-scale distributed storage systems have already implemented different types of LRCs: Windows Azure Storage and the Hadoop Distributed File System RAID used by Facebook. The fundamental bounds for LRCs, namely the best possible distance for a given code locality, were recently discovered, but few explicit constructions exist. In this work, we present an explicit and optimal LRCs that are simple to construct. Our construction is based on grouping Reed-Solomon (RS) coded symbols to obtain RS coded symbols over a larger finite field. We then partition these RS symbols in small groups, and re-encode them using a simple local code that offers low repair locality. For the analysis of the optimality of the code, we derive a new result on the matroid represented by the code generator matrix.) <|cite_end|> <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) 
<|cite_end|> <|cite_start|> (Reference: Optimal locally repairable codes via elliptic curves: Constructing locally repairable codes achieving Singleton-type bound (we call them optimal codes in this paper) is a challenging task and has attracted great attention in the last few years. Tamo and Barg \cite{TB14} first gave a breakthrough result in this topic by cleverly considering subcodes of Reed-Solomon codes. Thus, $q$-ary optimal locally repairable codes from subcodes of Reed-Solomon codes given in \cite{TB14} have length upper bounded by $q$. Recently, it was shown through extension of construction in \cite{TB14} that length of $q$-ary optimal locally repairable codes can be $q+1$ in \cite{JMX17}. Surprisingly, it was shown in \cite{BHHMV16} that, unlike classical MDS codes, $q$-ary optimal locally repairable codes could have length bigger than $q+1$. Thus, it becomes an interesting and challenging problem to construct $q$-ary optimal locally repairable codes of length bigger than $q+1$. In the present paper, we make use of rich algebraic structures of elliptic curves to construct a family of $q$-ary optimal locally repairable codes of length up to $q+2\sqrt{q}$. It turns out that locality of our codes can be as big as $23$ and distance can be linear in length.) <|cite_end|> <|cite_start|> (Reference: Optimal binary linear locally repairable codes with disjoint repair groups: In recent years, several classes of codes have been introduced to provide some fault-tolerance and guarantee system reliability in distributed storage systems, among which locally repairable codes (LRCs for short) play an important role. However, most known constructions are over large fields with sizes close to the code length, which makes the systems computationally expensive. Due to this, binary LRCs are of interest in practice. In this paper, we focus on binary linear LRCs with disjoint repair groups. We first derive an explicit bound for the dimension k of such codes, which can serve as a generalization of the bounds given in [11, 36, 37]. We also give several new constructions of binary LRCs with minimum distance $d = 6$ based on weakly independent sets and partial spreads, which are optimal with respect to our newly obtained bound. In particular, for locality $r\in \{2,3\}$ and minimum distance $d = 6$, we obtain the desired optimal binary linear LRCs with disjoint repair groups for almost all parameters.) <|cite_end|> <|cite_start|> (Reference: New constructions of optimal locally recoverable codes via good polynomials: In recent literature, a family of optimal linear locally recoverable codes (LRC codes) that attain the maximum possible distance (given code length, cardinality, and locality) is presented. The key ingredient for constructing such optimal linear LRC codes is the so-called $r$-good polynomials, where $r$ is equal to the locality of the LRC code. However, given a prime $p$, known constructions of $r$-good polynomials over some extension field of $\mathbb{F}_{p}$ exist only for some special integers $r$, and the problem of constructing optimal LRC codes over small fields for any given locality is still open. In this paper, by using function composition, we present two general methods of designing good polynomials, which lead to three new constructions of $r$-good polynomials. Such polynomials bring new constructions of optimal LRC codes. In particular, our constructed polynomials as well as the power functions yield optimal $(n,k,r)$ LRC codes over $\mathbb{F}_{q}$ for all positive integers $r$ as localities, where $q$ is near the code length $n$.) <|cite_end|>. In particular, <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) <|cite_end|> suggested a family of $q$-ary $(n,k,r)$ LRC codes for any $n\le q$ such that $(r+1)|n$, which are optimal with respect to the bound \eqref{eq:sb}. As shown recently in <|cite_start|> (Reference: Construction of optimal locally repairable codes via automorphism groups of rational function fields: Locally repairable codes, or locally recoverable codes (LRC for short) are designed for application in distributed and cloud storage systems. Similar to classical block codes, there is an important bound called the Singleton-type bound for locally repairable codes. In this paper, an optimal locally repairable code refers to a block code achieving this Singleton-type bound. Like classical MDS codes, optimal locally repairable codes carry some very nice combinatorial structures. Since the introduction of the Singleton-type bound for locally repairable codes, people have put tremendous effort on constructions of optimal locally repairable codes.
Due to hardness of this problem, there are few constructions of optimal locally repairable codes in literature. Most of these constructions are realized via either combinatorial or algebraic structures. In this paper, we employ automorphism groups of rational function fields to construct optimal locally repairable codes by considering the group action on the projective lines over finite fields. It turns out that we are able to construct optimal locally repairable codes with reflexibility of locality as well as smaller alphabet size comparable to the code length. In particular, we produce new families of $q$-ary locally repairable codes, including codes of length $q+1$ via cyclic groups and codes via dihedral groups.) <|cite_end|>, in some cases it is possible to extend this construction to the case $n\le q+1$ (still assuming the divisibility). The codes in <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) <|cite_end|> are constructed as certain subcodes of Reed-Solomon (RS) codes. Namely, for a given $n$ the code is constructed as a subcode of the RS code of length $n$ and dimension $k+\lceil\frac kr\rceil-1.$ While the ``parent'' RS code is obtained by evaluating all the polynomials of degree $\le k+\lceil\frac kr\rceil-2,$ the LRC codes in <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) 
<|cite_end|> are isolated by evaluating the subset of polynomials of the form $$ f_a(x)=\sum_{i=0}^{r-1}\sum_{j=0}^{\lceil\frac kr\rceil-1}a_{ij}g(x)^j x^i, $$ where $\deg(f_a)\le k+\lceil\frac kr\rceil-2$ and where $g(x)$ is a polynomial constant on each of the repair groups $A_i.$ As pointed out in <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) <|cite_end|>, it is possible to lift the condition $(r+1)|n$, obtaining LRC codes whose distance is at most one less than the right-hand side of \eqref{eq:sb}. At the same time, <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) <|cite_end|> did not give a concrete construction of such codes, and did not resolve the question of optimality. In this note we point out a way to lift the divisibility assumption, constructing optimal LRC codes for almost all parameters. Our results can be summarized as follows. \begin{theorem} \label{thm:main} Suppose that the following assumptions on the parameters are satisfied: (1) let $s:=n\Mod (r+1)$ and suppose that $s\ne 1$; (2) let $$ m=\Big\lceil\frac n{r+1}\Big\rceil $$ and assume that $\bar n:=m(r+1)\le q$. Then there exists an explicitly constructible $(n,k,r)$ LRC code $\cC$ whose distance is the largest possible for its parameters $n$, $k$, and $r$. \end{theorem} {\em Remark:} After this note was completed, we became aware that most of its results are implied by an earlier work by A. Zeh and E.
Yaakobi <|cite_start|> (Reference: Bounds and Constructions of Codes with Multiple Localities: This paper studies bounds and constructions of locally repairable codes (LRCs) with multiple localities so-called multiple-locality LRCs (ML-LRCs). In the simplest case of two localities some code symbols of an ML-LRC have a certain locality while the remaining code symbols have another one. We extend two bounds, the Singleton and the alphabet-dependent upper bound on the dimension of Cadambe--Mazumdar for LRCs, to the case of ML-LRCs with more than two localities. Furthermore, we construct Singleton-optimal ML-LRCs as well as codes that achieve the extended alphabet-dependent bound. We give a family of binary ML-LRCs based on generalized code concatenation that is optimal with respect to the alphabet-dependent bound.) <|cite_end|>. Specifically, we prove a bound on the distance of LRC codes of length $n$ given in Theorem \ref{thm:sb1}, which is sometimes stronger than the bound \eqref{eq:sb}. We also construct a family of LRC codes obtained as shortenings of the codes in <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) <|cite_end|> and use the bounds \eqref{eq:sb}, \eqref{eq:sb1} to show that they have the largest possible minimum distance for their parameters. It turns out that our strengthened bound is a particular case of \cite[Thm.6]{ZY16}, and that the fact that shortening optimal LRC codes preserves optimality is shown in \cite[Thm.13]{ZY16}. This implies that the codes in <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. 
This property enables high availability of frequently accessed data ("hot data").) <|cite_end|> can be shortened without sacrificing the optimality property. In this note we give an explicit algebraic construction of the shortened codes from <|cite_start|> (Reference: A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data ("hot data").) <|cite_end|>, which is not directly implied by <|cite_start|> (Reference: Bounds and Constructions of Codes with Multiple Localities: This paper studies bounds and constructions of locally repairable codes (LRCs) with multiple localities, so-called multiple-locality LRCs (ML-LRCs). In the simplest case of two localities some code symbols of an ML-LRC have a certain locality while the remaining code symbols have another one. We extend two bounds, the Singleton and the alphabet-dependent upper bound on the dimension of Cadambe--Mazumdar for LRCs, to the case of ML-LRCs with more than two localities. Furthermore, we construct Singleton-optimal ML-LRCs as well as codes that achieve the extended alphabet-dependent bound. We give a family of binary ML-LRCs based on generalized code concatenation that is optimal with respect to the alphabet-dependent bound.) <|cite_end|>. We believe that the construction of these codes is of independent interest. We also give an independent, self-contained proof of the needed particular case of the bound on their distance. <|paper_end|>
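For concreteness, here is a small worked instance of the Tamo--Barg construction that the note above shortens (standard parameters from the literature, recalled as background; this is not the note's new shortened construction). Take $q=13$, $n=12$, $r=3$, $k=6$. The multiplicative group $\mathbb{F}_{13}^*$ has a subgroup $H=\{1,5,8,12\}$ of order $r+1=4$, and the evaluation points split into the cosets $A_1=H$, $A_2=2H=\{2,3,10,11\}$, $A_3=4H=\{4,6,7,9\}$, which serve as the repair groups. The polynomial $g(x)=x^4$ is constant on each coset ($g\equiv 1,3,9$ on $A_1,A_2,A_3$ respectively). A message $a=(a_{ij})$ with $i=0,1,2$ and $j=0,1$ is encoded by evaluating
$$ f_a(x)=\sum_{i=0}^{2}\big(a_{i0}+a_{i1}x^4\big)x^i,\qquad \deg f_a\le 6=k+\Big\lceil\frac kr\Big\rceil-2, $$
at the $12$ points. On any repair group, $x^4$ is a constant, so $f_a$ restricts to a polynomial of degree at most $r-1=2$; an erased coordinate is therefore recovered by interpolating the other $r=3$ coordinates of its group. The distance meets the bound \eqref{eq:sb}: $d=n-k-\lceil k/r\rceil+2=12-6-2+2=6$. For a length with $(r+1)\nmid n$, say $n=11$ (so $s=3\ne 1$, matching the theorem's hypothesis), the note obtains an optimal code by shortening such a code, i.e., restricting to codewords that vanish at a prescribed coordinate and deleting that coordinate.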
[ "<|reference_start|> Optimal Linear Codes with a Local-Error-Correction Property: Motivated by applications to distributed storage, Gopalan \\textit{et al} recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation associated with small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such \"local\" parity. In this paper, we extend the results of Gopalan et. al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code. <|reference_end|>", "<|reference_start|> Optimal Locally Repairable Codes via Rank-Metric Codes: This paper presents a new explicit construction for locally repairable codes (LRCs) for distributed storage systems which possess all-symbols locality and maximal possible minimum distance, or equivalently, can tolerate the maximal number of node failures. This construction, based on maximum rank distance (MRD) Gabidulin codes, provides new optimal vector and scalar LRCs. In addition, the paper also discusses mechanisms by which codes obtained using this construction can be used to construct LRCs with efficient repair of failed nodes by combination of LRC with regenerating codes. <|reference_end|>", "<|reference_start|> A family of optimal locally recoverable codes: A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most $r$) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter $r$ is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over $r$ points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data (\"hot data\"). <|reference_end|>", "<|reference_start|> Bounds and Constructions of Codes with Multiple Localities: This paper studies bounds and constructions of locally repairable codes (LRCs) with multiple localities so-called multiple-locality LRCs (ML-LRCs). In the simplest case of two localities some code symbols of an ML-LRC have a certain locality while the remaining code symbols have another one. We extend two bounds, the Singleton and the alphabet-dependent upper bound on the dimension of Cadambe--Mazumdar for LRCs, to the case of ML-LRCs with more than two localities. Furthermore, we construct Singleton-optimal ML-LRCs as well as codes that achieve the extended alphabet-dependent bound. 
We give a family of binary ML-LRCs based on generalized code concatenation that is optimal with respect to the alphabet-dependent bound. <|reference_end|>" ]
[ 1, 2, 8, 14 ]
{"<|cite_1|>": "arxiv-22319", "<|multi_cite_2_1|>": "arxiv-28625", "<|multi_cite_2_2|>": "arxiv-40826", "<|multi_cite_2_3|>": "arxiv-41151", "<|multi_cite_2_4|>": "arxiv-52686", "<|multi_cite_2_5|>": "arxiv-142798", "<|multi_cite_2_6|>": "arxiv-140610", "<|multi_cite_2_7|>": "ss-1287537", "<|cite_3|>": "arxiv-52686", "<|cite_4|>": "arxiv-138240", "<|cite_5|>": "arxiv-52686", "<|cite_6|>": "arxiv-52686", "<|cite_7|>": "arxiv-52686", "<|cite_8|>": "arxiv-52686", "<|cite_9|>": "arxiv-90275", "<|cite_10|>": "arxiv-52686", "<|cite_11|>": "arxiv-52686", "<|cite_12|>": "arxiv-52686", "<|cite_13|>": "arxiv-90275"}
2404.15704
<|paper_start|> Title: Efficient Multi-Model Fusion with Adversarial Complementary Representation Learning Abstract: Efficient Multi-Model Fusion with Adversarial Complementary Representation Learning: Single-model systems often suffer from deficiencies in tasks such as speaker verification (SV) and image classification, relying heavily on partial prior knowledge during decision-making, resulting in suboptimal performance. Although multi-model fusion (MMF) can mitigate some of these issues, redundancy in learned representations may limit improvements. To this end, we propose an adversarial complementary representation learning (ACoRL) framework that enables newly trained models to avoid previously acquired knowledge, allowing each individual component model to learn maximally distinct, complementary representations. We offer three detailed explanations of why this works, and experimental results demonstrate that our method more efficiently improves performance compared to traditional MMF. Furthermore, attribution analysis validates that the model trained under ACoRL acquires more complementary knowledge, highlighting the efficacy of our approach in enhancing efficiency and robustness across tasks. Introduction \label{sec:intro} Multi-model fusion (MMF) has demonstrated great potential to achieve superior overall performance compared to individual models, as distinct component models may contribute complementary capabilities that offset one another's limitations. Although different applications and tasks use quite different architectures and processing methods, commonalities exist in the core logic -- MMF can be applied throughout the model inference pipeline, including input data$ _a $, early, mid, and late stages of the model$ _b $, and final inference output$ _c $ <|cite_start|> (Reference: Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines: ) <|cite_end|>. (a) Data-level fusion can merge datasets with fully or partially overlapping labels at the label level to incorporate more training data <|cite_start|> (Reference: SpeechEQ: Speech Emotion Recognition based on Multi-scale Unified Datasets and Multitask Learning: Speech emotion recognition (SER) has many challenges, but one of the main challenges is that each framework does not have a unified standard. In this paper, we propose SpeechEQ, a framework for unifying SER tasks based on a multi-scale unified metric. This metric can be trained by Multitask Learning (MTL), which includes two emotion recognition tasks of Emotion States Category (EIS) and Emotion Intensity Scale (EIS), and two auxiliary tasks of phoneme recognition and gender recognition. For this framework, we build a Mandarin SER dataset - SpeechEQ Dataset (SEQD). We conducted experiments on the public CASIA and ESD datasets in Mandarin, which exhibit that our method outperforms baseline methods by a relatively large margin, yielding 8.0% and 6.5% improvement in accuracy respectively. Additional experiments on IEMOCAP with four emotion categories (i.e., angry, happy, sad, and neutral) also show the proposed method achieves a state-of-the-art of both weighted accuracy (WA) of 78.16% and unweighted accuracy (UA) of 77.47%.) <|cite_end|>.
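As a minimal illustration of the label-level merging just described (a sketch only: the datasets, file names, and labels below are hypothetical, and this is not the training pipeline of the paper), two emotion datasets with partially overlapping label sets can be pooled over the union of their labels:

```python
# Sketch of data-level fusion: pool two labeled datasets whose label sets
# only partially overlap by re-indexing both over the union of labels.
# All dataset contents here are made up for illustration.

def fuse_datasets(ds_a, ds_b):
    """Each dataset is a list of (example, label) pairs with string labels.
    Returns merged (example, label_index) pairs and the unified label map."""
    unified = sorted({lbl for _, lbl in ds_a} | {lbl for _, lbl in ds_b})
    to_idx = {lbl: i for i, lbl in enumerate(unified)}
    merged = [(x, to_idx[lbl]) for x, lbl in ds_a + ds_b]
    return merged, to_idx

ds_a = [("clip_001.wav", "angry"), ("clip_002.wav", "happy")]    # dataset A
ds_b = [("clip_101.wav", "happy"), ("clip_102.wav", "neutral")]  # dataset B

merged, label_map = fuse_datasets(ds_a, ds_b)
print(label_map)    # {'angry': 0, 'happy': 1, 'neutral': 2}
print(len(merged))  # 4 training examples over the unified label space
```

Under such a unified label space, a single model (or each component model of an MMF system) can be trained on the pooled examples.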
Additionally, multi-task learning enables joint training on datasets with disparate labels using different optimization objectives <|cite_start|> (Reference: SpeechEQ: Speech Emotion Recognition based on Multi-scale Unified Datasets and Multitask Learning: Speech emotion recognition (SER) has many challenges, but one of the main challenges is that each framework does not have a unified standard. In this paper, we propose SpeechEQ, a framework for unifying SER tasks based on a multi-scale unified metric. This metric can be trained by Multitask Learning (MTL), which includes two emotion recognition tasks of Emotion States Category (EIS) and Emotion Intensity Scale (EIS), and two auxiliary tasks of phoneme recognition and gender recognition. For this framework, we build a Mandarin SER dataset - SpeechEQ Dataset (SEQD). We conducted experiments on the public CASIA and ESD datasets in Mandarin, which exhibit that our method outperforms baseline methods by a relatively large margin, yielding 8.0% and 6.5% improvement in accuracy respectively. Additional experiments on IEMOCAP with four emotion categories (i.e., angry, happy, sad, and neutral) also show the proposed method achieves a state-of-the-art of both weighted accuracy (WA) of 78.16% and unweighted accuracy (UA) of 77.47%.) <|cite_end|>. Data augmentation via generative models like generative adversarial networks (GANs) also enables the fusion of real and synthetic data <|cite_start|> (Reference: A Novel Multi-Model Stacking Ensemble Learning Method for Metro Traction Energy Prediction: Metro traction energy prediction is the basis of abnormal monitoring and plays an indispensable role in the planning and operation of the metro system. However, current studies rarely offer a satisfactory prediction performance. To improve the prediction accuracy, a novel prediction method for metro traction energy consumption is proposed based on gradient penalty Wasserstein generative adversarial network (WGAN-GP) and stacking ensemble learning with multi-model integration. Firstly, aiming to collect effective train data, WGAN-GP is used to generate characteristic data of traction energy consumption. Then, various algorithms like BP, SVM, ELM, and XGBoost are employed to preliminarily disclose the relationship between traction energy consumption and characteristic data of traction energy consumption via K-fold verification. Thereafter, the XGBoost algorithm is implemented as the meta model to construct a stacking ensemble learning prediction model. Finally, the proposed method is verified with data from Guangzhou Metro Line 13, and the results substantiate the effectiveness of the prediction model.) <|cite_end|>. (b) Model-level fusion utilizes intermediate representations for fusion. Early fusion leverages multiple engineered features to allow models to fully exploit information in the data <|cite_start|> (Reference: Multi-model fusion metric learning for image set classification: ) <|cite_end|> <|cite_start|> (Reference: Effective Phase Encoding for End-To-End Speaker Verification.: .) <|cite_end|> <|cite_start|> (Reference: A social emotion classification approach using multi-model fusion: ) <|cite_end|> <|cite_start|> (Reference: A review on multi-model medical image fusion: Nowadays, Image fusion seems to be the most promising area in image processing. 
It plays a pivotal role in different applications, namely medical diagnosis, object detection and recognition, navigation, military, civilian surveillance, robotics, satellite imaging for remote sensing. The process of image fusion aims to integrate two or more images into a single image, which consists of more useful information when compared with each of the source images without introducing any artefacts. In this review paper, three aspects are considered: image fusion methods on spatial domain and transform domain methods, Image fusion rules on transform domain method and image fusion metrics. This review includes different applications, including medical image fusion methodologies.) <|cite_end|> <|cite_start|> (Reference: A multi-feature-based multi-model fusion method for state of health estimation of lithium-ion batteries: ) <|cite_end|> <|cite_start|> (Reference: Multi-step short-term wind speed prediction based on integrated multi-model fusion: ) <|cite_end|>. Mid-term fusion processes each type of data through its own network before fusing at some intermediate modeling layer. Late fusion is more flexible, aggregating high-level features of individually trained models optimized under different conditions. Varying training induces slightly different model specializations, so integrating them may improve overall performance <|cite_start|> (Reference: A Novel Multi-Model Stacking Ensemble Learning Method for Metro Traction Energy Prediction: Metro traction energy prediction is the basis of abnormal monitoring and plays an indispensable role in the planning and operation of the metro system. However, current studies rarely offer a satisfactory prediction performance. To improve the prediction accuracy, a novel prediction method for metro traction energy consumption is proposed based on gradient penalty Wasserstein generative adversarial network (WGAN-GP) and stacking ensemble learning with multi-model integration. Firstly, aiming to collect effective train data, WGAN-GP is used to generate characteristic data of traction energy consumption. Then, various algorithms like BP, SVM, ELM, and XGBoost are employed to preliminarily disclose the relationship between traction energy consumption and characteristic data of traction energy consumption via K-fold verification. Thereafter, the XGBoost algorithm is implemented as the meta model to construct a stacking ensemble learning prediction model. Finally, the proposed method is verified with data from Guangzhou Metro Line 13, and the results substantiate the effectiveness of the prediction model.) <|cite_end|> <|cite_start|> (Reference: Adapting Image Super-Resolution State-of-the-arts and Learning Multi-model Ensemble for Video Super-Resolution: Recently, image super-resolution has been widely studied and achieved significant progress by leveraging the power of deep convolutional neural networks. However, there has been limited advancement in video super-resolution (VSR) due to the complex temporal patterns in videos. In this paper, we investigate how to adapt state-of-the-art methods of image super-resolution for video super-resolution. The proposed adapting method is straightforward. The information among successive frames is well exploited, while the overhead on the original image super-resolution method is negligible. Furthermore, we propose a learning-based method to ensemble the outputs from multiple super-resolution models.
Our methods show superior performance and rank second in the NTIRE2019 Video Super-Resolution Challenge Track 1.) <|cite_end|> <|cite_start|> (Reference: A multi model ensemble based deep convolution neural network structure for detection of COVID19: ) <|cite_end|> <|cite_start|> (Reference: Multi-model fusion metric learning for image set classification: ) <|cite_end|>. These models are typically fused by concatenation, addition, multiplication, or attention mechanisms. (c) Output-level fusion commonly integrates the predictions of multiple pre-trained models to achieve improved performance <|cite_start|> (Reference: Deep Multi-Model Fusion for Single-Image Dehazing: This paper presents a deep multi-model fusion network to attentively integrate multiple models to separate layers and boost the performance in single-image dehazing. To do so, we first formulate the attentional feature integration module to maximize the integration of the convolutional neural network (CNN) features at different CNN layers and generate the attentional multi-level integrated features (AMLIF). Then, from the AMLIF, we further predict a haze-free result for an atmospheric scattering model, as well as for four haze-layer separation models, and then fuse the results together to produce the final haze-free image. To evaluate the effectiveness of our method, we compare our network with several state-of-the-art methods on two widely-used dehazing benchmark datasets, as well as on two sets of real-world hazy images. Experimental results demonstrate clear quantitative and qualitative improvements of our method over the state-of-the-arts.) <|cite_end|> <|cite_start|> (Reference: ATMFN: Adaptive-threshold-based multi-model fusion network for compressed face hallucination: Although tremendous strides have been recently made in face hallucination, exiting methods based on a single deep learning framework can hardly satisfactorily provide fine facial features from tiny faces under complex degradation. This article advocates an adaptive-threshold-based multi-model fusion network (ATMFN) for compressed face hallucination, which unifies different deep learning models to take advantages of their respective learning merits. First of all, we construct CNN-, GAN- and RNN-based underlying super-resolvers to produce candidate SR results. Further, the attention subnetwork is proposed to learn the individual fusion weight matrices capturing the most informative components of the candidate SR faces. Particularly, the hyper-parameters of the fusion matrices and the underlying networks are optimized together in an end-to-end manner to drive them for collaborative learning. Finally, a threshold-based fusion and reconstruction module is employed to exploit the candidates’ complementarity and thus generate high-quality face images. Extensive experiments on benchmark face datasets and real-world samples show that our model outperforms the state-of-the-art SR methods in terms of quantitative indicators and visual effects. The code and configurations are released at https://github.com/kuihua/ATMFN.) <|cite_end|> <|cite_start|> (Reference: A feature selection and multi-model fusion-based approach of predicting air quality.: ) <|cite_end|>. This technique is frequently employed in various competitions.
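To make these fusion operators concrete, below is a minimal illustrative sketch, not taken from any of the cited systems: feature-level fusion by concatenation, addition, or element-wise multiplication, and output-level fusion by weighted score averaging. All function names, weights, and toy inputs here are hypothetical.

```python
import numpy as np

def late_feature_fusion(feats, mode="concat"):
    """Fuse per-model embeddings (each shaped [dim]) at the feature level."""
    if mode == "concat":  # concatenation: keeps every component's information
        return np.concatenate(feats)
    if mode == "add":     # element-wise addition: requires matching dims
        return np.sum(feats, axis=0)
    if mode == "mul":     # element-wise multiplication
        return np.prod(feats, axis=0)
    raise ValueError(f"unknown fusion mode: {mode}")

def output_score_fusion(scores, weights=None):
    """Fuse per-model decision scores (e.g., SV trial scores) by weighted averaging."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:  # default to a uniform average over component models
        weights = np.full(len(scores), 1.0 / len(scores))
    return float(np.dot(weights, scores))

# Toy usage with three hypothetical component models.
embeddings = [np.random.randn(16) for _ in range(3)]
fused_embedding = late_feature_fusion(embeddings, mode="concat")  # shape (48,)
fused_score = output_score_fusion([0.8, 0.6, 0.9], weights=[0.5, 0.2, 0.3])
```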
For example, in speaker verification (SV) challenges, fusion of scoring results is often used in the optimization pipeline <|cite_start|> (Reference: VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge: This paper summarises the findings from the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22), which was held in conjunction with INTERSPEECH 2022. The goal of this challenge was to evaluate how well state-of-the-art speaker recognition systems can diarise and recognise speakers from speech obtained "in the wild". The challenge consisted of: (i) the provision of publicly available speaker recognition and diarisation data from YouTube videos together with ground truth annotation and standardised evaluation software; and (ii) a public challenge and hybrid workshop held at INTERSPEECH 2022. We describe the four tracks of our challenge along with the baselines, methods, and results. We conclude with a discussion on the new domain-transfer focus of VoxSRC-22, and on the progression of the challenge from the previous three editions.) <|cite_end|> <|cite_start|> (Reference: The 2021 NIST Speaker Recognition Evaluation: The 2021 Speaker Recognition Evaluation (SRE21) was the latest cycle of the ongoing evaluation series conducted by the U.S. National Institute of Standards and Technology (NIST) since 1996. It was the second large-scale multimodal speaker/person recognition evaluation organized by NIST (the first one being SRE19). Similar to SRE19, it featured two core evaluation tracks, namely audio and audio-visual, as well as an optional visual track. In addition to offering fixed and open training conditions, it also introduced new challenges for the community, thanks to a new multimodal (i.e., audio, video, and selfie images) and multilingual (i.e., with multilingual speakers) corpus, termed WeCanTalk, collected outside North America by the Linguistic Data Consortium (LDC). These challenges included: 1) trials (target and non-target) with enrollment and test segments originating from different domains (i.e., telephony versus video), and 2) trials (target and non-target) with enrollment and test segments spoken in different languages (i.e., cross-lingual trials). This paper presents an overview of SRE21 including the tasks, performance metric, data, evaluation protocol, results and system performance analyses. A total of 23 organizations (forming 15 teams) from academia and industry participated in SRE21 and submitted 158 valid system outputs. Evaluation results indicate: audio-visual fusion produce substantial gains in performance over audio-only or visual-only systems; top performing speaker and face recognition systems exhibited comparable performance under the matched domain conditions present in this evaluation; and, the use of complex neural network architectures (e.g., ResNet) along with angular losses with margin, data augmentation, as well as long duration fine-tuning contributed to notable performance improvements for the audio-only speaker recognition task.) <|cite_end|> <|cite_start|> (Reference: The SpeakIn System for VoxCeleb Speaker Recognition Challange 2021: This report describes our submission to the track 1 and track 2 of the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC 2021). Both track 1 and track 2 share the same speaker verification system, which only uses VoxCeleb2-dev as our training set. This report explores several parts, including data augmentation, network structures, domain-based large margin fine-tuning, and back-end refinement. 
Our system is a fusion of 9 models and achieves first place in these two tracks of VoxSRC 2021. The minDCF of our submission is 0.1034, and the corresponding EER is 1.8460%.) <|cite_end|>, deepfake detection <|cite_start|> (Reference: Audio Deepfake Detection: A Survey: Audio deepfake detection is an emerging active topic. A growing number of literatures have aimed to study deepfake detection algorithms and achieved effective performance, the problem of which is far from being solved. Although there are some review literatures, there has been no comprehensive survey that provides researchers with a systematic overview of these developments with a unified evaluation. Accordingly, in this survey paper, we first highlight the key differences across various types of deepfake audio, then outline and analyse competitions, datasets, features, classifications, and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments and major challenges are discussed. In addition, we perform a unified comparison of representative features and classifiers on ASVspoof 2021, ADD 2023 and In-the-Wild datasets for audio deepfake detection, respectively. The survey shows that future research should address the lack of large scale datasets in the wild, poor generalization of existing detection methods to unknown fake attacks, as well as interpretability of detection results.) <|cite_end|>, and many other challenges <|cite_start|> (Reference: Early box office prediction in China’s film market based on a stacking fusion model: ) <|cite_end|>. However, although MMF is effective in improving overall system performance, it has some limitations. The constituent models used in MMF are often very similar in nature, which limits the effectiveness of model integration. Also, integrating too many models significantly increases computational costs, making it infeasible in resource-constrained environments. A number of related methods attempt to train a new model that avoids previously learned knowledge, though they vary in approach and purpose. Shen et al. <|cite_start|> (Reference: MEAL: Multi-Model Ensemble via Adversarial Learning: Often the best performing deep neural models are ensembles of multiple base-level networks. Unfortunately, the space required to store these many networks, and the time required to execute them at test-time, prohibits their use in applications where test sets are large (e.g., ImageNet). In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN. In order to distill diverse knowledge from different trained (teacher) models, we propose to use adversarial-based learning strategy where we define a block-wise training loss to guide and optimize the predefined student network to recover the knowledge in teacher models, and to promote the discriminator network to distinguish teacher vs. student features simultaneously.
The proposed ensemble method (MEAL) of transferring distilled knowledge with adversarial learning exhibits three important advantages: (1) the student network that learns the distilled knowledge with discriminators is optimized better than the original model; (2) fast inference is realized by a single forward pass, while the performance is even better than traditional ensembles from multi-original models; (3) the student network can learn the distilled knowledge from a teacher model that has arbitrary structures. Extensive experiments on CIFAR-10/100, SVHN and ImageNet datasets demonstrate the effectiveness of our MEAL method. On ImageNet, our ResNet-50 based MEAL achieves top-1/5 21.79%/5.99% val error, which outperforms the original model by 2.06%/1.14%. Code and models are available at: https://github.com/AaronHeee/MEAL) <|cite_end|> proposed an adversarial-learning-based model distillation method, which aims to have the student model learn the teacher network's knowledge without simply replicating the representations of the corresponding teacher layers. However, the problem with this framework is that the student learns only from the teacher network; incorrect knowledge inherited from the teacher is never corrected by ground-truth labels, which may amplify that incorrect knowledge. Nam et al. <|cite_start|> (Reference: Diversity Matters When Learning From Ensembles: Deep ensembles excel in large-scale image classification tasks both in terms of prediction accuracy and calibration. Despite being simple to train, the computation and memory cost of deep ensembles limits their practicability. While some recent works propose to distill an ensemble model into a single model to reduce such costs, there is still a performance gap between the ensemble and distilled models. We propose a simple approach for reducing this gap, i.e., making the distilled performance close to the full ensemble. Our key assumption is that a distilled model should absorb as much function diversity inside the ensemble as possible. We first empirically show that the typical distillation procedure does not effectively transfer such diversity, especially for complex models that achieve near-zero training error. To fix this, we propose a perturbation strategy for distillation that reveals diversity by seeking inputs for which ensemble member outputs disagree. We empirically show that a model distilled with such perturbed samples indeed exhibits enhanced diversity, leading to improved performance.) <|cite_end|> used a perturbation strategy so that the student model absorbs as much knowledge as possible from the individual teacher models, thereby distilling a single model that performs like the full ensemble. Although this work distills as much of the ensemble's diverse knowledge as possible, the distilled model's performance only approaches that of the original ensemble and does not surpass it. Zhang et al. <|cite_start|> (Reference: Adversarial Complementary Learning for Weakly Supervised Object Localization: In this work, we propose Adversarial Complementary Learning (ACoL) to automatically localize integral objects of semantic interest with weak supervision. We first mathematically prove that class localization maps can be obtained by directly selecting the class-specific feature maps of the last convolutional layer, which paves a simple way to identify object regions.
We then present a simple network architecture including two parallel-classifiers for object localization. Specifically, we leverage one classification branch to dynamically localize some discriminative object regions during the forward pass. Although it is usually responsive to sparse parts of the target objects, this classifier can drive the counterpart classifier to discover new and complementary object regions by erasing its discovered regions from the feature maps. With such an adversarial learning, the two parallel-classifiers are forced to leverage complementary object regions for classification and can finally generate integral object localization together. The merits of ACoL are mainly two-fold: 1) it can be trained in an end-to-end manner; 2) dynamically erasing enables the counterpart classifier to discover complementary object regions more effectively. We demonstrate the superiority of our ACoL approach in a variety of experiments. In particular, the Top-1 localization error rate on the ILSVRC dataset is 45.14%, which is the new state-of-the-art.) <|cite_end|>, most similar to our idea, proposed an adversarial-based object localization framework that avoids regions learned by previous models. However, this approach is only applicable to the object localization task, cannot be generalized to other tasks, and cannot be applied to more than two models. Much other research on ensemble learning also informed the formulation of our idea <|cite_start|> (Reference: Ensemble deep learning in bioinformatics: ) <|cite_end|> <|cite_start|> (Reference: Ensemble deep learning: A review: Ensemble learning combines several individual models to obtain better generalization performance.
Currently, deep learning architectures are showing better performance compared to the shallow or traditional models. Deep ensemble learning models combine the advantages of both the deep learning models as well as the ensemble learning such that the final model has better generalization performance. This paper reviews the state-of-art deep ensemble models and hence serves as an extensive summary for the researchers. The ensemble models are broadly categorised into bagging, boosting, stacking, negative correlation based deep ensemble models, explicit/implicit ensembles, homogeneous/heterogeneous ensemble, decision fusion strategies based deep ensemble models. Applications of deep ensemble models in different domains are also briefly discussed. Finally, we conclude this paper with some potential future research directions.) <|cite_end|> <|cite_start|> (Reference: A survey on ensemble learning: ) <|cite_end|> <|cite_start|> (Reference: A comprehensive review on ensemble deep learning: Opportunities and challenges: ) <|cite_end|> <|cite_start|> (Reference: Inverse Adversarial Diversity Learning for Network Ensemble: Network ensemble aims to obtain better results by aggregating the predictions of multiple weak networks, in which how to keep the diversity of different networks plays a critical role in the training process. Many existing approaches keep this kind of diversity either by simply using different network initializations or data partitions, which often requires repeated attempts to pursue a relatively high performance. In this article, we propose a novel inverse adversarial diversity learning (IADL) method to learn a simple yet effective ensemble regime, which can be easily implemented in the following two steps. First, we take each weak network as a generator and design a discriminator to judge the difference between the features extracted by different weak networks. Second, we present an inverse adversarial diversity constraint to push the discriminator to cheat generators that all the resulting features of the same image are too similar to distinguish each other. As a result, diverse features will be extracted by these weak networks through a min–max optimization. What is more, our method can be applied to a variety of tasks, such as image classification and image retrieval, by applying a multitask learning objective function to train all these weak networks in an end-to-end manner. We conduct extensive experiments on the CIFAR-10, CIFAR-100, CUB200-2011, and CARS196 datasets, in which the results show that our method significantly outperforms most of the state-of-the-art approaches.) <|cite_end|>. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{images/acorl.pdf} \caption{Overview of the ACoRL framework.} \label{fig:acorl} \end{figure*} Also, in machine learning, bagging and boosting are popular ensemble methods that improve system performance. Bagging trains multiple models on different bootstrap samples of the training data to reduce variance. Boosting sequentially fits models to emphasize previously misclassified instances, thereby reducing bias. However, bagging's random resampling still does not resolve the training-redundancy problem, and boosting's sequential approach limits parallelization and reduces efficiency. Additionally, some tasks are not amenable to boosting's error-focused learning. Fully addressing the challenges of MMF therefore requires a broader analytical lens.
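As a concrete point of reference for the bagging/boosting contrast above, here is a minimal sketch using scikit-learn on a synthetic dataset; it is purely illustrative, unrelated to the ACoRL experiments, and the hyperparameters are arbitrary assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: independent models on bootstrap resamples (parallelizable; reduces variance).
bagging = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                            n_estimators=25, random_state=0)
# Boosting: sequential models that re-weight misclassified samples (reduces bias).
boosting = AdaBoostClassifier(n_estimators=25, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```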
Let's start with a story: when a person or group seeks allies to maximize their benefit, they tend to ally with partners who differ from them as much as possible in knowledge, vision, and ability. However, if all candidates are free to acquire knowledge from a single shared source, their acquired knowledge tends to closely resemble one another's, and allying among them risks substantial redundancy. Likewise, fusing many similar models yields redundant representations while demanding far more computational resources. \vspace{0.5em} \noindent Therefore, we make the following contributions: \begin{itemize} \item We propose an adversarial complementary representation learning (ACoRL) framework that promotes diversity during multi-model fusion by enabling models to avoid previously acquired knowledge and learn distinct representations. \item We theoretically prove and explain how ACoRL can improve the performance of multi-model fusion (MMF) by extending the range of representations in the latent space. \item Experimental results and attribution analysis validate that ACoRL can leverage more complementary knowledge, strengthening its ability to improve model performance across tasks. \end{itemize} <|paper_end|>
[ "<|reference_start|> Multi-model fusion metric learning for image set classification: <|reference_end|>", "<|reference_start|> VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge: This paper summarises the findings from the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22), which was held in conjunction with INTERSPEECH 2022. The goal of this challenge was to evaluate how well state-of-the-art speaker recognition systems can diarise and recognise speakers from speech obtained \"in the wild\". The challenge consisted of: (i) the provision of publicly available speaker recognition and diarisation data from YouTube videos together with ground truth annotation and standardised evaluation software; and (ii) a public challenge and hybrid workshop held at INTERSPEECH 2022. We describe the four tracks of our challenge along with the baselines, methods, and results. We conclude with a discussion on the new domain-transfer focus of VoxSRC-22, and on the progression of the challenge from the previous three editions. <|reference_end|>", "<|reference_start|> The SpeakIn System for VoxCeleb Speaker Recognition Challange 2021: This report describes our submission to the track 1 and track 2 of the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC 2021). Both track 1 and track 2 share the same speaker verification system, which only uses VoxCeleb2-dev as our training set. This report explores several parts, including data augmentation, network structures, domain-based large margin fine-tuning, and back-end refinement. Our system is a fusion of 9 models and achieves first place in these two tracks of VoxSRC 2021. The minDCF of our submission is 0.1034, and the corresponding EER is 1.8460%. <|reference_end|>", "<|reference_start|> A survey on ensemble learning: <|reference_end|>" ]
[ 13, 17, 19, 27 ]
{"<|cite_1|>": "ss-1535624", "<|cite_2|>": "arxiv-429795", "<|cite_3|>": "arxiv-429795", "<|cite_4|>": "ss-1873670", "<|multi_cite_5_1|>": "ss-1873671", "<|multi_cite_5_2|>": "ss-1873672", "<|multi_cite_5_3|>": "ss-2284284", "<|multi_cite_5_4|>": "ss-1873673", "<|multi_cite_5_5|>": "ss-883725", "<|multi_cite_5_6|>": "ss-1873674", "<|multi_cite_6_1|>": "ss-1873670", "<|multi_cite_6_2|>": "arxiv-203040", "<|multi_cite_6_3|>": "ss-757062", "<|multi_cite_6_4|>": "ss-1873671", "<|multi_cite_7_1|>": "ss-692293", "<|multi_cite_7_2|>": "ss-753550", "<|multi_cite_7_3|>": "ss-1873675", "<|multi_cite_8_1|>": "arxiv-482865", "<|multi_cite_8_2|>": "arxiv-414651", "<|multi_cite_8_3|>": "arxiv-364838", "<|cite_9|>": "arxiv-534818", "<|cite_10|>": "ss-1873676", "<|cite_11|>": "arxiv-183496", "<|cite_12|>": "arxiv-377128", "<|cite_13|>": "arxiv-155507", "<|cite_14|>": "arxiv-155507", "<|multi_cite_15_1|>": "ss-930209", "<|multi_cite_15_2|>": "arxiv-332464", "<|multi_cite_15_3|>": "ss-1275424", "<|multi_cite_15_4|>": "ss-2453014", "<|multi_cite_15_5|>": "ss-2478921"}
1907.10388
<|paper_start|> Title: Higher-Order Function Networks for Learning Composable 3D Object Representations Abstract: Higher-Order Function Networks for Learning Composable 3D Object Representations: We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second 'mapping' network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters. Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects. Introduction This paper is primarily concerned with the problem of learning compact 3D object representations and estimating them from images. If we consider an object to be a continuous surface in $\mathbb{R}^3$, it is not straightforward to directly represent this infinite set of points in memory. In working around this problem, many learning-based approaches to 3D object representation suffer from problems related to memory usage, computational burden, or sampling efficiency. Nonetheless, neural networks with tens of millions of parameters have proven effective tools for learning expressive representations of geometric data. In this work, we show that object geometries can be encoded into neural networks with thousands, rather than millions, of parameters with little or no loss in reconstruction quality. To this end, we propose an object representation that encodes an object as a function that maps points from a canonical space, such as the unit sphere, to the set of points defining the object. In this work, the function is approximated with a small multilayer perceptron. The parameters of this function are estimated by a `higher order' encoder network, thus motivating the name for our method: \textit{Higher-Order Function networks (\method{})}. This procedure is shown in Figure 1. There are two key ideas that distinguish HOF from prior work in 3D object representation learning: fast-weights object encoding and interpolation through function composition. \textit{(1) Fast-weights object encoding:} `Fast-weights' in this context generally refers to methods that use network weights and biases that are not fixed; at least some of these parameters are estimated on a per-sample basis. Our fast-weights approach stands in contrast to existing methods which encode objects as vector-valued inputs to a decoder network with fixed weights. Empirically, we find that our approach enables a dramatic reduction (two orders of magnitude) in the size of the \FuncName{} network compared to the decoder networks employed by other methods. \textit{(2) Interpolation through function composition:} Our functional formulation allows for interpolation between inputs by composing the roots of our reconstruction functions. 
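The fast-weights mechanism can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' actual implementation: an encoder head regresses the flat parameter vector of a tiny mapping MLP $f_\theta$, which is then applied to points sampled from the unit sphere. The names (`HyperEncoder`, `mapping_fn`) and all layer sizes are assumptions chosen for brevity.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, HID = 128, 32  # hypothetical sizes; the paper's architecture may differ
# Parameter shapes of the mapping network f_theta: R^3 -> R^3, one hidden layer.
SHAPES = [("w1", (HID, 3)), ("b1", (HID,)), ("w2", (3, HID)), ("b2", (3,))]
N_PARAMS = sum(math.prod(s) for _, s in SHAPES)  # only a few hundred weights here

class HyperEncoder(nn.Module):
    """Regresses the mapping network's parameters from an image/shape embedding."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(LATENT, N_PARAMS)

    def forward(self, feat):
        theta, params, i = self.head(feat), {}, 0
        for name, shape in SHAPES:  # slice the flat vector into layer tensors
            n = math.prod(shape)
            params[name] = theta[i:i + n].view(*shape)
            i += n
        return params

def mapping_fn(x, p):
    """Apply f_theta to points x of shape (n, 3) sampled from the unit sphere."""
    h = torch.relu(x @ p["w1"].t() + p["b1"])
    return h @ p["w2"].t() + p["b2"]

enc = HyperEncoder()
feat = torch.randn(LATENT)                     # stand-in for an encoder feature
x = F.normalize(torch.randn(1024, 3), dim=1)   # rough unit-sphere samples
points = mapping_fn(x, enc(feat))              # (1024, 3) reconstructed points
```

Because the regressed parameters stay inside the computation graph, gradients flow back into the encoder, so under this sketch the pair could be trained end-to-end with a set-to-set reconstruction loss such as Chamfer distance.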
We demonstrate that the functional representation learned by HOF provides a rich latent space in which we can `interpolate' between objects, producing new, coherent objects sharing properties of the `parent' objects. In order to position HOF among other methods for 3D reconstruction, we first define a taxonomy of existing work and show that HOF provides a generalization of current best-performing methods. Afterwards, we demonstrate the effectiveness of \method{} on the task of 3D reconstruction from an RGB image using a subset of the ShapeNet dataset <|cite_start|> (Reference: ShapeNet: An Information-Rich 3D Model Repository: We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.) <|cite_end|>. The results, reported in Tables~\ref{tab:mainresults} and~\ref{tab:what3d-results} and Figure~\ref{fig:recon-compare}, show state-of-the-art reconstruction quality using orders of magnitude fewer parameters than other methods. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{figures/figure1conv.png} \includegraphics[width=0.92\textwidth]{figures/plane_slices/nonreg2.png} \caption{\textbf{Top}: Overview of HOF. The encoder network $\Enc$ encodes the geometry of the object pictured in each input image directly into the parameters of the mapping function $\DecNoParam$, which produces a reconstruction as a transformation of a canonical object (here, the unit sphere). \textbf{Bottom}: We visualize the transformation $f_\theta$ by showing various subsets of the inputs $X$ and their corresponding mapped locations in red and green, respectively. In each frame, light gray shows the rest of $X$ and dark gray shows the rest of the reconstructed object.} \label{fig:overview_fig} \end{figure} Related Work The selection of object representation is a crucial design choice for methods addressing 3D reconstruction. Voxel-based approaches <|cite_start|> (Reference: 3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction: Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. 
Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework i) outperforms the state-of-the-art methods for single view reconstruction, and ii) enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).) <|cite_end|> <|cite_start|> (Reference: Hierarchical Surface Prediction for 3D Object Reconstruction: Recently, Convolutional Neural Networks have shown promising results for 3D geometry prediction. They can make predictions from very little input data such as a single color image. A major limitation of such approaches is that they only predict a coarse resolution voxel grid, which does not capture the surface of the objects well. We propose a general framework, called hierarchical surface prediction (HSP), which facilitates prediction of high resolution voxel grids. The main insight is that it is sufficient to predict high resolution voxels around the predicted surfaces. The exterior and interior of the objects can be represented with coarse resolution voxels. Our approach is not dependent on a specific input type. We show results for geometry prediction from color images, depth images and shape completion from partial voxel grids. Our analysis shows that our high resolution predictions are more accurate than low resolution predictions.) <|cite_end|> typically use a uniform discretization of $\mathbb{R}^3$ in order to extend highly successful convolutional neural network (CNN) based approaches to three dimensions. However, the inherent sparsity of surfaces in 3D space makes voxelization inefficient in terms of both memory and computation time. Partition-based approaches such as octrees <|cite_start|> (Reference: Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs: We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.) <|cite_end|> <|cite_start|> (Reference: OctNet: Learning Deep 3D Representations at High Resolutions: We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution.
We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.) <|cite_end|> address the space efficiency shortcomings of voxelization, but they are tedious to implement and more computationally demanding to query. Graph-based models such as meshes <|cite_start|> (Reference: Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images: We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Mesh R-CNN: Rapid advances in 2D perception have led to systems that accurately detect objects in real-world images. However, these systems make predictions in 2D, ignoring the 3D structure of the world. Concurrently, advances in 3D shape prediction have mostly focused on synthetic benchmarks and isolated objects. We unify advances in these two areas. We propose a system that detects objects in real-world images and produces a triangle mesh giving the full 3D shape of each detected object. Our system, called Mesh R-CNN, augments Mask R-CNN with a mesh prediction branch that outputs meshes with varying topological structure by first predicting coarse voxel representations which are converted to meshes and refined with a graph convolution network operating over the mesh's vertices and edges. We validate our mesh prediction branch on ShapeNet, where we outperform prior work on single-image shape prediction. We then deploy our full Mesh R-CNN system on Pix3D, where we jointly detect objects and predict their 3D shapes.) <|cite_end|> <|cite_start|> (Reference: Customers' joining behavior in an unobservable GI/Geo/m queue: This paper studies the equilibrium balking strategies of impatient customers in a discrete-time multi-server renewal input queue with identical servers. Arriving customers are unaware of the number of customers in the queue before making a decision whether to join or balk the queue. We model the decision-making process as a non-cooperative symmetric game and derive the Nash equilibrium mixed strategy and optimal social strategies. The stationary system-length distributions at different observation epochs under the equilibrium structure are obtained using the roots method. Finally, some numerical examples are presented to show the effect of the information level together with system parameters on the equilibrium and social behavior of impatient customers.) <|cite_end|> <|cite_start|> (Reference: MeshCNN: A Network with an Edge: Polygonal meshes provide an efficient representation for 3D shapes. 
They explicitly capture both shape surface and topology, and leverage non-uniformity to represent large flat regions as well as sharp, intricate features. This non-uniformity and irregularity, however, inhibits mesh analysis efforts using neural networks that combine convolution and pooling operations. In this paper, we utilize the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes. Analogous to classic CNNs, MeshCNN combines specialized convolution and pooling layers that operate on the mesh edges, by leveraging their intrinsic geodesic connections. Convolutions are applied on edges and the four edges of their incident triangles, and pooling is applied via an edge collapse operation that retains surface topology, thereby, generating new mesh connectivity for the subsequent convolutions. MeshCNN learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones. We demonstrate the effectiveness of our task-driven pooling on various learning tasks applied to 3D meshes.) <|cite_end|> provide a compact representation for capturing topology and surface-level information; however, their irregular structure makes them harder to learn. Point set representations, discrete (and typically finite) subsets of the continuous geometric object, have also gained popularity due to the fact that they retain the simplicity of voxel-based methods while eliminating their storage and computational burden <|cite_start|> (Reference: PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation: Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.) <|cite_end|> <|cite_start|> (Reference: A Point Set Generation Network for 3D Object Reconstruction from a Single Image: Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output -- point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective.
Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3d reconstruction benchmarks; but it also shows a strong performance for 3d shape completion and promising ability in making multiple plausible predictions.) <|cite_end|> <|cite_start|> (Reference: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.) <|cite_end|> <|cite_start|> (Reference: FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation: Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet) <|cite_end|> <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. 
DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|>. The PointNet architecture <|cite_start|> (Reference: PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation: Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.) <|cite_end|> <|cite_start|> (Reference: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.) <|cite_end|> was an architectural milestone that made manipulating point sets with deep learning methods a competitive alternative to earlier approaches; however, PointNet is concerned with \textit{processing}, rather than \textit{generating}, point clouds. Further, while point clouds are more flexible than voxels in terms of information density, it is still not obvious how to adapt them to the task of producing arbitrary- or varied-resolution predictions. 
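To see the resolution issue concretely, the following toy comparison, our own illustration rather than anything from the cited works, contrasts a decoder head that regresses all $N$ points at once with a fixed-size network that maps individual sampled points; it foreshadows the direct-decoding versus contextual-mapping distinction drawn below. All sizes are arbitrary assumptions.

```python
import torch.nn as nn

LATENT = 128  # hypothetical latent-code size

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Direct decoding: one output unit per coordinate, so parameters grow with N.
def direct_decoder(n_points: int) -> nn.Module:
    return nn.Linear(LATENT, 3 * n_points)

# Point-wise mapping: decode each sampled point conditioned on the latent
# code, so the same fixed weights serve any output resolution.
point_mapper = nn.Sequential(nn.Linear(LATENT + 3, 64), nn.ReLU(), nn.Linear(64, 3))

for n in (1024, 16384):
    print(f"N={n}: direct={count_params(direct_decoder(n)):,} "
          f"mapper={count_params(point_mapper):,}")
```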
Independently regressing each point in the point set requires additional parameters for each additional point <|cite_start|> (Reference: A Point Set Generation Network for 3D Object Reconstruction from a Single Image: Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output -- point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3d reconstruction benchmarks; but it also shows a strong performance for 3d shape completion and promising ability in making multiple plausible predictions.) <|cite_end|> <|cite_start|> (Reference: Learning Representations and Generative Models for 3D Point Clouds: Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.) <|cite_end|>, which is an undesirable property if the goal is high-resolution point clouds. Many current approaches to representation and reconstruction follow an encoder-decoder paradigm, where the encoder and decoder both have learned weights that are fixed at the end of training. An image or set of 3D points is encoded as a latent vector `codeword' either with a learned encoder as in <|cite_start|> (Reference: FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation: Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. 
Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet) <|cite_end|> <|cite_start|> (Reference: Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction: Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.) <|cite_end|> <|cite_start|> (Reference: Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision: Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects is still an open problem due to the high dimensionality of the data and many factors of variation involved. In this work, we investigate the task of single-view 3D object reconstruction from a learning agent's perspective. We formulate the learning process as an interaction between 3D and 2D representations and propose an encoder-decoder network with a novel projection loss defined by the perspective transformation. More importantly, the projection loss enables the unsupervised learning using 2D observation without explicit 3D supervision. We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes. Results show superior performance and better generalization ability for 3D object reconstruction when the projection loss is involved.) <|cite_end|> or by direct optimization of the latent vector itself with respect to a reconstruction-based objective function as in <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. 
In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|>. Afterwards, the latent code is decoded by a learned decoder into a reconstruction of the desired object by one of two methods, which we call \textit{direct decoding} and \textit{contextual mapping}. Direct decoding methods directly map the latent code into a fixed set of points <|cite_start|> (Reference: A Point Set Generation Network for 3D Object Reconstruction from a Single Image: Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output -- point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3d reconstruction benchmarks; but it also shows a strong performance for 3d shape completion and promising ability in making multiple plausible predictions.) <|cite_end|> <|cite_start|> (Reference: 3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction: Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. 
Our extensive experimental analysis shows that our reconstruction framework i) outperforms the state-of-the-art methods for single view reconstruction, and ii) enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).) <|cite_end|> <|cite_start|> (Reference: Deep Level Sets: Implicit Surface Representations for 3D Shape Inference: Existing 3D surface representation approaches are unable to accurately classify pixels and their orientation lying on the boundary of an object. Thus resulting in coarse representations which usually require post-processing steps to extract 3D surface meshes. To overcome this limitation, we propose an end-to-end trainable model that directly predicts implicit surface representations of arbitrary topology by optimising a novel geometric loss function. Specifically, we propose to represent the output as an oriented level set of a continuous embedding function, and incorporate this in a deep end-to-end learning framework by introducing a variational shape inference formulation. We investigate the benefits of our approach on the task of 3D surface prediction and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems.) <|cite_end|> <|cite_start|> (Reference: Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction: Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.) <|cite_end|>; contextual mapping methods map the latent code into a function that can be sampled or otherwise manipulated to acquire a reconstruction <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. 
DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|> <|cite_start|> (Reference: FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation: Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet) <|cite_end|> <|cite_start|> (Reference: Occupancy Networks: Learning 3D Reconstruction in Function Space: With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.) 
<|cite_end|> <|cite_start|> (Reference: Deep Level Sets: Implicit Surface Representations for 3D Shape Inference: Existing 3D surface representation approaches are unable to accurately classify pixels and their orientation lying on the boundary of an object. Thus resulting in coarse representations which usually require post-processing steps to extract 3D surface meshes. To overcome this limitation, we propose an end-to-end trainable model that directly predicts implicit surface representations of arbitrary topology by optimising a novel geometric loss function. Specifically, we propose to represent the output as an oriented level set of a continuous embedding function, and incorporate this in a deep end-to-end learning framework by introducing a variational shape inference formulation. We investigate the benefits of our approach on the task of 3D surface prediction and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems.) <|cite_end|>. Direct decoding methods generally suffer from the limitation that their predictions are of fixed resolution; they cannot be sampled more or less precisely. With contextual mapping methods, it is possible in principle to sample the object to arbitrarily high resolution with the correct decoder function. However, sampling can impose a significant computational burden for some contextual mapping approaches, such as those proposed by <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|> and <|cite_start|> (Reference: Deep Level Sets: Implicit Surface Representations for 3D Shape Inference: Existing 3D surface representation approaches are unable to accurately classify pixels and their orientation lying on the boundary of an object. Thus resulting in coarse representations which usually require post-processing steps to extract 3D surface meshes. To overcome this limitation, we propose an end-to-end trainable model that directly predicts implicit surface representations of arbitrary topology by optimising a novel geometric loss function.
Specifically, we propose to represent the output as an oriented level set of a continuous embedding function, and incorporate this in a deep end-to-end learning framework by introducing a variational shape inference formulation. We investigate the benefits of our approach on the task of 3D surface prediction and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems.) <|cite_end|>. Another hurdle is the need for post-processing, such as applying the Marching Cubes algorithm of Lorensen and Cline. We call contextual mapping approaches that encode context by concatenating a duplicate of a latent context vector with each input \textit{latent vector concatenation (LVC)} methods. In particular, we compare with LVC architectures used in FoldingNet <|cite_start|> (Reference: FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation: Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet) <|cite_end|> and DeepSDF <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|>.
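To make the LVC construction concrete, the sketch below shows the pattern shared, at this level of abstraction, by the decoders just mentioned: the latent codeword is duplicated and concatenated with every query input (a 2D grid point for FoldingNet, a 3D query location for DeepSDF) before a shared MLP. The layer sizes and class names are illustrative placeholders, not the published architectures of the cited works.
\begin{verbatim}
import torch
import torch.nn as nn

class LVCDecoder(nn.Module):
    """Generic latent-vector-concatenation decoder: the same latent code z
    is concatenated with every query input before a shared MLP."""
    def __init__(self, latent_dim=512, query_dim=3, hidden=256, out_dim=1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + query_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, z, queries):
        # z: (latent_dim,) codeword; queries: (n, query_dim) sample locations.
        z_tiled = z.unsqueeze(0).expand(queries.shape[0], -1)
        return self.mlp(torch.cat([z_tiled, queries], dim=-1))

# Sampling at a different resolution only requires more query points:
decoder = LVCDecoder()
z = torch.randn(512)
coarse = decoder(z, torch.rand(1024, 3))    # 1k query locations
fine = decoder(z, torch.rand(65536, 3))     # 64k locations, same weights
\end{verbatim}
Note that every query pays for a full MLP forward pass in this pattern, which illustrates the sampling burden noted above for some contextual mapping approaches.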
HOF is a contextual mapping method that distinguishes itself from other methods within this class through its approach to representing the mapping function: HOF uses one neural network to estimate the weights of another. Conceptually related methods have been previously studied under nomenclature such as the `fast-weight' paradigm <|cite_start|> (Reference: {Learning to Control Fast-weight Memories: an Alternative to Dynamic Recurrent Networks: Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.) <|cite_end|> <|cite_start|> (Reference: Dynamic Filter Networks: In a traditional convolutional layer, the learned filters stay fixed after training. In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation.) <|cite_end|> <|cite_start|> (Reference: A Dynamic Convolutional Layer for short rangeweather prediction: We present a new deep network layer called “Dynamic Convolutional Layer” which is a generalization of the convolutional layer. The conventional convolutional layer uses filters that are learned during training and are held constant during testing. In contrast, the dynamic convolutional layer uses filters that will vary from input to input during testing. This is achieved by learning a function that maps the input to the filters. We apply the dynamic convolutional layer to the application of short range weather prediction and show performance improvements compared to other baselines.) <|cite_end|> <|cite_start|> (Reference: Conditioned Regression Models for Non-blind Single Image Super-Resolution: Single image super-resolution is an important task in the field of computer vision and finds many practical applications. Current state-of-the-art methods typically rely on machine learning algorithms to infer a mapping from low-to high-resolution images. 
These methods use a single fixed blur kernel during training and, consequently, assume the exact same kernel underlying the image formation process for all test images. However, this setting is not realistic for practical applications, because the blur is typically different for each test image. In this paper, we loosen this restrictive constraint and propose conditioned regression models (including convolutional neural networks and random forests) that can effectively exploit the additional kernel information during both, training and inference. This allows for training a single model, while previous methods need to be re-trained for every blur kernel individually to achieve good results, which we demonstrate in our evaluations. We also empirically show that the proposed conditioned regression models (i) can effectively handle scenarios where the blur kernel is different for each image and (ii) outperform related approaches trained for only a single kernel.) <|cite_end|> and more recently `hypernetworks' <|cite_start|> (Reference: HyperNetworks: This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.) <|cite_end|>. However, the work by <|cite_start|> (Reference: {Learning to Control Fast-weight Memories: an Alternative to Dynamic Recurrent Networks: Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.) <|cite_end|> deals with encoding memories in sequence learning tasks. <|cite_start|> (Reference: HyperNetworks: This work explores hypernetworks: an approach of using a one network, also known as a hypernetwork, to generate the weights for another network. 
Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters.) <|cite_end|> suggest that estimating weights of one network with another might lead to improvements in parameter-efficiency. However, this work does not leverage the key insight of using network parameters that are estimated \textit{per sample} in vision tasks. \label{sec:taxonomy} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/reconstruction/comparison_camera_ready.png} \caption{From left to right: Input RGB image, ground truth point cloud, reconstruction from FoldingNet <|cite_start|> (Reference: FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation: Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet) <|cite_end|>, reconstruction from DeepSDF <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. 
DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|>, and our method.} \label{fig:recon-compare} \end{figure} <|paper_end|>
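The per-sample weight-estimation idea introduced above admits a minimal sketch: one network (the hypernetwork) regresses the flat parameter vector of a second, tiny MLP, which is then evaluated on the query points. This is an illustration of the general fast-weight/hypernetwork pattern, not HOF's published architecture; all sizes and names below are placeholders.
\begin{verbatim}
import math
import torch
import torch.nn as nn

class HyperDecoder(nn.Module):
    """A hypernetwork maps a per-sample latent code to the weights of a
    small target MLP, which is then applied to the query points."""
    def __init__(self, latent_dim=512, query_dim=3, hidden=64, out_dim=3):
        super().__init__()
        # Shapes of the target network's parameters: two linear layers.
        self.shapes = [(hidden, query_dim), (hidden,),
                       (out_dim, hidden), (out_dim,)]
        n_params = sum(math.prod(s) for s in self.shapes)
        self.hyper = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, n_params))

    def forward(self, z, queries):
        flat, params, i = self.hyper(z), [], 0
        for s in self.shapes:                 # unpack the flat weight vector
            n = math.prod(s)
            params.append(flat[i:i + n].view(*s))
            i += n
        w1, b1, w2, b2 = params
        h = torch.relu(queries @ w1.T + b1)   # weights are per-sample
        return h @ w2.T + b2                  # (n, out_dim) point predictions
\end{verbatim}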
[ "<|reference_start|> Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs: We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image. <|reference_end|>", "<|reference_start|> Customers' joining behavior in an unobservable GI/Geo/m queue: This paper studies the equilibrium balking strategies of impatient customers in a discrete-time multi-server renewal input queue with identical servers. Arriving customers are unaware of the number of customers in the queue before making a decision whether to join or balk the queue. We model the decision-making process as a non-cooperative symmetric game and derive the Nash equilibrium mixed strategy and optimal social strategies. The stationary system-length distributions at different observation epochs under the equilibrium structure are obtained using the roots method. Finally, some numerical examples are presented to show the effect of the information level together with system parameters on the equilibrium and social behavior of impatient customers. <|reference_end|>", "<|reference_start|> Deep Level Sets: Implicit Surface Representations for 3D Shape Inference: Existing 3D surface representation approaches are unable to accurately classify pixels and their orientation lying on the boundary of an object. Thus resulting in coarse representations which usually require post-processing steps to extract 3D surface meshes. To overcome this limitation, we propose an end-to-end trainable model that directly predicts implicit surface representations of arbitrary topology by optimising a novel geometric loss function. Specifically, we propose to represent the output as an oriented level set of a continuous embedding function, and incorporate this in a deep end-to-end learning framework by introducing a variational shape inference formulation. We investigate the benefits of our approach on the task of 3D surface prediction and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems. <|reference_end|>", "<|reference_start|> FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation: Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. 
Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet <|reference_end|>" ]
[ 3, 7, 29, 32 ]
{"<|cite_8|>": "arxiv-88804", "<|multi_cite_9_1|>": "arxiv-95123", "<|multi_cite_9_2|>": "arxiv-120728", "<|multi_cite_10_1|>": "arxiv-120182", "<|multi_cite_10_2|>": "arxiv-110187", "<|multi_cite_11_1|>": "arxiv-153851", "<|multi_cite_11_2|>": "arxiv-208413", "<|multi_cite_11_3|>": "ss-2226685", "<|multi_cite_11_4|>": "arxiv-172867", "<|multi_cite_12_1|>": "arxiv-111622", "<|multi_cite_12_2|>": "arxiv-111625", "<|multi_cite_12_3|>": "arxiv-126253", "<|multi_cite_12_4|>": "arxiv-143577", "<|multi_cite_12_5|>": "arxiv-187680", "<|multi_cite_13_1|>": "arxiv-111622", "<|multi_cite_13_2|>": "arxiv-126253", "<|multi_cite_14_1|>": "arxiv-111625", "<|multi_cite_14_2|>": "arxiv-128774", "<|multi_cite_1_1|>": "arxiv-143577", "<|multi_cite_1_2|>": "arxiv-127383", "<|multi_cite_1_3|>": "arxiv-111671", "<|cite_2|>": "arxiv-187680", "<|multi_cite_15_1|>": "arxiv-111625", "<|multi_cite_15_2|>": "arxiv-95123", "<|multi_cite_15_3|>": "arxiv-188203", "<|multi_cite_15_4|>": "arxiv-127383", "<|multi_cite_16_1|>": "arxiv-187680", "<|multi_cite_16_2|>": "arxiv-143577", "<|multi_cite_16_3|>": "arxiv-183932", "<|multi_cite_16_4|>": "arxiv-188203", "<|cite_3|>": "arxiv-187680", "<|cite_4|>": "arxiv-188203", "<|cite_17|>": "arxiv-143577", "<|cite_18|>": "arxiv-187680", "<|multi_cite_19_1|>": "ss-776426", "<|multi_cite_19_2|>": "arxiv-99021", "<|multi_cite_19_3|>": "ss-1260431", "<|multi_cite_19_4|>": "ss-1260432", "<|cite_20|>": "arxiv-106807", "<|cite_6|>": "ss-776426", "<|cite_7|>": "arxiv-106807", "<|cite_21|>": "arxiv-143577", "<|cite_22|>": "arxiv-187680"}
2310.03494
<|paper_start|> Title: How the level sampling process impacts zero-shot generalisation in deep reinforcement learning Abstract: How the level sampling process impacts zero-shot generalisation in deep reinforcement learning: A key limitation preventing the wider adoption of autonomous agents trained via deep reinforcement learning (RL) is their limited ability to generalise to new environments, even when these share similar characteristics with environments encountered during training. In this work, we investigate how a non-uniform sampling strategy of individual environment instances, or levels, affects the zero-shot generalisation (ZSG) ability of RL agents, considering two failure modes: overfitting and over-generalisation. As a first step, we measure the mutual information (MI) between the agent's internal representation and the set of training levels, which we find to be well-correlated to instance overfitting. In contrast to uniform sampling, adaptive sampling strategies prioritising levels based on their value loss are more effective at maintaining lower MI, which provides a novel theoretical justification for this class of techniques. We then turn our attention to unsupervised environment design (UED) methods, which adaptively generate new training levels and minimise MI more effectively than methods sampling from a fixed set. However, we find UED methods significantly shift the training distribution, resulting in over-generalisation and worse ZSG performance over the distribution of interest. To prevent both instance overfitting and over-generalisation, we introduce self-supervised environment design (SSED). SSED generates levels using a variational autoencoder, effectively reducing MI while minimising the shift with the distribution of interest, and leads to statistically significant improvements in ZSG over fixed-set level sampling strategies and UED methods. Introduction \label{sec:intro} \begin{wrapfigure}[]{r}{0.5\textwidth} \centering \vspace{-.5cm} \begin{subfigure}{0.32\linewidth} \includegraphics[width=.95\linewidth]{content/layouts/lvl_a.png} \caption{} \label{subfig:lvla} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=.95\linewidth]{content/layouts/lvl_b.png} \caption{} \label{subfig:lvlb} \end{subfigure} \hspace{-3pt} \rulesep \begin{subfigure}{0.32\linewidth} \includegraphics[width=0.95\linewidth]{content/layouts/lvl_c.png} \caption{} \label{subfig:lvlc} \end{subfigure} \vspace{-.2cm} \caption{The agent (blue) must navigate to the goal (lime green) but can be blocked by walls (grey) and only observes tiles directly adjacent to itself. An agent trained over levels (a) and (b) will transfer zero-shot to level (c) if it has learnt to follow the pale green tiles to the goal location. } \label{fig:CMDPex} \vspace{-.4cm} \end{wrapfigure} A central challenge facing modern reinforcement learning (RL) is learning policies capable of zero-shot transfer of learned behaviours to a wide range of environment settings. Prior applications of RL algorithms <|cite_start|> (Reference: Solving the Rubik’s cube with deep reinforcement learning and search: ) <|cite_end|> <|cite_start|> (Reference: Learning Quadrupedal Locomotion over Challenging Terrain: Some of the most challenging environments on our planet are accessible to quadrupedal animals but remain out of reach for autonomous machines. Legged locomotion can dramatically expand the operational domains of robotics. 
However, conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes. These designs have escalated in complexity while falling short of the generality and robustness of animal locomotion. Here we present a radically robust controller for legged locomotion in challenging natural environments. We present a novel solution to incorporating proprioceptive feedback in locomotion control and demonstrate remarkable zero-shot generalization from simulation to natural environments. The controller is trained by reinforcement learning in simulation. It is based on a neural network that acts on a stream of proprioceptive signals. The trained controller has taken two generations of quadrupedal ANYmal robots to a variety of natural environments that are beyond the reach of prior published work in legged locomotion. The controller retains its robustness under conditions that have never been encountered during training: deformable terrain such as mud and snow, dynamic footholds such as rubble, and overground impediments such as thick vegetation and gushing water. The presented work opens new frontiers for robotics and indicates that radical robustness in natural environments can be achieved by training in much simpler domains.) <|cite_end|> <|cite_start|> (Reference: Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning: In this work, we present and study a training set-up that achieves fast policy generation for real-world robotic tasks by using massive parallelism on a single workstation GPU. We analyze and discuss the impact of different training algorithm components in the massively parallel regime on the final policy performance and training times. In addition, we present a novel game-inspired curriculum that is well suited for training with thousands of simulated robots in parallel. We evaluate the approach by training the quadrupedal robot ANYmal to walk on challenging terrain. The parallel approach allows training policies for flat terrain in under four minutes, and in twenty minutes for uneven terrain. This represents a speedup of multiple orders of magnitude compared to previous work. Finally, we transfer the policies to the real robot to validate the approach. We open-source our training code to help accelerate further research in the field of learned legged locomotion.) <|cite_end|> indicate that strong zero-shot generalisation (ZSG) capabilities can be achieved by employing an adaptive sampling strategy over the set of environment instances available during training, which we refer to as the set of training \textit{levels}. However, the relationship between ZSG and the level sampling process remains poorly understood. In this work, we draw novel connections between this process and the minimisation of an upper bound on the generalisation error derived by <|cite_start|> (Reference: Instance based Generalization in Reinforcement Learning: Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels. Understanding the generalization properties of RL is one of the challenges of modern machine learning. Towards this goal, we analyze policy learning in the context of Partially Observable Markov Decision Processes (POMDPs) and formalize the dynamics of training levels as instances.
We prove that, independently of the exploration strategy, reusing instances introduces significant changes on the effective Markov dynamics the agent observes during training. Maximizing expected rewards impacts the learned belief state of the agent by inducing undesired instance specific speedrunning policies instead of generalizeable ones, which are suboptimal on the training set. We provide generalization bounds to the value gap in train and test environments based on the number of training instances, and use insights based on these to improve performance on unseen levels. We propose training a shared belief representation over an ensemble of specialized policies, from which we compute a consensus policy that is used for data collection, disallowing instance specific exploitation. We experimentally validate our theory, observations, and the proposed computational solution over the CoinRun benchmark.) <|cite_end|>, which depends on the \textit{mutual information} (MI) between the agent's internal representation and the identity of individual training levels. To build an understanding of the relationship between MI and ZSG, consider the minimal gridworld navigation example in \Cref{fig:CMDPex}. A ``shortcut'' exists in level (a), and a model with high MI is able to first predict the level identity from its initial observation and follow a level-specific policy, which is optimal over the training set. When deployed on (c) the model will predict it is in (a) since under the agent's restricted field of view (a) and (c) share the same initial observation. As a result the agent will attempt to follow the (a)-specific policy, which will not transfer. An agent learning level-specific policies implies high MI between its internal representation and the level identities, and in general, will not transfer zero-shot to new levels. We discover that the reduced generalisation error achieved by adaptive level sampling strategies over uniform sampling can be attributed to their effectiveness in reducing the MI between the agent's internal representation and the level identity. In particular, we find that strategies de-prioritising levels with low value loss, as first proposed in prioritised level replay \citep[PLR,][]{PLR}, effectively minimise mutual information by avoiding training on levels in which the value can be accurately estimated through level identification. While adaptive sampling strategies help reduce mutual information, our experiments indicate that they are ultimately limited by the number of training levels. A natural extension is to \textit{augment} the set of training levels to further reduce the mutual information and the generalisation error upper bound. We consider the setting in which we are provided with a set of environment parameters $X$, each $\vx \in X$ instantiating a level within a parametrisable simulator. These parameters may consist of a set of values, a configuration file or any other modality specific to the simulator. For example, these may be numerical arrays describing the gridworld layouts in \Cref{fig:CMDPex} or 3D scans of indoor environments <|cite_start|> (Reference: OpenRooms: An open framework for photorealistic indoor scene datasets: We propose a novel framework for creating large-scale photorealistic datasets of indoor scenes, with ground truth geometry, material, lighting and semantics. 
Our goal is to make the dataset creation process widely accessible, transforming scans into photorealistic datasets with high-quality ground truth for appearance, layout, semantic labels, high quality spatially-varying BRDF and complex lighting, including direct, indirect and visibility components. This enables important applications in inverse rendering, scene understanding and robotics. We show that deep networks trained on the proposed dataset achieve competitive performance for shape, material and lighting estimation on real images, enabling photorealistic augmented reality applications, such as object insertion and material editing. We also show our semantic labels may be used for segmentation and multi-task learning. Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes. The dataset and all the tools to create such datasets will be made publicly available.1) <|cite_end|>. The latter is likely to be expensive to collect and thus limited in supply, as in most other practical applications. It is also likely to be high-dimensional, highly structured data, which makes random or unsupervised generation methods ill-suited for augmenting this supply (and we see in our experiments that these methods can be ill-suited even in settings as simple as the aforementioned gridworld). Instead, we introduce the \textit{Self-Supervised Environment Design} (SSED) framework, which uses the information present in $X$ to generate the augmented set of level parameters $\tilde{X}$. While an augmented set effectively minimises the mutual information, it may not result in better generalisation performance if $\tilde{X}$ and $X$ are drawn from different distributions. In fact, we show it can induce a form of \textit{over-generalisation}, in which the agent learns to solve levels incompatible with the targeted task, and performs poorly at test time. There is therefore a trade-off between augmenting $X$ to prevent \textit{instance-overfitting}, i.e. not learning level-specific policies, and ensuring that $\tilde{X}$ and $X$ come from similar distributions to avoid distributional shift and over-generalisation. In our experiments, we show that SSED is able to strike this trade-off more effectively than when training over the original level set, even adaptively, or when training on augmented sets obtained using previously proposed environment design methods. We demonstrate that SSED leads to statistically significant improvements in the agent's ZSG capabilities, reaching 125\% of the returns achieved by the next best baseline on held-out test levels, and a 200\% to 300\% improvement on levels with an increased difficulty with respect to the training distribution. <|paper_end|>
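The value-loss prioritisation referred to in the introduction admits a compact sketch. The rank-based scoring below follows the general recipe of PLR, with staleness-aware mixing and other details deliberately simplified away, so it should be read as an illustration of the mechanism rather than a faithful reimplementation.
\begin{verbatim}
import numpy as np

def plr_sampling_weights(value_losses, temperature=0.1):
    """Rank-based prioritisation over a fixed level set: levels whose
    value is already well estimated (low value loss) are de-prioritised,
    which the analysis above links to lower mutual information between
    the agent's representation and level identity."""
    losses = np.asarray(value_losses, dtype=float)
    ranks = np.argsort(np.argsort(-losses)) + 1   # rank 1 = highest loss
    weights = (1.0 / ranks) ** (1.0 / temperature)
    return weights / weights.sum()

# Example: four training levels with their latest mean value-loss scores.
probs = plr_sampling_weights([0.02, 0.35, 0.11, 0.60])
next_level = np.random.choice(4, p=probs)   # level 3 is sampled most often
\end{verbatim}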
[ "<|reference_start|> Learning Quadrupedal Locomotion over Challenging Terrain: Some of the most challenging environments on our planet are accessible to quadrupedal animals but remain out of reach for autonomous machines. Legged locomotion can dramatically expand the operational domains of robotics. However, conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes. These designs have escalated in complexity while falling short of the generality and robustness of animal locomotion. Here we present a radically robust controller for legged locomotion in challenging natural environments. We present a novel solution to incorporating proprioceptive feedback in locomotion control and demonstrate remarkable zero-shot generalization from simulation to natural environments. The controller is trained by reinforcement learning in simulation. It is based on a neural network that acts on a stream of proprioceptive signals. The trained controller has taken two generations of quadrupedal ANYmal robots to a variety of natural environments that are beyond the reach of prior published work in legged locomotion. The controller retains its robustness under conditions that have never been encountered during training: deformable terrain such as mud and snow, dynamic footholds such as rubble, and overground impediments such as thick vegetation and gushing water. The presented work opens new frontiers for robotics and indicates that radical robustness in natural environments can be achieved by training in much simpler domains. <|reference_end|>", "<|reference_start|> Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning: In this work, we present and study a training set-up that achieves fast policy generation for real-world robotic tasks by using massive parallelism on a single workstation GPU. We analyze and discuss the impact of different training algorithm components in the massively parallel regime on the final policy performance and training times. In addition, we present a novel game-inspired curriculum that is well suited for training with thousands of simulated robots in parallel. We evaluate the approach by training the quadrupedal robot ANYmal to walk on challenging terrain. The parallel approach allows training policies for flat terrain in under four minutes, and in twenty minutes for uneven terrain. This represents a speedup of multiple orders of magnitude compared to previous work. Finally, we transfer the policies to the real robot to validate the approach. We open-source our training code to help accelerate further research in the field of learned legged locomotion. <|reference_end|>", "<|reference_start|> Instance based Generalization in Reinforcement Learning: Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels. Understanding the generalization properties of RL is one of the challenges of modern machine learning. Towards this goal, we analyze policy learning in the context of Partially Observable Markov Decision Processes (POMDPs) and formalize the dynamics of training levels as instances. We prove that, independently of the exploration strategy, reusing instances introduces significant changes on the effective Markov dynamics the agent observes during training. 
Maximizing expected rewards impacts the learned belief state of the agent by inducing undesired instance specific speedrunning policies instead of generalizeable ones, which are suboptimal on the training set. We provide generalization bounds to the value gap in train and test environments based on the number of training instances, and use insights based on these to improve performance on unseen levels. We propose training a shared belief representation over an ensemble of specialized policies, from which we compute a consensus policy that is used for data collection, disallowing instance specific exploitation. We experimentally validate our theory, observations, and the proposed computational solution over the CoinRun benchmark. <|reference_end|>", "<|reference_start|> OpenRooms: An open framework for photorealistic indoor scene datasets: We propose a novel framework for creating large-scale photorealistic datasets of indoor scenes, with ground truth geometry, material, lighting and semantics. Our goal is to make the dataset creation process widely accessible, transforming scans into photorealistic datasets with high-quality ground truth for appearance, layout, semantic labels, high quality spatially-varying BRDF and complex lighting, including direct, indirect and visibility components. This enables important applications in inverse rendering, scene understanding and robotics. We show that deep networks trained on the proposed dataset achieve competitive performance for shape, material and lighting estimation on real images, enabling photorealistic augmented reality applications, such as object insertion and material editing. We also show our semantic labels may be used for segmentation and multi-task learning. Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes. The dataset and all the tools to create such datasets will be made publicly available.1 <|reference_end|>" ]
[ 1, 2, 3, 4 ]
{"<|multi_cite_2_1|>": "ss-719429", "<|multi_cite_2_2|>": "arxiv-298076", "<|multi_cite_2_3|>": "arxiv-369244", "<|cite_1|>": "arxiv-300960", "<|cite_3|>": "ss-1354738"}
1406.2296
<|paper_start|> Title: Approximating Nash Equilibria and Dense Subgraphs via an Approximate Version of Carath\'eodory's Theorem Abstract: Approximating Nash Equilibria and Dense Subgraphs via an Approximate Version of Carath\'eodory's Theorem: We present algorithmic applications of an approximate version of Carath\'{e}odory's theorem. The theorem states that given a set of vectors $X$ in $\mathbb{R}^d$, for every vector in the convex hull of $X$ there exists an $\varepsilon$-close (under the $p$-norm distance, for $2\leq p < \infty$) vector that can be expressed as a convex combination of at most $b$ vectors of $X$, where the bound $b$ depends on $\varepsilon$ and the norm $p$ and is independent of the dimension $d$. This theorem can be derived by instantiating Maurey's lemma, early references to which can be found in the work of Pisier (1981) and Carl (1985). However, in this paper we present a self-contained proof of this result. Using this theorem we establish that in a bimatrix game with $ n \times n$ payoff matrices $A, B$, if the number of non-zero entries in any column of $A+B$ is at most $s$ then an $\varepsilon$-Nash equilibrium of the game can be computed in time $n^{O\left(\frac{\log s }{\varepsilon^2}\right)}$. This, in particular, gives us a polynomial-time approximation scheme for Nash equilibrium in games with fixed column sparsity $s$. Moreover, for arbitrary bimatrix games---since $s$ can be at most $n$---the running time of our algorithm matches the best-known upper bound, which was obtained by Lipton, Markakis, and Mehta (2003). The approximate Carath\'{e}odory's theorem also leads to an additive approximation algorithm for the normalized densest $k$-subgraph problem. Given a graph with $n$ vertices and maximum degree $d$, the developed algorithm determines a subgraph with exactly $k$ vertices with normalized density within $\varepsilon$ (in the additive sense) of the optimal in time $n^{O\left( \frac{\log d}{\varepsilon^2}\right)}$. Additionally, we show that a similar approximation result can be achieved for the problem of finding a $k \times k$-bipartite subgraph of maximum normalized density. Introduction Carath\'{e}odory's theorem is a fundamental dimensionality result in convex geometry. It states that any vector in the convex hull of a set $X$ in $\mathbb{R}^d$ can be expressed as a convex combination of at most $d+1$ vectors of $X$.\footnote{This bound of $d+1$ is tight.} This paper considers a natural approximate version of Carath\'{e}odory's theorem where the goal is to seek convex combinations that are close enough to vectors in the convex hull. Specifically, this approximate version establishes that given a set of vectors $X$ in the $p$-unit ball\footnote{That is, $X$ is contained in the set $\{ v \in \mathbb{R}^d \mid \| v \|_p \leq 1\}$. } with norm $p \in [2, \infty)$, for every vector $\mu$ in the convex hull of $X$ there exists an $\varepsilon$-close---under the $p$-norm distance---vector $\mu'$ that can be expressed as a convex combination of $\frac{4 p }{\varepsilon^2}$ vectors of $X$. A notable aspect of this result is that the number of vectors of $X$ that are required to express $\mu'$, i.e., $\frac{4 p}{\varepsilon^2}$, is independent of the underlying dimension $d$. 
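Stated formally, and writing the approximating point as the uniform average that the proof outlined below actually produces, the guarantee reads: for $X \subseteq \{v \in \mathbb{R}^d \mid \|v\|_p \leq 1\}$ with $p \in [2, \infty)$, every $\mu \in \mathrm{conv}(X)$, and every $\varepsilon > 0$, there exist $x_1, \ldots, x_b \in X$ (repetitions allowed) with $b \leq \frac{4p}{\varepsilon^2}$ such that
\[
\Big\| \mu - \frac{1}{b} \sum_{i=1}^{b} x_i \Big\|_p \leq \varepsilon .
\]
This is a transcription of the statement above; the uniform-average form is a special case of a convex combination.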
This theorem can be derived by instantiating Maurey's lemma, early references to which can be found in the work of Pisier and Carl <|cite_start|> (Reference: Inequalities of Bernstein-Jackson-type and the degree of compactness of operators in Banach spaces: © Annales de l’institut Fourier, 1985, tous droits réservés. L’accès aux archives de la revue « Annales de l’institut Fourier » (http://annalif.ujf-grenoble.fr/) implique l’accord avec les conditions générales d’utilisation (http://www.numdam.org/legal.php). Toute utilisation commerciale ou impression systématique est constitutive d’une infraction pénale. Toute copie ou impression de ce fichier doit contenir la présente mention de copyright.) <|cite_end|>. However, in this paper we present a self-contained proof of this result, which we proceed to outline below. The author was made aware of the connection with Maurey's lemma after a preliminary version of this work had appeared. To establish the approximate version of Carath\'{e}odory's theorem we use the probabilistic method. Given a vector $\mu$ in the convex hull of a set $X \subset \mathbb{R}^d$, consider a convex combination of vectors of $X$ that generates $\mu$. The coefficients in this convex combination induce a probability distribution over $X$, and the mean of this distribution is $\mu$. The approach is to draw $b$ independent and identically distributed (i.i.d.)\ samples from this distribution and show that, for an appropriate number of samples $b$, the sample mean is close to $\mu$ under the $p$-norm distance with positive probability, for $p \in [2, \infty)$. Therefore, the probabilistic method implies that there exists a vector close to $\mu$ that can be expressed as a convex combination of at most $b$ vectors, where $b$ is the number of samples we drew. Note that in this context applying the probabilistic method is a natural idea, but a direct application of this method will not work. Specifically, a dimension-free result is unlikely if we first try to prove that the $i$th component of the sample mean vector is close to the $i$th component of $\mu$, for every $i \in [d]$, since this would entail a union bound over the number of components $d$. Bypassing such a component-wise analysis requires the use of atypical ideas. We are able to accomplish this task and, in particular, bound (in expectation) the $p$-norm distance between $\mu$ and the sample mean vector via an interesting application of the Khintchine inequality (see Theorem~\ref{thm:Khintchine-v}). Given the significance of Carath\'{e}odory's theorem, this approximate version is interesting in its own right. The key contribution of the paper is to substantiate the algorithmic relevance of this approximate version by developing new algorithmic applications. Our applications include additive approximation algorithms for (i) Nash equilibria in two-player games, and (ii) the densest subgraph problem. These algorithmic results are outlined below. \subsection*{Algorithmic Applications} \paragraph{Approximate Nash Equilibria.} Nash equilibria are central constructs in game theory that are used to model likely outcomes of strategic interactions between self-interested entities, like human players. They denote distributions over actions of players under which no player can benefit, in expectation, by unilateral deviation. These solution concepts are arguably the most well-studied notions of rationality, and questions about their computational complexity lie at the core of algorithmic game theory.
In recent years, hardness results have been established for Nash equilibrium, even in two-player games <|cite_start|> (Reference: Settling the Complexity of Computing Two-Player Nash Equilibria: We settle a long-standing open question in algorithmic game theory. We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. This is the first of a series of results concerning the complexity of Nash equilibria. In particular, we prove the following theorems: Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results demonstrate that, even in the simplest form of non-cooperative games, equilibrium computation and approximation are polynomial-time equivalent to fixed point computation. Our results also have two broad complexity implications in mathematical economics and operations research: Arrow-Debreu market equilibria are PPAD-hard to compute. The P-Matrix Linear Complementary Problem is computationally harder than convex programming unless every problem in PPAD is solvable in polynomial time.) <|cite_end|> <|cite_start|> (Reference: The complexity of computing a Nash equilibrium: How long does it take until economic agents converge to an equilibrium? By studying the complexity of the problem of computing a mixed Nash equilibrium in a game, we provide evidence that there are games in which convergence to such an equilibrium takes prohibitively long. Traditionally, computational problems fall into two classes: those that have a polynomial-time algorithm and those that are NP-hard. However, the concept of NP-hardness cannot be applied to the rare problems where "every instance has a solution"---for example, in the case of games Nash's theorem asserts that every game has a mixed equilibrium (now known as the Nash equilibrium, in honor of that result). We show that finding a Nash equilibrium is complete for a class of problems called PPAD, containing several other known hard problems; all problems in PPAD share the same style of proof that every instance has a solution.) <|cite_end|>. But the question of whether an \emph{approximate} Nash equilibrium can be computed in polynomial time still remains open. Throughout this paper we will consider the standard additive notion of approximate Nash equilibria, defined as follows: a pair of distributions, one for each player, is said to be an $\varepsilon$-Nash equilibrium if any unilateral deviation increases utility by at most $\varepsilon$, in expectation. We apply the approximate version of Carath\'{e}odory's theorem to address this central open question. Specifically, we prove that in a bimatrix game with $n \times n$ payoff matrices $A, B$, i.e., a two-player game with $n$ actions for each player, if the number of non-zero entries in any column of $A+B$ is at most $s$ then an $\varepsilon$-Nash equilibrium of the game can be computed in time $n^{O\left(\frac{\log s}{\varepsilon^2}\right)}$. Our result, in particular, shows that games with fixed column sparsity $s$ admit a polynomial-time approximation scheme (PTAS) for Nash equilibrium.
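To pin down the definition operationally, here is a small Python helper (our own hedged sketch, not from the paper): by linearity of expectation, the best unilateral deviation for each player is attained at a pure strategy, so the additive $\varepsilon$-Nash condition can be certified by comparing against pure-strategy best responses.
\begin{verbatim}
import numpy as np

def is_eps_nash(A, B, x, y, eps):
    """Check whether mixed strategies (x, y) form an additive
    eps-Nash equilibrium of the bimatrix game (A, B)."""
    row_best = np.max(A @ y)  # best pure deviation for the row player
    col_best = np.max(x @ B)  # best pure deviation for the column player
    return (row_best <= x @ A @ y + eps) and (col_best <= x @ B @ y + eps)
\end{verbatim}
Such a check costs only $O(n^2)$ time, which matters because the algorithms discussed below may evaluate many candidate strategy pairs.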
Along the lines of zero-sum games (which model strict competition), games with fixed column sparsity capture settings in which, except for particular action profiles, the gains and losses of the two players balance out. In other words, such games are a natural generalization of zero-sum games; recall that zero-sum games admit efficient computation of Nash equilibrium (see, e.g., <|cite_start|> (Reference: Algorithmic game theory: We give an introduction to the micro-economic field of Mechanism Design slightly biased towards a computer-scientist’s point of view.) <|cite_end|>). It is also worth pointing out that for an arbitrary bimatrix game the running time of our algorithm is $n^{O\left(\frac{\log n}{\varepsilon^2}\right)}$, since $s$ is at most $n$. Given that the best-known algorithm for computing $\varepsilon$-Nash equilibrium also runs in time $n^{O\left(\frac{\log n}{\varepsilon^2}\right)}$ <|cite_start|> (Reference: Playing Large Games Using Simple Strategies: We prove the existence of ε-Nash equilibrium strategies with support logarithmic in the number of pure strategies. We also show that the payoffs to all players in any (exact) Nash equilibrium can be ε-approximated by the payoffs to the players in some such logarithmic support ε-Nash equilibrium. These strategies are also uniform on a multiset of logarithmic size and therefore this leads to a quasi-polynomial algorithm for computing an ε-Nash equilibrium. To our knowledge this is the first subexponential algorithm for finding an ε-Nash equilibrium. Our results hold for any multiple-player game as long as the number of players is a constant (i.e., it is independent of the number of pure strategies). A similar argument also proves that for a fixed number of players m, the payoffs to all players in any m-tuple of mixed strategies can be ε-approximated by the payoffs in some m-tuple of constant support strategies. We also prove that if the payoff matrices of a two person game have low rank then the game has an exact Nash equilibrium with small support. This implies that if the payoff matrices can be well approximated by low rank matrices, the game has an ε-equilibrium with small support. It also implies that if the payoff matrices have constant rank we can compute an exact Nash equilibrium in polynomial time.) <|cite_end|>, for general games the time complexity of our algorithm matches the best-known upper bound. Overall, this result provides a parameterized understanding of the complexity of computing approximate Nash equilibrium in terms of a very natural measure, the column sparsity $s$ of the matrix $A+B$. Our framework can address other notions of sparsity as well. Specifically, if there exist \emph{constants} $\alpha, \beta \in \mathbb{R}_+$ and $\gamma \in \mathbb{R}$ such that the matrix $\alpha A + \beta B + \gamma \mathbbm{1}_{n \times n}$ has column or row sparsity $s$, then our algorithm can be directly adapted to find an $\varepsilon$-Nash equilibrium of the game $(A,B)$ in time $n^{O\left(\frac{\log s}{\varepsilon^2}\right)}$; here, $\mathbbm{1}_{n \times n}$ is the all-ones $n \times n$ matrix.\footnote{Note that given matrices $A$ and $B$, parameters $\alpha$, $\beta$, and $\gamma$ can be efficiently computed.} Additionally, the same running-time bound can be achieved for approximating Nash equilibrium in games wherein \emph{both} matrices $A$ and $B$ have column or row sparsity $s$.
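As a quick numerical sanity check on the sparsity parameter (our own illustrative Python; we take logarithms base 2, matching the norm choice $p = \log s$ used later in the techniques overview), the snippet below builds a hypothetical normalized matrix $C = A + B$ with at most $s$ non-zeros per column and verifies that every column has $p$-norm at most $(s \cdot 2^p)^{1/p} = 4$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, s = 64, 8                   # hypothetical game size and column sparsity

# C = A + B with normalized entries (|C_ij| <= 2) and at most s
# non-zero entries in each column.
C = np.zeros((n, n))
for j in range(n):
    rows = rng.choice(n, size=s, replace=False)
    C[rows, j] = rng.uniform(-2.0, 2.0, size=s)

p = np.log2(s)                               # the norm p = log s
col_norms = np.linalg.norm(C, ord=p, axis=0) # p-norms of the columns
bound = (s * 2**p) ** (1 / p)                # equals 4 for p = log2(s)
print(col_norms.max(), bound)
assert col_norms.max() <= bound + 1e-9
\end{verbatim}
This is the normalization that lets the approximate Carath\'{e}odory theorem be applied to the columns of $C$ (modulo a factor of $4$), as explained in the techniques section below.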
Note that this case is not subsumed by the previous result; in particular, if the columns of matrix $A$ and the rows of matrix $B$ are sparse, then it is not necessary that $A+B$ has low column or row sparsity. We also refine the following result of Daskalakis and Papadimitriou <|cite_start|> (Reference: On Oblivious PTAS's for Nash Equilibrium: If a game has a Nash equilibrium with probability values that are either zero or Omega(1) then this equilibrium can be found exhaustively in polynomial time. Somewhat surprisingly, we show that there is a PTAS for the games whose equilibria are guaranteed to have small-O(1/n)-values, and therefore large-Omega(n)-supports. We also point out that there is a PTAS for games with sparse payoff matrices, which are known to be PPAD-complete to solve exactly. Both algorithms are of a special kind that we call oblivious: The algorithm just samples a fixed distribution on pairs of mixed strategies, and the game is only used to determine whether the sampled strategies comprise an eps-Nash equilibrium; the answer is yes with inverse polynomial probability. These results bring about the question: Is there an oblivious PTAS for Nash equilibrium in general games? We answer this question in the negative; our lower bound comes close to the quasi-polynomial upper bound of [Lipton, Markakis, Mehta 2003]. Another recent PTAS for anonymous games is also oblivious in a weaker sense appropriate for this class of games (it samples from a fixed distribution on unordered collections of mixed strategies), but its runtime is exponential in 1/eps. We prove that any oblivious PTAS for anonymous games with two strategies and three player types must have 1/eps^c in the exponent of the running time for some c>1/3, rendering the algorithm in [Daskalakis 2008] essentially optimal within oblivious algorithms. In contrast, we devise a poly(n) (1/eps)^O(log^2(1/eps)) non-oblivious PTAS for anonymous games with 2 strategies and any bounded number of player types. Our algorithm is based on the construction of a sparse (and efficiently computable) eps-cover of the set of all possible sums of n independent indicators, under the total variation distance. The size of the cover is poly(n) (1/ eps^{O(log^2 (1/eps))}.) <|cite_end|>: They develop a PTAS for bimatrix games that admit an equilibrium with small, specifically $O\left(\frac{1}{n}\right)$, probability values. This result is somewhat surprising, since such small-probability equilibria have large, $\Omega(n)$, support, and hence are not amenable to, say, exhaustive search. We show that if a game has an equilibrium with probability values $O\left( \frac{1}{m} \right)$, for $m \in [n]$, then an approximate equilibrium can be computed in time $n^t$, where $t= O\left(\frac{\log (s/m)}{\varepsilon^2}\right)$. Since $s\leq n$, we get the result of <|cite_start|> (Reference: On Oblivious PTAS's for Nash Equilibrium: If a game has a Nash equilibrium with probability values that are either zero or Omega(1) then this equilibrium can be found exhaustively in polynomial time. Somewhat surprisingly, we show that there is a PTAS for the games whose equilibria are guaranteed to have small-O(1/n)-values, and therefore large-Omega(n)-supports. We also point out that there is a PTAS for games with sparse payoff matrices, which are known to be PPAD-complete to solve exactly. 
Both algorithms are of a special kind that we call oblivious: The algorithm just samples a fixed distribution on pairs of mixed strategies, and the game is only used to determine whether the sampled strategies comprise an eps-Nash equilibrium; the answer is yes with inverse polynomial probability. These results bring about the question: Is there an oblivious PTAS for Nash equilibrium in general games? We answer this question in the negative; our lower bound comes close to the quasi-polynomial upper bound of [Lipton, Markakis, Mehta 2003]. Another recent PTAS for anonymous games is also oblivious in a weaker sense appropriate for this class of games (it samples from a fixed distribution on unordered collections of mixed strategies), but its runtime is exponential in 1/eps. We prove that any oblivious PTAS for anonymous games with two strategies and three player types must have 1/eps^c in the exponent of the running time for some c>1/3, rendering the algorithm in [Daskalakis 2008] essentially optimal within oblivious algorithms. In contrast, we devise a poly(n) (1/eps)^O(log^2(1/eps)) non-oblivious PTAS for anonymous games with 2 strategies and any bounded number of player types. Our algorithm is based on the construction of a sparse (and efficiently computable) eps-cover of the set of all possible sums of n independent indicators, under the total variation distance. The size of the cover is poly(n) (1/ eps^{O(log^2 (1/eps))}.) <|cite_end|> as a special case. \paragraph{Densest Subgraph.} In the normalized densest $k$-subgraph problem (\rm{NDkS}) we are given a simple graph and the objective is to find a size-$k$ subgraph (i.e., a subgraph containing exactly $k$ vertices) of maximum density; here, density is normalized to be at most one, i.e., for a subgraph with $k$ vertices, it is defined to be the number of edges in the subgraph divided by $k^2$. \rm{NDkS} is simply a normalized version of the standard densest $k$-subgraph problem (see, e.g., and references therein) wherein the goal is to find a subgraph with $k$ vertices with the maximum possible number of edges in it. The densest $k$-subgraph problem (\rm{DkS}) is computationally hard and it is shown in that a constant-factor approximation for \rm{DkS} is unlikely. This result implies that \rm{NDkS} is hard to approximate (multiplicatively) within a constant factor as well. In this paper we focus on an additive approximation for \rm{NDkS}. In particular, our objective is to compute a size-$k$ subgraph whose density is close (in the additive sense) to the optimum. The paper also presents additive approximations for the densest $k$-bipartite subgraph (\rm{DkBS}) problem. \rm{DkBS} is a natural variant of \rm{NDkS} and the goal in this problem is to find two size-$k$ vertex subsets of maximum density. In the bipartite case, density of vertex subsets $S$ and $T$ is defined to be the number of edges between the two subsets divided by $|S||T|$. Hardness of additively approximating \rm{DkBS} was studied by Hazan and Krauthgamer <|cite_start|> (Reference: How Hard is It to Approximate the Best Nash Equilibrium?: The quest for a PTAS for Nash equilibrium in a two-player game seeks to circumvent the PPAD-completeness of an (exact) Nash equilibrium by finding an approximate equilibrium, and has emerged as a major open question in Algorithmic Game Theory. A closely related problem is that of finding an equilibrium maximizing a certain objective, such as the social welfare.
This optimization problem was shown to be NP-hard by Gilboa and Zemel [Games and Economic Behavior 1989]. However, this NP-hardness is unlikely to extend to finding an approximate equilibrium, since the latter admits a quasi-polynomial time algorithm, as proved by Lipton, Markakis and Mehta [Proc. of 4th EC, 2003]. We show that this optimization problem, namely, finding in a two-player game an approximate equilibrium achieving large social welfare is unlikely to have a polynomial time algorithm. One interpretation of our results is that the quest for a PTAS for Nash equilibrium should not extend to a PTAS for finding the best Nash equilibrium, which stands in contrast to certain algorithmic techniques used so far (e.g. sampling and enumeration). Technically, our result is a reduction from a notoriously difficult problem in modern Combinatorics, of finding a planted (but hidden) clique in a random graph G(n, 1/2). Our reduction starts from an instance with planted clique size k = O(log n). For comparison, the currently known algorithms due to Alon, Krivelevich and Sudakov [Random Struct. & Algorithms, 1998], and Krauthgamer and Feige [Random Struct. & Algorithms, 2000], are effective for a much larger clique size k = Ω(√n).) <|cite_end|>. Specifically, the reduction in <|cite_start|> (Reference: How Hard is It to Approximate the Best Nash Equilibrium?: The quest for a PTAS for Nash equilibrium in a two-player game seeks to circumvent the PPAD-completeness of an (exact) Nash equilibrium by finding an approximate equilibrium, and has emerged as a major open question in Algorithmic Game Theory. A closely related problem is that of finding an equilibrium maximizing a certain objective, such as the social welfare. This optimization problem was shown to be NP-hard by Gilboa and Zemel [Games and Economic Behavior 1989]. However, this NP-hardness is unlikely to extend to finding an approximate equilibrium, since the latter admits a quasi-polynomial time algorithm, as proved by Lipton, Markakis and Mehta [Proc. of 4th EC, 2003]. We show that this optimization problem, namely, finding in a two-player game an approximate equilibrium achieving large social welfare is unlikely to have a polynomial time algorithm. One interpretation of our results is that the quest for a PTAS for Nash equilibrium should not extend to a PTAS for finding the best Nash equilibrium, which stands in contrast to certain algorithmic techniques used so far (e.g. sampling and enumeration). Technically, our result is a reduction from a notoriously difficult problem in modern Combinatorics, of finding a planted (but hidden) clique in a random graph G(n, 1/2). Our reduction starts from an instance with planted clique size k = O(log n). For comparison, the currently known algorithms due to Alon, Krivelevich and Sudakov [Random Struct. & Algorithms, 1998], and Krauthgamer and Feige [Random Struct. & Algorithms, 2000], are effective for a much larger clique size k = Ω(√n).) <|cite_end|> rules out an additive PTAS for \rm{DkBS}, under complexity theoretic assumptions.\footnote{They reduce the problem of determining a \emph{planted clique} to that of computing an $\varepsilon$-additive approximation for \rm{DkBS}, with a sufficiently small but constant $\varepsilon$.} In terms of upper bound, the result of Alon et al. 
<|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|> presents an algorithm for this problem that runs in time exponential in the rank of the adjacency matrix. This paper develops the following complementary upper bounds: given a graph with $n$ vertices and maximum degree $d$, an $\varepsilon$-additive approximation for \rm{NDkS} can be computed in time $n^{O\left(\frac{\log d}{\varepsilon^2}\right)}$. This paper also presents an algorithm with the same time complexity for additively approximating \rm{DkBS}. \subsection{Related Work} \paragraph{Approximate Version of Carath\'{e}odory's Theorem.} In this paper we provide a self-contained proof of the approximate version of Carath\'{e}odory's theorem, employing the Khintchine inequality (see Theorem~\ref{thm:Khintchine-v}), and use the theorem to develop new approximation algorithms. As mentioned earlier, the approximate version of Carath\'{e}odory's theorem can also be obtained by instantiating Maurey's lemma, which, in particular, appears in the analysis and operator theory literatures; see, e.g., <|cite_start|> (Reference: Inequalities of Bernstein-Jackson-type and the degree of compactness of operators in Banach spaces: © Annales de l’institut Fourier, 1985, tous droits réservés. L’accès aux archives de la revue « Annales de l’institut Fourier » (http://annalif.ujf-grenoble.fr/) implique l’accord avec les conditions générales d’utilisation (http://www.numdam.org/legal.php). Toute utilisation commerciale ou impression systématique est constitutive d’une infraction pénale. Toute copie ou impression de ce fichier doit contenir la présente mention de copyright.) <|cite_end|> <|cite_start|> (Reference: On the duality problem for entropy numbers of operators: ) <|cite_end|>. \paragraph{Approximate Nash Equilibria.} The computation of equilibria is an active area of research. Nash equilibria are known to be computationally hard <|cite_start|> (Reference: Settling the Complexity of Computing Two-Player Nash Equilibria: We settle a long-standing open question in algorithmic game theory. We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. This is the first of a series of results concerning the complexity of Nash equilibria. In particular, we prove the following theorems: Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.
The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results demonstrate that, even in the simplest form of non-cooperative games, equilibrium computation and approximation are polynomial-time equivalent to fixed point computation. Our results also have two broad complexity implications in mathematical economics and operations research: Arrow-Debreu market equilibria are PPAD-hard to compute. The P-Matrix Linear Complementary Problem is computationally harder than convex programming unless every problem in PPAD is solvable in polynomial time.) <|cite_end|> <|cite_start|> (Reference: {The complexity of computing a Nash equilibrium: How long does it take until economic agents converge to an equilibrium? By studying the complexity of the problem of computing a mixed Nash equilibrium in a game, we provide evidence that there are games in which convergence to such an equilibrium takes prohibitively long. Traditionally, computational problems fall into two classes: those that have a polynomial-time algorithm and those that are NP-hard. However, the concept of NP-hardness cannot be applied to the rare problems where "every instance has a solution"---for example, in the case of games Nash's theorem asserts that every game has a mixed equilibrium (now known as the Nash equilibrium, in honor of that result). We show that finding a Nash equilibrium is complete for a class of problems called PPAD, containing several other known hard problems; all problems in PPAD share the same style of proof that every instance has a solution.) <|cite_end|>, and in light of these findings, a considerable effort has been directed towards understanding the complexity of \emph{approximate} Nash equilibrium. Results in this direction include both upper bounds <|cite_start|> (Reference: Playing Large Games Using Simple Strategies: We prove the existence of ε-Nash equilibrium strategies with support logarithmic in the number of pure strategies. We also show that the payoffs to all players in any (exact) Nash equilibrium can be ε-approximated by the payoffs to the players in some such logarithmic support ε-Nash equilibrium. These strategies are also uniform on a multiset of logarithmic size and therefore this leads to a quasi-polynomial algorithm for computing an ε-Nash equilibrium. To our knowledge this is the first subexponential algorithm for finding an ε-Nash equilibrium. Our results hold for any multiple-player game as long as the number of players is a constant (i.e., it is independent of the number of pure strategies). A similar argument also proves that for a fixed number of players m, the payoffs to all players in any m-tuple of mixed strategies can be ε-approximated by the payoffs in some m-tuple of constant support strategies.We also prove that if the payoff matrices of a two person game have low rank then the game has an exact Nash equilibrium with small support. This implies that if the payoff matrices can be well approximated by low rank matrices, the game has an ε-equilibrium with small support. It also implies that if the payoff matrices have constant rank we can compute an exact Nash equilibrium in polynomial time.) 
<|cite_end|> <|cite_start|> (Reference: Polynomial algorithms for approximating Nash equilibria of bimatrix games: ) <|cite_end|> <|cite_start|> (Reference: A note on approximate Nash equilibria: ) <|cite_end|> <|cite_start|> (Reference: Games of fixed rank: A hierarchy of bimatrix games: We propose a new hierarchical approach to understand the complexity of the open problem of computing a Nash equilibrium in a bimatrix game. Specifically, we investigate a hierarchy of bimatrix games $(A,B)$ which results from restricting the rank of the matrix $A+B$ to be of fixed rank at most $k$. For every fixed $k$, this class strictly generalizes the class of zero-sum games, but is a very special case of general bimatrix games. We show that even for $k=1$ the set of Nash equilibria of these games can consist of an arbitrarily large number of connected components. While the question of exact polynomial time algorithms to find a Nash equilibrium remains open for games of fixed rank, we can provide polynomial time algorithms for finding an $\epsilon$-approximation.) <|cite_end|> <|cite_start|> (Reference: Progress in Approximate Nash Equilibria: It is known [5] that an additively ε-approximate Nash equilibrium (with supports of size at most two) can be computed in polynomial time in any 2-player game with ε=.5. It is also known that no approximation better than .5 is possible unless equilibria with support larger than logn are considered, where n is the number of strategies per player. We give a polynomial algorithm for computing an ε-approximate Nash equilibrium in 2-player games with ε ≈ .38; our algorithm computes equilibria with arbitrarily large supports.) <|cite_end|> <|cite_start|> (Reference: Efficient Algorithms for Constant Well Supported Approximate Equilibria in Bimatrix Games: ) <|cite_end|> <|cite_start|> (Reference: Approximating Nash equilibria using small-support strategies: We study the problem of finding approximate Nash equilibria of two player games. We show that for any 0<ε<1, there is no 1<over>1 + ε - approximate equilibrium with strategies of support <i>O</i>(log <i>n</i><over>ε<sup>2</sup>).) <|cite_end|> <|cite_start|> (Reference: New algorithms for approximate Nash equilibria in bimatrix games: ) <|cite_end|> <|cite_start|> (Reference: An Optimization Approach for Approximate Nash Equilibria: In this paper we propose a new methodology for determining approximate Nash equilibria of noncooperative bimatrix games, and based on that, we provide an efficient algorithm that computes 0.3393-approximate equilibria, the best approximation to date. The methodology is based on the formulation of an appropriate function of pairs of mixed strategies reflecting the maximum deviation of the players' payoffs from the best payoff each player could achieve given the strategy chosen by the other. We then seek to minimize such a function using descent procedures. Because it is unlikely to be able to find global minima in polynomial time, given the recently proven intractability of the problem, we concentrate on the computation of stationary points and prove that they can be approximated arbitrarily closely in polynomial time and that they have the above-mentioned approximation property. Our result provides the best ε to date for polynomially computable ε-approximate Nash equilibria of bimatrix games. Furthermore, our methodology for computing approximate Nash equilibria has not been used by others.) 
<|cite_end|> <|cite_start|> (Reference: Practical and Efficient Approximations of Nash Equilibria for Win-Lose Games Based on Graph Spectra: ) <|cite_end|> <|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|> <|cite_start|> (Reference: The cover number of a matrix and its algorithmic applications: Given a matrix A, we study how many epsilon-cubes are required to cover the convex hull of the columns of A. We show bounds on this cover number in terms of VC dimension and the gamma_2 norm and give algorithms for enumerating elements of a cover. This leads to algorithms for computing approximate Nash equilibria that unify and extend several previous results in the literature. Moreover, our approximation algorithms can be applied quite generally to a family of quadratic optimization problems that also includes finding the k-by-k combinatorial rectangle of a matrix. In particular, for this problem we give the first quasi-polynomial time additive approximation algorithm that works for any matrix A in [0,1]^{m x n}.) <|cite_end|> and lower bounds <|cite_start|> (Reference: How Hard is It to Approximate the Best Nash Equilibrium?: The quest for a PTAS for Nash equilibrium in a two-player game seeks to circumvent the PPAD-completeness of an (exact) Nash equilibrium by finding an approximate equilibrium, and has emerged as a major open question in Algorithmic Game Theory. A closely related problem is that of finding an equilibrium maximizing a certain objective, such as the social welfare. This optimization problem was shown to be NP-hard by Gilboa and Zemel [Games and Economic Behavior 1989]. However, this NP-hardness is unlikely to extend to finding an approximate equilibrium, since the latter admits a quasi-polynomial time algorithm, as proved by Lipton, Markakis and Mehta [Proc. of 4th EC, 2003]. We show that this optimization problem, namely, finding in a two-player game an approximate equilibrium achieving large social welfare is unlikely to have a polynomial time algorithm. One interpretation of our results is that the quest for a PTAS for Nash equilibrium should not extend to a PTAS for finding the best Nash equilibrium, which stands in contrast to certain algorithmic techniques used so far (e.g. sampling and enumeration). Technically, our result is a reduction from a notoriously difficult problem in modern Combinatorics, of finding a planted (but hidden) clique in a random graph G(n, 1/2). Our reduction starts from an instance with planted clique size k = O(log n). 
For comparison, the currently known algorithms due to Alon, Krivelevich and Sudakov [Random Struct. & Algorithms, 1998], and Krauthgamer and Feige [Random Struct. & Algorithms, 2000], are effective for a much larger clique size k = Ω(√n).) <|cite_end|> <|cite_start|> (Reference: On the complexity of approximating a Nash equilibrium: We show that computing a relative---that is, multiplicative as opposed to additive---approximate Nash equilibrium in two-player games is PPAD-complete, even for constant values of the approximation. Our result is the first constant inapproximability result for the problem, since the appearance of the original results on the complexity of the Nash equilibrium [8, 5, 7]. Moreover, it provides an apparent---assuming that PPAD ⊈ TIME(nO(log n))---dichotomy between the complexities of additive and relative notions of approximation, since for constant values of additive approximation a quasi-polynomial-time algorithm is known [22]. Such a dichotomy does not arise for values of the approximation that scale with the size of the game, as both relative and additive approximations are PPAD-complete [7]. As a byproduct, our proof shows that the Lipton-Markakis-Mehta sampling lemma is not applicable to relative notions of constant approximation, answering in the negative direction a question posed to us by Shang-Hua Teng [26].) <|cite_end|> <|cite_start|> (Reference: Approximating the best Nash Equilibrium in no(log n)-time breaks the Exponential Time Hypothesis: The celebrated PPAD hardness result for finding an exact Nash equilibrium in a two-player game initiated a quest for finding approximate Nash equilibria efficiently, and is one of the major open questions in algorithmic game theory. We study the computational complexity of finding an e-approximate Nash equilibrium with good social welfare. Hazan and Krauthgamer and subsequent improvements showed that finding an e-approximate Nash equilibrium with good social welfare in a two player game and many variants of this problem is at least as hard as finding a planted clique of size O(log n) in the random graph G(n, 1/2). We show that any polynomial time algorithm that finds an e-approximate Nash equilibrium with good social welfare refutes (the worst-case) Exponential Time Hypothesis by Impagliazzo and Paturi, confirming the recent conjecture by Aaronson, Impagliazzo and Moshkovitz. Specifically it would imply a 2O(n1/2) algorithm for SAT. Our lower bound matches the quasi-polynomial time algorithm by Lipton, Markakis and Mehta for solving the problem. Our key tool is a reduction from the PCP machinery to finding Nash equilibrium via free games, the framework introduced in the recent work by Aaronson, Impagliazzo and Moshkovitz. Techniques developed in the process may be useful for replacing planted clique hardness with ETH-hardness in other applications.) <|cite_end|>. In particular, it is known that for a general bimatrix game an approximate Nash equilibrium can be computed in quasi-polynomial time <|cite_start|> (Reference: Playing Large Games Using Simple Strategies: We prove the existence of ε-Nash equilibrium strategies with support logarithmic in the number of pure strategies. We also show that the payoffs to all players in any (exact) Nash equilibrium can be ε-approximated by the payoffs to the players in some such logarithmic support ε-Nash equilibrium. These strategies are also uniform on a multiset of logarithmic size and therefore this leads to a quasi-polynomial algorithm for computing an ε-Nash equilibrium. 
To our knowledge this is the first subexponential algorithm for finding an ε-Nash equilibrium. Our results hold for any multiple-player game as long as the number of players is a constant (i.e., it is independent of the number of pure strategies). A similar argument also proves that for a fixed number of players m, the payoffs to all players in any m-tuple of mixed strategies can be ε-approximated by the payoffs in some m-tuple of constant support strategies.We also prove that if the payoff matrices of a two person game have low rank then the game has an exact Nash equilibrium with small support. This implies that if the payoff matrices can be well approximated by low rank matrices, the game has an ε-equilibrium with small support. It also implies that if the payoff matrices have constant rank we can compute an exact Nash equilibrium in polynomial time.) <|cite_end|>. Polynomial time algorithms have been developed for computing approximate Nash equilibria for fixed values of the approximation factor $\varepsilon$; the best-known result of this type shows that a $0.3393$-approximate Nash equilibrium can be computed in polynomial time <|cite_start|> (Reference: An Optimization Approach for Approximate Nash Equilibria: In this paper we propose a new methodology for determining approximate Nash equilibria of noncooperative bimatrix games, and based on that, we provide an efficient algorithm that computes 0.3393-approximate equilibria, the best approximation to date. The methodology is based on the formulation of an appropriate function of pairs of mixed strategies reflecting the maximum deviation of the players' payoffs from the best payoff each player could achieve given the strategy chosen by the other. We then seek to minimize such a function using descent procedures. Because it is unlikely to be able to find global minima in polynomial time, given the recently proven intractability of the problem, we concentrate on the computation of stationary points and prove that they can be approximated arbitrarily closely in polynomial time and that they have the above-mentioned approximation property. Our result provides the best ε to date for polynomially computable ε-approximate Nash equilibria of bimatrix games. Furthermore, our methodology for computing approximate Nash equilibria has not been used by others.) <|cite_end|>. In addition, several interesting classes of games have been identified that admit a PTAS <|cite_start|> (Reference: Games of fixed rank: A hierarchy of bimatrix games: We propose a new hierarchical approach to understand the complexity of the open problem of computing a Nash equilibrium in a bimatrix game. Specifically, we investigate a hierarchy of bimatrix games $(A,B)$ which results from restricting the rank of the matrix $A+B$ to be of fixed rank at most $k$. For every fixed $k$, this class strictly generalizes the class of zero-sum games, but is a very special case of general bimatrix games. We show that even for $k=1$ the set of Nash equilibria of these games can consist of an arbitrarily large number of connected components. While the question of exact polynomial time algorithms to find a Nash equilibrium remains open for games of fixed rank, we can provide polynomial time algorithms for finding an $\epsilon$-approximation.) <|cite_end|> <|cite_start|> (Reference: On Oblivious PTAS's for Nash Equilibrium: If a game has a Nash equilibrium with probability values that are either zero or Omega(1) then this equilibrium can be found exhaustively in polynomial time. 
Somewhat surprisingly, we show that there is a PTAS for the games whose equilibria are guaranteed to have small-O(1/n)-values, and therefore large-Omega(n)-supports. We also point out that there is a PTAS for games with sparse payoff matrices, which are known to be PPAD-complete to solve exactly. Both algorithms are of a special kind that we call oblivious: The algorithm just samples a fixed distribution on pairs of mixed strategies, and the game is only used to determine whether the sampled strategies comprise an eps-Nash equilibrium; the answer is yes with inverse polynomial probability. These results bring about the question: Is there an oblivious PTAS for Nash equilibrium in general games? We answer this question in the negative; our lower bound comes close to the quasi-polynomial upper bound of [Lipton, Markakis, Mehta 2003]. Another recent PTAS for anonymous games is also oblivious in a weaker sense appropriate for this class of games (it samples from a fixed distribution on unordered collections of mixed strategies), but its runtime is exponential in 1/eps. We prove that any oblivious PTAS for anonymous games with two strategies and three player types must have 1/eps^c in the exponent of the running time for some c>1/3, rendering the algorithm in [Daskalakis 2008] essentially optimal within oblivious algorithms. In contrast, we devise a poly(n) (1/eps)^O(log^2(1/eps)) non-oblivious PTAS for anonymous games with 2 strategies and any bounded number of player types. Our algorithm is based on the construction of a sparse (and efficiently computable) eps-cover of the set of all possible sums of n independent indicators, under the total variation distance. The size of the cover is poly(n) (1/ eps^{O(log^2 (1/eps))}.) <|cite_end|> <|cite_start|> (Reference: Practical and Efficient Approximations of Nash Equilibria for Win-Lose Games Based on Graph Spectra: ) <|cite_end|> <|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|> <|cite_start|> (Reference: The cover number of a matrix and its algorithmic applications: Given a matrix A, we study how many epsilon-cubes are required to cover the convex hull of the columns of A. We show bounds on this cover number in terms of VC dimension and the gamma_2 norm and give algorithms for enumerating elements of a cover. This leads to algorithms for computing approximate Nash equilibria that unify and extend several previous results in the literature. 
Moreover, our approximation algorithms can be applied quite generally to a family of quadratic optimization problems that also includes finding the k-by-k combinatorial rectangle of a matrix. In particular, for this problem we give the first quasi-polynomial time additive approximation algorithm that works for any matrix A in [0,1]^{m x n}.) <|cite_end|>. For example, the result of Alon et al. <|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|> provides a PTAS for games in which the sum of the payoff matrices, i.e., $A+B$, has logarithmic rank. Our result is incomparable to such rank-based results: a sparse matrix can have high rank, and a low-rank matrix can be dense (i.e., have large column sparsity $s$). Chen et al. <|cite_start|> (Reference: Sparse Games Are Hard: ) <|cite_end|> considered sparsity in the context of games and showed that computing an exact Nash equilibrium is hard even if \emph{both} the payoff matrices have a fixed number of non-zero entries in every row \emph{and} column. It was observed in <|cite_start|> (Reference: On Oblivious PTAS's for Nash Equilibrium: If a game has a Nash equilibrium with probability values that are either zero or Omega(1) then this equilibrium can be found exhaustively in polynomial time. Somewhat surprisingly, we show that there is a PTAS for the games whose equilibria are guaranteed to have small-O(1/n)-values, and therefore large-Omega(n)-supports. We also point out that there is a PTAS for games with sparse payoff matrices, which are known to be PPAD-complete to solve exactly. Both algorithms are of a special kind that we call oblivious: The algorithm just samples a fixed distribution on pairs of mixed strategies, and the game is only used to determine whether the sampled strategies comprise an eps-Nash equilibrium; the answer is yes with inverse polynomial probability. These results bring about the question: Is there an oblivious PTAS for Nash equilibrium in general games? We answer this question in the negative; our lower bound comes close to the quasi-polynomial upper bound of [Lipton, Markakis, Mehta 2003]. Another recent PTAS for anonymous games is also oblivious in a weaker sense appropriate for this class of games (it samples from a fixed distribution on unordered collections of mixed strategies), but its runtime is exponential in 1/eps.
We prove that any oblivious PTAS for anonymous games with two strategies and three player types must have 1/eps^c in the exponent of the running time for some c>1/3, rendering the algorithm in [Daskalakis 2008] essentially optimal within oblivious algorithms. In contrast, we devise a poly(n) (1/eps)^O(log^2(1/eps)) non-oblivious PTAS for anonymous games with 2 strategies and any bounded number of player types. Our algorithm is based on the construction of a sparse (and efficiently computable) eps-cover of the set of all possible sums of n independent indicators, under the total variation distance. The size of the cover is poly(n) (1/ eps^{O(log^2 (1/eps))}.) <|cite_end|> that such games admit a trivial PTAS.\footnote{In particular, the product of uniform distributions over players' actions corresponds to an approximate Nash equilibrium in such games.} Note that we study a strictly larger class of games and provide a PTAS for games in which the row \emph{or} column sparsity of $A+B$ is fixed. \paragraph{Densest Subgraph.} The best-known (multiplicative) approximation ratio for the densest $k$-subgraph problem is $n^{(1/4+ o(1))}$ <|cite_start|> (Reference: Detecting high log-densities: an $O(n^{1/4})$ approximation for densest k-subgraph: In the Densest k-Subgraph problem, given a graph G and a parameter k, one needs to find a subgraph of G induced on k vertices that contains the largest number of edges. There is a significant gap between the best known upper and lower bounds for this problem. It is NP-hard, and does not have a PTAS unless NP has subexponential time algorithms. On the other hand, the current best known algorithm of Feige, Kortsarz and Peleg [FKP01] gives an approximation ratio of $n^{1/3-\varepsilon}$ for some specific $\varepsilon > 0$ (estimated by those authors at around $\varepsilon = 1/60$). We present an algorithm that for every $\varepsilon > 0$ approximates the Densest k-Subgraph problem within a ratio of $n^{1/4+\varepsilon}$ in time $n^{O(1/\varepsilon)}$. If allowed to run for time $n^{O(\log n)}$, our algorithm achieves an approximation ratio of $O(n^{1/4})$. Our algorithm is inspired by studying an average-case version of the problem where the goal is to distinguish random graphs from random graphs with planted dense subgraphs; the approximation ratio we achieve for the general case matches the "distinguishing ratio" we obtain for this planted problem. Achieving a distinguishing ratio of $o(n^{1/4})$ for the planted problem (in polynomial time) is beyond the reach of our current techniques. At a high level, our algorithms involve cleverly counting appropriately defined trees of constant size in G, and using these counts to identify the vertices of the dense subgraph. Our algorithm is based on the following principle. We say that a graph G(V, E) has log-density $\alpha$ if its average degree is $\Theta(|V|^{\alpha})$. The algorithmic core of our result is a family of algorithms that) <|cite_end|>. But unlike this result, our work addresses additive approximations with normalized density as the maximization objective. In particular, we approximate \rm{NDkS} by approximately solving a quadratic program, which is similar to the quadratic program used in the Motzkin-Straus theorem. In addition, our approximation algorithm for \rm{DkBS} is based on solving a bilinear program that was formulated by Alon et al.
<|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|>. This bilinear program was used in <|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|> to develop an additive PTAS for \rm{DkBS} in particular classes of graphs, including ones with low-rank adjacency matrices. This paper supplements prior work by developing an approximation algorithm whose running time is parametrized by the maximum degree of the given graph, and not by the rank of its adjacency matrix. \subsection{Techniques} \label{sect:tech} \paragraph{Approximate Nash Equilibria.} Our algorithm for computing an approximate Nash equilibrium relies on finding a near-optimal solution of a bilinear program (BP). The BP we consider was formulated by Mangasarian and Stone <|cite_start|> (Reference: Two-person nonzero-sum games and quadratic programming: ) <|cite_end|> and its optimal (near-optimal) solutions correspond to exact (approximate) Nash equilibria of the given game. Below we provide a sketch of our algorithm that determines a near-optimal solution of this BP. The variables of the BP, $x$ and $y$, correspond to probability distributions that are mixed strategies of the players and its objective is to maximize $x^T C y$, where $C$ is the sum of the payoff matrices of the game.\footnote{We ignore the linear part of the objective for ease of presentation, see Section~\ref{sect:nash} for details.} Suppose we knew the vector $u:=C \hat{y}$, for some Nash equilibrium $(\hat{x}, \hat{y})$. 
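Where might such a vector $u$ come from? As a hedged preview of the outer loop (our own illustrative Python; $b$ stands in for the $O\left(\frac{\log s}{\varepsilon^2}\right)$ bound supplied by the approximate Carath\'{e}odory theorem), it suffices to enumerate the averages of all multisets of $b$ columns of $C$; the next paragraph explains how each candidate is converted into an approximate equilibrium.
\begin{verbatim}
from itertools import combinations_with_replacement
import numpy as np

def candidate_us(C, b):
    """Yield the averages of all multisets of b columns of C.

    This net has n^O(b) elements, and by the approximate Caratheodory
    theorem it contains a vector close (in p-norm) to C @ y_hat for
    some Nash equilibrium (x_hat, y_hat)."""
    n = C.shape[1]
    for cols in combinations_with_replacement(range(n), b):
        yield C[:, list(cols)].mean(axis=1)
\end{verbatim}
Any strategy pair recovered from a candidate $u$ can then be certified with a check along the lines of \texttt{is\_eps\_nash} above.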
Given such a vector $u$, a Nash equilibrium can be efficiently computed by solving a linear program (with variables $x$ and $y$) that is obtained by modifying the BP as follows: replace $x^T C y$ by $x^T u$ as the objective and include the constraint $Cy = u$. Section~\ref{sect:nash} shows that this idea can be used to find an approximate Nash equilibrium, even if $u$ is not exactly equal to $C\hat{y}$ but close to it. That is, to find an approximate Nash equilibrium it suffices to have a vector $u$ for which $\| C\hat{y} - u \|_p$ is small. To apply the approximate version of Carath\'{e}odory's theorem we observe that $C\hat{y}$ is a vector in the convex hull of the \emph{columns} of $C$. Also, note that in the context of (additive) approximate Nash equilibria the payoff matrices are normalized, hence the absolute value of any entry of matrix $C$ is no more than, say, $2$. This entry-wise normalization implies that if no column of matrix $C$ has more than $s$ non-zero entries, then the $\log s$-norm of the columns is a fixed constant: $\| C^i \|_p \leq (s \cdot 2^p)^{1/p} = 2 \cdot 2^{\frac{\log s}{p}} \leq 4$, where $C^i$ is the $i$th column of $C$ and norm $p = \log s$. This is a simple but critical observation, since it implies that, modulo a small scaling factor, the columns of $C$ lie in the $\log s$-unit ball. At this point we can apply the approximate version of Carath\'{e}odory's theorem to guarantee that close to $C \hat{y}$ there exists a vector $u$ that can be expressed as a convex combination of roughly $\frac{4p}{\varepsilon^2} = O\left(\frac{\log s}{\varepsilon^2}\right)$ columns of $C$, with $p = \log s$. We show in Section~\ref{sect:nash} that exhaustively searching for $u$ takes $n^{O(\log s)}$ time for any fixed $\varepsilon$, where $n$ is the number of columns of $C$. Thus we can find a vector close to $C \hat{y}$ and hence determine a near-optimal solution of the bilinear program. This way we get an approximate Nash equilibrium, and the running time of the algorithm is dominated by the exhaustive search. Overall, this template for approximating Nash equilibria in sparse games is made possible by the approximate version of Carath\'{e}odory's theorem. It is notable that our algorithmic framework employs arbitrary norms $p \in [2, \infty)$, and in this sense it goes beyond standard \emph{$\varepsilon$-net}-based results that typically use norms $1$, $2$, or $\infty$. \paragraph{Densest Subgraph.} The algorithmic approach outlined above applies to any quadratic or bilinear program in which the objective matrix is column (or row) sparse and the feasible region is contained in the simplex. We use this observation to develop additive approximations for \rm{NDkS} and \rm{DkBS}. Specifically, we formulate a quadratic program, near-optimal solutions of which correspond to approximate solutions of \rm{NDkS}. The column sparsity of the objective matrix in the quadratic program is equal to the maximum degree of the underlying graph plus one. Hence, using the above-mentioned observation, we obtain the approximation result for \rm{NDkS}. The same template applies to \rm{DkBS}; for this problem we employ a bilinear program from <|cite_start|> (Reference: The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε.
This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.) <|cite_end|>. \subsection{Organization} We begin by setting up notation in Section~\ref{sect:notation}. Then, in Section~\ref{sect:caratheodory} we present the approximate version of Carath\'{e}odory's theorem. Algorithmic applications of the theorem are developed in Sections~\ref{sect:nash} and~\ref{sect:dense}. In Section~\ref{sect:ext} we consider convex hulls of matrices and also detail approximate versions of the colorful Carath\'{e}odory theorem and Tverberg's theorem. Finally, Section~\ref{sect:lb} presents a lower bound showing that, in general, $\varepsilon$-close (under the $p$-norm distance with $p \in [2, \infty)$) vectors cannot be expressed as a convex combination of fewer than $\frac{1}{4 \, \varepsilon^{p/(p-1)}}$ vectors of the given set. <|paper_end|>
[ "<|reference_start|> Algorithmic game theory: We give an introduction to the micro-economic field of Mechanism Design slightly biased towards a computer-scientist’s point of view. <|reference_end|>", "<|reference_start|> Playing Large Games Using Simple Strategies: We prove the existence of ε-Nash equilibrium strategies with support logarithmic in the number of pure strategies. We also show that the payoffs to all players in any (exact) Nash equilibrium can be ε-approximated by the payoffs to the players in some such logarithmic support ε-Nash equilibrium. These strategies are also uniform on a multiset of logarithmic size and therefore this leads to a quasi-polynomial algorithm for computing an ε-Nash equilibrium. To our knowledge this is the first subexponential algorithm for finding an ε-Nash equilibrium. Our results hold for any multiple-player game as long as the number of players is a constant (i.e., it is independent of the number of pure strategies). A similar argument also proves that for a fixed number of players m, the payoffs to all players in any m-tuple of mixed strategies can be ε-approximated by the payoffs in some m-tuple of constant support strategies.We also prove that if the payoff matrices of a two person game have low rank then the game has an exact Nash equilibrium with small support. This implies that if the payoff matrices can be well approximated by low rank matrices, the game has an ε-equilibrium with small support. It also implies that if the payoff matrices have constant rank we can compute an exact Nash equilibrium in polynomial time. <|reference_end|>", "<|reference_start|> The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body. <|reference_end|>", "<|reference_start|> The Approximate Rank of a Matrix and Its Algorithmic Applications: Approximate Rank: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. 
We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body. <|reference_end|>" ]
[ 3, 4, 40, 41 ]
{"<|cite_2|>": "ss-1984323", "<|multi_cite_3_1|>": "arxiv-65", "<|multi_cite_3_2|>": "ss-937231", "<|cite_4|>": "ss-1288197", "<|cite_5|>": "ss-1656836", "<|cite_6|>": "arxiv-19160", "<|cite_7|>": "arxiv-19160", "<|cite_10|>": "ss-1385683", "<|cite_11|>": "ss-1385683", "<|cite_12|>": "ss-1117331", "<|multi_cite_13_2|>": "ss-1984323", "<|multi_cite_13_3|>": "ss-1984324", "<|multi_cite_14_1|>": "arxiv-65", "<|multi_cite_14_2|>": "ss-937231", "<|multi_cite_15_1|>": "ss-1656836", "<|multi_cite_15_2|>": "ss-725532", "<|multi_cite_15_3|>": "ss-812352", "<|multi_cite_15_4|>": "arxiv-673487", "<|multi_cite_15_5|>": "ss-809086", "<|multi_cite_15_6|>": "ss-725535", "<|multi_cite_15_7|>": "ss-1710701", "<|multi_cite_15_8|>": "ss-725534", "<|multi_cite_15_9|>": "ss-725536", "<|multi_cite_15_10|>": "ss-1932867", "<|multi_cite_15_11|>": "ss-1117331", "<|multi_cite_15_12|>": "ss-1984325", "<|multi_cite_16_1|>": "ss-1385683", "<|multi_cite_16_2|>": "ss-1351956", "<|multi_cite_16_3|>": "ss-777635", "<|cite_17|>": "ss-1656836", "<|cite_18|>": "ss-725536", "<|multi_cite_19_1|>": "arxiv-673487", "<|multi_cite_19_2|>": "arxiv-19160", "<|multi_cite_19_3|>": "ss-1932867", "<|multi_cite_19_4|>": "ss-1117331", "<|multi_cite_19_5|>": "ss-1984325", "<|cite_20|>": "ss-1117331", "<|cite_21|>": "ss-1828259", "<|cite_22|>": "arxiv-19160", "<|cite_23|>": "ss-686758", "<|cite_25|>": "ss-1117331", "<|cite_26|>": "ss-1117331", "<|cite_27|>": "ss-2454952", "<|cite_28|>": "ss-1117331"}
2407.08459
<|paper_start|> Title: Graph Expansions of Deep Neural Networks and their Universal Scaling Limits Abstract: Graph Expansions of Deep Neural Networks and their Universal Scaling Limits: We present a unified approach to obtain scaling limits of neural networks using the genus expansion technique from random matrix theory. This approach begins with a novel expansion of neural networks which is reminiscent of Butcher series for ODEs, and is obtained through a generalisation of Fa\`a di Bruno's formula to an arbitrary number of compositions. In this expansion, the role of monomials is played by random multilinear maps indexed by directed graphs whose edges correspond to random matrices, which we call operator graphs. This expansion linearises the effect of the activation functions, allowing for the direct application of Wick's principle to compute the expectation of each of its terms. We then determine the leading contribution to each term by embedding the corresponding graphs onto surfaces, and computing their Euler characteristic. Furthermore, by developing a correspondence between analytic and graphical operations, we obtain similar graph expansions for the neural tangent kernel as well as the input-output Jacobian of the original neural network, and derive their infinite-width limits with relative ease. Notably, we find explicit formulae for the moments of the limiting singular value distribution of the Jacobian. We then show that all of these results hold for networks with more general weights, such as general matrices with i.i.d. entries satisfying moment assumptions, complex matrices and sparse matrices. Introduction \label{sec:intro} \subsection{Scaling limits of neural networks} Deep neural networks (NNs) whose weights' and biases' entries are initialised as appropriately rescaled, independent and identically distributed (i.i.d.) Gaussian random variables converge to Gaussian processes (GPs) as their width tends to infinity. This well-known fact was originally observed by <|cite_start|> (Reference: Bayesian learning for neural networks: ) <|cite_end|> for shallow feedforward networks and more recently by <|cite_start|> (Reference: Gaussian Process Behaviour in Wide Deep Neural Networks: Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition. We show that, under broad conditions, as we make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process, formalising and extending existing results by Neal (1996) to deep networks. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close. We discuss the desirability of Gaussian process behaviour and review non-Gaussian alternative models from the literature.) <|cite_end|> for multi-layer feedforward networks, by <|cite_start|> (Reference: Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes: There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs). 
This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP. In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels. We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible. Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical. As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case. We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation.) <|cite_end|> and <|cite_start|> (Reference: Deep Convolutional Networks as shallow Gaussian Processes: We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters.) <|cite_end|> for deep convolutional networks, and by <|cite_start|> (Reference: Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes: Wide neural networks with random weights and biases are Gaussian processes, as observed by Neal (1995) for shallow networks, and more recently by Lee et al.~(2018) and Matthews et al.~(2018) for deep fully-connected networks, as well as by Novak et al.~(2019) and Garriga-Alonso et al.~(2019) for deep convolutional networks. We show that this Neural Network-Gaussian Process correspondence surprisingly extends to all modern feedforward or recurrent neural networks composed of multilayer perceptron, RNNs (e.g. LSTMs, GRUs), (nD or graph) convolution, pooling, skip connection, attention, batch normalization, and/or layer normalization. More generally, we introduce a language for expressing neural network computations, and our result encompasses all such expressible neural networks. This work serves as a tutorial on the \emph{tensor programs} technique formulated in Yang (2019) and elucidates the Gaussian Process results obtained there. 
We provide open-source implementations of the Gaussian Process kernels of simple RNN, GRU, transformer, and batchnorm+ReLU network at github.com/thegregyang/GP4A. Please see our arxiv version for the complete and up-to-date version of this paper.) <|cite_end|> for more general architectures, including recurrent and attention-based networks. Although these results hold for untrained neural networks at initialisation, similar scaling limits have been derived in recent years to study the training dynamics of NNs in the infinite-width limit. Different scalings/parametrisations when passing to the limit (i.e. choices, as functions of the width, of the variance of the random initialisation and of the learning rates for each layer) produce fundamentally different limiting behaviours of the gradient descent (GD) dynamics of wide NNs. Notable examples include the so-called \emph{neural tangent kernel} (NTK) by <|cite_start|> (Reference: Gradient Descent Provably Optimizes Over-parameterized Neural Networks: One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For an $m$ hidden node shallow neural network with ReLU activation and $n$ training data, we show as long as $m$ is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first order methods.) <|cite_end|> <|cite_start|> (Reference: Neural Tangent Kernel: Convergence and Generalization in Neural Networks: At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training.
The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.) <|cite_end|>, the \emph{mean field parameterisation} studied by <|cite_start|> (Reference: On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport: Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.) <|cite_end|> <|cite_start|> (Reference: A Mean Field View Of The Landscape Of Two-layer Neural Networks: Significance Multilayer neural networks have proven extremely successful in a variety of tasks, from image classification to robotics. However, the reasons for this practical success and its precise domain of applicability are unknown. Learning a neural network from data requires solving a complex optimization problem with millions of variables. This is done by stochastic gradient descent (SGD) algorithms. We study the case of two-layer networks and derive a compact description of the SGD dynamics in terms of a limiting partial differential equation. Among other consequences, this shows that SGD dynamics does not become more complex when the network size increases. Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that—in a suitable scaling limit—SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for “averaging out” some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.) 
<|cite_end|> <|cite_start|> (Reference: Mean Field Analysis of Neural Networks: a law of large numbers: Machine learning, and in particular neural network models, have revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis, and develops approaches for addressing some of the technical challenges which arise. We analyze one-layer neural networks in the asymptotic regime of simultaneously (A) large network sizes and (B) large numbers of stochastic gradient descent training iterations. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property which is commonly called "propagation of chaos".) <|cite_end|> for two-layer NNs, or the more recent \emph{maximal update parameterisation} ($\mu$P) by <|cite_start|> (Reference: Feature Learning in Infinite-Width Neural Networks: As its width tends to infinity, a deep neural network's behavior under gradient descent can become simplified and predictable (e.g. given by the Neural Tangent Kernel (NTK)), if it is parametrized appropriately (e.g. the NTK parametrization). However, we show that the standard and NTK parametrizations of a neural network do not admit infinite-width limits that can learn features, which is crucial for pretraining and transfer learning such as with BERT. We propose simple modifications to the standard parametrization to allow for feature learning in the limit. Using the *Tensor Programs* technique, we derive explicit formulas for such limits. On Word2Vec and few-shot learning on Omniglot via MAML, two canonical tasks that rely crucially on feature learning, we compute these limits exactly. We find that they outperform both NTK baselines and finite-width networks, with the latter approaching the infinite-width feature learning performance as width increases. More generally, we classify a natural space of neural network parametrizations that generalizes standard, NTK, and Mean Field parametrizations. We show 1) any parametrization in this space either admits feature learning or has an infinite-width training dynamics given by kernel gradient descent, but not both; 2) any such infinite-width limit can be computed using the Tensor Programs technique. Code for our experiments can be found at github.com/edwardjhu/TP4.) <|cite_end|> <|cite_start|> (Reference: Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer: Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization (muP), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call muTransfer: parametrize the target model in muP, tune the HP indirectly on a smaller model, and zero-shot transfer them to the full-sized model, i.e., without directly tuning the latter at all. 
We verify muTransfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A Pytorch implementation of our technique can be found at github.com/microsoft/mup and installable via `pip install mup`.) <|cite_end|>. Besides, the input-output Jacobian singular value distribution, or \emph{spectrum}, of a wide neural network is an important indicator of its architectural soundness, particularly when one is interested in preventing exponential explosion or vanishing of gradients <|cite_start|> (Reference: Understanding the Difficulty of training Deep Feedforward Neural Networks: Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activations functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, and explaining the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence. 1 Deep Neural Networks Deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features. Much attention has recently been devoted to them (see (Bengio, 2009) for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009), suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures.
Most of the recent experimental results with deep architecture are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pretraining (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a “better” basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results. So here instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact).) <|cite_end|> <|cite_start|> (Reference: Exact solutions to the nonlinear dynamics of learning in deep linear neural networks: Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed can nevertheless remain finite: for a special class of initial conditions on the weights, very deep networks incur only a finite, depth independent, delay in learning speed relative to shallow networks. We show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, while scaled random Gaussian initializations cannot. We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pre-training, enjoys depth independent learning times. We further show that these initial conditions also lead to faithful propagation of gradients even in deep nonlinear networks, as long as they operate in a special regime known as the edge of chaos.) 
<|cite_end|> <|cite_start|> (Reference: The Emergence of Spectral Universality in Deep Networks: Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network's input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization. To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network's Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.) <|cite_end|>. Although of similar nature, these results have been derived using diverse mathematical techniques across different works, ranging from classical probability theory to random matrix theory (particularly free probability), resulting in a lack of unified treatment of the various scaling limits. Furthermore, the vast majority of studies have concentrated on the case of dense Gaussian weights and biases. In this paper, we propose a unified framework to express these scaling limits which leverages the \textit{genus expansion technique} from random matrix theory. This technique has its roots in the connection between matrix integrals and the enumeration of maps, which was first discovered in the context of quantum field theory (see <|cite_start|> (Reference: Space-Time Approach to Non-Relativistic Quantum Mechanics: Non-relativistic quantum mechanics is formulated here in a different way. It is, however, mathematically equivalent to the familiar formulation. In quantum mechanics the probability of an event which can happen in several different ways is the absolute square of a sum of complex contributions, one from each alternative way. The probability that a particle will be found to have a path x(t) lying somewhere within a region of space time is the square of a sum of contributions, one from each path in the region. The contribution from a single path is postulated to be an exponential whose (imaginary) phase is the classical action (in units of ℏ) for the path in question. The total contribution from all paths reaching x, t from the past is the wave function ψ(x, t). This is shown to satisfy Schroedinger's equation. The relation to matrix and operator algebra is discussed. Applications are indicated, in particular to eliminate the coordinates of the field oscillators from the equations of quantum electrodynamics.) <|cite_end|> <|cite_start|> (Reference: A Planar Diagram Theory for Strong Interactions: ) <|cite_end|> <|cite_start|> (Reference: Planar diagrams: ) <|cite_end|>, as well as <|cite_start|> (Reference: Matrix integrals and map enumeration: An accessible introduction: ) <|cite_end|> for an accessible introduction to the subject). The link to random matrix theory was later made by Harer and Zagier <|cite_start|> (Reference: The Euler Spiral: ) <|cite_end|> in a seminal work investigating moduli spaces of curves, and has since been used to study various matrix ensembles and their asymptotic first and second-order freeness (we do not attempt to survey such results here, and instead refer the reader to the recent work of Dubach and Peled and the references therein).
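To see the matrix-integral/map-enumeration connection in action, consider the classical computation of $\E\,\frac{1}{N}\operatorname{tr}\big((W W^{\top}/N)^{k}\big)$ for an $N\times N$ matrix $W$ with i.i.d. standard Gaussian entries: Wick pairings correspond to maps, the planar (genus-zero) ones dominate, and they are counted by the Catalan numbers. The numpy sketch below is our illustration, not from the paper, and numerically confirms this leading-order prediction.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
N, trials, k_max = 400, 20, 5

def catalan(k):
    return comb(2 * k, k) // (k + 1)

# Monte Carlo estimate of E (1/N) tr((W W^T / N)^k) for Gaussian W.
moments = np.zeros(k_max)
for _ in range(trials):
    W = rng.standard_normal((N, N))
    M = W @ W.T / N
    Mk = np.eye(N)
    for k in range(1, k_max + 1):
        Mk = Mk @ M
        moments[k - 1] += np.trace(Mk) / N / trials

for k in range(1, k_max + 1):
    print(f"k = {k}:  Monte Carlo ~ {moments[k - 1]:7.3f}   Catalan C_{k} = {catalan(k)}")
```

Each pairing of the $2k$ Gaussian factors yields a map; after normalisation the planar ones contribute at order one, while higher-genus pairings are suppressed by negative powers of $N$, exactly the mechanism described next.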
Roughly speaking, the technique consists in expanding the trace of random matrix products and evaluating the resulting sum using Wick's principle. The resulting terms turn out to be in bijection with a set of graphs, and one determines which terms are of leading order by embedding their corresponding graphs into surfaces and computing their Euler characteristics. To the best of our knowledge, this technique has yet to be used in the context of deep learning. This is likely due to the presence of non-linear activations, which often prohibit one from applying it directly. We circumvent this problem by first developing a graphical language to express a large class of matrix/vector products, and then deriving an expansion for neural networks in terms of this language. This expansion linearises the effect of the activation functions, allowing Wick’s formula to be applied and the connection to the enumeration of maps to be made. A high-level overview of our approach is given below. \subsection{Overview of our method.} \label{sec:intro_mainideas} \subsubsection{A graphical language for neural network computations.} The idea of using a graphical language to simplify computations involving multilinear maps is not entirely new, dating back to at least the 1970s with the introduction of Penrose diagrams, which have more recently been applied in the context of machine learning (see <|cite_start|> (Reference: Tensor networks in a nutshell: Tensor network methods are taking a central role in modern quantum physics and beyond. They can provide an efficient approximation to certain classes of quantum states, and the associated graphical language makes it easy to describe and pictorially reason about quantum circuits, channels, protocols, open systems and more. Our goal is to explain tensor networks and some associated methods as quickly and as painlessly as possible. Beginning with the key definitions, the graphical tensor network language is presented through examples. We then provide an introduction to matrix product states. We conclude the tutorial with tensor contractions evaluating combinatorial counting problems. The first one counts the number of solutions for Boolean formulae, whereas the second is Penrose's tensor contraction algorithm, returning the number of $3$-edge-colorings of $3$-regular planar graphs.) <|cite_end|> <|cite_start|> (Reference: Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions: Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so called curse of dimensionality - which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large-scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas.
In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy to interpret graphical representations of the mathematical operations on tensor networks. Such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train TT decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.) <|cite_end|>). As discussed earlier, graphs have also been used to evaluate expectations of products of Gaussian variables, and this forms the basis of the genus expansion technique. The graphs that we introduce are novel and accomplish both of these tasks at once. On the one hand, they can be used to express deterministic products and operations involving multilinear maps. On the other, when dealing with tensors with Gaussian entries, the expectation of these operations can once again be expressed in terms of graphs (in the sense of equation (\ref{eq:wickexpansion})). We explain this briefly below, deferring to Section \ref{sec:graph_dictionary} for more details. In what we call a \textit{product} graph $G=(V,E)$, edges will correspond to matrices and vertices to vectors, which we call the inputs of their respective edge/vertex. The graph’s structure then dictates a well-defined product involving these inputs, the result of which we call the \textit{value} of the graph and denote by $\mathbf{W}_G$. For instance, a path of length $k$ can be used to express an (ordinary) product of $k$ matrices, while trees can be used to express Hadamard (entrywise) products (this is depicted in Figures \ref{fig:path-graph} and \ref{fig:basic-tree}). If we omit inputs for some vertices and edges of the graph and view them as variables, then the resulting graph corresponds to a (multi)linear map and we call it an \textit{operator graph}. Differentiation, composition and other operations involving these maps then turn out to be easily expressible using simple manipulations of their corresponding graph (composition, for instance, reduces to attaching graphs by a vertex), as explained in Section \ref{subsec:operator_graph} and the figures therein. \subsubsection{Graph expansions of neural networks.} The connection to neural networks is made by expanding their output at a given input $\bx$ as a linear combination of product graphs \begin{equation}\label{eqn:intro_nn_graph_expansion} \Phi(\bx)= \sum_{G\in \mathcal{F}} c(G)\bW_G \end{equation} for some family of graphs $\mathcal{F}$ and combinatorial factors $c(G)$. This is achieved in Theorem \ref{thm:tree_exp}, which essentially generalizes Faà di Bruno’s formula (see <|cite_start|> (Reference: FAA: The deliberate crash caused by the co-pilot Andreas Lubitz aboard Germanwings Flight 9525 has already resulted in disaster. Attention on the incident has now turned to understanding the co-pilot's mental state, and to whether aviation authorities and airlines exercised due diligence in assessing pilots' psychological stability.) <|cite_end|>) to the case of an arbitrary number of compositions.
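As a toy instance of such an expansion (our illustration; the dimensions and names are arbitrary), consider a depth-one network with monomial activation $\varphi(t)=t^{2}$: its output $W_{2}\varphi(W_{1}\bx)$ is the value of a single tree, in which two copies of the edge $W_{1}$ (each fed $\bx$) meet at a vertex where their images are multiplied entrywise, and the root edge $W_{2}$ is applied to the result.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
W1, W2 = rng.standard_normal((N, N)), rng.standard_normal((N, N))
x = rng.standard_normal(N)

# Network output with monomial activation phi(t) = t^2.
phi = lambda t: t ** 2
net = W2 @ phi(W1 @ x)

# The same value read off the corresponding product graph: a Hadamard
# product of two copies of W1 @ x at the inner vertex, then the edge W2.
h = W1 @ x
graph_value = W2 @ (h * h)

assert np.allclose(net, graph_value)
print(net[:3])
```

For a general polynomial activation the output is a sum of such tree values, weighted by the combinatorial factors $c(G)$ of equation (\ref{eqn:intro_nn_graph_expansion}).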
In similar tasks, trees have been shown to be a natural combinatorial tool to keep track of terms (see the literature on Butcher series <|cite_start|> (Reference: Butcher series: A story of rooted trees and numerical methods for evolution equations: Butcher series appear when Runge-Kutta methods for ordinary differential equations are expanded in power series of the step size parameter. Each term in a Butcher series consists of a weighted elementary differential, and the set of all such differentials is isomorphic to the set of rooted trees, as noted by Cayley in the mid 19th century. A century later Butcher discovered that rooted trees can also be used to obtain the order conditions of Runge-Kutta methods, and he found a natural group structure, today known as the Butcher group. It is now known that many numerical methods also can be expanded in Butcher series; these are called B-series methods. A long-standing problem has been to characterize, in terms of qualitative features, all B-series methods. Here we tell the story of Butcher series, stretching from the early work of Cayley, to modern developments and connections to abstract algebra, and finally to the resolution of the characterization problem. This resolution introduces geometric tools and perspectives to an area traditionally explored using analysis and combinatorics.) <|cite_end|> <|cite_start|> (Reference: Ramification of rough paths: ) <|cite_end|>, and, more generally, on Runge-Kutta methods for ordinary differential equations <|cite_start|> (Reference: Solving ordinary differential equations I: nonstiff problems: This book deals with methods for solving nonstiff ordinary differential equations. The first chapter describes the historical development of the classical theory, and the second chapter includes a modern treatment of Runge-Kutta and extrapolation methods. Chapter three begins with the classical theory of multistep methods, and concludes with the theory of general linear methods. The reader will benefit from many illustrations, a historical and didactic approach, and computer programs which help him/her learn to solve all kinds of ordinary differential equations. This new edition has been rewritten and new material has been included.) <|cite_end|>), and this is reflected here in the fact that $\mathcal{F}$ (in Eq. (\ref{eqn:intro_nn_graph_expansion})) turns out to be a set of rooted trees. By applying our previously mentioned graphical rules to each term in this sum, we derive similar expansions for various related quantities, namely the $k$-th coordinate of $\Phi(\bx)$, the neural tangent kernel, and the trace of the input-output Jacobian of $\Phi$ times its transpose, raised to an arbitrary power. This reduces the task of obtaining scaling limits to that of evaluating $\E \bW_G$ for various graphs $G$. \subsubsection{Wick’s principle and genus expansion.} When $G$ is a product graph whose edge inputs have Gaussian entries, our main tool to evaluate $\E\bW_G$ is Wick's principle, which reduces the expectation of a product of Gaussian variables to a sum, over pairings, of products of pairwise covariances. Applied to $\bW_G$, it yields the following simple identity \begin{equation}\label{eq:wickexpansion} \E \bW_G = \sum_{\phi} \bW_{G_\phi}, \end{equation} (see Theorem \ref{thm:wickexpansion}), where the sum is taken over \textit{admissible} pairings $\phi$ of the edges of $G$ (see Def.
\ref{def:admissible_pairings}), and $G_\phi$ is the graph obtained from $G$ after identifying edges paired by $\phi$ (meaning that we consider such edges to be the same edge in $G_\phi$). Under additional assumptions on $G$ (see Assumption \ref{assumption:genus_graph}), we find that $\bW_{G_\phi}=\sigma_G N^{|V(G_\phi)|}$ for every $\phi$ (where $|V(G_\phi)|$ is the number of vertices in $G_\phi$ and $\sigma_G$ is a variance parameter) and the asymptotic order of $\bW_G$ is thus determined by the pairings for which $|V(G_\phi)|$ is maximised. Instead of counting this quantity directly, it turns out to be much simpler to embed the graph onto a surface $S_\phi$ (as defined in Equation \ref{eq:sphi}) and to then compute $|V(G_\phi)|$ using the \textit{Euler characteristic formula} \[ |V(G_\phi)|-|E(G_\phi)|+f(G_\phi:S_\phi) = 2-2g(S_\phi), \] where $|E(G_\phi)|$ is the number of edges of $G_\phi$, $f(G_\phi:S_\phi)$ the number of faces of $G_\phi$ in $S_\phi$ and $g(S_\phi)$ the genus of $S_\phi$. This formula allows us to identify which $\phi$ give rise to leading and sub-leading order terms in Eq. (\ref{eq:wickexpansion}), which we call \textit{fully-atomic} and \textit{bi-atomic} pairings, respectively, following the terminology of Dubach and Peled. We use this to give a more explicit version of equation (\ref{eq:wickexpansion}), and to extend it to centred mixed moments $\E \{\prod_{G} (\bW_G-\E \bW_G)\}$ as well (see Lemma \ref{lemma:mixedmoments_genus}). Lastly, we combine these results to obtain a limit theorem for the joint moments of product graphs (Theorem \ref{thm:joint_gaussian_limit}), reminiscent of a celebrated result of Diaconis and Shahshahani <|cite_start|> (Reference: On the eigenvalues of random matrices: Let M be a random matrix chosen from Haar measure on the unitary group Un. Let Z = X + iY be a standard complex normal random variable with X and Y independent, mean 0 and variance ½ normal variables. We show that for j = 1, 2, …, Tr(M^j) are independent and distributed as √jZ asymptotically as n →∞. This result is used to study the set of eigenvalues of M. Similar results are given for the orthogonal and symplectic and symmetric groups.) <|cite_end|> for traces of powers of random unitary matrices, and its recent generalisation (Thm. 1.2 in the work of Dubach and Peled). \bigskip When the edge inputs in $G$ are complex, non-Gaussian or sparse matrices (or any combination of the three), we show that all of these results still hold up to an $o(1)$ error term (see sections \ref{sec:complex_case}, \ref{sec:non_gaussian}, \ref{sec:sparse}, respectively). This allows us to extend all our main results to NNs with such weight matrices. \subsubsection{A pipeline for scaling limits.}\label{pipeline} With the graph expansion in (\ref{eqn:intro_nn_graph_expansion}), the dictionary between analytic and graphical operations and the genus expansion to compute each $\bW_{G_\phi}$ in (\ref{eq:wickexpansion}), we propose the following pipeline to study neural network scaling limits. \begin{enumerate}[label=(\Roman{*})] \item Express the desired quantity in terms of values of product graphs \( G \). \item Apply the genus formula (\ref{eq:wickexpansion}) to derive the scaling limits for the \( \mathbf{W}_G \). \item Evaluate these terms using combinatorial arguments, usually leveraging the symmetries present in the graph $G$. \item Substitute these quantities back into the expression from the first step.
\end{enumerate} To the best of our knowledge, the only unifying framework currently proposed in the literature is that of so-called \textit{tensor programs} (developed by Yang <|cite_start|> (Reference: Tensor Programs II: Neural Tangent Kernel for Any Architecture: We prove that a randomly initialized neural network of *any architecture* has its Tangent Kernel (NTK) converge to a deterministic limit, as the network widths tend to infinity. We demonstrate how to calculate this limit. In prior literature, the heuristic study of neural network gradients often assumes every weight matrix used in forward propagation is independent from its transpose used in backpropagation (Schoenholz et al. 2017). This is known as the *gradient independence assumption (GIA)*. We identify a commonly satisfied condition, which we call *Simple GIA Check*, such that the NTK limit calculation based on GIA is correct. Conversely, when Simple GIA Check fails, we show GIA can result in wrong answers. Our material here presents the NTK results of Yang (2019a) in a friendly manner and showcases the *tensor programs* technique for understanding wide neural networks. We provide reference implementations of infinite-width NTKs of recurrent neural network, transformer, and batch normalization at this https URL.) <|cite_end|>). Our pipeline, which is built on first principles, can be seen as an alternative to the latter, and yields universal results that also hold for finite-dimensional weights. As remarked in Section \ref{sec:application_NNs}, it also sheds new light on classical results, by, for instance, recovering mainstream parameterizations as canonical choices. More importantly, this pipeline provides a clear path to tackle more complex settings (e.g. other architectures), and \textit{applies just as well to the training regime}. For instance, we believe that it can be used directly to study discrete stochastic gradient descent, generalising the arguments in <|cite_start|> (Reference: Infinite-width limit of deep linear neural networks: This paper studies the infinite-width limit of deep linear neural networks initialized with random parameters. We obtain that, when the number of neurons diverges, the training dynamics converge (in a precise sense) to the dynamics obtained from a gradient descent on an infinitely wide deterministic linear neural network. Moreover, even if the weights remain random, we get their precise law along the training dynamics, and prove a quantitative convergence result of the linear predictor in terms of the number of neurons. We finally study the continuous-time limit obtained for infinitely wide linear neural networks and show that the linear predictors of the neural network converge at an exponential rate to the minimal $\ell_2$-norm minimizer of the risk.) <|cite_end|>, which studies the scaling limits of NNs under $\mu$P initialisation. We survey other possible extensions in Section \ref{sec:informal_extensions}. \subsection{Main results} Fix sequences $(\varphi_{\ell}: \bR \to \bR ~|~ \ell \in \N_{>0})$ of polynomial \emph{activation functions}, $(N_\ell \in \N_{>0} ~|~ \ell \in \N)$ of \textit{layer dimensions} and $(W_{\ell} \in \mathbb{R}^{N_{\ell+1}\times N_{\ell}} ~|~ \ell \in \N)$ of weight matrices. We define a \emph{feed-forward neural network} $\Phi_L$ of depth $L$ by the recursion \begin{equation}\label{intro:def:NN} \Phi_0(\bx)=W_0\bx,\quad \Phi_{\ell+1}(\bx)=W_{\ell+1}\varphi_{\ell+1}(\Phi_{\ell}(\bx)), \end{equation} where each $\varphi_{\ell}$ is applied entry-wise.
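For concreteness, here is a direct numpy transcription of this recursion (our sketch, not code from the paper), using the variance scaling assumed in Theorem \ref{thm:general_GP} below: the input layer has unit-variance entries and every hidden layer has entry variance $1/N$.

```python
import numpy as np

def forward(x, weights, phis):
    """Evaluate Phi_0 = W_0 x and Phi_{l+1} = W_{l+1} phi_{l+1}(Phi_l)."""
    h = weights[0] @ x
    for W, phi in zip(weights[1:], phis):
        h = W @ phi(h)
    return h

rng = np.random.default_rng(3)
N0, N, L = 10, 1000, 3
poly = lambda t: t + 0.5 * t ** 2           # a polynomial activation, as in this setup
phis = [poly] * L

# Input layer: entry variance 1; hidden layers: entry variance 1/N.
weights = [rng.standard_normal((N, N0))]
weights += [rng.standard_normal((N, N)) / np.sqrt(N) for _ in range(L)]

x = rng.standard_normal(N0) / np.sqrt(N0)   # normalised so that ||x|| is O(1)
print(forward(x, weights, phis)[:5])
```

Under this scaling, each coordinate of the output remains of order one as $N$ grows, which is what makes the Gaussian process limit below non-degenerate.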
We omit bias terms and restrict ourselves to polynomial activations for simplicity here, and discuss the requisite modifications to remove these restrictions in Section \ref{sec:informal_extensions}. To demonstrate our pipeline, we obtain simple and insightful proofs of some previously mentioned, fundamental results. The first of these is the following universal Gaussian process limit, which holds under ``GP limit parameterisation" for a large class of neural networks with sparse random weights. \begin{theorem}[Gaussian process limit of neural networks] \label{thm:general_GP} Let $N_\ell = N$ when $\ell>0$, and assume that each $W_\ell$ has i.i.d. entries drawn from a symmetric, centred distribution with finite moments and variance $\frac{1}{N}\mathbf{1}(\ell>0)+\mathbf{1}(\ell=0)$. Then for any $M,L \geq 1$ we have \begin{equation} ([\Phi_{L}]_1,...,[\Phi_{L}]_M) \xrightarrow[N \to \infty]{d} \mathcal{GP}(0,K_{L} \otimes \mathbf{I}_M) \end{equation} where the right hand side is a Gaussian Process indexed on $\bR^{N_0}$, with diagonal covariance function defined by \begin{gather}\label{eq:intro_GPKer} K_{0}(\bx,\by) = \sprod{\bx}{\by}_{\bR^{N_0}}, ~ K_{\ell+1}(\bx,\by) = \E\left[ \varphi_{\ell+1}(X_{\ell}) \varphi_{\ell+1}(Y_{\ell}) \right] \\ (X_{\ell},Y_{\ell}) \sim \mathcal{N}\left(0, \begin{bmatrix} K_{\ell}(\bx,\bx) & K_{\ell}(\bx,\by) \\ K_{\ell}(\by,\bx) & K_{\ell}(\by,\by) \end{bmatrix}\right). \end{gather} Furthermore, the same result holds if the weight matrices are of the form $\tilde W_\ell := W_\ell \odot \frac{1}{\sqrt{p_N}} B_\ell$, where $W_\ell$ are as above and the $B_\ell$ are independent matrices with i.i.d., Bernoulli distributed entries with parameter $p_N$ satisfying $N p_N \to \infty$. \end{theorem} \begin{proof}This follows from Theorem \ref{thm:GPlimit} and the corollaries in sections \ref{sec:complex_case}, \ref{sec:non_gaussian}, and \ref{sec:sparse}. \end{proof} This adds to the growing list of generalisations of the result of Matthews et al. <|cite_start|> (Reference: Gaussian Process Behaviour in Wide Deep Neural Networks: Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition. We show that, under broad conditions, as we make the architecture increasingly wide, the implied random function converges in distribution to a Gaussian process, formalising and extending existing results by Neal (1996) to deep networks. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close. We discuss the desirability of Gaussian process behaviour and review non-Gaussian alternative models from the literature.) <|cite_end|> to non-Gaussian settings, such as that of Huang <|cite_start|> (Reference: On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization: The prevailing thinking is that orthogonal weights are crucial to enforcing dynamical isometry and speeding up training. The increase in learning speed that results from orthogonal initialization in linear networks has been well-proven. 
However, while the same is believed to also hold for nonlinear networks when the dynamical isometry condition is satisfied, the training dynamics behind this contention have not been thoroughly explored. In this work, we study the dynamics of ultra-wide networks across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs) with orthogonal initialization via neural tangent kernel (NTK). Through a series of propositions and lemmas, we prove that two NTKs, one corresponding to Gaussian weights and one to orthogonal weights, are equal when the network width is infinite. Further, during training, the NTK of an orthogonally-initialized infinite-width network should theoretically remain constant. This suggests that the orthogonal initialization cannot speed up training in the NTK (lazy training) regime, contrary to the prevailing thoughts. In order to explore under what circumstances can orthogonality accelerate training, we conduct a thorough empirical investigation outside the NTK regime. We find that when the hyper-parameters are set to achieve a linear regime in nonlinear activation, orthogonal initialization can improve the learning speed with a large learning rate or large depth.) <|cite_end|> to orthogonal weights, and Hanin <|cite_start|> (Reference: Random Neural Networks in the Infinite Width Limit as Gaussian Processes: This article gives a new proof that fully connected neural networks with random weights and biases converge to Gaussian processes in the regime where the input dimension, output dimension, and depth are kept fixed, while the hidden layer widths tend to infinity. Unlike prior work, convergence is shown assuming only moment conditions for the distribution of weights and for quite general non-linearities.) <|cite_end|> to weights with i.i.d. entries satisfying finite moment assumptions. More recently, Nait--Saada, Naderi and Tanner <|cite_start|> (Reference: Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes: The infinitely wide neural network has been proven a useful and manageable mathematical model that enables the understanding of many phenomena appearing in deep learning. One example is the convergence of random deep networks to Gaussian processes that allows a rigorous analysis of the way the choice of activation function and network weights impacts the training dynamics. In this paper, we extend the seminal proof of Matthews et al. (2018) to a larger class of initial weight distributions (which we call PSEUDO-IID), including the established cases of IID and orthogonal weights, as well as the emerging low-rank and structured sparse settings celebrated for their computational speed-up benefits. We show that fully-connected and convolutional networks initialized with PSEUDO-IID distributions are all effectively equivalent up to their variance. Using our results, one can identify the Edge-of-Chaos for a broader class of neural networks and tune them at criticality in order to enhance their training. Moreover, they enable the posterior distribution of Bayesian Neural Networks to be tractable across these various initialization schemes.) <|cite_end|> encompassed both of these results by showing that one can relax the i.i.d. assumption to a class of weights which they call \textsc{Pseudo-IID}. In particular, this class includes \textit{structured sparse} weights, making this work the first to rigorously show that the Gaussian process limit holds in a sparse setting. 
That said, while the result of Nait-Saada et al. holds for more general activations than the ones considered here, they only deal with sparsification using a fixed binary mask $B$, whereas we allow for masks $B_\ell$ whose expected proportion of ones can decrease as $N_\ell$ tends to infinity. \bigskip Our second result concerns the NTK (<|cite_start|> (Reference: Gradient Descent Provably Optimizes Over-parameterized Neural Networks: One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For an $m$ hidden node shallow neural network with ReLU activation and $n$ training data, we show as long as $m$ is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first order methods.) <|cite_end|> <|cite_start|> (Reference: Neural Tangent Kernel: Convergence and Generalization in Neural Networks: At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.) <|cite_end|>), which is defined by \begin{equation} \Theta_{L}(\bx,\by) := \sum_{\ell=0}^{L} \lambda_{\ell} (\mathrm{d}_{W_{\ell}}\Phi_{L}(\bx))(\mathrm{d}_{W_{\ell}}\Phi_{L}(\by))^{\top} \in \bR^{N_{L+1} \times N_{L+1}}, \end{equation} for a choice of so-called \emph{layer-wise learning rates} $(\lambda_\ell)_\ell$.
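As a sanity check of this definition (ours, under the assumption that the first layer acts linearly as $\Phi_0(\bx) = W_0\bx$, which is consistent with $K_0(\bx,\by)=\sprod{\bx}{\by}_{\bR^{N_0}}$ above), consider the depth-zero case. The differential $\mathrm{d}_{W_0}\Phi_0(\bx)$ is the linear map $V \mapsto V\bx$, so
\begin{equation*}
\left[(\mathrm{d}_{W_0}\Phi_0(\bx))(\mathrm{d}_{W_0}\Phi_0(\by))^{\top}\right]_{ij} = \sum_{k=1}^{N_1}\sum_{l=1}^{N_0} \delta_{ik}[\bx]_{l}\,\delta_{jk}[\by]_{l} = \delta_{ij}\sprod{\bx}{\by}_{\bR^{N_0}},
\end{equation*}
and hence, with $\lambda_0=1$, $\Theta_0(\bx,\by) = \sprod{\bx}{\by}_{\bR^{N_0}}\,\mathrm{Id}_{N_1}$ deterministically at any finite width, matching the base case $\Theta_0^{\infty}(\bx,\by)=\sprod{\bx}{\by}_{\bR^{N_0}}$ of the theorem below.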
We show that at initialisation and under ``NTK parametrisation", $\Theta_L$ converges in $L^2$ to a deterministic kernel. As with the previous result, this convergence holds for non-Gaussian and sparse matrices as well. \begin{theorem}[Convergence in $L^2$ of the NTK at initialisation]\label{thm:general_NTK} Let $N_\ell = N$ when $\ell>0$, and assume that each $W_\ell$ has i.i.d. entries drawn from a symmetric, centred distribution with finite moments and variance $\frac{1}{N}\mathbf{1}(\ell>0)+\mathbf{1}(\ell=0)$. Moreover, assume that the layer-wise learning rates are given by $\lambda_\ell=\frac{1}{\sqrt{N}}\mathbf{1}(\ell>0)+\mathbf{1}(\ell=0)$. Then \begin{equation} \Theta_{L}(\bx,\by) \xrightarrow[N \to \infty]{L^2} \Theta_{L}^{\infty}(\bx,\by) \otimes \mathrm{Id}_{N_{L+1}} \end{equation} where \begin{equation} \Theta_0^{\infty}(\bx,\by) = \sprod{\bx}{\by}_{\bR^{N_0}}, \quad \Theta_{L}^{\infty}(\bx,\by) = K_{L}(\bx,\by) + \dot K_{L}(\bx,\by)\Theta_{L - 1}^{\infty}(\bx,\by) \end{equation} and $\dot K_{\ell}$ is defined in the same way as $K_{\ell}$ but substituting $\varphi'_{\ell}$ for $\varphi_{\ell}$ in (\ref{eq:intro_GPKer}). The same result holds if the weight matrices are of the form $\tilde W_\ell := W_\ell \odot \frac{1}{\sqrt{p_N}} B_\ell$, where $W_\ell$ are as above and the $B_\ell$ are independent matrices with i.i.d., Bernoulli distributed entries with parameter $p_N$ satisfying $N p_N \to \infty$. \end{theorem} \begin{proof}Follows from Theorem \ref{thm:ntk} and the corollaries in sections \ref{sec:complex_case}, \ref{sec:non_gaussian}, and \ref{sec:sparse}. \end{proof} Previous results regarding the NTK at initialisation have been established only for Gaussian and orthogonal weights <|cite_start|> (Reference: Neural Tangent Kernel: Convergence and Generalization in Neural Networks: At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.)
<|cite_end|> <|cite_start|> (Reference: On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization: The prevailing thinking is that orthogonal weights are crucial to enforcing dynamical isometry and speeding up training. The increase in learning speed that results from orthogonal initialization in linear networks has been well-proven. However, while the same is believed to also hold for nonlinear networks when the dynamical isometry condition is satisfied, the training dynamics behind this contention have not been thoroughly explored. In this work, we study the dynamics of ultra-wide networks across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs) with orthogonal initialization via neural tangent kernel (NTK). Through a series of propositions and lemmas, we prove that two NTKs, one corresponding to Gaussian weights and one to orthogonal weights, are equal when the network width is infinite. Further, during training, the NTK of an orthogonally-initialized infinite-width network should theoretically remain constant. This suggests that the orthogonal initialization cannot speed up training in the NTK (lazy training) regime, contrary to the prevailing thoughts. In order to explore under what circumstances can orthogonality accelerate training, we conduct a thorough empirical investigation outside the NTK regime. We find that when the hyper-parameters are set to achieve a linear regime in nonlinear activation, orthogonal initialization can improve the learning speed with a large learning rate or large depth.) <|cite_end|>, and they only establish convergence in probability. With the caveat that it only holds for polynomial activations, our result is an improvement on both fronts. \bigskip Having proved these theorems as a warm-up, we move on to the more difficult problem of analysing the Jacobian spectrum of $\Phi$. Defining the input-output Jacobian of $\Phi_L$ as $\mathbf{J}_{L,\bx} := \mathrm{d}(\varphi_{L}\circ \Phi_{L-1})_{\bx}$, we are interested in the macroscopic behaviour of the squared singular values of $\mathbf{J}_{L,\bx}$, and study the empirical spectral distribution of $\mathbf{J}_{L,\bx}\mathbf{J}_{L,\bx}^\top$, defined as \[ \rho_L := \frac{1}{N}\sum_{i=1}^N \delta_{\xi_i} \] where $\{\xi_1, \dots, \xi_N\}$ are the eigenvalues of $\mathbf{J}_{L,\bx}\mathbf{J}_{L,\bx}^\top$ and $\delta_{\xi_i}$ denotes a Dirac mass at $\xi_i$. Our main result establishes the weak convergence in probability of this measure to a deterministic limiting measure $\gamma_L^{\mathrm{NFC}}(\bx, (\varphi_{\ell})_{\ell\leq L})$, which we dub the \textit{non-linear Fuss-Catalan} distribution (in this case, with parameter $L$ and non-linearities $\varphi_\ell$). We go further and find an explicit formula for the moments of this measure as a sum over non-crossing partitions, and give a simple condition under which it is uniquely determined by these moments (see Remark \ref{rem:determinacy}). \begin{theorem}[Weak convergence of $\rho_L$ in probability]\label{thm:general_jacobian} For each $\ell \geq 0$, assume that $N_\ell = N$ and that $W_\ell$ has i.i.d. entries drawn from a symmetric, centred distribution with finite moments and variance $1/N$. Then $\rho_L$ converges weakly in probability to a deterministic limiting measure $\gamma_{L}^{\mathrm{NFC}}(\mathbf{x},(\varphi_{\ell})_{\ell\leq L})$, whose moments can be evaluated explicitly by the recursion in Equation (\ref{eq:recursion}).
Furthermore, the same result holds if the weight matrices are of the form $\tilde W_\ell := W_\ell \odot \frac{1}{\sqrt{p_N}} B_\ell$, where $W_\ell$ are as above and the $B_\ell$ are independent matrices with i.i.d., Bernoulli distributed entries with parameter $p_N$ satisfying $N p_N \to \infty$. \end{theorem} \begin{proof}This follows from Theorem \ref{thm:jacobianthm} and the corollaries in sections \ref{sec:complex_case}, \ref{sec:non_gaussian}, and \ref{sec:sparse}. \end{proof} Indeed, the moments of $\gamma_{L}^{\mathrm{NFC}}(\mathbf{x},(\varphi_{\ell})_{\ell\leq L})$ can be seen as a generalisation of the Fuss-Catalan numbers (see, e.g., <|cite_start|> (Reference: Lectures on the Combinatorics of Free Probability: Part I. Basic Concepts: 1. Non-commutative probability spaces and distributions 2. A case study of non-normal distribution 3. C*-probability spaces 4. Non-commutative joint distributions 5. Definition and basic properties of free independence 6. Free product of *-probability spaces 7. Free product of C*-probability spaces Part II. Cumulants: 8. Motivation: free central limit theorem 9. Basic combinatorics I: non-crossing partitions 10. Basic Combinatorics II: Mobius inversion 11. Free cumulants: definition and basic properties 12. Sums of free random variables 13. More about limit theorems and infinitely divisible distributions 14. Products of free random variables 15. R-diagonal elements Part III. Transforms and Models: 16. The R-transform 17. The operation of boxed convolution 18. More on the 1-dimensional boxed convolution 19. The free commutator 20. R-cyclic matrices 21. The full Fock space model for the R-transform 22. Gaussian Random Matrices 23. Unitary Random Matrices Notes and Comments Bibliography Index.) <|cite_end|>), obtained by inserting activation-dependent coefficients in their defining recursion. As such, $\gamma_{L}^{\mathrm{NFC}}(\mathbf{x},(\varphi_{\ell})_{\ell\leq L})$ generalises the Fuss-Catalan distribution, which is known to be the universal first-order limit of squared singular values for products of Ginibre matrices (in the language of free probability, it is the $L$-fold free multiplicative convolution of the Marchenko-Pastur law). We prove this theorem by deriving an exact expression for the moments of $\rho_L$, which are then shown to converge in $L^2$ to those of $ \gamma_L^{\mathrm{NFC}}(\bx, (\varphi_{\ell})_{\ell\leq L})$ (see Proposition \ref{prop:jacobianmoments}). Weak convergence in probability of the empirical spectral measure then follows from the method of moments. Under an asymptotic freeness assumption which was later proved in <|cite_start|> (Reference: Asymptotic Freeness of Layerwise Jacobians Caused by Invariance of Multilayer Perceptron: The Haar Orthogonal Case: ) <|cite_end|>, the limiting distribution in Proposition \ref{prop:jacobianmoments} was computed by Pennington et al. <|cite_start|> (Reference: The Emergence of Spectral Universality in Deep Networks: Recent work has shown that tight concentration of the entire spectrum of singular values of a deep network's input-output Jacobian around one at initialization can speed up learning by orders of magnitude. Therefore, to guide important design choices, it is important to build a full theoretical understanding of the spectra of Jacobians at initialization.
To this end, we leverage powerful tools from free probability theory to provide a detailed analytic understanding of how a deep network's Jacobian spectrum depends on various hyperparameters including the nonlinearity, the weight and bias distributions, and the depth. For a variety of nonlinearities, our work reveals the emergence of new universal limiting spectral distributions that remain concentrated around one even as the depth goes to infinity.) <|cite_end|> for Gaussian and orthogonal weights using the analytic machinery of free probability. To be precise, they derived an implicit functional equation for the moment generating function of this distribution, from which they were able to determine the first two moments $m_{1,L}$ and $m_{2,L}$ by expanding and solving for coefficients (an approach which breaks down for higher moments). Note that they only identify the limiting distribution, and do not concern themselves with the convergence of the empirical measure. By contrast, we show convergence in probability, find an explicit formula for the moments of the limiting distribution, and show that the same conclusions hold for non-Gaussian and sparse weights. \subsection{Notation} Given a matrix $A \in \bR^{N\times M}$ and a vector $v \in \bR^N$ we write $[A]_{ij}$ and $[v]_j$ for their $(i,j)$-th and $j$-th coordinate, respectively. More generally, we will use square brackets with subscripts to denote coordinates of tensors. We write $\mathbf{1}_{N,M} \in \bR^{N\times M}$ and $\mathbf{1}_N \in \bR^{N}$ for the matrix and vector having all entries equal to $1$ (omitting the subscripts whenever it does not hurt comprehension), $\mathbf{I}_M$ (equivalently $\mathrm{Id}_M$) for the $M\times M$ identity matrix, and $\mathbf{E}_{ij} \in \bR^{M \times N}$ and $\mathbf{e}_i \in \bR^N$ for the canonical basis matrices and vectors of $\bR^{M\times N}$ and $\bR^N$, respectively. $\langle\cdot,\cdot\rangle$ will denote the standard inner product, with the space in subscript when it is not clear from the context. If $A$ is a matrix with complex entries, we use $\bar{A}$ to denote its conjugate and $A^*$ its Hermitian transpose. $\mathcal{N}(\mu,\sigma^2)$ will denote a Gaussian with mean $\mu$ and variance $\sigma^2$, and similarly $\mathcal{N}_\mathbb{C}(0,1)$ will denote a standard complex Gaussian. We will use standard asymptotic notation, writing $f(T)=o(g(T))$ to mean that $|f(T)/g(T)|\to_{T\to\infty} 0$ and $f(T)=O(g(T))$ to mean that $\limsup_{T\to\infty}|f(T)/g(T)|$ is finite. We will often write $W[x_1,\dots,x_k]$ to denote the evaluation of a $k$-linear function $W$ at some input $(x_1,\dots,x_k)$; this is no different from $W(x_1,\dots,x_k)$ but will be used to stress the linear nature of the map. For any positive integer $N$, we will use $[N]$ to denote the set $\{1,...,N\}$. Whenever $e=(u,v)$ is an edge in a directed graph, we will call $u$ the \textit{head} and $v$ the \textit{tail} of $e$, and we say that $e$ is adjacent to $u,v$ and vice versa. \bigskip A table compiling the notation that we introduce throughout the paper can be found in Appendix \ref{app:sect:table}, together with a dependency graph for all the main results in Appendix \ref{app:sect:map}. \bigskip \textbf{Acknowledgements. } N.C. thanks William Turner for pointing him to and J.H. thanks Adam Jones for helpful discussions. C.S. is supported by Innovate UK (Proj ID 10073285). N.C. and J.H. are supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). <|paper_end|>
[ "<|reference_start|> Deep Convolutional Networks as shallow Gaussian Processes: We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike \"deep kernels\", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters. <|reference_end|>", "<|reference_start|> Exact solutions to the nonlinear dynamics of learning in deep linear neural networks: Despite the widespread practical success of deep learning methods, our theoretical understanding of the dynamics of learning in deep neural networks remains quite sparse. We attempt to bridge the gap between the theory and practice of deep learning by systematically analyzing learning dynamics for the restricted case of deep linear neural networks. Despite the linearity of their input-output map, such networks have nonlinear gradient descent dynamics on weights that change with the addition of each new hidden layer. We show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions. We provide an analytical description of these phenomena by finding new exact solutions to the nonlinear dynamics of deep learning. Our theoretical analysis also reveals the surprising finding that as the depth of a network approaches infinity, learning speed can nevertheless remain finite: for a special class of initial conditions on the weights, very deep networks incur only a finite, depth independent, delay in learning speed relative to shallow networks. We show that, under certain conditions on the training data, unsupervised pretraining can find this special class of initial conditions, while scaled random Gaussian initializations cannot. We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pre-training, enjoys depth independent learning times. We further show that these initial conditions also lead to faithful propagation of gradients even in deep nonlinear networks, as long as they operate in a special regime known as the edge of chaos. <|reference_end|>", "<|reference_start|> Random Neural Networks in the Infinite Width Limit as Gaussian Processes: This article gives a new proof that fully connected neural networks with random weights and biases converge to Gaussian processes in the regime where the input dimension, output dimension, and depth are kept fixed, while the hidden layer widths tend to infinity. Unlike prior work, convergence is shown assuming only moment conditions for the distribution of weights and for quite general non-linearities. 
<|reference_end|>", "<|reference_start|> Beyond IID weights: sparse and low-rank deep Neural Networks are also Gaussian Processes: The infinitely wide neural network has been proven a useful and manageable mathematical model that enables the understanding of many phenomena appearing in deep learning. One example is the convergence of random deep networks to Gaussian processes that allows a rigorous analysis of the way the choice of activation function and network weights impacts the training dynamics. In this paper, we extend the seminal proof of Matthews et al. (2018) to a larger class of initial weight distributions (which we call PSEUDO-IID), including the established cases of IID and orthogonal weights, as well as the emerging low-rank and structured sparse settings celebrated for their computational speed-up benefits. We show that fully-connected and convolutional networks initialized with PSEUDO-IID distributions are all effectively equivalent up to their variance. Using our results, one can identify the Edge-of-Chaos for a broader class of neural networks and tune them at criticality in order to enhance their training. Moreover, they enable the posterior distribution of Bayesian Neural Networks to be tractable across these various initialization schemes. <|reference_end|>" ]
[ 3, 13, 31, 32 ]
{"<|cite_27|>": "ss-933277", "<|cite_28|>": "arxiv-156827", "<|cite_29|>": "ss-909946", "<|cite_1|>": "ss-1178571", "<|cite_30|>": "ss-959736", "<|multi_cite_31_1|>": "arxiv-175040", "<|multi_cite_31_2|>": "arxiv-163159", "<|multi_cite_32_1|>": "arxiv-159825", "<|multi_cite_32_2|>": "ss-771247", "<|multi_cite_32_3|>": "ss-957915", "<|multi_cite_33_1|>": "arxiv-306498", "<|multi_cite_33_2|>": "arxiv-403856", "<|multi_cite_26_1|>": "ss-1082517", "<|multi_cite_26_2|>": "arxiv-54355", "<|multi_cite_26_3|>": "arxiv-149861", "<|multi_cite_2_1|>": "ss-2078221", "<|multi_cite_2_2|>": "ss-1532164", "<|multi_cite_2_3|>": "ss-839270", "<|cite_3|>": "ss-1544309", "<|cite_4|>": "ss-748814", "<|multi_cite_7_1|>": "ss-1258495", "<|multi_cite_7_2|>": "ss-1055250", "<|multi_cite_8_2|>": "ss-818418", "<|multi_cite_9_1|>": "ss-1286338", "<|multi_cite_9_2|>": "ss-1298495", "<|cite_10|>": "ss-1286779", "<|cite_12|>": "ss-2386175", "<|cite_14|>": "ss-1345692", "<|cite_15|>": "arxiv-466285", "<|cite_16|>": "arxiv-156827", "<|cite_17|>": "arxiv-259042", "<|cite_18|>": "arxiv-352805", "<|cite_19|>": "ss-2078222", "<|multi_cite_20_1|>": "arxiv-175040", "<|multi_cite_20_2|>": "arxiv-163159", "<|multi_cite_21_1|>": "arxiv-163159", "<|multi_cite_21_2|>": "arxiv-259042", "<|cite_22|>": "ss-969608", "<|cite_23|>": "ss-2078223", "<|cite_24|>": "arxiv-149861"}
1906.02467
<|paper_start|> Title: ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering Abstract: ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering: Recent developments in modeling language and vision have been successfully applied to image question answering. It is both crucial and natural to extend this research direction to the video domain for video question answering (VideoQA). Compared to the image domain, where large scale and fully annotated benchmark datasets exist, VideoQA datasets suffer from limitations such as small scale and automatically generated QA pairs. These limitations restrict their applicability in practice. Here we introduce ActivityNet-QA, a fully annotated and large scale VideoQA dataset. The dataset consists of 58,000 QA pairs on 5,800 complex web videos derived from the popular ActivityNet dataset. We present a statistical analysis of our ActivityNet-QA dataset and conduct extensive experiments on it by comparing existing VideoQA baselines. Moreover, we explore various video representation strategies to improve VideoQA performance, especially for long videos. The dataset is available at https://github.com/MILVLG/activitynet-qa Introduction Recent developments in deep neural networks have significantly improved the performance of many computer vision and natural language processing tasks. These advances have stimulated research into bridging the semantic connections between vision and language, such as in visual captioning <|cite_start|> (Reference: Long-term Recurrent Convolutional Networks for Visual Recognition and Description: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep"' in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.) <|cite_end|> <|cite_start|> (Reference: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images.
We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.) <|cite_end|>, visual grounding <|cite_start|> (Reference: Grounding of Textual Phrases in Images by Reconstruction: Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications for human-computer interaction and image-text reference resolution. Few datasets provide the ground truth spatial localization of phrases, thus it is desirable to learn from data with no or little grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available it can be directly applied via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr 30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision over partial supervision to full supervision. Our supervised variant improves by a large margin over the state-of-the-art on both datasets.) <|cite_end|> <|cite_start|> (Reference: Query-guided Regression Network with Context Policy for Phrase Grounding: Given a textual description of an image, phrase grounding localizes objects in the image referred by query phrases in the description. State-of-the-art methods address the problem by ranking a set of proposals based on the relevance to each query, which are limited by the performance of independent proposal generation systems and ignore useful cues from context in the description. In this paper, we adopt a spatial regression method to break the performance limit, and introduce reinforcement learning techniques to further leverage semantic context information. We propose a novel Query-guided Regression network with Context policy (QRC Net) which jointly learns a Proposal Generation Network (PGN), a Query-guided Regression Network (QRN) and a Context Policy Network (CPN). Experiments show QRC Net provides a significant improvement in accuracy on two popular datasets: Flickr30K Entities and Referit Game, with 14.25% and 17.14% increase over the state-of-the-arts respectively.) <|cite_end|> <|cite_start|> (Reference: Rethinking Diversified and Discriminative Proposal Generation for Visual Grounding: Visual grounding aims to localize an object in an image referred to by a textual query phrase. Various visual grounding approaches have been proposed, and the problem can be modularized into a general framework: proposal generation, multi-modal feature representation, and proposal ranking. Of these three modules, most existing approaches focus on the latter two, with the importance of proposal generation generally neglected. In this paper, we rethink the problem of what properties make a good proposal generator. 
We introduce the diversity and discrimination simultaneously when generating proposals, and in doing so propose Diversified and Discriminative Proposal Networks model (DDPN). Based on the proposals generated by DDPN, we propose a high performance baseline model for visual grounding and evaluate it on four benchmark datasets. Experimental results demonstrate that our model delivers significant improvements on all the tested data-sets (e.g., 18.8\% improvement on ReferItGame and 8.2\% improvement on Flickr30k Entities over the existing state-of-the-arts respectively)) <|cite_end|> and visual question answering <|cite_start|> (Reference: Ask Your Neurons: A Neural-based Approach to Answering Questions about Images: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.) <|cite_end|> <|cite_start|> (Reference: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding: Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.) <|cite_end|>. Visual question answering (VQA) aims to generate natural language answers to free-form questions about a visual object (\emph{e.g.}, an image or a video). Compared to visual captioning, VQA is \emph{interactive} and provides fine-grained visual understanding. 
Image question answering (ImageQA) in particular has shown recent success, with many approaches proposed to investigate the key components of this task, \emph{e.g.}, discriminative feature representation <|cite_start|> (Reference: Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering: Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.) <|cite_end|>, multi-modal fusion <|cite_start|> (Reference: Hadamard Product for Low-rank Bilinear Pooling: Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting the applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism of multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks with the state-of-the-art results on the VQA dataset, having a better parsimonious property.) <|cite_end|> <|cite_start|> (Reference: Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering: Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and questions and to fuse these multi-modal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multi-modal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a co-attention mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. 
Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-the-art performance on the real-world VQA dataset. Code available at https://github.com/yuzcccc/mfb.) <|cite_end|> <|cite_start|> (Reference: Beyond Bilinear: Generalized Multimodal Factorized High-Order Pooling for Visual Question Answering: Visual question answering (VQA) is challenging, because it requires a simultaneous understanding of both visual content of images and textual content of questions. To support the VQA task, we need to find good solutions for the following three issues: 1) fine-grained feature representations for both the image and the question; 2) multimodal feature fusion that is able to capture the complex interactions between multimodal features; and 3) automatic answer prediction that is able to consider the complex correlations between multiple diverse answers for the same question. For fine-grained image and question representations, a “coattention” mechanism is developed using a deep neural network (DNN) architecture to jointly learn the attentions for both the image and the question, which can allow us to reduce the irrelevant features effectively and obtain more discriminative features for image and question representations. For multimodal feature fusion, a generalized multimodal factorized high-order pooling approach (MFH) is developed to achieve more effective fusion of multimodal features by exploiting their correlations sufficiently, which can further result in superior VQA performance as compared with the state-of-the-art approaches. For answer prediction, the Kullback–Leibler divergence is used as the loss function to achieve precise characterization of the complex correlations between multiple diverse answers with the same or similar meaning, which can allow us to achieve faster convergence rate and obtain slightly better accuracy on answer prediction. A DNN architecture is designed to integrate all these aforementioned modules into a unified model for achieving superior VQA performance. With an ensemble of our MFH models, we achieve the state-of-the-art performance on the large-scale VQA data sets and win the runner-up in VQA Challenge 2017.) <|cite_end|> and visual reasoning <|cite_start|> (Reference: Dual Attention Networks for Multimodal Reasoning and Matching: We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving the state-of-the-art performance on public benchmarks for VQA and image-text matching.) 
<|cite_end|> <|cite_start|> (Reference: Hierarchical Question-Image Co-Attention for Visual Question Answering: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.) <|cite_end|> <|cite_start|> (Reference: Inferring and Executing Programs for Visual Reasoning: Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.) <|cite_end|>. This success has been facilitated by large scale and well annotated training datasets, such as Visual Genome <|cite_start|> (Reference: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that "the person is riding a horse-drawn carriage". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. 
We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.) <|cite_end|> and VQA <|cite_start|> (Reference: VQA: Visual Question Answering: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).) <|cite_end|> <|cite_start|> (Reference: Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering: Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at www.visualqa.org as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.) <|cite_end|>. 
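To make the multi-modal fusion operations cited above more concrete, the following minimal NumPy sketch (ours; it illustrates the generic low-rank bilinear, i.e. Hadamard-product, formulation rather than the exact architecture of any of the cited papers) shows how a question embedding and an image feature can be projected to a shared space and combined multiplicatively:
\begin{verbatim}
import numpy as np

def low_rank_bilinear_fusion(q, v, U, V, P):
    # q: (dq,) question embedding; v: (dv,) visual feature.
    # U: (dq, k) and V: (dv, k) project both modalities to a shared
    # k-dimensional space; their Hadamard product approximates a full
    # bilinear interaction, and P: (k, o) maps it to the fused output.
    return ((U.T @ q) * (V.T @ v)) @ P

rng = np.random.default_rng(0)
dq, dv, k, o = 300, 2048, 512, 256          # toy dimensions
q = rng.normal(size=dq)                     # stand-in for an encoded question
v = rng.normal(size=dv)                     # stand-in for a CNN image feature
U = rng.normal(size=(dq, k))
V = rng.normal(size=(dv, k))
P = rng.normal(size=(k, o))

z = low_rank_bilinear_fusion(q, v, U, V, P) # (256,) fused representation
\end{verbatim}
In trained models the matrices $U$, $V$ and $P$ are learned and the fused vector $z$ is fed to an answer classifier; random matrices are used here only so that the snippet runs as-is.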
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{videoqa_example.pdf} \caption{A VideoQA example. To answer the question correctly, one should fully understand the fine-grained semantics of the question (\emph{i.e.}, the underlined keywords) and perform spatio-temporal reasoning on the visual contents of the video (\emph{i.e.}, the frames with red borders and the objects in blue boxes).} \label{fig:example} \end{figure} \begin{table*} \centering \caption{Comparison of existing VideoQA datasets with ours (OE: open-ended; MC: multiple-choice).} \small \label{table:dataset_compare} \begin{tabular}{c|cccccc} \toprule Datasets & \makecell{Video source} & \makecell{QA pairs \\generation}& QA tasks & $\#$ \makecell{Videos} & $\#$ QA pairs & \makecell{Average\\ video length} \\ \midrule MSVD-QA <|cite_start|> (Reference: Video question answering via gradually refined attention over appearance and motion: Recently image question answering (ImageQA) has gained lots of attention in the research community. However, as its natural extension, video question answering (VideoQA) is less explored. Although both tasks look similar, VideoQA is more challenging mainly because of the complexity and diversity of videos. As such, simply extending the ImageQA methods to videos is insufficient and suboptimal. Particularly, working with the video needs to model its inherent temporal structure and analyze the diverse information it contains. In this paper, we consider exploiting the appearance and motion information resided in the video with a novel attention mechanism. More specifically, we propose an end-to-end model which gradually refines its attention over the appearance and motion features of the video using the question as guidance. The question is processed word by word until the model generates the final optimized attention. The weighted representation of the video, as well as other contextual information, are used to generate the answer. Extensive experiments show the advantages of our model compared to other baseline models. We also demonstrate the effectiveness of our model by analyzing the refined attention weights during the question answering procedure.) <|cite_end|> & MSVD & Automatic &OE &1,970 & 50,505 & 10s\\ MSRVTT-QA <|cite_start|> (Reference: Video question answering via gradually refined attention over appearance and motion: Recently image question answering (ImageQA) has gained lots of attention in the research community. However, as its natural extension, video question answering (VideoQA) is less explored. Although both tasks look similar, VideoQA is more challenging mainly because of the complexity and diversity of videos. As such, simply extending the ImageQA methods to videos is insufficient and suboptimal. Particularly, working with the video needs to model its inherent temporal structure and analyze the diverse information it contains. In this paper, we consider exploiting the appearance and motion information resided in the video with a novel attention mechanism. More specifically, we propose an end-to-end model which gradually refines its attention over the appearance and motion features of the video using the question as guidance. The question is processed word by word until the model generates the final optimized attention. The weighted representation of the video, as well as other contextual information, are used to generate the answer. Extensive experiments show the advantages of our model compared to other baseline models.
We also demonstrate the effectiveness of our model by analyzing the refined attention weights during the question answering procedure.) <|cite_end|>& MSRVTT & Automatic&OE& 10,000 & 243,680 & 15s\\ TGIF-QA <|cite_start|> (Reference: TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering: Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.) <|cite_end|>& TGIF & Automatic $\&$ Human & OE $\&$ MC & 56,720 & 103,919 & 3s \\ MovieQA <|cite_start|> (Reference: MovieQA: Understanding Stories in Movies through Question-Answering: We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceiving answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.) <|cite_end|> & Movies & Human & MC &6,771& 6,462 & 200s \\ Video-QA <|cite_start|> (Reference: Leveraging Video Descriptions to Learn Video Question Answering: We propose a scalable approach to learn video-based question answering (QA): answer a "free-form natural language question" about a video content. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated. Next, we use these candidate QA pairs to train a number of video-based QA methods extended fromMN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), SS (Venugopalan et al. 2015). In order to handle non-perfect candidate QA pairs, we propose a self-paced learning procedure to iteratively identify them and mitigate their effects in training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines.) 
<|cite_end|> &Jukinmedia & Automatic&OE&18,100&174,775& 45s\\ \midrule ActivityNet-QA (Ours) & ActivityNet & Human & OE & 5,800& 58,000& 180s \\ \bottomrule \end{tabular} \end{table*} Video question answering (VideoQA) can be seen as a natural but more challenging extension of ImageQA, due to the additional complexity of understanding image sequences and the more diverse types of questions asked. Figure \ref{fig:example} shows an example of VideoQA. To accurately answer the question, a VideoQA model requires simultaneous fine-grained video content understanding and spatio-temporal reasoning. Existing approaches mainly focus on the temporal attention mechanism <|cite_start|> (Reference: TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering: Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.) <|cite_end|> <|cite_start|> (Reference: Video question answering via gradually refined attention over appearance and motion: Recently image question answering (ImageQA) has gained lots of attention in the research community. However, as its natural extension, video question answering (VideoQA) is less explored. Although both tasks look similar, VideoQA is more challenging mainly because of the complexity and diversity of videos. As such, simply extending the ImageQA methods to videos is insufficient and suboptimal. Particularly, working with the video needs to model its inherent temporal structure and analyze the diverse information it contains. In this paper, we consider exploiting the appearance and motion information resided in the video with a novel attention mechanism. More specifically, we propose an end-to-end model which gradually refines its attention over the appearance and motion features of the video using the question as guidance. The question is processed word by word until the model generates the final optimized attention. The weighted representation of the video, as well as other contextual information, are used to generate the answer. Extensive experiments show the advantages of our model compared to other baseline models. We also demonstrate the effectiveness of our model by analyzing the refined attention weights during the question answering procedure.)
<|cite_end|> or memory mechanism <|cite_start|> (Reference: A Read-Write Memory Network for Movie Story Understanding: We propose a novel memory network model named Read-Write Memory Network (RWMN) to perform question and answering tasks for large-scale, multimodal movie story understanding. The key focus of our RWMN model is to design the read network and the write network that consist of multiple convolutional layers, which enable memory read and write operations to have high capacity and flexibility. While existing memory-augmented network models treat each memory slot as an independent block, our use of multi-layered CNNs allows the model to read and write sequential memory cells as chunks, which is more reasonable to represent a sequential story because adjacent memory blocks often have strong correlations. For evaluation, we apply our model to all the six tasks of the MovieQA benchmark, and achieve the best accuracies on several tasks, especially on the visual QA task. Our model shows a potential to better understand not only the content in the story, but also more abstract information, such as relationships between characters and the reasons for their actions.) <|cite_end|> <|cite_start|> (Reference: DeepStory: Video Story QA by Deep Embedded Memory Networks: Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children's cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.) <|cite_end|> <|cite_start|> (Reference: Multi-turn video question answering via multi-stream hierarchical attention context network: Conversational video question answering is a challenging task in visual information retrieval, which generates the accurate answer from the referenced video contents according to the visual conversation context and given question. However, the existing visual question answering methods mainly tackle the problem of single-turn video question answering, which may be ineffectively applied for multi-turn video question answering directly, due to the insufficiency of modeling the sequential conversation context. In this paper, we study the problem of multi-turn video question answering from the viewpoint of multi-step hierarchical attention context network learning. We first propose the hierarchical attention context network for context-aware question understanding by modeling the hierarchically sequential conversation context structure. 
We then develop the multi-stream spatio-temporal attention network for learning the joint representation of the dynamic video contents and context-aware question embedding. We next devise the hierarchical attention context network learning method with multi-step reasoning process for multi-turn video question answering. We construct two large-scale multi-turn video question answering datasets. The extensive experiments show the effectiveness of our method.) <|cite_end|>. Na \emph{et al.} introduced a read-write memory network to fuse multi-modal features and store temporal information using a multi-stage convolutional neural network model <|cite_start|> (Reference: A Read-Write Memory Network for Movie Story Understanding: We propose a novel memory network model named Read-Write Memory Network (RWMN) to perform question and answering tasks for large-scale, multimodal movie story understanding. The key focus of our RWMN model is to design the read network and the write network that consist of multiple convolutional layers, which enable memory read and write operations to have high capacity and flexibility. While existing memory-augmented network models treat each memory slot as an independent block, our use of multi-layered CNNs allows the model to read and write sequential memory cells as chunks, which is more reasonable to represent a sequential story because adjacent memory blocks often have strong correlations. For evaluation, we apply our model to all the six tasks of the MovieQA benchmark, and achieve the best accuracies on several tasks, especially on the visual QA task. Our model shows a potential to better understand not only the content in the story, but also more abstract information, such as relationships between characters and the reasons for their actions.) <|cite_end|>. Xu \emph{et al.} represented a video as appearance and motion stream features and introduced a gradually refined attention model to fuse the two-stream features together <|cite_start|> (Reference: Video question answering via gradually refined attention over appearance and motion: Recently image question answering (ImageQA) has gained lots of attention in the research community. However, as its natural extension, video question answering (VideoQA) is less explored. Although both tasks look similar, VideoQA is more challenging mainly because of the complexity and diversity of videos. As such, simply extending the ImageQA methods to videos is insufficient and suboptimal. Particularly, working with the video needs to model its inherent temporal structure and analyze the diverse information it contains. In this paper, we consider exploiting the appearance and motion information resided in the video with a novel attention mechanism. More specifically, we propose an end-to-end model which gradually refines its attention over the appearance and motion features of the video using the question as guidance. The question is processed word by word until the model generates the final optimized attention. The weighted representation of the video, as well as other contextual information, are used to generate the answer. Extensive experiments show the advantages of our model compared to other baseline models. We also demonstrate the effectiveness of our model by analyzing the refined attention weights during the question answering procedure.) <|cite_end|>.
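For intuition, the question-guided temporal attention underlying these models can be written in a generic form (an illustrative sketch; the exact parameterization differs across the cited works, and the notation $v_t$, $q$, $W_v$, $W_q$, $w$ is ours): given frame features $v_1, \dots, v_T$ and a question embedding $q$, the model scores each frame and pools a question-conditioned video summary,
\begin{equation*}
\alpha_t = \frac{\exp\!\big(w^{\top}\tanh(W_v v_t + W_q q)\big)}{\sum_{t'=1}^{T}\exp\!\big(w^{\top}\tanh(W_v v_{t'} + W_q q)\big)}, \qquad c = \sum_{t=1}^{T} \alpha_t v_t,
\end{equation*}
where $W_v$, $W_q$, and $w$ are learned parameters. The summary $c$ is then combined with the question embedding to predict the answer; gradually refined variants recompute the attention weights over multiple steps as the question is processed word by word.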
Gao \emph{et al.} proposed a co-memory network to jointly model and interact with the motion and appearance information <|cite_start|> (Reference: Motion-Appearance Co-Memory Networks for Video Question Answering: Video Question Answering (QA) is an important task in understanding video temporal structure. We observe that there are three unique attributes of video QA compared with image QA: (1) it deals with long sequences of images containing richer information not only in quantity but also in variety; (2) motion and appearance information are usually correlated with each other and able to provide useful attention cues to the other; (3) different questions require different number of frames to infer the answer. Based these observations, we propose a motion-appearance comemory network for video QA. Our networks are built on concepts from Dynamic Memory Network (DMN) and introduces new mechanisms for video QA. Specifically, there are three salient aspects: (1) a co-memory attention mechanism that utilizes cues from both motion and appearance to generate attention; (2) a temporal conv-deconv network to generate multi-level contextual facts; (3) a dynamic fact ensemble method to construct temporal representation dynamically for different questions. We evaluate our method on TGIF-QA dataset, and the results outperform state-of-the-art significantly on all four tasks of TGIF-QA.) <|cite_end|>. Zhao \emph{et al.} introduced an adaptive hierarchical encoder to learn the segment-level video representation with adaptive video segmentation, and devised a reinforced decoder to generate the answer for long videos <|cite_start|> (Reference: Open-Ended Long-form Video Question Answering via Adaptive Hierarchical Reinforced Networks: Open-ended long-form video question answering is challenging problem in visual information retrieval, which automatically generates the natural language answer from the referenced long-form video content according to the question. However, the existing video question answering works mainly focus on the short-form video question answering, due to the lack of modeling the semantic representation of long-form video contents. In this paper, we consider the problem of long-form video question answering from the viewpoint of adaptive hierarchical reinforced encoder-decoder network learning. We propose the adaptive hierarchical encoder network to learn the joint representation of the long-form video contents according to the question with adaptive video segmentation. we then develop the reinforced decoder network to generate the natural language answer for open-ended video question answering. We construct a large-scale long-form video question answering dataset. The extensive experiments show the effectiveness of our method.) <|cite_end|>. As noted above, high-quality datasets are of considerable value for VQA research. Several VideoQA datasets have been compiled for different scenarios, such as MovieQA <|cite_start|> (Reference: MovieQA: Understanding Stories in Movies through Question-Answering: We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceiving answers provided by human annotators. 
Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.) <|cite_end|>, TGIF-QA <|cite_start|> (Reference: TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering: Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.) <|cite_end|>, MSVD-QA, MSRVTT-QA <|cite_start|> (Reference: Video question answering via gradually refined attention over appearance and motion: Recently image question answering (ImageQA) has gained lots of attention in the research community. However, as its natural extension, video question answering (VideoQA) is less explored. Although both tasks look similar, VideoQA is more challenging mainly because of the complexity and diversity of videos. As such, simply extending the ImageQA methods to videos is insufficient and suboptimal. Particularly, working with the video needs to model its inherent temporal structure and analyze the diverse information it contains. In this paper, we consider exploiting the appearance and motion information resided in the video with a novel attention mechanism. More specifically, we propose an end-to-end model which gradually refines its attention over the appearance and motion features of the video using the question as guidance. The question is processed word by word until the model generates the final optimized attention. The weighted representation of the video, as well as other contextual information, are used to generate the answer. Extensive experiments show the advantages of our model compared to other baseline models. We also demonstrate the effectiveness of our model by analyzing the refined attention weights during the question answering procedure.) <|cite_end|>, and Video-QA <|cite_start|> (Reference: Leveraging Video Descriptions to Learn Video Question Answering: We propose a scalable approach to learn video-based question answering (QA): answer a "free-form natural language question" about a video content. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated. 
Next, we use these candidate QA pairs to train a number of video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), SS (Venugopalan et al. 2015). In order to handle non-perfect candidate QA pairs, we propose a self-paced learning procedure to iteratively identify them and mitigate their effects in training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines.) <|cite_end|>. Most of these VideoQA datasets exploit video source data from other datasets and then add question-answer pairs to them. The detailed statistics of these datasets are listed in Table \ref{table:dataset_compare}. We can see that these existing datasets are imperfect and have at least one of the following limitations: \begin{itemize} \item The datasets are small in scale. Without sufficient training samples, the obtained model suffers from under-fitting. Without sufficient testing samples, the evaluated results are unreliable. \item The questions and answers are automatically generated by algorithms (\emph{e.g.}, obtained from the captioning results or narrative descriptions using off-the-shelf algorithms) rather than human annotation. Automatically generated question-answer pairs lack diversity, making the learned model prone to over-fitting. \item The videos are short. The length of a video is closely related to the complexity of video content. Questions on short videos (\emph{e.g.}, less than 10 seconds) are usually too easy to answer, making it difficult to distinguish the performance of different VideoQA approaches on the dataset. \item The videos represent a small number of activities. This severely restricts the generalizability of the VideoQA models trained on these datasets and poorly reflects model performance in real-world use. \end{itemize} In this paper, we construct a new benchmark dataset \emph{ActivityNet-QA} for evaluating VideoQA performance. Our dataset exploits 5,800 videos from the ActivityNet dataset, which contains about 20,000 untrimmed web videos representing 200 action classes <|cite_start|> (Reference: ActivityNet: A large-scale video benchmark for human activity understanding: In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.) <|cite_end|>. We annotate each video with ten question-answer pairs using crowdsourcing, finally obtaining 58,000 question-answer pairs.
Compared with other VideoQA datasets, ActivityNet-QA is large in scale, fully annotated by humans, and contains much longer videos. To better understand the properties of ActivityNet-QA, we present statistical and visualization analyses. We further conduct experiments on ActivityNet-QA and compare the results produced by existing VideoQA baselines. <|paper_end|>
[ "<|reference_start|> Long-term Recurrent Convolutional Networks for Visual Recognition and Description: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized. <|reference_end|>", "<|reference_start|> Multi-modal Factorized Bilinear Pooling with Co-Attention Learning for Visual Question Answering: Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and questions and to fuse these multi-modal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multi-modal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a co-attention mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-the-art performance on the real-world VQA dataset. Code available at https://github.com/yuzcccc/mfb. <|reference_end|>", "<|reference_start|> Dual Attention Networks for Multimodal Reasoning and Matching: We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. 
DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving the state-of-the-art performance on public benchmarks for VQA and image-text matching. <|reference_end|>", "<|reference_start|> MovieQA: Understanding Stories in Movies through Question-Answering: We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler \"Who\" did \"What\" to \"Whom\", to \"Why\" and \"How\" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceiving answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain. <|reference_end|>" ]
[ 0, 9, 11, 31 ]
{"<|multi_cite_1_1|>": "arxiv-68874", "<|multi_cite_1_2|>": "arxiv-72863", "<|multi_cite_2_1|>": "arxiv-87014", "<|multi_cite_2_2|>": "arxiv-131233", "<|multi_cite_2_3|>": "arxiv-157882", "<|multi_cite_3_1|>": "arxiv-77267", "<|multi_cite_3_2|>": "arxiv-99483", "<|cite_4|>": "arxiv-130256", "<|multi_cite_5_1|>": "arxiv-107865", "<|multi_cite_5_2|>": "arxiv-131185", "<|multi_cite_5_3|>": "ss-1466282", "<|multi_cite_6_1|>": "arxiv-109164", "<|multi_cite_6_2|>": "arxiv-99055", "<|multi_cite_6_3|>": "arxiv-123751", "<|cite_7|>": "arxiv-92776", "<|multi_cite_8_1|>": "arxiv-77148", "<|multi_cite_8_2|>": "arxiv-111676", "<|cite_9|>": "ss-1267466", "<|cite_10|>": "ss-1267466", "<|cite_11|>": "arxiv-121709", "<|cite_12|>": "arxiv-88780", "<|cite_13|>": "arxiv-109971", "<|multi_cite_14_1|>": "arxiv-121709", "<|multi_cite_14_2|>": "ss-1267466", "<|multi_cite_15_1|>": "arxiv-135802", "<|multi_cite_15_2|>": "arxiv-128383", "<|multi_cite_15_3|>": "ss-679759", "<|cite_16|>": "arxiv-135802", "<|cite_17|>": "ss-1267466", "<|cite_18|>": "arxiv-153191", "<|cite_19|>": "ss-810197", "<|cite_20|>": "arxiv-88780", "<|cite_21|>": "arxiv-121709", "<|cite_22|>": "ss-1267466", "<|cite_23|>": "arxiv-109971", "<|cite_24|>": "ss-743995"}
2405.16108-1
<|cite_start|> (Reference: Image Anything: Towards Reasoning-coherent and Training-free Multi-modal Image Generation: The multifaceted nature of human perception and comprehension indicates that, when we think, our body can naturally take any combination of senses, a.k.a., modalities and form a beautiful picture in our brain. For example, when we see a cattery and simultaneously perceive the cat's purring sound, our brain can construct a picture of a cat in the cattery. Intuitively, generative AI models should hold the versatility of humans and be capable of generating images from any combination of modalities efficiently and collaboratively. This paper presents ImgAny, a novel end-to-end multi-modal generative model that can mimic human reasoning and generate high-quality images. Our method serves as the first attempt in its capacity of efficiently and flexibly taking any combination of seven modalities, ranging from language, audio to vision modalities, including image, point cloud, thermal, depth, and event data. Our key idea is inspired by human-level cognitive processes and involves the integration and harmonization of multiple input modalities at both the entity and attribute levels without specific tuning across modalities. Accordingly, our method brings two novel training-free technical branches: 1) Entity Fusion Branch ensures the coherence between inputs and outputs. It extracts entity features from the multi-modal representations powered by our specially constructed entity knowledge graph; 2) Attribute Fusion Branch adeptly preserves and processes the attributes. It efficiently amalgamates distinct attributes from diverse input modalities via our proposed attribute knowledge graph. Lastly, the entity and attribute features are adaptively fused as the conditional inputs to the pre-trained Stable Diffusion model for image generation. Extensive experiments under diverse modality combinations demonstrate its exceptional capability for visual content creation.) <|cite_end|> <|cite_start|> (Reference: Any-to-Any Generation via Composable Diffusion: We present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel and its input is not limited to a subset of modalities like text or image. Despite the absence of training datasets for many combinations of modalities, we propose to align modalities in both the input and output space. This allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a novel composable generation strategy which involves building a shared multimodal space by bridging alignment in the diffusion process, enabling the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Highly customizable and flexible, CoDi achieves strong joint-modality generation quality, and outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis. 
The project page with demonstrations and code is at https://codi-gen.github.io) <|cite_end|> <|cite_start|> (Reference: NExT-GPT: Any-to-Any Multimodal LLM: While recently Multimodal Large Language Models (MM-LLMs) have made exciting strides, they mostly fall prey to the limitation of only input-side multimodal understanding, without the ability to produce content in multiple modalities. As we humans always perceive the world and communicate with people through various modalities, developing any-to-any MM-LLMs capable of accepting and delivering content in any modality becomes essential to human-level AI. To fill the gap, we present an end-to-end general-purpose any-to-any MM-LLM system, NExT-GPT. We connect an LLM with multimodal adaptors and different diffusion decoders, enabling NExT-GPT to perceive inputs and generate outputs in arbitrary combinations of text, images, videos, and audio. By leveraging the existing well-trained highly-performing encoders and decoders, NExT-GPT is tuned with only a small amount of parameter (1%) of certain projection layers, which not only benefits low-cost training and also facilitates convenient expansion to more potential modalities. Moreover, we introduce a modality-switching instruction tuning (MosIT) and manually curate a high-quality dataset for MosIT, based on which NExT-GPT is empowered with complex cross-modal semantic understanding and content generation. Overall, our research showcases the promising possibility of building an AI agent capable of modeling universal modalities, paving the way for more human-like AI research in the community. Project page: https://next-gpt.github.io/) <|cite_end|>, typically using a specific primary modality as a bridge for combining them. Nonetheless, they struggle to flexibly handle arbitrary modality combinations during inference <|cite_start|> (Reference: Learning Unseen Modality Interaction: Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences. In this paper, we challenge this modality-complete assumption for multimodal learning and instead strive for generalization to unseen modality combinations during inference. We pose the problem of unseen modality interaction and introduce a first solution. It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved. This allows the information to be accumulated with a simple summation operation across available modalities. To reduce overfitting to less discriminative modality combinations during training, we further improve the model learning with pseudo-supervision indicating the reliability of a modality's prediction. We demonstrate that our approach is effective for diverse tasks and modalities by evaluating it for multimodal video classification, robot state regression, and multimedia retrieval. Project website: https://xiaobai1217.github.io/Unseen-Modality-Interaction/.) <|cite_end|>. To address this, we propose the AF module to fuse the multi-modal embeddings and learn a unified representation space for any combination of modalities.
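For intuition, one minimal instantiation of such fusion (an illustrative sketch with our own notation, not a full specification of the AF module) projects each available modality embedding $e_m$ into a shared space and averages over the present subset $\mathcal{M}$ of modalities:
\begin{equation*}
z = \frac{1}{|\mathcal{M}|}\sum_{m \in \mathcal{M}} W_m e_m,
\end{equation*}
where each $W_m$ is a learned modality-specific projection. Because the aggregation is permutation-invariant and well-defined for any non-empty $\mathcal{M}$, the fused representation $z$ keeps the same form under arbitrary modality combinations at inference time.
\noindent \textbf{Knowledge Distillation} aims to transfer the knowledge from a teacher model to a student model <|cite_start|> (Reference: Distilling the Knowledge in a Neural Network: A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions.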
Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.) <|cite_end|> <|cite_start|> (Reference: Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks: Deep neural models in recent years have been successful in almost every field, including extremely complex problem statements. However, these models are huge in size, with millions (and even billions) of parameters, thus demanding more heavy computation power and failing to be deployed on edge devices. Besides, the performance boost is highly dependent on redundant labeled data. To achieve faster speeds and to handle the problems caused by the lack of data, knowledge distillation (KD) has been proposed to transfer information learned from one model to another. KD is often characterized by the so-called `Student-Teacher' (S-T) learning framework and has been broadly applied in model compression and knowledge transfer. This paper is about KD and S-T learning, which are being actively studied in recent years. First, we aim to provide explanations of what KD is and how/why it works. Then, we provide a comprehensive survey on the recent progress of KD methods together with S-T frameworks typically for vision tasks. In general, we consider some fundamental questions that have been driving this research area and thoroughly generalize the research progress and technical details. Additionally, we systematically analyze the research status of KD in vision applications. Finally, we discuss the potentials and open challenges of existing methods and prospect the future directions of KD and S-T learning.) <|cite_end|>. Most KD methods focus within a single modality, distilling logits <|cite_start|> (Reference: Knowledge Distillation Meets Self-Supervision: Knowledge distillation, which involves extracting the "dark knowledge" from a teacher network to guide the learning of a student network, has emerged as an important technique for model compression and transfer learning. Unlike previous works that exploit architecture-specific cues such as activation and attention for distillation, here we wish to explore a more general and model-agnostic approach for extracting "richer dark knowledge" from the pre-trained teacher model. We show that the seemingly different self-supervision task can serve as a simple yet powerful solution. For example, when performing contrastive learning between transformed entities, the noisy predictions of the teacher network reflect its intrinsic composition of semantic and pose information. 
By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student. In this paper, we discuss practical ways to exploit those noisy self-supervision signals with selective transfer for distillation. We further show that self-supervision signals improve conventional distillation with substantial gains under few-shot and noisy-label scenarios. Given the richer knowledge mined from self-supervision, our knowledge distillation approach achieves state-of-the-art performance on standard benchmarks, i.e., CIFAR100 and ImageNet, under both similar-architecture and cross-architecture settings. The advantage is even more pronounced under the cross-architecture setting, where our method outperforms the state of the art CRD by an average of 2.3% in accuracy rate on CIFAR100 across six different teacher-student pairs.) <|cite_end|> <|cite_start|> (Reference: Knowledge Transfer via Dense Cross-Layer Mutual-Distillation: Knowledge Distillation (KD) based methods adopt the one-way Knowledge Transfer (KT) scheme in which training a lower-capacity student network is guided by a pre-trained high-capacity teacher network. Recently, Deep Mutual Learning (DML) presented a two-way KT strategy, showing that the student network can be also helpful to improve the teacher network. In this paper, we propose Dense Cross-layer Mutual-distillation (DCM), an improved two-way KT method in which the teacher and student networks are trained collaboratively from scratch. To augment knowledge representation learning, well-designed auxiliary classifiers are added to certain hidden layers of both teacher and student networks. To boost KT performance, we introduce dense bidirectional KD operations between the layers appended with classifiers. After training, all auxiliary classifiers are discarded, and thus there are no extra parameters introduced to final models. We test our method on a variety of KT tasks, showing its superiorities over related methods. Code is available at https://github.com/sundw2014/DCM) <|cite_end|>, features <|cite_start|> (Reference: FitNets: Hints for Thin Deep Nets: While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.) 
<|cite_end|> <|cite_start|> (Reference: A Comprehensive Overhaul of Feature Distillation: We investigate the design aspects of feature distillation methods achieving network compression and propose a novel feature distillation method in which the distillation loss is designed to make a synergy among various aspects: teacher transform, student transform, distillation feature position and distance function. Our proposed distillation loss includes a feature transform with a newly designed margin ReLU, a new distillation feature position, and a partial L2 distance function to skip redundant information giving adverse effects to the compression of student. In ImageNet, our proposed method achieves 21.65% of top-1 error with ResNet50, which outperforms the performance of the teacher network, ResNet152. Our proposed method is evaluated on various tasks such as image classification, object detection and semantic segmentation and achieves a significant performance improvement in all tasks. The code is available at https://sites.google.com/view/byeongho-heo/overhaul) <|cite_end|> and relations <|cite_start|> (Reference: Relational Knowledge Distillation: Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models with a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving the state of the arts on standard benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Cross-Image Relational Knowledge Distillation for Semantic Segmentation: Current Knowledge Distillation (KD) methods for semantic segmentation often guide the student to mimic the teacher's structured information generated from individual data samples. However, they ignore the global semantic relations among pixels across various images that are valuable for KD. This paper proposes a novel Cross-Image Relational KD (CIRKD), which focuses on transferring structured pixel-to-pixel and pixel-to-region relations among the whole images. The motivation is that a good teacher network could construct a well-structured feature space in terms of global pixel dependencies. CIRKD makes the student mimic better structured semantic relations from the teacher, thus improving the segmentation performance. Experimental results over Cityscapes, CamVid and Pascal VOC datasets demonstrate the effectiveness of our proposed approach against state-of-the-art distillation methods. The code is available at https://github.com/winycg/CIRKD.) <|cite_end|> <|cite_start|> (Reference: EventDance: Unsupervised Source-free Cross-modal Adaptation for Event-based Object Recognition: In this paper, we make the first attempt at achieving the cross-modal (i.e., image-to-events) adaptation for event-based object recognition without accessing any labeled source image data owning to privacy and commercial issues.
Tackling this novel problem is non-trivial due to the novelty of event cameras and the distinct modality gap between images and events. In particular, as only the source model is available, a hurdle is how to extract the knowledge from the source model by only using the unlabeled target event data while achieving knowledge transfer. To this end, we propose a novel framework, dubbed EventDance for this unsupervised source-free cross-modal adaptation problem. Importantly, inspired by event-to-video reconstruction methods, we propose a reconstruction-based modality bridging (RMB) module, which reconstructs intensity frames from events in a self-supervised manner. This makes it possible to build up the surrogate images to extract the knowledge (i.e., labels) from the source model. We then propose a multi-representation knowledge adaptation (MKA) module that transfers the knowledge to target models learning events with multiple representation types for fully exploring the spatiotemporal information of events. The two modules connecting the source and target models are mutually updated so as to achieve the best performance. Experiments on three benchmark datasets with two adaption settings show that EventDance is on par with prior methods utilizing the source data.) <|cite_end|> between models. For cross-modal KD, correspondence is widely applied to transfer knowledge <|cite_start|> (Reference: Cross Modal Distillation for Supervision Transfer: In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers. Code, data and pre-trained models are available at https://github.com/s-gupta/fast-rcnn/tree/distillation) <|cite_end|> <|cite_start|> (Reference: An Efficient Approach to Informative Feature Extraction from Multimodal Data: One primary focus in multimodal feature extraction is to find the representations of individual modalities that are maximally correlated. As a well-known measure of dependence, the Hirschfeld-Gebelein-R\'{e}nyi (HGR) maximal correlation becomes an appealing objective because of its operational meaning and desirable properties. However, the strict whitening constraints formalized in the HGR maximal correlation limit its application. To address this problem, this paper proposes Soft-HGR, a novel framework to extract informative features from multiple data modalities. Specifically, our framework prevents the "hard" whitening constraints, while simultaneously preserving the same feature geometry as in the HGR maximal correlation. The objective of Soft-HGR is straightforward, only involving two inner products, which guarantees the efficiency and stability in optimization. We further generalize the framework to handle more than two modalities and missing modalities. When labels are partially available, we enhance the discriminative power of the feature representations by making a semi-supervised adaptation.
Empirical evaluation implies that our approach learns more informative feature mappings and is more efficient to optimize.) <|cite_end|> <|cite_start|> (Reference: Knowledge as Priors: Cross-Modal Knowledge Generalization for Datasets without Superior Knowledge: Cross-modal knowledge distillation deals with transferring knowledge from a model trained with superior modalities (Teacher) to another model trained with weak modalities (Student). Existing approaches require paired training examples exist in both modalities. However, accessing the data from superior modalities may not always be feasible. For example, in the case of 3D hand pose estimation, depth maps, point clouds, or stereo images usually capture better hand structures than RGB images, but most of them are expensive to be collected. In this paper, we propose a novel scheme to train the Student in a Target dataset where the Teacher is unavailable. Our key idea is to generalize the distilled cross-modal knowledge learned from a Source dataset, which contains paired examples from both modalities, to the Target dataset by modeling knowledge as priors on parameters of the Student. We name our method "Cross-Modal Knowledge Generalization" and demonstrate that our scheme results in competitive performance for 3D hand pose estimation on standard benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation: The success of deep convolutional neural networks is partially attributed to the massive amount of annotated training data. However, in practice, medical data annotations are usually expensive and time-consuming to be obtained. Considering multi-modality data with the same anatomic structures are widely available in clinic routine, in this paper, we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (aka., assistant modality) to improve the segmentation performance on another modality (aka., target modality) to make up annotation scarcity. To alleviate the learning difficulties caused by modality-specific appearance discrepancy, we first present an Image Alignment Module (IAM) to narrow the appearance gap between assistant and target modality data. We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge to facilitate the target-modality segmentation. To be specific, we formulate our framework as an integration of two individual segmentors. Each segmentor not only explicitly extracts one modality knowledge from corresponding annotations, but also implicitly explores another modality knowledge from its counterpart in mutual-guided manner. The ensemble of two segmentors would further integrate the knowledge from both modalities and generate reliable segmentation results on target modality. Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods.) <|cite_end|>. In our OmniBind, we introduce the CAD module to align the teacher and student models in modalities with unequal-scale (imbalanced) data samples by distilling intra- and cross-modal correspondences.
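For intuition, a simple form of correspondence distillation (an illustrative sketch with our own notation, not the full CAD objective) matches the pairwise similarity structure of the student to that of the teacher:
\begin{equation*}
\mathcal{L}_{\mathrm{corr}} = \sum_{i,j}\Big(\langle \hat{z}^{s}_{i}, \hat{z}^{s}_{j}\rangle - \langle \hat{z}^{t}_{i}, \hat{z}^{t}_{j}\rangle\Big)^{2},
\end{equation*}
where $\hat{z}^{s}_{i}$ and $\hat{z}^{t}_{i}$ denote $\ell_2$-normalized student and teacher embeddings of sample $i$. Drawing $i$ and $j$ from the same modality distills intra-modal correspondence, while drawing them from different modalities distills cross-modal correspondence; since such relation-level objectives compare similarity structures rather than individual samples, they remain applicable when modalities have unequal numbers of samples.
\noindent \textbf{Multi-modal Datasets} serve as the foundation for multi-modal learning.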
Initially, these datasets consisted only of visual data and their corresponding category labels <|cite_start|> (Reference: ImageNet: A large-scale Hierarchical Image Database: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.) <|cite_end|> <|cite_start|> (Reference: ESC: Dataset for Environmental Sound Classification: One of the obstacles in research activities concentrating on environmental sound classification is the scarcity of suitable and publicly available datasets. This paper tries to address that issue by presenting a new annotated collection of 2000 short clips comprising 50 classes of various common sound events, and an abundant unified compilation of 250000 unlabeled auditory excerpts extracted from recordings available through the Freesound project. The paper also provides an evaluation of human accuracy in classifying environmental sounds and compares it to the performance of selected baseline classifiers using features derived from mel-frequency cepstral coefficients and zero-crossing rate.) <|cite_end|> <|cite_start|> (Reference: LLVIP: A Visible-infrared Paired Dataset for Low-light Vision: It is very challenging for various visual tasks such as image fusion, pedestrian detection and image-to-image translation in low light conditions due to the loss of effective target areas. In this case, infrared and visible images can be used together to provide both rich detail information and effective target areas. In this paper, we present LLVIP, a visible-infrared paired dataset for low-light vision. This dataset contains 30976 images, or 15488 pairs, most of which were taken at very dark scenes, and all of the images are strictly aligned in time and space. Pedestrians in the dataset are labeled. We compare the dataset with other visible-infrared datasets and evaluate the performance of some popular visual algorithms including image fusion, pedestrian detection and image-to-image translation on the dataset. The experimental results demonstrate the complementary effect of fusion on image information, and find the deficiency of existing algorithms of the three visual tasks in very low-light conditions.
We believe the LLVIP dataset will contribute to the community of computer vision by promoting image fusion, pedestrian detection and image-to-image translation in very low-light applications. The dataset is being released in https://bupt-ai-cz.github.io/LLVIP. Raw data is also provided for further research such as image registration.) <|cite_end|> <|cite_start|> (Reference: 3D ShapeNets: A Deep Representation for Volumetric Shapes: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.) <|cite_end|>, which limits their scalability and diversity. To address this issue, subsequent works turned to the abundance of paired dual-modal datasets <|cite_start|> (Reference: {MSR-VTT: A large video description dataset for bridging video and language: While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for "MSRVideo to Text") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches.
We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Network-based approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.) <|cite_end|> <|cite_start|> (Reference: Microsoft COCO Captions: Data Collection and Evaluation Server: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.) <|cite_end|> <|cite_start|> (Reference: AudioCaps: Generating Captions for Audios in The Wild: We explore the problem of Audio Captioning: generating natural language description for any kind of audio in the wild, which has been surprisingly unexplored in previous research. We contribute a large-scale dataset of 46K audio clips with human-written text pairs collected via crowdsourcing on the AudioSet dataset. Our thorough empirical studies not only show that our collected captions are indeed faithful to audio inputs but also discover what forms of audio representation and captioning models are effective for the audio captioning. From extensive experiments, we also propose two novel components that help improve audio captioning performance: the top-down multi-scale encoder and aligned semantic attention.) <|cite_end|> <|cite_start|> (Reference: Touch and Go: Learning from Human-Collected Vision and Touch: The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.) <|cite_end|> for the cross-modal retrieval task. Recently, PointBind <|cite_start|> (Reference: Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following: We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D image, language, audio, and video. Guided by ImageBind, we construct a joint embedding space between 3D and multi-modalities, enabling many promising applications, e.g., any-to-3D generation, 3D embedding arithmetic, and 3D open-world understanding. On top of this, we further present Point-LLM, the first 3D large language model (LLM) following 3D multi-modal instructions.
By parameter-efficient fine-tuning techniques, Point-LLM injects the semantics of Point-Bind into pre-trained LLMs, e.g., LLaMA, which requires no 3D instruction data, but exhibits superior 3D and multi-modal question-answering capacity. We hope our work may cast a light on the community for extending 3D point clouds to multi-modality applications. Code is available at https://github.com/ZiyuGuo99/Point-Bind_Point-LLM.) <|cite_end|> and LanguageBind <|cite_start|> (Reference: LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment: The video-language (VL) pretraining has achieved remarkable improvement in multiple downstream tasks. However, the current VL pretraining framework is hard to extend to multiple modalities (N modalities, N>=3) beyond vision and language. We thus propose LanguageBind, taking the language as the bind across different modalities because the language modality is well-explored and contains rich semantics. Specifically, we freeze the language encoder acquired by VL pretraining, then train encoders for other modalities with contrastive learning. As a result, all modalities are mapped to a shared feature space, implementing multi-modal semantic alignment. While LanguageBind ensures that we can extend VL modalities to N modalities, we also need a high-quality dataset with alignment data pairs centered on language. We thus propose VIDAL-10M with Video, Infrared, Depth, Audio and their corresponding Language, naming as VIDAL-10M. In our VIDAL-10M, all videos are from short video platforms with complete semantics rather than truncated segments from long videos, and all the video, depth, infrared, and audio modalities are aligned to their textual descriptions. LanguageBind has achieved superior performance on a wide range of 15 benchmarks covering video, audio, depth, and infrared. Moreover, multiple experiments have provided evidence for the effectiveness of LanguageBind in achieving indirect alignment and complementarity among diverse modalities. Code address: https://github.com/PKU-YuanGroup/LanguageBind) <|cite_end|> collect paired multi-modal datasets that include more than three modalities. These datasets use language as the bridge to build the \textit{"Text-X"} paired datasets. However, binding all modalities through language limits the flexibility of modality combinations. For example, a model trained on these datasets cannot handle combinations of only visual modalities, \eg, (image, event, and touch). In light of this, we build the first dataset that consists of seven modalities and enables omni-bind for any of them. <|paper_end|>
[ "<|reference_start|> NExT-GPT: Any-to-Any Multimodal LLM: While recently Multimodal Large Language Models (MM-LLMs) have made exciting strides, they mostly fall prey to the limitation of only input-side multimodal understanding, without the ability to produce content in multiple modalities. As we humans always perceive the world and communicate with people through various modalities, developing any-to-any MM-LLMs capable of accepting and delivering content in any modality becomes essential to human-level AI. To fill the gap, we present an end-to-end general-purpose any-to-any MM-LLM system, NExT-GPT. We connect an LLM with multimodal adaptors and different diffusion decoders, enabling NExT-GPT to perceive inputs and generate outputs in arbitrary combinations of text, images, videos, and audio. By leveraging the existing well-trained highly-performing encoders and decoders, NExT-GPT is tuned with only a small amount of parameter (1%) of certain projection layers, which not only benefits low-cost training and also facilitates convenient expansion to more potential modalities. Moreover, we introduce a modality-switching instruction tuning (MosIT) and manually curate a high-quality dataset for MosIT, based on which NExT-GPT is empowered with complex cross-modal semantic understanding and content generation. Overall, our research showcases the promising possibility of building an AI agent capable of modeling universal modalities, paving the way for more human-like AI research in the community. Project page: https://next-gpt.github.io/ <|reference_end|>", "<|reference_start|> Learning Unseen Modality Interaction: Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences. In this paper, we challenge this modality-complete assumption for multimodal learning and instead strive for generalization to unseen modality combinations during inference. We pose the problem of unseen modality interaction and introduce a first solution. It exploits a module that projects the multidimensional features of different modalities into a common space with rich information preserved. This allows the information to be accumulated with a simple summation operation across available modalities. To reduce overfitting to less discriminative modality combinations during training, we further improve the model learning with pseudo-supervision indicating the reliability of a modality's prediction. We demonstrate that our approach is effective for diverse tasks and modalities by evaluating it for multimodal video classification, robot state regression, and multimedia retrieval. Project website: https://xiaobai1217.github.io/Unseen-Modality-Interaction/. <|reference_end|>", "<|reference_start|> Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation: The success of deep convolutional neural networks is partially attributed to the massive amount of annotated training data. However, in practice, medical data annotations are usually expensive and time-consuming to be obtained. Considering multi-modality data with the same anatomic structures are widely available in clinic routine, in this paper, we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (aka., assistant modality) to improve the segmentation performance on another modality (aka., target modality) to make up annotation scarcity. 
To alleviate the learning difficulties caused by modality-specific appearance discrepancy, we first present an Image Alignment Module (IAM) to narrow the appearance gap between assistant and target modality data. We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge to facilitate the target-modality segmentation. To be specific, we formulate our framework as an integration of two individual segmentors. Each segmentor not only explicitly extracts one modality knowledge from corresponding annotations, but also implicitly explores another modality knowledge from its counterpart in mutual-guided manner. The ensemble of two segmentors would further integrate the knowledge from both modalities and generate reliable segmentation results on target modality. Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods. <|reference_end|>", "<|reference_start|> ESC: Dataset for Environmental Sound Classification: One of the obstacles in research activities concentrating on environmental sound classification is the scarcity of suitable and publicly available datasets. This paper tries to address that issue by presenting a new annotated collection of 2000 short clips comprising 50 classes of various common sound events, and an abundant unified compilation of 250000 unlabeled auditory excerpts extracted from recordings available through the Freesound project. The paper also provides an evaluation of human accuracy in classifying environmental sounds and compares it to the performance of selected baseline classifiers using features derived from mel-frequency cepstral coefficients and zero-crossing rate. <|reference_end|>" ]
[ 2, 3, 16, 18 ]
{"<|multi_cite_1_2|>": "arxiv-474416", "<|multi_cite_2_1|>": "arxiv-503527", "<|multi_cite_2_2|>": "arxiv-535954", "<|multi_cite_2_3|>": "arxiv-580505", "<|multi_cite_3_1|>": "arxiv-545081", "<|multi_cite_3_2|>": "arxiv-597370", "<|multi_cite_4_1|>": "arxiv-517765", "<|multi_cite_4_2|>": "arxiv-562239", "<|multi_cite_4_3|>": "arxiv-524948", "<|multi_cite_5_1|>": "ss-2327005", "<|multi_cite_5_2|>": "ss-2327006", "<|multi_cite_6_1|>": "ss-2302409", "<|multi_cite_6_2|>": "ss-1195951", "<|multi_cite_7_1|>": "arxiv-349895", "<|multi_cite_7_2|>": "arxiv-468576", "<|multi_cite_7_3|>": "arxiv-487661", "<|multi_cite_7_4|>": "arxiv-491122", "<|multi_cite_7_5|>": "arxiv-313241", "<|multi_cite_7_6|>": "arxiv-323919", "<|multi_cite_7_7|>": "ss-889524", "<|multi_cite_7_8|>": "arxiv-395432", "<|multi_cite_7_9|>": "arxiv-477561", "<|multi_cite_8_1|>": "arxiv-350750", "<|multi_cite_8_2|>": "arxiv-375997", "<|multi_cite_8_3|>": "arxiv-452603", "<|multi_cite_9_1|>": "ss-2322749", "<|multi_cite_9_2|>": "arxiv-450701", "<|multi_cite_9_3|>": "arxiv-385347", "<|multi_cite_9_4|>": "arxiv-474442", "<|multi_cite_10_1|>": "ss-750590", "<|multi_cite_10_2|>": "arxiv-117123", "<|multi_cite_10_3|>": "arxiv-438686", "<|cite_11|>": "arxiv-537498", "<|cite_12|>": "arxiv-535954", "<|cite_13|>": "arxiv-580505", "<|cite_14|>": "arxiv-545081", "<|cite_15|>": "arxiv-597370", "<|multi_cite_16_1|>": "arxiv-517765", "<|multi_cite_16_2|>": "arxiv-562239", "<|multi_cite_16_3|>": "arxiv-524948", "<|multi_cite_17_1|>": "ss-2309651", "<|multi_cite_17_2|>": "ss-2327007", "<|multi_cite_18_1|>": "arxiv-225343", "<|multi_cite_18_2|>": "arxiv-416557", "<|multi_cite_18_3|>": "arxiv-443554", "<|multi_cite_18_4|>": "arxiv-378896", "<|multi_cite_18_5|>": "arxiv-378896", "<|multi_cite_19_1|>": "arxiv-475356", "<|multi_cite_19_2|>": "arxiv-491684", "<|multi_cite_20_1|>": "arxiv-404496", "<|multi_cite_20_2|>": "arxiv-580005", "<|multi_cite_20_3|>": "ss-748521", "<|multi_cite_20_4|>": "arxiv-517765", "<|multi_cite_21_1|>": "arxiv-580319", "<|multi_cite_21_2|>": "arxiv-506590", "<|multi_cite_21_3|>": "arxiv-538218", "<|cite_22|>": "arxiv-517765", "<|multi_cite_23_1|>": "arxiv-74282", "<|multi_cite_23_2|>": "arxiv-259074", "<|multi_cite_24_1|>": "arxiv-271399", "<|multi_cite_24_2|>": "arxiv-285154", "<|multi_cite_25_1|>": "arxiv-70546", "<|multi_cite_25_2|>": "arxiv-198042", "<|multi_cite_26_1|>": "arxiv-199249", "<|multi_cite_26_2|>": "arxiv-413152", "<|multi_cite_26_3|>": "arxiv-598171", "<|multi_cite_27_1|>": "arxiv-80344", "<|multi_cite_27_2|>": "arxiv-181403", "<|multi_cite_27_3|>": "arxiv-256749", "<|multi_cite_27_4|>": "arxiv-293718", "<|multi_cite_28_1|>": "ss-710402", "<|multi_cite_28_2|>": "ss-756171", "<|multi_cite_28_3|>": "arxiv-362777", "<|multi_cite_28_4|>": "arxiv-62554", "<|multi_cite_29_1|>": "ss-785672", "<|multi_cite_29_2|>": "arxiv-75485", "<|multi_cite_29_3|>": "ss-688255", "<|multi_cite_29_4|>": "arxiv-464251", "<|cite_30|>": "arxiv-535954", "<|cite_31|>": "arxiv-545081"}
1808.03303
<|paper_start|> Title: On-Chip Optical Convolutional Neural Networks Abstract: On-Chip Optical Convolutional Neural Networks: Convolutional Neural Networks (CNNs) are a class of Artificial Neural Networks (ANNs) that convolve input images with filter kernels for object recognition and classification purposes. In this paper, we propose a photonics circuit architecture that could consume a fraction of the energy per inference compared with state-of-the-art electronics. Introduction Exploration of neuromorphic computing architectures began in the late 1950s with the invention of the perceptron, which functioned as a binary classifier with a linear decision boundary <|cite_start|> (Reference: The perceptron: a probabilistic model for information storage and organization in the brain.: The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus) <|cite_end|>. The perceptron worked well for certain tasks, but further progress was hindered by a lack of understanding of how to train multilayer versions. Progress on neuromorphic computing for image processing accelerated rapidly in the 1990s, when LeCun et al. pioneered using back-propagation on an architecture based on convolving images with kernels, known as Convolutional Neural Networks (CNNs) <|cite_start|> (Reference: gradient-based learning applied to document recognition: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.)
<|cite_end|> <|cite_start|> (Reference: Supporting Online Material for Reducing the Dimensionality of Data with Neural Networks: ) <|cite_end|> <|cite_start|> (Reference: Deep Learning: Deep learning (DL) is a high dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state-of-the-art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of applications in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather than inferential and can be viewed as a black-box methodology for high-dimensional function estimation.) <|cite_end|>. This architecture consists of successive layers of convolution, nonlinearity, and downsampling, followed by fully connected layers (see Fig. 1a). The key to the success of CNNs was that convolution and downsampling handled the translation invariance of image features efficiently, while the multiple layers allowed greater flexibility in training than earlier few-layer approaches. \par Although the CNN architecture successfully managed to implement digit classification at human performance levels and compared favorably to other machine learning techniques, it was not until improvements in processing speeds and the creation of large human-labeled image databases from the Internet that the full potential of CNNs became apparent <|cite_start|> (Reference: ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.) <|cite_end|>. Using GPU-accelerated backpropagation, AlexNet achieved record-breaking results on ImageNet for a thousand categories using a CNN architecture composed of five convolutional layers and three fully connected layers <|cite_start|> (Reference: ImageNet classification with deep convolutional neural networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation.
To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|> <|cite_start|> (Reference: ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.) <|cite_end|>. Following AlexNet's lead, modern CNNs with dozens or hundreds of layers and hundreds of millions to billions of parameters can achieve better-than-human-level performance in many image classification tasks <|cite_start|> (Reference: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification: Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on our PReLU networks (PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass human-level performance (5.1%, Russakovsky et al.) on this visual recognition challenge.) <|cite_end|> <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task.
We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|>. Recent breakthroughs with deep learning, such as playing Atari games <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> by combining reinforcement learning and CNNs, have convinced many that these networks are some of the best tools for a new machine learning golden age, with applications ranging from pedestrian detection for self-driving cars to biomedical image analysis <|cite_start|> (Reference: Deep Learning: Deep learning (DL) is a high dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state-of-the-art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of applications in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather than inferential and can be viewed as a black-box methodology for high-dimensional function estimation.) <|cite_end|> <|cite_start|> (Reference: Deep Learning in Neural Networks: An Overview: In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.) <|cite_end|> <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|> <|cite_start|> (Reference: Ten Years of Pedestrian Detection, What Have We Learned?: Paper-by-paper results make it easy to miss the forest for the trees. We analyse the remarkable progress of the last decade by discussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detection quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. This new decision forest detector achieves the current best known performance on the challenging Caltech-USA dataset.)
<|cite_end|> <|cite_start|> (Reference: Dermatologist-level classification of skin cancer with deep neural networks: ) <|cite_end|> <|cite_start|> (Reference: Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features: ) <|cite_end|> <|cite_start|> (Reference: A Convolutional Neural Network Cascade for Face Detection: In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.) <|cite_end|>. \par A big part of this success story was the advent of GPU acceleration for large matrix-matrix multiplications, which are the essential and most time-intensive step of back-propagation in CNN training. Despite significant gains, training large CNNs takes weeks utilizing large clusters of GPUs. More practically, GPU-accelerated CNN inference is still a computationally intensive task, making image analysis of the vast majority of the image and video data generated by the Internet very difficult. YouTube alone, in 2015, saw uploads of 300 hours of video every minute; processing this continuously with CNNs would require a cluster of 18,000 Nvidia Titan X GPUs, drawing 4.5 megawatts, with the hardware costing tens of millions of US dollars <|cite_start|> (Reference: Optimizing Deep CNN-Based Queries over Video Streams at Scale: Video is one of the fastest-growing sources of data and is rich with interesting semantic information. Furthermore, recent advances in computer vision, in the form of deep convolutional neural networks (CNNs), have made it possible to query this semantic information with near-human accuracy (in the form of image tagging). However, performing inference with state-of-the-art CNNs is computationally expensive: analyzing videos in real time (at 30 frames/sec) requires a $1200 GPU per video stream, posing a serious computational barrier to CNN adoption in large-scale video data management systems. In response, we present NoScope, a system that uses cost-based optimization to assemble a specialized video processing pipeline for each input video stream, greatly accelerating subsequent CNN-based queries on the video.
As NoScope observes a video, it trains two types of pipeline components (which we call filters) to exploit the locality in the video stream: difference detectors that exploit temporal locality between frames, and specialized models that are tailored to a specific scene and query (i.e., exploit environmental and query-specific locality). We show that the optimal set of filters and their parameters depends significantly on the video stream and query in question, so NoScope introduces an efficient cost-based optimizer for this problem to select them. With this approach, our NoScope prototype achieves up to 120-3,200 × speed-ups (318-8,500 × real-time) on binary classification tasks over real-world webcam and surveillance video while maintaining accuracy within 1-5% of a state-of-the-art CNN.) <|cite_end|>. \par Given that this is just one company and that video traffic is predicted to grow to be 80$\%$ of the Internet by 2020 <|cite_start|> (Reference: QOL-15. Neural network integrity for facial affect recognition in survivors of medulloblastoma) <|cite_end|>, this problem is going to get harder and will far outpace the current computing paradigms, requiring investment in specialized neuromorphic \emph{hardware} architectures. There are many proposals and experimental demonstrations to accomplish this through analog circuits, digital ASIC designs, FPGAs, and other electronic technologies <|cite_start|> (Reference: Neuromorphic Electronic Systems: It is shown that for many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those using digital methods. This advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals rather than by the absolute values of digital signals. This approach requires adaptive techniques to mitigate the effects of component differences. This kind of adaptation leads naturally to systems that learn about their environment. Large-scale adaptive analog systems are more robust to component degradation and failure than are more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication.) <|cite_end|> <|cite_start|> (Reference: Neuromorphic Silicon Neurons and Large-Scale Neural Networks: Challenges and Opportunities: Neuromorphic silicon neurons: state of the art. Complementary metal-oxide-semiconductor (CMOS) transistors are commonly used in very-large-scale-integration (VLSI) digital circuits as a basic binary switch that turns on or off as the transistor gate voltage crosses some threshold. Carver Mead first noted that CMOS transistor circuits operating below this threshold in current mode have strikingly similar sigmoidal current–voltage relationships as do neuronal ion channels and consume little power; hence they are ideal analogs of neuronal function (Mead, 1989). This unique device physics led to the advent of “neuromorphic” silicon neurons (SiNs) which allow neuronal spiking dynamics to be directly emulated on analog VLSI chips without the need for digital software simulation (Mahowald and Douglas, 1991). In the inaugural issue of this Journal, Indiveri et al. (2011) review the current state of the art in CMOS-based neuromorphic neuron circuit designs that have evolved over the past two decades.
The comprehensive appraisal delineates and compares the latest SiN design techniques as applied to varying types of spiking neuron models ranging from realistic conductancebased Hodgkin–Huxley models to simple yet versatile integrate-and-fire models. The timely and much needed compendium is a tour de force that will certainly provide a valuable guidepost for future SiN designs and applications.) <|cite_end|> <|cite_start|> (Reference: ISAAC: A Convolutional Neural Network Accelerator with In-situ Analog Arithmetic in Crossbars: A number of recent efforts have attempted to design accelerators for popular machine learning algorithms, such as those involving convolutional and deep neural networks (CNNs and DNNs). These algorithms typically involve a large number of multiply-accumulate (dot-product) operations. A recent project, DaDianNao, adopts a near data processing approach, where a specialized neural functional unit performs all the digital arithmetic operations and receives input weights from adjacent eDRAM banks. This work explores an in-situ processing approach, where memristor crossbar arrays not only store input weights, but are also used to perform dot-product operations in an analog manner. While the use of crossbar memory as an analog dot-product engine is well known, no prior work has designed or characterized a full-fledged accelerator based on crossbars. In particular, our work makes the following contributions: (i) We design a pipelined architecture, with some crossbars dedicated for each neural network layer, and eDRAM buffers that aggregate data between pipeline stages. (ii) We define new data encoding techniques that are amenable to analog computations and that can reduce the high overheads of analog-to-digital conversion (ADC). (iii) We define the many supporting digital components required in an analog CNN accelerator and carry out a design space exploration to identify the best balance of memristor storage/compute, ADCs, and eDRAM storage on a chip. On a suite of CNN and DNN workloads, the proposed ISAAC architecture yields improvements of 14.8×, 5.5×, and 7.5× in throughput, energy, and computational density (respectively), relative to the state-of-the-art DaDianNao architecture.) <|cite_end|> <|cite_start|> (Reference: A digital neurosynaptic core using embedded crossbar memory with 45pJ per spike in 45nm: The grand challenge of neuromorphic computation is to develop a flexible brain-like architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of the human brain—within the constraints of existing silicon and post-silicon technologies. To this end, we fabricated a key building block of a modular neuromorphic architecture, a neurosynaptic core, with 256 digital integrate-and-fire neurons and a 1024×256 bit SRAM crossbar memory for synapses using IBM's 45nm SOI process. Our fully digital implementation is able to leverage favorable CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. In contrast to a conventional von Neumann architecture, our core tightly integrates computation (neurons) alongside memory (synapses), which allows us to implement efficient fan-out (communication) in a naturally parallel and event-driven manner, leading to ultra-low active power consumption of 45pJ/spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states and is thus amenable to a wide range of applications. 
As an example, we trained a restricted Boltzmann machine offline to perform a visual digit recognition task, and mapped the learned weights to our chip.) <|cite_end|> <|cite_start|> (Reference: Eyeriss: an energy-efficient reconfigurable accelerator for deep convolutional neural networks: Eyeriss is an accelerator for state-of-the-art deep convolutional neural networks (CNNs). It optimizes for the energy efficiency of the entire system, including the accelerator chip and off-chip DRAM, for various CNN shapes by reconfiguring the architecture. CNNs are widely used in modern AI systems but also bring challenges on throughput and energy efficiency to the underlying hardware. This is because its computation requires a large amount of data, creating significant data movement from on-chip and off-chip that is more energy-consuming than computation. Minimizing data movement energy cost for any CNN shape, therefore, is the key to high throughput and energy efficiency. Eyeriss achieves these goals by using a proposed processing dataflow, called row stationary (RS), on a spatial architecture with 168 processing elements. RS dataflow reconfigures the computation mapping of a given shape, which optimizes energy efficiency by maximally reusing data locally to reduce expensive data movement, such as DRAM accesses. Compression and data gating are also applied to further improve energy efficiency. Eyeriss processes the convolutional layers at 35 frames/s and 0.0029 DRAM access/multiply and accumulation (MAC) for AlexNet at 278 mW (batch size $N = 4$), and 0.7 frames/s and 0.0035 DRAM access/MAC for VGG-16 at 236 mW ($N = 3$).) <|cite_end|>. \par Our work follows a long history of optical computing such as optical implementations of unitary matrix multiplication, optical memory, all-optical switching, optical interconnects, and even recent works on optical neuromorphic architectures such as photonic spike processing and reservoir computing <|cite_start|> (Reference: Sub-femtojoule all-optical switching using a photonic-crystal nanocavity: ) <|cite_end|> <|cite_start|> (Reference: Integrated all-photonic non-volatile multi-level memory: ) <|cite_end|> <|cite_start|> (Reference: Large-scale nanophotonic phased array: ) <|cite_end|> <|cite_start|> (Reference: Single-chip microprocessor that communicates directly using light: ) <|cite_end|> <|cite_start|> (Reference: Fast bistable all-optical switch and memory on a silicon photonic crystal on-chip: We demonstrate extremely low-power all-optical bistability by utilizing silicon photonic crystal nanocavities, based on the plasma effect of carriers generated by two-photon absorption. Owing to the high quality factor and the small volume of the nanocavities, the photon density inside the cavity becomes extremely high, which leads to a large reduction in operation power. Optical bistable operation in a single nanocavity permits optical read-write memory operation, which opens the possibility of an integrated optical logic circuit on a single chip, based on photonic crystals. The demonstrated bistable threshold power is 0.4 mW with a set pulse energy of 74 fJ, at a switching speed of <100 ps.) <|cite_end|> <|cite_start|> (Reference: Deep Learning with Coherent Nanophotonic Circuits: Artificial Neural Networks have dramatically improved performance for many machine learning tasks.
We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.) <|cite_end|> <|cite_start|> (Reference: Photonic Neuromorphic Signal Processing and Computing: ) <|cite_end|> <|cite_start|> (Reference: Broadcast and Weight: An Integrated Network For Scalable Photonic Spike Processing: We propose an on-chip optical architecture to support massive parallel communication among high-performance spiking laser neurons. Designs for a network protocol, computational element, and waveguide medium are described, and novel methods are considered in relation to prior research in optical on-chip networking, neural networking, and computing. Broadcast-and-weight is a new approach for combining neuromorphic processing and optoelectronic physics, a pairing that is found to yield a variety of advantageous features. We discuss properties and design considerations for architectures for scalable wavelength reuse and biologically relevant organizational capabilities, in addition to aspects of practical feasibility. Given recent developments commercial photonic systems integration and neuromorphic computing, we suggest that a novel approach to photonic spike processing represents a promising opportunity in unconventional computing.) <|cite_end|> <|cite_start|> (Reference: Recent progress in semiconductor excitable lasers for photonic spike processing: Recently, there has been tremendous interest in excitable optoelectronic devices and in particular excitable semiconductor lasers that could potentially enable unconventional processing approaches beyond conventional binary-logic-based approaches. In parallel, there has been renewed investigation of non-von Neumann architectures driven in part by incipient limitations in aspects of Moore’s law. These neuromorphic architectures attempt to decentralize processing by interweaving interconnection with computing while simultaneously incorporating time-resolved dynamics, loosely classified as spiking (a.k.a. excitability). The rapid and efficient advances in CMOS-compatible photonic interconnect technologies have led to opportunities in optics and photonics for unconventional circuits and systems. Effort in the budding research field of photonic spike processing aims to synergistically integrate the underlying physics of photonics with bio-inspired processing. Lasers operating in the excitable regime are dynamically analogous with the spiking dynamics observed in neuron biophysics but roughly 8 orders of magnitude faster. The field is reaching a critical juncture at which there is a shift from studying single devices to studying an interconnected network of lasers. In this paper, we review the recent research in the information processing abilities of such lasers, dubbed “photonic neurons,” “laser neurons,” or “optical neurons.” An integrated network of such lasers on a chip could potentially grant the capacity for complex, ultrafast categorization and decision making to provide a range of computing and signal processing applications, such as sensing and manipulating the radio frequency spectrum and for hypersonic aircraft control.) <|cite_end|> <|cite_start|> (Reference: Experimental demonstration of reservoir computing on a silicon photonics chip: ) <|cite_end|>. 
We focus primarily on integrated photonics as a computation platform because it provides the highest raw bandwidth currently available of any technology that is mass manufacturable and has standardized components. \begin{figure}[!] \centering\includegraphics[width=\textwidth]{fig1_ml-eps-converted-to.pdf} \caption{Convolutional Neural Net (CNN) Architecture. a. Logic Block Diagram: The input image, the number 3 shown here, is passed through successive layers of convolution and pooling, nonlinearities (see Fig. 2 for further description), and re-shuffling of the pixels (see Fig. 3 for further description). A final fully connected layer maps the last stage of convolution output to a set of classification outputs. b. Schematic Illustration: The first part of the CNN implements convolution of the image with a set of smaller filters. These produce a sequence of kernel-patch dot products, which are passed through a nonlinearity and re-shuffled into a new d-dimensional image, where d is the number of filters in the first layer. The process is then repeated on this new image for many subsequent layers.} \end{figure} <|paper_end|>
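To make the caption's description concrete, here is a short illustrative sketch (ours, not from the paper; all function and variable names are assumptions) of convolution expressed as kernel-patch dot products, the lowering that reduces a CNN layer to the matrix-matrix product that GPU and photonic hardware accelerate:

```python
import numpy as np

def conv2d_as_dot_products(image, kernels, stride=1):
    """Convolution as kernel-patch dot products (im2col-style lowering).

    image:   (H, W) single-channel input
    kernels: (d, k, k) stack of d filters
    returns: (d, H_out, W_out) -- the new "d-dimensional image"
    """
    d, k, _ = kernels.shape
    H, W = image.shape
    H_out = (H - k) // stride + 1
    W_out = (W - k) // stride + 1

    # Flatten every k x k patch of the image into one row of a matrix.
    patches = np.empty((H_out * W_out, k * k))
    for i in range(H_out):
        for j in range(W_out):
            patch = image[i * stride:i * stride + k, j * stride:j * stride + k]
            patches[i * W_out + j] = patch.ravel()

    # One matrix-matrix product evaluates every kernel-patch dot product
    # at once; this is the operation the photonic circuit would implement.
    out = patches @ kernels.reshape(d, -1).T   # (H_out * W_out, d)
    out = np.maximum(out, 0.0)                 # nonlinearity (ReLU here)
    return out.T.reshape(d, H_out, W_out)      # re-shuffle into d feature maps

# Example: a 28x28 image and four 3x3 filters give a (4, 26, 26) output.
feature_maps = conv2d_as_dot_products(np.random.rand(28, 28),
                                      np.random.randn(4, 3, 3))
print(feature_maps.shape)
```

Repeating this lowering on the re-shuffled output reproduces the layer-by-layer structure sketched in the figure.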
[ "<|reference_start|> Supporting Online Material for Reducing the Dimensionality of Data with Neural Networks: <|reference_end|>", "<|reference_start|> ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. <|reference_end|>", "<|reference_start|> Large-scale nanophotonic phased array: <|reference_end|>", "<|reference_start|> Fast bistable all-optical switch and memory on a silicon photonic crystal on-chip: We demonstrate extremely low-power all-optical bistability by utilizing silicon photonic crystal nanocavities, based on the plasma effect of carriers generated by two-photon absorption. Owing to the high quality factor and the small volume of the nanocavities, the photon density inside the cavity becomes extremely high, which leads to a large reduction in operation power. Optical bistable operation in a single nanocavity permits optical read-write memory operation, which opens the possibility of an integrated optical logic circuit on a single chip, based on photonic crystals. The demonstrated bistable threshold power is 0.4 mW with a set pulse energy of 74 fJ, at a switching speed of <100 ps. <|reference_end|>" ]
[ 2, 6, 26, 28 ]
{"<|cite_1|>": "ss-715613", "<|multi_cite_2_1|>": "ss-1056505", "<|multi_cite_2_2|>": "ss-1296379", "<|multi_cite_2_3|>": "arxiv-166644", "<|cite_3|>": "arxiv-65515", "<|multi_cite_4_1|>": "ss-690198", "<|multi_cite_4_2|>": "arxiv-65515", "<|multi_cite_5_1|>": "arxiv-72633", "<|multi_cite_5_2|>": "arxiv-88870", "<|cite_6|>": "ss-749221", "<|multi_cite_7_1|>": "arxiv-166644", "<|multi_cite_7_2|>": "arxiv-60238", "<|multi_cite_7_3|>": "ss-805362", "<|multi_cite_7_4|>": "arxiv-68854", "<|multi_cite_7_5|>": "ss-853580", "<|multi_cite_7_6|>": "ss-1548610", "<|multi_cite_7_7|>": "ss-1198086", "<|cite_9|>": "ss-1304648", "<|cite_10|>": "ss-717653", "<|multi_cite_11_1|>": "ss-1426387", "<|multi_cite_11_2|>": "ss-745867", "<|multi_cite_11_4|>": "ss-750861", "<|multi_cite_11_6|>": "ss-1458584", "<|multi_cite_11_7|>": "ss-728901", "<|multi_cite_12_1|>": "ss-861782", "<|multi_cite_12_2|>": "ss-1448484", "<|multi_cite_12_3|>": "ss-1682025", "<|multi_cite_12_4|>": "ss-2299572", "<|multi_cite_12_5|>": "ss-861783", "<|multi_cite_12_6|>": "ss-723356", "<|multi_cite_12_7|>": "ss-2427131", "<|multi_cite_12_8|>": "ss-787763", "<|multi_cite_12_9|>": "ss-861784", "<|multi_cite_12_10|>": "ss-861785"}
2204.07258
<|paper_start|> Title: Causal Transformer for Estimating Counterfactual Outcomes Abstract: Causal Transformer for Estimating Counterfactual Outcomes: Estimating counterfactual outcomes over time from observational data is relevant for many applications (e.g., personalized medicine). Yet, state-of-the-art methods build upon simple long short-term memory (LSTM) networks, thus rendering inferences for complex, long-range dependencies challenging. In this paper, we develop a novel Causal Transformer for estimating counterfactual outcomes over time. Our model is specifically designed to capture complex, long-range dependencies among time-varying confounders. For this, we combine three transformer subnetworks with separate inputs for time-varying covariates, previous treatments, and previous outcomes into a joint network with in-between cross-attentions. We further develop a custom, end-to-end training procedure for our Causal Transformer. Specifically, we propose a novel counterfactual domain confusion loss to address confounding bias: it aims to learn adversarial balanced representations, so that they are predictive of the next outcome but non-predictive of the current treatment assignment. We evaluate our Causal Transformer based on synthetic and real-world datasets, where it achieves superior performance over current baselines. To the best of our knowledge, this is the first work proposing a transformer-based architecture for estimating counterfactual outcomes from longitudinal data. Introduction \label{sec:intro} \begin{figure*}[tbp] \vskip -0.07in \begin{center} \centerline{\includegraphics[width=0.9\textwidth]{figures/multi-input-causal-transformer}} \vskip -0.1in \caption{Overview of our \shortname. We distinguish two timelines: time steps $1, \ldots, t$ refer to observational data (patient trajectories) and thus the input; time steps $t+1, \ldots, t+\tau$ form the projection horizon and thus the output. Three separate transformers are used in parallel for encoding observational data as input: treatments $\mathbf{A}_t$ / treatment interventions $\mathbf{a}_t$ (blue), outcomes $\mathbf{Y}_{t}$ / outcome predictions $\hat{\mathbf{Y}}_{t}$ (green), and time-varying covariates $\mathbf{X}_t$ (red). These are fused via $B$ stacked multi-input blocks. Additional static covariates $\mathbf{V}$ (gray) are fed into all multi-input blocks. Each multi-input block further makes use of cross-attentions. Afterward, the three respective representations for treatments, outcomes, and time-varying covariates are averaged, giving the (balanced) representation $\mathbf{\Phi}_t$ (purple). On top of that are two additional networks $G_Y$ (outcome prediction network) and $G_A$ (treatment classifier network) for learning balanced representations in our CDC loss. Layer normalizations and residual connections are omitted for clarity.} \label{fig:multi-input-transformer} \end{center} \vskip -0.3in \end{figure*} Decision-making in medicine requires precise knowledge of individualized health outcomes over time after applying different treatments <|cite_start|> (Reference: Analysis of multi-stage treatments for recurrent diseases: Patients with a non‐curable disease such as many types of cancer usually go through the process of initial treatment, a various number of disease recurrences and salvage treatments, and eventually death. The analysis of the effects of initial and salvage treatments on overall survival is not trivial.
One may try to use disease recurrences and salvage treatments as time‐dependent covariates in a Cox proportional hazards model. However, because disease recurrence is an intermediate outcome between initial treatment and final survival, the interpretation of such an estimation result is awkward. It does not estimate the causal effects of treatments on overall survival. Nevertheless, such causal effect estimates are critical for treatment decision making. Our approach to address this issue is that, at any treatment stage, for each patient, we compute a potential survival time if he or she would receive the optimal subsequent treatments, and use this potential survival time to do comparison between current‐stage treatment groups. This potential survival time is assumed to follow an accelerated failure time model at each treatment stage and calculated by backward induction, starting from the last stage of treatment. By doing that, the effects on survival of different treatments at each stage can be consistently estimated and fairly compared. Under suitable conditions, these estimated effects have a causal interpretation. We evaluated the proposed model and estimation method by simulation studies and illustrated using the motivating, real data set that describes initial and salvage treatments for patients with soft tissue sarcoma.) <|cite_end|> <|cite_start|> (Reference: Assessing lack of common support in causal inference using bayesian nonparametrics: Implications for evaluating the effect of breastfeeding on children's cognitive outcomes: Causal inference in observational studies typically requires making comparisons between groups that are dissimilar. For instance, researchers investigating the role of a prolonged duration of breastfeeding on child outcomes may be forced to make comparisons between women with substantially different characteristics on average. In the extreme there may exist neighborhoods of the covariate space where there are not sufficient numbers of both groups of women (those who breastfed for prolonged periods and those who did not) to make inferences about those women. This is referred to as lack of common support. Problems can arise when we try to estimate causal effects for units that lack common support, thus we may want to avoid inference for such units. If ignorability is satisfied with respect to a set of potential confounders, then identifying whether, or for which units, the common support assumption holds is an empirical question. However, in the high-dimensional covariate space often required to satisfy ignorability such identification may not be trivial. Existing methods used to address this problem often require reliance on parametric assumptions and most, if not all, ignore the information embedded in the response variable. We distinguish between the concepts of “common support” and “common causal support.” We propose a new approach for identifying common causal support that addresses some of the shortcomings of existing methods. We motivate and illustrate the approach using data from the National Longitudinal Survey of Youth to estimate the effect of breastfeeding at least nine months on reading and math achievement scores at age five or six. We also evaluate the comparative performance of this method in hypothetical examples and simulations where the true treatment effect is known.) <|cite_end|>. This then informs the choice of treatment plans and thus ensures effective care personalized to individual patients.
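To make the multi-input architecture of Figure~\ref{fig:multi-input-transformer} concrete, the following is a minimal, illustrative PyTorch sketch, and explicitly not the authors' implementation (see the linked repository for that): all names and dimensions, the use of \texttt{nn.MultiheadAttention}, and the omission of proper causal masking inside the cross-attentions are our own simplifying assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class MultiInputBlock(nn.Module):
    """Simplified multi-input block: per-stream masked self-attention,
    then cross-attention from each stream to the two other streams."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.self_attn = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True)
             for _ in range(3)])
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True)
             for _ in range(3)])

    def forward(self, streams):
        # streams: [treatments, outcomes, covariates], each (batch, T, d_model)
        T = streams[0].size(1)
        causal = torch.ones(T, T).triu(1).bool()  # True = masked-out future step
        h = [attn(s, s, s, attn_mask=causal)[0] + s
             for attn, s in zip(self.self_attn, streams)]
        out = []
        for i, attn in enumerate(self.cross_attn):
            other = torch.cat([h[j] for j in range(3) if j != i], dim=1)
            out.append(attn(h[i], other, other)[0] + h[i])  # future masking omitted here
        return out

# Stacking B such blocks and averaging the three streams yields the balanced
# representation Phi_t, on which the heads G_Y and G_A of Figure 1 operate.
\end{verbatim}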
Traditionally, the gold standard for estimating the effects of treatments is randomized controlled trials~(RCTs). However, RCTs are costly, often impractical, or even unethical. To address this, there is a growing interest in estimating health outcomes over time from observational data such as electronic health records. Numerous methods have been proposed for estimating (counterfactual) outcomes from observational data in the static setting <|cite_start|> (Reference: Targeted Maximum Likelihood Learning: Suppose one observes a sample of independent and identically distributed observations from a particular data generating distribution. Suppose that one is concerned with estimation of a particular pathwise differentiable Euclidean parameter. A substitution estimator evaluating the parameter of a given likelihood based density estimator is typically too biased and might not even converge at the parametric rate: that is, the density estimator was targeted to be a good estimator of the density and might therefore result in a poor estimator of a particular smooth functional of the density. In this article we propose a one step (and, by iteration, k-th step) targeted maximum likelihood density estimator which involves 1) creating a hardest parametric submodel with parameter epsilon through the given density estimator with score equal to the efficient influence curve of the pathwise differentiable parameter at the density estimator, 2) estimating epsilon with the maximum likelihood estimator, and 3) defining a new density estimator as the corresponding update of the original density estimator. We show that iteration of this algorithm results in a targeted maximum likelihood density estimator which solves the efficient influence curve estimating equation and thereby yields a locally efficient estimator of the parameter of interest, under regularity conditions. In particular, we show that, if the parameter is linear and the model is convex, then the targeted maximum likelihood estimator is often achieved in the first step, and it results in a locally efficient estimator at an arbitrary (e.g., heavily misspecified) starting density. We also show that the targeted maximum likelihood estimators are now in full agreement with the locally efficient estimating function methodology as presented in Robins and Rotnitzky (1992) and van der Laan and Robins (2003), creating, in particular, algebraic equivalence between the double robust locally efficient estimators using the targeted maximum likelihood estimators as an estimate of its nuisance parameters, and targeted maximum likelihood estimators. In addition, it is argued that the targeted MLE has various advantages relative to the current estimating function based approach. We proceed by providing data driven methodologies to select the initial density estimator for the targeted MLE, thereby providing data adaptive targeted maximum likelihood estimation methodology. We illustrate the method with various worked out examples.) <|cite_end|> <|cite_start|> (Reference: BART: The San Francisco Bay Area Rapid Transit (BART) system is built to high technical standards and offers a high level of safety and reliability; it has large capacity, ample reserves, comprehensive service functions, high ride comfort, and is sustainable; it is extensively connected and interfaces effectively with other modes of transport. The system fully embodies the advanced concepts of sustainable development, safety and reliability, smooth, fast, and efficient operation, and integrated passenger transport, and is well worth drawing on for the construction and development of rail transit in China.) <|cite_end|> <|cite_start|> (Reference: Learning Representations for Counterfactual Inference: Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology.
We consider the task of answering counterfactual questions such as, "Would this patient have lower blood sugar had she received a different medication?". We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms: The need to evaluate treatment effectiveness is ubiquitous in most of empirical science, and interest in flexibly investigating effect heterogeneity is growing rapidly. To do so, a multitude of model-agnostic, nonparametric meta-learners have been proposed in recent years. Such learners decompose the treatment effect estimation problem into separate sub-problems, each solvable using standard supervised learning methods. Choosing between different meta-learners in a data-driven manner is difficult, as it requires access to counterfactual information. Therefore, with the ultimate goal of building better understanding of the conditions under which some learners can be expected to perform better than others a priori, we theoretically analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression. We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice by considering a variety of neural network architectures as base-learners for the discussed meta-learning strategies. In a simulation study, we showcase the relative strengths of the learners under different data-generating processes.) <|cite_end|> <|cite_start|> (Reference: Estimating Conditional Average Treatment Effects with Missing Treatment Information: Estimating conditional average treatment effects (CATE) is challenging, especially when treatment information is missing. Although this is a widespread problem in practice, CATE estimation with missing treatments has received little attention. In this paper, we analyze CATE estimation in the setting with missing treatments where unique challenges arise in the form of covariate shifts. We identify two covariate shifts in our setting: (i) a covariate shift between the treated and control population; and (ii) a covariate shift between the observed and missing treatment population. We first theoretically show the effect of these covariate shifts by deriving a generalization bound for estimating CATE in our setting with missing treatments. Then, motivated by our bound, we develop the missing treatment representation network (MTRNet), a novel CATE estimation algorithm that learns a balanced representation of covariates using domain adaptation. By using balanced representations, MTRNet provides more reliable CATE estimates in the covariate domains where the data are not fully observed. In various experiments with semi-synthetic and real-world data, we show that our algorithm improves over the state-of-the-art by a substantial margin.) <|cite_end|>. In contrast, we focus on longitudinal settings, that is, \emph{over time}. In fact, longitudinal data are nowadays paramount in medical practice.
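For a self-contained illustration of the static setting, a common model-agnostic baseline is a T-learner, which fits one outcome model per treatment arm and contrasts their predictions. The sketch below is a generic illustration, not one of the cited methods; scikit-learn, the choice of random forests, and all variable names are our own assumptions.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def t_learner_cate(X, a, y):
    """T-learner: fit one outcome model per treatment arm on static
    covariates X, then contrast predictions to estimate CATE."""
    mu1 = RandomForestRegressor(n_estimators=200, random_state=0)
    mu0 = RandomForestRegressor(n_estimators=200, random_state=0)
    mu1.fit(X[a == 1], y[a == 1])
    mu0.fit(X[a == 0], y[a == 0])
    return mu1.predict(X) - mu0.predict(X)  # tau(x) = mu_1(x) - mu_0(x)
\end{verbatim}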
For example, almost all electronic health records (EHRs) nowadays store sequences of medical events over time <|cite_start|> (Reference: Analyzing patient trajectories with artificial intelligence: In digital medicine, patient data typically record health events over time (eg, through electronic health records, wearables, or other sensing technologies) and thus form unique patient trajectories. Patient trajectories are highly predictive of the future course of diseases and therefore facilitate effective care. However, digital medicine often uses only limited patient data, consisting of health events from only a single or small number of time points while ignoring additional information encoded in patient trajectories. To analyze such rich longitudinal data, new artificial intelligence (AI) solutions are needed. In this paper, we provide an overview of the recent efforts to develop trajectory-aware AI solutions and provide suggestions for future directions. Specifically, we examine the implications for developing disease models from patient trajectories along the typical workflow in AI: problem definition, data processing, modeling, evaluation, and interpretation. We conclude with a discussion of how such AI solutions will allow the field to build robust models for personalized risk scoring, subtyping, and disease pathway discovery.) <|cite_end|>. However, estimating counterfactual outcomes over time is challenging. One reason is that counterfactual outcomes are generally never observed. On top of that, directly estimating counterfactual outcomes with traditional machine learning methods in the presence of (time-varying) confounding incurs a larger generalization error <|cite_start|> (Reference: Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design: Estimating heterogeneous treatment effects from observational data is a central problem in many domains. Because counterfactual data is inaccessible, the problem differs fundamentally from supervised learning, and entails a more complex set of modeling choices. Despite a variety of recently proposed algorithmic solutions, a principled guideline for building estimators of treatment effects using machine learning algorithms is still lacking. In this paper, we provide such guidelines by characterizing the fundamental limits of estimating heterogeneous treatment effects, and establishing conditions under which these limits can be achieved. Our analysis reveals that the relative importance of the different aspects of observational data vary with the sample size. For instance, we show that selection bias matters only in small-sample regimes, whereas with a large sample size, the way an algorithm models the control and treated outcomes is what bottlenecks its performance. Guided by our analysis, we build a practical algorithm for estimating treatment effects using a non-stationary Gaussian processes with doubly-robust hyperparameters. Using a standard semi-synthetic simulation setup, we show that our algorithm outperforms the state-of-the-art, and that the behavior of existing algorithms conforms with our analysis.) <|cite_end|>, or is even biased (in the case of multiple-step-ahead prediction) <|cite_start|> (Reference: Estimation of the causal effects of time-varying exposures: )
<|cite_end|> <|cite_start|> (Reference: Estimating average causal effects from patient trajectories: In medical practice, treatments are selected based on the expected causal effects on patient outcomes. Here, the gold standard for estimating causal effects are randomized controlled trials; however, such trials are costly and sometimes even unethical. Instead, medical practice is increasingly interested in estimating causal effects among patient (sub)groups from electronic health records, that is, observational data. In this paper, we aim at estimating the average causal effect (ACE) from observational data (patient trajectories) that are collected over time. For this, we propose DeepACE: an end-to-end deep learning model. DeepACE leverages the iterative G-computation formula to adjust for the bias induced by time-varying confounders. Moreover, we develop a novel sequential targeting procedure which ensures that DeepACE has favorable theoretical properties, i.e., is doubly robust and asymptotically efficient. To the best of our knowledge, this is the first work that proposes an end-to-end deep learning model tailored for estimating time-varying ACEs. We compare DeepACE in an extensive number of experiments, confirming that it achieves state-of-the-art performance. We further provide a case study for patients suffering from low back pain to demonstrate that DeepACE generates important and meaningful findings for clinical practice. Our work enables practitioners to develop effective treatment recommendations based on population effects.) <|cite_end|>. Instead, tailored methods are needed. To estimate counterfactual outcomes over time, state-of-the-art methods nowadays make use of machine learning.
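To see why tailored methods are needed, consider the following toy simulation, whose dynamics are entirely made up for illustration: a time-varying covariate drives both treatment assignment and outcome, and is itself affected by past treatment, so a naive regression of outcomes on treatments recovers a heavily biased effect even though the true effect is fixed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, T, true_effect = 10000, 10, -1.0

x = rng.normal(size=n)          # severity (time-varying confounder)
cov_ay, var_a = 0.0, 0.0
for t in range(T):
    p = 1.0 / (1.0 + np.exp(-x))        # sicker patients get treated more often
    a = rng.binomial(1, p)
    y = x + true_effect * a + 0.1 * rng.normal(size=n)
    x = 0.9 * x + 0.5 * a + 0.1 * rng.normal(size=n)  # treatment feeds back into x
    cov_ay += np.cov(a, y)[0, 1]
    var_a += a.var()

print("naive effect estimate:", cov_ay / var_a)  # biased away from -1.0
\end{verbatim}
Adjusting correctly for such time-varying confounding, without blocking the part of the treatment effect that flows through future covariates, is exactly what the methods discussed next are designed for.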
Prominent examples are recurrent marginal structural networks~(RMSNs) <|cite_start|> (Reference: Forecasting Treatment Responses Over Time Using Recurrent Marginal Structural Networks: Electronic health records provide a rich source of data for machine learning methods to learn dynamic treatment responses over time. However, any direct estimation is hampered by the presence of time-dependent confounding, where actions taken are dependent on time-varying variables related to the outcome of interest. Drawing inspiration from marginal structural models, a class of methods in epidemiology which use propensity weighting to adjust for time-dependent confounders, we introduce the Recurrent Marginal Structural Network - a sequence-to-sequence architecture for forecasting a patient's expected response to a series of planned treatments. Using simulations of a state-of-the-art pharmacokinetic-pharmacodynamic (PK-PD) model of tumor growth, we demonstrate the ability of our network to accurately learn unbiased treatment responses from observational data – even under changes in the policy of treatment assignments – and performance gains over benchmarks.) <|cite_end|>, counterfactual recurrent network~(CRN) <|cite_start|> (Reference: Estimating Counterfactual Treatment Outcomes over Time Through Adversarially Balanced Representations: Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.) <|cite_end|>, and G-Net <|cite_start|> (Reference: G-NET: At a meeting recently held by a pharmaceutical company, there were, as usual, several breakout venues and the number of participants reached several hundred; the meeting also involved multi-party audio and video interaction among the different participating sites. Yet both the host company and the participants felt that this meeting was unlike any before.) <|cite_end|>. However, these methods build upon simple long short-term memory~(LSTM) networks, which limits their ability to model complex, long-range dependencies in observational data.
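For intuition on this limitation: in an LSTM, information from step $s$ must survive $t-s$ recurrent updates to influence step $t$, whereas a causally masked self-attention layer gives step $t$ direct access to every earlier step. A minimal sketch follows (our own illustration: a single head, no learned projections, made-up names).
\begin{verbatim}
import torch

def masked_self_attention(h):
    """h: (T, d) sequence. Step t attends directly to all steps <= t."""
    T, d = h.shape
    scores = h @ h.T / d ** 0.5                         # (T, T) affinities
    future = torch.ones(T, T).triu(1).bool()
    scores = scores.masked_fill(future, float("-inf"))  # causal mask
    return torch.softmax(scores, dim=-1) @ h            # one hop to any past step
\end{verbatim}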
Long-range dependencies are omnipresent in medical data; \eg, long-term treatment effects have been observed for obesity <|cite_start|> (Reference: Effective long-term treatment of obesity: a continuing care model: ) <|cite_end|>, multiple sclerosis <|cite_start|> (Reference: Can we measure long-term treatment effects in multiple sclerosis?: ) <|cite_end|>, or diabetes <|cite_start|> (Reference: The Long-Term Effects of Type 1 Diabetes Treatment and Complications on Health-Related Quality of Life: A 23-year follow-up of the Diabetes Control and Complications / Epidemiology of Diabetes Interventions and Complications cohort: RESEARCH DESIGN AND METHODS: A total of 1,441 participants, initially 13–39 years of age, were followed for an average of 23.5 years as part of the Diabetes Control and Complications Trial (DCCT) and the Epidemiology of Diabetes Interventions and Complications (EDIC) follow-up study. The Diabetes Quality-of-Life questionnaire (DQOL) was administered annually during DCCT and every other year during EDIC. Biomedical data, including HbA1c levels, exposure to severe hypoglycemia, intercurrent psychiatric events, and development of diabetes complications were collected at regular intervals throughout the follow-up.) <|cite_end|>. To address this, we develop a \longname~(\shortname) for estimating counterfactual outcomes over time, that is, for one- and multi-step-ahead predictions. It is carefully designed to capture the complex, long-range dependencies in medical data that are nowadays common in EHRs. Our \shortname combines two innovations: (1)~a tailored transformer-based architecture to capture complex, long-range dependencies in the observational data; and (2)~a novel counterfactual domain confusion (CDC) loss for end-to-end training. For~(1), we combine three separate transformer subnetworks for processing time-varying covariates, past treatments, and past outcomes, respectively, into a joint network with in-between cross-attentions. Here, each transformer subnetwork is further extended by (i)~masked multi-head self-attention, (ii)~shared trainable relative positional encoding, and (iii)~attentional dropout. For~(2), we develop a custom end-to-end training procedure based on our CDC loss. This allows us to solve an adversarial balancing objective in which we balance representations to be (a)~predictive of outcomes and (b)~non-predictive of the current treatment assignment. The latter is crucial to address confounding bias and thus reduces the generalization error of counterfactual prediction. Importantly, this objective is different from previously proposed gradient reversal balancing <|cite_start|> (Reference: Unsupervised Domain Adaptation by Backpropagation: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary).
As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.) <|cite_end|> <|cite_start|> (Reference: Estimating Counterfactual Treatment Outcomes over Time Through Adversarially Balanced Representations: Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.) <|cite_end|>, as it aims to minimize a reversed KL-divergence to build balanced representations; a simplified sketch of this CDC objective is given after the contribution list below. We demonstrate the effectiveness of our \shortname over state-of-the-art methods using an extensive series of experiments with synthetic and real-world data. Our ablation study (e.g., against a single-subnetwork architecture) shows that neither (1) nor (2) alone is sufficient for learning. Rather, it is crucial to combine our transformer-based architecture with three subnetworks \emph{and} our novel CDC loss. Overall, our \textbf{main contributions} are as follows:\footnote{Code is available online: \url{https://github.com/Valentyn1997/CausalTransformer}} \vspace{-0.2cm} \begin{enumerate}[noitemsep] \item We propose a new end-to-end model for estimating counterfactual outcomes over time: the \longname~(\shortname). To the best of our knowledge, this is the first transformer tailored to causal inference. \item We develop a custom training procedure for our \shortname based on a novel counterfactual domain confusion (CDC) loss. \item We use synthetic and real-world data to demonstrate that our \shortname achieves state-of-the-art performance. We achieve this for both one- and multi-step-ahead predictions. \end{enumerate}
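As promised above, here is a simplified sketch of the CDC objective. It is our own rendition under simplifying assumptions, not the exact training code (see the linked repository): when updating the representation and $G_Y$, the treatment classifier's prediction is pushed toward the uniform distribution over treatments (domain confusion), while $G_A$ itself is fitted with an ordinary cross-entropy in an alternating step; $\alpha$ is an assumed trade-off weight.
\begin{verbatim}
import torch.nn.functional as F

def confusion_loss(a_logits):
    """Cross-entropy against the uniform distribution over treatments:
    minimized when the representation is non-predictive of treatment."""
    return -F.log_softmax(a_logits, dim=-1).mean()

def cdc_step(phi, y_true, a_true, G_Y, G_A, alpha):
    a_logits = G_A(phi)
    loss_repr = F.mse_loss(G_Y(phi), y_true) + alpha * confusion_loss(a_logits)
    loss_clf = F.cross_entropy(a_logits, a_true)  # alternating update of G_A
    return loss_repr, loss_clf  # applied to disjoint parameter groups
\end{verbatim}
In contrast to a gradient reversal layer, which merely flips the sign of the classification gradient, this confusion loss explicitly targets uniform, maximally uninformative treatment predictions.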
\vspace{-0.3cm} Related Work \label{sec:related-work} \paragraph{Estimating counterfactual outcomes in the static setting.} Extensive literature has focused on estimating counterfactual outcomes (or, analogously, individual treatment effects~[ITE]) in static settings <|cite_start|> (Reference: Learning Representations for Counterfactual Inference: Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We consider the task of answering counterfactual questions such as, "Would this patient have lower blood sugar had she received a different medication?". We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Bayesian Nonparametric Causal Inference: Information Rates and Learning Algorithms: We investigate the problem of estimating the causal effect of a treatment on individual subjects from observational data, this is a central problem in various application domains, including healthcare, social sciences, and online advertising. Within the Neyman Rubin potential outcomes model, we use the Kullback Leibler (KL) divergence between the estimated and true distributions as a measure of accuracy of the estimate, and we define the information rate of the Bayesian causal inference procedure as the (asymptotic equivalence class of the) expected value of the KL divergence between the estimated and true distributions as a function of the number of samples. Using Fano method, we establish a fundamental limit on the information rate that can be achieved by any Bayesian estimator, and show that this fundamental limit is independent of the selection bias in the observational data. We characterize the Bayesian priors on the potential (factual and counterfactual) outcomes that achieve the optimal information rate. As a consequence, we show that a particular class of priors that have been widely used in the causal inference literature cannot achieve the optimal information rate. On the other hand, a broader class of priors can achieve the optimal information rate. We go on to propose a prior adaptation procedure (which we call the information based empirical Bayes procedure) that optimizes the Bayesian prior by maximizing an information theoretic criterion on the recovered causal effects rather than maximizing the marginal likelihood of the observed (factual) data. Building on our analysis, we construct an information optimal Bayesian causal inference algorithm.) <|cite_end|> <|cite_start|> (Reference: Estimation and Inference of Heterogeneous Treatment Effects using Random Forests: Many scientific and engineering challenges—ranging from personalized medicine to customized marketing recommendations—require an understanding of treatment effect heterogeneity. In this article, we develop a nonparametric causal forest for estimating heterogeneous treatment effects that extends Breiman’s widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect and have an asymptotically Gaussian and centered sampling distribution.
We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical inference. In experiments, we find causal forests to be substantially more powerful than classical methods based on nearest-neighbor matching, especially in the presence of irrelevant covariates.) <|cite_end|> <|cite_start|> (Reference: Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms: The need to evaluate treatment effectiveness is ubiquitous in most of empirical science, and interest in flexibly investigating effect heterogeneity is growing rapidly. To do so, a multitude of model-agnostic, nonparametric meta-learners have been proposed in recent years. Such learners decompose the treatment effect estimation problem into separate sub-problems, each solvable using standard supervised learning methods. Choosing between different meta-learners in a data-driven manner is difficult, as it requires access to counterfactual information. Therefore, with the ultimate goal of building better understanding of the conditions under which some learners can be expected to perform better than others a priori, we theoretically analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression. We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice by considering a variety of neural network architectures as base-learners for the discussed meta-learning strategies. In a simulation study, we showcase the relative strengths of the learners under different data-generating processes.) <|cite_end|>. Several works have adapted deep learning for that purpose <|cite_start|> (Reference: Learning Representations for Counterfactual Inference: Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We consider the task of answering counterfactual questions such as, "Would this patient have lower blood sugar had she received a different medication?". We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.) <|cite_end|>. In the static setting, the input is given by cross-sectional data, and, as such, there are \emph{no} time-varying covariates, treatments, or outcomes. However, we are interested in counterfactual outcome estimation over time. \paragraph{Estimating counterfactual outcomes over time.} Methods for estimating time-varying outcomes were originally introduced in epidemiology and make widespread use of simple linear models. Here, the aim is to estimate average (non-individual) effects of time-varying treatments.
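As a concrete, single-time-step illustration of this classical approach, the sketch below shows generic inverse-probability-of-treatment weighting of the kind used by the marginal structural models discussed in the next paragraph; all variable names are made up, and in the longitudinal case the weights are products over visits and are usually stabilized.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def iptw_ate(X, a, y):
    """Weight each patient by the inverse probability of the treatment
    actually received given covariates, then fit a weighted regression
    of outcome on treatment alone (a one-step marginal structural model)."""
    e = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
    w = a / e + (1 - a) / (1 - e)          # inverse-probability weights
    msm = LinearRegression().fit(a.reshape(-1, 1), y, sample_weight=w)
    return msm.coef_[0]                    # average treatment effect
\end{verbatim}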
Examples of such methods include G-computation, marginal structural models (MSMs), and structural nested models <|cite_start|> (Reference: A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect: ) <|cite_end|> <|cite_start|> (Reference: Marginal structural models and causal inference in epidemiology: In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment. This paper introduces marginal structural models, a new class of causal models that allow for improved adjustment of confounding in those situations. The parameters of a marginal structural model can be consistently estimated using a new class of estimators, the inverse-probability-of-treatment weighted estimators.) <|cite_end|> <|cite_start|> (Reference: Marginal structural models to estimate the joint causal effect of nonrandomized treatments: Even in the absence of unmeasured confounding factors or model misspecification, standard methods for estimating the causal effect of time-varying treatments on survival are biased when (a) there exists a time-dependent risk factor for survival that also predicts subsequent treatment, and (b) past treatment history predicts subsequent risk factor level. In contrast, methods based on marginal structural models (MSMs) can provide consistent estimates of causal effects when unmeasured confounding and model misspecification are absent. MSMs are a new class of causal models whose parameters are estimated using a new class of estimators—inverse-probability-of-treatment weighted estimators. We use a marginal structural Cox proportional hazards model to estimate the joint effect of zidovudine (AZT) and prophylaxis therapy for Pneumocystis carinii pneumonia on the survival of HIV-positive men in the Multicenter AIDS Cohort Study, an observational study of homosexual men. We obtained an estimated causal mortality rate (hazard) ratio of .67 (conservative 95% confidence interval .46-.98) for AZT and of 1.14 (.79, 1.64) for prophylaxis therapy. These estimates will be consistent for the true causal rate ratios when the functional forms chosen for our models are correct and data have been obtained on all time-independent and time-dependent covariates that predict both subsequent treatment and mortality.) <|cite_end|> <|cite_start|> (Reference: Estimation of the causal effects of time-varying exposures: )
<|cite_end|>. To address the limited expressiveness of linear models, several Bayesian non-parametric methods were proposed <|cite_start|> (Reference: Comparison of Chebyshev's Inequality and Non-parametric B-Basis to Estimate Failure Strength of Composite Open Hole Tension Tests: B-basis failure strength represents the lower 10 percentile with 95% confidence level. In many risk averse applications the true statistical distribution is unknown, and the B-basis is calculated using a non-parametric formulation. Chebyshev's inequality makes no assumption about the statistical distribution and can be used to bound the 10 percentile. It is possible to improve these bounds by restricting Chebyshev's inequality to a class of statistical distributions. B-basis failure strengths are compared using these methods on a collection of composite open hole tension tests.) <|cite_end|> <|cite_start|> (Reference: Reliable Decision Support using Counterfactual Models: Decision-makers are faced with the challenge of estimating what is likely to happen when they take an action. For instance, if I choose not to treat this patient, are they likely to die? Practitioners commonly use supervised learning algorithms to fit predictive models that help decision-makers reason about likely future outcomes, but we show that this approach is unreliable, and sometimes even dangerous. The key issue is that supervised learning algorithms are highly sensitive to the policy used to choose actions in the training data, which causes the model to capture relationships that do not generalize. We propose using a different learning objective that predicts counterfactuals instead of predicting outcomes under an existing action policy as in supervised learning. To support decision-making in temporal settings, we introduce the Counterfactual Gaussian Process (CGP) to predict the counterfactual future progression of continuous-time trajectories under sequences of future actions. We demonstrate the benefits of the CGP on two important decision-support tasks: risk prediction and "what if?" reasoning for individualized treatment planning.) <|cite_end|> <|cite_start|> (Reference: Treatment-Response Models for Counterfactual Reasoning with Continuous-time, Continuous-valued Interventions: Treatment effects can be estimated from observational data as the difference in potential outcomes. In this paper, we address the challenge of estimating the potential outcome when treatment-dose levels can vary continuously over time. Further, the outcome variable may not be measured at a regular frequency.
Our proposed solution represents the treatment response curves using linear time-invariant dynamical systems---this provides a flexible means for modeling response over time to highly variable dose curves. Moreover, for multivariate data, the proposed method: uncovers shared structure in treatment response and the baseline across multiple markers; and, flexibly models challenging correlation structure both across and within signals over time. For this, we build upon the framework of multiple-output Gaussian Processes. On simulated and a challenging clinical dataset, we show significant gains in accuracy over state-of-the-art models.) <|cite_end|>. However, these make strong assumptions regarding the data generation mechanism, and are not designed for multi-dimensional outcomes or static covariates. Other methods build upon recurrent neural networks <|cite_start|> (Reference: Disentangled Counterfactual Recurrent Networks for Treatment Effect Inference over Time: Choosing the best treatment-plan for each individual patient requires accurate forecasts of their outcome trajectories as a function of the treatment, over time. While large observational data sets constitute rich sources of information to learn from, they also contain biases as treatments are rarely assigned randomly in practice. To provide accurate and unbiased forecasts, we introduce the Disentangled Counterfactual Recurrent Network (DCRN), a novel sequence-to-sequence architecture that estimates treatment outcomes over time by learning representations of patient histories that are disentangled into three separate latent factors: a treatment factor, influencing only treatment selection; an outcome factor, influencing only the outcome; and a confounding factor, influencing both. With an architecture that is completely inspired by the causal structure of treatment influence over time, we advance forecast accuracy and disease understanding, as our architecture allows for practitioners to infer which patient features influence which part in a patient's trajectory, contrasting other approaches in this domain. We demonstrate that DCRN outperforms current state-of-the-art methods in forecasting treatment responses, on both real and simulated data.) <|cite_end|>, but these are restricted to single-time treatments or make stronger assumptions for identifiability, which do not hold for our setting (see Appendix~\ref{app:methods-table}). There are several methods that build upon the potential outcomes framework <|cite_start|> (Reference: Bayesian Inference for Causal Effects: The Role of Randomization: ) <|cite_end|> <|cite_start|> (Reference: Estimation of the causal effects of time-varying exposures: )
<|cite_end|>, and thus ensure identifiability by making the same assumptions as we do (see Sec.~\ref{sec:problem-formulation}). Here, state-of-the-art methods are recurrent marginal structural networks~(RMSNs) <|cite_start|> (Reference: Forecasting Treatment Responses Over Time Using Recurrent Marginal Structural Networks: Electronic health records provide a rich source of data for machine learning methods to learn dynamic treatment responses over time. However, any direct estimation is hampered by the presence of time-dependent confounding, where actions taken are dependent on time-varying variables related to the outcome of interest. Drawing inspiration from marginal structural models, a class of methods in epidemiology which use propensity weighting to adjust for time-dependent confounders, we introduce the Recurrent Marginal Structural Network - a sequence-to-sequence architecture for forecasting a patient's expected response to a series of planned treatments. Using simulations of a state-of-the-art pharmacokinetic-pharmacodynamic (PK-PD) model of tumor growth, we demonstrate the ability of our network to accurately learn unbiased treatment responses from observational data – even under changes in the policy of treatment assignments – and performance gains over benchmarks.) <|cite_end|>, counterfactual recurrent network~(CRN) <|cite_start|> (Reference: Estimating Counterfactual Treatment Outcomes over Time Through Adversarially Balanced Representations: Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods.
<|cite_end|>, and G-Net <|cite_start|> (Reference: G-NET: At a meeting recently held by a pharmaceutical company, there were, as usual, several breakout venues and the number of participants reached several hundred; the meeting also involved multi-party audio and video interaction among the different participating sites. Yet both the host company and the participants felt that this meeting was unlike any before.) <|cite_end|>. These methods address bias due to time-varying confounding in different ways. RMSNs combine two propensity networks and use the predicted inverse probability of treatment weighting~(IPTW) scores for training the prediction networks. CRN uses an adversarial objective to produce a sequence of balanced representations, which are simultaneously predictive of the outcome but non-predictive of the current treatment assignment. G-Net aims to predict both outcomes and time-varying covariates, and then performs G-computation for multiple-step-ahead prediction. All three aforementioned methods are built on top of one- or two-layer LSTM encoder-decoder architectures. Because of that, they are limited in their ability to capture long-range, complex dependencies between time-varying confounders (\ie, time-varying covariates, previous treatments, and previous outcomes). However, such complex data are nowadays widespread in medical practice (\eg, EHRs) <|cite_start|> (Reference: Analyzing patient trajectories with artificial intelligence: In digital medicine, patient data typically record health events over time (eg, through electronic health records, wearables, or other sensing technologies) and thus form unique patient trajectories. Patient trajectories are highly predictive of the future course of diseases and therefore facilitate effective care. However, digital medicine often uses only limited patient data, consisting of health events from only a single or small number of time points while ignoring additional information encoded in patient trajectories. To analyze such rich longitudinal data, new artificial intelligence (AI) solutions are needed. In this paper, we provide an overview of the recent efforts to develop trajectory-aware AI solutions and provide suggestions for future directions. Specifically, we examine the implications for developing disease models from patient trajectories along the typical workflow in AI: problem definition, data processing, modeling, evaluation, and interpretation. We conclude with a discussion of how such AI solutions will allow the field to build robust models for personalized risk scoring, subtyping, and disease pathway discovery.) <|cite_end|>, which may impede the performance of the previous methods for real-world medical data. As a remedy, we develop a \emph{deep} transformer network for counterfactual outcome estimation over time. \paragraph{Transformers.} Transformers refer to deep neural networks for sequential data that typically adopt a custom self-attention mechanism <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU.
On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>. This makes transformers both flexible and powerful in modeling long-range associative dependencies for sequence-to-sequence tasks. Prominent examples come from natural language processing (e.g., BERT <|cite_start|> (Reference: Bert: Many current NLP tasks, including automatic punctuation, depend on effectively solving a prediction problem: determining which token should come next. This work considers the subtask of predicting the next token based on the preceding ones. The main problem of existing approaches is that they are not equally effective. To address this problem, this work considers the use of the bidirectional encoders of the BERT model on tokenized data.) <|cite_end|>, RoBERTa <|cite_start|> (Reference: Roberta: This is a book with a clear message and position. It argues that Jews cannot embrace the cultural components of Judaism without appreciating the legal aspects of the Jewish tradition. The author also endeavours to apply to Jewish law the method of cultural analysis used in secular legal studies. The book is also a call for a culturally nuanced approach to halakhah in order to meet contemporary challenges. This involves the rejection of Jewish law solely as the embodiment of divine law and instead offers an argument for the ‘‘gray of compromise’’—for diversity in a Judaism that openly accepts inconvenient social realities but that also recognizes the paramountcy of the halakhah as the basis of all Jewish life and thought. It seeks to find a middle ground between privileging the mesorah and the aggada. This ideological position might be considered predictable given the author’s background as a distinguished legal scholar, a Conservadox Jew and a woman who aspires to being a fully accepted and participating member of the community. That is not to say the arguments are all obvious. If one ignores the author’s repetitious obsession of trying to fit everything into a cultural analysis paradigm approach to Judaism she displays an impressive knowledge of traditional texts and sources. The historical coverage of the origins and norms of Jewish law and their development by the emerging Jewish denominations is thorough and rigorous. The contemporary topics surveyed in the chapters cover almost the whole ballpark of the issues facing Jews today. They include the role of the Israeli state; Women and synagogue ritual; Homosexuality; Who is a Jew?; Sabbath laws; Jewish identity in the U.S.; the BDS movement and anti-Semitism. The material covering questions and controversies where there is a clear cut relevance to halakhah are) <|cite_end|>, and GPT-3 <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>). Other examples include image understanding tasks <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited.
In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|>, multi-modal tasks (image/video captioning) <|cite_start|> (Reference: CPTR: In general, a Cassegrain dual offset reflector is used as a satellite communication antenna, but here it is analyzed as a reflector system for a CPTR (Compact Payload Test Range). The near field in the test zone is calculated by applying physical optics. A CPTR aims to provide a uniform plane wave; to this end, it must exhibit minimal amplitude and phase ripple, and its cross-polarization must also be small. In this paper, we therefore compute the near-field patterns as a function of the reflector geometry and the position of the test zone, and examine the field ripple, taper, and cross-polarization. In particular, we examine the cross-polarization component along the antenna axis, which does not appear in communication reflector antennas.) <|cite_end|>, math problem solving <|cite_start|> (Reference: Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving: We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure. Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently-introduced Mathematics Dataset containing 56 categories of free-form math word-problems. The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of standard attention. The TP-Transformer's attention maps give better insights into how it is capable of solving the Mathematics Dataset's challenging problems. Pretrained models and code will be made available after publication.) <|cite_end|>, and time-series forecasting <|cite_start|> (Reference: Probabilistic Transformer For Time Series Analysis: Generative modeling of multivariate time series has remained challenging partly due to the complex, non-deterministic dynamics across long-distance time steps. In this paper, we propose deep probabilistic methods that combine state-space models (SSMs) with transformer architectures. In contrast to previously proposed SSMs, our approaches use attention mechanism to model non-Markovian dynamics in the latent space and avoid recurrent neural networks entirely. We also extend our models to include several layers of stochastic variables organized in a hierarchy for further expressiveness. Compared to transformer models, ours are probabilistic, non-autoregressive, and capable of generating diverse long-term forecasts with accounted uncertainty. Extensive experiments show that our models consistently outperform competitive baselines on various tasks and datasets, including time series forecasting and human motion prediction.
<|cite_end|> <|cite_start|> (Reference: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \log L)$ in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.) <|cite_end|>. However, to the best of our knowledge, no paper has developed transformers specifically for causal inference. This gap constitutes the novelty of our work. <|paper_end|>
[ "<|reference_start|> Estimation of the causal effects of time-varying exposures: This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying , microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Longitudinal data analysis / editors, Garrett Fitzmaurice ... [et al.]. p. cm.-(Chapman and Hall/CRC series of handbooks of modern statistical methods) Includes bibliographical references and index. <|reference_end|>", "<|reference_start|> Comparison of Chebyshev ’ s Inequality and Non-parametric B-Basis to Estimate Failure Strength of Composite Open Hole Tension Tests: B-basis failure strength represents the lower 10 percentile with 95% confidence level. In many risk averse applications the true statistical distribution is unknown, and the B-basis is calculated using a non-parametric formulation. Chebyshev’s inequality makes no assumption about the statistical distribution and can be used to bound the 10 percentile. It is possible to improve these bounds by restricting Chebyshev’s inequality to a class of statistical distributions. B-basis failure strengths are compared using these methods on a collection of composite open hole tension tests. <|reference_end|>", "<|reference_start|> Estimating Counterfactual Treatment Outcomes over Time Through Adversarially Balanced Representations: Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. 
On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods. <|reference_end|>", "<|reference_start|> {CPTR: 일반적으로 카세그레인 오프셋 복 반사경(Cassegrain Dual Offset Reflector)은 위성 통신용 안테나로 사용되지만, 여기서는 CPTR(Compact Payload Test Range)을 위한 반사경 시스템으로 해석하였다. 시험 영역의 근접 전계는 물리 광학법(Physical Optics)을 적용하여 계산하였다. CPTR은 균일한 평면파 제공을 목적으로 하며, 이를 위해 최소한의 진폭과 위상 리플(ripple)을 가져야 하며, 교차 편파 또한 작아야 한다. 따라서 본 논문에서는 반사경 구조 및 시험 영역의 위치에 따른 근접 전계 패턴을 구하여 전계의 리플, 테이퍼와 교차 편파를 고찰하였다. 특히 통신용 반사경 안테나에서는 나타나지 않는 안테나 축방향의 교차 편파 성분을 고찰하였다. <|reference_end|>" ]
[ 27, 28, 35, 43 ]
{"<|multi_cite_1_1|>": "ss-2127314", "<|multi_cite_1_2|>": "ss-1221638", "<|multi_cite_2_1|>": "ss-1180758", "<|multi_cite_2_2|>": "ss-1532639", "<|multi_cite_2_3|>": "arxiv-97793", "<|multi_cite_2_4|>": "arxiv-317280", "<|multi_cite_2_5|>": "arxiv-402937", "<|cite_3|>": "ss-1221639", "<|cite_4|>": "ss-1384915", "<|multi_cite_5_1|>": "ss-1093855", "<|multi_cite_5_2|>": "arxiv-402860", "<|cite_6|>": "ss-922898", "<|cite_7|>": "arxiv-247508", "<|cite_8|>": "ss-842755", "<|cite_9|>": "ss-1221640", "<|cite_10|>": "ss-1221641", "<|cite_11|>": "ss-1221642", "<|multi_cite_12_1|>": "arxiv-66621", "<|multi_cite_12_2|>": "arxiv-247508", "<|multi_cite_13_1|>": "arxiv-97793", "<|multi_cite_13_2|>": "arxiv-143951", "<|multi_cite_13_3|>": "ss-1285805", "<|multi_cite_13_5|>": "arxiv-317280", "<|multi_cite_14_1|>": "arxiv-97793", "<|multi_cite_15_1|>": "ss-1093858", "<|multi_cite_15_2|>": "ss-1522517", "<|multi_cite_15_3|>": "ss-2539972", "<|multi_cite_15_4|>": "ss-1093855", "<|multi_cite_16_1|>": "ss-708753", "<|multi_cite_16_2|>": "arxiv-120462", "<|multi_cite_16_3|>": "arxiv-121057", "<|multi_cite_17_2|>": "arxiv-385969", "<|multi_cite_18_1|>": "ss-1387487", "<|multi_cite_18_2|>": "ss-1093855", "<|cite_19|>": "ss-922898", "<|cite_20|>": "arxiv-247508", "<|cite_21|>": "ss-842755", "<|cite_22|>": "ss-1221639", "<|cite_23|>": "arxiv-126595", "<|cite_24|>": "ss-1457177", "<|cite_25|>": "ss-933653", "<|cite_26|>": "ss-832115", "<|cite_27|>": "arxiv-298443", "<|cite_28|>": "ss-1221643", "<|cite_29|>": "arxiv-228902", "<|multi_cite_30_1|>": "ss-1361054", "<|multi_cite_30_2|>": "arxiv-309848"}
2404.12604
<|paper_start|> Title: Transmitter Side Beyond-Diagonal RIS for mmWave Integrated Sensing and Communications Abstract: Transmitter Side Beyond-Diagonal RIS for mmWave Integrated Sensing and Communications: This work initiates the study of a beyond-diagonal reconfigurable intelligent surface (BD-RIS)-aided transmitter architecture for integrated sensing and communication (ISAC) in the millimeter-wave (mmWave) frequency band. Deploying BD-RIS at the transmitter side not only alleviates the need for extensive fully digital radio frequency (RF) chains but also enhances both communication and sensing performance. These benefits are facilitated by the additional design flexibility introduced by the fully-connected scattering matrix of BD-RIS. To achieve the aforementioned benefits, in this work, we propose an efficient two-stage algorithm to design the digital beamforming of the transmitter and the scattering matrix of the BD-RIS with the aim of jointly maximizing the sum rate for multiple communication users and minimizing the largest eigenvalue of the Cramer-Rao bound (CRB) matrix for multiple sensing targets. Numerical results show that the transmitter-side BD-RIS-aided mmWave ISAC outperforms the conventional diagonal-RIS-aided ones in both communication and sensing performance. Introduction Integrated sensing and communication (ISAC) has emerged as a critical enabler for next-generation wireless networks. This is attributed to its potential for sharing spectrum, hardware architectures, and signal processing modules between the communication and sensing functionalities. Meanwhile, the incorporation of millimeter-wave (mmWave) technology opens the door to high data rates for communications and high-resolution capabilities for target sensing <|cite_start|> (Reference: Integrated Sensing and Communications: Toward Dual-Functional Wireless Networks for 6G and Beyond: As the standardization of 5G solidifies, researchers are speculating what 6G will be. The integration of sensing functionality is emerging as a key feature of the 6G Radio Access Network (RAN), allowing for the exploitation of dense cell infrastructures to construct a perceptive network. In this IEEE Journal on Selected Areas in Communications (JSAC) Special Issue overview, we provide a comprehensive review on the background, range of key applications and state-of-the-art approaches of Integrated Sensing and Communications (ISAC). We commence by discussing the interplay between sensing and communications (S&C) from a historical point of view, and then consider the multiple facets of ISAC and the resulting performance gains. By introducing both ongoing and potential use cases, we shed light on the industrial progress and standardization activities related to ISAC. We analyze a number of performance tradeoffs between S&C, spanning from information theoretical limits to physical layer performance tradeoffs, and the cross-layer design tradeoffs. Next, we discuss the signal processing aspects of ISAC, namely ISAC waveform design and receive signal processing. As a step further, we provide our vision on the deeper integration between S&C within the framework of perceptive networks, where the two functionalities are expected to mutually assist each other, i.e., via communication-assisted sensing and sensing-assisted communications. Finally, we identify the potential integration of ISAC with other emerging communication technologies, and their positive impacts on the future of wireless networks.) <|cite_end|>.
Therefore, mmWave holds great promise for ISAC systems. However, owing to the severe path loss caused by the short wavelength characteristics of mmWave, a large number of transmit antennas along with extensive use of fully digital radio frequency (RF) chains are normally required at the transmitter to achieve high beamforming gain, resulting in huge power consumption <|cite_start|> (Reference: Intelligent Surface-Aided Transmitter Architectures for Millimeter-Wave Ultra Massive MIMO Systems: In this article, we study two novel massive multiple-input multiple-output (MIMO) transmitter architectures for millimeter wave (mmWave) communications which comprise few active antennas, each equipped with a dedicated radio frequency (RF) chain, that illuminate a nearby large intelligent reflecting/transmitting surface (IRS/ITS). The IRS (ITS) consists of a large number of low-cost and energy-efficient passive antenna elements which are able to reflect (transmit) a phase-shifted version of the incident electromagnetic field. Similar to lens array (LA) antennas, IRS/ITS-aided antenna architectures are energy efficient due to the almost lossless over-the-air connection between the active antennas and the intelligent surface. However, unlike for LA antennas, for which the number of active antennas has to linearly grow with the number of passive elements (i.e., the lens aperture) due to the non-reconfigurablility (i.e., non-intelligence) of the lens, for IRS/ITS-aided antennas, the reconfigurablility of the IRS/ITS facilitates scaling up the number of radiating passive elements without increasing the number of costly and bulky active antennas. We show that the constraints that the precoders for IRS/ITS-aided antennas have to meet differ from those of conventional MIMO architectures. Taking these constraints into account and exploiting the sparsity of mmWave channels, we design two efficient precoders; one based on maximizing the mutual information and one based on approximating the optimal unconstrained fully digital (FD) precoder via the orthogonal matching pursuit algorithm. Furthermore, we develop a power consumption model for IRS/ITS-aided antennas that takes into account the impacts of the IRS/ITS imperfections, namely the spillover loss, taper loss, aperture loss, and phase shifter loss. Moreover, we study the effect that the various system parameters have on the achievable rate and show that a proper positioning of the active antennas with respect to the IRS/ITS leads to a considerable performance improvement. Our simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting (passive) antennas. Therefore, IRS/ITS-aided antennas are promising candidates for realizing the potential of mmWave ultra massive MIMO communications in practice.) <|cite_end|>. This calls for low-cost solutions for mmWave ISAC systems. One promising solution for mmWave ISAC is to utilize a reconfigurable intelligent surface (RIS) to assist the transmission. A RIS, composed of numerous passive elements, can reconfigure the wireless propagation environment for communication and sensing functionalities in the mmWave band <|cite_start|> (Reference: {CRB: One of the important issues in many of array systems such as Radar, Sonar, Mobile, and satellite telecommunications is the estimation of DOA of narrowband received signal. CRB is very important in evaluation of parameter estimation.
CRB is the lower bound estimation error variance for any unbiased estimation. In this paper, the array antenna with equal distance arrays is extended in two separated subarrays. At first we study the lower bound of estimation error variance for Direction-of-Arrival in array antennas using CRB method. Then, with extending the above method, the estimation error variance for Direction-of-Arrival in array antennas with two separated subarrays is computed. It is observed that if the size of array increases, the estimation accuracy also increases. But the cost of array and complication of the system also increase. Therefore, we suggest using array antennas with separated subarrays. Furthermore, when signal to noise ratio in the communications system is low, by using of array antennas with two separated subarrays, the Direction-of-Arrival is estimated with high accuracy. Simulation results show that as the distance between the two subarrays and the distance between the antennas increase, the estimation error variance decreases. It should be noted that the distance between antennas should not be more than wavelength of received signal. This causes the ambiguity in estimation and grows up the sidelobes) <|cite_end|>. Moreover, a RIS adjusts the phase of the incident signals at ultra-low power cost, which eliminates the need for extensive RF processing at the transmitter <|cite_start|> (Reference: Intelligent Surface-Aided Transmitter Architectures for Millimeter-Wave Ultra Massive MIMO Systems: In this article, we study two novel massive multiple-input multiple-output (MIMO) transmitter architectures for millimeter wave (mmWave) communications which comprise few active antennas, each equipped with a dedicated radio frequency (RF) chain, that illuminate a nearby large intelligent reflecting/transmitting surface (IRS/ITS). The IRS (ITS) consists of a large number of low-cost and energy-efficient passive antenna elements which are able to reflect (transmit) a phase-shifted version of the incident electromagnetic field. Similar to lens array (LA) antennas, IRS/ITS-aided antenna architectures are energy efficient due to the almost lossless over-the-air connection between the active antennas and the intelligent surface. However, unlike for LA antennas, for which the number of active antennas has to linearly grow with the number of passive elements (i.e., the lens aperture) due to the non-reconfigurablility (i.e., non-intelligence) of the lens, for IRS/ITS-aided antennas, the reconfigurablility of the IRS/ITS facilitates scaling up the number of radiating passive elements without increasing the number of costly and bulky active antennas. We show that the constraints that the precoders for IRS/ITS-aided antennas have to meet differ from those of conventional MIMO architectures. Taking these constraints into account and exploiting the sparsity of mmWave channels, we design two efficient precoders; one based on maximizing the mutual information and one based on approximating the optimal unconstrained fully digital (FD) precoder via the orthogonal matching pursuit algorithm. Furthermore, we develop a power consumption model for IRS/ITS-aided antennas that takes into account the impacts of the IRS/ITS imperfections, namely the spillover loss, taper loss, aperture loss, and phase shifter loss.
Moreover, we study the effect that the various system parameters have on the achievable rate and show that a proper positioning of the active antennas with respect to the IRS/ITS leads to a considerable performance improvement. Our simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting (passive) antennas. Therefore, IRS/ITS-aided antennas are promising candidates for realizing the potential of mmWave ultra massive MIMO communications in practice.) <|cite_end|>. However, most of the existing studies on RIS-aided mmWave ISAC <|cite_start|> (Reference: {CRB: One of the important issues in many of array systems such as Radar, Sonar, Mobile, and satellite telecommunications is the estimation of DOA of narrowband received signal. CRB is very important in evaluation of parameter estimation. CRB is the lower bound estimation error variance for any unbiased estimation. In this paper, the array antenna with equal distance arrays is extended in two separated subarrays. At first we study the lower bound of estimation error variance for Direction-of-Arrival in array antennas using CRB method. Then, with extending the above method, the estimation error variance for Direction-of-Arrival in array antennas with two separated subarrays is computed. It is observed that if the size of array increases, the estimation accuracy also increases. But the cost of array and complication of the system also increase. Therefore, we suggest using array antennas with separated subarrays. Furthermore, when signal to noise ratio in the communications system is low, by using of array antennas with two separated subarrays, the Direction-of-Arrival is estimated with high accuracy. Simulation results show that as the distance between the two subarrays and the distance between the antennas increase, the estimation error variance decreases. It should be noted that the distance between antennas should not be more than wavelength of received signal. This causes the ambiguity in estimation and grows up the sidelobes) <|cite_end|> primarily concentrate on receiver-side RIS, placing more emphasis on the former benefit of RIS, namely propagation reconfiguration. In addition, a revolutionary RIS architecture named beyond-diagonal RIS (BD-RIS) has been recently proposed <|cite_start|> (Reference: Modeling and Architecture Design of Reconfigurable Intelligent Surfaces Using Scattering Parameter Network Analysis: Reconfigurable intelligent surfaces (RISs) are an emerging technology for future wireless communication. The vast majority of recent research on RIS has focused on system level optimizations. However, developing straightforward and tractable electromagnetic models that are suitable for RIS aided communication modeling remains an open issue. In this paper, we address this issue and derive communication models by using rigorous scattering parameter network analysis. We also propose new RIS architectures based on group and fully connected reconfigurable impedance networks that can adjust not only the phases but also the magnitudes of the impinging waves, which are more general and more efficient than conventional single connected reconfigurable impedance network that only adjusts the phases of the impinging waves. In addition, the scaling law of the received signal power of an RIS aided system with reconfigurable impedance networks is also derived.
Compared with the single connected reconfigurable impedance network, our group and fully connected reconfigurable impedance network can increase the received signal power by up to 62%, or maintain the same received signal power with a number of RIS elements reduced by up to 21%. We also investigate the proposed architecture in deployments with distance-dependent pathloss and Rician fading channel, and show that the proposed group and fully connected reconfigurable impedance networks outperform the single connected case by up to 34% and 48%, respectively.) <|cite_end|> <|cite_start|> (Reference: Beyond Diagonal Reconfigurable Intelligent Surfaces: From Transmitting and Reflecting Modes to Single-, Group-, and Fully-Connected Architectures: Reconfigurable intelligent surfaces (RISs) are envisioned as a promising technology for future wireless communications. With various hardware realizations, RISs can work under different modes (reflective/transmissive/hybrid) or have different architectures (single/group/fully-connected). However, most existing research focused on single-connected reflective RISs, mathematically characterized by diagonal phase shift matrices, while there is a lack of a comprehensive study for RISs unifying different modes/architectures. In this paper, we solve this issue by analyzing and proposing a general RIS-aided communication model. Specifically, we establish an RIS model not limited to diagonal phase shift matrices, a novel branch referred to as beyond diagonal RIS (BD-RIS), unifying modes and architectures. With the proposed model, we develop efficient algorithms to jointly design transmit precoder and BDRIS matrix to maximize the sum-rate for RIS-aided systems. We also provide simulation results to compare the performance of BD-RISs with different modes/architectures. Simulation results show that under the same mode, fully- and group-connected RIS can effectively increase the sum-rate performance compared with single-connected RIS, and that hybrid RIS outperforms reflective/transmissive RIS with the same architecture.) <|cite_end|>. Different from conventional diagonal-RIS (D-RIS) where each element operates independently, in fully-connected BD-RIS <|cite_start|> (Reference: Beyond Diagonal Reconfigurable Intelligent Surfaces: From Transmitting and Reflecting Modes to Single-, Group-, and Fully-Connected Architectures: Reconfigurable intelligent surfaces (RISs) are envisioned as a promising technology for future wireless communications. With various hardware realizations, RISs can work under different modes (reflective/transmissive/hybrid) or have different architectures (single/group/fully-connected). However, most existing research focused on single-connected reflective RISs, mathematically characterized by diagonal phase shift matrices, while there is a lack of a comprehensive study for RISs unifying different modes/architectures. In this paper, we solve this issue by analyzing and proposing a general RIS-aided communication model. Specifically, we establish an RIS model not limited to diagonal phase shift matrices, a novel branch referred to as beyond diagonal RIS (BD-RIS), unifying modes and architectures. With the proposed model, we develop efficient algorithms to jointly design transmit precoder and BDRIS matrix to maximize the sum-rate for RIS-aided systems. We also provide simulation results to compare the performance of BD-RISs with different modes/architectures. 
Simulation results show that under the same mode, fully- and group-connected RIS can effectively increase the sum-rate performance compared with single-connected RIS, and that hybrid RIS outperforms reflective/transmissive RIS with the same architecture.) <|cite_end|>, all elements are connected to each other. Recent studies <|cite_start|> (Reference: A dual-function radar-communication system empowered by beyond diagonal reconfigurable intelligent surface: This work focuses on the use of reconfigurable intelligent surface (RIS) in dual-function radar-communication (DFRC) systems to improve communication capacity and sensing precision, and enhance coverage for both functions. In contrast to most of the existing RIS aided DFRC works where the RIS is modeled as a diagonal phase shift matrix and can only reflect signals to half space, we propose a novel beyond diagonal RIS (BD-RIS) aided DFRC system. Specifically, the proposed BD-RIS supports the hybrid reflecting and transmitting mode, and is compatible with flexible single/group/fully-connected architectures, enabling the system to realize full-space coverage. To achieve the expected benefits, we jointly optimize the transmit waveform, the BD-RIS coefficients, and sensing receive filters, by maximizing the minimum signal-to-clutter-plus-noise ratio for fair target detection, subject to the constraints of the communication quality of service, different BD-RIS architectures and power budget. To solve the non-convex and non-smooth max-min problem, a general solution based on the alternating direction method of multipliers is provided for all considered BD-RIS architectures. Numerical simulations validate the efficacy of the proposed algorithm and show the superiority of the BD-RIS aided DFRC system in terms of both communication and sensing compared to conventional RIS aided DFRC.) <|cite_end|> <|cite_start|> (Reference: Enhancing ISAC Network Throughput Using Beyond Diagonal RIS: Emerging literature has shown that deploying reconfigurable intelligent surface (RIS) can remarkably promote integrated sensing and communication (ISAC) system’s performance. Meanwhile, the emerging novel beyond-diagonal (BD)-RIS architecture has manifested its superior beamforming capability over the conventional diagonal RIS. This letter investigates utilizing fully-connected BD-RIS to improve ISAC system’s throughput while ensuring sensing quality, By utilizing majorization-minimization (MM) and penalty-dual-decomposition (PDD) method, we develop an efficient algorithm to tackle the orthogonality condition and non-convex quartic inequality involving BD-RIS. Numerical results demonstrate the effectiveness of our solution and the benefit of BD-RIS deployment in ISAC network.) <|cite_end|> have shown the superior performance gain of the receiver-side BD-RIS over conventional D-RIS in terms of both communication and sensing performance. However, the transmitter-side BD-RIS for ISAC systems and its application in the mmWave frequency band have not been investigated yet.
Inspired by the transmitter-side RIS-aided communication network introduced in <|cite_start|> (Reference: Intelligent Surface-Aided Transmitter Architectures for Millimeter-Wave Ultra Massive MIMO Systems: In this article, we study two novel massive multiple-input multiple-output (MIMO) transmitter architectures for millimeter wave (mmWave) communications which comprise few active antennas, each equipped with a dedicated radio frequency (RF) chain, that illuminate a nearby large intelligent reflecting/transmitting surface (IRS/ITS). The IRS (ITS) consists of a large number of low-cost and energy-efficient passive antenna elements which are able to reflect (transmit) a phase-shifted version of the incident electromagnetic field. Similar to lens array (LA) antennas, IRS/ITS-aided antenna architectures are energy efficient due to the almost lossless over-the-air connection between the active antennas and the intelligent surface. However, unlike for LA antennas, for which the number of active antennas has to linearly grow with the number of passive elements (i.e., the lens aperture) due to the non-reconfigurablility (i.e., non-intelligence) of the lens, for IRS/ITS-aided antennas, the reconfigurablility of the IRS/ITS facilitates scaling up the number of radiating passive elements without increasing the number of costly and bulky active antennas. We show that the constraints that the precoders for IRS/ITS-aided antennas have to meet differ from those of conventional MIMO architectures. Taking these constraints into account and exploiting the sparsity of mmWave channels, we design two efficient precoders; one based on maximizing the mutual information and one based on approximating the optimal unconstrained fully digital (FD) precoder via the orthogonal matching pursuit algorithm. Furthermore, we develop a power consumption model for IRS/ITS-aided antennas that takes into account the impacts of the IRS/ITS imperfections, namely the spillover loss, taper loss, aperture loss, and phase shifter loss. Moreover, we study the effect that the various system parameters have on the achievable rate and show that a proper positioning of the active antennas with respect to the IRS/ITS leads to a considerable performance improvement. Our simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting (passive) antennas. Therefore, IRS/ITS-aided antennas are promising candidates for realizing the potential of mmWave ultra massive MIMO communications in practice.) <|cite_end|> <|cite_start|> (Reference: Hybrid Beamforming Design for ITS-Assisted Wireless Networks: This letter proposes a hybrid beamforming design for an intelligent transmissive surface (ITS)-assisted transmitter wireless network. We aim to suppress the sidelobes and optimize the mainlobes of the transmit beams by minimizing the proposed cost function based on the least squares (LS) for the digital beamforming vector of the base station (BS) and the phase shifts of the ITS. To solve the minimization problem, we adopt an efficient algorithm based on the alternating optimization (AO) method to design the digital beamforming vector and the phase shifts of the ITS in an alternating manner. In particular, the alternating direction method of multipliers (ADMM) algorithm is utilized to obtain the optimal phase shifts of the ITS. 
Finally, we verify the improvement achieved by the proposed algorithm in terms of the beam response compared to the benchmark schemes by the simulation results.) <|cite_end|> <|cite_start|> (Reference: Transmitter Side Beyond-Diagonal Reconfigurable Intelligent Surface for Massive MIMO Networks: This letter focuses on a transmitter or base station (BS) side beyond-diagonal reflecting intelligent surface (BD-RIS) deployment strategy to enhance the spectral efficiency (SE) of a time-division-duplex massive multiple-input multiple-output (MaMIMO) network. In this strategy, the active antenna array utilizes a BD-RIS at the BS to serve multiple users in the downlink. Based on the knowledge of statistical channel state information (CSI), the BD-RIS coefficients matrix is optimized by employing a novel manifold algorithm, and the power control coefficients are then optimized with the objective of maximizing the minimum SE. Through numerical results we illustrate the SE performance of the proposed transmission framework and compare it with that of a conventional MaMIMO transmission for different network settings.) <|cite_end|>, in this work, we initiate the study of a transmitter-side BD-RIS-aided mmWave ISAC system. Specifically, we design the digital beamforming of the transmitter and the scattering matrix of the BD-RIS to jointly maximize the communication sum rate and minimize the largest eigenvalue of the sensing Cram{\'e}r-Rao bound (CRB). An efficient two-stage optimization method is proposed, where the scattering matrix of the BD-RIS is obtained via the symmetric unitary projection, and the digital beamforming is optimized subsequently via the successive convex approximation (SCA) method. Numerical results demonstrate that BD-RIS-aided mmWave ISAC achieves a better communication and sensing trade-off compared to the conventional D-RIS-aided ones. \begin{figure} \centering \includegraphics[width=1\linewidth]{model.eps} \vspace{-0.7cm} \caption{The system model of BD-RIS-aided transmitter architecture for ISAC.} \label{model} \vspace{-0.5cm} \end{figure} <|paper_end|>
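To make the fully-connected BD-RIS constraint set concrete: for a reciprocal and lossless $M$-element BD-RIS, the scattering matrix $\boldsymbol{\Theta}$ must be symmetric ($\boldsymbol{\Theta}=\boldsymbol{\Theta}^T$) and unitary ($\boldsymbol{\Theta}^H\boldsymbol{\Theta}=\mathbf{I}$), which is the feasible set targeted by the symmetric unitary projection mentioned above. The following Python sketch is purely illustrative; the element count and the matrix-exponential parameterization are assumptions made for the example, not the projection algorithm proposed in the paper. It generates one feasible fully-connected scattering matrix, verifies both constraints numerically, and contrasts it with the diagonal phase-shift matrix of a conventional D-RIS.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = 8  # number of BD-RIS elements (illustrative choice)

# For any real symmetric S, Theta = exp(jS) satisfies
# Theta^T = Theta (reciprocity) and Theta^H Theta = I (losslessness),
# so it lies in the fully-connected BD-RIS feasible set.
A = rng.standard_normal((M, M))
S = (A + A.T) / 2
Theta = expm(1j * S)

assert np.allclose(Theta, Theta.T)                     # symmetric
assert np.allclose(Theta.conj().T @ Theta, np.eye(M))  # unitary

# A conventional D-RIS is restricted to the diagonal subset of this
# feasible set, with only M phase degrees of freedom.
Theta_d = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, M)))
assert np.allclose(Theta_d.conj().T @ Theta_d, np.eye(M))
\end{verbatim}
The extra off-diagonal degrees of freedom of $\boldsymbol{\Theta}$ relative to $\boldsymbol{\Theta}_d$ are precisely what the two-stage design can exploit for the joint sum-rate and CRB objective.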
[ "<|reference_start|> Intelligent Surface-Aided Transmitter Architectures for Millimeter-Wave Ultra Massive MIMO Systems: In this article, we study two novel massive multiple-input multiple-output (MIMO) transmitter architectures for millimeter wave (mmWave) communications which comprise few active antennas, each equipped with a dedicated radio frequency (RF) chain, that illuminate a nearby large intelligent reflecting/transmitting surface (IRS/ITS). The IRS (ITS) consists of a large number of low-cost and energy-efficient passive antenna elements which are able to reflect (transmit) a phase-shifted version of the incident electromagnetic field. Similar to lens array (LA) antennas, IRS/ITS-aided antenna architectures are energy efficient due to the almost lossless over-the-air connection between the active antennas and the intelligent surface. However, unlike for LA antennas, for which the number of active antennas has to linearly grow with the number of passive elements (i.e., the lens aperture) due to the non-reconfigurablility (i.e., non-intelligence) of the lens, for IRS/ITS-aided antennas, the reconfigurablility of the IRS/ITS facilitates scaling up the number of radiating passive elements without increasing the number of costly and bulky active antennas. We show that the constraints that the precoders for IRS/ITS-aided antennas have to meet differ from those of conventional MIMO architectures. Taking these constraints into account and exploiting the sparsity of mmWave channels, we design two efficient precoders; one based on maximizing the mutual information and one based on approximating the optimal unconstrained fully digital (FD) precoder via the orthogonal matching pursuit algorithm. Furthermore, we develop a power consumption model for IRS/ITS-aided antennas that takes into account the impacts of the IRS/ITS imperfections, namely the spillover loss, taper loss, aperture loss, and phase shifter loss. Moreover, we study the effect that the various system parameters have on the achievable rate and show that a proper positioning of the active antennas with respect to the IRS/ITS leads to a considerable performance improvement. Our simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting (passive) antennas. Therefore, IRS/ITS-aided antennas are promising candidates for realizing the potential of mmWave ultra massive MIMO communications in practice. <|reference_end|>", "<|reference_start|> Beyond Diagonal Reconfigurable Intelligent Surfaces: From Transmitting and Reflecting Modes to Single-, Group-, and Fully-Connected Architectures: Reconfigurable intelligent surfaces (RISs) are envisioned as a promising technology for future wireless communications. With various hardware realizations, RISs can work under different modes (reflective/transmissive/hybrid) or have different architectures (single/group/fully-connected). However, most existing research focused on single-connected reflective RISs, mathematically characterized by diagonal phase shift matrices, while there is a lack of a comprehensive study for RISs unifying different modes/architectures. In this paper, we solve this issue by analyzing and proposing a general RIS-aided communication model. Specifically, we establish an RIS model not limited to diagonal phase shift matrices, a novel branch referred to as beyond diagonal RIS (BD-RIS), unifying modes and architectures. 
With the proposed model, we develop efficient algorithms to jointly design transmit precoder and BDRIS matrix to maximize the sum-rate for RIS-aided systems. We also provide simulation results to compare the performance of BD-RISs with different modes/architectures. Simulation results show that under the same mode, fully- and group-connected RIS can effectively increase the sum-rate performance compared with single-connected RIS, and that hybrid RIS outperforms reflective/transmissive RIS with the same architecture. <|reference_end|>", "<|reference_start|> Intelligent Surface-Aided Transmitter Architectures for Millimeter-Wave Ultra Massive MIMO Systems: In this article, we study two novel massive multiple-input multiple-output (MIMO) transmitter architectures for millimeter wave (mmWave) communications which comprise few active antennas, each equipped with a dedicated radio frequency (RF) chain, that illuminate a nearby large intelligent reflecting/transmitting surface (IRS/ITS). The IRS (ITS) consists of a large number of low-cost and energy-efficient passive antenna elements which are able to reflect (transmit) a phase-shifted version of the incident electromagnetic field. Similar to lens array (LA) antennas, IRS/ITS-aided antenna architectures are energy efficient due to the almost lossless over-the-air connection between the active antennas and the intelligent surface. However, unlike for LA antennas, for which the number of active antennas has to linearly grow with the number of passive elements (i.e., the lens aperture) due to the non-reconfigurablility (i.e., non-intelligence) of the lens, for IRS/ITS-aided antennas, the reconfigurablility of the IRS/ITS facilitates scaling up the number of radiating passive elements without increasing the number of costly and bulky active antennas. We show that the constraints that the precoders for IRS/ITS-aided antennas have to meet differ from those of conventional MIMO architectures. Taking these constraints into account and exploiting the sparsity of mmWave channels, we design two efficient precoders; one based on maximizing the mutual information and one based on approximating the optimal unconstrained fully digital (FD) precoder via the orthogonal matching pursuit algorithm. Furthermore, we develop a power consumption model for IRS/ITS-aided antennas that takes into account the impacts of the IRS/ITS imperfections, namely the spillover loss, taper loss, aperture loss, and phase shifter loss. Moreover, we study the effect that the various system parameters have on the achievable rate and show that a proper positioning of the active antennas with respect to the IRS/ITS leads to a considerable performance improvement. Our simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting (passive) antennas. Therefore, IRS/ITS-aided antennas are promising candidates for realizing the potential of mmWave ultra massive MIMO communications in practice. <|reference_end|>", "<|reference_start|> Hybrid Beamforming Design for ITS-Assisted Wireless Networks: This letter proposes a hybrid beamforming design for an intelligent transmissive surface (ITS)-assisted transmitter wireless network. We aim to suppress the sidelobes and optimize the mainlobes of the transmit beams by minimizing the proposed cost function based on the least squares (LS) for the digital beamforming vector of the base station (BS) and the phase shifts of the ITS. 
To solve the minimization problem, we adopt an efficient algorithm based on the alternating optimization (AO) method to design the digital beamforming vector and the phase shifts of the ITS in an alternating manner. In particular, the alternating direction method of multipliers (ADMM) algorithm is utilized to obtain the optimal phase shifts of the ITS. Finally, we verify the improvement achieved by the proposed algorithm in terms of the beam response compared to the benchmark schemes by the simulation results. <|reference_end|>" ]
[ 1, 6, 10, 11 ]
{"<|cite_1|>": "ss-779325", "<|cite_2|>": "ss-988212", "<|multi_cite_3_3|>": "ss-2305288", "<|cite_4|>": "ss-988212", "<|multi_cite_5_3|>": "ss-2305288", "<|multi_cite_6_1|>": "arxiv-305250", "<|multi_cite_6_2|>": "arxiv-417724", "<|cite_7|>": "arxiv-417724", "<|multi_cite_8_1|>": "ss-1835273", "<|multi_cite_8_2|>": "ss-1371283", "<|multi_cite_9_1|>": "ss-988212", "<|multi_cite_9_2|>": "ss-1861896", "<|multi_cite_9_3|>": "ss-1861897"}
0911.5515
<|paper_start|> Title: Finite Dimensional Statistical Inference Abstract: Finite Dimensional Statistical Inference: In this paper, we derive the explicit series expansion of the eigenvalue distribution of various models, namely the case of non-central Wishart distributions, as well as correlated zero mean Wishart distributions. The tools used extend those of the free probability framework, which have been quite successful for high dimensional statistical inference (when the size of the matrices tends to infinity), also known as free deconvolution. This contribution focuses on the finite Gaussian case and proposes algorithmic methods to compute the moments. Cases where asymptotic results fail to apply are also discussed. Introduction Random matrix theory and free probability theory have fruitful applications in many fields of research, such as digital communication <|cite_start|> (Reference: On the capacity of multi-antenna Gaussian channels: We investigate the use of multi-antennas at both ends of a point-to-point communication system over the additive Gaussian channel. We consider a system with t transmit antennas and r receive antennas in which the received vector v/spl isin/C/sup /spl tau// depends on the transmitted vector u/spl isin/C/sup /spl tau// via: v=Hu+w where H/spl isin/C/sup r/spl times/t/ is the channel transfer matrix and w is zero-mean complex circular symmetric Gaussian noise. We assume that E[ww]=/spl sigma//sup 2/I/sub r/. The transmitter is constrained in its total power, i.e., E[uu]/spl les/E/sub s/. We assume that the channel matrix H is known at both ends of the communication system, and that the waveform channel is flat over the bandwidth of interest.) <|cite_end|>, mathematical finance <|cite_start|> (Reference: Theory of Financial Risk and Derivative Pricing - From Statistical Physics to Risk Management: Textbook by Jean-Philippe Bouchaud and Marc Potters, 2nd edn, Cambridge University Press, covering probability distributions, sums and maxima of random variables, the central limit theorem, Lévy distributions, and the continuous-time limit, Ito calculus and path integrals, as applied to financial risk and derivative pricing.) <|cite_end|> and nuclear physics <|cite_start|> (Reference: RANDOM-MATRIX THEORIES IN QUANTUM PHYSICS : COMMON CONCEPTS: ) <|cite_end|>. In particular, the free probability framework <|cite_start|> (Reference: Addition of certain non-commuting random variables: ) <|cite_end|> <|cite_start|> (Reference: Limit laws for Random matrices and free products: ) <|cite_end|> <|cite_start|> (Reference: The semicircle law, free random variables and entropy: Overview Probability laws and noncommutative random variables The free relation Analytic function theory and infinitely divisible laws Random matrices and asymptotically free relation Large deviations for random matrices Free entropy of noncommutative random variables Relation to operator algebras Bibliography Index.) <|cite_end|> can be used for high dimensional statistical inference (or free deconvolution), i.e., to retrieve the eigenvalue distributions of involved functionals of random matrices. The general idea of deconvolution is related to the following problem <|cite_start|> (Reference: Free deconvolution: from theory to practice: —In this paper, we provide an algorithmic method to compute the singular values of sum of rectangular matrices based on the free cumulants approach and illustrate its application to wireless communications. We first recall the algorithms working for sum/products of square random matrices, which have already been presented in some previous papers and we then introduce the main contribution of this paper which provides a general method working for rectangular random matrices, based on the recent theoretical work of Benaych-Georges. In its full generality, the computation of the eigenvalues requires some sophisticated tools related to free probability and the explicit spectrum (eigenvalue distribution) of the matrices can hardly be obtained (except for some trivial cases).
From an implementation perspective, this has led the community to the misconception that free probability has no practical application. This contribution takes the opposite view and shows how the free cumulants approach in free probability provides the right shift from theory to practice.) <|cite_end|>: Given two $n\times n$ independent square Hermitian (or symmetric) random matrices ${\bf A}$ and ${\bf B}$:\\ 1) Can one derive the eigenvalue distribution of ${\bf A}$ from the ones of ${\bf A} + {\bf B}$ and ${\bf B}$? If feasible in the large $n$-limit, this operation is named additive free deconvolution;\\ 2) Can one derive the eigenvalue distribution of ${\bf A}$ from the ones of ${\bf AB}$ and ${\bf B}$? If feasible in the large $n$-limit, this operation is named multiplicative free deconvolution. In the literature, deconvolution for the large $n$-limit has been studied, and the methods generally used to compute it are the method of moments <|cite_start|> (Reference: Addition of certain non-commuting random variables: ) <|cite_end|> and the Stieltjes transform method <|cite_start|> (Reference: On the empirical distribution of eigenvalues of large dimensional information-plus-noise-type matrices: ) <|cite_end|>. The expressions turn out to be quite simple if some kind of asymptotic freeness <|cite_start|> (Reference: The semicircle law, free random variables and entropy: Overview Probability laws and noncommutative random variables The free relation Analytic function theory and infinitely divisible laws Random matrices and asymptotically free relation Large deviations for random matrices Free entropy of noncommutative random variables Relation to operator algebras Bibliography Index.) <|cite_end|> of the matrices involved is assumed. However, freeness usually does not hold for finite matrices. Quite remarkably, the method of moments can still be used to propose an algorithmic method to compute these operations. The goal of this contribution is precisely to propose a general finite dimensional statistical inference framework based on the method of moments, which is implemented in software. As the calculations are quite tedious, and for the sake of clarity, we focus in this contribution on Gaussian matrices\footnote{Cases such as Vandermonde matrices can also be implemented in the same vein <|cite_start|> (Reference: Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the Unit Circle: Analytical methods for finding moments of random Vandermonde matrices with entries on the unit circle are developed. Vandermonde Matrices play an important role in signal processing and wireless applications such as direction of arrival estimation, precoding, and sparse sampling theory, just to name a few. Within this framework, we extend classical freeness results on random matrices with independent, identically distributed (i.i.d.) entries and show that Vandermonde structured matrices can be treated in the same vein with different tools. We focus on various types of matrices, such as Vandermonde matrices with and without uniform phase distributions, as well as generalized Vandermonde matrices. In each case, we provide explicit expressions of the moments of the associated Gram matrix, as well as more advanced models involving the Vandermonde matrix. Comparisons with classical i.i.d. random matrix theory are provided, and deconvolution results are discussed. We review some applications of the results to the fields of signal processing and wireless communications.)
<|cite_end|> <|cite_start|> (Reference: Convolution operations arising from Vandermonde matrices: Different types of convolution operations involving large Vandermonde matrices are considered. The convolutions parallel those of large Gaussian matrices and additive and multiplicative free convolution. First additive and multiplicative convolution of Vandermonde matrices and deterministic diagonal matrices are considered. After this, several cases of additive and multiplicative convolution of two independent Vandermonde matrices are considered. It is also shown that the convergence of any combination of Vandermonde matrices is almost sure. We will divide the considered convolutions into two types: those which depend on the phase distribution of the Vandermonde matrices, and those which depend only on the spectra of the matrices. A general criterion is presented to find which type applies for any given convolution. A simulation is presented, verifying the results. Implementations of all considered convolutions are provided and discussed, together with the challenges in making these implementations efficient. The implementation is based on the technique of Fourier-Motzkin elimination, and is quite general as it can be applied to virtually any combination of Vandermonde matrices. Generalizations to related random matrices, such as Toeplitz and Hankel matrices, are also discussed.) <|cite_end|>. The general case is, however, more difficult.}. The method of moments <|cite_start|> (Reference: Free deconvolution: from theory to practice: —In this paper, we provide an algorithmic method to compute the singular values of sum of rectangular matrices based on the free cumulants approach and illustrate its application to wireless communications. We first recall the algorithms working for sum/products of square random matrices, which have already been presented in some previous papers and we then introduce the main contribution of this paper which provides a general method working for rectangular random matrices, based on the recent theoretical work of Benaych-Georges. In its full generality, the computation of the eigenvalues requires some sophisticated tools related to free probability and the explicit spectrum (eigenvalue distribution) of the matrices can hardly be obtained (except for some trivial cases). From an implementation perspective, this has led the community to the misconception that free probability has no practical application. This contribution takes the opposite view and shows how the free cumulants approach in free probability provides the right shift from theory to practice.) <|cite_end|> is based on the relations between the moments of the different matrices involved. It provides a series expansion of the eigenvalue distribution of the involved matrices. For a given $n\times n$ random matrix ${\bf A}$, the $p$-th moment of ${\bf A}$ is defined as \begin{equation} \label{moment} t_{{\bf A}}^{n,p} = \E\left[ \mathrm{tr}({\bf A}^p) \right]=\int \lambda^pd\rho_n(\lambda) \end{equation} where $\E$ is the expectation, $\mathrm{tr}$ the normalized trace, and $d\rho_n$ the associated empirical mean measure defined by $d\rho_n(\lambda)=\E\left(\frac{1}{n} \sum_{i=1}^n \delta(\lambda-\lambda_i)\right)$, where $\lambda_i$ are the eigenvalues of ${\bf A}$. 
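In the large $n$-limit, question 1) above admits a particularly transparent answer at the level of the moments just defined: free cumulants linearize additive free convolution, so additive free deconvolution reduces to converting moments to free cumulants, subtracting, and converting back. The following Python sketch spells out this bookkeeping up to fourth order using the moment/free-cumulant relations over non-crossing partitions; the numerical inputs are illustrative (the moments $1, 2, 5, 14$ are the Catalan numbers), and the finite-$n$ algorithms developed in this paper replace precisely these asymptotic relations with exact finite-dimensional ones.
\begin{verbatim}
import numpy as np

def free_cumulants(m):
    # Free cumulants (k1..k4) from moments (m1..m4), obtained by
    # inverting the moment-cumulant relations over non-crossing partitions.
    m1, m2, m3, m4 = m
    return np.array([
        m1,
        m2 - m1**2,
        m3 - 3*m1*m2 + 2*m1**3,
        m4 - 4*m1*m3 - 2*m2**2 + 10*m1**2*m2 - 5*m1**4,
    ])

def moments(k):
    # Moments (m1..m4) from free cumulants, summing over the
    # non-crossing partitions of {1}, {1,2}, {1,2,3}, {1,2,3,4}.
    k1, k2, k3, k4 = k
    return np.array([
        k1,
        k2 + k1**2,
        k3 + 3*k1*k2 + k1**3,
        k4 + 4*k1*k3 + 2*k2**2 + 6*k1**2*k2 + k1**4,
    ])

# Additive free deconvolution: free cumulants are additive under
# free convolution, hence k(A) = k(A+B) - k(B).
m_sum = np.array([2.0, 6.0, 22.0, 90.0])  # illustrative moments of A+B
m_B   = np.array([1.0, 2.0, 5.0, 14.0])   # Catalan numbers
m_A   = moments(free_cumulants(m_sum) - free_cumulants(m_B))
print(m_A)  # [ 1.  2.  5. 14.]
\end{verbatim}
An analogous change of coordinates (via the S-transform) handles the multiplicative case of question 2).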
Quite remarkably, when $n\to \infty$, $t_{{\bf A}}^{n,p}$ converges in many cases almost surely to an analytical expression $t_{{\bf A}}^{p}$ that depends only on some specific parameters of ${\bf A}$ (such as the distribution of its entries)\footnote{Note that in the following, when speaking of moments of matrices, we refer to the moments of the associated measure.}. This makes it possible to reduce the dimensionality of the problem and simplifies the computation of convolutions of measures. In recent works, deconvolution has been analyzed when $n\to \infty$ for some particular matrices ${\bf A}$ and ${\bf B}$, such as when ${\bf A}$ and ${\bf B}$ are free <|cite_start|> (Reference: Free deconvolution for signal processing applications: Situations in many fields of research, such as digital communications, nuclear physics and mathematical finance, can be modelled with random matrices. When the matrices get large, free probability theory is an invaluable tool for describing the asymptotic behaviour of many systems. It will be shown how free probability can be used to aid in source detection for certain systems. Sample covariance matrices for systems with noise are the starting point in our source detection problem. Multiplicative free deconvolution is shown to be a method which can aid in expressing limit eigenvalue distributions for sample covariance matrices, and to simplify estimators for eigenvalue distributions of covariance matrices.) <|cite_end|>, or ${\bf A}$ random Vandermonde and ${\bf B}$ diagonal <|cite_start|> (Reference: Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the Unit Circle: Analytical methods for finding moments of random Vandermonde matrices with entries on the unit circle are developed. Vandermonde Matrices play an important role in signal processing and wireless applications such as direction of arrival estimation, precoding, and sparse sampling theory, just to name a few. Within this framework, we extend classical freeness results on random matrices with independent, identically distributed (i.i.d.) entries and show that Vandermonde structured matrices can be treated in the same vein with different tools. We focus on various types of matrices, such as Vandermonde matrices with and without uniform phase distributions, as well as generalized Vandermonde matrices. In each case, we provide explicit expressions of the moments of the associated Gram matrix, as well as more advanced models involving the Vandermonde matrix. Comparisons with classical i.i.d. random matrix theory are provided, and deconvolution results are discussed. We review some applications of the results to the fields of signal processing and wireless communications.) <|cite_end|> <|cite_start|> (Reference: Convolution operations arising from Vandermonde matrices: Different types of convolution operations involving large Vandermonde matrices are considered. The convolutions parallel those of large Gaussian matrices and additive and multiplicative free convolution. First additive and multiplicative convolution of Vandermonde matrices and deterministic diagonal matrices are considered. After this, several cases of additive and multiplicative convolution of two independent Vandermonde matrices are considered. It is also shown that the convergence of any combination of Vandermonde matrices is almost sure. We will divide the considered convolutions into two types: those which depend on the phase distribution of the Vandermonde matrices, and those which depend only on the spectra of the matrices.
A general criterion is presented to find which type applies for any given convolution. A simulation is presented, verifying the results. Implementations of all considered convolutions are provided and discussed, together with the challenges in making these implementations efficient. The implementation is based on the technique of Fourier-Motzkin elimination, and is quite general as it can be applied to virtually any combination of Vandermonde matrices. Generalizations to related random matrices, such as Toeplitz and Hankel matrices, are also discussed.) <|cite_end|>. The inference framework described in this contribution is based on the method of moments in the finite case: it takes a set of moments as input, and produces a set of moments as output, with the dimensions of the matrices considered finite. The framework is flexible enough to allow for repeated combinations of the random matrices we consider, and the patterns in such combinations are reflected nicely in the algorithms. The framework also lends itself naturally to combinations with other types of random matrices, for which support has already been implemented in the framework <|cite_start|> (Reference: Convolution operations arising from Vandermonde matrices: Different types of convolution operations involving large Vandermonde matrices are considered. The convolutions parallel those of large Gaussian matrices and additive and multiplicative free convolution. First additive and multiplicative convolution of Vandermonde matrices and deterministic diagonal matrices are considered. After this, several cases of additive and multiplicative convolution of two independent Vandermonde matrices are considered. It is also shown that the convergence of any combination of Vandermonde matrices is almost sure. We will divide the considered convolutions into two types: those which depend on the phase distribution of the Vandermonde matrices, and those which depend only on the spectra of the matrices. A general criterion is presented to find which type applies for any given convolution. A simulation is presented, verifying the results. Implementations of all considered convolutions are provided and discussed, together with the challenges in making these implementations efficient. The implementation is based on the technique of Fourier-Motzkin elimination, and is quite general as it can be applied to virtually any combination of Vandermonde matrices. Generalizations to related random matrices, such as Toeplitz and Hankel matrices, are also discussed.) <|cite_end|>. This flexibility, exploited with the method of moments, is somewhat in contrast to methods such as the Stieltjes transform method <|cite_start|> (Reference: On the empirical distribution of eigenvalues of large dimensional information-plus-noise-type matrices: ) <|cite_end|>, where combining patterns of matrices naturally leads to more complex equations for the Stieltjes transforms (when possible) and can only be performed in the large $n$-limit. While the simplest patterns we consider are sums and products, we also consider products of many independent matrices. The algorithms are based on iterations through partitions and permutations as in <|cite_start|> (Reference: Random matrices and K-theory for exact C*-algebras: Uranium oxide hydrate is produced by irradiating with light a solution of a suitable diluent, water-soluble uranium salt, carboxylate ion, and a rate-promoting amount of at least one suitable crown ether.) 
<|cite_end|>, where the case of a Wishart matrix was considered. Our methods build heavily on the simple form which the moments of complex Gaussian random variables have, as exploited in <|cite_start|> (Reference: Random matrices and K-theory for exact C*-algebras: Uranium oxide hydrate is produced by irradiating with light a solution of a suitable diluent, water-soluble uranium salt, carboxylate ion, and a rate-promoting amount of at least one suitable crown ether.) <|cite_end|>. We remark that, in certain cases, it is possible to implement the method of moments in a different way also <|cite_start|> (Reference: Random matrices with complex Gaussian entries: ) <|cite_end|> <|cite_start|> (Reference: A note on averages over random matrix ensembles: Abstract. In this work we find a closed form expression for matrix averages over the Gaussian ensemble. More precisely, given an n × n Hermitian matrix A and a continuous function f(x) we find a closed form expression for the expectation E(Tr(f(XAX∗))) where X is a Gaussian n × n matrix with complex independent and identically distributed entries of zero mean and variance 1. Taking f(x) = log(1+x) this gives us another formula for the capacity of the MIMO communication channel and taking f(x) = (1 + x) gives us the minimum MMSE achieved by a linear receiver.) <|cite_end|>. However, we are not aware of any attempts to make an inference framework as general as the one presented here. The case presented in <|cite_start|> (Reference: A note on averages over random matrix ensembles: Abstract. In this work we find a closed form expression for matrix averages over the Gaussian ensemble. More precisely, given an n × n Hermitian matrix A and a continuous function f(x) we find a closed form expression for the expectation E(Tr(f(XAX∗))) where X is a Gaussian n × n matrix with complex independent and identically distributed entries of zero mean and variance 1. Taking f(x) = log(1+x) this gives us another formula for the capacity of the MIMO communication channel and taking f(x) = (1 + x) gives us the minimum MMSE achieved by a linear receiver.) <|cite_end|>, for instance, handles only certain zero-mean, one-sided correlated Wishart matrices. The paper is organized as follows. Section~\ref{section:essentials} provides background essentials on random matrix theory and combinatorics needed to state the main results. Parts of Section~\ref{section:essentials} is rather technical, but it is not necessary to understand all details therein to understand the statement of the main results. These are summarized in Section~\ref{section:theorems}. First, algorithms for the simplest patterns (sums and products of random matrices) in the finite dimensional statistical inference framework are presented. Then, recursive algorithms for products of many Wishart matrices and a deterministic matrix are included, as well with some general remarks on how the general situation can be attacked from these basic algorithms. We then explain how algorithms for deconvolution can be obtained within the same framework, and formalize the corresponding moment estimators. Section~\ref{software} presents details on the software implementation of the finite dimensional statistical inference framework. Section~\ref{simulations} presents some simulations and useful applications showing the implications of the presented results in various applied fields. <|paper_end|>
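To make the moment quantities above concrete, the following self-contained Python sketch (our own toy illustration, not part of the framework or algorithms developed in this paper) estimates the normalized trace moments $t_{{\bf A}}^{n,p}$ for Hermitian Wigner matrices. As $n$ grows, the even moments approach the Catalan numbers $1, 2, 5, \dots$, the limiting moments of the semicircle law, illustrating the kind of almost-sure convergence discussed above.

```python
import numpy as np

def trace_moment(A, p):
    """Normalized trace moment t^{n,p} = (1/n) tr(A^p)."""
    return np.trace(np.linalg.matrix_power(A, p)).real / A.shape[0]

def wigner(n, rng):
    """Hermitian Wigner matrix whose entries have variance 1/n."""
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / np.sqrt(4 * n)

rng = np.random.default_rng(0)
for n in (50, 200, 800):
    A = wigner(n, rng)
    # The even moments approach the Catalan numbers 1, 2, 5 as n grows.
    print(n, [round(trace_moment(A, p), 3) for p in (2, 4, 6)])
```

The paper is organized as follows. Section~\ref{section:essentials} provides background essentials on random matrix theory and combinatorics needed to state the main results. Parts of Section~\ref{section:essentials} are rather technical, but it is not necessary to understand all the details therein to follow the statement of the main results. These are summarized in Section~\ref{section:theorems}. First, algorithms for the simplest patterns (sums and products of random matrices) in the finite dimensional statistical inference framework are presented. Then, recursive algorithms for products of many Wishart matrices and a deterministic matrix are included, as well as some general remarks on how the general situation can be attacked from these basic algorithms. We then explain how algorithms for deconvolution can be obtained within the same framework, and formalize the corresponding moment estimators. Section~\ref{software} presents details on the software implementation of the finite dimensional statistical inference framework. Section~\ref{simulations} presents simulations and applications showing the implications of the presented results in various applied fields. <|paper_end|>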
2409.09254-1
<|cite_start|> (Reference: PVNet: A Joint Convolutional Network of Point Cloud and Multi-View for 3D Shape Recognition: 3D object recognition has attracted wide research attention in the field of multimedia and computer vision. With the recent proliferation of deep learning, various deep models with different representations have achieved the state-of-the-art performance. Among them, point cloud and multi-view based 3D shape representations are promising recently, and their corresponding deep models have shown significant performance on 3D shape recognition. However, there is little effort concentrating point cloud data and multi-view data for 3D shape representation, which is, in our consideration, beneficial and compensated to each other. In this paper, we propose the Point-View Network (PVNet), the first framework integrating both the point cloud and the multi-view data towards joint 3D shape recognition. More specifically, an embedding attention fusion scheme is proposed that could employ high-level features from the multi-view data to model the intrinsic correlation and discriminability of different structure features from the point cloud data. In particular, the discriminative descriptions are quantified and leveraged as the soft attention mask to further refine the structure feature of the 3D shape. We have evaluated the proposed method on the ModelNet40 dataset for 3D shape classification and retrieval tasks. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework can achieve superior performance.) <|cite_end|> <|cite_start|> (Reference: Angular Triplet-Center Loss for Multi-view 3D Shape Retrieval: How to obtain the desirable representation of a 3D shape, which is discriminative across categories and polymerized within classes, is a significant challenge in 3D shape retrieval. Most existing 3D shape retrieval methods focus on capturing strong discriminative shape representation with softmax loss for the classification task, while the shape feature learning with metric loss is neglected for 3D shape retrieval. In this paper, we address this problem based on the intuition that the cosine distance of shape embeddings should be close enough within the same class and far away across categories. Since most of 3D shape retrieval tasks use cosine distance of shape features for measuring shape similarity, we propose a novel metric loss named angular triplet-center loss, which directly optimizes the cosine distances between the features. It inherits the triplet-center loss property to achieve larger inter-class distance and smaller intra-class distance simultaneously. Unlike previous metric loss utilized in 3D shape retrieval methods, where Euclidean distance is adopted and the margin design is difficult, the proposed method is more convenient to train feature embeddings and more suitable for 3D shape retrieval. Moreover, the angle margin is adopted to replace the cosine margin in order to provide more explicit discriminative constraints on an embedding space. Extensive experimental results on two popular 3D object retrieval benchmarks, ModelNet40 and ShapeNetCore 55, demonstrate the effectiveness of our proposed loss, and our method has achieved state-of-the-art results on various 3D shape datasets.) 
<|cite_end|> <|cite_start|> (Reference: British Machine Vision Conference (BMVC): ) <|cite_end|> <|cite_start|> (Reference: Learning Discriminative 3D Shape Representations by View Discerning Networks: In view-based 3D shape recognition, extracting discriminative visual representation of 3D shapes from projected images is considered the core problem. Projections with low discriminative ability can adversely influence the final 3D shape representation. Especially under the real situations with background clutter and object occlusion, the adverse effect is even more severe. To resolve this problem, we propose a novel deep neural network, View Discerning Network, which learns to judge the quality of views and adjust their contributions to the representation of shapes. In this network, a Score Generation Unit is devised to evaluate the quality of each projected image with score vectors. These score vectors are used to weight the image features and the weighted features perform much better than original features in 3D shape recognition task. In particular, we introduce two structures of Score Generation Unit, Channel-wise Score Unit and Part-wise Score Unit, to assess the quality of feature maps from different perspectives. Our network aggregates features and scores in an end-to-end framework, so that final shape descriptors are directly obtained from its output. Our experiments on ModelNet and ShapeNet Core55 show that View Discerning Network outperforms the state-of-the-arts in terms of the retrieval task, with excellent robustness against background clutter and object occlusion.) <|cite_end|> <|cite_start|> (Reference: 3D object representation learning: A set-to-set matching perspective: In this paper, we tackle the 3D object representation learning from the perspective of set-to-set matching. Given two 3D objects, calculating their similarity is formulated as the problem of set-to-set similarity measurement between two set of local patches. As local convolutional features from convolutional feature maps are natural representations of local patches, the set-to-set matching between sets of local patches is further converted into a local features pooling problem. To highlight good matchings and suppress the bad ones, we exploit two pooling methods: 1) bilinear pooling and 2) VLAD pooling. We analyze their effectiveness in enhancing the set-to-set matching and meanwhile establish their connection. Moreover, to balance different components inherent in a bilinear-pooled feature, we propose the harmonized bilinear pooling operation, which follows the spirits of intra-normalization used in VLAD pooling. To achieve an end-to-end trainable framework, we implement the proposed harmonized bilinear pooling and intra-normalized VLAD as two layers to construct two types of neural network, multi-view harmonized bilinear network (MHBN) and multi-view VLAD network (MVLADN). Systematic experiments conducted on two public benchmark datasets demonstrate the efficacy of the proposed MHBN and MVLADN in 3D object recognition.) <|cite_end|> independently process different views of a 3D shape by a shared CNN. The extracted features are fused with a pooling operation or one of its variants to form a compact 3D shape descriptor. We group these methods into \emph{Independent Views}, shown in Figure~\ref{fig:independent_view}. Although this simple design made these methods stand out at the time, it allows only limited interaction among different views.
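As a schematic illustration of this \emph{Independent Views} pattern, the following PyTorch-style sketch (our own simplified example; the tiny encoder, names, and dimensions are illustrative assumptions, not the implementation of any cited work) encodes each view independently and fuses the features with element-wise max pooling:

```python
import torch
import torch.nn as nn

class ViewPoolingNet(nn.Module):
    """Schematic 'Independent Views' pipeline: a shared per-view encoder
    followed by element-wise max pooling across views."""
    def __init__(self, feat_dim=256, num_classes=40):
        super().__init__()
        # Stand-in encoder; real methods use a pretrained CNN backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views):                   # views: (B, V, 3, H, W)
        B, V = views.shape[:2]
        f = self.encoder(views.flatten(0, 1))   # (B*V, D): each view encoded independently
        f = f.view(B, V, -1).max(dim=1).values  # max pooling fuses the V view features
        return self.classifier(f)

# Example: logits for a batch of 2 shapes with 12 views each.
# logits = ViewPoolingNet()(torch.randn(2, 12, 3, 224, 224))
```

Because the pooling step keeps only one value per feature channel, most view-specific content and all cross-view structure are discarded, which motivates the richer aggregation schemes discussed next.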
In the second category, a growing number of methods model multiple views as a sequence <|cite_start|> (Reference: Emphasizing 3D Properties in Recurrent Multi-View Aggregation for 3D Shape Retrieval: Multi-view based shape descriptors have achieved impressive performance for 3D shape retrieval. The core of view-based methods is to interpret 3D structures through 2D observations. However, most existing methods pay more attention to discriminative models and none of them necessarily incorporate the 3D properties of the objects. To resolve this problem, we propose an encoder-decoder recurrent feature aggregation network (ERFA-Net) to emphasize the 3D properties of 3D shapes in multi-view features aggregation. In our network, a view sequence of the shape is trained to encode a discriminative shape embedding and estimate unseen rendered views of any viewpoints. This generation task gives an effective supervision which makes the network exploit 3D properties of shapes through various 2D images. During feature aggregation, a discriminative feature representation across multiple views is effectively exploited based on LSTM network. The proposed 3D representation has following advantages against other state-of-the-art: 1) it performs robust discrimination under the existence of noise such as view missing and occlusion, because of the improvement brought by 3D properties. 2) it has strong generative capabilities, which is useful for various 3D shape tasks. We evaluate ERFA-Net on two popular 3D shape datasets, ModelNet and ShapeNetCore55, and ERFA-Net outperforms the state-of-the-art methods significantly. Extensive experiments show the effectiveness and robustness of the proposed 3D representation.) <|cite_end|> <|cite_start|> (Reference: {SeqViews2SeqLabels: Learning 3D global features via aggregating sequential views by RNN with attention: Learning 3D global features by aggregating multiple views has been introduced as a successful strategy for 3D shape analysis. In recent deep learning models with end-to-end training, pooling is a widely adopted procedure for view aggregation. However, pooling merely retains the max or mean value over all views, which disregards the content information of almost all views and also the spatial information among the views. To resolve these issues, we propose Sequential Views To Sequential Labels (SeqViews2SeqLabels) as a novel deep learning model with an encoder–decoder structure based on recurrent neural networks (RNNs) with attention. SeqViews2SeqLabels consists of two connected parts, an encoder-RNN followed by a decoder-RNN, that aim to learn the global features by aggregating sequential views and then performing shape classification from the learned global features, respectively. Specifically, the encoder-RNN learns the global features by simultaneously encoding the spatial and content information of sequential views, which captures the semantics of the view sequence. With the proposed prediction of sequential labels, the decoder-RNN performs more accurate classification using the learned global features by predicting sequential labels step by step. Learning to predict sequential labels provides more and finer discriminative information among shape classes to learn, which alleviates the overfitting problem inherent in training using a limited number of 3D shapes. Moreover, we introduce an attention mechanism to further improve the discriminative ability of SeqViews2SeqLabels. 
This mechanism increases the weight of views that are distinctive to each shape class, and it dramatically reduces the effect of selecting the first view position. Shape classification and retrieval results under three large-scale benchmarks verify that SeqViews2SeqLabels learns more discriminative global features by more effectively aggregating sequential views than state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: {3D2SeqViews: Aggregating sequential views for 3D global feature learning by CNN with hierarchical attention aggregation: Learning 3D global features by aggregating multiple views is important. Pooling is widely used to aggregate views in deep learning models. However, pooling disregards a lot of content information within views and the spatial relationship among the views, which limits the discriminability of learned features. To resolve this issue, 3D to Sequential Views (3D2SeqViews) is proposed to more effectively aggregate the sequential views using convolutional neural networks with a novel hierarchical attention aggregation. Specifically, the content information within each view is first encoded. Then, the encoded view content information and the sequential spatiality among the views are simultaneously aggregated by the hierarchical attention aggregation, where view-level attention and class-level attention are proposed to hierarchically weight sequential views and shape classes. View-level attention is learned to indicate how much attention is paid to each view by each shape class, which subsequently weights sequential views through a novel recursive view integration. Recursive view integration learns the semantic meaning of view sequence, which is robust to the first view position. Furthermore, class-level attention is introduced to describe how much attention is paid to each shape class, which innovatively employs the discriminative ability of the fine-tuned network. 3D2SeqViews learns more discriminative features than the state-of-the-art, which leads to the outperforming results in shape classification and retrieval under three large-scale benchmarks.) <|cite_end|> <|cite_start|> (Reference: VERAM: View-Enhanced Recurrent Attention Model for 3D Shape Classification: Multi-view deep neural network is perhaps the most successful approach in 3D shape classification. However, the fusion of multi-view features based on max or average pooling lacks a view selection mechanism, limiting its application in, e.g., multi-view active object recognition by a robot. This paper presents VERAM, a recurrent attention model capable of actively selecting a sequence of views for highly accurate 3D shape classification. VERAM addresses an important issue commonly found in existing attention-based models, i.e., the unbalanced training of the subnetworks corresponding to next view estimation and shape classification. The classification subnetwork is easily overfitted while the view estimation one is usually poorly trained, leading to a suboptimal classification performance. This is surmounted by three essential view-enhancement strategies: 1) enhancing the information flow of gradient backpropagation for the view estimation subnetwork, 2) devising a highly informative reward function for the reinforcement training of view estimation and 3) formulating a novel loss function that explicitly circumvents view duplication. 
Taking grayscale image as input and AlexNet as CNN architecture, VERAM with 9 views achieves instance-level and class-level accuracy of 95.5% and 95.3% on ModelNet10, 93.7% and 92.1% on ModelNet40, both are the state-of-the-art performance under the same number of views.) <|cite_end|> <|cite_start|> (Reference: Learning Multi-View Representation With LSTM for 3-D Shape Recognition and Retrieval: Shape representation for 3-D models is an important topic in computer vision, multimedia analysis, and computer graphics. Recent multiview-based methods demonstrate promising performance for 3-D shape recognition and retrieval. However, most multiview-based methods ignore the correlations of multiple views or suffer from high computional cost. In this paper, we propose a novel multiview-based network architecture for 3-D shape recognition and retrieval. Our network combines convolutional neural networks (CNNs) with long short-term memory (LSTM) to exploit the correlative information from multiple views. Well-pretrained CNNs with residual connections are first used to extract a low-level feature of each view image rendered from a 3-D shape. Then, a LSTM and a sequence voting layer are employed to aggregate these features into a shape descriptor. The highway network and a three-step training strategy are also adopted to boost the optimization of the deep network. Experimental results on two public datasets demonstrate that the proposed method achieves promising performance for 3-D shape recognition and the state-of-the-art performance for the 3-D shape retrieval.) <|cite_end|> to increase information exchange; these methods are grouped into \emph{View Sequence} in Figure~\ref{fig:view_sequence}. They deploy RNNs, such as GRU <|cite_start|> (Reference: Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling: In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.) <|cite_end|> and LSTM <|cite_start|> (Reference: Long {Short-Term} memory: ) <|cite_end|>, to learn the view relations. However, a strong assumption behind \emph{View Sequence} is that the views are collected from a circle around the 3D shape. In many cases, this assumption is invalid since the views can be rendered from random viewpoints, so they are unordered. To alleviate this limitation, later methods describe views with a more flexible structure, a graph <|cite_start|> (Reference: View-GCN: View-Based Graph Convolutional Network for 3D Shape Analysis: View-based approach that recognizes 3D shape through its projected 2D images has achieved state-of-the-art results for 3D shape recognition. The major challenge for view-based approach is how to aggregate multi-view features to be a global shape descriptor. In this work, we propose a novel view-based Graph Convolutional Neural Network, dubbed as view-GCN, to recognize 3D shape based on graph representation of multiple views in flexible view configurations. We first construct view-graph with multiple views as graph nodes, then design a graph convolutional neural network over view-graph to hierarchically learn discriminative shape descriptor considering relations of multiple views. The view-GCN is a hierarchical network based on local and non-local graph convolution for feature transform, and selective view-sampling for graph coarsening. Extensive experiments on benchmark datasets show that view-GCN achieves state-of-the-art results for 3D shape classification and retrieval.) <|cite_end|> <|cite_start|> (Reference: Learning View-Based Graph Convolutional Network for Multi-View 3D Shape Analysis: View-based approach that recognizes 3D shape through its projected 2D images has achieved state-of-the-art results for 3D shape recognition. The major challenges are how to aggregate multi-view features and deal with 3D shapes in arbitrary poses. We propose two versions of a novel view-based Graph Convolutional Network, dubbed view-GCN and view-GCN++, to recognize 3D shape based on graph representation of multiple views. We first construct view-graph with multiple views as graph nodes, then design two graph convolutional networks over the view-graph to hierarchically learn discriminative shape descriptor considering relations of multiple views. Specifically, view-GCN is a hierarchical network based on two pivotal operations, i.e., feature transform based on local positional and non-local graph convolution, and graph coarsening based on a selective view-sampling operation. To deal with rotation sensitivity, we further propose view-GCN++ with local attentional graph convolution operation and rotation robust view-sampling operation for graph coarsening. By these designs, view-GCN++ achieves invariance to transformations under the finite subgroup of rotation group SO(3). Extensive experiments on benchmark datasets (i.e., ModelNet40, ScanObjectNN, RGBD and ShapeNet Core55) show that view-GCN and view-GCN++ achieve state-of-the-art results for 3D shape classification and retrieval tasks under aligned and rotated settings.)
<|cite_end|> <|cite_start|> (Reference: Walk in Views: Multi-view Path Aggregation Graph Network for 3D Shape Analysis: ) <|cite_end|> or a hypergraph <|cite_start|> (Reference: Inductive multi-hypergraph learning and its application on view-based 3d object classification: The wide 3D applications have led to increasing amount of 3D object data, and thus effective 3D object classification technique has become an urgent requirement. One important and challenging task for 3D object classification is how to formulate the 3D data correlation and exploit it. Most of the previous works focus on learning optimal pairwise distance metric for object comparison, which may lose the global correlation among 3D objects. Recently, a transductive hypergraph learning has been investigated for classification, which can jointly explore the correlation among multiple objects, including both the labeled and unlabeled data. Although these methods have shown better performance, they are still limited due to 1) a considerable amount of testing data may not be available in practice and 2) the high computational cost to test new coming data. To handle this problem, considering the multi-modal representations of 3D objects in practice, we propose an inductive multi-hypergraph learning algorithm, which targets on learning an optimal projection for the multi-modal training data. In this method, all the training data are formulated in multi-hypergraph based on the features, and the inductive learning is conducted to learn the projection matrices and the optimal multi-hypergraph combination weights simultaneously. Different from the transductive learning on hypergraph, the high cost training process is off-line, and the testing process is very efficient for the inductive learning on hypergraph. We have conducted experiments on two 3D benchmarks, i.e., the NTU and the ModelNet40 data sets, and compared the proposed algorithm with the state-of-the-art methods and traditional transductive multi-hypergraph learning methods. Experimental results have demonstrated that the proposed method can achieve effective and efficient classification performance.) <|cite_end|> <|cite_start|> (Reference: Hypergraph Neural Networks: In this paper, we present a hypergraph neural networks (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, traditional hypergraph learning procedure can be conducted using hyperedge convolution operations efficiently. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods.) <|cite_end|> <|cite_start|> (Reference: HGNN+: General Hypergraph Neural Networks: Graph Neural Networks have attracted increasing attention in recent years. However, existing GNN frameworks are deployed based upon simple graphs, which limits their applications in dealing with complex data correlation of multi-modal/multi-type data in practice. A few hypergraph-based methods have recently been proposed to address the problem of multi-modal/multi-type data correlation by directly concatenating the hypergraphs constructed from each single individual modality/type, which is difficult to learn an adaptive weight for each modality/type. In this paper, we extend the original conference version HGNN, and introduce a general high-order multi-modal/multi-type data correlation modeling framework called HGNN$^+$ to learn an optimal representation in a single hypergraph based framework. It is achieved by bridging multi-modal/multi-type data and hyperedge with hyperedge groups. Specifically, in our method, hyperedge groups are first constructed to represent latent high-order correlations in each specific modality/type with explicit or implicit graph structures. An adaptive hyperedge group fusion strategy is then used to effectively fuse the correlations from different modalities/types in a unified hypergraph. After that a new hypergraph convolution scheme performed in spatial domain is used to learn a general data representation for various tasks. We have evaluated this framework on several popular datasets and compared it with recent state-of-the-art methods. The comprehensive evaluations indicate that the proposed HGNN$^+$ framework can consistently outperform existing methods with a significant margin, especially when modeling implicit data correlations. We also release a toolbox called THU-DeepHypergraph for the proposed framework, which can be used for various of applications, such as data classification, retrieval and recommendation.) <|cite_end|>, and develop graph convolutional networks (GCNs) to propagate features among views, called \emph{View Graph} in Figure~\ref{fig:view_graph}. Methods in this category show both flexibility and promising gains, but they require constructing a view graph for each 3D shape according to the positions of the camera viewpoints, which introduces additional computational overhead. Meanwhile, the viewpoints may be unknown, and message propagation on the graph may not be straightforward for distant views.
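To make the \emph{View Graph} idea concrete, the following PyTorch-style sketch (our own schematic; the mean-aggregation rule and all names are illustrative assumptions rather than the update rule of any cited work) propagates features between neighbouring views over a per-shape adjacency matrix:

```python
import torch
import torch.nn as nn

class ViewGraphConv(nn.Module):
    """One schematic graph-convolution step over a view graph: each view
    feature is updated from the features of its neighbouring views."""
    def __init__(self, dim=256):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):        # x: (B, V, D); adj: (V, V) with self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        msg = (adj / deg) @ x         # mean aggregation over neighbouring views
        return torch.relu(self.lin(msg))

# adj must be built per shape from the camera positions, e.g. by connecting
# each viewpoint to its k nearest viewpoints, which is exactly the extra
# construction overhead discussed above.
```

Some other methods also explore rotations <|cite_start|> (Reference: RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints: We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category.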
Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during the training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, and this property makes it useful in practical scenarios where only partial views are available. Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important to maintain high accuracy in both object categorization and pose estimation. Effectiveness of RotationNet is demonstrated by its superior performance to the state-of-the-art methods of 3D object classification on 10- and 40-class ModelNet datasets. We also show that RotationNet, even trained without known poses, achieves the state-of-the-art performance on an object pose estimation dataset. The code is available on https://github.com/kanezaki/rotationnet) <|cite_end|> <|cite_start|> (Reference: Equivariant Multi-View Networks: Several popular approaches to 3D vision tasks process multiple views of the input independently with deep neural networks pre-trained on natural images, achieving view permutation invariance through a single round of pooling over all views. We argue that this operation discards important information and leads to subpar global descriptors. In this paper, we propose a group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, enabling, thus, joint reasoning over all views in an equivariant (instead of invariant) fashion, up to the very last layer. We further develop this idea to operate on smaller discrete homogeneous spaces of the rotation group, where a polar view representation is used to maintain equivariance with only a fraction of the number of input views. We set the new state of the art in several large scale 3D shape retrieval tasks, and show additional applications to panoramic scene classification.) <|cite_end|>, region-to-region relations <|cite_start|> (Reference: Learning relationships for multi-view 3D object recognition: Recognizing 3D object has attracted plenty of attention recently, and view-based methods have achieved best results until now. However, previous view-based methods ignore the region-to-region and view-to-view relationships between different view images, which are crucial for multi-view 3D object representation. To tackle this problem, we propose a Relation Network to effectively connect corresponding regions from different viewpoints, and therefore reinforce the information of individual view image. In addition, the Relation Network exploits the inter-relationships over a group of views, and integrates those views to obtain a discriminative 3D object representation. Systematic experiments conducted on ModelNet dataset demonstrate the effectiveness of our proposed methods for both 3D object recognition and retrieval tasks.) <|cite_end|>, multi-layered height-maps <|cite_start|> (Reference: Learning 3D Shapes as Multi-Layered Height-maps using 2D Convolutional Networks: We present a novel global representation of 3D shapes, suitable for the application of 2D CNNs. We represent 3D shapes as multi-layered height-maps (MLH) where at each grid location, we store multiple instances of height maps, thereby representing 3D shape detail that is hidden behind several layers of occlusion. 
We provide a novel view merging method for combining view dependent information (e.g., MLH descriptors) from multiple views. Because of the ability of using 2D CNNs, our method is highly memory efficient in terms of input resolution compared to the voxel based input. Together with MLH descriptors and our multi view merging, we achieve the state-of-the-art result in classification on ModelNet dataset.) <|cite_end|>, view correspondences <|cite_start|> (Reference: Multi-View 3D Shape Recognition via Correspondence-Aware Deep Learning: In recent years, multi-view learning has emerged as a promising approach for 3D shape recognition, which identifies a 3D shape based on its 2D views taken from different viewpoints. Usually, the correspondences inside a view or across different views encode the spatial arrangement of object parts and the symmetry of the object, which provide useful geometric cues for recognition. However, such view correspondences have not been explicitly and fully exploited in existing work. In this paper, we propose a correspondence-aware representation (CAR) module, which explicitly finds potential intra-view correspondences and cross-view correspondences via $k$ NN search in semantic space and then aggregates the shape features from the correspondences via learned transforms. Particularly, the spatial relations of correspondences in terms of their viewpoint positions and intra-view locations are taken into account for learning correspondence-aware features. Incorporating the CAR module into a ResNet-18 backbone, we propose an effective deep model called CAR-Net for 3D shape classification and retrieval. Extensive experiments have demonstrated the effectiveness of the CAR module as well as the excellent performance of the CAR-Net.) <|cite_end|>, viewpoint selection <|cite_start|> (Reference: MVTN: Multi-View Transformation Network for 3D Shape Recognition: Multi-view projection methods have demonstrated their ability to reach state-of-the-art performance on 3D shape recognition. Those methods learn different ways to aggregate information from multiple views. However, the camera view-points for those views tend to be heuristically set and fixed for all shapes. To circumvent the lack of dynamism of current multi-view methods, we propose to learn those view-points. In particular, we introduce the Multi-View Transformation Network (MVTN) that regresses optimal view-points for 3D shape recognition, building upon advances in differentiable rendering. As a result, MVTN can be trained end-to-end along with any multi-view network for 3D shape classification. We integrate MVTN in a novel adaptive multi-view pipeline that can render either 3D meshes or point clouds. MVTN exhibits clear performance gains in the tasks of 3D shape classification and 3D shape retrieval without the need for extra training supervision. In these tasks, MVTN achieves state-of-the-art performance on ModelNet40, ShapeNet Core55, and the most recent and realistic ScanObjectNN dataset (up to 6% improvement). Interestingly, we also show that MVTN can provide network robustness against rotation and occlusion in the 3D domain. The code is available at https://github.com/ajhamdi/MVTN .) <|cite_end|>, and voint cloud representations <|cite_start|> (Reference: Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding: Multi-view projection methods have demonstrated promising performance on 3D understanding tasks like 3D classification and segmentation.
However, it remains unclear how to combine such multi-view methods with the widely available 3D point clouds. Previous methods use unlearned heuristics to combine features at the point level. To this end, we introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from several view-points. This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation. Naturally, we can equip this new representation with convolutional and pooling operations. We deploy a Voint neural network (VointNet) to learn representations in the Voint space. Our novel representation achieves \sota performance on 3D classification, shape retrieval, and robust 3D part segmentation on standard benchmarks (ScanObjectNN, ShapeNet Core55, and ShapeNet Parts).) <|cite_end|> when recognizing 3D shapes. These methods can hardly be divided into the above categories, but the multi-view correlations they exploit still need to be enriched. By revisiting existing works, two aspects are identified as critical for improving multi-view 3D shape analysis, although they are not explicitly pointed out in previous literature. The first is how to organize the views so that they can communicate flexibly and freely. The second is how to model multi-view correlations directly and explicitly. It is worth noting that the second ingredient is usually coupled with the first, just like the GCNs designed for view graphs and the RNNs customized for view sequences. In this paper, we propose to organize the multiple views of a 3D shape into a more flexible structure, a \emph{View Set}, shown in Figure~\ref{fig:view_set}, whose elements are permutation invariant. This is consistent with the fact that 3D shape understanding does not depend on the order of the input views. For example, in Figure~\ref{fig:view_sequence}, whether the side view is placed first, in the middle, or last in the inputs, the recognition result produced by the model should always be \verb|airplane|. Unlike the existing methods analyzed above, this perspective removes inappropriate assumptions and restrictions about the relations between the views, and is thus more practical and reasonable in real-world applications. More importantly, a \emph{ViewSet Transformer} (\textbf{VSFormer}) is devised to release the power of multiple views: it adaptively learns the pairwise and higher-order relations among the views and integrates multi-view information. The attention architecture is a natural choice because it aligns with the characteristics of a view set. First, we theoretically reveal that the Cartesian product of a view set can be formulated by the correlation matrix, which can be decomposed into attention operations mathematically. Second, the attention mechanism is essentially a set operator and is inherently good at capturing correlations between the elements of a set. Third, this mechanism is flexible enough that it makes minimal assumptions about the inputs, which matches our expectation that there are no predefined relations or restrictions for views. Overall, the proposed approach presents a one-stop solution that directly captures the correlations of all view pairs in the set, which promotes the flexible and free exchange of multi-view information.
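As a concrete illustration of this design philosophy, the following PyTorch-style sketch (our own minimal rendition; the layer sizes, names, and residual layout are illustrative assumptions, not the actual VSFormer implementation) shows a permutation-invariant self-attention block over a view set, in which the attention matrix scores every view pair in a single step:

```python
import torch
import torch.nn as nn

class SetAttentionBlock(nn.Module):
    """Schematic self-attention over a view set: the attention matrix scores
    every view pair, so all pairwise correlations are captured at once."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                 nn.Linear(dim * 4, dim))

    def forward(self, x):  # x: (B, V, D) view features; no position encoding, no class token
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

# A shallow stack suffices for small view sets; a permutation-invariant
# mean (or max) over the V axis then yields the set-level descriptor.
```

Several critical designs are presented in VSFormer, and they are already visible in the sketch above. (1) The position encodings of input views are removed since views are permutation invariant.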
(2) The class token is removed because it is irrelevant to capturing the correlations of view pairs in the set. (3) The number of attention blocks is greatly reduced, as the size of a view set is relatively small ($\leq$ 20 in most cases). The details of the proposed approach will be elaborated in Section~\ref{sec:method}. Systematic experiments suggest that VSFormer, built around the flexible set organization and explicit relation modeling, unleashes remarkable capabilities and obtains new records in downstream tasks. In short, the contributions of this paper include: \begin{itemize} \item We identify two key aspects of multi-view 3D shape understanding, organizing views reasonably and modeling their relations explicitly, which are critical for performance improvement yet absent from previous literature. \item We propose a Transformer-based model, named VSFormer, to capture the correlations of all view pairs directly for better multi-view information exchange and fusion. A theoretical analysis is provided to support this design. \item Extensive experiments demonstrate the superb performance of the proposed approach, and the ablation studies shed light on the various sources of performance gains. For the recognition task, VSFormer reaches 98.4\%(+4.1\%), 95.9\%(+1.9\%), 98.8\%(+1.1\%) overall accuracy on RGBD, ScanObjectNN, ModelNet40, respectively. The results surpass all existing methods and set new state-of-the-art records. For 3D shape retrieval, VSFormer also sets new records in multiple dimensions on the SHREC'17 benchmark. \end{itemize} \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{figures/architecture.pdf} \end{center} \caption{\textbf{The overall architecture of VSFormer.} It consists of four modules: Initializer (Init), Encoder, Transition (Transit) and Decoder. The Encoder is responsible for capturing pairwise and higher-order correlations of the views in a set.} \label{fig:architecture} \end{figure*} Related Work In this section, we review multi-view 3D shape analysis methods, examine how sets and attention have been deployed in these methods, and discuss the latest progress in the field. \subsection{Multi-view 3D Shape Analysis} Existing methods aggregate multi-view information for 3D shape understanding in different ways. \subsubsection{Independent Views} Early works like the MVCNN series <|cite_start|> (Reference: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015: ) <|cite_end|> <|cite_start|> (Reference: A Deeper Look at 3D Shape Classifiers: We investigate the role of representations and architectures for classifying 3D shapes in terms of their computational efficiency, generalization, and robustness to adversarial transformations. By varying the number of training examples and employing cross-modal transfer learning we study the role of initialization of existing deep architectures for 3D shape classification. Our analysis shows that multiview methods continue to offer the best generalization even without pretraining on large labeled image datasets, and even when trained on simplified inputs such as binary silhouettes. Furthermore, the performance of voxel-based 3D convolutional networks and point-based architectures can be improved via cross-modal transfer from image representations. Finally, we analyze the robustness of 3D shape classifiers to adversarial transformations and present a novel approach for generating adversarial perturbations of a 3D shape for multiview classifiers using a differentiable renderer.
We find that point-based networks are more robust to point position perturbations while voxel-based and multiview networks are easily fooled with the addition of imperceptible noise to the input.) <|cite_end|>and its follow-up works <|cite_start|> (Reference: GVCNN: Group-view convolutional neural networks for 3D shape recognition: 3D shape recognition has attracted much attention recently. Its recent advances advocate the usage of deep features and achieve the state-of-the-art performance. However, existing deep features for 3D shape recognition are restricted to a view-to-shape setting, which learns the shape descriptor from the view-level feature directly. Despite the exciting progress on view-based 3D shape description, the intrinsic hierarchical correlation and discriminability among views have not been well exploited, which is important for 3D shape representation. To tackle this issue, in this paper, we propose a group-view convolutional neural network (GVCNN) framework for hierarchical correlation modeling towards discriminative 3D shape description. The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., from the view level, the group level and the shape level, which are organized using a grouping strategy. Concretely, we first use an expanded CNN to extract a view level descriptor. Then, a grouping module is introduced to estimate the content discrimination of each view, based on which all views can be splitted into different groups according to their discriminative level. A group level description can be further generated by pooling from view descriptors. Finally, all group level descriptors are combined into the shape level descriptor according to their discriminative weights. Experimental results and comparison with state-of-the-art methods show that our proposed GVCNN method can achieve a significant performance gain on both the 3D shape classification and retrieval tasks.) <|cite_end|> <|cite_start|> (Reference: {Multi-view Harmonized Bilinear Network for 3D Object Recognition: View-based methods have achieved considerable success in 3D object recognition tasks. Different from existing view-based methods pooling the view-wise features, we tackle this problem from the perspective of patches-to-patches similarity measurement. By exploiting the relationship between polynomial kernel and bilinear pooling, we obtain an effective 3D object representation by aggregating local convolutional features through bilinear pooling. Meanwhile, we harmonize different components inherited in the bilinear feature to obtain a more discriminative representation. To achieve an end-to-end trainable framework, we incorporate the harmonized bilinear pooling as a layer of a network, constituting the proposed Multi-view Harmonized Bilinear Network (MHBN). Systematic experiments conducted on two public benchmark datasets demonstrate the efficacy of the proposed methods in 3D object recognition.) <|cite_end|> <|cite_start|> (Reference: PVNet: A Joint Convolutional Network of Point Cloud and Multi-View for 3D Shape Recognition: 3D object recognition has attracted wide research attention in the field of multimedia and computer vision. With the recent proliferation of deep learning, various deep models with different representations have achieved the state-of-the-art performance. Among them, point cloud and multi-view based 3D shape representations are promising recently, and their corresponding deep models have shown significant performance on 3D shape recognition. 
However, there is little effort concentrating point cloud data and multi-view data for 3D shape representation, which is, in our consideration, beneficial and compensated to each other. In this paper, we propose the Point-View Network (PVNet), the first framework integrating both the point cloud and the multi-view data towards joint 3D shape recognition. More specifically, an embedding attention fusion scheme is proposed that could employ high-level features from the multi-view data to model the intrinsic correlation and discriminability of different structure features from the point cloud data. In particular, the discriminative descriptions are quantified and leveraged as the soft attention mask to further refine the structure feature of the 3D shape. We have evaluated the proposed method on the ModelNet40 dataset for 3D shape classification and retrieval tasks. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework can achieve superior performance.) <|cite_end|> <|cite_start|> (Reference: Angular Triplet-Center Loss for Multi-view 3D Shape Retrieval: How to obtain the desirable representation of a 3D shape, which is discriminative across categories and polymerized within classes, is a significant challenge in 3D shape retrieval. Most existing 3D shape retrieval methods focus on capturing strong discriminative shape representation with softmax loss for the classification task, while the shape feature learning with metric loss is neglected for 3D shape retrieval. In this paper, we address this problem based on the intuition that the cosine distance of shape embeddings should be close enough within the same class and far away across categories. Since most of 3D shape retrieval tasks use cosine distance of shape features for measuring shape similarity, we propose a novel metric loss named angular triplet-center loss, which directly optimizes the cosine distances between the features. It inherits the triplet-center loss property to achieve larger inter-class distance and smaller intra-class distance simultaneously. Unlike previous metric loss utilized in 3D shape retrieval methods, where Euclidean distance is adopted and the margin design is difficult, the proposed method is more convenient to train feature embeddings and more suitable for 3D shape retrieval. Moreover, the angle margin is adopted to replace the cosine margin in order to provide more explicit discriminative constraints on an embedding space. Extensive experimental results on two popular 3D object retrieval benchmarks, ModelNet40 and ShapeNetCore 55, demonstrate the effectiveness of our proposed loss, and our method has achieved state-of-the-art results on various 3D shape datasets.) <|cite_end|> <|cite_start|> (Reference: British Machine Vision Conference (BMVC): ) <|cite_end|> <|cite_start|> (Reference: Learning Discriminative 3D Shape Representations by View Discerning Networks: In view-based 3D shape recognition, extracting discriminative visual representation of 3D shapes from projected images is considered the core problem. Projections with low discriminative ability can adversely influence the final 3D shape representation. Especially under the real situations with background clutter and object occlusion, the adverse effect is even more severe. To resolve this problem, we propose a novel deep neural network, View Discerning Network, which learns to judge the quality of views and adjust their contributions to the representation of shapes. 
In this network, a Score Generation Unit is devised to evaluate the quality of each projected image with score vectors. These score vectors are used to weight the image features and the weighted features perform much better than original features in 3D shape recognition task. In particular, we introduce two structures of Score Generation Unit, Channel-wise Score Unit and Part-wise Score Unit, to assess the quality of feature maps from different perspectives. Our network aggregates features and scores in an end-to-end framework, so that final shape descriptors are directly obtained from its output. Our experiments on ModelNet and ShapeNet Core55 show that View Discerning Network outperforms the state-of-the-arts in terms of the retrieval task, with excellent robustness against background clutter and object occlusion.) <|cite_end|> <|cite_start|> (Reference: 3D object representation learning: A set-to-set matching perspective: In this paper, we tackle the 3D object representation learning from the perspective of set-to-set matching. Given two 3D objects, calculating their similarity is formulated as the problem of set-to-set similarity measurement between two set of local patches. As local convolutional features from convolutional feature maps are natural representations of local patches, the set-to-set matching between sets of local patches is further converted into a local features pooling problem. To highlight good matchings and suppress the bad ones, we exploit two pooling methods: 1) bilinear pooling and 2) VLAD pooling. We analyze their effectiveness in enhancing the set-to-set matching and meanwhile establish their connection. Moreover, to balance different components inherent in a bilinear-pooled feature, we propose the harmonized bilinear pooling operation, which follows the spirits of intra-normalization used in VLAD pooling. To achieve an end-to-end trainable framework, we implement the proposed harmonized bilinear pooling and intra-normalized VLAD as two layers to construct two types of neural network, multi-view harmonized bilinear network (MHBN) and multi-view VLAD network (MVLADN). Systematic experiments conducted on two public benchmark datasets demonstrate the efficacy of the proposed MHBN and MVLADN in 3D object recognition.) <|cite_end|>extract view features independently with a shared CNN and then fuse the extracted features using a pooling operation or one of its variants. This simple strategy may discard a lot of useful information; the views are not treated as a coherent whole, so the information flow among views needs to be strengthened. \subsubsection{View Sequence} Aware of these problems, researchers have proposed various schemes that organize the multiple views of a 3D shape into specific data structures. For example, RNN-based <|cite_start|> (Reference: Emphasizing 3D Properties in Recurrent Multi-View Aggregation for 3D Shape Retrieval: Multi-view based shape descriptors have achieved impressive performance for 3D shape retrieval. The core of view-based methods is to interpret 3D structures through 2D observations. However, most existing methods pay more attention to discriminative models and none of them necessarily incorporate the 3D properties of the objects. To resolve this problem, we propose an encoder-decoder recurrent feature aggregation network (ERFA-Net) to emphasize the 3D properties of 3D shapes in multi-view features aggregation.
In our network, a view sequence of the shape is trained to encode a discriminative shape embedding and estimate unseen rendered views of any viewpoints. This generation task gives an effective supervision which makes the network exploit 3D properties of shapes through various 2D images. During feature aggregation, a discriminative feature representation across multiple views is effectively exploited based on LSTM network. The proposed 3D representation has following advantages against other state-of-the-art: 1) it performs robust discrimination under the existence of noise such as view missing and occlusion, because of the improvement brought by 3D properties. 2) it has strong generative capabilities, which is useful for various 3D shape tasks. We evaluate ERFA-Net on two popular 3D shape datasets, ModelNet and ShapeNetCore55, and ERFA-Net outperforms the state-of-the-art methods significantly. Extensive experiments show the effectiveness and robustness of the proposed 3D representation.) <|cite_end|> <|cite_start|> (Reference: {SeqViews2SeqLabels: Learning 3D global features via aggregating sequential views by RNN with attention: Learning 3D global features by aggregating multiple views has been introduced as a successful strategy for 3D shape analysis. In recent deep learning models with end-to-end training, pooling is a widely adopted procedure for view aggregation. However, pooling merely retains the max or mean value over all views, which disregards the content information of almost all views and also the spatial information among the views. To resolve these issues, we propose Sequential Views To Sequential Labels (SeqViews2SeqLabels) as a novel deep learning model with an encoder–decoder structure based on recurrent neural networks (RNNs) with attention. SeqViews2SeqLabels consists of two connected parts, an encoder-RNN followed by a decoder-RNN, that aim to learn the global features by aggregating sequential views and then performing shape classification from the learned global features, respectively. Specifically, the encoder-RNN learns the global features by simultaneously encoding the spatial and content information of sequential views, which captures the semantics of the view sequence. With the proposed prediction of sequential labels, the decoder-RNN performs more accurate classification using the learned global features by predicting sequential labels step by step. Learning to predict sequential labels provides more and finer discriminative information among shape classes to learn, which alleviates the overfitting problem inherent in training using a limited number of 3D shapes. Moreover, we introduce an attention mechanism to further improve the discriminative ability of SeqViews2SeqLabels. This mechanism increases the weight of views that are distinctive to each shape class, and it dramatically reduces the effect of selecting the first view position. Shape classification and retrieval results under three large-scale benchmarks verify that SeqViews2SeqLabels learns more discriminative global features by more effectively aggregating sequential views than state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: {3D2SeqViews: Aggregating sequential views for 3D global feature learning by CNN with hierarchical attention aggregation: Learning 3D global features by aggregating multiple views is important. Pooling is widely used to aggregate views in deep learning models. 
However, pooling disregards a lot of content information within views and the spatial relationship among the views, which limits the discriminability of learned features. To resolve this issue, 3D to Sequential Views (3D2SeqViews) is proposed to more effectively aggregate the sequential views using convolutional neural networks with a novel hierarchical attention aggregation. Specifically, the content information within each view is first encoded. Then, the encoded view content information and the sequential spatiality among the views are simultaneously aggregated by the hierarchical attention aggregation, where view-level attention and class-level attention are proposed to hierarchically weight sequential views and shape classes. View-level attention is learned to indicate how much attention is paid to each view by each shape class, which subsequently weights sequential views through a novel recursive view integration. Recursive view integration learns the semantic meaning of view sequence, which is robust to the first view position. Furthermore, class-level attention is introduced to describe how much attention is paid to each shape class, which innovatively employs the discriminative ability of the fine-tuned network. 3D2SeqViews learns more discriminative features than the state-of-the-art, which leads to the outperforming results in shape classification and retrieval under three large-scale benchmarks.) <|cite_end|> <|cite_start|> (Reference: VERAM: View-Enhanced Recurrent Attention Model for 3D Shape Classification: Multi-view deep neural network is perhaps the most successful approach in 3D shape classification. However, the fusion of multi-view features based on max or average pooling lacks a view selection mechanism, limiting its application in, e.g., multi-view active object recognition by a robot. This paper presents VERAM, a recurrent attention model capable of actively selecting a sequence of views for highly accurate 3D shape classification. VERAM addresses an important issue commonly found in existing attention-based models, i.e., the unbalanced training of the subnetworks corresponding to next view estimation and shape classification. The classification subnetwork is easily overfitted while the view estimation one is usually poorly trained, leading to a suboptimal classification performance. This is surmounted by three essential view-enhancement strategies: 1) enhancing the information flow of gradient backpropagation for the view estimation subnetwork, 2) devising a highly informative reward function for the reinforcement training of view estimation and 3) formulating a novel loss function that explicitly circumvents view duplication. Taking grayscale image as input and AlexNet as CNN architecture, VERAM with 9 views achieves instance-level and class-level accuracy of 95:5% and 95:3% on ModelNet10, 93:7% and 92:1% on ModelNet40, both are the state-of-the-art performance under the same number of views.) <|cite_end|> <|cite_start|> (Reference: Learning Multi-View Representation With LSTM for 3-D Shape Recognition and Retrieval: Shape representation for 3-D models is an important topic in computer vision, multimedia analysis, and computer graphics. Recent multiview-based methods demonstrate promising performance for 3-D shape recognition and retrieval. However, most multiview-based methods ignore the correlations of multiple views or suffer from high computional cost. 
In this paper, we propose a novel multiview-based network architecture for 3-D shape recognition and retrieval. Our network combines convolutional neural networks (CNNs) with long short-term memory (LSTM) to exploit the correlative information from multiple views. Well-pretrained CNNs with residual connections are first used to extract a low-level feature of each view image rendered from a 3-D shape. Then, a LSTM and a sequence voting layer are employed to aggregate these features into a shape descriptor. The highway network and a three-step training strategy are also adopted to boost the optimization of the deep network. Experimental results on two public datasets demonstrate that the proposed method achieves promising performance for 3-D shape recognition and the state-of-the-art performance for the 3-D shape retrieval.) <|cite_end|>and ViT-based <|cite_start|> (Reference: MVT: Multi-view Vision Transformer for 3D Object Recognition: Inspired by the great success achieved by CNN in image recognition, view-based methods applied CNNs to model the projected views for 3D object understanding and achieved excellent performance. Nevertheless, multi-view CNN models cannot model the communications between patches from different views, limiting its effectiveness in 3D object recognition. Inspired by the recent success gained by vision Transformer in image recognition, we propose a Multi-view Vision Transformer (MVT) for 3D object recognition. Since each patch feature in a Transformer block has a global reception field, it naturally achieves communications between patches from different views. Meanwhile, it takes much less inductive bias compared with its CNN counterparts. Considering both effectiveness and efficiency, we develop a global-local structure for our MVT. Our experiments on two public benchmarks, ModelNet40 and ModelNet10, demonstrate the competitive performance of our MVT.) <|cite_end|> <|cite_start|> (Reference: Multi-range view aggregation network with vision transformer feature fusion for 3D object retrieval: View-based methods have achieved state-of-the-art performance in 3D object retrieval. However, view-based methods still encounter two major challenges. The first is how to leverage the inter-view correlation to enhance view-level visual features. The second is how to effectively fuse view-level features into a discriminative global descriptor. Towards these two challenges, we propose a multi-range view aggregation network (MRVA-Net) with a vision transformer based feature fusion scheme for 3D object retrieval. Unlike the existing methods which only consider aggregating neighboring or adjacent views which could bring in redundant information, we propose a multi-range view aggregation module to enhance individual view representations through view aggregation beyond only neighboring views but also incorporate the views at different ranges. Furthermore, to generate the global descriptor from view-level features, we propose to employ the multi-head self-attention mechanism introduced by vision transformer to fuse the view-level features. Extensive experiments conducted on three public datasets including ModelNet40, ShapeNet Core55 and MCB-A demonstrate the superiority of the proposed network over the state-of-the-art methods in 3D object retrieval.) <|cite_end|>methods are proposed to operate on the view sequence. 
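To make the contrast between pooling-based fusion and sequence-based aggregation concrete, the following is a minimal PyTorch sketch. It is not taken from any of the cited papers; the stand-in encoder, the feature dimensions, and the last-step readout are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class ViewPoolingFusion(nn.Module):
    """MVCNN-style fusion: encode each view independently with a shared
    CNN, then element-wise max-pool over the view axis. Permutation-
    invariant, but views never interact before the pooling step."""
    def __init__(self, backbone, feat_dim=512, num_classes=40):
        super().__init__()
        self.backbone = backbone  # shared per-view CNN -> (B*V, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views):
        b, v, c, h, w = views.shape            # (batch, views, 3, H, W)
        f = self.backbone(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        shape_desc, _ = f.max(dim=1)           # keeps only per-channel max
        return self.classifier(shape_desc)

class ViewSequenceFusion(nn.Module):
    """Sequence-style fusion: feed the ordered view features to an LSTM,
    so information can flow along the view sequence."""
    def __init__(self, backbone, feat_dim=512, hidden=256, num_classes=40):
        super().__init__()
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, views):
        b, v, c, h, w = views.shape
        f = self.backbone(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        out, _ = self.lstm(f)                  # (b, v, hidden); order matters
        return self.classifier(out[:, -1])     # read out the last step

# usage with a tiny stand-in encoder (a real model would use e.g. a ResNet)
enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(16, 512))
views = torch.randn(2, 12, 3, 64, 64)          # 2 shapes, 12 views each
print(ViewPoolingFusion(enc)(views).shape)     # torch.Size([2, 40])
print(ViewSequenceFusion(enc)(views).shape)    # torch.Size([2, 40])
\end{verbatim}
The pooling variant retains only the per-channel maximum over views, which is exactly the information loss criticized above; the LSTM variant propagates information along the sequence but is sensitive to the choice of the starting view.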
\subsubsection{View Graph} The graph-based models <|cite_start|> (Reference: Hypergraph Neural Networks: In this paper, we present a hypergraph neural networks (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, traditional hypergraph learning procedure can be conducted using hyperedge convolution operations efficiently. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods.) <|cite_end|> <|cite_start|> (Reference: Inductive multi-hypergraph learning and its application on view-based 3d object classification: The wide 3D applications have led to increasing amount of 3D object data, and thus effective 3D object classification technique has become an urgent requirement. One important and challenging task for 3D object classification is how to formulate the 3D data correlation and exploit it. Most of the previous works focus on learning optimal pairwise distance metric for object comparison, which may lose the global correlation among 3D objects. Recently, a transductive hypergraph learning has been investigated for classification, which can jointly explore the correlation among multiple objects, including both the labeled and unlabeled data. Although these methods have shown better performance, they are still limited due to 1) a considerable amount of testing data may not be available in practice and 2) the high computational cost to test new coming data. To handle this problem, considering the multi-modal representations of 3D objects in practice, we propose an inductive multi-hypergraph learning algorithm, which targets on learning an optimal projection for the multi-modal training data. In this method, all the training data are formulated in multi-hypergraph based on the features, and the inductive learning is conducted to learn the projection matrices and the optimal multi-hypergraph combination weights simultaneously. Different from the transductive learning on hypergraph, the high cost training process is off-line, and the testing process is very efficient for the inductive learning on hypergraph. We have conducted experiments on two 3D benchmarks, i.e., the NTU and the ModelNet40 data sets, and compared the proposed algorithm with the state-of-the-art methods and traditional transductive multi-hypergraph learning methods. Experimental results have demonstrated that the proposed method can achieve effective and efficient classification performance. We also note that the proposed method is a general framework and has the potential to be applied in other applications in practice.) 
<|cite_end|> <|cite_start|> (Reference: View-GCN: View-Based Graph Convolutional Network for 3D Shape Analysis: View-based approach that recognizes 3D shape through its projected 2D images has achieved state-of-the-art results for 3D shape recognition. The major challenge for view-based approach is how to aggregate multi-view features to be a global shape descriptor. In this work, we propose a novel view-based Graph Convolutional Neural Network, dubbed as view-GCN, to recognize 3D shape based on graph representation of multiple views in flexible view configurations. We first construct view-graph with multiple views as graph nodes, then design a graph convolutional neural network over view-graph to hierarchically learn discriminative shape descriptor considering relations of multiple views. The view-GCN is a hierarchical network based on local and non-local graph convolution for feature transform, and selective view-sampling for graph coarsening. Extensive experiments on benchmark datasets show that view-GCN achieves state-of-the-art results for 3D shape classification and retrieval.) <|cite_end|> <|cite_start|> (Reference: Learning View-Based Graph Convolutional Network for Multi-View 3D Shape Analysis: View-based approach that recognizes 3D shape through its projected 2D images has achieved state-of-the-art results for 3D shape recognition. The major challenges are how to aggregate multi-view features and deal with 3D shapes in arbitrary poses. We propose two versions of a novel view-based Graph Convolutional Network, dubbed view-GCN and view-GCN++, to recognize 3D shape based on graph representation of multiple views. We first construct view-graph with multiple views as graph nodes, then design two graph convolutional networks over the view-graph to hierarchically learn discriminative shape descriptor considering relations of multiple views. Specifically, view-GCN is a hierarchical network based on two pivotal operations, i.e., feature transform based on local positional and non-local graph convolution, and graph coarsening based on a selective view-sampling operation. To deal with rotation sensitivity, we further propose view-GCN++ with local attentional graph convolution operation and rotation robust view-sampling operation for graph coarsening. By these designs, view-GCN++ achieves invariance to transformations under the finite subgroup of rotation group SO(3). Extensive experiments on benchmark datasets (i.e., ModelNet40, ScanObjectNN, RGBD and ShapeNet Core55) show that view-GCN and view-GCN++ achieve state-of-the-art results for 3D shape classification and retrieval tasks under aligned and rotated settings.) <|cite_end|> <|cite_start|> (Reference: HGNN+: General Hypergraph Neural Networks: Graph Neural Networks have attracted increasing attention in recent years. However, existing GNN frameworks are deployed based upon simple graphs, which limits their applications in dealing with complex data correlation of multi-modal/multi-type data in practice. A few hypergraph-based methods have recently been proposed to address the problem of multi-modal/multi-type data correlation by directly concatenating the hypergraphs constructed from each single individual modality/type, which is difficult to learn an adaptive weight for each modality/type. 
In this paper, we extend the original conference version HGNN, and introduce a general high-order multi-modal/multi-type data correlation modeling framework called HGNN$^+$ to learn an optimal representation in a single hypergraph based framework. It is achieved by bridging multi-modal/multi-type data and hyperedge with hyperedge groups. Specifically, in our method, hyperedge groups are first constructed to represent latent high-order correlations in each specific modality/type with explicit or implicit graph structures. An adaptive hyperedge group fusion strategy is then used to effectively fuse the correlations from different modalities/types in a unified hypergraph. After that a new hypergraph convolution scheme performed in spatial domain is used to learn a general data representation for various tasks. We have evaluated this framework on several popular datasets and compared it with recent state-of-the-art methods. The comprehensive evaluations indicate that the proposed HGNN$^+$ framework can consistently outperform existing methods with a significant margin, especially when modeling implicit data correlations. We also release a toolbox called THU-DeepHypergraph for the proposed framework, which can be used for various of applications, such as data classification, retrieval and recommendation.) <|cite_end|> <|cite_start|> (Reference: Walk in Views: Multi-view Path Aggregation Graph Network for 3D Shape Analysis: ) <|cite_end|>model the relations among views as graphs and develop GCNs to capture multi-view interaction. However, message propagation between distant nodes on a view graph is not straightforward, and graph construction incurs additional computational overhead. \subsubsection{View Set} This paper presents a more flexible and practical structure, \emph{View Set}, which neither makes assumptions about the views nor introduces additional overhead. Building on this structure, a view set attention model is devised to adaptively capture the correlations among all view pairs (an illustrative sketch appears at the end of this section). Some other methods also explore rotations <|cite_start|> (Reference: RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints: We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category. Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during the training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, and this property makes it useful in practical scenarios where only partial views are available. Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important to maintain high accuracy in both object categorization and pose estimation.
Effectiveness of RotationNet is demonstrated by its superior performance to the state-of-the-art methods of 3D object classification on 10- and 40-class ModelNet datasets. We also show that RotationNet, even trained without known poses, achieves the state-of-the-art performance on an object pose estimation dataset. The code is available on https://github.com/kanezaki/rotationnet) <|cite_end|> <|cite_start|> (Reference: Equivariant Multi-View Networks: Several popular approaches to 3D vision tasks process multiple views of the input independently with deep neural networks pre-trained on natural images, achieving view permutation invariance through a single round of pooling over all views. We argue that this operation discards important information and leads to subpar global descriptors. In this paper, we propose a group convolutional approach to multiple view aggregation where convolutions are performed over a discrete subgroup of the rotation group, enabling, thus, joint reasoning over all views in an equivariant (instead of invariant) fashion, up to the very last layer. We further develop this idea to operate on smaller discrete homogeneous spaces of the rotation group, where a polar view representation is used to maintain equivariance with only a fraction of the number of input views. We set the new state of the art in several large scale 3D shape retrieval tasks, and show additional applications to panoramic scene classification.) <|cite_end|>, region-to-region relations <|cite_start|> (Reference: Learning relationships for multi-view 3D object recognition: Recognizing 3D object has attracted plenty of attention recently, and view-based methods have achieved best results until now. However, previous view-based methods ignore the region-to-region and view-to-view relationships between different view images, which are crucial for multi-view 3D object representation. To tackle this problem, we propose a Relation Network to effectively connect corresponding regions from different viewpoints, and therefore reinforce the information of individual view image. In addition, the Relation Network exploits the inter-relationships over a group of views, and integrates those views to obtain a discriminative 3D object representation. Systematic experiments conducted on ModelNet dataset demonstrate the effectiveness of our proposed methods for both 3D object recognition and retrieval tasks.) <|cite_end|>, multi-layered height-maps representations <|cite_start|> (Reference: Learning 3D Shapes as Multi-Layered Height-maps using 2D Convolutional Networks: We present a novel global representation of 3D shapes, suitable for the application of 2D CNNs. We represent 3D shapes as multi-layered height-maps (MLH) where at each grid location, we store multiple instances of height maps, thereby representing 3D shape detail that is hidden behind several layers of occlusion. We provide a novel view merging method for combining view dependent information (Eg. MLH descriptors) from multiple views. Because of the ability of using 2D CNNs, our method is highly memory efficient in terms of input resolution compared to the voxel based input. Together with MLH descriptors and our multi view merging, we achieve the state-of-the-art result in classification on ModelNet dataset.) 
<|cite_end|>, view correspondences <|cite_start|> (Reference: Multi-View 3D Shape Recognition via Correspondence-Aware Deep Learning: In recent years, multi-view learning has emerged as a promising approach for 3D shape recognition, which identifies a 3D shape based on its 2D views taken from different viewpoints. Usually, the correspondences inside a view or across different views encode the spatial arrangement of object parts and the symmetry of the object, which provide useful geometric cues for recognition. However, such view correspondences have not been explicitly and fully exploited in existing work. In this paper, we propose a correspondence-aware representation (CAR) module, which explicitly finds potential intra-view correspondences and cross-view correspondences via $k$ NN search in semantic space and then aggregates the shape features from the correspondences via learned transforms. Particularly, the spatial relations of correspondences in terms of their viewpoint positions and intra-view locations are taken into account for learning correspondence-aware features. Incorporating the CAR module into a ResNet-18 backbone, we propose an effective deep model called CAR-Net for 3D shape classification and retrieval. Extensive experiments have demonstrated the effectiveness of the CAR module as well as the excellent performance of the CAR-Net.) <|cite_end|>, viewpoints selection <|cite_start|> (Reference: MVTN: Multi-View Transformation Network for 3D Shape Recognition: Multi-view projection methods have demonstrated their ability to reach state-of-the-art performance on 3D shape recognition. Those methods learn different ways to aggregate information from multiple views. However, the camera view-points for those views tend to be heuristically set and fixed for all shapes. To circumvent the lack of dynamism of current multi-view methods, we propose to learn those view-points. In particular, we introduce the Multi-View Transformation Network (MVTN) that regresses optimal view-points for 3D shape recognition, building upon advances in differentiable rendering. As a result, MVTN can be trained end-to-end along with any multi-view network for 3D shape classification. We integrate MVTN in a novel adaptive multi-view pipeline that can render either 3D meshes or point clouds. MVTN exhibits clear performance gains in the tasks of 3D shape classification and 3D shape retrieval without the need for extra training supervision. In these tasks, MVTN achieves state-of-the-art performance on ModelNet40, ShapeNet Core55, and the most recent and realistic ScanObjectNN dataset (up to 6% improvement). Interestingly, we also show that MVTN can provide network robustness against rotation and occlusion in the 3D domain. The code is available at https://github.com/ajhamdi/MVTN .) <|cite_end|>, voint cloud representations <|cite_start|> (Reference: Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding: Multi-view projection methods have demonstrated promising performance on 3D understanding tasks like 3D classification and segmentation. However, it remains unclear how to combine such multi-view methods with the widely available 3D point clouds. Previous methods use unlearned heuristics to combine features at the point level. To this end, we introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from several view-points. 
This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation. Naturally, we can equip this new representation with convolutional and pooling operations. We deploy a Voint neural network (VointNet) to learn representations in the Voint space. Our novel representation achieves \sota performance on 3D classification, shape retrieval, and robust 3D part segmentation on standard benchmarks (ScanObjectNN, ShapeNet Core55, and ShapeNet Parts).) <|cite_end|>when analyzing 3D shapes, but their multi-view interaction still needs to be strengthened. \subsection{Set in Multi-view 3D Shape Analysis} Previous works also mention ``set'' in multi-view 3D shape analysis, but they generally refer to concepts different from the one proposed here. For instance, RCPCNN <|cite_start|> (Reference: British Machine Vision Conference (BMVC): ) <|cite_end|>introduces a dominant set clustering and pooling module to improve MVCNN <|cite_start|> (Reference: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015: ) <|cite_end|>. Johns \textit{et al.} <|cite_start|> (Reference: In-line phase contrast computed tomography of carbon/carbon composites: X-ray phase contrast computed tomography (PC-CT) permits the non-destructive visualization of the internal structures of low atomic number materials and has become an invaluable analysis tool for the development and the applications of new materials. Here we implement an in-line phase contrast CT imaging technique for Carbon/Carbon composites, which consists of a scanning mode with object offset and the corresponding reconstruction algorithm. At each CT view angle, two original interference pattern intensity projection images with different geometrical magnification are acquired. The corresponding phase integral projection is retrieved from the recorded original images by the detector. Finally the phase contrast CT image is reconstructed by the algorithm from the retrieved projection. This work comprises a numerical study of the method and its experimental verification using one Carbon/Carbon composite dataset measured at an in-line phase contrast CT system with micro-focus X-ray tube source. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of Carbon/Carbon composites. It will be of interest for the applications of in-line phase contrast CT in material science.) <|cite_end|>decompose a sequence of views into a set of view pairs. They classify each pair independently and weigh the contribution of each pair. MHBN <|cite_start|> (Reference: Multi-view Harmonized Bilinear Network for 3D Object Recognition: View-based methods have achieved considerable success in 3D object recognition tasks. Different from existing view-based methods pooling the view-wise features, we tackle this problem from the perspective of patches-to-patches similarity measurement. By exploiting the relationship between polynomial kernel and bilinear pooling, we obtain an effective 3D object representation by aggregating local convolutional features through bilinear pooling. Meanwhile, we harmonize different components inherited in the bilinear feature to obtain a more discriminative representation. To achieve an end-to-end trainable framework, we incorporate the harmonized bilinear pooling as a layer of a network, constituting the proposed Multi-view Harmonized Bilinear Network (MHBN).
Systematic experiments conducted on two public benchmark datasets demonstrate the efficacy of the proposed methods in 3D object recognition.) <|cite_end|>considers patches-to-patches (set-to-set) similarity of different views and aggregates local features using bilinear pooling. Yu \textit{et al.} extend MHBN by introducing an intra-normalized VLAD layer, yielding the multi-view VLAD network (MVLADN).
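To illustrate the view-set idea referenced earlier, here is a hypothetical, minimal PyTorch sketch. It is not the authors' actual model: the single attention layer, the 512-d features, and the mean-pooling readout are our own simplifying assumptions. The point it demonstrates is that self-attention without positional encodings lets every view interact with every other view in one step, and a symmetric readout keeps the shape descriptor invariant to view order.
\begin{verbatim}
import torch
import torch.nn as nn

class ViewSetAttention(nn.Module):
    """Treat the V view features of a shape as an unordered set:
    self-attention models all V*V pairwise correlations at once, and
    mean pooling keeps the descriptor permutation-invariant."""
    def __init__(self, feat_dim=512, num_heads=8, num_classes=40):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats):
        # view_feats: (batch, V, feat_dim). No positional encoding is
        # added, so the module is equivariant to view permutations.
        attended, _ = self.attn(view_feats, view_feats, view_feats)
        x = self.norm(view_feats + attended)   # residual + layer norm
        return self.classifier(x.mean(dim=1))  # symmetric readout

feats = torch.randn(2, 20, 512)       # 2 shapes, 20 views each
logits = ViewSetAttention()(feats)    # (2, 40); permuting the 20 views
                                      # leaves logits unchanged (up to
                                      # floating-point error)
\end{verbatim}
Compared with the graph-based alternative, no adjacency structure has to be constructed or assumed; the all-pairs interaction comes for free from the attention operation.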
[ "<|reference_start|> Hypergraph Neural Networks: In this paper, we present a hypergraph neural networks (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure. Confronting the challenges of learning representation for complex data in real practice, we propose to incorporate such data structure in a hypergraph, which is more flexible on data modeling, especially when dealing with complex data. In this method, a hyperedge convolution operation is designed to handle the data correlation during representation learning. In this way, traditional hypergraph learning procedure can be conducted using hyperedge convolution operations efficiently. HGNN is able to learn the hidden layer representation considering the high-order data structure, which is a general framework considering the complex data correlations. We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. We can also reveal from the results that the proposed HGNN is superior when dealing with multi-modal data compared with existing methods. <|reference_end|>", "<|reference_start|> GVCNN: Group-view convolutional neural networks for 3D shape recognition: 3D shape recognition has attracted much attention recently. Its recent advances advocate the usage of deep features and achieve the state-of-the-art performance. However, existing deep features for 3D shape recognition are restricted to a view-to-shape setting, which learns the shape descriptor from the view-level feature directly. Despite the exciting progress on view-based 3D shape description, the intrinsic hierarchical correlation and discriminability among views have not been well exploited, which is important for 3D shape representation. To tackle this issue, in this paper, we propose a group-view convolutional neural network (GVCNN) framework for hierarchical correlation modeling towards discriminative 3D shape description. The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., from the view level, the group level and the shape level, which are organized using a grouping strategy. Concretely, we first use an expanded CNN to extract a view level descriptor. Then, a grouping module is introduced to estimate the content discrimination of each view, based on which all views can be splitted into different groups according to their discriminative level. A group level description can be further generated by pooling from view descriptors. Finally, all group level descriptors are combined into the shape level descriptor according to their discriminative weights. Experimental results and comparison with state-of-the-art methods show that our proposed GVCNN method can achieve a significant performance gain on both the 3D shape classification and retrieval tasks. <|reference_end|>", "<|reference_start|> HGNN+: General Hypergraph Neural Networks: Graph Neural Networks have attracted increasing attention in recent years. However, existing GNN frameworks are deployed based upon simple graphs, which limits their applications in dealing with complex data correlation of multi-modal/multi-type data in practice. 
A few hypergraph-based methods have recently been proposed to address the problem of multi-modal/multi-type data correlation by directly concatenating the hypergraphs constructed from each single individual modality/type, which is difficult to learn an adaptive weight for each modality/type. In this paper, we extend the original conference version HGNN, and introduce a general high-order multi-modal/multi-type data correlation modeling framework called HGNN<inline-formula><tex-math notation=\"LaTeX\">$^+$</tex-math><alternatives><mml:math><mml:msup><mml:mrow/><mml:mo>+</mml:mo></mml:msup></mml:math><inline-graphic xlink:href=\"feng-ieq1-3182052.gif\"/></alternatives></inline-formula> to learn an optimal representation in a single hypergraph based framework. It is achieved by bridging multi-modal/multi-type data and hyperedge with hyperedge groups. Specifically, in our method, hyperedge groups are first constructed to represent latent high-order correlations in each specific modality/type with explicit or implicit graph structures. An adaptive hyperedge group fusion strategy is then used to effectively fuse the correlations from different modalities/types in a unified hypergraph. After that a new hypergraph convolution scheme performed in spatial domain is used to learn a general data representation for various tasks. We have evaluated this framework on several popular datasets and compared it with recent state-of-the-art methods. The comprehensive evaluations indicate that the proposed HGNN<inline-formula><tex-math notation=\"LaTeX\">$^+$</tex-math><alternatives><mml:math><mml:msup><mml:mrow/><mml:mo>+</mml:mo></mml:msup></mml:math><inline-graphic xlink:href=\"feng-ieq2-3182052.gif\"/></alternatives></inline-formula> framework can consistently outperform existing methods with a significant margin, especially when modeling implicit data correlations. We also release a toolbox called THU-DeepHypergraph for the proposed framework, which can be used for various of applications, such as data classification, retrieval and recommendation. <|reference_end|>", "<|reference_start|> Multi-View 3D Shape Recognition via Correspondence-Aware Deep Learning: In recent years, multi-view learning has emerged as a promising approach for 3D shape recognition, which identifies a 3D shape based on its 2D views taken from different viewpoints. Usually, the correspondences inside a view or across different views encode the spatial arrangement of object parts and the symmetry of the object, which provide useful geometric cues for recognition. However, such view correspondences have not been explicitly and fully exploited in existing work. In this paper, we propose a correspondence-aware representation (CAR) module, which explicitly finds potential intra-view correspondences and cross-view correspondences via $k$ NN search in semantic space and then aggregates the shape features from the correspondences via learned transforms. Particularly, the spatial relations of correspondences in terms of their viewpoint positions and intra-view locations are taken into account for learning correspondence-aware features. Incorporating the CAR module into a ResNet-18 backbone, we propose an effective deep model called CAR-Net for 3D shape classification and retrieval. Extensive experiments have demonstrated the effectiveness of the CAR module as well as the excellent performance of the CAR-Net. <|reference_end|>" ]
[ 16, 27, 45, 51 ]
{"<|multi_cite_1_1|>": "ss-889485", "<|multi_cite_1_2|>": "arxiv-88804", "<|multi_cite_1_3|>": "ss-889486", "<|multi_cite_1_4|>": "arxiv-469795", "<|multi_cite_1_5|>": "arxiv-475406", "<|multi_cite_2_1|>": "arxiv-111622", "<|multi_cite_2_2|>": "ss-832115", "<|multi_cite_2_3|>": "arxiv-146225", "<|multi_cite_2_4|>": "arxiv-200635", "<|multi_cite_2_5|>": "arxiv-180832", "<|multi_cite_2_6|>": "ss-1237296", "<|multi_cite_2_7|>": "arxiv-200151", "<|multi_cite_2_8|>": "arxiv-251310", "<|multi_cite_2_9|>": "arxiv-338644", "<|multi_cite_2_10|>": "ss-1534044", "<|multi_cite_2_11|>": "arxiv-300895", "<|multi_cite_2_12|>": "ss-809612", "<|multi_cite_2_13|>": "arxiv-253934", "<|multi_cite_3_1|>": "arxiv-62554", "<|multi_cite_3_2|>": "ss-1304834", "<|multi_cite_3_3|>": "ss-1194976", "<|multi_cite_3_4|>": "arxiv-140386", "<|multi_cite_3_5|>": "arxiv-153331", "<|multi_cite_4_1|>": "ss-716610", "<|multi_cite_4_2|>": "arxiv-171760", "<|multi_cite_4_3|>": "ss-1258425", "<|multi_cite_4_4|>": "arxiv-170009", "<|multi_cite_4_5|>": "ss-889487", "<|multi_cite_4_6|>": "ss-1203150", "<|multi_cite_4_7|>": "arxiv-173916", "<|multi_cite_4_8|>": "arxiv-197712", "<|multi_cite_4_9|>": "arxiv-181305", "<|multi_cite_4_10|>": "arxiv-168906", "<|multi_cite_4_11|>": "ss-979588", "<|multi_cite_4_12|>": "ss-694827", "<|multi_cite_4_13|>": "arxiv-169749", "<|multi_cite_4_14|>": "ss-1265571", "<|multi_cite_4_15|>": "ss-1254136", "<|multi_cite_4_16|>": "ss-1112619", "<|multi_cite_4_17|>": "arxiv-306009", "<|multi_cite_4_18|>": "ss-685224", "<|multi_cite_4_19|>": "ss-683050", "<|multi_cite_4_20|>": "arxiv-384166", "<|multi_cite_4_21|>": "ss-889488", "<|multi_cite_5_1|>": "arxiv-171760", "<|multi_cite_5_2|>": "ss-1112619", "<|multi_cite_5_3|>": "arxiv-376683", "<|multi_cite_5_4|>": "ss-889489", "<|multi_cite_5_5|>": "arxiv-306009", "<|multi_cite_5_6|>": "ss-683050", "<|multi_cite_5_7|>": "ss-685224", "<|multi_cite_5_8|>": "arxiv-384166", "<|multi_cite_5_9|>": "ss-889488", "<|multi_cite_6_1|>": "ss-1194976", "<|multi_cite_6_2|>": "arxiv-251310", "<|multi_cite_6_3|>": "arxiv-338644", "<|multi_cite_6_4|>": "arxiv-300895", "<|multi_cite_6_5|>": "ss-809612", "<|cite_7|>": "ss-716610", "<|multi_cite_8_1|>": "arxiv-171760", "<|multi_cite_8_2|>": "ss-1258425", "<|multi_cite_8_3|>": "ss-865357", "<|multi_cite_8_4|>": "arxiv-170009", "<|multi_cite_8_5|>": "arxiv-181305", "<|multi_cite_8_6|>": "ss-1203150", "<|multi_cite_8_7|>": "arxiv-168906", "<|multi_cite_8_8|>": "ss-1658836", "<|multi_cite_9_1|>": "ss-889487", "<|multi_cite_9_2|>": "ss-979588", "<|multi_cite_9_3|>": "ss-694827", "<|multi_cite_9_4|>": "arxiv-169749", "<|multi_cite_9_5|>": "ss-1265571", "<|cite_10|>": "arxiv-70006", "<|cite_11|>": "ss-710343", "<|multi_cite_12_1|>": "ss-1112619", "<|multi_cite_12_2|>": "ss-685224", "<|multi_cite_12_3|>": "ss-889488", "<|multi_cite_13_1|>": "ss-1254136", "<|multi_cite_13_2|>": "arxiv-173916", "<|multi_cite_13_3|>": "ss-683050", "<|multi_cite_14_1|>": "arxiv-94287", "<|multi_cite_14_2|>": "arxiv-197712", "<|cite_15|>": "ss-1258426", "<|cite_16|>": "arxiv-166816", "<|cite_17|>": "ss-889489", "<|cite_18|>": "arxiv-306009", "<|cite_19|>": "arxiv-384166", "<|multi_cite_20_1|>": "ss-716610", "<|multi_cite_20_2|>": "arxiv-171760", "<|multi_cite_21_1|>": "ss-1258425", "<|multi_cite_21_2|>": "ss-865357", "<|multi_cite_21_3|>": "arxiv-170009", "<|multi_cite_21_4|>": "arxiv-181305", "<|multi_cite_21_5|>": "ss-1203150", "<|multi_cite_21_6|>": "arxiv-168906", "<|multi_cite_21_7|>": "ss-1658836", "<|multi_cite_22_1|>": "ss-889487", "<|multi_cite_22_2|>": 
"ss-979588", "<|multi_cite_22_3|>": "ss-694827", "<|multi_cite_22_4|>": "arxiv-169749", "<|multi_cite_22_5|>": "ss-1265571", "<|multi_cite_23_1|>": "arxiv-376683", "<|multi_cite_23_2|>": "ss-2224907", "<|multi_cite_24_1|>": "arxiv-173916", "<|multi_cite_24_2|>": "ss-1254136", "<|multi_cite_24_3|>": "ss-1112619", "<|multi_cite_24_4|>": "ss-685224", "<|multi_cite_24_5|>": "ss-683050", "<|multi_cite_24_6|>": "ss-889488", "<|multi_cite_25_1|>": "arxiv-94287", "<|multi_cite_25_2|>": "arxiv-197712", "<|cite_26|>": "ss-1258426", "<|cite_27|>": "arxiv-166816", "<|cite_28|>": "ss-889489", "<|cite_29|>": "arxiv-306009", "<|cite_30|>": "arxiv-384166", "<|cite_31|>": "ss-1203150", "<|cite_32|>": "ss-716610", "<|cite_33|>": "ss-1194976", "<|cite_34|>": "ss-865357", "<|cite_35|>": "ss-1658836", "<|cite_36|>": "arxiv-169749", "<|cite_37|>": "ss-979588", "<|cite_38|>": "ss-694827", "<|cite_39|>": "ss-832115", "<|cite_40|>": "arxiv-376683", "<|cite_41|>": "arxiv-298443", "<|cite_42|>": "ss-2224907", "<|cite_43|>": "arxiv-384166", "<|cite_44|>": "ss-683050", "<|cite_45|>": "ss-685224", "<|cite_46|>": "ss-889488", "<|cite_47|>": "ss-1112619", "<|cite_48|>": "arxiv-298443"}
2003.11647
<|paper_start|> Title: Deep Grouping Model for Unified Perceptual Parsing Abstract: Deep Grouping Model for Unified Perceptual Parsing: The perceptual-based grouping process produces a hierarchical and compositional image representation that helps both human and machine vision systems recognize heterogeneous visual concepts. Examples can be found in the classical hierarchical superpixel segmentation or image parsing works. However, the grouping process is largely overlooked in modern CNN-based image segmentation networks due to many challenges, including the inherent incompatibility between the grid-shaped CNN feature map and the irregular-shaped perceptual grouping hierarchy. Overcoming these challenges, we propose a deep grouping model (DGM) that tightly marries the two types of representations and defines a bottom-up and a top-down process for feature exchanging. When evaluating the model on the recent Broden+ dataset for the unified perceptual parsing task, it achieves state-of-the-art results while having a small computational overhead compared to other contextual-based segmentation models. Furthermore, the DGM has better interpretability compared with modern CNN methods. Introduction Deep CNN methods have achieved substantial performance improvement compared with non-CNN methods in the field of semantic segmentation <|cite_start|> (Reference: Fully Convolutional Networks for Semantic Segmentation: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.) <|cite_end|> <|cite_start|> (Reference: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. 
ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.) <|cite_end|>. Many of them can achieve even better performance by incorporating \textit{good practices} that have long been discovered in non-CNN methods, \eg, multiscale features <|cite_start|> (Reference: Pyramid Scene Parsing Network: Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.) <|cite_end|> <|cite_start|> (Reference: Unified Perceptual Parsing for Scene Understanding: Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. Models are available at \url{https://github.com/CSAILVision/unifiedparsing}.) <|cite_end|> <|cite_start|> (Reference: Deep High-Resolution Representation Learning for Visual Recognition: High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. 
Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{https://github.com/HRNet}}.) <|cite_end|> and contextual information <|cite_start|> (Reference: Context Encoding for Semantic Segmentation: Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpass the winning entry of COCO-Place Challenge in 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10 times more layers. The source code for the complete system are publicly available.) <|cite_end|> <|cite_start|> (Reference: OCNet: Object Context Network for Scene Parsing: In this paper, we address the semantic segmentation task with a new context aggregation scheme named \emph{object context}, which focuses on enhancing the role of object information. Motivated by the fact that the category of each pixel is inherited from the object it belongs to, we define the object context for each pixel as the set of pixels that belong to the same category as the given pixel in the image. We use a binary relation matrix to represent the relationship between all pixels, where the value one indicates the two selected pixels belong to the same category and zero otherwise. We propose to use a dense relation matrix to serve as a surrogate for the binary relation matrix. The dense relation matrix is capable to emphasize the contribution of object information as the relation scores tend to be larger on the object pixels than the other pixels. Considering that the dense relation matrix estimation requires quadratic computation overhead and memory consumption w.r.t. the input size, we propose an efficient interlaced sparse self-attention scheme to model the dense relations between any two of all pixels via the combination of two sparse relation matrices. 
To capture richer context information, we further combine our interlaced sparse self-attention scheme with the conventional multi-scale context schemes including pyramid pooling~\citep{zhao2017pyramid} and atrous spatial pyramid pooling~\citep{chen2018deeplab}. We empirically show the advantages of our approach with competitive performances on five challenging benchmarks including: Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff) <|cite_end|> <|cite_start|> (Reference: Adaptive pyramid context network for semantic segmentation: Recent studies witnessed that context features can significantly improve the performance of deep semantic segmentation networks. Current context based segmentation methods differ with each other in how to construct context features and perform differently in practice. This paper firstly introduces three desirable properties of context features in segmentation task. Specially, we find that Global-guided Local Affinity (GLA) can play a vital role in constructing effective context features, while this property has been largely ignored in previous works. Based on this analysis, this paper proposes Adaptive Pyramid Context Network (APCNet) for semantic segmentation. APCNet adaptively constructs multi-scale contextual representations with multiple well-designed Adaptive Context Modules (ACMs). Specifically, each ACM leverages a global image representation as a guidance to estimate the local affinity coefficients for each sub-region, and then calculates a context vector with these affinities. We empirically evaluate our APCNet on three semantic segmentation and scene parsing datasets, including PASCAL VOC 2012, Pascal-Context, and ADE20K dataset. Experimental results show that APCNet achieves state-of-the-art performance on all three benchmarks, and obtains a new record 84.2% on PASCAL VOC 2012 test set without MS COCO pre-trained and any post-processing.) <|cite_end|> <|cite_start|> (Reference: Adaptive Context Network for Scene Parsing: Recent works attempt to improve scene parsing performance by exploring different levels of contexts, and typically train a well-designed convolutional network to exploit useful contexts across all pixels equally. However, in this paper, we find that the context demands are varying from different pixels or regions in each image. Based on this observation, we propose an Adaptive Context Network (ACNet) to capture the pixel-aware contexts by a competitive fusion of global context and local context according to different per-pixel demands. Specifically, when given a pixel, the global context demand is measured by the similarity between the global feature and its local feature, whose reverse value can be used to measure the local context demand. We model the two demand measurements by the proposed global context module and local context module, respectively, to generate adaptive contextual features. Furthermore, we import multiple such modules to build several adaptive context blocks in different levels of network to obtain a coarse-to-fine result. Finally, comprehensive experimental evaluations demonstrate the effectiveness of the proposed ACNet, and new state-of-the-arts performances are achieved on all four public datasets, i.e. Cityscapes, ADE20K, PASCAL Context, and COCO Stuff.) 
<|cite_end|> <|cite_start|> (Reference: Co-occurrent features in semantic segmentation: Recent work has achieved great success in utilizing global contextual information for semantic segmentation, including increasing the receptive field and aggregating pyramid feature representations. In this paper, we go beyond global context and explore the fine-grained representation using co-occurrent features by introducing Co-occurrent Feature Model, which predicts the distribution of co-occurrent features for a given target. To leverage the semantic context in the co-occurrent features, we build an Aggregated Co-occurrent Feature (ACF) Module by aggregating the probability of the co-occurrent feature with the co-occurrent context. ACF Module learns a fine-grained spatial invariant representation to capture co-occurrent context information across the scene. Our approach significantly improves the segmentation results using FCN and achieves superior performance 54.0% mIoU on Pascal Context, 87.2% mIoU on Pascal VOC 2012 and 44.89% mIoU on ADE20K datasets. The source code and complete system will be publicly available upon publication.) <|cite_end|>. However, recent works still have some key limitations. First, many CNN-based methods are driven solely by the cross-entropy loss computed against ground-truth pixel labels, lacking an explicit modeling of the perceptual grouping process, which is an integral part of the human visual system <|cite_start|> (Reference: Global ensemble texture representations are critical to rapid scene perception.: Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties.) <|cite_end|>. Second, most models still operate on regular-shaped feature maps, which not only creates significant overhead in a multi-scale representation when feature-to-feature attention is considered, but is also sub-optimal for modeling irregular-shaped semantic regions in the image.
\begin{figure} \centering \includegraphics[width=1.0\columnwidth]{figures/teaser.pdf} \caption{Perceptual grouping process. From fine to coarse: neighboring pixels form a part; parts group into an object; and objects combine into a contextual region. The DGM aims to marry a CNN with the grouping hierarchy for unified perceptual parsing of images. The grouping hierarchy is dynamically computed based on the CNN features, and the CNN features are enhanced by the grouping cues from the graph hierarchy. The model is applied to the unified perceptual parsing task to show the superiority of DGM.} \label{fig:teaser} \vspace{-5mm} \end{figure} To overcome these limitations, we revisit the classical perceptual grouping methods, \eg, superpixel segmentation <|cite_start|> (Reference: Learning a classification model for segmentation: We propose a two-class classification model for grouping. Human segmented natural images are used as positive examples. Negative examples of grouping are constructed by randomly matching human segmentations and images. In a preprocessing stage an image is over-segmented into super-pixels. We define a variety of features derived from the classical Gestalt cues, including contour, texture, brightness and good continuation. Information-theoretic analysis is applied to evaluate the power of these grouping cues. We train a linear classifier to combine these features. To demonstrate the power of the classification model, a simple algorithm is used to randomly search for good segmentations. Results are shown on a wide range of images.) <|cite_end|> <|cite_start|> (Reference: Efficient Graph-Based Image Segmentation: ) <|cite_end|> <|cite_start|> (Reference: Superpixel lattices: Unsupervised over-segmentation of an image into superpixels is a common preprocessing step for image parsing algorithms. Ideally, every pixel within each superpixel region will belong to the same real-world object. Existing algorithms generate superpixels that forfeit many useful properties of the regular topology of the original pixels: for example, the nth superpixel has no consistent position or relationship with its neighbors. We propose a novel algorithm that produces superpixels that are forced to conform to a grid (a regular superpixel lattice). Despite this added topological constraint, our algorithm is comparable in terms of speed and accuracy to alternative segmentation approaches. To demonstrate this, we use evaluation metrics based on (i) image reconstruction (ii) comparison to human-segmented images and (iii) stability of segmentation over subsequent frames of video sequences.) <|cite_end|> <|cite_start|> (Reference: Multiscale Combinatorial Grouping for Image Segmentation and Object Proposal Generation: We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five second per image.
We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.) <|cite_end|> and image parsing <|cite_start|> (Reference: Image Parsing: Unifying Segmentation, Detection, and Recognition: ) <|cite_end|> <|cite_start|> (Reference: Describing the scene as a whole: {{Joint}} object detection, scene classification and semantic segmentation: In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.) <|cite_end|>, which were extensively studied before the predominance of CNNs in segmentation. The seminal work by Tu \etal <|cite_start|> (Reference: Image Parsing: Unifying Segmentation, Detection, and Recognition: ) <|cite_end|> represents an image as a hierarchical graph, a.k.a. \textit{parsing graph}. In their depicted example, an image of \textit{a football match scene} is first decomposed into three elements: person, sports field, and spectator; these elements are then further decomposed, \eg, the person into face and body texture. Such a graph is both compositional (\eg, lower-level semantics induce grouping cues for higher-level semantics) and decompositional (\eg, higher-level semantics provide feature support for lower-level semantics), and it varies with the input image. In this work, we explore whether it is beneficial to inject such a perceptual grouping process explicitly into modern CNN frameworks for unified image parsing of the scene (see Fig.~\ref{fig:teaser} for an example). Three challenges arise when incorporating the perceptual grouping process as a hierarchical graph in a deep CNN. First, there is feature incompatibility between the grid-shaped CNN feature maps and irregular-shaped graph nodes, not to mention how to benefit one from the other. Second, it is unclear how to dynamically grow the grouping hierarchy based on different levels of feature semantics extracted from the image. Although a superpixel segmentation map provides a plausible initial grouping based on low-level textural and edge cues, high-level semantics from larger receptive fields are needed when growing parts into objects. Third, a holistic understanding of the scene is required when considering the unified perceptual parsing task. For example, knowing the scene-level \textit{kitchen} label helps clarify \textit{countertop} against \textit{desk}. This is easy to do in a CNN but difficult in a parsing graph hierarchy.
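For concreteness, the parsing-graph abstraction can be written down as a small data structure. The sketch below is our own illustration in NumPy-style Python, not code from Tu \etal or from our implementation; the names \texttt{GraphLevel}, \texttt{compose}, and \texttt{decompose} are hypothetical. Each level stores node features plus a soft assignment to the next-coarser level, which is enough to express both the compositional (child-to-parent pooling) and decompositional (parent-to-child broadcasting) directions discussed above.

\begin{verbatim}
import numpy as np

class GraphLevel:
    # feats:  (n, d) node features at this level.
    # assign: (n, m) row-stochastic soft assignment of each node to the
    #         m nodes of the next-coarser level (None at the top level).
    def __init__(self, feats, assign=None):
        self.feats, self.assign = feats, assign

def compose(level):
    # Compositional direction: pool child features into parent features.
    w = level.assign / (level.assign.sum(axis=0, keepdims=True) + 1e-6)
    return w.T @ level.feats                     # (m, d)

def decompose(level, parent_feats):
    # Decompositional direction: broadcast parent context to children.
    return level.assign @ parent_feats           # (n, d)

# Toy hierarchy: 6 "pixels" -> 3 "parts" -> 1 "object".
pixels = GraphLevel(np.random.randn(6, 4), np.eye(3)[[0, 0, 1, 1, 2, 2]])
parts = GraphLevel(compose(pixels), np.ones((3, 1)))
obj = GraphLevel(compose(parts))
ctx = decompose(pixels, decompose(parts, obj.feats))  # per-pixel context
\end{verbatim}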
To tackle the challenges mentioned above, we propose a novel \textit{Deep Grouping Model (DGM)}, which contains a few modules that are general enough to adapt to many CNNs. The \textit{Expectation-Maximization Graph Pooling} (\textit{EMGP}) module and \textit{Projection} module transform multi-resolution feature maps into a multi-level graph by grouping different regions on the feature map in a bottom-up fashion (\ie, from high- to low-resolution). They have several advantages. Since the model groups pixels and regions iteratively, the number of nodes in the graph is far smaller than the number of pixels on a feature map, which reduces computational overhead. The relationships between different levels of the hierarchy are learned during grouping, rather than assumed to follow a uniform distribution as in bilinear interpolation or adaptive average pooling on a grid feature map <|cite_start|> (Reference: Unified Perceptual Parsing for Scene Understanding: Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. Models are available at \url{https://github.com/CSAILVision/unifiedparsing}.) <|cite_end|> <|cite_start|> (Reference: Pyramid Scene Parsing Network: Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.) <|cite_end|>. Furthermore, the contextual information at one level of the hierarchy can be quantified via edge weights in a graph, which is sparser than a fully-connected non-local block <|cite_start|> (Reference: Non-local Neural Networks: Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, we present non-local operations as a generic family of building blocks for capturing long-range dependencies. Inspired by the classical non-local means method in computer vision, our non-local operation computes the response at a position as a weighted sum of the features at all positions. This building block can be plugged into many computer vision architectures.
On the task of video classification, even without any bells and whistles, our non-local models can compete or outperform current competition winners on both Kinetics and Charades datasets. In static image recognition, our non-local models improve object detection/segmentation and pose estimation on the COCO suite of tasks. Code is available at https://github.com/facebookresearch/video-nonlocal-net .) <|cite_end|> <|cite_start|> (Reference: OCNet: Object Context Network for Scene Parsing: In this paper, we address the semantic segmentation task with a new context aggregation scheme named \emph{object context}, which focuses on enhancing the role of object information. Motivated by the fact that the category of each pixel is inherited from the object it belongs to, we define the object context for each pixel as the set of pixels that belong to the same category as the given pixel in the image. We use a binary relation matrix to represent the relationship between all pixels, where the value one indicates the two selected pixels belong to the same category and zero otherwise. We propose to use a dense relation matrix to serve as a surrogate for the binary relation matrix. The dense relation matrix is capable to emphasize the contribution of object information as the relation scores tend to be larger on the object pixels than the other pixels. Considering that the dense relation matrix estimation requires quadratic computation overhead and memory consumption w.r.t. the input size, we propose an efficient interlaced sparse self-attention scheme to model the dense relations between any two of all pixels via the combination of two sparse relation matrices. To capture richer context information, we further combine our interlaced sparse self-attention scheme with the conventional multi-scale context schemes including pyramid pooling~\citep{zhao2017pyramid} and atrous spatial pyramid pooling~\citep{chen2018deeplab}. We empirically show the advantages of our approach with competitive performances on five challenging benchmarks including: Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff) <|cite_end|>, leading to lower overhead. We put forward a \textit{Top-down Message Passing (TDMP)} module, which propagates contextual information from the top-level graph to the bottom-level graph by utilizing the grouping results from \textit{EMGP}. In this way, higher-level context can be propagated \textit{adaptively} to the corresponding irregular-shaped regions. For instance, object context features (\eg, human) in the higher-level graph will be propagated to their corresponding parts (\eg, arms, legs, torso, etc.) in the lower-level graph. Similarly, global scene context can also be propagated down to the lower-level graph containing objects. Our proposed \textit{TDMP} module is especially useful in multi-task settings, where lower-level features enhanced by high-level semantics are able to produce better results. Finally, we use the \textit{Re-projection} module to re-project features from the hierarchical graph back to multi-resolution grid feature maps, which are used for downstream tasks. To demonstrate the effectiveness of the proposed model, we apply it to the unified perceptual parsing task, a challenging task that requires recognizing diverse perceptual concepts, including object (or stuff) segmentation, parts segmentation, scene classification, material segmentation, and texture prediction.
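To sketch how these modules could fit together, the snippet below gives a simplified reading in NumPy-style Python; it is not our actual implementation, and the function names, the center initialization, and the dot-product affinity are all our own illustrative choices. An unrolled EM loop soft-assigns fine nodes to a smaller set of coarse nodes, and top-down message passing then reuses the learned assignments to broadcast coarse context back down.

\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def em_graph_pooling(feats, n_clusters, n_iters=3, tau=1.0):
    # feats: (n, d) fine-level node features (pixels or regions).
    # Initialize centers from a spread-out subset of node features
    # (one plausible choice; the initialization is a design detail).
    idx = np.linspace(0, len(feats) - 1, n_clusters).astype(int)
    centers = feats[idx]
    for _ in range(n_iters):
        # E-step: soft-assign nodes to clusters by feature affinity.
        assign = softmax(feats @ centers.T / tau, axis=1)  # (n, k)
        # M-step: recompute centers as assignment-weighted means.
        centers = (assign.T @ feats) / (assign.sum(0)[:, None] + 1e-6)
    return assign, centers

def top_down_message_passing(fine_feats, coarse_feats, assign):
    # Each fine node receives context from the coarse nodes it was
    # grouped into, weighted by its soft assignment.
    return fine_feats + assign @ coarse_feats

# Pixels -> parts -> objects, then broadcast context back down.
pix = np.random.randn(64, 32)
a1, parts = em_graph_pooling(pix, n_clusters=8)
a2, objs = em_graph_pooling(parts, n_clusters=2)
parts_ctx = top_down_message_passing(parts, objs, a2)
pix_ctx = top_down_message_passing(pix, parts_ctx, a1)
\end{verbatim}

In the actual network these operations act on learned embeddings and remain differentiable end-to-end; the fixed iteration count keeps the unrolled EM loop trainable by backpropagation, and the fixed \texttt{n\_clusters} is a simplification made here for brevity.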
We use the recent Broden+ dataset <|cite_start|> (Reference: Network Dissection: Quantifying Interpretability of Deep Visual Representations: We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability of units is equivalent to random linear combinations of units, then we apply our method to compare the latent representations of various networks when trained to solve different supervised and self-supervised training tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.) <|cite_end|>, a large-scale dataset combining five different datasets with heterogeneous task labels, which is designed for the unified perceptual parsing task. Our method is trained in a multi-task learning fashion, and we evaluate our model on each subtask. Results show that our method achieves state-of-the-art results on the Broden+ dataset in every subtask. Furthermore, the proposed DGM provides better interpretability thanks to the hierarchical graph representation. By using the grouping result, DGM can be applied to two other applications: 1) click propagation, and 2) explainability with Grad-CAM, which are building blocks in recent works on interactive segmentation <|cite_start|> (Reference: Deep Interactive Object Selection: Interactive object selection is a very important research problem and has many applications. Previous algorithms require substantial user interactions to estimate the foreground and background distributions. In this paper, we present a novel deep learning based algorithm which has a much better understanding of objectness and thus can reduce user interactions to just a few clicks. Our algorithm transforms user provided positive and negative clicks into two Euclidean distance maps which are then concatenated with the RGB channels of images to compose (image, user interactions) pairs. We generate many of such pairs by combining several random sampling strategies to model user click patterns and use them to fine tune deep Fully Convolutional Networks (FCNs). Finally the output probability maps of our FCN 8s model is integrated with graph cut optimization to refine the boundary segments. Our model is trained on the PASCAL segmentation dataset and evaluated on other datasets with different object classes. Experimental results on both seen and unseen objects clearly demonstrate that our algorithm has a good generalization ability and is superior to all existing interactive object selection approaches.) <|cite_end|> <|cite_start|> (Reference: Content-aware multi-level guidance for interactive instance segmentation: In interactive instance segmentation, users give feedback to iteratively refine segmentation masks.
The user-provided clicks are transformed into guidance maps which provide the network with necessary cues on the whereabouts of the object of interest. Guidance maps used in current systems are purely distance-based and are either too localized or non-informative. We propose a novel transformation of user clicks to generate content-aware guidance maps that leverage the hierarchical structural information present in an image. Using our guidance maps, even the most basic FCNs are able to outperform existing approaches that require state-of-the-art segmentation networks pre-trained on large scale segmentation datasets. We demonstrate the effectiveness of our proposed transformation strategy through comprehensive experimentation in which we significantly raise state-of-the-art on four standard interactive segmentation benchmarks.) <|cite_end|> and weakly-supervised segmentation <|cite_start|> (Reference: Object Region Mining with Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach: We investigate a principle way to progressively mine discriminative object regions using classification networks to address the weakly-supervised semantic segmentation problems. Classification networks are only responsive to small and sparse discriminative regions from the object of interest, which deviates from the requirement of the segmentation task that needs to localize dense, interior and integral regions for pixel-wise inference. To mitigate this gap, we propose a new adversarial erasing approach for localizing and expanding object regions progressively. Starting with a single small object region, our proposed approach drives the classification network to sequentially discover new and complement object regions by erasing the current mined regions in an adversarial manner. These localized regions eventually constitute a dense and complete object region for learning semantic segmentation. To further enhance the quality of the discovered regions by adversarial erasing, an online prohibitive segmentation learning approach is developed to collaborate with adversarial erasing by providing auxiliary segmentation supervision modulated by the more reliable classification scores. Despite its apparent simplicity, the proposed approach achieves 55.0% and 55.7% mean Intersection-over-Union (mIoU) scores on PASCAL VOC 2012 val and test sets, which are the new state-of-the-arts.) <|cite_end|> <|cite_start|> (Reference: Weakly-Supervised Semantic Segmentation by Iteratively Mining Common Object Features: Weakly-supervised semantic segmentation under image tags supervision is a challenging task as it directly associates high-level semantic to low-level appearance. To bridge this gap, in this paper, we propose an iterative bottom-up and top-down framework which alternatively expands object regions and optimizes segmentation network. We start from initial localization produced by classification networks. While classification networks are only responsive to small and coarse discriminative object regions, we argue that, these regions contain significant common features about objects. So in the bottom-up step, we mine common object features from the initial localization and expand object regions with the mined features. To supplement non-discriminative regions, saliency maps are then considered under Bayesian framework to refine the object regions. Then in the top-down step, the refined object regions are used as supervision to train the segmentation network and to predict object masks. 
These object masks provide more accurate localization and contain more regions of object. Further, we take these object masks as initial localization and mine common object features from them. These processes are conducted iteratively to progressively produce fine object masks and optimize segmentation networks. Experimental results on Pascal VOC 2012 dataset demonstrate that the proposed method outperforms previous state-of-the-art methods by a large margin.) <|cite_end|> <|cite_start|> (Reference: Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-supervised Object and Action Localization: We propose `Hide-and-Seek', a weakly-supervised framework that aims to improve object localization in images and action localization in videos. Most existing weakly-supervised methods localize only the most discriminative parts of an object rather than all relevant parts, which leads to suboptimal performance. Our key idea is to hide patches in a training image randomly, forcing the network to seek other relevant parts when the most discriminative part is hidden. Our approach only needs to modify the input image and can work with any network designed for object localization. During testing, we do not need to hide any patches. Our Hide-and-Seek approach obtains superior performance compared to previous methods for weakly-supervised object localization on the ILSVRC dataset. We also demonstrate that our framework can be easily extended to weakly-supervised action localization.) <|cite_end|>. \begin{figure*}[t] \includegraphics[width=0.85\textwidth]{figures/model.pdf} \centering \caption{An overview of the proposed Deep Grouping Model (DGM).} \label{Fig:main_fig} \end{figure*} Related Work \label{Sec:related_work} \noindent \textbf{Grouping-based Method.} Grouping-based segmentation methods were extensively utilized before the advent of deep learning methods. Ren \etal <|cite_start|> (Reference: Learning a classification model for segmentation: We propose a two-class classification model for grouping. Human segmented natural images are used as positive examples. Negative examples of grouping are constructed by randomly matching human segmentations and images. In a preprocessing stage an image is over-segmented into super-pixels. We define a variety of features derived from the classical Gestalt cues, including contour, texture, brightness and good continuation. Information-theoretic analysis is applied to evaluate the power of these grouping cues. We train a linear classifier to combine these features. To demonstrate the power of the classification model, a simple algorithm is used to randomly search for good segmentations. Results are shown on a wide range of images.) <|cite_end|> propose grouping pixels into superpixels using Gestalt cues. Hierarchical grouping methods <|cite_start|> (Reference: Multiscale combinatorial grouping: We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space.
We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.) <|cite_end|> <|cite_start|> (Reference: Multiscale Combinatorial Grouping for Image Segmentation and Object Proposal Generation: We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five second per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.) <|cite_end|> <|cite_start|> (Reference: {Selective Search for Object Recognition: This paper evaluates the selective search algorithm implemented by J.R.R. Uijlings et al. The selective search algorithm addresses the problem of object recognition. In particular the selective search has emphasis on the inherit hierarchical structure of images. This is done by combining segmentation for object recognition with exhaustive search. The advantage of exhaustive search is that is aims to capture all object locations, and the advantage of segmentation is that it uses image structure to guide the search for object locations. The selective search results in a small set of data-driven, class-independent, high quality locations. The results of selective search have been outstanding with exceptional scores across the Pascal Image challenges. This paper evaluates external potential challenges where the algorithm may fail to recognize an object. These instances may include camouflaged object, which may be obvious to a human but not so much to the selective search algorithm. Keywords—Object recognition, selective search, segmentation, exhaustive search, hierarchical image structure.) <|cite_end|> <|cite_start|> (Reference: Actor-Action Semantic Segmentation with Grouping Process Models: Actor-action semantic segmentation made an important step toward advanced video understanding problems: what action is happening; who is performing the action; and where is the action in space-time. Current models for this problem are local, based on layered CRFs, and are unable to capture long-ranging interaction of video parts. We propose a new model that combines these local labeling CRFs with a hierarchical supervoxel decomposition. The supervoxels provide cues for possible groupings of nodes, at various scales, in the CRFs to encourage adaptive, high-order groups for more effective labeling. Our model is dynamic and continuously exchanges information during inference: the local CRFs influence what supervoxels in the hierarchy are active, and these active nodes influence the connectivity in the CRF; we hence call it a grouping process model. 
The experimental results on a recent large-scale video dataset show a large margin of 60% relative improvement over the state of the art, which demonstrates the effectiveness of the dynamic, bidirectional flow between labeling and grouping.) <|cite_end|> <|cite_start|> (Reference: Flattening supervoxel hierarchies by the uniform entropy slice: Supervoxel hierarchies provide a rich multiscale decomposition of a given video suitable for subsequent processing in video analysis. The hierarchies are typically computed by an unsupervised process that is susceptible to under-segmentation at coarse levels and over-segmentation at fine levels, which make it a challenge to adopt the hierarchies for later use. In this paper, we propose the first method to overcome this limitation and flatten the hierarchy into a single segmentation. Our method, called the uniform entropy slice, seeks a selection of supervoxels that balances the relative level of information in the selected supervoxels based on some post hoc feature criterion such as object-ness. For example, with this criterion, in regions nearby objects, our method prefers finer supervoxels to capture the local details, but in regions away from any objects we prefer coarser supervoxels. We formulate the uniform entropy slice as a binary quadratic program and implement four different feature criteria, both unsupervised and supervised, to drive the flattening. Although we apply it only to supervoxel hierarchies in this paper, our method is generally applicable to segmentation tree hierarchies. Our experiments demonstrate both strong qualitative performance and superior quantitative performance to state of the art baselines on benchmark internet videos.) <|cite_end|> <|cite_start|> (Reference: Streaming Hierarchical Video Segmentation: ) <|cite_end|> have also been proposed for both image segmentation and video segmentation tasks. More recently, some deep learning methods have started to use grouping in the segmentation task. Gadde \etal <|cite_start|> (Reference: Superpixel Convolutional Networks using Bilateral Inceptions: In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new 'bilateral inception' module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (1x1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time.) <|cite_end|> use superpixels to upsample a CNN's low-resolution prediction to the original image size. <|cite_start|> (Reference: Learning Superpixels With Segmentation-Aware Affinity Loss: Superpixel segmentation has been widely used in many computer vision tasks.
Existing superpixel algorithms are mainly based on hand-crafted features, which often fail to preserve weak object boundaries. In this work, we leverage deep neural networks to facilitate extracting superpixels from images. We show a simple integration of deep features with existing superpixel algorithms does not result in better performance as these features do not model segmentation. Instead, we propose a segmentation-aware affinity learning approach for superpixel segmentation. Specifically, we propose a new loss function that takes the segmentation error into account for affinity learning. We also develop the Pixel Affinity Net for affinity prediction. Extensive experimental results show that the proposed algorithm based on the learned segmentation-aware loss performs favorably against the state-of-the-art methods. We also demonstrate the use of the learned superpixels in numerous vision applications with consistent improvements.) <|cite_end|> <|cite_start|> (Reference: Superpixel Sampling Networks: Superpixels provide an efficient low/mid-level representation of image data, which greatly reduces the number of image primitives for subsequent vision tasks. Existing superpixel algorithms are not differentiable, making them difficult to integrate into otherwise end-to-end trainable deep neural networks. We develop a new differentiable model for superpixel sampling that leverages deep networks for learning superpixel segmentation. The resulting "Superpixel Sampling Network" (SSN) is end-to-end trainable, which allows learning task-specific superpixels with flexible loss functions and has fast runtime. Extensive experimental analysis indicates that SSNs not only outperform existing superpixel algorithms on traditional segmentation benchmarks, but can also learn superpixels for other tasks. In addition, SSNs can be easily integrated into downstream deep networks resulting in performance improvements.) <|cite_end|> use deep features rather than traditional low-level cues to predict the superpixel map. Two works are closely related to ours. <|cite_start|> (Reference: Local Relation Networks for Image Recognition: The convolution layer has been the dominant feature extractor in computer vision for years. However, the spatial aggregation in convolution is basically a pattern matching process that applies fixed filters which are inefficient at modeling visual elements with varying spatial distributions. This paper presents a new image feature extractor, called the local relation layer, that adaptively determines aggregation weights based on the compositional relationship of local pixel pairs. With this relational approach, it can composite visual elements into higher-level entities in a more efficient manner that benefits semantic inference. A network built with local relation layers, called the Local Relation Network (LR-Net), is found to provide greater modeling capacity than its counterpart built with regular convolution on large-scale recognition tasks such as ImageNet classification.) <|cite_end|> puts forward a local relation layer to model pixel-pair affinity in a predefined $7\times7$ square neighborhood, while our proposed model considers the neighborhood adaptively in an irregular-shaped region. Liang \etal <|cite_start|> (Reference: Interpretable Structure-Evolving LSTM: This paper develops a general framework for learning interpretable data representation via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchal graph structures.
Instead of learning LSTM models over the pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging the graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. The candidate graph structures are accordingly generated where the nodes are grouped into cliques with their merging probabilities. We then produce the new graph structure with a Metropolis-Hasting algorithm, which alleviates the risk of getting stuck in local optimums by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, representation becomes more abstracted in higher-levels where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of structure-evolving LSTM in the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.) <|cite_end|> propose the structure-evolving LSTM, where Graph LSTM <|cite_start|> (Reference: Semantic Object Parsing with Graph LSTM: By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forgets gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions.) <|cite_end|> is used for updating node features. In their work, only one pair of nodes is merged each time a coarser graph is generated. Compared with <|cite_start|> (Reference: Interpretable Structure-Evolving LSTM: This paper develops a general framework for learning interpretable data representation via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchal graph structures.
Instead of learning LSTM models over the pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging the graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. The candidate graph structures are accordingly generated where the nodes are grouped into cliques with their merging probabilities. We then produce the new graph structure with a Metropolis-Hasting algorithm, which alleviates the risk of getting stuck in local optimums by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, representation becomes more abstracted in higher-levels where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of structure-evolving LSTM in the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.) <|cite_end|>, our model groups nodes more quickly and thus reduces computational overhead. Farabet \etal <|cite_start|> (Reference: {Learning hierarchical features for scene labeling: Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.) <|cite_end|> use multi-scale convolutional features and a conditional random field to regulate the probability of each pixel in the segmentation prediction. In contrast, our work learns both the grouping hierarchy and top-down message passing at the feature level in an end-to-end fashion. \noindent \textbf{Graph Neural Network.} Some recent works employ Graph Neural Networks in the segmentation task.
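As brief background for the works discussed next, a graph convolution in its most common form updates node features by aggregating over a normalized adjacency matrix. The sketch below follows the standard GCN propagation rule and is not specific to any of the cited methods:

\begin{verbatim}
import numpy as np

def gcn_layer(X, A, W):
    # One standard GCN step: relu(D^-1/2 (A+I) D^-1/2 X W), where
    # X: (n, d_in) node features, A: (n, n) adjacency, W: weights.
    A_hat = A + np.eye(len(A))                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)
\end{verbatim}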
Liang \etal <|cite_start|> (Reference: Symbolic Graph Reasoning Meets Convolutions: Beyond local convolution networks, we explore how to harness various external human knowledge for endowing the networks with the capability of semantic global reasoning. Rather than using separate graphical models (e.g. CRF) or constraints for modeling broader dependencies, we propose a new Symbolic Graph Reasoning (SGR) layer, which performs reasoning over a group of symbolic nodes whose outputs explicitly represent different properties of each semantic in a prior knowledge graph. To cooperate with local convolutions, each SGR is constituted by three modules: a) a primal local-to-semantic voting module where the features of all symbolic nodes are generated by voting from local representations; b) a graph reasoning module propagates information over knowledge graph to achieve global semantic coherency; c) a dual semantic-to-local mapping module learns new associations of the evolved symbolic nodes with local representations, and accordingly enhances local features. The SGR layer can be injected between any convolution layers and instantiated with distinct prior graphs. Extensive experiments show incorporating SGR significantly improves plain ConvNets on three semantic segmentation tasks and one image classification task. More analyses show the SGR layer learns shared symbolic representations for domains/datasets with the different label set given a universal knowledge graph, demonstrating its superior generalization capability.) <|cite_end|> map feature maps to a concept tree to enable concept reasoning. Other works <|cite_start|> (Reference: Beyond Grids: Learning Graph Representations for Visual Recognition: We propose learning graph representations from 2D feature maps for visual recognition. Our method draws inspiration from region based recognition, and learns to transform a 2D image into a graph structure. The vertices of the graph define clusters of pixels ("regions"), and the edges measure the similarity between these clusters in a feature space. Our method further learns to propagate information across all vertices on the graph, and is able to project the learned graph representation back into 2D grids. Our graph representation facilitates reasoning beyond regular grids and can capture long range dependencies among regions. We demonstrate that our model can be trained from end-to-end, and is easily integrated into existing networks. Finally, we evaluate our method on three challenging recognition tasks: semantic segmentation, object detection and object instance segmentation. For all tasks, our method outperforms state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Graph-Based Global Reasoning Networks: Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. 
We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet, ResNeXt, SE-Net and DPN, for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.) <|cite_end|> project the feature map to a graph via a linear transformation with learned anchor vectors or convolutional weights; this may succeed in classifying a single pixel's semantic meaning, but it does not consider the similarity between pairs of pixels needed to group them into a region. Ying \etal <|cite_start|> (Reference: Hierarchical Graph Representation Learning with Differentiable Pooling: Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs---a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.) <|cite_end|> propose a differentiable pooling method that predicts pooling weights with GraphSAGE <|cite_start|> (Reference: Inductive Representation Learning on Large Graphs: Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood.
Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.) <|cite_end|>, but the method does not consider pairwise similarity between graph nodes, and the number of clusters is also fixed. In comparison, our model considers pairwise affinity among nodes and supports a dynamic number of clustering centers. \noindent \textbf{Contextual Modeling.} Given the success of the self-attention mechanism in many recognition tasks <|cite_start|> (Reference: Non-local Neural Networks: Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, we present non-local operations as a generic family of building blocks for capturing long-range dependencies. Inspired by the classical non-local means method in computer vision, our non-local operation computes the response at a position as a weighted sum of the features at all positions. This building block can be plugged into many computer vision architectures. On the task of video classification, even without any bells and whistles, our non-local models can compete or outperform current competition winners on both Kinetics and Charades datasets. In static image recognition, our non-local models improve object detection/segmentation and pose estimation on the COCO suite of tasks. Code is available at https://github.com/facebookresearch/video-nonlocal-net .) <|cite_end|>, recent works introduce self-attention modules into the semantic segmentation field from different perspectives. Yuan \etal <|cite_start|> (Reference: OCNet: Object Context Network for Scene Parsing: In this paper, we address the semantic segmentation task with a new context aggregation scheme named \emph{object context}, which focuses on enhancing the role of object information. Motivated by the fact that the category of each pixel is inherited from the object it belongs to, we define the object context for each pixel as the set of pixels that belong to the same category as the given pixel in the image. We use a binary relation matrix to represent the relationship between all pixels, where the value one indicates the two selected pixels belong to the same category and zero otherwise. We propose to use a dense relation matrix to serve as a surrogate for the binary relation matrix. The dense relation matrix is capable to emphasize the contribution of object information as the relation scores tend to be larger on the object pixels than the other pixels. Considering that the dense relation matrix estimation requires quadratic computation overhead and memory consumption w.r.t. the input size, we propose an efficient interlaced sparse self-attention scheme to model the dense relations between any two of all pixels via the combination of two sparse relation matrices. To capture richer context information, we further combine our interlaced sparse self-attention scheme with the conventional multi-scale context schemes including pyramid pooling~\citep{zhao2017pyramid} and atrous spatial pyramid pooling~\citep{chen2018deeplab}. We empirically show the advantages of our approach with competitive performances on five challenging benchmarks including: Cityscapes, ADE20K, LIP, PASCAL-Context and COCO-Stuff) <|cite_end|> propose an object context pooling module.
Fu \etal <|cite_start|> (Reference: Dual Attention Network for Scene Segmentation: In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet) <|cite_end|> apply attention mechanisms to both the position and channel dimensions. The aforementioned non-local based context modeling methods create large overhead, since the similarity between every pair of grid positions needs to be computed on the feature map. He \etal <|cite_start|> (Reference: Adaptive pyramid context network for semantic segmentation: Recent studies witnessed that context features can significantly improve the performance of deep semantic segmentation networks. Current context based segmentation methods differ with each other in how to construct context features and perform differently in practice. This paper firstly introduces three desirable properties of context features in segmentation task. Specially, we find that Global-guided Local Affinity (GLA) can play a vital role in constructing effective context features, while this property has been largely ignored in previous works. Based on this analysis, this paper proposes Adaptive Pyramid Context Network (APCNet) for semantic segmentation. APCNet adaptively constructs multi-scale contextual representations with multiple well-designed Adaptive Context Modules (ACMs). Specifically, each ACM leverages a global image representation as a guidance to estimate the local affinity coefficients for each sub-region, and then calculates a context vector with these affinities. We empirically evaluate our APCNet on three semantic segmentation and scene parsing datasets, including PASCAL VOC 2012, Pascal-Context, and ADE20K dataset. Experimental results show that APCNet achieves state-of-the-art performance on all three benchmarks, and obtains a new record 84.2% on PASCAL VOC 2012 test set without MS COCO pre-trained and any post-processing.) <|cite_end|> introduce an adaptive context module to model the affinity between region features and pixel features, where the region features are computed by average pooling over square patches.
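The overhead gap can be made explicit with a back-of-the-envelope count (the sizes below are illustrative, not measurements): a non-local block on an $H\times W$ map materializes an $(HW)\times(HW)$ affinity matrix, whereas a grouping hierarchy only needs affinities between adjacent levels, each of which is far smaller.

\begin{verbatim}
import numpy as np

H, W, d = 64, 64, 32
n = H * W
X = np.random.randn(n, d)

# Dense non-local attention: one (HW) x (HW) affinity matrix.
dense_affinity = X @ X.T                  # 4096 x 4096 = ~16.8M entries

# Grouping hierarchy: affinities only between adjacent levels.
levels = [n, 256, 32, 4]                  # illustrative level sizes
hier_entries = sum(a * b for a, b in zip(levels[:-1], levels[1:]))
print(dense_affinity.size, hier_entries)  # 16777216 vs 1056896
\end{verbatim}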
In comparison with non-local based methods and the adaptive context module, our method models the context between nodes at different levels of the graph hierarchy, which not only leads to lower overhead but also allows contextual information to flow to irregular-shaped regions. <|paper_end|>
[ "<|reference_start|> Superpixel lattices: Unsupervised over-segmentation of an image into superpixels is a common preprocessing step for image parsing algorithms. Ideally, every pixel within each superpixel region will belong to the same real-world object. Existing algorithms generate superpixels that forfeit many useful properties of the regular topology of the original pixels: for example, the nth superpixel has no consistent position or relationship with its neighbors. We propose a novel algorithm that produces superpixels that are forced to conform to a grid (a regular superpixel lattice). Despite this added topological constraint, our algorithm is comparable in terms of speed and accuracy to alternative segmentation approaches. To demonstrate this, we use evaluation metrics based on (i) image reconstruction (ii) comparison to human-segmented images and (iii) stability of segmentation over subsequent frames of video sequences. <|reference_end|>", "<|reference_start|> Superpixel Convolutional Networks using Bilateral Inceptions: In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new 'bilateral inception' module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN (1x1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time. <|reference_end|>", "<|reference_start|> Interpretable Structure-Evolving LSTM: This paper develops a general framework for learning interpretable data representation via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchal graph structures. Instead of learning LSTM models over the pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging the graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. The candidate graph structures are accordingly generated where the nodes are grouped into cliques with their merging probabilities. 
We then produce the new graph structure with a Metropolis-Hasting algorithm, which alleviates the risk of getting stuck in local optimums by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, representation becomes more abstracted in higher-levels where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of structure-evolving LSTM in the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks. <|reference_end|>", "<|reference_start|> Adaptive pyramid context network for semantic segmentation: Recent studies witnessed that context features can significantly improve the performance of deep semantic segmentation networks. Current context based segmentation methods differ with each other in how to construct context features and perform differently in practice. This paper firstly introduces three desirable properties of context features in segmentation task. Specially, we find that Global-guided Local Affinity (GLA) can play a vital role in constructing effective context features, while this property has been largely ignored in previous works. Based on this analysis, this paper proposes Adaptive Pyramid Context Network (APCNet) for semantic segmentation. APCNet adaptively constructs multi-scale contextual representations with multiple well-designed Adaptive Context Modules (ACMs). Specifically, each ACM leverages a global image representation as a guidance to estimate the local affinity coefficients for each sub-region, and then calculates a context vector with these affinities. We empirically evaluate our APCNet on three semantic segmentation and scene parsing datasets, including PASCAL VOC 2012, Pascal-Context, and ADE20K dataset. Experimental results show that APCNet achieves state-of-the-art performance on all three benchmarks, and obtains a new record 84.2% on PASCAL VOC 2012 test set without MS COCO pre-trained and any post-processing. <|reference_end|>" ]
[ 13, 35, 39, 51 ]
{"<|multi_cite_1_1|>": "arxiv-68791", "<|multi_cite_1_2|>": "arxiv-99247", "<|multi_cite_2_1|>": "arxiv-111759", "<|multi_cite_2_2|>": "arxiv-167314", "<|multi_cite_2_3|>": "arxiv-219750", "<|multi_cite_3_1|>": "arxiv-152599", "<|multi_cite_3_2|>": "arxiv-171258", "<|multi_cite_3_3|>": "ss-1530283", "<|multi_cite_3_4|>": "arxiv-232521", "<|multi_cite_3_5|>": "ss-1268189", "<|cite_4|>": "ss-1104925", "<|multi_cite_5_1|>": "ss-1939804", "<|multi_cite_5_2|>": "ss-996976", "<|multi_cite_5_3|>": "ss-777377", "<|multi_cite_5_4|>": "arxiv-73981", "<|multi_cite_6_1|>": "ss-1268679", "<|multi_cite_6_3|>": "ss-1902395", "<|cite_7|>": "ss-1268679", "<|multi_cite_8_1|>": "arxiv-167314", "<|multi_cite_8_2|>": "arxiv-111759", "<|multi_cite_9_1|>": "arxiv-140845", "<|multi_cite_9_2|>": "arxiv-171258", "<|cite_10|>": "arxiv-122063", "<|multi_cite_11_1|>": "arxiv-93870", "<|multi_cite_11_2|>": "ss-682979", "<|multi_cite_12_1|>": "arxiv-119949", "<|multi_cite_12_2|>": "arxiv-162259", "<|multi_cite_12_3|>": "arxiv-121644", "<|cite_13|>": "ss-1939804", "<|multi_cite_14_1|>": "ss-1931744", "<|multi_cite_14_2|>": "arxiv-73981", "<|multi_cite_14_3|>": "ss-1102672", "<|multi_cite_14_4|>": "arxiv-89765", "<|multi_cite_14_5|>": "ss-1104926", "<|multi_cite_14_6|>": "ss-1967269", "<|cite_15|>": "arxiv-87738", "<|multi_cite_16_1|>": "ss-1090981", "<|multi_cite_16_2|>": "arxiv-167301", "<|cite_17|>": "arxiv-201530", "<|cite_18|>": "arxiv-118633", "<|cite_19|>": "arxiv-94461", "<|cite_20|>": "arxiv-118633", "<|cite_21|>": "ss-986462", "<|cite_22|>": "ss-1280592", "<|multi_cite_23_1|>": "ss-1257891", "<|multi_cite_23_2|>": "arxiv-182638", "<|cite_24|>": "arxiv-163509", "<|cite_25|>": "arxiv-126204", "<|cite_26|>": "arxiv-140845", "<|cite_27|>": "arxiv-171258", "<|cite_28|>": "arxiv-171924", "<|cite_29|>": "ss-1530283"}
2011.05319
<|paper_start|> Title: Grounding Implicit Goal Description for Robot Indoor Navigation Via Recursive Belief Update Abstract: Grounding Implicit Goal Description for Robot Indoor Navigation Via Recursive Belief Update: Natural language-based robotic navigation remains a challenging problem because human knowledge of navigation constraints and destinations is not directly compatible with the robot's knowledge base. In this paper, we aim to translate natural destination commands into high-level robot navigation plans given a map of interest. We identify grammatically associated segments of a destination description and recursively apply each of them to update a belief distribution over areas of the given map. We train a destination grounding model using a dataset of single-step belief updates for precise, proximity, and directional modifier types. We demonstrate our method on a real-world navigation task in an office consisting of 80 areas. Offline experimental results show that our method can directly extract the goal destination from previously unseen, long, and composite text commands asked by humans. This enables users to specify their destination goals for the robot in a general and natural form. Hardware experiment results also show that the designed model makes it much more convenient to specify a navigation goal to a service robot. Introduction In human-robot interaction, natural language is one of the most desirable forms of communication between users and robots <|cite_start|> (Reference: Understanding natural language commands for robotic navigation and mobile manipulation.: This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs, dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as "Put the tire pallet on the truck." The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.) <|cite_end|> <|cite_start|> (Reference: Safe Navigation with Human Instructions in Complex Scenes: In this paper, we present a robotic navigation algorithm with natural language interfaces, which enables a robot to safely walk through a changing environment with moving persons by following human instructions such as "go to the restaurant and keep away from people". We first classify human instructions into three types: the goal, the constraints, and uninformative phrases. Next, we provide grounding for the extracted goal and constraint items in a dynamic manner along with the navigation process, to deal with the target objects that are too far away for sensor observation and the appearance of moving obstacles like humans.
In particular, for a goal phrase (e.g., "go to the restaurant"), we ground it to a location in a predefined semantic map and treat it as a goal for a global motion planner, which plans a collision-free path in the workspace for the robot to follow. For a constraint phrase (e.g., "keep away from people"), we dynamically add the corresponding constraint into a local planner by adjusting the values of a local costmap according to the results returned by the object detection module. The updated costmap is then used to compute a local collision avoidance control for the safe navigation of the robot. By combining natural language processing, motion planning, and computer vision, our developed system is demonstrated to be able to successfully follow natural language navigation instructions to achieve navigation tasks in both simulated and real-world scenarios. Videos are available at https://sites.google.com/view/snhi) <|cite_end|>. However, the interpretation of natural language remains an extremely hard problem for robots. One major issue is that even with the successful conversion of speech to text, there is still a considerable gap between text and its appropriate interpretation. One scenario where this issue exists is language-based robot navigation. Consider the map layout of an office in Fig.~\ref{fig:intro}. If one wants the robot to deliver a document to meeting room 124, he/she would command differently depending on his/her knowledge about the office. If the user knows the exact room number ``124'', he/she would say ``Go to room 124''. Otherwise, the user might refer to the meeting room with respect to a nearby location; an alternative but very intuitive command would be ``Go to the meeting room near the north exit.'' Despite this tiny modification of the original command, it already becomes non-trivial for robots to understand due to the reasoning required. \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{img/intro.png} \caption{Destination grounding translates implicit destination descriptions (red) into robot-compatible locations (green). } \label{fig:intro} \end{figure} The above example represents a common yet challenging situation in natural language-based navigation where the users' knowledge about desired destinations is not directly compatible with the robot knowledge base. This manifests especially when the user is unable to uniquely refer to the destination without proper explanation, such as ``the meeting room near the north exit.'' This destination can be easily made unique by assigning a coordinate to it, which, unfortunately, is normally done by robots only. A common workaround for human users is to refer to landmarks or places that are easier to describe, yielding the alternative command mentioned previously. The major difficulty here is that although it is feasible for robots to memorize a map with extremely high fidelity, it is impractical to store all possible relations and interactions of map locations. However, the latter is more commonly invoked in human language: in this way, people only need to memorize a few map elements and can potentially indicate anywhere on the map by adding adequate references and implications. In this paper, we formally define the above problem as \textit{implicit destination} in navigation instructions and aim at a solution that enables robots to correctly ground implicit destination descriptions to specific locations on a map of interest.
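To make this mismatch concrete, consider the following small illustrative sketch (all identifiers, coordinates, and the distance threshold are hypothetical, not taken from our system): the robot's map stores precise, coordinate-based entries, so ``room 124'' grounds directly, whereas ``the meeting room near the north exit'' must be resolved through a spatial relation that the map does not store explicitly.

import math

# Hypothetical robot map: unique area id -> category, name, and a coordinate.
areas = {
    "124":    {"category": "meeting_room", "name": "meeting room 124", "xy": (4.0, 12.5)},
    "130":    {"category": "meeting_room", "name": "meeting room 130", "xy": (20.0, 3.0)},
    "n_exit": {"category": "exit",         "name": "north exit",       "xy": (5.0, 14.0)},
}

def near(a, b, threshold=3.0):
    # Relations such as "near" are computed on demand; enumerating and storing
    # every such relation between all areas in advance is impractical.
    return math.dist(areas[a]["xy"], areas[b]["xy"]) <= threshold

# "the meeting room near the north exit": filter by category, then by relation.
candidates = [a for a, v in areas.items()
              if v["category"] == "meeting_room" and near(a, "n_exit")]  # -> ["124"]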
Recently, interesting results for translating navigation instructions to a high-level plan are discussed in <|cite_start|> (Reference: Learning to Interpret Natural Language Navigation Instructions from Observations: The ability to understand natural-language instructions is critical to building intelligent agents that interact with humans. We present a system that learns to transform natural-language navigation instructions into executable formal plans. Given no prior linguistic knowledge, the system learns by simply observing how humans follow navigation instructions. The system is evaluated in three complex virtual indoor environments with numerous objects and landmarks. A previously collected realistic corpus of complex English navigation instructions for these environments is used for training and testing data. By using a learned lexicon to refine inferred plans and a supervised learner to induce a semantic parser, the system is able to automatically learn to correctly interpret a reasonable fraction of the complex instructions in this corpus.) <|cite_end|> <|cite_start|> (Reference: Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation: We propose an end-to-end deep learning model for translating free-form natural language instructions to a high-level plan for behavioral robot navigation. We use attention models to connect information from both the user instructions and a topological representation of the environment. We evaluate our model's performance on a new dataset containing 10,050 pairs of navigation instructions. Our model significantly outperforms baseline approaches. Furthermore, our results suggest that it is possible to leverage the environment map as a relevant knowledge base to facilitate the translation of free-form navigational instruction.) <|cite_end|>, where a graph is first built according to the preliminary knowledge of the environment. The detailed natural language commands then guide the graph search algorithms to obtain the viable edges, which eventually form the navigation plan. However, this approach requires rather rich preliminary information, so much so that knowing the destination alone is enough to generate a path plan. Hence the text commands appear to be redundant. In the literature, a general framework of reinforcement learning (RL) has been investigated for grounding natural language into robot behaviors <|cite_start|> (Reference: Grounding natural language instructions to semantic goal representations for abstraction and generalization: ) <|cite_end|> <|cite_start|> (Reference: Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications.: A method includes enabling a robot to learn a mapping between English language commands and Linear Temporal Logic (LTL) expressions, wherein neural sequence-to-sequence learning models are employed to infer a LTL sequence corresponding to a given natural language command.) <|cite_end|> <|cite_start|> (Reference: Planning with State Abstractions for Non-Markovian Task Specifications: Often times, we specify tasks for a robot using temporal language that can also span different levels of abstraction. The example command ``go to the kitchen before going to the second floor'' contains spatial abstraction, given that ``floor'' consists of individual rooms that can also be referred to in isolation ("kitchen", for example). There is also a temporal ordering of events, defined by the word "before".
Previous works have used Linear Temporal Logic (LTL) to interpret temporal language (such as "before"), and Abstract Markov Decision Processes (AMDPs) to interpret hierarchical abstractions (such as "kitchen" and "second floor"), separately. To handle both types of commands at once, we introduce the Abstract Product Markov Decision Process (AP-MDP), a novel approach capable of representing non-Markovian reward functions at different levels of abstractions. The AP-MDP framework translates LTL into its corresponding automata, creates a product Markov Decision Process (MDP) of the LTL specification and the environment MDP, and decomposes the problem into subproblems to enable efficient planning with abstractions. AP-MDP performs faster than a non-hierarchical method of solving LTL problems in over 95% of tasks, and this number only increases as the size of the environment domain increases. We also present a neural sequence-to-sequence model trained to translate language commands into LTL expression, and a new corpus of non-Markovian language commands spanning different levels of abstraction. We test our framework with the collected language commands on a drone, demonstrating that our approach enables a robot to efficiently solve temporal commands at different levels of abstraction.) <|cite_end|>. Normally, the system description leverages a model related to the Markov Decision Process (MDP), such as the object-oriented MDP (OO-MDP) or linear temporal logic (LTL). Based on such a robot behavior model, RL algorithms are explored in order to build a mapping from natural language to certain actions. Nonetheless, applications of such a framework are restricted due to the limited number of robotic tasks that can be transferred into an MDP model. Combining vision and language for navigation has also attracted much research attention <|cite_start|> (Reference: Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout: A grand goal in AI is to build a robot that can accurately navigate based on natural language instructions, which requires the agent to perceive the scene, understand and ground language, and act in the real-world environment. One key challenge here is to learn to navigate in new environments that are unseen during training. Most of the existing approaches perform dramatically worse in unseen environments as compared to seen ones. In this paper, we present a generalizable navigational agent. Our agent is trained in two stages. The first stage is training via mixed imitation and reinforcement learning, combining the benefits from both off-policy and on-policy optimization. The second stage is fine-tuning via newly-introduced 'unseen' triplets (environment, path, instruction). To generate these unseen triplets, we propose a simple but effective 'environmental dropout' method to mimic unseen environments, which overcomes the problem of limited seen environment variability. Next, we apply semi-supervised learning (via back-translation) on these dropped-out environments to generate new paths and instructions. Empirically, we show that our agent is substantially better at generalizability when fine-tuned with these triplets, outperforming the state-of-art approaches by a large margin on the private unseen test set of the Room-to-Room task, and achieving the top rank on the leaderboard.)
<|cite_end|> <|cite_start|> (Reference: Vision-and-Dialog Navigation: Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Sourcecode and a live interface demo can be found at https://cvdn.dev/) <|cite_end|> <|cite_start|> (Reference: Transferable Representation Learning in Vision-and-Language Navigation: Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals. The overall task requires competence in several perception problems: successful agents combine spatio-temporal, vision and language understanding to produce appropriate action sequences. Our approach adapts pre-trained vision and language representations to relevant in-domain tasks making them more effective for VLN. Specifically, the representations are adapted to solve both a cross-modal sequence alignment and sequence coherence task. In the sequence alignment task, the model determines whether an instruction corresponds to a sequence of visual frames. In the sequence coherence task, the model determines whether the perceptual sequences are predictive sequentially in the instruction-conditioned latent space. By transferring the domain-adapted representations, we improve competitive agents in R2R as measured by the success rate weighted by path length (SPL) metric.) <|cite_end|>. These approaches normally try to reveal the correspondence between the image and the text, based on which the next-step action is selected from a finite set of pre-defined actions. In this paper, we aim to translate natural destination commands into high-level robot navigation plans given a map of interest, where each area in the map is associated with a tuple of strings containing a unique area id, an area category, and an area name. Instead of directly taking the whole destination noun phrase as input, we decompose it into a sequence of grammatically associated segments, namely \textit{modifiers}, and recursively apply each of them to update the belief distribution based on the prior belief. We categorize modifiers into ``dummy'', ``proximity'', ``precise'', or ``directional'' according to how they update the prior. Each modifier type also implies constraints on the type of prior it applies to; ``precise'' beliefs refer to specific areas, while ``proximity'' beliefs refer to an orientation or proximity relation with respect to their prior. We demonstrate our method on a real-world navigation task in an office consisting of 80 areas. The robot's goal is to ground the destination description given by a user to the office map.
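As a sketch of the recursive update (in our model the per-modifier update functions are learned; the hand-written rules and all names below are illustrative stand-ins only), each modifier maps a prior belief over the map areas to a posterior:

import numpy as np

def update_belief(prior, modifier, areas):
    # One belief-update step over N map areas; `modifier` is one grammatically
    # associated segment, e.g. ("precise", "meeting_room") or ("proximity", ref_belief).
    kind, arg = modifier
    if kind == "precise":        # match an area category or name
        like = np.array([float(arg in (a["category"], a["name"])) for a in areas])
    elif kind == "proximity":    # weight areas by distance to a reference belief
        ref = sum(p * np.asarray(a["xy"]) for p, a in zip(arg, areas))
        like = np.exp(-np.array([np.linalg.norm(np.asarray(a["xy"]) - ref) for a in areas]))
    else:                        # "dummy" segments leave the belief unchanged
        like = np.ones(len(areas))
    post = prior * like + 1e-12  # guard against an all-zero posterior
    return post / post.sum()

# ``the meeting room near the north exit'': first build a belief for ``the north exit'',
# then recursively apply the ("proximity", ...) and ("precise", ...) updates to a uniform prior.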
Since the language involved in single updates is highly limited, we analytically generate training data based on a finite rule set for all different types of modifiers for a single belief update. We then train the learnable update functions on single update steps for each type of modifier by minimizing the total loss of all supervised terms applicable to each update type. Our model achieves approximately $90\%$ accuracy for the area grounding of single-step precise belief updates. We further demonstrate a composite belief update on realistic human instructions. Experimental results show that our method can directly extract the goal destination from previously unseen, long, and composite text commands asked by humans. This enables users to specify their destination goals for the robot in a general and naturalistic form. <|paper_end|>
[ "<|reference_start|> Learning to Interpret Natural Language Navigation Instructions from\nObservations: The ability to understand natural-language instructions is critical to building intelligent agents that interact with humans. We present a system that learns to transform natural-language navigation instructions into executable formal plans. Given no prior linguistic knowledge, the system learns by simply observing how humans follow navigation instructions. The system is evaluated in three complex virtual indoor environments with numerous objects and landmarks. A previously collected realistic corpus of complex English navigation instructions for these environments is used for training and testing data. By using a learned lexicon to refine inferred plans and a supervised learner to induce a semantic parser, the system is able to automatically learnto correctly interpret a reasonable fraction of the complex instructions in this corpus. <|reference_end|>", "<|reference_start|> Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation: We propose an end-to-end deep learning model for translating free-form natural language instructions to a high-level plan for behavioral robot navigation. We use attention models to connect information from both the user instructions and a topological representation of the environment. We evaluate our model's performance on a new dataset containing 10,050 pairs of navigation instructions. Our model significantly outperforms baseline approaches. Furthermore, our results suggest that it is possible to leverage the environment map as a relevant knowledge base to facilitate the translation of free-form navigational instruction. <|reference_end|>", "<|reference_start|> Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications.: A method includes enabling a robot to learn a mapping between English language commands and Linear Temporal Logic (LTL) expressions, wherein neural sequence-to-sequence learning models are employed to infer a LTL sequence corresponding to a given natural language command. <|reference_end|>", "<|reference_start|> Transferable Representation Learning in Vision-and-Language Navigation: Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals. The overall task requires competence in several perception problems: successful agents combine spatio-temporal, vision and language understanding to produce appropriate action sequences. Our approach adapts pre-trained vision and language representations to relevant in-domain tasks making them more effective for VLN. Specifically, the representations are adapted to solve both a cross-modal sequence alignment and sequence coherence task. In the sequence alignment task, the model determines whether an instruction corresponds to a sequence of visual frames. In the sequence coherence task, the model determines whether the perceptual sequences are predictive sequentially in the instruction-conditioned latent space. By transferring the domain-adapted representations, we improve competitive agents in R2R as measured by the success rate weighted by path length (SPL) metric. <|reference_end|>" ]
[ 2, 3, 5, 9 ]
{"<|multi_cite_1_1|>": "ss-1276303", "<|multi_cite_1_2|>": "arxiv-172356", "<|multi_cite_2_1|>": "ss-771482", "<|multi_cite_2_2|>": "arxiv-174641", "<|multi_cite_3_1|>": "ss-1195604", "<|multi_cite_3_2|>": "ss-855241", "<|multi_cite_3_3|>": "arxiv-206555", "<|multi_cite_4_1|>": "arxiv-198938", "<|multi_cite_4_2|>": "arxiv-213948", "<|multi_cite_4_3|>": "arxiv-218132"}
2308.08033
<|paper_start|> Title: Domain Adaptation for Code Model-based Unit Test Case Generation Abstract: Domain Adaptation for Code Model-based Unit Test Case Generation: Recently, deep learning-based test case generation approaches have been proposed to automate the generation of unit test cases. In this study, we leverage Transformer-based code models to generate unit tests with the help of Domain Adaptation (DA) at a project level. Specifically, we use CodeT5, a relatively small language model trained on source code data, and fine-tune it on the test generation task. Then, we apply domain adaptation to each target project data to learn project-specific knowledge (project-level DA). We use the Methods2test dataset to fine-tune CodeT5 for the test generation task and the Defects4j dataset for project-level domain adaptation and evaluation. We compare our approach with (a) CodeT5 fine-tuned on the test generation without DA, (b) the A3Test tool, and (c) GPT-4 on five projects from the Defects4j dataset. The results show that tests generated using DA can increase the line coverage by 18.62%, 19.88%, and 18.02% and mutation score by 16.45%, 16.01%, and 12.99% compared to the above (a), (b), and (c) baselines, respectively. The overall results show consistent improvements in metrics such as parse rate, compile rate, BLEU, and CodeBLEU. In addition, we show that our approach can be seen as a complementary solution alongside existing search-based test generation tools such as EvoSuite, to increase the overall coverage and mutation scores with an average of 34.42% and 6.8%, for line coverage and mutation score, respectively. Introduction \label{sec:intro} Code models that are pre-trained on a large corpus of source code have been introduced to automate numerous software development tasks such as comment generation, code translation, and code generation <|cite_start|> (Reference: Prompt Engineering or Fine Tuning: An Empirical Assessment of Large Language Models in Automated Software Engineering Tasks: In this paper, we investigate the effectiveness of state-of-the-art LLM, i.e., GPT-4, with three different prompting engineering techniques (i.e., basic prompting, in-context learning, and task-specific prompting) against 18 fine-tuned LLMs on three typical ASE tasks, i.e., code generation, code summarization, and code translation. Our quantitative analysis of these prompting strategies suggests that prompt engineering GPT-4 cannot necessarily and significantly outperform fine-tuning smaller/older LLMs in all three tasks. For comment generation, GPT-4 with the best prompting strategy (i.e., task-specific prompt) had outperformed the first-ranked fine-tuned model by 8.33% points on average in BLEU. However, for code generation, the first-ranked fine-tuned model outperforms GPT-4 with best prompting by 16.61% and 28.3% points, on average in BLEU. For code translation, GPT-4 and fine-tuned baselines tie as they outperform each other on different translation tasks. To explore the impact of different prompting strategies, we conducted a user study with 27 graduate students and 10 industry practitioners. From our qualitative analysis, we find that the GPT-4 with conversational prompts (i.e., when a human provides feedback and instructions back and forth with a model to achieve best results) showed drastic improvement compared to GPT-4 with automatic prompting strategies. 
Moreover, we observe that participants tend to request improvements, add more context, or give specific instructions as conversational prompts, which goes beyond typical and generic prompting strategies. Our study suggests that, at its current state, GPT-4 with conversational prompting has great potential for ASE tasks, but fully automated prompt engineering with no human in the loop requires more study and improvement.) <|cite_end|> <|cite_start|> (Reference: Learning and Evaluating Contextual Embedding of Source Code: Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline.) <|cite_end|> <|cite_start|> (Reference: Unified Pre-training for Program Understanding and Generation: Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow (e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations.) <|cite_end|>. 
Among these downstream tasks, unit test generation, which can be seen as a neural machine translation task, has recently started gaining attention <|cite_start|> (Reference: Unit Test Case Generation with Transformers and Focal Context: Automated unit test case generation tools facilitate test-driven development and support developers by suggesting tests intended to identify flaws in their code. Existing approaches are usually guided by the test coverage criteria, generating synthetic test cases that are often difficult for developers to read or understand. In this paper we propose AthenaTest, an approach that aims to generate unit test cases by learning from real-world focal methods and developer-written testcases. We formulate unit test case generation as a sequence-to-sequence learning task, adopting a two-step training procedure consisting of denoising pretraining on a large unsupervised Java corpus, and supervised finetuning for a downstream translation task of generating unit tests. We investigate the impact of natural language and source code pretraining, as well as the focal context information surrounding the focal method. Both techniques provide improvements in terms of validation loss, with pretraining yielding 25% relative improvement and focal context providing additional 11.1% improvement. We also introduce Methods2Test, the largest publicly available supervised parallel corpus of unit test case methods and corresponding focal methods in Java, which comprises 780K test cases mined from 91K open-source repositories from GitHub. We evaluate AthenaTest on five defects4j projects, generating 25K passing test cases covering 43.7% of the focal methods with only 30 attempts. We execute the test cases, collect test coverage information, and compare them with test cases generated by EvoSuite and GPT-3, finding that our approach outperforms GPT-3 and has comparable coverage w.r.t. EvoSuite. Finally, we survey professional developers on their preference in terms of readability, understandability, and testing effectiveness of the generated tests, showing overwhelmingly preference towards AthenaTest.) <|cite_end|> <|cite_start|> (Reference: A3Test: Assertion-Augmented Automated Test Case Generation: Test case generation is an important activity, yet a time-consuming and laborious task. Recently, AthenaTest -- a deep learning approach for generating unit test cases -- is proposed. However, AthenaTest can generate less than one-fifth of the test cases correctly, due to a lack of assertion knowledge and test signature verification. In this paper, we propose A3Test, a DL-based test case generation approach that is augmented by assertion knowledge with a mechanism to verify naming consistency and test signatures. A3Test leverages the domain adaptation principles where the goal is to adapt the existing knowledge from an assertion generation task to the test case generation task. We also introduce a verification approach to verify naming consistency and test signatures. Through an evaluation of 5,278 focal methods from the Defects4j dataset, we find that our A3Test (1) achieves 147% more correct test cases and 15% more method coverage, with a lower number of generated test cases than AthenaTest; (2) still outperforms the existing pre-trained models for the test case generation task; (3) contributes substantially to performance improvement via our own proposed assertion pre-training and the verification components; (4) is 97.2% much faster while being more accurate than AthenaTest.)
<|cite_end|>. There are several reasons for the challenges in unit test case generation: (a) The robustness of the code generation model is more challenging to achieve, as even a slight mis-generation would lead to an error. Unit test case generation, in particular, might be more challenging than regular code generation, as test cases tend to differ from one another only in minor details. For example, a line of assertion statements or a couple of statements to instantiate objects might drive the program into an interesting and testable state. (b) Properly evaluating the generated test cases requires executing the generated tests to calculate test adequacy metrics, which is time-consuming and typically requires non-trivial manual labor, e.g., resolving dependencies. (c) The domain shift problem <|cite_start|> (Reference: Improving Automated Program Repair with Domain Adaptation: Automated Program Repair (APR) is defined as the process of fixing a bug/defect in the source code, by an automated tool. APR tools have recently experienced promising results by leveraging state-of-the-art Neural Language Processing (NLP) techniques. APR tools such as TFix and CodeXGLUE combine text-to-text transformers with software-specific techniques are outperforming alternatives, these days. However, in most APR studies the train and test sets are chosen from the same set of projects. In reality, however, APR models are meant to be generalizable to new and different projects. Therefore, there is a potential threat that reported APR models with high effectiveness perform poorly when the characteristics of the new project or its bugs are different than the training set's(Domain Shift). In this study, we first define and measure the domain shift problem in automated program repair. Then, we then propose a domain adaptation framework that can adapt an APR model for a given target project. We conduct an empirical study with three domain adaptation methods FullFineTuning, TuningWithLightWeightAdapterLayers, and CurriculumLearning using two state-of-the-art domain adaptation tools (TFix and CodeXGLUE) and two APR models on 611 bugs from 19 projects. The results show that our proposed framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by 23.4%. Another contribution of this study is the proposal of a data synthesis method to address the lack of labelled data in APR. We leverage transformers to create a bug generator model. We use the generated synthetic data to domain adapt TFix and CodeXGLUE on the projects with no data (Zero-shot learning), which results in an average improvement of 5.76% and 24.42% for TFix and CodeXGLUE, respectively.) <|cite_end|> occurs when pre-trained models cannot transfer their code knowledge to a new target project due to differing code distributions across project domains. Despite these shortcomings, test case generation based on deep neural code models has advantages. The tests generated by neural models are similar to human-written tests, since the models are trained on human-written code. Therefore, they are more readable and maintainable than alternative automatically generated test cases. As previous literature suggests~\cite{tufano2020unit}, developers prefer neural model-generated tests over other automatically created test cases since they are more readable and understandable. They also target different faults (the same as those targeted by the developer-written tests) compared to tests generated by, e.g., search-based approaches, which usually focus on maximizing code coverage.
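A common remedy for challenge (c) above is to continue fine-tuning the model on data drawn from the target project itself. The following is a minimal sketch of such project-level adaptation with a HuggingFace-style API (the checkpoint name, hyperparameters, and the example pair are illustrative assumptions, not our exact setup):

import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")  # or a test-generation checkpoint
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# (focal method, developer-written test) pairs mined from the target project.
project_pairs = [
    ("public int add(int a, int b) { return a + b; }",
     "@Test public void testAdd() { assertEquals(3, new Calc().add(1, 2)); }"),
]

model.train()
for src, tgt in project_pairs:  # in practice, several epochs over all project pairs
    batch = tok(src, return_tensors="pt", truncation=True, max_length=512)
    labels = tok(tgt, return_tensors="pt", truncation=True, max_length=512).input_ids
    loss = model(**batch, labels=labels).loss
    loss.backward()
    opt.step()
    opt.zero_grad()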
To address the shortcomings of pre-trained code models for test case generation, i.e., low performance, insufficient evaluation, and domain shift, we propose a simple yet novel technique by adopting two different levels of fine-tuning/domain adaptation: task and project. In our approach, first, we fine-tune the \emph{CodeT5} pre-trained model with a task-specific dataset to customize the model for generating unit test cases, given a method under test. Then, we apply domain adaptation with the project-specific dataset to learn the proper code knowledge and create higher-quality test cases for mitigating the impact of the domain shift problem. We also conduct a more thorough investigation by evaluating test adequacy and textual similarity metrics to address the insufficient evaluation problem. Regardless of the simplicity of the idea, we note that this approach is 1) novel and 2) effective, as it enables a relatively small model (\emph{CodeT5} with 220M parameters) to outperform much bigger models (\emph{GPT-4} with 1.76T parameters). Our framework uses automated post-processing with simple heuristics to mitigate compilability/executability issues. We use the \emph{Methods2test} dataset <|cite_start|> (Reference: Methods2Test: A dataset of focal methods mapped to test cases: Unit testing is an essential part of the software development process, which helps to identify issues with source code in early stages of development and prevent regressions. Machine learning has emerged as viable approach to help software developers generate automated unit tests. However, generating reliable unit test cases that are semantically correct and capable of catching software bugs or unintended behavior via machine learning requires large, metadata-rich, datasets. In this paper we present Methods2Test: A dataset of focal methods mapped to test cases: a large, supervised dataset of test cases mapped to corresponding methods under test (i.e., focal methods). This dataset contains 780,944 pairs of JUnit tests and focal methods, extracted from a total of 91,385 Java open source projects hosted on GitHub with licenses permitting re-distribution. The main challenge behind the creation of the Methods2Test was to establish a reliable mapping between a test case and the relevant focal method. To this aim, we designed a set of heuristics, based on developers' best practices in software testing, which identify the likely focal method for a given test case. To facilitate further analysis, we store a rich set of metadata for each method-test pair in JSON-formatted files. Additionally, we extract textual corpus from the dataset at different context levels, which we provide both in raw and tokenized forms, in order to enable researchers to train and evaluate machine learning models for Automated Test Generation. Methods2Test is publicly available at: https://github.com/microsoft/methods2test) <|cite_end|> for fine-tuning the test case generation task. We apply domain adaptation to the models by leveraging human-written unit test cases for each project. For evaluation and domain adaptation, we use the \emph{Defects4j} dataset <|cite_start|> (Reference: Defects4J: a database of existing faults to enable controlled testing studies for Java programs: Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research.
Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source programs. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version control system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to conduct and reproduce empirical studies. Defects4J is publicly available at http://defects4j.org.) <|cite_end|>. We compare the effectiveness of our approach with and without domain adaptation. We also investigate two other baselines, namely \emph{GPT-4} (the largest, state-of-the-art LLM) and \emph{A3Test} (a state-of-the-art neural test case generation method that exploits task-knowledge domain adaptation). Our model with project-level domain adaptation outperforms all the baselines on all the studied metrics, except for the parse rate of \emph{GPT-4}. Furthermore, our approach can be used alongside search-based test generation tools to increase their line coverage and mutation score. We show that using domain adaptation, we can improve the line coverage by an average of 18.62\%, 19.88\%, and 18.02\% and the mutation score by 16.45\%, 16.01\%, and 12.99\% compared to the \emph{CodeT5} without DA, \emph{A3Test}, and \emph{GPT-4} baselines, respectively. We also show that our approach can increase the overall coverage and mutation scores of \emph{EvoSuite} when the two are used alongside each other, by an average of 34.42\% and 6.8\% for line coverage and mutation score, respectively. In summary, our main contributions are as follows: \begin{enumerate} \item We propose a line-level neural test case generation framework leveraging domain adaptation, which creates high-quality unit test cases (compilable, similar to human-written tests, and test-adequate). \item We conducted an empirical study on the \emph{Defects4j} benchmark dataset <|cite_start|> (Reference: Defects4J: a database of existing faults to enable controlled testing studies for Java programs: Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source programs. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version control system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites.
This framework also provides a high-level interface to common tasks in software testing research, making it easy to conduct and reproduce empirical studies. Defects4J is publicly available at http://defects4j.org.) <|cite_end|>, which shows that our approach improves upon the most related work (\emph{AthenaTest}, \emph{A3Test}, and \emph{GPT-4}) from the literature. \item We also show that our approach can cover lines that neither developer-written tests nor a baseline search-based testing tool can cover. We also show that we can kill new mutants compared to the search-based tools. \item Unlike most related work, we execute the generated test cases and evaluate them with proper test adequacy metrics (i.e., code coverage and mutation score), which require much more effort to calculate compared to BLEU/CodeBLEU. We also report the BLEU and CodeBLEU scores, which are widely used in the literature as automated evaluation metrics. \end{enumerate} The code for our proposed approach and the experiment scripts and raw data are publicly available for replication\footnote{\url{https://github.com/shinjh0849/unit_tc_generation}}. We organize the rest of this paper as follows. Section~\ref{sec:bg} introduces the background of neural models for code and unit test generation. Section~\ref{sec:app} presents the approach of our test case generation framework. Section~\ref{sec:settings} shows the experimental setup. Section~\ref{sec:evaluation} presents the evaluation results. Section~\ref{sec:threats} discusses the possible threats in our study. Section~\ref{sec:con} concludes this paper. Related Work \label{sec:bg} \subsection{Search-based Software Testing} In search-based software testing (SBST), the problem of test case generation is translated into an optimization problem over a test adequacy criterion such as code coverage <|cite_start|> (Reference: {Search-based software test data generation: a survey: The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that, in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are high‐level frameworks, which utilize heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey‐box properties, for example safety constraints; and also non‐functional properties, such as worst‐case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas. Copyright © 2004 John Wiley & Sons, Ltd.) <|cite_end|>. For instance, \emph{EvoSuite} <|cite_start|> (Reference: Evosuite: automatic test suite generation for object-oriented software: To find defects in software, one needs test cases that execute the software systematically, and oracles that assess the correctness of the observed behavior when running these test cases. This paper presents EvoSuite, a tool that automatically generates test cases with assertions for classes written in Java code.
To achieve this, EvoSuite applies a novel hybrid approach that generates and optimizes whole test suites towards satisfying a coverage criterion. For the produced test suites, EvoSuite suggests possible oracles by adding small and effective sets of assertions that concisely summarize the current behavior; these assertions allow the developer to detect deviations from expected behavior, and to capture the current behavior in order to protect against future defects breaking this behavior.) <|cite_end|> is an SBST tool that generates test cases to optimize the statement or branch coverage of the generated tests. It uses a genetic algorithm to evolve a test suite toward a higher-quality set (more coverage with a minimum number of tests). While SBST tools have shown great effectiveness, studies report limitations in the understandability or readability <|cite_start|> (Reference: DeepTC-Enhancer: Improving the readability of automatically generated tests: Automated test case generation tools have been successfully proposed to reduce the amount of human and infrastructure resources required to write and run test cases. However, recent studies demonstrate that the readability of generated tests is very limited due to (i) uninformative identifiers and (ii) lack of proper documentation. Prior studies proposed techniques to improve test readability by either generating natural language summaries or meaningful methods names. While these approaches are shown to improve test readability, they are also affected by two limitations: (1) generated summaries are often perceived as too verbose and redundant by developers, and (2) readable tests require both proper method names but also meaningful identifiers (within-method readability). In this work, we combine template based methods and Deep Learning (DL) approaches to automatically generate test case scenarios (elicited from natural language patterns of test case statements) as well as to train DL models on path-based representations of source code to generate meaningful identifier names. Our approach, called DeepTC-Enhancer, recommends documentation and identifier names with the ultimate goal of enhancing readability of automatically generated test cases. An empirical evaluation with 36 external and internal developers shows that (1) DeepTC-Enhancer outperforms significantly the baseline approach for generating summaries and performs equally with the baseline approach for test case renaming, (2) the transformation proposed by DeepTC-Enhancer results in a significant increase in readability of automatically generated test cases, and (3) there is a significant difference in the feature preferences between external and internal developers.) <|cite_end|> <|cite_start|> (Reference: An Empirical Investigation on the Readability of Manual and Generated Test Cases: Software testing is one of the most crucial tasks in the typical development process. Developers are usually required to write unit test cases for the code they implement. Since this is a time-consuming task, in last years many approaches and tools for automatic test case generation — such as EvoSuite — have been introduced. Nevertheless, developers have to maintain and evolve tests to sustain the changes in the source code; therefore, having readable test cases is important to ease such a process. However, it is still not clear whether developers make an effort in writing readable unit tests.
Therefore, in this paper, we conduct an explorative study comparing the readability of manually written test cases with the classes they test. Moreover, we deepen such analysis looking at the readability of automatically generated test cases. Our results suggest that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.) <|cite_end|> <|cite_start|> (Reference: {Modeling Readability to Improve Unit Tests: Writing good unit tests can be tedious and error prone, but even once they are written, the job is not done: Developers need to reason about unit tests throughout software development and evolution, in order to diagnose test failures, maintain the tests, and to understand code written by other developers. Unreadable tests are more difficult to maintain and lose some of their value to developers. To overcome this problem, we propose a domain-specific model of unit test readability based on human judgements, and use this model to augment automated unit test generation. The resulting approach can automatically generate test suites with both high coverage and also improved readability. In human studies users prefer our improved tests and are able to answer maintenance questions about them 14% more quickly at the same level of accuracy.) <|cite_end|>, quality <|cite_start|> (Reference: Scented since the beginning: On the diffuseness of test smells in automatically generated test code: ) <|cite_end|> <|cite_start|> (Reference: Automatic test case generation: What if test code quality matters?: Test case generation tools that optimize code coverage have been extensively investigated. Recently, researchers have suggested to add other non-coverage criteria, such as memory consumption or readability, to increase the practical usefulness of generated tests. In this paper, we observe that test code quality metrics, and test cohesion and coupling in particular, are valuable candidates as additional criteria. Indeed, tests with low cohesion and/or high coupling have been shown to have a negative impact on future maintenance activities. In an exploratory investigation we show that most generated tests are indeed affected by poor test code quality. For this reason, we incorporate cohesion and coupling metrics into the main loop of search-based algorithm for test case generation. Through an empirical study we show that our approach is not only able to generate tests that are more cohesive and less coupled, but can (i) increase branch coverage up to 10% when enough time is given to the search and (ii) result in statistically shorter tests.) <|cite_end|>, and their limited ability to detect actual bugs with the generated unit test cases. <|cite_start|> (Reference: A multi-objective genetic algorithm to test data generation: Evolutionary testing has successfully applied search based optimization algorithms to the test data generation problem. The existing works use different techniques and fitness functions. However, the used functions consider only one objective, which is, in general, related to the coverage of a testing criterion. But, in practice, there are many factors that can influence the generation of test data, such as memory consumption, execution time, revealed faults, etc. Considering this fact, this work explores a multi-objective optimization approach for test data generation. A framework that implements a multi-objective genetic algorithm is described.
Two different representations for the population are used, which allows the test of procedural and object-oriented code. Combinations of three objectives are experimentally evaluated: coverage of structural test criteria, ability to reveal faults, and execution time.) <|cite_end|> <|cite_start|> (Reference: On the effectiveness of manual and automatic unit test generation: The importance of testing has recently seen a significant growth, thanks to its benefits to software design (e.g. think of test-driven development), implementation and maintenance support. As a consequence of this, nowadays it is quite common to introduce a test suite into an existing system, which was not designed for it. The software engineer must then decide whether using tools which automatically generate unit tests (test suites necessary foundations) and how. This paper tries to deal with the issue of choosing the best approach. We will describe how different generation techniques (both manual and automatic) have been applied to a real case study. We will compare achieved results using several metrics in order to identify different approaches benefits and shortcomings. We will conclude showing the measure how the adoption of tools for automatic test creation can shift the trade-off between time and quality.) <|cite_end|> \subsection{Domain Adaptation} Domain adaptation is a technique for modifying a model trained on one domain to perform well on a different but related domain. The goal is to leverage the knowledge gained from the source domain to improve performance on the target domain, especially when the target domain has limited labeled data. Domain adaptation is a type of transfer learning, which aims to transfer knowledge from one task to another. Nam et al. <|cite_start|> (Reference: Transfer defect learning: Many software defect prediction approaches have been proposed and most are effective in within-project prediction settings. However, for new projects or projects with limited training data, it is desirable to learn a prediction model by using sufficient training data from existing source projects and then apply the model to some target projects (cross-project defect prediction). Unfortunately, the performance of cross-project defect prediction is generally poor, largely because of feature distribution differences between the source and target projects. In this paper, we apply a state-of-the-art transfer learning approach, TCA, to make feature distributions in source and target projects similar. In addition, we propose a novel transfer defect learning approach, TCA+, by extending TCA. Our experimental results for eight open-source projects show that TCA+ significantly improves cross-project prediction performance.) <|cite_end|> proposed a novel transfer defect learning approach, \emph{TCA+}, which applies a transfer learning technique to reduce the data distribution difference between source and target projects for cross-project defect prediction. \emph{TCA+} also selects a suitable normalization option based on the similarity of data set characteristics between the source and target projects and significantly improves prediction performance. Patel et al. <|cite_start|> (Reference: Visual Domain Adaptation: A Survey of Recent Advances: In pattern recognition and computer vision, one is often faced with scenarios where the training data used to learn a model have different distribution from the data on which the model is applied.
Regardless of the cause, any distributional change that occurs after learning a classifier can degrade its performance at test time. Domain adaptation tries to mitigate this degradation. In this article, we provide a survey of domain adaptation methods for visual recognition. We discuss the merits and drawbacks of existing domain adaptation approaches and identify promising avenues for research in this rapidly evolving field.) <|cite_end|> presented a survey of domain adaptation methods for visual recognition. The paper discusses the challenges, assumptions, and formulations of domain adaptation and categorizes the existing methods into feature augmentation, feature transformation, parameter adaptation, dictionary learning, and others. It also highlights the advantages and limitations of each category and identifies some promising directions for future research in this field. Farahani et al. <|cite_start|> (Reference: A Brief Review of Domain Adaptation: Classical machine learning assumes that the training and test sets come from the same distributions. Therefore, a model learned from the labeled training data is expected to perform well on the test data. However, This assumption may not always hold in real-world applications where the training and the test data fall from different distributions, due to many factors, e.g., collecting the training and test sets from different sources, or having an out-dated training set due to the change of data over time. In this case, there would be a discrepancy across domain distributions, and naively applying the trained model on the new dataset may cause degradation in the performance. Domain adaptation is a sub-field within machine learning that aims to cope with these types of problems by aligning the disparity between domains such that the trained model can be generalized into the domain of interest. This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain. It addresses the categorization of domain adaptation from different viewpoints. Besides, It presents some successful shallow and deep domain adaptation approaches that aim to deal with domain adaptation problems.) <|cite_end|> briefly reviewed domain adaptation, introducing its main categories, challenges, and approaches, with a focus on unsupervised domain adaptation. Zirak et al. <|cite_start|> (Reference: Improving Automated Program Repair with Domain Adaptation: Automated Program Repair (APR) is defined as the process of fixing a bug/defect in the source code, by an automated tool. APR tools have recently experienced promising results by leveraging state-of-the-art Neural Language Processing (NLP) techniques. APR tools such as TFix and CodeXGLUE combine text-to-text transformers with software-specific techniques are outperforming alternatives, these days. However, in most APR studies the train and test sets are chosen from the same set of projects. In reality, however, APR models are meant to be generalizable to new and different projects. Therefore, there is a potential threat that reported APR models with high effectiveness perform poorly when the characteristics of the new project or its bugs are different than the training set's(Domain Shift). In this study, we first define and measure the domain shift problem in automated program repair. Then, we propose a domain adaptation framework that can adapt an APR model for a given target project.
We conduct an empirical study with three domain adaptation methods FullFineTuning, TuningWithLightWeightAdapterLayers, and CurriculumLearning using two state-of-the-art domain adaptation tools (TFix and CodeXGLUE) and two APR models on 611 bugs from 19 projects. The results show that our proposed framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by 23.4%. Another contribution of this study is the proposal of a data synthesis method to address the lack of labelled data in APR. We leverage transformers to create a bug generator model. We use the generated synthetic data to domain adapt TFix and CodeXGLUE on the projects with no data (Zero-shot learning), which results in an average improvement of 5.76% and 24.42% for TFix and CodeXGLUE, respectively.) <|cite_end|> proposed a domain adaptation framework for automated program repair (APR) models that can improve their effectiveness on new and different projects. The framework uses three methods: full fine-tuning, tuning with lightweight adapter layers, and curriculum learning. It also employs a data synthesis method to create artificial bugs for zero-shot learning. \subsection{Neural Models for Unit Test Generation} Deep neural models of code for unit test case generation are relatively new and still few in number. They can be grouped into two categories, i.e., test oracle generation and unit test case generation. \subsubsection{Test Oracle Generation} Test oracle generation aims to generate oracles, e.g., meaningful assertion statements, when the focal context (method under test together with its class information, i.e., class method signature and class fields) and the corresponding test prefix are given <|cite_start|> (Reference: Assessing Evaluation Metrics for Neural Test Oracle Generation: In this work, we revisit existing oracle generation studies plus ChatGPT to empirically investigate the current standing of their performance in both NLG-based and test adequacy metrics. Specifically, we train and run four state-of-the-art test oracle generation models on five NLG-based and two test adequacy metrics for our analysis. We apply two different correlation analyses between these two different sets of metrics. Surprisingly, we found no significant correlation between the NLG-based metrics and test adequacy metrics. For instance, oracles generated from ChatGPT on the project activemq-artemis had the highest performance on all the NLG-based metrics among the studied NOGs, however, it had the most number of projects with a decrease in test adequacy metrics compared to all the studied NOGs. We further conduct a qualitative analysis to explore the reasons behind our observations, we found that oracles with high NLG-based metrics but low test adequacy metrics tend to have complex or multiple chained method invocations within the oracle's parameters, making it hard for the model to generate completely, affecting the test adequacy metrics. On the other hand, oracles with low NLG-based metrics but high test adequacy metrics tend to have to call different assertion types or a different method that functions similarly to the ones in the ground truth. Overall, this work complements prior studies on test oracle generation with an extensive performance evaluation with both NLG and test adequacy metrics and provides guidelines for better assessment of deep learning applications in software test generation in the future.) <|cite_end|>. Test prefixes are statements in a unit test case with the oracles (assertion statements, try-catch clauses, etc.)
removed. Test prefixes drive the program into a desired testable state. In general, the problem of oracle generation is a subset of the whole test case generation problem. \emph{ATLAS} (\underline{A}u\underline{T}omatic \underline{L}earning of \underline{A}ssert \underline{S}tatements) <|cite_start|> (Reference: On Learning Meaningful Assert Statements for Unit Test Cases: Software testing is an essential part of the software lifecycle and requires a substantial amount of time and effort. It has been estimated that software developers spend close to 50% of their time on testing the code they write. For these reasons, a long standing goal within the research community is to (partially) automate software testing. While several techniques and tools have been proposed to automatically generate test methods, recent work has criticized the quality and usefulness of the assert statements they generate. Therefore, we employ a Neural Machine Translation (NMT) based approach called Atlas (AuTomatic Learning of Assert Statements) to automatically generate meaningful assert statements for test methods. Given a test method and a focal method (i.e., the main method under test), Atlas can predict a meaningful assert statement to assess the correctness of the focal method. We applied Atlas to thousands of test methods from GitHub projects and it was able to predict the exact assert statement manually written by developers in 31% of the cases when only considering the top-1 predicted assert. When considering the top-5 predicted assert statements, Atlas is able to predict exact matches in 50% of the cases. These promising results hint at the potential usefulness of our approach as (i) a complement to automatic test case generation techniques, and (ii) a code completion support for developers, who can benefit from the recommended assert statements while writing test code.) <|cite_end|> is the first to utilize deep neural models for assertion generation. They could generate assertions with a BLEU-4 score of 61.85\%. Yu et al. <|cite_start|> (Reference: Automated Assertion Generation via Information Retrieval and Its Integration with Deep Learning: Unit testing could be used to validate the correctness of basic units of the software system under test. To reduce manual efforts in conducting unit testing, the research community has contributed with tools that automatically generate unit test cases, including test inputs and test oracles (e.g., assertions). Recently, ATLAS, a deep learning (DL) based approach, was proposed to generate assertions for a unit test based on other already written unit tests. Despite promising, the effectiveness of ATLAS is still limited. To improve the effectiveness, in this work, we make the first attempt to leverage Information Retrieval (IR) in assertion generation and propose an IR-based approach, including the technique of IR-based assertion retrieval and the technique of retrieved-assertion adaptation. In addition, we propose an integration approach to combine our IR-based approach with a DL-based approach (e.g., ATLAS) to further improve the effectiveness. Our experimental results show that our IR-based approach outperforms the state-of-the-art DL-based ap-proach, and integrating our IR-based approach with the DL-based approach can further achieve higher accuracy.
Our results convey an important message that information retrieval could be competitive and worthwhile to pursue for software engineering tasks such as assertion generation, and should be seriously considered by the research community given that in recent years deep learning solutions have been over-popularly adopted by the research community for software engineering tasks.) <|cite_end|> introduced an approach to integrate information retrieval techniques, using the Jaccard, Overlap, and Dice coefficients <|cite_start|> (Reference: Measures of the Amount of Ecologic Association Between Species: ) <|cite_end|> with the deep neural approach \emph{ATLAS}. With their approach, they could boost the BLEU score up to 78.86\%. \emph{TOGA} (a neural method for \underline{T}est \underline{O}racle \underline{G}ener\underline{A}tion) <|cite_start|> (Reference: TOGA: A Neural Method for Test Oracle Generation: Testing is widely recognized as an important stage of the software development lifecycle. Effective software testing can provide benefits such as bug finding, preventing regressions, and documentation. In terms of documentation, unit tests express a unit's intended functionality, as conceived by the developer. A test oracle, typically expressed as a condition, documents the intended behavior of a unit under a given test prefix. Synthesizing a functional test oracle is a challenging problem, as it must capture the intended functionality rather than the implemented functionality. In this paper, we propose TOGA (a neural method for Test Oracle GenerAtion), a unified transformer-based neural approach to infer both exceptional and assertion test oracles based on the context of the focal method. Our approach can handle units with ambiguous or missing documentation, and even units with a missing implementation. We evaluate our approach on both oracle inference accuracy and functional bug-finding. Our technique improves accuracy by 33\% over existing oracle inference approaches, achieving 96\% overall accuracy on a held out test dataset. Furthermore, we show that when integrated with an automated test generation tool (EvoSuite), our approach finds 57 real world bugs in large-scale Java programs, including 30 bugs that are not found by any other automated testing method in our evaluation.) <|cite_end|> was proposed to use a unified transformer-based neural model to generate both try-catch clauses and assertion statements for unit test case oracles. For try-catch clause generation, they achieved an exact match accuracy of 86\%, and 69\% for assertion statements. Tufano et al. <|cite_start|> (Reference: Generating Accurate Assert Statements for Unit Test Cases using Pretrained Transformers: Unit testing represents the foundational basis of the software testing pyramid, beneath integration and end-to-end testing. Automated software testing researchers have proposed a variety of techniques to assist developers in this time-consuming task. In this paper we present an approach to support developers in writing unit test cases by generating accurate and useful assert statements. Our approach is based on a state-of-the-art transformer model initially pretrained on an English textual corpus. This semantically rich model is then trained in a semi-supervised fashion on a large corpus of source code. Finally, we finetune this model on the task of generating assert statements for unit tests. The resulting model is able to generate accurate assert statements for a given method under test.
In our empirical evaluation, the model was able to predict the exact assert statements written by developers in 62% of the cases in the first attempt. The results show 80% relative improvement for top-1 accuracy over the previous RNN-based approach in the literature. We also show the substantial impact of the pretraining process on the performances of our model, as well as comparing it with assert auto-completion task. Finally, we demonstrate how our approach can be used to augment EvoSuite test cases, with additional asserts leading to improved test coverage.) <|cite_end|> proposed to apply the \emph{BART} pre-trained model, trained on natural language and source code corpora, and then fine-tune it on the \emph{ATLAS} dataset. They achieved an exact match accuracy of 62.47\% with a beam size of one. The main difference between our work and test oracle generation is that test oracle generation models only focus on the oracle part of the test case. Generating test prefixes is a non-trivial task, which motivates the need to generate whole unit test cases. \subsubsection{Unit Test Case Generation} There have not been many studies related to automating the generation of whole test cases. Liu et al. <|cite_start|> (Reference: Automatic Text Input Generation for Mobile Testing: Many designs have been proposed to improve the automated mobile testing. Despite these improvements, providing appropriate text inputs remains a prominent obstacle, which hinders the large-scale adoption of automated testing approaches. The key challenge is how to automatically produce the most relevant text in a use case context. For example, a valid website address should be entered in the address bar of a mobile browser app to continue the testing of the app, a singer's name should be entered in the search bar of a music recommendation app. Without the proper text inputs, the testing would get stuck. We propose a novel deep learning based approach to address the challenge, which reduces the problem to a minimization problem. Another challenge is how to make the approach generally applicable to both the trained apps and the untrained apps. We leverage the Word2Vec model to address the challenge. We have built our approaches as a tool and evaluated it with 50 iOS mobile apps including Firefox and Wikipedia. The results show that our approach significantly outperforms existing automatic text input generation methods.) <|cite_end|> exploited deep learning models to generate relevant text inputs to test user interfaces for mobile applications. Saes <|cite_start|> (Reference: Unit test generation using machine learning: Test suite generators could help software engineers to ensure software quality by detecting software faults. These generators can be applied to software projects that do not have an initial test suite, a test suite can be generated which is maintained and optimized by the developers. Testing helps to check if a program works and, also if it continues to work after changes. This helps to prevent software from failing and aids developers in applying changes and minimizing the possibility to introduce errors in other (critical) parts of the software. State-of-the-art test generators are still only able to capture a small portion of potential software faults. The Search-Based Software Testing 2017 workshop compared four unit test generation tools.
These generators were only capable of achieving an average mutation coverage below 51%, which is lower than the score of the initial unit test suite written by software engineers. We propose a test suite generator driven by neural networks, which has the potential to detect mutants that could only be detected by manually written unit tests. In this research, multiple networks, trained on open-source projects, are evaluated on their ability to generate test suites. The dataset contains the unit tests and the code it tests. The unit test method names are used to link unit tests to methods under test. With our linking mechanism, we were able to link 27.41% (36,301 out of 132,449) tests. Our machine learning model could generate parsable code in 86.69% (241/278) of the time. This high number of parsable code indicates that the neural network learned patterns between code and tests, which indicates that neural networks are applicable for test generation.) <|cite_end|> generated a test suite for Java projects by identifying the connections between focal methods and their corresponding tests. They gathered more than 780K pairs of focal and test methods that use the JUnit testing framework from GitHub. They could generate test cases with a parsability of 86.69\%. However, they did not evaluate how correct or effective the generated test cases were in identifying bugs or covering code. Tufano et al. <|cite_start|> (Reference: Unit Test Case Generation with Transformers and Focal Context: Automated unit test case generation tools facilitate test-driven development and support developers by suggesting tests intended to identify flaws in their code. Existing approaches are usually guided by the test coverage criteria, generating synthetic test cases that are often difficult for developers to read or understand. In this paper we propose AthenaTest, an approach that aims to generate unit test cases by learning from real-world focal methods and developer-written test cases. We formulate unit test case generation as a sequence-to-sequence learning task, adopting a two-step training procedure consisting of denoising pretraining on a large unsupervised Java corpus, and supervised finetuning for a downstream translation task of generating unit tests. We investigate the impact of natural language and source code pretraining, as well as the focal context information surrounding the focal method. Both techniques provide improvements in terms of validation loss, with pretraining yielding 25% relative improvement and focal context providing additional 11.1% improvement. We also introduce Methods2Test, the largest publicly available supervised parallel corpus of unit test case methods and corresponding focal methods in Java, which comprises 780K test cases mined from 91K open-source repositories from GitHub. We evaluate AthenaTest on five defects4j projects, generating 25K passing test cases covering 43.7% of the focal methods with only 30 attempts. We execute the test cases, collect test coverage information, and compare them with test cases generated by EvoSuite and GPT-3, finding that our approach outperforms GPT-3 and has comparable coverage w.r.t. EvoSuite. Finally, we survey professional developers on their preference in terms of readability, understandability, and testing effectiveness of the generated tests, showing overwhelming preference towards AthenaTest.)
<|cite_end|> proposed \emph{AthenaTest}, which pre-trains a \emph{BART} model on both natural language and source code corpora and then fine-tunes it on the \emph{Methods2Test} <|cite_start|> (Reference: Methods2Test: A dataset of focal methods mapped to test cases: Unit testing is an essential part of the software development process, which helps to identify issues with source code in early stages of development and prevent regressions. Machine learning has emerged as a viable approach to help software developers generate automated unit tests. However, generating reliable unit test cases that are semantically correct and capable of catching software bugs or unintended behavior via machine learning requires large, metadata-rich, datasets. In this paper we present Methods2Test: A dataset of focal methods mapped to test cases: a large, supervised dataset of test cases mapped to corresponding methods under test (i.e., focal methods). This dataset contains 780,944 pairs of JUnit tests and focal methods, extracted from a total of 91,385 Java open source projects hosted on GitHub with licenses permitting re-distribution. The main challenge behind the creation of the Methods2Test was to establish a reliable mapping between a test case and the relevant focal method. To this aim, we designed a set of heuristics, based on developers' best practices in software testing, which identify the likely focal method for a given test case. To facilitate further analysis, we store a rich set of metadata for each method-test pair in JSON-formatted files. Additionally, we extract textual corpus from the dataset at different context levels, which we provide both in raw and tokenized forms, in order to enable researchers to train and evaluate machine learning models for Automated Test Generation. Methods2Test is publicly available at: https://github.com/microsoft/methods2test) <|cite_end|> dataset to generate whole unit test cases when a focal method and its context are given. They found that their method could correctly test 43\% of the focal methods, with 16\% of the candidates being correct. Alagarsamy et al. <|cite_start|> (Reference: A3Test: Assertion-Augmented Automated Test Case Generation: Test case generation is an important activity, yet a time-consuming and laborious task. Recently, AthenaTest -- a deep learning approach for generating unit test cases -- is proposed. However, AthenaTest can generate less than one-fifth of the test cases correctly, due to a lack of assertion knowledge and test signature verification. In this paper, we propose A3Test, a DL-based test case generation approach that is augmented by assertion knowledge with a mechanism to verify naming consistency and test signatures. A3Test leverages the domain adaptation principles where the goal is to adapt the existing knowledge from an assertion generation task to the test case generation task. We also introduce a verification approach to verify naming consistency and test signatures. Through an evaluation of 5,278 focal methods from the Defects4j dataset, we find that our A3Test (1) achieves 147% more correct test cases and 15% more method coverage, with a lower number of generated test cases than AthenaTest; (2) still outperforms the existing pre-trained models for the test case generation task; (3) contributes substantially to performance improvement via our own proposed assertion pre-training and the verification components; (4) is 97.2% much faster while being more accurate than AthenaTest.)
<|cite_end|> proposed \emph{A3Test}, which is a test case generation approach that is augmented by a test oracle generation task and includes a mechanism to verify naming consistency and test signatures. It performs domain adaptation at the task level, i.e., from the test oracle generation task to the whole test case generation task, achieving more correct test cases and method coverage than \emph{AthenaTest}. Lemieux et al. <|cite_start|> (Reference: CodaMOSA: Escaping coverage plateaus in test generation with pre-trained large language models: Search-based software testing (SBST) generates high-coverage test cases for programs under test with a combination of test case generation and mutation. SBST's performance relies on there being a reasonable probability of generating test cases that exercise the core logic of the program under test. Given such test cases, SBST can then explore the space around them to exercise various parts of the program. This paper explores whether Large Language Models (LLMs) of code, such as OpenAI's Codex, can be used to help SBST's exploration. Our proposed algorithm, CodaMosa, conducts SBST until its coverage improvements stall, then asks Codex to provide example test cases for under-covered functions. These examples help SBST redirect its search to more useful areas of the search space. On an evaluation over 486 benchmarks, CodaMosa achieves statistically significantly higher coverage on many more benchmarks (173 and 279) than it reduces coverage on (10 and 4), compared to SBST and LLM-only baselines.) <|cite_end|> proposed \emph{CodaMOSA}, an SBST approach that leverages an LLM to escape coverage plateaus in Python code bases. Schafer et al. <|cite_start|> (Reference: An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation: Unit tests play a key role in ensuring the correctness of software. However, manually creating unit tests is a laborious task, motivating the need for automation. Large Language Models (LLMs) have recently been applied to this problem, utilizing additional training or few-shot learning on examples of existing tests. This paper presents a large-scale empirical evaluation on the effectiveness of LLMs for automated unit test generation without additional training or manual effort, providing the LLM with the signature and implementation of the function under test, along with usage examples extracted from documentation. We also attempt to repair failed generated tests by re-prompting the model with the failing test and error message. We implement our approach in TestPilot, a test generation tool for JavaScript that automatically generates unit tests for all API functions in an npm package. We evaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a total of 1,684 API functions. The generated tests achieve a median statement coverage of 70.2% and branch coverage of 52.8%, significantly improving on Nessie, a recent feedback-directed JavaScript test generation technique, which achieves only 51.3% statement coverage and 25.6% branch coverage. We also find that 92.8% of TestPilot's generated tests have no more than 50% similarity with existing tests (as measured by normalized edit distance), with none of them being exact copies. Finally, we run TestPilot with two additional LLMs, OpenAI's older code-cushman-002 LLM and the open LLM StarCoder.
Overall, we observed similar results with the former (68.2% median statement coverage), and somewhat worse results with the latter (54.0% median statement coverage), suggesting that the effectiveness of the approach is influenced by the size and training set of the LLM, but does not fundamentally depend on the specific model.) <|cite_end|> proposed \emph{TestPilot}, a test case generation approach for npm packages (JavaScript) that leverages an LLM, usage examples mined from package documentation, and the error messages of failing tests. Yuan et al. <|cite_start|> (Reference: No More Manual Tests? Evaluating and Improving ChatGPT for Unit Test Generation: Unit testing is essential in detecting bugs in functionally-discrete program units. Manually writing high-quality unit tests is time-consuming and laborious. Although traditional techniques can generate tests with reasonable coverage, they exhibit low readability and cannot be directly adopted by developers. Recent work has shown the large potential of large language models (LLMs) in unit test generation, which can generate more human-like and meaningful test code. ChatGPT, the latest LLM incorporating instruction tuning and reinforcement learning, has performed well in various domains. However, it remains unclear how effective ChatGPT is in unit test generation. In this work, we perform the first empirical study to evaluate ChatGPT's capability of unit test generation. Specifically, we conduct a quantitative analysis and a user study to systematically investigate the quality of its generated tests regarding the correctness, sufficiency, readability, and usability. The tests generated by ChatGPT still suffer from correctness issues, including diverse compilation errors and execution failures. Still, the passing tests generated by ChatGPT resemble manually-written tests by achieving comparable coverage, readability, and even sometimes developers' preference. Our findings indicate that generating unit tests with ChatGPT could be very promising if the correctness of its generated tests could be further improved. Inspired by our findings above, we propose ChatTESTER, a novel ChatGPT-based unit test generation approach, which leverages ChatGPT itself to improve the quality of its generated tests. ChatTESTER incorporates an initial test generator and an iterative test refiner. Our evaluation demonstrates the effectiveness of ChatTESTER by generating 34.3% more compilable tests and 18.7% more tests with correct assertions than the default ChatGPT.) <|cite_end|> proposed \emph{ChatTester}, an LLM-based test case generation model that exploits \emph{ChatGPT} and an iterative generate-and-validate prompt engineering strategy with execution feedback. Nie et al. <|cite_start|> (Reference: Learning Deep Semantics for Test Completion: Writing tests is a time-consuming yet essential task during software development. We propose to leverage recent advances in deep learning for text and code generation to assist developers in writing tests. We formalize the novel task of test completion to automatically complete the next statement in a test method based on the context of prior statements and the code under test. We develop TeCo -- a deep learning model using code semantics for test completion. The key insight underlying TeCo is that predicting the next statement in a test method requires reasoning about code execution, which is hard to do with only syntax-level data that existing code completion models use.
TeCo extracts and uses six kinds of code semantics data, including the execution result of prior statements and the execution context of the test method. To provide a testbed for this new task, as well as to evaluate TeCo, we collect a corpus of 130,934 test methods from 1,270 open-source Java projects. Our results show that TeCo achieves an exact-match accuracy of 18, which is 29% higher than the best baseline using syntax-level data only. When measuring functional correctness of generated next statement, TeCo can generate runnable code in 29% of the cases compared to 18% obtained by the best baseline. Moreover, TeCo is significantly better than prior work on test oracle generation.) <|cite_end|> proposed \emph{TeCo}, a deep encoder-decoder test completion model that learns different levels of code semantics and re-ranks candidates by execution. A test completion model generates the next statement of a unit test case when the previous line and method under test are given. Our study continues in this direction and proposes domain adaptation at the project level to improve \emph{AthenaTest} and \emph{A3Test}, our most related work. Unlike these papers, we evaluate based on classic software testing criteria (i.e., code coverage and mutation testing). Most existing approaches only report BLEU scores or similar NLP-based metrics that do not correlate with the effectiveness (adequacy) of the generated test cases. Although there is a body of literature on test case generation, it shows that generating correct and effective test cases that reveal bugs remains a challenge for practical use. <|paper_end|>
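The set-similarity measures named in the related-work discussion above (Jaccard, Overlap, and Dice) are simple enough to state in a few lines. The following minimal Python sketch computes them over token sets and ranks a small corpus against a query; the token-set representation and the retrieval step are illustrative assumptions for an IR-based assertion-retrieval setting, not details taken from the cited papers.

```python
def jaccard(a: set, b: set) -> float:
    """|A n B| / |A u B|; defined as 0 when both sets are empty."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def overlap(a: set, b: set) -> float:
    """|A n B| / min(|A|, |B|), the overlap coefficient."""
    return len(a & b) / min(len(a), len(b)) if (a and b) else 0.0

def dice(a: set, b: set) -> float:
    """2|A n B| / (|A| + |B|), the Sorensen-Dice coefficient."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def retrieve_nearest(query: set, corpus: list) -> int:
    """Index of the corpus token set most similar to the query (by Jaccard).

    In a hypothetical IR-based assertion-retrieval pipeline, `corpus` would
    hold token sets of test methods with known assertions, and the top-ranked
    entry's assertion would be adapted for the query.
    """
    return max(range(len(corpus)), key=lambda i: jaccard(query, corpus[i]))
```

For example, `retrieve_nearest({"assertEquals", "list", "size"}, [{"assertTrue"}, {"assertEquals", "size"}])` returns `1`, since the second entry shares two of the query's three tokens.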
[ "<|reference_start|> Defects4J: a database of existing faults to enable controlled testing studies for Java programs: Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source pro- grams. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version con- trol system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to con- duct and reproduce empirical studies. Defects4J is publicly available at http://defects4j.org. <|reference_end|>", "<|reference_start|> {Search-based software test data generation: a\nsurvey: The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that, in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are high‐level frameworks, which utilize heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey‐box properties, for example safety constraints; and also non‐functional properties, such as worst‐case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas. Copyright © 2004 John Wiley & Sons, Ltd. <|reference_end|>", "<|reference_start|> {Modeling Readability to Improve Unit Tests: Writing good unit tests can be tedious and error prone, but even once they are written, the job is not done: Developers need to reason about unit tests throughout software development and evolution, in order to diagnose test failures, maintain the tests, and to understand code written by other developers. Unreadable tests are more difficult to maintain and lose some of their value to developers. To overcome this problem, we propose a domain-specific model of unit test readability based on human judgements, and use this model to augment automated unit test generation. The resulting approach can automatically generate test suites with both high coverage and also improved readability. In human studies users prefer our improved tests and are able to answer maintenance questions about them 14% more quickly at the same level of accuracy. 
<|reference_end|>", "<|reference_start|> TOGA: A Neural Method for Test Oracle Generation: Testing is widely recognized as an important stage of the software development lifecycle. Effective software testing can provide benefits such as bug finding, preventing regressions, and documentation. In terms of documentation, unit tests express a unit's intended functionality, as conceived by the developer. A test oracle, typically expressed as an condition, documents the intended behavior of a unit under a given test prefix. Synthesizing a functional test oracle is a challenging problem, as it must capture the intended functionality rather than the implemented functionality. In this paper, we propose TOGA (a neural method for Test Oracle GenerAtion), a unified transformer-based neural approach to infer both exceptional and assertion test oracles based on the context of the focal method. Our approach can handle units with ambiguous or missing documentation, and even units with a missing implementation. We evaluate our approach on both oracle inference accuracy and functional bug-finding. Our technique improves accuracy by 33\\% over existing oracle inference approaches, achieving 96\\% overall accuracy on a held out test dataset. Furthermore, we show that when integrated with a automated test generation tool (EvoSuite), our approach finds 57 real world bugs in large-scale Java programs, including 30 bugs that are not found by any other automated testing method in our evaluation. <|reference_end|>" ]
[ 7, 9, 13, 26 ]
{"<|multi_cite_1_2|>": "arxiv-549506", "<|multi_cite_1_3|>": "arxiv-241596", "<|multi_cite_1_4|>": "arxiv-326703", "<|multi_cite_2_1|>": "arxiv-289429", "<|multi_cite_2_2|>": "arxiv-482928", "<|cite_3|>": "arxiv-471213", "<|cite_4|>": "arxiv-407951", "<|cite_5|>": "ss-728302", "<|cite_6|>": "ss-728302", "<|cite_7|>": "ss-695381", "<|cite_8|>": "ss-1253831", "<|multi_cite_9_1|>": "ss-2076997", "<|multi_cite_9_2|>": "ss-1253060", "<|multi_cite_9_3|>": "ss-1777436", "<|multi_cite_10_1|>": "ss-1291816", "<|multi_cite_10_2|>": "ss-2164730", "<|multi_cite_11_1|>": "ss-2302534", "<|multi_cite_11_2|>": "ss-1162028", "<|cite_12|>": "ss-1976345", "<|cite_13|>": "ss-848152", "<|cite_14|>": "arxiv-294811", "<|cite_15|>": "arxiv-471213", "<|cite_16|>": "arxiv-548174", "<|cite_17|>": "arxiv-248194", "<|cite_18|>": "ss-816636", "<|cite_21|>": "ss-1349448", "<|cite_22|>": "arxiv-368124", "<|cite_23|>": "arxiv-289435", "<|cite_24|>": "ss-1451676", "<|cite_25|>": "ss-816635", "<|cite_26|>": "arxiv-289429", "<|cite_27|>": "arxiv-407951", "<|cite_28|>": "arxiv-482928", "<|cite_29|>": "ss-1758642", "<|cite_30|>": "arxiv-481187", "<|cite_31|>": "arxiv-502802", "<|cite_32|>": "arxiv-482836"}
0902.3114
<|paper_start|> Title: Analysis of the Second Moment of the LT Decoder Abstract: Analysis of the Second Moment of the LT Decoder: We analyze the second moment of the ripple size during the LT decoding process and prove that the standard deviation of the ripple size for an LT-code with length $k$ is of the order of $\sqrt k.$ Together with a result by Karp et al. stating that the expectation of the ripple size is of the order of $k$ [3], this gives bounds on the error probability of the LT decoder. We also give an analytic expression for the variance of the ripple size up to terms of constant order, and refine the expression in [3] for the expectation of the ripple size up to terms of the order of $1/k$, thus providing a first step towards an analytic finite-length analysis of LT decoding. Introduction We assume the reader is familiar with Fountain codes, LT-codes and belief propagation (BP) decoding. For details, the reader is referred to <|cite_start|> (Reference: LT codes: We introduce LT codes, the first rateless erasure codes that are very efficient as the data length grows.) <|cite_end|>, <|cite_start|> (Reference: Raptor Codes: A Fountain code is a code of fixed dimension and a limitless block-length. This is a class of codes with many interesting properties and applications. In this talk I will introduce several classes of probabilistic Fountain codes, including LT- and Raptor codes, show tools for their design and analysis, and discuss how they are used today to solve various data transmission problems on heterogeneous unreliable networks. I will also talk about the theory of these codes when transmission takes place over non-erasure channels, and low-complexity algorithms are used for their decoding.) <|cite_end|>. We consider LT-codes with parameters $(k,\Omega(x))$, where $k$ is the message length and $\Omega(x)=\sum \Omega_i x^i$ is the degree distribution of the output symbols during encoding. An important set to consider is the set of output symbols of degree $1$ (the \textit{ripple}). The size of the ripple varies during the decoding process, as higher-degree output symbols are reduced to degree $1$ by the removal of their edges, and as ripple elements become useless once their unique neighbor is recovered. The decoding is in error if and only if the ripple becomes empty before all the input symbols are recovered. A natural question is thus whether we can track the size of the ripple, in expectation, during the decoding process. Karp et al. <|cite_start|> (Reference: Finite length analysis of LT codes: This paper provides an efficient method for analyzing the error probability of the belief propagation (BP) decoder applied to LT Codes. Each output symbol is generated independently by sampling from a distribution and adding the input symbols corresponding to the support of the sampled vector.) <|cite_end|> proved that the expected ripple size is linear in $k$ throughout most of the decoding process. Their asymptotic analytic expressions for the expected ripple size can be found in section \ref{prelim}. They also derived an expression for the expected \textit{cloud} size throughout decoding, where the cloud is defined at each decoding step as the set of output symbols of degree strictly higher than $1$. In this paper, we extend their analysis in two ways. First, we consider higher moments of the cloud and ripple size in order to upper bound the error probability of the LT decoder.
More specifically, we use similar methods to derive an expression for the variance of the ripple size and prove that it is also linear in $k$ throughout most of the decoding process. We can then use this expression together with the expression for the expectation to offer a guarantee for successful decoding, as follows: for fixed LT-code parameters, let $R(u)$ denote the expectation and $\sigma_R(u)$ the standard deviation of the ripple size when $u$ symbols are unrecovered. If the function \begin{equation}\label{hc} h_c(u) = R(u) - c \cdot \sigma_R(u) \end{equation} never takes negative values for some parameter $c$, then we can upper bound the error probability of the LT decoder by the probability that the ripple size deviates from its mean by more than $c$ standard deviations. Second, we take the first step towards an analytic finite-length analysis of the LT decoder, by providing exact expressions for the expectation (variance) of the ripple size up to $O(1/k)$ (constant) terms. This is done by considering lower-order terms in the difference equations, but also by deriving tight bounds on the discrepancy introduced by approximating difference equations by differential equations. It is worth noting that the expressions we deal with are valid for ``most of the decoding process,'' that is, the analysis breaks down when the number of unrecovered symbols is no longer a constant fraction of $k$. This is no issue, however, when one considers Raptor codes, which need only a constant fraction of the input symbols to be recovered by the LT decoder <|cite_start|> (Reference: Raptor Codes: A Fountain code is a code of fixed dimension and a limitless block-length. This is a class of codes with many interesting properties and applications. In this talk I will introduce several classes of probabilistic Fountain codes, including LT- and Raptor codes, show tools for their design and analysis, and discuss how they are used today to solve various data transmission problems on heterogeneous unreliable networks. I will also talk about the theory of these codes when transmission takes place over non-erasure channels, and low-complexity algorithms are used for their decoding.) <|cite_end|>. <|paper_end|>
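As an aside on how a nonnegative $h_c$ yields an explicit estimate (a standard Chebyshev argument, stated here for illustration rather than taken from the paper): if the ripple size $\mathcal{R}_u$ at a given step has mean $R(u)$ and standard deviation $\sigma_R(u)$ with $h_c(u) \ge 0$, the ripple can only be empty if it deviates at least $c$ standard deviations below its mean, so \[ \Pr[\mathcal{R}_u = 0] \;\le\; \Pr\big[\,|\mathcal{R}_u - R(u)| \ge c\,\sigma_R(u)\,\big] \;\le\; \frac{1}{c^2}. \] With $R(u) = \Theta(k)$ and $\sigma_R(u) = \Theta(\sqrt{k})$ throughout most of the decoding process, $c$ may be taken as large as $\Theta(\sqrt{k})$ while keeping $h_c \ge 0$, making this single-step bound $O(1/k)$; controlling all decoding steps at once requires a union bound or concentration inequalities sharper than Chebyshev's.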
[ "<|reference_start|> LT codes: We introduce LT codes, the first rateless erasure codes that are very efficient as the data length grows. <|reference_end|>", "<|reference_start|> Raptor Codes: A Fountain code is a code of fixed dimension and a limitless block-length. This is a class of codes with many interesting properties and applications. In this talk I will introduce several classes of probabilistic Fountain codes, including LT-and Raptor codes, show tools for their design and analysis, and discuss how they are used today to solve various data transmission problems on heterogenous unreliable networks. I will also talk about the theory of these codes when transmission takes place over non-erasure channels, and low-complexity algorithms are used for their decoding. <|reference_end|>", "<|reference_start|> Finite length analysis of LT codes: This paper provides an efficient method for analyzing the error probability of the belief propagation (BP) decoder applied to LT Codes. Each output symbol is generated independently by sampling from a distribution and adding the input symbols corresponding to the support of the sampled vector. <|reference_end|>", "<|reference_start|> Raptor Codes: A Fountain code is a code of fixed dimension and a limitless block-length. This is a class of codes with many interesting properties and applications. In this talk I will introduce several classes of probabilistic Fountain codes, including LT-and Raptor codes, show tools for their design and analysis, and discuss how they are used today to solve various data transmission problems on heterogenous unreliable networks. I will also talk about the theory of these codes when transmission takes place over non-erasure channels, and low-complexity algorithms are used for their decoding. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "ss-1026310", "<|cite_2|>": "ss-716613", "<|cite_3|>": "ss-1583722", "<|cite_4|>": "ss-716613"}
2201.00977
<|paper_start|> Title: Underwater Object Classification and Detection: first results and open challenges Abstract: Underwater Object Classification and Detection: first results and open challenges: This work reviews the problem of object detection in underwater environments. We analyse and quantify the shortcomings of conventional state-of-the-art (SOTA) algorithms in the computer vision community when applied to this challenging environment, as well as providing insights and general guidelines for future research efforts. First, we assessed if pretraining with the conventional ImageNet is beneficial when the object detector needs to be applied to environments that may be characterised by a different feature distribution. We then investigate whether two-stage detectors yield better performance with respect to single-stage detectors, in terms of accuracy, intersection over union (IoU), floating-point operations per second (FLOPS), and inference time. Finally, we assessed the generalisation capability of each model to a lower quality dataset to simulate performance on a real scenario, in which harsher conditions ought to be expected. Our experimental results provide evidence that underwater object detection requires searching for "ad-hoc" architectures rather than merely training SOTA architectures on new data, and that pretraining is not beneficial. Introduction The ocean is essential for life on our planet and our economy. It works as a global climate control system, and it is an indispensable source of food and energy. Yet, it is the least explored habitat due to its harsh conditions that prevent its exploration by conventional means. The world beneath the ocean represents a thriving environment for autonomous robots, whose applications vary from exploring the deep sea, and protecting and preserving its ecosystems, to defence, archaeology, and rescue missions. Regardless of the application, most underwater robots make use of vision for perceiving the surroundings, and object detection plays a critical role in this. Collecting and processing data to support the learning of underwater object detection exposes new challenges. Several unfavourable factors, such as the scattering and absorption of light by water and the presence of suspended particles, interfere with image quality. The background acts as a blurry entity that distorts perspective and weakens contours and colours, making techniques such as enhancement and restoration both much needed and, to date, unsatisfactory for underwater vision (\fref{fig:brackish}). Another crucial factor lies in the small average size of the objects that populate aquatic environments; current deep learning-based detectors suffer a loss in performance on small objects even in conventional settings. Finally, due to the low-bandwidth communication channels available underwater, the entire process has to be performed with onboard resources, increasing the need for fast and efficient algorithms. \begin{figure}[t] \centering \includegraphics[width=.48\textwidth]{figures/brackish_prediction_100.jpg} \caption{A sample image from <|cite_start|> (Reference: Detection of marine animals in a new underwater dataset with varying visibility: The increasing demand for marine monitoring calls for robust automated systems to support researchers in gathering information from marine ecosystems. This includes computer vision based marine organism detection and species classification systems.
Current state-of-the-art marine vision systems are based on CNNs, which in nature require a relatively large amount of varied training data. In this paper we present a new publicly available underwater dataset with annotated image sequences of fish, crabs, and starfish captured in brackish water with varying visibility. The dataset is called the Brackish Dataset and it is the first part of a planned long term monitoring of the marine species visiting the strait where the cameras are permanently mounted. To the best of our knowledge, this is the first annotated underwater image dataset captured in temperate brackish waters. In order to obtain a baseline performance for future reference, the YOLOv2 and YOLOv3 CNNs were fine-tuned and tested on the Brackish Dataset.) <|cite_end|> showing the harsh conditions met in underwater environments.} \label{fig:brackish} \end{figure} The latest trend for generic object detection algorithms mainly relies on Convolutional Neural Networks (CNNs). Broadly, SOTA object detectors can be divided into two large categories. On one hand, we have two-stage detectors, such as R-CNN (Region-based CNN) <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|>, which in a first phase use a Region Proposal Network (RPN) for generating regions of interest on the image. These regions are then fed down the pipeline for the second phase, in which classification and bounding-box regression are performed. On the other hand, single-stage detectors, such as Yolo (You Only Look Once) <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second.
A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|> and SSD (Single Shot Multibox Detector) <|cite_start|> (Reference: SSD: Single Shot MultiBox Detector: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For $300\times 300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a comparable state of the art Faster R-CNN model. Code is available at https://github.com/weiliu89/caffe/tree/ssd .) <|cite_end|>, treat detection as a regression problem by directly learning class probabilities and bounding-box coordinates. It is well known that the former approach achieves higher detection accuracy at the cost of larger inference times compared with its single-stage counterpart. Although efforts have been made to improve the efficiency of two-stage detectors by reducing the inference time, the subsequent Fast R-CNN <|cite_start|> (Reference: Fast R-CNN: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.)
<|cite_end|> and Faster R-CNN <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|> still do not meet the requirements needed for edge computing devices. \begin{figure*}[t] \centering \includegraphics[width=.95\textwidth]{figures/general_diagram.png} \caption{General object detection pipeline. The input layer is followed by a feature extractor (i.e., backbone), a feature map for ROI localisation, and finally a regression and classification network.} \label{fig:high_level_method} \end{figure*} Object detection is a more challenging problem than simple object classification, since it has to answer the ``what'' and ``where'' questions for potentially many objects in the same image. Thus, plugging good classifiers into the object detection pipeline does not necessarily lead to better detection performance. Figure~\ref{fig:high_level_method} shows the conventional pipeline in which, after the input layer, a backbone network extracts features from the input image; its output is then fed to the rest of the pipeline, also called the \emph{head}. For both single-stage and two-stage detectors, this component is critical and has very recently been identified as a bottleneck--backbones should be trained to support detection, not mere classification--hence a new trend is to select and train better backbones rather than to design better \emph{heads}. In this paper, we will compare the performance of mainstream algorithms from both classes (single- and two-stage detectors) in order to highlight the strengths and shortcomings of conventional object detectors when applied to underwater scenarios. The rest of the paper is structured as follows. We will present previous efforts for underwater object recognition and its applications (Section~\ref{sec:relatedwork}), then we will proceed to introduce the chosen datasets, metrics and object detectors, as well as the training and testing procedures we used for each architecture (Section~\ref{sec:matnmet}). Section~\ref{sec:results} will present and discuss our results. We will conclude the paper with our final remarks.
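To make the two detector families above concrete, the following sketch (our illustrative addition, not the paper's evaluation code) instantiates one representative two-stage and one single-stage detector from torchvision and times a forward pass; the model variants, input size, and timing protocol are assumptions on our part.

```python
# Sketch: comparing a two-stage and a single-stage detector.
# Assumes torchvision >= 0.13 (for the Weights enums).
import time
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
    ssd300_vgg16, SSD300_VGG16_Weights,
)

models = {
    "two-stage (Faster R-CNN)": fasterrcnn_resnet50_fpn(
        weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT),
    "single-stage (SSD300)": ssd300_vgg16(
        weights=SSD300_VGG16_Weights.DEFAULT),
}

image = [torch.rand(3, 300, 300)]  # stand-in for an underwater frame in [0, 1]

for name, model in models.items():
    model.eval()
    with torch.no_grad():
        model(image)                   # warm-up pass
        start = time.perf_counter()
        detections = model(image)[0]   # dict with 'boxes', 'labels', 'scores'
        elapsed = time.perf_counter() - start
    print(f"{name}: {len(detections['boxes'])} boxes in {elapsed:.3f} s")
```

Predicted boxes can then be scored against ground truth with torchvision.ops.box_iou, the IoU metric used throughout the paper.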
Related Work \label{sec:relatedwork} This section presents an overview of how computer vision methods have been applied to assist in underwater monitoring and detection. There is a considerable amount of research in underwater settings, mostly motivated by biology. More recently, exploration and research aimed at improving the perception of underwater vehicles has been at the centre of attention of the scientific community. This is due to technological advancements that allow for higher-quality footage and richer sensory information extraction from the underwater realm. The two underlying processes in water-light interaction, scattering and absorption <|cite_start|> (Reference: Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods: The underwater image processing area has received considerable attention within the last decades, showing important achievements. In this paper we review some of the most recent methods that have been specifically developed for the underwater environment. These techniques are capable of extending the range of underwater imaging, improving image contrast and resolution. After considering the basic physics of the light propagation in the water medium, we focus on the different algorithms available in the literature. The conditions for which each of them have been originally developed are highlighted as well as the quality assessment methods used to evaluate their performance.) <|cite_end|>, pose several limitations on the reach of existing computer vision methods developed inland. Filtering and processing approaches, such as the image enhancement by Fabri et al. <|cite_start|> (Reference: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018: ) <|cite_end|> <|cite_start|> (Reference: Fast Underwater Image Enhancement for Improved Visual Perception: In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement. To supervise the adversarial training, we formulate an objective function that evaluates the perceptual image quality based on its global content, color, local texture, and style information. We also present EUVP, a large-scale dataset of a paired and unpaired collection of underwater images (of `poor' and `good' quality) that are captured using seven different cameras over various visibility conditions during oceanic explorations and human-robot collaborative experiments. In addition, we perform several qualitative and quantitative evaluations which suggest that the proposed model can learn to enhance underwater image quality from both paired and unpaired training. More importantly, the enhanced images provide improved performances of standard models for underwater object detection, human pose estimation, and saliency prediction. These results validate that it is suitable for real-time preprocessing in the autonomy pipeline by visually-guided underwater robots. The model and associated training pipelines are available at https://github.com/xahidbuffon/funie-gan.) <|cite_end|>, aim at reducing these effects with generative model approaches; a simpler classical baseline is sketched below. Another factor in the development of underwater perception is the improvement of machine learning methods over the last decade, in both accuracy and efficiency, which has made it possible to develop robust approaches without compromising on real-time constraints.
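The cited enhancers are learned, GAN-based models; as a much simpler classical point of reference (our illustrative addition, not any of the cited methods), the sketch below applies CLAHE to the lightness channel to partially compensate for the contrast loss caused by scattering and absorption. The file paths and parameter values are hypothetical.

```python
# Sketch: classical contrast enhancement as a cheap baseline for
# underwater imagery (not the cited GAN-based enhancers).
import cv2

def enhance_underwater(path_in, path_out, clip_limit=2.0, tile=(8, 8)):
    bgr = cv2.imread(path_in)                 # OpenCV loads images as BGR
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    lab = cv2.merge((clahe.apply(l), a, b))   # equalise the lightness channel only
    cv2.imwrite(path_out, cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))

enhance_underwater("frame.jpg", "frame_enhanced.jpg")  # hypothetical paths
```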
Furthermore, recent advancements in robot grasping and manipulation under uncertainty could be combined with more robust object detectors to explore novel approaches to contact-based tasks underwater, in autonomous <|cite_start|> (Reference: Sequential trajectory re-planning with tactile information gain for dexterous grasping under object-pose uncertainty: Dexterous grasping of objects with uncertain pose is a hard unsolved problem in robotics. This paper solves this problem using information gain re-planning. First we show how tactile information, acquired during a failed attempt to grasp an object can be used to refine the estimate of that object's pose. Second, we show how this information can be used to replan new reach to grasp trajectories for successive grasp attempts. Finally we show how reach-to-grasp trajectories can be modified, so that they maximise the expected tactile information gain, while simultaneously delivering the hand to the grasp configuration that is most likely to succeed. Our main novel outcome is thus to enable tactile information gain planning for Dexterous, high degree of freedom (DoFs) manipulators. We achieve this using a combination of information gain planning, hierarchical probabilistic roadmap planning, and belief updating from tactile sensors for objects with non-Gaussian pose uncertainty in 6 dimensions. The method is demonstrated in trials with simulated robots. Sequential replanning is shown to achieve a greater success rate than single grasp attempts, and trajectories that maximise information gain require fewer re-planning iterations than conventional planning methods before a grasp is achieved.) <|cite_end|> <|cite_start|> (Reference: Hypothesis-based Belief Planning for Dexterous Grasping: Belief space planning is a viable alternative to formalise partially observable control problems and, in the recent years, its application to robot manipulation problems has grown. However, this planning approach was tried successfully only on simplified control problems. In this paper, we apply belief space planning to the problem of planning dexterous reach-to-grasp trajectories under object pose uncertainty. In our framework, the robot perceives the object to be grasped on-the-fly as a point cloud and compute a full 6D, non-Gaussian distribution over the object's pose (our belief space). The system has no limitations on the geometry of the object, i.e., non-convex objects can be represented, nor assumes that the point cloud is a complete representation of the object. A plan in the belief space is then created to reach and grasp the object, such that the information value of expected contacts along the trajectory is maximised to compensate for the pose uncertainty. If an unexpected contact occurs when performing the action, such information is used to refine the pose distribution and triggers a re-planning. Experimental results show that our planner (IR3ne) improves grasp reliability and compensates for the pose uncertainty such that it doubles the proportion of grasps that succeed on a first attempt.) <|cite_end|> or teleoperated systems <|cite_start|> (Reference: Metrics and Benchmarks for Remote Shared Controllers in Industrial Applications: Remote manipulation is emerging as one of the key robotics tasks needed in extreme environments. Several researchers have investigated how to add AI components into shared controllers to improve their reliability.
Nonetheless, the impact of novel research approaches in real-world applications can have a very slow in-take. We propose a set of benchmarks and metrics to evaluate how the AI components of remote shared control algorithms can improve the effectiveness of such frameworks for real industrial applications. We also present an empirical evaluation of a simple intelligent share controller against a manually operated manipulator in a tele-operated grasping scenario.) <|cite_end|> <|cite_start|> (Reference: Human-Robot Interaction With Robust Prediction of Movement Intention Surpasses Manual Control: Designing robotic assistance devices for manipulation tasks is challenging. This work aims at improving accuracy and usability of physical human-robot interaction (pHRI) where a user interacts with a physical robotic device (e.g., a human operated manipulator or exoskeleton) by transmitting signals which need to be interpreted by the machine. Typically these signals are used as an open-loop control, but this approach has several limitations such as low take-up and high cognitive burden for the user. In contrast, a control framework is proposed that can respond robustly and efficiently to intentions of a user by reacting proactively to their commands. The key insight is to include context- and user-awareness in the controller, improving decision making on how to assist the user. Context-awareness is achieved by creating a set of candidate grasp targets and reach-to grasp trajectories in a cluttered scene. User-awareness is implemented as a linear time-variant feedback controller (TV-LQR) over the generated trajectories to facilitate the motion towards the most likely intention of a user. The system also dynamically recovers from incorrect predictions. Experimental results in a virtual environment of two degrees of freedom control show the capability of this approach to outperform manual control. By robustly predicting the user’s intention, the proposed controller allows the subject to achieve superhuman performance in terms of accuracy and thereby usability.) <|cite_end|> <|cite_start|> (Reference: Automatic Detection of Myocontrol Failures Based upon Situational Context Information: Myoelectric control systems for assistive devices are still unreliable. The user's input signals can become unstable over time due to e.g. fatigue, electrode displacement, or sweat. Hence, such controllers need to be constantly updated and heavily rely on user feedback. In this paper, we present an automatic failure detection method which learns when plausible predictions become unreliable and model updates are necessary. Our key insight is to enhance the control system with a set of generative models that learn sensible behaviour for a desired task from human demonstration. We illustrate our approach on a grasping scenario in Virtual Reality, in which the user is asked to grasp a bottle on a table. From demonstration our model learns the reach-to-grasp motion from a resting position to two grasps (power grasp and tridigital grasp) and how to predict the most adequate grasp from local context, e.g. tridigital grasp on the bottle cap or around the bottleneck. By measuring the error between new grasp attempts and the model prediction, the system can effectively detect which input commands do not reflect the user's intention. We evaluated our model in two cases: i) with both position and rotation information of the wrist pose, and ii) with only rotational information. 
Our results show that our approach detects statistically highly significant differences in error distributions with p < 0.001 between successful and failed grasp attempts in both cases.) <|cite_end|>. The area of object classification became active with recent efforts in the construction of marine datasets for coral reefs <|cite_start|> (Reference: 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012: ) <|cite_end|>, fish <|cite_start|> (Reference: Wildfish: A large benchmark for fish recognition in the wild: Fish recognition is an important task to understand the marine ecosystem and biodiversity. It is often challenging to identify fish species in the wild, due to the following difficulties. First, most fish benchmarks are small-scale, which may limit the representation power of machine learning models. Second, the number of fish species is huge, and there may still exist unknown categories in our planet. The traditional classifiers often fail to deal with this open-set scenario. Third, certain fish species are highly-confused. It is often hard to figure out the subtle differences, only by the unconstrained images. Motivated by these facts, we introduce a large-scale WildFish benchmark for fish recognition in the wild. Specifically, we make three contributions in this paper. First, WildFish is the largest image data set for wild fish recognition, to our best knowledge. It consists of 1000 fish categories with 54,459 unconstrained images, allowing to train high-capacity models for automatic fish classification. Second, we propose a novel open-set fish classification task for realistic scenarios, and investigate the open-set deep learning framework with a number of practical designs. Third, we propose a novel fine-grained recognition task, with the guidance of pairwise textual descriptions. Via leveraging the comparison knowledge in the sentence, we design a multi-modal fish net to effectively distinguish two confused categories in a pair. Finally, we release WildFish (https://github.com/PeiqinZhuang/WildFish), in order to bring benefit to more research studies in multimedia and beyond.) <|cite_end|> and diver-robot underwater interactions <|cite_start|> (Reference: CADDY Underwater Stereo-Vision Dataset for Human–Robot Interaction (HRI) in the Context of Diver Activities: In this article, we present a novel underwater dataset collected from several field trials within the EU FP7 project “Cognitive autonomous diving buddy (CADDY)”, where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large public dataset in underwater environments with the purpose of studying and boosting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (≈10 K) of divers performing hand gestures to communicate with an AUV in different environmental conditions. The gestures can be used to test the robustness of visual detection and classification algorithms in underwater conditions, e.g., under color attenuation and light backscatter. The second part includes stereo footage (≈12.7 K) of divers free-swimming in front of the AUV, along with synchronized measurements from Inertial Measurement Units (IMU) located throughout the diver’s suit (DiverNet), which serve as ground-truth for human pose and tracking methods.
In both cases, these rectified images allow the investigation of 3D representation and reasoning pipelines from low-texture targets commonly present in underwater scenarios. This work describes the recording platform, sensor calibration procedure plus the data format and the software utilities provided to use the dataset.) <|cite_end|> <|cite_start|> (Reference: Towards a Generic Diver-Following Algorithm: Balancing Robustness and Efficiency in Deep Visual Detection: This paper explores the design and development of a class of robust diver-following algorithms for autonomous underwater robots. By considering the operational challenges for underwater visual tracking in diverse real-world settings, we formulate a set of desired features of a generic diver following algorithm. We attempt to accommodate these features and maximize general tracking performance by exploiting the state-of-the-art deep object detection models. We fine-tune the building blocks of these models with a goal of balancing the trade-off between robustness and efficiency in an onboard setting under real-time constraints. Subsequently, we design an architecturally simple Convolutional Neural Network (CNN)-based diver-detection model that is much faster than the state-of-the-art deep models yet provides comparable detection performances. In addition, we validate the performance and effectiveness of the proposed diver-following modules through a number of field experiments in closed-water and open-water environments.) <|cite_end|>. Likewise, applications of Convolutional Neural Networks (CNNs) opened new possibilities for underwater applications, as shown by Villon et al. <|cite_start|> (Reference: Coral Reef Fish Detection and Recognition in Underwater Videos by Supervised Machine Learning: Comparison Between Deep Learning and HOG+SVM Methods: ) <|cite_end|>, while Yang et al. <|cite_start|> (Reference: Research on underwater object recognition based on YOLOv3: ) <|cite_end|> compared YOLOv3 and Faster R-CNN on an underwater dataset, with results showing that the single-stage architecture has a better ability to detect smaller objects. Underwater data augmentation via generative modelling is a common strategy for enhancement and restoration. For instance, Wang et al. <|cite_start|> (Reference: UDD: an underwater open-sea farm object detection dataset for underwater robot picking: To promote the development of underwater robot picking in sea farms, we propose an underwater open-sea farm object detection dataset called UDD. Concretely, UDD consists of 3 categories (seacucumber, seaurchin, and scallop) with 2227 images. To the best of our knowledge, it's the first dataset collected in a real open-sea farm for underwater robot picking and we also propose a novel Poisson-blending-embedded Generative Adversarial Network (Poisson GAN) to overcome the class-imbalance and massive small objects issues in UDD. By utilizing Poisson GAN to change the number, position, even size of objects in UDD, we construct a large scale augmented dataset (AUDD) containing 18K images. Besides, in order to make the detector better adapted to the underwater picking environment, a dataset (Pre-trained dataset) for pre-training containing 590K images is also proposed. Finally, we design a lightweight network (UnderwaterNet) to address the problems that detecting small objects from cloudy underwater pictures and meeting the efficiency requirements in robots.
Specifically, we design a depth-wise-convolution-based Multi-scale Contextual Features Fusion (MFF) block and a Multi-scale Blursampling (MBP) module to reduce the parameters of the network to 1.3M at 48FPS, without any loss on accuracy. Extensive experiments verify the effectiveness of the proposed UnderwaterNet, Poisson GAN, UDD, AUDD, and Pre-trained datasets.) <|cite_end|> proposed a Poisson-blending GAN which overcomes some of the common object detection augmentation pitfalls by enabling changes to the position, number, and size of the object instances in a given image. Chen et al. <|cite_start|> (Reference: Reveal of Domain Effect: How Visual Restoration Contributes to Object Detection in Aquatic Scenes: Underwater robotic perception usually requires visual restoration and object detection, both of which have been studied for many years. Meanwhile, data domain has a huge impact on modern data-driven leaning process. However, exactly indicating domain effect, the relation between restoration and detection remains unclear. In this paper, we generally investigate the relation of quality-diverse data domain to detection performance. In the meantime, we unveil how visual restoration contributes to object detection in real-world underwater scenes. According to our analysis, five key discoveries are reported: 1) Domain quality has an ignorable effect on within-domain convolutional representation and detection accuracy; 2) low-quality domain leads to higher generalization ability in cross-domain detection; 3) low-quality domain can hardly be well learned in a domain-mixed learning process; 4) degrading recall efficiency, restoration cannot improve within-domain detection accuracy; 5) visual restoration is beneficial to detection in the wild by reducing the domain shift between training data and real-world scenes. Finally, as an illustrative example, we successfully perform underwater object detection with an aquatic robot.) <|cite_end|> provide an analysis of the effect of image restoration on underwater object detection performance, comparing the unprocessed dataset with a filter-based restoration and a GAN-based restoration of the dataset. Similarly, Yoo et al. <|cite_start|> (Reference: WQT and DG-YOLO: towards domain generalization in underwater object detection: A General Underwater Object Detector (GUOD) should perform well on most of underwater circumstances. However, with limited underwater dataset, conventional object detection methods suffer from domain shift severely. This paper aims to build a GUOD with small underwater dataset with limited types of water quality. First, we propose a data augmentation method Water Quality Transfer (WQT) to increase domain diversity of the original small dataset. Second, for mining the semantic information from data generated by WQT, DG-YOLO is proposed, which consists of three parts: YOLOv3, DIM and IRM penalty. Finally, experiments on original and synthetic URPC2019 dataset prove that WQT+DG-YOLO achieves promising performance of domain generalization in underwater object detection.) <|cite_end|> provided domain generalisation for the URPC dataset by applying a style-transfer model, which yields greater performance than training on the original dataset. The classical blending operation underlying such augmentation is sketched below.
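The following sketch (our illustrative addition) shows the classical Poisson blending operation that the cited Poisson GAN builds on, using OpenCV's seamlessClone to paste an object crop into a background frame; an augmentation pipeline can then vary the position, count, and scale of such pastes. File names and the placement point are hypothetical.

```python
# Sketch: Poisson blending for copy-paste augmentation (the classical
# operation behind the cited Poisson GAN, not the GAN itself).
import cv2
import numpy as np

background = cv2.imread("seafloor.jpg")               # hypothetical inputs; the
obj = cv2.imread("scallop_crop.jpg")                  # crop must fit inside the
mask = 255 * np.ones(obj.shape[:2], dtype=np.uint8)   # background at `center`

h, w = background.shape[:2]
center = (w // 3, h // 2)                             # placement of the crop's centre
augmented = cv2.seamlessClone(obj, background, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("augmented.jpg", augmented)
```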
In this work, the object detection methodology takes advantage of the more robust fully-supervised training paradigm of CNNs. Additionally, we employ two different underwater datasets designed for object detection. The Brackish dataset <|cite_start|> (Reference: Detection of marine animals in a new underwater dataset with varying visibility: The increasing demand for marine monitoring calls for robust automated systems to support researchers in gathering information from marine ecosystems. This includes computer vision based marine organism detection and species classification systems. Current state-of-the-art marine vision systems are based on CNNs, which in nature require a relatively large amount of varied training data. In this paper we present a new publicly available underwater dataset with annotated image sequences of fish, crabs, and starfish captured in brackish water with varying visibility. The dataset is called the Brackish Dataset and it is the first part of a planned long term monitoring of the marine species visiting the strait where the cameras are permanently mounted. To the best of our knowledge, this is the first annotated underwater image dataset captured in temperate brackish waters. In order to obtain a baseline performance for future reference, the YOLOv2 and YOLOv3 CNNs were fine-tuned and tested on the Brackish Dataset.) <|cite_end|> provides us with high-quality images comparable to the best enhanced and restored datasets, whilst URPC has been chosen to challenge the detectors with the larger variety of conditions expected in real applications at sea. <|paper_end|>
[ "<|reference_start|> Automatic Detection of Myocontrol Failures Based upon Situational Context Information: Myoelectric control systems for assistive devices are still unreliable. The user's input signals can become unstable over time due to e.g. fatigue, electrode displacement, or sweat. Hence, such controllers need to be constantly updated and heavily rely on user feedback. In this paper, we present an automatic failure detection method which learns when plausible predictions become unreliable and model updates are necessary. Our key insight is to enhance the control system with a set of generative models that learn sensible behaviour for a desired task from human demonstration. We illustrate our approach on a grasping scenario in Virtual Reality, in which the user is asked to grasp a bottle on a table. From demonstration our model learns the reach-to-grasp motion from a resting position to two grasps (power grasp and tridigital grasp) and how to predict the most adequate grasp from local context, e.g. tridigital grasp on the bottle cap or around the bottleneck. By measuring the error between new grasp attempts and the model prediction, the system can effectively detect which input commands do not reflect the user's intention. We evaluated our model in two cases: i) with both position and rotation information of the wrist pose, and ii) with only rotational information. Our results show that our approach detects statistically highly significant differences in error distributions with p < 0.001 between successful and failed grasp attempts in both cases. <|reference_end|>", "<|reference_start|> Coral Reef Fish Detection and Recognition in Underwater Videos by Supervised Machine Learning: Comparison Between Deep Learning and HOG+SVM Methods: <|reference_end|>", "<|reference_start|> WQT and DG-YOLO: towards domain generalization in underwater object detection: A General Underwater Object Detector (GUOD) should perform well on most of underwater circumstances. However, with limited underwater dataset, conventional object detection methods suffer from domain shift severely. This paper aims to build a GUOD with small underwater dataset with limited types of water quality. First, we propose a data augmentation method Water Quality Transfer (WQT) to increase domain diversity of the original small dataset. Second, for mining the semantic information from data generated by WQT, DG-YOLO is proposed, which consists of three parts: YOLOv3, DIM and IRM penalty. Finally, experiments on original and synthetic URPC2019 dataset prove that WQT+DG-YOLO achieves promising performance of domain generalization in underwater object detection. <|reference_end|>", "<|reference_start|> Detection of marine animals in a new underwater dataset with varying visibility: The increasing demand for marine monitoring calls for robust automated systems to support researchers in gathering information from marine ecosystems. This includes computer vision based marine organism detection and species classification systems. Current state-of-the-art marine vision systems are based on CNNs, which in nature require a relatively large amount of varied training data. In this paper we present a new publicly available underwater dataset with annotated image sequences of fish, crabs, and starfish captured in brackish water with varying visibility. The dataset is called the Brackish Dataset and it is the first part of a planned long term monitoring of the marine species visiting the strait where the cameras are permanently mounted. 
To the best of our knowledge, this is the first annotated underwater image dataset captured in temperate brackish waters. In order to obtain a baseline performance for future reference, the YOLOv2 and YOLOv3 CNNs were fine-tuned and tested on the Brackish Dataset. <|reference_end|>" ]
[ 13, 18, 22, 23 ]
{"<|cite_1|>": "ss-917282", "<|cite_2|>": "arxiv-52559", "<|cite_3|>": "arxiv-79041", "<|cite_4|>": "arxiv-88684", "<|cite_5|>": "arxiv-76959", "<|cite_6|>": "arxiv-78819", "<|cite_8|>": "ss-1319819", "<|multi_cite_9_1|>": "ss-765535", "<|multi_cite_9_2|>": "arxiv-196469", "<|multi_cite_10_2|>": "ss-1019125", "<|multi_cite_10_3|>": "arxiv-195189", "<|multi_cite_11_1|>": "arxiv-210600", "<|multi_cite_11_2|>": "ss-2413365", "<|multi_cite_11_3|>": "arxiv-211777", "<|cite_12|>": "ss-1092158", "<|cite_13|>": "ss-810596", "<|multi_cite_14_1|>": "ss-1053677", "<|multi_cite_14_2|>": "arxiv-173176", "<|cite_15|>": "ss-697421", "<|cite_16|>": "ss-2413366", "<|cite_17|>": "ss-1966254", "<|cite_18|>": "arxiv-251913", "<|cite_19|>": "arxiv-259215", "<|cite_20|>": "ss-917282"}
2001.09485
<|paper_start|> Title: Multimodal Data Fusion based on the Global Workspace Theory Abstract: Multimodal Data Fusion based on the Global Workspace Theory: We propose a novel neural network architecture, named the Global Workspace Network (GWN), which addresses the challenge of dynamic and unspecified uncertainties in multimodal data fusion. Our GWN is a model of attention across modalities that evolves through time, and is inspired by the well-established Global Workspace Theory from the field of cognitive science. The GWN achieved an average F1 score of 0.92 for discrimination between pain patients and healthy participants, and an average F1 score of 0.75 for further classification of three pain levels for a patient, both based on the multimodal EmoPain dataset captured from people with chronic pain and healthy people performing different types of exercise movements in unconstrained settings. In these tasks, the GWN significantly outperforms the typical fusion approach of merging by concatenation. We further provide extensive analysis of the behaviour of the GWN and its ability to address uncertainties (hidden noise) in multimodal data. Introduction Reasoning about and interpreting multiple sources of information concurrently is an important task in machine learning research, as life involves streams of data from multiple modalities <|cite_start|> (Reference: Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.) <|cite_end|>. Multimodal data fusion, which leverages the combination of multiple modalities, is a valuable strategy <|cite_start|> (Reference: Multimodal fusion for multimedia analysis: a survey: ) <|cite_end|> <|cite_start|> (Reference: Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness.: ) <|cite_end|> <|cite_start|> (Reference: Attention-Based Multimodal Fusion for Video Description: Currently successful methods for video description are based on encoder-decoder sentence generation using recur-rent neural networks (RNNs). Recent work has shown the advantage of integrating temporal and/or spatial attention mechanisms into these models, in which the decoder net-work predicts each word in the description by selectively giving more weight to encoded features from specific time frames (temporal attention) or to features from specific spatial regions (spatial attention).
In this paper, we propose to expand the attention model to selectively attend not just to specific times or spatial regions, but to specific modalities of input such as image features, motion features, and audio features. Our new modality-dependent attention mechanism, which we call multimodal attention, provides a natural way to fuse multimodal information for video description. We evaluate our method on the Youtube2Text dataset, achieving results that are competitive with current state of the art. More importantly, we demonstrate that our model incorporating multimodal attention as well as temporal attention significantly outperforms the model that uses temporal attention alone.) <|cite_end|> <|cite_start|> (Reference: Weakly paired multimodal fusion for object recognition: The ever-growing development of sensor technology has led to the use of multimodal sensors to develop robotics and automation systems. It is therefore highly expected to develop methodologies capable of integrating information from multimodal sensors with the goal of improving the performance of surveillance, diagnosis, prediction, and so on. However, real multimodal data often suffer from significant weak-pairing characteristics, i.e., the full pairing between data samples may not be known, while pairing of a group of samples from one modality to a group of samples in another modality is known. In this paper, we establish a novel projective dictionary learning framework for weakly paired multimodal data fusion. By introducing a latent pairing matrix, we realize the simultaneous dictionary learning and the pairing matrix estimation, and therefore improve the fusion effect. In addition, the kernelized version and the optimization algorithms are also addressed. Extensive experimental validations on some existing data sets are performed to show the advantages of the proposed method.Note to Practitioners—In many industrial environments, we usually use multiple heterogeneous sensors, which provide multimodal information. Such multimodal data usually lead to two technical challenges. First, different sensors may provide different patterns of data. Second, the full-pairing information between modalities may not be known. In this paper, we develop a unified model to tackle such problems. This model is based on a projective dictionary learning method, which efficiently produces the representation vector for the original data by an explicit form. In addition, the latent pairing relation between samples can be learned automatically and be used to improve the classification performance. Such a method can be flexibly used for multimodal fusion with full-pairing, partial-pairing and weak-pairing cases.) <|cite_end|>. Its benefits include complementarity of information, higher prediction performance, and robustness <|cite_start|> (Reference: Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. 
It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.) <|cite_end|>. However, multimodal fusion comes with challenges; <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds.) <|cite_end|> groups them under two categories: (1) challenges of multimodal data acquisition, and (2) uncertainties (such as noisy modalities, missing values, and conflicting information) in multimodal data. The former type of challenge can be managed with later pre-processing, e.g. resampling to reconcile different temporal resolutions across modalities <|cite_start|> (Reference: The automatic detection of chronic pain-related expression: requirements, challenges and the multimodal EmoPain dataset: Pain-related emotions are a major barrier to effective self rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions. First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. Second, a fully labelled multimodal dataset (named `EmoPain') containing high resolution multiple-view face videos, head mounted and room audio signals, full body 3D motion capture and electromyographic signals from back muscles is supplied. Natural unconstrained pain related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises.
Both instructed and non-instructed exercises were considered to reflect traditional scenarios of physiotherapist directed therapy and home-based self-directed therapy. Two sets of labels were assigned: level of pain from facial expressions annotated by eight raters and the occurrence of six pain-related body behaviours segmented by four experts. Third, through exploratory experiments grounded in the data, the factors and challenges in the automated recognition of such expressions and behaviour are described, the paper concludes by discussing potential avenues in the context of these findings also highlighting differences for the two exercise scenarios addressed.) <|cite_end|>. However, addressing uncertainties in multimodal data requires the specialised design of models that can exploit complementarity or discrepancy across modalities <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds.) <|cite_end|>. While there have been approaches that address the particular problem of missing modalities, fusion of multimodal data with varying types or levels of uncertainty (e.g. noise) that are not known a priori has been less investigated. Findings on the efficacy of automatically learning weights (e.g. some ``importance'' or ``confidence'' metric) for individual input features <|cite_start|> (Reference: Simultaneous analysis of coupled data matrices subject to different amounts of noise: In many areas of science, research questions imply the analysis of a set of coupled data blocks, with, for instance, each block being an experimental unit by variable matrix, and the variables being the same in all matrices. To obtain an overall picture of the mechanisms that play a role in the different data matrices, the information in these matrices needs to be integrated. This may be achieved by applying a data-analytic strategy in which a global model is fitted to all data matrices simultaneously, as in some forms of simultaneous component analysis (SCA).
Since such a strategy implies that all data entries, regardless the matrix they belong to, contribute equally to the analysis, it may obfuscate the overall picture of the mechanisms underlying the data when the different data matrices are subject to different amounts of noise. One way out is to downweight entries from noisy data matrices in favour of entries from less noisy matrices. Information regarding the amount of noise that is present in each matrix, however, is, in most cases, not available. To deal with these problems, in this paper a novel maximum-likelihood-based simultaneous component analysis method, referred to as MxLSCA, is proposed. Being a stochastic extension of SCA, in MxLSCA the amount of noise in each data matrix is estimated and entries from noisy data matrices are downweighted. Both in an extensive simulation study and in an application to data stemming from cross-cultural emotion psychology, it is shown that the novel MxLSCA strategy outperforms the SCA strategy with respect to disclosing the mechanisms underlying the coupled data.) <|cite_end|> <|cite_start|> (Reference: 21st European Signal Processing Conference, EUSIPCO 2013, Marrakech, Morocco, September 9-13, 2013: ) <|cite_end|> <|cite_start|> (Reference: New algorithm for integration between wireless microwave sensor network and radar for improved rainfall measurement and mapping: Abstract. One of the main challenges for meteorological and hydrological modelling is accurate rainfall measurement and mapping across time and space. To date, the most effective methods for large-scale rainfall estimates are radar, satellites, and, more recently, received signal level (RSL) measurements derived from commercial microwave networks (CMNs). While these methods provide improved spatial resolution over traditional rain gauges, they have their limitations as well. For example, wireless CMNs, which are comprised of microwave links (ML), are dependant upon existing infrastructure and the ML' arbitrary distribution in space. Radar, on the other hand, is known in its limitation for accurately estimating rainfall in urban regions, clutter areas and distant locations. In this paper the pros and cons of the radar and ML methods are considered in order to develop a new algorithm for improving rainfall measurement and mapping, which is based on data fusion of the different sources. The integration is based on an optimal weighted average of the two data sets, taking into account location, number of links, rainfall intensity and time step. Our results indicate that, by using the proposed new method, we not only generate more accurate 2-D rainfall reconstructions, compared with actual rain intensities in space, but also the reconstructed maps are extended to the maximum coverage area. By inspecting three significant rain events, we show that our method outperforms CMNs or the radar alone in rain rate estimation, almost uniformly, both for instantaneous spatial measurements, as well as in calculating total accumulated rainfall. These new improved 2-D rainfall maps, as well as the accurate rainfall measurements over large areas at sub-hourly timescales, will allow for improved understanding, initialization, and calibration of hydrological and meteorological models mainly necessary for water resource management and planning.) 
<|cite_end|> <|cite_start|> (Reference: A method for judicious fusion of inconsistent multiple sensor data: One of the major problems in sensor fusion is that sensors frequently provide spurious observations which are difficult to predict and model. The spurious measurements from sensors must be identified and eliminated since their incorporation in the fusion pool might lead to inaccurate estimation. This paper presents a unified sensor fusion strategy based on a modified Bayesian approach that can automatically identify the inconsistency in sensor measurements so that the spurious measurements can be eliminated from the data fusion process. The proposed method adds a term to the commonly used Bayesian formulation. This term is an estimate of the probability that the data is not spurious, based upon the measured data and the unknown value of the true state. In fusing two measurements, it has the effect of increasing the variance of the posterior distribution when measurement from one of the sensors is inconsistent with respect to the other. The increase or decrease in variance can be estimated using the information theoretic measure "entropy." The proposed strategy was verified with the help of extensive computations performed on simulated data from three sensors. A comparison was made between two different fusion schemes: centralized fusion in which data obtained from all sensors were fused simultaneously, and a decentralized or sequential Bayesian scheme that proved useful for identifying and eliminating spurious data from the fusion process. The simulations verified that the proposed strategy was able to identify spurious sensor measurements and eliminate them from the fusion process, thus leading to a better overall estimate of the true state. The proposed strategy was also validated with the help of experiments performed using stereo vision cameras, one infrared proximity sensor, and one laser proximity sensor. The information from these three sensing sources was fused to obtain an occupancy profile of the robotic workspace) <|cite_end|> <|cite_start|> (Reference: Scalable Tensor Factorizations for Incomplete Data: The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). 
We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.) <|cite_end|>, the basis of attention mechanisms in machine learning <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|>, suggests that this may be a more relevant approach to factoring uncertainties into multimodal data fusion. However, while uncertainty also evolves through time <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds.) <|cite_end|>, the typical attention approach has been uni-dimensional, i.e. attention across modalities alone or attention over time within individual modalities, e.g. 
in <|cite_start|> (Reference: Multi-Modal Sequence Fusion via Recursive Attention for Emotion Recognition: Natural human communication is nuanced and inherently multi-modal. Humans possess specialised sensoria for processing vocal, visual, and linguistic, and para-linguistic information, but form an intricately fused percept of the multi-modal data stream to provide a holistic representation. Analysis of emotional content in face-to-face communication is a cognitive task to which humans are particularly attuned, given its sociological importance, and poses a difficult challenge for machine emulation due to the subtlety and expressive variability of cross-modal cues. Inspired by the empirical success of recent so-called End-To-End Memory Networks and related works, we propose an approach based on recursive multi-attention with a shared external memory updated over multiple gated iterations of analysis. We evaluate our model across several large multi-modal datasets and show that global contextualised memory with gated memory update can effectively achieve emotion recognition.) <|cite_end|>. Few studies have explored the propagation of attention across modalities through time. The memory fusion network of <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|>, which is based on a cross-modality attention module with a memory, is one such rare case. To address this gap in multimodal data fusion, we propose the Global Workspace Network (GWN) which, like <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view.
The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|>, propagates cross-modality attention through time. However, unlike previous work, the GWN further addresses the problem of differences in feature dimensionalities of the modalities via a common feature space, based on pre-trained autoencoders. In addition, different from <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|>, our approach is bio-inspired (grounded in the Global Workspace Theory <|cite_start|> (Reference: In the Theater of Consciousness: ) <|cite_end|> <|cite_start|> (Reference: The conscious access hypothesis: origins and recent evidence: ) <|cite_end|>) and we implement the GWN's cross-modality attention using the widely-tested transformer architecture <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>. The Global Workspace Theory (GWT) is a well-developed framework (originally proposed as a model of human consciousness <|cite_start|> (Reference: A cognitive theory of consciousness: List of figures and tables Preface Part I. Introduction: 1. What is to be explained some preliminaries Part II. The Basic Model: 2. Model 1: conscious representations are internally consistent and globally distributed 3. The neural basis of conscious experience Part III. The Fundamental Role of Context: 4. Model 2: unconscious contexts shape conscious experience 5. Model 3: conscious experience is informative - it always demands some degree of adaptation Part IV. Goals and Voluntary Control: 6. Model 4: Goal contexts, spontaneous problem solving, and the stream of consciousness 7. Model 5: volition as ideomotor control of thought and action Part V. Attention, self, and conscious self-monitoring: 8. Model 6: attention as control of access to consciousness 9. Model 7. Self as the dominant context of experience and action Part VI. Consciousness is Functional: 10. The functions of consciousness Part VII. Conclusion: 11. A summary and some future directions Glossary and guide to theoretical claims References Name index, Subject index.) <|cite_end|>) in cognitive science. The GWT states that concomitant cognitive processes \textit{compete} for the opportunity to \textit{broadcast} their current state (to peer processes) <|cite_start|> (Reference: 2011 IEEE Conference on Computational Intelligence and Games, CIG 2011, Seoul, South Korea, August 31 - September 3, 2011: ) <|cite_end|>. At each iteration, the winner (a single process or a coalition of processes) earns the privilege of contributing its current information to a \textit{global workspace}, which can be accessed by all processes (including the winner) <|cite_start|> (Reference: A spiking neuron model of cortical broadcast and competition: ) <|cite_end|>. This compete-and-broadcast cycle is believed to be ubiquitous in the perceptual regions of the brain <|cite_start|> (Reference: A cognitive theory of consciousness: List of figures and tables Preface Part I. Introduction: 1. What is to be explained some preliminaries Part II. The Basic Model: 2. Model 1: conscious representations are internally consistent and globally distributed 3. The neural basis of conscious experience Part III. The Fundamental Role of Context: 4. Model 2: unconscious contexts shape conscious experience 5. Model 3: conscious experience is informative - it always demands some degree of adaptation Part IV. Goals and Voluntary Control: 6. Model 4: Goal contexts, spontaneous problem solving, and the stream of consciousness 7. Model 5: volition as ideomotor control of thought and action Part V. Attention, self, and conscious self-monitoring: 8. Model 6: attention as control of access to consciousness 9. Model 7. Self as the dominant context of experience and action Part VI. Consciousness is Functional: 10. The functions of consciousness Part VII. Conclusion: 11. A summary and some future directions Glossary and guide to theoretical claims References Name index, Subject index.) <|cite_end|>.
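As a compact illustration, the following pseudocode sketches one such iteration; the scoring function, the softmax-based soft competition, and all variable names are our own abstractions for exposition rather than components of any specific GWT model.

\begin{verbatim}
import numpy as np

def gwt_cycle(process_states, workspace, score_fn):
    """One hypothetical compete-and-broadcast iteration.

    process_states: list of state vectors, one per specialised process.
    workspace: current content of the global workspace.
    score_fn: scores a process's bid for access, given the workspace.
    """
    # Competition: every process bids for access to the workspace.
    bids = np.array([score_fn(s, workspace) for s in process_states])
    weights = np.exp(bids) / np.exp(bids).sum()  # softmax over the bids

    # Broadcast: the (soft) winner writes its state into the workspace,
    # which all processes can read at the next iteration.
    return sum(w * s for w, s in zip(weights, process_states))

states = [np.ones(3), 2 * np.ones(3)]
workspace = gwt_cycle(states, np.zeros(3), lambda s, w: float(s @ (w + 1)))
\end{verbatim}

A hard winner-take-all competition would replace the softmax with an argmax; the attention-based instantiation described below corresponds to the soft variant.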
Although the literature on GWT includes architectures of biologically-realistic spiking neural networks <|cite_start|> (Reference: A spiking neuron model of cortical broadcast and competition: ) <|cite_end|> <|cite_start|> (Reference: 2011 IEEE Conference on Computational Intelligence and Games, CIG 2011, Seoul, South Korea, August 31 - September 3, 2011: ) <|cite_end|>, to our knowledge, there has been no direct implementation in machine learning. For such an implementation, the GWT can be conceptualised as the combination of a compete-and-broadcast procedure and an external memory structure. In contrast to the global workspace, which can be seen as a communication module, the external memory stores information for later use <|cite_start|> (Reference: A cognitive architecture that combines internal simulation with a global workspace: ) <|cite_end|>. By considering each modality in multimodal data as analogous to a specialised process in the brain, the similarity between the compete-and-broadcast cycle and a typical cross-modality attention mechanism becomes clear. The repetitiveness of the cycle allows the pattern of attention to evolve over time and, given the external memory module, to be used in the primary prediction task of the network. In our implementation of the GWN, the transformer <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> was leveraged to simulate the compete-and-broadcast component of the GWT, and the Long Short-Term Memory (LSTM) neural network <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency.
By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|> as its external memory. There are 3 key elements of transformers that illustrate their advantage and relevance to the current task. First is a self-attention mechanism <|cite_start|> (Reference: Long Short-Term Memory-Networks for Machine Reading: In this paper we address the question of how to render sequence-level networks better at handling structured input. We propose a machine reading simulator which processes text incrementally from left to right and performs shallow reasoning with memory and attention. The reader extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. The system is initially designed to process a single sequence but we also demonstrate how to integrate it with an encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art.) <|cite_end|> <|cite_start|> (Reference: A Deep Reinforced Model for Abstractive Summarization: Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.) <|cite_end|> that we use as the GWN’s compete-and-broadcast procedure, where each modality independently scores all modalities and integrates the data from them based on the resulting weights. A second merit is the transformer's bagging approach, where multiple attention patterns are learnt in parallel, with the advantage of increased robustness. Finally, a third valuable attribute is its memory-based structure <|cite_start|> (Reference: Memory Networks: We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. 
We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.) <|cite_end|> <|cite_start|> (Reference: Weakly Supervised Memory Networks: In this paper we introduce a variant of Memory Networks that needs significantly less supervision to perform question and answering tasks. The original model requires that the sentences supporting the answer be explicitly indicated during training. In contrast, our approach only requires the answer to the question during training. We apply the model to the synthetic bAbI tasks, showing that our approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data. Furthermore, it decisively beats other weakly supervised approaches based on LSTMs. The approach is quite general and can potentially be applied to many other tasks that require capturing long-term dependencies.) <|cite_end|>. Drawing from traditional applications in Natural Language Processing question answering tasks <|cite_start|> (Reference: Weakly Supervised Memory Networks: In this paper we introduce a variant of Memory Networks that needs significantly less supervision to perform question and answering tasks. The original model requires that the sentences supporting the answer be explicitly indicated during training. In contrast, our approach only requires the answer to the question during training. We apply the model to the synthetic bAbI tasks, showing that our approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data. Furthermore, it decisively beats other weakly supervised approaches based on LSTMs. The approach is quite general and can potentially be applied to many other tasks that require capturing long-term dependencies.) <|cite_end|> <|cite_start|> (Reference: Key-Value Memory Networks for Directly Reading Documents: Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark.) <|cite_end|>, this unit further maps the feature vector into query, key, and value spaces to increase the weighting depth and robustness <|cite_start|> (Reference: An Introductory Survey on Attention Mechanisms in NLP Problems: ) <|cite_end|>. 
This additionally enables the competition and broadcast computations to be kept separate. In essence, the query and key forms serve the competition, while the broadcast is performed on the value form, which can carry more expressive information that is not needed for the competition itself. As for the external memory module, in contrast to the custom two-gated recurrent network used in <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|>, we used the well-established LSTM, which has two additional gates <|cite_start|> (Reference: A Critical Review of Recurrent Neural Networks for Sequence Learning: Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades first yielded and then made practical these powerful learning models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a self-contained explication of the state of the art together with a historical perspective and references to primary research.) <|cite_end|>.
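To make this mapping concrete, the sketch below shows one possible reading of a single GWN step, assuming all modalities have already been projected to a common dimension \texttt{d}; the layer sizes, the single attention layer, and the way the LSTM consumes the broadcast are illustrative simplifications rather than the exact configuration described in Section~\ref{sec:gwn}.

\begin{verbatim}
import torch
import torch.nn as nn

d, n_modalities, n_heads = 64, 2, 4

# Compete-and-broadcast: every modality queries all modalities.
attention = nn.MultiheadAttention(embed_dim=d, num_heads=n_heads)
# External memory: an LSTM cell accumulates the broadcasts over time.
memory = nn.LSTMCell(input_size=n_modalities * d, hidden_size=d)

def gwn_step(x_t, h, c):
    """One time step; x_t holds (n_modalities, 1, d) common-space features."""
    # Queries and keys decide the competition; values carry the broadcast.
    broadcast, attn_weights = attention(x_t, x_t, x_t)
    h, c = memory(broadcast.reshape(1, -1), (h, c))
    return h, c, attn_weights

h, c = torch.zeros(1, d), torch.zeros(1, d)
x_t = torch.randn(n_modalities, 1, d)  # e.g. one EMG and one motion frame
h, c, w = gwn_step(x_t, h, c)
\end{verbatim}

In this reading, the attention weights realise the competition while the attended values constitute the broadcast that is written into the external memory.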
Finally, unlike <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|>, we provide an extensive analysis of the behaviour of the GWN in the presence of varying degrees of uncertainty across modalities and over time. The contribution of this paper is the GWN architecture, which we propose as an approach to the fusion of sequential data from multiple modalities. We evaluate the architecture on the EmoPain dataset <|cite_start|> (Reference: The automatic detection of chronic pain-related expression: requirements, challenges and the multimodal EmoPain dataset: Pain-related emotions are a major barrier to effective self rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions. First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. Second, a fully labelled multimodal dataset (named `EmoPain') containing high resolution multiple-view face videos, head mounted and room audio signals, full body 3D motion capture and electromyographic signals from back muscles is supplied. Natural unconstrained pain related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises. Both instructed and non-instructed exercises were considered to reflect traditional scenarios of physiotherapist directed therapy and home-based self-directed therapy. Two sets of labels were assigned: level of pain from facial expressions annotated by eight raters and the occurrence of six pain-related body behaviours segmented by four experts. Third, through exploratory experiments grounded in the data, the factors and challenges in the automated recognition of such expressions and behaviour are described, the paper concludes by discussing potential avenues in the context of these findings also highlighting differences for the two exercise scenarios addressed.) <|cite_end|>, which consists of motion capture and electromyography (EMG) data collected from patients with chronic lower back pain and healthy control participants while they performed exercise movements.
While the EMG data has four feature dimensions, the motion capture data comprises 78 dimensions. Further, we provide an analysis of the GWN's outputs, demonstrating its effectiveness in handling uncertainty in the data. The paper is organized as follows. We discuss the state of the art in attention-based machine learning in Section~\ref{sec:litrev}. We then describe the proposed GWN architecture, which builds on this prior work, in Section~\ref{sec:gwn}, and present both validation and analysis of the network in Section~\ref{sec:resultsanddiscussion}. Section~\ref{sec:conclusion} concludes the paper. Related Work \label{sec:litrev} As stated earlier, there have been different approaches to multimodal fusion. For example, <|cite_start|> (Reference: Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks: Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is non-trivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. Results for simulated and real robot experiments are presented.) <|cite_end|> simply concatenated vectors from individual encoders for each modality. Another architecture, mainly tested on non-sequential inputs, learns both individual encodings as well as a common encoding for the different modalities. For its joint encoding, the individual encodings are merged by multiplication. Rather than cover the literature on multimodal data fusion, we refer the reader to <|cite_start|> (Reference: Multimodal Machine Learning: A Survey and Taxonomy: Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.) <|cite_end|> for a comprehensive review, and focus our discussion here on attention-based approaches to multimodal data fusion.
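For concreteness, the two simple fusion strategies mentioned above, concatenation of per-modality encodings and a multiplicatively merged joint encoding, can be sketched as follows; the encodings and their sizes are placeholders.

\begin{verbatim}
import torch

z_a = torch.randn(1, 32)  # placeholder encoding of modality A
z_b = torch.randn(1, 32)  # placeholder encoding of modality B

fused_concat = torch.cat([z_a, z_b], dim=-1)  # concatenation fusion
fused_joint = z_a * z_b  # joint encoding merged by element-wise multiplication
\end{verbatim}

Neither strategy models how much each modality should contribute at a given moment, a question that the attention-based approaches reviewed below tackle to varying degrees.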
\textbf{Attention over time in multimodal fusion.} In the literature on neural networks for multimodal data, attention along the time axis is usually performed separately for each modality, and the resulting context vectors from each modality are then fused as non-temporal features. A representative case of this approach is the Recursive Recurrent Neural Network (RRNN) architecture proposed by <|cite_start|> (Reference: Multi-Modal Sequence Fusion via Recursive Attention for Emotion Recognition: Natural human communication is nuanced and inherently multi-modal. Humans possess specialised sensoria for processing vocal, visual, and linguistic, and para-linguistic information, but form an intricately fused percept of the multi-modal data stream to provide a holistic representation. Analysis of emotional content in face-to-face communication is a cognitive task to which humans are particularly attuned, given its sociological importance, and poses a difficult challenge for machine emulation due to the subtlety and expressive variability of cross-modal cues. Inspired by the empirical success of recent so-called End-To-End Memory Networks and related works, we propose an approach based on recursive multi-attention with a shared external memory updated over multiple gated iterations of analysis. We evaluate our model across several large multi-modal datasets and show that global contextualised memory with gated memory update can effectively achieve emotion recognition.) <|cite_end|>. In their work, different modalities (video, audio, and subtitles) extracted from a subtitled audiovisual dataset were divided into segments of uttered sentences and each segment was used as an input to the network. For each modality in a segment, a bi-directional LSTM layer was used to extract features. At a given time step, attention was computed for each modality separately and the outputs were concatenated over all modalities, together with the current state of a shared memory, which the authors implemented with a Gated Recurrent Unit (GRU) cell <|cite_start|> (Reference: On the Properties of Neural Machine Translation: Encoder-Decoder Approaches: Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and a newly proposed gated recursive convolutional neural network. We show that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. Furthermore, we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically.) <|cite_end|>. The outcome was then used to update the state of the memory. An advantage of this work is that, since each modality is encoded separately, the modalities do not have to follow a common time axis, which allows each modality to optimally exploit its inherent temporal properties.
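Under our reading of this description, one segment-level step of such a scheme could look as follows; the layer sizes, the linear attention scorer, and the way the memory state enters the GRU cell are schematic stand-ins rather than the authors' exact design.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 32
dims = {"video": 20, "audio": 12, "text": 16}  # per-modality feature sizes

enc = nn.ModuleDict({m: nn.LSTM(k, d, bidirectional=True)
                     for m, k in dims.items()})
score = nn.ModuleDict({m: nn.Linear(2 * d, 1) for m in dims})
memory = nn.GRUCell(input_size=len(dims) * 2 * d, hidden_size=2 * d)

def step(segment, mem):
    """segment: dict of (seq_len, 1, dim) tensors; mem: (1, 2*d) state."""
    contexts = []
    for m, x in segment.items():
        feats, _ = enc[m](x)                     # (T, 1, 2d) BiLSTM features
        a = F.softmax(score[m](feats), dim=0)    # per-modality temporal attention
        contexts.append((a * feats).sum(dim=0))  # (1, 2d) context vector
    return memory(torch.cat(contexts, dim=-1), mem)  # shared memory update

mem = torch.zeros(1, 2 * d)
mem = step({m: torch.randn(5, 1, k) for m, k in dims.items()}, mem)
\end{verbatim}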
However, as this method cannot account for attention between modalities, different modalities affect the final prediction equally despite the fact that some modalities could be more noisy than others. Thus, the challenge of the dynamics of uncertainty across modalities remains unsolved. \textbf{Attention across multiple modalities.} Several studies have modelled the relation between modalities in multimodal fusion. The typical approach <|cite_start|> (Reference: Simultaneous analysis of coupled data matrices subject to different amounts of noise: In many areas of science, research questions imply the analysis of a set of coupled data blocks, with, for instance, each block being an experimental unit by variable matrix, and the variables being the same in all matrices. To obtain an overall picture of the mechanisms that play a role in the different data matrices, the information in these matrices needs to be integrated. This may be achieved by applying a data-analytic strategy in which a global model is fitted to all data matrices simultaneously, as in some forms of simultaneous component analysis (SCA). Since such a strategy implies that all data entries, regardless the matrix they belong to, contribute equally to the analysis, it may obfuscate the overall picture of the mechanisms underlying the data when the different data matrices are subject to different amounts of noise. One way out is to downweight entries from noisy data matrices in favour of entries from less noisy matrices. Information regarding the amount of noise that is present in each matrix, however, is, in most cases, not available. To deal with these problems, in this paper a novel maximum-likelihood-based simultaneous component analysis method, referred to as MxLSCA, is proposed. Being a stochastic extension of SCA, in MxLSCA the amount of noise in each data matrix is estimated and entries from noisy data matrices are downweighted. Both in an extensive simulation study and in an application to data stemming from cross-cultural emotion psychology, it is shown that the novel MxLSCA strategy outperforms the SCA strategy with respect to disclosing the mechanisms underlying the coupled data.) <|cite_end|> <|cite_start|> (Reference: 21st European Signal Processing Conference, EUSIPCO 2013, Marrakech, Morocco, September 9-13, 2013: ) <|cite_end|> <|cite_start|> (Reference: New algorithm for integration between wireless microwave sensor network and radar for improved rainfall measurement and mapping: Abstract. One of the main challenges for meteorological and hydrological modelling is accurate rainfall measurement and mapping across time and space. To date, the most effective methods for large-scale rainfall estimates are radar, satellites, and, more recently, received signal level (RSL) measurements derived from commercial microwave networks (CMNs). While these methods provide improved spatial resolution over traditional rain gauges, they have their limitations as well. For example, wireless CMNs, which are comprised of microwave links (ML), are dependant upon existing infrastructure and the ML' arbitrary distribution in space. Radar, on the other hand, is known in its limitation for accurately estimating rainfall in urban regions, clutter areas and distant locations. In this paper the pros and cons of the radar and ML methods are considered in order to develop a new algorithm for improving rainfall measurement and mapping, which is based on data fusion of the different sources. 
The integration is based on an optimal weighted average of the two data sets, taking into account location, number of links, rainfall intensity and time step. Our results indicate that, by using the proposed new method, we not only generate more accurate 2-D rainfall reconstructions, compared with actual rain intensities in space, but also the reconstructed maps are extended to the maximum coverage area. By inspecting three significant rain events, we show that our method outperforms CMNs or the radar alone in rain rate estimation, almost uniformly, both for instantaneous spatial measurements, as well as in calculating total accumulated rainfall. These new improved 2-D rainfall maps, as well as the accurate rainfall measurements over large areas at sub-hourly timescales, will allow for improved understanding, initialization, and calibration of hydrological and meteorological models mainly necessary for water resource management and planning.) <|cite_end|> is the use of modality weighting, although not necessarily based on attention mechanisms <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|>. One study that does explicitly use the attention mechanism is the work of <|cite_start|> (Reference: Attention-Based Multimodal Fusion for Video Description: Currently successful methods for video description are based on encoder-decoder sentence generation using recur-rent neural networks (RNNs). Recent work has shown the advantage of integrating temporal and/or spatial attention mechanisms into these models, in which the decoder net-work predicts each word in the description by selectively giving more weight to encoded features from specific time frames (temporal attention) or to features from specific spatial regions (spatial attention). In this paper, we propose to expand the attention model to selectively attend not just to specific times or spatial regions, but to specific modalities of input such as image features, motion features, and audio features. Our new modality-dependent attention mechanism, which we call multimodal attention, provides a natural way to fuse multimodal information for video description. We evaluate our method on the Youtube2Text dataset, achieving results that are competitive with current state of the art.
More importantly, we demonstrate that our model incorporating multimodal attention as well as temporal attention significantly outperforms the model that uses temporal attention alone.) <|cite_end|> on automatic video description. Their approach leverages attention between different modalities using an encoder-decoder architecture <|cite_start|> (Reference: Neural Machine Translation by Jointly Learning to Align and Translate: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.) <|cite_end|> with separate encoders for each modality and a single decoder. Features of each modality are encoded separately and the decoder weights them to generate a context vector as an output. A similar study <|cite_start|> (Reference: Multimodal Attention for Neural Machine Translation: The attention mechanism is an important part of the neural machine translation (NMT) where it was reported to produce richer source representation compared to fixed-length encoding sequence-to-sequence models. Recently, the effectiveness of attention has also been explored in the context of image captioning. In this work, we assess the feasibility of a multimodal attention mechanism that simultaneously focus over an image and its natural language description for generating a description in another language. We train several variants of our proposed attention mechanism on the Multi30k multilingual image captioning dataset. We show that a dedicated attention for each modality achieves up to 1.6 points in BLEU and METEOR compared to a textual NMT baseline.) <|cite_end|> applies multimodal attention in neural machine translation where images are leveraged in translating the description texts from one language to another. The image and text modalities were first encoded using pre-trained ResNet-50 <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. 
On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|> and bi-directional GRU neural networks <|cite_start|> (Reference: On the Properties of Neural Machine Translation: Encoder-Decoder Approaches: Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and a newly proposed gated recursive convolutional neural network. We show that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. Furthermore, we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically.) <|cite_end|> respectively. Then, attention scores were computed for these encodings. More recently, authors of <|cite_start|> (Reference: A genre-aware attention model to improve the likability prediction of books: Likability prediction of books has many uses. Readers, writers, as well as the publishing industry, can all benefit from automatic book likability prediction systems. In order to make reliable decisions, these systems need to assimilate information from different aspects of a book in a sensible way. We propose a novel multimodal neural architecture that incorporates genre supervision to assign weights to individual feature types. Our proposed method is capable of dynamically tailoring weights given to feature types based on the characteristics of each book. Our architecture achieves competitive results and even outperforms state-of-the-art for this task.) <|cite_end|> place an attention layer on top of several modality-specific feature encoding layers to model the importance of different modalities in book genre prediction. There are many other works <|cite_start|> (Reference: Hierarchical Question-Image Co-Attention for Visual Question Answering: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. 
In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.) <|cite_end|> <|cite_start|> (Reference: Attention in multimodal neural networks for person re-identification: In spite of increasing interest from the research community, person re-identification remains an unsolved problem. Correctly deciding on a true match by comparing images of a person, captured by several cameras, requires extraction of discriminative features to counter challenges such as changes in lighting, viewpoint and occlusion. Besides devising novel feature descriptors, the setup can be changed to capture persons from an overhead viewpoint rather than a horizontal. Furthermore, additional modalities can be considered that are not affected by similar environmental changes as RGB images. In this work, we present a Multimodal ATtention network (MAT) based on RGB and depth modalities. We combine a Convolution Neural Network with an attention module to extract local and discriminative features that are fused with globally extracted features. Attention is based on correlation between the two modalities and we finally also fuse RGB and depth features to generate a joint multilevel RGB-D feature. Experiments conducted on three datasets captured from an overhead view show the importance of attention, increasing accuracies by 3.43%, 2.01% and 2.13% on OPR, DPI-T and TVPR, respectively.) <|cite_end|> <|cite_start|> (Reference: Visual Attention Model for Name Tagging in Multimodal Social Media: Everyday billions of multimodal posts containing both images and text are shared in social media sites such as Snapchat, Twitter or Instagram. This combination of image and text in a single message allows for more creative and expressive forms of communication, and has become increasingly common in such sites. This new paradigm brings new challenges for natural language understanding, as the textual component tends to be shorter, more informal, and often is only understood if combined with the visual context. In this paper, we explore the task of name tagging in multimodal social media posts. We start by creating two new multimodal datasets: the first based on Twitter posts and the second based on Snapchat captions (exclusively submitted to public and crowd-sourced stories). We then propose a novel model architecture based on Visual Attention that not only provides deeper visual understanding on the decisions of the model, but also significantly outperforms other state-of-the-art baseline methods for this task.) <|cite_end|> <|cite_start|> (Reference: Heterogeneous Memory Enhanced Multimodal Attention Model for Video Question Answering: In this paper, we propose a novel end-to-end trainable Video Question Answering (VideoQA) framework with three major components: 1) a new heterogeneous memory which can effectively learn global context information from appearance and motion features; 2) a redesigned question memory which helps understand the complex semantics of question and highlights queried subjects; and 3) a new multimodal fusion layer which performs multi-step reasoning by attending to relevant visual and textual hints with self-updated attention. 
Our VideoQA model firstly generates the global context-aware visual and textual features respectively by interacting current inputs with memory contents. After that, it makes the attentional fusion of the multimodal visual and textual representations to infer the correct answer. Multiple cycles of reasoning can be made to iteratively refine attention weights of the multimodal data and improve the final representation of the QA pair. Experimental results demonstrate our approach achieves state-of-the-art performance on four VideoQA benchmark datasets.) <|cite_end|> that leverage this technique, i.e., encoding sequential/temporal data for each modality before computing attention weighting and fusing encoded modality-specific features. While this is appropriate for obtaining modality-specific feature representations, it does not allow in-depth quantification of the complex interactions between modalities through time. \textbf{Attention across modalities and through time.} As discussed in the introduction, <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|> addresses the limitation of attention over time alone or across modalities alone by considering both the interaction of multiple modalities and the temporal variations in this interaction. Their architecture is based on separate temporal encoding of the individual modalities. A cross-modality attention is then computed and applied for each time slice. Instead of a single time step per slice, each slice consists of successive time steps $t$ and $t-1$. The weighted multimodal encodings for a given time slice are then fed into a memory module with retain and update gates, which are neural networks that take the encodings as input. A recurrent update is done using the gate outputs, the previous memory state, and the proposed memory state, which is also the output of a neural network computation on the encodings. The findings of <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions.
In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|> in a set of ablation studies suggest that propagation of attention through time improves prediction performance. The GWN architecture that we propose makes a further advance by implementing the cross-modality attention module based on the self-attending, multi-head attention transformer architecture <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>. The GWN additionally addresses the confounding challenge of different feature and/or temporal dimensionalities across the modalities to be fused. While <|cite_start|> (Reference: Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view.
The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets.) <|cite_end|> evaluate their model on data with such characteristics, they do not clarify how their architecture deals with this. In the GWN, we take the approach of learning a common dimensionality across modalities. Based on further controlled experiments, we also contribute an analysis of the effect of noise in one of the modalities.
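As an illustration of the fusion strategy discussed above, the following is a minimal PyTorch sketch of cross-modality multi-head attention over two modalities with mismatched feature and temporal dimensionalities; the module, the dimensions, and the choice of which modality queries which are illustrative assumptions, not the GWN's actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    # Project each modality to a common width, then let modality A
    # attend over modality B with multi-head attention.
    def __init__(self, dim_a, dim_b, d_common=128, n_heads=4):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, d_common)
        self.proj_b = nn.Linear(dim_b, d_common)
        self.attn = nn.MultiheadAttention(d_common, n_heads,
                                          batch_first=True)

    def forward(self, x_a, x_b):
        # x_a: (batch, T_a, dim_a); x_b: (batch, T_b, dim_b).
        # T_a and T_b may differ: attention handles the mismatch.
        q = self.proj_a(x_a)
        kv = self.proj_b(x_b)
        fused, weights = self.attn(q, kv, kv)
        return fused, weights

# Toy usage: a 40-dim audio-like stream attending over a
# 512-dim video-like stream of a different length.
fuser = CrossModalAttention(dim_a=40, dim_b=512)
out, w = fuser(torch.randn(2, 20, 40), torch.randn(2, 15, 512))
print(out.shape)  # torch.Size([2, 20, 128])
\end{verbatim}
<|paper_end|>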
[ "<|reference_start|> Memory Fusion Network for Multi-view Sequential Learning: Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exists two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation through assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the existing multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for these multi-view datasets. <|reference_end|>", "<|reference_start|> Long Short-Term Memory-Networks for Machine Reading: In this paper we address the question of how to render sequence-level networks better at handling structured input. We propose a machine reading simulator which processes text incrementally from left to right and performs shallow reasoning with memory and attention. The reader extends the Long Short-Term Memory architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. The system is initially designed to process a single sequence but we also demonstrate how to integrate it with an encoder-decoder architecture. Experiments on language modeling, sentiment analysis, and natural language inference show that our model matches or outperforms the state of the art. <|reference_end|>", "<|reference_start|> Key-Value Memory Networks for Directly Reading Documents: Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark. 
<|reference_end|>", "<|reference_start|> Multimodal Attention for Neural Machine Translation: The attention mechanism is an important part of the neural machine translation (NMT) where it was reported to produce richer source representation compared to fixed-length encoding sequence-to-sequence models. Recently, the effectiveness of attention has also been explored in the context of image captioning. In this work, we assess the feasibility of a multimodal attention mechanism that simultaneously focus over an image and its natural language description for generating a description in another language. We train several variants of our proposed attention mechanism on the Multi30k multilingual image captioning dataset. We show that a dedicated attention for each modality achieves up to 1.6 points in BLEU and METEOR compared to a textual NMT baseline. <|reference_end|>" ]
[ 17, 32, 37, 53 ]
{"<|cite_1|>": "ss-1538934", "<|multi_cite_2_1|>": "ss-1062886", "<|multi_cite_2_2|>": "ss-1287582", "<|multi_cite_2_3|>": "arxiv-114193", "<|multi_cite_2_4|>": "ss-848988", "<|cite_3|>": "ss-1538934", "<|cite_4|>": "ss-1238004", "<|cite_5|>": "ss-1191791", "<|cite_6|>": "ss-1238004", "<|multi_cite_8_1|>": "ss-2390211", "<|multi_cite_8_2|>": "ss-1359800", "<|multi_cite_8_3|>": "ss-2390212", "<|multi_cite_8_4|>": "ss-1396866", "<|multi_cite_8_5|>": "arxiv-13521", "<|cite_9|>": "arxiv-65503", "<|cite_10|>": "ss-1238004", "<|cite_11|>": "ss-771857", "<|cite_12|>": "arxiv-147131", "<|cite_13|>": "arxiv-147131", "<|cite_14|>": "arxiv-147131", "<|multi_cite_15_1|>": "ss-2390213", "<|multi_cite_15_2|>": "ss-1488154", "<|cite_16|>": "arxiv-126595", "<|cite_17|>": "ss-1290024", "<|cite_18|>": "ss-1370985", "<|cite_19|>": "ss-2390214", "<|cite_20|>": "ss-1290024", "<|multi_cite_21_1|>": "ss-2390214", "<|multi_cite_21_2|>": "ss-1370985", "<|cite_22|>": "ss-1290027", "<|cite_23|>": "arxiv-126595", "<|multi_cite_24_1|>": "ss-710343", "<|multi_cite_25_1|>": "arxiv-90993", "<|multi_cite_25_2|>": "arxiv-123897", "<|multi_cite_26_1|>": "arxiv-67359", "<|multi_cite_26_2|>": "ss-2102508", "<|multi_cite_27_1|>": "ss-2102508", "<|multi_cite_27_2|>": "arxiv-99775", "<|cite_28|>": "ss-2390215", "<|cite_29|>": "arxiv-147131", "<|cite_30|>": "arxiv-78537", "<|cite_31|>": "arxiv-147131", "<|cite_32|>": "ss-1191791", "<|cite_33|>": "arxiv-177375", "<|cite_36|>": "arxiv-125183", "<|cite_37|>": "ss-771857", "<|cite_38|>": "arxiv-65632", "<|multi_cite_39_1|>": "ss-2390211", "<|multi_cite_39_2|>": "ss-1359800", "<|multi_cite_39_3|>": "ss-2390212", "<|cite_40|>": "arxiv-65503", "<|cite_41|>": "arxiv-114193", "<|cite_42|>": "arxiv-65503", "<|cite_43|>": "arxiv-105726", "<|cite_44|>": "arxiv-88870", "<|cite_45|>": "arxiv-65632", "<|cite_46|>": "ss-1800183", "<|multi_cite_47_1|>": "arxiv-99055", "<|multi_cite_47_2|>": "ss-1632951", "<|multi_cite_47_3|>": "ss-1292433", "<|multi_cite_47_4|>": "arxiv-199001", "<|cite_48|>": "arxiv-147131", "<|cite_49|>": "arxiv-147131", "<|cite_50|>": "arxiv-126595", "<|cite_51|>": "arxiv-147131"}
1706.01347
<|paper_start|> Title: Balanced Facilities on Random Graphs Abstract: Balanced Facilities on Random Graphs: Given a graph G with n vertices and k players, each of which is placing a facility on one of the vertices of G, we define the score of the i'th player to be the number of vertices for which, among all players, the facility placed by the i'th player is the closest. A placement is balanced if all players get roughly the same score. A graph is balanced if all placements on it are balanced. Viewing balancedness as a desired property in various scenarios, in this paper we study balancedness properties of graphs, concentrating on random graphs and on expanders. We show that, while both random graphs and expanders tend to have good balancedness properties, random graphs are, in general, more balanced. In addition, we formulate and prove intractability of the combinatorial problem of deciding whether a given graph is balanced; then, building upon our analysis on random graphs and expanders, we devise two efficient algorithms which, with high probability, generate balancedness certificates. Our first algorithm is based on graph traversal, while the other relies on spectral properties. Introduction \label{section:introduction} Consider a game played by $k$ players on some graph $G$. The players place facilities on vertices of $G$ such that each player places one facility. For each player we define a \emph{score}: the number of vertices for which, among all facilities, his or her facility is the closest; ties are broken evenly, such that if there are $z$ facilities closest to a vertex, then this vertex incurs a score increase of $1 / z$ to each of these facilities. Such games are subject to extensive research; some prominent study areas are \emph{Voronoi games on graphs} <|cite_start|> (Reference: Voronoi Game on Graphs: \textit{Voronoi game} is a geometric model of competitive facility location problem played between two players. Users are generally modeled as points uniformly distributed on a given underlying space. Each player chooses a set of points in the underlying space to place their facilities. Each user avails service from its nearest facility. Service zone of a facility consists of the set of users which are closer to it than any other facility. Payoff of each player is defined by the quantity of users served by all of its facilities. The objective of each player is to maximize their respective payoff. In this paper we consider the two players {\it Voronoi game} where the underlying space is a road network modeled by a graph. In this framework we consider the problem of finding $k$ optimal facility locations of Player 2 given any placement of $m$ facilities by Player 1. Our main result is a dynamic programming based polynomial time algorithm for this problem on tree network. On the other hand, we show that the problem is strongly $\mathcal{NP}$-complete for graphs. This proves that finding a winning strategy of P2 is $\mathcal{NP}$-complete. Consequently, we design an $1-\frac{1}{e}$ factor approximation algorithm, where $e \approx 2.718$.) <|cite_end|> <|cite_start|> (Reference: Nash equilibria in Voronoi games on graphs: In this paper we study a game where every player is to choose a vertex (facility) in a given undirected graph. All vertices (customers) are then assigned to closest facilities and a player's payoff is the number of customers assigned to it.
We show that deciding the existence of a Nash equilibrium for a given graph is NP-hard which to our knowledge is the first result of this kind for a zero-sum game. We also introduce a new measure, the social cost discrepancy, defined as the ratio of the costs between the worst and the best Nash equilibria. We show that the social cost discrepancy in our game is Omega(sqrt(n/k)) and O(sqrt(kn)), where n is the number of vertices and k the number of players.) <|cite_end|> <|cite_start|> (Reference: Voronoi Games on Cycle Graphs: ) <|cite_end|> <|cite_start|> (Reference: The Voronoi game on graphs and its complexity: ) <|cite_end|> and \emph{Competitive facility location games} <|cite_start|> (Reference: Competitive facility location under attrition: ) <|cite_end|> <|cite_start|> (Reference: The Competitive Facility Location Problem in a Duopoly: Connections to the 1-Median Problem: ) <|cite_end|> <|cite_start|> (Reference: Discrete Voronoi Games and $\epsilon$-Nets, in Two and Three Dimensions: The one-round discrete Voronoi game, with respect to a $n$-point user set $U$, consists of two players Player 1 ($\mathcal{P}_1$) and Player 2 ($\mathcal{P}_2$). At first, $\mathcal{P}_1$ chooses a set of facilities $F_1$ following which $\mathcal{P}_2$ chooses another set of facilities $F_2$, disjoint from $F_1$. The payoff of $\mathcal{P}_2$ is defined as the cardinality of the set of points in $U$ which are closer to a facility in $F_2$ than to every facility in $F_1$, and the payoff of $\mathcal{P}_1$ is the difference between the number of users in $U$ and the payoff of $\mathcal{P}_2$. The objective of both the players in the game is to maximize their respective payoffs. In this paper we study the one-round discrete Voronoi game where $\mathcal{P}_1$ places $k$ facilities and $\mathcal{P}_2$ places one facility and we have denoted this game as $VG(k,1)$. Although the optimal solution of this game can be found in polynomial time, the polynomial has a very high degree. In this paper, we focus on achieving approximate solutions to $VG(k,1)$ with significantly better running times. We provide a constant-factor approximate solution to the optimal strategy of $\mathcal{P}_1$ in $VG(k,1)$ by establishing a connection between $VG(k,1)$ and weak $\epsilon$-nets. To the best of our knowledge, this is the first time that Voronoi games are studied from the point of view of $\epsilon$-nets.) <|cite_end|> <|cite_start|> (Reference: Optimal strategies for the one-round discrete Voronoi game on a line: ) <|cite_end|> <|cite_start|> (Reference: Competitive facility location: the Voronoi game: ) <|cite_end|>, where the players try to maximize their score. In this paper, however, we concentrate on balancedness properties of such games; thus, we consider it desirable that the scores of the players be as close to each other as possible. Indeed, in some sense, in this paper we take the point of view of the network designer by studying balancedness properties of certain graphs. To this end, we say that a placement of $k$ facilities on a graph is \emph{$z$-balanced} if all facilities get roughly the same score; specifically, a placement of $k$ facilities is \emph{$z$-balanced} if the score of each facility is at least $\lfloor n / k \rfloor - z$ and at most $\lceil n / k \rceil + z$. We say further that a graph is \emph{$z$-balanced} if all placements on it are $z$-balanced. A more formal definition is given in Section~\ref{section:preliminaries}.
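To make the definition concrete, the following is a minimal Python sketch (our own illustration, not part of the paper) that computes facility scores with even tie-breaking and checks whether a placement is $z$-balanced; it assumes a connected graph given as an adjacency list, and all function names are illustrative.
\begin{verbatim}
from collections import deque
from math import floor, ceil

def bfs_distances(adj, source):
    # Unweighted shortest-path distances from source
    # (assumes the graph is connected).
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def facility_scores(adj, facilities):
    # Each vertex contributes 1/z, split evenly among
    # the z facilities closest to it.
    dist = {f: bfs_distances(adj, f) for f in facilities}
    scores = {f: 0.0 for f in facilities}
    for v in adj:
        best = min(dist[f][v] for f in facilities)
        closest = [f for f in facilities if dist[f][v] == best]
        for f in closest:
            scores[f] += 1.0 / len(closest)
    return scores

def is_z_balanced(adj, facilities, z):
    # z-balanced: every score lies in [floor(n/k) - z, ceil(n/k) + z].
    n, k = len(adj), len(facilities)
    return all(floor(n / k) - z <= s <= ceil(n / k) + z
               for s in facility_scores(adj, facilities).values())

# Toy usage: a star with center 0 and leaves 1..4; facilities at 0 and 1.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(facility_scores(star, [0, 1]))   # {0: 4.0, 1: 1.0}
print(is_z_balanced(star, [0, 1], 0))  # False
print(is_z_balanced(star, [0, 1], 1))  # True
\end{verbatim}
Note that this sketch only certifies a single placement; deciding whether a whole graph is balanced requires reasoning over all placements, which is the combinatorial problem addressed later in the paper.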
\pagebreak Graph balancedness, besides being a natural and interesting graph property from a combinatorial point of view, is motivated by certain scenarios, two of which we briefly mention next. As a first example, consider a computer network to be built; the network acts as the graph upon which facilities, such as computer servers, might be built. It is of interest to have a network with good balancedness properties, so that it will remain fair and efficient when such servers are employed on top of it. As a second example, consider the design of a city to be built; the city topology and, for instance, its roads, act as the graph upon which facilities, such as hospitals and child-care centers, might be built. It is of interest to have a city with good balancedness properties, so that it would be able to accommodate the needs of its future residents. Indeed, a city with bad balancedness properties might eventually become unpleasant and socially inferior. Thus, we believe that it is worthwhile to study balancedness properties of graphs, as well as algorithms for verifying whether a given graph is balanced. Some research has been done on balancedness of facilities in graphs, including work on designing practical algorithms for finding balanced allocations <|cite_start|> (Reference: The discrete facility location problem with balanced allocation of customers: ) <|cite_end|> and work considering balancedness in a percolation-like model <|cite_start|> (Reference: Fixed speed competition on the configuration model with infinite variance degrees: Unequal speeds: We study competition of two spreading colors starting from single sources on the configuration model with i.i.d. degrees following a power-law distribution with exponent τ ∈ (2, 3). In this model two colors spread with a fixed but not necessarily equal speed on the unweighted random graph. We show that if the speeds are not equal, then the faster color paints almost all vertices, while the slower color can paint only a random subpolynomial fraction of the vertices. We investigate the case when the speeds are equal and typical distances in a follow-up paper.) <|cite_end|> <|cite_start|> (Reference: Fixed speed competition on the configuration model with infinite variance degrees: unequal speeds: We study competition of two spreading colors starting from single sources on the configuration model with i.i.d. degrees following a power-law distribution with exponent τ ∈ (2, 3). In this model two colors spread with a fixed but not necessarily equal speed on the unweighted random graph. We show that if the speeds are not equal, then the faster color paints almost all vertices, while the slower color can paint only a random subpolynomial fraction of the vertices. We investigate the case when the speeds are equal and typical distances in a follow-up paper.) <|cite_end|>. Other, different notions of balancedness in facility location games have been studied as well <|cite_start|> (Reference: Balancing graph voronoi diagrams: Many facility location problems are concerned with minimizing operation and transportation costs by partitioning territory into regions of similar size, each of which is served by a facility.
For many optimization problems, the overall cost can be reduced by means of a partitioning into balanced subsets, especially in those cases where the cost associated with a subset is superlinear in its size. In this paper, we consider the problem of generating a Voronoi partition of a discrete graph so as to achieve balance conditions on the region sizes. Through experimentation, we first establish that the region sizes of randomly-generated graph Voronoi diagrams vary greatly in practice. We then show how to achieve a balanced partition of a graph via Voronoi site resampling. For bounded-degree graphs, where each of the $n$ nodes has degree at most $d$, and for an initial randomly-chosen set of $s$ Voronoi nodes, we prove that, by extending the set of Voronoi nodes using an algorithm by Thorup and Zwick, each Voronoi region has size at most $4dn/s+1$ nodes, and that the expected size of the extended set of Voronoi nodes is at most $2s\log n$.) <|cite_end|>. For an elaborate discussion on balancedness notions in facility location games, see <|cite_start|> (Reference: Equity measurement in facility location analysis: A review and framework: ) <|cite_end|>. In this paper we analyze balancedness properties of certain graphs, seeking to identify graphs which are balanced. Specifically, we concentrate on random graphs and also on expanders, showing that these graphs usually have good balancedness properties. Then, building upon our analysis on random graphs and expanders, we provide efficient algorithms for verifying whether a given graph is balanced. \para{Initial Observations.} As one of our goals is to identify graphs which have good balancedness properties, let us identify certain such graphs. As first examples, observe that complete graphs and empty graphs are $0$-balanced for any number of players $k$; indeed, the score of each player is exactly $n / k$, for any placement of $k$ facilities on such graphs. Both complete graphs and empty graphs are vertex-transitive graphs <|cite_start|> (Reference: Algebraic Graph Theory: All lectures will be held in Max Bell 159 (Max Bell Building accessible by walkway on 2nd floor of Corbett Hall). LCD projector, overhead projectors and blackboards are available for presentations. Note that the meeting space designated for BIRS is the lower level of Max Bell, Rooms 155–159. Please respect that all other space has been contracted to other Banff Centre guests, including any Food and Beverage in those areas. Please remember to scan your meal card at the host/hostess station in the dining room for each meal.) <|cite_end|>, and we mention that any vertex-transitive graph is $0$-balanced for two players; further, a natural generalization of vertex-transitivity to sets of $k$ players yields graphs which are $0$-balanced also for $k$ players. Naturally, however, not all graphs have good balancedness properties. For example, consider two players playing on the path graph $P_n$, which is the graph with vertices $V = \{v_1, \ldots, v_n\}$ and edges $E = \{ \{ v_i, v_{i + 1} \} : i \in [n - 1]\}$. Some placements of two facilities on the path graph $P_n$ are balanced, for example where one facility is placed on $v_{n / 2}$ and the other facility is placed on $v_{n / 2 + 1}$: for $n \in \mathbb{N}_{\text{even}}$, such a placement is $0$-balanced, while for $n \in \mathbb{N}_{\text{odd}}$, such a placement is only $1$-balanced.
Some placements of two facilities on the path graph $P_n$, however, are not balanced, for example where one facility is placed on $v_1$ and the other facility is placed on $v_2$: one player would have a score of $1$ while the other would have a score of $n - 1$. This means that path graphs are not ($n - 2$)-balanced for two players, which is, considering balancedness, ``the worst it can get''. \para{Overview of the Paper.} Preliminaries are provided in Section~\ref{section:preliminaries}. Then, motivated by our desire to identify graphs which are balanced and to further understand which factors influence graph balancedness, in Section~\ref{section:randomgraphs} we consider random graphs and study their balancedness properties. We show, in Theorem~\ref{thm:random graphs are balanced}, that random graphs have good balancedness properties. Inspecting our proof of Theorem~\ref{thm:random graphs are balanced}, it looks as if what causes random graphs to be balanced is the fact that they are well-connected in a uniform way; well-behaved (in the above-mentioned manner) graphs are usually referred to as expander graphs, and thus, in Section~\ref{section:expanders} we consider balancedness properties of expander graphs. Specifically, we consider spectral expander graphs (for a precise definition, see Section~\ref{section:preliminaries}), and mention that, with high probability, a random graph is a spectral expander (see <|cite_start|> (Reference: Spectral techniques applied to sparse random graphs: We analyze the eigenvalue gap for the adjacency matrices of sparse random graphs. Let λ1 ≥ … ≥ λn be the eigenvalues of an n‐vertex graph, and let λ = max[λ2,|λn|]. Let c be a large enough constant. For graphs of average degree d = c log n it is well known that λ1 ≥ d, and we show that $\lambda = O(\sqrt{d})$. For d = c it is no longer true that $\lambda = O(\sqrt{d})$, but we show that by removing a small number of vertices of highest degree in G, one gets a graph G′ for which $\lambda = O(\sqrt{d})$. Our proofs are based on the techniques of Friedman Kahn and Szemeredi from STOC 1989, who proved similar results for regular graphs. Our results are useful for extending the analysis of certain heuristics to sparser instances of NP‐hard problems. We illustrate this by removing some unnecessary logarithmic factors in the density of k‐SAT formulas that are refuted by the algorithm of Goerdt and Krivelevich from STACS 2001. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005) <|cite_end|> <|cite_start|> (Reference: The eigenvalues of random symmetric matrices: ) <|cite_end|> <|cite_start|> (Reference: On the second eigenvalue of random regular graphs: The following is an extended abstract for two papers, one written by Kahn and Szemeredi, the other written by Friedman, which have been combined at the request of the STOC committee. The introduction was written jointly, the second section by Kahn and Szemeredi, and the third by Friedman. Let G be a d-regular (i.e. each vertex has degree d) undirected graph on n nodes. Its adjacency matrix is symmetric, and therefore has real eigenvalues $\lambda_1 = d \geq \lambda_2 \geq \cdots \geq \lambda_n$, with $|\lambda_i| \leq d$. Graphs for which $\lambda_2$ and) <|cite_end|>, e.g., for a proof of this fact). In Theorem~\ref{thm:expanders are balanced}, we show that expanders have good balancedness properties.
Somewhat surprisingly, though, it turns out that, with respect to their balancedness, expanders are inferior to random graphs; in particular, for some values of the average degree of these graphs, some random graphs are balanced while some expanders are not: in Theorem~\ref{thm:expander which is not balanced} we show an example of such an expander which is not balanced. This means that, even though the expansion of random graphs influences their balancedness, it is not sufficient, and the inherent randomness of these graphs is also important for their balancedness. In Section~\ref{section:algorithms} we consider the algorithmic problem of deciding whether a given graph is balanced. We begin that section by proving that the corresponding combinatorial problem is intractable. Then, building upon the analysis described in Sections~\ref{section:randomgraphs} (for random graphs) and~\ref{section:expanders} (for expander graphs), we describe, in Sections~\ref{section:algorithmone} and~\ref{section:algorithmtwo}, two efficient algorithms which, given a graph, provide a \emph{balancedness certificate}: in Section~\ref{section:algorithmone} we discuss an algorithm, based on graph traversal, which produces, in $O(n^2)$ time, a certificate that a graph is balanced; for random graphs, it produces such a certificate with high probability. In Section~\ref{section:algorithmtwo} we discuss a different algorithm, based on spectral analysis, which produces, in $O(d \cdot n \log n)$ time (where $d$ is the average degree), a randomized certificate that a graph is balanced; for random graphs, it produces such a certificate with high probability. We conclude the paper in Section~\ref{section:outlook} with a discussion on directions for future research.
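As a quick sanity check of the path-graph example from the introduction, the illustrative sketch given there reproduces the stated scores (again, the helper names are our own, not the paper's):
\begin{verbatim}
# Path graph P_10: vertices 1..10, edges {v_i, v_{i+1}}.
n = 10
path = {i: [j for j in (i - 1, i + 1) if 1 <= j <= n]
        for i in range(1, n + 1)}

print(facility_scores(path, [1, 2]))                 # {1: 1.0, 2: 9.0}
print(is_z_balanced(path, [1, 2], 0))                # False
print(facility_scores(path, [n // 2, n // 2 + 1]))   # {5: 5.0, 6: 5.0}
print(is_z_balanced(path, [n // 2, n // 2 + 1], 0))  # True (n is even)
\end{verbatim}
<|paper_end|>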
[ "<|reference_start|> Voronoi Game on Graphs: \\textit{Voronoi game} is a geometric model of competitive facility location problem played between two players. Users are generally modeled as points uniformly distributed on a given underlying space. Each player chooses a set of points in the underlying space to place their facilities. Each user avails service from its nearest facility. Service zone of a facility consists of the set of users which are closer to it than any other facility. Payoff of each player is defined by the quantity of users served by all of its facilities. The objective of each player is to maximize their respective payoff. In this paper we consider the two players {\\it Voronoi game} where the underlying space is a road network modeled by a graph. In this framework we consider the problem of finding $k$ optimal facility locations of Player 2 given any placement of $m$ facilities by Player 1. Our main result is a dynamic programming based polynomial time algorithm for this problem on tree network. On the other hand, we show that the problem is strongly $\\mathcal{NP}$-complete for graphs. This proves that finding a winning strategy of P2 is $\\mathcal{NP}$-complete. Consequently, we design an $1-\\frac{1}{e}$ factor approximation algorithm, where $e \\approx 2.718$. <|reference_end|>", "<|reference_start|> The Competitive Facility Location Problem in a Duopoly: Connections to the 1-Median Problem: <|reference_end|>", "<|reference_start|> Balancing graph voronoi diagrams: Many facility location problems are concerned with minimizing operation and transportation costs by partitioning territory into regions of similar size, each of which is served by a facility. For many optimization problems, the overall cost can be reduced by means ofa partitioning into balanced subsets, especially in those cases where the cost associated with a subset is superlinear in its size.In this paper, we consider the problem of generating a Voronoi partition of a discrete graph so as to achieve balance conditions on the region sizes.Through experimentation, we first establishthat the region sizes of randomly-generated graph Voronoi diagrams vary greatly in practice. We then show how to achieve a balanced partition of a graph via Voronoi site resampling. For bounded-degree graphs, where each of the $n$ nodes has degree at most $d$, and for an initial randomly-chosen set of $s$ Voronoi nodes,we prove that, by extending the set of Voronoi nodes using an algorithm by Thorup and Zwick, each Voronoi region has size at most $4dn/s+1$ nodes, and that the expected size of the extended set of Voronoi nodes is at most $2s\\log n$. <|reference_end|>", "<|reference_start|> On the second eigenvalue of random regular graphs: The following is an extended abstract for two papers, one written by Kahn and Szemeredi, the other written by Friedman, which have been combined at the request of the STOC committee. The introduction was written jointly, the second section by Kahn and Szemeredi, and the third by Friedman, Let G be a d-regular (i.e. each vertex has degree d) undirected graph on n nodes. It’s adjacency matrix is symmetric, and therefore has real eigenvalues Ar = d 2 x2 >_ *-. >_ X, with IX,] 5 d. Graphs for which X2 and <|reference_end|>" ]
[ 0, 5, 12, 17 ]
{"<|multi_cite_1_1|>": "arxiv-64258", "<|multi_cite_1_2|>": "arxiv-675610", "<|multi_cite_1_3|>": "ss-1398884", "<|multi_cite_1_4|>": "ss-802639", "<|multi_cite_2_1|>": "ss-802640", "<|multi_cite_2_2|>": "ss-1007145", "<|multi_cite_2_3|>": "arxiv-71793", "<|multi_cite_2_4|>": "ss-1719251", "<|multi_cite_2_5|>": "ss-1316414", "<|cite_3|>": "ss-802641", "<|multi_cite_4_1|>": "ss-802642", "<|multi_cite_4_2|>": "ss-802643", "<|cite_5|>": "ss-1146010", "<|cite_6|>": "ss-2107736", "<|cite_7|>": "ss-1294329", "<|multi_cite_8_1|>": "ss-1513706", "<|multi_cite_8_2|>": "ss-802644", "<|multi_cite_8_3|>": "ss-1933865"}
2403.15456
<|paper_start|> Title: WoLF: Wide-scope Large Language Model Framework for CXR Understanding Abstract: WoLF: Wide-scope Large Language Model Framework for CXR Understanding: Significant methodological strides have been made toward Chest X-ray (CXR) understanding via modern vision-language models (VLMs), demonstrating impressive Visual Question Answering (VQA) and CXR report generation abilities. However, existing CXR understanding frameworks still possess several procedural caveats. (1) Previous methods solely use CXR reports, which are insufficient for comprehensive Visual Question Answering (VQA), especially when additional health-related data like medication history and prior diagnoses are needed. (2) Previous methods use raw CXR reports, which are often arbitrarily structured. While modern language models can understand various text formats, restructuring reports for clearer, organized anatomy-based information could enhance their usefulness. (3) Current evaluation methods for CXR-VQA primarily emphasize linguistic correctness, lacking the capability to offer nuanced assessments of the generated answers. In this work, to address the aforementioned caveats, we introduce WoLF, a Wide-scope Large Language Model Framework for CXR understanding. To resolve (1), we capture multi-faceted records of patients, which are utilized for accurate diagnoses in real-world clinical scenarios. Specifically, we adopt the Electronic Health Records (EHR) to generate instruction-following data suited for CXR understanding. Regarding (2), we enhance report generation performance by decoupling knowledge in CXR reports based on anatomical structure even within the attention step via masked attention. To address (3), we introduce an AI-evaluation protocol optimized for assessing the capabilities of LLM. Through extensive experimental validations, WoLF demonstrates superior performance over other models on MIMIC-CXR in the AI-evaluation arena about VQA (up to +9.47%p mean score) and by metrics about report generation (+7.3%p BLEU-1). Introduction \begin{figure}[!t] \includegraphics[width=\textwidth]{figures/figure1.pdf} \caption{Comparisons with other models for a VQA scenario given a CXR image. Green thumbs indicate the quality of the response is good (accurate, helpful), while red thumbs indicate bad (inaccurate, evasive), with respect to target answers.} \label{fig1} \end{figure} Recent years have witnessed significant progress in the field of Chest X-ray (CXR) understanding, particularly through downstream tasks like Visual Question Answering (VQA) and automated report generation. Despite considerable advancements, we observe that models for Chest X-ray (CXR) understanding persistently encounter several challenges from a framework standpoint. \underline{$\mathbf{\mathfrak{(1)}}$} Existing approaches <|cite_start|> (Reference: XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models: The latest breakthroughs in large vision-language models, such as Bard and GPT-4, have showcased extraordinary abilities in performing a wide range of tasks. Such models are trained on massive datasets comprising billions of public image-text pairs with diverse tasks. However, their performance on task-specific domains, such as radiology, is still under-investigated and potentially limited due to a lack of sophistication in understanding biomedical images. On the other hand, conversational medical models have exhibited remarkable success but have mainly focused on text-based analysis.
In this paper, we introduce XrayGPT, a novel conversational medical vision-language model that can analyze and answer open-ended questions about chest radiographs. Specifically, we align both medical visual encoder (MedClip) with a fine-tuned large language model (Vicuna), using a simple linear transformation. This alignment enables our model to possess exceptional visual conversation abilities, grounded in a deep understanding of radiographs and medical domain knowledge. To enhance the performance of LLMs in the medical context, we generate ~217k interactive and high-quality summaries from free-text radiology reports. These summaries serve to enhance the performance of LLMs through the fine-tuning process. Our approach opens up new avenues the research for advancing the automated analysis of chest radiographs. Our open-source demos, models, and instruction sets are available at: https://github.com/mbzuai-oryx/XrayGPT.) <|cite_end|> predominantly depend on CXR reports for supervised learning, overlooking the crucial aspect of incorporating patients' personalized health records, which support diagnoses in real-world clinical scenarios. \underline{$\mathbf{\mathfrak{(2)}}$} Additionally, the performance of report generation is constrained by the unstructured format of CXR reports. Unstructured raw CXR reports, exemplified by Fig.~\ref{fig2}(b), impede the ability of models to learn CXR anatomical structures in supervised learning settings, owing to their non-intuitive format. \underline{$\mathbf{\mathfrak{(3)}}$} Lastly, the existing evaluation metrics for CXR-VQA primarily focus on the correctness of answers, falling short of assessing the generative language models' comprehensive understanding of CXR imagery. To tackle the issues illustrated above, we introduce WoLF, a \textbf{W}ide-sc\textbf{o}pe \textbf{L}arge Language Model \textbf{F}ramework for CXR understanding. We will delve into the specifics of our approach, detailing the innovative solutions we develop for each challenge: \underline{$\mathbf{\mathfrak{(1)}}$} For more in-depth use of such systems in practice, as exemplified in Fig.~\ref{fig1}, the model must consider various patient records, including Electronic Health Records (EHR). Thus, we hypothesize that incorporating patients' personalized EHR records can enhance the CXR understanding of vision-language models. To validate this hypothesis, we introduce {\it Health-specific Instruction Tuning (HIT)} to address the existing limitation that training relies merely on CXR reports. \underline{$\mathbf{\mathfrak{(2)}}$} Unorganized CXR reports restrict the advancement in report generation tasks. To push the envelope, we present {\it \anatomy{} (\shortanatomy{})} to separate the reports into anatomy-specific findings. The generated targets give a model a direct understanding of a specific anatomical structure, without being disturbed by other structures. Synchronized with \shortanatomy{}, we introduce {\it Anatomy-localizing Masked Attention (AMA)} that promotes independent learning on each anatomical structure. \underline{$\mathbf{\mathfrak{(3)}}$} Current evaluation methods for CXR-VQA mostly emphasize linguistic correctness. These methods are incapable of assessing the responses of generative language models across a wide range of dimensions. Inspired by <|cite_start|> (Reference: RLAIF vs.
RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback: Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences, but gathering high-quality preference labels is expensive. RL from AI Feedback (RLAIF), introduced in Bai et al., offers a promising alternative that trains the reward model (RM) on preferences generated by an off-the-shelf LLM. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, we show that RLAIF achieves comparable performance to RLHF. Furthermore, we take a step towards"self-improvement"by demonstrating that RLAIF can outperform a supervised fine-tuned baseline even when the AI labeler is the same size as the policy, or even the exact same checkpoint as the initial policy. Finally, we introduce direct-RLAIF (d-RLAIF) - a technique that circumvents RM training by obtaining rewards directly from an off-the-shelf LLM during RL, which achieves superior performance to canonical RLAIF. Our results suggest that RLAIF can achieve performance on-par with using human feedback, offering a potential solution to the scalability limitations of RLHF.) <|cite_end|> <|cite_start|> (Reference: G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment: The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose preliminary analysis on the behavior of LLM-based evaluators, and highlight the potential issue of LLM-based evaluators having a bias towards the LLM-generated texts. The code is at https://github.com/nlpyang/geval) <|cite_end|>, we provide a novel {\it AI-evaluation protocol} that is well-suited to generative language models across dimensions of \textit{Accuracy, Helpfulness, Relevance, Hallucination}, and \textit{Universality}. Through our extensive AI evaluation, we can discern the extent to which models understand CXR from their VQA results, rather than just evaluating the correctness of the models' responses. To sum up, the contributions of our model can be described at the \textit{macro} and \textit{micro} levels; \noindent Macroscopically, our framework covers data reformulation, a training method to improve CXR understanding, and an AI-evaluation protocol. Microscopically, \textbf{(\lowercase\expandafter{\romannumeral1})} we present a novel instruction-following data tuning method called Health-specific Instruction Tuning (HIT) designed for the interplay between personalized health records and visual representations of CXR.
\textbf{(\lowercase\expandafter{\romannumeral2})} We propose \anatomy{} (\shortanatomy{}) for hierarchically breaking down a radiology report by anatomical structures. Furthermore, we present Anatomy-localizing Masked Attention (AMA) to support the merits of decoupled data from \shortanatomy{}, enabling specialized visual-language comprehension for each anatomical structure. \textbf{(\lowercase\expandafter{\romannumeral3})} As the final step of the framework, we introduce AI-evaluation for advanced analysis of our model. This evaluates the broad capabilities of generative language models on the VQA task. \textbf{(\lowercase\expandafter{\romannumeral4})} Through these methods, our study achieved state-of-the-art performance in the report generation and VQA tasks on MIMIC-CXR <|cite_start|> (Reference: MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports: ) <|cite_end|> and IU-Xray <|cite_start|> (Reference: Preparing a collection of radiology examinations for distribution and retrieval: OBJECTIVE Clinical documents made available for secondary use play an increasingly important role in discovery of clinical knowledge, development of research methods, and education. An important step in facilitating secondary use of clinical document collections is easy access to descriptions and samples that represent the content of the collections. This paper presents an approach to developing a collection of radiology examinations, including both the images and radiologist narrative reports, and making them publicly available in a searchable database. MATERIALS AND METHODS The authors collected 3996 radiology reports from the Indiana Network for Patient Care and 8121 associated images from the hospitals' picture archiving systems. The images and reports were de-identified automatically and then the automatic de-identification was manually verified. The authors coded the key findings of the reports and empirically assessed the benefits of manual coding on retrieval. RESULTS The automatic de-identification of the narrative was aggressive and achieved 100% precision at the cost of rendering a few findings uninterpretable. Automatic de-identification of images was not quite as perfect. Images for two of 3996 patients (0.05%) showed protected health information. Manual encoding of findings improved retrieval precision. CONCLUSION Stringent de-identification methods can remove all identifiers from text radiology reports. DICOM de-identification of images does not remove all identifying information and needs special attention to images scanned from film. Adding manual coding to the radiologist narrative reports significantly improved relevancy of the retrieved clinical documents. The de-identified Indiana chest X-ray collection is available for searching and downloading from the National Library of Medicine (http://openi.nlm.nih.gov/).) <|cite_end|>. \begin{figure}[t!] \includegraphics[width=\textwidth]{figures/figure2.pdf} \label{fig2_a} \caption{Data generation overview of HIT and \shortanatomy{}: (a) We generate a health-specific instruction-following dataset. In (a), cyan and orange sequences are queries about EHR and findings in CXR, respectively. (b) We reorganize original CXR reports into sequences of anatomy-specific structures through the use of a knowledge graph, $G$.} \label{fig2} \end{figure}
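To illustrate the idea behind Anatomy-localizing Masked Attention, the following is a minimal sketch (our own, not the authors' released code) of an attention mask under which report tokens attend only within their own anatomical section, while every token can still attend to the leading image/instruction tokens; the section labels and the boolean mask convention (True = blocked, as in PyTorch) are illustrative assumptions.
\begin{verbatim}
import torch

def anatomy_attention_mask(section_ids, n_prefix):
    # section_ids: (T,) anatomical-section id per report token,
    # e.g. 0 = lung, 1 = heart, 2 = mediastinum.
    # n_prefix: number of leading image/instruction tokens that
    # remain visible to all tokens.
    T = n_prefix + section_ids.numel()
    mask = torch.zeros(T, T, dtype=torch.bool)
    same = section_ids.unsqueeze(0) == section_ids.unsqueeze(1)
    # Report tokens may attend only to same-section report tokens;
    # columns for the prefix tokens stay unblocked.
    mask[n_prefix:, n_prefix:] = ~same
    return mask

# Toy usage: 4 image/instruction tokens, then report tokens for two
# lung findings, two heart findings, and one mediastinum finding.
ids = torch.tensor([0, 0, 1, 1, 2])
m = anatomy_attention_mask(ids, n_prefix=4)
print(m.int())  # block-diagonal over the report tokens
\end{verbatim}
<|paper_end|>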
[ "<|reference_start|> XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models: The latest breakthroughs in large vision-language models, such as Bard and GPT-4, have showcased extraordinary abilities in performing a wide range of tasks. Such models are trained on massive datasets comprising billions of public image-text pairs with diverse tasks. However, their performance on task-specific domains, such as radiology, is still under-investigated and potentially limited due to a lack of sophistication in understanding biomedical images. On the other hand, conversational medical models have exhibited remarkable success but have mainly focused on text-based analysis. In this paper, we introduce XrayGPT, a novel conversational medical vision-language model that can analyze and answer open-ended questions about chest radiographs. Specifically, we align both medical visual encoder (MedClip) with a fine-tuned large language model (Vicuna), using a simple linear transformation. This alignment enables our model to possess exceptional visual conversation abilities, grounded in a deep understanding of radiographs and medical domain knowledge. To enhance the performance of LLMs in the medical context, we generate ~217k interactive and high-quality summaries from free-text radiology reports. These summaries serve to enhance the performance of LLMs through the fine-tuning process. Our approach opens up new avenues the research for advancing the automated analysis of chest radiographs. Our open-source demos, models, and instruction sets are available at: https://github.com/mbzuai-oryx/XrayGPT. <|reference_end|>", "<|reference_start|> RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback: Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences, but gathering high-quality preference labels is expensive. RL from AI Feedback (RLAIF), introduced in Bai et al., offers a promising alternative that trains the reward model (RM) on preferences generated by an off-the-shelf LLM. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, we show that RLAIF achieves comparable performance to RLHF. Furthermore, we take a step towards\"self-improvement\"by demonstrating that RLAIF can outperform a supervised fine-tuned baseline even when the AI labeler is the same size as the policy, or even the exact same checkpoint as the initial policy. Finally, we introduce direct-RLAIF (d-RLAIF) - a technique that circumvents RM training by obtaining rewards directly from an off-the-shelf LLM during RL, which achieves superior performance to canonical RLAIF. Our results suggest that RLAIF can achieve performance on-par with using human feedback, offering a potential solution to the scalability limitations of RLHF. <|reference_end|>", "<|reference_start|> G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment: The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. 
However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose preliminary analysis on the behavior of LLM-based evaluators, and highlight the potential issue of LLM-based evaluators having a bias towards the LLM-generated texts. The code is at https://github.com/nlpyang/geval <|reference_end|>", "<|reference_start|> MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports: <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|multi_cite_1_2|>": "arxiv-515456", "<|multi_cite_2_1|>": "ss-1194779", "<|multi_cite_2_2|>": "arxiv-493002", "<|cite_3|>": "ss-1350695", "<|cite_4|>": "ss-949554"}
2311.12842
<|paper_start|> Title: Multimodal Identification of Alzheimer's Disease: A Review Abstract: Multimodal Identification of Alzheimer's Disease: A Review: Alzheimer's disease is a progressive neurological disorder characterized by cognitive impairment and memory loss. With the increasing aging population, the incidence of AD is continuously rising, making early diagnosis and intervention an urgent need. In recent years, a considerable number of teams have applied computer-aided diagnostic techniques to early classification research of AD. Most studies have utilized imaging modalities such as magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalogram (EEG). However, there have also been studies that attempted to use other modalities as input features for the models, such as sound, posture, biomarkers, cognitive assessment scores, and their fusion. Experimental results have shown that the combination of multiple modalities often leads to better performance compared to a single modality. Therefore, this paper will focus on different modalities and their fusion, thoroughly elucidate the mechanisms of various modalities, explore which methods should be combined to better harness their utility, and analyze and summarize the literature in the field of early classification of AD in recent years, in order to explore more possibilities for modality combinations. Introduction \label{sec:introduction} \quad As one of the most common neurodegenerative disorders, Alzheimer's disease (AD) affects the elderly around the world. According to the latest data from the World Health Organization (WHO), there were approximately 55 million dementia patients in 2019, and this number is expected to increase to 139 million by 2050. Among them, Alzheimer's disease is the most common cause of dementia, accounting for approximately 60-80\% of dementia cases. Alzheimer's disease is a degenerative neurological disease characterized by progressive loss of cognition and memory. Currently, the academic community generally believes that AD is related to neurofibrillary tangles (NFT) and extracellular Amyloid-β ($A\beta$) deposition, which cause loss of or damage to neurons and synapses; inflammation and brain tissue atrophy are other associated changes <|cite_start|> (Reference: 2023 Alzheimer's disease facts and figures: This article describes the public health impact of Alzheimer's disease, including prevalence and incidence, mortality and morbidity, use and costs of care, and the overall impact on family caregivers, the dementia workforce and society. The Special Report examines the patient journey from awareness of cognitive changes to potential treatment with drugs that change the underlying biology of Alzheimer's. An estimated 6.7 million Americans age 65 and older are living with Alzheimer's dementia today. This number could grow to 13.8 million by 2060 barring the development of medical breakthroughs to prevent, slow or cure AD. Official death certificates recorded 121,499 deaths from AD in 2019, and Alzheimer's disease was officially listed as the sixth‐leading cause of death in the United States. In 2020 and 2021, when COVID‐19 entered the ranks of the top ten causes of death, Alzheimer's was the seventh‐leading cause of death. Alzheimer's remains the fifth‐leading cause of death among Americans age 65 and older. Between 2000 and 2019, deaths from stroke, heart disease and HIV decreased, whereas reported deaths from AD increased more than 145%.
This trajectory of deaths from AD was likely exacerbated by the COVID‐19 pandemic in 2020 and 2021. More than 11 million family members and other unpaid caregivers provided an estimated 18 billion hours of care to people with Alzheimer's or other dementias in 2022. These figures reflect a decline in the number of caregivers compared with a decade earlier, as well as an increase in the amount of care provided by each remaining caregiver. Unpaid dementia caregiving was valued at $339.5 billion in 2022. Its costs, however, extend to family caregivers’ increased risk for emotional distress and negative mental and physical health outcomes — costs that have been aggravated by COVID‐19. Members of the paid health care workforce are involved in diagnosing, treating and caring for people with dementia. In recent years, however, a shortage of such workers has developed in the United States. This shortage — brought about, in part, by COVID‐19 — has occurred at a time when more members of the dementia care workforce are needed. Therefore, programs will be needed to attract workers and better train health care teams. Average per‐person Medicare payments for services to beneficiaries age 65 and older with AD or other dementias are almost three times as great as payments for beneficiaries without these conditions, and Medicaid payments are more than 22 times as great. Total payments in 2023 for health care, long‐term care and hospice services for people age 65 and older with dementia are estimated to be $345 billion. The Special Report examines whether there will be sufficient numbers of physician specialists to provide Alzheimer's care and treatment now that two drugs are available that change the underlying biology of Alzheimer's disease.) <|cite_end|>. Alzheimer's disease can cause changes in brain structure and function, affecting patients in multiple aspects such as speech, emotion, and behavior. As the condition worsens, patients often become disconnected from society, lose their ability to take care of themselves, and burden their families and society. \quad There is still no way to completely cure dementia or reverse its progression. However, early, accurate, and comprehensive diagnosis of Alzheimer's disease can enable timely intervention and slow down the progression of the disease. Experts are increasingly recognizing the importance of early diagnosis of Alzheimer's disease. At present, the clinical examination methods for Alzheimer's disease mainly include cognitive assessment, non-neuroimaging biomarkers, voice and speech examination, posture examination, and neuroimaging examination, among others. \quad With the rapid development of large language models (LLMs) like ChatGPT, there has been an emergence of conversational systems based on natural language processing techniques. These systems, including HuatuoGPT <|cite_start|> (Reference: HuatuoGPT, towards Taming Language Model to Be a Doctor: In this paper, we present HuatuoGPT, a large language model (LLM) for medical consultation. The core recipe of HuatuoGPT is to leverage both \textit{distilled data from ChatGPT} and \textit{real-world data from doctors} in the supervised fine-tuned stage. The responses of ChatGPT are usually detailed, well-presented and informative while it cannot perform like a doctor in many aspects, e.g. for integrative diagnosis. We argue that real-world data from doctors would be complementary to distilled data in the sense the former could tame a distilled language model to perform like doctors.
To better leverage the strengths of both data, we train a reward model to align the language model with the merits that both data bring, following an RLAIF (reinforced learning from AI feedback) fashion. To evaluate and benchmark the models, we propose a comprehensive evaluation scheme (including automatic and manual metrics). Experimental results demonstrate that HuatuoGPT achieves state-of-the-art results in performing medical consultation among open-source LLMs in GPT-4 evaluation, human evaluation, and medical benchmark datasets. It is worth noting that by using additional real-world data and RLAIF, the distilled language model (i.e., HuatuoGPT) outperforms its teacher model ChatGPT in most cases. Our code, data, and models are publicly available at \url{https://github.com/FreedomIntelligence/HuatuoGPT}. The online demo is available at \url{https://www.HuatuoGPT.cn/}.) <|cite_end|>, BenTsao <|cite_start|> (Reference: HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge: Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks. Nevertheless, LLMs have not yet performed optimally in biomedical domain tasks due to the need for medical expertise in the responses. In response to this challenge, we propose HuaTuo, a LLaMA-based model that has been supervised-fine-tuned with generated QA (Question-Answer) instances. The experimental results demonstrate that HuaTuo generates responses that possess more reliable medical knowledge. Our proposed HuaTuo model is accessible at https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese.) <|cite_end|>, and DoctorGLM <|cite_start|> (Reference: DoctorGLM: Fine-tuning your Chinese Doctor is not a Herculean Task: The recent progress of large language models (LLMs), including ChatGPT and GPT-4, in comprehending and responding to human instructions has been remarkable. Nevertheless, these models typically perform better in English and have not been explicitly trained for the medical domain, resulting in suboptimal precision in diagnoses, drug recommendations, and other medical advice. Additionally, training and deploying a dialogue model is still believed to be impossible for hospitals, hindering the promotion of LLMs. To tackle these challenges, we have collected databases of medical dialogues in Chinese with ChatGPT's help and adopted several techniques to train an easy-deploy LLM. Remarkably, we were able to fine-tune the ChatGLM-6B on a single A100 80G in 13 hours, which means having a healthcare-purpose LLM can be very affordable. DoctorGLM is currently an early-stage engineering attempt and contain various mistakes. We are sharing it with the broader community to invite feedback and suggestions to improve its healthcare-focused capabilities: https://github.com/xionghonglin/DoctorGLM.) <|cite_end|> have shown promising performance in medical diagnosis and consultation. Interestingly, these chatbot-based diagnostic systems built on ChatGPT-style models exhibit relatively high intelligence. However, due to the complexity and multifactorial nature of Alzheimer's disease, the reliability of their inferences is often questionable, as they rely solely on textual input provided by the patient. The characteristics of a single modality may not be sufficient to support accurate early diagnosis, and the changes caused by AD may also manifest similarly in other diseases.
A better approach would be to incorporate multiple modalities from the patient, including text, voice, images, and more, as diagnostic evidence. Multimodal diagnostic methods have emerged in response to this need. Building on the success of large vision and language models, multimodal diagnostic systems promise to be more robust and reliable. Multimodal diagnosis of AD is a challenging problem with significant implications for the future. Therefore, this review discusses methods for multimodal AD diagnosis. Below, we briefly introduce the diagnostic methods developed for each modality to date, and then discuss methods for the multimodal diagnosis of Alzheimer's disease; more detailed content is described in the main text. \subsection{Neuroimaging} \quad Clinical studies have shown that AD brings key changes to the patient's brain, such as the accumulation of the protein fragment beta-amyloid into clumps (called beta-amyloid plaques) outside neurons and the accumulation of an abnormal form of the protein tau (called tau tangles) inside neurons <|cite_start|> (Reference: {2023 Alzheimer's disease facts and figures: This article describes the public health impact of Alzheimer's disease, including prevalence and incidence, mortality and morbidity, use and costs of care, and the overall impact on family caregivers, the dementia workforce and society. The Special Report examines the patient journey from awareness of cognitive changes to potential treatment with drugs that change the underlying biology of Alzheimer's. An estimated 6.7 million Americans age 65 and older are living with Alzheimer's dementia today. This number could grow to 13.8 million by 2060 barring the development of medical breakthroughs to prevent, slow or cure AD. Official death certificates recorded 121,499 deaths from AD in 2019, and Alzheimer's disease was officially listed as the sixth‐leading cause of death in the United States. In 2020 and 2021, when COVID‐19 entered the ranks of the top ten causes of death, Alzheimer's was the seventh‐leading cause of death. Alzheimer's remains the fifth‐leading cause of death among Americans age 65 and older. Between 2000 and 2019, deaths from stroke, heart disease and HIV decreased, whereas reported deaths from AD increased more than 145%. This trajectory of deaths from AD was likely exacerbated by the COVID‐19 pandemic in 2020 and 2021. More than 11 million family members and other unpaid caregivers provided an estimated 18 billion hours of care to people with Alzheimer's or other dementias in 2022. These figures reflect a decline in the number of caregivers compared with a decade earlier, as well as an increase in the amount of care provided by each remaining caregiver. Unpaid dementia caregiving was valued at $339.5 billion in 2022. Its costs, however, extend to family caregivers’ increased risk for emotional distress and negative mental and physical health outcomes — costs that have been aggravated by COVID‐19. Members of the paid health care workforce are involved in diagnosing, treating and caring for people with dementia. In recent years, however, a shortage of such workers has developed in the United States. This shortage — brought about, in part, by COVID‐19 — has occurred at a time when more members of the dementia care workforce are needed. Therefore, programs will be needed to attract workers and better train health care teams.
Average per‐person Medicare payments for services to beneficiaries age 65 and older with AD or other dementias are almost three times as great as payments for beneficiaries without these conditions, and Medicaid payments are more than 22 times as great. Total payments in 2023 for health care, long‐term care and hospice services for people age 65 and older with dementia are estimated to be $345 billion. The Special Report examines whether there will be sufficient numbers of physician specialists to provide Alzheimer's care and treatment now that two drugs are available that change the underlying biology of Alzheimer's disease.) <|cite_end|>. Brain atrophy is another change, due to cell loss and a decreased ability of cells to metabolize glucose (the brain's main fuel). Thus, AD is associated with pathological amyloid deposition, structural brain atrophy, and altered brain metabolism <|cite_start|> (Reference: Early diagnosis of Alzheimer's disease based on deep learning: A systematic review.: ) <|cite_end|>. In recent decades, major advances in neuroimaging have made it one of the most important sources of biomarkers for the diagnosis of AD. Neuroimaging techniques offer valuable insights into the human brain. Structural magnetic resonance imaging (MRI) enables the detection of brain atrophy, while functional imaging modalities like positron emission tomography (PET) and functional MRI (fMRI) are capable of identifying hypometabolism <|cite_start|> (Reference: Deep learning for Alzheimer's disease diagnosis: A survey: ) <|cite_end|>. Furthermore, metrics such as mean diffusivity (MD) and fractional anisotropy (FA) measured by diffusion tensor imaging (DTI) provide indications of a person's cognitive status. Additionally, electroencephalography (EEG) allows for the assessment of communication activity between nerve cells, while magnetoencephalography (MEG) measures the magnetic fields generated by currents flowing within neurons, providing insights into brain activity <|cite_start|> (Reference: On the early diagnosis of Alzheimer's Disease from multimodal signals: A survey: ) <|cite_end|>. By employing these diverse techniques, researchers gain a comprehensive understanding of the brain's structure, function, and cognitive processes, which helps them develop diagnostic methods. \quad Moreover, multimodal imaging studies offer several advantages over unimodal imaging studies. Multimodal imaging studies can examine the temporal and topographical relations between many pathological variables, thus improving our understanding of pathophysiological interactions in the body. This approach also allows direct comparison of the diagnostic capabilities of different imaging modalities in the same patient sample <|cite_start|> (Reference: Multimodal imaging in Alzheimer's disease: validity and usefulness for early detection: ) <|cite_end|>. Lu et al. <|cite_start|> (Reference: Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer’s Disease using structural MR and FDG-PET images: ) <|cite_end|> found that a network classifier constructed using a combination of FDG-PET and structural MRI images outperformed a network constructed using structural MRI or FDG-PET alone. Liu et al.
<|cite_start|> (Reference: Multimodal Neuroimaging Feature Learning for Multiclass Diagnosis of Alzheimer’s Disease: The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease modifying agents become available, early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis of AD, a bottleneck in the diagnostic performance was shown in previous methods, due to the lacking of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary classification and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed.) <|cite_end|> used a zero-masking strategy for data fusion to extract complementary information from MR and PET images and classify AD patients into four AD stages. \quad Multimodal imaging studies are equally challenging because of the large number of imaging markers that are potential candidates for predicting disease conversion. This situation leads to two related problems: first, as the number of candidate features increases, the risk of overfitting increases; and second, collinearity among the predictor variables becomes more severe <|cite_start|> (Reference: Multimodal imaging in Alzheimer's disease: validity and usefulness for early detection: ) <|cite_end|>. Whether these problems can be solved is key to multimodal imaging research.
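To make the zero-masking fusion strategy described above concrete, the following is a minimal NumPy sketch of one plausible reading of the idea: features from two modalities are concatenated, and during training one modality block is randomly zeroed out, forcing the fused model to exploit complementary information from the remaining modality. The array shapes, masking probability, and function name are illustrative assumptions, not details taken from the cited work.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def zero_mask_fusion(x_mri, x_pet, p_mask=0.5):
    """Concatenate two modalities and randomly zero one block per sample.

    The corrupted array can serve as the input of a denoising-style
    fusion network whose target is the uncorrupted concatenation, so the
    model must recover one modality from the other.
    """
    x = np.concatenate([x_mri, x_pet], axis=1)  # naive feature fusion
    corrupted = x.copy()
    n_mri = x_mri.shape[1]
    for i in range(x.shape[0]):
        if rng.random() < p_mask:
            if rng.random() < 0.5:
                corrupted[i, :n_mri] = 0.0   # mask the MRI block
            else:
                corrupted[i, n_mri:] = 0.0   # mask the PET block
    return corrupted, x  # (network input, reconstruction target)

# Toy usage: 8 subjects, 90 ROI features per modality (made-up numbers).
x_mri = rng.normal(size=(8, 90))
x_pet = rng.normal(size=(8, 90))
corrupted, target = zero_mask_fusion(x_mri, x_pet)
\end{verbatim}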
\subsection{Cognitive Assessment} \quad Based on the patient's cognitive abilities, several tests are available to assess the severity of AD and mild cognitive impairment (MCI). These include the MMSE (Mini-Mental State Examination) <|cite_start|> (Reference: “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician: ) <|cite_end|>, a simplified, score-based mental state test that provides a quantitative assessment of cognitive state; the MoCA (Montreal Cognitive Assessment) <|cite_start|> (Reference: The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment: Objectives: To develop a 10‐minute cognitive screening tool (Montreal Cognitive Assessment, MoCA) to assist first‐line physicians in detection of mild cognitive impairment (MCI), a clinical state that often progresses to dementia.) <|cite_end|>, which provides a rapid assessment of different levels of cognitive impairment; and the ADAS-Cog (Alzheimer's Disease Assessment Scale – Cognitive subscale) <|cite_start|> (Reference: A new rating scale for Alzheimer’s disease.: A new rating instrument, the Alzheimer's Disease Assessment Scale, was designed specifically to evaluate the severity of cognitive and noncognitive behavioral dysfunctions characteristic of persons with Alzheimer's disease. Item descriptions, administration procedures, and scoring are outlined. Twenty-seven subjects with Alzheimer's disease and 28 normal elderly subjects were rated on 40 items. Twenty-one items with significant intraclass correlation coefficients for interrater reliability (range, .650-.989) and significant Spearman rank-order correlation coefficients for test-retest reliability (range, .514-1) constitute the final scale. Subjects with Alzheimer's disease had significantly more cognitive and noncognitive dysfunction than the normal elderly subjects.) <|cite_end|>, another commonly used clinical and experimental cognitive assessment tool; as well as the SCIP (Severe Cognitive Impairment Profile) <|cite_start|> (Reference: Neuropsychological assessment of severely demented elderly: the severe cognitive impairment profile.: BACKGROUND Although the assessment of cognitive functioning in the late stages of Alzheimer's Disease (AD) is important for identifying abilities that may improve communication and interactions with severely impaired patients in clinical and institutional settings and for assessing the efficacy of pharmacologic agents and behavioral interventions for the treatment of AD, few adequate instruments exist for measuring the cognitive capacities of these severely demented individuals. OBJECTIVES To evaluate the reliability and validity of the Severe Cognitive Impairment Profile (SCIP), a measure of neuropsychological functioning in severely demented patients, and compare it with other available instruments. DESIGN AND METHODS We administered the Mattis Dementia Rating Scale (DRS), Mini-Mental State Examination (MMSE), SCIP, and Severe Impairment Battery (SIB) to 41 severely demented patients with AD participating in an AD research center. We used (1) Spearman rank correlation coefficients to assess interrater and test-retest reliability and construct validity of the SCIP; (2) one-way analysis of variance with post hoc comparisons to examine performance on the SCIP and the SIB at different levels of dementia severity; and (3) descriptive statistics to establish the sensitivity of the SCIP to cognitive functioning in a subgroup of very severely demented patients. RESULTS Interrater and test-retest reliability correlation coefficients were highly significant for total SCIP score (r=0.99 and r=0.96, respectively) as well as for all SCIP subscales. High correlations were also found between SCIP scores and two widely used tests of global cognitive functioning, the DRS (r=0.91) and the MMSE (r=0.84), suggesting good construct validity. The SCIP was able to significantly differentiate between four groups of severely impaired patients divided by level of dementia severity, while the SIB was unable to differentiate between the less severely demented groups. A subgroup of 16 very severely demented patients (DRS score, <50 points) obtained an average of 45% of total possible points on the SCIP, compared with an average of 1% and 21% of total possible points on the MMSE and DRS, respectively. After approximately 1 year of decline, 12 severely demented patients with AD were able to correctly answer an average of more than 58% of the items on the SCIP, compared with only 30% on the DRS and 20% on the MMSE. CONCLUSIONS The SCIP is a reliable, valid measure of neuropsychological functioning in severely demented patients with AD with the ability to avoid both floor and ceiling effects and to evaluate a wider range of cognitive abilities than other tests used with severely impaired individuals.) <|cite_end|> and the SIB (Severe Impairment Battery) <|cite_start|> (Reference: Neuropsychological assessment of the severely impaired elderly patient.: ) <|cite_end|>, among others. Roalf et al.
compared the MMSE with the MoCA and concluded that the MoCA is superior to the MMSE as a global assessment tool, providing a reliable and simple method for converting MoCA scores to MMSE scores <|cite_start|> (Reference: Comparative accuracies of two common screening instruments for classification of Alzheimer's disease, mild cognitive impairment, and healthy aging: ) <|cite_end|>. \quad Most of these cognitive assessment methods are lengthy and complex, do not apply to all patients at all stages of dementia, and do not perform well enough in terms of sensitivity <|cite_start|> (Reference: Review of Alzheimer's disease scales: is there a need for a new multi-domain scale for therapy evaluation in medical practice?: ) <|cite_end|> <|cite_start|> (Reference: On the early diagnosis of Alzheimer's Disease from multimodal signals: A survey: ) <|cite_end|>. Although these methods provide a quantitative evaluation that helps doctors understand a patient's cognitive state, they may fail to capture subtle changes in certain cognitive domains. Some tests also require professional personnel for administration and interpretation, and the results may be influenced by factors such as education and cultural background.
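As a simple illustration of how such scale scores are used in practice, the sketch below applies two widely cited screening cutoffs (MMSE below 24 and MoCA below 26 as indications of possible cognitive impairment). These cutoffs are conventional screening thresholds rather than diagnostic criteria, and the function is a hypothetical example, not a clinical tool.
\begin{verbatim}
from typing import Optional

def screen_cognition(mmse: Optional[int] = None,
                     moca: Optional[int] = None) -> str:
    """Toy screening rule based on commonly used cutoffs.

    MMSE < 24 and MoCA < 26 are conventional screening thresholds for
    possible cognitive impairment; neither is diagnostic by itself.
    """
    flags = []
    if mmse is not None and mmse < 24:
        flags.append(f"MMSE={mmse} below the usual screening cutoff of 24")
    if moca is not None and moca < 26:
        flags.append(f"MoCA={moca} below the usual screening cutoff of 26")
    return "; ".join(flags) if flags else "no screening flag raised"

print(screen_cognition(mmse=22, moca=24))
\end{verbatim}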
\subsection{Non-neuroimaging Biomarkers} \quad Biomarkers are objective measurements of biological or pathogenic processes aimed at assessing disease risk or prognosis, guiding clinical diagnosis, or monitoring therapeutic interventions <|cite_start|> (Reference: Strategic roadmap for an early diagnosis of Alzheimer's disease based on biomarkers: ) <|cite_end|>. Biomarker changes can be obtained from neuroimaging on the one hand, and from changes in the composition of biofluids on the other. Combined with clinical approaches and cognitive tests, biomarkers allow a more accurate assessment of cognitive impairment and its causes, and may even allow clinicians to detect AD pathology before clinical symptoms appear <|cite_start|> (Reference: Multimodal techniques for diagnosis and prognosis of Alzheimer's disease: ) <|cite_end|>. Neuroimaging-based biomarkers are discussed in detail in the neuroimaging section above. \quad Among the non-neuroimaging biomarkers and related changes that have proven useful so far are hippocampal atrophy and decreased $A\beta_{42}$ in cerebrospinal fluid (CSF) <|cite_start|> (Reference: Cerebrospinal fluid $\beta$-amyloid 42 and tau proteins as biomarkers of Alzheimer-type pathologic changes in the brain: BACKGROUND There is a clear need to develop an objective diagnostic test for Alzheimer disease (AD). Changes in the levels of cerebrospinal fluid (CSF) tau protein and beta-amyloid 42 (Abeta42) peptide in patients with AD have been well documented, but the relationship between these biomarkers and neuropathologic changes in the brain is not established. OBJECTIVE To study the relationship between antemortem CSF biomarker levels and Alzheimer-type neuropathologic changes in the brain. DESIGN Cross-sectional study to correlate levels of CSF Abeta42, total tau, and phosphorylated tau protein with neuropathologic changes in the brain. SETTING Academic research. Patients The study included 123 patients (79 with clinically diagnosed AD, 29 with other dementia, and 15 with other neurologic disease). All underwent clinical evaluation and provided antemortem lumbar CSF samples, and neuropathologic data were collected from September 11, 1990, to March 13, 2003, in the Department of Neuroscience and Neurology, University of Kuopio, Kuopio, Finland. MAIN OUTCOME MEASURES Levels of CSF Abeta42, total tau, and phosphorylated tau protein were measured using standard commercial immunoassays. Neuropathologic evaluations included the classic silver impregnation method and immunohistochemistry for Abeta, hyperphosphorylated tau, and alpha-synuclein. RESULTS Cerebrospinal fluid Abeta42 and tau protein levels were related to amyloid load and the presence of neurofibrillary pathologic abnormalities in the brain. Cerebrospinal fluid Abeta42 level correlated inversely with total Abeta load in the brain, and CSF tau level correlated with results of immunohistochemistry for hyperphosphorylated tau and with the presence of neocortical neurofibrillary tangles. In multivariate logistic regression analysis, the number of neuritic plaques in the brain remained a significant predictor of decreased CSF Abeta42 level and of increased CSF tau level. Based on the ratio of phosphorylated tau level to Abeta42 level, sensitivity was 91.6%, and specificity was 85.7%, with an overall accuracy of 90.2% for the presence of pathologic neuritic plaque in the brain. CONCLUSIONS Cerebrospinal fluid Abeta42 and tau proteins are biomarkers of AD-associated pathologic changes in the brain. The combination of abnormally low CSF Abeta42 level and abnormally high CSF tau level predicted the presence of AD pathologic features with high accuracy. This combination assay may be helpful in diagnosing the presence of AD pathologic changes in the brain.) <|cite_end|>, and increased tau protein and phosphorylated tau protein <|cite_start|> (Reference: Association between CSF biomarkers and incipient Alzheimer's disease in patients with mild cognitive impairment: a follow-up study: ) <|cite_end|>. In addition, blood-based biomarkers have attracted the attention of experts, since blood samples can be obtained in a less invasive and cheaper way <|cite_start|> (Reference: Developing novel blood-based biomarkers for Alzheimer's disease: ) <|cite_end|>. Similar to CSF, blood-based biomarkers can measure $A\beta_{42}$ and other forms of $A\beta$ proteins, as well as different forms of tau. Additionally, neurofilament light chain (NfL) shows promise as a blood-based biomarker for AD. Moreover, in the process of diagnosing AD, genetic factors should not be overlooked. An example of a gene-based biomarker used to detect AD is the $\varepsilon4$ allele of the APOE gene, a variation that can be detected through blood samples or buccal swabs. \quad These biomarkers provide direct information about pathological processes such as abnormal protein deposition, inflammatory reactions, and neuronal damage, which helps in understanding the development and progression of AD. However, they lack sufficient specificity: similar changes may also be present in other neurological disorders or in normal aging. Some biomarkers also require the collection of specific samples, such as cerebrospinal fluid, through specialized and technically demanding procedures, making them less accessible for widespread use.
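The cited CSF study reports that combining abnormally low $A\beta_{42}$ with abnormally high phosphorylated tau, via the p-tau/$A\beta_{42}$ ratio, predicted AD pathology with high accuracy. The sketch below illustrates such a ratio-based rule; the numeric cutoff and concentrations are hypothetical placeholders, not the published clinical threshold.
\begin{verbatim}
def ptau_abeta_flag(ptau_pg_ml: float, abeta42_pg_ml: float,
                    cutoff: float = 0.1) -> bool:
    """Flag a CSF profile as AD-like when p-tau / Abeta42 exceeds a cutoff.

    The ratio-based rule follows the cited CSF biomarker study; the
    default cutoff here is a hypothetical placeholder, not the
    published clinical threshold.
    """
    if abeta42_pg_ml <= 0:
        raise ValueError("Abeta42 concentration must be positive")
    return (ptau_pg_ml / abeta42_pg_ml) > cutoff

# Toy usage with made-up concentrations (pg/mL).
print(ptau_abeta_flag(ptau_pg_ml=80.0, abeta42_pg_ml=450.0))
\end{verbatim}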
\subsection{Posture} \quad In recent years, some progress has been made in research on posture (mainly face and gait) for the diagnosis of Alzheimer's disease. Although posture is not currently a primary diagnostic criterion for AD, it plays an important auxiliary role in early diagnosis and monitoring. \quad In terms of faces, studies have found that AD patients have problems with both facial emotion expression and facial emotion comprehension. AD patients are impaired in facial expression recognition <|cite_start|> (Reference: Facial expression recognition in Alzheimer's disease: A systematic review: ABSTRACT Introduction: It is well established that behavioral variant frontotemporal dementia can impair social and emotional function. However, there is no consensus regarding how Alzheimer’s disease can affect facial expression recognition. We aim to systematically review all the literature addressing this issue over the last 10 years. Method: We conducted a search based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The search for literature was undertaken on 19 September 2017, using Pubmed, SciELO, BIREME, and Thomson Reuters Web of Science electronic databases. The key terms for the search were: Alzheimer’s disease, dementia, and facial expression recognition. Results: We screened 173 articles, and 22 of them were selected. The most common methodology involved showing participants photographs of people expressing the six basic emotions—fear, anger, sadness, disgust, surprise, and happiness. Results were ambiguous. Among people with mild Alzheimer’s disease, happiness was easier to recognize than the other five basic emotions, with sadness and anger the most difficult to recognize. In addition, the intensity level of the emotions presented seems to be important, and facial expression recognition is related to specific cognitive capacities, including executive function and visuoperceptual abilities. Impairment in facial expression recognition does not appear to be a consistent neuropsychological finding in Alzheimer’s disease. Conclusions: The lack of standardized assessment instruments and the heterogeneity of the methods and samples used across studies hamper comparisons. Future researches should investigate facial expression recognition through more ecological and standardized methods.) <|cite_end|> and have significant emotion recognition deficits <|cite_start|> (Reference: Emotion recognition of morphed facial expressions in presymptomatic and symptomatic frontotemporal dementia, and Alzheimer’s dementia: ) <|cite_end|>. Kumfor et al. <|cite_start|> (Reference: Degradation of emotion processing ability in corticobasal syndrome and Alzheimer’s disease: Disturbed emotion processing and difficulty with social interactions are present to variable degrees in dementia. They are characteristic features of frontotemporal dementia, whereas these deficits tend to be mild in Alzheimer's disease, reflecting the different patterns of neurodegeneration seen in these disorders. Corticobasal syndrome is an atypical parkinsonian disorder clinically and pathologically related to frontotemporal dementia. Corticobasal syndrome typically presents as a motor disturbance, although cognitive and behavioural changes are now recognized. Pathological changes are found in frontoparietal cortical regions and in the basal ganglia; regions that are heavily involved in emotion processing. Despite the overlap with frontotemporal dementia and the observed regions of brain atrophy, emotion processing has not been systematically explored in corticobasal syndrome.
This study aimed to (i) comprehensively examine emotion processing in corticobasal syndrome in comparison to Alzheimer's disease, to determine whether emotion processing deficits exist in this syndrome, beyond those seen in Alzheimer's disease; and (ii) identify the neural correlates underlying emotion processing in corticobasal syndrome and Alzheimer's disease. Sixteen patients with corticobasal syndrome, 18 patients with Alzheimer's disease and 22 matched healthy control subjects were assessed on a comprehensive battery of face and emotion processing tasks. Behavioural analyses revealed deficits in both basic face processing and high-level emotion processing tasks in patients with corticobasal syndrome. Notably, the emotion processing disturbance persisted even after controlling for face processing deficits. In contrast, patients with Alzheimer's disease were impaired on high-level complex and cognitively demanding emotion recognition tasks (Ekman 60, The Awareness of Social Inference Test) only. Neuroimaging analyses using FreeSurfer revealed that emotion processing deficits in corticobasal syndrome were associated with basal ganglia volume loss as well as cortical thinning of the left paracentral gyrus/precuneus region. In Alzheimer's disease, however, emotion processing deficits were associated with atrophy in a different set of brain regions, including the right cingulate and the bilateral insulae, as well as the hippocampi, right amygdala and nucleus accumbens bilaterally. Our results demonstrate that patients with corticobasal syndrome experience widespread deficits in emotion processing, and these deficits are related to changes in brain regions known to be crucial for emotion processing. These findings have important clinical implications for the treatment and management of these patients.) <|cite_end|> observed that in Alzheimer's disease, emotion processing deficits appeared only in complex and cognitively demanding emotion recognition tasks, while behavioral performance in simple face processing and emotion matching tasks remained within the normal range. This suggests the potential of facial expression analysis as a tool for early AD diagnosis and monitoring. \quad Gait is a complex cognitive task requiring coordination between a wide range of brain regions, and even in the milder stages of the disease, gait impairment may reflect dementia-induced neurodegeneration <|cite_start|> (Reference: Do Alzheimer's and Lewy body disease have discrete pathological signatures of gait?: ) <|cite_end|>. There is growing evidence that cognitive, sensory, and motor changes may precede the clinical manifestations of AD by several years <|cite_start|> (Reference: Digital biomarkers for Alzheimer’s disease: the mobile/wearable devices opportunity: ) <|cite_end|>. Gait disturbances reported in early AD include slower gait, shorter stride length, lower cadence (longer stride time/gait cycle), and greater inter-stride variability <|cite_start|> (Reference: Whole-Day Gait Monitoring in Patients with Alzheimer’s Disease: A Relationship between Attention and Gait Cycle.: Background: Gait impairment in patients with Alzheimer’s disease (AD) and its relationship with cognitive function has been described, but reports of gait analysis in AD in daily living are limited. Objective: To investigate whether gait pattern of patients with AD in daily living is associated with cognitive function.
Methods: Gait was recorded in 24 patients with AD and 9 healthy controls (HC) for 24 hours by using a portable gait rhythmogram. Mean gait cycle and gait acceleration were compared between the AD and HC groups. For the AD group, these gait metrics were assessed for correlations with cognitive function, as determined by the Mini Mental State Examination and Wechsler Memory Scale-Revised (WMS-R). Results: Although both gait parameters were not different between the patients with AD and HC, gait cycle in patients with AD was positively correlated with attention/concentration scores on the WMS-R (r = 0.578), and not with memory function. Patients with AD with attention scores as high as HC displayed a longer gait cycle than both HC (p = 0.048) and patients with AD with lower attention scores (p = 0.011). The patients with AD with lower attention scores showed a similar gait cycle with HC (p = 0.994). Conclusion: Patients with AD with impaired attentional function walk with faster gait cycle comparable to HC in daily living walking, which was unexpected based on previous gait analysis in clinical settings. This result probably reflects diminished consciousness to either the environment or instability of gait in the patients with AD with impaired attention.) <|cite_end|>. Therefore, gait analysis is also considered an important tool for assessing motor function and cognitive status in AD patients. For the assessment of gait, Rosaria et al. suggested that gait characteristics can be divided into temporal, kinematic, and kinetic features <|cite_start|> (Reference: Human Gait Analysis in Neurodegenerative Diseases: a Review.: This paper reviews the recent literature on technologies and methodologies for quantitative human gait analysis in the context of neurodegenerative diseases. The use of technological instruments can be of great support in both clinical diagnosis and severity assessment of these pathologies. In this paper, sensors, features and processing methodologies have been reviewed in order to provide a highly consistent work that explores the issues related to gait analysis. First, the phases of the human gait cycle are briefly explained, along with some non-normal gait patterns (gait abnormalities) typical of some neurodegenerative diseases. Then the paper reports the most common processing techniques for both feature selection and extraction and for classification and clustering. Finally, a conclusive discussion on current open problems and future directions is outlined.) <|cite_end|>. \quad In summary, facial and gait analysis has shown some potential for AD diagnosis. As technology evolves and more research is done, these aids are expected to become a useful complement to early AD diagnosis. However, facial and gait analysis is still at the research stage, and more validation and standardization work is needed. Individual differences and other factors may affect facial and gait performance, so further research is needed to establish their accuracy and reliability in AD diagnosis and monitoring.
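To make the temporal gait features mentioned above concrete (stride time/gait cycle, cadence, and inter-stride variability), the following sketch computes them from a series of same-foot heel-strike timestamps. The timestamps and the use of the coefficient of variation as a variability measure are illustrative assumptions.
\begin{verbatim}
import numpy as np

def temporal_gait_features(heel_strikes_s):
    """Compute simple temporal gait features from same-foot heel strikes.

    stride time: duration of one gait cycle (s)
    cadence:     steps per minute (two steps per stride)
    stride CV:   coefficient of variation of stride time, a common
                 proxy for the inter-stride variability reported in AD
    """
    stride_times = np.diff(np.asarray(heel_strikes_s, dtype=float))
    mean_stride = stride_times.mean()
    return {
        "mean_stride_time_s": float(mean_stride),
        "cadence_steps_per_min": float(2 * 60.0 / mean_stride),
        "stride_time_cv_percent": float(100.0 * stride_times.std() / mean_stride),
    }

# Toy usage: hypothetical heel-strike timestamps in seconds.
print(temporal_gait_features([0.0, 1.12, 2.21, 3.35, 4.43]))
\end{verbatim}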
\subsection{Sound} \quad Voice and speech problems are considered among the most typical symptoms of AD and are a direct and unavoidable consequence of cognitive impairment <|cite_start|> (Reference: From Beetle to Bug: Progression of Error Types in Naming in Alzheimer’s Disease: From Beetle to Bug: Progression of Error Types in Naming in Alzheimer’s Disease Laura M. Gonnerman 1 , Justin M. Aronoff 2 , Amit Almor 3 , Daniel Kempler 4 , & Elaine S. Andersen 2 Department of Psychology, Lehigh University, Bethlehem, PA 18015 Program in Neuroscience, University of Southern California, Los Angeles, CA 90089-2520 Department of Psychology, University of South Carolina, Columbia, SC, 29208 Comunication Sciences and Disorders, Emerson College, Boston, MA 02116-4624 The distributed feature approach to semantic memory organization has been supported by data from patients with Alzheimer's disease (AD) (e.g., Gonnerman et al., 1997). This account makes specific predictions about the types of errors one would expect in AD as semantic memory deteriorates, with initially more contrast coordinate errors, followed by superordinates, and finally an increase in unrelated responses. We investigate these predictions using a picture naming task, with both natural kinds and artifacts. Method Participants The young normal (YN) group included 25 USC undergraduates, the old normal (ON) group 24 healthy elderly, and the Alzheimer's (AD) group 15 individuals diagnosed with AD, matched with the ON group for age. Materials and Procedure Participants named 144 color pictures, with 12 items each from six natural kinds and six artifacts categories, controlled for familiarity, imageability, frequency, and typicality. Results & Discussion Percent Error Type The YN group correctly named 86% of the pictures, ON 85%, and AD 62%, indicating a significant impairment in naming for the AD group, (t (15) = -4.15, p < .0009), but no significant difference between YN and ON controls. To examine the types of errors AD patients made as their naming impairment progressed, errors were coded into three categories: 1) contrast coordinate, giving the name of another category member (e.g., calling a zebra ‘horse’); 2) superordinate, giving the category label rather than the object name (e.g., ‘bug’ for beetle); and 3) unrelated, where the response was not from the same category (e.g., ‘flute’ for cucumber). No responses, ‘I don’t know’, and machine errors were not included in the analysis. To determine if the prevalence of a given error type was affected by the degree of damage, ratios of each error type over the total number of errors were calculated. Overall, there were initially significantly more contrast coordinate errors than superordinates (t(327)=-4.7, p <.00001), followed by unrelated responses (t(190)=-3.5, p <.001). This is consistent with the progression of errors in studies of patients with semantic dementia (Hodges et al., 1995). We were most interested in the progression of errors within natural kind versus artifact categories (see Figure 1 below). The pattern of change varied by domain. As expected, there were more contrast coordinate errors in both natural kinds and artifacts early on, declining with increasing damage. Interestingly, while superordinate errors increased for natural kinds, they decreased for artifacts. The distributed feature approach provides a natural account of this pattern. As damage increases, the core features of natural kinds concepts are still available because they have more intercorrelations. The activation of these core features permits activation of the superordinate name, whereas the lack of similar correlations in artifact categories leads to a steady decrease in superordinate responses for artifacts. Finally, there is a greater increase in unrelated responses in artifacts compared to natural kinds in later damage stages.
Acknowledgments This research was supported by NIA grant R01 AG-11774- 04 and by NIH training grant 5T32MH20003-05. References Gonnerman, L.M., Andersen, E.S., Devlin, J.T., Kempler, D. & Seidenberg, M.S. (1997) Double dissociation of semantic categories in Alzheimer's disease. Brain and Language, 57, 254-279. Hodges, J.R, Graham, N. & Patterson, K. (1995). Charting the progression in semantic dementia--implications for the organization of semantic memory. Memory, 3, 463-495. Figure 1. Percentage of error types as naming errors increase for natural kind (left) and artifact (right) concepts.) <|cite_end|>. It has been demonstrated that AD patients perform poorly on a range of language tests <|cite_start|> (Reference: The neuropsychological profile of Alzheimer disease: Neuropsychological assessment has featured prominently over the past 30 years in the characterization of dementia associated with Alzheimer disease (AD). Clinical neuropsychological methods have identified the earliest, most definitive cognitive and behavioral symptoms of illness, contributing to the identification, staging, and tracking of disease. With increasing public awareness of dementia, disease detection has moved to earlier stages of illness, at a time when deficits are both behaviorally and pathologically selective. For reasons that are not well understood, early AD pathology frequently targets large-scale neuroanatomical networks for episodic memory before other networks that subserve language, attention, executive functions, and visuospatial abilities. This chapter reviews the pathognomonic neuropsychological features of AD dementia and how these differ from "normal," age-related cognitive decline and from other neurodegenerative diseases that cause dementia, including cortical Lewy body disease, frontotemporal lobar degeneration, and cerebrovascular disease.) <|cite_end|>, presenting naming and word-finding difficulties (anomia) that lead to circumlocution, as well as difficulty accessing semantic information intentionally, leading to a general semantic deterioration <|cite_start|> (Reference: The neuropsychology of dementia.: Dementia is a common disorder which may be due to a number of different conditions including Alzheimer's disease, vascular dementia, dementia with Lewy bodies and frontal lobe dementias. Neuropsychological assessment has an important role to play in establishing differential diagnosis and in terms of informing management and monitoring response to recently introduced antidementia drugs. This review briefly summarises the key clinical and neuropsychological features of the different dementias and then discusses both clinically useful screening tests and more detailed cognitive assessment batteries that are frequently used in dementia, as well as the purpose and content of clinical neuropsychological testing.) <|cite_end|>, and deviating from healthy speakers in certain acoustic and prosodic features <|cite_start|> (Reference: Prosodic Impairment in Dementia: Review of the Literature.: OBJECTIVE Prosody, an important aspect of spoken language, is defined as the emphasis placed on certain syllables, changes in tempo or timing, and variance in pitch and intonation. Most studies investigating expression and comprehension of prosody have focused primarily on emotional prosody and less extensively on supralexical prosody.
The distinction is indeed important, as the latter conveys information such as interrogative or assertive mode, whereas the former delivers emotional connotation, such as happiness, anger, and sadness. These functions appear to rely on distinct neuronal networks, supported by functional neuroimaging studies that show activation of the right hemisphere, specifically in the right inferior frontal area during emotional detection. CONCLUSION This review summarizes the studies conducted on prosody impairment in Alzheimer's disease and other dementias, with emphasis on experiments designed to investigate the emotional vs. the supralexical aspect of speech production. We also discussed the available tools validated to test and quantify the prosodic impairment.) <|cite_end|>. This demonstrates the feasibility of using the voice modality to diagnose Alzheimer's disease: voice and speech can serve as an efficient, inexpensive, and easy-to-use aid to AD diagnosis. \quad People with AD differ from healthy speakers in semantics, syntax, and rhythm, and researchers have been able to detect AD from a variety of such features. The main conventional features used in Alzheimer's disease research are: frequential aspects (including interruptions, voice periods, and fundamental frequency); intensity (including amplitude and phonatory stability); voice quality (including noise); and biomechanical aspects (including vocal fold body movement and tongue movement) <|cite_start|> (Reference: Alzheimer's disease and automatic speech analysis: A review: ) <|cite_end|>. Many studies have also identified acoustic measures that are highly correlated with pathological speech features or speech alterations <|cite_start|> (Reference: Challenges in concussion detection using vocal acoustic biomarkers: Acoustic metrics extracted from speech have the potential to serve as novel biomarkers for a variety of neurological and neurodevelopmental conditions, as is evidenced by the rapidly growing corpus of research articles studying the links between brain impairments and speech. In this paper, we discuss the advantages and the disadvantages of speech biomarkers and the various challenges in the design and the implementation of portable speech-based diagnostic and assessment tools. Furthermore, we provide a case study, presenting our experiences in developing an assessment tool for the detection of mild traumatic brain injuries (concussions) and discuss the challenges in obtaining and analyzing large sets of speech recordings that can be used to study the impact of brain injuries on vocal features.) <|cite_end|>. In recent years, with technological advances, new methods have gradually been applied to AD diagnosis. Haider et al. <|cite_start|> (Reference: An assessment of paralinguistic acoustic features for detection of Alzheimer's dementia in spontaneous speech: Speech analysis could provide an indicator of Alzheimer's disease and help develop clinical tools for automatically detecting and monitoring disease progression. While previous studies have employed acoustic (speech) features for characterisation of Alzheimer's dementia, these studies focused on a few common prosodic features, often in combination with lexical and syntactic features which require transcription. We present a detailed study of the predictive value of purely acoustic features automatically extracted from spontaneous speech for Alzheimer's dementia detection, from a computational paralinguistics perspective.
The effectiveness of several state-of-the-art paralinguistic feature sets for Alzheimer's detection were assessed on a balanced sample of DementiaBank's Pitt spontaneous speech dataset, with patients matched by gender and age. The feature sets assessed were the extended Geneva minimalistic acoustic parameter set (eGeMAPS), the emobase feature set, the ComParE 2013 feature set, and new Multi-Resolution Cochleagram (MRCG) features. Furthermore, we introduce a new active data representation (ADR) method for feature extraction in Alzheimer's dementia recognition. Results show that classification models based solely on acoustic speech features extracted through our ADR method can achieve accuracy levels comparable to those achieved by models that employ higher-level language features. Analysis of the results suggests that all feature sets contribute information not captured by other feature sets. We show that while the eGeMAPS feature set provides slightly better accuracy than other feature sets individually (71.34%), “hard fusion” of feature sets improves accuracy to 78.70%.) <|cite_end|> assessed the eGeMAPS <|cite_start|> (Reference: The Geneva Minimalistic Acoustic Parameter Set (GEMAPS) for Voice Research and Affective Computing: Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in former studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit. Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.) <|cite_end|>, emobase <|cite_start|> (Reference: Proceedings of the 26th ACM international conference on Multimedia: ) <|cite_end|>, and ComParE <|cite_start|> (Reference: Recent Developments in openSMILE, the Munich Open-Source Multimedia Feature Extractor: We present recent developments in the openSMILE feature extraction toolkit. Version 2.0 now unites feature extraction paradigms from speech, music, and general sound events with basic video features for multi-modal processing. Descriptors from audio and video can be processed jointly in a single framework allowing for time synchronization of parameters, on-line incremental processing as well as off-line and batch processing, and the extraction of statistical functionals (feature summaries), such as moments, peaks, regression parameters, etc.
Postprocessing of the features includes statistical classifiers such as support vector machine models or file export for popular toolkits such as Weka or HTK. Available low-level descriptors include popular speech, music and video features including Mel-frequency and similar cepstral and spectral coefficients, Chroma, CENS, auditory model based loudness, voice quality, local binary pattern, color, and optical flow histograms. Besides, voice activity detection, pitch tracking and face detection are supported. openSMILE is implemented in C++, using standard open source libraries for on-line audio and video input. It is fast, runs on Unix and Windows platforms, and has a modular, component based architecture which makes extensions via plug-ins easy. openSMILE 2.0 is distributed under a research license and can be downloaded from http://opensmile.sourceforge.net/.) <|cite_end|> feature sets for detecting Alzheimer's dementia from spontaneous speech, introducing and evaluating a new method, the active data representation (ADR), for representing these acoustic features. Liu et al. <|cite_start|> (Reference: A new machine learning method for identifying Alzheimer's disease: ) <|cite_end|> divided each person's speech data into multiple segments and used spectrogram features extracted from the speech to identify AD. \quad As a non-invasive and rapid diagnostic approach, recognition based on patients' voice data can effectively reduce medical costs compared with medical imaging, which is often difficult to obtain. In particular, natural language processing, signal processing, and deep learning have developed significantly in recent years, and techniques based on the automatic processing of voice recordings are gradually maturing.
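In practice, the feature sets above can be extracted with the openSMILE toolkit; the audEERING opensmile Python package wraps them directly. The sketch below extracts utterance-level eGeMAPS functionals from a recording; the file path is a hypothetical placeholder.
\begin{verbatim}
# pip install opensmile
import opensmile

# eGeMAPSv02 functionals: 88 utterance-level acoustic descriptors.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

# "speech_sample.wav" is a hypothetical recording path.
features = smile.process_file("speech_sample.wav")  # pandas DataFrame
print(features.shape)  # (1, 88) for eGeMAPSv02 functionals
\end{verbatim}
Such fixed-length acoustic vectors can then be fed to a conventional classifier, mirroring the pipelines described above.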
\subsection{Multimodal} \quad Currently, most studies on Alzheimer's disease use a single data modality for prediction, which may have limitations. Psychological or cognitive appraisal questionnaires may be too subjective and may lack sensitivity <|cite_start|> (Reference: On the early diagnosis of Alzheimer's Disease from multimodal signals: A survey: ) <|cite_end|>. Changes in both posture and voice may be influenced by factors unrelated to AD, such as normal aging. Neuroimaging also suffers from cost and availability issues (the availability of PET and MRI scanners varies widely between countries) and from patient bias (e.g., sensitivity to radiation exposure) <|cite_start|> (Reference: Multimodal imaging in Alzheimer's disease: validity and usefulness for early detection: ) <|cite_end|>. \quad It has been shown that fusing complementary information from multiple modalities can improve the diagnostic performance for AD. Multimodal data contain complementary information (e.g., from magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data) <|cite_start|> (Reference: Latent representation learning for Alzheimer's disease diagnosis with incomplete multi-modality neuroimaging and genetic data: The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality based AD diagnostic models are often hindered by the missing data, i.e., not all the subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, thus leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain region), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality based AD diagnosis. Specifically, we use all the available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.) <|cite_end|>. Landau et al. also found complementary information between acquired genetic, cerebrospinal fluid, neuroimaging, and cognitive measures <|cite_start|> (Reference: Comparing predictors of conversion and decline in mild cognitive impairment: Objective: A variety of measurements have been individually linked to decline in mild cognitive impairment (MCI), but the identification of optimal markers for predicting disease progression remains unresolved. The goal of this study was to evaluate the prognostic ability of genetic, CSF, neuroimaging, and cognitive measurements obtained in the same participants. Methods: APOE ε4 allele frequency, CSF proteins (Aβ1-42, total tau, hyperphosphorylated tau [p-tau181p]), glucose metabolism (FDG-PET), hippocampal volume, and episodic memory performance were evaluated at baseline in patients with amnestic MCI (n = 85), using data from a large multisite study (Alzheimer's Disease Neuroimaging Initiative). Patients were classified as normal or abnormal on each predictor variable based on externally derived cutoffs, and then variables were evaluated as predictors of subsequent conversion to Alzheimer disease (AD) and cognitive decline (Alzheimer's Disease Assessment Scale–Cognitive Subscale) during a variable follow-up period (1.9 ± 0.4 years). Results: Patients with MCI converted to AD at an annual rate of 17.2%. Subjects with MCI who had abnormal results on both FDG-PET and episodic memory were 11.7 times more likely to convert to AD than subjects who had normal results on both measures (p ≤ 0.02). In addition, the CSF ratio p-tau181p/Aβ1-42 (β = 1.10 ± 0.53; p = 0.04) and, marginally, FDG-PET predicted cognitive decline. Conclusions: Baseline FDG-PET and episodic memory predict conversion to AD, whereas p-tau181p/Aβ1-42 and, marginally, FDG-PET predict longitudinal cognitive decline. Complementary information provided by these biomarkers may aid in future selection of patients for clinical trials or identification of patients likely to benefit from a therapeutic intervention.) <|cite_end|>. \quad However, there are still challenges in fusing data from multiple modalities to diagnose AD.
To begin with, it is important to recognize that different types of data are inherently heterogeneous. Each modality, such as neuroimaging and genetic data, exhibits distinct data distributions, varying numbers of features, and differing levels of diagnostic discrimination for conditions like Alzheimer's disease <|cite_start|> (Reference: Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis: In this article, the authors aim to maximally utilize multimodality neuroimaging and genetic data for identifying Alzheimer's disease (AD) and its prodromal status, Mild Cognitive Impairment (MCI), from normal aging subjects. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphism (SNP) provide information about a patient's AD risk factors. When these data are used together, the accuracy of AD diagnosis may be improved. However, these data are heterogeneous (e.g., with different data distributions), and have different number of samples (e.g., with far less number of PET samples than the number of MRI or SNPs). Thus, learning an effective model using these data is challenging. To this end, we present a novel three‐stage deep feature learning and fusion framework, where deep neural network is trained stage‐wise. Each stage of the network learns feature representations for different combinations of modalities, via effective training using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high‐level features) for each modality independently, so that the heterogeneity among modalities can be partially addressed, and high‐level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pair of modality combination by using the high‐level features learned from the first stage. In the third stage, we learn the diagnostic labels by fusing the learned joint latent features from the second stage. To further increase the number of samples during training, we also use data at multiple scanning time points for each training subject in the dataset. We evaluate the proposed framework using Alzheimer's disease neuroimaging initiative (ADNI) dataset for AD diagnosis, and the experimental results show that the proposed framework outperforms other state‐of‐the‐art methods.) <|cite_end|>. To fuse multimodal data, traditional approaches usually first perform feature selection for each modality separately, and then concatenate the selected features for diagnosis or prognosis, as sketched below.
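A minimal scikit-learn sketch of this traditional pipeline follows: feature selection is performed independently per modality, the selected features are concatenated into one vector per subject, and a single classifier is trained on the result. All shapes, the choice of selector, and the classifier are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)  # toy labels: AD vs. normal control
modalities = {
    "mri": rng.normal(size=(100, 90)),   # e.g., ROI volumes
    "pet": rng.normal(size=(100, 90)),   # e.g., ROI uptake values
    "snp": rng.integers(0, 3, size=(100, 500)).astype(float),  # genotypes
}

# 1) feature selection performed separately for each modality
selected = [SelectKBest(f_classif, k=20).fit_transform(x, y)
            for x in modalities.values()]

# 2) naive concatenation into a single feature vector per subject
fused = np.concatenate(selected, axis=1)  # shape (100, 60)

# 3) one classifier on the fused vector; inter-modality associations
#    are never modeled explicitly
clf = SVC(kernel="rbf").fit(fused, y)
\end{verbatim}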
Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain region), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality based AD diagnosis. Specifically, we use all the available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.) <|cite_end|>. \quad The second challenge is the high dimensionality problem encountered in the fusion analysis of multimodal imaging for AD diagnosis. When combining data from multiple modalities, the resulting dataset tends to have a very large number of dimensions. For instance, a single neuroimaging scan, such as an MR or PET image, contains millions of voxels. Classical methods <|cite_start|> (Reference: Multimodal Data Analysis of Alzheimer's Disease Based on Clustering Evolutionary Random Forest: Alzheimer's disease (AD) has become a severe medical challenge. Advances in technologies produced high-dimensional data of different modalities including functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP). Understanding the complex association patterns among these heterogeneous and complementary data is of benefit to the diagnosis and prevention of AD. In this paper, we apply the appropriate correlation analysis method to detect the relationships between brain regions and genes, and propose "brain region-gene pairs" as the multimodal features of the sample. In addition, we put forward a novel data analysis method from technology aspect, cluster evolutionary random forest (CERF), which is suitable for "brain region-gene pairs". The idea of clustering evolution is introduced to improve the generalization performance of random forest which is constructed by randomly selecting samples and sample features. Through hierarchical clustering of decision trees in random forest, the decision trees with higher similarity are clustered into one class, and the decision trees with the best performance are retained to enhance the diversity between decision trees. Furthermore, based on CERF, we integrate feature construction, feature selection and sample classification to find the optimal combination of different methods, and design a comprehensive diagnostic framework for AD. The framework is validated by the samples with both fMRI and SNP data from ADNI. The results show that we can effectively identify AD patients and discover some brain regions and genes associated with AD significantly based on this framework. These findings are conducive to the clinical treatment and prevention of AD.) <|cite_end|>, such as principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA), are used in many studies to address the high dimensionality of multimodal fusion analysis. These methods achieve attribute parsimony, but researchers must still invest considerable effort to analyze important fused features separately <|cite_start|> (Reference: Multimodal Data Analysis of Alzheimer's Disease Based on Clustering Evolutionary Random Forest: Alzheimer's disease (AD) has become a severe medical challenge. Advances in technologies produced high-dimensional data of different modalities including functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP). Understanding the complex association patterns among these heterogeneous and complementary data is of benefit to the diagnosis and prevention of AD. In this paper, we apply the appropriate correlation analysis method to detect the relationships between brain regions and genes, and propose "brain region-gene pairs" as the multimodal features of the sample. In addition, we put forward a novel data analysis method from technology aspect, cluster evolutionary random forest (CERF), which is suitable for "brain region-gene pairs". The idea of clustering evolution is introduced to improve the generalization performance of random forest which is constructed by randomly selecting samples and sample features. Through hierarchical clustering of decision trees in random forest, the decision trees with higher similarity are clustered into one class, and the decision trees with the best performance are retained to enhance the diversity between decision trees. Furthermore, based on CERF, we integrate feature construction, feature selection and sample classification to find the optimal combination of different methods, and design a comprehensive diagnostic framework for AD. The framework is validated by the samples with both fMRI and SNP data from ADNI. The results show that we can effectively identify AD patients and discover some brain regions and genes associated with AD significantly based on this framework. These findings are conducive to the clinical treatment and prevention of AD.) <|cite_end|>. \quad Lastly, there is the issue of incomplete data, where not all samples possess complete multimodal data. Typically, researchers opt to discard samples with missing data, thereby sacrificing potentially valuable training samples. Alternatively, the missing data can be imputed using methods such as zero-filling, k-nearest neighbors (KNN), or Expectation Maximization (EM). However, such imputation may introduce unnecessary noise and thus compromise the model's performance. \quad Many researchers have proposed solutions to the above challenges. Zhang et al. <|cite_start|> (Reference: Multimodal classification of Alzheimer's disease and mild cognitive impairment: ) <|cite_end|> proposed a general framework based on kernel methods, which effectively combines MRI, PET, and CSF features and naturally embeds them into a traditional support vector machine, achieving high accuracy in AD classification. Zhou et al.
<|cite_start|> (Reference: Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis: In this article, the authors aim to maximally utilize multimodality neuroimaging and genetic data for identifying Alzheimer's disease (AD) and its prodromal status, Mild Cognitive Impairment (MCI), from normal aging subjects. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphism (SNP) provide information about a patient's AD risk factors. When these data are used together, the accuracy of AD diagnosis may be improved. However, these data are heterogeneous (e.g., with different data distributions), and have different number of samples (e.g., with far less number of PET samples than the number of MRI or SNPs). Thus, learning an effective model using these data is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, where deep neural network is trained stage-wise. Each stage of the network learns feature representations for different combinations of modalities, via effective training using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity among modalities can be partially addressed, and high-level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pair of modality combination by using the high-level features learned from the first stage. In the third stage, we learn the diagnostic labels by fusing the learned joint latent features from the second stage. To further increase the number of samples during training, we also use data at multiple scanning time points for each training subject in the dataset. We evaluate the proposed framework using Alzheimer's disease neuroimaging initiative (ADNI) dataset for AD diagnosis, and the experimental results show that the proposed framework outperforms other state-of-the-art methods.) <|cite_end|> proposed a three-stage deep feature learning and fusion framework, which uses multimodal neuroimaging data (i.e., MRI and PET) and genetic data (i.e., SNP) to learn latent representations for each individual modality and joint latent representations for each pair of modalities in the first two stages. In the third stage, the classification model is learned from the joint latent representations of all modality pairs. Janani et al. <|cite_start|> (Reference: Multimodal deep learning models for early detection of Alzheimer's disease stage: ) <|cite_end|> used stacked denoising autoencoders to process EHR and SNP data and 3D convolutional neural networks (CNNs) for MRI data, concatenated these intermediate features and passed them to a classifier, showing that multimodal data analysis with deep learning outperforms single-modality deep learning models. \subsection{The main highlights of this literature survey} \quad(1) We have conducted a comprehensive review of the current mainstream AD diagnostic methods based on various modalities, summarizing the research progress and recent advancements in these modalities over the past five years. \quad(2) We have analyzed the latest research on the application of multimodal techniques in AD diagnosis and discussed the current challenges in multimodal fusion.
We also present different solutions proposed by researchers in this field. \quad(3) We provide possible directions and suggestions for future multimodal AD diagnostic technologies. <|paper_end|>
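The conventional pipeline that the survey above criticizes (per-modality feature selection followed by naive concatenation, with classical dimensionality reduction and imputation as remedies) can be made concrete with a short sketch. The following is an illustrative sketch only, assuming scikit-learn; the modality arrays, feature counts, and all parameter values are placeholder assumptions, not a reproduction of any cited method.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import KNNImputer
from sklearn.svm import SVC

def concatenation_pipeline(modalities, y, k_per_modality=50, n_components=30):
    # modalities: list of (n_samples, n_features_m) arrays with NaNs for
    # missing values, e.g. [mri_features, pet_features, snp_features].
    selected = []
    for X in modalities:
        X = KNNImputer(n_neighbors=5).fit_transform(X)   # KNN imputation of missing data
        k = min(k_per_modality, X.shape[1])
        # Per-modality univariate feature selection, ignoring cross-modality links.
        selected.append(SelectKBest(f_classif, k=k).fit_transform(X, y))
    fused = np.concatenate(selected, axis=1)             # naive feature concatenation
    # Classical dimensionality reduction on the fused vector (PCA as one example;
    # n_components may not exceed min(n_samples, n_features)).
    fused = PCA(n_components=min(n_components, *fused.shape)).fit_transform(fused)
    return SVC(kernel='rbf').fit(fused, y)               # single-kernel SVM classifier

A multi-kernel approach in the spirit of Zhang et al. would instead compute one kernel per modality and combine them with weights inside the SVM, which is one way of retaining the modality structure that plain concatenation discards.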
[ "<|reference_start|> HuatuoGPT, towards Taming Language Model to Be a Doctor: In this paper, we present HuatuoGPT, a large language model (LLM) for medical consultation. The core recipe of HuatuoGPT is to leverage both \\textit{distilled data from ChatGPT} and \\textit{real-world data from doctors} in the supervised fine-tuned stage. The responses of ChatGPT are usually detailed, well-presented and informative while it cannot perform like a doctor in many aspects, e.g. for integrative diagnosis. We argue that real-world data from doctors would be complementary to distilled data in the sense the former could tame a distilled language model to perform like doctors. To better leverage the strengths of both data, we train a reward model to align the language model with the merits that both data bring, following an RLAIF (reinforced learning from AI feedback) fashion. To evaluate and benchmark the models, we propose a comprehensive evaluation scheme (including automatic and manual metrics). Experimental results demonstrate that HuatuoGPT achieves state-of-the-art results in performing medical consultation among open-source LLMs in GPT-4 evaluation, human evaluation, and medical benchmark datasets. It is worth noting that by using additional real-world data and RLAIF, the distilled language model (i.e., HuatuoGPT) outperforms its teacher model ChatGPT in most cases. Our code, data, and models are publicly available at \\url{https://github.com/FreedomIntelligence/HuatuoGPT}. The online demo is available at \\url{https://www.HuatuoGPT.cn/}. <|reference_end|>", "<|reference_start|> Multimodal Neuroimaging Feature Learning for Multiclass Diagnosis of Alzheimer’s Disease: The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care and will be increasingly important as disease modifying agents become available, early in the course of the disease. Although studies have applied machine learning methods for the computer-aided diagnosis of AD, a bottleneck in the diagnostic performance was shown in previous methods, due to the lacking of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to the previous state-of-the-art workflows, our method is capable of fusing multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary classification and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed. <|reference_end|>", "<|reference_start|> Review of Alzheimer's disease scales: is there a need for a new multi-domain scale for therapy evaluation in medical practice?: <|reference_end|>", "<|reference_start|> Human Gait Analysis in Neurodegenerative Diseases: a Review.: This paper reviews the recent literature on technologies and methodologies for quantitative human gait analysis in the context of neurodegenerative diseases. The use of technological instruments can be of great support in both clinical diagnosis and severity assessment of these pathologies. In this paper, sensors, features and processing methodologies have been reviewed in order to provide a highly consistent work that explores the issues related to gait analysis. 
First, the phases of the human gait cycle are briefly explained, along with some non-normal gait patterns (gait abnormalities) typical of some neurodegenerative diseases. Then the paper reports the most common processing techniques for both feature selection and extraction and for classification and clustering. Finally, a conclusive discussion on current open problems and future directions is outlined. <|reference_end|>" ]
[ 1, 10, 18, 31 ]
{"<|cite_1|>": "ss-892484", "<|cite_2|>": "arxiv-508537", "<|cite_3|>": "arxiv-497019", "<|cite_4|>": "arxiv-494203", "<|cite_5|>": "ss-892484", "<|cite_6|>": "ss-2505633", "<|cite_7|>": "ss-1469050", "<|cite_8|>": "ss-1606932", "<|cite_9|>": "ss-2083940", "<|cite_10|>": "ss-1300756", "<|cite_11|>": "ss-1449475", "<|cite_12|>": "ss-2083940", "<|cite_13|>": "ss-971688", "<|cite_14|>": "ss-866537", "<|cite_15|>": "ss-800623", "<|cite_16|>": "ss-2505634", "<|cite_17|>": "ss-2505635", "<|cite_18|>": "ss-2505636", "<|multi_cite_19_1|>": "ss-2505637", "<|multi_cite_19_2|>": "ss-1606932", "<|cite_20|>": "ss-1746679", "<|cite_21|>": "ss-1190282", "<|cite_22|>": "ss-2505638", "<|cite_23|>": "ss-2505639", "<|cite_24|>": "ss-2505640", "<|cite_25|>": "ss-2492750", "<|cite_26|>": "ss-2505641", "<|cite_27|>": "ss-2505642", "<|cite_28|>": "ss-1608119", "<|cite_29|>": "ss-982949", "<|cite_30|>": "ss-2505643", "<|cite_31|>": "ss-2505644", "<|cite_32|>": "ss-2505645", "<|cite_33|>": "ss-2256469", "<|cite_34|>": "ss-2128619", "<|cite_35|>": "ss-2505646", "<|cite_36|>": "ss-1973623", "<|cite_37|>": "ss-2496149", "<|cite_38|>": "ss-1526066", "<|cite_39|>": "ss-973452", "<|cite_40|>": "ss-1033512", "<|cite_41|>": "ss-1011329", "<|cite_42|>": "ss-2547882", "<|cite_43|>": "ss-1606932", "<|cite_44|>": "ss-2083940", "<|cite_45|>": "ss-691131", "<|cite_46|>": "ss-2505647", "<|cite_47|>": "ss-2554050", "<|cite_48|>": "ss-691131", "<|cite_49|>": "ss-819753", "<|cite_50|>": "ss-819753", "<|cite_51|>": "ss-692341", "<|cite_52|>": "ss-2554050", "<|cite_53|>": "ss-715157"}
<|paper_start|> Title: Implicit Resolution Abstract: Implicit Resolution: Let \Omega be an unsatisfiable set of clauses; an implicit resolution refutation of \Omega is a circuit \beta together with a resolution proof {\alpha} of the statement "\beta describes a correct tree-like resolution refutation of \Omega". We show that such a system is p-equivalent to Extended Frege. More generally, let {\tau} be a tautology; a [P, Q]-proof of {\tau} is a pair (\alpha,\beta) s.t. \alpha is a P-proof of the statement "\beta is a circuit describing a correct Q-proof of \tau". We prove that [EF,P] \leq_p [R,P] for an arbitrary Cook-Reckhow proof system P. Introduction In proof complexity one of the basic questions that has remained open is whether or not there is an optimal proof system. Although there is no consensus on whether such a proof system should exist, it is generally believed that Extended Frege is the pivotal case, in the sense that if such an optimal proof system exists then Extended Frege is currently the most natural candidate. This is because Extended Frege corresponds to the complexity class $P/poly$, and many attempts at constructing proof systems that are conjecturally stronger than Extended Frege ended up producing systems that are equivalent to Extended Frege. Implicit proofs were introduced by Kraj{\'{\i}}{\v{c}}ek <|cite_start|> (Reference: Implicit proofs: Abstract. We describe a general method how to construct from a propositional proof system P a possibly much stronger proof system iP. The system iP operates with exponentially long P-proofs described "implicitly" by polynomial size circuits. As an example we prove that proof system iEF, implicit EF, corresponds to bounded arithmetic theory and hence, in particular, polynomially simulates the quantified propositional calculus G and the -consequences of proved with one use of exponentiation. Furthermore, the soundness of iEF is not provable in . An iteration of the construction yields a proof system corresponding to T2 + Exp and, in principle, to much stronger theories.) <|cite_end|> as a general framework for direct combinatorial constructions of strong proof systems beyond Extended Frege. The idea is to succinctly describe an exponential size proof by some polynomial size circuit, and then to supplement this circuit with an additional correctness proof. Loosely speaking, let $P$ and $Q$ be some existing proof systems and $\tau$ be a tautology; a $[P, Q]$-proof of $\tau$ is a pair $(\alpha, \beta)$ s.t. $\alpha$ is a $P$-proof of the formalized statement ``$\beta$ is a circuit describing a correct $Q$-proof of $\tau$''. For any proof system $P$ the implicit version of $P$, denoted $iP$, is the proof system defined as $[P,P]$. Whilst a hierarchy of implicit proof systems based on Extended Frege was introduced in <|cite_start|> (Reference: Implicit proofs: Abstract. We describe a general method how to construct from a propositional proof system P a possibly much stronger proof system iP. The system iP operates with exponentially long P-proofs described "implicitly" by polynomial size circuits. As an example we prove that proof system iEF, implicit EF, corresponds to bounded arithmetic theory and hence, in particular, polynomially simulates the quantified propositional calculus G and the -consequences of proved with one use of exponentiation. Furthermore, the soundness of iEF is not provable in . An iteration of the construction yields a proof system corresponding to T2 + Exp and, in principle, to much stronger theories.)
<|cite_end|>, the system $iEF$ is of particular interest. We may think of $iEF$ as the ``succinct'' version of exponential size Extended Frege proofs: it bears a correspondence to exponential time computation and serves as the base case of the iterated construction of a strong implicit proof system whose soundness is not provable in the theory $T^2+Exp$ in which the exponentiation function is total. We would therefore expect that insights into the problem of whether $iEF$ is indeed stronger than Extended Frege will contribute empirical evidence towards the study of the existence of optimal proof systems. In contrast to strong proof systems such as Extended Frege, for which we do not even have any candidate hard tautologies, resolution is a refutational proof system with the resolution rule as the only derivation rule. It has been extensively studied since its introduction, and substantial progress has been made in understanding its limits. Resolution is known to be inefficient for proving a number of combinatorial principles. For example, Haken <|cite_start|> (Reference: The Intractability of Resolution: ) <|cite_end|> first proved that the propositional pigeonhole principle requires exponential size resolution refutations. More recently, a systematic treatment of lower bounds for resolution in terms of clause width was presented by Ben-Sasson and Wigderson in <|cite_start|> (Reference: Short proofs are narrow---resolution made simple: The width of a Resolution proof is defined to be the maximal number of literals in any clause of the proof. In this paper, we relate proof width to proof length (=size), in both general Resolution, and its tree-like variant. The following consequences of these relations reveal width as a crucial "resource" of Resolution proofs. In one direction, the relations allow us to give simple, unified proofs for almost all known exponential lower bounds on size of resolution proofs, as well as several interesting new ones. They all follow from width lower bounds, and we show how these follow from natural expansion property of clauses of the input tautology. In the other direction, the width-size relations naturally suggest a simple dynamic programming procedure for automated theorem proving—one which simply searches for small width proofs. This relation guarantees that the running time (and thus the size of the produced proof) is at most quasi-polynomial in the smallest tree-like proof. This algorithm is never much worse than any of the recursive automated provers (such as DLL) used in practice. In contrast, we present a family of tautologies on which it is exponentially faster.) <|cite_end|>. In this paper we are motivated to understand Extended Frege in terms of resolution and implicit proofs. In Theorem \ref{t_main} we show that Extended Frege is p-equivalent to a resolution based proof system in the framework of implicit proofs\footnote{This was first conjectured by Kraj{\'{\i}}{\v{c}}ek.}. We generalize the construction in Theorem \ref{t_gen} to prove that $[EF,P] \leq_p [R,P]$ for any proof system $P$, hence showing that $iEF$ collapses to $[R,EF]$, although we are not able to address the precise strength of the latter. As a by-product, in Lemma \ref{t_c} we show that the existence of an $NP$ search algorithm that is provably correct in Extended Frege implies the existence of such an algorithm that is provably correct in resolution. The paper is organized as follows. We briefly review the definition of resolution and fix notation in Section \ref{s_pre}.
In Section \ref{s_circuit} we present a prototype of the key technical construction in terms of the correctness of $NP$ search algorithms. In Section \ref{s_ires} we give a precise definition of implicit resolution and prove the main result that it is p-equivalent to Extended Frege. In Section \ref{s_gen} we outline the construction applied to general implicit proof systems and briefly discuss generalizations to subsystems of $EF$. <|paper_end|>
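A toy example may help fix intuition for the resolution rule that the refutations above are built from; it is a standard illustration rather than an example taken from the paper. From clauses $C \vee x$ and $D \vee \neg x$, the resolution rule derives the resolvent $C \vee D$. For the unsatisfiable set $\Omega = \{\, x,\ \neg x \vee y,\ \neg y \,\}$, a tree-like refutation derives the empty clause $\Box$ in two steps:
\[
\frac{x \qquad \neg x \vee y}{y}\,, \qquad\qquad \frac{y \qquad \neg y}{\Box}\,.
\]
In the implicit setting, a circuit $\beta$ describing such a refutation would, given the index of a node of the proof tree, output the clause at that node together with the indices of its premises, and the accompanying proof $\alpha$ would certify that every described step is a correct application of the rule.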
[ "<|reference_start|> Implicit proofs: Abstract. We describe a general method how to construct from a prepositional proof system P a possibly much stronger proof system iP. The system iP operates with exponentially long P-proofs described “implicitly” by polynomial size circuits. As an example we prove that proof system iEF, implicit EF, corresponds to bounded arithmetic theory and hence, in particular, polynomially simulates the quantified prepositional calculus G and the -consequences of proved with one use of exponentiation. Furthermore, the soundness of iEF is not provable in . An iteration of the construction yields a proof system corresponding to T2 + Exp and, in principle, to much stronger theories. <|reference_end|>", "<|reference_start|> Implicit proofs: Abstract. We describe a general method how to construct from a prepositional proof system P a possibly much stronger proof system iP. The system iP operates with exponentially long P-proofs described “implicitly” by polynomial size circuits. As an example we prove that proof system iEF, implicit EF, corresponds to bounded arithmetic theory and hence, in particular, polynomially simulates the quantified prepositional calculus G and the -consequences of proved with one use of exponentiation. Furthermore, the soundness of iEF is not provable in . An iteration of the construction yields a proof system corresponding to T2 + Exp and, in principle, to much stronger theories. <|reference_end|>", "<|reference_start|> The Intractability of Resolution: <|reference_end|>", "<|reference_start|> Short proofs are narrow---resolution made simple: The widthof a Resolution proof is defined to be the maximal number of literals in any clause of the proof. In this paper, we relate proof width to proof length (=size), in both general Resolution, and its tree-like variant. The following consequences of these relations reveal width as a crucial “resource” of Resolution proofs.\nIn one direction, the relations allow us to give simple, unified proofs for almost all known exponential lower bounds on size of resolution proofs, as well as several interesting new ones. They all follow from width lower bounds, and we show how these follow from natural expansion property of clauses of the input tautology.\nIn the other direction, the width-size relations naturally suggest a simple dynamic programming procedure for automated theorem proving—one which simply searches for small width proofs. This relation guarantees that the runnuing time (and thus the size of the produced proof) is at most quasi-polynomial in the smallest tree-like proof. This algorithm is never much worse than any of the recursive automated provers (such as DLL) used in practice. In contrast, we present a family of tautologies on which it is exponentially faster. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|cite_1|>": "ss-1178468", "<|cite_2|>": "ss-1178468", "<|cite_3|>": "ss-902058", "<|cite_4|>": "ss-1241845"}
<|paper_start|> Title: The few-get-richer: a surprising consequence of popularity-based rankings Abstract: The few-get-richer: a surprising consequence of popularity-based rankings: Ranking algorithms play a crucial role in online platforms ranging from search engines to recommender systems. In this paper, we identify a surprising consequence of popularity-based rankings: the fewer the items reporting a given signal, the higher the share of the overall traffic they collectively attract. This few-get-richer effect emerges in settings where there are few distinct classes of items (e.g., left-leaning news sources versus right-leaning news sources), and items are ranked based on their popularity. We demonstrate analytically that the few-get-richer effect emerges when people tend to click on top-ranked items and have heterogeneous preferences for the classes of items. Using simulations, we analyze how the strength of the effect changes with assumptions about the setting and human behavior. We also test our predictions experimentally in an online experiment with human participants. Our findings have important implications for understanding the spread of misinformation. Introduction Ranking systems are at the core of many online services, including search engines, recommender systems, and news feeds in social media. Recent research suggests that the underlying ranking algorithms may impact society, playing an active role in the spread of misinformation <|cite_start|> (Reference: {The spread of true and false news online: Lies spread faster than the truth There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146 A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth. We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.)
<|cite_end|>, political polarization <|cite_start|> (Reference: Biased assimilation, homophily, and the dynamics of polarization: We study the issue of polarization in society through a model of opinion formation. We say an opinion formation process is polarizing if it results in increased divergence of opinions. Empirical studies have shown that homophily, i.e., greater interaction between like-minded individuals, results in polarization. However, we show that DeGroot’s well-known model of opinion formation based on repeated averaging can never be polarizing, even if individuals are arbitrarily homophilous. We generalize DeGroot’s model to account for a phenomenon well known in social psychology as biased assimilation: When presented with mixed or inconclusive evidence on a complex issue, individuals draw undue support for their initial position, thereby arriving at a more extreme opinion. We show that in a simple model of homophilous networks, our biased opinion formation process results in polarization if individuals are sufficiently biased. In other words, homophily alone, without biased assimilation, is not sufficient to polarize society. Quite interestingly, biased assimilation also provides a framework to analyze the polarizing effect of Internet-based recommender systems that show us personalized content.) <|cite_end|>, or trustworthiness <|cite_start|> (Reference: Through the Google Goggles: Sociopolitical Bias in Search Engine Design: ) <|cite_end|>. They might also reinforce existing judgment biases <|cite_start|> (Reference: Bias on the web: Bias in Web data and use taints the algorithms behind Web-based applications, delivering equally biased results.) <|cite_end|>. Rankings systematically affect the information people access about products, services, events, or ideas, because users are more likely to click on top-ranked items <|cite_start|> (Reference: Accurately interpreting clickthrough data as implicit feedback: This paper examines the reliability of implicit feedback generated from clickthrough data in WWW search. Analyzing the users' decision process using eyetracking and comparing implicit feedback against manual relevance judgments, we conclude that clicks are informative but biased. While this makes the interpretation of clicks as absolute relevance judgments difficult, we show that relative preferences derived from clicks are reasonably accurate on average.) <|cite_end|> <|cite_start|> (Reference: Position Bias Estimation for Unbiased Learning to Rank in Personal Search: A well-known challenge in learning from click data is its inherent bias and most notably position bias. Traditional click models aim to extract the ‹query, document› relevance and the estimated bias is usually discarded after relevance is extracted. In contrast, the most recent work on unbiased learning-to-rank can effectively leverage the bias and thus focuses on estimating bias rather than relevance [20, 31]. Existing approaches use search result randomization over a small percentage of production traffic to estimate the position bias. This is not desired because result randomization can negatively impact users' search experience. In this paper, we compare different schemes for result randomization (i.e., RandTopN and RandPair) and show their negative effect in personal search. Then we study how to infer such bias from regular click data without relying on randomization. 
We propose a regression-based Expectation-Maximization (EM) algorithm that is based on a position bias click model and that can handle highly sparse clicks in personal search. We evaluate our EM algorithm and the extracted bias in the learning-to-rank setting. Our results show that it is promising to extract position bias from regular clicks without result randomization. The extracted bias can improve the learning-to-rank algorithms significantly. In addition, we compare the pointwise and pairwise learning-to-rank models. Our results show that pairwise models are more effective in leveraging the estimated bias.) <|cite_end|> <|cite_start|> (Reference: How Endogenous Crowd Formation Undermines the Wisdom of the Crowd in Online Ratings: People frequently consult average ratings on online recommendation platforms before making consumption decisions. Research on the wisdom-of-the-crowd phenomenon suggests that average ratings provide unbiased quality estimates. Yet we argue that the process by which average ratings are updated creates a systematic bias. In analyses of more than 80 million online ratings, we found that items with high average ratings tend to attract more additional ratings than items with low average ratings. We call this asymmetry in how average ratings are updated endogenous crowd formation. Using computer simulations, we showed that it implies the emergence of a negative bias in average ratings. This bias affects items with few ratings particularly strongly, which leads to ranking mistakes. The average-rating rankings of items with few ratings are worse than their quality rankings. We found evidence for the predicted pattern of biases in an experiment and in analyses of large online-rating data sets.) <|cite_end|>. When items are ranked based on popularity, this leads to a self-reinforcing dynamic in which popular items become ever more popular <|cite_start|> (Reference: {Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market: Hit songs, books, and movies are many times more successful than average, suggesting that "the best" alternatives are qualitatively different from "the rest"; yet experts routinely fail to predict which products will succeed. We investigated this paradox experimentally, by creating an artificial "music market" in which 14,341 participants downloaded previously unknown songs either with or without knowledge of previous participants' choices. Increasing the strength of social influence increased both inequality and unpredictability of success. Success was also only partly determined by quality: The best songs rarely did poorly, and the worst rarely did well, but any other result was possible.) <|cite_end|>. In this paper, we identify a surprising effect of popularity-based rankings. Consider a setting with two distinct classes of news sources that differ in their political orientations, e.g., left-leaning or right-leaning. We show that, under a fairly broad set of conditions, \emph{the total share} of web traffic (proportion of clicks) attracted by a given class of news sources \emph{decreases} with the number of news sources in that class. We call this phenomenon the `few-get-richer' effect. For example, if there are 20 news sources, the total number of clicks on left-leaning sources will be larger when there are just 3 of these sources than when there are 17 of them. Intuition suggests that popular items should be more relevant and trustworthy than unpopular ones.
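Before turning to why that intuition can mislead, here is a minimal simulation sketch of the few-get-richer dynamic just described. It is an illustration rather than the paper's exact model: the 1/rank position-bias weights, the half-partisan user mix, and all parameter values are simplifying assumptions.

import random

def simulate_few_get_richer(n_left=3, n_right=17, n_users=20000, frac_partisan=0.5, seed=0):
    # Two classes of items ('L' and 'R'), ranked by accumulated clicks.
    rng = random.Random(seed)
    classes = ['L'] * n_left + ['R'] * n_right
    n_items = len(classes)
    clicks = [0] * n_items
    for _ in range(n_users):
        # Popularity ranking: indices sorted by click count, ties broken at random.
        order = sorted(range(n_items), key=lambda i: (-clicks[i], rng.random()))
        rank = {item: r + 1 for r, item in enumerate(order)}
        u = rng.random()
        if u < frac_partisan / 2:                 # left-leaning user
            allowed = [i for i in range(n_items) if classes[i] == 'L']
        elif u < frac_partisan:                   # right-leaning user
            allowed = [i for i in range(n_items) if classes[i] == 'R']
        else:                                     # indifferent user considers all items
            allowed = list(range(n_items))
        # Position bias: click probability proportional to 1 / rank.
        weights = [1.0 / rank[i] for i in allowed]
        chosen = rng.choices(allowed, weights=weights, k=1)[0]
        clicks[chosen] += 1
    return sum(c for i, c in enumerate(clicks) if classes[i] == 'L') / n_users

print(simulate_few_get_richer(n_left=3, n_right=17))   # left share of total traffic
print(simulate_few_get_richer(n_left=17, n_right=3))

Under these assumptions, the minority class tends to collect clicks well beyond its partisan base: with 3 left-leaning sources the simulated left share typically exceeds the 25% contributed by left-leaning users alone, while with 17 left-leaning sources it tends to fall well short of 75%.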
Yet, extensive research indicates that popularity is often not very informative about quality, especially in settings characterized by `rich-get-richer' dynamics (sometimes called the `Matthew effect') <|cite_start|> (Reference: Information Sampling, Belief Synchronization, and Collective Illusions: We demonstrate that a sampling-based mechanism can offer an alternative explanation for belief synchronization in social groups and the persistence of collective illusions. Our model assumes that people are more likely to sample popular alternatives than unpopular alternatives. We show that this mechanism is sufficient to explain belief synchronization: a strong majority of opinions will likely emerge in favor of one alternative. The reason is that the group is unlikely to move away from a state in which one alternative is very unpopular. If by chance most people come to dislike alternative A, they are all unlikely to sample it again and their opinions of A remain the same. When A is in fact the best alternative, a collective illusion has emerged because people mistakenly believe that a suboptimal alternative is the best. Our model implies that such a collective illusion is persistent. The model thus offers an existence proof that a collective illusion can occur even in settings where people do not infer ...) <|cite_end|> <|cite_start|> (Reference: Information Sampling, Judgment, and the Environment: Application to the Effect of Popularity on Evaluations: If people avoid alternatives they dislike, a negative evaluative bias emerges because errors of under-evaluation are unlikely to be corrected. Prior work that analyzed this mechanism has shown that when the social environment exposes people to avoided alternatives (i.e., it makes them resample them), then evaluations can become systematically more positive. In this paper, we clarify the conditions under which this happens. By analyzing a simple learning model, we show that whether additional exposures induced by the social environment lead to more positive or more negative evaluations depends on how prior evaluations and the social environment interact in driving resampling. We apply these insights to the study of the effect of popularity on evaluations. We show theoretically that increased popularity leads to more positive evaluations when popularity mainly increases the chances of resampling for individuals with low current evaluations. Data on repeat stays at hotels are consistent with this condition: The popularity of a hotel mainly impacts the chances of a repeat stay for individuals with low satisfaction scores. Our results illustrate how a sampling approach can help to explain when and why people tend to like popular alternatives. They also shed new light on the polarization of attitudes across social groups.) <|cite_end|> <|cite_start|> (Reference: The Matthew effect in science. The reward and communication systems of science are considered.: ) <|cite_end|> <|cite_start|> (Reference: {Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market: Hit songs, books, and movies are many times more successful than average, suggesting that “the best” alternatives are qualitatively different from “the rest”; yet experts routinely fail to predict which products will succeed. We investigated this paradox experimentally, by creating an artificial “music market” in which 14,341 participants downloaded previously unknown songs either with or without knowledge of previous participants' choices. 
Increasing the strength of social influence increased both inequality and unpredictability of success. Success was also only partly determined by quality: The best songs rarely did poorly, and the worst rarely did well, but any other result was possible.) <|cite_end|>, or information cascades <|cite_start|> (Reference: A simple model of herd behavior: We analyze a sequential decision model in which each decision maker looks at the decisions made by previous decision makers in taking her own decision. This is rational for her because these other decision makers may have some information that is important for her. We then show that the decision rules that are chosen by optimizing individuals will be characterized by herd behavior; i.e., people will be doing what others are doing rather than using their information. We then show that the resulting equilibrium is inefficient.) <|cite_end|> <|cite_start|> (Reference: A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades: ) <|cite_end|>. In these settings, the randomness inherent to the dynamics of the system implies that the items that become the most popular are not always those with the best quality. The `few-get-richer' effect adds to research on `rich-get-richer' dynamics by showing that popularity-based rankings not only create `noise' in the ranking, but can also lead to a systematic ranking bias: when there are two distinct classes of items, items from the smaller class become better ranked than similar items from the larger class. The few-get-richer effect emerges in settings characterized by two design features. The first feature is the ranking of items by popularity (i.e., items with more clicks are higher ranked). The second feature is a partition of the available items into two (or more) distinct classes. We make two reasonable behavioral assumptions. The first assumption is users' tendency to click on top-ranked items. The second assumption is that users have heterogeneous preferences for the item classes. Some users have a preference for items of a particular class, while others have a preference for items of other classes. Still other users are indifferent to the item class. Returning to our news search example, suppose there are few left-leaning and many right-leaning news sources. We assume there are three types of users: left-leaning, right-leaning, and indifferent. The heterogeneous preference assumption means that left-leaning users are more likely to click on left-leaning news sources, right-leaning users are more likely to click on right-leaning news sources, and indifferent users click exclusively based on rank. Even if the left-leaning news sources are unpopular, left-leaning individuals will seek them out. Because there are few such news sources, the clicks of these left-leaning individuals will be concentrated on a few news sources, and these sources will tend to `shoot up to the top'. Once a news source has gotten close to the top, it will attract not only the clicks from the left-leaning individuals, but also the clicks of indifferent users, simply because of the rich-get-richer dynamics. This is the few-get-richer effect. Related Work Our results contribute to the understanding of the limitations of recommender systems <|cite_start|> (Reference: Evaluating {{Collaborative Filtering Recommender Systems: Recommender systems have been evaluated in many, often incomparable, ways.
In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated.) <|cite_end|> <|cite_start|> (Reference: Being accurate is not enough: How accuracy metrics have hurt recommender systems: Recommender systems have shown great potential to help users find interesting and relevant items from within a large information space. Most research up to this point has focused on improving the accuracy of recommender systems. We believe that not only has this narrow focus been misguided, but has even been detrimental to the field. The recommendations that are most accurate according to the standard metrics are sometimes not the recommendations that are most useful to users. In this paper, we propose informal arguments that the recommender community should move beyond the conventional accuracy metrics and their associated experimental methodologies. We propose new user-centric directions for evaluating recommender systems.) <|cite_end|> <|cite_start|> (Reference: How Endogenous Crowd Formation Undermines the Wisdom of the Crowd in Online Ratings: People frequently consult average ratings on online recommendation platforms before making consumption decisions. Research on the wisdom-of-the-crowd phenomenon suggests that average ratings provide unbiased quality estimates. Yet we argue that the process by which average ratings are updated creates a systematic bias. In analyses of more than 80 million online ratings, we found that items with high average ratings tend to attract more additional ratings than items with low average ratings. We call this asymmetry in how average ratings are updated endogenous crowd formation. Using computer simulations, we showed that it implies the emergence of a negative bias in average ratings. This bias affects items with few ratings particularly strongly, which leads to ranking mistakes. The average-rating rankings of items with few ratings are worse than their quality rankings. We found evidence for the predicted pattern of biases in an experiment and in analyses of large online-rating data sets.) <|cite_end|> <|cite_start|> (Reference: How algorithmic popularity bias hinders or promotes quality: Algorithms that favor popular items are used to help us select among many choices, from engaging articles on a social media news feed to songs and books that others have purchased, and from top-raked search engine results to highly-cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, beautiful movies, prestigious information sources, and important discoveries --- in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and ultimately lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content "bubble up" in practice. 
Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the critical trade-off between quality and popularity. We find a regime of intermediate exploration cost where an optimal balance exists, such that choosing what is popular actually promotes high-quality items to the top. Outside of these limits, however, popularity bias is more likely to hinder quality. These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.) <|cite_end|> <|cite_start|> (Reference: Implementing the "wisdom of the crowd": We study a novel mechanism design model in which agents each arrive sequentially and choose one action from a set of actions with unknown rewards. The information revealed by the principal affects the incentives of the agents to explore and generate new information. We characterize the optimal disclosure policy of a planner whose goal is to maximize social welfare. One interpretation of our result is the implementation of what is known as the “wisdom of the crowd.” This topic has become increasingly relevant with the rapid spread of the Internet over the past decade.) <|cite_end|> <|cite_start|> (Reference: Recommender systems as mechanisms for social learning: This article studies how a recommender system may incentivize users to learn about a product collaboratively. To improve the incentives for early exploration, the optimal design trades off fully transparent disclosure by selectively overrecommending the product (or “spamming”) to a fraction of users. Under the optimal scheme, the designer spams very little on a product immediately after its release but gradually increases its frequency; she stops it altogether when she becomes sufficiently pessimistic about the product. The recommender’s product research and intrinsic/naive users “seed” incentives for user exploration and determine the speed and trajectory of social learning. Potential applications for various Internet recommendation platforms and implications for review/ratings inflation are discussed.) <|cite_end|>, with direct applications to the design of fair, transparent and efficient ranking systems <|cite_start|> (Reference: Fairness and transparency in ranking: Ranking in Information Retrieval (IR) has been traditionally evaluated from the perspective of the relevance of search engine results to people searching for information, i.e., the extent to which the system provides "the right information, to the right people, in the right way, at the right time." However, people in current IR systems are not only the ones issuing search queries, but increasingly they are also the ones being searched. This raises several new problems in IR that have been addressed in recent research, particularly with respect to fairness/non-discrimination, accountability, and transparency. This is a summary of some these initial developments.) <|cite_end|> <|cite_start|> (Reference: Equity of Attention: Amortizing Individual Fairness in Rankings: Rankings of people and items are at the heart of selection-making, match-making, and recommender systems, ranging from employment sites to sharing economy platforms. 
As ranking positions influence the amount of attention the ranked subjects receive, biases in rankings can lead to unfair distribution of opportunities and resources, such as jobs or income. This paper proposes new measures and mechanisms to quantify and mitigate unfairness from a bias inherent to all rankings, namely, the position bias, which leads to disproportionately less attention being paid to low-ranked subjects. Our approach differs from recent fair ranking approaches in two important ways. First, existing works measure unfairness at the level of subject groups while our measures capture unfairness at the level of individual subjects, and as such subsume group unfairness. Second, as no single ranking can achieve individual attention fairness, we propose a novel mechanism that achieves amortized fairness, where attention accumulated across a series of rankings is proportional to accumulated relevance. We formulate the challenge of achieving amortized individual fairness subject to constraints on ranking quality as an online optimization problem and show that it can be solved as an integer linear program. Our experimental evaluation reveals that unfair attention distribution in rankings can be substantial, and demonstrates that our method can improve individual fairness while retaining high ranking quality.) <|cite_end|> <|cite_start|> (Reference: FA*IR: A Fair Top-k Ranking Algorithm: In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n >> k candidates, maximizing utility (i.e., select the "best" candidates) subject to group fairness criteria. Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given minimum. Utility is operationalized in two ways: (i) every candidate included in the top-$k$ should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked above. An efficient algorithm is presented for producing the Fair Top-k Ranking, and tested experimentally on existing datasets as well as new datasets released with this paper, showing that our approach yields small distortions with respect to rankings that maximize utility without considering fairness criteria. To the best of our knowledge, this is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list.) <|cite_end|>, as well as methods to reduce the spread of misinformation <|cite_start|> (Reference: Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation: Online social networking sites are experimenting with the following crowd-powered procedure to reduce the spread of fake news and misinformation: whenever a user is exposed to a story through her feed, she can flag the story as misinformation and, if the story receives enough flags, it is sent to a trusted third party for fact checking. If this party identifies the story as misinformation, it is marked as disputed. 
However, given the uncertain number of exposures, the high cost of fact checking, and the trade-off between flags and exposures, the above mentioned procedure requires careful reasoning and smart algorithms which, to the best of our knowledge, do not exist to date. In this paper, we first introduce a flexible representation of the above procedure using the framework of marked temporal point processes. Then, we develop a scalable online algorithm, Curb, to select which stories to send for fact checking and when to do so to efficiently reduce the spread of misinformation with provable guarantees. In doing so, we need to solve a novel stochastic optimal control problem for stochastic differential equations with jumps, which is of independent interest. Experiments on two real-world datasets gathered from Twitter and Weibo show that our algorithm may be able to effectively reduce the spread of fake news and misinformation.) <|cite_end|> <|cite_start|> (Reference: {The spread of true and false news online: Lies spread faster than the truth There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146 A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth. We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.) <|cite_end|> or uncivil behavior <|cite_start|> (Reference: News values, cognitive biases, and partisan incivility in comment sections: Partisan incivility is prevalent in news comments, but we have limited insight into how journalists and news users engage with it. Gatekeeping, cognitive bias, and social identity theories suggest that journalists may tolerate incivility while users actively promote partisan incivility. 
Using 9.6 million comments from The New York Times, we analyze whether the presence of uncivil and partisan terms affects how journalists and news users engage with comments. Results show that partisanship and incivility increase recommendations and the likelihood of receiving an abuse flag. Swearing increases the likelihood of a comment being rejected and reduces the chances of being highlighted as a NYT Pick. These findings suggest that journalists and news users interact with partisan incivility differently, and that some forms of incivility may be promoted or tacitly accepted in comments.) <|cite_end|> <|cite_start|> (Reference: Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions: In online communities, antisocial behavior such as trolling disrupts constructive discussion. While prior work suggests that trolling behavior is confined to a vocal and antisocial minority, we demonstrate that ordinary people can engage in such behavior as well. We propose two primary trigger mechanisms: the individual's mood, and the surrounding context of a discussion (e.g., exposure to prior trolling behavior). Through an experiment simulating an online discussion, we find that both negative mood and seeing troll posts by others significantly increases the probability of a user trolling, and together double this probability. To support and extend these results, we study how these same mechanisms play out in the wild via a data-driven, longitudinal analysis of a large online news discussion community. This analysis reveals temporal mood effects, and explores long range patterns of repeated exposure to trolling. A predictive model of trolling behavior shows that mood and discussion context together can explain trolling behavior better than an individual's history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls.) <|cite_end|>. An extensive literature on modeling (click) user behavior is weakly related to our work <|cite_start|> (Reference: User-click Modeling for Understanding and Predicting Search-behavior: Recent advances in search users' click modeling consider both users' search queries and click/skip behavior on documents to infer the user's perceived relevance. Most of these models, including dynamic Bayesian networks (DBN) and user browsing models (UBM), use probabilistic models to understand user click behavior based on individual queries. The user behavior is more complex when her actions to satisfy her information needs form a search session, which may include multiple queries and subsequent click behaviors on various items on search result pages. Previous research is limited to treating each query within a search session in isolation, without paying attention to their dynamic interactions with other queries in a search session. Investigating this problem, we consider the sequence of queries and their clicks in a search session as a task and propose a task-centric click model~(TCM). TCM characterizes user behavior related to a task as a collective whole. Specifically, we identify and consider two new biases in TCM as the basis for user modeling. The first indicates that users tend to express their information needs incrementally in a task, and thus perform more clicks as their needs become clearer. The other illustrates that users tend to click fresh documents that are not included in the results of previous queries. Using these biases, TCM is more accurately able to capture user search behavior. 
Extensive experimental results demonstrate that by considering all the task information collectively, TCM can better interpret user click behavior and achieve significant improvements in terms of ranking metrics of NDCG and perplexity.) <|cite_end|> <|cite_start|> (Reference: A dynamic Bayesian network click model for web search ranking: As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. The main difficulty however comes from the so called position bias - urls appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance.) <|cite_end|> <|cite_start|> (Reference: A User Browsing Model to Predict Search Engine Click Data from Past Observations.: Search engine click logs provide an invaluable source of relevance information but this information is biased because we ignore which documents from the result list the users have actually seen before and after they clicked. Otherwise, we could estimate document relevance by simple counting. In this paper, we propose a set of assumptions on user browsing behavior that allows the estimation of the probability that a document is seen, thereby providing an unbiased estimate of document relevance. To train, test and compare our model to the best alternatives described in the Literature, we gather a large set of real data and proceed to an extensive cross-validation experiment. Our solution outperforms very significantly all previous models. As a side effect, we gain insight into the browsing behavior of users and we can compare it to the conclusions of an eye-tracking experiments by Joachims et al. [12]. In particular, our findings confirm that a user almost always see the document directly after a clicked document. They also explain why documents situated just after a very relevant document are clicked more often.) <|cite_end|> <|cite_start|> (Reference: Predicting Clicks: Estimating the Click-through Rate for New Ads: Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-though rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction.) <|cite_end|>. 
Closer to our work, several papers have proposed models of the dynamics of interactions between individual searchers and ranking algorithms, e.g., for understanding the feedback loop between the ranking system and user queries <|cite_start|> (Reference: Collective attention and ranking methods: In a world with a tremendous amount of choices, ranking systems are becoming increasingly important in helping individuals to find information relevant to them. As such, rankings play a crucial role of influencing the attention that is devoted to the various alternatives. This role generates a feedback when the ranking is based on citations, as is the case for PageRank used by Google. The attention bias due to published rankings affects new stated opinions (citations), which will, in turn, affect the next ranking. The purpose of this paper is to investigate this feedback by studying some simple but reasonable dynamics. We show that the long run behavior of the process much depends on the preferences, in particular on their diversity, and on the used ranking method. Two main families of methods are investigated, one based on the notion of 'handicaps', the other one on the notion of peers' rankings.) <|cite_end|>, explaining the observed mitigation of search engines' popularity bias <|cite_start|> (Reference: Topical interests and the mitigation of search engine bias: Search engines have become key media for our scientific, economic, and social activities by enabling people to access information on the web despite its size and complexity. On the down side, search engines bias the traffic of users according to their page ranking strategies, and it has been argued that they create a vicious cycle that amplifies the dominance of established and already popular sites. This bias could lead to a dangerous monopoly of information. We show that, contrary to intuition, empirical data do not support this conclusion; popular sites receive far less traffic than predicted. We discuss a model that accurately predicts traffic data patterns by taking into consideration the topical interests of users and their searching behavior in addition to the way search engines rank pages. The heterogeneity of user interests explains the observed mitigation of search engines’ popularity bias.) <|cite_end|>, or the competition among memes for limited attention <|cite_start|> (Reference: Competition among memes in a world with limited attention: ) <|cite_end|>. The paper closest to ours is <|cite_start|> (Reference: Opinion Dynamics via Search Engines (and other Algorithmic Gatekeepers): Ranking algorithms are the information gatekeepers of the Internet era. We develop a stylized model to study the effects of ranking algorithms on opinion dynamics. We consider a search engine that uses an algorithm based on popularity and on personalization. We find that popularity-based rankings generate an advantage of the fewer effect: fewer websites reporting a given signal attract relatively more traffic overall. This highlights a novel, ranking-driven channel that explains the diffusion of misinformation, as websites reporting incorrect information may attract an amplified amount of traffic precisely because they are few. Furthermore, when individuals provide sufficiently positive feedback to the ranking algorithm, popularity-based rankings tend to aggregate information while personalization acts in the opposite direction.)
<|cite_end|>, which also obtains a few-get-richer effect in a model where individuals get multiple signals and where (news) items are ranked via a probabilistic popularity-based ranking. Besides being simpler, our model works with a discrete and deterministic ranking of the websites rather than a continuous and probabilistic one. Among other things, this allows for a tighter connection with the experiment of Section~\ref{sec:exp}. <|paper_end|>
[ "<|reference_start|> {The spread of true and false news online: Lies spread faster than the truth There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146 A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth. We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it. <|reference_end|>", "<|reference_start|> Through the Google Goggles: Sociopolitical Bias in Search Engine Design: <|reference_end|>", "<|reference_start|> {The spread of true and false news online: Lies spread faster than the truth There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed. Science, this issue p. 1146 A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth. We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. 
Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it. <|reference_end|>", "<|reference_start|> Topical interests and the mitigation of search engine bias: Search engines have become key media for our scientific, economic, and social activities by enabling people to access information on the web despite its size and complexity. On the down side, search engines bias the traffic of users according to their page ranking strategies, and it has been argued that they create a vicious cycle that amplifies the dominance of established and already popular sites. This bias could lead to a dangerous monopoly of information. We show that, contrary to intuition, empirical data do not support this conclusion; popular sites receive far less traffic than predicted. We discuss a model that accurately predicts traffic data patterns by taking into consideration the topical interests of users and their searching behavior in addition to the way search engines rank pages. The heterogeneity of user interests explains the observed mitigation of search engines’ popularity bias. <|reference_end|>" ]
[ 0, 2, 24, 32 ]
{"<|cite_1|>": "ss-1517289", "<|cite_2|>": "ss-1020319", "<|cite_3|>": "ss-1330835", "<|cite_4|>": "ss-1143416", "<|multi_cite_5_1|>": "ss-993795", "<|multi_cite_5_2|>": "ss-770030", "<|multi_cite_5_3|>": "ss-1878459", "<|cite_6|>": "ss-1059378", "<|multi_cite_7_1|>": "ss-1878460", "<|multi_cite_7_2|>": "ss-1878461", "<|multi_cite_7_3|>": "ss-680991", "<|multi_cite_7_4|>": "ss-1059378", "<|multi_cite_8_1|>": "ss-1347265", "<|multi_cite_8_2|>": "ss-1347264", "<|multi_cite_9_1|>": "ss-1249973", "<|multi_cite_9_2|>": "ss-897015", "<|multi_cite_9_3|>": "ss-1878459", "<|multi_cite_9_4|>": "arxiv-128307", "<|multi_cite_9_5|>": "ss-802973", "<|multi_cite_9_6|>": "ss-1282043", "<|multi_cite_10_1|>": "ss-1292251", "<|multi_cite_10_2|>": "arxiv-157354", "<|multi_cite_10_3|>": "arxiv-127212", "<|multi_cite_11_1|>": "arxiv-141380", "<|multi_cite_11_2|>": "ss-1517289", "<|multi_cite_12_1|>": "ss-710376", "<|multi_cite_12_2|>": "arxiv-115805", "<|multi_cite_13_1|>": "ss-1878462", "<|multi_cite_13_2|>": "ss-1129768", "<|multi_cite_13_3|>": "ss-770026", "<|multi_cite_13_4|>": "ss-1430097", "<|cite_14|>": "ss-1878463", "<|cite_15|>": "ss-1378938", "<|cite_16|>": "ss-1569185", "<|cite_17|>": "arxiv-176448"}
2112.14317
<|paper_start|> Title: Quantum Merkle Trees Abstract: Quantum Merkle Trees: Committing to information is a central task in cryptography, where a party (typically called a prover) stores a piece of information (e.g., a bit string) with the promise of not changing it. This information can be accessed by another party (typically called the verifier), who can later learn the information and verify that it was not meddled with. Merkle trees are a well-known construction for doing so in a succinct manner, in which the verifier can learn any part of the information by receiving a short proof from the honest prover. Despite its significance in classical cryptography, there was no quantum analog of the Merkle tree. A direct generalization using the Quantum Random Oracle Model (QROM) does not seem to be secure. In this work, we propose the quantum Merkle tree. It is based on what we call the Quantum Haar Random Oracle Model (QHROM). In QHROM, both the prover and the verifier have access to a Haar random quantum oracle $G$ and its inverse. Using the quantum Merkle tree, we propose a succinct quantum argument for the Gap-$k$-Local-Hamiltonian problem. Assuming the Quantum PCP conjecture is true, this succinct argument extends to all of QMA. This work raises a number of interesting open research problems. Introduction A commitment scheme <|cite_start|> (Reference: Minimum Disclosure Proofs of Knowledge: ) <|cite_end|> is a cryptographic primitive that allows a party (\ie, a prover) to (1) commit to a piece of information such as a bit string while keeping it hidden from others and (2) reveal the information they have committed to later. Commitment schemes are designed to ensure that a party cannot change the information after they have committed to it. Commitment schemes have numerous applications in cryptography, such as the construction of protocols for secure coin flipping, zero-knowledge proofs, and secure computation. The Merkle tree <|cite_start|> (Reference: A Digital Signature Based on a Conventional Encryption Function: ) <|cite_end|> is an efficient example of a commitment scheme, and it captures the following scenario: There are two parties, the prover $\caP$ and the verifier $\caV$. $\caP$ first computes a short string called the commitment, denoted by $\commit(x)$, from a long input string $x$ and sends $\commit(x)$ to $\caV$. Then $\caV$ asks $\caP$ to reveal a subset of bits of $x$ together with a short message that would enable $\caV$ to verify that the string $x$ has not been altered. The security promise is that once $\caP$ has sent $\commit(x)$ to $\caV$, upon $\caV$'s request for any subset of bits, a computationally bounded $\caP$ can only reveal those bits faithfully. Namely, if $\caP$ claims that the $i$-th bit of $x$ is the wrong value $1-x_i$, then her claim will be rejected by $\caV$ with high probability. The Merkle tree has wide applications in cryptography since it allows $\caP$ to \emph{delegate} a potentially very long string (\ie, a database) to $\caV$ while enabling $\caV$ to maintain \emph{efficiently verifiable} random access to that string (say, to any subset of its bits). A well-known application of the Merkle tree is the construction of succinct arguments for $\NP$ from probabilistically checkable proofs <|cite_start|> (Reference: A note on efficient zero-knowledge proofs and arguments (extended abstract): In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP.
To show that x ∈ L, with an error probability of at most 2^{-k}, our zero-knowledge proof system requires O(|x|^{c_1}) + O(lg^{c_2} |x|) k ideal bit commitments, where c_1 and c_2 depend only on L. This construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. Under suitable complexity assumptions, we exhibit zero knowledge arguments that require O(lg^c |x|) k l bits of communication, where c depends only on L, and l is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the NP witness. Our protocols are based on efficiently checkable proofs for NP [4].) <|cite_end|> <|cite_start|> (Reference: COMPUTATIONALLY SOUND PROOFS *: This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to prove that verifying is easier than deciding for all theorems; provide a quite effective way to prove membership in computationally hard languages (such as ${\cal C}o$-$\cal N \cal P$-complete ones); and show that every computation possesses a short certificate vouching its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that $\cal N \cal P$-complete languages are checkable.) <|cite_end|> or interactive oracle proofs <|cite_start|> (Reference: Fast Reed-Solomon Interactive Oracle Proofs of Proximity: The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice. To advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity.
For codes of block-length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N and constant soundness - words that are delta-far from the code are rejected with probability min{delta * (1-o(1)),delta_0} where delta_0 is a positive constant that depends mainly on the code rate. The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero knowledge proof and argument systems. Previous concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of "proof composition" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of "proof composition" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness.) <|cite_end|>, where by succinctness one means that the total communication between the prover and verifier constitutes a small number of bits, say $\polylog(n)$ bits of communication. Despite being very influential in (classical) cryptography, there is no known quantum analog of the Merkle tree that allows committing to quantum states. Such a quantum analog is appealing since it would allow a party to delegate a large quantum state $\sigma$ to another party while maintaining verifiable access to individual qubits. Protocols based on the classical Merkle tree are often analyzed in the random oracle model. There are also quantum models such as the Quantum Random Oracle Model <|cite_start|> (Reference: Random Oracles in a Quantum World: The interest in post-quantum cryptography - classical systems that remain secure in the presence of a quantum adversary - has generated elegant proposals for new cryptosystems. Some of these systems are set in the random oracle model and are proven secure relative to adversaries that have classical access to the random oracle. We argue that to prove post-quantum security one needs to prove security in the quantum-accessible random oracle model where the adversary can query the random oracle with quantum states. We begin by separating the classical and quantum-accessible random oracle models by presenting a scheme that is secure when the adversary is given classical access to the random oracle, but is insecure when the adversary can make quantum oracle queries. We then set out to develop generic conditions under which a classical random oracle proof implies security in the quantum-accessible random oracle model. We introduce the concept of a history-free reduction which is a category of classical random oracle reductions that basically determine oracle answers independently of the history of previous queries, and we prove that such reductions imply security in the quantum model. We then show that certain post-quantum proposals, including ones based on lattices, can be proven secure using history-free reductions and are therefore post-quantum secure. We conclude with a rich set of open problems in this area.) <|cite_end|> ($\QROM$) for analyzing the quantum attacks against the classical Merkle tree. 
There are works showing that classical Merkle-tree-based protocols are secure against quantum attacks <|cite_start|> (Reference: Succinct Arguments in the Quantum Random Oracle Model: ) <|cite_end|> <|cite_start|> (Reference: Post-Quantum Succinct Arguments: Breaking the Quantum Rewinding Barrier: We prove that Kilian's four-message succinct argument system is post-quantum secure in the standard model when instantiated with any probabilistically checkable proof and any collapsing hash function (which in turn exist based on the post-quantum hardness of Learning with Errors). This yields the first post-quantum succinct argument system from any falsifiable assumption. At the heart of our proof is a new quantum rewinding procedure that enables a reduction to repeatedly query a quantum adversary for accepting transcripts as many times as desired. Prior techniques were limited to a constant number of accepting transcripts.) <|cite_end|>. These works showed that commitments to classical bit strings via the Merkle tree cannot be broken by quantum adversaries. Here, in contrast, we hope to obtain a quantum analog of the Merkle tree that can be used to commit to quantum states. In this work, we propose a new random oracle model which we call the Quantum Haar Random Oracle Model ($\QHROM$), and we use it to construct the {\it quantum Merkle tree}. The quantum Merkle tree, in turn, yields a quantum analog of Kilian's succinct argument for $\NP$, whose security we conjecture. \subsection{The Merkle Tree Algorithm} Our definition of $\QHROM$ is motivated by our adaptation of the Merkle tree to the quantum setting, so it is instructive to recall the standard Merkle tree algorithm. Let $\blkpara \in \N$ be the block-length parameter. We assume that both $\caP$ and $\caV$ have access to a random oracle function $h\colon \bits^{2\blkpara} \to \bits^{\blkpara}$. For simplicity of the argument, we will first focus on the simplest non-trivial case of a Merkle tree with two leaves and depth one, and take $n = 2\blkpara$ to be the length of the string that $\caP$ wishes to commit to. Here the string $x$ resides on the leaves and the commitment string $\commit(x)$ resides on the root. As we will see shortly, a straightforward adaptation of the Merkle tree to the quantum setting is not secure even in this simple setting. \begin{figure}[H] \centering \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, semithick,scale = 1.2] \tikzstyle{every state}=[text=black,rectangle, minimum width=2.5cm] \tikzstyle{arrow}=[thick] \node (comname) at (-0.1,2) {\footnotesize $\commit(x)$}; \node [state] (com) at (-0.1,1.2) {$h(x_1,\dotsc,x_{2\blkpara})$}; \node [state] (data1) at (-1.3,0) {$x_1,\dotsc,x_\blkpara$}; \node [state] (data2) at (1.1,0) {$x_{\blkpara+1},\dotsc,x_{2\blkpara}$}; \draw (comname) -- (-2.1,2); \node (send) at (-3.1,2) {$\caP$ sends to $\caV$}; \end{tikzpicture} \caption{An illustration of the toy example for the classical Merkle tree} \label{fig:toy-example-classic} \end{figure} In this simplified setting, the protocol starts by $\caP$ simply sending the hash value $\tilde{h} = h(x)$ of $x \in \bits^{2\blkpara}$, which serves as the commitment $\commit(x)$ of length $\blkpara$, to $\caV$ (see~\autoref{fig:toy-example-classic} for an illustration). Then $\caV$ requests the values of a subset of bits in $x$, to which the honest $\caP$ simply responds by revealing the whole string $x$ to $\caV$. Then $\caV$ checks that the string has the same hash value $\tilde{h}$.
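To make the commit--reveal--verify flow concrete, here is a minimal sketch of this depth-one protocol. The sketch is ours and purely illustrative, not part of the original construction: SHA-256 truncated to $\blkpara$ bits stands in for the random oracle $h$, and the function names (`commit`, `reveal`, `verify`) are our own.

```python
import hashlib

BLKPARA = 16  # block length in bits; the oracle maps 2*BLKPARA bits to BLKPARA bits

def h(x: str) -> str:
    """Stand-in for the random oracle h: {0,1}^(2k) -> {0,1}^k.

    We hash the bit string with SHA-256 and keep the first BLKPARA bits.
    """
    assert len(x) == 2 * BLKPARA and set(x) <= {"0", "1"}
    digest = hashlib.sha256(x.encode()).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    return bits[:BLKPARA]

# --- Prover side ---
def commit(x: str) -> str:
    """P sends the BLKPARA-bit commitment h(x) to V."""
    return h(x)

def reveal(x: str) -> str:
    """On V's request for any subset of bits, the honest P reveals all of x."""
    return x

# --- Verifier side ---
def verify(commitment: str, revealed_x: str, i: int) -> int:
    """V recomputes the hash; on success, it reads off the requested bit x_i."""
    if h(revealed_x) != commitment:
        raise ValueError("reveal does not match the commitment; reject")
    return int(revealed_x[i])

x = format(0xBEEF, f"0{2 * BLKPARA}b")
c = commit(x)
assert verify(c, reveal(x), 5) == int(x[5])
# Convincing V of the flipped bit 1 - x_5 would require exhibiting a different
# preimage of c, i.e., a hash collision, which takes ~2^(BLKPARA/2) oracle queries.
```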
If a (dishonest) $\caP$ can first commit to $x$ and later convince $\caV$ that its $i$-th bit is $1 - x_i$, then $\caP$ has found two strings $x \ne \tilde{x}$ with $h(x) = h(\tilde{x})$. This requires at least $2^{\blkpara/2}$ queries to the random oracle $h$ due to the birthday paradox, which is infeasible. \subsection{A Failed Attempt to Adapt the Merkle Tree to the Quantum Setting} Let us see how one might directly try to adapt the special case of the Merkle tree algorithm above to the quantum setting. An immediate idea is that, given a $2\blkpara$-qubit quantum state $\spz{\psi} = \sum_{z} \alpha_z \spz{z}$ in the register denoted by $\data$, $\caP$ treats $h$ as a quantum oracle $O_h$\footnote{That is, $O_h \spz{x}\spz{y} = \spz{x}\spz{y \oplus h(x)}$, where $x\in \bits^{2\blkpara}$, $y \in \bits^\blkpara$, and $\oplus$ denotes the entry-wise addition over $\GF(2)$.}, creates $\blkpara$ qubits initialized to $\spz{0^\blkpara}$ in register $\com$, applies $O_h$ to both $\data$ and $\com$ to obtain $\sum_{z} \alpha_z \spz{z} \spz{h(z)}$, and sends the $\com$ register to $\caV$; see~\autoref{fig:toy-example} for an illustration. \begin{figure}[H] \centering \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, semithick,scale = 1.2] \tikzstyle{every state}=[text=black,rectangle, minimum width=1.5cm] \tikzstyle{arrow}=[thick] \node (comname) at (-0.1,2) {\footnotesize $\com$: $\blkpara$ qubits}; \node [state] (com) at (-0.1,1.2) {$\spz{0^\blkpara}$}; \node [state] (data1) at (-1.3,0) {}; \node [state] (data2) at (1.1,0) {}; \node (phistate) at (-0.1,0) {$\spz{\psi}$}; \node[text width=5cm] (applyG) at (0,-1) {$\underbrace{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}_{\text{$\data$: $2\blkpara$ qubits}}$}; \draw [decorate,decoration={brace,amplitude=10pt,mirror,raise=4pt},yshift=0pt] (2.5,-1.2) -- (2.5,2) node [black,midway,xshift=3.0cm] {\footnotesize apply $\caG$ or $O_h$}; \draw (comname) -- (-2.1,2); \node (send) at (-3.1,2) {$\caP$ sends to $\caV$}; \end{tikzpicture} \caption{An illustration of the toy example for the quantum Merkle tree} \label{fig:toy-example} \end{figure} To reveal qubits in $\spz{\psi}$, $\caP$ simply sends the $\data$ register to $\caV$ as well, and $\caV$ applies $O_h$ again to both $\data$ and $\com$, and measures $\com$ in the computational basis to check that it is $0^{\blkpara}$, rejecting immediately otherwise. However, this is not secure against a \emph{phase attack}. After sending $\com$ to $\caV$, for every Boolean function $f\colon \bits^{2\blkpara} \to \bits$, $\caP$ can apply the unitary $\spz{z} \mapsto (-1)^{f(z)} \spz{z}$ to $\data$, and then send it to $\caV$. One can see that $\caV$ still accepts this state with probability $1$, but $\caP$ has cheated by changing the state from $\sum_{z} \alpha_z \spz{z}$ to $\sum_{z} (-1)^{f(z)} \alpha_z \spz{z}$, which can be an entirely different state for some function $f$. The issue above is that the mapping $O_h$, $\spz{x}\spz{y} \mapsto \spz{x}\spz{y \oplus h(x)}$, has too much structure, which the attacker can exploit. This immediately suggests considering a more random choice of quantum oracle; indeed, we take \emph{the most} random choice of quantum oracle: a Haar random quantum oracle. Comment: One way to address the phase attack above is to make $O_h$ more complicated. For example, instead of applying $O_h$ once to the registers $\data$ and $\com$, we can repeatedly apply $O_h \Had^{\otimes 2\blkpara}$ several times ($\Had$ denotes the Hadamard gate).
We found such a construction more cumbersome and even harder to analyze than a Haar random unitary. Moreover, it is conjectured~\cite[Section~6]{JiL018} that similar constructions may already be indistinguishable from a Haar random unitary (see~\autoref{sec:suc-discussions} for more discussions). Hence, it seems more natural to directly work with a Haar random unitary. \subsection{The Quantum Haar Random Oracle Model ($\QHROM$) and Quantum Merkle Tree} We are now ready to introduce the Quantum Haar Random Oracle Model ($\QHROM$). In $\QHROM$, both $\caP$ and $\caV$ have access to a Haar random quantum oracle $\caG$ and its inverse $\caG^\dagger$, which act on $3\blkpara$ qubits (see~\autoref{defi:QHROM} for a precise definition). The protocol between $\caP$ and $\caV$ remains the same for the special case $n = 2\blkpara$, except for replacing $O_h$ by $\caG$. It is easy to see that since $\caG$ completely \emph{obfuscates} the original state $\spz{\psi}$, the phase attack described above no longer applies. Next, we describe the quantum Merkle tree in the general setting, in which $n$ can be arbitrarily large and denotes the number of qubits in the state that $\caP$ wishes to commit to. Given a quantum state $\sigma$ on $n = \blkpara \cdot \ell$ qubits for some $\ell=2^d$ and $d \in \N$,\footnote{We can always pad any quantum state to such a length by adding dummy qubits. This at most doubles the number of qubits.} we partition its qubits into $\ell$ consecutive blocks of length $\blkpara$, denoted $x^{(1)},x^{(2)},\dotsc,x^{(\ell)}$. Then, we build a perfect binary tree with $\ell$ leaves (see~\autoref{fig:binarytree}), where each leaf corresponds to a block of the input. Next, from the leaves to the root, we assign to each node $\alpha$ a $\blkpara$-qubit register $\com_\alpha$ as follows: (1) if $\alpha$ is a leaf, then $\com_\alpha$ is simply the qubits of the assigned block, and (2) if $\alpha$ is an intermediate node with two children $\beta$ and $\gamma$, then we initialize $\com_\alpha$ to $\spz{0^\blkpara}$ and apply $\caG$ to the three registers $\com_\beta$, $\com_\gamma$, and $\com_\alpha$. Finally, $\caP$ sends the register $\com_{\sf rt}$ to $\caV$, where ${\sf rt}$ is the root of the binary tree. Suppose $\caV$ requests the $i$-th block $x^{(i)}$ of the quantum state, which resides on a leaf that we denote $\mu$. To reveal it, $\caP$ sends all the registers $\com_\alpha$ for nodes $\alpha$ that are (1) ancestors of $\mu$ (excluding the root ${\sf rt}$, whose register $\caV$ already holds), (2) siblings of an ancestor of $\mu$, or (3) $\mu$ itself or the sibling of $\mu$. $\caV$ then ``undoes'' all the applied $\caG$ in the exact reverse order by applying $\caG^\dagger$ to the registers sent by $\caP$, starting from the register $\com_{\sf rt}$ and proceeding from the root downwards to the leaves. After that, for every ancestor $\alpha$ of $\mu$, $\caV$ checks that $\com_\alpha$ is $\spz{0^\blkpara}$ by measuring it in the computational basis. To illustrate, if $\caV$ asks for the block $x^{(2)}$, then $\caP$ sends the corresponding $\com_\alpha$ registers for all diamond-shaped nodes in~\autoref{fig:binarytree-intro}.
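The contrast between the structured oracle $O_h$ and a Haar random $\caG$ can be checked numerically in the depth-one special case ($n = 2\blkpara$) before turning to the figure below. The following sketch is ours and purely illustrative: it uses SciPy's `unitary_group` as a stand-in for the Haar random oracle, a uniformly random function as $h$, and the toy parameter $\blkpara = 3$.

```python
import numpy as np
from scipy.stats import unitary_group

K = 3                       # blkpara (toy size)
DATA, COM = 2 * K, K        # data register: 2K qubits, com register: K qubits
DIM = 2 ** (DATA + COM)     # the oracle acts on 3K qubits
rng = np.random.default_rng(0)

# |psi> on data, |0^K> on com; the full basis index is z * 2^COM + y.
psi = rng.normal(size=2 ** DATA) + 1j * rng.normal(size=2 ** DATA)
psi /= np.linalg.norm(psi)
e0 = np.zeros(2 ** COM); e0[0] = 1.0
state0 = np.kron(psi, e0)

# The phase attack |z> -> (-1)^{f(z)} |z> on data, for a random f.
signs = rng.choice([-1.0, 1.0], size=2 ** DATA)
phase_attack = np.repeat(signs, 2 ** COM)  # diagonal on data, identity on com

def accept_prob(v):
    """Probability that measuring com in the computational basis yields 0^K."""
    return float(np.sum(np.abs(v[:: 2 ** COM]) ** 2))

# --- Scheme 1: the XOR oracle O_h|z>|y> = |z>|y xor h(z)> (self-inverse) ---
h = rng.integers(0, 2 ** COM, size=2 ** DATA)
def apply_Oh(v):
    w = np.empty_like(v)
    for z in range(2 ** DATA):
        for y in range(2 ** COM):
            w[z * 2 ** COM + (y ^ int(h[z]))] = v[z * 2 ** COM + y]
    return w

cheated = apply_Oh(phase_attack * apply_Oh(state0))
print("O_h scheme, phase-attacked state accepted with prob", accept_prob(cheated))

# --- Scheme 2: a Haar random oracle G, as in QHROM ---
G = unitary_group.rvs(DIM, random_state=1)
cheated = G.conj().T @ (phase_attack * (G @ state0))
print("Haar scheme, phase-attacked state accepted with prob", accept_prob(cheated))
print("Haar scheme, honest reveal accepted with prob",
      accept_prob(G.conj().T @ (G @ state0)))
```

In this toy run the phase-attacked state passes verification with probability $1$ under $O_h$, but only with probability roughly $2^{-\blkpara}$ under a Haar random $\caG$ (the exact value fluctuates with the draws of $\caG$ and $f$); the honest reveal is always accepted.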
\begin{figure} \begin{center} \begin{tikzpicture}[level/.style={sibling distance=120mm/#1},scale = 0.5] \node [circle,draw]{$1$} child { node [diamond,draw] {$2$} child { node [diamond,draw] {$4$} child { node {$\vdots$} child {node [diamond,draw,scale = 0.75, minimum size=1.3cm] (aa) {\footnotesize$\ell$}} child {node [diamond,draw,scale = 0.75, minimum size=1.3cm] (a) {\footnotesize$\ell + 1$}} } child {node {$\vdots$}} } child { node [diamond,draw] {$5$} child {node {$\vdots$}} child {node {$\vdots$}} } } child { node [diamond,draw] {$3$} child { node [circle,draw] {$6$} child {node {$\vdots$}} child {node {$\vdots$}} } child { node [circle,draw] {$7$} child {node {$\vdots$}} child {node {$\vdots$} child {node [circle,draw,scale = 0.75, minimum size=1.3cm] (b) {\footnotesize$2\ell - 2$}} child {node [circle,draw,scale = 0.75, minimum size=1.3cm] (bb){\footnotesize$2\ell - 1$}} } } }; \path (a) -- (b) node [midway] {$\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots$}; \node [below of= aa] {$x^{(1)}$}; \node [below of= a] {$x^{(2)}$}; \node [below of= b] {$x^{(\ell-1)}$}; \node [below of= bb] {$x^{(\ell)}$}; \end{tikzpicture} \caption{An illustration of the quantum Merkle tree with $\ell = 2^{d}$ input blocks; when block $x^{(2)}$ is requested by $\caV$, $\caP$ sends all the diamond-shaped nodes.}\label{fig:binarytree-intro} \end{center} \end{figure} Comment: How might one heuristically instantiate a Haar random unitary? One might use a random quantum circuit that approximates the behavior of a Haar random unitary well, for example a circuit of polynomial depth. One way to formalize the degree of approximation is via the notion of a unitary $k$-design <|cite_start|> (Reference: Local Random Quantum Circuits are Approximate Polynomial-Designs: ) <|cite_end|>. \subsection{A Candidate for Succinct Quantum Argument for Gap-$k$-$\LH$ in $\QHROM$} Similar to Kilian's succinct argument for $\NP$, the quantum Merkle tree naturally suggests a succinct argument $\Pisuc$ for the Gap Local Hamiltonian Problem. We first recall its definition below. \begin{definition} (Gap-$k$-Local Hamiltonian Problem) Given $\alpha,\beta$ with $0<\alpha<\beta\le 1$ and a $k$-local Hamiltonian with $m$ local terms $\{ H_i \}_{i \in [m]}$ such that $0 \le H_i \le I$, decide whether $\lmin(\sum_{i=1}^{m}H_i)$ is at most $\alpha m$ or at least $\beta m$. Below we abbreviate this problem by $(\alpha,\beta)\text{-}k\text{-}\LH$. \end{definition} Formally, in $\Pisuc$ the honest prover $\caP$ applies the quantum Merkle tree to a ground state $\sigma$ of $\sum_{i=1}^{m} H_i$, and sends $\com_{\sf rt}$ to $\caV$. Then $\caV$ draws an integer $i$ from $\{1,2,\dotsc,m\}$ uniformly at random and asks $\caP$ to reveal the qubits in the support of the term $H_i$. $\caV$ performs the decommitment from the root towards the qubits in the support of $H_i$ as described above. If in this decommitment phase the registers at the ancestors of the qubits in the support of $H_i$ all measure to $0^\blkpara$, $\caV$ proceeds to the last step, in which it measures the POVM $\{H_i, I - H_i\}$ on the qubits in the support of $H_i$ and rejects if it obtains the outcome corresponding to $H_i$. Indeed, this is the natural analog of Kilian's succinct argument <|cite_start|> (Reference: A note on efficient zero-knowledge proofs and arguments (extended abstract): In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP.
To show that x ∈ L, with an error probability of at most 2^{-k}, our zero-knowledge proof system requires O(|x|^{c_1}) + O(lg^{c_2} |x|) k ideal bit commitments, where c_1 and c_2 depend only on L. This construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. Under suitable complexity assumptions, we exhibit zero knowledge arguments that require O(lg^c |x|) k l bits of communication, where c depends only on L, and l is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the NP witness. Our protocols are based on efficiently checkable proofs for NP [4].) <|cite_end|> in the quantum setting. We prove that if $\caP$ follows the protocol, then (1) when $\lambda_{\sf min}(\sum_{i=1}^m H_i) \le \alpha \cdot m$, $\caP$ can make $\caV$ accept with probability at least $1 - \alpha$, and (2) when $\lambda_{\sf min}(\sum_{i=1}^m H_i) \ge \beta \cdot m$, $\caP$ cannot force $\caV$ to accept with probability greater than $1 - \beta < 1 - \alpha$ (see~\autoref{theo:Pisuc-c-and-s} for details). By a sequential repetition argument, the completeness $1-\alpha$ and the soundness $1-\beta$ can be boosted to $1 - n^{-\omega(1)}$ and $n^{-\omega(1)}$, respectively, where $\omega(1)$ denotes a super-constant function. However, a malicious $\caP$ may deviate from the protocol and instead send $\caV$ arbitrary states for the different nodes that are not the result of the quantum Merkle tree algorithm. We currently do not know how to analyze such an arbitrary attack, but we conjecture the following: \begin{conjecture}\label{conj:main-conj} For any constants $k \in \N$ and $0 < \alpha < \beta \le 1$, $\Pisuc$ (with sequential repetition) for $(\alpha,\beta)$-$k$-$\LH$ has completeness $1 - n^{-\omega(1)}$ and soundness $n^{-\omega(1)}$ in $\QHROM$. \end{conjecture} \subsection{Open Questions} We believe our inability to prove~\autoref{conj:main-conj} is mainly due to the lack of tools available for analyzing this new $\QHROM$ setting. We remark that only two years ago <|cite_start|> (Reference: Succinct Arguments in the Quantum Random Oracle Model: ) <|cite_end|> managed to prove that the succinct argument for $\NP$ <|cite_start|> (Reference: A note on efficient zero-knowledge proofs and arguments (extended abstract): In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP.
To show that x ∈ L, with an error probability of at most 2^{-k}, our zero-knowledge proof system requires O(|x|^{c_1}) + O(lg^{c_2} |x|) k ideal bit commitments, where c_1 and c_2 depend only on L. This construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. Under suitable complexity assumptions, we exhibit zero knowledge arguments that require O(lg^c |x|) k l bits of communication, where c depends only on L, and l is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the NP witness. Our protocols are based on efficiently checkable proofs for NP [4].) <|cite_end|> <|cite_start|> (Reference: COMPUTATIONALLY SOUND PROOFS *: This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to prove that verifying is easier than deciding for all theorems; provide a quite effective way to prove membership in computationally hard languages (such as ${\cal C}o$-$\cal N \cal P$-complete ones); and show that every computation possesses a short certificate vouching its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that $\cal N \cal P$-complete languages are checkable.) <|cite_end|> is secure in the $\QROM$ by using the recently proposed \emph{compressed oracles} technique introduced in <|cite_start|> (Reference: How to Record Quantum Queries, and Applications to Quantum Indifferentiability: ) <|cite_end|>, which gives a nice way to analyze $\QROM$. To prove the security of our succinct argument for Gap-$k$-$\LH$, one likely needs similar advances for analyzing the $\QHROM$. We now list some specific open problems: \begin{open} Is there an analog of the compressed oracle technique in <|cite_start|> (Reference: How to Record Quantum Queries, and Applications to Quantum Indifferentiability: ) <|cite_end|> for the $\QHROM$? \end{open} Above, we generalized Kilian's constant-round succinct argument <|cite_start|> (Reference: A note on efficient zero-knowledge proofs and arguments (extended abstract): In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP.
To show that x ∈ L, with an error probability of at most 2^{-k}, our zero-knowledge proof system requires O(|x|^{c_1}) + O(lg^{c_2} |x|) k ideal bit commitments, where c_1 and c_2 depend only on L. This construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. Under suitable complexity assumptions, we exhibit zero knowledge arguments that require O(lg^c |x|) k l bits of communication, where c depends only on L, and l is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the NP witness. Our protocols are based on efficiently checkable proofs for NP [4].) <|cite_end|> to the quantum setting and conjectured its soundness. A natural open question is whether we can generalize Micali's non-interactive succinct argument for $\NP$ <|cite_start|> (Reference: COMPUTATIONALLY SOUND PROOFS *: This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to prove that verifying is easier than deciding for all theorems; provide a quite effective way to prove membership in computationally hard languages (such as ${\cal C}o$-$\cal N \cal P$-complete ones); and show that every computation possesses a short certificate vouching its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that $\cal N \cal P$-complete languages are checkable.) <|cite_end|> to the quantum setting as well. \begin{open} Is there an analog of Micali's non-interactive succinct argument for Gap-$k$-$\LH$? \end{open} A particularly useful feature of previous succinct arguments for $\NP$ <|cite_start|> (Reference: A note on efficient zero-knowledge proofs and arguments (extended abstract): In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP. To show that x ∈ L, with an error probability of at most 2^{-k}, our zero-knowledge proof system requires O(|x|^{c_1}) + O(lg^{c_2} |x|) k ideal bit commitments, where c_1 and c_2 depend only on L.
This construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. Under suitable complexity assumptions, we exhibit zero knowledge arguments that require O(lg^c |x|) k l bits of communication, where c depends only on L, and l is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the NP witness. Our protocols are based on efficiently checkable proofs for NP [4].) <|cite_end|> <|cite_start|> (Reference: COMPUTATIONALLY SOUND PROOFS *: This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to prove that verifying is easier than deciding for all theorems; provide a quite effective way to prove membership in computationally hard languages (such as ${\cal C}o$-$\cal N \cal P$-complete ones); and show that every computation possesses a short certificate vouching its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that $\cal N \cal P$-complete languages are checkable.) <|cite_end|> is that they can be made \emph{zero-knowledge} with minimal overhead. A natural open question is whether we can make our proposed succinct argument for Gap-$k$-$\LH$ zero-knowledge as well. \begin{open} Is there a zero-knowledge succinct argument for Gap-$k$-$\LH$ in $\QHROM$? \end{open} Another open question is whether we can instantiate the protocol $\Pisuc$ in the more standard $\QROM$. A natural way to do this is to simulate a Haar random unitary by a standard random oracle; see~\cite[Section~6]{JiL018} for some candidate constructions. \begin{open} Is there a succinct argument for Gap-$k$-$\LH$ in $\QROM$? \end{open} <|paper_end|>
[ "<|reference_start|> Fast Reed-Solomon Interactive Oracle Proofs of Proximity: The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice.\nTo advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity.\nFor codes of block-length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N and constant soundness - words that are delta-far from the code are rejected with probability min{delta * (1-o(1)),delta_0} where delta_0 is a positive constant that depends mainly on the code rate. The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero knowledge proof and argument systems.\nPrevious concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of \"proof composition\" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of \"proof composition\" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness. <|reference_end|>", "<|reference_start|> Succinct Arguments in the Quantum Random Oracle Model: <|reference_end|>", "<|reference_start|> A note on efficient zero-knowledge proofs and arguments (extended abstract): In this note, we present new zero-knowledge interactive proofs and arguments for languages in <italic>NP</italic>. To show that <italic>x ε L</italic>, with an error probability of at most 2<supscrpt>-<italic>k</italic></supscrpt>, our zero-knowledge proof system requires <italic>O</italic>(|<italic>x</italic>|<supscrpt><italic>c</italic><subscrpt>1</subscrpt></supscrpt>)+<italic>O</italic>(lg<supscrpt><italic>c</italic><subscrpt>2</subscrpt></supscrpt>|<italic>x</italic>|)<italic>k</italic> ideal bit commitments, where <italic>c</italic><subscrpt>1</subscrpt> and <italic>c</italic><subscrpt>2</subscrpt> depend only on <italic>L</italic>. This construction is the first in the ideal bit commitment model that achieves large values of <italic>k</italic> more efficiently than by running <italic>k</italic> independent iterations of the base interactive proof system. 
Under suitable complexity assumptions, we exhibit zero knowledge arguments that require <italic>O</italic>(lg<supscrpt>c</supscrpt>|<italic>x</italic>|<italic>kl</italic> bits of communication, where <italic>c</italic> depends only on <italic>L</italic>, and <italic>l</italic> is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the <italic>NP</italic> witness. Our protocols are based on efficiently checkable proofs for <italic>NP</italic>[4]. <|reference_end|>", "<|reference_start|> COMPUTATIONALLY SOUND PROOFS *: This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. \nComputationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to \nprove that verifying is easier than deciding for all theorems; provide a quite effective way to prove membership in computationally hard languages (such as ${\\cal C}o$-$\\cal N \\cal P$-complete ones); and show that every computation possesses a short certificate vouching its correctness. \n \nFinally, if a special type of computationally sound proof exists, we show that Blum's notion of program checking can be meaningfully broadened so as to prove that $\\cal N \\cal P$-complete languages are checkable. <|reference_end|>" ]
[ 4, 10, 15, 18 ]
{"<|cite_1|>": "ss-2555985", "<|cite_2|>": "ss-1255796", "<|multi_cite_3_1|>": "ss-810895", "<|multi_cite_3_2|>": "ss-1309123", "<|cite_4|>": "ss-1309121", "<|cite_5|>": "arxiv-15264", "<|multi_cite_6_1|>": "ss-1961944", "<|multi_cite_6_2|>": "ss-2173061", "<|cite_7|>": "ss-1279501", "<|cite_8|>": "ss-810895", "<|cite_9|>": "ss-1961944", "<|multi_cite_10_1|>": "ss-810895", "<|multi_cite_10_2|>": "ss-1309123", "<|cite_11|>": "ss-1068931", "<|cite_12|>": "ss-1068931", "<|cite_13|>": "ss-810895", "<|cite_14|>": "ss-1309123", "<|multi_cite_15_1|>": "ss-810895", "<|multi_cite_15_2|>": "ss-1309123"}
1410.8594
<|paper_start|> Title: Normality in non-integer bases and polynomial time randomness Abstract: Normality in non-integer bases and polynomial time randomness: It is known that if $x\in[0,1]$ is polynomial time random (i.e. no polynomial time computable martingale succeeds on the binary fractional expansion of $x$) then $x$ is normal in any integer base greater than one. We show that if $x$ is polynomial time random and $\beta>1$ is Pisot, then $x$ is "normal in base $\beta$", in the sense that the sequence $(x\beta^n)_{n\in\mathbb{N}}$ is uniformly distributed modulo one. We work with the notion of "$P$-martingale", a generalization of martingales to non-uniform distributions, and show that a sequence over a finite alphabet is distributed according to an irreducible, invariant Markov measure~$P$ if and only if no $P$-martingale whose betting factors are computed by a deterministic finite automaton succeeds on it. This is a generalization of Schnorr and Stimm's characterization of normal sequences in integer bases. Our results use tools and techniques from symbolic dynamics, together with automata theory and algorithmic randomness. Introduction A weak notion of randomness for sequences over a finite alphabet $\Sigma=\{0,\dots,b-1\}$ ($b\in\NN$) is {\em normality}, introduced by Borel in 1909. Normality may be regarded as a ``law of large numbers" for blocks of events, in the sense that the frequency of occurrences of a block $\sigma\in\Sigma^*$ of length $n$ converges to $|\Sigma|^{-n}$. A real number $x$ is called \textit{normal in base $b$} ($b\in\NN$) if its expansion in base $b$ is normal. While almost all numbers are normal to all bases, it is not too difficult to see that this notion is not base invariant. In fact, for any multiplicatively independent bases $b$ and $b'$ the set of numbers normal to $b$ but not normal to $b'$ has full Hausdorff dimension <|cite_start|> (Reference: The Hausdorff Dimension of a Set of Normal Numbers II: Abstract Let R, S be a partition of 2, 3,… so that rational powers fall in the same class. Let (λn) be any real sequence; we show that there exists a set N, of dimension 1, so that (x + λn) (n = 1,2, …) are normal to every base from R and to no base from S, for every x ∈ N.) <|cite_end|>. We say a number $x$ is \textit{absolutely normal} if it is normal in all integer bases greater than one. It is not difficult to see that $x$ is normal in base $b$ if and only if the sequence $(xb^n)_{n\in\NN}$ is uniformly distributed (u.d.) modulo one, and then $x$ is absolutely normal if and only if $(xb^n)_{n\in\NN}$ is u.d.\ modulo one for all integer $b>1$. Polynomial time randomness is another weak notion of randomness. We say that $x$ is \textit{polynomial time random in base $b$} if no martingale (a formalization of {\em betting strategy}) on the alphabet $\{0,\dots,b-1\}$ which is computable in polynomial time succeeds on the expansion of $x$ in base $b$. A result of Schnorr <|cite_start|> (Reference: Zufälligkeit und Wahrscheinlichkeit: ) <|cite_end|> states that if $x$ is polynomial time random in base $b$ then $x$ is normal in base $b$. It was recently shown <|cite_start|> (Reference: Feasible Analysis, Randomness, and Base Invariance: ) <|cite_end|> that polynomial time randomness is base invariant, so that being polynomial time random in a single base implies being normal for all bases, i.e.\ being absolutely normal.
The converse is not true, since there are absolutely normal numbers which are computable in polynomial time <|cite_start|> (Reference: Feasible Analysis, Randomness, and Base Invariance: ) <|cite_end|> <|cite_start|> (Reference: A polynomial-time algorithm for computing absolutely normal numbers: ) <|cite_end|>, and these cannot be polynomial time random. The following question was left open in <|cite_start|> (Reference: Feasible Analysis, Randomness, and Base Invariance: ) <|cite_end|>: \begin{question} Suppose that $x$ is polynomial time random. Is the sequence $(x\beta^n)_{n\in\NN}$ u.d.\ modulo one for all rational $\beta>1$? \end{question} The distribution of $(x\beta^n)_{n\in\NN}$ modulo one for rational $\beta$ seems, however, fairly intractable. It is unknown, for instance, if $((3/2)^n)_{n\in\NN}$ is u.d.\ modulo one. Our first main result is that there is a class of algebraic reals for which the question may be readily handled: \begin{theorem}\label{thm:main1} If $x$ is polynomial time random then the sequence $(x\beta^n)_{n\in\NN}$ is u.d.\ modulo one for all Pisot $\beta>1$. \end{theorem} Observe that any non-integer Pisot $\beta$ is irrational, and as a consequence of a result of Brown, Moran and Pearce \cite[Theorem 2]{Brown.Moran.etal:86}, there are uncountably many reals $x$ which are absolutely normal but for which $(x\beta^n)_{n\in\NN}$ is not u.d.\ modulo one. The formulation of normality to integer bases $\beta$ in terms of modulo one uniform distribution allows us to understand normality as equivalent to what ergodic theory calls \textit{genericity}, an equivalence which boils down to two facts: 1) the map $T_\beta(x)=(\beta x) \mod 1$ on $[0,1)$ is equivalent to a ``shift" rightwards in the space of sequences $\{0,\dots,\beta-1\}^\mathbb{N}$ when $x$ is mapped to its base $\beta$ expansion; 2) $(x\beta^n)\mod 1=T_\beta^n(x)$. When a non-integer base $\beta$ is considered, 2) is immediately false, while 1) has no clear reformulation, since there is no obvious candidate for a space of sequences that ``represent" numbers in base $\beta$. It is here that the theory of $\beta$-shifts and $\beta$-representations, developed, among others, by Parry <|cite_start|> (Reference: Algebraic independence of the power series related to the beta expansions of real numbers (Analytic Number Theory : Arithmetic Properties of Transcendental Functions and their Applications): ) <|cite_end|> and Bertrand, helps fill in the missing pieces. Once the space of sequences that represent numbers in base $\beta$ (using symbols from $\Sigma=\{0,\dots,\lceil\beta\rceil-1\}$) is defined, it is equipped with a natural shift transformation and a measure $P_\beta$ called the \textit{Parry measure}, which plays the same role that the uniform (Lebesgue) measure plays for integer representations. Indeed, a result by Bertrand says that, when $\beta$ is Pisot, if a real number $x$ has a $\beta$-expansion that is distributed according to $P_\beta$ (this is the notion analogous to being ``normal in base $\beta$''), then $(x\beta^n)_{n\in\NN}$ is u.d.\ modulo one. To see how this is useful for the proof of Theorem~\ref{thm:main1}, let us say we have a number $z$ such that $(z\beta^n)_{n\in\NN}$ is not u.d.\ modulo one. Then, by Bertrand's theorem, its $\beta$-representation would have some block $\sigma$ whose frequency of occurrences does not converge to $P_\beta(\sigma)$. We would then want to construct a polynomial time martingale that succeeds by betting on that block, as is done in the integer base case.
However, this cannot be done in a straightforward manner, since the martingale condition, as used in the algorithmic randomness literature, assumes that outcomes are distributed according to the uniform measure. We work with a generalized definition of martingales which captures the idea of a ``fair" betting strategy when expansions are supposed to obey some non-uniform distribution $P$. Indeed, this definition of a \textit{$P$-martingale} will capture the broader sense of \textit{martingale} as it is used in probability theory. In this setting, not only may the probability of the next symbol be different from $|\Sigma|^{-1}$, it may also show all forms of conditional dependence on the preceding symbols. It should be noted that randomness notions under measures different from Lebesgue have already been considered in, for example, <|cite_start|> (Reference: Randomness: Beyond lebesgue measure: Much of the recent research on algorithmic randomness has focused on randomness for Lebesgue measure. While, from a computability theoretic point of view, the picture remains unchanged if one passes to arbitrary computable measures, interesting phenomena occur if one studies the set of reals which are random for an arbitrary (continuous) probability measure or a generalized Hausdorff measure on Cantor space. This paper tries to give a survey of some of the research that has been done on randomness for non-Lebesgue measures.) <|cite_end|>. Schnorr and Stimm <|cite_start|> (Reference: Endliche Automaten und Zufallsfolgen: ) <|cite_end|> show that a real number $x$ is normal in base $b$ if and only if no martingale on the alphabet of $b$ digits whose betting factors are computed by a deterministic finite automaton (DFA) succeeds on the expansion of $x$ in base $b$. Our second main result is a generalization of this last statement in terms of $P$-martingales: \begin{theorem}\label{thm:main2} A sequence is distributed according to an irreducible, invariant Markov measure $P$ if and only if no $P$-martingale whose betting factors are computed by a DFA succeeds on it. \end{theorem} The importance of Markov measures is that they exhibit enough memorylessness to make them compatible with the memoryless structure of a DFA. As regards $\beta$-representations, a second result by Bertrand establishes that, for Pisot $\beta$, the natural measure $P_\beta$ on $\beta$-expansions is ``hidden'' Markov. By extending Theorem~\ref{thm:main2} to hidden Markov measures we are able to construct a $P_\beta$-martingale generated by a DFA that succeeds on the $\beta$-expansion of $z$. We use the polynomial time computability of the $\beta$-expansion and of the measure $P_\beta$ to show that an integer base (i.e.\ classical) martingale which succeeds on $z$ can be constructed from our $P_\beta$-martingale, following the same ideas used in <|cite_start|> (Reference: Feasible Analysis, Randomness, and Base Invariance: ) <|cite_end|>. \subsection{Outline} The paper is organized as follows. In \S\ref{sec:symdyn} we introduce some basics from symbolic dynamics, mainly the definition of Markov and sofic subshifts, and the notion of sequences distributed according to invariant measures $P$ over the shift. In \S\ref{sec:main} we introduce the notion of $P$-(super)martingales and show the characterization given by Theorem~\ref{thm:main2}. In \S\ref{sec:beta-exp-and-pisot} we introduce some definitions and results related to the representation of reals in non-integer bases, in particular, Pisot bases.
Finally, in \S\ref{sec:polyrndness} we put all the pieces together to obtain Theorem~\ref{thm:main1}. <|paper_end|>
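To make the martingale machinery of this record concrete, here is a minimal sketch of tracking a $P$-martingale's capital along a sequence. The callables `p_next` and `bet` are hypothetical stand-ins, not from the paper; for the DFA-computed strategies of Theorem 2, `bet` would depend only on the automaton state reached by the prefix.

```python
def run_p_martingale(seq, p_next, bet):
    """Track the capital of a P-martingale along `seq` (illustrative sketch).

    p_next(prefix) -> {symbol: P(symbol | prefix)}   conditional distribution
    bet(prefix)    -> {symbol: betting factor}       the betting strategy
    Fairness with respect to P requires, at every prefix,
        sum_s P(s | prefix) * bet(prefix)[s] == 1,
    and the martingale succeeds on `seq` if its capital grows unboundedly.
    """
    capital, prefix = 1.0, ()
    for s in seq:
        probs, factors = p_next(prefix), bet(prefix)
        # Check the fairness (P-martingale) condition at this prefix.
        assert abs(sum(probs[a] * factors[a] for a in probs) - 1.0) < 1e-9
        capital *= factors[s]
        prefix = prefix + (s,)
    return capital
```

With the uniform measure, `p_next` returns $|\Sigma|^{-1}$ for every symbol and this reduces to the classical martingale condition.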
[ "<|reference_start|> Zufälligkeit und Wahrscheinlichkeit: <|reference_end|>", "<|reference_start|> Feasible Analysis, Randomness, and Base Invariance: <|reference_end|>", "<|reference_start|> A polynomial-time algorithm for computing absolutely normal numbers: <|reference_end|>", "<|reference_start|> Endliche Automaten und Zufallsfolgen: <|reference_end|>" ]
[ 1, 3, 4, 8 ]
{"<|cite_1|>": "ss-1790400", "<|cite_2|>": "ss-1721988", "<|cite_3|>": "ss-1085746", "<|multi_cite_4_1|>": "ss-1085746", "<|multi_cite_4_2|>": "ss-1085744", "<|cite_5|>": "ss-1085746", "<|cite_6|>": "ss-1438850", "<|cite_8|>": "ss-1790401", "<|cite_9|>": "ss-1081208", "<|cite_10|>": "ss-1085746"}
1705.02758
<|paper_start|> Title: Deep Descriptor Transforming for Image Co-Localization Abstract: Deep Descriptor Transforming for Image Co-Localization: Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, different from treating pre-trained models as feature extractors, we reveal more treasures beneath convolutional layers, i.e., the convolutional activations could act as a detector for the common object in the image co-localization problem. We propose a simple but effective method, named Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness for dealing with noisy data. Introduction Model reuse <|cite_start|> (Reference: Learnware: on the future of machine learning: ) <|cite_end|> attempts to construct a model by utilizing existing available models, mostly trained for other tasks, rather than building a model from scratch. Particularly in deep learning, since deep convolutional neural networks have achieved great success in various tasks involving images, videos, texts and more, there are several studies that have the flavor of reusing deep models pre-trained on ImageNet <|cite_start|> (Reference: ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.) <|cite_end|>. In machine learning, the Fixed Model Reuse scheme <|cite_start|> (Reference: Deep learning for fixed model reuse: Model reuse attempts to construct a model by utilizing existing available models, mostly trained for other tasks, rather than building a model from scratch. It is helpful to reduce the time cost, data amount, and expertise required. Deep learning has achieved great success in various tasks involving images, voices and videos. There are several studies have the sense of model reuse, by trying to reuse pre-trained deep networks architectures or deep model features to train a new deep model. They, however, neglect the fact that there are many other fixed models or features available. In this paper, we propose a more thorough model reuse scheme, FMR (Fixed Model Reuse).
FMR utilizes the learning power of deep models to implicitly grab the useful discriminative information from fixed model/features that have been widely used in general tasks. We firstly arrange the convolution layers of a deep network and the provided fixed model/features in parallel, fully connecting to the output layer nodes. Then, the dependencies between the output layer nodes and the fixed model/features are knockdown such that only the raw feature inputs are needed when the model is being used for testing, though the helpful information in the fixed model/features have already been incorporated into the model. On one hand, by the FMR scheme, the required amount of training data can be significantly reduced because of the reuse of fixed model/features. On the other hand, the fixed model/features are not explicitly used in testing, and thus, the scheme can be quite useful in applications where the fixed model/features are protected by patents or commercial secrets. Experiments on five real-world datasets validate the effectiveness of FMR compared with state-of-the-art deep methods.) <|cite_end|> was recently proposed for using the sophisticated fixed model/features from a well-trained deep model, rather than transferring with pre-trained weights. In computer vision, pre-trained models on ImageNet have also been successfully adopted for various usages, e.g., as universal feature extractors <|cite_start|> (Reference: Relaxed Multiple-Instance SVM with Application to Object Discovery: Multiple-instance learning (MIL) has served as an important tool for a wide range of vision applications, for instance, image classification, object detection, and visual tracking. In this paper, we propose a novel method to solve the classical MIL problem, named relaxed multiple-instance SVM (RMI-SVM). We treat the positiveness of instance as a continuous variable, use Noisy-OR model to enforce the MIL constraints, and jointly optimize the bag label and instance label in a unified framework. The optimization problem can be efficiently solved using stochastic gradient descent. The extensive experiments demonstrate that RMI-SVM consistently achieves superior performance on various benchmarks for MIL. Moreover, we simply applied RMI-SVM to a challenging vision task, common object discovery. The state-of-the-art results of object discovery on Pascal VOC datasets further confirm the advantages of the proposed method.) <|cite_end|> <|cite_start|> (Reference: Image Co-localization by Mimicking a Good Detector's Confidence Score Distribution: Given a set of images containing objects from the same category, the task of image co-localization is to identify and localize each instance. This paper shows that this problem can be solved by a simple but intriguing idea, that is, a common object detector can be learnt by making its detection confidence scores distributed like those of a strongly supervised detector. More specifically, we observe that given a set of object proposals extracted from an image that contains the object of interest, an accurate strongly supervised object detector should give high scores to only a small minority of proposals, and low scores to most of them. Thus, we devise an entropy-based objective function to enforce the above property when learning the common object detector. Once the detector is learnt, we resort to a segmentation approach to refine the localization. We show that despite its simplicity, our approach outperforms state-of-the-art methods.)
<|cite_end|>, object proposal generators <|cite_start|> (Reference: DeepProposal: Hunting Objects by Cascading Deep Convolutional Layers: In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the generation of object proposals. We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate; it outperforms most of the previously proposed object proposals approaches and when plugged into a CNN-based detector produces state-of-the-art detection performance.) <|cite_end|>, etc. In particular, <|cite_start|> (Reference: Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval: Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA firstly localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and dimensionality reduced into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.) <|cite_end|> proposed the SCDA method to utilize pre-trained models for both localizing a single fine-grained object (e.g., birds of different species) in each image and retrieving fine-grained images of the same classes/species in an unsupervised fashion. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth, height=20em]{pipeline} \caption{Pipeline of the proposed DDT method for image co-localization. In this instance, the goal is to localize the \emph{airplane} within each image. Note that there might be a few noisy images in the image set.
(Best viewed in color.)} \label{fig:pipeline} \vspace{-0.15em} \end{figure} In this paper, we reveal that the convolutional activations can be a detector for the \emph{common object} in image co-localization. Image co-localization is a fundamental computer vision problem, which simultaneously localizes objects of the same category across a set of distinct images. Specifically, we propose a simple but effective method named DDT (Deep Descriptor Transforming) for image co-localization. In DDT, the deep convolutional descriptors extracted from pre-trained models are transformed into a new space, where the correlations between these descriptors can be evaluated. By leveraging the correlations among the image set, the common object inside these images can be located automatically without additional supervision signals. The pipeline of DDT is shown in Fig.~\ref{fig:pipeline}. To the best of our knowledge, this is \emph{the first work} to demonstrate that convolutional activations/descriptors in pre-trained models \emph{are able to act as a detector for the common object}. Experimental results show that DDT significantly outperforms existing state-of-the-art methods, including those for image co-localization and weakly supervised object localization, in both the deep learning and hand-crafted feature scenarios. Besides, we empirically show that DDT has a good generalization ability for unseen images beyond ImageNet. More importantly, the proposed method is robust, because DDT can also detect the noisy images which do not contain the common object. <|paper_end|>
[ "<|reference_start|> ImageNet Large Scale Visual Recognition Challenge: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. <|reference_end|>", "<|reference_start|> Deep learning for fixed model reuse: Model reuse attempts to construct a model by utilizing existing available models, mostly trained for other tasks, rather than building a model from scratch. It is helpful to reduce the time cost, data amount, and expertise required. Deep learning has achieved great success in various tasks involving images, voices and videos. There are several studies have the sense of model reuse, by trying to reuse pre-trained deep networks architectures or deep model features to train a new deep model. They, however, neglect the fact that there are many other fixed models or features available. In this paper, we propose a more thorough model reuse scheme, FMR (Fixed Model Reuse). FMR utilizes the learning power of deep models to implicitly grab the useful discriminative information from fixed model/features that have been widely used in general tasks. We firstly arrange the convolution layers of a deep network and the provided fixed model/features in parallel, fully connecting to the output layer nodes. Then, the dependencies between the output layer nodes and the fixed model/features are knockdown such that only the raw feature inputs are needed when the model is being used for testing, though the helpful information in the fixed model/features have already been incorporated into the model. On one hand, by the FMR scheme, the required amount of training data can be significantly reduced because of the reuse of fixed model/features. On the other hand, the fixed model/features are not explicitly used in testing, and thus, the scheme can be quite useful in applications where the fixed model/features are protected by patents or commercial secrets. Experiments on five real-world datasets validate the effectiveness of FMR compared with state-of-the-art deep methods. <|reference_end|>", "<|reference_start|> Image Co-localization by Mimicking a Good Detector's Confidence Score Distribution: Given a set of images containing objects from the same category, the task of image co-localization is to identify and localize each instance. This paper shows that this problem can be solved by a simple but intriguing idea, that is, a common object detector can be learnt by making its detection confidence scores distributed like those of a strongly supervised detector. 
More specifically, we observe that given a set of object proposals extracted from an image that contains the object of interest, an accurate strongly supervised object detector should give high scores to only a small minority of proposals, and low scores to most of them. Thus, we devise an entropy-based objective function to enforce the above property when learning the common object detector. Once the detector is learnt, we resort to a segmentation approach to refine the localization. We show that despite its simplicity, our approach outperforms state-of-the-art methods. <|reference_end|>", "<|reference_start|> Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval: Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA firstly localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and dimensionality reduced into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches. <|reference_end|>" ]
[ 1, 2, 4, 6 ]
{"<|cite_1|>": "ss-1260036", "<|cite_2|>": "arxiv-65515", "<|cite_3|>": "ss-1976075", "<|multi_cite_4_1|>": "arxiv-85038", "<|multi_cite_4_2|>": "arxiv-93995", "<|cite_5|>": "arxiv-85591", "<|cite_6|>": "arxiv-96162"}
2203.12622
<|paper_start|> Title: Are Evolutionary Algorithms Safe Optimizers? Abstract: Are Evolutionary Algorithms Safe Optimizers?: We consider a type of constrained optimization problem, where the violation of a constraint leads to an irrevocable loss, such as breakage of a valuable experimental resource/platform or loss of human life. Such problems are referred to as safe optimization problems (SafeOPs). While SafeOPs have received attention in the machine learning community in recent years, there was little interest in the evolutionary computation (EC) community despite some early attempts between 2009 and 2011. Moreover, there is a lack of acceptable guidelines on how to benchmark different algorithms for SafeOPs, an area in which the EC community has significant experience. Driven by the need for more efficient algorithms and benchmark guidelines for SafeOPs, the objective of this paper is to reignite interest in this problem class within the EC community. To achieve this, we (i) provide a formal definition of SafeOPs and contrast it with other types of optimization problems that the EC community is familiar with, (ii) investigate the impact of key SafeOP parameters on the performance of selected safe optimization algorithms, (iii) benchmark EC against state-of-the-art safe optimization algorithms from the machine learning community, and (iv) provide an open-source Python framework to replicate and extend our work. Introduction This work focuses on \emph{safe optimization problems}, a special type of constrained optimization problem that has received rather little attention in the evolutionary computation (EC) community, but more so in the machine learning community. Such problems are subject to constraints that, when violated, result in the irrevocable loss of a valuable experimental platform/resource, such as breakage of a machine used for experiments, or even injury to a patient <|cite_start|> (Reference: Safe Learning and Optimization Techniques: Towards a Survey of the State of the Art: Safe learning and optimization deals with learning and optimization problems that avoid, as much as possible, the evaluation of non-safe input points, which are solutions, policies, or strategies that cause an irrecoverable loss (e.g., breakage of a machine or equipment, or life threat). Although a comprehensive survey of safe reinforcement learning algorithms was published in 2015, a number of new algorithms have been proposed thereafter, and related works in active learning and in optimization were not considered. This paper reviews those algorithms from a number of domains including reinforcement learning, Gaussian process regression and classification, evolutionary algorithms, and active learning. We provide the fundamental concepts on which the reviewed algorithms are based and a characterization of the individual algorithms. We conclude by explaining how the algorithms are connected and suggestions for future research.) <|cite_end|>. Here, these constraints are referred to as safety constraints, and evaluations of input points (candidate solutions) that violate a safety constraint are called unsafe evaluations. Typically, the objective function and any \emph{safety constraint function} that defines one side of a safety constraint (these concepts will be explained in detail in Section~\ref{sec:ps}) are given as black-box functions and their evaluation is expensive.
Examples of safe optimization problems (SafeOPs) include clinical experiments <|cite_start|> (Reference: Stagewise Safe Bayesian Optimization with Gaussian Processes: Enforcing safety is a key aspect of many problems pertaining to sequential decision making under uncertainty, which require the decisions made at every step to be both informative of the optimal decision and also safe. For example, we value both efficacy and comfort in medical therapy, and efficiency and safety in robotic control. We consider this problem of optimizing an unknown utility function with absolute feedback or preference feedback subject to unknown safety constraints. We develop an efficient safe Bayesian optimization algorithm, StageOpt, that separates safe region expansion and utility function maximization into two distinct stages. Compared to existing approaches which interleave between expansion and optimization, we show that StageOpt is more efficient and naturally applicable to a broader class of problems. We provide theoretical guarantees for both the satisfaction of safety constraints as well as convergence to the optimal utility value. We evaluate StageOpt on both a variety of synthetic experiments, as well as in clinical practice. We demonstrate that StageOpt is more effective than existing safe optimization approaches, and is able to safely and effectively optimize spinal cord stimulation therapy in our clinical experiments.) <|cite_end|> <|cite_start|> (Reference: Safe Exploration for Optimization with Gaussian Processes: We consider sequential decision problems under uncertainty, where we seek to optimize an unknown function from noisy samples. This requires balancing exploration (learning about the objective) and exploitation (localizing the maximum), a problem well-studied in the multiarmed bandit literature. In many applications, however, we require that the sampled function values exceed some prespecified "safety" threshold, a requirement that existing algorithms fail to meet. Examples include medical applications where patient comfort must be guaranteed, recommender systems aiming to avoid user dissatisfaction, and robotic control, where one seeks to avoid controls causing physical harm to the platform. We tackle this novel, yet rich, set of problems under the assumption that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop an efficient algorithm called SAFEOPT, and theoretically guarantee its convergence to a natural notion of optimum reachable under safety constraints. We evaluate SAFEOPT on synthetic data, as well as two real applications: movie recommendation, and therapeutic spinal cord stimulation.) <|cite_end|>, controller optimization for quadrotor vehicle <|cite_start|> (Reference: Dependence in constrained Bayesian optimization: ) <|cite_end|> <|cite_start|> (Reference: Safe controller optimization for quadrotors with Gaussian processes: One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters. Typically, a model of the system is used to obtain an initial controller, but ultimately the controller parameters must be tuned manually on the real system to achieve the best performance. To avoid this manual tuning step, methods from machine learning, such as Bayesian optimization, have been used. However, as these methods evaluate different controller parameters on the real system, safety-critical system failures may happen. 
In this paper, we overcome this problem by applying, for the first time, a recently developed safe optimization algorithm, SafeOpt, to the problem of automatic controller parameter tuning. Given an initial, low-performance controller, SafeOpt automatically optimizes the parameters of a control law while guaranteeing safety. It models the underlying performance measure as a Gaussian process and only explores new controller parameters whose performance lies above a safe performance threshold with high probability. Experimental results on a quadrotor vehicle indicate that the proposed method enables fast, automatic, and safe optimization of controller parameters without human intervention.) <|cite_end|> <|cite_start|> (Reference: Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics: Robotic algorithms typically depend on various parameters, the choice of which significantly affects the robot's performance. While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance. Optimization algorithms, such as Bayesian optimization, have been used to automate this process. However, these methods may evaluate unsafe parameters during the optimization process that lead to safety-critical system failures. Recently, a safe Bayesian optimization algorithm, called SafeOpt, has been developed, which guarantees that the performance of the system never falls below a critical value; that is, safety is defined based on the performance function. However, coupling performance and safety is often not desirable in robotics. For example, high-gain controllers might achieve low average tracking error (performance), but can overshoot and violate input constraints. In this paper, we present a generalized algorithm that allows for multiple safety constraints separate from the objective. Given an initial set of safe parameters, the algorithm maximizes performance but only evaluates parameters that satisfy safety for all constraints with high probability. To this end, it carefully explores the parameter space by exploiting regularity assumptions in terms of a Gaussian process prior. Moreover, we show how context variables can be used to safely transfer knowledge to new situations and tasks. We provide a theoretical analysis and demonstrate that the proposed algorithm enables fast, automatic, and safe optimization of tuning parameters in experiments on a quadrotor vehicle.) <|cite_end|>, engine calibration <|cite_start|> (Reference: Safe Active Learning and Safe Bayesian Optimization for Tuning a PI-Controller: ) <|cite_end|> <|cite_start|> (Reference: Avoidance of constraint violation for experiment-based evolutionary multi-objective optimization: Experiment-based optimization using Evolutionary Algorithms (EAs) is a promising approach for real world problems in which construction of simulation models is difficult. When using EAs, three difficulties have to be considered. Currently, two difficulties, uncertainty of the evaluation value and limitation of the number of evaluations, are active research topics into EAs. However, the other difficulty, avoidance of extreme trial, has not entered into the spotlight. Extreme trials run the ‘risk’ of breakdown of the optimized object and its measurement instruments in experiment-based optimization. 
In this paper, we consider that the extreme trial means a large constraint violation of the problems, and install the concept of ‘risky-constraint’. Then, to avoid risky-constraint violation, we propose a violation avoidance method and combine it with Multi-objective Evolutionary Algorithms (MOEAs). The effectiveness of the proposed method is confirmed through numerical experiments and real common-rail diesel engine experiments.) <|cite_end|>, and simulation-based optimization <|cite_start|> (Reference: Gaussian process optimization with failures: classification and convergence proof: ) <|cite_end|> <|cite_start|> (Reference: A classification approach to efficient global optimization in presence of non-computable domains: ) <|cite_end|>. There are two algorithmic branches in safe optimization <|cite_start|> (Reference: Safe Learning and Optimization Techniques: Towards a Survey of the State of the Art: Safe learning and optimization deals with learning and optimization problems that avoid, as much as possible, the evaluation of non-safe input points, which are solutions, policies, or strategies that cause an irrecoverable loss (e.g., breakage of a machine or equipment, or life threat). Although a comprehensive survey of safe reinforcement learning algorithms was published in 2015, a number of new algorithms have been proposed thereafter, and related works in active learning and in optimization were not considered. This paper reviews those algorithms from a number of domains including reinforcement learning, Gaussian process regression and classification, evolutionary algorithms, and active learning. We provide the fundamental concepts on which the reviewed algorithms are based and a characterization of the individual algorithms. We conclude by explaining how the algorithms are connected and suggestions for future research.) <|cite_end|>: Safe optimization through evolutionary algorithms (safe EAs) vs Gaussian process (GP) regression (safe GPs). Although safe optimization was first considered by the EC community in 2009 <|cite_start|> (Reference: Avoidance of constraint violation for experiment-based evolutionary multi-objective optimization: Experiment-based optimization using Evolutionary Algorithms (EAs) is a promising approach for real world problems in which construction of simulation models is difficult. When using EAs, three difficulties have to be considered. Currently, two difficulties, uncertainty of the evaluation value and limitation of the number of evaluations, are active research topics into EAs. However, the other difficulty, avoidance of extreme trial, has not entered into the spotlight. Extreme trials run the ‘risk’ of breakdown of the optimized object and its measurement instruments in experiment-based optimization. In this paper, we consider that the extreme trial means a large constraint violation of the problems, and install the concept of ‘risky-constraint’. Then, to avoid risky-constraint violation, we propose a violation avoidance method and combine it with Multi-objective Evolutionary Algorithms (MOEAs). The effectiveness of the proposed method is confirmed through numerical experiments and real common-rail diesel engine experiments.) <|cite_end|> and 2011 <|cite_start|> (Reference: Evolutionary Search in Lethal Environments: In Natural evolution, a mutation may be lethal, causing an abrupt end to an evolving lineage. 
This fact has a tendency to cause evolution to "prefer" mutationally robust solutions (which can in turn slow innovation), an effect that has been studied previously, especially in the context of evolution on neutral plateaux. Here, we tackle related issues but from the perspective of a practical optimization scenario. We wish to evolve a finite population of entities quickly (i.e. improve them), but when a lethal solution (modelled here as one below a certain fitness threshold) is evaluated, it is immediately removed from the population and the population size is reduced by one. This models certain closed-loop evolution scenarios that may be encountered, for example, when evolving nano-technologies or autonomous robots. We motivate this scenario, and find that evolutionary search performs best in a lethal environment when limiting randomness in the solution generation process, e.g. by using elitism, above-average selection pressure, a less random mutating operator, and no or little crossover. For NKa landscapes, these strategies turn out to be particularly important on rugged and non-homogeneous landscapes (i.e. for large K and α).) <|cite_end|> <|cite_start|> (Reference: Tuning Evolutionary Search for Closed-Loop Optimization: Closed-loop optimization deals with problems in which candidate solutions are evaluated by conducting experiments, e.g. physical or biochemical experiments. Although this form of optimization is becoming more popular across the sciences, it may be subject to rather unexplored resourcing issues, as any experiment may require resources in order to be conducted. In this thesis we are concerned with understanding how evolutionary search is affected by three particular resourcing issues -- ephemeral resource constraints (ERCs), changes of variables, and lethal environments -- and the development of search strategies to combat these issues.The thesis makes three broad contributions. First, we motivate and formally define the resourcing issues considered. Here, concrete examples in a range of applications are given. Secondly, we theoretically and empirically investigate the effect of the resourcing issues considered on evolutionary search. This investigation reveals that resourcing issues affect optimization in general, and that clear patterns emerge relating specific properties of the different resourcing issues to performance effects. Thirdly, we develop and analyze various search strategies augmented on an evolutionary algorithm (EA) for coping with resourcing issues. To cope specifically with ERCs, we develop several static constraint-handling strategies, and investigate the application of reinforcement learning techniques to learn when to switch between these static strategies during an optimization process. We also develop several online resource-purchasing strategies to cope with ERCs that leave the arrangement of resources to the hands of the optimizer. For problems subject to changes of variables relating to the resources, we find that knowing which variables are changed provides an optimizer with valuable information, which we exploit using a novel dynamic strategy. Finally, for lethal environments, where visiting parts of the search space can cause the permanent loss of resources, we observe that a standard EA's population may be reduced in size rapidly, complicating the search for innovative solutions. 
To cope with such scenarios, we consider some non-standard EA setups that are able to innovate genetically whilst simultaneously mitigating risks to the evolving population.) <|cite_end|>, we are not aware of any further research on this topic. In comparison, the machine learning community has actively worked on SafeOPs from 2015. Research on safe optimization is fragmented with no unified guidelines on how to benchmark algorithms for safe optimization. When algorithms for SafeOPs are benchmarked, an aspect of performance relates to the best objective function values achieved <|cite_start|> (Reference: Safe Active Learning and Safe Bayesian Optimization for Tuning a PI-Controller: ) <|cite_end|> <|cite_start|> (Reference: A classification approach to efficient global optimization in presence of non-computable domains: ) <|cite_end|> <|cite_start|> (Reference: Safe controller optimization for quadrotors with Gaussian processes: One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters. Typically, a model of the system is used to obtain an initial controller, but ultimately the controller parameters must be tuned manually on the real system to achieve the best performance. To avoid this manual tuning step, methods from machine learning, such as Bayesian optimization, have been used. However, as these methods evaluate different controller parameters on the real system, safety-critical system failures may happen. In this paper, we overcome this problem by applying, for the first time, a recently developed safe optimization algorithm, SafeOpt, to the problem of automatic controller parameter tuning. Given an initial, low-performance controller, SafeOpt automatically optimizes the parameters of a control law while guaranteeing safety. It models the underlying performance measure as a Gaussian process and only explores new controller parameters whose performance lies above a safe performance threshold with high probability. Experimental results on a quadrotor vehicle indicate that the proposed method enables fast, automatic, and safe optimization of controller parameters without human intervention.) <|cite_end|> <|cite_start|> (Reference: Gaussian process optimization with failures: classification and convergence proof: ) <|cite_end|> <|cite_start|> (Reference: Avoidance of constraint violation for experiment-based evolutionary multi-objective optimization: Experiment-based optimization using Evolutionary Algorithms (EAs) is a promising approach for real world problems in which construction of simulation models is difficult. When using EAs, three difficulties have to be considered. Currently, two difficulties, uncertainty of the evaluation value and limitation of the number of evaluations, are active research topics into EAs. However, the other difficulty, avoidance of extreme trial, has not entered into the spotlight. Extreme trials run the ‘risk’ of breakdown of the optimized object and its measurement instruments in experiment-based optimization. In this paper, we consider that the extreme trial means a large constraint violation of the problems, and install the concept of ‘risky-constraint’. Then, to avoid risky-constraint violation, we propose a violation avoidance method and combine it with Multi-objective Evolutionary Algorithms (MOEAs). The effectiveness of the proposed method is confirmed through numerical experiments and real common-rail diesel engine experiments.) 
<|cite_end|> <|cite_start|> (Reference: Evolutionary Search in Lethal Environments: In Natural evolution, a mutation may be lethal, causing an abrupt end to an evolving lineage. This fact has a tendency to cause evolution to "prefer" mutationally robust solutions (which can in turn slow innovation), an effect that has been studied previously, especially in the context of evolution on neutral plateaux. Here, we tackle related issues but from the perspective of a practical optimization scenario. We wish to evolve a finite population of entities quickly (i.e. improve them), but when a lethal solution (modelled here as one below a certain fitness threshold) is evaluated, it is immediately removed from the population and the population size is reduced by one. This models certain closed-loop evolution scenarios that may be encountered, for example, when evolving nano-technologies or autonomous robots. We motivate this scenario, and find that evolutionary search performs best in a lethal environment when limiting randomness in the solution generation process, e.g. by using elitism, above-average selection pressure, a less random mutating operator, and no or little crossover. For NKa landscapes, these strategies turn out to be particularly important on rugged and non-homogeneous landscapes (i.e. for large K and α).) <|cite_end|> <|cite_start|> (Reference: Tuning Evolutionary Search for Closed-Loop Optimization: Closed-loop optimization deals with problems in which candidate solutions are evaluated by conducting experiments, e.g. physical or biochemical experiments. Although this form of optimization is becoming more popular across the sciences, it may be subject to rather unexplored resourcing issues, as any experiment may require resources in order to be conducted. In this thesis we are concerned with understanding how evolutionary search is affected by three particular resourcing issues -- ephemeral resource constraints (ERCs), changes of variables, and lethal environments -- and the development of search strategies to combat these issues.The thesis makes three broad contributions. First, we motivate and formally define the resourcing issues considered. Here, concrete examples in a range of applications are given. Secondly, we theoretically and empirically investigate the effect of the resourcing issues considered on evolutionary search. This investigation reveals that resourcing issues affect optimization in general, and that clear patterns emerge relating specific properties of the different resourcing issues to performance effects. Thirdly, we develop and analyze various search strategies augmented on an evolutionary algorithm (EA) for coping with resourcing issues. To cope specifically with ERCs, we develop several static constraint-handling strategies, and investigate the application of reinforcement learning techniques to learn when to switch between these static strategies during an optimization process. We also develop several online resource-purchasing strategies to cope with ERCs that leave the arrangement of resources to the hands of the optimizer. For problems subject to changes of variables relating to the resources, we find that knowing which variables are changed provides an optimizer with valuable information, which we exploit using a novel dynamic strategy. 
Finally, for lethal environments, where visiting parts of the search space can cause the permanent loss of resources, we observe that a standard EA's population may be reduced in size rapidly, complicating the search for innovative solutions. To cope with such scenarios, we consider some non-standard EA setups that are able to innovate genetically whilst simultaneously mitigating risks to the evolving population.) <|cite_end|> <|cite_start|> (Reference: Safe Exploration for Optimization with Gaussian Processes: We consider sequential decision problems under uncertainty, where we seek to optimize an unknown function from noisy samples. This requires balancing exploration (learning about the objective) and exploitation (localizing the maximum), a problem well-studied in the multiarmed bandit literature. In many applications, however, we require that the sampled function values exceed some prespecified "safety" threshold, a requirement that existing algorithms fail to meet. Examples include medical applications where patient comfort must be guaranteed, recommender systems aiming to avoid user dissatisfaction, and robotic control, where one seeks to avoid controls causing physical harm to the platform. We tackle this novel, yet rich, set of problems under the assumption that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop an efficient algorithm called SAFEOPT, and theoretically guarantee its convergence to a natural notion of optimum reachable under safety constraints. We evaluate SAFEOPT on synthetic data, as well as two real applications: movie recommendation, and therapeutic spinal cord stimulation.) <|cite_end|> <|cite_start|> (Reference: Stagewise Safe Bayesian Optimization with Gaussian Processes: Enforcing safety is a key aspect of many problems pertaining to sequential decision making under uncertainty, which require the decisions made at every step to be both informative of the optimal decision and also safe. For example, we value both efficacy and comfort in medical therapy, and efficiency and safety in robotic control. We consider this problem of optimizing an unknown utility function with absolute feedback or preference feedback subject to unknown safety constraints. We develop an efficient safe Bayesian optimization algorithm, StageOpt, that separates safe region expansion and utility function maximization into two distinct stages. Compared to existing approaches which interleave between expansion and optimization, we show that StageOpt is more efficient and naturally applicable to a broader class of problems. We provide theoretical guarantees for both the satisfaction of safety constraints as well as convergence to the optimal utility value. We evaluate StageOpt on both a variety of synthetic experiments, as well as in clinical practice. We demonstrate that StageOpt is more effective than existing safe optimization approaches, and is able to safely and effectively optimize spinal cord stimulation therapy in our clinical experiments.) <|cite_end|>. 
Another aspect of performance relates to safety, which can be measured, for example, by the number of unsafe evaluations (or, equivalently, the number of safe evaluations) at each iteration or evaluation step <|cite_start|> (Reference: Gaussian process optimization with failures: classification and convergence proof: ) <|cite_end|> <|cite_start|> (Reference: A classification approach to efficient global optimization in presence of non-computable domains: ) <|cite_end|> <|cite_start|> (Reference: Safe controller optimization for quadrotors with Gaussian processes: One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters. Typically, a model of the system is used to obtain an initial controller, but ultimately the controller parameters must be tuned manually on the real system to achieve the best performance. To avoid this manual tuning step, methods from machine learning, such as Bayesian optimization, have been used. However, as these methods evaluate different controller parameters on the real system, safety-critical system failures may happen. In this paper, we overcome this problem by applying, for the first time, a recently developed safe optimization algorithm, SafeOpt, to the problem of automatic controller parameter tuning. Given an initial, low-performance controller, SafeOpt automatically optimizes the parameters of a control law while guaranteeing safety. It models the underlying performance measure as a Gaussian process and only explores new controller parameters whose performance lies above a safe performance threshold with high probability. Experimental results on a quadrotor vehicle indicate that the proposed method enables fast, automatic, and safe optimization of controller parameters without human intervention.) <|cite_end|> <|cite_start|> (Reference: Avoidance of constraint violation for experiment-based evolutionary multi-objective optimization: Experiment-based optimization using Evolutionary Algorithms (EAs) is a promising approach for real world problems in which construction of simulation models is difficult. When using EAs, three difficulties have to be considered. Currently, two difficulties, uncertainty of the evaluation value and limitation of the number of evaluations, are active research topics into EAs. However, the other difficulty, avoidance of extreme trial, has not entered into the spotlight. Extreme trials run the ‘risk’ of breakdown of the optimized object and its measurement instruments in experiment-based optimization. In this paper, we consider that the extreme trial means a large constraint violation of the problems, and install the concept of ‘risky-constraint’. Then, to avoid risky-constraint violation, we propose a violation avoidance method and combine it with Multi-objective Evolutionary Algorithms (MOEAs). The effectiveness of the proposed method is confirmed through numerical experiments and real common-rail diesel engine experiments.) <|cite_end|>, the proportion of surviving solutions, i.e., $u_t/u_0$, where $u_0$ is the initial parent population size and $u_t$ is the number of surviving offspring at the $t\nth$ iteration step (individuals violating the safety constraint are removed) <|cite_start|> (Reference: Evolutionary Search in Lethal Environments: In Natural evolution, a mutation may be lethal, causing an abrupt end to an evolving lineage.
This fact has a tendency to cause evolution to "prefer" mutationally robust solutions (which can in turn slow innovation), an effect that has been studied previously, especially in the context of evolution on neutral plateaux. Here, we tackle related issues but from the perspective of a practical optimization scenario. We wish to evolve a finite population of entities quickly (i.e. improve them), but when a lethal solution (modelled here as one below a certain fitness threshold) is evaluated, it is immediately removed from the population and the population size is reduced by one. This models certain closed-loop evolution scenarios that may be encountered, for example, when evolving nano-technologies or autonomous robots. We motivate this scenario, and find that evolutionary search performs best in a lethal environment when limiting randomness in the solution generation process, e.g. by using elitism, above-average selection pressure, a less random mutating operator, and no or little crossover. For NKa landscapes, these strategies turn out to be particularly important on rugged and non-homogeneous landscapes (i.e. for large K and α).) <|cite_end|> <|cite_start|> (Reference: Tuning Evolutionary Search for Closed-Loop Optimization: Closed-loop optimization deals with problems in which candidate solutions are evaluated by conducting experiments, e.g. physical or biochemical experiments. Although this form of optimization is becoming more popular across the sciences, it may be subject to rather unexplored resourcing issues, as any experiment may require resources in order to be conducted. In this thesis we are concerned with understanding how evolutionary search is affected by three particular resourcing issues -- ephemeral resource constraints (ERCs), changes of variables, and lethal environments -- and the development of search strategies to combat these issues.The thesis makes three broad contributions. First, we motivate and formally define the resourcing issues considered. Here, concrete examples in a range of applications are given. Secondly, we theoretically and empirically investigate the effect of the resourcing issues considered on evolutionary search. This investigation reveals that resourcing issues affect optimization in general, and that clear patterns emerge relating specific properties of the different resourcing issues to performance effects. Thirdly, we develop and analyze various search strategies augmented on an evolutionary algorithm (EA) for coping with resourcing issues. To cope specifically with ERCs, we develop several static constraint-handling strategies, and investigate the application of reinforcement learning techniques to learn when to switch between these static strategies during an optimization process. We also develop several online resource-purchasing strategies to cope with ERCs that leave the arrangement of resources to the hands of the optimizer. For problems subject to changes of variables relating to the resources, we find that knowing which variables are changed provides an optimizer with valuable information, which we exploit using a novel dynamic strategy. Finally, for lethal environments, where visiting parts of the search space can cause the permanent loss of resources, we observe that a standard EA's population may be reduced in size rapidly, complicating the search for innovative solutions. 
To cope with such scenarios, we consider some non-standard EA setups that are able to innovate genetically whilst simultaneously mitigating risks to the evolving population.) <|cite_end|>, sensitivity and specificity on the evaluations <|cite_start|> (Reference: Safe Active Learning and Safe Bayesian Optimization for Tuning a PI-Controller: ) <|cite_end|>, and the size of the safe set (i.e., the number of input points inferred to be safe) <|cite_start|> (Reference: Safe Exploration for Optimization with Gaussian Processes: We consider sequential decision problems under uncertainty, where we seek to optimize an unknown function from noisy samples. This requires balancing exploration (learning about the objective) and exploitation (localizing the maximum), a problem well-studied in the multiarmed bandit literature. In many applications, however, we require that the sampled function values exceed some prespecified "safety" threshold, a requirement that existing algorithms fail to meet. Examples include medical applications where patient comfort must be guaranteed, recommender systems aiming to avoid user dissatisfaction, and robotic control, where one seeks to avoid controls causing physical harm to the platform. We tackle this novel, yet rich, set of problems under the assumption that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop an efficient algorithm called SAFEOPT, and theoretically guarantee its convergence to a natural notion of optimum reachable under safety constraints. We evaluate SAFEOPT on synthetic data, as well as two real applications: movie recommendation, and therapeutic spinal cord stimulation.) <|cite_end|> <|cite_start|> (Reference: Stagewise Safe Bayesian Optimization with Gaussian Processes: Enforcing safety is a key aspect of many problems pertaining to sequential decision making under uncertainty, which require the decisions made at every step to be both informative of the optimal decision and also safe. For example, we value both efficacy and comfort in medical therapy, and efficiency and safety in robotic control. We consider this problem of optimizing an unknown utility function with absolute feedback or preference feedback subject to unknown safety constraints. We develop an efficient safe Bayesian optimization algorithm, StageOpt, that separates safe region expansion and utility function maximization into two distinct stages. Compared to existing approaches which interleave between expansion and optimization, we show that StageOpt is more efficient and naturally applicable to a broader class of problems. We provide theoretical guarantees for both the satisfaction of safety constraints as well as convergence to the optimal utility value. We evaluate StageOpt on both a variety of synthetic experiments, as well as in clinical practice. We demonstrate that StageOpt is more effective than existing safe optimization approaches, and is able to safely and effectively optimize spinal cord stimulation therapy in our clinical experiments.) <|cite_end|>. However, most papers that propose an algorithm for SafeOPs benchmark it only against algorithms of a similar type. In such studies, the capabilities of the proposed algorithms under slightly different SafeOP scenarios are seldom examined.
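The metrics listed above are straightforward to compute from logged evaluation results. Below is a minimal Python sketch (an editorial illustration, not code from the paper or its framework) of the per-iteration unsafe-evaluation count and the survival proportion $u_t/u_0$; the input format, the threshold value, and the function name are assumptions.
\begin{verbatim}
# Sketch: per-iteration safety metrics for a SafeOP run.
# `history` holds, per iteration, the objective values of the evaluated
# individuals; values below the safety threshold h count as unsafe, and
# the corresponding individuals are removed from the population.
def safety_metrics(history, h, u0):
    unsafe_per_iter, survival = [], []
    alive = u0                          # initial parent population size u_0
    for evals in history:
        unsafe = sum(1 for f in evals if f < h)
        alive = max(alive - unsafe, 0)  # lethal evaluations shrink the population
        unsafe_per_iter.append(unsafe)
        survival.append(alive / u0)     # u_t / u_0
    return unsafe_per_iter, survival

# Example: population of 10; 0, 2 and 1 unsafe evaluations over three steps.
unsafe, surv = safety_metrics(
    [[1.2] * 10, [0.1, 0.2] + [1.0] * 8, [0.3] + [1.1] * 6], h=0.5, u0=10)
print(unsafe, surv)  # [0, 2, 1] [1.0, 0.8, 0.7]
\end{verbatim}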
The contributions made by our paper are as follows: \begin{enumerate} \item We provide a formal definition of SafeOPs, discuss their real-world applications, and contrast them with other types of problems that the EC community is familiar with. \item We investigate how several key parameters affecting the complexity of SafeOPs impact the performance of safe optimization algorithms. \item This is the first study that compares safe EAs with safe GP algorithms. Previous studies looked at the two algorithm types in isolation (see, for example, <|cite_start|> (Reference: Safe Exploration for Optimization with Gaussian Processes: We consider sequential decision problems under uncertainty, where we seek to optimize an unknown function from noisy samples. This requires balancing exploration (learning about the objective) and exploitation (localizing the maximum), a problem well-studied in the multiarmed bandit literature. In many applications, however, we require that the sampled function values exceed some prespecified "safety" threshold, a requirement that existing algorithms fail to meet. Examples include medical applications where patient comfort must be guaranteed, recommender systems aiming to avoid user dissatisfaction, and robotic control, where one seeks to avoid controls causing physical harm to the platform. We tackle this novel, yet rich, set of problems under the assumption that the unknown function satisfies regularity conditions expressed via a Gaussian process prior. We develop an efficient algorithm called SAFEOPT, and theoretically guarantee its convergence to a natural notion of optimum reachable under safety constraints. We evaluate SAFEOPT on synthetic data, as well as two real applications: movie recommendation, and therapeutic spinal cord stimulation.) <|cite_end|> <|cite_start|> (Reference: Safe controller optimization for quadrotors with Gaussian processes: One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters. Typically, a model of the system is used to obtain an initial controller, but ultimately the controller parameters must be tuned manually on the real system to achieve the best performance. To avoid this manual tuning step, methods from machine learning, such as Bayesian optimization, have been used. However, as these methods evaluate different controller parameters on the real system, safety-critical system failures may happen. In this paper, we overcome this problem by applying, for the first time, a recently developed safe optimization algorithm, SafeOpt, to the problem of automatic controller parameter tuning. Given an initial, low-performance controller, SafeOpt automatically optimizes the parameters of a control law while guaranteeing safety. It models the underlying performance measure as a Gaussian process and only explores new controller parameters whose performance lies above a safe performance threshold with high probability. Experimental results on a quadrotor vehicle indicate that the proposed method enables fast, automatic, and safe optimization of controller parameters without human intervention.) <|cite_end|> <|cite_start|> (Reference: Avoidance of constraint violation for experiment-based evolutionary multi-objective optimization: Experiment-based optimization using Evolutionary Algorithms (EAs) is a promising approach for real world problems in which construction of simulation models is difficult. When using EAs, three difficulties have to be considered.
Currently, two difficulties, uncertainty of the evaluation value and limitation of the number of evaluations, are active research topics into EAs. However, the other difficulty, avoidance of extreme trial, has not entered into the spotlight. Extreme trials run the ‘risk’ of breakdown of the optimized object and its measurement instruments in experiment-based optimization. In this paper, we consider that the extreme trial means a large constraint violation of the problems, and install the concept of ‘risky-constraint’. Then, to avoid risky-constraint violation, we propose a violation avoidance method and combine it with Multi-objective Evolutionary Algorithms (MOEAs). The effectiveness of the proposed method is confirmed through numerical experiments and real common-rail diesel engine experiments.) <|cite_end|>). \item We propose an initial set of guidelines to carry out benchmark studies of algorithms for SafeOPs. We also make available to the community an open-source Python framework that facilitates the replication and extension of our work. \end{enumerate} The remainder of the paper is organized as follows. In Section~\ref{sec:ps}, a formal definition of the particular SafeOPs used for our experiments is presented. Section~\ref{sec:FC} describes the working principles of existing safe optimization algorithms, and Section~\ref{sec:ES} provides the experimental setup for the benchmark study carried out in Section~\ref{sec:Results}. Finally, conclusions and future research are discussed in Section~\ref{sec:Conclusion}. <|paper_end|>
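As an editorial aside to the SafeOpt working principle cited above (modeling the objective with a Gaussian process and only evaluating points whose performance is predicted to lie above the safety threshold with high probability), here is a minimal Python sketch of one simplified SafeOpt-style loop on a discrete candidate set. The toy objective, the kernel, the confidence parameter beta, the seed point, and the omission of SafeOpt's expander/maximizer distinction are all simplifying assumptions, not a faithful reimplementation of the published algorithm.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                        # toy stand-in for the unknown objective
    return np.sin(3 * x) + 0.5

h, beta = 0.0, 2.0               # safety threshold and confidence scaling
X_cand = np.linspace(0, 2, 200).reshape(-1, 1)
X = np.array([[0.4]])            # seed point assumed safe a priori
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4)
for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(X_cand, return_std=True)
    lcb, ucb = mu - beta * sigma, mu + beta * sigma
    safe = lcb >= h              # certified safe with high probability
    if not safe.any():
        break                    # no provably safe candidate remains
    idx = np.where(safe)[0][np.argmax(ucb[safe])]
    x_next = X_cand[idx:idx + 1]
    X = np.vstack([X, x_next])   # evaluate only inside the safe set
    y = np.append(y, f(x_next).ravel())

print("best safe value found:", y.max())
\end{verbatim}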
[ "<|reference_start|> Gaussian process optimization with failures: classification and convergence proof: <|reference_end|>", "<|reference_start|> Evolutionary Search in Lethal Environments: In Natural evolution, a mutation may be lethal, causing an abrupt end to an evolving lineage. This fact has a tendency to cause evolution to \"prefer\" mutationally robust solutions (which can in turn slow innovation), an effect that has been studied previously, especially in the context of evolution on neutral plateaux. Here, we tackle related issues but from the perspective of a practical optimization scenario. We wish to evolve a finite population of entities quickly (i.e. improve them), but when a lethal solution (modelled here as one below a certain fitness threshold) is evaluated, it is immediately removed from the population and the population size is reduced by one. This models certain closed-loop evolution scenarios that may be encountered, for example, when evolving nano-technologies or autonomous robots. We motivate this scenario, and find that evolutionary search performs best in a lethal environment when limiting randomness in the solution generation process, e.g. by using elitism, above-average selection pressure, a less random mutating operator, and no or little crossover. For NKa landscapes, these strategies turn out to be particularly important on rugged and non-homogeneous landscapes (i.e. for large K and α). <|reference_end|>", "<|reference_start|> Evolutionary Search in Lethal Environments: In Natural evolution, a mutation may be lethal, causing an abrupt end to an evolving lineage. This fact has a tendency to cause evolution to \"prefer\" mutationally robust solutions (which can in turn slow innovation), an effect that has been studied previously, especially in the context of evolution on neutral plateaux. Here, we tackle related issues but from the perspective of a practical optimization scenario. We wish to evolve a finite population of entities quickly (i.e. improve them), but when a lethal solution (modelled here as one below a certain fitness threshold) is evaluated, it is immediately removed from the population and the population size is reduced by one. This models certain closed-loop evolution scenarios that may be encountered, for example, when evolving nano-technologies or autonomous robots. We motivate this scenario, and find that evolutionary search performs best in a lethal environment when limiting randomness in the solution generation process, e.g. by using elitism, above-average selection pressure, a less random mutating operator, and no or little crossover. For NKa landscapes, these strategies turn out to be particularly important on rugged and non-homogeneous landscapes (i.e. for large K and α). <|reference_end|>", "<|reference_start|> Safe controller optimization for quadrotors with Gaussian processes: One of the most fundamental problems when designing controllers for dynamic systems is the tuning of the controller parameters. Typically, a model of the system is used to obtain an initial controller, but ultimately the controller parameters must be tuned manually on the real system to achieve the best performance. To avoid this manual tuning step, methods from machine learning, such as Bayesian optimization, have been used. However, as these methods evaluate different controller parameters on the real system, safety-critical system failures may happen. 
In this paper, we overcome this problem by applying, for the first time, a recently developed safe optimization algorithm, SafeOpt, to the problem of automatic controller parameter tuning. Given an initial, low-performance controller, SafeOpt automatically optimizes the parameters of a control law while guaranteeing safety. It models the underlying performance measure as a Gaussian process and only explores new controller parameters whose performance lies above a safe performance threshold with high probability. Experimental results on a quadrotor vehicle indicate that the proposed method enables fast, automatic, and safe optimization of controller parameters without human intervention. <|reference_end|>" ]
[ 8, 12, 19, 25 ]
{"<|cite_1|>": "arxiv-316640", "<|multi_cite_2_1|>": "ss-1671339", "<|multi_cite_2_2|>": "ss-1671340", "<|multi_cite_3_1|>": "ss-1544689", "<|multi_cite_3_2|>": "ss-1356921", "<|multi_cite_3_3|>": "arxiv-92189", "<|multi_cite_4_1|>": "ss-1671341", "<|multi_cite_4_2|>": "ss-1671342", "<|multi_cite_5_1|>": "ss-987180", "<|multi_cite_5_2|>": "ss-987179", "<|cite_6|>": "arxiv-316640", "<|cite_7|>": "ss-1671342", "<|multi_cite_8_1|>": "ss-1671343", "<|multi_cite_8_2|>": "ss-1671344", "<|multi_cite_9_1|>": "ss-1671341", "<|multi_cite_9_2|>": "ss-987179", "<|multi_cite_9_3|>": "ss-1356921", "<|multi_cite_9_4|>": "ss-987180", "<|multi_cite_9_5|>": "ss-1671342", "<|multi_cite_9_6|>": "ss-1671343", "<|multi_cite_9_7|>": "ss-1671344", "<|multi_cite_9_8|>": "ss-1671340", "<|multi_cite_9_9|>": "ss-1671339", "<|multi_cite_10_1|>": "ss-987180", "<|multi_cite_10_2|>": "ss-987179", "<|multi_cite_10_3|>": "ss-1356921", "<|multi_cite_10_4|>": "ss-1671342", "<|multi_cite_11_1|>": "ss-1671343", "<|multi_cite_11_2|>": "ss-1671344", "<|cite_12|>": "ss-1671341", "<|multi_cite_13_1|>": "ss-1671340", "<|multi_cite_13_2|>": "ss-1671339", "<|multi_cite_14_1|>": "ss-1671340", "<|multi_cite_14_2|>": "ss-1356921", "<|multi_cite_14_3|>": "ss-1671342"}
2406.06300
<|paper_start|> Title: Human Gaze and Head Rotation during Navigation, Exploration and Object Manipulation in Shared Environments with Robots Abstract: Human Gaze and Head Rotation during Navigation, Exploration and Object Manipulation in Shared Environments with Robots: The human gaze is an important cue to signal intention, attention, distraction, and the regions of interest in the immediate surroundings. Gaze tracking can transform how robots perceive, understand, and react to people, enabling new modes of robot control, interaction, and collaboration. In this paper, we use gaze tracking data from a rich dataset of human motion (TH\"OR-MAGNI) to investigate the coordination between gaze direction and head rotation of humans engaged in various indoor activities involving navigation, interaction with objects, and collaboration with a mobile robot. In particular, we study the spread and central bias of fixations in diverse activities and examine the correlation between gaze direction and head rotation. We introduce various human motion metrics to enhance the understanding of gaze behavior in dynamic interactions. Finally, we apply semantic object labeling to decompose the gaze distribution into activity-relevant regions. Introduction \label{sec:intro} Robots operating in shared environments with humans can benefit significantly from the ability to track and interpret various cues related to human motion and activity. The context of human motion includes a wide range of cues, such as full-body poses, gestures, gazes, motion velocity, acceleration, and many others. The ability of robots to interpret these cues is essential for several reasons: enhanced safety by predicting human actions, improved efficiency by anticipating human needs, and promotion of more natural interaction between humans and robots by responding to nonverbal signals. Gaze has been described as a window into the human mind. It provides information related to human attention and intention. Integrating gaze tracking into Human-Robot Interaction (HRI) approaches can help robots better understand human behavior and, in turn, navigate shared spaces more effectively and participate in collaborative tasks with greater awareness and adaptability. As such, studying the human gaze in human-robot interaction helps to create robotic systems that smoothly share our spaces and comprehend and anticipate our actions and intentions. That being said, studies of human gaze in motion and dynamic human-robot interactions are still scarce, not least due to the complexity of tracking the gaze of a moving person. Thus, head orientation is often used as a proxy for gaze direction, and it has been shown to improve the interaction between humans and robots <|cite_start|> (Reference: Multi-modal Intention Prediction with Probabilistic Movement Primitives: ) <|cite_end|>. Furthermore, head orientation is successfully used in automated driving settings to infer the attention and intention of pedestrians and cyclists <|cite_start|> (Reference: Vulnerable road user detection and orientation estimation for context-aware automated driving: This thesis addresses the detection, segmentation and orientation estimation of persons in visual data. In particular, the aim of this work is to establish an accurate machine representation of the Vulnerable Road Users (VRU, e.g. pedestrians, cyclists) by using image-based cues to support context-aware automated driving.
A robust detection of the VRU is achieved by applying efficient stereo-based proposals within region-based Convolutional Neural Networks. Various network and proposal configurations are compared on a newly introduced dataset focusing on the challenging detection of cyclists in urban areas. A pixel-wise segmentation of the detected VRU facilitates higher-level, semantic scene analysis (e.g. pose estimation, activity analysis). Accurate object segmentations are gained by combining statistical shape models with multiple visual data cues within an iterative framework using a Conditional Random Field formulation. Head and body part locations and orientations are jointly estimated from a set of orientation-specific detector responses. The applied Dynamic Bayesian Network model accounts for spatial and temporal anatomical constraints resulting in stable part localization and orientation estimates. The inferred orientations are used to anticipate the behavior of the VRU by modeling situational awareness within a context-based Switching Linear Dynamic System. Experiments show that such context-aware models lead to a significant improvement in VRU path prediction. Since data annotation and management are indispensable components for the development of complex machine learning applications, two software tools are proposed to support an efficient handling of sensor data and annotations.) <|cite_end|>. However, relying solely on head orientation as an indicator of gaze has limitations due to the complex nature of human attention, which often involves subtle eye movements not captured by head orientation alone <|cite_start|> (Reference: Robot reading human gaze: Why eye tracking is better than head tracking for human-robot collaboration: Robots are at the position to become our everyday companions in the near future. Still, many hurdles need to be cleared to achieve this goal. One of them is the fact that robots are still not able to perceive some important communication cues naturally used by humans, e.g. gaze. In the recent past, eye gaze in robot perception was substituted by its proxy, head orientation. Such an approach is still adopted in many applications today. In this paper we introduce performance improvements to an eye tracking system we previously developed and use it to explore if this approximation is appropriate. More precisely, we compare the impact of the use of eye- or head-based gaze estimation in a human robot interaction experiment with the iCub robot and naïve subjects. We find that the possibility to exploit the richer information carried by eye gaze has a significant impact on the interaction. As a result, our eye tracking system allows for a more efficient human-robot collaboration than a comparable head tracking approach, according to both quantitative measures and subjective evaluation by the human participants.) <|cite_end|> (see Figure~\ref{fig:cover}).
\begin{figure}[t] \centering \includegraphics[width=0.93\linewidth]{Girl1.jpg} \\ \vspace{3pt} \includegraphics[width=0.3\linewidth,height=4cm,keepaspectratio]{NaoCueing.jpg} \includegraphics[width=0.3\linewidth,height=4cm,keepaspectratio]{AfterCue.jpg} \includegraphics[width=0.3\linewidth,height=4cm,keepaspectratio]{FixGoal.jpg} \caption{A participant of the THÖR-MAGNI dataset attends to instructions of the mobile robot <|cite_start|> (Reference: Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver: Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.) <|cite_end|>. \textbf{Top:} Illustration of the visual difference between the head orientation (\textbf{red}) and gaze direction (\textbf{green}). \textbf{Bottom:} a sequence of gazes on the mobile robot, followed by a shift of attention to the goal point that the robot cued. This shift is followed by a head rotation to center the visual field on the goal point. Fixations are shown with \textbf{white circles}, and their sequences are connected by \textbf{red lines}.} \label{fig:cover} \end{figure} This study analyzes the gaze patterns of people moving and interacting in a dynamic environment shared with robots. We utilize the THÖR-MAGNI dataset, unique for its synchronized data on head orientation, eye movement patterns, and walking trajectories across a diverse group of individuals. In particular, we show the potential and limitations of using head orientation as a proxy for gaze and the complex relationship between head movements and gaze direction. Our study employs various analytical approaches to examine and describe human gaze patterns. Firstly, we focus on the distribution of visual fixations on the 2D tracker plane to evaluate the uncertainty caused by eye rotation relative to head orientation. We extend the analysis of fixations by examining participants' activities and the specific micro-actions they performed during tasks and interactions. We use heatmaps to visualize fixations and identify patterns of visual engagement and attention allocation. 
To offer a geometric representation of where participants fixated most frequently on these heatmaps in the 2D tracker plane, we apply ellipse-fitting techniques to summarize and analyze areas of highest fixation density, referred to as "central tendencies." Additionally, the levels of engagement are quantified by calculating the average duration and rate of fixations. This allows for a deeper understanding of how participants interacted with their environment and the robots within it. Through this analysis, we aim to provide more effective support for gaze-informed predictions in dynamic settings and highlight the nuanced ways human attention is directed and sustained during human-robot interaction. Furthermore, we investigate the coordination between eye and head movements during attention shifts. We compare our findings in indoor settings with prior studies in outdoor environments. We correlate head orientation and gaze vectors with motion metrics to link visual attention with physical movement. With this analysis, we seek to support the deployment of appearance-based gaze estimation methods, which struggle with head and eye coordination variability <|cite_start|> (Reference: Automatic Gaze Analysis: A Survey of Deep Learning based Approaches: Eye gaze analysis is an important research problem in the field of Computer Vision and Human-Computer Interaction. Even with notable progress in the last 10 years, automatic gaze analysis still remains challenging due to the uniqueness of eye appearance, eye-head interplay, occlusion, image quality, and illumination conditions. There are several open questions, including what are the important cues to interpret gaze direction in an unconstrained environment without prior knowledge and how to encode them in real-time. We review the progress across a range of gaze analysis tasks and applications to elucidate these fundamental questions, identify effective methods in gaze analysis, and provide possible future directions. We analyze recent gaze estimation and segmentation methods, especially in the unsupervised and weakly supervised domain, based on their advantages and reported evaluation metrics. Our analysis shows that the development of a robust and generic gaze analysis method still needs to address real-world challenges such as unconstrained setup and learning with less supervision. We conclude by discussing future research directions for designing a real-world gaze analysis system that can propagate to other domains including Computer Vision, Augmented Reality (AR), Virtual Reality (VR), and Human Computer Interaction (HCI). Project Page: https://github.com/i-am-shreya/EyeGazeSurvey) <|cite_end|>, especially in dynamic environments. Lastly, we leverage the YOLO object detection model to qualify the objects humans gaze at more precisely. By identifying and categorizing objects or areas that attract significant visual focus, we gain insights into the semantics of targets of participants' gaze, enriching our understanding of attention allocation in dynamic settings, especially during locomotion. Applying modern computer vision techniques to eye-tracking data is a promising approach to interpreting human attention in the context of HRI. The paper is organized as follows: in Sec. \ref{sec:RL}, we review and motivate the use of human gaze in robotics applications. In Sec. \ref{sec:Methods}, we present our tools to analyze the human gaze during motion in shared environments. In Sec.
\ref{sec:discuss}, we draw insights from the conducted analysis, and Sec.~\ref{sec:concl} concludes the paper. Related Work \label{sec:RL} The human gaze plays an increasingly important role in various robotic applications. Gaze tracking has long been used by social robots, for instance, in conversations to manage turn-taking, improve information exchange, and reinforce mutual understanding. Gaze tracking is useful in collaborative tasks, such as handovers, to coordinate the joint maneuver <|cite_start|> (Reference: Joint action understanding improves robot-to-human object handover: The development of trustworthy human-assistive robots is a challenge that goes beyond the traditional boundaries of engineering. Essential components of trustworthiness are safety, predictability and usefulness. In this paper we demonstrate that the integration of joint action understanding from human-human interaction into the human-robot context can significantly improve the success rate of robot-to-human object handover tasks. We take a two layer approach. The first layer handles the physical aspects of the handover. The robot's decision to release the object is informed by a Hidden Markov Model that estimates the state of the handover. Inspired by human-human handover observations, we then introduce a higher-level cognitive layer that models behaviour characteristic for a human user in a handover situation. In particular, we focus on the inclusion of eye gaze / head orientation into the robot's decision making. Our results demonstrate that by integrating these non-verbal cues the success rate of robot-to-human handovers can be significantly improved, resulting in a more robust and therefore safer system.) <|cite_end|>. Gaze can also be a control technique to reference objects. Gaze tracking enables the execution of anticipatory control actions <|cite_start|> (Reference: Anticipatory robot control for efficient human-robot collaboration: Efficient collaboration requires collaborators to monitor the behaviors of their partners, make inferences about their task intent, and plan their own actions accordingly. To work seamlessly and efficiently with their human counterparts, robots must similarly rely on predictions of their users' intent in planning their actions. In this paper, we present an anticipatory control method that enables robots to proactively perform task actions based on anticipated actions of their human partners. We implemented this method into a robot system that monitored its user's gaze, predicted his or her task intent based on observed gaze patterns, and performed anticipatory task actions according to its predictions. Results from a human-robot interaction experiment showed that anticipatory control enabled the robot to respond to user requests and complete the task faster-2.5 seconds on average and up to 3.4 seconds-compared to a robot using a reactive control method that did not anticipate user intent. Our findings highlight the promise of performing anticipatory actions for achieving efficient human-robot teamwork.) <|cite_end|>, in particular in hybrid bionic systems such as exo-skeletons <|cite_start|> (Reference: Gaze interface: Utilizing human predictive gaze movements for controlling a hbs: We explore how gaze can be proactively used as part of the control interface for a hybrid bionic system (HBS) in goal directed tasks.
Since human gaze behavior has been shown to support hand movement planning, tracking gaze fixation while doing simple, well-learned, object manipulation tasks provides a natural way for inferring the subjectpsilas motion intent. We devise a simple algorithm based on the gaze fixation area and gaze velocity for sending commands to a robot simulating a HBS. This gaze interface is shown to provide early specification of sequential movement goals according to the subjectpsilas action plan (intention) and early triggering of appropriate movement commands.) <|cite_end|>, and aids collaborative search tasks <|cite_start|> (Reference: Coordinating cognition: The costs and benefits of shared gaze during collaborative search: ) <|cite_end|>. In learning tasks, robots can learn from how humans distribute their attention to other moving people to achieve more efficient and natural crowd navigation <|cite_start|> (Reference: Robot Navigation in Crowds by Graph Convolutional Networks with Attention Learned from Human Gaze: Safe and efficient crowd navigation for mobile robot is a crucial yet challenging task. Previous work has shown the power of deep reinforcement learning frameworks to train efficient policies. However, their performance deteriorates when the crowd size grows. We suggest that this can be addressed by enabling the network to identify and pay attention to the humans in the crowd that are most critical to navigation. We propose a novel network utilizing a graph representation to learn the policy. We first train a graph convolutional network based on human gaze data that accurately predicts human attention to different agents in the crowd. Then we incorporate the learned attention into a graph-based reinforcement learning architecture. The proposed attention mechanism enables the assignment of meaningful weightings to the neighbors of the robot, and has the additional benefit of interpretability. Experiments on real-world dense pedestrian datasets with various crowd sizes demonstrate that our model outperforms state-of-art methods by 18.4% in task accomplishment and by 16.4% in time efficiency.) <|cite_end|>. Human gaze tracking can help focus robot attention in imitation learning by limiting the sensor input and the number of irrelevant objects and relations processed <|cite_start|> (Reference: Using human gaze to improve robustness against irrelevant objects in robot manipulation tasks: Deep imitation learning enables the learning of complex visuomotor skills from raw pixel inputs. However, this approach suffers from the problem of overfitting to the training images. The neural network can easily be distracted by task-irrelevant objects. In this letter, we use the human gaze measured by a head-mounted eye tracking device to discard task-irrelevant visual distractions. We propose a mixture density network-based behavior cloning method that learns to imitate the human gaze. The model predicts gaze positions from raw pixel images and crops images around the predicted gazes. Only these cropped images are used to compute the output action. This cropping procedure can remove visual distractions because the gaze is rarely fixated on task-irrelevant objects. This robustness against irrelevant objects can improve the manipulation performance of robots in scenarios where task-irrelevant objects are present. We evaluated our model on four manipulation tasks designed to test the robustness of the model to irrelevant objects. 
The results indicate that the proposed model can predict the locations of task-relevant objects from gaze positions, is robust to task-irrelevant objects, and exhibits impressive manipulation performance especially in multi-object handling.) <|cite_end|>. Measuring gaze is useful in behavior recognition <|cite_start|> (Reference: Recognizing behavior in hand-eye coordination patterns: Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior such as the moment-to-moment deployment and coordination of hand, head and eye gaze used in complex tasks. We show how this data from human subjects performing a task can be used to program a dynamic Bayes network (DBN) which in turn can be used to recognize new performance instances. As a specific demonstration we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time.) <|cite_end|> and in driver behavior modeling <|cite_start|> (Reference: Modeling and prediction of human driver behavior: Knowledge of the current and future driving context could facilitate the interaction between human driver and advanced driver assistance systems. A driver's intended actions (the future context) can be inferred from a number of sources, including the driver's current control actions, their visual scanning behavior, and the traffic environment surrounding them. In an approach similar to hidden Markov models, the intended actions (e.g., to turn or change lanes) are modeled as a sequence of internal mental states, each with a characteristic pattern of behavior and environmental state. By observing the temporal patterns of these features, it is possible to determine which action the drivers are beginning or intending to execute. This approach has been successfully demonstrated in a variety of simulated driving conditions for a wide range of driver actions including emergency maneuvers. In these studies, only the control actions of the driver (i.e., steering and acceleration actions) were used to infer the driver's state. We are presently exploring the use of the driver's visual scanning behavior as another source of information about the driver's state. Visual scanning behavior offers the additional advantage of prediction of driver actions since scanning generally takes place in areas ahead of the current car position.) <|cite_end|>, for instance, to find correlations between certain fixation patterns and driving tasks, aiming to detect driver behavior and intention. Finally, models and metrics to describe gaze patterns are useful to mimic human behavior in robot gaze applications <|cite_start|> (Reference: Robot Gaze Behaviors in Human-to-Robot Handovers: We present the results of two studies investigating gaze behaviors of a robot receiving an object from a human. Robot gaze is an important nonverbal behavior during human-robot handovers, yet prior work has only studied robots as givers. From a frame-by-frame video analysis of human-human handovers, we identified four receiver gaze behaviors: gazing at the giver's hand, gazing at their face, and two kinds of face-hand transition gazes. We implemented these behaviors on a robot arm equipped with an anthropomorphic head. In Study 1, participants compared videos of a handover from a human actor to a robot exhibiting these four gaze behaviors. 
We found that when the robot transitions its head gaze from the giver's face to the giver's hand, participants consider the handover to be more likable, anthropomorphic, and communicative of timing (<inline-formula><tex-math notation="LaTeX">$p< 0.0001$</tex-math></inline-formula>). In Study 2, participants physically performed object handovers with the robot and rated their experiences of the handovers for each of the four gaze behaviors of the robot. We found weaker effects with face gaze rated the most likable (<inline-formula><tex-math notation="LaTeX">$p=0.01$</tex-math></inline-formula>) and anthropomorphic (<inline-formula><tex-math notation="LaTeX">$p=0.03$</tex-math></inline-formula>) behavior. In contrast to previous studies, we found no evidence that the robot's gaze affected the start time of the human's handover.) <|cite_end|>. A natural correlation exists between gaze direction and head orientation <|cite_start|> (Reference: Coordination of the eyes and head during visual orienting: ) <|cite_end|> <|cite_start|> (Reference: The where, what and when of gaze allocation in the lab and the natural environment: ) <|cite_end|> <|cite_start|> (Reference: Adapting the coordination of eyes and head to differences in task and environment during fully-mobile visual exploration: How are eyes and head adapted to meet the demands of visual exploration in different tasks and environments? In two studies, we measured the horizontal movements of the eyes (using mobile eye tracking in Studies 1 and 2) and the head (using inertial sensors in Study 2) while participants completed a walking task and a search and retrieval task in a large, outdoor environment. We found that the spread of visual exploration was greater while searching compared with walking, and this was primarily driven by increased movement of the head as opposed to the eyes. The contributions of the head to gaze shifts of different eccentricities was greater when searching compared to when walking. Findings are discussed with respect to understanding visual exploration as a motor action with multiple degrees of freedom.) <|cite_end|>. Researchers used this natural correlation for robotics applications to achieve natural and effective HRI <|cite_start|> (Reference: On the Benefit of Independent Control of Head and Eye Movements of a Social Robot for Multiparty Human-Robot Interaction: ) <|cite_end|>. In autonomous driving applications, the studies of head movements preceding eye movements highlight the potential of coordinated gaze behavior for goal-directed visual scanning and adapting behaviors to environmental changes <|cite_start|> (Reference: A control strategy of robot eye-head coordinated gaze behavior achieved for minimized neural transmission noise: Many studies have demonstrated the necessity to drive robot displaying natural and appropriate behaviors in social scenes. In this article, efforts were taken to restore the mechanism of human gaze behavior that can be highly informative for robot-human interaction. In order to determine the rules that biological plants obey in gaze behavior, we modeled the eye-head coordinated gaze behavior as a two degree of freedom synthetic system, and obtained a closed-form equation for determining the movement duration and dynamics of it. 
By solving the equation of this model numerically under the condition of minimal neural transmission noise effect, it was found that this model can reproduce the gaze shift behavior and predict the coordinated trajectories of eye movement and head torsion. The proposed model and methodology was tested on the Xiaopang robot platform. By directly comparing the experimental result with the practical observation data, it indicates that the proposed model and methodology is robust to represent the pattern of human eye-head coordinated gaze behavior, this concludes that the human gaze sequence has evolved as a strategy to optimize the tradeoff between focal fixation accuracy and gaze shift speed.) <|cite_end|>. Our study explores the contribution of eyes and the head to shifts of attention in various activities and compares these findings to outdoor environments <|cite_start|> (Reference: Adapting the coordination of eyes and head to differences in task and environment during fully-mobile visual exploration: How are eyes and head adapted to meet the demands of visual exploration in different tasks and environments? In two studies, we measured the horizontal movements of the eyes (using mobile eye tracking in Studies 1 and 2) and the head (using inertial sensors in Study 2) while participants completed a walking task and a search and retrieval task in a large, outdoor environment. We found that the spread of visual exploration was greater while searching compared with walking, and this was primarily driven by increased movement of the head as opposed to the eyes. The contributions of the head to gaze shifts of different eccentricities was greater when searching compared to when walking. Findings are discussed with respect to understanding visual exploration as a motor action with multiple degrees of freedom.) <|cite_end|>. Head orientation offers a rough estimate of the human gaze. However, eye-tracking proves more fitting in contexts like social interactions within robotics <|cite_start|> (Reference: Robot reading human gaze: Why eye tracking is better than head tracking for human-robot collaboration: Robots are at the position to become our everyday companions in the near future. Still, many hurdles need to be cleared to achieve this goal. One of them is the fact that robots are still not able to perceive some important communication cues naturally used by humans, e.g. gaze. In the recent past, eye gaze in robot perception was substituted by its proxy, head orientation. Such an approach is still adopted in many applications today. In this paper we introduce performance improvements to an eye tracking system we previously developed and use it to explore if this approximation is appropriate. More precisely, we compare the impact of the use of eye- or head-based gaze estimation in a human robot interaction experiment with the iCub robot and naïve subjects. We find that the possibility to exploit the richer information carried by eye gaze has a significant impact on the interaction. As a result, our eye tracking system allows for a more efficient human-robot collaboration than a comparable head tracking approach, according to both quantitative measures and subjective evaluation by the human participants.) <|cite_end|> or robot manipulation tasks <|cite_start|> (Reference: Using human gaze to improve robustness against irrelevant objects in robot manipulation tasks: Deep imitation learning enables the learning of complex visuomotor skills from raw pixel inputs. 
However, this approach suffers from the problem of overfitting to the training images. The neural network can easily be distracted by task-irrelevant objects. In this letter, we use the human gaze measured by a head-mounted eye tracking device to discard task-irrelevant visual distractions. We propose a mixture density network-based behavior cloning method that learns to imitate the human gaze. The model predicts gaze positions from raw pixel images and crops images around the predicted gazes. Only these cropped images are used to compute the output action. This cropping procedure can remove visual distractions because the gaze is rarely fixated on task-irrelevant objects. This robustness against irrelevant objects can improve the manipulation performance of robots in scenarios where task-irrelevant objects are present. We evaluated our model on four manipulation tasks designed to test the robustness of the model to irrelevant objects. The results indicate that the proposed model can predict the locations of task-relevant objects from gaze positions, is robust to task-irrelevant objects, and exhibits impressive manipulation performance especially in multi-object handling.) <|cite_end|> due to its sensitivity to subtle cues and ability to filter out irrelevant objects. Eye gaze is crucial in these scenarios, prompting analytical tools like 2D heat maps and areas of interest (AOI) to study attention distribution and eye-tracking utility in Human-Robot Interactions <|cite_start|> (Reference: Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver: Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.) <|cite_end|>. Understanding the semantics of human gaze and attention in environments shared by robots and humans is crucial for enhancing robotics applications, especially in dynamic settings with numerous potential distractors. In our study, we use the YOLOv8 model with a subset of the THÖR-MAGNI eye-tracking data to investigate its potential to interpret the visual attention when navigating and interacting with people. 
The findings indicate that human attention towards a mobile robot in a shared environment remains constant, regardless of the participant's activity or navigational behavior. <|paper_end|>
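As an editorial illustration of the semantic labeling step described above, the following Python sketch assigns each fixation to the class of the detected bounding box it falls into and aggregates per-class fixation counts. The use of the off-the-shelf ultralytics YOLOv8 package, the pixel-coordinate input format, the default label, and the confidence threshold are illustrative assumptions, not a description of the authors' actual pipeline.
\begin{verbatim}
from collections import Counter
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # pretrained COCO detector

def label_fixations(frames, fixations, min_conf=0.4):
    """frames: scene-camera images; fixations: one (x, y) pixel per frame."""
    counts = Counter()
    for frame, (gx, gy) in zip(frames, fixations):
        res = model(frame, verbose=False)[0]
        label = "background"     # default if no box contains the fixation
        for box, cls, conf in zip(res.boxes.xyxy.tolist(),
                                  res.boxes.cls.tolist(),
                                  res.boxes.conf.tolist()):
            x1, y1, x2, y2 = box
            if conf >= min_conf and x1 <= gx <= x2 and y1 <= gy <= y2:
                label = model.names[int(cls)]
                break
        counts[label] += 1
    # Dividing counts by the recording duration yields per-class fixation
    # rates, analogous to the fixation-rate metric discussed above.
    return counts
\end{verbatim}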
[ "<|reference_start|> Coordinating cognition: The costs and benefits of shared gaze during collaborative search: <|reference_end|>", "<|reference_start|> Robot Navigation in Crowds by Graph Convolutional Networks with Attention Learned from Human Gaze: Safe and efficient crowd navigation for mobile robot is a crucial yet challenging task. Previous work has shown the power of deep reinforcement learning frameworks to train efficient policies. However, their performance deteriorates when the crowd size grows. We suggest that this can be addressed by enabling the network to identify and pay attention to the humans in the crowd that are most critical to navigation. We propose a novel network utilizing a graph representation to learn the policy. We first train a graph convolutional network based on human gaze data that accurately predicts human attention to different agents in the crowd. Then we incorporate the learned attention into a graph-based reinforcement learning architecture. The proposed attention mechanism enables the assignment of meaningful weightings to the neighbors of the robot, and has the additional benefit of interpretability. Experiments on real-world dense pedestrian datasets with various crowd sizes demonstrate that our model outperforms state-of-art methods by 18.4% in task accomplishment and by 16.4% in time efficiency. <|reference_end|>", "<|reference_start|> Recognizing behavior in hand-eye coordination patterns: Modeling human behavior is important for the design of robots as well as human-computer interfaces that use humanoid avatars. Constructive models have been built, but they have not captured all of the detailed structure of human behavior such as the moment-to-moment deployment and coordination of hand, head and eye gaze used in complex tasks. We show how this data from human subjects performing a task can be used to program a dynamic Bayes network (DBN) which in turn can be used to recognize new performance instances. As a specific demonstration we show that the steps in a complex activity such as sandwich making can be recognized by a DBN in real time. <|reference_end|>", "<|reference_start|> Adapting the coordination of eyes and head to differences in task and environment during fully-mobile visual exploration: How are eyes and head adapted to meet the demands of visual exploration in different tasks and environments? In two studies, we measured the horizontal movements of the eyes (using mobile eye tracking in Studies 1 and 2) and the head (using inertial sensors in Study 2) while participants completed a walking task and a search and retrieval task in a large, outdoor environment. We found that the spread of visual exploration was greater while searching compared with walking, and this was primarily driven by increased movement of the head as opposed to the eyes. The contributions of the head to gaze shifts of different eccentricities was greater when searching compared to when walking. Findings are discussed with respect to understanding visual exploration as a motor action with multiple degrees of freedom. <|reference_end|>" ]
[ 8, 9, 11, 19 ]
{"<|cite_1|>": "ss-1352659", "<|cite_2|>": "ss-1352660", "<|cite_3|>": "ss-1170250", "<|cite_4|>": "arxiv-520376", "<|cite_6|>": "arxiv-360499", "<|cite_7|>": "ss-1352661", "<|cite_9|>": "ss-1288043", "<|cite_10|>": "ss-1352662", "<|cite_11|>": "ss-1352663", "<|cite_12|>": "arxiv-225120", "<|cite_13|>": "ss-1352664", "<|cite_14|>": "ss-1352665", "<|cite_15|>": "ss-1352666", "<|multi_cite_16_2|>": "ss-1657327", "<|multi_cite_17_1|>": "ss-1215921", "<|multi_cite_17_2|>": "ss-1352667", "<|multi_cite_17_3|>": "ss-1352668", "<|cite_18|>": "ss-1352669", "<|cite_19|>": "ss-1352670", "<|cite_20|>": "ss-1352668", "<|multi_cite_21_1|>": "ss-1170250", "<|cite_22|>": "ss-1352664", "<|multi_cite_23_2|>": "arxiv-520376"}
1806.04450
<|paper_start|> Title: An Ensemble Model for Sentiment Analysis of Hindi-English Code-Mixed Data Abstract: An Ensemble Model for Sentiment Analysis of Hindi-English Code-Mixed Data: In multilingual societies like India, code-mixed social media texts comprise the majority of the Internet. Detecting the sentiment of code-mixed user opinions plays a crucial role in understanding social, economic and political trends. In this paper, we propose an ensemble of character-trigrams based LSTM model and word-ngrams based Multinomial Naive Bayes (MNB) model to identify the sentiments of Hindi-English (Hi-En) code-mixed data. The ensemble model combines the strengths of rich sequential patterns from the LSTM model and polarity of keywords from the probabilistic ngram model to identify sentiments in sparse and inconsistent code-mixed data. Experiments on real-life user code-mixed data reveal that our approach yields state-of-the-art results as compared to several baselines and other deep learning based proposed methods. Introduction \label{intro} The rapid growth of opinion sharing on social media has led to an increased interest in sentiment analysis of social media texts. Sentiment Analysis can provide invaluable insights ranging from product reviews to capturing trending topics to designing business models for targeted advertisements. Many organizations today rely heavily on sentiment analysis of social media texts to monitor the performance of their products and take user feedback into account while upgrading to newer versions. Social media texts are informal with several linguistic differences. In multilingual societies like India, users generally combine the prominent language, like English, with their native languages. This process of switching texts between two or more languages is referred to as code-mixing. Millions of internet users in India communicate by mixing their regional languages with English, which generates an enormous amount of code-mixed social media texts. One such popular combination is the mixing of Hindi and English, resulting in Hindi-English (Hi-En) code-mixed data. For example, ``yeh gaana bohut super hai''(this song very super is), meaning \emph{``this is a superb song''}, is a Hi-En code-mixed text. Apart from several existing challenges such as the presence of multiple entities in the text and sarcasm detection, code-mixing brings with it many other unique challenges. The linguistic complexity of code-mixed content is compounded by the presence of spelling variations, transliteration and non-adherence to formal grammar. The romanized\footnote{ https://en.wikipedia.org/wiki/Romanization} code-mixed data on social media presents inherent challenges like word or phrase contractions (``please'' to ``plz''), and non-standard spellings (such as ``cooolll'' or ``suppeerrrrr''), etc. Along with diverse sentence constructions, words in Hindi can have multiple variations when written in English, which leads to a large amount of sparse and rare tokens. For instance, ``pyaar''(love) can be written as ``peyar'', ``pyar'', ``piyar'', ``piyaar'', or ``pyaarrrr'', etc. Code-mixing is a well-known problem in the field of NLP. Researchers have put in efforts for language identification, POS tagging and Named Entity Recognition of code-mixed data <|cite_start|> (Reference: i am borrowing ya mixing?'' an analysis of english-hindi code mixing in facebook: Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users.
The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.) <|cite_end|> <|cite_start|> (Reference: Word-level language identification using CRF: Code-switching shared task report of MSR India system: We describe a CRF based system for word-level language identification of code-mixed text. Our method uses lexical, contextual, character n-gram, and special character features, and therefore, can easily be replicated across languages. Its performance is benchmarked against the test sets provided by the shared task on code-mixing (Solorio et al., 2014) for four language pairs, namely, EnglishSpanish (En-Es), English-Nepali (En-Ne), English-Mandarin (En-Cn), and Standard Arabic-Arabic (Ar-Ar) Dialects. The experimental results show a consistent performance across the language pairs.) <|cite_end|> <|cite_start|> (Reference: Pos tagging of English-Hindi code-mixed social media content: Code-mixing is frequently observed in user generated content on social media, especially from multilingual users. The linguistic complexity of such content is compounded by presence of spelling variations, transliteration and non-adherance to formal grammar. We describe our initial efforts to create a multi-level annotated corpus of Hindi-English codemixed text collated from Facebook forums, and explore language identification, back-transliteration, normalization and POS tagging of this data. Our results show that language identification and transliteration for Hindi are two major challenges that impact POS tagging accuracy.) <|cite_end|> <|cite_start|> (Reference: Consonant-vowel sequences as subword units for code-mixed languages: In this research work, we develop a state-of-art model for identifying sentiment in Hindi-English code-mixed language. We introduce new phonemic sub-word units for Hindi-English code-mixed text along with a hierarchical deep learning model which uses these sub-word units for predicting sentiment. The results indicate that the model yields a significant increase in accuracy as compared to other models.) <|cite_end|> <|cite_start|> (Reference: Overview of FIRE-2015 shared task on mixed script information retrieval: The Transliterated Search track has been organized for the third year in FIRE-2015. The track had three subtasks. Subtask I was on language labeling of words in code-mixed text fragments; it was conducted for 8 Indian languages: Bangla, Gujarati, Hindi, Kannada, Malayalam, Marathi, Tamil, Telugu, mixed with English. Subtask II was on ad-hoc retrieval of Hindi film lyrics, movie reviews and astrology documents, where both the queries and documents were either in Hindi written in Devanagari or in Roman transliterated form. Subtask III was on transliterated question answering where the documents as well as questions were in Bangla script or Roman transliterated Bangla. 
A total of 24 runs were submitted by 10 teams, of which 14 runs were for subtask I and 10 runs for subtask II. There were no participation for Subtask III. The overview presents a comprehensive report of the subtasks, datasets, runs submitted and performances.) <|cite_end|> <|cite_start|> (Reference: Overview for the first shared task on language identification in code-switched data: We present an overview of the first shared task on language identification on code-switched data. The shared task included code-switched data from four language pairs: Modern Standard Arabic-Dialectal Arabic (MSA-DA), Mandarin-English (MAN-EN), Nepali-English (NEP-EN), and Spanish-English (SPA-EN). A total of seven teams participated in the task and submitted 42 system runs. The evaluation showed that language identification at the token level is more difficult when the languages present are closely related, as in the case of MSA-DA, where the prediction performance was the lowest among all language pairs. In contrast, the language pairs with the higest F-measure where SPA-EN and NEP-EN. The task made evident that language identification in code-switched data is still far from solved and warrants further research.) <|cite_end|> <|cite_start|> (Reference: Cmee-il: Code mix entity extraction in indian languages from social media text@ fire 2016-an overview: The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction) <|cite_end|>. Over the past years, researchers have established deep neural network based state-of-the-art models for sentiment analysis <|cite_start|> (Reference: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank: Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. 
Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.) <|cite_end|> <|cite_start|> (Reference: Left-Center-Right Separated Neural Network for Aspect-based Sentiment Analysis with Rotatory Attention: Deep learning techniques have achieved success in aspect-based sentiment analysis in recent years. However, there are two important issues that still remain to be further studied, i.e., 1) how to efficiently represent the target especially when the target contains multiple words; 2) how to utilize the interaction between target and left/right contexts to capture the most important words in them. In this paper, we propose an approach, called left-center-right separated neural network with rotatory attention (LCR-Rot), to better address the two problems. Our approach has two characteristics: 1) it has three separated LSTMs, i.e., left, center and right LSTMs, corresponding to three parts of a review (left context, target phrase and right context); 2) it has a rotatory attention mechanism which models the relation between target and left/right contexts. The target2context attention is used to capture the most indicative sentiment words in left/right contexts. Subsequently, the context2target attention is used to capture the most important word in the target. This leads to a two-side representation of the target: left-aware target and right-aware target. We compare our approach on three benchmark datasets with ten related methods proposed recently. The results show that our approach significantly outperforms the state-of-the-art techniques.) <|cite_end|> <|cite_start|> (Reference: Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM: Analyzing people’s opinions and sentiments towards certain aspects is an important task of natural language understanding. In this paper, we propose a novel solution to targeted aspect-based sentiment analysis, which tackles the challenges of both aspect-based sentiment analysis and targeted sentiment analysis by exploiting commonsense knowledge. We augment the long short-term memory (LSTM) network with a hierarchical attention mechanism consisting of a target-level attention and a sentence-level attention. Commonsense knowledge of sentiment-related concepts is incorporated into the end-to-end training of a deep neural network for sentiment classification. In order to tightly integrate the commonsense knowledge into the recurrent encoder, we propose an extension of LSTM, termed Sentic LSTM. 
We conduct experiments on two publicly released datasets, which show that the combination of the proposed attention architecture and Sentic LSTM can outperform state-of-the-art methods in targeted aspect sentiment tasks.) <|cite_end|> in English data. For the problem of sentiment analysis of Hi-En code-mixed data, sub-word level representations in LSTM have shown promising results <|cite_start|> (Reference: Towards Sub-Word Level Compositions for Sentiment Analysis of Hindi-English Code Mixed Text: Sentiment analysis (SA) using code-mixed data from social media has several applications in opinion mining ranging from customer satisfaction to social campaign analysis in multilingual societies. Advances in this area are impeded by the lack of a suitable annotated dataset. We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform empirical analysis comparing the suitability and performance of various state-of-the-art SA methods in social media. In this paper, we introduce learning sub-word level representations in LSTM (Subword-LSTM) architecture instead of character-level or word-level representations. This linguistic prior in our architecture enables us to learn the information about sentiment value of important morphemes. This also seems to work well in highly noisy text containing misspellings as shown in our experiments which is demonstrated in morpheme-level feature maps learned by our model. Also, we hypothesize that encoding this linguistic prior in the Subword-LSTM architecture leads to the superior performance. Our system attains accuracy 4-5% greater than traditional approaches on our dataset, and also outperforms the available system for sentiment analysis in Hi-En code-mixed text by 18%.) <|cite_end|> <|cite_start|> (Reference: Consonant-vowel sequences as subword units for code-mixed languages: In this research work, we develop a state-of-art model for identifying sentiment in Hindi-English code-mixed language. We introduce new phonemic sub-word units for Hindi-English code-mixed text along with a hierarchical deep learning model which uses these sub-word units for predicting sentiment. The results indicate that the model yields a significant increase in accuracy as compared to other models.) <|cite_end|>. However, since code-mixed data is noisy in nature and the available datasets are too small to tune deep learning models effectively, we hypothesize that n-gram based traditional models should be able to assist deep learning based models in improving the overall accuracy of sentiment analysis on code-mixed data. In this paper, we propose an ensemble model in which we combine the outputs of a character-trigram-based LSTM model and a word-ngram-based MNB model to predict the sentiment of Hi-En code-mixed texts. While the LSTM model encodes deep sequential patterns in the text, MNB captures low-level combinations of keywords to compensate for grammatical inconsistencies. Results reveal that our model is able to outperform both traditional machine learning approaches and the deep learning models proposed in the literature. The main contributions of this paper are as follows: \begin{itemize} \item We propose the use of well-established character trigrams as sub-word features in an LSTM network, which shows performance comparable to other proposed methods. This saves the effort of complicated feature engineering on sparse code-mixed data.
\item We propose an ensemble of a character-trigram-based LSTM model and a word-ngram-based MNB model to predict the sentiment of Hi-En code-mixed data. \item We evaluate and compare our model against various traditional machine learning classifiers as well as other state-of-the-art techniques. We also present a qualitative analysis of how the ngram-based MNB model helps overcome some of the shortcomings of the LSTM model. \end{itemize} The rest of the paper is organized as follows. We provide an overview of existing approaches for sentiment analysis of code-mixed data in Section \ref{relwork}. Section \ref{ourapproach} explains the data pre-processing steps taken and the design and training of the ensemble model. In Section \ref{exp}, we explain our experimental setup, describe the performance of the proposed system and compare it with baselines and other methods, followed by a discussion of our results. Finally, Section \ref{conc} concludes the paper. Related Work \label{relwork} Information extraction from user-generated code-mixed data is difficult due to its multilingual nature. Language identification tasks have been performed on several code-mixed language pairs <|cite_start|> (Reference: A hybrid approach for transliterated word-level language identification: Crf with post-processing heuristics: In this paper, we describe a hybrid approach for word-level language (WLL) identification of Bangla words written in Roman script and mixed with English words as part of our participation in the shared task on transliterated search at Forum for Information Retrieval Evaluation (FIRE) in 2014. A CRF based machine learning model and post-processing heuristics are employed for the WLL identification task. In addition to language identification, two transliteration systems were built to transliterate detected Bangla words written in Roman script into native Bangla script. The system demonstrated an overall token level language identification accuracy of 0.905. The token level Bangla and English language identification F-scores are 0.899, 0.920 respectively. The two transliteration systems achieved accuracies of 0.062 and 0.037. The word-level language identification system presented in this paper resulted in the best scores across almost all metrics among all the participating systems for the Bangla-English language pair.) <|cite_end|> <|cite_start|> (Reference: Adaptive voting in multiple classifier systems for word level language identification: In social media communication, code switching has become quite a common phenomenon especially for multilingual speakers. Automatic language identification becomes both a necessary and challenging task in such an environment. In this work, we describe a CRF based system with voting approach for code-mixed query word labeling at word-level as part of our participation in the shared task on Mixed Script Information Retrieval at Forum for Information Retrieval Evaluation (FIRE) in 2015. Our method uses character n-gram, simple lexical features and special character features, and therefore, can easily be replicated across languages. The performance of the system was evaluated against the test sets provided by the FIRE 2015 shared task on mixed script information retrieval. Experimental results show encouraging performance across the language pairs.
CCS Concepts •Computer systems organization → Embedded systems; Redundancy; Robotics; •Networks → Network reliability;) <|cite_end|> <|cite_start|> (Reference: Analyzing language samples of spanish--english bilingual children for the automated prediction of language dominance: Abstract In this work we study how features typically used in natural language processing tasks, together with measures from syntactic complexity, can be adapted to the problem of developing language profiles of bilingual children. Our experiments show that these features can provide high discriminative value for predicting language dominance from story retells in a Spanish–English bilingual population of children. Moreover, some of our proposed features are even more powerful than measures commonly used by clinical researchers and practitioners for analyzing spontaneous language samples of children. This study shows that the field of natural language processing has the potential to make significant contributions to communication disorders and related areas.) <|cite_end|> <|cite_start|> (Reference: i am borrowing ya mixing?'' an analysis of english-hindi code mixing in facebook: Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.) <|cite_end|> <|cite_start|> (Reference: Code mixing: A challenge for language identification in the language of social media: In social media communication, multilingual speakers often switch between languages, and, in such an environment, automatic language identification becomes both a necessary and challenging task. In this paper, we describe our work in progress on the problem of automatic language identification for the language of social media. We describe a new dataset that we are in the process of creating, which contains Facebook posts and comments that exhibit code mixing between Bengali, English and Hindi. We also present some preliminary word-level language identification experiments using this dataset. Different techniques are employed, including a simple unsupervised dictionary-based approach, supervised word-level classification with and without contextual clues, and sequence labelling using Conditional Random Fields. We find that the dictionary-based approach is surpassed by supervised classification and sequence labelling, and that it is important to take contextual clues into consideration.) <|cite_end|>. NLP specific tasks such as POS tagging <|cite_start|> (Reference: Analyzing language samples of spanish--english bilingual children for the automated prediction of language dominance: Abstract In this work we study how features typically used in natural language processing tasks, together with measures from syntactic complexity, can be adapted to the problem of developing language profiles of bilingual children. 
Our experiments show that these features can provide high discriminative value for predicting language dominance from story retells in a Spanish–English bilingual population of children. Moreover, some of our proposed features are even more powerful than measures commonly used by clinical researchers and practitioners for analyzing spontaneous language samples of children. This study shows that the field of natural language processing has the potential to make significant contributions to communication disorders and related areas.) <|cite_end|> <|cite_start|> (Reference: Pos tagging of English-Hindi code-mixed social media content: Code-mixing is frequently observed in user generated content on social media, especially from multilingual users. The linguistic complexity of such content is compounded by presence of spelling variations, transliteration and non-adherance to formal grammar. We describe our initial efforts to create a multi-level annotated corpus of Hindi-English codemixed text collated from Facebook forums, and explore language identification, back-transliteration, normalization and POS tagging of this data. Our results show that language identification and transliteration for Hindi are two major challenges that impact POS tagging accuracy.) <|cite_end|> <|cite_start|> (Reference: Part-of-speech tagging for code-mixed English-Hindi Twitter and Facebook chat messages: The paper reports work on collecting and annotating code-mixed English-Hindi social media text (Twitter and Facebook messages), and experiments on automatic tagging of these corpora, using both a coarse-grained and a fine-grained part-ofspeech tag set. We compare the performance of a combination of language specific taggers to that of applying four machine learning algorithms to the task (Conditional Random Fields, Sequential Minimal Optimization, Naive Bayes and Random Forests), using a range of different features based on word context and wordinternal information.) <|cite_end|> <|cite_start|> (Reference: SMPOST: Parts of Speech Tagger for Code-Mixed Indic Social Media Text: Use of social media has grown dramatically during the last few years. Users follow informal languages in communicating through social media. The language of communication is often mixed in nature, where people transcribe their regional language with English and this technique is found to be extremely popular. Natural language processing (NLP) aims to infer the information from these text where Part-of-Speech (PoS) tagging plays an important role in getting the prosody of the written text. For the task of PoS tagging on Code-Mixed Indian Social Media Text, we develop a supervised system based on Conditional Random Field classifier. In order to tackle the problem effectively, we have focused on extracting rich linguistic features. We participate in three different language pairs, ie. English-Hindi, English-Bengali and English-Telugu on three different social media platforms, Twitter, Facebook & WhatsApp. The proposed system is able to successfully assign coarse as well as fine-grained PoS tag labels for a given a code-mixed sentence. Experiments show that our system is quite generic that shows encouraging performance levels on all the three language pairs in all the domains.) <|cite_end|> and NER <|cite_start|> (Reference: Cmee-il: Code mix entity extraction in indian languages from social media text@ fire 2016-an overview: The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. 
This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction) <|cite_end|> <|cite_start|> (Reference: A Deep Neural Network based Approach for Entity Extraction in Code-Mixed Indian Social Media Text: The rise in accessibility of web to the mass has led to a spurt in the use of social media making it convenient and powerful way to express and exchange information in their own language(s). India, being enormously diversified country have more than 168 millions users on social media. This diversity is also reflected in their scripts where a majority of users often switch between their native languages to be more expressive. These linguistic variations make automatic entity extraction both a necessary and a challenging problem. In this paper, we report our work for entity extraction in a code-mixed environment. Our proposed approach is based on the popular deep neural network based Gated Recurrent Unit (GRU) archirecture that automatically discovers the higher level features from the text. We do not make use of any handcrafted features or rules, and therefore our proposed model is quite generic in nature. Our experiments on two benchmark datasets of English-Hindi and English-Tamil language pairs show the F-scores of 66 . 04% and 53 . 85% , respectively.) <|cite_end|> have also been performed on code-mixed data. Initiatives such as the FIRE-2015 shared task\footnote{http://fire.irsi.res.in/fire/2015/home} have studied the retrieval of mixed-script Indian-language content. However, these solutions do not address the problem of sentiment analysis in code-mixed data. Following the current trend, researchers have seen great success in the task of sentiment analysis of English data using deep neural networks.
Recurrent Neural Networks (RNNs) and their variants have consistently outperformed traditional state-of-the-art sentiment analysis models <|cite_start|> (Reference: Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions: We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.) <|cite_end|> <|cite_start|> (Reference: {Semantic Compositionality through Recursive Matrix-Vector Spaces: Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.) <|cite_end|> <|cite_start|> (Reference: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank: Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.) <|cite_end|>.
<|cite_start|> (Reference: Left-Center-Right Separated Neural Network for Aspect-based Sentiment Analysis with Rotatory Attention: Deep learning techniques have achieved success in aspect-based sentiment analysis in recent years. However, there are two important issues that still remain to be further studied, i.e., 1) how to efficiently represent the target especially when the target contains multiple words; 2) how to utilize the interaction between target and left/right contexts to capture the most important words in them. In this paper, we propose an approach, called left-center-right separated neural network with rotatory attention (LCR-Rot), to better address the two problems. Our approach has two characteristics: 1) it has three separated LSTMs, i.e., left, center and right LSTMs, corresponding to three parts of a review (left context, target phrase and right context); 2) it has a rotatory attention mechanism which models the relation between target and left/right contexts. The target2context attention is used to capture the most indicative sentiment words in left/right contexts. Subsequently, the context2target attention is used to capture the most important word in the target. This leads to a two-side representation of the target: left-aware target and right-aware target. We compare our approach on three benchmark datasets with ten related methods proposed recently. The results show that our approach significantly outperforms the state-of-the-art techniques.) <|cite_end|> employed a context2target attention based LSTM model to perform targeted sentiment analysis by capturing the most important words in the left and right contexts. <|cite_start|> (Reference: Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM: Analyzing people’s opinions and sentiments towards certain aspects is an important task of natural language understanding. In this paper, we propose a novel solution to targeted aspect-based sentiment analysis, which tackles the challenges of both aspect-based sentiment analysis and targeted sentiment analysis by exploiting commonsense knowledge. We augment the long short-term memory (LSTM) network with a hierarchical attention mechanism consisting of a target-level attention and a sentence-level attention. Commonsense knowledge of sentiment-related concepts is incorporated into the end-to-end training of a deep neural network for sentiment classification. In order to tightly integrate the commonsense knowledge into the recurrent encoder, we propose an extension of LSTM, termed Sentic LSTM. We conduct experiments on two publicly released datasets, which show that the combination of the proposed attention architecture and Sentic LSTM can outperform state-of-the-art methods in targeted aspect sentiment tasks.) <|cite_end|> integrated commonsense knowledge into a recurrent encoder to form \emph{Sentic} LSTM. Owing to the availability of large-scale labeled English data, such LSTM models are able to learn rich sequential patterns that capture sentiment. However, code-mixed data is limited and sparse in nature, making it difficult for deep learning techniques to learn generic patterns from the data effectively. In the area of sentiment analysis of Hi-En code-mixed data, very little work has been done so far.
A shared task on Sentiment Analysis of Indian Languages (Code-Mixed) (SAIL Code-Mixed)\footnote{http://www.dasdipankar.com/SAILCodeMixed.html} over Twitter data was organized at ICON-2017\footnote{https://ltrc.iiit.ac.in/icon2017/}. <|cite_start|> (Reference: Shared Task on Sentiment Analysis in Indian Languages (SAIL) Tweets - An Overview: ) <|cite_end|> summarizes the dataset used, the models submitted by the participants and their results. The best submission for the Hi-En language pair used features such as 300-dimensional GloVe word embeddings and TF-IDF scores of word and character ngrams, and trained an ensemble of a linear SVM, Logistic Regression and Random Forests to classify sentiment. Among the deep learning approaches, <|cite_start|> (Reference: Towards Sub-Word Level Compositions for Sentiment Analysis of Hindi-English Code Mixed Text: Sentiment analysis (SA) using code-mixed data from social media has several applications in opinion mining ranging from customer satisfaction to social campaign analysis in multilingual societies. Advances in this area are impeded by the lack of a suitable annotated dataset. We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform empirical analysis comparing the suitability and performance of various state-of-the-art SA methods in social media. In this paper, we introduce learning sub-word level representations in LSTM (Subword-LSTM) architecture instead of character-level or word-level representations. This linguistic prior in our architecture enables us to learn the information about sentiment value of important morphemes. This also seems to work well in highly noisy text containing misspellings as shown in our experiments which is demonstrated in morpheme-level feature maps learned by our model. Also, we hypothesize that encoding this linguistic prior in the Subword-LSTM architecture leads to the superior performance. Our system attains accuracy 4-5% greater than traditional approaches on our dataset, and also outperforms the available system for sentiment analysis in Hi-En code-mixed text by 18%.) <|cite_end|> employed sub-word level representations in an LSTM architecture, yielding state-of-the-art results compared to traditional machine learning models and word-polarity based models. However, given the small and very sparse dataset, we believe that deep learning based techniques cannot capture all the hidden patterns in the data and, in particular, cannot generalize to rare keywords that affect the sentiment of a sentence. <|cite_start|> (Reference: Consonant-vowel sequences as subword units for code-mixed languages: In this research work, we develop a state-of-art model for identifying sentiment in Hindi-English code-mixed language. We introduce new phonemic sub-word units for Hindi-English code-mixed text along with a hierarchical deep learning model which uses these sub-word units for predicting sentiment. The results indicate that the model yields a significant increase in accuracy as compared to other models.) <|cite_end|> introduced phonemic sub-word units and used them with a hierarchical Bi-directional LSTM (BiLSTM) model to detect sentiment in Hi-En code-mixed texts. We believe such a complex network, with so many weights and hyper-parameters, cannot be tuned to its full potential on a small dataset. Therefore, we propose an ensemble model in which a keyword-based MNB model helps overcome some of the shortcomings of a deep learning based classifier. <|paper_end|>
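To make the proposed combination concrete, the following is a minimal sketch of such an ensemble, not the authors' released implementation: a character-trigram LSTM and a word-ngram MNB are trained separately and their class probabilities are summed. The toy sentences, the three-way label set (0 = negative, 1 = neutral, 2 = positive), the layer sizes, and the simple probability-averaging rule are all assumptions made for illustration; note how spelling variants such as "pyaar" and "pyar" still share most character trigrams, which is what lets the sub-word branch cope with sparse, noisy spellings.

```python
import numpy as np
import tensorflow as tf
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy Hi-En code-mixed examples; labels: 0 = negative, 1 = neutral, 2 = positive.
texts = ["yeh gaana bohut super hai", "movie bilkul bakwas thi",
         "kal office jaana hai", "kya mast song hai yaar",
         "service bohut kharab hai", "aaj match hai shaam ko"]
labels = np.array([2, 0, 1, 2, 0, 1])

def char_trigrams(text):
    """Overlapping character trigrams of the lowercased text."""
    t = text.lower()
    return [t[i:i + 3] for i in range(len(t) - 2)]

# Trigram vocabulary; id 0 is reserved for padding/unknown trigrams.
vocab = {g: i + 1 for i, g in enumerate(sorted({g for s in texts for g in char_trigrams(s)}))}
seqs = [[vocab.get(g, 0) for g in char_trigrams(s)] for s in texts]
X_lstm = tf.keras.preprocessing.sequence.pad_sequences(seqs, padding="post")

# Character-trigram LSTM branch.
lstm = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 32, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(3, activation="softmax"),
])
lstm.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
lstm.fit(X_lstm, labels, epochs=5, verbose=0)

# Word unigram + bigram MNB branch.
mnb = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
mnb.fit(texts, labels)

def ensemble_predict(new_texts):
    """Sum the two models' class probabilities (one simple combination rule)."""
    new_seqs = [[vocab.get(g, 0) for g in char_trigrams(s)] for s in new_texts]
    x = tf.keras.preprocessing.sequence.pad_sequences(
        new_seqs, maxlen=X_lstm.shape[1], padding="post")
    # Columns align: both models order classes as 0, 1, 2.
    p = lstm.predict(x, verbose=0) + mnb.predict_proba(new_texts)
    return p.argmax(axis=1)

print(ensemble_predict(["yeh song bohut mast hai"]))
```

Other combination rules (weighted voting, stacking a meta-classifier on the two probability vectors) are equally plausible; plain averaging is used here only to keep the sketch short.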
[ "<|reference_start|> Consonant-vowel sequences as subword units for code-mixed languages: In this research work, we develop a state-of-art model for identifying sentiment in Hindi-English code-mixed language. We introduce new phonemic sub-word units for Hindi-English code-mixed text along with a hierarchical deep learning model which uses these sub-word units for predicting sentiment. The results indicate that the model yields a significant increase in accuracy as compared to other models. <|reference_end|>", "<|reference_start|> Overview for the first shared task on language identification in code-switched data: We present an overview of the first shared task on language identification on code-switched data. The shared task included code-switched data from four language pairs: Modern Standard Arabic-Dialectal Arabic (MSA-DA), Mandarin-English (MAN-EN), Nepali-English (NEP-EN), and Spanish-English (SPA-EN). A total of seven teams participated in the task and submitted 42 system runs. The evaluation showed that language identification at the token level is more difficult when the languages present are closely related, as in the case of MSA-DA, where the prediction performance was the lowest among all language pairs. In contrast, the language pairs with the higest F-measure where SPA-EN and NEP-EN. The task made evident that language identification in code-switched data is still far from solved and warrants further research. <|reference_end|>", "<|reference_start|> Part-of-speech tagging for code-mixed English-Hindi Twitter and Facebook chat messages: The paper reports work on collecting and annotating code-mixed English-Hindi social media text (Twitter and Facebook messages), and experiments on automatic tagging of these corpora, using both a coarse-grained and a fine-grained part-ofspeech tag set. We compare the performance of a combination of language specific taggers to that of applying four machine learning algorithms to the task (Conditional Random Fields, Sequential Minimal Optimization, Naive Bayes and Random Forests), using a range of different features based on word context and wordinternal information. <|reference_end|>", "<|reference_start|> Targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive LSTM: Analyzing people’s opinions and sentiments towards certain aspects is an important task of natural language understanding. In this paper, we propose a novel solution to targeted aspect-based sentiment analysis, which tackles the challenges of both aspect-based sentiment analysis and targeted sentiment analysis by exploiting commonsense knowledge. We augment the long short-term memory (LSTM) network with a hierarchical attention mechanism consisting of a target-level attention and a sentence-level attention. Commonsense knowledge of sentiment-related concepts is incorporated into the end-to-end training of a deep neural network for sentiment classification. In order to tightly integrate the commonsense knowledge into the recurrent encoder, we propose an extension of LSTM, termed Sentic LSTM. We conduct experiments on two publicly released datasets, which show that the combination of the proposed attention architecture and Sentic LSTM can outperform state-of-the-art methods in targeted aspect sentiment tasks. <|reference_end|>" ]
[ 3, 5, 19, 27 ]
{"<|multi_cite_1_1|>": "ss-1033774", "<|multi_cite_1_2|>": "ss-1508139", "<|multi_cite_1_3|>": "ss-859097", "<|multi_cite_1_4|>": "ss-2061281", "<|multi_cite_1_5|>": "ss-1079136", "<|multi_cite_1_6|>": "ss-1266136", "<|multi_cite_1_7|>": "ss-2061282", "<|multi_cite_2_1|>": "ss-1355301", "<|multi_cite_2_2|>": "arxiv-147112", "<|multi_cite_2_3|>": "ss-985234", "<|multi_cite_3_1|>": "arxiv-109165", "<|multi_cite_3_2|>": "ss-2061281", "<|multi_cite_4_1|>": "ss-2061283", "<|multi_cite_4_2|>": "ss-1960094", "<|multi_cite_4_3|>": "ss-1960095", "<|multi_cite_4_4|>": "ss-1033774", "<|multi_cite_4_5|>": "ss-1079135", "<|multi_cite_5_1|>": "ss-1960095", "<|multi_cite_5_2|>": "ss-859097", "<|multi_cite_5_3|>": "ss-1459318", "<|multi_cite_5_4|>": "arxiv-115623", "<|multi_cite_6_1|>": "ss-2061282", "<|multi_cite_6_2|>": "ss-862466", "<|multi_cite_7_1|>": "ss-1113543", "<|multi_cite_7_2|>": "ss-741293", "<|multi_cite_7_3|>": "ss-1355301", "<|cite_8|>": "arxiv-147112", "<|cite_9|>": "ss-985234", "<|cite_10|>": "ss-1287188", "<|cite_11|>": "arxiv-109165", "<|cite_12|>": "ss-2061281"}
1712.00512
<|paper_start|> Title: Learning Neural Markers of Schizophrenia Disorder Using Recurrent Neural Networks Abstract: Learning Neural Markers of Schizophrenia Disorder Using Recurrent Neural Networks: Smart systems that can accurately diagnose patients with mental disorders and identify effective treatments based on brain functional imaging data have broad applicability and are gaining much attention. Most previous machine learning studies use hand-designed features, such as functional connectivity, which do not preserve the potentially useful information in the spatial relationships between brain regions and the temporal profile of the signal in each region. Here we propose a new method based on recurrent-convolutional neural networks to automatically learn useful representations from segments of 4-D fMRI recordings. Our goal is to exploit both spatial and temporal information in the functional MRI movie (at the whole-brain voxel level) for identifying patients with schizophrenia. Introduction Diagnosis of psychiatric diseases is challenging, as there are currently no objective biological markers associated with mental disorders. Similarity of symptoms among different diseases (e.g. the depression phase of bipolar disorder and unipolar depression) can lead to inaccurate diagnosis and less effective intervention. Worse, there is also no objective biological marker for predicting treatment response in an individual. This often results in multiple changes to a patient's prescription, in turn leading to poor adherence given the medications' side effects. Such inefficiency in the diagnosis and treatment prognosis process for psychiatric disorders has increased the global burden of disease, with mental illness ranking first, ahead of cancer and cardiac conditions, in terms of time lost to disability (WHO 2012 report) and costs <|cite_start|> (Reference: Mental disorders top the list of the most costly conditions in the united states: \$201 billion: Estimates of annual health spending for a comprehensive set of medical conditions are presented for the entire US population and with totals benchmarked to the National Health Expenditure Accounts. In 2013 mental disorders topped the list of most costly conditions, with spending at $201 billion.) <|cite_end|>. In recent years, machine learning techniques have shown success in identifying patients with mental or neurological disorders and in predicting treatment response using brain imaging, especially structural and/or functional MRI (magnetic resonance imaging) data <|cite_start|> (Reference: Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review: ) <|cite_end|> <|cite_start|> (Reference: Towards the identification of imaging biomarkers in schizophrenia, using multivariate pattern classification at a single-subject level: ) <|cite_end|> <|cite_start|> (Reference: Multisite prediction of 4-week and 52-week treatment outcomes in patients with first-episode psychosis: a machine learning approach.: ) <|cite_end|> <|cite_start|> (Reference: Learning stable and predictive network-based patterns of schizophrenia and its clinical symptoms: ) <|cite_end|>.
Almost all these studies extract features from imaging data and then apply standard learning algorithms to produce classifiers, such as support vector machines (SVMs) <|cite_start|> (Reference: Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review: ) <|cite_end|> <|cite_start|> (Reference: Neuroscience and Biobehavioral Reviews Review from Estimating Activation Locality to Predicting Disorder: a Review of Pattern Recognition for Neuroimaging-based Psychiatric Diagnostics: Psychiatric diagnostics Psychiatric disorders Mental disorders Schizophrenia Bipolar disorder Major depressive disorder Obsessive compulsive disorder Social anxiety disorder Post-traumatic stress disorder Specific phobia Attention-deficit/hyperactivity disorder Autism spectrum disorder a b s t r a c t Psychiatric disorders are increasingly being recognised as having a biological basis, but their diagnosis is made exclusively behaviourally. A promising approach for 'biomarker' discovery has been based on pattern recognition methods applied to neuroimaging data, which could yield clinical utility in future. In this review we survey the literature on pattern recognition for making diagnostic predictions in psychiatric disorders, and evaluate progress made in translating such findings towards clinical application. We evaluate studies on many criteria, including data modalities used, the types of features extracted and algorithm applied. We identify problems common to many studies, such as a relatively small sample size and a primary focus on estimating generalisability within a single study. Furthermore, we highlight challenges that are not widely acknowledged in the field including the importance of accommodating disease prevalence, the necessity of more extensive validation using large carefully acquired samples, the need for methodological innovations to improve accuracy and to discriminate between multiple disorders simultaneously. Finally, we identify specific clinical contexts in which pattern recognition can add value in the short to medium term.) <|cite_end|>, that can discriminate between patients and controls or predict response to treatment. Typical imaging features extracted from functional MRI (fMRI) or structural MRI (sMRI) data include functional connectivity (FC) and the amplitude of low-frequency fluctuations (ALFF) for fMRI, and voxel-based morphometry and gray matter thickness/volume for sMRI. Such features may be extracted voxel-wise (where each voxel is a small volume of brain tissue, $\sim 1\text{--}27~\mathrm{mm}^3$) or region-wise, from predefined brain regions (e.g. the thalamus or postcentral gyrus).
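As a concrete illustration of this conventional pipeline — region-wise time series reduced to a functional-connectivity vector and classified with an SVM — consider the sketch below. The region count, number of time points, and the synthetic data are placeholders, not values taken from any of the cited studies.

```python
import numpy as np
from sklearn.svm import SVC

def fc_features(ts):
    """ts: (T, R) array of T time points for R brain regions.
    Returns the upper triangle of the R x R Pearson correlation
    matrix as a flat feature vector of length R*(R-1)/2."""
    corr = np.corrcoef(ts, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)  # skip the diagonal (self-correlations)
    return corr[iu]

rng = np.random.default_rng(0)
# 20 synthetic subjects, 140 time points, 90 regions (e.g. an AAL-style atlas).
X = np.stack([fc_features(rng.standard_normal((140, 90))) for _ in range(20)])
y = rng.integers(0, 2, size=20)  # 0 = control, 1 = patient
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:3]))
```

Note how each pair of regions contributes only a single correlation value — exactly the collapse of temporal information that the approach proposed below is designed to avoid.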
With deep learning techniques providing outstanding performance in various fields, including image classification, speech recognition, and video classification, among others, this approach is being explored in clinical applications, including those involving medical imaging data <|cite_start|> (Reference: {Deep Learning in Medical Image Analysis: Deep Learning for Medical Image InterpretationDeep Learning for Medical Image AnalysisDeep Learning and Convolutional Neural Networks for Medical Image ComputingFrom Fully-Supervised, Single-Task to ScarcelySupervised, Multi-Task Deep Learning for Medical Image AnalysisMachine Learning for Medical Image ReconstructionDeep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision SupportDe Wim Hof methodeMachine Learning for Medical Image ReconstructionDeep Learning and Data Labeling for Medical ApplicationsA Comparison of Deep Learning Algorithms for Medical Image Classification and Image EnhancementMachine Learning in MedicineUnderstanding and Interpreting Machine Learning in Medical Image Computing ApplicationsDeep Learning for Medical Applications with Unique DataMedical ImagingHandbook of Medical Image Computing and Computer Assisted InterventionUncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image AnalysisMedical Image Computing and Computer-Assisted Intervention – MICCAI 2016Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision SupportMedical Images Classification Using Deep LearningDeep Learning for COVID Image AnalysisMachine Learning in Medical ImagingBrain Tumor MRI Image Segmentation Using Deep Learning TechniquesDeep Learning and Convolutional Neural Networks for Medical Imaging and Clinical) <|cite_end|> <|cite_start|> (Reference: A Survey on Deep Learning in Medical Image Analysis: Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.) <|cite_end|> <|cite_start|> (Reference: {Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs: Importance Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. Objective To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. Design and Setting A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. 
The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Exposure Deep learning-trained algorithm. Main Outcomes and Measures The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. Results The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%. Conclusions and Relevance In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.) <|cite_end|>. In addition to their potential to surpass the performance of other standard machine learning techniques, deep learning methods are attractive because they can be applied directly to the data, skipping the need to extract hand-designed features, a step that is necessary in almost all other machine learning approaches. Beyond possibly improving prediction accuracy, deep neural networks (DNNs) allow us to move away from hypothesis-driven feature selection toward data-driven feature discovery. Various deep learning methods (e.g. multi-layer perceptrons, autoencoders, deep belief networks, and convolutional neural networks) have been used to analyze imaging data for various psychiatric and neurological disorders, including but not limited to Alzheimer's disease, ADHD, and psychosis \citep[see][for a review]{Vieira2017}. Most of these studies use sMRI for prediction in neurological disorders, and far fewer studies use fMRI <|cite_start|> (Reference: Deep learning for neuroimaging: a validation study: Deep learning methods have recently made notable advances in the tasks of classification and representation learning. These tasks are important for brain imaging and neuroscience discovery, making the methods attractive for porting to a neuroimager's toolbox.
Success of these methods is, in part, explained by the flexibility of deep learning models. However, this flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.) <|cite_end|> <|cite_start|> (Reference: Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia: ) <|cite_end|> <|cite_start|> (Reference: State-space model with deep learning for functional dynamics estimation in resting-state fMRI: ) <|cite_end|> <|cite_start|> (Reference: Deepad: Alzheimer's disease classification via deep convolutional neural networks using MRI and fMRI: To extract patterns from neuroimaging data, various statistical methods and machine learning algorithms have been explored for the diagnosis of Alzheimer’s disease among older adults in both clinical and research applications; however, distinguishing between Alzheimer’s and healthy brain data has been challenging in older adults (age > 75) due to highly similar patterns of brain atrophy and image intensities. Recently, cutting-edge deep learning technologies have rapidly expanded into numerous fields, including medical image analysis. This paper outlines state-of-the-art deep learning-based pipelines employed to distinguish Alzheimer’s magnetic resonance imaging (MRI) and functional MRI (fMRI) from normal healthy control data for a given age group. Using these pipelines, which were executed on a GPU-based high-performance computing platform, the data were strictly and carefully preprocessed. Next, scale- and shift-invariant low- to high-level features were obtained from a high volume of training images using convolutional neural network (CNN) architecture. In this study, fMRI data were used for the first time in deep learning applications for the purposes of medical image analysis and Alzheimer’s disease prediction. These proposed and implemented pipelines, which demonstrate a significant improvement in classification output over other studies, resulted in high and reproducible accuracy rates of 99.9% and 98.84% for the fMRI and MRI pipelines, respectively. Additionally, for clinical purposes, subject-level classification was performed, resulting in an average accuracy rate of 94.32% and 97.88% for the fMRI and MRI pipelines, respectively. Finally, a decision making algorithm designed for the subject-level classification improved the rate to 97.77% for fMRI and 100% for MRI pipelines.) 
<|cite_end|>, which has been shown to be particularly relevant in the predictive analysis of psychiatric disorders (such as schizophrenia) <|cite_start|> (Reference: Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia: ) <|cite_end|> <|cite_start|> (Reference: Functional brain networks in schizophrenia: a review: Functional magnetic resonance imaging (fMRI) has become a major technique for studying cognitive function and its disruption in mental illness, including schizophrenia. The major proportion of imaging studies focused primarily upon identifying regions which hemodynamic response amplitudes covary with particular stimuli and differentiate between patient and control groups. In addition to such amplitude based comparisons, one can estimate temporal correlations and compute maps of functional connectivity between regions which include the variance associated with event-related responses as well as intrinsic fluctuations of hemodynamic activity. Functional connectivity maps can be computed by correlating all voxels with a seed region when a spatial prior is available. An alternative are multivariate decompositions such as independent component analysis (ICA) which extract multiple components, each of which is a spatially distinct map of voxels with a common time course. Recent work has shown that these networks are pervasive in relaxed resting and during task performance and hence provide robust measures of intact and disturbed brain activity. This in turn bears the prospect of yielding biomarkers for schizophrenia, which can be described both in terms of disrupted local processing as well as altered global connectivity between large-scale networks. In this review we will summarize functional connectivity measures with a focus upon work with ICA and discuss the meaning of intrinsic fluctuations. In addition, examples of how brain networks have been used for classification of disease will be shown. We present work with functional network connectivity, an approach that enables the evaluation of the interplay between multiple networks and how they are affected in disease. We conclude by discussing new variants of ICA for extracting maximally group discriminative networks from data. In summary, it is clear that identification of brain networks and their inter-relationships with fMRI has great potential to improve our understanding of schizophrenia.) <|cite_end|>. fMRI measures the blood oxygenation level-dependent (BOLD) signal at every brain voxel by taking a scan of the whole brain every 1--3 s. This produces a \textit{movie} of brain activity (reflected in the BOLD signal\footnote{Note that the relationship between the BOLD signal and neural activity is still under scrutiny <|cite_start|> (Reference: Connecting the dots: Rising capacity to measure extensive arrays of biological parameters has ushered in an era of biomedical big data. As massive datasets from large cohorts become the norm, the discipline of data science has emerged to tackle data-driven problems at the intersection of biomedical research and patient care. We introduce several sources of cardiovascular big data and discuss the importance of maximizing participation in data-driven knowledge production models.) <|cite_end|>.}), either in response to a task (e.g., a motor, sensory, or cognitive task) or simply at rest.
Here, our goal is to exploit both the spatial and the temporal information in the fMRI movie (at the whole-brain voxel level) to distinguish patients with schizophrenia from healthy controls. We propose using a recurrent convolutional neural network (R-CNN) involving a 3-D CNN followed by a sequential neural network with LSTM (long short-term memory) units. The CNN extracts spatial features, which are fed to the LSTM model, which uses the dependencies between time points at every spatial location to generate a label $\in\{patient, control\}$ (see Figure~\ref{fig1}). To our knowledge, this is the first work to apply a recurrent CNN to fMRI data for neurological/psychiatric diagnosis (here schizophrenia). As discussed earlier, most previous fMRI/machine learning studies, including some that used DNNs <|cite_start|> (Reference: Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia: ) <|cite_end|>, use hand-designed features, in particular FC features <|cite_start|> (Reference: Learning stable and predictive network-based patterns of schizophrenia and its clinical symptoms: ) <|cite_end|>, which collapse the time dimension into a single number (i.e., the correlation coefficient between a pair of time series). Such approaches do not keep track of the relationships between spatial locations (e.g., voxels or brain regions) either. Here, we expand the work by <|cite_start|> (Reference: Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks: One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG which leads to finding features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.) <|cite_end|>, who successfully applied an R-CNN (with 2-D convolutions) to EEG data in a mental load classification task, to fMRI data (using 3-D convolutions). We used fMRI data in response to an auditory oddball task from patients diagnosed with schizophrenia and healthy controls from the FBIRN dataset <|cite_start|> (Reference: The Function Biomedical Informatics Research Network Data Repository: ) <|cite_end|>. The task is to predict whether a given sample came from a patient or a control, based on the preprocessed fMRI BOLD signal at the voxel level, exploiting the temporal and spatial information in the data within an end-to-end deep learning framework. <|paper_end|>
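To make the architecture just described concrete, the following is a minimal sketch of a recurrent 3-D CNN: a 3-D convolutional feature extractor applied to each brain volume, an LSTM over the resulting feature sequence, and a two-way classification head. It assumes PyTorch, and every layer width, kernel size, and the toy input shape is an illustrative placeholder rather than the configuration used in the paper.

```python
# Minimal sketch of a recurrent 3-D CNN for fMRI classification (PyTorch).
# Input: a batch of fMRI movies shaped (batch, time, 1, D, H, W).
# All sizes are illustrative placeholders, not the paper's values.
import torch
import torch.nn as nn

class RCNN3D(nn.Module):
    def __init__(self, hidden_size=64, num_classes=2):
        super().__init__()
        # 3-D CNN applied independently to each brain volume (time step)
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),             # -> (batch*time, 16, 1, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden_size,
                            batch_first=True)    # temporal dependencies
        self.head = nn.Linear(hidden_size, num_classes)  # patient vs control

    def forward(self, x):                        # x: (B, T, 1, D, H, W)
        B, T = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))        # one CNN pass per volume
        feats = feats.flatten(1).view(B, T, -1)  # (B, T, 16) feature sequence
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])             # logits from last time step

logits = RCNN3D()(torch.randn(2, 10, 1, 32, 32, 32))  # toy-sized volumes
```

Flattening the batch and time dimensions lets a single 3-D CNN process every volume before the LSTM consumes the resulting sequence, mirroring the spatial-then-temporal split described above.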
[ "<|reference_start|> Towards the identification of imaging biomarkers in schizophrenia, using multivariate pattern classification at a single-subject level: <|reference_end|>", "<|reference_start|> Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review: <|reference_end|>", "<|reference_start|> Functional brain networks in schizophrenia: a review: Functional magnetic resonance imaging (fMRI) has become a major technique for studying cognitive function and its disruption in mental illness, including schizophrenia. The major proportion of imaging studies focused primarily upon identifying regions which hemodynamic response amplitudes covary with particular stimuli and differentiate between patient and control groups. In addition to such amplitude based comparisons, one can estimate temporal correlations and compute maps of functional connectivity between regions which include the variance associated with event-related responses as well as intrinsic fluctuations of hemodynamic activity. Functional connectivity maps can be computed by correlating all voxels with a seed region when a spatial prior is available. An alternative are multivariate decompositions such as independent component analysis (ICA) which extract multiple components, each of which is a spatially distinct map of voxels with a common time course. Recent work has shown that these networks are pervasive in relaxed resting and during task performance and hence provide robust measures of intact and disturbed brain activity. This in turn bears the prospect of yielding biomarkers for schizophrenia, which can be described both in terms of disrupted local processing as well as altered global connectivity between large-scale networks. In this review we will summarize functional connectivity measures with a focus upon work with ICA and discuss the meaning of intrinsic fluctuations. In addition, examples of how brain networks have been used for classification of disease will be shown. We present work with functional network connectivity, an approach that enables the evaluation of the interplay between multiple networks and how they are affected in disease. We conclude by discussing new variants of ICA for extracting maximally group discriminative networks from data. In summary, it is clear that identification of brain networks and their inter-relationships with fMRI has great potential to improve our understanding of schizophrenia. <|reference_end|>", "<|reference_start|> Connecting the dots: R ising capacity to measure extensive arrays of biological parameters has ushered in an era of biomedical big data. As massive datasets from large cohorts become the norm, the discipline of data science has emerged to tackle data-driven problems at the intersection of biomedical research and patient care. We introduce several sources of cardiovascular big data and discuss the importance of maximizing participation in data-driven knowledge production models. <|reference_end|>" ]
[ 2, 5, 15, 16 ]
{"<|cite_1|>": "ss-975762", "<|multi_cite_2_1|>": "ss-975763", "<|multi_cite_2_2|>": "ss-975764", "<|multi_cite_2_3|>": "ss-975765", "<|multi_cite_2_5|>": "ss-863913", "<|multi_cite_3_1|>": "ss-975763", "<|multi_cite_3_2|>": "ss-860010", "<|multi_cite_4_1|>": "ss-1357333", "<|multi_cite_4_2|>": "arxiv-116899", "<|multi_cite_4_3|>": "ss-685934", "<|multi_cite_5_1|>": "arxiv-54300", "<|multi_cite_5_2|>": "ss-1549841", "<|multi_cite_5_3|>": "ss-975766", "<|multi_cite_5_4|>": "ss-975502", "<|multi_cite_6_1|>": "ss-1192826", "<|multi_cite_6_2|>": "ss-975767", "<|cite_7|>": "ss-975768", "<|cite_8|>": "ss-1549841", "<|cite_9|>": "ss-863913", "<|cite_10|>": "arxiv-87658", "<|cite_11|>": "ss-975769"}
1910.12783
<|paper_start|> Title: Distributed Networked Learning with Correlated Data Abstract: Distributed Networked Learning with Correlated Data: We consider a distributed estimation method in a setting with heterogeneous streams of correlated data distributed across nodes in a network. In the considered approach, linear models are estimated locally (i.e., with only local data) subject to a network regularization term that penalizes a local model that differs from neighboring models. We analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes). We provide a finite-time characterization of convergence of the weighted ensemble average estimate and compare this result to federated learning, an alternative approach to estimation wherein a single model is updated by locally generated gradient updates. This comparison highlights the trade-off between speed vs precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision. We illustrate the method's general applicability in two examples: estimating a Markov random field using wireless sensor networks and modeling prey escape behavior of flocking birds based on a publicly available dataset. Introduction \label{sec:introduction} The ever-growing size and complexity of data create scalability challenges for storage and processing. In certain application domains, data cannot be stored or processed in a single location due to geographical constraints or limited bandwidth. In such cases, a distributed architecture for data storage or processing relying on a network of interconnected computers (not necessarily in the same physical location) is often required. In this paper, we consider the problem of estimating a linear model in real time based upon heterogeneous streams of correlated data that are distributed across nodes in a network. Since data streams are neither independent nor identically distributed, a model estimated exclusively from local data may be of arbitrarily low precision. When data centralization is neither feasible nor desirable (e.g., due to privacy concerns), the challenge for a distributed approach to estimation consists of identifying algorithmic solutions with low overhead that guarantee improved precision over models obtained exclusively with local data. In this paper, we consider an approach to distributed estimation that successfully addresses these concerns. In the proposed approach, locally estimated linear models are updated in response to new gradient estimates for either a local loss measure (generalized least squares) or a network regularization function. This function penalizes a local model in a manner proportional to its distance to neighboring local models. The regularization-based updates require periodic exchanges of locally identified models amongst neighbors, a task with relatively low communication overhead when models are not high-dimensional. In the first part of the paper, we analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes to compute the gradient of the network regularization). To undertake the analysis, we use a continuous-time approximation of the underlying stochastic difference equations, which allows the use of It\^o's calculus.
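Before turning to the formal results, a concrete (if simplified) picture of the scheme may help. The sketch below alternates the two update types at each node: a stochastic gradient step on the local least-squares loss, and a network-regularization step that pulls the node's model toward its neighbors' models. It is plain NumPy with a synchronous loop and a hypothetical ring graph; the analysis in the paper treats asynchronous updates at heterogeneous rates, and all constants here are illustrative.

```python
# Illustrative sketch of networked estimation with a regularization
# penalty toward neighboring models. Synchronous loop and ring graph
# are simplifications; constants are placeholders, not tuned values.
import numpy as np

rng = np.random.default_rng(0)
N, p = 5, 3                                    # nodes, model dimension
theta_true = rng.normal(size=p)
neighbors = {k: [(k - 1) % N, (k + 1) % N] for k in range(N)}  # ring graph
theta = [rng.normal(size=p) for _ in range(N)]
gamma, lam = 0.05, 1.0                         # step size, network penalty

for t in range(2000):
    for k in range(N):
        x = rng.normal(size=p)                 # local sample; noise level
        y = x @ theta_true + (0.1 + 0.5 * k) * rng.normal()  # differs by node
        grad_loss = (theta[k] @ x - y) * x     # stochastic local-loss gradient
        grad_reg = sum(theta[k] - theta[j] for j in neighbors[k])
        theta[k] = theta[k] - gamma * (grad_loss + lam * grad_reg)

ensemble = np.mean(theta, axis=0)              # (unweighted) ensemble average
```

Note that the regularization gradient only involves the neighbors' current models, which is why the scheme needs periodic model exchanges but never raw data.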
In Theorem 1, we provide a finite-time characterization of convergence of the weighted ensemble average estimate using an upper bound on a regularity (or dispersion) measure of the local models. This upper bound is influenced by the rate at which local models are exchanged and by the degree of network connectivity. The regularity measure can be made arbitrarily small for a large enough value of the network regularization parameter. In this case, the weighted ensemble average model is also arbitrarily close to any locally estimated model. In Theorem 2, we provide a finite-time characterization of convergence of the weighted ensemble average model error. We show that the rate of convergence is determined by the {\em smallest} strong convexity parameter across all nodes (i.e., $\kappa>0$) and the {\em slowest} data rate (i.e., $\mu>0$). The asymptotic error is increasing in the {\em worst-case} condition number $\frac{\eta}{\kappa}>1$, where $\eta>0$ is the maximum value of the Lipschitz (gradient smoothness) constants associated with each node. The asymptotic error is also increasing in the data rate imbalance, i.e., $\frac{\mu'}{\mu}>1$, where $\mu'>0$ is the {\em fastest} data rate across all nodes. This characterization has no dependence on the dimension of the models. Hence, the characterization remains valid for higher-dimensional models as long as the worst-case condition number is bounded (i.e., $\kappa>0$ is bounded away from zero and $\eta< \infty$). We compare this performance characterization with that of an alternative approach to estimation known as {\em federated} learning (FL) (see, e.g., <|cite_start|> (Reference: Federated Machine Learning: Concept and Applications: Today's AI still faces two major challenges. One is that in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated learning framework, which includes horizontal federated learning, vertical federated learning and federated transfer learning. We provide definitions, architectures and applications for the federated learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.) <|cite_end|> <|cite_start|> (Reference: Federated Learning: Challenges, Methods, and Future Directions: Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized. Training in heterogeneous and potentially massive networks introduces novel challenges that require a fundamental departure from standard approaches for large-scale machine learning, distributed optimization, and privacy-preserving data analysis. In this article, we discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.) <|cite_end|>). In that approach, a single model stored on a shared (centralized) parameter server is updated with locally generated gradient updates.
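For contrast, here is an equally simplified sketch of that FL-style baseline: one shared model to which locally generated gradients are applied as they arrive. The uniform draw of the next node and all constants are illustrative assumptions; unequal data rates would simply bias which node's gradient arrives next.

```python
# Minimal sketch of the federated baseline: a single shared model
# updated by whichever node produces the next local gradient.
# Plain NumPy; setup mirrors the networked sketch above.
import numpy as np

rng = np.random.default_rng(0)
N, p, gamma = 5, 3, 0.05
theta_true = rng.normal(size=p)
theta = rng.normal(size=p)                     # one shared model

for t in range(10000):
    k = rng.integers(N)                        # node whose data arrives next
    x = rng.normal(size=p)
    y = x @ theta_true + (0.1 + 0.5 * k) * rng.normal()  # node-dependent noise
    theta -= gamma * (theta @ x - y) * x       # server applies local gradient
```

Because every node writes into the same parameter vector, a few very noisy nodes contaminate the single estimate, which is the vulnerability quantified in the corollaries below.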
While model updates take place at a faster rate in FL, the proposed networked approach to estimation enables the identification of models with higher precision. This is formalized in two corollaries to Theorem 3. In the first corollary, a large enough network, i.e., one with at least $N>\sqrt{\frac{\mu ^{\prime }\eta }{\mu \kappa }}$ nodes in a connected topology, is shown to asymptotically exhibit higher average model precision. A networked estimation approach is also more robust to heterogeneity in the noise distribution. With increasing disparities in noise variance, the FL approach is more vulnerable to noise. For example, if nodes with faster data rates are also noisier, the identified model estimate will inevitably be noisy. In the second corollary, we show that the networked approach is guaranteed to outperform FL estimates when a measure of heterogeneity in noise variance across nodes (i.e., $\frac{\bar{\sigma}^{2}}{\min \sigma _{k}^{2}}$) exceeds the threshold $\sqrt{\frac{\mu ^{\prime }\eta }{\mu \kappa }}$. These corollaries highlight a trade-off between speed and precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision\footnote{We note that the preliminary version of this study appeared as a conference publication <|cite_start|> (Reference: Distributed Networked Learning with Correlated Data: We consider a distributed estimation method in a setting with heterogeneous streams of correlated data distributed across nodes in a network. In the considered approach, linear models are estimated locally (i.e., with only local data) subject to a network regularization term that penalizes a local model that differs from neighboring models. We analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes). We provide a finite-time characterization of convergence of the weighted ensemble average estimate and compare this result to federated learning, an alternative approach to estimation wherein a single model is updated by locally generated gradient updates. This comparison highlights the trade-off between speed vs precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision. We illustrate the method's general applicability in two examples: estimating a Markov random field using wireless sensor networks and modeling prey escape behavior of flocking birds based on a publicly available dataset.) <|cite_end|>. The model described here incorporates heterogeneity in the speed of data processing across nodes and does not commit to a particular choice of weights for updates based on models received from neighbors, unlike the preliminary model <|cite_start|> (Reference: Distributed Networked Learning with Correlated Data: We consider a distributed estimation method in a setting with heterogeneous streams of correlated data distributed across nodes in a network. In the considered approach, linear models are estimated locally (i.e., with only local data) subject to a network regularization term that penalizes a local model that differs from neighboring models. We analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes).
We provide a finite-time characterization of convergence of the weighted ensemble average estimate and compare this result to federated learning, an alternative approach to estimation wherein a single model is updated by locally generated gradient updates. This comparison highlights the trade-off between speed vs precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision. We illustrate the method's general applicability in two examples: estimating a Markov random field using wireless sensor networks and modeling prey escape behavior of flocking birds based on a publicly available dataset.) <|cite_end|>. We present convergence results (Theorems 1 and 2) similar to <|cite_start|> (Reference: Distributed Networked Learning with Correlated Data: We consider a distributed estimation method in a setting with heterogeneous streams of correlated data distributed across nodes in a network. In the considered approach, linear models are estimated locally (i.e., with only local data) subject to a network regularization term that penalizes a local model that differs from neighboring models. We analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes). We provide a finite-time characterization of convergence of the weighted ensemble average estimate and compare this result to federated learning, an alternative approach to estimation wherein a single model is updated by locally generated gradient updates. This comparison highlights the trade-off between speed vs precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision. We illustrate the method's general applicability in two examples: estimating a Markov random field using wireless sensor networks and modeling prey escape behavior of flocking birds based on a publicly available dataset.) <|cite_end|> with these generalizations. These generalizations provide additional insights into the implications of data rate imbalance combined with the heterogeneity of data discussed above. In addition to the model generalization, we provide an analytical comparison of the method's performance with FL (Theorem 3 and Corollaries 1 and 2).}. This paper is related to several strands of the literature. In a ``divide and conquer'' approach to distributed data (see, e.g., <|cite_start|> (Reference: Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates: We establish optimal convergence rates for a decomposition-based scalable approach to kernel ridge regression. The method is simple to describe: it randomly partitions a dataset of size N into m subsets of equal size, computes an independent kernel ridge regression estimator for each subset, then averages the local solutions into a global predictor. This partitioning leads to a substantial reduction in computation time versus the standard approach of performing kernel ridge regression on all N samples. Our two main theorems establish that despite the computational speed-up, statistical optimality is retained: as long as m is not too large, the partition-based estimator achieves the statistical minimax rate over all estimators using the set of N samples.
As concrete examples, our theory guarantees that the number of processors m may grow nearly linearly for finite-rank kernels and Gaussian kernels and polynomially in N for Sobolev spaces, which in turn allows for substantial reductions in computational cost. We conclude with experiments on both simulated data and a music-prediction task that complement our theoretical results, exhibiting the computational and statistical benefits of our approach.) <|cite_end|>, <|cite_start|> (Reference: A collaborative training algorithm for distributed learning: In this paper, an algorithm is developed for collaboratively training networks of kernel-linear least-squares regression estimators. The algorithm is shown to distributively solve a relaxation of the classical centralized least-squares regression problem. A statistical analysis shows that the generalization error afforded agents by the collaborative training algorithm can be bounded in terms of the relationship between the network topology and the representational capacity of the relevant reproducing kernel Hilbert space. Numerical experiments suggest that the algorithm is effective at reducing noise. The algorithm is relevant to the problem of distributed learning in wireless sensor networks by virtue of its exploitation of local communication. Several new questions for statistical learning theory are proposed.) <|cite_end|> and <|cite_start|> (Reference: Communication-efficient algorithms for statistical optimization: We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error that decays as O(N-1 + (N/m)-2). Whenever m ≤ √N, this guarantee matches the best possible rate achievable by a centralized algorithm with access to all N samples. The second algorithm is a novel method, based on an appropriate form of bootstrap. Requiring only a single round of communication, it has mean-squared error that decays as O(N-1 + (N/m)-3), and so is more robust to the amount of parallelization.) <|cite_end|>), individual nodes implement a particular learning algorithm to fit a model for their assigned data set and {\em upon each machine identifying a model}, an ensemble (or global) model is obtained by averaging individual models. This is similar to {\em ensemble} learning (see, e.g., <|cite_start|> (Reference: Ensemble approaches for regression: A survey: The goal of ensemble regression is to combine several models in order to improve the prediction accuracy in learning problems with a numerical target variable. The process of ensemble learning can be divided into three phases: the generation phase, the pruning phase, and the integration phase. We discuss different approaches to each of these phases that are able to deal with the regression problem, categorizing them in terms of their relevant characteristics and linking them to contributions from different fields. Furthermore, this work makes it possible to identify interesting areas for future research.) <|cite_end|>), which refers to methods that combine different models into a single predictive model. 
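As a toy sketch of this split-then-average pattern (assuming homogeneous IID shards and ordinary least squares as the local learner), note the single post-hoc combination step, in contrast to the continual exchanges of the networked scheme:

```python
# "Divide and conquer" sketch: fit OLS on each shard, then average the
# local estimates once all shards are done. Assumes homogeneous IID data.
import numpy as np

rng = np.random.default_rng(1)
p, m, n = 3, 4, 200                            # dim, machines, samples/machine
theta_true = rng.normal(size=p)
estimates = []
for _ in range(m):
    X = rng.normal(size=(n, p))
    y = X @ theta_true + 0.1 * rng.normal(size=n)
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0])  # local OLS fit

theta_bar = np.mean(estimates, axis=0)         # single synchronized step
```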
For example, bootstrap aggregation (also referred to as ``bagging'') is a popular technique for combining regression models from {\em homogeneously} distributed data.\footnote{A careful selection of weights for computing the average model ensures a reduction of estimation variance along with other desirable properties; see, e.g., <|cite_start|> (Reference: Least squares model averaging: This paper considers the problem of selection of weights for averaging across least squares estimates obtained from a set of models. Existing model average methods are based on exponential Akaike information criterion (AIC) and Bayesian information criterion (BIC) weights. In distinction, this paper proposes selecting the weights by minimizing a Mallows criterion, the latter an estimate of the average squared error from the model average fit. We show that our new Mallows model average (MMA) estimator is asymptotically optimal in the sense of achieving the lowest possible squared error in a class of discrete model average estimators. In a simulation experiment we show that the MMA estimator compares favorably with those based on AIC and BIC weights. The proof of the main result is an application of the work of Li (1987). Copyright The Econometric Society 2007.) <|cite_end|>, <|cite_start|> (Reference: Generalized least squares model averaging: In this article, we propose a method of averaging generalized least squares estimators for linear regression models with heteroskedastic errors. The averaging weights are chosen to minimize Mallows’ Cp-like criterion. We show that the weight vector selected by our method is optimal. It is also shown that this optimality holds even when the variances of the error terms are estimated and the feasible generalized least squares estimators are averaged. The variances can be estimated parametrically or nonparametrically. Monte Carlo simulation results are encouraging. An empirical example illustrates that the proposed method is useful for predicting a measure of firms’ performance.) <|cite_end|>.} While a ``divide and conquer'' approach coupled with a model averaging step can significantly reduce computing time and lower single-machine memory requirements, it relies on a single synchronized step (i.e., computing the ensemble average), which is executed {\em after all} machines have identified a model. In contrast, the approach considered in this paper deals with {\em asynchronous} real-time estimation and regularization for {\em heterogeneous} and {\em correlated} data streams. The considered scheme is related to the literature on consensus optimization (see, e.g., <|cite_start|> (Reference: {Distributed subgradient methods for multi-agent optimization: We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.)
<|cite_end|> <|cite_start|> (Reference: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization: Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is on the computer network and the other is on the shared memory system. We establish an ergodic convergence rate $O(1/\sqrt{K})$ for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by $\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.) <|cite_end|> <|cite_start|> (Reference: Extra: An exact first- order algorithm for decentralized consensus optimization: Recently, there has been growing interest in solving consensus optimization problems in a multiagent network. In this paper, we develop a decentralized algorithm for the consensus optimization problem $\mathrm{minimize}_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum_{i=1}^n f_i(x),$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective. All the agents shall collaboratively find the minimizer while each agent can only communicate with its neighbors. Such a computation scheme avoids a data fusion center or long-distance communication and offers better load balance to the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem. “Exact” means that it can converge to the exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every a...) <|cite_end|>) and the recent work on finding the best common linear model in convex machine learning problems <|cite_start|> (Reference: COLA: Decentralized Linear Learning: Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy. We consider learning of linear classification and regression models, in the setting where the training data is decentralized over many user devices, and the learning algorithm must run on-device, on an arbitrary communication network, without a central coordinator. We propose COLA, a new decentralized training algorithm with strong theoretical guarantees and superior practical performance. Our framework overcomes many limitations of existing methods, and achieves communication efficiency, scalability, elasticity as well as resilience to changes in data and participating devices.) <|cite_end|>. However, as we shall show, the proposed approach cannot be interpreted as being based upon averaging local models as in consensus-based optimization. The algorithms proposed in <|cite_start|> (Reference: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization: Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently.
However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is on the computer network and the other is on the shared memory system. We establish an ergodic convergence rate $O(1/\sqrt{K})$ for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by $\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.) <|cite_end|> and <|cite_start|> (Reference: Extra: An exact first- order algorithm for decentralized consensus optimization: Recently, there has been growing interest in solving consensus optimization problems in a multiagent network. In this paper, we develop a decentralized algorithm for the consensus optimization problem $\mathrm{minimize}_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum_{i=1}^n f_i(x),$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective. All the agents shall collaboratively find the minimizer while each agent can only communicate with its neighbors. Such a computation scheme avoids a data fusion center or long-distance communication and offers better load balance to the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem. “Exact” means that it can converge to the exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every a...) <|cite_end|> are designed for {\em batch} data while our approach deals with {\em streaming} data. For example, in <|cite_start|> (Reference: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization: Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is on the computer network and the other is on the shared memory system. We establish an ergodic convergence rate $O(1/\sqrt{K})$ for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by $\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.) <|cite_end|>, gradient estimation noise is assumed independent and homogeneous, while in our approach, gradient estimation noise is {\em correlated} and {\em heterogeneous}. In addition, the algorithms proposed in <|cite_start|> (Reference: Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization: Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. 
However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is on the computer network and the other is on the shared memory system. We establish an ergodic convergence rate $O(1/\sqrt{K})$ for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by $\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.) <|cite_end|> and <|cite_start|> (Reference: Extra: An exact first- order algorithm for decentralized consensus optimization: Recently, there has been growing interest in solving consensus optimization problems in a multiagent network. In this paper, we develop a decentralized algorithm for the consensus optimization problem $\mathrm{minimize}_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum_{i=1}^n f_i(x),$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective. All the agents shall collaboratively find the minimizer while each agent can only communicate with its neighbors. Such a computation scheme avoids a data fusion center or long-distance communication and offers better load balance to the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem. “Exact” means that it can converge to the exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every a...) <|cite_end|>, every node is {\em equally likely} to be selected at each iteration to update its local model. In contrast, in our approach, data streams are heterogeneous so that certain nodes have faster data streams and thus are more likely to update their models at any point in time. A network regularization penalty for networked learning has been analyzed in a series of papers by <|cite_start|> (Reference: Multitask Diffusion Adaptation over Networks: Adaptive networks are suitable for decentralized inference tasks, e.g., to monitor complex natural phenomena. Recent research works have intensively studied distributed optimization problems in the case where the nodes have to estimate a single optimum parameter vector collaboratively. However, there are many important applications that are multitask-oriented in the sense that there are multiple optimum parameter vectors to be inferred simultaneously, in a collaborative manner, over the area covered by the network. In this paper, we employ diffusion strategies to develop distributed algorithms that address multitask problems by minimizing an appropriate mean-square error criterion with $\ell_2$-regularization. The stability and convergence of the algorithm in the mean and in the mean-square sense is analyzed. Simulations are conducted to verify the theoretical findings, and to illustrate how the distributed strategy can be used in several useful applications related to spectral sensing, target localization, and hyperspectral data unmixing.) 
<|cite_end|> <|cite_start|> (Reference: Multitask diffusion adaptation over asynchronous networks: The multitask diffusion LMS is an efficient strategy to simultaneously infer, in a collaborative manner, multiple parameter vectors. Existing works on multitask problems assume that all agents respond to data synchronously. In several applications, agents may not be able to act synchronously because networks can be subject to several sources of uncertainties such as changing topology, random link failures, or agents turning on and off for energy conservation. In this work, we describe a model for the solution of multitask problems over asynchronous networks and carry out a detailed mean and mean-square error analysis. Results show that sufficiently small step-sizes can still ensure both stability and performance. Simulations and illustrative examples are provided to verify the theoretical findings. The framework is applied to a particular application involving spectral sensing.) <|cite_end|> <|cite_start|> (Reference: Learning over multitask graphs—Part I: Stability analysis: This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. The smoothness condition softens the transition in the tasks among adjacent nodes and allows incorporating information about the graph structure into the solution of the inference problem. A diffusion strategy is devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relies on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We show in this Part I of the work, under conditions on the step-size parameter, that the adaptive strategy induces a contraction mapping and leads to small estimation errors on the order of the small step-size. The results in the accompanying Part II will reveal explicitly the influence of the network topology and the regularization strength on the network performance and will provide insights into the design of effective multitask strategies for distributed inference over networks.) <|cite_end|> <|cite_start|> (Reference: Learning over multitask graphs—Part II: Performance analysis: Part I of this paper formulated a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. A diffusion strategy was devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relied on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We examined the first-order, the second-order, and the fourth-order stability of the multitask learning algorithm. The results identified conditions on the step-size parameter, regularization strength, and data characteristics in order to ensure stability. This Part II examines steady-state performance of the strategy. The results reveal explicitly the influence of the network topology and the regularization strength on the network performance and provide insights into the design of effective multitask strategies for distributed inference over networks.) 
<|cite_end|> <|cite_start|> (Reference: Distributed weighted least-squares estimation with fast convergence for large-scale systems: ) <|cite_end|> <|cite_start|> (Reference: Distributed Networked Real-time Learning: Many machine learning algorithms have been developed under the assumption that datasets are already available in batch form. Yet, in many application domains, data are only available sequentially overtime via compute nodes in different geographic locations. In this article, we consider the problem of learning a model when streaming data cannot be transferred to a single location in a timely fashion. In such cases, a distributed architecture for learning which relies on a network of interconnected “local” nodes is required. We propose a distributed scheme in which every local node implements stochastic gradient updates based upon a local data stream. To ensure robust estimation, a network regularization penalty is used to maintain a measure of cohesion in the ensemble of models. We show that the ensemble average approximates a stationary point and characterizes the degree to which individual models differ from the ensemble average. We compare the results with federated learning to conclude that the proposed approach is more robust to heterogeneity in data streams (data rates and estimation quality). We illustrate the results with an application to image classification with a deep learning model based upon convolutional neural networks.) <|cite_end|> <|cite_start|> (Reference: Distributed Linear Parameter Estimation: Asymptotically Efficient Adaptive Strategies: The paper considers the problem of distributed adaptive linear parameter estimation in multi-agent inference networks. Local sensing model information is only partially available at the agents and inter-agent communication is assumed to be unpredictable. The paper develops a generic mixed time-scale stochastic procedure consisting of simultaneous distributed learning and estimation, in which the agents adaptively assess their relative observation quality over time and fuse the innovations accordingly. Under rather weak assumptions on the statistical model and the inter-agent communication, it is shown that, by properly tuning the consensus potential with respect to the innovation potential, the asymptotic information rate loss incurred in the learning process may be made negligible. As such, it is shown that the agent estimates are asymptotically efficient, in that their asymptotic covariance coincides with that of a centralized estimator (the inverse of the centralized Fisher information rate for Gaussian systems) with perfect global model information and having access to all observations at all times. The proof techniques are mainly based on convergence arguments for non-Markovian mixed time scale stochastic approximation procedures. Several approximation results developed in the process are of independent interest.) <|cite_end|>. In contrast to these papers, we consider a setting with heterogeneous nodes with {\em correlated} data streams {\em asynchronously} updating their respective models at {\em different rates} over time. 
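The rate-heterogeneous asynchronous updating just described can be emulated with a simple event-driven loop in which the next node to fire is drawn in proportion to its data rate (equivalently, each node updates at the jumps of an independent Poisson clock). A sketch with illustrative rates and a hypothetical ring graph:

```python
# Event-driven sketch of asynchronous updates at heterogeneous rates:
# node k fires with probability proportional to its data rate mu[k].
# Rates, graph, and constants are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
N, p = 4, 3
mu = np.array([1.0, 2.0, 4.0, 8.0])            # heterogeneous data rates
neighbors = {k: [(k - 1) % N, (k + 1) % N] for k in range(N)}
theta_true = rng.normal(size=p)
theta = [rng.normal(size=p) for _ in range(N)]
gamma, lam = 0.02, 1.0

for event in range(20000):
    k = rng.choice(N, p=mu / mu.sum())         # faster nodes update more often
    x = rng.normal(size=p)
    y = x @ theta_true + 0.3 * rng.normal()
    grad = (theta[k] @ x - y) * x \
         + lam * sum(theta[k] - theta[j] for j in neighbors[k])
    theta[k] = theta[k] - gamma * grad
```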
Finally, the paper is related to the literature on distributed algorithms to solve linear algebraic equations (such as those associated with generalized least squares) over multi-agent networks (see, e.g., <|cite_start|> (Reference: A Distributed Algorithm for Solving a Linear Algebraic Equation: A distributed algorithm is described for solving a linear algebraic equation of the form $Ax=b$ assuming the equation has at least one solution. The equation is simultaneously solved by $m$ agents assuming each agent knows only a subset of the rows of the partitioned matrix $(A,b)$, the current estimates of the equation's solution generated by its neighbors, and nothing more. Each agent recursively updates its estimate by utilizing the current estimates generated by each of its neighbors. Neighbor relations are characterized by a time-dependent directed graph $\mathbb{N}(t)$ whose vertices correspond to agents and whose arcs depict neighbor relations. It is shown that for any matrix $A$ for which the equation has a solution and any sequence of "repeatedly jointly strongly connected graphs" $\mathbb{N}(t)$, $t=1,2,\ldots$, the algorithm causes all agents' estimates to converge exponentially fast to the same solution to $Ax=b$. It is also shown that the neighbor graph sequence must actually be repeatedly jointly strongly connected if exponential convergence is to be assured. A worst case convergence rate bound is derived for the case when $Ax=b$ has a unique solution. It is demonstrated that with minor modification, the algorithm can track the solution to $Ax = b$, even if $A$ and $b$ are changing with time, provided the rates of change of $A$ and $b$ are sufficiently small. It is also shown that in the absence of communication delays, exponential convergence to a solution occurs even if the times at which each agent updates its estimates are not synchronized with the update times of its neighbors. A modification of the algorithm is outlined which enables it to obtain a least squares solution to $Ax=b$ in a distributed manner, even if $Ax=b$ does not have a solution.) <|cite_end|> <|cite_start|> (Reference: Asynchronous distributed algorithms for solving linear algebraic equations: Two asynchronous distributed algorithms are presented for solving a linear equation of the form $Ax=b$ with at least one solution. The equation is simultaneously and asynchronously solved by $m$ agents assuming that each agent knows only a subset of the rows of the partitioned matrix $[A\ \ b]$, the estimates of the equation's solution generated by its neighbors, and nothing more. Neighbor relationships are characterized by a time-dependent directed graph whose vertices correspond to agents and whose arcs depict neighbor relationships. Each agent recursively updates its estimate of a solution at its own event times by utilizing estimates generated by its neighbors which are transmitted with delays. The event time sequences of different agents are not assumed to be synchronized.
It is shown that for any matrix-vector pair $(A, b)$ for which the equation has a solution and any repeatedly jointly strongly connected sequence of neighbor graphs defined on the merged sequence of all agents’ event times, the algorithms cause all agents’ estimates to converge exponentially fast to the same solution to $Ax=b$. The first algorithm requires a specific initialization step at each agent, and the second algorithm works for arbitrary initializations. Explicit expressions for convergence rates are provided, and a relation between local initializations and limiting consensus solutions is established, which is used to solve the least 2-norm solution.) <|cite_end|> <|cite_start|> (Reference: A Distributed Algorithm for Least Squares Solutions: In this technical note, a distributed algorithm is proposed for multiagent networks to achieve a least squares solution of a system of linear equations, in which each agent only knows part of the overall equations and communicates only with its nearby neighbors. The proposed algorithm is discrete time but does not involve small or time-varying step sizes. Given that the network is fixed, connected, and undirected, the proposed algorithm enables all agents in the network to achieve exponentially fast the same least squares solution; this is validated by simulations.) <|cite_end|>). However, unlike these papers, here we examine the consequences of heterogeneous and correlated noise in distributed generalized least squares estimation. The contributions of this paper are as follows. We develop a distributed estimation scheme that accounts for heterogeneous and correlated distributed datasets and heterogeneity in data processing speed across nodes. We provide a finite-time characterization of convergence of the weighted ensemble average that captures the performance gap between the centralized and weighted average ensemble models as a function of the data heterogeneity and speed imbalance (Sections 3.1--3.3). Via a similar finite-time characterization of the FL performance, we show that distributed estimation with network regularization outperforms FL when the number of nodes or the noise variance across nodes is large (Section 3.4). We demonstrate the relatively poor performance of FL when some sensors have access to highly noisy data in wireless sensor network (WSN) estimation of a Gaussian Markov random field (MRF) (Section 4.1). We also show the method's performance on a real dataset using weights for the local models proportional to the inverse of the locally estimated noise variance (Section 4.2). <|paper_end|>
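The weighting used in the last experiment (Section 4.2) amounts to inverse-variance weighting of the local models. A small sketch, where the models and the locally estimated noise variances are placeholder values standing in for the quantities each node would hold:

```python
# Inverse-variance weighting sketch: combine local models with weights
# proportional to 1 / (locally estimated noise variance). All numbers
# below are placeholders, not values from the paper.
import numpy as np

theta_local = np.array([[1.02, -0.49],
                        [0.95, -0.55],
                        [1.60, -0.10]])        # one model per node
sigma2_hat = np.array([0.1, 0.2, 5.0])         # local noise-variance estimates

w = 1.0 / sigma2_hat
w /= w.sum()                                   # normalize weights to one
theta_weighted = w @ theta_local               # down-weights the noisy node
```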
[ "<|reference_start|> Distributed Networked Learning with Correlated Data: We consider a distributed estimation method in a setting with heterogeneous streams of correlated data distributed across nodes in a network. In the considered approach, linear models are estimated locally (i.e., with only local data) subject to a network regularization term that penalizes a local model that differs from neighboring models. We analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes). We provide a finite-time characterization of convergence of the weighted ensemble average estimate and compare this result to federated learning, an alternative approach to estimation wherein a single model is updated by locally generated gradient updates. This comparison highlights the trade-off between speed vs precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision. We illustrate the method's general applicability in two examples: estimating a Markov random field using wireless sensor networks and modeling prey escape behavior of flocking birds based on a publicly available dataset. <|reference_end|>", "<|reference_start|> Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization: Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is on the computer network and the other is on the shared memory system. We establish an ergodic convergence rate $O(1/\\sqrt{K})$ for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by $\\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve existing analysis for convex minimization. <|reference_end|>", "<|reference_start|> Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization: Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is on the computer network and the other is on the shared memory system. We establish an ergodic convergence rate $O(1/\\sqrt{K})$ for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by $\\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve existing analysis for convex minimization. <|reference_end|>", "<|reference_start|> Extra: An exact first- order algorithm for decentralized consensus optimization: Recently, there has been growing interest in solving consensus optimization problems in a multiagent network. 
In this paper, we develop a decentralized algorithm for the consensus optimization problem $\\mathrm{minimize}_{x\\in\\mathbb{R}^p}~\\bar{f}(x)=\\frac{1}{n}\\sum_{i=1}^n f_i(x),$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective. All the agents shall collaboratively find the minimizer while each agent can only communicate with its neighbors. Such a computation scheme avoids a data fusion center or long-distance communication and offers better load balance to the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem. “Exact” means that it can converge to the exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every a... <|reference_end|>" ]
[ 4, 12, 18, 19 ]
{"<|multi_cite_1_1|>": "arxiv-191390", "<|multi_cite_1_2|>": "arxiv-219727", "<|cite_2|>": "arxiv-231229", "<|cite_3|>": "arxiv-231229", "<|cite_4|>": "arxiv-231229", "<|cite_5|>": "arxiv-46055", "<|cite_6|>": "ss-1102656", "<|cite_7|>": "ss-1284347", "<|cite_8|>": "ss-1381875", "<|cite_9|>": "ss-1271194", "<|cite_10|>": "ss-1927142", "<|multi_cite_11_1|>": "ss-1262497", "<|multi_cite_11_2|>": "arxiv-80089", "<|multi_cite_11_3|>": "ss-1232572", "<|cite_12|>": "arxiv-169244", "<|cite_13|>": "arxiv-80089", "<|cite_14|>": "ss-1232572", "<|cite_15|>": "arxiv-80089", "<|cite_16|>": "arxiv-80089", "<|cite_17|>": "ss-1232572", "<|multi_cite_18_1|>": "arxiv-52944", "<|multi_cite_18_2|>": "arxiv-69717", "<|multi_cite_18_3|>": "ss-2020500", "<|multi_cite_18_4|>": "ss-2020501", "<|multi_cite_18_5|>": "ss-2276627", "<|multi_cite_18_6|>": "ss-2020502", "<|multi_cite_18_7|>": "arxiv-24819", "<|multi_cite_19_1|>": "arxiv-73971", "<|multi_cite_19_2|>": "ss-1939488", "<|multi_cite_19_3|>": "ss-2020503"}
2106.01693
<|paper_start|> Title: Bridging the Multiscale Hybrid-Mixed and Multiscale Hybrid High-Order methods Abstract: Bridging the Multiscale Hybrid-Mixed and Multiscale Hybrid High-Order methods: We establish the equivalence between the Multiscale Hybrid-Mixed (MHM) and the Multiscale Hybrid High-Order (MsHHO) methods for a variable diffusion problem with piecewise polynomial source term. Under the idealized assumption that the local problems defining the multiscale basis functions are exactly solved, we prove that the equivalence holds for general polytopal (coarse) meshes and arbitrary approximation orders. We also leverage the interchange of properties to perform a unified convergence analysis, as well as to improve on both methods. Introduction \label{intro} The tremendous development of massively parallel architectures in the last decade has led to a revision of what is expected from computational simulators, which must embed asynchronous and communication-avoiding algorithms. In such a scenario, where precision and robustness remain fundamental requirements but algorithms must take full advantage of the new architectures, numerical methods built upon the ``divide-and-conquer'' philosophy fulfill these requirements better than standard methods operating in a monolithic fashion on the different scales of the problem at hand. Among the vast literature on the subject, driven by domain decomposition methodologies (see, e.g., <|cite_start|> (Reference: Domain decomposition methods - algorithms and theory: The purpose of this text is to offer a comprehensive and self-contained presentation of some of the most successful and popular domain decomposition preconditioners for finite and spectral element approximations of partial differential equations. Strong emphasis is placed on both algorithmic and mathematical aspects. Some important methods such as FETI and balancing Neumann-Neumann methods and algorithms for spectral element methods, not treated previously in any monograph, are covered in detail. Winner of the 2005 Award for Excellence in Professional and Scholarly Publishing - Mathematics/Statistics - of the Association of American Publishers) <|cite_end|> for a survey), multiscale numerical methods emerge as an attractive option to efficiently handle problems with highly heterogeneous coefficients, as well as multi-query scenarios in which the problem solution must be computed for a large number of source terms. These scenarios may arise when considering highly oscillatory, nonlinear, time-dependent models, or within optimization algorithms when solving problems featuring PDE-based constraints, or in models including stochastic processes, to cite a few. The development of multiscale methods started with the seminal work <|cite_start|> (Reference: Generalized finite element methods: their performance and their relation to mixed methods: The notion of a generalized finite element method is introduced. This class of methods is analyzed and their relation to mixed methods is discussed. The class of generalized finite element methods offers a wide variety of computational procedures from which particular procedures can be selected for particular problems. A particular generalized finite element method which is very effective for problems with rough coefficients is discussed in detail.) <|cite_end|>.
Important advances were then provided in <|cite_start|> (Reference: Multiscale phenomena: Green's functions, the Dirichlet-to-Neumann formulation, subgrid scale models, bubbles and the origins of stabilized methods: ) <|cite_end|> <|cite_start|> (Reference: The variational multiscale method—a paradigm for computational mechanics: ) <|cite_end|> (cf.~also <|cite_start|> (Reference: A relationship between stabilized finite element methods and the Galerkin method with bubble functions: ) <|cite_end|> <|cite_start|> (Reference: CHOOSING BUBBLES FOR ADVECTION-DIFFUSION PROBLEMS: ) <|cite_end|>, and the unifying viewpoint of <|cite_start|> (Reference: $b=\int g$: "The language of truth is simple" (Euripides). Abstract: In this paper we show the equivalence between the variational multiscale and the residual-free bubbles concepts.) <|cite_end|>) and in <|cite_start|> (Reference: A Multiscale Finite Element Method for Elliptic Problems in Composite Materials and Porous Media: In this paper, we study a multiscale finite element method for solving a class of elliptic problems arising from composite materials and flows in porous media, which contain many spatial scales. The method is designed to efficiently capture the large scale behavior of the solution without resolving all the small scale features. This is accomplished by constructing the multiscale finite element base functions that are adaptive to the local property of the differential operator. Our method is applicable to general multiple-scale problems without restrictive assumptions. The construction of the base functions is fully decoupled from element to element; thus, the method is perfectly parallel and is naturally adapted to massively parallel computers. For the same reason, the method has the ability to handle extremely large degrees of freedom due to highly heterogeneous media, which are intractable by conventional finite element (difference) methods. In contrast to some empirical numerical upscaling methods, the multiscale method is systematic and self-consistent, which makes it easier to analyze. We give a brief analysis of the method, with emphasis on the “resonant sampling” effect. Then, we propose an oversampling technique to remove the resonance effect. We demonstrate the accuracy and efficiency of our method through extensive numerical experiments, which include problems with random coefficients and problems with continuous scales. Parallel implementation and performance of the method are also addressed.) <|cite_end|> <|cite_start|> (Reference: Convergence of a multiscale finite element method for elliptic problems with rapidly oscillating coefficients: We propose a multiscale finite element method for solving second order elliptic equations with rapidly oscillating coefficients. The main purpose is to design a numerical method which is capable of correctly capturing the large scale components of the solution on a coarse grid without accurately resolving all the small scale features in the solution. This is accomplished by incorporating the local microstructures of the differential operator into the finite element base functions. As a consequence, the base functions are adapted to the local properties of the differential operator. In this paper, we provide a detailed convergence analysis of our method under the assumption that the oscillating coefficient is of two scales and is periodic in the fast scale.
While such a simplifying assumption is not required by our method, it allows us to use homogenization theory to obtain a useful asymptotic solution structure. The issue of boundary conditions for the base functions will be discussed. Our numerical experiments demonstrate convincingly that our multiscale method indeed converges to the correct solution, independently of the small scale in the homogenization limit. Application of our method to problems with continuous scales is also considered.) <|cite_end|>, laying the groundwork, respectively, for the Variational Multiscale method, and for the Multiscale Finite Element (MsFE) method. Overall, the common idea behind these multiscale methods is to employ basis functions specially designed to upscale the sub-mesh variations of the model to an overlying coarse mesh. Particularly appealing is the fact that the multiscale basis functions are defined by entirely independent problems. From this viewpoint, multiscale numerical methods may also be seen as a (non-iterative) domain decomposition technique <|cite_start|> (Reference: A-posteriori-steered p-robust multigrid and domain decomposition methods with optimal step-sizes for mixed finite element discretizations of elliptic problems: In this work, we develop algebraic solvers for linear systems arising from the discretization of second-order elliptic problems by saddle-point mixed finite element methods of arbitrary polynomial degree $p \ge 0$. We present a multigrid and a two-level domain decomposition approach in two or three space dimensions, which are steered by their respective a posteriori estimators of the algebraic error. First, we extend the results of [A. Mira\c{c}i, J. Pape\v{z}, and M. Vohral\'ik, SIAM J. Sci. Comput. 43 (2021), S117--S145] to the mixed finite element setting. Extending the multigrid procedure itself is rather natural. To obtain analogous theoretical results, however, a multilevel stable decomposition of the velocity space is needed. In two space dimensions, we can treat the velocity space as the curl of a stream-function space, for which the previous results apply. In three space dimensions, we design a novel stable decomposition by combining a one-level high-order local stable decomposition of [Chaumont-Frelet and Vohral\'ik, SIAM J. Numer. Anal. 61 (2023), 1783--1818] and a multilevel lowest-order stable decomposition of [Hiptmair, Wu, and Zheng, Numer. Math. Theory Methods Appl. 5 (2012), 297--332]. This allows us to prove that our multigrid solver contracts the algebraic error at each iteration and, simultaneously, that the associated a posteriori estimator is efficient. A $p$-robust contraction is shown in two space dimensions. Next, we use this multilevel methodology to define a two-level domain decomposition method where the subdomains consist of overlapping patches of coarse-level elements sharing a common coarse-level vertex. We again establish a ($p$-robust) contraction of the solver and efficiency of the a posteriori estimator. Numerical results presented both for the multigrid approach and the domain decomposition method confirm the theoretical findings.) <|cite_end|>. Since the pioneering works on multiscale methods, a large number of improvements and new approaches have been proposed. In the MsFE context (see <|cite_start|> (Reference: Multiscale Finite Element Methods - Theory and Applications: This expository book surveys the main concepts and recent advances in multiscale finite element methods.
This monograph is intended for the broader audiences including engineers, applied scientists and those who are interested in multiscale simulations. The book is self-contained, starts from the basic concepts and proceeds to the latest developments in the field. Each chapter of the book starts with a simple introduction and the description of the proposed methods as well as with motivating examples. Numerical examples demonstrating the significance of the proposed methods are presented in each chapter. The book addresses mathematical and numerical issues in multiscale finite element methods and connects them to real-world applications. Narrative introduction provides a key to the book's organization and its scope. To make the presentation accessible to a broader audience, the analyses of the methods are given in the last chapter.) <|cite_end|> for a survey), one can cite the oversampling technique of <|cite_start|> (Reference: Convergence of a nonconforming multiscale finite element method: The multiscale finite element method (MsFEM) [T. Y. Hou, X. H. Wu, and Z. Cai, Math. Comp., 1998, to appear; T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189] has been introduced to capture the large scale solutions of elliptic equations with highly oscillatory coefficients. This is accomplished by constructing the multiscale base functions from the local solutions of the elliptic operator. Our previous study reveals that the leading order error in this approach is caused by the ``resonant sampling,'' which leads to large error when the mesh size is close to the small scale of the continuous problem. Similar difficulty also arises in numerical upscaling methods. An oversampling technique has been introduced to alleviate this difficulty [T. Y. Hou and X. H. Wu, J. Comput. Phys., 134 (1997), pp. 169--189]. A consequence of the oversampling method is that the resulting finite element method is no longer conforming. Here we give a detailed analysis of the nonconforming error. Our analysis also reveals a new cell resonance error which is caused by the mismatch between the mesh size and the wavelength of the small scale. We show that the cell resonance error is of lower order. Our numerical experiments demonstrate that the cell resonance error is generically small and is difficult to observe in practice.) <|cite_end|>, as well as the Petrov--Galerkin variant of <|cite_start|> (Reference: Removing the cell resonance error in the multiscale finite element method via a Petrov-Galerkin formulation: We continue the study of the nonconforming multiscale finite element method (MsFEM) introduced in [17, 14] for second order elliptic equations with highly oscillatory coefficients. The main difficulty in MsFEM, as well as other numerical upscaling methods, is the scale resonance effect. It has been shown that the leading order resonance error can be effectively removed by using an over-sampling technique. Nonetheless, there is still a secondary cell resonance error of $O(\epsilon^2/h^2)$. Here, we introduce a Petrov-Galerkin MsFEM formulation with nonconforming multiscale trial functions and linear test functions. We show that the cell resonance error is eliminated in this formulation and hence the convergence rate is greatly improved. Moreover, we show that a similar formulation can be used to enhance the convergence of an immersed-interface finite element method for elliptic interface problems.
<|cite_end|> (see also <|cite_start|> (Reference: Stabilization arising from PGEM: A review and further developments: ) <|cite_end|>), or the high-order method of <|cite_start|> (Reference: A multiscale finite element method for numerical homogenization: This paper is concerned with a multiscale finite element method for numerically solving second-order scalar elliptic boundary value problems with highly oscillating coefficients. In the spirit of previous other works, our method is based on the coupling of a coarse global mesh and a fine local mesh, the latter being used for computing independently an adapted finite element basis for the coarse mesh. The main idea is the introduction of a composition rule, or change of variables, for the construction of this finite element basis. In particular, this allows for a simple treatment of high-order finite element methods. We provide optimal error estimates in the case of periodically oscillating coefficients. We illustrate our method in various examples.) <|cite_end|> (see also <|cite_start|> (Reference: High-order multiscale finite element method for elliptic problems: In this paper, a new high-order multiscale finite element method (MsFEM) is developed for elliptic problems with highly oscillating coefficients. The method is inspired by the MsFEM developed in [G. Allaire and R. Brizzi, Multiscale Model. Simul., 4 (2005), pp. 790--812], but a more explicit multiscale finite element space is constructed. The approximation space is nonconforming when an oversampling technique is used. We use a Petrov--Galerkin formulation suggested in [T. Y. Hou, X.-H. Wu, and Y. Zhang, Commun. Math. Sci., 2 (2004), pp. 185--205] to simplify the implementation and to improve the accuracy. The method is natural for high-order finite element methods used with the advantage of solving the coarse grained problem. We prove optimal error estimates in the case of periodically oscillating coefficients and support the findings by various numerical experiments.) <|cite_end|>). More recent research directions focus on reducing and possibly eliminating the cell resonance error. In this vein, one can cite the Generalized MsFE method, or the Local Orthogonal Decomposition approach <|cite_start|> (Reference: Oversampling for the Multiscale Finite Element Method: This paper reviews standard oversampling strategies as performed in the multiscale finite element method (MsFEM). Common to those approaches is that the oversampling is performed in the full space ...) <|cite_end|> <|cite_start|> (Reference: Localization of Elliptic Multiscale Problems: This paper constructs a local generalized finite element basis for elliptic problems with heterogeneous and highly varying coefficients. The basis functions are solutions of local problems on vertex patches. The error of the corresponding generalized finite element method decays exponentially with respect to the number of layers of elements in the patches. Hence, on a uniform mesh of size $ H$, patches of diameter $ H\log (1/H)$ are sufficient to preserve a linear rate of convergence in $ H$ without pre-asymptotic or resonance effects. The analysis does not rely on regularity of the solution or scale separation in the coefficient. This result motivates new and justifies old classes of variational multiscale methods.) <|cite_end|>.
Hybridization has also been investigated in the pioneering work <|cite_start|> (Reference: A Multiscale Mortar Mixed Finite Element Method: We develop multiscale mortar mixed finite element discretizations for second order elliptic equations. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. The polynomial degree of the mortar and subdomain approximation spaces may differ; in fact, the mortar space achieves approximation comparable to the fine scale on its coarse grid by using higher order polynomials. Our formulation is related to, but more flexible than, existing multiscale finite element and variational multiscale methods. We derive a priori error estimates and show, with appropriate choice of the mortar space, optimal order convergence and some superconvergence on the fine scale for both the solution and its flux. We also derive efficient and reliable a posteriori error estimators, which are used in an adaptive mesh refinement algorithm to obtain appropriate subdomain and mortar grids. Numerical experi...) <|cite_end|> on multiscale mortar mixed finite element methods (see also the multiscale mortar multipoint flux mixed finite element method of <|cite_start|> (Reference: A multiscale mortar multipoint flux mixed finite element method: In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite element method that reduces to cell-centered finite differences on irregular grids. The subdomain grids do not have to match across the interfaces. Continuity of flux between coarse elements is imposed via a mortar finite element space on a coarse grid scale. With an appropriate choice of polynomial degree of the mortar space, we derive optimal order convergence on the fine scale for both the multiscale pressure and velocity, as well as the coarse scale mortar pressure. Some superconvergence results are also derived. The algebraic system is reduced via a non-overlapping domain decomposition to a coarse scale mortar interface problem that is solved using a multiscale flux basis. Numerical experiments are presented to confirm the theory and illustrate the efficiency and flexibility of the method.) <|cite_end|>). These ideas have been adapted later on in the context of (multiscale) Discontinuous Galerkin methods, leading to the Multiscale Hybridizable Discontinuous Galerkin (MsHDG) method of <|cite_start|> (Reference: A multiscale HDG method for second order elliptic equations. Part I: Polynomial and homogenization-based multiscale spaces: We introduce a finite element method for numerical upscaling of second order elliptic equations with highly heterogeneous coefficients. The method is based on a mixed formulation of the problem and the concepts of the domain decomposition and the hybrid discontinuous Galerkin methods. The method utilizes three different scales: (1) the scale of the partition of the domain of the problem, (2) the scale of partition of the boundaries of the subdomains (related to the corresponding space of Lagrange multipliers), and (3) the fine-grid scale that is assumed to resolve the scale of the heterogeneous variation of the coefficients. 
Our proposed method gives a flexible framework that (1) couples independently generated multiscale basis functions in each coarse patch, (2) provides a stable global coupling independent of local discretization, physical scales, and contrast, and (3) allows avoiding any constraints [Arbogast et al., Multiscale Model. Simul., 6 (2007), pp. 319--346] on coarse spaces. In this paper, we ...) <|cite_end|> (cf.~also the multiscale Weak Galerkin method of <|cite_start|> (Reference: A weak Galerkin generalized multiscale finite element method: ) <|cite_end|>, devised along the same principles in the spirit of the Generalized MsFE method). Interestingly, this latter approach makes it possible to relax the constraints between the mortar space and the polynomial spaces used in the mesh cells. Recently, two families of hybrid multiscale numerical methods that are applicable on general meshes have been proposed, namely the Multiscale Hybrid-Mixed (MHM) and the Multiscale Hybrid High-Order (MsHHO) methods. The MHM method was first introduced in <|cite_start|> (Reference: A family of Multiscale Hybrid-Mixed finite element methods for the Darcy equation with rough coefficients: ) <|cite_end|>, and further analyzed in <|cite_start|> (Reference: Multiscale Hybrid-Mixed Method: This work presents a priori and a posteriori error analyses of a new multiscale hybrid-mixed method (MHM) for an elliptic model. Specially designed to incorporate multiple scales into the construction of basis functions, this finite element method relaxes the continuity of the primal variable through the action of Lagrange multipliers, while assuring the strong continuity of the normal component of the flux (dual variable). As a result, the dual variable, which stems from a simple postprocessing of the primal variable, preserves local conservation. We prove existence and uniqueness of a solution for the MHM method as well as optimal convergence estimates of any order in the natural norms. Also, we propose a face-residual a posteriori error estimator, and prove that it controls the error of both variables in the natural norms. Several numerical tests assess the theoretical results.) <|cite_end|> <|cite_start|> (Reference: On the robustness of multiscale hybrid-mixed methods: In this work we prove uniform convergence of the Multiscale Hybrid-Mixed (MHM for short) finite element method for second-order elliptic problems with rough periodic coefficients. The MHM method is shown to avoid resonance errors without adopting oversampling techniques. In particular, we establish that the discretization errors for the primal variable in the broken $H^1$ and $L^2$ norms are $O(h + \varepsilon^{\delta})$ and $O(h^2 + h\varepsilon^{\delta})$, respectively, and for the dual variable it is $O(h + \varepsilon^{\delta})$ in the $H(\mathrm{div};\cdot)$ norm, where $0 < \delta \leq 1/2$ (depending on regularity). Such results rely on sharpened asymptotic expansion error estimates for the elliptic models with prescribed Dirichlet, Neumann or mixed boundary conditions.) <|cite_end|> <|cite_start|> (Reference: The multiscale hybrid mixed method in general polygonal meshes: ) <|cite_end|> (see also <|cite_start|> (Reference: Foundations of the MHM Method: ) <|cite_end|> for an abstract setting), whereas the MsHHO method was proposed in <|cite_start|> (Reference: A hybrid high-order method for highly oscillatory elliptic problems: We devise a Hybrid High-Order (HHO) method for highly oscillatory elliptic problems that is capable of handling general meshes.
The method hinges on discrete unknowns that are polynomials attached to the faces and cells of a coarse mesh; those attached to the cells can be eliminated locally using static condensation. The main building ingredient is a reconstruction operator, local to each coarse cell, that maps onto a fine-scale space spanned by oscillatory basis functions. The present HHO method generalizes the ideas of some existing multiscale approaches, while providing the first complete analysis on general meshes. It also improves on those methods, taking advantage of the flexibility granted by the HHO framework. The method handles arbitrary orders of approximation $k\geq 0$. For face unknowns that are polynomials of degree $k$, we devise two versions of the method, depending on the polynomial degree $(k-1)$ or $k$ of the cell unknowns. We prove, in the case of periodic coefficients, an energy-error estimate of the form $(\varepsilon^{\frac{1}{2}}+H^{k+1}+(\frac{\varepsilon}{H})^{\frac{1}{2}})$, and we illustrate our theoretical findings on some test-cases.) <|cite_end|> <|cite_start|> (Reference: On the Implementation of a Multiscale Hybrid High-Order Method: ) <|cite_end|>, as an extension of the HHO method first introduced in <|cite_start|> (Reference: An Arbitrary-Order and Compact-Stencil Discretization of Diffusion on General Meshes Based on Local Reconstruction Operators: We develop an arbitrary-order primal method for diffusion problems on general polyhedral meshes. The degrees of freedom are scalar-valued polynomials of the same order at mesh elements and faces. The cornerstone of the method is a local (elementwise) discrete gradient reconstruction operator. The design of the method additionally hinges on a least-squares penalty term on faces weakly enforcing the matching between local element- and face-based degrees of freedom. The scheme is proved to optimally converge in the energy norm and in the $L^2$-norm of the potential for smooth solutions. In the lowest-order case, equivalence with the Hybrid Finite Volume method is shown. The theoretical results are confirmed by numerical experiments up to order 4 on several polygonal meshes.) <|cite_end|> (cf.~also <|cite_start|> (Reference: A Review of Hybrid High-Order Methods: Formulations, Computational Aspects, Comparison with Other Methods: ) <|cite_end|>). The MHM method relates to the mixed multiscale finite element method proposed in <|cite_start|> (Reference: A mixed multiscale finite element method for elliptic problems with oscillating coefficients: The recently introduced multiscale finite element method for solving elliptic equations with oscillating coefficients is designed to capture the large-scale structure of the solutions without resolving all the fine-scale structures. Motivated by the numerical simulation of flow transport in highly heterogeneous porous media, we propose a mixed multiscale finite element method with an over-sampling technique for solving second order elliptic equations with rapidly oscillating coefficients. The multiscale finite element bases are constructed by locally solving Neumann boundary value problems. We provide a detailed convergence analysis of the method under the assumption that the oscillating coefficients are locally periodic. While such a simplifying assumption is not required by our method, it allows us to use homogenization theory to obtain the asymptotic structure of the solutions.
Numerical experiments are carried out for flow transport in a porous medium with a random log-normal relative permeability to demonstrate the efficiency and accuracy of the proposed method.) <|cite_end|>, as well as to the subgrid upscaling method of <|cite_start|> (Reference: Subgrid upscaling and mixed multiscale finite elements: Second order elliptic problems in divergence form with a highly varying leading order coefficient on the scale $\epsilon$ can be approximated on coarse meshes of spacing $H \gg \epsilon$ only if one uses special techniques. The mixed variational multiscale method, also called subgrid upscaling, can be used, and this method is extended to allow oversampling of the local subgrid problems. The method is shown to be equivalent to the multiscale finite element method when one uses the lowest order Raviart--Thomas spaces and provided that there are no fine scale components in the source function $f$. In the periodic setting, a multiscale error analysis based on homogenization theory of the more general subgrid upscaling method shows that the error is $O(\epsilon + H^m + \sqrt{\epsilon/H})$, where $m=1$. Moreover, $m=2$ if one uses the second order Brezzi-Douglas-Marini or Brezzi-Douglas-Duran-Fortin spaces and no oversampling. The error bounding constant depends only on the $H^{m-1}$-norm of $f$ and so is independent of small scales when $m=1$. When oversampling is not used, a superconvergence result for the pressure approximation is shown.) <|cite_end|> (see \cite[Sec.~5.1.2]{HarVal16} for further details). The MsHHO method generalizes to arbitrary polynomial orders the low-order nonconforming multiscale methods of <|cite_start|> (Reference: MsFEM à la Crouzeix-Raviart for Highly Oscillatory Elliptic Problems: ) <|cite_end|> <|cite_start|> (Reference: An MsFEM-type approach for perforated domains: We follow up on our previous work [C. Le Bris, F. Legoll and A. Lozinski, Chinese Annals of Mathematics 2013] where we have studied a multiscale finite element (MsFEM) type method in the vein of the classical Crouzeix-Raviart finite element method that is specifically adapted for highly oscillatory elliptic problems. We adapt the approach to address here a multiscale problem on a perforated domain. An additional ingredient of our approach is the enrichment of the multiscale finite element space using bubble functions. We first establish a theoretical error estimate. We next show that, on the problem we consider, the approach we propose outperforms all dedicated existing variants of MsFEM we are aware of.) <|cite_end|>. The polynomial unknowns attached to the mesh interfaces in the MsHHO method play a different role with respect to the (coarse) interface unknowns of the MsHDG method of <|cite_start|> (Reference: A multiscale HDG method for second order elliptic equations. Part I: Polynomial and homogenization-based multiscale spaces: We introduce a finite element method for numerical upscaling of second order elliptic equations with highly heterogeneous coefficients. The method is based on a mixed formulation of the problem and the concepts of the domain decomposition and the hybrid discontinuous Galerkin methods. The method utilizes three different scales: (1) the scale of the partition of the domain of the problem, (2) the scale of partition of the boundaries of the subdomains (related to the corresponding space of Lagrange multipliers), and (3) the fine-grid scale that is assumed to resolve the scale of the heterogeneous variation of the coefficients.
Our proposed method gives a flexible framework that (1) couples independently generated multiscale basis functions in each coarse patch, (2) provides a stable global coupling independent of local discretization, physical scales, and contrast, and (3) allows avoiding any constraints [Arbogast et al., Multiscale Model. Simul., 6 (2007), pp. 319--346] on coarse spaces. In this paper, we ...) <|cite_end|>. The fundamental difference between these two approaches is that the MsHDG method is based on local Dirichlet problems (the interface unknowns are then the traces of the solution), whereas the MsHHO method is based on local Neumann problems (the interface unknowns are then the coarse moments of the traces of the solution). Notice that the MHM method is also based on local Neumann problems. Note that similar ideas have been developed in the conforming framework in the context of BEM-based FEM <|cite_start|> (Reference: From the Boundary Element Domain Decomposition Methods to Local Trefftz Finite Element Methods on Polyhedral Meshes: ) <|cite_end|> <|cite_start|> (Reference: BEM-based Finite Element Approaches on Polytopal Meshes: ) <|cite_end|>. The MHM and MsHHO methods substantially differ in their construction. Picking the Poisson equation as an example, the MHM method hinges on the primal hybrid formulation analyzed in <|cite_start|> (Reference: Primal hybrid finite element methods for 2nd order elliptic equations: The paper is devoted to the construction of finite element methods for 2nd order elliptic equations based on a primal hybrid variational principle. Optimal error bounds are proved. As a corollary, we obtain a general analysis of nonconforming finite element methods.) <|cite_end|>. As a consequence, while the local problems are defined as coercive Neumann problems, the global upscaled linear system is of saddle-point type, involving face unknowns that are the normal fluxes through the mesh faces (also the Neumann data for the local problems, up to the sign), plus one degree of freedom per mesh cell that enforces a local balance between the normal fluxes and the source term. Notice that the (global) saddle-point structure of the MHM method can be equivalently replaced by a sequence of positive-definite linear systems as shown recently in <|cite_start|> (Reference: Hybrid Localized Spectral Decomposition for multiscale problems: We consider a finite element method for elliptic equation with heterogeneous and possibly high-contrast coefficients based on primal hybrid formulation. A space decomposition as in FETI and BDCC allows a sequential computations of the unknowns through elliptic problems and satisfies equilibrium constraints. One of the resulting problems is non-local but with exponentially decaying solutions, enabling a practical scheme where the basis functions have an extended, but still local, support. We obtain quasi-optimal a priori error estimates for low-contrast problems assuming minimal regularity of the solutions. To also consider the high-contrast case, we propose a variant of our method, enriching the space solution via local eigenvalue problems and obtaining optimal a priori error estimate that mitigates the effect of having coefficients with different magnitudes and again assuming no regularity of the solution. The technique developed is dimensional independent and easy to extend to other problems such as elasticity.) <|cite_end|>. On the other hand, the MsHHO method is directly built upon the primal formulation of the problem. 
As a consequence, the local (Neumann) problems are defined as constrained minimization problems, and as such exhibit a saddle-point structure. By contrast, the global upscaled linear system is coercive, and only involves face unknowns that are the coarse moments of the traces of the solution at interfaces. Notice that, as opposed to the MHM method, the MsHHO method also uses cell unknowns (that are locally eliminable from the global upscaled linear system), which are associated with basis functions solving local problems with nonzero source terms. As such, the MsHHO method is naturally suited to deal with multi-query scenarios. In this work, we revisit the MHM and MsHHO methods and we prove an equivalence result between their solutions. Notice that such a relationship is not straightforward since, at first glance, the two methods exhibit structures that are genuinely different. Nonetheless, we demonstrate that such an equivalence holds under the assumption that the source term of the continuous problem is piecewise polynomial (cf.~Theorem~\ref{th:equiv}). For this equivalence to hold, we make the idealized assumption that the local problems defining the multiscale basis functions are exactly solved. The corresponding methods are then referred to as {\em one-level} (cf.~Remark~\ref{rem:sec_lev} for some insight on the equivalence between two-level methods). Leveraging this equivalence result, the present work also derives, in a unified fashion, an energy-norm error estimate that is valid for both methods (cf.~Theorem~\ref{th:err.est}). More specifically, \begin{itemize} \item in the MHM framework, this result is a refined version (especially in the tracking of the dependency with respect to the diffusion coefficient) of the results in <|cite_start|> (Reference: Multiscale Hybrid-Mixed Method: This work presents a priori and a posteriori error analyses of a new multiscale hybrid-mixed method (MHM) for an elliptic model. Specially designed to incorporate multiple scales into the construction of basis functions, this finite element method relaxes the continuity of the primal variable through the action of Lagrange multipliers, while assuring the strong continuity of the normal component of the flux (dual variable). As a result, the dual variable, which stems from a simple postprocessing of the primal variable, preserves local conservation. We prove existence and uniqueness of a solution for the MHM method as well as optimal convergence estimates of any order in the natural norms. Also, we propose a face-residual a posteriori error estimator, and prove that it controls the error of both variables in the natural norms. Several numerical tests assess the theoretical results.) <|cite_end|>; \item in the MsHHO framework, this result is new and is complementary to the homogenization-based error estimate derived in <|cite_start|> (Reference: A hybrid high-order method for highly oscillatory elliptic problems: We devise a Hybrid High-Order (HHO) method for highly oscillatory elliptic problems that is capable of handling general meshes. The method hinges on discrete unknowns that are polynomials attached to the faces and cells of a coarse mesh; those attached to the cells can be eliminated locally using static condensation. The main building ingredient is a reconstruction operator, local to each coarse cell, that maps onto a fine-scale space spanned by oscillatory basis functions.
The present HHO method generalizes the ideas of some existing multiscale approaches, while providing the first complete analysis on general meshes. It also improves on those methods, taking advantage of the flexibility granted by the HHO framework. The method handles arbitrary orders of approximation $k\geq 0$. For face unknowns that are polynomials of degree $k$, we devise two versions of the method, depending on the polynomial degree $(k-1)$ or $k$ of the cell unknowns. We prove, in the case of periodic coefficients, an energy-error estimate of the form $(\varepsilon^{\frac{1}{2}}+H^{k+1}+(\frac{\varepsilon}{H})^{\frac{1}{2}})$, and we illustrate our theoretical findings on some test-cases.) <|cite_end|>. \end{itemize} We also exploit these stimulating results to transfer properties proved for one method to the other, and to reveal how the interplay between the methods can drive advances for both. Notably, we show that \begin{itemize} \item the MHM method can be adapted to deal with multi-query scenarios (cf.~Section~\ref{ssse:MHM}); \item the MsHHO method can be recast as a purely face-based method, in the sense that it can be alternatively defined without using cell unknowns (cf.~Section~\ref{sec:face-based}). \end{itemize} The outline of the article is as follows. Section \ref{model} introduces the model problem, the partition, the notation and a number of useful tools. We present the MHM method in Section \ref{sec:mhm}, and the MsHHO method in Section \ref{MsHHO-method}. The equivalence result is stated in Section \ref{equivalence}, along with some further properties and remarks. The energy-norm error estimate is proved in Section~\ref{se:conv}. The solution strategies for both methods are discussed in Section \ref{basis-ddm}, leveraging the equivalence result at hand to propose enhancements for both methods. Finally, some conclusions are drawn in Section \ref{concl}. <|paper_end|>
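To make the structural comparison in the introduction above concrete, the following is a schematic statement of the primal hybrid formulation of the Poisson problem $-\Delta u = f$ in $\Omega$, $u = 0$ on $\partial\Omega$, on which the MHM method hinges. This is our own sketch under simplified assumptions (unit diffusion, homogeneous Dirichlet conditions, signs chosen for readability); it is not quoted from the paper. Find $(u,\lambda) \in V \times \Lambda$ such that
\begin{align*}
\sum_{K \in \mathcal{T}_H} \int_K \nabla u \cdot \nabla v \,\mathrm{d}x - \sum_{K \in \mathcal{T}_H} \langle \lambda, v \rangle_{\partial K} &= \int_\Omega f v \,\mathrm{d}x & &\forall v \in V, \\
\sum_{K \in \mathcal{T}_H} \langle \mu, u \rangle_{\partial K} &= 0 & &\forall \mu \in \Lambda,
\end{align*}
where $V := \{v \in L^2(\Omega) : v|_K \in H^1(K) \ \forall K \in \mathcal{T}_H\}$ is the broken Sobolev space on the coarse mesh $\mathcal{T}_H$ and $\Lambda$ collects the normal traces on cell boundaries of $H(\mathrm{div};\Omega)$ fields. The multiplier $\lambda$ plays the role of the normal fluxes (the MHM face unknowns), the second equation weakly enforces the single-valuedness of $u$ across interfaces together with the boundary condition, and the saddle-point character of the global system is apparent, in contrast with the coercive global system of the MsHHO method.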
[ "<|reference_start|> The variational multiscale method—a paradigm for computational mechanics: <|reference_end|>", "<|reference_start|> Stabilization arising from PGEM: A review and further developments: <|reference_end|>", "<|reference_start|> Localization of Elliptic Multiscale Problems: This paper constructs a local generalized finite element basis for elliptic problems with heterogeneous and highly varying coefficients. The basis functions are solutions of local problems on vertex patches. The error of the corresponding generalized finite element method decays exponentially with respect to the number of layers of elements in the patches. Hence, on a uniform mesh of size $ H$, patches of diameter $ H\\log (1/H)$ are sufficient to preserve a linear rate of convergence in $ H$ without pre-asymptotic or resonance effects. The analysis does not rely on regularity of the solution or scale separation in the coefficient. This result motivates new and justifies old classes of variational multiscale methods. - See more at: http://www.ams.org/journals/mcom/2014-83-290/S0025-5718-2014-02868-8/#sthash.z2CCFXIg.dpuf <|reference_end|>", "<|reference_start|> A weak Galerkin generalized multiscale finite element method: <|reference_end|>" ]
[ 3, 13, 17, 21 ]
{"<|cite_1|>": "ss-730499", "<|cite_2|>": "ss-686445", "<|multi_cite_3_1|>": "ss-727463", "<|multi_cite_3_2|>": "ss-681566", "<|multi_cite_4_1|>": "ss-1259558", "<|multi_cite_4_2|>": "ss-1175992", "<|cite_5|>": "ss-1566723", "<|multi_cite_6_1|>": "ss-1280684", "<|multi_cite_6_2|>": "ss-1531412", "<|cite_7|>": "ss-1359778", "<|cite_8|>": "ss-905130", "<|cite_9|>": "ss-1263086", "<|cite_10|>": "ss-1312552", "<|cite_11|>": "ss-2036075", "<|cite_12|>": "ss-686454", "<|cite_13|>": "ss-686455", "<|multi_cite_15_1|>": "ss-686449", "<|multi_cite_15_2|>": "ss-1744235", "<|cite_16|>": "ss-1280686", "<|cite_17|>": "ss-1223376", "<|cite_18|>": "ss-2036076", "<|cite_19|>": "ss-2036077", "<|cite_20|>": "ss-686457", "<|multi_cite_21_1|>": "ss-1595049", "<|multi_cite_21_2|>": "ss-1873091", "<|multi_cite_21_3|>": "ss-1595051", "<|cite_22|>": "ss-1595050", "<|multi_cite_23_1|>": "ss-686458", "<|multi_cite_23_2|>": "ss-2036078", "<|multi_cite_24_1|>": "ss-1201963", "<|cite_25|>": "ss-1767078", "<|cite_26|>": "ss-1263084", "<|cite_27|>": "ss-1284397", "<|multi_cite_28_1|>": "ss-2036079", "<|multi_cite_28_2|>": "ss-2036080", "<|cite_29|>": "ss-2036076", "<|multi_cite_30_1|>": "ss-1210747", "<|multi_cite_30_2|>": "ss-2389714", "<|cite_31|>": "ss-1767076", "<|cite_32|>": "arxiv-127818", "<|cite_33|>": "ss-1595049", "<|cite_34|>": "ss-686458"}
1106.0730
<|paper_start|> Title: Rademacher complexity of stationary sequences Abstract: Rademacher complexity of stationary sequences: We show how to control the generalization error of time series models wherein past values of the outcome are used to predict future values. The results are based on a generalization of standard i.i.d. concentration inequalities to dependent data without the mixing assumptions common in the time series setting. Our proof and the result are simpler than previous analyses with dependent data or stochastic adversaries which use sequential Rademacher complexities rather than the expected Rademacher complexity for i.i.d. processes. We also derive empirical Rademacher results without mixing assumptions resulting in fully calculable upper bounds. Introduction \label{sec:introduction} Much of the literature in machine learning focuses on studying the behavior of predictions constructed from a training set $(X_1,Y_1),\ldots,(X_n,Y_n)$ where one wishes to construct a mapping from $X$ to $Y$. This training set may consist of $n$ IID draws from a common distribution, or it may have some dependence property such as ergodicity or mixing behavior <|cite_start|> (Reference: Probably approximately correct learning with beta-mixing input sequences: In this paper, we study the behaviour of PAC learning algorithms when the input sequence is not i.i.d., but is β-mixing instead. A meta-theorem is proved, showing that if an algorithm is (i) PAC when the inputs are i.i.d., and (ii) ‘sub-additive’ in a sense defined in the paper, then the same algorithm continues to be PAC even with β-mixing inputs. It is shown that if a function family is distribution-free learnable or consistently learnable with i.i.d. inputs, then every consistent algorithm is PAC even when the input sequence is β-mixing. Explicit quantitative estimates are derived for the learning rates with β-mixing inputs, in terms of the learning rates with i.i.d. inputs and the β-mixing coefficients of the input sequence. Finally, it is shown that a large class of Markov chains have the β-mixing property. Hence the results derived here have wide applicability.) <|cite_end|> <|cite_start|> (Reference: Nonparametric Time Series Prediction Through Adaptive Model Selection: ) <|cite_end|>. It may even be generated by an adversary intent on deceiving us about the relationship <|cite_start|> (Reference: Prediction, learning, and games: Empirical evidence to lend proper credence, however, continues to elude the quality literature. This hardly vexes Taguchi (or most of those who produce the corpus of the discipline), but it is importunate to the reviewer. In many settings, the loss function is unlikely to be symmetric with respect to the target and, furthermore, the behavior on either side of the target is not necessarily the same. Such seemingly obvious deviations have not deterred the vast majority from proclaiming the ubiquity of the function. The current book offers no new insights here. The treatment of experimental design is fairly strong. Taguchi’s use of outer arrays is one of his greatest contributions (and one that has caught the ire of a few academics). The book elucidates design adequately and illuminates Taguchi’s advances. Anyone who is well versed in design will be able to skip the introductions and go straight to the discussion of orthogonal arrays. In this reviewer’s opinion, this is the major strength of the book. Another strength is the extensive set of case studies that cover each topic from the previous chapters.
Applications include robust engineering in polymer chemistry, material design in automatic transmissions, improvements in omelet taste, and the use of Mahalanobis distance to measure drug efficacy. The sheer range of topical coverage in the cases will doubtlessly find appeal for virtually any practitioner regardless of specific field. There is the obligatory mention of Six Sigma as it relates to Taguchi’s work. Given the scope of Six Sigma in the current landscape, finding your place therein is necessary. A glaring omission is the lack of a similar consideration of ISO and QS certifications (as is given in Juran). Do not assume that the reviewer sees this as a negative. It is hoped here that Taguchi sees these quality certifications as largely specious and unworthy of a reference. Overall, it is hard not to be impressed with the utter volume of Taguchi’s output. The expanse of coverage is not to be dismissed. As a vehicle for presenting his prolific production, the handbook succeeds. The book may appear to be somewhat self-indulgent (as if 1600+ pages about your previous work could appear otherwise!). No doubt an ambitious undertaking, the authors nevertheless generally hit their mark. One would be hard-pressed not to at least enjoy most of the ride. What is positive (negative) about the book is largely what one perceives to be positive (negative) about Taguchi. The aforementioned lack of scholarly references is unsurprising, because Taguchi largely practiced beyond the boundaries of academia. Many academics have tended to reciprocate with less attention to his work than is probably deserved. What can safely be said is that if you are a fan of Taguchi’s work, this is definitely for you. If you need a single reference for his work or simply desire a “complete quality library,” you cannot go wrong here. Otherwise, it is unlikely that you would be interested. But in the event that you are a practitioner itching to get acquainted with Taguchi and have $150 burning a hole in your wallet or Visa, this one’s a winner.) <|cite_end|> <|cite_start|> (Reference: Online learning: Random averages, combinatorial parameters, and learnability: We develop a theory of online learning by defining several complexity measures. Among them are analogues of Rademacher complexity, covering numbers and fat-shattering dimension from statistical learning theory. Relationship among these complexity measures, their connection to online learning, and tools for bounding them are provided. We apply these results to various learning problems. We provide a complete characterization of online learnability in the supervised setting.) <|cite_end|>. Time series data are different. We observe only a single sequence of random variables $\mathbf{Y}_1^n=(Y_1,\ldots,Y_n)$ taking values in a measurable space $\mathcal{Y}$ and wish to learn a function which takes the past observations as inputs and predicts the future. Suppose, given data from time 1 to time $n$, we wish to predict time $n+h$ for some $h \in \mathbb{N}$. Then for some loss function $\ell: \mathcal{Y}\times\mathcal{Y} \rightarrow \R^+$, and some predictor $g: \mathcal{Y}^n \rightarrow \mathcal{Y}$, we define the \emph{prediction risk}, or \emph{generalization error}, as \begin{equation} \label{eq:13} R(g) := \E[\ell(Y_{n+h},g(\mathbf{Y}_1^n))]. \end{equation} Here we assume that the data series is stationary, a notion to be defined more precisely later. But this allows us to have some hope of controlling the generalization error defined in (\ref{eq:13}).
Absent this sort of behavior, the past and future could be unrelated. Since the true distribution is unknown, so is $R(g)$, but we can attempt to estimate it based on only our observed data. In situations with predictors $X$ and responses $Y$, there is the obvious estimator \begin{equation*} \widetilde{R}_n(g) := \frac{1}{n}\sum_{i=1}^n \ell(Y_i, g(X_i)). \end{equation*} However, in this case, we may use some or all of the past to generate predictions, and similarly, it may be that we have not observed $Y_{i+h}$ for some $i$. To ease notation for the remainder of the paper, assume that we have observed some sequence of data $Y_{1},\ldots,Y_{n+j}$ for $j\in \mathbb{N}$ such that it is possible to evaluate the quantity $\ell(Y_{i+h},g(Y_1,\ldots,Y_{i}))$ for each $i \in \{1,\ldots,n\}$. For time series prediction, we define the \emph{training error} as \begin{equation} \label{eq:one} \widehat{R}_n(g) := \frac{1}{n}\sum_{i=1}^n \ell(Y_{i+h},g(\mathbf{Y}_1^i)). \end{equation} Here $g$ is some function chosen out of a class of possible functions $\mathcal{G}$. Choosing a particular prediction function $\widehat{g}$ as the minimizer of $\widehat{R}_n$ over $\mathcal{G}$ is ``empirical risk minimization'' (ERM); this often gives poor results because the choice of $\widehat{g}$ adapts to the training data, causing the training error to be an over-optimistic estimate of the true risk. Additionally, training error must shrink as model complexity grows, so ERM will tend to overfit the data and give poor out-of-sample predictions. While $\widehat{R}_n(\widehat{g})$ converges to $R(\widehat{g})$ for many algorithms, one can show that when $\widehat{g}$ minimizes (\ref{eq:one}), $\E[\widehat{R}_n(\widehat{g})]\leq R(\widehat{g})$. There are a number of ways to mitigate this issue. The first is to restrict the class $\mathcal{G}$. The second is to change the optimization problem, penalizing model complexity. Rather than attempting to estimate $R(g)$, we provide bounds on it which hold with high probability across all possible prediction functions $g\in\mathcal{G}$. A typical result in this literature is a confidence bound on the risk which says that with probability at least $1-\delta$, \[ R(\widehat{g}) \leq \widehat{R}_n(\widehat{g}) + \Gamma(C(\mathcal{G}), n, \delta), \] where $C(\cdot)$ measures the complexity of the model class $\mathcal{G}$, and $\Gamma(\cdot)$ is a function of the complexity, the confidence level, and the number of observed data points. In \S\ref{sec:preliminaries}, we provide some background material necessary to characterize our results, including some concentration inequalities for dependent data. Section~\ref{sec:results} derives risk bounds for time series and gives a novel proof that the standard Rademacher complexity characterizes the flexibility of $\mathcal{G}$. Section~\ref{sec:examples} supplies some straightforward examples showing how dependence affects the quality of bounds. Section~\ref{sec:discussion} concludes and provides some ideas about the future of these results. <|paper_end|>
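As a concrete illustration of the quantities defined above (our sketch, not code from the paper), the following Python snippet computes the training error of equation (\ref{eq:one}) for a toy AR(1) predictor class $g_\phi(\mathbf{Y}_1^i) = \phi Y_i$ under squared loss, performs ERM over a parameter grid, and Monte Carlo estimates the empirical Rademacher complexity of that class on the observed path; for this one-parameter class the supremum over $|\phi| \le 1$ has a closed form.

import numpy as np

rng = np.random.default_rng(1)
n, h = 200, 1
phi_true = 0.6

# Simulate a stationary AR(1) path; discard a burn-in so the start-up
# transient does not matter. We keep n + h points.
y = np.zeros(n + h + 100)
for t in range(1, len(y)):
    y[t] = phi_true * y[t - 1] + rng.normal()
y = y[100:]

def train_error(phi):
    # hat{R}_n(g_phi) with g_phi(Y_1..Y_i) = phi * Y_i and squared loss:
    # averages ell(Y_{i+h}, phi * Y_i) over i = 1..n.
    preds = phi * y[:n]
    return np.mean((y[h:n + h] - preds) ** 2)

# Empirical risk minimization over a grid of candidate coefficients.
grid = np.linspace(-1.0, 1.0, 201)
phi_hat = grid[np.argmin([train_error(p) for p in grid])]

# Monte Carlo estimate of the empirical Rademacher complexity of
# G = {y -> phi * y : |phi| <= 1} on the observed inputs z_i = Y_i, using
# sup_{|phi|<=1} (1/n) sum_i eps_i * phi * z_i = |(1/n) sum_i eps_i * z_i|.
B = 5000
z = y[:n]
eps = rng.choice([-1.0, 1.0], size=(B, n))
rad = np.mean(np.abs(eps @ z) / n)
print("ERM coefficient:", phi_hat, " empirical Rademacher estimate:", rad)

For richer classes the supremum has no closed form and must itself be approximated, which is one reason fully calculable empirical bounds of the kind derived in the paper are valuable.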
[ "<|reference_start|> Probably approximately correct learning with beta-mixing input sequences: In this paper, we study the behaviour of PAC learning algorithms when the input sequence is not i.i.d., but is β-mixing instead. A meta-theorem is proved, showing that if an algorithm is (i) PAC when the inputs are i.i.d., and (ii) ‘sub-additive’ in a sense defined in the paper, then the same algorithm continues to be PAC even with β-mixing inputs. It is shown that if a function family is distribution-free learnable or consistently learnable with i.i.d. inputs, then every consistent algorithm is PAC even when the input sequence is β-mixing. Explicit quantitative estimates are derived for the learning rates with β-mixing inputs, in terms of the learning rates with i.i.d. inputs and the β-mixing coefficients of the input sequence. Finally, it is shown that a large of Markov chains have the β-mixing property. Hence the results derived here have wide applicability. <|reference_end|>", "<|reference_start|> Nonparametric Time Series Prediction Through Adaptive Model Selection: <|reference_end|>", "<|reference_start|> Prediction, learning, and games: Empirical evidence to lend proper credence, however, continues to elude the quality literature. This hardly vexes Taguchi (or most of those who produce the corpus of the discipline), but it is importunate to the reviewer. In many settings, the loss function is unlikely to be symmetric with respect to the target and, furthermore, the behavior on either side of the target is not necessarily the same. Such seemingly obvious deviations have not deterred the vast majority from proclaiming the ubiquity of the function. The current book offers no new insights here. The treatment of experimental design is fairly strong. Taguchi’s use of outer arrays is one of his greatest contributions (and one that has caught the ire of a few academics). The book elucidates design adequately and illuminates Taguchi’s advances. Anyone who is well versed in design will be able to skip the introductions and go straight to the discussion of orthogonal arrays. In this reviewer’s opinion, this is the major strength of the book. Another strength is the extensive set of case studies that cover each topic from the previous chapters. Applications include robust engineering in polymer chemistry, material design in automatic transmissions, improvements in omelet taste, and the use of Mahalanobis distance to measure drug efficacy. The sheer range of topical coverage in the cases will doubtlessly find appeal for virtually any practitioner regardless of specific field. There is the obligatory mention of Six Sigma as it relates to Taguchi’s work. Given the scope of Six Sigma in the current landscape, finding your place therein is necessary. A glaring omission is the lack of a similar consideration of ISO and QS certifications (as is given in Juran). Do not assume that the reviewer sees this as a negative. It is hoped here that Taguchi sees these quality certifications as largely specious and unworthy of a reference. Overall, it is hard not to be impressed with the utter volume of Taguchi’s output. The expanse of coverage is not to be dismissed. As a vehicle for presenting his prolific production, the handbook succeeds. The book may appear to be somewhat self-indulgent (as if 1600+ pages about your previous work could appear otherwise!). No doubt an ambitious undertaking, the authors nevertheless generally hit their mark. One would be hard-pressed not to at least enjoy most of the ride. 
What is positive (negative) about the book is largely what one perceives to be positive (negative) about Taguchi. The aforementioned lack of scholarly references is unsurprising, because Taguchi largely practiced beyond the boundaries of academia. Many academics have tended to reciprocate with less attention to his work than is probably deserved. What can safely be said is that if you are a fan of Taguchi’s work, this is definitely for you. If you need a single reference for his work or simply desire a “complete quality library,” you cannot go wrong here. Otherwise, it is unlikely that you would be interested. But in the event that you are a practitioner itching to get acquainted with Taguchi and have $150 burning a hole in your wallet or Visa, this one’s a winner. <|reference_end|>", "<|reference_start|> Online learning: Random averages, combinatorial parameters, and learnability: We develop a theory of online learning by defining several complexity measures. Among them are analogues of Rademacher complexity, covering numbers and fat-shattering dimension from statistical learning theory. Relationship among these complexity measures, their connection to online learning, and tools for bounding them are provided. We apply these results to various learning problems. We provide a complete characterization of online learnability in the supervised setting. <|reference_end|>" ]
[ 0, 1, 2, 3 ]
{"<|multi_cite_1_2|>": "ss-1692964", "<|multi_cite_1_3|>": "ss-1344237", "<|multi_cite_2_1|>": "ss-1351955", "<|multi_cite_2_2|>": "ss-1376456"}
2306.04488-0
<|paper_start|> Title: Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards Abstract: Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards: Foundation models are first pre-trained on vast unsupervised datasets and then fine-tuned on labeled data. Reinforcement learning, notably from human feedback (RLHF), can further align the network with the intended usage. Yet the imperfections in the proxy reward may hinder the training and lead to suboptimal results; the diversity of objectives in real-world tasks and human opinions exacerbate the issue. This paper proposes embracing the heterogeneity of diverse rewards by following a multi-policy strategy. Rather than focusing on a single a priori reward, we aim for Pareto-optimal generalization across the entire space of preferences. To this end, we propose rewarded soup, first specializing multiple networks independently (one for each proxy reward) and then interpolating their weights linearly. This succeeds empirically because we show that the weights remain linearly connected when fine-tuned on diverse rewards from a shared pre-trained initialization. We demonstrate the effectiveness of our approach for text-to-text (summarization, Q&A, helpful assistant, review), text-image (image captioning, text-to-image generation, visual grounding, VQA), and control (locomotion) tasks. We hope to enhance the alignment of deep models, and how they interact with the world in all its diversity. Introduction Foundation models <|cite_start|> (Reference: On the opportunities and risks of foundation models.: AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles(e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities,and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.) <|cite_end|>have emerged as the standard paradigm to learn neural networks' weights. 
They are typically first pre-trained through self-supervision <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|> <|cite_start|> (Reference: Language Models are Few-Shot Learners.: Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.) <|cite_end|> <|cite_start|> (Reference: Emerging Properties in Self-Supervised Vision Transformers: In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). 
Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.) <|cite_end|> <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>and then fine-tuned <|cite_start|> (Reference: {Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks: Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. 
We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.) <|cite_end|> <|cite_start|> (Reference: How transferable are features in deep neural networks?: Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.) <|cite_end|>via supervised learning <|cite_start|> (Reference: {An overview of statistical learning theory: Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).) <|cite_end|>. 
Yet, collecting labels is expensive, and thus supervision may not cover all possibilities, failing to perfectly align <|cite_start|> (Reference: Concrete Problems in AI Safety: Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.) <|cite_end|> <|cite_start|> (Reference: Alignment for advanced machine learning systems: This chapter surveys eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? The chapter focuses on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers. The questions surveyed include the following: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? The chapter discusses these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.) <|cite_end|> <|cite_start|> (Reference: The Alignment Problem from a Deep Learning Perspective: In coming years or decades, artificial general intelligence (AGI) may surpass human capabilities at many critical tasks. We argue that, without substantial effort to prevent it, AGIs could learn to pursue goals that are in conflict (i.e. misaligned) with human interests. If trained like today's most capable models, AGIs could learn to act deceptively to receive higher reward, learn misaligned internally-represented goals which generalize beyond their fine-tuning distributions, and pursue those goals using power-seeking strategies. We review emerging evidence for these properties. AGIs with these properties would be difficult to align and may appear aligned even when they are not. Finally, we briefly outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world, and we review research directions aimed at preventing this outcome.) <|cite_end|>the trained network with the intended applications.
Recent works <|cite_start|> (Reference: Learning to summarize with human feedback: ) <|cite_end|> <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|> <|cite_start|> (Reference: Tuning computer vision models with task rewards: Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures which address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show its surprising effectiveness across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks.) <|cite_end|>showed that deep reinforcement learning (DRL) helps by learning from various types of rewards. A prominent example is reinforcement learning from human feedback (RLHF) <|cite_start|> (Reference: Learning to summarize with human feedback: ) <|cite_end|> <|cite_start|> (Reference: Deep reinforcement learning from human preferences: For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. 
These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.) <|cite_end|> <|cite_start|> (Reference: Fine-Tuning Language Models from Human Preferences: Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions. Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks. In this paper, we build on advances in generative pretraining of language models to apply reward learning to four natural language tasks: continuing text with positive sentiment or physically descriptive language, and summarization tasks on the TL;DR and CNN/Daily Mail datasets. For stylistic continuation we achieve good results with only 5,000 comparisons evaluated by humans. For summarization, models trained with 60,000 comparisons copy whole sentences from the input but skip irrelevant preamble; this leads to reasonable ROUGE scores and very good performance according to our human labelers, but may be exploiting the fact that labelers rely on simple heuristics.) <|cite_end|> <|cite_start|> (Reference: Recursively Summarizing Books with Human Feedback: A major challenge for scaling machine learning is training models to perform tasks that are very difficult or time-consuming for humans to evaluate. We present progress on this problem on the task of abstractive summarization of entire fiction novels. Our method combines learning from human feedback with recursive task decomposition: we use models trained on smaller parts of the task to assist humans in giving feedback on the broader task. We collect a large volume of demonstrations and comparisons from human labelers, and fine-tune GPT-3 using behavioral cloning and reward modeling to do summarization recursively. At inference time, the model first summarizes small sections of the book and then recursively summarizes these summaries to produce a summary of the entire book. Our human labelers are able to supervise and evaluate the models quickly, despite not having read the entire books themselves. Our resulting model generates sensible summaries of entire books, even matching the quality of human-written summaries in a few cases ($\sim5\%$ of books). We achieve state-of-the-art results on the recent BookSum dataset for book-length summarization. A zero-shot question-answering model using these summaries achieves state-of-the-art results on the challenging NarrativeQA benchmark for answering questions about books and movie scripts. We release datasets of samples from our model.) <|cite_end|>, which appears as the current go-to strategy to refine large language models (LLMs) into powerful conversational agents such as ChatGPT <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. 
Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|> <|cite_start|> (Reference: GPT-4 Technical Report: We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.) <|cite_end|>. After pre-training on next token prediction <|cite_start|> (Reference: Improving language understanding by generative pre-training: Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).) 
<|cite_end|>using Web data, the LLMs are fine-tuned to follow instructions <|cite_start|> (Reference: Finetuned Language Models Are Zero-Shot Learners: This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.) <|cite_end|> <|cite_start|> (Reference: Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks: How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions. Our collection covers 76 distinct task types, including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition. This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions -- training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-Instruct, a transformer model trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples). Our experiments show that Tk-Instruct outperforms existing instruction-following models such as InstructGPT by over 9% on our benchmark despite being an order of magnitude smaller. We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model sizes. We hope our dataset and model facilitate future progress towards more general-purpose NLP models.) <|cite_end|>before reward maximization. This RL strategy enhances alignment by evaluating the entire generated sentence instead of each token independently, handling the diversity of correct answers and allowing for negative feedback <|cite_start|> (Reference: Reinforcement learning for language models: Recent interest in Large Language Models (LLMs) and human alignment calls for effective finetuning methods. In this work, we investigate how reinforcement learning (RL) can be used to finetune the downstream performance of pretrained Language Models (LMs). Recently, on-policy RL algorithms have shown promise for text generation tasks. However, they face several empirical challenges, including (1) training instability due to the large action space and (2) sample inefficiency. In this paper, we explore methods to address both of these limitations. 
First, we implemented a variety of sampling techniques which effectively restrict the total action space without compromising performance, and which show significant improvement over vanilla proximal policy optimization (PPO). Second, we implemented an off-policy and value-based algorithm: Deep-Q Learning (DQN). We demonstrate the DQN can be applied to finetuning langauge models for downstream applications. However, futher exploration and tuning is required to determine whether it can achieve better sample efficiency compared to on-policy algorithms.) <|cite_end|>. Similar strategies have been useful in computer vision (CV) <|cite_start|> (Reference: Tuning computer vision models with task rewards: Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures which address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show its surprising effectiveness across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks.) <|cite_end|> <|cite_start|> (Reference: Self-critical Sequence Training for Image Captioning: Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a "baseline" to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation sever establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.) <|cite_end|>, for instance to integrate human aesthetics into image generation <|cite_start|> (Reference: Aligning Text-to-Image Models using Human Feedback: Deep generative models have shown impressive results in text-to-image synthesis. However, current text-to-image models often generate images that are inadequately aligned with text prompts. We propose a fine-tuning method for aligning such models using human feedback, comprising three stages. First, we collect human feedback assessing model output alignment from a set of diverse text prompts. We then use the human-labeled image-text dataset to train a reward function that predicts human feedback. 
Lastly, the text-to-image model is fine-tuned by maximizing reward-weighted likelihood to improve image-text alignment. Our method generates objects with specified colors, counts and backgrounds more accurately than the pre-trained model. We also analyze several design choices and find that careful investigations on such design choices are important in balancing the alignment-fidelity tradeoffs. Our results demonstrate the potential for learning from human feedback to significantly improve text-to-image models.) <|cite_end|> <|cite_start|> (Reference: Better Aligning Text-to-Image Models with Human Preference: Recent years have witnessed a rapid growth of deep generative models, with text-to-image models gaining significant attention from the public. However, existing models often generate images that do not align well with human aesthetic preferences, such as awkward combinations of limbs and facial expressions. To address this issue, we collect a dataset of human choices on generated images from the Stable Foundation Discord channel. Our experiments demonstrate that current evaluation metrics for generative models do not correlate well with human choices. Thus, we train a human preference classifier with the collected dataset and derive a Human Preference Score (HPS) based on the classifier. Using the HPS, we propose a simple yet effective method to adapt Stable Diffusion to better align with human aesthetic preferences. Our experiments show that the HPS outperforms CLIP in predicting human choices and has good generalization capability towards images generated from other models. By tuning Stable Diffusion with the guidance of the HPS, the adapted model is able to generate images that are more preferred by human users. The project page is available here: https://tgxs002.github.io/align sd web/.) <|cite_end|> <|cite_start|> (Reference: HIVE: Harnessing Human Feedback for Instructional Visual Editing: Incorporating human feedback has been shown to be crucial to align text generated by large language models to human preferences. We hypothesize that state-of-the-art instructional image editing models, where outputs are generated based on an input image and an editing instruction, could similarly benefit from human feedback, as their outputs may not adhere to the correct instructions and preferences of users. In this paper, we present a novel framework to harness human feedback for instructional visual editing (HIVE). Specifically, we collect human feedback on the edited images and learn a reward function to capture the underlying user preferences. We then introduce scalable diffusion model fine-tuning methods that can incorporate human preferences based on the estimated reward. Besides, to mitigate the bias brought by the limitation of data, we contribute a new 1M training dataset, a 3.6K reward dataset for rewards learning, and a 1K evaluation dataset to boost the performance of instructional image editing. We conduct extensive empirical experiments quantitatively and qualitatively, showing that HIVE is favored over previous state-of-the-art instructional image editing approaches by a large margin.) <|cite_end|>. \textbf{Diversity of proxy rewards.} RL is usually seen as more challenging than supervised training <|cite_start|> (Reference: Challenges of real-world reinforcement learning: definitions, benchmarks and analysis: ) <|cite_end|>, notably because the real reward---ideally reflecting the users' preferences---is often not specified at training time. 
Proxy rewards are therefore developed to guide the learning, either as hand-engineered metrics <|cite_start|> (Reference: {BLEU}: a method for automatic evaluation of machine translation: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.) <|cite_end|> <|cite_start|> (Reference: Automatic evaluation of summaries using n-gram co-occurrence statistics: Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprising well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results.) <|cite_end|> <|cite_start|> (Reference: CIDEr: Consensus-based image description evaluation: Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking.) <|cite_end|>or more recently in RLHF as models trained to reflect human preferences <|cite_start|> (Reference: Deep reinforcement learning from human preferences: For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. 
These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.) <|cite_end|> <|cite_start|> (Reference: Reward Design with Language Models: Reward design in reinforcement learning (RL) is challenging since specifying human notions of desired behavior may be difficult via reward functions or require many expert demonstrations. Can we instead cheaply design rewards using a natural language interface? This paper explores how to simplify reward design by prompting a large language model (LLM) such as GPT-3 as a proxy reward function, where the user provides a textual prompt containing a few examples (few-shot) or a description (zero-shot) of the desired behavior. Our approach leverages this proxy reward function in an RL framework. Specifically, users specify a prompt once at the beginning of training. During training, the LLM evaluates an RL agent's behavior against the desired behavior described by the prompt and outputs a corresponding reward signal. The RL agent then uses this reward to update its behavior. We evaluate whether our approach can train agents aligned with user objectives in the Ultimatum Game, matrix games, and the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents trained with our framework are well-aligned with the user's objectives and outperform RL agents trained with reward functions learned via supervised learning) <|cite_end|> <|cite_start|> (Reference: ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation: We present a comprehensive solution to learn and improve text-to-image models from human preference feedback. To begin with, we build ImageReward -- the first general-purpose text-to-image human preference reward model -- to effectively encode human preferences. Its training is based on our systematic annotation pipeline including rating and ranking, which collects 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring models and metrics, making it a promising automatic metric for evaluating text-to-image synthesis. On top of it, we propose Reward Feedback Learning (ReFL), a direct tuning algorithm to optimize diffusion models against a scorer. Both automatic and human evaluation support ReFL's advantages over compared methods. All code and datasets are provided at \url{https://github.com/THUDM/ImageReward}.) <|cite_end|>. Nonetheless, designing reliable proxy rewards for evaluation is difficult. This \textit{reward misspecification} <|cite_start|> (Reference: Concrete Problems in AI Safety: Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. 
Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.) <|cite_end|> <|cite_start|> (Reference: The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models: Reward hacking -- where RL agents exploit gaps in misspecified reward functions -- has been widely observed, but not yet systematically studied. To understand how reward hacking arises, we construct four RL environments with misspecified rewards. We investigate reward hacking as a function of agent capabilities: model capacity, action space resolution, observation space noise, and training time. More capable agents often exploit reward misspecifications, achieving higher proxy reward and lower true reward than less capable agents. Moreover, we find instances of phase transitions: capability thresholds at which the agent's behavior qualitatively shifts, leading to a sharp decrease in the true reward. Such phase transitions pose challenges to monitoring the safety of ML systems. To address this, we propose an anomaly detection task for aberrant policies and offer several baseline detectors.) <|cite_end|>between the proxy reward and the users' actual rewards can lead to unforeseen consequences <|cite_start|> (Reference: Understanding Learned Reward Functions: In many real-world tasks, it is not possible to procedurally specify an RL agent's reward function. In such cases, a reward function must instead be learned from interacting with and observing humans. However, current techniques for reward learning may fail to produce reward functions which accurately reflect user preferences. Absent significant advances in reward learning, it is thus important to be able to audit learned reward functions to verify whether they truly capture user preferences. In this paper, we investigate techniques for interpreting learned reward functions. In particular, we apply saliency methods to identify failure modes and predict the robustness of reward functions. We find that learned reward functions often implement surprising algorithms that rely on contingent aspects of the environment. We also discover that existing interpretability techniques often attend to irrelevant changes in reward output, suggesting that reward interpretability may need significantly different methods from policy interpretability.) <|cite_end|>. Moreover, the diversity of objectives in real-world applications complicates the challenge. In particular, human opinions can vary significantly <|cite_start|> (Reference: Choosing preferences by constructing institutions: A cultural theory of preference formation: Preferences come from the most ubiquitous human activity: living with other people. Support for and opposition to different ways of life, the shared values legitimating social relations (here called cultures) are the generators of diverse preferences. After discussing why it is not helpful to conceive of interests as preferences or to dismiss preference formation as external to organized social life, I explain how people are able to develop many preferences from few clues by using their social relations to interrogate their environment. The social filter is the source of preferences. I then argue that culture is a more powerful construct than conceptual rivals: heuristics, schemas, ideologies. 
Two initial applications—to the ideology of the left-right distinctions and to perceptions of danger—test the claim that this theory of how individuals use political cultures to develop their preferences outperforms the alternatives.) <|cite_end|> <|cite_start|> (Reference: Handling preferences in evolutionary multiobjective optimization: A survey: Despite the relatively high volume of research conducted on evolutionary multiobjective optimization in the last few years. Little attention has been paid to the decision making process that is required to select a final solution to the multiobjective optimization problem at hand. This paper reviews the most important preference handling approaches used with evolutionary algorithms, analyzing their advantages and disadvantages, and then, it proposes some of the potential areas of future research in this discipline.) <|cite_end|> <|cite_start|> (Reference: An overview of the Schwartz Theory of Basic Values: This article presents an overview of the Schwartz theory of basic human values. It discusses the nature of values and spells out the features that are common to all values and what distinguishes one value from another. The theory identifies ten basic personal values that are recognized across cultures and explains where they come from. At the heart of the theory is the idea that values form a circular structure that reflects the motivations each value expresses. This circular structure, that captures the conflicts and compatibility among the ten values is apparently culturally universal. The article elucidates the psychological principles that give rise to it. Next, it presents the two major methods developed to measure the basic values, the Schwartz Value Survey and the Portrait Values Questionnaire. Findings from 82 countries, based on these and other methods, provide evidence for the validity of the theory across cultures. The findings reveal substantial differences in the value priorities of individuals. Surprisingly, however, the average value priorities of most societal groups exhibit a similar hierarchical order whose existence the article explains. The last section of the article clarifies how values differ from other concepts used to explain behavior—attitudes, beliefs, norms, and traits. Creative Commons License This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. This article is available in Online Readings in Psychology and Culture: http://scholarworks.gvsu.edu/orpc/vol2/iss1/11) <|cite_end|>on subjects such as aesthetics <|cite_start|> (Reference: Neuroaesthetics and art's diversity and universality: There is a duality to art. It is enormously varied and culturally diverse, and yet it is also universal, common to all humans. Art's variability and distinctiveness seem to elude science, better equipped to account for constant or regular phenomena. We believe that art's cultural particularity can be reconciled with its biological universality. The emergence of variability and distinctiveness from common mechanisms is at the core of biological explanation; it is a basic fact of life, and a basic fact of brain function. The individual, cultural, and historical diversity of art, both in its production and its appreciation, owe to basic features of the organization and function of the human brain. Each encounter with an artwork engages flexible neural networks that are modulated by context, expectations, emotional states, goals, and experience. 
Because these factors change from one occasion to another, each encounter with art has its distinct flavor. Repeated encounters with art over the course of a lifetime lead people develop personal preferences for art, as the network connections become strengthened in unique ways. These flexible and adaptable networks evolved in humans as a consequence of the relaxation of genetic constraints on the development of brain regions involved in orchestrating network dynamics, enabling a greater impact of learning and experience. In sum, art is universal and common because it arises from neural systems that are common to all humans, and it is variable and diverse because those neural systems evolved to be flexible, attuned to momentary contexts and goals, and changing through a lifetime of experiences. This article is categorized under: Economics > Individual Decision-Making Cognitive Biology > Evolutionary Roots of Cognition Neuroscience > Cognition.) <|cite_end|>, politics or fairness <|cite_start|> (Reference: Measuring and signing fairness as performance under multiple stakeholder distributions: As learning machines increase their influence on decisions concerning human lives, analyzing their fairness properties becomes a subject of central importance. Yet, our best tools for measuring the fairness of learning systems are rigid fairness metrics encapsulated as mathematical one-liners, offer limited power to the stakeholders involved in the prediction task, and are easy to manipulate when we exhort excessive pressure to optimize them. To advance these issues, we propose to shift focus from shaping fairness metrics to curating the distributions of examples under which these are computed. In particular, we posit that every claim about fairness should be immediately followed by the tagline "Fair under what examples, and collected by whom?". By highlighting connections to the literature in domain generalization, we propose to measure fairness as the ability of the system to generalize under multiple stress tests -- distributions of examples with social relevance. We encourage each stakeholder to curate one or multiple stress tests containing examples reflecting their (possibly conflicting) interests. The machine passes or fails each stress test by falling short of or exceeding a pre-defined metric value. The test results involve all stakeholders in a discussion about how to improve the learning system, and provide flexible assessments of fairness dependent on context and based on interpretable data. We provide full implementation guidelines for stress testing, illustrate both the benefits and shortcomings of this framework, and introduce a cryptographic scheme to enable a degree of prediction accountability from system providers.) <|cite_end|>. Humans also have different expectations from machines: for example, while <|cite_start|> (Reference: Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned.: We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM); an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF).
We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models.) <|cite_end|>stressed aligning LLMs towards harmless feedback, <|cite_start|> (Reference: Constitutional AI: Harmlessness from AI Feedback: As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.) <|cite_end|>requested helpful non-evasive responses, and others' <|cite_start|> (Reference: Rewarding chatbots for real-world engagement with millions of users.: The emergence of pretrained large language models has led to the deployment of a range of social chatbots for chitchat. Although these chatbots demonstrate language ability and fluency, they are not guaranteed to be engaging and can struggle to retain users. This work investigates the development of social chatbots that prioritize user engagement to enhance retention, specifically examining the use of human feedback to efficiently develop highly engaging chatbots. The proposed approach uses automatic pseudo-labels collected from user interactions to train a reward model that can be used to reject low-scoring sample responses generated by the chatbot model at inference time. Intuitive evaluation metrics, such as mean conversation length (MCL), are introduced as proxies to measure the level of engagement of deployed chatbots. A/B testing on groups of 10,000 new daily chatbot users on the Chai Research platform shows that this approach increases the MCL by up to 70%, which translates to a more than 30% increase in user retention for a GPT-J 6B model. 
Future work aims to use the reward model to realise a data fly-wheel, where the latest user conversations can be used to alternately fine-tune the language model and the reward model.) <|cite_end|> interests are to make LLMs engaging and enjoyable. \ifthenelse{\boolean{isarxiv}}{}{Even hand-engineered metrics can be in tension: generating shorter descriptions with higher precision can increase the BLEU <|cite_start|> (Reference: {BLEU}: a method for automatic evaluation of machine translation: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.) <|cite_end|> score but decrease the ROUGE <|cite_start|> (Reference: Automatic evaluation of summaries using n-gram co-occurrence statistics: Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprisingly well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results.) <|cite_end|> score due to reduced recall.} \input{figures/main/fig_pareto.tex} \textbf{Towards multi-policy strategies.} Considering these challenges, a single model cannot be aligned with everyone’s preferences <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|>.
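To make the BLEU/ROUGE tension mentioned above concrete, the following minimal Python sketch (illustrative only; the sentences are hypothetical) computes clipped unigram precision, i.e., the core of BLEU-1 with the brevity penalty omitted, and unigram recall, as in ROUGE-1:
\begin{verbatim}
from collections import Counter

def unigram_precision(candidate, reference):
    # Clipped unigram precision (BLEU-1 without the brevity penalty).
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    return sum(min(n, ref[w]) for w, n in cand.items()) / sum(cand.values())

def unigram_recall(candidate, reference):
    # Unigram recall (as in ROUGE-1).
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    return sum(min(n, cand[w]) for w, n in ref.items()) / sum(ref.values())

reference = "the cat sat on the mat near the door"
for candidate in ["the cat sat",
                  "the cat sat on a mat by the door today"]:
    print(candidate,
          round(unigram_precision(candidate, reference), 2),
          round(unigram_recall(candidate, reference), 2))
# "the cat sat"                            -> precision 1.0, recall 0.33
# "the cat sat on a mat by the door today" -> precision 0.7, recall 0.78
\end{verbatim}
The shorter candidate maximizes the precision-oriented score while recovering only a third of the reference; the longer one trades precision for recall. Optimizing either metric alone therefore pulls generation in a different direction.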
Existing works align models towards a consensus-based user <|cite_start|> (Reference: Fine-tuning language models to find agreement among humans with diverse preferences: Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.) <|cite_end|> <|cite_start|> (Reference: Generative CI through collective response systems: How can many people (who may disagree) come together to answer a question or make a decision? "Collective response systems" are a type of generative collective intelligence (CI) facilitation process meant to address this challenge. They enable a form of "generative voting", where both the votes, and the choices of what to vote on, are provided by the group. Such systems overcome the traditional limitations of polling, town halls, standard voting, referendums, etc. The generative CI outputs of collective response systems can also be chained together into iterative "collective dialogues", analogously to some kinds of generative AI. Technical advances across domains including recommender systems, language models, and human-computer interaction have led to the development of innovative and scalable collective response systems. For example, Polis has been used around the world to support policy-making at different levels of government, and Remesh has been used by the UN to understand the challenges and needs of ordinary people across war-torn countries. This paper aims to develop a shared language by defining the structure, processes, properties, and principles of such systems. Collective response systems allow non-confrontational exploration of divisive issues, help identify common ground, and elicit insights from those closest to the issues. As a result, they can help overcome gridlock around conflict and governance challenges, increase trust, and develop mandates.
Continued progress toward their development and adoption could help revitalize democracies, reimagine corporate governance, transform conflict, and govern powerful AI systems -- both as a complement to deeper deliberative democratic processes and as an option where deeper processes are not applicable or possible.) <|cite_end|>, relying on the \enquote{wisdom of the crowd} <|cite_start|> (Reference: Training a helpful and harmless assistant with reinforcement learning from human feedback.: We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.) <|cite_end|>, inherently prioritizing certain principles <|cite_start|> (Reference: Constitutional AI: Harmlessness from AI Feedback: As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.)
<|cite_end|>, resulting in unfair representations of marginalized groups <|cite_start|> (Reference: Ethical and social risks of harm from language models.: This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false or misleading information including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LLMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in-depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.) <|cite_end|> <|cite_start|> (Reference: Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback.: Large language models (LLMs) are used to generate content for a wide range of tasks, and are set to reach a growing audience in coming years due to integration in product interfaces like ChatGPT or search engines like Bing. This intensifies the need to ensure that models are aligned with human preferences and do not produce unsafe, inaccurate or toxic outputs. While alignment techniques like reinforcement learning with human feedback (RLHF) and red-teaming can mitigate some safety concerns and improve model capabilities, it is unlikely that an aggregate fine-tuning process can adequately represent the full range of users' preferences and values. Different people may legitimately disagree on their preferences for language and conversational norms, as well as on values or ideologies which guide their communication. Personalising LLMs through micro-level preference learning processes may result in models that are better aligned with each user. However, there are several normative challenges in defining the bounds of a societally-acceptable and safe degree of personalisation. In this paper, we ask how, and in what ways, LLMs should be personalised.
First, we review literature on current paradigms for aligning LLMs with human feedback, and identify issues including (i) a lack of clarity regarding what alignment means; (ii) a tendency of technology providers to prescribe definitions of inherently subjective preferences and values; and (iii) a 'tyranny of the crowdworker', exacerbated by a lack of documentation in who we are really aligning to. Second, we present a taxonomy of benefits and risks associated with personalised LLMs, for individuals and society at large. Finally, we propose a three-tiered policy framework that allows users to experience the benefits of personalised alignment, while restraining unsafe and undesirable LLM-behaviours within (supra-)national and organisational bounds.) <|cite_end|>. The trade-offs <|cite_start|> (Reference: Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the MACHIAVELLI benchmark.: Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Towards answering these questions, we introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. Scenario labeling is automated with LMs, which are more performant than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods to steer agents' towards less harmful behaviors. Our results show that agents can both act competently and morally, so concrete progress can currently be made in machine ethics--designing agents that are Pareto improvements in both safety and capabilities.) <|cite_end|>are decided a priori before training, shifting the responsibility to the engineers, reducing transparency and explainability <|cite_start|> (Reference: A Practical Guide to Multi-Objective Reinforcement Learning and Planning: Real-world decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.) 
<|cite_end|>, and actually aligning towards the \enquote{researchers designing the study} <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|> <|cite_start|> (Reference: Whose Opinions Do Language Models Reflect?: Language models (LMs) are increasingly being used in open-ended contexts, where the opinions reflected by LMs in response to subjective queries can have a profound impact, both on user satisfaction, as well as shaping the views of society at large. In this work, we put forth a quantitative framework to investigate the opinions reflected by LMs -- by leveraging high-quality public opinion polls and their associated human responses. Using this framework, we create OpinionsQA, a new dataset for evaluating the alignment of LM opinions with those of 60 US demographic groups over topics ranging from abortion to automation. Across topics, we find substantial misalignment between the views reflected by current LMs and those of US demographic groups: on par with the Democrat-Republican divide on climate change. Notably, this misalignment persists even after explicitly steering the LMs towards particular demographic groups. Our analysis not only confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs, but also surfaces groups whose opinions are poorly reflected by current LMs (e.g., 65+ and widowed individuals). Our code and data are available at https://github.com/tatsu-lab/opinions_qa.) <|cite_end|>. These limitations, discussed in \Cref{app:discussion:single_policy}, highlight the inability of single-policy alignment strategies to handle human diversity. Yet, \enquote{human-aligned artificial intelligence is a multi-objective problem} <|cite_start|> (Reference: Human-aligned artificial intelligence is a multiobjective problem: ) <|cite_end|>. Thus, we draw inspiration from the multi-objective reinforcement learning (MORL) literature <|cite_start|> (Reference: {Learning all optimal policies with multiple criteria: We describe an algorithm for learning in the presence of multiple criteria. 
Our technique generalizes previous approaches in that it can learn optimal policies for all linear preference assignments over the multiple reward criteria at once. The algorithm can be viewed as an extension to standard reinforcement learning for MDPs where instead of repeatedly backing up maximal expected rewards, we back up the set of expected rewards that are maximal for some set of linear preferences (given by a weight vector, w). We present the algorithm along with a proof of correctness showing that our solution gives the optimal policy for any linear preference function. The solution reduces to the standard value iteration algorithm for a specific weight vector, w.) <|cite_end|> <|cite_start|> (Reference: Deep Reinforcement Learning for Multiobjective Optimization: This article proposes an end-to-end framework for solving multiobjective optimization problems (MOPs) using deep reinforcement learning (DRL), that we call DRL-based multiobjective optimization algorithm (DRL-MOA). The idea of decomposition is adopted to decompose the MOP into a set of scalar optimization subproblems. Then, each subproblem is modeled as a neural network. Model parameters of all the subproblems are optimized collaboratively according to a neighborhood-based parameter-transfer strategy and the DRL training algorithm. Pareto-optimal solutions can be directly obtained through the trained neural-network models. Specifically, the multiobjective traveling salesman problem (MOTSP) is solved in this article using the DRL-MOA method by modeling the subproblem as a Pointer Network. Extensive experiments have been conducted to study the DRL-MOA and various benchmark methods are compared with it. It is found that once the trained model is available, it can scale to newly encountered problems with no need for retraining the model. The solutions can be directly obtained by a simple forward calculation of the neural network; thereby, no iteration is required and the MOP can be always solved in a reasonable time. The proposed method provides a new way of solving the MOP by means of DRL. It has shown a set of new characteristics, for example, strong generalization ability and fast solving speed in comparison with the existing methods for multiobjective optimizations. The experimental results show the effectiveness and competitiveness of the proposed method in terms of model performance and running time.) <|cite_end|> <|cite_start|> (Reference: Multitask reinforcement learning on the distribution of mdps: In this paper we address a new problem in reinforcement learning. Here we consider an agent that faces multiple learning tasks within its lifetime. The agent's objective is to maximize its total reward in the lifetime as well as a conventional return in each task. To realize this, it has to be endowed an important ability to keep its past learning experiences and utilize them for improving future learning performance. This time we try to phrase this problem formally. The central idea is to introduce an environmental class, BV-MDPs that is defined with the distribution of MDPs. As an approach to exploiting past learning experiences, we focus on statistics (mean and deviation) about the agent's value tables. The mean can be used as initial values of the table when a new task is presented. The deviation can be viewed as measuring reliability of the mean, and we utilize it in calculating priority of simulated backups. We conduct experiments in computer simulation to evaluate the effectiveness.) 
<|cite_end|> <|cite_start|> (Reference: Multi-objective reinforcement learning using sets of pareto dominating policies: Many real-world problems involve the optimization of multiple, possibly conflicting objectives. Multi-objective reinforcement learning (MORL) is a generalization of standard reinforcement learning where the scalar reward signal is extended to multiple feedback signals, in essence, one for each objective. MORL is the process of learning policies that optimize multiple criteria simultaneously. In this paper, we present a novel temporal difference learning algorithm that integrates the Pareto dominance relation into a reinforcement learning approach. This algorithm is a multi-policy algorithm that learns a set of Pareto dominating policies in a single run. We name this algorithm Pareto Q-learning and it is applicable in episodic environments with deterministic as well as stochastic transition functions. A crucial aspect of Pareto Q-learning is the updating mechanism that bootstraps sets of Q-vectors. One of our main contributions in this paper is a mechanism that separates the expected immediate reward vector from the set of expected future discounted reward vectors. This decomposition allows us to update the sets and to exploit the learned policies consistently throughout the state space. To balance exploration and exploitation during learning, we also propose three set evaluation mechanisms. These three mechanisms evaluate the sets of vectors to accommodate for standard action selection strategies, such as ε-greedy. More precisely, these mechanisms use multi-objective evaluation principles such as the hypervolume measure, the cardinality indicator and the Pareto dominance relation to select the most promising actions. We experimentally validate the algorithm on multiple environments with two and three objectives and we demonstrate that Pareto Q-learning outperforms current state-of-the-art MORL algorithms with respect to the hypervolume of the obtained policies. We note that (1) Pareto Q-learning is able to learn the entire Pareto front under the usual assumption that each state-action pair is sufficiently sampled, while (2) not being biased by the shape of the Pareto front. Furthermore, (3) the set evaluation mechanisms provide indicative measures for local action selection and (4) the learned policies can be retrieved throughout the state and action space.) <|cite_end|>
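As a minimal illustration of the MORL perspective drawn on above, the sketch below implements linear scalarization in the spirit of the first citation: each action carries a vector-valued reward, a preference vector w collapses it to a scalar objective, and different choices of w induce different optimal behaviours. The action names and reward numbers are hypothetical, chosen purely for illustration:
\begin{verbatim}
import numpy as np

# Hypothetical vector rewards over two objectives
# (helpfulness, harmlessness); the numbers are made up.
actions = {
    "direct_answer": np.array([0.9, 0.4]),
    "hedged_answer": np.array([0.6, 0.8]),
    "refusal":       np.array([0.1, 1.0]),
}

def best_action(w):
    # Linear scalarization: pick the action maximizing w . r.
    return max(actions, key=lambda a: float(w @ actions[a]))

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    w = np.array([alpha, 1.0 - alpha])  # weight on helpfulness
    print(f"w = ({alpha:.2f}, {1 - alpha:.2f}) -> {best_action(w)}")
# Sweeping w selects refusal, then hedged_answer, then direct_answer:
# one Pareto-optimal behaviour per preference vector, rather than a
# single trade-off fixed a priori.
\end{verbatim}
Full MORL methods generalize this one-step picture to sequential decision making, learning one policy (or a front of policies) per preference assignment.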
[ "<|reference_start|> Learning to summarize with human feedback: <|reference_end|>", "<|reference_start|> Better Aligning Text-to-Image Models with Human Preference: Recent years have witnessed a rapid growth of deep generative models, with text-to-image models gaining significant attention from the public. However, existing models often generate images that do not align well with human aesthetic preferences, such as awkward combinations of limbs and facial expressions. To address this issue, we collect a dataset of human choices on generated images from the Stable Foundation Discord channel. Our experiments demonstrate that current evaluation metrics for generative models do not correlate well with human choices. Thus, we train a human preference classifier with the collected dataset and derive a Human Preference Score (HPS) based on the classifier. Using the HPS, we propose a simple yet effective method to adapt Stable Diffusion to better align with human aesthetic preferences. Our experiments show that the HPS outperforms CLIP in predicting human choices and has good generalization capability towards images generated from other models. By tuning Stable Diffusion with the guidance of the HPS, the adapted model is able to generate images that are more preferred by human users. The project page is available here: https://tgxs002.github.io/align sd web/. <|reference_end|>", "<|reference_start|> Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned.: We describe our early efforts to red team language models in order to simultaneously discover, measure, and attempt to reduce their potentially harmful outputs. We make three main contributions. First, we investigate scaling behaviors for red teaming across 3 model sizes (2.7B, 13B, and 52B parameters) and 4 model types: a plain language model (LM); an LM prompted to be helpful, honest, and harmless; an LM with rejection sampling; and a model trained to be helpful and harmless using reinforcement learning from human feedback (RLHF). We find that the RLHF models are increasingly difficult to red team as they scale, and we find a flat trend with scale for the other model types. Second, we release our dataset of 38,961 red team attacks for others to analyze and learn from. We provide our own analysis of the data and find a variety of harmful outputs, which range from offensive language to more subtly harmful non-violent unethical outputs. Third, we exhaustively describe our instructions, processes, statistical methodologies, and uncertainty about red teaming. We hope that this transparency accelerates our ability to work together as a community in order to develop shared norms, practices, and technical standards for how to red team language models. <|reference_end|>", "<|reference_start|> Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the MACHIAVELLI benchmark.: Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Towards answering these questions, we introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. 
Scenario labeling is automated with LMs, which are more performant than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods to steer agents' towards less harmful behaviors. Our results show that agents can both act competently and morally, so concrete progress can currently be made in machine ethics--designing agents that are Pareto improvements in both safety and capabilities. <|reference_end|>" ]
[ 11, 27, 44, 56 ]
{"<|cite_1|>": "ss-1853082", "<|multi_cite_2_1|>": "arxiv-175879", "<|multi_cite_2_2|>": "ss-1197880", "<|multi_cite_2_3|>": "arxiv-337689", "<|multi_cite_2_4|>": "arxiv-323919", "<|multi_cite_3_1|>": "ss-1113121", "<|multi_cite_3_2|>": "arxiv-68419", "<|cite_4|>": "ss-962153", "<|multi_cite_5_1|>": "arxiv-100620", "<|multi_cite_5_2|>": "ss-1352791", "<|multi_cite_5_3|>": "arxiv-443731", "<|multi_cite_6_1|>": "ss-1291599", "<|multi_cite_6_2|>": "arxiv-403294", "<|multi_cite_6_3|>": "arxiv-481932", "<|multi_cite_7_1|>": "ss-1291599", "<|multi_cite_7_2|>": "arxiv-126589", "<|multi_cite_7_3|>": "arxiv-224461", "<|multi_cite_7_4|>": "arxiv-368794", "<|multi_cite_8_1|>": "arxiv-403294", "<|multi_cite_8_2|>": "arxiv-489148", "<|cite_9|>": "ss-986248", "<|multi_cite_10_1|>": "arxiv-364691", "<|multi_cite_10_2|>": "arxiv-413503", "<|cite_11|>": "ss-1176709", "<|multi_cite_12_1|>": "arxiv-481932", "<|multi_cite_12_2|>": "arxiv-111611", "<|multi_cite_13_1|>": "arxiv-483802", "<|multi_cite_13_2|>": "ss-680933", "<|multi_cite_13_3|>": "arxiv-489563", "<|cite_14|>": "ss-687323", "<|multi_cite_15_1|>": "ss-822419", "<|multi_cite_15_2|>": "ss-1370787", "<|multi_cite_15_3|>": "ss-2294239", "<|multi_cite_16_1|>": "arxiv-126589", "<|multi_cite_16_2|>": "arxiv-485049", "<|multi_cite_16_3|>": "arxiv-496540", "<|multi_cite_17_1|>": "arxiv-100620", "<|multi_cite_17_2|>": "arxiv-391931", "<|cite_18|>": "arxiv-309194", "<|multi_cite_19_1|>": "ss-1355555", "<|multi_cite_19_2|>": "ss-2133798", "<|multi_cite_19_3|>": "ss-905628", "<|cite_20|>": "ss-1355554", "<|cite_21|>": "arxiv-435171", "<|cite_22|>": "ss-1834246", "<|cite_23|>": "arxiv-469808", "<|cite_24|>": "ss-737004", "<|cite_25|>": "ss-822419", "<|cite_26|>": "ss-1370787", "<|cite_27|>": "arxiv-403294", "<|multi_cite_28_1|>": "arxiv-465410", "<|multi_cite_28_2|>": "ss-737005", "<|cite_29|>": "ss-1165624", "<|cite_30|>": "arxiv-469808", "<|multi_cite_31_1|>": "ss-1855709", "<|multi_cite_31_2|>": "ss-737006", "<|cite_32|>": "ss-737007", "<|cite_33|>": "arxiv-328009", "<|multi_cite_34_1|>": "arxiv-403294", "<|multi_cite_34_2|>": "arxiv-493387", "<|cite_35|>": "ss-737008", "<|multi_cite_36_1|>": "ss-740302", "<|multi_cite_36_2|>": "ss-1200442", "<|multi_cite_36_3|>": "ss-912669", "<|multi_cite_36_4|>": "ss-1205212", "<|multi_cite_36_5|>": "arxiv-56404", "<|multi_cite_36_6|>": "arxiv-222279", "<|multi_cite_36_7|>": "ss-737009", "<|multi_cite_36_8|>": "ss-1846474", "<|cite_37|>": "arxiv-328009", "<|cite_38|>": "ss-737010", "<|multi_cite_39_1|>": "arxiv-328009", "<|multi_cite_40_1|>": "arxiv-238976", "<|multi_cite_40_2|>": "arxiv-286661", "<|multi_cite_41_1|>": "ss-1189878", "<|multi_cite_41_2|>": "arxiv-420730", "<|multi_cite_41_3|>": "arxiv-381929", "<|multi_cite_41_4|>": "arxiv-439546", "<|multi_cite_41_5|>": "arxiv-467009", "<|multi_cite_41_6|>": "arxiv-470823", "<|cite_42|>": "ss-1189878"}
2010.02744
<|paper_start|> Title: Stepwise Extractive Summarization and Planning with Structured Transformers Abstract: Stepwise Extractive Summarization and Planning with Structured Transformers: We propose encoder-centric stepwise models for extractive summarization using structured transformers -- HiBERT and Extended Transformers. We enable stepwise summarization by injecting the previously generated summary into the structured transformer as an auxiliary sub-structure. Our models are not only efficient in modeling the structure of long inputs, but they also do not rely on task-specific redundancy-aware modeling, making them a general purpose extractive content planner for different tasks. When evaluated on CNN/DailyMail extractive summarization, stepwise models achieve state-of-the-art performance in terms of Rouge without any redundancy aware modeling or sentence filtering. This also holds true for Rotowire table-to-text generation, where our models surpass previously reported metrics for content selection, planning and ordering, highlighting the strength of stepwise modeling. Amongst the two structured transformers we test, stepwise Extended Transformers provides the best performance across both datasets and sets a new standard for these challenges. Introduction \label{sec:intro} Extractive document summarization is the task of creating a summary by identifying (and subsequently concatenating) the most important sentences in a document <|cite_start|> (Reference: LexRank: Graph-based Lexical Centrality as Salience in Text Summarization: We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.) <|cite_end|> <|cite_start|> (Reference: Automatic summarization of API reviews: With the proliferation of online developer forums as informal documentation, developers often share their opinions about the APIs they use. However, given the plethora of opinions available for an API in various online developer forums, it can be challenging for a developer to make informed decisions about the APIs. 
While automatic summarization of opinions have been explored for other domains (e.g., cameras, cars), we found little research that investigates the benefits of summaries of public API reviews. In this paper, we present two algorithms (statistical and aspect-based) to summarize opinions about APIs. To investigate the usefulness of the techniques, we developed, Opiner, an online opinion summarization engine that presents summaries of opinions using both our proposed techniques and existing six off-the-shelf techniques. We investigated the usefulness of Opiner using two case studies, both involving professional software engineers. We found that developers were interested to use our proposed summaries much more frequently than other summaries (daily vs once a year) and that while combined with Stack Overflow, Opiner helped developers to make the right decision with more accuracy and confidence and in less time.) <|cite_end|>. In recent years this task has matured significantly, mostly thanks to advances in deep neural networks. \newcite{Cheng2016Neural} conceptualize extractive summarization as a sequence labeling task in which first a hierarchical long short-term memory network (LSTM; <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|>, <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. 
By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|>) is used to encode a document and then another LSTM is used to predict for each sentence whether it should be included in the summary. This architecture was later adopted by \newcite{NallapatiZM16}, \newcite{Nallapati2017SummaRuNNer}, \newcite{narayan-etal-2018-ranking}, \newcite{zhang2018neural} and \newcite{dong-etal-2018-banditsum}. Following the success of pre-trained transformer-based architectures for many tasks <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) 
<|cite_end|>, the current state-of-the-art approach to extractive summarization uses transformers to learn sentence representations and to rank sentences by their saliency <|cite_start|> (Reference: Fine-tune BERT for Extractive Summarization: BERT, a pre-trained Transformer model, has achieved ground-breaking performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple variant of BERT, for extractive summarization. Our system is the state of the art on the CNN/Dailymail dataset, outperforming the previous best-performed system by 1.65 on ROUGE-L. The codes to reproduce our results are available at https://github.com/nlpyang/BertSum) <|cite_end|> <|cite_start|> (Reference: Text Summarization with Pretrained Encoders: Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm) <|cite_end|> <|cite_start|> (Reference: HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization: Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \cite{devlin:2018:arxiv}, we propose {\sc Hibert} (as shorthand for {\bf HI}erachical {\bf B}idirectional {\bf E}ncoder {\bf R}epresentations from {\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets.) <|cite_end|> <|cite_start|> (Reference: Searching for Effective Neural Extractive Summarization: What Works and What's Next: The recent years have seen remarkable success in the use of deep neural networks on text summarization. However, there is no clear understanding of \textit{why} they perform so well, or \textit{how} they might be improved. In this paper, we seek to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge and learning schemas. 
Additionally, we find an effective way to improve current frameworks and achieve the state-of-the-art result on CNN/DailyMail by a large margin based on our observations and analyses. Hopefully, our work could provide more clues for future research on extractive summarization.) <|cite_end|> <|cite_start|> (Reference: AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models. It is, however, not well-understood if the gain is due to better encoding techniques or better redundancy reduction approaches. Similarly, the contribution of salience versus diversity components on the created summary is not studied well. Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect. Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling diversity explicitly in a separate step, AREDSUM-CTX achieves significantly better performance than AREDSUM-SEQ as well as state-of-the-art extractive summarization baselines.) <|cite_end|>. The top scoring sentences are then assembled to produce an extract of the document. Summaries built in this fashion <|cite_start|> (Reference: Neural Summarization by Extracting Sentences and Words: Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation.) <|cite_end|> <|cite_start|> (Reference: Supplementary: Document Modeling with External Attention for Sentence Extraction: It is a challenging task to rely only on the main body of the document for extraction cues, as it requires document understanding. Documents in practice often have additional information, such as the title, image captions, videos, images and twitter handles, along with the main body of the document. These types of information are often available for newswire articles. Figure 1 shows an example of a newswire article taken from CNN (CNN.com). It shows the additional information such as the title (first block) and the images with their captions (third block) along with the main body of the document (second block). The last block shows a manually written summary of the document in terms of “highlights” to allow readers to quickly gather information on stories. 
As one can see in this example, gold highlights focus on sentences from the fourth paragraph, i.e., on key events such as the “PM’s resignation”, “bribery scandal and its investigation”, “suicide” and “leaving an important note”. Interestingly, the essence of the article is explicitly or implicitly mentioned in the title and the image captions of the document.) <|cite_end|> <|cite_start|> (Reference: Neural Latent Extractive Document Summarization: Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods) given that most summarization datasets only have document-summary pairs. Since these labels might be suboptimal, we propose a latent variable extractive model where sentences are viewed as latent variables and sentences with activated variables are used to infer gold summaries. During training the loss comes \emph{directly} from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively to several recent models.) <|cite_end|> <|cite_start|> (Reference: BanditSum: Extractive Summarization as a Contextual Bandit: In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically-generated extractive labels. We call our approach BanditSum as it treats extractive summarization as a contextual bandit (CB) problem, where the model receives a document to summarize (the context), and chooses a sequence of sentences to include in the summary (the action). A policy gradient reinforcement learning algorithm is used to train the model to select sequences of sentences that maximize ROUGE score. We perform a series of experiments demonstrating that BanditSum is able to achieve ROUGE scores that are better than or comparable to the state-of-the-art for extractive summarization, and converges using significantly fewer update steps than competing approaches. In addition, we show empirically that BanditSum performs significantly better than competing approaches when good summary sentences appear late in the source document.) <|cite_end|> are prone to contain redundant information. Several recent approaches have explored mechanisms to better handle redundancy, such as heuristic-based Trigram Blocking (TriBlk; <|cite_start|> (Reference: Text Summarization with Pretrained Encoders: Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. 
Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm) <|cite_end|>, <|cite_start|> (Reference: Text Summarization with Pretrained Encoders: Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm) <|cite_end|>; <|cite_start|> (Reference: Heterogeneous Graph Neural Networks for Extractive Document Summarization: As a crucial step in extractive document summarization, learning cross-sentence relations has been explored by a plethora of approaches. An intuitive way is to put them in the graph-based neural network, which has a more complex structure for capturing inter-sentence relationships. In this paper, we present a heterogeneous graph-based neural network for extractive summarization (HeterSumGraph), which contains semantic nodes of different granularity levels apart from sentences. These additional nodes act as the intermediary between sentences and enrich the cross-sentence relations. Besides, our graph structure is flexible in natural extension from a single-document setting to multi-document via introducing document nodes. To our knowledge, we are the first one to introduce different types of nodes into graph-based neural networks for extractive document summarization and perform a comprehensive qualitative analysis to investigate their benefits. The code will be released on Github) <|cite_end|>, <|cite_start|> (Reference: Heterogeneous Graph Neural Networks for Extractive Document Summarization: As a crucial step in extractive document summarization, learning cross-sentence relations has been explored by a plethora of approaches. An intuitive way is to put them in the graph-based neural network, which has a more complex structure for capturing inter-sentence relationships. In this paper, we present a heterogeneous graph-based neural network for extractive summarization (HeterSumGraph), which contains semantic nodes of different granularity levels apart from sentences. These additional nodes act as the intermediary between sentences and enrich the cross-sentence relations. Besides, our graph structure is flexible in natural extension from a single-document setting to multi-document via introducing document nodes. 
To our knowledge, we are the first one to introduce different types of nodes into graph-based neural networks for extractive document summarization and perform a comprehensive qualitative analysis to investigate their benefits. The code will be released on Github) <|cite_end|>), handcrafted feature-driven models <|cite_start|> (Reference: Leveraging Contextual Sentence Relations for Extractive Summarization Using a Neural Attention Model: As a framework for extractive summarization, sentence regression has achieved state-of-the-art performance in several widely-used practical systems. The most challenging task within the sentence regression framework is to identify discriminative features to encode a sentence into a feature vector. So far, sentence regression approaches have neglected to use features that capture contextual relations among sentences. We propose a neural network model, Contextual Relation-based Summarization (CRSum), to take advantage of contextual relations among sentences so as to improve the performance of sentence regression. Specifically, we first use sentence relations with a word-level attentive pooling convolutional neural network to construct sentence representations. Then, we use contextual relations with a sentence-level attentive pooling recurrent neural network to construct context representations. Finally, CRSum automatically learns useful contextual features by jointly learning representations of sentences and similarity scores between a sentence and sentences in its context. Using a two-level attention mechanism, CRSum is able to pay attention to important content, i.e., words and sentences, in the surrounding context of a given sentence. We carry out extensive experiments on six benchmark datasets. CRSum alone can achieve comparable performance with state-of-the-art approaches; when combined with a few basic surface features, it significantly outperforms the state-of-the-art in terms of multiple ROUGE metrics.) <|cite_end|> and redundancy-aware neural sequence models <|cite_start|> (Reference: Neural Document Summarization by Jointly Learning to Score and Select Sentences: Sentence scoring and sentence selection are two main steps in extractive document summarization systems. However, previous works treat them as two separated subtasks. In this paper, we present a novel end-to-end neural network framework for extractive document summarization by jointly learning to score and select sentences. It first reads the document sentences with a hierarchical encoder to obtain the representation of sentences. Then it builds the output summary by extracting sentences one by one. Different from previous methods, our approach integrates the selection strategy into the scoring model, which directly predicts the relative importance given previously selected sentences. Experiments on the CNN/Daily Mail dataset show that the proposed framework significantly outperforms the state-of-the-art extractive summarization models.) <|cite_end|> <|cite_start|> (Reference: AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models.
It is, however, not well-understood if the gain is due to better encoding techniques or better redundancy reduction approaches. Similarly, the contribution of salience versus diversity components on the created summary is not studied well. Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect. Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling diversity explicitly in a separate step, AREDSUM-CTX achieves significantly better performance than AREDSUM-SEQ as well as state-of-the-art extractive summarization baselines.) <|cite_end|>. One common problem with these models is that their focus is limited to content overlap and to respecting length budgets. However, these are but a small subset of the dimensions necessary to produce informative and coherent summaries. Ideally, models would utilize enriched document and summary representations in order to implicitly learn better extractive plans for producing summaries <|cite_start|> (Reference: What comes next? Extractive summarization by next-sentence prediction: Existing approaches to automatic summarization assume that a length limit for the summary is given, and view content selection as an optimization problem to maximize informativeness and minimize redundancy within this budget. This framework ignores the fact that human-written summaries have rich internal structure which can be exploited to train a summarization system. We present NEXTSUM, a novel approach to summarization based on a model that predicts the next sentence to include in the summary using not only the source article, but also the summary produced so far. We show that such a model successfully captures summary-specific discourse moves, and leads to better content selection performance, in addition to automatically predicting how long the target summary should be. We perform experiments on the New York Times Annotated Corpus of summaries, where NEXTSUM outperforms lead and content-model summarization baselines by significant margins. We also show that the lengths of summaries produced by our system correlates with the lengths of the human-written gold standards.) <|cite_end|> <|cite_start|> (Reference: Jointly Extracting and Compressing Documents with Summary State Representations: We present a new neural model for text summarization that first extracts sentences from a document and then compresses them. The proposed model offers a balance that sidesteps the difficulties in abstractive methods while generating more concise summaries than extractive methods. In addition, our model dynamically determines the length of the output summary based on the gold summaries it observes during training and does not require length constraints typical to extractive summarization. The model achieves state-of-the-art results on the CNN/DailyMail and Newsroom datasets, improving over current extractive and abstractive methods. Human evaluations demonstrate that our model generates concise and informative summaries. We also make available a new dataset of oracle compressive summaries derived automatically from the CNN/DailyMail reference summaries.) <|cite_end|>. One such method is \emph{stepwise} summarization <|cite_start|> (Reference: What comes next? 
Extractive summarization by next-sentence prediction: Existing approaches to automatic summarization assume that a length limit for the summary is given, and view content selection as an optimization problem to maximize informativeness and minimize redundancy within this budget. This framework ignores the fact that human-written summaries have rich internal structure which can be exploited to train a summarization system. We present NEXTSUM, a novel approach to summarization based on a model that predicts the next sentence to include in the summary using not only the source article, but also the summary produced so far. We show that such a model successfully captures summary-specific discourse moves, and leads to better content selection performance, in addition to automatically predicting how long the target summary should be. We perform experiments on the New York Times Annotated Corpus of summaries, where NEXTSUM outperforms lead and content-model summarization baselines by significant margins. We also show that the lengths of summaries produced by our system correlates with the lengths of the human-written gold standards.) <|cite_end|>, where a summary is constructed incrementally by choosing new content conditioned on previously planned content. In this paper, we propose encoder-centric stepwise models for extractive summarization using \emph{structured transformers}. Structured transformers are transformer-based architectures that have the flexibility to model some form of structure of the input, e.g., hierarchical document structure. In this paper, we specifically study two such architectures -- HiBERT <|cite_start|> (Reference: HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization: Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \cite{devlin:2018:arxiv}, we propose {\sc Hibert} (as shorthand for {\bf HI}erachical {\bf B}idirectional {\bf E}ncoder {\bf R}epresentations from {\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets.) <|cite_end|> and Extended Transformer Construction (ETC; <|cite_start|> (Reference: ETC: encoding long and structured data in transformers: Transformer-based models have pushed the state of the art in many natural language processing tasks. However, one of their main limitations is the quadratic computational and memory cost of the standard attention mechanism. In this paper, we present a new family of Transformer models, which we call the Extended Transformer Construction (ETC), that allows for significant increases in input sequence length by introducing a new global-local attention mechanism between a global memory and the standard input tokens. We also show that combining global-local attention with relative position encodings allows ETC to handle structured data with ease. Empirical results on the Natural Questions data set show the promise of the approach.)
<|cite_end|>, <|cite_start|> (Reference: ETC: encoding long and structured data in transformers: Transformer-based models have pushed the state of the art in many natural language processing tasks. However, one of their main limitations is the quadratic computational and memory cost of the standard attention mechanism. In this paper, we present a new family of Transformer models, which we call the Extended Transformer Construction (ETC), that allows for significant increases in input sequence length by introducing a new global-local attention mechanism between a global memory and the standard input tokens. We also show that combining global-local attention with relative position encodings allows ETC to handle structured data with ease. Empirical results on the Natural Questions data set show the promise of the approach.) <|cite_end|>). Details of these are given in Sections~\ref{sec:stepHiBERT} and~\ref{sec:stepetc}. We enable stepwise summarization by injecting the previously planned summary content into the structured transformer as an auxiliary sub-structure. The model can then holistically learn any document-level coherence properties, such as saliency, redundancy, and ordering, embodied in the gold summaries. This differs from other methods which are either task-specific (e.g., redundancy-aware modeling in <|cite_start|> (Reference: AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models.) <|cite_end|>, <|cite_start|> (Reference: AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models. It is, however, not well-understood if the gain is due to better encoding techniques or better redundancy reduction approaches. Similarly, the contribution of salience versus diversity components on the created summary is not studied well.
Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect. Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling diversity explicitly in a separate step, AREDSUM-CTX achieves significantly better performance than AREDSUM-SEQ as well as state-of-the-art extractive summarization baselines.) <|cite_end|>) or not holistic (e.g., manually curated features in <|cite_start|> (Reference: What comes next? Extractive summarization by next-sentence prediction: Existing approaches to automatic summarization assume that a length limit for the summary is given, and view content selection as an optimization problem to maximize informativeness and minimize redundancy within this budget. This framework ignores the fact that human-written summaries have rich internal structure which can be exploited to train a summarization system. We present NEXTSUM, a novel approach to summarization based on a model that predicts the next sentence to include in the summary using not only the source article, but also the summary produced so far. We show that such a model successfully captures summary-specific discourse moves, and leads to better content selection performance, in addition to automatically predicting how long the target summary should be. We perform experiments on the New York Times Annotated Corpus of summaries, where NEXTSUM outperforms lead and content-model summarization baselines by significant margins. We also show that the lengths of summaries produced by our system correlates with the lengths of the human-written gold standards.) <|cite_end|>, <|cite_start|> (Reference: What comes next? Extractive summarization by next-sentence prediction: Existing approaches to automatic summarization assume that a length limit for the summary is given, and view content selection as an optimization problem to maximize informativeness and minimize redundancy within this budget. This framework ignores the fact that human-written summaries have rich internal structure which can be exploited to train a summarization system. We present NEXTSUM, a novel approach to summarization based on a model that predicts the next sentence to include in the summary using not only the source article, but also the summary produced so far. We show that such a model successfully captures summary-specific discourse moves, and leads to better content selection performance, in addition to automatically predicting how long the target summary should be. We perform experiments on the New York Times Annotated Corpus of summaries, where NEXTSUM outperforms lead and content-model summarization baselines by significant margins. We also show that the lengths of summaries produced by our system correlates with the lengths of the human-written gold standards.) <|cite_end|>). An added advantage of structured encoders is that they break the quadratic attention mechanism of transformers <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. 
Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|>, making them more efficient and able to process longer inputs, instead of truncating the inputs to 512 tokens <|cite_start|> (Reference: Text Summarization with Pretrained Encoders: Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm) <|cite_end|> <|cite_start|> (Reference: AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models. It is, however, not well-understood if the gain is due to better encoding techniques or better redundancy reduction approaches. Similarly, the contribution of salience versus diversity components on the created summary is not studied well. Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect. 
Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling diversity explicitly in a separate step, AREDSUM-CTX achieves significantly better performance than AREDSUM-SEQ as well as state-of-the-art extractive summarization baselines.) <|cite_end|>, which is critical for long inputs and outputs which require non-trivial planning. When evaluated on the CNN/DailyMail summarization dataset <|cite_start|> (Reference: Teaching Machines to Read and Comprehend: Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.) <|cite_end|>, we achieve state-of-the-art performance in terms of Rouge <|cite_start|> (Reference: Automatic evaluation of summaries using n-gram co-occurrence statistics: Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprising well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results.) <|cite_end|> without any redundancy <|cite_start|> (Reference: Neural Document Summarization by Jointly Learning to Score and Select Sentences: Sentence scoring and sentence selection are two main steps in extractive document summarization systems. However, previous works treat them as two separated subtasks. In this paper, we present a novel end-to-end neural network framework for extractive document summarization by jointly learning to score and select sentences. It first reads the document sentences with a hierarchical encoder to obtain the representation of sentences. Then it builds the output summary by extracting sentences one by one. Different from previous methods, our approach integrates the selection strategy into the scoring model, which directly predicts the relative importance given previously selected sentences. Experiments on the CNN/Daily Mail dataset show that the proposed framework significantly outperforms the state-of-the-art extractive summarization models.) <|cite_end|> <|cite_start|> (Reference: AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models. It is, however, not well-understood if the gain is due to better encoding techniques or better redundancy reduction approaches. Similarly, the contribution of salience versus diversity components on the created summary is not studied well. 
Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect. Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling diversity explicitly in a separate step, AREDSUM-CTX achieves significantly better performance than AREDSUM-SEQ as well as state-of-the-art extractive summarization baselines.) <|cite_end|> or sentence selection mechanisms <|cite_start|> (Reference: Text Summarization with Pretrained Encoders: Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm) <|cite_end|>. Our model's task-agnostic approach allows it to implicitly learn and leverage content plans directly from the data. Moreover, our model is built on structured transformers, which are flexible in the type of content (e.g., text or tables) that can be modeled. We demonstrate this by learning an intricate extractive content plan for the Rotowire table-to-text generation task <|cite_start|> (Reference: Challenges in Data-to-Document Generation: Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.) <|cite_end|>; a sketch of what such a stepwise plan looks like is given below.
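To make the notion of an extractive content plan concrete, here is a minimal sketch (our illustration under assumed interfaces, not the implementation of any cited system): a Rotowire-style table is linearized into records, a plan is an ordered list of record indices, and \texttt{score} stands in for any learned scorer — such as the structured transformers studied in this paper — that conditions on the partial plan.

\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Record:
    entity: str  # e.g., "LeBron James" (illustrative values)
    rtype: str   # e.g., "PTS"
    value: str   # e.g., "32"

def stepwise_plan(records: List[Record],
                  score: Callable[[List[Record], List[int]], List[float]],
                  plan_len: int) -> List[int]:
    """Greedy stepwise planning: at each step, rescore every record
    conditioned on the partial plan and append the best unused one."""
    plan: List[int] = []
    for _ in range(plan_len):
        scores = score(records, plan)  # model scores, given the partial plan
        remaining = [i for i in range(len(records)) if i not in plan]
        if not remaining:
            break
        plan.append(max(remaining, key=lambda i: scores[i]))
    return plan
\end{verbatim}

The resulting index sequence fixes both what to mention and in what order, mirroring the plan-then-realize view of the Rotowire task.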
This task requires the generation of long summaries from large score tables detailing the specifics of a sports match, which often necessitates dedicated content selection and planning models to generate a high-quality summary <|cite_start|> (Reference: Challenges in Data-to-Document Generation: Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.) <|cite_end|> <|cite_start|> (Reference: Data-to-Text Generation with Content Selection and Planning: Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines improving the state-of-the-art on the recently released RotoWire dataset.) <|cite_end|>. We show that our stepwise framework achieves higher content selection, planning and ordering scores relative to prior work with task-specific planning mechanisms. The contributions of the paper are as follows: 1) this is the first study to use ETC <|cite_start|> (Reference: ETC: encoding long and structured data in transformers: Transformer-based models have pushed the state of the art in many natural language processing tasks. However, one of their main limitations is the quadratic computational and memory cost of the standard attention mechanism. In this paper, we present a new family of Transformer models, which we call the Extended Transformer Construction (ETC), that allows for significant increases in input sequence length by introducing a new global-local attention mechanism between a global memory and the standard input tokens. We also show that combining global-local attention with relative position encodings allows ETC to handle structured data with ease. Empirical results on the Natural Questions data set show the promise of the approach.)
<|cite_end|> for summarization for its ability and flexibility to better model long and structured inputs; 2) we propose augmentations of two structured transformers, HiBERT and ETC, in order to enable stepwise models for extractive planning; 3) we demonstrate empirically that our models are general purpose and can be adapted as an extractive document summarizer or as a content planner for table-to-text generation; 4) our experiments highlight the effectiveness of stepwise modeling, specifically stepwise ETC, which sets a new standard for both tasks. Related Work \label{sec:related} \paragraph{Redundancy.} Summarization models often use a dedicated {\em sentence selection} step after {\em sentence scoring} to address redundancy. Maximal Marginal Relevance (MMR) <|cite_start|> (Reference: The Use of MMR, Diversity-Based Reranking for Reordering Documents and Producing Summaries: This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.) <|cite_end|> based methods select the content that has the maximal score and is minimally redundant with the previously constructed partial summary. Others treated sentence selection as an optimization problem under some constraints such as summary length <|cite_start|> (Reference: A Study of Global Inference Algorithms in Multi-document Summarization: ) <|cite_end|> <|cite_start|> (Reference: A Class of Submodular Functions for Document Summarization: We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.) <|cite_end|>. \newcite{Liu2019TextSW} and \newcite{wang-extsum-acl20} used heuristic-based Trigram Blocking (TriBlk) for redundancy elimination. \newcite{renetal2017} trained two neural networks with handcrafted features; one is used to rank sentences, and the other one is used to model redundancy during sentence selection. \newcite{zhou-etal-2018-neural} and \newcite{aredsum} proposed redundancy-aware models by modeling redundancy and saliency jointly during the scoring process using neural sequence models.
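For concreteness, the MMR criterion referenced above can be written as follows (our rendering of the standard formulation, where $D$ is the candidate set, $S$ the partial summary, $Q$ a query or document representation, and $\lambda$ trades relevance off against novelty):

\[
\mathrm{MMR} = \arg\max_{s_i \in D \setminus S} \Big[\, \lambda\,\mathrm{Sim}_1(s_i, Q) \;-\; (1-\lambda)\,\max_{s_j \in S} \mathrm{Sim}_2(s_i, s_j) \,\Big].
\]

Trigram Blocking admits an equally compact greedy sketch; the snippet below is our minimal illustration of the heuristic (not code from any cited system), in which a candidate is skipped whenever it shares a trigram with the summary built so far.

\begin{verbatim}
def trigrams(tokens):
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def trigram_blocking(ranked_sentences, budget):
    # ranked_sentences: sentences sorted by a model's salience score.
    summary, seen = [], set()
    for sentence in ranked_sentences:
        grams = trigrams(sentence.split())
        if grams & seen:  # trigram overlap with partial summary -> redundant
            continue
        summary.append(sentence)
        seen |= grams
        if len(summary) == budget:
            break
    return summary
\end{verbatim}

In both sketches the scorer is fixed and redundancy is handled by a separate, hard-coded filter.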
In contrast to these approaches, our models are not redundancy-aware. Instead, they implicitly model redundancy by injecting previously generated summary representations. By virtue of this, our models are not text-specific and can be applied to other tasks (see Section~\ref{sec:exprotowire}). \paragraph{Partial Summary Representations.} Utilizing representations of partially generated summaries is relatively less studied in summarization. \newcite{mendes-etal-2019-jointly} proposed to dynamically model the generated summary using an LSTM to iteratively increment summaries based on previously extracted information. \newcite{nextsum} used a feed-forward neural network driven by hand-curated features capturing the prevalence of domain subtopics in the source and the summary. To the best of our knowledge, our models are the first to use summary representations with structured transformers for summarization. Our models learn to make summary-informed next-sentence predictions without any hand-curated features. \paragraph{Long-form Summarization.} It is well known that better content selection helps abstractive summarizers generate summaries that are not only fluent but also informative <|cite_start|> (Reference: Bottom-Up Abstractive Summarization: Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques, but which can be poor at content selection. This work proposes a simple technique for addressing this issue: use a data-efficient content selector to over-determine phrases in a source document that should be part of the summary. We use this selector as a bottom-up attention step to constrain the model to likely phrases. We show that this approach improves the ability to compress text, while still generating fluent summaries. This two-step process is both simpler and higher performing than other end-to-end content selection models, leading to significant improvements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the content selector can be trained with as little as 1,000 sentences, making it easy to transfer a trained summarizer to a new domain.) <|cite_end|> <|cite_start|> (Reference: A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss: We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-the-art ROUGE scores while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation.) <|cite_end|>. It can be particularly important when generating long abstractive summaries <|cite_start|> (Reference: Generating Wikipedia by Summarizing Long Sequences: We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article.
For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.) <|cite_end|> <|cite_start|> (Reference: Hierarchical Transformers for Multi-Document Summarization: In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill Transformer architecture with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows to share information as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations focusing on similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines.) <|cite_end|> or summarizing multiple documents <|cite_start|> (Reference: Graph-based Neural Multi-Document Summarization: We propose a neural multi-document summarization (MDS) system that incorporates sentence relation graphs. We employ a Graph Convolutional Network (GCN) on the relation graphs, with sentence embeddings obtained from Recurrent Neural Networks as input node features. Through multiple layer-wise propagation, the GCN generates high-level hidden sentence features for salience estimation. We then use a greedy heuristic to extract salient sentences while avoiding redundancy. In our experiments on DUC 2004, we consider three types of sentence relation graphs and demonstrate the advantage of combining sentence relations in graphs with the representation power of deep neural networks. Our model improves upon traditional graph-based extractive approaches and the vanilla GRU sequence model with no graph, and it achieves competitive results against other state-of-the-art multi-document summarization systems.) <|cite_end|>. Earlier multi-document summarization methods have addressed the issue of long form input by graph-based representations of sentences or passages <|cite_start|> (Reference: LexRank: Graph-based Lexical Centrality as Salience in Text Summarization: We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. 
In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.) <|cite_end|> <|cite_start|> (Reference: Towards Coherent Multi-Document Summarization: This paper presents G-FLOW, a novel system for coherent extractive multi-document summarization (MDS). Where previous work on MDS considered sentence selection and ordering separately, G-FLOW introduces a joint model for selection and ordering that balances coherence and salience. G-FLOW’s core representation is a graph that approximates the discourse relations across sentences based on indicators including discourse cues, deverbal nouns, co-reference, and more. This graph enables G-FLOW to estimate the coherence of a candidate summary. We evaluate G-FLOW on Mechanical Turk, and find that it generates dramatically better summaries than an extractive summarizer based on a pipeline of state-of-the-art sentence selection and reordering components, underscoring the value of our joint model.) <|cite_end|>. Recently, \newcite{Yasunaga2017Graph} proposed a neural version of this framework using graph convolutional networks <|cite_start|> (Reference: Semi-Supervised Classification with Graph Convolutional Networks: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.) <|cite_end|>. \newcite{liu-lapata-2019-hierarchical} used a cross-document attention mechanism to share information as opposed to simply concatenating text spans using hierarchical transformers. Similar to this motivation, we also explore better encoding of long inputs with structured transformers. \begin{figure*}[th!] \centering \begin{tabular}{ccc} \includegraphics[scale=0.45]{Transformers2-Transformer.pdf} & \includegraphics[scale=0.45]{Transformers2-HiBert.pdf} & \includegraphics[scale=0.10]{Transformers2-ETC.png} \end{tabular} \vspace{-0.2cm} \caption{Memory usage and attentions in standard transformers <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.
As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|>, HiBERT <|cite_start|> (Reference: HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization: Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \cite{devlin:2018:arxiv}, we propose {\sc Hibert} (as shorthand for {\bf HI}erachical {\bf B}idirectional {\bf E}ncoder {\bf R}epresentations from {\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets.) <|cite_end|> and ETC <|cite_start|> (Reference: ETC: encoding long and structured data in transformers: Transformer-based models have pushed the state of the art in many natural language processing tasks. However, one of their main limitations is the quadratic computational and memory cost of the standard attention mechanism. In this paper, we present a new family of Transformer models, which we call the Extended Transformer Construction (ETC), that allows for significant increases in input sequence length by introducing a new global-local attention mechanism between a global memory and the standard input tokens. We also show that combining global-local attention with relative position encodings allows ETC to handle structured data with ease. Empirical results on the Natural Questions data set show the promise of the approach.) <|cite_end|>.} \label{fig:attention-rep} \vspace{-0.4cm} \end{figure*} \paragraph{Table-to-Text Content Planning.} \newcite{wiseman-etal-2017-challenges} introduced the Rotowire dataset, which requires multi-sentence summaries of large tables. Several works found that the key to generating fluent and informative summaries for this task is to have dedicated content planning and realization steps <|cite_start|> (Reference: Data-to-Text Generation with Content Selection and Planning: Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages.
Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines improving the state-of-the-art on the recently released RotoWire dataset.) <|cite_end|> <|cite_start|> (Reference: University of Edinburgh's submission to the document-level generation and translation shared task: The University of Edinburgh participated in all six tracks: NLG, MT, and MT+NLG with both English and German as targeted languages. For the NLG track, we submitted a multilingual system based on the Content Selection and Planning model of Puduppully et al (2019). For the MT track, we submitted Transformer-based Neural Machine Translation models, where out-of-domain parallel data was augmented with in-domain data extracted from monolingual corpora. Our MT+NLG systems disregard the structured input data and instead rely exclusively on the source summaries.) <|cite_end|> <|cite_start|> (Reference: Selecting, Planning, and Rewriting: A Modular Approach for Data-to-Document Generation and Translation: In this paper, we report our system submissions to all 6 tracks of the WNGT 2019 shared task on Document-Level Generation and Translation. The objective is to generate a textual document from either structured data: generation task, or a document in a different language: translation task. For the translation task, we focused on adapting a large scale system trained on WMT data by fine tuning it on the RotoWire data. For the generation task, we participated with two systems based on a selection and planning model followed by (a) a simple language model generation, and (b) a GPT-2 pre-trained language model approach. The selection and planning module chooses a subset of table records in order, and the language models produce text given such a subset.) <|cite_end|>. \newcite{rotowire-msgpt} and \newcite{rotowire-systran} used a transformer encoder, and \newcite{gong-etal-2019-table} used multi-dimensional hierarchical LSTM encoders to compute better table entry representations. Following these lines of work, we evaluate our models on generating long content plans for this task using structured transformers. <|paper_end|>
[ "<|reference_start|> HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization: Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \\emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \\cite{devlin:2018:arxiv}, we propose {\\sc Hibert} (as shorthand for {\\bf HI}erachical {\\bf B}idirectional {\\bf E}ncoder {\\bf R}epresentations from {\\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets. <|reference_end|>", "<|reference_start|> ETC: encoding long and structured data in transformers: Transformer-based models have pushed the state of the art in many natural language processing tasks. However, one of their main limitations is the quadratic computational and memory cost of the standard attention mechanism. In this paper, we present a new family of Transformer models, which we call the Extended Transformer Construction (ETC), that allows for significant increases in input sequence length by introducing a new global-local attention mechanism between a global memory and the standard input tokens. We also show that combining global-local attention with relative position encodings allows ETC to handle structured data with ease. Empirical results on the Natural Questions data set show the promise of the approach. <|reference_end|>", "<|reference_start|> AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization: Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step. Previous work shows the efficacy of jointly scoring and selecting sentences with neural sequence generation models. It is, however, not well-understood if the gain is due to better encoding techniques or better redundancy reduction approaches. Similarly, the contribution of salience versus diversity components on the created summary is not studied well. Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect. Empirical results on CNN/DailyMail and NYT50 datasets show that by modeling diversity explicitly in a separate step, AREDSUM-CTX achieves significantly better performance than AREDSUM-SEQ as well as state-of-the-art extractive summarization baselines. <|reference_end|>", "<|reference_start|> The Use of MMR, Diversity-Based Reranking for Reordering Documents\nand Producing Summaries: This paper presents a method for combining\nquery-relevance with information-novelty in the context\nof text retrieval and summarization. 
The Maximal\nMarginal Relevance (MMR) criterion strives to reduce\nredundancy while maintaining query relevance in\nre-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results\nindicate some benefits for MMR diversity ranking\nin document retrieval and in single document summarization.\nThe latter are borne out by the recent results of the\nSUMMAC conference in the evaluation of summarization\nsystems. However, the clearest advantage is demonstrated\nin constructing non-redundant multi-document\nsummaries, where MMR results are clearly superior to\nnon-MMR passage selection. <|reference_end|>" ]
[ 8, 27, 38, 44 ]
{"<|multi_cite_1_1|>": "arxiv-24478", "<|multi_cite_1_2|>": "ss-728122", "<|cite_36|>": "ss-710343", "<|cite_30|>": "ss-710343", "<|multi_cite_2_1|>": "arxiv-126595", "<|multi_cite_2_2|>": "arxiv-175879", "<|multi_cite_3_1|>": "arxiv-196632", "<|multi_cite_3_2|>": "arxiv-219897", "<|multi_cite_3_3|>": "arxiv-204460", "<|multi_cite_3_4|>": "arxiv-213401", "<|multi_cite_3_5|>": "ss-1055205", "<|multi_cite_4_1|>": "arxiv-94491", "<|multi_cite_4_2|>": "ss-1259347", "<|multi_cite_4_3|>": "arxiv-169875", "<|multi_cite_4_4|>": "arxiv-173990", "<|cite_37|>": "arxiv-219897", "<|cite_31|>": "arxiv-219897", "<|cite_38|>": "arxiv-261588", "<|cite_32|>": "arxiv-261588", "<|cite_5|>": "ss-1131840", "<|multi_cite_6_1|>": "arxiv-164977", "<|multi_cite_6_2|>": "ss-1055205", "<|multi_cite_7_1|>": "arxiv-187373", "<|multi_cite_7_2|>": "arxiv-198084", "<|cite_8|>": "arxiv-187373", "<|cite_9|>": "arxiv-204460", "<|cite_39|>": "ss-985259", "<|cite_33|>": "ss-985259", "<|cite_40|>": "ss-1055205", "<|cite_34|>": "ss-1055205", "<|cite_41|>": "arxiv-187373", "<|cite_35|>": "arxiv-187373", "<|cite_10|>": "arxiv-175879", "<|multi_cite_11_1|>": "arxiv-219897", "<|multi_cite_11_2|>": "ss-1055205", "<|cite_12|>": "arxiv-79164", "<|cite_13|>": "ss-1370787", "<|multi_cite_14_1|>": "arxiv-164977", "<|multi_cite_14_2|>": "ss-1055205", "<|cite_15|>": "arxiv-219897", "<|cite_16|>": "arxiv-130274", "<|multi_cite_17_1|>": "arxiv-130274", "<|multi_cite_17_2|>": "arxiv-171151", "<|cite_18|>": "ss-985259", "<|cite_19|>": "ss-787970", "<|multi_cite_20_1|>": "ss-1907217", "<|multi_cite_20_2|>": "ss-1708203", "<|multi_cite_21_1|>": "arxiv-170928", "<|multi_cite_21_2|>": "arxiv-158655", "<|multi_cite_22_1|>": "arxiv-146761", "<|multi_cite_22_2|>": "arxiv-207007", "<|cite_23|>": "arxiv-127286", "<|multi_cite_24_1|>": "arxiv-24478", "<|multi_cite_24_2|>": "ss-1970501", "<|cite_25|>": "arxiv-105493", "<|cite_26|>": "arxiv-175879", "<|cite_27|>": "arxiv-204460", "<|cite_28|>": "ss-985259", "<|multi_cite_29_1|>": "arxiv-171151", "<|multi_cite_29_2|>": "ss-1974213", "<|multi_cite_29_3|>": "ss-1973160"}
2406.12223
<|paper_start|> Title: ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations Abstract: ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations: Detecting hate speech and offensive language is essential for maintaining a safe and respectful digital environment. This study examines the limitations of state-of-the-art large language models (LLMs) in identifying offensive content within systematically perturbed data, with a focus on Chinese, a language particularly susceptible to such perturbations. We introduce \textsf{ToxiCloakCN}, an enhanced dataset derived from ToxiCN, augmented with homophonic substitutions and emoji transformations, to test the robustness of LLMs against these cloaking perturbations. Our findings reveal that existing models significantly underperform in detecting offensive content when these perturbations are applied. We provide an in-depth analysis of how different types of offensive content are affected by these perturbations and explore the alignment between human and model explanations of offensiveness. Our work highlights the urgent need for more advanced techniques in offensive language detection to combat the evolving tactics used to evade detection mechanisms. Introduction Offensive language, which includes hate speech, cyberbullying, and adult-oriented content, poses significant risks to user well-being and social harmony <|cite_start|> (Reference: Racial Bias in Hate Speech and Abusive Language Detection Datasets: Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will therefore have a disproportionate negative impact on African-American social media users. Consequently, these systems may discriminate against the groups who are often the targets of the abuse we are trying to detect.) <|cite_end|>. With the rapid expansion and widespread usage of social media platforms, the proliferation of offensive language has become a critical issue. Consequently, social media platforms and researchers have explored developing robust machine learning and linguistic analysis solutions to effectively identify and mitigate the harmful effects of offensive content <|cite_start|> (Reference: Automated Hate Speech Detection and the Problem of Offensive Language: A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. 
We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.) <|cite_end|> <|cite_start|> (Reference: Hate speech detection in Asian languages: a survey: In this study, we present a language-based survey of hate speech detection in Asian languages. The motivation of this survey is to encourage the development of an automated hate speech detection system for Malayalam. Any message from social media spreading negativity in the society related to sex, caste, religion, politics, race etc. can be called a hateful message. This kind of text is very challenging to detect. Here we have taken only language-specific studies for hate speech detection and analyzed the approaches used in each work. We have used three parameters in this paper to analyze the overall scenario of this problem among Asian languages. This study tries to identify the best classification algorithm for this task and also find the relation between classification approach, type and size of dataset and accuracy. So this survey will become the foundation of future studies in this area and will help to understand the challenges also.) <|cite_end|>. Recent advances in Natural Language Processing (NLP), particularly with Large Language Models (LLMs), have significantly improved the ability to detect offensive language across multiple languages <|cite_start|> (Reference: Effective hate-speech detection in Twitter data using recurrent neural networks: ) <|cite_end|> <|cite_start|> (Reference: Offensive Language and Hate Speech Detection with Deep Learning and Transfer Learning: Toxic online speech has become a crucial problem nowadays due to an exponential increase in the use of internet by people from different cultures and educational backgrounds. Differentiating if a text message belongs to hate speech and offensive language is a key challenge in automatic detection of toxic text content. In this paper, we propose an approach to automatically classify tweets into three classes: Hate, offensive and Neither. Using public tweet data set, we first perform experiments to build BI-LSTM models from empty embedding and then we also try the same neural network architecture with pre-trained Glove embedding. Next, we introduce a transfer learning approach for hate speech detection using an existing pre-trained language model BERT (Bidirectional Encoder Representations from Transformers), DistilBert (Distilled version of BERT) and GPT-2 (Generative Pre-Training). We perform hyper parameters tuning analysis of our best model (BI-LSTM) considering different neural network architectures, learn-ratings and normalization methods etc. After tuning the model and with the best combination of parameters, we achieve over 92 percent accuracy upon evaluating it on test data. We also create a class module which contains main functionality including text classification, sentiment checking and text data augmentation. 
This model could serve as an intermediate module between user and Twitter.) <|cite_end|> <|cite_start|> (Reference: {A Survey of Offensive Language Detection for the Arabic Language: The use of offensive language in user-generated content is a serious problem that needs to be addressed with the latest technology. The field of Natural Language Processing (NLP) can support the automatic detection of offensive language. In this survey, we review previous NLP studies that cover Arabic offensive language detection. This survey investigates the state-of-the-art in offensive language detection for the Arabic language, providing a structured overview of previous approaches, including core techniques, tools, resources, methods, and main features used. This work also discusses the limitations and gaps of the previous studies. Findings from this survey emphasize the importance of investing further effort in detecting Arabic offensive language, including the development of benchmark resources and the invention of novel preprocessing and feature extraction techniques.) <|cite_end|> <|cite_start|> (Reference: Building a formal model for hate detection in French corpora: ) <|cite_end|> <|cite_start|> (Reference: A Turkish hate speech dataset and detection system: Social media posts containing hate speech are reproduced and redistributed at an accelerated pace, reaching greater audiences at a higher speed. We present a machine learning system for automatic detection of hate speech in Turkish, along with a hate speech dataset consisting of tweets collected in two separate domains. We first adopted a definition for hate speech that is in line with our goals and amenable to easy annotation; then designed the annotation schema for annotating the collected tweets. The Istanbul Convention dataset consists of tweets posted following the withdrawal of Turkey from the Istanbul Convention. The Refugees dataset was created by collecting tweets about immigrants by filtering based on commonly used keywords related to immigrants. Finally, we have developed a hate speech detection system using the transformer architecture (BERTurk), to be used as a baseline for the collected dataset. The binary classification accuracy is 77% when the system is evaluated using 5-fold cross-validation on the Istanbul Convention dataset and 71% for the Refugee dataset. We also tested a regression model with 0.66 and 0.83 RMSE on a scale of [0-4], for the Istanbul Convention and Refugees datasets.) <|cite_end|> <|cite_start|> (Reference: Hate speech detection in Asian languages: a survey: In this study, we present a language-based survey of hate speech detection in Asian languages. The motivation of this survey is to encourage the development of an automated hate speech detection system for Malayalam. Any message from social media spreading negativity in the society related to sex, caste, religion, politics, race etc. can be called a hateful message. This kind of text is very challenging to detect. Here we have taken only language-specific studies for hate speech detection and analyzed the approaches used in each work. We have used three parameters in this paper to analyze the overall scenario of this problem among Asian languages. This study tries to identify the best classification algorithm for this task and also find the relation between classification approach, type and size of dataset and accuracy. So this survey will become the foundation of future studies in this area and will help to understand the challenges also.) 
<|cite_end|> <|cite_start|> (Reference: Cross-Cultural Transfer Learning for Chinese Offensive Language Detection: Detecting offensive language is a challenging task. Generalizing across different cultures and languages becomes even more challenging: besides lexical, syntactic and semantic differences, pragmatic aspects such as cultural norms and sensitivities, which are particularly relevant in this context, vary greatly. In this paper, we target Chinese offensive language detection and aim to investigate the impact of transfer learning using offensive language detection data from different cultural backgrounds, specifically Korean and English. We find that culture-specific biases in what is considered offensive negatively impact the transferability of language models (LMs) and that LMs trained on diverse cultural data are sensitive to different features in Chinese offensive language detection. In a few-shot learning scenario, however, our study shows promising prospects for non-English offensive language detection with limited resources. Our findings highlight the importance of cross-cultural transfer learning in improving offensive language detection and promoting inclusive digital spaces.) <|cite_end|>. However, these models often struggle with systematically perturbed data designed to evade detection mechanisms. Common perturbation techniques include homophonic substitutions, emoji replacement, insertions, character splits, and synonyms <|cite_start|> (Reference: RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining: Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose RoCBert: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including the semantic, phonetic and visual features. We show all these features areimportant to the model robustness since the attack can be performed in all the three forms. Across 5 Chinese NLU tasks, RoCBert outperforms strong baselines under three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best in the toxic content detection task under human-made attacks.) <|cite_end|> <|cite_start|> (Reference: Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate: Detecting online hate is a complex task, and low-performing models have harmful consequences when used for sensitive applications such as content moderation. Emoji-based hate is an emerging challenge for automated detection. We present HatemojiCheck, a test suite of 3,930 short-form statements that allows us to evaluate performance on hateful language expressed with emoji. Using the test suite, we expose weaknesses in existing hate detection models. To address these weaknesses, we create the HatemojiBuild dataset using a human-and-model-in-the-loop approach. Models built with these 5,912 adversarial examples perform substantially better at detecting emoji-based hate, while retaining strong performance on text-only hate. Both HatemojiCheck and HatemojiBuild are made publicly available. See our Github Repository (https://github.com/HannahKirk/Hatemoji). 
HatemojiCheck, HatemojiBuild, and the final Hatemoji Model are also available on HuggingFace (https://huggingface.co/datasets/HannahRoseKirk/).) <|cite_end|>. These techniques, referred to as ``cloaking'', exploit linguistic nuances to mask offensive content, posing a substantial challenge to both automated systems and human moderators. The Chinese language, in particular, is heavily impacted by these techniques due to intensive lexicon-based censorship, leading to a new linguistic phenomenon <|cite_start|> (Reference: Grass-Mud Horses to Victory: The Phonological Constraints of Subversive Puns: In 2008, Chinese netizens began creating subversive puns. These puns, including the well known “grass-mud horse,” were designed to engage in a satirical online movement against internet censorship of vulgar or politically sensitive words. By examining online subversive puns’ birth and development, this paper presents a phonological analysis of the growing Chinese internet lexicon. First, the relevant phonological features of the puns are identified, which underscore how the game plays with the inherent characteristics of Mandarin. Next, a series of rules or constraints are identified; these highlight both the formulaic nature of subversive puns as well as the flexibility of the language. Finally, using Optimality Theory as a descriptive tool, this paper explores the interaction of universal constraints with several possible new language game constraints. Through this examination, this paper identifies implications for Mandarin lexical access and Mandarin word form encoding. 1. Introduction The Chinese language has a rich history of word play. Language game research dates back to Chao’s (1931) preliminary study in which he outlined a series of 反切语 fanqieyu ‘secret languages’ that made use of the syllable onset and fixed rime spelling system. Branner (2010) has suggested that these games are, in fact, rooted in an even earlier military fanqie cipher dating back to the 16th century. Furthermore, these Chinese language games are not restricted to one ‘regional dialect’ or 方言 fangyan. In addition to the secret languages and games Chao cites, research into Taiwanese (Li 1985), Hakka (Branner 2010), Shanxi dialect (Hou 1988), and Cantonese (Bolton and Hutton 1995) has underscored the ubiquitous creativity and metaphor inherent in speakers throughout China. Language games and secret languages are by no means restricted to Chinese. Laycock (1972) first coined the term ludling by combining the Latin word for ‘game’ 1 Several people have shared their time and suggestions in order to improve this paper. Rebecca Morley was especially helpful with regards to the OT framework. Heather Inwood was equally invaluable, reviewing earlier drafts and offering encouraging feedback on the Chinese internet. Any errors that remain are those of the author.) <|cite_end|> where significant parts of sentences are replaced by either homophones or emojis to mask underlying offensive content or to circumvent censorship rules. Figure \ref{fig:framework} shows two examples of offensive texts cloaked using homophone and emoji replacement techniques. In these examples, the words and phrases highlighted in yellow are replaced with homophones or emojis.
In the first example, homophones are used to replace phrases that identify the target (e.g., “贺楠仁” as the homophone for “河南人,” which means people from the Henan region in China) and offensive terms (e.g., “太贱” is replaced with “肽键”). Similarly, in the second example, the offensive term “舔狗” (i.e., simps) is replaced with \emoji{lick}\emoji{dog}. Using such techniques, users can fool automated offensive language detectors into misclassifying these sentences as non-offensive, even though avid Chinese social media users will have no problem understanding the offensive context of the text. Addressing this problem is crucial to improving the effectiveness of offensive language detection systems. As these evasion techniques evolve, it becomes increasingly important for offensive language detection systems to adapt and accurately identify cloaked offensive content. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{framework.pdf} \caption{Example of cloaked Chinese offensive language using homophone and emoji replacement. By using such techniques, users can fool the automated offensive language detector into misclassifying these offensive texts as normal sentences.} \label{fig:framework} \end{figure} In this work, we introduce \textsf{ToxiCloakCN}, a novel Chinese offensive content dataset that benchmarks content moderation models' ability to detect offensive texts cloaked using homophone and emoji replacements. Specifically, we conduct extensive experiments and evaluate state-of-the-art LLMs on the \textsf{ToxiCloakCN} dataset. The experiments demonstrate that both perturbation methods significantly degrade the models' capabilities in detecting offensive text. We also analyze the effect of prompts on the experimental results by testing the models using six different prompts. Additionally, we analyze the perturbation effects on different types of offensive content: sexism, racism, regional bias, and anti-LGBTQ+. This research underscores the critical need for developing more robust models to effectively moderate cloaked online offensive content. We summarize the main contributions of this paper as follows: \begin{itemize} \item We introduce \textsf{ToxiCloakCN}, a novel dataset specifically designed to evaluate the robustness of LLMs against homophonic and emoji perturbations, addressing a significant gap in current offensive language detection research. \item We conduct a comprehensive evaluation of state-of-the-art LLMs. Our experimental results reveal that leading LLMs struggle to detect cloaked offensive content, highlighting the limitations of current approaches and the need for more advanced detection techniques. \item We analyze how different types of offensive content are impacted by cloaking perturbations, providing critical insights for improving model robustness and effectiveness in real-world applications. \end{itemize} <|paper_end|>
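The homophone and emoji substitutions described above are straightforward to emulate. Below is a minimal Python sketch of such a cloaking perturbation; the lookup tables are illustrative assumptions drawn from the examples in Figure \ref{fig:framework}, and the \texttt{cloak} helper is hypothetical (the actual ToxiCloakCN substitutions are derived systematically from ToxiCN rather than from a hand-written table).

\begin{verbatim}
import random

# Illustrative lookup tables only (assumptions based on the paper's
# examples); ToxiCloakCN builds its substitutions systematically.
HOMOPHONES = {
    "河南人": ["贺楠仁"],            # target-identifying phrase -> homophone
    "太贱": ["肽键"],                # offensive term -> homophone
}
EMOJI_SUBS = {
    "舔狗": "\U0001F445\U0001F436",  # "simp" -> tongue + dog-face emoji
}

def cloak(text, p=1.0, seed=None):
    """Return a cloaked variant of `text` via homophone/emoji substitution."""
    rng = random.Random(seed)
    for term, candidates in HOMOPHONES.items():
        if term in text and rng.random() < p:
            text = text.replace(term, rng.choice(candidates))
    for term, emoji in EMOJI_SUBS.items():
        if term in text and rng.random() < p:
            text = text.replace(term, emoji)
    return text
\end{verbatim}

Scoring an off-the-shelf classifier on both the original and the cloaked variant of each sentence then gives a direct measure of robustness to these perturbations, mirroring the evaluation protocol the paper describes.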
[ "<|reference_start|> Racial Bias in Hate Speech and Abusive Language Detection Datasets: Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will therefore have a disproportionate negative impact on African-American social media users. Consequently, these systems may discriminate against the groups who are often the targets of the abuse we are trying to detect. <|reference_end|>", "<|reference_start|> {A Survey of Offensive Language Detection for the Arabic Language: The use of offensive language in user-generated content is a serious problem that needs to be addressed with the latest technology. The field of Natural Language Processing (NLP) can support the automatic detection of offensive language. In this survey, we review previous NLP studies that cover Arabic offensive language detection. This survey investigates the state-of-the-art in offensive language detection for the Arabic language, providing a structured overview of previous approaches, including core techniques, tools, resources, methods, and main features used. This work also discusses the limitations and gaps of the previous studies. Findings from this survey emphasize the importance of investing further effort in detecting Arabic offensive language, including the development of benchmark resources and the invention of novel preprocessing and feature extraction techniques. <|reference_end|>", "<|reference_start|> Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate: Detecting online hate is a complex task, and low-performing models have harmful consequences when used for sensitive applications such as content moderation. Emoji-based hate is an emerging challenge for automated detection. We present HatemojiCheck, a test suite of 3,930 short-form statements that allows us to evaluate performance on hateful language expressed with emoji. Using the test suite, we expose weaknesses in existing hate detection models. To address these weaknesses, we create the HatemojiBuild dataset using a human-and-model-in-the-loop approach. Models built with these 5,912 adversarial examples perform substantially better at detecting emoji-based hate, while retaining strong performance on text-only hate. Both HatemojiCheck and HatemojiBuild are made publicly available. See our Github Repository (https://github.com/HannahKirk/Hatemoji). HatemojiCheck, HatemojiBuild, and the final Hatemoji Model are also available on HuggingFace (https://huggingface.co/datasets/HannahRoseKirk/). <|reference_end|>", "<|reference_start|> Grass-Mud Horses to Victory: The Phonological Constraints of Subversive Puns: In 2008, Chinese netizens began creating subversive puns. These puns, including the well known “grass -mud horse,” were designed to engag e in a satirical online movement against internet censorship of vulgar or politically sensitive words. 
By examining online subversive puns’ birth and development , this paper presents a phonological analysis of the growing Chinese internet lexicon. First, the relevant phonological features of the puns are identified, which underscore how the game plays with the inherent characteristics of Mandarin. Next, a series of rules or constraints are identified; these highlight both the formulaic nature of subversive puns as well as the flexibility of the language. Finally, using Optimality Theory as a descriptive tool, this paper explores the interaction of universal constraints with several possible new language game constraints. Through this examination, this paper identifies implications for Mandarin lexical access and Mandarin word form encoding. 1. Intr oduction The Chinese language has a rich history of word play. Language game research dates back to Chao’s (1931) preliminary study in which he outlined a series of 反切语 fanqieyu ‘secret languages’ that made use of the syllable onset and fixed rime spelling system. Branner (2010) has suggested that these games are, in fact, rooted in an even earlier military fanqie cipher dating back to the 16 th century. Furthermore, these Chinese language games are not restricted to one ‘regional dialect’ or 方言 fangyan. In addition to the secret languages and games Chao cites, research into Taiwanese (Li 1985), Hakka (Branner 2010), Shanxi dialect (Hou 1988), and Cantonese (Bolton and Hutton 1995) has underscored the ubiquitous creativity and metaphor inherent in speakers throughout China. Language games and secret languages are by no means restricted to Chinese. Laycock (1972) first coined the term ludling by combining the Latin word for ‘game’ 1 Several people have shared their time and suggestions in order to improve this paper. Rebecca Morley was especially helpful with regards to the OT framework. Heather Inwood was equally invaluable, reviewing earlier drafts and offering encouraging feedback on the Chinese internet. Any errors that remain are those of the author. <|reference_end|>" ]
[ 0, 5, 11, 12 ]
{"<|cite_1|>": "arxiv-206730", "<|multi_cite_2_1|>": "arxiv-118845", "<|multi_cite_2_2|>": "ss-1860777", "<|multi_cite_3_1|>": "ss-1239767", "<|multi_cite_3_2|>": "arxiv-359586", "<|multi_cite_3_3|>": "ss-979168", "<|multi_cite_3_4|>": "ss-1860778", "<|multi_cite_3_5|>": "ss-1860779", "<|multi_cite_3_6|>": "ss-1860777", "<|multi_cite_3_8|>": "arxiv-493566", "<|multi_cite_4_1|>": "ss-962265", "<|multi_cite_4_2|>": "arxiv-360690", "<|cite_5|>": "ss-1860780"}
1703.09859
<|paper_start|> Title: Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation Abstract: Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation: We motivate and address a human-in-the-loop variant of the monocular viewpoint estimation task in which the location and class of one semantic object keypoint are available at test time. In order to leverage the keypoint information, we devise a Convolutional Neural Network called Click-Here CNN (CH-CNN) that integrates the keypoint information with activations from the layers that process the image. It transforms the keypoint information into a 2D map that can be used to weigh features from certain parts of the image more heavily. The weighted sum of these spatial features is combined with global image features to provide relevant information to the prediction layers. To train our network, we collect a novel dataset of 3D keypoint annotations on thousands of CAD models, and synthetically render millions of images with 2D keypoint information. On test instances from PASCAL 3D+, our model achieves a mean class accuracy of 90.7%, whereas the state-of-the-art baseline only obtains 85.7% mean class accuracy, justifying our argument for human-in-the-loop inference. Introduction \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{figs/keypoint_guidance_motivation} \end{center} \vspace{-5pt} \caption{Semantic keypoint information can help address ambiguities that are difficult to resolve from the image alone. Each diagram shows the available information on the left, the high-level structure of the model in the middle, and the confidences of the azimuth angle on the right. In the black bars, gray indicates confidence, magenta marks the final prediction, and the green triangle marks the ground truth. The orange star indicates the human-provided keypoint. Both the light mask and orange star on the bottom left image are for visualization purposes only, and are not part of the input to any network.} \label{fig:keypoint_guidance_motivation} \label{fig:pageone} \vspace{-10pt} \end{figure} It is well understood that humans and computers have complementary abilities. Humans, for example, are good at visual perception---even in rather challenging scenarios such as finding a toy in a cluttered room---and, consequently, at the subsequent abstract reasoning from visually acquired information. On the other hand, computers are good at processing large amounts of data quickly and with great precision, such as predicting viewpoints for millions of images to an exact, though possibly inaccurate, degree. Although we, as a community, design systems that automatically extract information from images---and have done this quite well, e.g., <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity.
An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|> <|cite_start|> (Reference: SSD: Single Shot MultiBox Detector: ) <|cite_end|>---there are indeed situations that are beyond the capabilities of current systems, such as inferring the extent of damage to two vehicles involved in a car accident from data acquired by a dash-cam. In such exceptionally challenging cases, integrating the abilities of both humans and computers during inference is necessary; we call this methodology \textit{hybrid intelligence}, borrowing a term from social computing <|cite_start|> (Reference: Kurator: Using the crowd to help families with personal curation tasks: People capture photos, audio recordings, video, and more on a daily basis, but organizing all these digital artifacts quickly becomes a daunting task. Automated solutions struggle to help us manage this data because they cannot understand its meaning. In this paper, we introduce Kurator, a hybrid intelligence system leveraging mixed-expertise crowds to help families curate their personal digital content. Kurator produces a refined set of content via a combination of automated systems able to scale to large data sets and human crowds able to understand the data. Our results with 5 families show that Kurator can reduce the amount of effort needed to find meaningful memories within a large collection. This work also suggests that crowdsourcing can be used effectively even in domains where personal preference is key to accurately solving the task.) <|cite_end|>. This strategy can lead to pipelines that achieve better performance than fully automatic systems without incurring a significant burden on the human (Figure \ref{fig:pageone} illustrates such an example). Indeed, numerous computer vision researchers have begun to investigate tasks inspired by this methodology, such as learning on a budget <|cite_start|> (Reference: Far-Sighted Active Learning on a Budget for Image and Video Recognition: Active learning methods aim to select the most informative unlabeled instances to label first, and can help to focus image or video annotations on the examples that will most improve a recognition system. However, most existing methods only make myopic queries for a single label at a time, retraining at each iteration. We consider the problem where at each iteration the active learner must select a set of examples meeting a given budget of supervision, where the budget is determined by the funds (or time) available to spend on annotation. We formulate the budgeted selection task as a continuous optimization problem where we determine which subset of possible queries should maximize the improvement to the classifier's objective, without overspending the budget. To ensure far-sighted batch requests, we show how to incorporate the predicted change in the model that the candidate examples will induce. 
We demonstrate the proposed algorithm on three datasets for object recognition, activity recognition, and content-based retrieval, and we show its clear practical advantages over random, myopic, and batch selection baselines.) <|cite_end|> and Markov Decision Process-based fusion <|cite_start|> (Reference: Best of Both Worlds: Human-Machine Collaboration for Object Annotation: The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and/or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset.) <|cite_end|>. Continuing in this vein of work, we focus on integrating information provided by a human as additional inference-time input to a novel convolutional neural network (CNN) architecture. We refer to this architecture as the \textit{Click-Here Convolutional Neural Network}, or CH-CNN. In training, we learn how best to make use of the additional keypoint information. We develop a means to encode the location and identity of a single semantic keypoint on an image as the extra human guidance, and automatically learn how to integrate it within the part of the network that processes the image. The human guidance keypoint essentially determines a weighting, or attention mechanism <|cite_start|> (Reference: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.) <|cite_end|>, to identify particularly discriminative locations of information as data flows through the network. To the best of our knowledge, this is the first work to integrate such human guidance into a CNN at inference time. To ground this work, we focus on the specific problem of monocular viewpoint estimation---the problem of identifying the camera's position with respect to the target object from a single RGB image. This challenging problem has applications in numerous areas such as automated driving, robotics, and scene understanding, in many of which we envision a human in the loop during inference.
Although discriminative CNN-based methods have achieved remarkable performance on this task <|cite_start|> (Reference: Viewpoints and Keypoints: We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.) <|cite_end|> <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|> <|cite_start|> (Reference: Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing: Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.) 
<|cite_end|> <|cite_start|> (Reference: Single Image 3D Interpreter Network: Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Network (3D-INN), an end-to-end framework which sequentially estimates 2D keypoint heatmaps and 3D object structure, trained on both real 2D-annotated images and synthetic 3D data. This is made possible mainly by two technical innovations. First, we propose a Projection Layer, which projects estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D structural parameters supervised by 2D annotations on real images. Second, heatmaps of keypoints serve as an intermediate representation connecting real and synthetic data, enabling 3D-INN to benefit from the variation and abundance of synthetic 3D objects, without suffering from the difference between the statistics of real and synthesized images due to imperfect rendering. The network achieves state-of-the-art performance on both 2D keypoint estimation and 3D structure recovery. We also show that the recovered 3D information can be used in other vision applications, such as 3D rendering and image retrieval.) <|cite_end|>, they often make mistakes when faced with three types of challenges: \textit{occlusion}, \textit{truncation}, and \textit{highly symmetrical objects} <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|>. In the first two cases, there is not enough visual information for the model to make the correct prediction, whereas in the third case, the model cannot identify the visual cues necessary to select among multiple plausible viewpoints. Monocular viewpoint estimation is well-suited to our hybrid intelligence setup as humans can locate semantic keypoints on objects, such as the center of the left-front wheel on a car, fairly easily and with high confidence. CH-CNN is able to integrate such a keypoint directly into the inference pipeline. It computes a distance transform based on the keypoint location, combines it with a one-hot vector that indicates the keypoint class label, and then uses these data to generate a weight map that is combined with hidden activations from the convolutional layers that operate on the image. 
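To make the preceding description concrete, here is a minimal PyTorch-style sketch of the keypoint-conditional branch. The tensor shapes (an AlexNet-like \texttt{conv4} map of size $13 \times 13$), the hidden width, and the additive fusion of the distance-transform map with the one-hot class vector are illustrative assumptions rather than the published CH-CNN hyperparameters; only the weight-map-then-weighted-sum structure follows the architecture described here and in Figure \ref{fig:model_architecture}.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def keypoint_distance_map(u, v, h=13, w=13):
    # 2D map of Euclidean distances from each spatial cell to the
    # human-clicked keypoint at (row v, column u).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return ((ys - v) ** 2 + (xs - u) ** 2).float().sqrt()

class KeypointAttention(nn.Module):
    # Learns a softmax weight map over the conv activation "depth
    # columns" from the keypoint map and one-hot keypoint class, then
    # pools the conv features with those weights.
    def __init__(self, num_kp_classes, h=13, w=13, hidden=169):
        super().__init__()
        self.map_fc = nn.Linear(h * w, hidden)        # distance-transform map
        self.cls_fc = nn.Linear(num_kp_classes, hidden)  # one-hot class vector
        self.weight_fc = nn.Linear(hidden, h * w)     # one weight per cell

    def forward(self, conv_feats, kp_map, kp_onehot):
        # conv_feats: (B, C, H, W); kp_map: (B, H, W); kp_onehot: (B, K)
        z = F.relu(self.map_fc(kp_map.flatten(1)) + self.cls_fc(kp_onehot))
        attn = F.softmax(self.weight_fc(z), dim=1)        # (B, H*W) weight map
        cols = conv_feats.flatten(2)                      # (B, C, H*W) columns
        kp_feats = (cols * attn.unsqueeze(1)).sum(dim=2)  # weighted sum: (B, C)
        return kp_feats
\end{verbatim}

The pooled keypoint features would then be concatenated with the global \texttt{fc7} image features before the prediction layers, as the architecture figure's caption describes.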
At a high level, our model learns to extract two types of information---global image information and keypoint-conditional information---and uses them to obtain the final viewpoint prediction. We train CH-CNN with over 8,000 computer-aided design (CAD) models from ShapeNet <|cite_start|> (Reference: ShapeNet: An Information-Rich 3D Model Repository: We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.) <|cite_end|> annotated with a custom, web-based interface. To our knowledge, our keypoint annotation dataset is an order of magnitude larger than the next largest keypoint dataset for ShapeNet CAD models <|cite_start|> (Reference: Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing: Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.) <|cite_end|> in terms of number of annotated models. As our thorough experiments show, we are able to use this human guidance to vastly improve viewpoint estimation performance: on human-guidance instances from the PASCAL 3D+ validation set <|cite_start|> (Reference: Beyond pascal: A benchmark for 3D object detection in the wild: 3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. 
However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d.) <|cite_end|>, a fine-tuned version of the state-of-the-art model from Su et al. <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|> achieves 85.7\% mean class accuracy, while our CH-CNN achieves 90.7\% mean class accuracy. Additionally, our model is well-suited for handling challenges that the state-of-the-art model often fails to overcome, as shown by our qualitative results. We summarize our contributions as follows. First, we propose a novel CNN that integrates two types of information---an image and information about a single keypoint---to output viewpoint predictions; this model is designed to be incorporated into a hybrid-intelligence viewpoint estimation pipeline. Second, to train our model, we collect keypoint locations on thousands of CAD models, and use these data to render millions of synthetic images with 2D keypoint information. Finally, we evaluate our model on the PASCAL 3D+ viewpoint estimation dataset <|cite_start|> (Reference: Beyond pascal: A benchmark for 3D object detection in the wild: 3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. 
PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d.) <|cite_end|> and achieve substantially better performance than the leading state-of-the-art, image-only method, validating our hybrid intelligence-based approach. Our code and 3D CAD keypoint annotations are available on our project website at \href{http://ryanszeto.com/projects/ch-cnn}{\texttt{ryanszeto.com/projects/ch-cnn}}. \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{figs/model_architecture} \end{center} \vspace{-10pt} \caption{The architecture for CH-CNN. A weighting over the \convfour{} activation depth columns is learned by taking linear transformations of the keypoint data and applying a softmax operation to the result. The keypoint features are obtained by taking the sum of each activation depth column weighted by the corresponding value in the weight map. These features are concatenated to the \texttt{fc7} image features to aid with inference. The orange star only visualizes the keypoint in this figure; it is not used as input to the network.} \vspace{-10pt} \label{fig:model_architecture} \end{figure*} Related Work \noindent \textbf{Monocular Viewpoint Estimation.} Viewpoint estimation and pose estimation of rigid objects have been tackled using a wide variety of approaches. One line of work has extended Deformable Part Models (DPMs) <|cite_start|> (Reference: {Object Detection with Discriminatively Trained Part-Based Models: We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.) <|cite_end|> to simultaneously localize objects and predict their viewpoint <|cite_start|> (Reference: Beyond pascal: A benchmark for 3D object detection in the wild: 3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. 
However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d.) <|cite_end|> <|cite_start|> (Reference: Teaching 3d geometry to deformable part models: Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching.) <|cite_end|> <|cite_start|> (Reference: 3d object detection and viewpoint estimation with a deformable {3D} cuboid model: This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patters called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2].) <|cite_end|>. 
However, DPM-based methods can only predict a limited set of viewpoints, since each viewpoint requires a separate set of models. Patch alignment-based approaches identify discriminative patches from the test image and match them to a database of rendered 3D CAD models <|cite_start|> (Reference: {Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models: This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the "chair" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.) <|cite_end|> <|cite_start|> (Reference: Parsing IKEA objects: fine pose estimation: We address the problem of localizing and estimating the fine-pose of objects in the image with exact 3D models. Our main focus is to unify contributions from the 1970s with recent advances in object detection: use local keypoint detectors to find candidate poses and score global alignment of each candidate pose to the image. Moreover, we also provide a new dataset containing fine-aligned objects with their exactly matched 3D models, and a set of models for widely used objects. We also evaluate our algorithm both on object detection and fine pose estimation, and show that our method outperforms state-of-the art algorithms.) <|cite_end|>. More recent approaches have leveraged CNNs <|cite_start|> (Reference: {3D Object Proposals for Accurate Object Class Detection: The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.) <|cite_end|> <|cite_start|> (Reference: Monocular 3D object detection for autonomous driving: The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain high-quality object detections. The focus of this paper is on proposal generation. 
In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.) <|cite_end|> <|cite_start|> (Reference: Single Image 3D Interpreter Network: Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Network (3D-INN), an end-to-end framework which sequentially estimates 2D keypoint heatmaps and 3D object structure, trained on both real 2D-annotated images and synthetic 3D data. This is made possible mainly by two technical innovations. First, we propose a Projection Layer, which projects estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D structural parameters supervised by 2D annotations on real images. Second, heatmaps of keypoints serve as an intermediate representation connecting real and synthetic data, enabling 3D-INN to benefit from the variation and abundance of synthetic 3D objects, without suffering from the difference between the statistics of real and synthesized images due to imperfect rendering. The network achieves state-of-the-art performance on both 2D keypoint estimation and 3D structure recovery. We also show that the recovered 3D information can be used in other vision applications, such as 3D rendering and image retrieval.) <|cite_end|> <|cite_start|> (Reference: Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing: Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.) 
<|cite_end|> <|cite_start|> (Reference: Viewpoints and Keypoints: We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.) <|cite_end|> <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|>, which achieve high performance without requiring the hand-crafted features used by earlier work. Additionally, unlike DPM-based approaches, CNNs extend easily to fine-grained viewpoints by regressing from the image to either a continuous viewpoint space <|cite_start|> (Reference: {3D Object Proposals for Accurate Object Class Detection: The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.) <|cite_end|> <|cite_start|> (Reference: Monocular 3D object detection for autonomous driving: The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain high-quality object detections. The focus of this paper is on proposal generation. 
In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.) <|cite_end|> or a discrete, but fine-grained space <|cite_start|> (Reference: Viewpoints and Keypoints: We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.) <|cite_end|> <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|>. Even better performance can be achieved by supervising the CNN training stage with intermediate representations <|cite_start|> (Reference: Single Image 3D Interpreter Network: Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Network (3D-INN), an end-to-end framework which sequentially estimates 2D keypoint heatmaps and 3D object structure, trained on both real 2D-annotated images and synthetic 3D data. This is made possible mainly by two technical innovations. 
First, we propose a Projection Layer, which projects estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D structural parameters supervised by 2D annotations on real images. Second, heatmaps of keypoints serve as an intermediate representation connecting real and synthetic data, enabling 3D-INN to benefit from the variation and abundance of synthetic 3D objects, without suffering from the difference between the statistics of real and synthesized images due to imperfect rendering. The network achieves state-of-the-art performance on both 2D keypoint estimation and 3D structure recovery. We also show that the recovered 3D information can be used in other vision applications, such as 3D rendering and image retrieval.) <|cite_end|> <|cite_start|> (Reference: Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing: Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.) <|cite_end|>. Nonetheless, most fully-automatic approaches struggle with three specific challenges: occlusion <|cite_start|> (Reference: Beyond pascal: A benchmark for 3D object detection in the wild: 3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d.)
<|cite_end|> <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|> <|cite_start|> (Reference: {Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models: This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the "chair" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.) <|cite_end|>, truncation <|cite_start|> (Reference: Beyond pascal: A benchmark for 3D object detection in the wild: 3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. 
Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d.) <|cite_end|> <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|>, and highly symmetric objects <|cite_start|> (Reference: Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views: Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.) <|cite_end|> <|cite_start|> (Reference: Parsing IKEA objects: fine pose estimation: We address the problem of localizing and estimating the fine-pose of objects in the image with exact 3D models. Our main focus is to unify contributions from the 1970s with recent advances in object detection: use local keypoint detectors to find candidate poses and score global alignment of each candidate pose to the image. Moreover, we also provide a new dataset containing fine-aligned objects with their exactly matched 3D models, and a set of models for widely used objects. We also evaluate our algorithm both on object detection and fine pose estimation, and show that our method outperforms state-of-the art algorithms.) <|cite_end|>. As we show in Section \ref{sec:experiments}, CH-CNN helps reduce the error caused by these challenges. \noindent \textbf{Human Interaction for Vision Tasks.} Most prior work in the vision community on integrating information from humans at inference time is an example of either active learning or dynamic inference.
Active learning approaches reduce the amount of labeled data required for sufficient performance by intelligently selecting unlabeled instances for the human to annotate <|cite_start|> (Reference: Far-Sighted Active Learning on a Budget for Image and Video Recognition: Active learning methods aim to select the most informative unlabeled instances to label first, and can help to focus image or video annotations on the examples that will most improve a recognition system. However, most existing methods only make myopic queries for a single label at a time, retraining at each iteration. We consider the problem where at each iteration the active learner must select a set of examples meeting a given budget of supervision, where the budget is determined by the funds (or time) available to spend on annotation. We formulate the budgeted selection task as a continuous optimization problem where we determine which subset of possible queries should maximize the improvement to the classifier's objective, without overspending the budget. To ensure far-sighted batch requests, we show how to incorporate the predicted change in the model that the candidate examples will induce. We demonstrate the proposed algorithm on three datasets for object recognition, activity recognition, and content-based retrieval, and we show its clear practical advantages over random, myopic, and batch selection baselines.) <|cite_end|> <|cite_start|> (Reference: Video annotation and tracking with active learning: We introduce a novel active learning framework for video annotation. By judiciously choosing which frames a user should annotate, we can obtain highly accurate tracks with minimal user effort. We cast this problem as one of active learning, and show that we can obtain excellent performance by querying frames that, if annotated, would produce a large expected change in the estimated object track. We implement a constrained tracker and compute the expected change for putative annotations with efficient dynamic programming algorithms. We demonstrate our framework on four datasets, including two benchmark datasets constructed with key frame annotations obtained by Amazon Mechanical Turk. Our results indicate that we could obtain equivalent labels for a small fraction of the original cost.) <|cite_end|> <|cite_start|> (Reference: Beyond comparing image pairs: Setwise active learning for relative attributes: It is useful to automatically compare images based on their visual properties - to predict which image is brighter, more feminine, more blurry, etc. However, comparative models are inherently more costly to train than their classification counterparts. Manually labeling all pairwise comparisons is intractable, so which pairs should a human supervisor compare? We explore active learning strategies for training relative attribute ranking functions, with the goal of requesting human comparisons only where they are most informative. We introduce a novel criterion that requests a partial ordering for a set of examples that minimizes the total rank margin in attribute space, subject to a visual diversity constraint. The setwise criterion helps amortize effort by identifying mutually informative comparisons, and the diversity requirement safeguards against requests a human viewer will find ambiguous. We develop an efficient strategy to search for sets that meet this criterion. On three challenging datasets and experiments with "live" online annotators, the proposed method outperforms both traditional passive learning as well as existing active rank learning methods.) <|cite_end|>. Our task differs from active learning in that the information from the human (the keypoint) is available at \textit{inference time} rather than \textit{training time}, and we leverage auxiliary human information to improve the accuracy of our model rather than to achieve sufficient performance with fewer examples. In dynamic inference, a system proposes questions with the goal of improving the confidence or quality of its final answer <|cite_start|> (Reference: Best of Both Worlds: Human-Machine Collaboration for Object Annotation: The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and/or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset.) <|cite_end|> <|cite_start|> (Reference: Visual Recognition with Humans in the Loop: ) <|cite_end|> <|cite_start|> (Reference: Multiclass Recognition and Part Localization with Humans in the Loop: We propose a visual recognition system that is designed for fine-grained visual categorization. The system is composed of a machine and a human user. The user, who is unable to carry out the recognition task by himself, is interactively asked to provide two heterogeneous forms of information: clicking on object parts and answering binary questions. The machine intelligently selects the most informative question to pose to the user in order to identify the object's class as quickly as possible.
By leveraging computer vision and analyzing the user responses, the overall amount of human effort required, measured in seconds, is minimized. We demonstrate promising results on a challenging dataset of uncropped images, achieving a significant average reduction in human effort over previous methods.) <|cite_end|> <|cite_start|> (Reference: Similarity Comparisons for Interactive Fine-Grained Categorization: Current human-in-the-loop fine-grained visual categorization systems depend on a predefined vocabulary of attributes and parts, usually determined by experts. In this work, we move away from that expert-driven and attribute-centric paradigm and present a novel interactive classification system that incorporates computer vision and perceptual similarity metrics in a unified framework. At test time, users are asked to judge relative similarity between a query image and various sets of images, these general queries do not require expert-defined terminology and are applicable to other domains and basic-level categories, enabling a flexible, efficient, and scalable system for fine-grained categorization with humans in the loop. Our system outperforms existing state-of-the-art systems for relevance feedback-based image retrieval as well as interactive classification, resulting in a reduction of up to 43% in the average number of questions needed to correctly classify an image.) <|cite_end|> <|cite_start|> (Reference: Click Carving: Segmenting Objects in Video with Point Clicks: We present a novel form of interactive video object segmentation where a few clicks by the user helps the system produce a full spatio-temporal segmentation of the object of interest. Whereas conventional interactive pipelines take the user's initialization as a starting point, we show the value in the system taking the lead even in initialization. In particular, for a given video frame, the system precomputes a ranked list of thousands of possible segmentation hypotheses (also referred to as object region proposals) using image and motion cues. Then, the user looks at the top ranked proposals, and clicks on the object boundary to carve away erroneous ones. This process iterates (typically 2-3 times), and each time the system revises the top ranked proposal set, until the user is satisfied with a resulting segmentation mask. Finally, the mask is propagated across the video to produce a spatio-temporal object tube. On three challenging datasets, we provide extensive comparisons with both existing work and simpler alternative methods. In all, the proposed Click Carving approach strikes an excellent balance of accuracy and human effort. It outperforms all similarly fast methods, and is competitive or better than those requiring 2 to 12 times the effort.) <|cite_end|>. This line of work has demonstrated the potential of incorporating human input at inference time. In contrast to work in dynamic inference, which emphasizes the process of selecting questions for the human to answer, we focus on the problem of learning how to integrate answers in an end-to-end approach for viewpoint estimation CNNs. <|paper_end|>
[ "<|reference_start|> Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <|reference_end|>", "<|reference_start|> Kurator: Using the crowd to help families with personal curation tasks: People capture photos, audio recordings, video, and more on a daily basis, but organizing all these digital artifacts quickly becomes a daunting task. Automated solutions struggle to help us manage this data because they cannot understand its meaning. In this paper, we introduce Kurator, a hybrid intelligence system leveraging mixed-expertise crowds to help families curate their personal digital content. Kurator produces a refined set of content via a combination of automated systems able to scale to large data sets and human crowds able to understand the data. Our results with 5 families show that Kurator can reduce the amount of effort needed to find meaningful memories within a large collection. This work also suggests that crowdsourcing can be used effectively even in domains where personal preference is key to accurately solving the task. <|reference_end|>", "<|reference_start|> Best of Both Worlds: Human-Machine Collaboration for Object Annotation: The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and/or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset. 
<|reference_end|>", "<|reference_start|> Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing: Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training. <|reference_end|>" ]
[ 0, 2, 4, 12 ]
{"<|multi_cite_1_1|>": "arxiv-88870", "<|multi_cite_1_2|>": "ss-697426", "<|cite_2|>": "ss-1940574", "<|cite_3|>": "ss-1675398", "<|cite_4|>": "ss-1009178", "<|cite_5|>": "arxiv-72863", "<|multi_cite_6_1|>": "arxiv-69129", "<|multi_cite_6_2|>": "ss-1229191", "<|multi_cite_6_3|>": "arxiv-112129", "<|multi_cite_6_4|>": "arxiv-96935", "<|cite_7|>": "ss-1229191", "<|cite_8|>": "arxiv-88804", "<|cite_9|>": "arxiv-112129", "<|cite_10|>": "ss-969926", "<|cite_11|>": "ss-1229191", "<|cite_12|>": "ss-969926", "<|cite_13|>": "ss-680918", "<|multi_cite_14_1|>": "ss-969926", "<|multi_cite_14_2|>": "ss-1005337", "<|multi_cite_14_3|>": "ss-845513", "<|multi_cite_15_1|>": "ss-1158472", "<|multi_cite_15_2|>": "ss-977283", "<|multi_cite_16_1|>": "ss-924299", "<|multi_cite_16_2|>": "ss-772764", "<|multi_cite_16_3|>": "arxiv-96935", "<|multi_cite_16_4|>": "arxiv-112129", "<|multi_cite_16_5|>": "arxiv-69129", "<|multi_cite_16_6|>": "ss-1229191", "<|multi_cite_17_1|>": "ss-924299", "<|multi_cite_17_2|>": "ss-772764", "<|multi_cite_18_1|>": "arxiv-69129", "<|multi_cite_18_2|>": "ss-1229191", "<|multi_cite_19_1|>": "arxiv-96935", "<|multi_cite_19_2|>": "arxiv-112129", "<|multi_cite_20_1|>": "ss-969926", "<|multi_cite_20_2|>": "ss-1229191", "<|multi_cite_20_3|>": "ss-1158472", "<|multi_cite_21_1|>": "ss-969926", "<|multi_cite_21_2|>": "ss-1229191", "<|multi_cite_22_1|>": "ss-1229191", "<|multi_cite_22_2|>": "ss-977283", "<|multi_cite_23_1|>": "ss-1675398", "<|multi_cite_23_2|>": "ss-2334068", "<|multi_cite_23_3|>": "ss-1675398", "<|multi_cite_23_4|>": "ss-1118379", "<|multi_cite_24_1|>": "ss-1009178", "<|multi_cite_24_2|>": "ss-1375471", "<|multi_cite_24_3|>": "ss-2378165", "<|multi_cite_24_4|>": "ss-1301908", "<|multi_cite_24_5|>": "arxiv-101490"}
2010.00055
<|paper_start|> Title: Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information Abstract: Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information: Vector Symbolic Architectures belong to a family of related cognitive modeling approaches that encode symbols and structures in high-dimensional vectors. Similar to human subjects, whose capacity to process and store information or concepts in short-term memory is subject to numerical restrictions, the capacity of information that can be encoded in such vector representations is limited; this limitation is one way of modeling the numerical restrictions to cognition. In this paper, we analyze these limits on the information capacity of distributed representations. We focus our analysis on simple superposition and more complex, structured representations involving convolutive powers to encode spatial information. In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector. Introduction \label{sec:introduction} Understanding and building cognitive systems has seen extensive research over the last decades, leading to the development of several cognitive architectures. A cognitive architecture is a \enquote{general proposal about the representation and processes that produce intelligent thought} <|cite_start|> (Reference: {Cognitive Architectures: A cognitive architecture is a general proposal about the representations and processes that produce intelligent thought. Cognitive architectures have primarily been used to explain important aspects of human thinking such as problem solving, memory, and learning. But they can also be used as blueprints for designing computers and robots that possess some of the cognitive abilities of humans. The most influential cognitive architectures that have been developed are either rule-based, using if-then rules and procedures that operate on them to explain thinking, or connectionist, using artificial neural networks. This chapter will describe the central structures and processes of these two kind of architectures, and review how well they succeed as general theories of mental processing. I argue that advances in neuroscience hold the promise for producing a general cognitive theory that encompasses the advantages of both rule-based and connectionist architectures. What is an explanation in cognitive science? In keeping with much recent philosophical research on explanation, I maintain that scientific explanations are typically descriptions of mechanisms that produce the phenomena to be explained (Bechtel and Abrahamsen, 2005; Machamer, Darden and Craver, 2000). A mechanism is a system of related parts whose interactions produce regular changes. For example, to explain how a bicycle works, we describe how its parts such as the pedals, chain, and wheels are connected to each other and how they interact to produce the movement of the bike.
A cognitive architecture is a proposal about the kinds of mental representation and computational procedure that constitute a mechanism for explaining a broad range of kinds of thinking. A complete unified general theory of cognition would provide consciousness. Let us now review the history of cognitive architectures. The term " cognitive architecture " developed from the idea of a computer architecture, which originated with a description of the first widely used computer, the IBM 360 (Amdahl, Blaaw, and Brooks, 1964). A computer architecture is the conceptual structure and functional behavior of a system as seen by a programmer, not the computer's physical implementation. John Anderson's …) <|cite_end|>. On the one hand, these architectures are used to explain and better understand important aspects of human behavior and intelligence. On the other hand, they are also used to design computers and robots mimicking certain cognitive abilities of humans. \acfp{VSA} <|cite_start|> (Reference: Vector S-Parametric Analysis of Signal Phase Dynamic Radio Images: ) <|cite_end|> refer to a family of related cognitive modeling approaches that represent symbols and structures by mapping them to (high-dimensional) vectors. Such vectors are one variant of distributed representations in the sense that information is captured over all dimensions of the vector instead of one single number, which allows both symbol-like and numerical structures to be encoded in a similar, unified way. Additionally, the architectures' algebraic operations allow manipulation and combination of represented entities into structured representations. There are several architectures such as \ac{MAP}, \acp{BSC} <|cite_start|> (Reference: Birnbaum Importance for Linear Consecutive-k-out-of-n Systems With Sparse d: Since Birnbaum importance was introduced in 1969, there have been more than twenty kinds of importance measures so far. Among the various measures, Birnbaum importance plays an extremely important role because many importance measures have been defined under its illumination and have relationships with it. A lot of work has been done for Birnbaum importance in consecutive- $k$ systems since the systems were introduced. Because the problems in practice are increasingly complicated, in 2007, Zhao proposed consecutive- $k$ systems with sparse $d$ , which is an extension of the current consecutive- $k$ systems. In this paper, we study Birnbaum importance for linear consecutive- $k$ -out-of- $n$ systems with sparse $d$ . Some equations on Birnbaum importance are proposed. With these equations, the ranking of components in the system on the basis of Birnbaum importance is given; and then some patterns of ranking are presented. Finally, two numerical examples are given to illustrate the results obtained in this paper.) <|cite_end|> and \acp{HRR} <|cite_start|> (Reference: A New Technique for Verifying the Consistency of Distributed R-Trees: The ever-increasing of the large spatial datasets and the widely application of the complex computation have motivated the emergence of distributed algorithms to process spatial operations efficiently. The R-tree index is broadly used by researches as a distributed spatial structure for indexing and retrieval of spatial objects. However, a big challenge has arisen, that is, how to check the consistency of distributed R-Trees.  In the past few years researches have been published on both distributed R-Tree and verification of distributed systems.
Though none of them has proposed a technique to check the consistency of distributed R-Trees. This article presents a new approach for verifying the consistency of distributed R-Trees, which is called RConsistency.  It allows collect information about the distributed R-Tree once it has been created.  RConsistency also collects information about the distribute R-Tree and can helps to reduce the overlapping and dead area. It can be used with any index similar to R-Tree, since the RConsistency algorithm uses the nodes organization of the R-Tree to collect consistency information.  The algorithm was used on DistGeo, a platform to process distributed spatial operations.  A graphic tool, named RConsistency Visualizer, was developed to show the output of the RConsistency algorithm.) <|cite_end|>, which propose different compressed multiplication operations that replace the initially used tensor product <|cite_start|> (Reference: Tensor P-Spline Smoothing for Spatial Analysis of Plant Breeding Trials: Large agricultural field trials may display irregular spatial trends that cannot be fully captured by a purely randomization-based analysis. For this reason, paralleling the development of analysis-of-variance procedures for randomized field trials, there is a long history of spatial modelling for field trials, starting with the early work of Papadakis on nearest neighbour analysis, which can be cast in terms of first or second differences among neighbouring plot values. This kind of spatial modelling is amenable to a natural extension using P-splines, as has been demonstrated in recent publications in the field. Here, we consider the P-spline framework, focussing on model options that are easy to implement in linear mixed model packages. Two examples serve to illustrate and evaluate the methods. A key conclusion is that first differences are rather competitive with second differences. A further key observation is that second differences require special attention regarding the representation of the null space of the smooth terms for spatial interaction, and that an unstructured variance-covariance structure is required to ensure invariance to translation and rotation of eigenvectors associated with that null space. We develop a strategy that permits fitting this model with ease, but the approach is more demanding than that needed for fitting models using first differences. Hence, even though in other areas second differences are very commonly used in the application of P-splines, our main conclusion is that with field trials first differences have advantages for routine use.) <|cite_end|> and result in vectors with the same dimension as the input vectors. One advantage of this approach is that the number of dimensions remains fixed, independent of the number of entities combined through the architecture's algebraic operations. Schlegel et al. <|cite_start|> (Reference: Measuring security development in information technologies: A scientometric framework using arXiv e-prints: ) <|cite_end|> give an overview of eight different variants of \acp{VSA} and compare their properties and characteristics. \acp{VSA} have been employed in a wide variety of application domains, for instance, as one building block for implementing cognitive tasks such as \acp{RPM} <|cite_start|> (Reference: {A Neural Model of Rule Generation in Inductive Reasoning: Inductive reasoning is a fundamental and complex aspect of human intelligence.
In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The model is able to generate the rules necessary to correctly solve Raven's items, as well as recreate many of the experimental effects observed in human subjects.) <|cite_end|> in \acp{SNN} <|cite_start|> (Reference: Transport properties of heterostructures composed of Mo(S,Se)$_2$ on \emph{h}-BN: ) <|cite_end|> for the \ac{Spaun} model <|cite_start|> (Reference: A Large-scale Model of the Functioning Brain: ) <|cite_end|>. Furthermore, \acp{VSA} have been used for encoding and manipulating concepts <|cite_start|> (Reference: Concepts as Semantic Pointers: A Framework and Computational Model: The reconciliation of theories of concepts based on prototypes, exemplars, and theory-like structures is a longstanding problem in cognitive science. In response to this problem, researchers have recently tended to adopt either hybrid theories that combine various kinds of representational structure, or eliminative theories that replace concepts with a more finely grained taxonomy of mental representations. In this paper, we describe an alternative approach involving a single class of mental representations called "semantic pointers." Semantic pointers are symbol-like representations that result from the compression and recursive binding of perceptual, lexical, and motor representations, effectively integrating traditional connectionist and symbolic approaches. We present a computational model using semantic pointers that replicates experimental data from categorization studies involving each prior paradigm.
We argue that a framework involving semantic pointers can provide a unified account of conceptual phenomena, and we compare our framework to existing alternatives in accounting for the scope, content, recursive combination, and neural implementation of concepts.) <|cite_end|> as well as for human-scale knowledge representation of language vocabularies <|cite_start|> (Reference: Levothyroxine absorption test: a therapeutic strategy for improving medication adherence: ) <|cite_end|>. Kleyko et al. <|cite_start|> (Reference: Imitation of honey bees’ concept learning processes using Vector Symbolic Architectures: ) <|cite_end|> used \acp{VSA} to imitate the concept learning capabilities of honey bees. In robotics <|cite_start|> (Reference: An Introduction to Hyperdimensional Computing for Robotics: ) <|cite_end|>, \acp{VSA} have been used to learn navigation policies for simple reactive behaviors to control a Braitenberg-vehicle robot <|cite_start|> (Reference: Learning Vector Symbolic Architectures for Reactive Robot Behaviours: Vector Symbolic Architectures (VSA) combine a hypervector space and a set of operations on these vectors. Hypervectors provide powerful and noise-robust representations and VSAs are associated with promising theoretical properties for approaching high-level cognitive tasks. However, a major drawback of VSAs is the lack of opportunities to learn them from training data. Their power is merely an effect of good (and elaborate) design rather than learning. We exploit highlevel knowledge about the structure of reactive robot problems to learn a VSA based on training data. We demonstrate preliminary results on a simple navigation task. Given a successful demonstration of a navigation run by pairs of sensor input and actuator output, the system learns a single hypervector that encodes this reactive behaviour. When executing (and combining) such VSA-based behaviours, the advantages of hypervectors (i.e. the representational power and robustness to noise) are preserved. Moreover, a particular beauty of this approach is that it can learn encodings for behaviours that have exactly the same form (a hypervector) no matter how complex the sensor input or the behaviours are.) <|cite_end|>. In previous work, we proposed an automotive environment representation based on the \ac{SPA}, one particular \ac{VSA}, and applied this representation to tasks such as context classification <|cite_start|> (Reference: Towards cognitive automotive environment modelling: reasoning based on vector representations: . In this paper, we propose a novel approach to knowledge representation for automotive environment modelling based on Vector Symbolic Architectures (VSAs). We build a vector representation describing structured information and relations within the current scene based on high-level object-lists perceived by individual sensors. Such a representation can be applied to different tasks with little modifications. In a sample instantiation, we focus on two example tasks, namely driving context classification and simple behavior prediction, to demonstrate the general applicability of our approach. Allowing efficient implementation in Spiking Neural Networks (SNNs), we envision to improve task performance of our approach through online-learning.)
<|cite_end|> and vehicle trajectory prediction <|cite_start|> (Reference: An Investigation of Vehicle Behavior Prediction Using a Vector Power Representation to Encode Spatial Positions of Multiple Objects and Neural Networks: Predicting future behavior and positions of other traffic participants from observations is a key problem that needs to be solved by human drivers and automated vehicles alike to safely navigate their environment and to reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector-representation to encapsulate spatial information of multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is 3-fold: we hypothesize that our structured vector-representation will be able to capture these relations and mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed while being independent of the number of other vehicles encoded in addition to the target vehicle. Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input for a long short-term memory (LSTM) network for sequence to sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, simple feed-forward neural networks as well as a simple linear prediction model for reference. We analyze advantages and drawbacks of the presented methods and identify specific driving situations where our approach performs best. We use characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors depending on the current driving situation to achieve the best possible forecast.) <|cite_end|>. For the latter <|cite_start|> (Reference: An Investigation of Vehicle Behavior Prediction Using a Vector Power Representation to Encode Spatial Positions of Multiple Objects and Neural Networks: Predicting future behavior and positions of other traffic participants from observations is a key problem that needs to be solved by human drivers and automated vehicles alike to safely navigate their environment and to reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector-representation to encapsulate spatial information of multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is 3-fold: we hypothesize that our structured vector-representation will be able to capture these relations and mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed while being independent of the number of other vehicles encoded in addition to the target vehicle. 
Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input for a long short-term memory (LSTM) network for sequence to sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, simple feed-forward neural networks as well as a simple linear prediction model for reference. We analyze advantages and drawbacks of the presented methods and identify specific driving situations where our approach performs best. We use characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors depending on the current driving situation to achieve the best possible forecast.) <|cite_end|>, we used the convolutive power of vectors to encapsulate spatial positions of several vehicles in vectors of fixed length (cf. Fig.~\ref{fig:spa_power_scene} and Sec.~\ref{sec:materials_and_methods}). Komer et al. <|cite_start|> (Reference: A Neural Representation of Continuous Space using Fractional Binding: We present a novel method for constructing neurally implemented spatial representations that we show to be useful for building models of spatial cognition. This method represents continuous (i.e., real-valued) spaces using neurons, and identifies a set of operations for manipulating these representations. Specifically, we use “fractional binding” to construct “spatial semantic pointers” (SSPs) that we use to generate and manipulate representations of spatial maps encoding the positions of objects. We show how these representations can be transformed to answer queries about the location and identities of objects, move the relative or global position of items, and answer queries about regions of space, among other things. We demonstrate that the neural implementation in spiking networks of SSPs have similar accuracy and capacity as the mathematical ideal.) <|cite_end|> propose a similar representation of continuous space using convolutive powers and analyze it from a neural perspective. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth]{scene.eps} \caption{Visualization of the convolutive vector power encoding one particular driving scene in a \num{512}-dimensional vector. The left plot depicts a scene from a real-world driving data set, while the middle and right plots visualize the similarity between the representation vector of that scene and auxiliary comparison vectors created from a sequence of discrete values as a heat map for the target vehicle (middle) and other cars (right). } \label{fig:spa_power_scene} \end{figure*} However, given the mathematical properties of \acp{VSA}, there are systematic limitations to the amount of information that can be encoded in such a vector representation. These limitations are closely tied to the chosen dimension of the underlying vector space and can be regarded as a feature of such modeling architectures, since they mirror the limited cognitive capacity of living beings, who likewise cannot store unlimited amounts of information.
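To make the convolutive power encoding tangible, the following is a minimal sketch of fractional binding with random unitary base vectors, in the spirit of the spatial semantic pointer construction cited above; the dimension, the base vectors X and Y, and the probed positions are illustrative assumptions, not the exact configuration of our system.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 512  # vector dimension as in the figure; assumed even below

def unitary_vector(d):
    # Random unitary vector: all Fourier coefficients have magnitude 1,
    # so arbitrary (even fractional) convolutive powers preserve the norm.
    phases = rng.uniform(-np.pi, np.pi, size=d)
    fc = np.exp(1j * phases)
    fc[0] = 1.0
    fc[d // 2] = 1.0                               # real-valued bin for even d
    fc[d // 2 + 1:] = np.conj(fc[1:d // 2][::-1])  # conjugate symmetry
    return np.fft.ifft(fc).real

X, Y = unitary_vector(d), unitary_vector(d)

def encode_position(x, y):
    # S(x, y) = X^x circularly convolved with Y^y, computed in the
    # Fourier domain, where the power acts element-wise on the spectrum.
    return np.fft.ifft(np.fft.fft(X) ** x * np.fft.fft(Y) ** y).real

s = encode_position(2.0, 3.0)
# Probing with comparison vectors for a grid of positions yields a
# similarity peak at the encoded position (2, 3).
for qx in (1.0, 2.0, 3.0):
    print(qx, round(float(np.dot(s, encode_position(qx, 3.0))), 3))
\end{verbatim}
Because every Fourier coefficient of a unitary base vector has magnitude one, arbitrary real-valued exponents leave the vector norm unchanged, which is the property that makes continuous positions encodable and yields similarity peaks of the kind visualized in Fig.~\ref{fig:spa_power_scene}.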
Considering human subjects, for instance, the capacity to process and store information or concepts in short-term memory, as well as performance on other cognitive tasks, is subject to numerical restrictions <|cite_start|> (Reference: The Magical Number Seven, Plus or Minus Two: Some Limits On Our Capacity For Processing Information: First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence or chunks, we manage to break (or at least stretch) this informational bottleneck. Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. Third, the concepts and measures provided by the theory of information provide a quantitative way of getting at some of these questions. The theory provides us with a yardstick for calibrating our stimulus materials and for measuring the performance of our subjects. In the interests of communication I have suppressed the technical details of information measurement and have tried to express the ideas in more familiar terms; I hope this paraphrase will not lead you to think they are not useful in research. Informational concepts have already proved valuable in the study of discrimination and of language; they promise a great deal in the study of learning and memory; and it has even been proposed that they can be useful in the study of concept formation. A lot of questions that seemed fruitless twenty or thirty years ago may now be worth another look. In fact, I feel that my story here must stop just as it begins to get really interesting. And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven levels of hell, the seven primary colors, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory? For the present I propose to withhold judgment. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence.) <|cite_end|>. Hence, numerical limitations of cognitive architectures like \acp{VSA} are one way of modeling the numerical restrictions to cognition observed in human subjects.
In our context of interest, i.e., automated driving <|cite_start|> (Reference: An Investigation of Vehicle Behavior Prediction Using a Vector Power Representation to Encode Spatial Positions of Multiple Objects and Neural Networks: Predicting future behavior and positions of other traffic participants from observations is a key problem that needs to be solved by human drivers and automated vehicles alike to safely navigate their environment and to reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector-representation to encapsulate spatial information of multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is 3-fold: we hypothesize that our structured vector-representation will be able to capture these relations and mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed while being independent of the number of other vehicles encoded in addition to the target vehicle. Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input for a long short-term memory (LSTM) network for sequence to sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, simple feed-forward neural networks as well as a simple linear prediction model for reference. We analyze advantages and drawbacks of the presented methods and identify specific driving situations where our approach performs best. We use characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors depending on the current driving situation to achieve the best possible forecast.) <|cite_end|>, however, we need to analyze these restrictions imposed by \acp{VSA} in general and the \ac{SPA} in particular to provide upper bounds on the amount of information that can be stored in our vector representation. \subsection{Contribution} \label{subsec:contribution} In this paper, we analyze the limits of the information capacity of distributed representations with the goal of finding bounds for, e.g., the number of concepts that can effectively be stored in a single vector before the accumulation of noise makes it impossible to retrieve the original individual vectors. Therefore, our contribution is a two-stage analysis: First, we analyze the amount of information that can effectively be stored in a single vector through superposition (i.e., addition) of several concept vectors. A similar but slightly different experiment has been conducted in <|cite_start|> (Reference: Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts: The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable.
Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.) <|cite_end|>: the atomic vocabulary vectors, referred to as elemental vectors in <|cite_start|> (Reference: Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts: The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.) <|cite_end|>, are sparse in the sense that they mostly contain \num{0} elements, and the superposed vectors are normalized after adding them. Furthermore, <|cite_start|> (Reference: Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts: The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable.
<|cite_end|> only compares the similarity between the superposition and the original vector with the similarity between the original vector and the most recently added random vector as a baseline for the expected similarity between randomly chosen vectors. In contrast, we calculate the similarity between the superposition vector and $n$ other random vectors for reference. Second, we analyze the information capacity of vector representations involving the convolutive vector power for encapsulating spatial information. Given our scene representation proposed in <|cite_start|> (Reference: An Investigation of Vehicle Behavior Prediction Using a Vector Power Representation to Encode Spatial Positions of Multiple Objects and Neural Networks: Predicting future behavior and positions of other traffic participants from observations is a key problem that needs to be solved by human drivers and automated vehicles alike to safely navigate their environment and to reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector-representation to encapsulate spatial information of multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is 3-fold: we hypothesize that our structured vector-representation will be able to capture these relations and mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed while being independent of the number of other vehicles encoded in addition to the target vehicle. Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input for a long short-term memory (LSTM) network for sequence to sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, simple feed-forward neural networks as well as a simple linear prediction model for reference. We analyze advantages and drawbacks of the presented methods and identify specific driving situations where our approach performs best. We use characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors depending on the current driving situation to achieve the best possible forecast.) <|cite_end|> (cf. Fig.~\ref{fig:spa_power_scene} and Sec.~\ref{sec:materials_and_methods}), we are primarily interested in representing two-dimensional values in vectors, which is why we focus our analysis of the convolutive power encoding scheme on this case. In our analysis, we show that the information capacity is tightly linked to the dimension of the underlying vector space and, furthermore, we give upper bounds for the capacity of superposition and convolutive power representations for three different vector dimensionalities. <|paper_end|>
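As a concrete illustration of the superposition experiment outlined in the contribution above, the following minimal sketch measures whether all bundled concept vectors remain retrievable against a set of random reference vectors; the dimension, vocabulary size, and bundle size are illustrative assumptions, not the exact protocol of our analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, vocab_size, k = 512, 1000, 30  # illustrative dimension, vocabulary, bundle size

# Random unit vectors serve as atomic concept vectors.
vocab = rng.standard_normal((vocab_size, d))
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)

# Superpose (add) the first k concept vectors and normalize the result.
s = vocab[:k].sum(axis=0)
s /= np.linalg.norm(s)

sims = vocab @ s
# Retrieval succeeds if every stored vector is more similar to the
# superposition than any of the remaining random reference vectors.
print("weakest stored similarity:     ", sims[:k].min())
print("strongest reference similarity:", sims[k:].max())
print("all k vectors retrievable:     ", sims[:k].min() > sims[k:].max())
\end{verbatim}
Increasing the bundle size k, or decreasing the dimension d, shrinks the gap between the two printed similarities until retrieval eventually fails, which is exactly the capacity limit the analysis quantifies.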
[ "<|reference_start|> Vector S-Parametric Analysis of Signal Phase Dynamic Radio Images: <|reference_end|>", "<|reference_start|> A New Technique for Verifying the Consistency of Distributed R-Trees: The ever-increasing of the large spatial datasets and the widely application of the complex computation have motivated the emergence of distributed algorithms to process spatial operations efficiently. The R-tree index is broadly used by researches as a distributed spatial structure for indexing and retrieval of spatial objects. However, a big challenge has arisen, that is, how to check the consistency of distributed R-Trees.  In the past few years researches have been published on both distributed R-Tree and verification of distributed systems. Though none of them has proposed a technique to check the consistency of distributed R-Trees. This article presents a new approach for verifying the consistency of distributed R-Trees, which is called RConsistency.  It allows collect information about the distributed R-Tree once it has been created.  RConsistency also collects information about the distribute R-Tree and can helps to reduce the overlapping and dead area. It can be used with any index similar to R-Tree, since the RConsistency algorithm uses the nodes organization of the R-Tree to collect consistency information.  The algorithm was used on DistGeo, a platform to process distributed spatial operations.  A graphic tool, named RConsistency Visualizer, was developed to show the output of the RConsistency algorithm. <|reference_end|>", "<|reference_start|> Concepts as Semantic Pointers: A Framework and Computational Model: The reconciliation of theories of concepts based on prototypes, exemplars, and theory-like structures is a longstanding problem in cognitive science. In response to this problem, researchers have recently tended to adopt either hybrid theories that combine various kinds of representational structure, or eliminative theories that replace concepts with a more finely grained taxonomy of mental representations. In this paper, we describe an alternative approach involving a single class of mental representations called \"semantic pointers.\" Semantic pointers are symbol-like representations that result from the compression and recursive binding of perceptual, lexical, and motor representations, effectively integrating traditional connectionist and symbolic approaches. We present a computational model using semantic pointers that replicates experimental data from categorization studies involving each prior paradigm. We argue that a framework involving semantic pointers can provide a unified account of conceptual phenomena, and we compare our framework to existing alternatives in accounting for the scope, content, recursive combination, and neural implementation of concepts. <|reference_end|>", "<|reference_start|> The Magical Number Seven, Plus or Minus Two: Some Limits On Our Capacity For Processing Information: First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence or chunks, we manage to break (or at least stretch) this informational bottleneck. Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. 
In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. Third, the concepts and measures provided by the theory of information provide a quantitative way of getting at some of these questions. The theory provides us with a yardstick for calibrating our stimulus materials and for measuring the performance of our subjects. In the interests of communication I have suppressed the technical details of information measurement and have tried to express the ideas in more familiar terms; I hope this paraphrase will not lead you to think they are not useful in research. Informational concepts have already proved valuable in the study of discrimination and of language; they promise a great deal in the study of learning and memory; and it has even been proposed that they can be useful in the study of concept formation. A lot of questions that seemed fruitless twenty or thirty years ago may now be worth another look. In fact, I feel that my story here must stop just as it begins to get really interesting. And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven levels of hell, the seven primary colors, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory? For the present I propose to withhold judgment. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence. <|reference_end|>" ]
[ 1, 3, 9, 18 ]
{"<|cite_1|>": "ss-1977247", "<|cite_2|>": "ss-1977248", "<|cite_4|>": "ss-1977249", "<|cite_5|>": "ss-1937515", "<|cite_6|>": "ss-1867525", "<|cite_7|>": "ss-981752", "<|cite_8|>": "ss-1977250", "<|cite_9|>": "ss-888331", "<|cite_10|>": "ss-935360", "<|cite_11|>": "ss-1339012", "<|cite_12|>": "ss-697662", "<|cite_13|>": "ss-1977251", "<|cite_14|>": "ss-1525892", "<|cite_15|>": "ss-1378953", "<|cite_16|>": "ss-1977252", "<|cite_17|>": "ss-1977253", "<|cite_18|>": "ss-1977253", "<|cite_19|>": "ss-1525903", "<|cite_20|>": "ss-713858", "<|cite_21|>": "ss-1977253", "<|cite_22|>": "ss-1977254", "<|cite_23|>": "ss-1977254", "<|cite_24|>": "ss-1977254", "<|cite_25|>": "ss-1977253"}
2204.11304
<|paper_start|> Title: Dictionary Attacks on Speaker Verification Abstract: Dictionary Attacks on Speaker Verification: In this paper, we propose dictionary attacks against speaker verification - a novel attack vector that aims to match a large fraction of the speaker population by chance. We introduce a generic formulation of the attack that can be used with various speech representations and threat models. The attacker uses adversarial optimization to maximize raw similarity of speaker embeddings between a seed speech sample and a proxy population. The resulting master voice successfully matches a non-trivial fraction of people in an unknown population. Adversarial waveforms obtained with our approach can match on average 69% of females and 38% of males enrolled in the target system at a strict decision threshold calibrated to yield a false alarm rate of 1%. By using the attack with a black-box voice cloning system, we obtain master voices that are effective in the most challenging conditions and transferable between speaker encoders. We also show that, combined with multiple attempts, this attack exposes these systems to even more serious security issues. Introduction Biometric technologies constitute one of the most popular solutions to user authentication. They can offer high reliability and better user experience than classic password-based systems, especially on mobile devices <|cite_start|> (Reference: Biometrics: Trust, but Verify: Over the past two decades, biometric recognition has exploded into a plethora of different applications around the globe. This proliferation can be attributed to the high levels of authentication accuracy and user convenience that biometric recognition systems afford end-users. However, in-spite of the success of biometric recognition systems, there are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems that create an element of mistrust in their use - both by the scientific community and also the public at large. Some of these problems include: i) questions related to system recognition performance, ii) security (spoof attacks, adversarial attacks, template reconstruction attacks and demographic information leakage), iii) uncertainty over the bias and fairness of the systems to all users, iv) explainability of the seemingly black-box decisions made by most recognition systems, and v) concerns over data centralization and user privacy. In this paper, we provide an overview of each of the aforementioned open-ended challenges. We survey work that has been conducted to address each of these concerns and highlight the issues requiring further attention. Finally, we provide insights into how the biometric community can address core biometric recognition systems design issues to better instill trust, fairness, and security for all.) <|cite_end|>. Among the plethora of available modalities, the most commonly deployed verification systems look at faces <|cite_start|> (Reference: Deep Face Recognition: A Survey: ) <|cite_end|>, fingerprints <|cite_start|> (Reference: Handbook of Fingerprint Recognition, Second Edition: ) <|cite_end|>, and speech <|cite_start|> (Reference: Speaker Recognition by Machines and Humans: A tutorial review: Identifying a person by his or her voice is an important human trait most take for granted in natural human-to-human interaction/communication.
Speaking to someone over the telephone usually begins by identifying who is speaking and, at least in cases of familiar speakers, a subjective verification by the listener that the identity is correct and the conversation can proceed. Automatic speaker-recognition systems have emerged as an important means of verifying identity in many e-commerce applications as well as in general business interactions, forensics, and law enforcement. Human experts trained in forensic speaker recognition can perform this task even better by examining a set of acoustic, prosodic, and linguistic characteristics of speech in a general approach referred to as structured listening. Techniques in forensic speaker recognition have been developed for many years by forensic speech scientists and linguists to help reduce any potential bias or preconceived understanding as to the validity of an unknown audio sample and a reference template from a potential suspect. Experienced researchers in signal processing and machine learning continue to develop automatic algorithms to effectively perform speaker recognition, with ever-improving performance, to the point where automatic systems start to perform on par with human listeners. In this article, we review the literature on speaker recognition by machines and humans, with an emphasis on prominent speaker-modeling techniques that have emerged in the last decade for automatic systems. We discuss different aspects of automatic systems, including voice-activity detection (VAD), features, speaker models, standard evaluation data sets, and performance metrics. Human speaker recognition is discussed in two parts: the first part involves forensic speaker-recognition methods, and the second illustrates how a naïve listener performs this task from a neuroscience perspective. We conclude this review with a comparative study of human versus machine speaker recognition and attempt to point out strengths and weaknesses of each.) <|cite_end|> - all of which can be used in modern smartphones. In this study, we focus on speaker verification, a key component of voice assistants, which represent a rapidly growing human-computer interaction method popularized by smart speakers <|cite_start|> (Reference: {Alexa, Siri, Cortana, and more: an introduction to voice assistants: ABSTRACT Voice assistants are software agents that can interpret human speech and respond via synthesized voices. Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Assistant are the most popular voice assistants and are embedded in smartphones or dedicated home speakers. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands. This column will explore the basic workings and common features of today’s voice assistants. It will also discuss some of the privacy and security issues inherent to voice assistants and some potential future uses for these devices. As voice assistants become more widely used, librarians will want to be familiar with their operation and perhaps consider them as a means to deliver library services and materials.) <|cite_end|>. Like other biometric modalities, speech remains susceptible to attacks <|cite_start|> (Reference: Biometrics: Trust, but Verify: Over the past two decades, biometric recognition has exploded into a plethora of different applications around the globe.
This proliferation can be attributed to the high levels of authentication accuracy and user convenience that biometric recognition systems afford end-users. However, in-spite of the success of biometric recognition systems, there are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems that create an element of mistrust in their use - both by the scientific community and also the public at large. Some of these problems include: i) questions related to system recognition performance, ii) security (spoof attacks, adversarial attacks, template reconstruction attacks and demographic information leakage), iii) uncertainty over the bias and fairness of the systems to all users, iv) explainability of the seemingly black-box decisions made by most recognition systems, and v) concerns over data centralization and user privacy. In this paper, we provide an overview of each of the aforementioned open-ended challenges. We survey work that has been conducted to address each of these concerns and highlight the issues requiring further attention. Finally, we provide insights into how the biometric community can address core biometric recognition systems design issues to better instill trust, fairness, and security for all.) <|cite_end|> which target both speech recognition (e.g., by crafting hidden voice commands <|cite_start|> (Reference: Prince of Wales's Hospital Fund for London: Amongst those present were: Lord Rowton, Lord Rothschild, Lord Iveagh, Lord Farquhar, the President of the Royal Society (Lord Lister), the Chairman of the London School Board (Lord Reay), Sir Savile Crossley, Sir Henry Burdett, K.C.B., Cardinal Vaughan, the Chief Rabbi (the Rev. Dr. Adler), the Rev. T. Bowman Stephenson, D.D., Mr. Sydney Buxton, M.P., Mr. Julius Wernher, and Mr. J. G. Craggs. Sir Savile Crossley, Honorary Secretary, read letters of regret from the following, who were unable) <|cite_end|>) and speaker verification (e.g., impersonation via spoofing, replay, or voice synthesis/conversion <|cite_start|> (Reference: {SAS:: "Task-led" teaching takes the practice process as its main thread, task-led courses as its main body, and task-driven learning as its primary mode; through demonstration and explanation in class and guidance during hands-on practice, students are guided to complete the designed "tasks" and thereby achieve the teaching objectives. Taking the practical course "SAS Statistical Software" as an example, this paper adopts the "task-led" teaching mode and constructs a teaching structure design scheme for the "SAS Statistical Software" course, so that students participate actively and autonomously in acquiring knowledge in the classroom, thereby obtaining comparatively good teaching results.) <|cite_end|> <|cite_start|> (Reference: Adversarial Attacks on GMM I-Vector Based Speaker Verification Systems: This work investigates the vulnerability of Gaussian Mixture Model (GMM) i-vector based speaker verification systems to adversarial attacks, and the transferability of adversarial samples crafted from GMM i-vector based systems to x-vector based systems. In detail, we formulate the GMM i-vector system as a scoring function of enrollment and testing utterance pairs. Then we leverage the fast gradient sign method (FGSM) to optimize testing utterances for adversarial samples generation. These adversarial samples are used to attack both GMM i-vector and x-vector systems. We measure the system vulnerability by the degradation of equal error rate and false acceptance rate. Experiment results show that GMM i-vector systems are seriously vulnerable to adversarial attacks, and the crafted adversarial samples are proved to be transferable and pose threats to neural network speaker embedding based systems (e.g. x-vector systems).) <|cite_end|>).
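To illustrate the gradient-based attack class cited above, the following is a minimal FGSM-style sketch against a differentiable speaker-similarity score; here `encoder` is a placeholder for any differentiable speaker encoder and, like the step size, is an assumption for illustration rather than a component of the cited systems.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm_impersonation_step(x, enrolled_emb, encoder, eps=1e-3):
    # One FGSM step that nudges waveform x toward a higher cosine
    # similarity with an enrolled speaker embedding (illustrative only).
    x = x.clone().requires_grad_(True)
    score = F.cosine_similarity(encoder(x), enrolled_emb, dim=-1).mean()
    score.backward()
    # Ascend the similarity score under an L-infinity perturbation budget.
    return (x + eps * x.grad.sign()).detach()
\end{verbatim}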
Speaker impersonation studies to date focus exclusively on \emph{targeted attacks}, which make two critical assumptions: (i) there is a specific single \emph{victim} (i.e., a target identity whose voice the attacker tries to imitate) and (ii) a sample of the victim's voice is available (or needs to be obtained). While the required sample size varies with the attack method and authentication protocol (e.g., text-independent <|cite_start|> (Reference: A Tutorial on Text-independent Speaker Verification: This paper presents an overview of a state-of-the-art text-independent speaker verification system. First, an introduction proposes a modular scheme of the training and test phases of a speaker verification system. Then, the speech parameterization most commonly used in speaker verification, namely, cepstral analysis, is detailed. Gaussian mixture modeling, which is the speaker modeling technique used in most systems, is then explained. A few speaker modeling alternatives, namely, neural networks and support vector machines, are mentioned. Normalization of scores is then explained, as this is a very important step to deal with real-world data. The evaluation of a speaker verification system is then detailed, and the detection error trade-off (DET) curve is explained. Several extensions of speaker verification are then enumerated, including speaker tracking and segmentation by speakers. Then, some applications of speaker verification are proposed, including on-site applications, remote applications, applications relative to structuring audio information, and games. Issues concerning the forensic area are then recalled, as we believe it is very important to inform people about the actual performance and limitations of speaker verification systems. This paper concludes by giving a few research trends in speaker verification for the next couple of years.) <|cite_end|> <|cite_start|> (Reference: Deep Neural Network Embeddings with Gating Mechanisms for Text-Independent Speaker Verification: In this paper, gating mechanisms are applied in deep neural network (DNN) training for x-vector-based text-independent speaker verification. First, a gated convolution neural network (GCNN) is employed for modeling the frame-level embedding layers. Compared with the time-delay DNN (TDNN), the GCNN can obtain more expressive frame-level representations through carefully designed memory cell and gating mechanisms. Moreover, we propose a novel gated-attention statistics pooling strategy in which the attention scores are shared with the output gate. The gated-attention statistics pooling combines both gating and attention mechanisms into one framework; therefore, we can capture more useful information in the temporal pooling layer. Experiments are carried out using the NIST SRE16 and SRE18 evaluation datasets. The results demonstrate the effectiveness of the GCNN and show that the proposed gated-attention statistics pooling can further improve the performance.) <|cite_end|> <|cite_start|> (Reference: Introduction to the Issue on Spoofing and Countermeasures for Automatic Speaker Verification: The papers in this special issue focus on automatic speaker verification (ASV) technologies and applications for their use. ASV offers a low-cost and flexible solution to biometric authentication.
While the reliability of ASV systems is now considered sufficient to support mass-market adoption, there are concerns that the technology is vulnerable to spoofing, also referred to as presentation attacks. Spoofing refers to an attack whereby a fraudster attempts to manipulate a biometric system by masquerading as another, enrolled person. Replayed, synthesized and converted speech spoofing attacks can all be used to present high-quality, convincing speech signals which are representative of other, specific speakers and thus present a genuine threat to the reliability of ASV authentication systems.) <|cite_end|> or interactive challenge-response <|cite_start|> (Reference: End-to-End Text-Dependent Speaker Verification: In this paper we present a data-driven, integrated approach to speaker verification, which maps a test utterance and a few reference utterances directly to a single score for verification and jointly optimizes the system's components using the same evaluation protocol and metric as at test time. Such an approach will result in simple and efficient systems, requiring little domain-specific knowledge and making few model assumptions. We implement the idea by formulating the problem as a single neural network architecture, including the estimation of a speaker model on only a few utterances, and evaluate it on our internal "Ok Google" benchmark for text-dependent speaker verification. The proposed approach appears to be very effective for big data applications like ours that require highly accurate, easy-to-maintain systems with a small footprint.) <|cite_end|> <|cite_start|> (Reference: Deep Neural Networks For Small Footprint Text-dependent Speaker Verification: In this paper we investigate the use of deep neural networks (DNNs) for a small footprint text-dependent speaker verification task. At development stage, a DNN is trained to classify speakers at the frame-level. During speaker enrollment, the trained DNN is used to extract speaker specific features from the last hidden layer. The average of these speaker features, or d-vector, is taken as the speaker model. At evaluation stage, a d-vector is extracted for each utterance and compared to the enrolled speaker model to make a verification decision. Experimental results show the DNN based speaker verification system achieves good performance compared to a popular i-vector system on a small footprint text-dependent speaker verification task. In addition, the DNN based system is more robust to additive noise and outperforms the i-vector system at low False Rejection operating points. Finally the combined system outperforms the i-vector system by 14% and 25% relative in equal error rate (EER) for clean and noisy conditions respectively.) <|cite_end|>), the principle remains the same. In this paper, we propose a novel attack vector against speaker verification systems: \emph{untargeted dictionary attacks}. In contrast to targeted attacks, the goal is to match a non-trivial fraction of the user population by pure chance, without any knowledge of the victim's identity or voice. Such an attack could be leveraged for unlocking a phone found on the street or facilitating mass-scale voice commands to voice assistants in compromised home networks <|cite_start|> (Reference: Compromised Computers Meet Voice Assistants: Stealthily Exfiltrating Data as Voice over Telephony: New security concerns arise due to the growing popularity of voice assistants (VA) in home and enterprise networks.
We explore how malware infected computers can encode sensitive data into audio and leverage nearby VAs to exfiltrate it. Such low cost attacks can be launched remotely, at scale, and can bypass network defenses. By using Dual-Tone Multi-Frequency tones to encode data into audio that is played over ordinary computer speakers, modest amounts of data (e.g., a kilobyte) can be transmitted with a phone call lasting a few minutes. This can be done while making the audio nearly inaudible for most people. With the help of a prototype built by us, we experimentally assess the impact of several factors that impact data transfer rates and transmission accuracy achieved by such attacks. Our results show that voice assistants in the vicinity of computers can pose new threats to data stored on them.) <|cite_end|>. Our approach involves adversarial optimization of a novel attack objective and can be applied to arbitrary speech representations (e.g., waveforms, spectrograms, speaker embeddings), making it adaptable to different systems and verification protocols (e.g., text-dependent or independent). This attack opens up a novel threat against the voice modality. The feasibility of dictionary attacks has recently been shown for the fingerprint <|cite_start|> (Reference: MasterPrint: Exploring the Vulnerability of Partial Fingerprint-Based Authentication Systems: This paper investigates the security of partial fingerprint-based authentication systems, especially when multiple fingerprints of a user are enrolled. A number of consumer electronic devices, such as smartphones, are beginning to incorporate fingerprint sensors for user authentication. The sensors embedded in these devices are generally small and the resulting images are, therefore, limited in size. To compensate for the limited size, these devices often acquire multiple partial impressions of a single finger during enrollment to ensure that at least one of them will successfully match with the image obtained from the user during authentication. Furthermore, in some cases, the user is allowed to enroll multiple fingers, and the impressions pertaining to multiple partial fingers are associated with the same identity (i.e., one user). A user is said to be successfully authenticated if the partial fingerprint obtained during authentication matches any one of the stored templates. This paper investigates the possibility of generating a “MasterPrint,” a synthetic or real partial fingerprint that serendipitously matches one or more of the stored templates for a significant number of users. Our preliminary results on an optical fingerprint data set and a capacitive fingerprint data set indicate that it is indeed possible to locate or generate partial fingerprints that can be used to impersonate a large number of users. In this regard, we expose a potential vulnerability of partial fingerprint-based authentication systems, especially when multiple impressions are enrolled per finger.) <|cite_end|> <|cite_start|> (Reference: DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution: Recent research has demonstrated the vulnerability of fingerprint recognition systems to dictionary attacks based on MasterPrints. MasterPrints are real or synthetic fingerprints that can fortuitously match with a large number of fingerprints thereby undermining the security afforded by fingerprint systems. Previous work by Roy et al. generated synthetic MasterPrints at the feature-level.
In this work we generate complete image-level MasterPrints known as DeepMasterPrints, whose attack accuracy is found to be much superior to that of previous methods. The proposed method, referred to as Latent Variable Evolution, is based on training a Generative Adversarial Network on a set of real fingerprint images. Stochastic search in the form of the Covariance Matrix Adaptation Evolution Strategy is then used to search for latent input variables to the generator network that can maximize the number of impostor matches as assessed by a fingerprint recognizer. Experiments convey the efficacy of the proposed method in generating DeepMasterPrints. The underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.) <|cite_end|> and the face <|cite_start|> (Reference: Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems: Due to its convenience, biometric authentication, especially face authentication, has become increasingly mainstream and thus is now a prime target for attackers. Presentation attacks and face morphing are typical types of attack. Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks, in which a wolf sample matches many enrolled user templates. In this work, we demonstrated that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems and that the master face concept can be generalized in some cases. Motivated by recent similar work in the fingerprint domain, we generated high-quality master faces by using the state-of-the-art face generator StyleGAN in a process called latent variable evolution. Experiments demonstrated that even attackers with limited resources using only pre-trained models available on the Internet can initiate master face attacks. The results, in addition to demonstrating performance from the attacker's point of view, can also be used to clarify and improve the performance of face recognition systems and harden face authentication systems.) <|cite_end|> modalities. The inspiration comes from the \emph{biometric menagerie} <|cite_start|> (Reference: The biometric menagerie: It is commonly accepted that users of a biometric system may have differing degrees of accuracy within the system. Some people may have trouble authenticating, while others may be particularly vulnerable to impersonation. Goats, wolves, and lambs are labels commonly applied to these problem users. These user types are defined in terms of verification performance when users are matched against themselves (goats) or when matched against others (lambs and wolves). The relationship between a user's genuine and impostor match results suggests four new user groups: worms, doves, chameleons, and phantoms. We establish formal definitions for these animals and a statistical test for their existence. A thorough investigation is conducted using a broad range of biometric modalities, including 2D and 3D faces, fingerprints, iris, speech, and keystroke dynamics. Patterns that emerge from the results expose novel, important, and encouraging insights into the nature of biometric match results. A new framework for the evaluation of biometric systems based on the biometric menagerie, as opposed to collective statistics, is proposed.) <|cite_end|>, the well-established observation that numerous biometric modalities exhibit large variations in matching propensity across individuals.
In particular, the most relevant group for our work is represented by people who tend to match others easily (\emph{wolves}) and people highly susceptible to being matched (\emph{lambs}). Dictionary attacks aim to exploit this phenomenon to generate \emph{master biometric examples} that maximize the impersonation capability of the generated samples. Combined with rapidly improving generative machine-learning models, e.g., generative adversarial networks <|cite_start|> (Reference: Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision tasks. However, this model alone does not produce images...) <|cite_end|> or variational auto-encoders <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|>, this attack may soon create the perfect storm for biometric authentication. Our study takes the first step toward formalizing and extensively evaluating dictionary attacks against speaker verification systems. The main contributions of our work are listed below. \begin{enumerate} \item We propose a generic formulation of the attack based on adversarial optimization driven by raw similarity of speaker embeddings. The attack can be applied to various speech representation domains and threat models. \item We evaluate the attack, comparing three speech representations and several speaker encoders, under white- and black-box settings, showing strong generalization to an unseen speaker population and (in some settings) non-trivial transferability to unseen encoders. \item We show that speaker verification systems are susceptible to this attack and that the effect varies across genders. In our experiments, an accidental intrinsic bias of speaker encoders made female speakers remarkably more vulnerable to the attack. \end{enumerate} Compared to our prior study <|cite_start|> (Reference: {Adversarial Optimization for Dictionary Attacks on Speaker Verification: In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population.
We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.) <|cite_end|>, we have revised and generalized the attack to enable seamless application to various speech representation domains. We also extended the evaluation to include several speaker encoders and various threat models. Our version in this paper leads to substantially better results and can even be used in challenging conditions, e.g., to evolve transferable master voices based on black-box access to a third-party voice cloning system with variable output. Related Work \subsection{Speaker Modelling} Speaker recognition involves two main tasks <|cite_start|> (Reference: Speaker Recognition by Machines and Humans: A tutorial review: Identifying a person by his or her voice is an important human trait most take for granted in natural human-to-human interaction/communication. Speaking to someone over the telephone usually begins by identifying who is speaking and, at least in cases of familiar speakers, a subjective verification by the listener that the identity is correct and the conversation can proceed. Automatic speaker-recognition systems have emerged as an important means of verifying identity in many e-commerce applications as well as in general business interactions, forensics, and law enforcement. Human experts trained in forensic speaker recognition can perform this task even better by examining a set of acoustic, prosodic, and linguistic characteristics of speech in a general approach referred to as structured listening. Techniques in forensic speaker recognition have been developed for many years by forensic speech scientists and linguists to help reduce any potential bias or preconceived understanding as to the validity of an unknown audio sample and a reference template from a potential suspect. Experienced researchers in signal processing and machine learning continue to develop automatic algorithms to effectively perform speaker recognition, with ever-improving performance, to the point where automatic systems start to perform on par with human listeners. In this article, we review the literature on speaker recognition by machines and humans, with an emphasis on prominent speaker-modeling techniques that have emerged in the last decade for automatic systems. We discuss different aspects of automatic systems, including voice-activity detection (VAD), features, speaker models, standard evaluation data sets, and performance metrics. Human speaker recognition is discussed in two parts: the first part involves forensic speaker-recognition methods, and the second illustrates how a naïve listener performs this task from a neuroscience perspective. We conclude this review with a comparative study of human versus machine speaker recognition and attempt to point out strengths and weaknesses of each.) <|cite_end|>: \emph{identification} aims to identify the speaker among a set of possible hypotheses; \emph{verification} aims to confirm the identity of the claimed speaker and operates in an open-set regime based on a gallery of enrolled speech samples.
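The distinction between the two tasks can be summarized in a short sketch; the cosine scoring, the embeddings, and the threshold value are illustrative assumptions rather than a fixed recipe from the literature.
\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_emb, gallery):
    # Closed-set identification: return the most similar known speaker.
    return max(gallery, key=lambda spk: cosine(query_emb, gallery[spk]))

def verify(query_emb, enrolled_emb, tau=0.7):
    # Open-set verification: accept iff the similarity to the enrolled
    # speaker model exceeds a calibrated decision threshold.
    return cosine(query_emb, enrolled_emb) >= tau
\end{verbatim}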
Speaker modeling has recently been dominated by deep neural networks <|cite_start|> (Reference: Speaker recognition based on deep learning: An overview: ) <|cite_end|> (DNNs), which remarkably outperform classic solutions like GMM-UBM <|cite_start|> (Reference: Speaker verification using Adapted Gaussian mixture models: Reynolds, Douglas A., Quatieri, Thomas F., and Dunn, Robert B., Speaker Verification Using Adapted Gaussian Mixture Models, Digital Signal Processing 10 (2000), 19-41. In this paper we describe the major elements of MIT Lincoln Laboratory's Gaussian mixture model (GMM)-based speaker verification system used successfully in several NIST Speaker Recognition Evaluations (SREs). The system is built around the likelihood ratio test for verification, using simple but effective GMMs for likelihood functions, a universal background model (UBM) for alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM. The development and use of a handset detector and score normalization to greatly improve verification performance is also described and discussed. Finally, representative performance benchmarks and system behavior experiments on NIST SRE corpora are presented.) <|cite_end|> or i-vector <|cite_start|> (Reference: Front-end Factor Analysis For Speaker Verification: It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.) <|cite_end|>. DNNs are typically pre-trained for the identification task, but are then adapted to open-set verification by discarding the classification head and extracting a compact intermediate representation, referred to as a \emph{speaker embedding}. The embeddings are then compared between the query and enrolled samples to confirm the speaker's identity. Speaker enrollment typically involves the collection of multiple speech samples, whose embeddings need to be combined.
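A minimal sketch of this combination step is given below, contrasting the averaging strategy with the maximum-similarity scoring discussed next; both functions operate on hypothetical embedding arrays and are illustrative only.
\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll_average(embeddings):
    # Combine several enrollment embeddings into a single speaker model
    # by averaging and re-normalizing.
    model = np.mean(embeddings, axis=0)
    return model / np.linalg.norm(model)

def score_max(query_emb, embeddings):
    # Alternative strategy: keep all enrollment embeddings and score the
    # query against the most similar one.
    return max(cosine(query_emb, e) for e in embeddings)
\end{verbatim}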
Some of the traditional methods (e.g., a PLDA model <|cite_start|> (Reference: Speaker Recognition by Machines and Humans: A tutorial review: Identifying a person by his or her voice is an important human trait most take for granted in natural human-to-human interaction/communication. Speaking to someone over the telephone usually begins by identifying who is speaking and, at least in cases of familiar speakers, a subjective verification by the listener that the identity is correct and the conversation can proceed. Automatic speaker-recognition systems have emerged as an important means of verifying identity in many e-commerce applications as well as in general business interactions, forensics, and law enforcement. Human experts trained in forensic speaker recognition can perform this task even better by examining a set of acoustic, prosodic, and linguistic characteristics of speech in a general approach referred to as structured listening. Techniques in forensic speaker recognition have been developed for many years by forensic speech scientists and linguists to help reduce any potential bias or preconceived understanding as to the validity of an unknown audio sample and a reference template from a potential suspect. Experienced researchers in signal processing and machine learning continue to develop automatic algorithms to effectively perform speaker recognition?with ever-improving performance?to the point where automatic systems start to perform on par with human listeners. In this article, we review the literature on speaker recognition by machines and humans, with an emphasis on prominent speaker-modeling techniques that have emerged in the last decade for automatic systems. We discuss different aspects of automatic systems, including voice-activity detection (VAD), features, speaker models, standard evaluation data sets, and performance metrics. Human speaker recognition is discussed in two parts?the first part involves forensic speaker-recognition methods, and the second illustrates how a na?ve listener performs this task from a neuroscience perspective. We conclude this review with a comparative study of human versus machine speaker recognition and attempt to point out strengths and weaknesses of each.) <|cite_end|>) assume statistical independence, which is hard to achieve in practice. As a result, simpler scoring strategies are often preferred, e.g., averaging the embeddings or taking the one with maximum similarity. A recent study <|cite_start|> (Reference: From single to multiple enrollment i-vectors: Practical PLDA scoring variants for speaker verification: ) <|cite_end|> showed that the average embedding often leads to superior performance, which makes it a popular choice <|cite_start|> (Reference: End-to-end text-independent speaker verification with flexibility in utterance duration: We continue to investigate end-to-end text-independent speaker verification by incorporating the variability from different utterance durations. Our previous study [1] showed a competitive performance with a triplet loss based end-to-end text-independent speaker verification system. To normalize the duration variability, we provided fixed length inputs to the network by a simple cropping or padding operation. Those operations do not seem ideal, particularly for long duration where some amount of information is discarded, while an i-vector system typically has improved accuracy with an increase in input duration. 
In this study, we propose to replace the final max/average pooling layer with a Spatial Pyramid Pooling layer in the Inception-Resnet-v1 architecture, which allows us to relax the fixed-length input constraint and train the entire network with the arbitrary size of input in an end-to-end fashion. In this way, the modified network can map variable length utterances into fixed length embeddings. Experiments shows that the new end-to-end system with variable size input relatively reduces EER by 8.4% over the end-to-end system with fixed-length input, and 24.0% over the i-vector/PLDA baseline system. an end-to-end system with.) <|cite_end|> <|cite_start|> (Reference: Speaker Verification Experiments for Adults and Children Using Shared Embedding Spaces: For children, the system trained on a large corpus of adult speakers performed worse than a system trained on a much smaller corpus of children’s speech. This is due to the acoustic mismatch between training and testing data. To capture more acoustic variability we trained a shared system with mixed data from adults and children. The shared system yields the best EER for children with no degradation for adults. Thus, the single system trained with mixed data is applicable for speaker verification for both adults and children.) <|cite_end|>. Countless model architectures have been proposed for speaker encoding. Some of the most prominent differences involve selection of the input acoustic representation, backbone network, and temporal pooling strategy. While directly using waveforms to learn a representation is possible <|cite_start|> (Reference: Speaker Recognition from Raw Waveform with SincNet: Deep learning is progressively gaining popularity as a viable alternative to i-vectors for speaker recognition. Promising results have been recently obtained with Convolutional Neural Networks (CNNs) when fed by raw speech samples directly. Rather than employing standard hand-crafted features, the latter CNNs learn low-level speech representations from waveforms, potentially allowing the network to better capture important narrow-band speaker characteristics such as pitch and formants. Proper design of the neural network is crucial to achieve this goal. This paper proposes a novel CNN architecture, called SincNet, that encourages the first convolutional layer to discover more meaningful filters. SincNet is based on parametrized sinc functions, which implement band-pass filters. In contrast to standard CNNs, that learn all elements of each filter, only low and high cutoff frequencies are directly learned from data with the proposed method. This offers a very compact and efficient way to derive a customized filter bank specifically tuned for the desired application. Our experiments, conducted on both speaker identification and speaker verification tasks, show that the proposed architecture converges faster and performs better than a standard CNN on raw waveforms.) <|cite_end|>, it is much more common to use a hand-crafted 2D representation (e.g., spectrograms or filterbanks). The latter enables adaptation of successful backbones from computer vision, e.g., VGG <|cite_start|> (Reference: Voxceleb: Large-scale speaker verification in the wild: ) <|cite_end|> or residual networks (ResNet) <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. 
We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|> <|cite_start|> (Reference: Multi-Resolution Multi-Head Attention in Deep Speaker Embedding: Pooling is an essential component to capture long-term speaker characteristics for speaker recognition. This paper proposes simple but effective pooling methods to compute attentive weights for better temporal aggregation over the variable-length input speech, enabling the end-to-end neural network to have improved performance for discriminating among speakers. Particularly, we observe that using multiple heads for attentive pooling over the entire encoded sequence, a method we term as global multi-head attention, significantly improves performance in comparison to various pooling methods, including the recently proposed multi-head attention [1]. To improve diversity of attention heads, we further propose multi-resolution multi-head attention for pooling that has an additional temperature hyperparameter for each head. This leads to even larger performance gain, on top of that achieved using multiple heads. On the benchmark VoxCeleb1 dataset, the proposed method achieves the state-of-the-art performance of Equal Error Rate (EER) of 3.966%. Our analysis shows that using multiple heads and having multiple resolutions on these heads with different temperatures lead to improved certainty of attentive weights in the new state-of-the-art system.) <|cite_end|> <|cite_start|> (Reference: Frequency and temporal convolutional attention for text-independent speaker recognition: Majority of the recent approaches for text-independent speaker recognition apply attention or similar techniques for aggregation of frame-level feature descriptors generated by a deep neural network (DNN) front-end. In this paper, we propose methods of convolutional attention for independently modelling temporal and frequency information in a convolutional neural network (CNN) based front-end. Our system utilizes convolutional block attention modules (CBAMs) [1] appropriately modified to accommodate spectrogram inputs. The proposed CNN front-end fitted with the proposed convolutional attention modules outperform the no-attention and spatial-CBAM baselines by a significant margin on the VoxCeleb [2, 3] speaker verification benchmark, and our best model achieves an equal error rate of 2:031% on the VoxCeleb1 test set, improving the existing state of the art result by a significant margin. 
For a more thorough assessment of the effects of frequency and temporal attention in real-world conditions, we conduct ablation experiments by randomly dropping frequency bins and temporal frames from the input spectrograms, concluding that instead of modelling either of the entities, simultaneously modelling temporal and frequency attention translates to better real-world performance.) <|cite_end|>. Dealing with the time dimension can rely on recurrence <|cite_start|> (Reference: Generalized End-to-End Loss for Speaker Verification: In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function. Unlike TE2E, the GE2E loss function updates the network in a way that emphasizes examples that are difficult to verify at each step of the training process. Additionally, the GE2E loss does not require an initial stage of example selection. With these properties, our model with the new loss function decreases speaker verification EER by more than 10%, while reducing the training time by 60% at the same time. We also introduce the MultiReader technique, which allows us to do domain adaptation - training a more accurate model that supports multiple keywords (i.e. "OK Google" and "Hey Google") as well as multiple dialects.) <|cite_end|>, pooling <|cite_start|> (Reference: Frequency and temporal convolutional attention for text-independent speaker recognition: Majority of the recent approaches for text-independent speaker recognition apply attention or similar techniques for aggregation of frame-level feature descriptors generated by a deep neural network (DNN) front-end. In this paper, we propose methods of convolutional attention for independently modelling temporal and frequency information in a convolutional neural network (CNN) based front-end. Our system utilizes convolutional block attention modules (CBAMs) [1] appropriately modified to accommodate spectrogram inputs. The proposed CNN front-end fitted with the proposed convolutional attention modules outperform the no-attention and spatial-CBAM baselines by a significant margin on the VoxCeleb [2, 3] speaker verification benchmark, and our best model achieves an equal error rate of 2:031% on the VoxCeleb1 test set, improving the existing state of the art result by a significant margin. For a more thorough assessment of the effects of frequency and temporal attention in real-world conditions, we conduct ablation experiments by randomly dropping frequency bins and temporal frames from the input spectrograms, concluding that instead of modelling either of the entities, simultaneously modelling temporal and frequency attention translates to better real-world performance.) <|cite_end|> <|cite_start|> (Reference: Self-attentive speaker embeddings for text-independent speaker verification: This paper introduces a new method to extract speaker embed-dings from a deep neural network (DNN) for text-independent speaker verification. Usually, speaker embeddings are extracted from a speaker-classification DNN that averages the hidden vectors over the frames of a speaker; the hidden vectors produced from all the frames are assumed to be equally important. We relax this assumption and compute the speaker embedding as a weighted average of a speaker’s frame-level hidden vectors, and their weights are automatically determined by a self-attention mechanism. 
The effect of multiple attention heads are also investigated to capture different aspects of a speaker’s input speech. Finally, a PLDA classifier is used to compare pairs of embeddings. The proposed self-attentive speaker embedding system is compared with a strong DNN embedding baseline on NIST SRE 2016. We find that the self-attentive embeddings achieve superior performance. Moreover, the improvement produced by the self-attentive speaker embeddings is consistent with both short and long testing utterances.) <|cite_end|> or specialized architectural designs. As an example, Time Delay Neural Networks (TDNNs) use a 1D convolution structure along the temporal axis and are adopted in the popular x-vector architecture <|cite_start|> (Reference: {Deep neural network embeddings for text-independent speaker verification: This paper investigates replacing i-vectors for text-independent speaker verification with embeddings extracted from a feed-forward deep neural network. Long-term speaker characteristics are captured in the network by a temporal pooling layer that aggregates over the input speech. This enables the network to be trained to discriminate between speakers from variable-length speech segments. After training, utterances are mapped directly to fixed-dimensional speaker embeddings and pairs of embeddings are scored using a PLDA-based backend. We compare performance with a traditional i-vector baseline on NIST SRE 2010 and 2016. We find that the embeddings outperform i-vectors for short speech segments and are competitive on long duration test conditions. Moreover, the two representations are complementary, and their fusion improves on the baseline at all operating points. Similar systems have recently shown promising results when trained on very large proprietary datasets, but to the best of our knowledge, these are the best results reported for speaker-discriminative neural networks when trained and tested on publicly available corpora.) <|cite_end|> <|cite_start|> (Reference: Speaker recognition for multi-speaker conversations using x-vectors: Recently, deep neural networks that map utterances to fixed-dimensional embeddings have emerged as the state-of-the-art in speaker recognition. Our prior work introduced x-vectors, an embedding that is very effective for both speaker recognition and diarization. This paper combines our previous work and applies it to the problem of speaker recognition on multi-speaker conversations. We measure performance on Speakers in the Wild and report what we believe are the best published error rates on this dataset. Moreover, we find that diarization substantially reduces error rate when there are multiple speakers, while maintaining excellent performance on single-speaker recordings. Finally, we introduce an easily implemented method to remove the domain-sensitive threshold typically used in the clustering stage of a diarization system. The proposed method is more robust to domain shifts, and achieves similar results to those obtained using a well-tuned threshold.) <|cite_end|>. Usually, trainable pooling layers achieve better results than simple pooling operators (e.g., average pooling <|cite_start|> (Reference: Frequency and temporal convolutional attention for text-independent speaker recognition: Majority of the recent approaches for text-independent speaker recognition apply attention or similar techniques for aggregation of frame-level feature descriptors generated by a deep neural network (DNN) front-end.
In this paper, we propose methods of convolutional attention for independently modelling temporal and frequency information in a convolutional neural network (CNN) based front-end. Our system utilizes convolutional block attention modules (CBAMs) [1] appropriately modified to accommodate spectrogram inputs. The proposed CNN front-end fitted with the proposed convolutional attention modules outperform the no-attention and spatial-CBAM baselines by a significant margin on the VoxCeleb [2, 3] speaker verification benchmark, and our best model achieves an equal error rate of 2:031% on the VoxCeleb1 test set, improving the existing state of the art result by a significant margin. For a more thorough assessment of the effects of frequency and temporal attention in real-world conditions, we conduct ablation experiments by randomly dropping frequency bins and temporal frames from the input spectrograms, concluding that instead of modelling either of the entities, simultaneously modelling temporal and frequency attention translates to better real-world performance.) <|cite_end|> or statistical pooling <|cite_start|> (Reference: X-Vectors: Robust DNN Embeddings for Speaker Recognition: In this paper, we use data augmentation to improve performance of deep neural network (DNN) embeddings for speaker recognition. The DNN, which is trained to discriminate between speakers, maps variable-length utterances to fixed-dimensional embeddings that we call x-vectors. Prior studies have found that embeddings leverage large-scale training datasets better than i-vectors. However, it can be challenging to collect substantial quantities of labeled data for training. We use data augmentation, consisting of added noise and reverberation, as an inexpensive method to multiply the amount of training data and improve robustness. The x-vectors are compared with i-vector baselines on Speakers in the Wild and NIST SRE 2016 Cantonese. We find that while augmentation is beneficial in the PLDA classifier, it is not helpful in the i-vector extractor. However, the x-vector DNN effectively exploits data augmentation, due to its supervised training. As a result, the x-vectors achieve superior performance on the evaluation datasets.) <|cite_end|>). Some of the most successful learned designs include the family of VLAD models. NetVLAD <|cite_start|> (Reference: Utterance-level Aggregation For Speaker Recognition In The Wild: The objective of this paper is speaker recognition "in the wild"-where utterances may be of variable length and also contain irrelevant signals. Crucial elements in the design of deep networks for this task are the type of trunk (frame level) network, and the method of temporal aggregation. We propose a powerful speaker recognition deep network, using a "thin-ResNet" trunk architecture, and a dictionary-based NetVLAD or GhostVLAD layer to aggregate features across time, that can be trained end-to-end. We show that our network achieves state of the art performance by a significant margin on the VoxCeleb1 test set for speaker recognition, whilst requiring fewer parameters than previous methods. We also investigate the effect of utterance length on performance, and conclude that for "in the wild" data, a longer length is beneficial.) <|cite_end|> assigns each frame-level descriptor to a cluster and computes residuals to encode the output features. 
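A minimal NumPy sketch of this aggregation step is shown below. It follows the usual soft-assignment recipe, but the parameter shapes and variable names are illustrative assumptions rather than the exact published formulation.

\begin{verbatim}
import numpy as np

def netvlad(features, centers, assign_w, assign_b):
    # NetVLAD-style aggregation of T frame-level, D-dimensional
    # descriptors; features: (T, D), centers: (K, D),
    # assign_w: (D, K), assign_b: (K,).
    logits = features @ assign_w + assign_b        # (T, K)
    logits -= logits.max(axis=1, keepdims=True)    # stable softmax
    soft = np.exp(logits)
    soft /= soft.sum(axis=1, keepdims=True)        # soft assignments

    # Per-cluster residuals, weighted by the soft assignments.
    residuals = features[:, None, :] - centers[None, :, :]   # (T, K, D)
    vlad = (soft[:, :, None] * residuals).sum(axis=0)        # (K, D)

    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-8  # intra-norm
    flat = vlad.flatten()
    return flat / (np.linalg.norm(flat) + 1e-8)   # final L2 normalization
\end{verbatim}

In full systems, the cluster centers and assignment parameters are learned jointly with the trunk network rather than fixed in advance.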
Its variant GhostVLAD <|cite_start|> (Reference: Utterance-level Aggregation For Speaker Recognition In The Wild: The objective of this paper is speaker recognition "in the wild"-where utterances may be of variable length and also contain irrelevant signals. Crucial elements in the design of deep networks for this task are the type of trunk (frame level) network, and the method of temporal aggregation. We propose a powerful speaker recognition deep network, using a "thin-ResNet" trunk architecture, and a dictionary-based NetVLAD or GhostVLAD layer to aggregate features across time, that can be trained end-to-end. We show that our network achieves state of the art performance by a significant margin on the VoxCeleb1 test set for speaker recognition, whilst requiring fewer parameters than previous methods. We also investigate the effect of utterance length on performance, and conclude that for "in the wild" data, a longer length is beneficial.) <|cite_end|> improved performance by excluding some of the original NetVLAD clusters from the final concatenation, such that undesirable speech sections are down-weighted. \subsection{Adversarial Attacks in Speech Processing} Originally introduced in computer vision <|cite_start|> (Reference: Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision. However, this model alone does not produce images...) <|cite_end|>, adversarial attacks refer to genuine samples imperceptibly modified by tiny perturbations to fool classifiers with high probability. In the context of speech, this type of attack can be broadly categorized based on the targeted task, i.e., speech or speaker recognition. In the former, the goal is to embed carefully crafted perturbations to yield automatic transcription of a specific malicious phrase. In <|cite_start|> (Reference: {Hidden Voice Commands: Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but which are interpreted as commands by devices. We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult to understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that we demonstrate through user testing are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8% accuracy.) <|cite_end|>, the attacker uses inverse feature extraction to generate obfuscated audio played over-the-air, which allows for issuing hidden commands to voice assistants. Later, <|cite_start|> (Reference: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text: We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio).
We apply our white-box iterative optimization-based attack to Mozilla's implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduce a new domain to study adversarial examples.) <|cite_end|> proposed a white-box attack based on gradient optimization, leading to quasi-perceptible adversarial perturbations, finally improved using psychoacoustic modeling <|cite_start|> (Reference: Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition: Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.) <|cite_end|>. To avoid repeated optimization hindering real-time use, a recent work by <|cite_start|> (Reference: Universal Adversarial Perturbations for Speech Recognition Systems: In this work, we demonstrate the existence of universal adversarial audio perturbations that cause mis-transcription of audio signals by automatic speech recognition (ASR) systems. We propose an algorithm to find a single quasi-imperceptible perturbation, which when added to any arbitrary speech signal, will most likely fool the victim speech recognition model. Our experiments demonstrate the application of our proposed technique by crafting audio-agnostic universal perturbations for the state-of-the-art ASR system -- Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to a significant extent across models that are not available during training, by performing a transferability test on a WaveNet based ASR system.) <|cite_end|> designed an algorithm to find a single universal perturbation that can be added to any speech waveform to cause an error in transcription with high probability. Finally, <|cite_start|> (Reference: Prince of Wales's Hospital Fund for London: Amongst those present were: Lord Rowton, Lord Rothschild, Lord Iveagh, Lord Farquhar, the President of the Royal Society (Lord Lister), the Chairman of the London School Board (Lord Reay), Sir Savile Cro^sley, Sir Henry Burdett, K.C.B., Cardinal Yaughan, the Chief Rabbi (the Rev. Dr. Adler), the Rev. T. Bowman Stephenson, D.D., Mr. Sydney Buxton, M.P., Mr. Julius Wernher, and Mr. J. G. Craggs. Sir Sayile Ckossley, Honorary Secretary, read letters of regret from the following, who were unable) <|cite_end|> showed that adversarial commands can also be hidden in music. The authors used a surrogate model to create transferable adversarial examples that can achieve this goal.
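These gradient-based attacks share a common core: perturb the input along the (signed) gradient of an adversarial loss. The following single-step sketch in PyTorch illustrates the idea; the model, the loss function, and the perturbation budget are placeholders rather than the parameters of any specific published attack.

\begin{verbatim}
import torch

def fgsm_step(model, waveform, target, loss_fn, epsilon=1e-3):
    # Single FGSM step: move every waveform sample in the direction
    # that decreases the attack loss, within an L-infinity budget.
    x = waveform.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), target)  # e.g., distance to a target output
    loss.backward()
    with torch.no_grad():
        x_adv = x - epsilon * x.grad.sign()  # one signed-gradient step
        return x_adv.clamp(-1.0, 1.0)        # keep a valid signal range
\end{verbatim}

Iterating such steps, with projection back into the perturbation budget, yields the stronger optimization-based attacks discussed in this subsection.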
Attacking speaker verification systems initially relied on spoofing and replay attacks. Susceptibility to adversarial examples has gained attention only recently. The goal is to craft an attack sample from a voice uttered by a seed speaker, so that it is misclassified as a different one (either specific or any), while still being recognized as the seed speaker by human listeners. In a white-box setting, the FGSM attack made it possible to generate adversarial examples with high success rate <|cite_start|> (Reference: Adversarial Attacks on GMM I-Vector Based Speaker Verification Systems: This work investigates the vulnerability of Gaussian Mixture Model (GMM) i-vector based speaker verification systems to adversarial attacks, and the transferability of adversarial samples crafted from GMM i-vector based systems to x-vector based systems. In detail, we formulate the GMM i-vector system as a scoring function of enrollment and testing utterance pairs. Then we leverage the fast gradient sign method (FGSM) to optimize testing utterances for adversarial samples generation. These adversarial samples are used to attack both GMM i-vector and x-vector systems. We measure the system vulnerability by the degradation of equal error rate and false acceptance rate. Experiment results show that GMM i-vector systems are seriously vulnerable to adversarial attacks, and the crafted adversarial samples are proved to be transferable and pose threats to neural network speaker embedding based systems (e.g. x-vector systems).) <|cite_end|>. <|cite_start|> (Reference: Inaudible Adversarial Perturbations for Targeted Attack in Speaker Recognition: Speaker recognition is a popular topic in biometric authentication and many deep learning approaches have achieved extraordinary performances. However, it has been shown in both image and speech applications that deep neural networks are vulnerable to adversarial examples. In this study, we aim to exploit this weakness to perform targeted adversarial attacks against the x-vector based speaker recognition system. We propose to generate inaudible adversarial perturbations achieving targeted white-box attacks to speaker recognition system based on the psychoacoustic principle of frequency masking. Specifically, we constrict the perturbation under the masking threshold of original audio, instead of using a common l_p norm to measure the perturbations. Experiments on Aishell-1 corpus show that our approach yields up to 98.5% attack success rate to arbitrary gender speaker targets, while retaining indistinguishable attribute to listeners. Furthermore, we also achieve an effective speaker attack when applying the proposed approach to a completely irrelevant waveform, such as music.) <|cite_end|> constrained the perturbation based on a psychoacoustic masking threshold to obtain imperceptible samples. To obtain robustness against reverberation and noise, <|cite_start|> (Reference: Real-time, Robust and Adaptive Universal Adversarial Attacks Against Speaker Recognition Systems: ) <|cite_end|> proposed a gradient-based optimization that generates robust universal adversarial examples (though the attack was not tested over-the-air). <|cite_start|> (Reference: Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems: Speaker recognition (SR) is widely used in our daily life as a biometric authentication or identification mechanism. The popularity of SR brings in serious security concerns, as demonstrated by recent adversarial attacks. 
However, the impacts of such threats in the practical black-box setting are still open, since current attacks consider the white-box setting only. In this paper, we conduct the first comprehensive and systematic study of the adversarial attacks on SR systems (SRSs) to understand their security weakness in the practical blackbox setting. For this purpose, we propose an adversarial attack, named FAKEBOB, to craft adversarial samples. Specifically, we formulate the adversarial sample generation as an optimization problem, incorporated with the confidence of adversarial samples and maximal distortion to balance between the strength and imperceptibility of adversarial voices. One key contribution is to propose a novel algorithm to estimate the score threshold, a feature in SRSs, and use it in the optimization problem to solve the optimization problem. We demonstrate that FAKEBOB achieves 99% targeted attack success rate on both open-source and commercial systems. We further demonstrate that FAKEBOB is also effective on both open-source and commercial systems when playing over the air in the physical world. Moreover, we have conducted a human study which reveals that it is hard for human to differentiate the speakers of the original and adversarial voices. Last but not least, we show that four promising defense methods for adversarial attack from the speech recognition domain become ineffective on SRSs against FAKEBOB, which calls for more effective defense methods. We highlight that our study peeks into the security implications of adversarial attacks on SRSs, and realistically fosters to improve the security robustness of SRSs.) <|cite_end|> used a gradient estimation algorithm (NES) in a black-box setting. While the study used a small dataset, the attack had a high success rate in a practical setting. All of the existing attacks (including both spoofed and adversarial samples) are targeted, i.e., they aim to pass authentication as a specific individual. However, biometric systems exhibit large variations in matching propensity across individuals, which can be exploited to open a novel threat vector. Hence, the untargeted nature of the proposed dictionary attacks is fundamentally different from the untargeted nature of adversarial attacks on machine learning models. In this context, the latter would aim to prevent authentication as a particular person without specifying the desired target identity. \subsection{Dictionary Attacks in Biometrics} Dictionary attacks use prior knowledge about the expected success rate to triage brute-force authentication attempts. They naturally apply to passwords, but until recently have not been considered for other authentication modalities. In biometrics, such attacks are qualitatively different from spoofing and do not require any knowledge about the victim (e.g., speech samples) <|cite_start|> (Reference: Spoofing and countermeasures for speaker verification: A survey: ) <|cite_end|>. This threat is enabled by large variation in matching propensity across individuals (biometric menagerie <|cite_start|> (Reference: The biometric menagerie: It is commonly accepted that users of a biometric system may have differing degrees of accuracy within the system. Some people may have trouble authenticating, while others may be particularly vulnerable to impersonation. Goats, wolves, and lambs are labels commonly applied to these problem users. 
These user types are defined in terms of verification performance when users are matched against themselves (goats) or when matched against others (lambs and wolves). The relationship between a user's genuine and impostor match results suggests four new user groups: worms, doves, chameleons, and phantoms. We establish formal definitions for these animals and a statistical test for their existence. A thorough investigation is conducted using a broad range of biometric modalities, including 2D and 3D faces, fingerprints, iris, speech, and keystroke dynamics. Patterns that emerge from the results expose novel, important, and encouraging insights into the nature of biometric match results. A new framework for the evaluation of biometric systems based on the biometric menagerie, as opposed to collective statistics, is proposed.) <|cite_end|>) and further exacerbated by the usability-security trade-offs in mass deployments (e.g., partial finger impressions <|cite_start|> (Reference: MasterPrint: Exploring the Vulnerability of Partial Fingerprint-Based Authentication Systems: This paper investigates the security of partial fingerprint-based authentication systems, especially when multiple fingerprints of a user are enrolled. A number of consumer electronic devices, such as smartphones, are beginning to incorporate fingerprint sensors for user authentication. The sensors embedded in these devices are generally small and the resulting images are, therefore, limited in size. To compensate for the limited size, these devices often acquire multiple partial impressions of a single finger during enrollment to ensure that at least one of them will successfully match with the image obtained from the user during authentication. Furthermore, in some cases, the user is allowed to enroll multiple fingers, and the impressions pertaining to multiple partial fingers are associated with the same identity (i.e., one user). A user is said to be successfully authenticated if the partial fingerprint obtained during authentication matches any one of the stored templates. This paper investigates the possibility of generating a “MasterPrint,” a synthetic or real partial fingerprint that serendipitously matches one or more of the stored templates for a significant number of users. Our preliminary results on an optical fingerprint data set and a capacitive fingerprint data set indicate that it is indeed possible to locate or generate partial fingerprints that can be used to impersonate a large number of users. In this regard, we expose a potential vulnerability of partial fingerprint-based authentication systems, especially when multiple impressions are enrolled per finger.) <|cite_end|>). The concept of dictionary attacks in biometrics was introduced only recently. The vulnerability was first demonstrated on fingerprints <|cite_start|> (Reference: MasterPrint: Exploring the Vulnerability of Partial Fingerprint-Based Authentication Systems: This paper investigates the security of partial fingerprint-based authentication systems, especially when multiple fingerprints of a user are enrolled. A number of consumer electronic devices, such as smartphones, are beginning to incorporate fingerprint sensors for user authentication. The sensors embedded in these devices are generally small and the resulting images are, therefore, limited in size. 
To compensate for the limited size, these devices often acquire multiple partial impressions of a single finger during enrollment to ensure that at least one of them will successfully match with the image obtained from the user during authentication. Furthermore, in some cases, the user is allowed to enroll multiple fingers, and the impressions pertaining to multiple partial fingers are associated with the same identity (i.e., one user). A user is said to be successfully authenticated if the partial fingerprint obtained during authentication matches any one of the stored templates. This paper investigates the possibility of generating a “MasterPrint,” a synthetic or real partial fingerprint that serendipitously matches one or more of the stored templates for a significant number of users. Our preliminary results on an optical fingerprint data set and a capacitive fingerprint data set indicate that it is indeed possible to locate or generate partial fingerprints that can be used to impersonate a large number of users. In this regard, we expose a potential vulnerability of partial fingerprint-based authentication systems, especially when multiple impressions are enrolled per finger.) <|cite_end|> and subsequently extended to faces <|cite_start|> (Reference: Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems: Due to its convenience, biometric authentication, especial face authentication, has become increasingly mainstream and thus is now a prime target for attackers. Presentation attacks and face morphing are typical types of attack. Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks, in which a wolf sample matches many enrolled user templates. In this work, we demonstrated that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems and that the master face concept can be generalized in some cases. Motivated by recent similar work in the fingerprint domain, we generated high-quality master faces by using the state-of-the-art face generator StyleGAN in a process called latent variable evolution. Experiments demonstrated that even attackers with limited resources using only pre-trained models available on the Internet can initiate master face attacks. The results, in addition to demonstrating performance from the attacker's point of view, can also be used to clarify and improve the performance of face recognition systems and harden face authentication systems.) <|cite_end|>. Initially, an existing fingerprint with the highest impostor score was selected as a \emph{master print} <|cite_start|> (Reference: MasterPrint: Exploring the Vulnerability of Partial Fingerprint-Based Authentication Systems: This paper investigates the security of partial fingerprint-based authentication systems, especially when multiple fingerprints of a user are enrolled. A number of consumer electronic devices, such as smartphones, are beginning to incorporate fingerprint sensors for user authentication. The sensors embedded in these devices are generally small and the resulting images are, therefore, limited in size. To compensate for the limited size, these devices often acquire multiple partial impressions of a single finger during enrollment to ensure that at least one of them will successfully match with the image obtained from the user during authentication. 
Furthermore, in some cases, the user is allowed to enroll multiple fingers, and the impressions pertaining to multiple partial fingers are associated with the same identity (i.e., one user). A user is said to be successfully authenticated if the partial fingerprint obtained during authentication matches any one of the stored templates. This paper investigates the possibility of generating a “MasterPrint,” a synthetic or real partial fingerprint that serendipitously matches one or more of the stored templates for a significant number of users. Our preliminary results on an optical fingerprint data set and a capacitive fingerprint data set indicate that it is indeed possible to locate or generate partial fingerprints that can be used to impersonate a large number of users. In this regard, we expose a potential vulnerability of partial fingerprint-based authentication systems, especially when multiple impressions are enrolled per finger.) <|cite_end|>. In the next iteration, synthetic master prints were created by first-order hill-climbing, initialized on the most promising real fingerprints from the first approach. However, local search algorithms may get stuck in local minima or take a long time to converge. <|cite_start|> (Reference: DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution\({: Recent research has demonstrated the vulnerability of fingerprint recognition systems to dictionary attacks based on MasterPrints. MasterPrints are real or synthetic fingerprints that can fortuitously match with a large number of fingerprints thereby undermining the security afforded by fingerprint systems. Previous work by Roy et al. generated synthetic MasterPrints at the feature-level. In this work we generate complete image-level MasterPrints known as DeepMasterPrints, whose attack accuracy is found to be much superior than that of previous methods. The proposed method, referred to as Latent Variable Evolution, is based on training a Generative Adversarial Network on a set of real fingerprint images. Stochastic search in the form of the Covariance Matrix Adaptation Evolution Strategy is then used to search for latent input variables to the generator network that can maximize the number of impostor matches as assessed by a fingerprint recognizer. Experiments convey the efficacy of the proposed method in generating DeepMasterPrints. The underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.) <|cite_end|> used diversity-quality evolution to address this issue and a generative adversarial network (GAN) to parametrize the search space. The same approach was successful for faces <|cite_start|> (Reference: Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems: Due to its convenience, biometric authentication, especial face authentication, has become increasingly mainstream and thus is now a prime target for attackers. Presentation attacks and face morphing are typical types of attack. Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks, in which a wolf sample matches many enrolled user templates. In this work, we demonstrated that wolf (generic) faces, which we call "master faces," can also compromise face recognition systems and that the master face concept can be generalized in some cases. 
Motivated by recent similar work in the fingerprint domain, we generated high-quality master faces by using the state-of-the-art face generator StyleGAN in a process called latent variable evolution. Experiments demonstrated that even attackers with limited resources using only pre-trained models available on the Internet can initiate master face attacks. The results, in addition to demonstrating performance from the attacker's point of view, can also be used to clarify and improve the performance of face recognition systems and harden face authentication systems.) <|cite_end|>. So far, dictionary attacks have not been studied for speech. Our preliminary work <|cite_start|> (Reference: {Adversarial Optimization for Dictionary Attacks on Speaker Verification: In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification.) <|cite_end|> demonstrated that adversarial optimization of spectrograms in a white-box setting consistently increases impersonation rates in VGGVox <|cite_start|> (Reference: Voxceleb: Large-scale speaker verification in the wild: ) <|cite_end|>. The resulting adversarial samples could match, on average, 20\% (10\%) of female (male) speakers in an unseen population. In this paper, we generalize our attack and test it against multiple systems and diverse speech representations. We achieve substantially improved impersonation rates and demonstrate non-trivial transferability across speaker encoders. <|paper_end|>
[ "<|reference_start|> {Adversarial Optimization for Dictionary Attacks on Speaker Verification: In this paper, we assess vulnerability of speaker verification systems to dictionary attacks. We seek master voices, i.e., adversarial utterances optimized to match against a large number of users by pure chance. First, we perform menagerie analysis to identify utterances which intrinsically hold this property. Then, we propose an adversarial optimization approach for generating master voices synthetically. Our experiments show that, even in the most secure configuration, on average, a master voice can match approx. 20% of females and 10% of males without any knowledge about the population. We demonstrate that dictionary attacks should be considered as a feasible threat model for sensitive and high-stakes deployments of speaker verification. <|reference_end|>", "<|reference_start|> Speaker recognition based on deep learning: An overview: <|reference_end|>", "<|reference_start|> Generalized End-to-End Loss for Speaker Verification: In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function. Unlike TE2E, the GE2E loss function updates the network in a way that emphasizes examples that are difficult to verify at each step of the training process. Additionally, the GE2E loss does not require an initial stage of example selection. With these properties, our model with the new loss function decreases speaker verification EER by more than 10%, while reducing the training time by 60% at the same time. We also introduce the MultiReader technique, which allows us to do domain adaptation - training a more accurate model that supports multiple keywords (i.e. \"OK Google\" and \"Hey Google\") as well as multiple dialects. <|reference_end|>", "<|reference_start|> Generating Master Faces for Use in Performing Wolf Attacks on Face Recognition Systems: Due to its convenience, biometric authentication, especial face authentication, has become increasingly mainstream and thus is now a prime target for attackers. Presentation attacks and face morphing are typical types of attack. Previous research has shown that finger-vein- and fingerprint-based authentication methods are susceptible to wolf attacks, in which a wolf sample matches many enrolled user templates. In this work, we demonstrated that wolf (generic) faces, which we call \"master faces,\" can also compromise face recognition systems and that the master face concept can be generalized in some cases. Motivated by recent similar work in the fingerprint domain, we generated high-quality master faces by using the state-of-the-art face generator StyleGAN in a process called latent variable evolution. Experiments demonstrated that even attackers with limited resources using only pre-trained models available on the Internet can initiate master face attacks. The results, in addition to demonstrating performance from the attacker's point of view, can also be used to clarify and improve the performance of face recognition systems and harden face authentication systems. <|reference_end|>" ]
[ 21, 23, 35, 58 ]
{"<|cite_1|>": "arxiv-340885", "<|cite_2|>": "ss-1939087", "<|cite_3|>": "ss-1220289", "<|cite_4|>": "ss-1203842", "<|multi_cite_5_2|>": "ss-930354", "<|cite_6|>": "arxiv-340885", "<|cite_7|>": "ss-699452", "<|multi_cite_8_1|>": "ss-1541119", "<|multi_cite_8_2|>": "ss-1251123", "<|multi_cite_9_1|>": "ss-1081771", "<|multi_cite_9_2|>": "arxiv-197173", "<|multi_cite_9_3|>": "ss-1220290", "<|multi_cite_10_1|>": "arxiv-84644", "<|multi_cite_10_2|>": "ss-849300", "<|cite_11|>": "ss-1220291", "<|multi_cite_12_1|>": "ss-1250664", "<|multi_cite_12_2|>": "ss-2553839", "<|cite_13|>": "arxiv-271967", "<|cite_14|>": "ss-1437247", "<|cite_15|>": "ss-805363", "<|cite_16|>": "arxiv-54350", "<|cite_17|>": "ss-1359657", "<|cite_18|>": "ss-1203842", "<|cite_19|>": "ss-1109456", "<|cite_20|>": "ss-706060", "<|cite_21|>": "ss-1032707", "<|cite_22|>": "ss-1203842", "<|cite_23|>": "ss-1220292", "<|multi_cite_24_1|>": "ss-865081", "<|multi_cite_24_2|>": "ss-1220293", "<|cite_25|>": "arxiv-167897", "<|cite_26|>": "ss-1279418", "<|multi_cite_27_1|>": "arxiv-88870", "<|multi_cite_27_2|>": "ss-1220294", "<|multi_cite_27_3|>": "arxiv-229156", "<|cite_28|>": "arxiv-138454", "<|multi_cite_29_1|>": "arxiv-229156", "<|multi_cite_29_2|>": "ss-1358911", "<|multi_cite_30_1|>": "ss-895607", "<|multi_cite_30_2|>": "ss-1519604", "<|cite_31|>": "arxiv-229156", "<|cite_32|>": "ss-988790", "<|cite_33|>": "arxiv-193048", "<|cite_34|>": "arxiv-193048", "<|cite_35|>": "ss-805363", "<|cite_36|>": "ss-1538095", "<|cite_37|>": "arxiv-144734", "<|cite_38|>": "arxiv-196638", "<|cite_39|>": "arxiv-203477", "<|cite_40|>": "ss-699452", "<|cite_41|>": "ss-1251123", "<|cite_42|>": "arxiv-266923", "<|cite_43|>": "ss-1220295", "<|cite_44|>": "arxiv-232584", "<|cite_45|>": "ss-1246854", "<|cite_46|>": "ss-1437247", "<|cite_47|>": "ss-1250664", "<|cite_48|>": "ss-1250664", "<|cite_49|>": "arxiv-271967", "<|cite_50|>": "ss-1250664", "<|cite_51|>": "ss-2553839", "<|cite_52|>": "arxiv-271967", "<|cite_53|>": "ss-1359657", "<|cite_54|>": "ss-1279418"}
2210.17321
<|paper_start|> Title: Dominator Coloring and CD Coloring in Almost Cluster Graphs Abstract: Dominator Coloring and CD Coloring in Almost Cluster Graphs: In this paper, we study two popular variants of Graph Coloring -- Dominator Coloring and CD Coloring. In both problems, we are given a graph $G$ and a natural number $\ell$ as input and the goal is to properly color the vertices with at most $\ell$ colors with specific constraints. In Dominator Coloring, we require for each $v \in V(G)$, a color $c$ such that $v$ dominates all vertices colored $c$. In CD Coloring, we require for each color $c$, a $v \in V(G)$ which dominates all vertices colored $c$. These problems, defined due to their applications in social and genetic networks, have been studied extensively in the last 15 years. While it is known that both problems are fixed-parameter tractable (FPT) when parameterized by $(t,\ell)$ where $t$ is the treewidth of $G$, we consider strictly structural parameterizations which naturally arise out of the problems' applications. We prove that Dominator Coloring is FPT when parameterized by the size of a graph's cluster vertex deletion (CVD) set and that CD Coloring is FPT parameterized by CVD set size plus the number of remaining cliques. En route, we design simpler and faster FPT algorithms when the problems are parameterized by the size of a graph's twin cover, a special CVD set. When the parameter is the size of a graph's clique modulator, we design a randomized single-exponential time algorithm for the problems. These algorithms use an inclusion-exclusion based polynomial sieving technique and add to the growing number of applications using this powerful algebraic technique. Introduction \noindent Graphs motivated by applications in bio-informatics, social networks, and machine learning regularly define edges between data points based on some notion of similarity. As a consequence, we are often interested in how ``close" a given graph is to a (special type of) \textit{cluster graph} -- a graph where every component is a clique. A popular measure of this ``closeness" is the \textit{cluster editing distance}. A graph $G$ has cluster-editing distance $k$ if $k$ is the smallest number such that there exists a set of $k$ edges whose addition to or deletion from $G$ results in a cluster graph. As an introduction to the extensive literature surrounding this parameter, we refer the reader to <|cite_start|> (Reference: Going weighted: Parameterized algorithms for cluster editing: ) <|cite_end|> <|cite_start|> (Reference: A more effective linear kernelization for cluster editing: ) <|cite_end|> <|cite_start|> (Reference: Exact Algorithms for Cluster Editing: Evaluation and Experiments: ) <|cite_end|> <|cite_start|> (Reference: Tight bounds for parameterized complexity of cluster editing with a small number of clusters: In the Correlation Clustering problem, also known as Cluster Editing, we are given an undirected graph G and a positive integer k; the task is to decide whether G can be transformed into a cluster graph, i.e., a disjoint union of cliques, by changing at most k adjacencies, that is, by adding or deleting at most k edges. The motivation of the problem stems from various tasks in computational biology (Ben-Dor et al., Journal of Computational Biology 1999) and machine learning (Bansal et al., Machine Learning 2004).
Although in general Correlation Clustering is APX-hard (Charikar et al., FOCS 2003), the version of the problem where the number of cliques may not exceed a prescribed constant p admits a PTAS (Giotis and Guruswami, SODA 2006). We study the parameterized complexity of Correlation Clustering with this restriction on the number of cliques to be created. We give an algorithm that - in time O(2^{O(sqrt{pk})} + n+m) decides whether a graph G on n vertices and m edges can be transformed into a cluster graph with exactly p cliques by changing at most k adjacencies. We complement these algorithmic findings by the following, surprisingly tight lower bound on the asymptotic behavior of our algorithm. We show that unless the Exponential Time Hypothesis (ETH) fails - for any constant 0 <= sigma <= 1, there is p = Theta(k^sigma) such that there is no algorithm deciding in time 2^{o(sqrt{pk})} n^{O(1)} whether an n-vertex graph G can be transformed into a cluster graph with at most p cliques by changing at most k adjacencies. Thus, our upper and lower bounds provide an asymptotically tight analysis of the multivariate parameterized complexity of the problem for the whole range of values of p from constant to a linear function of k.) <|cite_end|>. Another popular parameter of this type is the \textit{cluster vertex deletion set size} (CVD set size) <|cite_start|> (Reference: Fixed-Parameter Algorithms for Cluster Vertex Deletion: ) <|cite_end|> <|cite_start|> (Reference: Cluster Vertex Deletion: A Parameterization between Vertex Cover and Clique-Width: ) <|cite_end|> <|cite_start|> (Reference: FPT: To welcome next year's 'green' World Expo, and in line with the national priority of developing a 'clean, energy-saving, and environmentally friendly' urban public transport system, Shanghai implemented the National IV exhaust emission standard for some motor vehicles on November 1 this year, covering the bus and freight sectors, which mainly use diesel and natural-gas engines.) <|cite_end|> <|cite_start|> (Reference: Structural Parameterizations of Dominating Set Variants: ) <|cite_end|>. A CVD set in a graph is a subset of vertices whose deletion leaves a cluster graph. The CVD set size of a graph is the size of a smallest CVD set. Note that a graph with cluster-editing distance $k$ has a CVD set of size $2k$. Thus, CVD set size is a smaller parameter than cluster-editing distance. In this paper, we use CVD set size to study the (parameterized) complexity of two variants of \Col{} -- \DomCol and \CDCol. A \textit{coloring} of a graph $G$ is a function $\rchi \colon V(G) \to C$, where $C$ is a set of \textit{colors}. A \textit{proper coloring} of $G$ is a coloring of $G$ such that $\rchi(u) \neq \rchi(v)$ for all $(u,v) \in E(G)$. The set of all vertices which are colored $c$, for a $c \in C$, is called the \textit{color class} $c$. We sometimes refer to the color $c$ itself as a color class. We let $|\rchi|$ denote $|\im{\rchi}|$, the size of the image of $\rchi$. A vertex $v \in V(G)$ \textit{dominates} $S \subseteq V(G)$ if $S \subseteq \closedneighbour{G}{v}$. A \textit{\domcol} $\rchi$ of $G$ is a proper coloring of the graph such that for all $v \in V(G)$, $v$ dominates a color class $c \in \im{\rchi}$. A \textit{\classdcol} $\rchi$ of $G$ is a proper coloring of the graph such that for all $c \in \im{\rchi}$ there exists a $v \in V(G)$ such that $v$ dominates all vertices in the color class $c$. We are now ready to define our problems of interest.
We are now ready to define our problems of interest. \defproblem{\textsc{\DomCol} (\DC)} {A graph $G$; an integer $\ell$} {Does there exist a \domcol{} $\rchi$ of $G$ with $|\rchi|\leq \ell$?} \defproblem{\textsc{\CDCol} (\CDC)} {A graph $G$; an integer $\ell$} {Does there exist a \classdcol{} $\rchi$ of $G$ with $|\rchi|\leq \ell$?} We use $\DCinstance$ to denote an instance of both these problems since it will be clear from context which problem we are referring to. While both problems have a rich theoretical history (see \Cref{relatedwork}), \CDC has garnered renewed interest due to its practical applications in social networks and genetic networks -- the problem is equivalent to finding the minimum number of (i) \textit{stranger groups} with a common \textit{friend} in social network graphs <|cite_start|> (Reference: The Dominated Coloring Problem and Its Application: ) <|cite_end|>; and (ii) \textit{gene groups} that do not directly regulate each other but are regulated by a common gene in genetic networks <|cite_start|> (Reference: Dominated and dominator colorings over (edge) corona and hierarchical products: ) <|cite_end|>. \subsection{Notations}\label{Notations} \subsubsection{Graph Notations} Let $G$ be a graph. We use $V(G)$ and $E(G)$ to denote the set of vertices and the set of edges of $G$, respectively. Throughout the paper we use $n$ to denote $|V(G)|$. For a vertex $v$, we use $\openneighbour{G}{v}$ to denote the set of its neighbors, and $\closedneighbour{G}{v}$ is defined to be $\openneighbour{G}{v} \cup \{v\}$. For any graph $G$ and a set of vertices $M \subseteq V(G)$, we denote the subgraph of $G$ induced by $M$ by $G[M]$. We let $G - M$, for a $M \subseteq V(G)$, denote $G[V(G) \setminus M]$. A \textit{matching} $\M$ is a subset of edges with no common endpoints. If $(u,v) \in \M$, we let $\M(u) = v$. Most of the graph-theoretic symbols and notation we use are standard and taken from <|cite_start|> (Reference: Graph Theory, 4th Edition: ) <|cite_end|>. \subsubsection{Parameterized Complexity and Algorithms.} The goal of parameterized complexity is to find ways of solving \nph problems more efficiently than brute force: here the aim is to restrict the combinatorial explosion to a parameter that is hopefully much smaller than the input size. Formally, a {\em parameterization} of a problem assigns a positive integer parameter $k$ to each input instance, and we say that a parameterized problem is {\em fixed-parameter tractable} (\fpt) if there is an algorithm that solves the problem in time $f(k)\cdot \vert I \vert ^{O(1)}$, where $|I|$ is the size of the input and $f$ is an arbitrary computable function that depends only on the parameter $k$. We use $\ordernoinput{f(k)}$ to denote the running time of such an algorithm. Such an algorithm is called an \fpt algorithm and such a running time is called an \fpt running time. There is also an accompanying theory of parameterized intractability with which one can identify parameterized problems that are unlikely to admit \fpt algorithms; such results are typically proved by showing that the problem is \wih. We refer the interested reader to books such as <|cite_start|> (Reference: Parameterized Algorithms: ) <|cite_end|> <|cite_start|> (Reference: Kernelization : Theory of Parameterized Preprocessing: ) <|cite_end|> for an introduction to the theory of parameterized algorithms.
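As a concrete illustration of this definition, an algorithm running in time $2^k \cdot |I|^2$ is an \fpt algorithm, whereas one running in time $|I|^k$ is not, even though both run in polynomial time for every fixed value of $k$.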
\subsection{Related Work} \label{relatedwork} \DC was introduced by Gera \textit{et al.} in 2006 <|cite_start|> (Reference: Dominator colorings and safe clique partitions: Given a graph G, the dominator coloring problem seeks a proper coloring of G with the additional property that every vertex in the graph dominates an entire color class. The safe clique partition problem seeks a partition of the vertices of a graph into cliques with the additional property that for each vertex v, there is a clique that has no element in the open neighborhood of v. We typically seek to minimize the number of color classes or cliques used, respectively. In this paper, we study these two problems and consider the relationship between them.) <|cite_end|> while \CDC was introduced by Merouane \textit{et al.} in 2012 <|cite_start|> (Reference: Dominated Colorings of Graphs: ) <|cite_end|> (the problem was termed \textsc{Dominated Coloring} there). These papers proved that \DC and \CDC are \nph (even for a fixed $\ell \geq 4$). Unlike \Col, both \DC and \CDC can be solved in polynomial time when $\ell=3$ <|cite_start|> (Reference: Dominator Colorings in Some Classes of Graphs: ) <|cite_end|> <|cite_start|> (Reference: Dominated Colorings of Graphs: ) <|cite_end|>. These problems, which marry two of the most well-studied problems in graph theory -- \Col and \DS, have been studied in several papers in the last 15 years. Results in these papers can be broadly categorized into two groups. First, there have been several crucial results which establish lower and upper bounds on the size of an optimal \domcol and \classdcol of graphs belonging to special graph classes. See, for example, <|cite_start|> (Reference: On the dominator colorings in bipartite graphs: A graph has a dominator coloring if it has a proper coloring in which each vertex of the graph dominates every vertex of some color class. The dominator chromatic number $\chi_d(G)$ is the minimum number of color classes in a dominator coloring of a graph G. In this paper we study the dominator chromatic number for the hypercube, $Q_n = Q_{n-1} \times K_2$ (with $Q_1 \cong P_2$, $n \geq 2$), and more generally for bipartite graphs. We then conclude it with open questions for further research) <|cite_end|> <|cite_start|> (Reference: Algorithmic Aspects of Dominator Colorings in Graphs: ) <|cite_end|> <|cite_start|> (Reference: On the dominator coloring in proper interval graphs and block graphs: In a graph G = (V,E), a vertex v dominates a vertex w if either v = w or v is adjacent to w. A subset of vertex set V that dominates all the vertices of G is called a dominating set of graph G. The minimum cardinality of a dominating set of G is called the domination number of G and is denoted by γ(G). A proper coloring of a graph G is an assignment of colors to the vertices of G such that any two adjacent vertices get different colors. The minimum number of colors required for a proper coloring of G is called the chromatic number of G and is denoted by χ(G). A dominator coloring of a graph G is a proper coloring of the vertices of G such that every vertex dominates all the vertices of at least one color class. The minimum number of colors required for a dominator coloring of G is called the dominator chromatic number of G and is denoted by χd(G). In this paper, we study the dominator chromatic number for the proper interval graphs and block graphs.
We show that every proper interval graph G satisfies χ(G) + γ(G) − 2 ≤ χd(G) ≤ χ(G) + γ(G), and these bounds are sharp. For a block graph G, where one of the end block is of maximum size, we show that χ(G) + γ(G) − 1 ≤ χd(G) ≤ χ(G) + γ(G). We also characterize the block graphs with an end block of maximum size and attaining the lower bound.) <|cite_end|> <|cite_start|> (Reference: Total Dominator Colorings and Total Domination in Graphs: ) <|cite_end|> <|cite_start|> (Reference: On some domination colorings of graphs: ) <|cite_end|> <|cite_start|> (Reference: Dominator Colorings of Certain Cartesian Products of Paths and Cycles: ) <|cite_end|> for results on \DC and <|cite_start|> (Reference: The Dominated Coloring Problem and Its Application: ) <|cite_end|> <|cite_start|> (Reference: Dominated Colorings of Graphs: ) <|cite_end|> <|cite_start|> (Reference: On some domination colorings of graphs: ) <|cite_end|> <|cite_start|> (Reference: Dominated and dominator colorings over (edge) corona and hierarchical products: ) <|cite_end|> for results on \CDC. The second group (seemingly sparser) consists of algorithmic results on these two problems. Even for simple graph classes such as trees, algorithmic results have been hard to obtain -- indeed, after Gera \textit{et al.} showed that \DC can be solved in constant time for paths in <|cite_start|> (Reference: Dominator colorings and safe clique partitions: Given a graph G, the dominator coloring problem seeks a proper coloring of G with the additional property that every vertex in the graph dominates an entire color class. The safe clique partition problem seeks a partition of the vertices of a graph into cliques with the additional property that for each vertex v, there is a clique that has no element in the open neighborhood of v. We typically seek to minimize the number of color classes or cliques used, respectively. In this paper, we study these two problems and consider the relationship between them.) <|cite_end|>, it took close to a decade and incremental works <|cite_start|> (Reference: Dominator Colorings in Some Classes of Graphs: ) <|cite_end|> <|cite_start|> (Reference: On the dominator colorings in trees: In a graph G, a vertex is said to dominate itself and all its neighbors. A dominating set of a graph G is a subset of vertices that dominates every vertex of G. The domination number γ(G) is the minimum cardinality of a dominating set of G. A proper coloring of a graph G is a function from the set of vertices of the graph to a set of colors such that any two adjacent vertices have different colors. A dominator coloring of a graph G is a proper coloring such that every vertex of V dominates all vertices of at least one color class (possibly its own class). The dominator chromatic number χd(G) is the minimum number of color classes in a dominator coloring of G. Gera showed that every nontrivial tree T satisfies γ(T ) + 1 ≤ χd(T ) ≤ γ(T ) + 2. In this note we characterize nontrivial trees T attaining each bound.) <|cite_end|> before a polynomial-time algorithm was developed for trees in <|cite_start|> (Reference: An algorithm for the dominator chromatic number of a tree: ) <|cite_end|>! It is still unknown if \DC restricted to forests is polynomial-time solvable.
While \DC and \CDC seem extremely similar on the surface, we note a striking dichotomy in complexity results involving the two problems -- \DC is \nph when restricted to \textit{claw-free graphs} while \CDC is polynomial-time solvable for the same graph class <|cite_start|> (Reference: On some domination colorings of graphs: ) <|cite_end|>. The parameterized complexities of \DC and \CDC were first explored by Arumugam \textit{et al.} in 2011 <|cite_start|> (Reference: Algorithmic Aspects of Dominator Colorings in Graphs: ) <|cite_end|> and by Krithika \textit{et al.} in 2021 <|cite_start|> (Reference: Parameterized and Exact Algorithms for Class Domination Coloring: A class domination coloring (also called cd-Coloring or dominated coloring) of a graph is a proper coloring in which every color class is contained in the neighbourhood of some vertex. The minimum number of colors required for any cd-coloring of $G$, denoted by $\chi_{cd}(G)$, is called the class domination chromatic number (cd-chromatic number) of $G$. In this work, we consider two problems associated with the cd-coloring of a graph in the context of exact exponential-time algorithms and parameterized complexity. (1) Given a graph $G$ on $n$ vertices, find its cd-chromatic number. (2) Given a graph $G$ and integers $k$ and $q$, can we delete at most $k$ vertices such that the cd-chromatic number of the resulting graph is at most $q$? For the first problem, we give an exact algorithm with running time $\Oh(2^n n^4 \log n)$. Also, we show that the problem is \FPT\ with respect to the number $q$ of colors as the parameter on chordal graphs. On graphs of girth at least 5, we show that the problem also admits a kernel with $\Oh(q^3)$ vertices. For the second (deletion) problem, we show \NP-hardness for each $q \geq 2$. Further, on split graphs, we show that the problem is \NP-hard if $q$ is a part of the input and \FPT\ with respect to $k$ and $q$ as combined parameters. As recognizing graphs with cd-chromatic number at most $q$ is \NP-hard in general for $q \geq 4$, the deletion problem is unlikely to be \FPT\ when parameterized by the size of the deletion set on general graphs. We show fixed parameter tractability for $q \in \{2,3\}$ using the known algorithms for finding a vertex cover and an odd cycle transversal as subroutines.) <|cite_end|>, respectively. The authors expressed the problems in \textit{Monadic Second-Order Logic} (MSOL) and used a theorem due to Courcelle and Mosbah <|cite_start|> (Reference: Monadic Second-Order Evaluations on Tree-Decomposable Graphs: ) <|cite_end|> to prove that \DC and \CDC parameterized by $(t,\ell)$, where $t$ is the \textit{treewidth} of the input graph, are \fpt. Their expression of these problems in MSOL immediately also shows (by <|cite_start|> (Reference: Upper bounds to the clique width of graphs: ) <|cite_end|>) that \DC and \CDC parameterized by $(w,\ell)$, where $w$ is the \textit{clique-width} of the input graph, are \fpt. However, both problems have remained unexplored when viewed through the lens of other \textit{structural parameters} that measure the distance (commonly vertex deletion) from a tractable graph class. Such parameters have become increasingly popular in the world of parameterized algorithms since they are usually small in practice.
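For intuition, we sketch one possible MSOL encoding of the existence of a \domcol{} with at most $\ell$ color classes; this is only meant to convey the flavor of such encodings and is not necessarily the exact formula used in the works cited above:
\begin{align*}
\exists C_1 \cdots \exists C_\ell \Big[ \, & \mathrm{Partition}(C_1, \ldots, C_\ell) \,\wedge\, \bigwedge_{j=1}^{\ell} \forall u \, \forall v \, \big( (u \in C_j \wedge v \in C_j) \rightarrow \neg\,\mathrm{adj}(u,v) \big) \\
{}\wedge\, & \forall v \, \bigvee_{j=1}^{\ell} \Big( \exists w \, (w \in C_j) \,\wedge\, \forall u \, \big( u \in C_j \rightarrow (u = v \vee \mathrm{adj}(u,v)) \big) \Big) \, \Big],
\end{align*}
where $\mathrm{Partition}(C_1, \ldots, C_\ell)$ abbreviates the (MSOL-expressible) statement that every vertex belongs to exactly one $C_j$. The first conjunct inside the brackets makes every color class independent (so the coloring is proper), and the second asserts that every vertex dominates some nonempty class. Observe that the length of this sentence grows with $\ell$, which is one reason the parameterization is by the pair $(t,\ell)$ rather than by $t$ alone.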
We refer the interested reader to the survey by Fellows \textit{et al.} for an overview of structural parameterization <|cite_start|> (Reference: Towards fully multivariate algorithmics: Parameter ecology and the deconstruction of computational complexity: ) <|cite_end|> and to <|cite_start|> (Reference: Data Reduction for Graph Coloring Problems: This paper studies the kernelization complexity of graph coloring problems with respect to certain structural parameterizations of the input instances. We are interested in how well polynomial-time data reduction can provably shrink instances of coloring problems, in terms of the chosen parameter. It is well known that deciding 3-colorability is already NP-complete, hence parameterizing by the requested number of colors is not fruitful. Instead, we pick up on a research thread initiated by Cai (DAM, 2003) who studied coloring problems parameterized by the modification distance of the input graph to a graph class on which coloring is polynomial-time solvable; for example parameterizing by the number k of vertex-deletions needed to make the graph chordal. We obtain various upper and lower bounds for kernels of such parameterizations of q-Coloring, complementing Cai's study of the time complexity with respect to these parameters. Our results show that the existence of polynomial kernels for q-Coloring parameterized by the vertex-deletion distance to a graph class F is strongly related to the existence of a function f(q) which bounds the number of vertices which are needed to preserve the NO-answer to an instance of q-List-Coloring on F.) <|cite_end|> <|cite_start|> (Reference: Structural Parameterizations of Dominating Set Variants: ) <|cite_end|> for its use in studying \Col and \DS. Our paper initiates the study of structural parameterizations of \DC and \CDC. \subsection{Our Results, Techniques, and Organization of the Paper} \label{our results} As a graph with a vertex cover of bounded size has bounded treewidth, using results from <|cite_start|> (Reference: Algorithmic Aspects of Dominator Colorings in Graphs: ) <|cite_end|> <|cite_start|> (Reference: Parameterized and Exact Algorithms for Class Domination Coloring: A class domination coloring (also called cd-Coloring or dominated coloring) of a graph is a proper coloring in which every color class is contained in the neighbourhood of some vertex. The minimum number of colors required for any cd-coloring of $G$, denoted by $\chi_{cd}(G)$, is called the class domination chromatic number (cd-chromatic number) of $G$. In this work, we consider two problems associated with the cd-coloring of a graph in the context of exact exponential-time algorithms and parameterized complexity. (1) Given a graph $G$ on $n$ vertices, find its cd-chromatic number. (2) Given a graph $G$ and integers $k$ and $q$, can we delete at most $k$ vertices such that the cd-chromatic number of the resulting graph is at most $q$? For the first problem, we give an exact algorithm with running time $\Oh(2^n n^4 \log n)$. Also, we show that the problem is \FPT\ with respect to the number $q$ of colors as the parameter on chordal graphs. On graphs of girth at least 5, we show that the problem also admits a kernel with $\Oh(q^3)$ vertices. For the second (deletion) problem, we show \NP-hardness for each $q \geq 2$. Further, on split graphs, we show that the problem is \NP-hard if $q$ is a part of the input and \FPT\ with respect to $k$ and $q$ as combined parameters.
As recognizing graphs with cd-chromatic number at most $q$ is \NP-hard in general for $q \geq 4$, the deletion problem is unlikely to be \FPT\ when parameterized by the size of the deletion set on general graphs. We show fixed parameter tractability for $q \in \{2,3\}$ using the known algorithms for finding a vertex cover and an odd cycle transversal as subroutines.) <|cite_end|>, it is easy to show that \DC and \CDC are \fpt parameterized by a graph's \textit{vertex cover}. We give details in \Cref{VC}. Due to its general nature, the algorithm has a large runtime. We design faster algorithms for more natural (and smaller) parameters. Our main results are tabulated in \Cref{resulttable}. Our overarching result is the following: \DC parameterized by CVD set size is \fpt. This is shown through an involved branching algorithm. We also show that \CDC parameterized by $(k,q)$, where $q$ is the number of cliques that remain after deleting a CVD set of size $k$, is \fpt. We design much faster algorithms for larger parameters (i.e., special CVD sets). We consider two well-studied parameters of this type -- the size of a \textit{clique modulator} (CLQ) and that of a \textit{twin cover} (TC). In \Cref{clq}, we design randomized algorithms for the two problems which run in $\ordernoinput{c^k}$ time for a small constant $c$ when the parameter is CLQ. We show that our algorithm for \CDC is optimal unless the \textit{Exponential-Time Hypothesis} fails. These algorithms use an inclusion-exclusion based polynomial sieving technique in addition to an exact single-exponential algorithm to solve \DC that we develop in \Cref{exactalg}. We believe that this algebraic method holds great potential for use in other \Col variants. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|} \hline & Exact & CLQ & TC & CVD Set \\ \hline \DC & $\ordernopoly{4^n}$ $\clubsuit$ & $\ordernoinput{16^k}$ $\clubsuit$ & $\ordernoinput{2^{\order{k \log{k}}}}$ $\clubsuit$ & $\ordernoinput{2^{\order{2^k}}}$ $\clubsuit$ \\ \hline \CDC & $\ordernopoly{2^n}$ <|cite_start|> (Reference: Parameterized and Exact Algorithms for Class Domination Coloring: A class domination coloring (also called cd-Coloring or dominated coloring) of a graph is a proper coloring in which every color class is contained in the neighbourhood of some vertex. The minimum number of colors required for any cd-coloring of $G$, denoted by $\chi_{cd}(G)$, is called the class domination chromatic number (cd-chromatic number) of $G$. In this work, we consider two problems associated with the cd-coloring of a graph in the context of exact exponential-time algorithms and parameterized complexity. (1) Given a graph $G$ on $n$ vertices, find its cd-chromatic number. (2) Given a graph $G$ and integers $k$ and $q$, can we delete at most $k$ vertices such that the cd-chromatic number of the resulting graph is at most $q$? For the first problem, we give an exact algorithm with running time $\Oh(2^n n^4 \log n)$. Also, we show that the problem is \FPT\ with respect to the number $q$ of colors as the parameter on chordal graphs. On graphs of girth at least 5, we show that the problem also admits a kernel with $\Oh(q^3)$ vertices. For the second (deletion) problem, we show \NP-hardness for each $q \geq 2$. Further, on split graphs, we show that the problem is \NP-hard if $q$ is a part of the input and \FPT\ with respect to $k$ and $q$ as combined parameters.
As recognizing graphs with cd-chromatic number at most $q$ is \NP-hard in general for $q \geq 4$, the deletion problem is unlikely to be \FPT\ when parameterized by the size of the deletion set on general graphs. We show fixed parameter tractability for $q \in \{2,3\}$ using the known algorithms for finding a vertex cover and an odd cycle transversal as subroutines.) <|cite_end|> & $\ordernoinput{2^k}$ $\clubsuit$ & $\ordernoinput{2^{\order{k \log{k}}}}$ $\clubsuit$ & $\ordernoinput{2^{\order{2^k k q \log{q}}}}$ $\clubsuit$ \\ \hline \end{tabular} \caption{A summary of results. Results in cells marked with $\clubsuit$ are proved in this paper.} \label{resulttable} \end{table} We show that \DC and \CDC admit $\ordernoinput{2^{\order{k \log{k}}}}$-time algorithms when $k$ is the size of a twin cover in \Cref{tc}. For this purpose, we introduce the notion of a \pdc and a \pclassdcol and show that their corresponding extension problems (similar to \PreCol) can be solved quickly. We prove that the extension problem involving \DC can be solved using a relationship between \DC and \ListCol that we establish. On the other hand, we formulate the \CDC extension problem as an \textit{Integer Linear Program} which, in turn, can be solved using well-known methods. We then show that an optimal-sized \domcol (resp. \classdcol) can be obtained as an extension of a small number of \pdc{}s (resp. \pclassdcol{}s). Since optimal CVD sets, twin covers, and clique modulators can be found quickly <|cite_start|> (Reference: Fixed-Parameter Algorithms for Cluster Vertex Deletion: ) <|cite_end|> <|cite_start|> (Reference: Improving Vertex Cover as a Graph Parameter: Parameterized algorithms are often used to efficiently solve NP-hard problems on graphs. In this context, vertex cover is used as a powerful parameter for dealing with graph problems which are hard to solve even when parameterized by tree-width; however, the drawback of vertex cover is that bounding it severely restricts admissible graph classes. We introduce a generalization of vertex cover called twin-cover and show that FPT algorithms exist for a wide range of difficult problems when parameterized by twin-cover. The advantage of twin-cover over vertex cover is that it imposes a lesser restriction on the graph structure and attains low values even on dense graphs. Apart from introducing the parameter itself, this article provides a number of new FPT algorithms parameterized by twin-cover with a special emphasis on solving problems which are not in FPT even when parameterized by tree-width. It also shows that MS1 model checking can be done in elementary FPT time parameterized by twin-cover and discusses the field of kernelization.) <|cite_end|> <|cite_start|> (Reference: Parameterized Pre-coloring Extension and List Coloring Problems: Golovach, Paulusma and Song (Inf. Comput. 2014) asked to determine the parameterized complexity of the following problems parameterized by $k$: (1) Given a graph $G$, a clique modulator $D$ (a clique modulator is a set of vertices, whose removal results in a clique) of size $k$ for $G$, and a list $L(v)$ of colors for every $v\in V(G)$, decide whether $G$ has a proper list coloring; (2) Given a graph $G$, a clique modulator $D$ of size $k$ for $G$, and a pre-coloring $\lambda_P: X \rightarrow Q$ for $X \subseteq V(G),$ decide whether $\lambda_P$ can be extended to a proper coloring of $G$ using only colors from $Q.$ For Problem 1 we design an $O^*(2^k)$-time randomized algorithm and for Problem 2 we obtain a kernel with at most $3k$ vertices. Banik et al.
(IWOCA 2019) proved that the following problem is fixed-parameter tractable and asked whether it admits a polynomial kernel: Given a graph $G$, an integer $k$, and a list $L(v)$ of exactly $n-k$ colors for every $v \in V(G),$ decide whether there is a proper list coloring for $G.$ We obtain a kernel with $O(k^2)$ vertices and colors and a compression to a variation of the problem with $O(k)$ vertices and $O(k^2)$ colors.) <|cite_end|>, we implicitly assume that these sets are also given as input. \Cref{lowerbounds} establishes some lower bounds for \DC and \CDC with respect to these parameters. <|paper_end|>
[ "<|reference_start|> Algorithmic Aspects of Dominator Colorings in Graphs: <|reference_end|>", "<|reference_start|> On some domination colorings of graphs: <|reference_end|>", "<|reference_start|> On some domination colorings of graphs: <|reference_end|>", "<|reference_start|> Dominator colorings and safe clique partitions: Given a graph G, the dominator coloring problem seeks a proper coloring of G with the additional property that every vertex in the graph dominates an entire color class. The safe clique partition problem seeks a partition of the vertices of a graph into cliques with the additional property that for each vertex v, there is a clique that has no element in the open neighborhood of v. We typically seek to minimize the number of color classes or cliques used, respectively. In this paper, we study these two problems and consider the relationship between them. <|reference_end|>" ]
[ 18, 21, 25, 27 ]
{"<|multi_cite_1_1|>": "ss-2348960", "<|multi_cite_1_2|>": "ss-1279826", "<|multi_cite_1_3|>": "ss-1107956", "<|multi_cite_1_4|>": "ss-1394361", "<|multi_cite_2_1|>": "ss-1515707", "<|multi_cite_2_2|>": "ss-1001121", "<|multi_cite_2_3|>": "ss-1244718", "<|multi_cite_2_4|>": "ss-1190108", "<|cite_3|>": "ss-2288506", "<|cite_4|>": "ss-2288507", "<|cite_5|>": "ss-1530756", "<|multi_cite_6_1|>": "ss-1350598", "<|multi_cite_6_2|>": "ss-1935981", "<|cite_7|>": "ss-1452551", "<|cite_8|>": "ss-978189", "<|multi_cite_9_1|>": "ss-1452555", "<|multi_cite_9_2|>": "ss-978189", "<|multi_cite_10_1|>": "ss-1452553", "<|multi_cite_10_2|>": "ss-1635479", "<|multi_cite_10_3|>": "ss-2348961", "<|multi_cite_10_4|>": "ss-931298", "<|multi_cite_10_5|>": "ss-1452557", "<|multi_cite_10_6|>": "ss-1635477", "<|multi_cite_11_1|>": "ss-2288506", "<|multi_cite_11_2|>": "ss-978189", "<|multi_cite_11_3|>": "ss-1452557", "<|multi_cite_11_4|>": "ss-2288507", "<|cite_12|>": "ss-1452551", "<|multi_cite_13_1|>": "ss-1452555", "<|multi_cite_13_2|>": "ss-1452556", "<|cite_14|>": "ss-2348962", "<|cite_15|>": "ss-1452557", "<|cite_16|>": "ss-1635479", "<|cite_17|>": "arxiv-406259", "<|cite_18|>": "ss-981194", "<|cite_19|>": "ss-679048", "<|cite_20|>": "ss-1270348", "<|multi_cite_21_1|>": "arxiv-20908", "<|multi_cite_21_2|>": "ss-1190108", "<|multi_cite_22_1|>": "ss-1635479", "<|multi_cite_22_2|>": "arxiv-406259", "<|cite_23|>": "arxiv-406259", "<|multi_cite_24_1|>": "ss-1515707", "<|multi_cite_24_2|>": "ss-1897272", "<|multi_cite_24_3|>": "arxiv-216429"}
1910.02534
<|paper_start|> Title: The CEO problem with inter-block memory Abstract: The CEO problem with inter-block memory: An $n$-dimensional source with memory is observed by $K$ isolated encoders via parallel channels, who compress their observations to transmit to the decoder via noiseless rate-constrained links while leveraging their memory of the past. At each time instant, the decoder receives $K$ new codewords from the observers, combines them with the past received codewords, and produces a minimum-distortion estimate of the latest block of $n$ source symbols. This scenario extends the classical one-shot CEO problem to multiple rounds of communication with the communicators maintaining memory of the past. We extend the Berger-Tung inner and outer bounds to the scenario with inter-block memory, showing that the minimum asymptotically (as $n \to \infty$) achievable sum rate required to achieve a target distortion is bounded by minimal directed mutual information problems. For the Gauss-Markov source observed via $K$ parallel AWGN channels, we show that the inner bound is tight and solve the corresponding minimal directed mutual information problem, thereby establishing the minimum asymptotically achievable sum rate. Finally, we explicitly bound the rate loss due to a lack of communication among the observers; that bound is attained with equality in the case of identical observation channels. The general coding theorem is proved via a new nonasymptotic bound that uses stochastic likelihood coders and whose asymptotic analysis yields an extension of the Berger-Tung inner bound to the causal setting. The analysis of the Gaussian case is facilitated by reversing the channels of the observers. Introduction We set up the CEO (chief executive or estimation officer) problem with inter-block memory as follows. An information source $\{X_i\}$ outputs a block of length $n$, $X_i \in \mathcal A^n$, at time $i$; it is observed by $K$ encoders through $K$ noisy channels; at time $i$, the $k$th encoder sees $Y_i^k$ generated according to $P_{Y_i^k | X_1, \ldots, X_{i}, Y_{1}^k, \ldots, Y_{i-1}^k}$. See \figref{fig:system}. The encoders (observers) communicate to the decoder (CEO) via their separate noiseless rate-constrained links. At each time $i$, the $k$th observer forms a codeword based on the observations it has seen so far, i.e., $Y_1^k, \ldots, Y_i^k$. The decoder at time $i$ chooses $\hat X_i \in \hat {\mathcal A}^n$ based on the codewords it received thus far. The goal is to minimize the average distortion \begin{equation} \frac 1 t \sum_{i = 1}^{t} \E{ \sd (X_i, \hat X_i)} \label{eq:dintro}, \end{equation} where $t$ is the \emph{time horizon} over which the source is being tracked, and $\sd \colon \mathcal A^n \times \hat {\mathcal A}^n \mapsto \mathbb R_+$ is the distortion measure. Encoding and decoding operations leverage memory of the past but cannot look into the future. In this causal setting no delay is allowed in producing $\hat X_i$. \vspace{5pt} \begin{figure}[htp] \begin{center} \includegraphics[width=1\linewidth]{sys} \end{center} \caption[]{The CEO problem with inter-block memory: encoders and decoder keep the memory of their past observations.} \label{fig:system} \end{figure} In the classical setting with $t = 1$, the CEO problem was first introduced by Berger et al. <|cite_start|> (Reference: The ceo problem [multiterminal source coding]: We consider a new problem in multiterminal source coding motivated by the following decentralized communication/estimation task.
A firm's Chief Executive Officer (CEO) is interested in the data sequence $\{X(t)\}_{t=1}^{\infty}$ which cannot be observed directly, perhaps because it represents tactical decisions by a competing firm. The CEO deploys a team of L agents who observe independently corrupted versions of $\{X(t)\}_{t=1}^{\infty}$. Because $\{X(t)\}$ is only one among many pressing matters to which the CEO must attend, the combined data rate at which the agents may communicate information about their observations to the CEO is limited to, say, R bits per second. If the agents were permitted to confer and pool their data, then in the limit as $L \to \infty$ they usually would be able to smooth out their independent observation noises entirely. Then they could use their R bits per second to provide the CEO with a representation of $\{X(t)\}$ with fidelity D(R), where $D(\cdot)$ is the distortion-rate function of $\{X(t)\}$. In particular, with such data pooling D can be made arbitrarily small if R exceeds the entropy rate H of $\{X(t)\}$. Suppose, however, that the agents are not permitted to convene, Agent i having to send data based solely on his own noisy observations $\{Y_i(t)\}$. We show that then there does not exist a finite value of R for which even infinitely many agents can make D arbitrarily small. Furthermore, in this isolated-agents case we determine the asymptotic behavior of the minimal error frequency in the limit as L and then R tend to infinity.) <|cite_end|> for a finite-alphabet source. In the classical Gaussian CEO problem, a Gaussian source is observed via Gaussian channels and reproduced under mean-square error (MSE) distortion. The Gaussian CEO problem was studied by Viswanathan and Berger <|cite_start|> (Reference: The quadratic Gaussian CEO problem: The following problem in multiterminal source coding was introduced in Berger and Zhang (see 1994 IEEE International Symposium on Information Theory, Trondheim, Norway). A firm's CEO is interested in a data sequence $\{X(t)\}_{t=1}^{\infty}$ which cannot be observed directly. The CEO employs a team of L agents who observe independently corrupted versions of $\{X(t)\}_{t=1}^{\infty}$. Let R be the total data rate at which the agents may communicate information about their observations to the CEO. The agents are not allowed to convene. Berger et al. determine the asymptotic behavior of the minimal error frequency in the limit as L and R tend to infinity. Their result is for discrete memoryless source and observations. We consider a special case of the continuous source and observations problem. We assume that the source is an i.i.d. sequence of zero-mean Gaussian random variables ($\mathcal{N}(0,\sigma^2_X)$) and the observations are corrupted by identical independent memoryless Gaussian noise ($\mathcal{N}(0,\sigma^2_N)$). The CEO is interested in reconstructing the source with minimum mean squared error. We study the asymptotic behavior of the minimum achievable distortion in the limit as first L and then R tends to infinity.) <|cite_end|>, who proved an achievability bound on the rate-distortion dimension for the case of $K$ identical Gaussian channels, by Oohama <|cite_start|> (Reference: The Rate-Distortion Function for the Quadratic Gaussian CEO Problem: A new multiterminal source coding problem called the CEO problem was presented and investigated by Berger, Zhang, and Viswanathan.
Recently, Viswanathan and Berger have investigated an extension of the CEO problem to Gaussian sources and call it the quadratic Gaussian CEO problem. They considered this problem from a statistical viewpoint, deriving some interesting results. In this paper, we consider the quadratic Gaussian CEO problem from a standpoint of multiterminal rate-distortion theory. We regard the CEO problem as a certain multiterminal remote source coding problem with infinitely many separate encoders whose observations are conditionally independent if the remote source is given. This viewpoint leads us to a complete solution to the problem. We determine the tradeoff between the total amount of rate and squared distortion, deriving an explicit formula of the rate-distortion function. The derived function has the form of a sum of two nonnegative functions. One is a classical rate-distortion function for single Gaussian source and the other is a new rate-distortion function which dominates the performance of the system for a relatively small distortion. It follows immediately from our result that the conjecture of Viswanathan and Berger on the asymptotic behavior of minimum squared distortion for large rates is true.) <|cite_end|>, who derived the sum-rate rate-distortion region for that special case, by Prabhakaran et al. <|cite_start|> (Reference: Rate region of the quadratic gaussian ceo problem: In the so-called CEO problem, a hidden source random process is of interest to a central unit or the "CEO". But this process cannot be observed directly. L sensors or agents observe independently corrupted versions of the source. They encode their observations without cooperating with one another and send through rate constrained noiseless channels to the CEO. The problem was first studied by T. Berger et al. (1996) in the context of discrete memoryless sources. The quadratic Gaussian version of the problem was studied. The best result known to date is the characterization of the sum-rate when all the agents have the same quality of observations. Here we characterize the rate region for any number of agents without assuming that their quality of observations is the same. This is one of the few examples of multiterminal lossy source coding problems in which the rate region can be characterized completely.) <|cite_end|> and Oohama <|cite_start|> (Reference: Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder: In this paper, we consider the separate coding problem for L+1 correlated Gaussian memoryless sources. We deal with the case where L sources work as partial side information at the decoder for the reconstruction of the remaining source. The determination problem of the rate-distortion region for this system is the so-called many-help-one problem and it has been known as a highly challenging problem for almost 20 years. In this paper, we give a partial solution to this problem. We determine the rate-distortion region in the case where the L sources working as partial side information are conditionally independent if the remaining source we wish to reconstruct is given. The additive white Gaussian noise CEO problem is a special case of this. We also discuss the relation of the result to previous results of ours) <|cite_end|>, who determined the full Gaussian CEO rate region, by Chen et al.
<|cite_start|> (Reference: An upper bound on the sum-rate distortion function and its corresponding rate allocation schemes for the CEO problem: We consider a distributed sensor network in which several observations are communicated to the fusion center using limited transmission rate. The observation must be separately encoded so that the target can be estimated with minimum average distortion. We address the problem from an information theoretic perspective and establish the inner and outer bound of the admissible rate-distortion region. We derive an upper bound on the sum-rate distortion function and its corresponding rate allocation schemes by exploiting the contra-polymatroid structure of the achievable rate region. The quadratic Gaussian case is analyzed in detail and the optimal rate allocation schemes in the achievable rate region are characterized. We show that our upper bound on the sum-rate distortion function is tight for the quadratic Gaussian CEO problem in the case of same signal-to-noise ratios at the sensors.) <|cite_end|>, who proved that the minimum sum rate is achieved via waterfilling, by Behroozi and Soleymani <|cite_start|> (Reference: Optimal rate allocation in successively structured Gaussian CEO problem: We consider the Chief Executive Officer (CEO) problem in which agents encode their observations without collaborating with each other and send through rate constrained noiseless channels to a fusion center (FC). We apply the successive coding strategy into this problem and determine the closed-form expressions for optimal rates in order to achieve the minimum distortion under a sum-rate constraint. We show that the optimal sum-rate distortion performance for the Gaussian CEO problem is achievable using the successive coding strategy which is inherently a low complexity approach of obtaining a prescribed distortion. We also determine the optimal rate allocation region for the successively structured Gaussian CEO problem.) <|cite_end|> and by Chen and Berger <|cite_start|> (Reference: Successive Wyner-Ziv Coding Scheme and its Application to the Quadratic Gaussian CEO Problem: We introduce a distributed source coding scheme called successive Wyner-Ziv coding. We show that any point in the rate region of the quadratic Gaussian CEO problem can be achieved via the successive Wyner-Ziv coding. The concept of successive refinement in the single source coding is generalized to the distributed source coding scenario, which we refer to as distributed successive refinement. For the quadratic Gaussian CEO problem, we establish a necessary and sufficient condition for distributed successive refinement, where the successive Wyner-Ziv coding scheme plays an important role.) <|cite_end|>, who showed rate-optimal successive coding schemes. Wagner et al. <|cite_start|> (Reference: Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem: We determine the rate region of the quadratic Gaussian two-encoder source-coding problem. This rate region is achieved by a simple architecture that separates the analog and digital aspects of the compression. Furthermore, this architecture requires higher rates to send a Gaussian source than it does to send any other source with the same covariance. Our techniques can also be used to determine the sum rate of some generalizations of this classical problem.
Our approach involves coupling the problem to a quadratic Gaussian ``CEO problem.'') <|cite_end|> found the rate region of the distributed Gaussian lossy compression problem by coupling it to the Gaussian CEO problem. Wagner and Anantharam <|cite_start|> (Reference: An improved outer bound for multiterminal source coding: We prove a new outer bound on the rate-distortion region for the multiterminal source-coding problem. This bound subsumes the best outer bound in the literature and improves upon it strictly in some cases. The improved bound enables us to obtain a new, conclusive result for the binary erasure version of the "CEO problem." The bound recovers many of the converse results that have been established for special cases of the problem, including the recent one for the Gaussian two-encoder problem.) <|cite_end|> showed an outer bound to the rate region of the multiterminal source coding problem that is tighter than the Berger-Tung outer bound <|cite_start|> (Reference: Quasi Linear Codes: Application to point-to-point and multi-terminal source coding: A new ensemble of structured codes is introduced. These codes are called Quasi Linear Codes (QLC). The QLC's are constructed by taking subsets of linear codes. They have a looser structure compared to linear codes and are not closed under addition. We argue that these codes provide gains in terms of achievable Rate-Distortions (RD) in different multi-terminal source coding problems. We derive the necessary covering bounds for analyzing the performance of QLC's. We then consider the Multiple-Descriptions (MD) problem, and prove through an example that the application of QLC's gives an improved achievable RD region for this problem. Finally, we derive an inner bound to the achievable RD region for the general MD problem which strictly contains all of the previous known achievable regions.) <|cite_end|> <|cite_start|> (Reference: Secure Multiterminal Source Coding With Actions: This paper studies the secure multiterminal source coding problem with actions. In particular, one main encoder observes an independent and identically distributed (i.i.d.) source Xn and wishes to compress this source lossyly to the decoder. Another encoder observes the source Yn and wants to compress this source losslessly to the decoder. A passive eavesdropper having access to the side information Zn can observe the information bits sent by the main encoder. In this scenario, the decoder is allowed to choose actions affecting the correlated source Yn and the side information Zn. For this problem, we characterize the optimal rate-distortion-cost-leakage region for a discrete memoryless setting.) <|cite_end|>. Wang et al. <|cite_start|> (Reference: On the sum rate of Gaussian multiterminal source coding: new proofs and results: We show that the lower bound on the sum rate of the direct and indirect Gaussian multiterminal source coding problems can be derived in a unified manner by exploiting the semidefinite partial order of the distortion covariance matrices associated with the minimum mean squared error (MMSE) estimation and the so-called reduced optimal linear estimation, through which an intimate connection between the lower bound and the Berger-Tung upper bound is revealed. We give a new proof of the minimum sum rate of the indirect Gaussian multiterminal source coding problem (i.e., the Gaussian CEO problem). 
For the direct Gaussian multiterminal source coding problem, we derive a general lower bound on the sum rate and establish a set of sufficient conditions under which the lower bound coincides with the Berger-Tung upper bound. We show that the sufficient conditions are satisfied for a class of sources and distortion constraints; in particular, they hold for arbitrary positive definite source covariance matrices in the high-resolution regime. In contrast with the existing proofs, the new method does not rely on Shannon's entropy power inequality.) <|cite_end|> showed a simple converse on the sum rate of the vector Gaussian CEO problem. Concurrently, Ekrem and Ulukus <|cite_start|> (Reference: An Outer Bound for the Vector Gaussian CEO Problem: We study the vector Gaussian CEO problem, where there are an arbitrary number of agents each having a noisy observation of a vector Gaussian source. The goal of the agents is to describe the source to a central unit, which wants to reconstruct the source within a given distortion. The rate-distortion region of the vector Gaussian CEO problem is unknown in general. Here, we provide an outer bound for the rate-distortion region of the vector Gaussian CEO problem. We obtain our outer bound by evaluating an outer bound for the multi-terminal source coding problem by means of a technique relying on the de Bruijn identity and the properties of the Fisher information. Next, we show that our outer bound strictly improves upon the existing outer bounds for all system parameters. We show this strict improvement by providing a specific example, and showing that there exists a gap between our outer bound and the existing outer bounds. Although our outer bound improves upon the existing outer bounds, we show that our outer bound does not provide the exact rate-distortion region in general. To this end, we provide an example and show that the rate-distortion region is strictly contained in our outer bound for this example.) <|cite_end|> and Wang and Chen <|cite_start|> (Reference: Vector Gaussian multiterminal source coding: We derive an outer bound of the rate region of the vector Gaussian L -terminal CEO problem by establishing a lower bound on each supporting hyperplane of the rate region. To this end, we prove a new extremal inequality by exploiting the connection between differential entropy and Fisher information as well as some fundamental estimation-theoretic inequalities. It is shown that the outer bound matches the Berger-Tung inner bound in the high-resolution regime. We then derive a lower bound on each supporting hyperplane of the rate region of the direct vector Gaussian L -terminal source coding problem by coupling it with the CEO problem through a limiting argument. The tightness of this lower bound in the high-resolution regime and the weak-dependence regime is also proved.) <|cite_end|> showed an outer bound to the rate region of the vector Gaussian CEO problem that is tight in some cases and not tight in others and that particularizes the outer bound in <|cite_start|> (Reference: An improved outer bound for multiterminal source coding: We prove a new outer bound on the rate-distortion region for the multiterminal source-coding problem. This bound subsumes the best outer bound in the literature and improves upon it strictly in some cases. The improved bound enables us to obtain a new, conclusive result for the binary erasure version of the "CEO problem." 
The bound recovers many of the converse results that have been established for special cases of the problem, including the recent one for the Gaussian two-encoder problem.) <|cite_end|> to the Gaussian case. Courtade and Weissman <|cite_start|> (Reference: Multiterminal Source Coding under Logarithmic Loss: We consider the classical two-encoder multiterminal source coding problem where distortion is measured under logarithmic loss. We provide a single-letter characterization of the achievable rate distortion region for arbitrarily correlated sources with finite alphabets. In doing so, we also give the rate distortion region for the $m$-encoder CEO problem (also under logarithmic loss). Several applications and examples are given.) <|cite_end|> determined the rate-distortion regions of the distributed source coding problem and of the CEO problem under logarithmic loss. None of the above results directly apply to the tracking problem in \figref{fig:system} because of the past memory in encoding the $n$-blocks of observations and in producing $\hat X_i$ in \eqref{eq:dintro}, which imposes blockwise causality constraints on the coding process. The most basic scenario of source coding with causality constraints is that of a single observer directly seeing the information source. The causal rate-distortion function for the Gauss-Markov source was computed by Gorbunov and Pinsker. The link between the minimum attainable linear quadratic Gaussian (LQG) control cost and the causal rate-distortion function is elucidated in <|cite_start|> (Reference: Stochastic linear control over a communication channel: We examine linear stochastic control systems when there is a communication channel connecting the sensor to the controller. The problem consists of designing the channel encoder and decoder as well as the controller to satisfy some given control objectives. In particular, we examine the role communication has on the classical linear quadratic Gaussian problem. We give conditions under which the classical separation property between estimation and control holds and the certainty equivalent control law is optimal. We then present the sequential rate distortion framework. We present bounds on the achievable performance and show the inherent tradeoffs between control and communication costs. In particular, we show that optimal quadratic cost decomposes into two terms: A full knowledge cost and a sequential rate distortion cost.) <|cite_end|> <|cite_start|> (Reference: A Characterization of the Minimal Average Data Rate that Guarantees a Given Closed-Loop Performance Level: This paper studies networked control systems closed over noiseless digital channels. By focusing on noisy LTI plants with scalar-valued control inputs and sensor outputs, we derive an absolute lower bound on the minimal average data rate that allows one to achieve a prescribed level of stationary performance under Gaussianity assumptions. We also present a simple coding scheme that allows one to achieve average data rates that are at most 1.254 bits away from the derived lower bound, while satisfying the performance constraint. Our results are given in terms of the solution to a stationary signal-to-noise ratio minimization problem and builds upon a recently proposed framework to deal with average data rate constraints in feedback systems. A numerical example is presented to illustrate our findings.)
<|cite_end|> <|cite_start|> (Reference: Rate-cost tradeoffs in control: Consider a control problem with a communication channel connecting the observer of a linear stochastic system to the controller. The goal of the controller is to minimize a quadratic cost function in the state variables and control signal, known as the linear quadratic regulator (LQR). We study the fundamental tradeoff between the communication rate $r$ bits/sec and the expected cost $b$. We obtain a lower bound on a certain rate-cost function, which quantifies the minimum directed mutual information between the channel input and output that is compatible with a target LQR cost. The rate-cost function has operational significance in multiple scenarios of interest: among others, it allows us to lower-bound the minimum communication rate for fixed and variable length quantization, and for control over noisy channels. We derive an explicit lower bound to the rate-cost function, which applies to the vector, non-Gaussian, and partially observed systems, thereby extending and generalizing an earlier explicit expression for the scalar Gaussian system, due to Tatikonda el al. The bound applies as long as the differential entropy of the system noise is not $-\infty$. It can be closely approached by a simple lattice quantization scheme that only quantizes the innovation, that is, the difference between the controller's belief about the current state and the true state. Via a separation principle between control and communication, similar results hold for causal lossy compression of additive noise Markov sources. Apart from standard dynamic programming arguments, our technical approach leverages the Shannon lower bound, develops new estimates for data compression with coding memory, and uses some recent results on high resolution variable-length vector quantization to prove that the new converse bounds are tight.) <|cite_end|>. A semidefinite program to compute the causal rate-distortion function for vector Gauss-Markov sources is provided in <|cite_start|> (Reference: Semidefinite Programming Approach to Gaussian Sequential Rate-Distortion Trade-offs: Sequential rate-distortion (SRD) theory provides a framework for studying the fundamental trade-off between data-rate and data-quality in real-time communication systems. In this paper, we consider the SRD problem for multi-dimensional time-varying Gauss-Markov processes under mean-square distortion criteria. We first revisit the sensor-estimator separation principle, which asserts that considered SRD problem is equivalent to a joint sensor and estimator design problem in which data-rate of the sensor output is minimized while the estimator's performance satisfies the distortion criteria. We then show that the optimal joint design can be performed by semidefinite programming. A semidefinite representation of the corresponding SRD function is obtained. Implications of the obtained result in the context of zero-delay source coding theory and applications to networked control theory are also discussed.) <|cite_end|>. The remote Gaussian causal rate-distortion function, which corresponds to setting $K = 1$ in \figref{fig:system}, is computed in <|cite_start|> (Reference: Rate-cost tradeoffs in control: Consider a control problem with a communication channel connecting the observer of a linear stochastic system to the controller. The goal of the controller is to minimize a quadratic cost function in the state variables and control signal, known as the linear quadratic regulator (LQR). 
We study the fundamental tradeoff between the communication rate $r$ bits/sec and the expected cost $b$. We obtain a lower bound on a certain rate-cost function, which quantifies the minimum directed mutual information between the channel input and output that is compatible with a target LQR cost. The rate-cost function has operational significance in multiple scenarios of interest: among others, it allows us to lower-bound the minimum communication rate for fixed and variable length quantization, and for control over noisy channels. We derive an explicit lower bound to the rate-cost function, which applies to the vector, non-Gaussian, and partially observed systems, thereby extending and generalizing an earlier explicit expression for the scalar Gaussian system, due to Tatikonda el al. The bound applies as long as the differential entropy of the system noise is not $-\infty$. It can be closely approached by a simple lattice quantization scheme that only quantizes the innovation, that is, the difference between the controller's belief about the current state and the true state. Via a separation principle between control and communication, similar results hold for causal lossy compression of additive noise Markov sources. Apart from standard dynamic programming arguments, our technical approach leverages the Shannon lower bound, develops new estimates for data compression with coding memory, and uses some recent results on high resolution variable-length vector quantization to prove that the new converse bounds are tight.) <|cite_end|>. The causal rate-distortion function of the Gauss-Markov source with Gaussian side observation available at decoder (the causal counterpart of the Wyner-Ziv setting) is computed in <|cite_start|> (Reference: Rate-cost tradeoffs in scalar LQG control and tracking with side information: Consider a control problem in which a remote controller chooses its control action based on two kinds of information about the system state: the information it receives from the system via a rate-constrained feedback link, and side information - a noisy measurement of the system state it observes directly. The goal of the controller is to minimize a quadratic cost function in the state variables and control signal, known as the linear quadratic regulator (LQR). We study the fundamental tradeoff between the communication rate, the expected cost b and the quality of side information. Due to a separation principle between estimation and control, we focus on the tracking problem, where the goal is to track the system state rather than to control it. We introduce the causal rate-distortion function with side information at the decoder. It is expressed in terms of directed mutual information, and it extends the classical (noncausal) Wyner-Ziv rate-distortion function to real-time tracking problems with causality constraints and memory of the past at both encoder and decoder. We compute that function in the scalar linear Gaussian setting; we draw a link with the Kalman filter; we show that making side information available also at the encoder does not help to improve the optimal tradeoffs.) <|cite_end|> for the scalar source and in <|cite_start|> (Reference: The minimal directed information needed to improve the LQG cost: We study a linear quadratic Gaussian (LQG) control problem, in which a noisy observation of the system state is available to the controller. To lower the achievable LQG cost, we introduce an extra communication link from the system to the controller. 
We investigate the trade-off between the improved LQG cost and the consumed communication (information) resources that are measured with the conditional directed information. The objective is to minimize the directed information over all encoding-decoding policies subject to a constraint on the LQG cost. The main result is a semidefinite programming formulation for the optimization problem in the finite-horizion scenario where the dynamical system may have time-varying parameters. This result extends the seminal work by Tanaka et al., where the direct noisy measurement of the system state at the controller is assumed to be absent. As part of our derivation to show the optimality of an encoder that transmits a Gaussian measurement of the state, we show that the presence of the noisy measurements at the encoder can not reduce the minimal directed information, extending a prior result of Kostina and Hassibi to the vector case. Finally, we show that the results in the finite-horizon case can be extended to the infinite-horizon scenario when assuming a time-invariant system, but possibly a time-varying policy. We show that the solution for this optimization problem can be realized by a time-invariant policy whose parameters can be computed explicitly from a finite-dimensional semidefinite program.) <|cite_end|> in the vector source. That causal Wyner-Ziv setting can be viewed a special case of our causal CEO problem \eqref{eq:xi}, \eqref{eq:yik} with two observers, with the second observer enjoying an infinite rate. Stability of linear Gaussian systems with multiple isolated observers is investigated in <|cite_start|> (Reference: Stochastic stabilization of partially observed and multi-sensor systems driven by unbounded noise under fixed-rate information constraints: We investigate the stabilization of unstable multidimensional partially observed single-sensor and multi-sensor (single-controller) discrete-time linear systems driven by unbounded noise and controlled over discrete noiseless channels. Stability is achieved under fixed-rate communication requirements that are asymptotically tight in the limit of large sampling periods. Through the use of similarity transforms, sampling and random-time drift conditions we obtain a coding and control policy leading to the existence of a unique invariant distribution and finite second moment for the sampled state. We obtain tight necessary and sufficient conditions for the general multi-sensor case under an assumption related to the Jordan form structure of such systems. In the absence of this assumption, we give sufficient conditions for stabilization.) <|cite_end|>. The first contribution of this paper is an extension of the Berger-Tung inner and outer bounds <|cite_start|> (Reference: Quasi Linear Codes: Application to point-to-point and multi-terminal source coding: A new ensemble of structured codes is introduced. These codes are called Quasi Linear Codes (QLC). The QLC's are constructed by taking subsets of linear codes. They have a looser structure compared to linear codes and are not closed under addition. We argue that these codes provide gains in terms of achievable Rate-Distortions (RD) in different multi-terminal source coding problems. We derive the necessary covering bounds for analyzing the performance of QLC's. We then consider the Multiple-Descriptions (MD) problem, and prove through an example that the application of QLC's gives an improved achievable RD region for this problem. 
Finally, we derive an inner bound to the achievable RD region for the general MD problem which strictly contains all of the previous known achievable regions.) <|cite_end|> <|cite_start|> (Reference: Secure Multiterminal Source Coding With Actions: This paper studies the secure multiterminal source coding problem with actions. In particular, one main encoder observes an independent and identically distributed (i.i.d.) source Xn and wishes to compress this source lossyly to the decoder. Another encoder observes the source Yn and wants to compress this source losslessly to the decoder. A passive eavesdropper having access to the side information Zn can observe the information bits sent by the main encoder. In this scenario, the decoder is allowed to choose actions affecting the correlated source Yn and the side information Zn. For this problem, we characterize the optimal rate-distortion-cost-leakage region for a discrete memoryless setting.) <|cite_end|> to the distributed tracking setting of \figref{fig:system} that sandwich the minimum asymptotically achievable (as $n \to \infty$) sum rate $R_1 + \ldots + R_K$ required to achieve a given average distortion \eqref{eq:dintro}. Provided that the components of each $X_i \in \mathcal A^n$ are i.i.d. ($X_i$ can still depend on $X_1, \ldots, X_{i-1}$), the channels act on each of those components independently, and the distortion measure is separable, that minimum sum rate is bounded in terms of the directed mutual information from the encoders to the decoder. The converse (outer bound) follows via standard data processing and single-letterization arguments. To prove the achievability, we show a nonasymptotic bound for blockwise-causal distributed lossy source coding that can be viewed as an extension of the nonasymptotic Berger-Tung bound by Yassaee et al. <|cite_start|> (Reference: A Technique for Deriving One-Shot Achievability Results in Network Information Theory: This paper proposes a novel technique to prove a one-shot version of achievability results in network information theory. The technique is not based on covering and packing lemmas. In this technique, we use an stochastic encoder and decoder with a particular structure for coding that resembles both the ML and the joint-typicality coders. Although stochastic encoders and decoders do not usually enhance the capacity region, their use simplifies the analysis. The Jensen inequality lies at the heart of error analysis, which enables us to deal with the expectation of many terms coming from stochastic encoders and decoders at once. The technique is illustrated via several examples: point-to-point channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung, Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel coding over a MAC. Most of our one-shot results are new. The asymptotic forms of these expressions is the same as that of classical results. Our one-shot bounds in conjunction with multi-dimensional Berry-Essen CLT imply new results in the finite blocklength regime. In particular applying the one-shot result for the memoryless broadcast channel in the asymptotic case, we get the entire region of Marton's inner bound without any need for time-sharing.) <|cite_end|> <|cite_start|> (Reference: A Technique for Deriving One-Shot Achievability Results in Network Information Theory: This paper proposes a novel technique to prove a one-shot version of achievability results in network information theory.
The technique is not based on covering and packing lemmas. In this technique, we use an stochastic encoder and decoder with a particular structure for coding that resembles both the ML and the joint-typicality coders. Although stochastic encoders and decoders do not usually enhance the capacity region, their use simplifies the analysis. The Jensen inequality lies at the heart of error analysis, which enables us to deal with the expectation of many terms coming from stochastic encoders and decoders at once. The technique is illustrated via several examples: point-to-point channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung, Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel coding over a MAC. Most of our one-shot results are new. The asymptotic forms of these expressions is the same as that of classical results. Our one-shot bounds in conjunction with multi-dimensional Berry-Essen CLT imply new results in the finite blocklength regime. In particular applying the one-shot result for the memoryless broadcast channel in the asymptotic case, we get the entire region of Marton's inner bound without any need for time-sharing.) <|cite_end|> to the setting with $K > 2$ sources and $t > 1$ time instances. We view the horizon-$t$ causal coding problem as a multiterminal coding problem in which at each step coded side information from past steps is available, and we use a stochastic likelihood coder (SLC) by Yassaee et al. <|cite_start|> (Reference: A Technique for Deriving One-Shot Achievability Results in Network Information Theory: This paper proposes a novel technique to prove a one-shot version of achievability results in network information theory. The technique is not based on covering and packing lemmas. In this technique, we use an stochastic encoder and decoder with a particular structure for coding that resembles both the ML and the joint-typicality coders. Although stochastic encoders and decoders do not usually enhance the capacity region, their use simplifies the analysis. The Jensen inequality lies at the heart of error analysis, which enables us to deal with the expectation of many terms coming from stochastic encoders and decoders at once. The technique is illustrated via several examples: point-to-point channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung, Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel coding over a MAC. Most of our one-shot results are new. The asymptotic forms of these expressions is the same as that of classical results. Our one-shot bounds in conjunction with multi-dimensional Berry-Essen CLT imply new results in the finite blocklength regime. In particular applying the one-shot result for the memoryless broadcast channel in the asymptotic case, we get the entire region of Marton's inner bound without any need for time-sharing.) <|cite_end|> <|cite_start|> (Reference: A Technique for Deriving One-Shot Achievability Results in Network Information Theory: This paper proposes a novel technique to prove a one-shot version of achievability results in network information theory. The technique is not based on covering and packing lemmas. In this technique, we use an stochastic encoder and decoder with a particular structure for coding that resembles both the ML and the joint-typicality coders. Although stochastic encoders and decoders do not usually enhance the capacity region, their use simplifies the analysis. 
The Jensen inequality lies at the heart of error analysis, which enables us to deal with the expectation of many terms coming from stochastic encoders and decoders at once. The technique is illustrated via several examples: point-to-point channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung, Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel coding over a MAC. Most of our one-shot results are new. The asymptotic forms of these expressions is the same as that of classical results. Our one-shot bounds in conjunction with multi-dimensional Berry-Essen CLT imply new results in the finite blocklength regime. In particular applying the one-shot result for the memoryless broadcast channel in the asymptotic case, we get the entire region of Marton's inner bound without any need for time-sharing.) <|cite_end|> to perform encoding operations. The SLC-based encoder mimics the operation of the joint typicality encoder while admitting sharp nonasymptotic bounds on its performance. While the SLC-based decoder of <|cite_start|> (Reference: A Technique for Deriving One-Shot Achievability Results in Network Information Theory: This paper proposes a novel technique to prove a one-shot version of achievability results in network information theory. The technique is not based on covering and packing lemmas. In this technique, we use an stochastic encoder and decoder with a particular structure for coding that resembles both the ML and the joint-typicality coders. Although stochastic encoders and decoders do not usually enhance the capacity region, their use simplifies the analysis. The Jensen inequality lies at the heart of error analysis, which enables us to deal with the expectation of many terms coming from stochastic encoders and decoders at once. The technique is illustrated via several examples: point-to-point channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung, Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel coding over a MAC. Most of our one-shot results are new. The asymptotic forms of these expressions is the same as that of classical results. Our one-shot bounds in conjunction with multi-dimensional Berry-Essen CLT imply new results in the finite blocklength regime. In particular applying the one-shot result for the memoryless broadcast channel in the asymptotic case, we get the entire region of Marton's inner bound without any need for time-sharing.) <|cite_end|> <|cite_start|> (Reference: A Technique for Deriving One-Shot Achievability Results in Network Information Theory: This paper proposes a novel technique to prove a one-shot version of achievability results in network information theory. The technique is not based on covering and packing lemmas. In this technique, we use an stochastic encoder and decoder with a particular structure for coding that resembles both the ML and the joint-typicality coders. Although stochastic encoders and decoders do not usually enhance the capacity region, their use simplifies the analysis. The Jensen inequality lies at the heart of error analysis, which enables us to deal with the expectation of many terms coming from stochastic encoders and decoders at once. The technique is illustrated via several examples: point-to-point channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung, Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel coding over a MAC. Most of our one-shot results are new. 
The asymptotic forms of these expressions is the same as that of classical results. Our one-shot bounds in conjunction with multi-dimensional Berry-Essen CLT imply new results in the finite blocklength regime. In particular applying the one-shot result for the memoryless broadcast channel in the asymptotic case, we get the entire region of Marton's inner bound without any need for time-sharing.) <|cite_end|> is ill-suited to the case $K > 2$, we propose a novel decoder that falls into the class of generalized likelihood decoders <|cite_start|> (Reference: The generalized stochastic likelihood decoder: Random coding and expurgated bounds: The likelihood decoder is a stochastic decoder that selects the decoded message at random, using the posterior distribution of the true underlying message given the channel output. In this paper, we study a generalized version of this decoder, where the posterior is proportional to a general function that depends only on the joint empirical distribution of the output vector and the code word. This framework allows both mismatched versions and universal versions of the likelihood decoder, as well as the corresponding ordinary deterministic decoders, among many others. We provide a direct analysis method that yields the exact random coding exponent (as opposed to separate upper bounds and lower bounds that turn out to be compatible, which were derived earlier by Scarlett et al.). We also extend the result from pure channel coding to combined source and channel coding (random binning followed by random channel coding) with side information available to the decoder. Finally, returning to pure channel coding, we derive also an expurgated exponent for the stochastic likelihood decoder, which turns out to be at least as tight (and in some cases, strictly so) as the classical expurgated exponent of the maximum likelihood decoder, even though the stochastic likelihood decoder is suboptimal.) <|cite_end|> and uses $K$ different threshold tests depending on the point of the rate-distortion region the code is operating at. An asymptotic analysis of our nonasymptotic bound yields an extension of the Berger-Tung inner bound <|cite_start|> (Reference: Quasi Linear Codes: Application to point-to-point and multi-terminal source coding: A new ensemble of structured codes is introduced. These codes are called Quasi Linear Codes (QLC). The QLC's are constructed by taking subsets of linear codes. They have a looser structure compared to linear codes and are not closed under addition. We argue that these codes provide gains in terms of achievable Rate-Distortions (RD) in different multi-terminal source coding problems. We derive the necessary covering bounds for analyzing the performance of QLC's. We then consider the Multiple-Descriptions (MD) problem, and prove through an example that the application of QLC's gives an improved achievable RD region for this problem. Finally, we derive an inner bound to the achievable RD region for the general MD problem which strictly contains all of the previous known achievable regions.) <|cite_end|> <|cite_start|> (Reference: Secure Multiterminal Source Coding With Actions: This paper studies the secure multiterminal source coding problem with actions. In particular, one main encoder observes an independent and identically distributed (i.i.d.) source Xn and wishes to compress this source lossyly to the decoder. Another encoder observes the source Yn and wants to compress this source losslessly to the decoder. 
A passive eavesdropper having access to the side information Zn can observe the information bits sent by the main encoder. In this scenario, the decoder is allowed to choose actions affecting the correlated source Yn and the side information Zn. For this problem, we characterize the optimal rate-distortion-cost-leakage region for a discrete memoryless setting.) <|cite_end|> to the setting with inter-block memory. The second contribution of the paper is an explicit evaluation of the minimum sum rate for the causal Gaussian CEO problem. In that scenario, the source is an $n$-dimensional Gauss-Markov source, \begin{align} X_{i+1} &= a X_i + V_i, \label{eq:xi} \end{align} the $k$-th observer sees \begin{align} Y_{i}^k &= X_i + W_{i}^k, \quad k = 1, \ldots, K, \label{eq:yik} \end{align} where $X_1$ and $\{V_i, W_{i}^1, W_{i}^2, \ldots, W_{i}^K\}_{i = 1}^T$ are independent Gaussian vectors of length $n$; $V_i \sim \mathcal N(0, \sigma_{\mathsf V}^2 \mat I )$; $W_i^k \sim \mathcal N(0, \sigma_{\mathsf W_k}^2 \mat I)$. Note that different observation channels can have different noise powers. The distortion measure is normalized mean-square error (MSE) \begin{equation} \mathsf d \left(X_i, \hat X_i\right) = \frac 1 n \|X_i - \hat X_i\|^2. \label{eq:MSEdef} \end{equation} We characterize the minimum sum rate as a convex optimization problem over $K$ parameters; an explicit formula is given in the case of identical observation channels. Similar to the corresponding result for $t = 1$ <|cite_start|> (Reference: Rate region of the quadratic gaussian ceo problem: In the so-called CEO problem, a hidden source random process is of interest to a central unit or the "CEO". But this process cannot be observed directly. L sensors or agents observe independently corrupted versions of the source. They encode their observations without cooperating with one another and send through rate constrained noiseless channels to the CEO. The problem was first studied by T. Berger et al. (1996) in the context of discrete memoryless sources. The quadratic Gaussian version of the problem was studied. The best result known to date is the characterization of the sum-rate when all the agents have the same quality of observations. Here we characterize the rate region for any number of agents without assuming that their quality of observations is the same. This is one of the few examples of multiterminal lossy source coding problems in which the rate region can be characterized completely.) <|cite_end|> <|cite_start|> (Reference: Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder: In this paper, we consider the separate coding problem for L+1 correlated Gaussian memoryless sources. We deal with the case where L sources work as partial side information at the decoder for the reconstruction of the remaining source. The determination problem of the rate-distortion region for this system is the so-called many-help-one problem and it has been known as a highly challenging problem for almost 20 years. In this paper, we give a partial solution to this problem. We determine the rate-distortion region in the case where the L sources working as partial side information are conditionally independent if the remaining source we wish to reconstruct is given. The additive white Gaussian noise CEO problem is a special case of this. We also discuss the relation of the result to previous results of ours) <|cite_end|>,\cite[Th. 12.3]{el2011network}, our extension of the Berger-Tung inner bound is tight in this case. To compute the bound, we split up the minimal directed mutual information problem into a sum of easier-to-solve optimization problems. To tie the parameters of those optimization problems back to those of the original optimization problem, we extend the technique developed by Wang et al. <|cite_start|> (Reference: On the sum rate of Gaussian multiterminal source coding: new proofs and results: We show that the lower bound on the sum rate of the direct and indirect Gaussian multiterminal source coding problems can be derived in a unified manner by exploiting the semidefinite partial order of the distortion covariance matrices associated with the minimum mean squared error (MMSE) estimation and the so-called reduced optimal linear estimation, through which an intimate connection between the lower bound and the Berger-Tung upper bound is revealed. We give a new proof of the minimum sum rate of the indirect Gaussian multiterminal source coding problem (i.e., the Gaussian CEO problem). For the direct Gaussian multiterminal source coding problem, we derive a general lower bound on the sum rate and establish a set of sufficient conditions under which the lower bound coincides with the Berger-Tung upper bound. We show that the sufficient conditions are satisfied for a class of sources and distortion constraints; in particular, they hold for arbitrary positive definite source covariance matrices in the high-resolution regime. In contrast with the existing proofs, the new method does not rely on Shannon's entropy power inequality.) <|cite_end|> for the time horizon $t = 1$, to $t > 1$. That extension is nontrivial. A device that facilitates an understanding of how estimation errors behave over multiple time instances is the reversal of the channels from $\{X_i\}$ to $\{Y_i^k\}$: \begin{align} X_i = \bar X_i^k + W_i^{k \prime}, \end{align} where \begin{align} \bar X_i^k \triangleq \E{X_i | Y_{1}^k, \ldots, Y_{i}^k}, \label{eq:Xbark} \end{align} and $W_i^{k\, \prime} \perp \bar X_i^k$ are independent Gaussian random vectors representing the errors in estimating $X_i$ from $\{Y_{j}^k\}_{j = 1}^i$. While for $t = 1$, it does not matter whether the encoders compress $Y_1$ or $\bar X_1$ since the latter is just a scaled version of the former, for $t > 1$, compressing $Y_i$ instead of $\bar X_i^k$ is suboptimal. The third contribution of the paper is a bound on the rate loss due to a lack of communication among the different encoders in the causal Gaussian CEO problem: as long as the target distortion is not too small, the rate loss is bounded above by $K-1$ times the difference between the remote and the direct rate-distortion functions. The bound is attained with equality if the observation channels are identical, indicating that among all possible observer channels with the same error in estimating $\{X_i\}$ from $\{Y_j^{k}\}_{j \leq i, k = 1, \ldots, K}$, the identical channels case is the hardest to compress. The rest of the paper is organized as follows. In \secref{sec:dir}, we consider the general (non-Gaussian) causal CEO problem and prove direct and converse bounds to the minimum sum rate in terms of minimal directed mutual information (\thmref{thm:cg}). In \secref{sec:rd}, we characterize the causal Gaussian CEO rate-distortion function (\thmref{thm:causalceo}). In \secref{sec:loss}, we bound the rate loss due to isolated observers (\thmref{thm:loss}). \emph{Notation:} Logarithms are natural base.
For a natural number $M$, $[M] \triangleq \{1, \ldots, M\}$. Notation $X \leftarrow Y$ reads ``replace $X$ by $Y$", and notation $X \perp Y$ reads ``$X$ is independent of $Y$''. The temporal index is indicated in the subscript and the spatial index in the superscript: $Y_{[t]}^k$ is the temporal vector $(Y_1^k, \ldots, Y_t^k)$; $Y_i^{[K]}$ is the spatial vector $(Y_i^{1}, \ldots, Y_{i}^K)^{\mathsf T}$; $Y_{[t]}^{[K]} \triangleq (Y_{[t]}^1, \ldots, Y_{[t]}^K)$. $\mathcal D$ denotes delay by one, i.e. $\mathcal D X_{[t]} \triangleq (0, X_1, \ldots, X_{t-1})$. For a random vector $X$ with i.i.d. components, $\mathsf X$ denotes a random variable distributed the same as each component of $X$. We adopt the following shorthand notation for causally conditional <|cite_start|> (Reference: Directed information for channels with feedback: ) <|cite_end|> probability kernels: \begin{equation} P_{Y_{[t]} || X_{[t]}} \triangleq \prod_{i=1}^{t} P_{Y_i | Y_{[i-1]}, X_{[i]}} \label{eq:causalcond}. \end{equation} Given a distribution $P_{X_{[t]}}$ and a causal kernel $P_{Y_{[t]} \| X_{[t]}}$, the directed mutual information is defined as <|cite_start|> (Reference: Causality, Feedback and Directed Information: It is shown that the "usual definition" of a discrete memoryless channel (DMC) in fact prohibits the use of feedback. The difficulty stems from the confusion of causality and statistical dependence. An adequate definition of a DMC is given, as well as a definition of using a channel without feedback. A definition, closely based on an old idea of Marko, is given for the directed information flowing from one sequence to another. This directed information is used to give a simple proof of the well-known fact that the use of feedback cannot increase the capacity of a DMC. It is shown that, when feedback is present, directed information is a more useful quantity than the traditional mutual information.) <|cite_end|> \begin{equation} I\left(X_{[t]} \to Y_{[t]}\right) \triangleq \sum_{i = 1}^{t} I\left(X_{[i]}; Y_i | Y_{[i-1]}\right). \label{eq:Idir} \end{equation} <|paper_end|>
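As a numerical illustration of the source and observation model \eqref{eq:xi}, \eqref{eq:yik}, the short Python sketch below simulates $n$ i.i.d. components of the Gauss-Markov source together with a single observer's channel, and tracks the conditional means $\bar X_i^k$ of \eqref{eq:Xbark} with a scalar Kalman recursion. The parameter values and the initial law $X_1 \sim \mathcal N(0, \mat I)$ are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters (assumed, not from the paper):
# source gain a, noise powers, horizon T, number of components n.
a, sigma_v, sigma_w, T, n = 0.9, 1.0, 0.5, 50, 100_000

x = rng.standard_normal(n)      # X_1 ~ N(0, I), an assumed initial law
xhat, P = np.zeros(n), 1.0      # predicted mean / variance of X_i given past Y's
for i in range(T):
    y = x + sigma_w * rng.standard_normal(n)    # Y_i^k = X_i + W_i^k
    K = P / (P + sigma_w**2)                    # Kalman gain
    xhat = xhat + K * (y - xhat)                # \bar X_i^k = E[X_i | Y_1^k, ..., Y_i^k]
    Pf = (1.0 - K) * P                          # variance of the error X_i - \bar X_i^k
    mse = np.mean((x - xhat) ** 2)              # empirical distortion d(X_i, \bar X_i^k)
    x = a * x + sigma_v * rng.standard_normal(n)    # X_{i+1} = a X_i + V_i
    xhat, P = a * xhat, a**2 * Pf + sigma_v**2      # one-step prediction
print(f"empirical filtering MSE {mse:.4f} vs. Kalman error variance {Pf:.4f}")
```

The agreement between the empirical MSE and the Riccati variance `Pf` is the reversed-channel decomposition above in numerical form: the estimation error $W_i^{k\,\prime}$ is Gaussian with variance `Pf` and independent of $\bar X_i^k$.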
[ "<|reference_start|> Rate region of the quadratic gaussian ceo problem: In the so-called CEO problem, a hidden source random process is of interest to a central unit or the \"CEO\". But this process cannot be observed directly. L sensors or agents observe independently corrupted versions of the source. They encode their observations without cooperating with one another and send through rate constrained noiseless channels to the CEO. The problem was first studied by T. Berger et al. (1996) in the context of discrete memoryless sources. The quadratic Gaussian version of the problem was studied. The best result known to date is the characterization of the sum-rate when all the agents have the same quality of observations. Here we characterize the rate region for any number of agents without assuming that their quality of observations is the same. This is one of the few examples of multiterminal lossy source coding problems in which the rate region can be characterized completely. <|reference_end|>", "<|reference_start|> Secure Multiterminal Source Coding With Actions: This paper studies the secure multiterminal source coding problem with actions. In particular, one main encoder observes an independent and identically distributed (i.i.d.) source Xn and wishes to compress this source lossyly to the decoder. Another encoder observes the source Yn and wants to compress this source losslessly to the decoder. A passive eavesdropper having access to the side information Zn can observe the information bits sent by the main encoder. In this scenario, the decoder is allowed to choose actions affecting the correlated source Yn and the side information Zn. For this problem, we characterize the optimal rate-distortion-cost-leakage region for a discrete memoryless setting. <|reference_end|>", "<|reference_start|> The generalized stochastic likelihood decoder: Random coding and expurgated bounds: The likelihood decoder is a stochastic decoder that selects the decoded message at random, using the posterior distribution of the true underlying message given the channel output. In this paper, we study a generalized version of this decoder, where the posterior is proportional to a general function that depends only on the joint empirical distribution of the output vector and the code word. This framework allows both mismatched versions and universal versions of the likelihood decoder, as well as the corresponding ordinary deterministic decoders, among many others. We provide a direct analysis method that yields the exact random coding exponent (as opposed to separate upper bounds and lower bounds that turn out to be compatible, which were derived earlier by Scarlett et al.). We also extend the result from pure channel coding to combined source and channel coding (random binning followed by random channel coding) with side information available to the decoder. Finally, returning to pure channel coding, we derive also an expurgated exponent for the stochastic likelihood decoder, which turns out to be at least as tight (and in some cases, strictly so) as the classical expurgated exponent of the maximum likelihood decoder, even though the stochastic likelihood decoder is suboptimal. <|reference_end|>", "<|reference_start|> Quasi Linear Codes: Application to point-to-point and multi-terminal source coding: A new ensemble of structured codes is introduced. These codes are called Quasi Linear Codes (QLC). The QLC's are constructed by taking subsets of linear codes. 
They have a looser structure compared to linear codes and are not closed under addition. We argue that these codes provide gains in terms of achievable Rate-Distortions (RD) in different multi-terminal source coding problems. We derive the necessary covering bounds for analyzing the performance of QLC's. We then consider the Multiple-Descriptions (MD) problem, and prove through an example that the application of QLC's gives an improved achievable RD region for this problem. Finally, we derive an inner bound to the achievable RD region for the general MD problem which strictly contains all of the previous known achievable regions. <|reference_end|>" ]
[ 3, 11, 33, 34 ]
{"<|cite_1|>": "ss-1944689", "<|cite_2|>": "ss-2021400", "<|cite_3|>": "ss-1955519", "<|cite_4|>": "ss-1012220", "<|cite_5|>": "ss-1384605", "<|cite_6|>": "ss-2011703", "<|cite_7|>": "ss-2106937", "<|cite_8|>": "arxiv-674122", "<|cite_9|>": "arxiv-673466", "<|cite_10|>": "ss-928673", "<|multi_cite_11_1|>": "ss-1125990", "<|multi_cite_11_2|>": "ss-1384604", "<|cite_12|>": "ss-1028415", "<|cite_13|>": "arxiv-28333", "<|cite_14|>": "ss-1708756", "<|cite_15|>": "ss-928673", "<|cite_16|>": "arxiv-25457", "<|multi_cite_19_1|>": "ss-1521864", "<|multi_cite_19_2|>": "arxiv-62936", "<|multi_cite_19_3|>": "arxiv-112005", "<|cite_20|>": "arxiv-69394", "<|cite_21|>": "arxiv-112005", "<|cite_22|>": "ss-1521869", "<|cite_23|>": "ss-1521870", "<|cite_24|>": "ss-1377627", "<|multi_cite_25_1|>": "ss-1125990", "<|multi_cite_25_2|>": "ss-1384604", "<|multi_cite_26_1|>": "arxiv-42580", "<|multi_cite_26_2|>": "arxiv-42580", "<|multi_cite_27_1|>": "arxiv-42580", "<|multi_cite_27_2|>": "arxiv-42580", "<|multi_cite_28_1|>": "arxiv-42580", "<|multi_cite_28_2|>": "arxiv-42580", "<|cite_29|>": "ss-1125991", "<|multi_cite_30_1|>": "ss-1125990", "<|multi_cite_30_2|>": "ss-1384604", "<|multi_cite_31_1|>": "ss-1012220", "<|multi_cite_31_2|>": "ss-1384605", "<|cite_32|>": "ss-1028415", "<|cite_33|>": "ss-1272619", "<|cite_34|>": "ss-1264121"}
2011.14526
<|paper_start|> Title: Deep Reinforcement Learning for Smart Grid Protection Against Coordinated Multistage Transmission Line Attacks Abstract: Deep Reinforcement Learning for Smart Grid Protection Against Coordinated Multistage Transmission Line Attacks: With the increase of connectivity in the power grid, a cascading failure may be triggered by the failure of a transmission line, which can lead to substantial economic losses and serious negative social impacts. Therefore, it is very important to identify the critical lines under various types of attacks that may initiate a cascading failure and deploy defense resources to protect them. Since coordinated multistage line attacks can lead to larger negative impacts compared with a single-stage attack or a multistage attack without coordination, this paper intends to identify the critical lines under coordinated multistage attacks that may initiate a cascading failure and deploy limited defense resources optimally. To this end, we first formulate a total generation loss maximization problem with the consideration of multiple attackers and multiple stages. Due to the large size of the solution space, it is very challenging to solve the formulated problem. To overcome the challenge, we reformulate the problem as a Markov game and design its components, e.g., state, action, and reward. Next, we propose a scalable algorithm to solve the Markov game based on multi-agent deep reinforcement learning and prioritized experience replay, which can determine the optimal attacking line sequences. Then, we design a defense strategy to decide the optimal defense line set. Extensive simulation results show the effectiveness of the proposed algorithm and the designed defense strategy. Introduction \label{s1} With the increase of connectivity in the power grid, fewer power outages are incurred since the high demand in a region can be supplied by local and remote generators. However, such connectivity also brings threats to the power grid <|cite_start|> (Reference: A game theory approach to vulnerability analysis: Integrating power flows with topological analysis: ) <|cite_end|>. To be specific, a large-scale power outage may be triggered by the failure of a critical component (e.g., a transmission line) <|cite_start|> (Reference: Benchmarking and validation of cascading failure analysis tools: Cascading failure in electric power systems is a complicated problem for which a variety of models, software tools, and analytical tools have been proposed but are difficult to verify. Benchmarking and validation are necessary to understand how closely a particular modeling method corresponds to reality, what engineering conclusions may be drawn from a particular tool, and what improvements need to be made to the tool in order to reach valid conclusions. The community needs to develop the test cases tailored to cascading that are central to practical benchmarking and validation. In this paper, the IEEE PES working group on cascading failure reviews and synthesizes how benchmarking and validation can be done for cascading failure analysis, summarizes and reviews the cascading test cases that are available to the international community, and makes recommendations for improving the state of the art.) <|cite_end|> <|cite_start|> (Reference: An evolutionary computation approach for smart grid cascading failure vulnerability analysis: The cyber-physical security of smart grid is of great importance since it directly concerns the normal operating of a system.
Recently, researchers found that organized sequential attacks can incur large-scale cascading failure to the smart grid. In this paper, we focus on the line-switching sequential attack, where the attacker aims to trip transmission lines in a designed order to cause significant system failures. Our objective is to identify the critical line-switching attack sequence, which can be instructional for the protection of smart grid. For this purpose, we develop an evolutionary computation based vulnerability analysis framework, which employs particle swarm optimization to search the critical attack sequence. Simulation studies on two benchmark systems, i.e., IEEE 24 bus reliability test system and Washington 30 bus dynamic test system, are implemented to evaluate the performance of our proposed method. Simulation results show that our method can yield a better performance comparing with the reinforcement learning based approach proposed in other prior work.) <|cite_end|>. For example, over 80 percent of the power in Pakistan was lost due to the outage of a transmission line, which was caused by physical sabotage <|cite_start|> (Reference: Smart grid vulnerability under cascade-based sequential line-switching attacks: Recently, the sequential attack, where multiple malignant contingencies are launched by attackers sequentially, has revealed power grid vulnerability under cascading failures. This paper systematically analyzes properties and features of N-k cascaded- based sequential line-switching attacks using a DC power flow based cascading failure simulator (DC- CFS). This paper first explains the key factors behind cascade-based attacks, then compares three adopted metrics with an original line-margin metric to compute vulnerability indexes and design sequential attacks. Two target search schemes, i.e., offline and online target search in sequential attacks, are also presented. Simulation results of N-2 to N-4 line-switching attacks have suggested that the proposed line margin metric produces stronger sequential attacks, and online target search is more effective than offline search. Reasons behind counter-intuitive load loss resulting from different metrics are also analyzed to facilitate future study on the risk of sequential attacks.) <|cite_end|>. According to <|cite_start|> (Reference: A state-failure-network method to identify critical components in power systems: ) <|cite_end|>, a large-scale power outage can lead to substantial economic losses and serious negative social impacts. Therefore, it is of great importance to identify the critical transmission lines and deploy defense resources for their protection so that the negative impacts of power outages caused by intentional attacks or accidental damages could be reduced. Many approaches have been proposed to identify the critical transmission lines in the power grid under multiple outages, e.g., random chemistry search <|cite_start|> (Reference: A ``Random Chemistry'' Algorithm for Identifying Collections of Multiple Contingencies That Initiate Cascading Failure: This paper describes a stochastic “Random Chemistry” (RC) algorithm to identify large collections of multiple (n-k) contingencies that initiate large cascading failures in a simulated power system. The method requires only O(log (n)) simulations per contingency identified, which is orders of magnitude faster than random search of this combinatorial space.
We applied the method to a model of cascading failure in a power network with n=2896 branches and identify 148243 unique, minimal n-k branch contingencies (2 ≤ k ≤ 5) that cause large cascades, many of which would be missed by using pre-contingency flows, linearized line outage distribution factors, or performance indices as screening factors. Within each n-k collection, the frequency with which individual branches appear follows a power-law (or nearly so) distribution, indicating that a relatively small number of components contribute disproportionately to system vulnerability. The paper discusses various ways that RC generated collections of dangerous contingencies could be used in power systems planning and operations.) <|cite_end|>, graph theory <|cite_start|> (Reference: Complex Networks Theory For Modern Smart Grid Applications: A Survey: This paper provides a survey of studying complex network theory for modern smart grid applications. A brief overview of complex network theory will be explored first. Topological characteristics, statistic characteristics, such as self-organized criticality and critically slow down, and dynamical characteristics, including synchronizations, consensus control, and pinning control, will be briefly addressed. Then, we will illustrate how complex network theory can be applied to modern smart grids in structural vulnerability assessment, cascading blackouts, grid synchronization, network reconfigurations, distributed droop control, pinning control for micro-grid autonomous operations, and effective grid expansions. Some emerging topics and future perspectives are also addressed.) <|cite_end|>, game theory <|cite_start|> (Reference: A game theory approach to vulnerability analysis: Integrating power flows with topological analysis: ) <|cite_end|> <|cite_start|> (Reference: A game-theoretic analysis of cyber switching attacks and mitigation in smart grid systems: We propose a framework for the analysis of cyber switching attacks and control-based mitigation in cyber-enabled power systems. Our model of the switching attack is simple, only requiring knowledge of the sign of the local relative rotor speed, which may be estimated. The controller is modeled to be resource constrained, choosing to act only during select intervals of time. We make use of an iterated game-theoretic formulation to describe the interactions of the parties and its effect on system stability. Analytic results indicate the potential of the constrained controller to achieve transient stabilization over time using zero-determinant strategies. Numerical results of the New England 39-bus power system demonstrate the potential for such a controller to increase system resilience during cyber-attacks.) <|cite_end|> <|cite_start|> (Reference: A game-theoretic study of load redistribution attack and defense in power systems: ) <|cite_end|>, bilevel programming <|cite_start|> (Reference: Bilevel programming applied to power system vulnerability analysis under multiple contingencies: This study examines the use of bilevel programming to analyse the vulnerability of power systems under multiple contingencies. One of the main purposes of this study is to explain the state of the art of the subject matter. A minimum vulnerability model and a maximum vulnerability model are presented and discussed. 
In both models, the upper-level optimisation determines a set of simultaneous outages in the transmission network whereas the lower-level optimisation models the reaction of the system operator against the outages identified in the upper level. The system operator reacts by minimising the system load shed through an optimal operation of the power system. Two solution approaches for the resulting mixed-integer non-linear bilevel programs are analysed and compared. Both methodologies are based on the equivalent transformation of the lower-level problem into a set of constraints, so that the original bilevel programs, respectively, become a single-level optimisation problem. The first approach is based on the application of Karush-Kuhn-Tucker optimality conditions whereas the second procedure relies on duality theory. This study shows that both approaches are essentially equivalent from a rigorous mathematical viewpoint; however, the second method is more suitable for off-the-shell branch-and-cut software as corroborated by numerical simulations.) <|cite_end|>, trilevel programming <|cite_start|> (Reference: Multilevel Programming-Based Coordinated Cyber Physical Attacks and Countermeasures in Smart Grid: Since the Ukraine blackout in 2015, coordinated cyber-physical attacks (CCPAs) have been emerging and are used to mask line outages in the smart grid. In this paper, we investigate the features of CCPAs and constitute the mathematic formulation with respect to topologies and electric parameters of a power grid before and after attacks. With the objective of maximizing the number of overloaded lines, a bilevel programming model is developed to describe the interaction between the adversary and the control center. The most damaging CCPA can be determined by transforming the developed bilevel model to a single mixed-integer linear programming problem using the Karush–Kuhn–Tucker conditions. Based on the features of the bilevel model, the countermeasure is expressed as a trilevel model with one leader and multiple followers. The implicit enumeration-based searching strategy is proposed to solve the trilevel model to identify the protected meters. Both the implementation of CCPAs and the effectiveness of the developed countermeasure are verified on the modified IEEE 14-bus system.) <|cite_end|>, stochastic programming <|cite_start|> (Reference: An improved defender-attacker-defender model for transmission line defense considering offensive resource uncertainties: Developing efficient strategies for defending electric power systems against attacks is a major concern for contemporary power grids, especially when uncertainties are involved. This paper addresses the allocation of the defensive resource to minimize the damage when there are uncertainties regarding the resource that the attacker has. A multiple-attack-scenario (MAS) defender–attacker–defender (DAD) model is proposed by extending the conventional trilevel DAD model. The proposed model considers the uncertainties related to the offensive resource and the interactions involving the security personnel at the top-level, the attacker at the middle-level, and the power system operator at the bottom-level. The column-and-constraint generation algorithm is implemented by decomposing the MAS DAD model into an upper-level problem for the security personnel, and a lower-level problem for the attacker involving the optimal power flow analysis-based corrective power redispatch implemented by the power system operator. 
Case studies are performed based on the IEEE RTS79 and 57-bus systems, and the results validate that the proposed method is able to minimize the damage when uncertainties are involved in the offensive resource. This paper can offer meaningful insights into power system protection involving uncertainties.) <|cite_end|>, and state failure network <|cite_start|> (Reference: A state-failure-network method to identify critical components in power systems: ) <|cite_end|>. However, the above efforts mainly focus on a single-stage attack (or one-shot attack), which means that multiple elements are attacked at a time <|cite_start|> (Reference: {A Multistage Game in Smart Grid Security: A Reinforcement Learning Solution: Existing smart grid security research investigates different attack techniques and cascading failures from the attackers’ viewpoints, while the defenders’ or the operators’ protection strategies are somehow neglected. Game theoretic methods are applied for the attacker–defender games in the smart grid security area. Yet, most of the existing works only use the one-shot game and do not consider the dynamic process of the electric power grid. In this paper, we propose a new solution for a multistage game (also called a dynamic game) between the attacker and the defender based on reinforcement learning to identify the optimal attack sequences given certain objectives (e.g., transmission line outages or generation loss). Different from a one-shot game, the attacker here learns a sequence of attack actions applying for the transmission lines and the defender protects a set of selected lines.
After each time step, the cascading failure will be measured, and the line outage (and/or generation loss) will be used as the feedback for the attacker to generate the next action. The performance is evaluated on W&W 6-bus and IEEE 39-bus systems. A comparison between a multistage attack and a one-shot attack is conducted to show the significance of the multistage attack. Furthermore, different protection strategies are evaluated in simulation, which shows that the proposed reinforcement learning solution can identify optimal attack sequences under several attack objectives. It also indicates that attacker’s learned information helps the defender to enhance the security of the system.) <|cite_end|>. Compared with a single-stage attack, a multistage or sequential attack (i.e., several attacks are launched in a time sequence) can lead to greater negative impacts for the power grid <|cite_start|> (Reference: {A Multistage Game in Smart Grid Security: A Reinforcement Learning Solution: Existing smart grid security research investigates different attack techniques and cascading failures from the attackers’ viewpoints, while the defenders’ or the operators’ protection strategies are somehow neglected. Game theoretic methods are applied for the attacker–defender games in the smart grid security area. Yet, most of the existing works only use the one-shot game and do not consider the dynamic process of the electric power grid. In this paper, we propose a new solution for a multistage game (also called a dynamic game) between the attacker and the defender based on reinforcement learning to identify the optimal attack sequences given certain objectives (e.g., transmission line outages or generation loss). Different from a one-shot game, the attacker here learns a sequence of attack actions applying for the transmission lines and the defender protects a set of selected lines.
To analyze the transmission grid vulnerability under sequential topology attacks, this paper proposes a Q-learning-based approach to identify critical attack sequences with consideration of physical system behaviors. A realistic power flow cascading outage model is used to simulate the system behavior, where attacker can use the Q-learning to improve the damage of sequential topology attack toward system failures with the least attack efforts. Case studies based on three IEEE test systems have demonstrated the learning ability and effectiveness of Q-learning-based vulnerability analysis.) <|cite_end|> <|cite_start|> (Reference: Stochastic Games for Power Grid Protection Against Coordinated Cyber-Physical Attacks: Due to the global reliance on the power grid, coordinated cyber-physical attacks on its critical infrastructure can lead to disastrous human and economic losses. In this paper, a stochastic game-theoretic approach is proposed to analyze the optimal strategies that a power grid defender can adopt to protect the grid against coordinated attacks. First, an optimal load shedding technique is devised to quantify the physical impacts of coordinated attacks. Taking these quantified impacts as input parameters, the interactions between a malicious attacker and the defender are modeled using a resource allocation stochastic game. The game is shown to admit a Nash equilibrium and a novel learning algorithm is introduced to enable the two players to reach their equilibrium strategies while maximizing their respective minimum rewards in a sequence of stages. The convergence of the proposed algorithm to a Nash equilibrium point is proved and its properties are studied. Simulation results of the stochastic game model on the WSCC 9-bus system and the IEEE 118-bus system are contrasted with those of static games, and show that different defense resources owned lead to different defense strategies.) <|cite_end|>. When the number of transmission lines is large, storing the value function in a Q-table imposes a very high memory requirement. To overcome this drawback, a particle swarm optimization (PSO) based heuristic approach was proposed in <|cite_start|> (Reference: An evolutionary computation approach for smart grid cascading failure vulnerability analysis: The cyber-physical security of smart grid is of great importance since it directly concerns the normal operating of a system. Recently, researchers found that organized sequential attacks can incur large-scale cascading failure to the smart grid. In this paper, we focus on the line-switching sequential attack, where the attacker aims to trip transmission lines in a designed order to cause significant system failures. Our objective is to identify the critical line-switching attack sequence, which can be instructional for the protection of smart grid. For this purpose, we develop an evolutionary computation based vulnerability analysis framework, which employs particle swarm optimization to search the critical attack sequence. Simulation studies on two benchmark systems, i.e., IEEE 24 bus reliability test system and Washington 30 bus dynamic test system, are implemented to evaluate the performance of our proposed method. Simulation results show that our method can yield a better performance comparing with the reinforcement learning based approach proposed in other prior work.) <|cite_end|>.
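To make the memory bottleneck of a Q-table concrete, the toy Python sketch below (our own illustration, not the scheme of any cited work) applies tabular Q-learning to the choice of which transmission line to trip next. Since the state is the set of already-tripped lines, the table is indexed by up to $2^L \times L$ (state, action) pairs, which is exactly what becomes impractical for large grids; `generation_loss` is a hypothetical stand-in for a power-flow/cascading-failure simulator.

```python
import random
from collections import defaultdict

L, BUDGET, EPISODES = 10, 3, 2000      # lines, attack budget, training episodes
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1     # learning rate, discount, exploration rate

def generation_loss(tripped):
    """Hypothetical stand-in for a cascading-failure simulator."""
    return sum(l + 1 for l in tripped) / (L * (L + 1) / 2)

Q = defaultdict(float)                 # Q[(frozenset of tripped lines, next line)]
for _ in range(EPISODES):
    state, loss = frozenset(), 0.0
    for _ in range(BUDGET):
        avail = [l for l in range(L) if l not in state]
        if random.random() < EPS:
            act = random.choice(avail)                     # explore
        else:
            act = max(avail, key=lambda l: Q[(state, l)])  # exploit
        nxt = state | {act}
        reward = generation_loss(nxt) - loss               # incremental generation loss
        loss += reward
        best = max((Q[(nxt, l)] for l in range(L) if l not in nxt), default=0.0)
        Q[(state, act)] += ALPHA * (reward + GAMMA * best - Q[(state, act)])
        state = nxt
```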
Although some advances have been made in the above efforts, they did not consider the problem of identifying the critical lines under coordinated multistage attacks (i.e., attacks are launched by multiple attackers in a coordinated and repeated manner until all attacking resources are used) that may initiate a cascading failure. Moreover, Q-learning and PSO based approaches have their respective limitations when the number of transmission lines is large. To be specific, Q-learning is known to be unstable or even to diverge when a nonlinear function approximator (e.g., a deep neural network) is used to represent the value function <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> and PSO also has less stable performance <|cite_start|> (Reference: Deep reinforcement learning for power system: An overview: Due to increasing complexity, uncertainty and data dimensions in power systems, conventional methods often meet bottlenecks when attempting to solve decision and control problems. Therefore, data-driven methods toward solving such problems are being extensively studied. Deep reinforcement learning (DRL) is one of these data-driven methods and is regarded as real artificial intelligence (AI). DRL is a combination of deep learning (DL) and reinforcement learning (RL). This field of research has been applied to solve a wide range of complex sequential decision-making problems, including those in power systems. This paper firstly reviews the basic ideas, models, algorithms and techniques of DRL. Applications in power systems such as energy management, demand response, electricity market, operational control, and others are then considered. In addition, recent advances in DRL including the combination of RL with other classical methods, and the prospect and challenges of applications in power systems are also discussed.) <|cite_end|>. Based on the above observation, this paper intends to identify the critical lines under coordinated multistage attacks that may initiate a cascading failure and deploy limited defense resources optimally. To achieve this aim, we first formulate a total generation loss maximization problem with the consideration of multiple attackers and multiple stages. Due to the large size of the solution space, it is very challenging to solve the formulated problem. To overcome the challenge, we reformulate the problem as a Markov game <|cite_start|> (Reference: Markov Games as a Framework for Multi-Agent Reinforcement Learning: ) <|cite_end|>. Then, we propose an algorithm with low computational complexity to solve the Markov game based on multi-agent deep reinforcement learning (DRL) with an attention mechanism <|cite_start|> (Reference: Multi-Agent Deep Reinforcement Learning for HVAC Control in Commercial Buildings: In commercial buildings, about 40%-50% of the total electricity consumption is attributed to Heating, Ventilation, and Air Conditioning (HVAC) systems, which places an economic burden on building operators. In this paper, we intend to minimize the energy cost of an HVAC system in a multi-zone commercial building under dynamic pricing with the consideration of random zone occupancy, thermal comfort, and indoor air quality comfort.
Due to the existence of unknown thermal dynamics models, parameter uncertainties (e.g., outdoor temperature, electricity price, and number of occupants), spatially and temporally coupled constraints associated with indoor temperature and CO2 concentration, a large discrete solution space, and a non-convex and non-separable objective function, it is very challenging to achieve the above aim. To this end, the above energy cost minimization problem is reformulated as a Markov game. Then, an HVAC control algorithm is proposed to solve the Markov game based on multi-agent deep reinforcement learning with attention mechanism. The proposed algorithm does not require any prior knowledge of uncertain parameters and can operate without knowing building thermal dynamics models. Simulation results based on real-world traces show the effectiveness, robustness and scalability of the proposed algorithm.) <|cite_end|> and prioritized experience replay <|cite_start|> (Reference: Prioritized Experience Replay: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.) <|cite_end|>. Next, we design an optimal defense strategy based on the obtained optimal attacking line sequences and the number of defense resources (i.e., the number of lines that can be protected <|cite_start|> (Reference: An improved defender-attacker-defender model for transmission line defense considering offensive resource uncertainties: Developing efficient strategies for defending electric power systems against attacks is a major concern for contemporary power grids, especially when uncertainties are involved. This paper addresses the allocation of the defensive resource to minimize the damage when there are uncertainties regarding the resource that the attacker has. A multiple-attack-scenario (MAS) defender–attacker–defender (DAD) model is proposed by extending the conventional trilevel DAD model. The proposed model considers the uncertainties related to the offensive resource and the interactions involving the security personnel at the top-level, the attacker at the middle-level, and the power system operator at the bottom-level. The column-and-constraint generation algorithm is implemented by decomposing the MAS DAD model into an upper-level problem for the security personnel, and a lower-level problem for the attacker involving the optimal power flow analysis-based corrective power redispatch implemented by the power system operator. Case studies are performed based on the IEEE RTS79 and 57-bus systems, and the results validate that the proposed method is able to minimize the damage when uncertainties are involved in the offensive resource. This paper can offer meaningful insights into power system protection involving uncertainties.) <|cite_end|>).
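As a rough illustration of the prioritized experience replay component cited above, the sketch below is a generic, single-buffer version of proportional prioritization with importance-sampling weights; it is not the multi-agent implementation of this paper, a production version would replace the plain numpy sampling with a sum-tree for $O(\log n)$ updates, and all hyperparameter values are illustrative.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: P(i) ~ p_i^alpha, with
    importance-sampling weights to correct the induced sampling bias."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.prios, self.pos = [], np.zeros(capacity), 0

    def add(self, transition):
        # New transitions get the current max priority so each is seen at least once.
        max_p = self.prios.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.prios[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        p = self.prios[:len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        w = (len(self.data) * probs[idx]) ** (-self.beta)   # IS weights
        return idx, [self.data[i] for i in idx], w / w.max()

    def update_priorities(self, idx, td_errors):
        # After a gradient step, refresh priorities with the new TD errors.
        self.prios[idx] = np.abs(td_errors) + self.eps
```

In a training loop, the TD errors from each critic update are fed back through `update_priorities`, and the returned weights multiply the per-sample loss so that frequently replayed transitions do not bias the gradient.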
Compared with existing works (e.g., Q-learning and PSO), the proposed DRL-based algorithm has a more stable performance <|cite_start|> (Reference: Deep reinforcement learning for power system: An overview: Due to increasing complexity, uncertainty and data dimensions in power systems, conventional methods often meet bottlenecks when attempting to solve decision and control problems. Therefore, data-driven methods toward solving such problems are being extensively studied. Deep reinforcement learning (DRL) is one of these data-driven methods and is regarded as real artificial intelligence (AI). DRL is a combination of deep learning (DL) and reinforcement learning (RL). This field of research has been applied to solve a wide range of complex sequential decision-making problems, including those in power systems. This paper firstly reviews the basic ideas, models, algorithms and techniques of DRL. Applications in power systems such as energy management, demand response, electricity market, operational control, and others are then considered. In addition, recent advances in DRL including the combination of RL with other classical methods, and the prospect and challenges of applications in power systems are also discussed.) <|cite_end|>. The contributions of this paper are summarized as follows. \begin{itemize} \item We reformulate a total generation loss maximization problem in a power grid under coordinated multistage line attacks based on the Markov game framework and design its components, e.g., state, action, and reward. \item We propose a scalable algorithm to solve the Markov game based on multi-agent DRL with an attention mechanism and prioritized experience replay, which achieves 21.46\%-85.98\% higher generation loss than existing schemes. \item We design an optimal defense strategy against coordinated multistage line attacks according to the features of the optimal attacking line sequences. When protecting 4.83\% of the total lines, the designed defense strategy can reduce generation loss by 6.62\%-17.79\% compared with other defense schemes. \end{itemize} The rest of this paper is organized as follows. In Section~\ref{s2}, we describe the system model and formulate a total generation loss maximization problem as well as its variant. In Section~\ref{s3}, we propose an algorithm to solve the formulated problem. In Section~\ref{s4}, we design an optimal defense strategy against coordinated multistage attacks. In Section~\ref{s5}, the performance evaluation is presented. Finally, we draw a conclusion in Section~\ref{s6}. <|paper_end|>
[ "<|reference_start|> Benchmarking and validation of cascading failure analysis tools: Cascading failure in electric power systems is a complicated problem for which a variety of models, software tools, and analytical tools have been proposed but are difficult to verify. Benchmarking and validation are necessary to understand how closely a particular modeling method corresponds to reality, what engineering conclusions may be drawn from a particular tool, and what improvements need to be made to the tool in order to reach valid conclusions. The community needs to develop the test cases tailored to cascading that are central to practical benchmarking and validation. In this paper, the IEEE PES working group on cascading failure reviews and synthesizes how benchmarking and validation can be done for cascading failure analysis, summarizes and reviews the cascading test cases that are available to the international community, and makes recommendations for improving the state of the art. <|reference_end|>", "<|reference_start|> A ``Random Chemistry'' Algorithm for Identifying Collections of Multiple Contingencies That Initiate Cascading Failure: This paper describes a stochastic “Random Chemistry” (RC) algorithm to identify large collections of multiple (n-k) contingencies that initiate large cascading failures in a simulated power system. The method requires only O(log (n)) simulations per contingency identified, which is orders of magnitude faster than random search of this combinatorial space. We applied the method to a model of cascading failure in a power network with n=2896 branches and identify 148243 unique, minimal n-k branch contingencies (2 ≤ k ≤ 5) that cause large cascades, many of which would be missed by using pre-contingency flows, linearized line outage distribution factors, or performance indices as screening factors. Within each n-k collection, the frequency with which individual branches appear follows a power-law (or nearly so) distribution, indicating that a relatively small number of components contribute disproportionately to system vulnerability. The paper discusses various ways that RC generated collections of dangerous contingencies could be used in power systems planning and operations. <|reference_end|>", "<|reference_start|> {A Multistage Game in Smart Grid Security: A Reinforcement Learning Solution: Existing smart grid security research investigates different attack techniques and cascading failures from the attackers’ viewpoints, while the defenders’ or the operators’ protection strategies are somehow neglected. Game theoretic methods are applied for the attacker–defender games in the smart grid security area. Yet, most of the existing works only use the one-shot game and do not consider the dynamic process of the electric power grid. In this paper, we propose a new solution for a multistage game (also called a dynamic game) between the attacker and the defender based on reinforcement learning to identify the optimal attack sequences given certain objectives (e.g., transmission line outages or generation loss). Different from a one-shot game, the attacker here learns a sequence of attack actions applying for the transmission lines and the defender protects a set of selected lines. After each time step, the cascading failure will be measured, and the line outage (and/or generation loss) will be used as the feedback for the attacker to generate the next action. The performance is evaluated on W&W 6-bus and IEEE 39-bus systems. 
A comparison between a multistage attack and a one-shot attack is conducted to show the significance of the multistage attack. Furthermore, different protection strategies are evaluated in simulation, which shows that the proposed reinforcement learning solution can identify optimal attack sequences under several attack objectives. It also indicates that attacker’s learned information helps the defender to enhance the security of the system. <|reference_end|>", "<|reference_start|> Robust optimization for transmission defense against multi-period attacks with uncertainties: <|reference_end|>" ]
[ 1, 5, 15, 16 ]
{"<|cite_1|>": "ss-1314080", "<|cite_2|>": "ss-1804423", "<|cite_3|>": "ss-1314081", "<|cite_4|>": "ss-1314082", "<|cite_5|>": "ss-1314083", "<|cite_6|>": "ss-811448", "<|cite_7|>": "ss-982456", "<|cite_8|>": "ss-1314080", "<|cite_9|>": "ss-1132519", "<|cite_10|>": "ss-2317451", "<|cite_11|>": "ss-1971573", "<|cite_12|>": "ss-1874966", "<|cite_13|>": "ss-1314084", "<|cite_14|>": "ss-1314083", "<|cite_15|>": "ss-1218231", "<|cite_16|>": "ss-1218231", "<|cite_17|>": "ss-1314085", "<|cite_18|>": "ss-1218231", "<|cite_19|>": "ss-915199", "<|cite_20|>": "ss-2029311", "<|cite_21|>": "ss-1314081", "<|cite_22|>": "ss-749221", "<|cite_23|>": "ss-1249029", "<|cite_24|>": "ss-1126672", "<|cite_25|>": "arxiv-274381", "<|cite_26|>": "arxiv-87502", "<|cite_27|>": "ss-1314084", "<|cite_28|>": "ss-1249029"}
1302.2684-1
<|cite_start|> (Reference: Statistical algorithms and a lower bound for planted clique: We develop a framework for proving lower bounds on computational problems over distributions, including optimization and unsupervised learning. Our framework is based on defining a restricted class of algorithms, called statistical algorithms, that instead of accessing samples from the input distribution can only obtain an estimate of the expectation of any given function on a sample drawn randomly from the input distribution. Our definition captures many natural algorithms used in theory and practice, e.g. moments-based methods, local search, MCMC and simulated annealing. Our techniques are inspired by (and generalize) the statistical query model in learning theory, which captures the complexity of PAC learning using essentially all known learning methods [Kearns, 1998]. For specific well-known problems over distributions, we give lower bounds on the complexity of any statistical algorithm. These include an exponential lower bounds for moment maximization in R, and a nearly optimal lower bound for detecting planted clique distributions when the planted clique has size O(n1/2−δ) for any constant δ > 0. Variants of the latter problem have been assumed to be hard to prove hardness for other problems and for cryptographic applications. Our lower bounds provide concrete evidence supporting these assumptions. ∗This material is based upon work supported by the National Science Foundation under Grant #1019343 to the Computing Research Association for the CIFellows Project. †Research supported in part by NSF awards AF-0915903 and AF-0910584. ‡Research supported by a Simons Postdoctoral Fellowship. ISSN 1433-8092 Electronic Colloquium on Computational Complexity, Report No. 64 (2012)) <|cite_end|>provides lower bounds on the complexity of statistical algorithms, and shows that for cliques of size $O(n^{1/2-\delta})$, for any constant $\delta>0$, at least $n^{\Omega(\log \log n)}$ queries are needed to find the cliques. There are works relating the hardness of finding hidden cliques and the use of higher order moment tensors for this purpose. <|cite_start|> (Reference: {A new approach to the planted clique problem: We study the problem of finding a large planted clique in the random graph $G_{n,1/2}$. We reduce the problem to that of maximising a three dimensional tensor over the unit ball in $n$ dimensions. This latter problem has not been well studied and so we hope that this reduction will eventually lead to an improved solution to the planted clique problem.) <|cite_end|>relate the problem of finding a hidden clique to finding the top eigenvector of the third order tensor, corresponding to the maximum spectral norm. <|cite_start|> (Reference: Random Tensors and Planted Cliques: The r-parity tensor of a graph is a generalization of the adjacency matrix, where the tensor's entries denote the parity of the number of edges in subgraphs induced by r distinct vertices. For r=2, it is the adjacency matrix with 1's for edges and -1's for nonedges. It is well-known that the 2-norm of the adjacency matrix of a random graph is O(\sqrt{n}). Here we show that the 2-norm of the r-parity tensor is at most f(r)\sqrt{n}\log^{O(r)}n, answering a question of Frieze and Kannan who proved this for r=3. As a consequence, we get a tight connection between the planted clique problem and the problem of finding a vector that approximates the 2-norm of the r-parity tensor of a random graph. 
Our proof method is based on an inductive application of concentration of measure.) <|cite_end|>extend the result to arbitrary $r^{\text{th}}$-order tensors and the cliques have to be of size $\Omega(n^{1/r})$ to enable recovery from $r^{\text{th}}$-order moment tensors in an $n$-node network. However, this problem (finding the top eigenvector of a tensor) is known to be NP-hard in general <|cite_start|> (Reference: Most tensor problems are NP hard: We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list includes: determining the feasibility of a system of bilinear equations, deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determining the rank or best rank-1 approximation of a 3-tensor. Furthermore, we show that restricting these problems to symmetric tensors does not alleviate their NP-hardness. We also explain how deciding nonnegative definiteness of a symmetric 4-tensor is NP-hard and how computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.) <|cite_end|>. Thus, tensors are useful for finding smaller hidden cliques in a network (albeit by solving a computationally hard problem). In contrast, we consider tractable tensor decomposition through reduction to orthogonal tensors (under the scaling requirements of \eqref{eqn:condspecial-intro}), and our learning method is a fast, iterative approach based on tensor power iterations and linear algebraic operations. <|cite_start|> (Reference: Stochastic Block Models and Reconstruction: The planted partition model (also known as the stochastic blockmodel) is a classical cluster-exhibiting random graph model that has been extensively studied in statistics, physics, and computer science. In its simplest form, the planted partition model is a model for random graphs on $n$ nodes with two equal-sized clusters, with a between-class edge probability of $q$ and a within-class edge probability of $p$. Although most of the literature on this model has focused on the case of increasing degrees (ie.\ $pn, qn \to \infty$ as $n \to \infty$), the sparse case $p, q = O(1/n)$ is interesting both from a mathematical and an applied point of view. A striking conjecture of Decelle, Krzkala, Moore and Zdeborov\'a based on deep, non-rigorous ideas from statistical physics gave a precise prediction for the algorithmic threshold of clustering in the sparse planted partition model. In particular, if $p = a/n$ and $q = b/n$, then Decelle et al.\ conjectured that it is possible to cluster in a way correlated with the true partition if $(a - b)^2 > 2(a + b)$, and impossible if $(a - b)^2 < 2(a + b)$, while the best known algorithms are only known to succeed when $(a - b)^2 > C (a + b)$ for some sufficiently large $C$. We prove half of their prediction, showing that it is indeed impossible to cluster if $(a - b)^2 < 2(a + b)$. Following Decelle et al, our work establishes a rigorous connection between the clustering problem, spin-glass models on the Bethe lattice and the so called reconstruction problem. This connection points to fascinating applications and open problems.) <|cite_end|>provide lower bounds on the separation $p-q$ between the intra-community and inter-community edge connectivities for identifiability of communities in stochastic block models in the sparse regime (when $p, q\sim n^{-1}$), when the number of communities is a constant $k = O(1)$.
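For concreteness, here is a minimal sketch of the tensor power iterations mentioned above, with deflation, applied to a symmetric, orthogonally decomposable third-order tensor; it illustrates only the basic iteration, not the full learning method of this paper (which additionally requires the reduction to orthogonal tensors under \eqref{eqn:condspecial-intro}), and the toy dimensions are illustrative.

```python
import numpy as np

def power_iteration(T, n_iter=100, n_restarts=10):
    # Repeated map v <- T(I, v, v) / ||T(I, v, v)|| with random restarts;
    # for an orthogonally decomposable tensor this converges to an eigenvector.
    d = T.shape[0]
    best_v, best_lam = None, -np.inf
    for _ in range(n_restarts):
        v = np.random.randn(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            v = np.einsum('ijk,j,k->i', T, v, v)   # multilinear map T(I, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue estimate T(v, v, v)
        if lam > best_lam:
            best_v, best_lam = v, lam
    return best_lam, best_v

def decompose(T, k):
    comps = []
    for _ in range(k):
        lam, v = power_iteration(T)
        comps.append((lam, v))
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate recovered component
    return comps

# Toy check: recover an orthogonal rank-2 decomposition with weights 3.0 and 1.5.
A = np.linalg.qr(np.random.randn(5, 5))[0][:, :2]       # orthonormal a_1, a_2
T = sum(w * np.einsum('i,j,k->ijk', a, a, a) for w, a in zip([3.0, 1.5], A.T))
print(decompose(T, 2))
```

Each iteration costs only linear algebraic operations, which is what makes this per-component approach fast in practice despite the general NP-hardness of tensor eigenproblems noted above.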
Our method achieves the lower bounds on the separation of edge connectivity up to poly-log factors. \paragraph{Likelihood-based Approaches to Learning MMSB: }Another class of approaches for learning MMSB models is based on optimizing the observed likelihood. Traditional approaches such as Gibbs sampling or expectation maximization (EM) can be too expensive to apply in practice for MMSB models. Variational approaches optimize the so-called evidence lower bound <|cite_start|> (Reference: Stochastic Variational Inference: We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets.) <|cite_end|> <|cite_start|> (Reference: Scalable inference of overlapping communities: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms.) <|cite_end|>, a lower bound on the marginal likelihood of the observed data (typically obtained by applying a mean-field approximation), and are efficient for practical implementation. Stochastic versions of the variational approach provide even further gains in efficiency and are state-of-the-art practical learning methods for MMSB models <|cite_start|> (Reference: Scalable inference of overlapping communities: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms.) <|cite_end|>. However, these methods lack theoretical guarantees; since they optimize a bound on the likelihood, they are not guaranteed to recover the underlying communities consistently.
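To show the coordinate-ascent pattern behind such variational methods, here is a toy mean-field sketch on a conjugate model, a univariate Gaussian with unknown mean and precision factorized as $q(\mu,\tau)=q(\mu)q(\tau)$; MMSB variational inference follows the same ELBO coordinate-ascent template with far more factors, and the model and hyperparameters here are purely illustrative.

```python
import numpy as np

x = np.random.normal(2.0, 1.0, size=500)      # observed data (toy)
N, xbar = len(x), x.mean()
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0        # prior hyperparameters

E_tau = a0 / b0                               # initialize E_q[tau]
for _ in range(50):                           # coordinate-ascent sweeps on the ELBO
    # Update q(mu) = Normal(mu_n, 1/lam_n), holding q(tau) fixed.
    mu_n = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_n = (lam0 + N) * E_tau
    # Update q(tau) = Gamma(a_n, b_n) using the current moments of q(mu).
    a_n = a0 + (N + 1) / 2
    E_sq = ((x - mu_n) ** 2).sum() + N / lam_n \
        + lam0 * ((mu_n - mu0) ** 2 + 1 / lam_n)
    b_n = b0 + 0.5 * E_sq
    E_tau = a_n / b_n

print(f"variational posterior: mean ~ {mu_n:.3f}, precision ~ {E_tau:.3f}")
```

The stochastic versions cited above replace such full sweeps with noisy gradient steps computed on subsampled data, which is what makes them scale to large networks.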
A recent work <|cite_start|> (Reference: Consistency of maximum-likelihood and variational estimators in the stochastic block model: The stochastic block model (SBM) is a probabilistic model designed to describe heterogeneous directed and undirected graphs. In this paper, we address the asymptotic inference on SBM by use of maximum-likelihood and variational approaches. The identifiability of SBM is proved, while asymptotic properties of maximum-likelihood and variational estimators are provided. In particular, the consistency of these estimators is settled, which is, to the best of our knowledge, the first result of this type for variational estimators with random graphs.) <|cite_end|>establishes consistency of maximum likelihood and variational estimators for stochastic block models, which are special cases of the MMSB model. However, it is not known if the results extend to general MMSB models. Moreover, the framework of <|cite_start|> (Reference: Consistency of maximum-likelihood and variational estimators in the stochastic block model: The stochastic block model (SBM) is a probabilistic model designed to describe heterogeneous directed and undirected graphs. In this paper, we address the asymptotic inference on SBM by use of maximum-likelihood and variational approaches. The identifiability of SBM is proved, while asymptotic properties of maximum-likelihood and variational estimators are provided. In particular, the consistency of these estimators is settled, which is, to the best of our knowledge, the first result of this type for variational estimators with random graphs.) <|cite_end|>assumes a fixed number of communities and a growing network size, and provides only asymptotic consistency guarantees. Thus, it does not allow for high-dimensional settings, where the parameters of the learning problem also grow as the observed dimensionality grows. In contrast, in this paper, we allow for the number of communities to grow, and provide precise constraints on the scaling bounds for consistent estimation under finite samples. It is an open problem to obtain such bounds for maximum likelihood and variational estimators. On the practical side, a recent work deploying the tensor approach proposed in this paper <|cite_start|> (Reference: Fast Detection of Overlapping Communities via Online Tensor Methods on GPUs: We present a fast tensor-based approach for detecting hidden overlapping communities under the Mixed Membership Stochastic Blockmodel (MMSB). We present two implementations, \viz a GPU-based implementation which exploits the parallelism of SIMD architectures and a CPU-based implementation for larger datasets, wherein the GPU memory does not suffice. Our GPU-based implementation involves a careful optimization of storage, data transfer and matrix computations. Our CPU-based implementation involves sparse linear algebraic operations which exploit the data sparsity. We use stochastic gradient descent for multilinear spectral optimization and this allows for flexibility in the tradeoff between node sub-sampling and accuracy of the results. We validate our results on datasets from Facebook, Yelp and DBLP where ground truth is available, using notions of $p$-values and false discovery rates, and obtain high accuracy for membership recovery. We compare our results, both in terms of execution time and accuracy, to the state-of-the-art algorithms such as the variational method, and report many orders of magnitude gain in the execution time.
The tensor method is also applicable for unsupervised learning of a wide range of latent variable models, and we also demonstrate efficient recovery of topics from the Nytimes dataset.) <|cite_end|>shows that the tensor approach is more than an order of magnitude faster than the variational approach in recovering the communities, is scalable to networks with millions of nodes, and also recovers the communities more accurately. <|paper_end|>
[ "<|reference_start|> Random Tensors and Planted Cliques: The r-parity tensor of a graph is a generalization of the adjacency matrix, where the tensor's entries denote the parity of the number of edges in subgraphs induced by r distinct vertices. For r=2, it is the adjacency matrix with 1's for edges and -1's for nonedges. It is well-known that the 2-norm of the adjacency matrix of a random graph is O(\\sqrt{n}). Here we show that the 2-norm of the r-parity tensor is at most f(r)\\sqrt{n}\\log^{O(r)}n, answering a question of Frieze and Kannan who proved this for r=3. As a consequence, we get a tight connection between the planted clique problem and the problem of finding a vector that approximates the 2-norm of the r-parity tensor of a random graph. Our proof method is based on an inductive application of concentration of measure. <|reference_end|>", "<|reference_start|> Stochastic Block Models and Reconstruction: The planted partition model (also known as the stochastic blockmodel) is a classical cluster-exhibiting random graph model that has been extensively studied in statistics, physics, and computer science. In its simplest form, the planted partition model is a model for random graphs on $n$ nodes with two equal-sized clusters, with an between-class edge probability of $q$ and a within-class edge probability of $p$. Although most of the literature on this model has focused on the case of increasing degrees (ie.\\ $pn, qn \\to \\infty$ as $n \\to \\infty$), the sparse case $p, q = O(1/n)$ is interesting both from a mathematical and an applied point of view. \nA striking conjecture of Decelle, Krzkala, Moore and Zdeborov\\'a based on deep, non-rigorous ideas from statistical physics gave a precise prediction for the algorithmic threshold of clustering in the sparse planted partition model. In particular, if $p = a/n$ and $q = b/n$, then Decelle et al.\\ conjectured that it is possible to cluster in a way correlated with the true partition if $(a - b)^2 > 2(a + b)$, and impossible if $(a - b)^2 C (a + b)$ for some sufficiently large $C$. \nWe prove half of their prediction, showing that it is indeed impossible to cluster if $(a - b)^2 2(a + b)$. Following Decelle et al, our work establishes a rigorous connection between the clustering problem, spin-glass models on the Bethe lattice and the so called reconstruction problem. This connection points to fascinating applications and open problems. <|reference_end|>", "<|reference_start|> Stochastic Variational Inference: We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets. 
<|reference_end|>", "<|reference_start|> Scalable inference of overlapping communities: We develop a scalable algorithm for posterior inference of overlapping communities in large networks. Our algorithm is based on stochastic variational inference in the mixed-membership stochastic blockmodel (MMSB). It naturally interleaves subsampling the network with estimating its community structure. We apply our algorithm on ten large, real-world networks with up to 60,000 nodes. It converges several orders of magnitude faster than the state-of-the-art algorithm for MMSB, finds hundreds of communities in large real-world networks, and detects the true communities in 280 benchmark networks with equal or better accuracy compared to other scalable algorithms. <|reference_end|>" ]
[ 2, 4, 5, 7 ]
{"<|multi_cite_38_1|>": "ss-819113", "<|multi_cite_38_2|>": "ss-768050", "<|multi_cite_38_3|>": "ss-806991", "<|multi_cite_38_4|>": "ss-845771", "<|cite_1|>": "ss-819113", "<|multi_cite_39_1|>": "ss-1107657", "<|multi_cite_39_2|>": "ss-1347982", "<|multi_cite_39_3|>": "ss-2491726", "<|multi_cite_39_4|>": "ss-1309251", "<|multi_cite_39_5|>": "ss-1230906", "<|multi_cite_39_6|>": "ss-1530422", "<|cite_40|>": "ss-970920", "<|cite_2|>": "arxiv-457", "<|cite_3|>": "ss-2480739", "<|cite_4|>": "ss-1813102", "<|cite_5|>": "arxiv-457", "<|cite_6|>": "arxiv-457", "<|cite_53|>": "arxiv-457", "<|cite_7|>": "ss-1021817", "<|cite_8|>": "ss-1352598", "<|cite_9|>": "ss-1352598", "<|cite_10|>": "ss-1352598", "<|cite_41|>": "ss-1288119", "<|cite_42|>": "ss-1356700", "<|cite_43|>": "ss-1432303", "<|cite_11|>": "ss-1078945", "<|cite_12|>": "ss-835166", "<|cite_13|>": "ss-842501", "<|cite_14|>": "ss-842501", "<|cite_15|>": "ss-842501", "<|multi_cite_44_1|>": "ss-2160512", "<|multi_cite_44_2|>": "ss-1562495", "<|multi_cite_16_1|>": "ss-1949518", "<|multi_cite_16_2|>": "ss-1326889", "<|cite_17|>": "ss-1352598", "<|cite_45|>": "ss-1530422", "<|cite_18|>": "ss-1021817", "<|cite_19|>": "ss-1352598", "<|cite_20|>": "ss-1352598", "<|cite_21|>": "ss-714230", "<|cite_22|>": "ss-985859", "<|cite_23|>": "arxiv-20993", "<|cite_24|>": "arxiv-26915", "<|cite_25|>": "ss-2013683", "<|cite_26|>": "arxiv-26915", "<|cite_27|>": "arxiv-26915", "<|cite_28|>": "ss-2013683", "<|cite_29|>": "arxiv-28401", "<|multi_cite_30_1|>": "ss-2491727", "<|multi_cite_30_2|>": "ss-1376358", "<|cite_46|>": "ss-1376358", "<|cite_31|>": "ss-1454354", "<|cite_47|>": "ss-1086931", "<|multi_cite_48_1|>": "arxiv-29414", "<|multi_cite_48_2|>": "ss-1078945", "<|multi_cite_48_3|>": "ss-2014989", "<|cite_32|>": "ss-2014989", "<|multi_cite_33_1|>": "arxiv-29414", "<|multi_cite_33_2|>": "ss-1078945", "<|multi_cite_33_3|>": "ss-2014989", "<|cite_34|>": "ss-1708004", "<|cite_35|>": "ss-821542", "<|cite_36|>": "arxiv-7431", "<|cite_49|>": "ss-1432303", "<|cite_37|>": "ss-795481", "<|multi_cite_50_1|>": "arxiv-33512", "<|multi_cite_50_2|>": "ss-1813102", "<|cite_51|>": "ss-1813102", "<|cite_52|>": "ss-1620994", "<|cite_54|>": "ss-1620994", "<|cite_55|>": "ss-1288119"}
2209.03224
<|paper_start|> Title: Dual Instrumental Method for Confounded Kernelized Bandits Abstract: Dual Instrumental Method for Confounded Kernelized Bandits: The contextual bandit problem is a theoretically justified framework with wide applications in various fields. While previous studies of this problem usually require independence between noise and contexts, our work considers a more sensible setting where the noise becomes a latent confounder that affects both contexts and rewards. Such a confounded setting is more realistic and extends to a broader range of applications. However, the unresolved confounder will cause a bias in reward function estimation and thus lead to a large regret. To deal with the challenges brought by the confounder, we apply the dual instrumental variable regression, which can correctly identify the true reward function. We prove that the convergence rate of this method is near-optimal in two types of widely used reproducing kernel Hilbert spaces. Therefore, we can design computationally efficient and regret-optimal algorithms based on the theoretical guarantees for confounded bandit problems. The numerical results illustrate the efficacy of our proposed algorithms in the confounded bandit setting. Introduction Contextual bandit problems have been studied to capture the trade-off between \emph{exploration} and \emph{exploitation} in online decision-making. Various formulations of the problem find wide applications spanning scheduling, dynamic pricing, packet routing, online auctions, e-commerce, and matching markets <|cite_start|> (Reference: Prediction, learning, and games: Empirical evidence to lend proper credence, however, continues to elude the quality literature. This hardly vexes Taguchi (or most of those who produce the corpus of the discipline), but it is importunate to the reviewer. In many settings, the loss function is unlikely to be symmetric with respect to the target and, furthermore, the behavior on either side of the target is not necessarily the same. Such seemingly obvious deviations have not deterred the vast majority from proclaiming the ubiquity of the function. The current book offers no new insights here. The treatment of experimental design is fairly strong. Taguchi’s use of outer arrays is one of his greatest contributions (and one that has caught the ire of a few academics). The book elucidates design adequately and illuminates Taguchi’s advances. Anyone who is well versed in design will be able to skip the introductions and go straight to the discussion of orthogonal arrays. In this reviewer’s opinion, this is the major strength of the book. Another strength is the extensive set of case studies that cover each topic from the previous chapters. Applications include robust engineering in polymer chemistry, material design in automatic transmissions, improvements in omelet taste, and the use of Mahalanobis distance to measure drug efficacy. The sheer range of topical coverage in the cases will doubtlessly find appeal for virtually any practitioner regardless of specific field. There is the obligatory mention of Six Sigma as it relates to Taguchi’s work. Given the scope of Six Sigma in the current landscape, finding your place therein is necessary. A glaring omission is the lack of a similar consideration of ISO and QS certifications (as is given in Juran). Do not assume that the reviewer sees this as a negative. It is hoped here that Taguchi sees these quality certifications as largely specious and unworthy of a reference.
Overall, it is hard not to be impressed with the utter volume of Taguchi’s output. The expanse of coverage is not to be dismissed. As a vehicle for presenting his prolific production, the handbook succeeds. The book may appear to be somewhat self-indulgent (as if 1600+ pages about your previous work could appear otherwise!). No doubt an ambitious undertaking, the authors nevertheless generally hit their mark. One would be hard-pressed not to at least enjoy most of the ride. What is positive (negative) about the book is largely what one perceives to be positive (negative) about Taguchi. The aforementioned lack of scholarly references is unsurprising, because Taguchi largely practiced beyond the boundaries of academia. Many academics have tended to reciprocate with less attention to his work than is probably deserved. What can safely be said is that if you are a fan of Taguchi’s work, this is definitely for you. If you need a single reference for his work or simply desire a “complete quality library,” you cannot go wrong here. Otherwise, it is unlikely that you would be interested. But in the event that you are a practitioner itching to get acquainted with Taguchi and have $150 burning a hole in your wallet or Visa, this one’s a winner.) <|cite_end|>. At each round, a learner chooses an \emph{action} for an observed \emph{context} to generate a \emph{reward}, which depends on the action and context. The goal is to maximize the cumulative expected rewards, or equivalently, to minimize the cumulative expected \emph{regret}. Many algorithms have been designed in the literature (see the book-length summary of bandit algorithms <|cite_start|> (Reference: Bandit Algorithms: sets of environments and policies respectively and $\ell : \mathcal{E} \times \Pi \to [0, 1]$ a bounded loss function. Given a policy $\pi$ let $\ell(\pi) = (\ell(\nu_1, \pi), \ldots, \ell(\nu_N, \pi))$ be the loss vector resulting from policy $\pi$. Define $S = \{\ell(\pi) : \pi \in \Pi\}$ and $\lambda(S) = \{x \in \mathrm{cl}(S) : y \not< x \text{ for all } y \in S\}$, where $y \not< x$ is defined to mean it is not true that $y_i \le x_i$ for all $i$ with strict inequality for at least one $i$. Prove that if $\lambda(S) \subseteq S$ and $S$ is convex, then for each $x \in \lambda(S)$ there exists a prior $q \in \mathcal{P}(\mathcal{E})$ and policy $\pi^*$ such that $\ell(\pi) = x$ and $\sum_{\nu \in \mathcal{E}} q(\nu)\ell(\nu, \pi^*) = \min_{\pi \in \Pi} \sum$) <|cite_end|>) to achieve near-optimal regret rates. Many existing studies on contextual bandits, including linear bandits <|cite_start|> (Reference: {Improved Algorithms for Linear Stochastic Bandits: We improve the theoretical analysis and empirical performance of algorithms for the stochastic multi-armed bandit problem and the linear stochastic multi-armed bandit problem. In particular, we show that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves with high probability constant regret. More importantly, we modify and, consequently, improve the analysis of the algorithm for the linear stochastic bandit problem studied by Auer (2002), Dani et al. (2008), Rusmevichientong and Tsitsiklis (2010), Li et al. (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement. In both cases, the improvement stems from the construction of smaller confidence sets. For their construction we use a novel tail inequality for vector-valued martingales.) <|cite_end|>, generalized linear bandits <|cite_start|> (Reference: Provably Optimal Algorithms for Generalized Linear Contextual Bandits: Contextual bandits are widely used in Internet services from news recommendation to advertising, and to Web search.
Generalized linear models (logistical regression in particular) have demonstrated stronger performance than linear models in many applications where rewards are binary. However, most theoretical analyses on contextual bandits so far are on linear bandits. In this work, we propose an upper confidence bound based algorithm for generalized linear contextual bandits, which achieves an $\tilde{O}(\sqrt{dT})$ regret over $T$ rounds with $d$ dimensional feature vectors. This regret matches the minimax lower bound, up to logarithmic terms, and improves on the best previous result by a $\sqrt{d}$ factor, assuming the number of arms is fixed. A key component in our analysis is to establish a new, sharp finite-sample confidence bound for maximum-likelihood estimates in generalized linear models, which may be of independent interest. We also analyze a simpler upper confidence bound algorithm, which is useful in practice, and prove it to have optimal regret for certain cases.) <|cite_end|> and kernelized bandits <|cite_start|> (Reference: Finite-Time Analysis of Kernelised Contextual Bandits: We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.) <|cite_end|>, rely on one essential assumption: \emph{the independence between noise and contexts}. In this paper, we relax this assumption by modeling the correlation using causal graphs where the noise becomes a latent confounder. Such a causal relationship is arguably sensible for practical applications in the real world. Many practical problems can be modeled using this framework <|cite_start|> (Reference: Deep iv: A flexible approach for counterfactual prediction: Counterfactual prediction requires understanding causal relationships between so-called treatment and outcome variables. This paper provides a recipe for augmenting deep learning methods to accurately characterize such relationships in the presence of instrument variables (IVs)—sources of treatment randomization that are conditionally independent from the outcomes. Our IV specification resolves into two prediction tasks that can be solved with deep neural nets: a first-stage network for treatment prediction and a second-stage network whose loss function involves integration over the conditional treatment distribution. This Deep IV framework allows us to take advantage of off-the-shelf supervised learning techniques to estimate causal effects by adapting the loss function. Experiments show that it outperforms existing machine learning approaches.) 
<|cite_end|> <|cite_start|> (Reference: Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation: Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder. This is achieved via two-stage regression: in the first stage, we model relations among the treatment and proxies; in the second stage, we use this model to learn the effect of treatment on the outcome, given the context provided by the proxies. PCL guarantees recovery of the true causal effect, subject to identifiability conditions. We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have nonlinear complex relationships, as represented by deep neural network features. We show that DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic benchmarks, including settings involving high dimensional image data. Furthermore, we show that PCL can be applied to off-policy evaluation for the confounded bandit problem, in which DFPV also exhibits competitive performance.) <|cite_end|> <|cite_start|> (Reference: Dual Instrumental Variable Regression: We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation. Inspired by problems in stochastic programming, we show that two-stage procedures for non-linear IV regression can be reformulated as a convex-concave saddle-point problem. Our formulation enables us to circumvent the first-stage regression which is a potential bottleneck in real-world applications. We develop a simple kernel-based algorithm with an analytic solution based on this formulation. Empirical results show that we are competitive to existing, more complicated algorithms for non-linear instrumental variable regression.) <|cite_end|>. Under such a framework, we need to estimate the unknown function from noisy and possibly high-dimensional samples affected by the unobserved confounders while striking a balance between exploration and exploitation to achieve optimal regret. We apply causal tools, e.g., instrumental variable (IV) regression, to tackle the challenge brought by latent confounders. Combined with the kernel trick and the dual formulation, the instrumental variable method elegantly performs regression in reproducing kernel Hilbert spaces (RKHS) and accurately identifies the causal effect. To deal with the non-i.i.d.\ issue of bandit data, we divide the time horizon into epochs, in each of which our proposed action sampling policy can effectively balance exploration and exploitation and efficiently reduce the computational burden. \paragraph{Contributions.} First, our work generalizes the kernelized contextual bandit to a causal setting with latent confounders. In this way, we allow the noise to become a confounder that affects both contexts and rewards, in contrast with previous works <|cite_start|> (Reference: {Improved Algorithms for Linear Stochastic Bandits: We improve the theoretical analysis and empirical performance of algorithms for the stochastic multi-armed bandit problem and the linear stochastic multi-armed bandit problem. In particular, we show that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves with high probability constant regret.
More importantly, we modify and, consequently, improve the analysis of the algorithm for the linear stochastic bandit problem studied by Auer (2002), Dani et al. (2008), Rusmevichientong and Tsitsiklis (2010), Li et al. (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement. In both cases, the improvement stems from the construction of smaller confidence sets. For their construction we use a novel tail inequality for vector-valued martingales.) <|cite_end|> <|cite_start|> (Reference: Finite-Time Analysis of Kernelised Contextual Bandits: We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.) <|cite_end|> <|cite_start|> (Reference: Efficient Kernel UCB for Contextual Bandits: In this paper, we tackle the computational efficiency of kernelized UCB algorithms in contextual bandits. While standard methods require a O(CT^3) complexity where T is the horizon and the constant C is related to optimizing the UCB rule, we propose an efficient contextual algorithm for large-scale problems. Specifically, our method relies on incremental Nystrom approximations of the joint kernel embedding of contexts and actions. This allows us to achieve a complexity of O(CTm^2) where m is the number of Nystrom points. To recover the same regret as the standard kernelized UCB algorithm, m needs to be of order of the effective dimension of the problem, which is at most O(\sqrt(T)) and nearly constant in some cases.) <|cite_end|>. We show that in such confounded settings, the learner can still achieve a near-optimal regret (up to $\mathcal{O}(\log\log T)$ terms) by our \cref{alg: DIV-BLS}, which is comparable to the performance of existing bandit algorithms in unconfounded settings. Our algorithm is computationally efficient because we reduce the number of optimization problems to be solved from $\mathcal{O}(T)$ to $\mathcal{O}(\log T)$ (or $\mathcal{O}(\log\log T)$ if $T$ is known) by an epoch-based learning strategy. Second, we analyze the convergence rate of the dual IV method and give a guideline on choosing the regularization parameters, which may be of independent interest. In <|cite_start|> (Reference: Dual Instrumental Variable Regression: We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation. Inspired by problems in stochastic programming, we show that two-stage procedures for non-linear IV regression can be reformulated as a convex-concave saddle-point problem.
Our formulation enables us to circumvent the first-stage regression which is a potential bottleneck in real-world applications. We develop a simple kernel-based algorithm with an analytic solution based on this formulation. Empirical results show that we are competitive to existing, more complicated algorithms for non-linear instrumental variable regression.) <|cite_end|>, only the consistency of the dual IV estimator is proved under realizability, invertibility, and continuity assumptions. We consider the cases of both finite-dimensional (\cref{thm: oracle inequality for finite rank}) and infinite-dimensional (\cref{thm: oracle inequality}) spaces, and prove that this method achieves optimal convergence rates with high probability under the same conditions (\cref{thm: lower bound of finite dimensional spaces}, \cref{thm: lower bound}). \subsection{Related Works} \emph{Unobserved Confounders.} The study of unobserved confounders is one of the central themes in the modern literature of causal inference <|cite_start|> (Reference: Causal inference in statistics: an overview: This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called "causal effects" or "policy evaluation") (2) queries about probabilities of counterfactuals, (including assessment of "regret," "attribution" or "causes of effects") and (3) queries about direct and indirect effects (also known as "mediation"). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.) <|cite_end|> <|cite_start|> (Reference: A Survey on Causal Inference: Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics, for decades. Nowadays, estimating causal effect from observational data has become an appealing research direction owing to the large amount of available data and low budget requirement, compared with randomized controlled trials. Embraced with the rapidly developed machine learning area, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the well known causal inference framework. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework or not.
For each category, both the traditional statistical methods and the recent machine learning enhanced methods are discussed and compared. The plausible applications of these methods are also presented, including the applications in advertising, recommendation, medicine and so on. Moreover, the commonly used benchmark datasets as well as the open-source codes are also summarized, which facilitate researchers and practitioners to explore, evaluate and apply the causal inference methods.) <|cite_end|>. In the presence of unobserved confounders, many novel methods have been proposed; see <|cite_start|> (Reference: Causal inference in statistics: an overview: This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions, (also called "causal effects" or "policy evaluation") (2) queries about probabilities of counterfactuals, (including assessment of "regret," "attribution" or "causes of effects") and (3) queries about direct and indirect effects (also known as "mediation"). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.) <|cite_end|> <|cite_start|> (Reference: A Survey on Causal Inference: Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics, for decades. Nowadays, estimating causal effect from observational data has become an appealing research direction owing to the large amount of available data and low budget requirement, compared with randomized controlled trials. Embraced with the rapidly developed machine learning area, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the well known causal inference framework. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework or not.
<|cite_end|> for an overview. These methods are also widely studied in bandit settings. <|cite_start|> (Reference: Bandits with unobserved confounders: A causal approach: The Multi-Armed Bandit problem constitutes an archetypal setting for sequential decision-making, permeating multiple domains including engineering, business, and medicine. One of the hallmarks of a bandit setting is the agent's capacity to explore its environment through active intervention, which contrasts with the ability to collect passive data by estimating associational relationships between actions and payouts. The existence of unobserved confounders, namely unmeasured variables affecting both the action and the outcome variables, implies that these two data-collection modes will in general not coincide. In this paper, we show that formalizing this distinction has conceptual and algorithmic implications to the bandit setting. The current generation of bandit algorithms implicitly try to maximize rewards based on estimation of the experimental distribution, which we show is not always the best strategy to pursue. Indeed, to achieve low regret in certain realistic classes of bandit problems (namely, in the face of unobserved confounders), both experimental and observational quantities are required by the rational agent. After this realization, we propose an optimization metric (employing both experimental and observational distributions) that bandit agents should pursue, and illustrate its benefits over traditional algorithms.) <|cite_end|> point out the possibility and necessity of causal approaches in bandits when faced with confounders. A novel causal bandit model is proposed in <|cite_start|> (Reference: Causal Bandits: Learning Good Interventions via Causal Inference: We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-arm bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information.) <|cite_end|> to illustrate the causal relationships among actions via causal graphs. Further, <|cite_start|> (Reference: Online Learning for Causal Bandits: The Causal Multi-Arm Bandit framework (Lattimore & Reid, 2016) allows for modeling sequential decision problems in causal environments. In previous works, online learning in the Causal MAB framework has not been analyzed. We propose an algorithm, Online Causal Thompson Sampling (OC-TS), for online decision making in such environments and perform simulations to understand the performance of OC-TS compared to offline algorithms.) <|cite_end|> combine the causal methods with traditional bandit algorithms to demonstrate that causal approaches can significantly improve the regret bounds. \emph{Instrumental Variable Regression.} IV regression is another method for learning causal relationships from observational data. When measurements of input and output are confounded, the causal relationship, also called the structural relationship, can be identified if an instrumental variable is available (see \cref{fig: causal model with instrumental variables}).
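As a minimal numerical illustration of this identification strategy, the sketch below runs the classic linear two-stage procedure discussed next (2SLS) on simulated confounded data; the coefficients and noise model are made up for the example, and the kernelized methods that follow generalize this linear template.

```python
import numpy as np

n = 20000
z = np.random.randn(n)                            # instrument
u = np.random.randn(n)                            # unobserved confounder
x = 0.8 * z + u + 0.1 * np.random.randn(n)        # treatment, driven by z and u
y = 1.5 * x + 2.0 * u + 0.1 * np.random.randn(n)  # structural effect is 1.5

def ols(a, b):
    # Least-squares fit of b on [1, a]; returns (intercept, slope).
    A = np.column_stack([np.ones_like(a), a])
    return np.linalg.lstsq(A, b, rcond=None)[0]

naive = ols(x, y)[1]                              # biased: x is correlated with u
x_hat = np.column_stack([np.ones(n), z]) @ ols(z, x)  # stage 1: project x onto z
two_sls = ols(x_hat, y)[1]                        # stage 2: regress y on fitted x
print(f"naive OLS: {naive:.2f}, 2SLS: {two_sls:.2f}, truth: 1.5")
```

The naive regression absorbs the confounder's contribution into the slope, whereas the first-stage projection keeps only the variation in $x$ that comes from $z$, which is independent of $u$, so the second stage recovers the structural coefficient.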
Classical instrumental variable regression proceeds via two-stage least squares (2SLS), and <|cite_start|> (Reference: Kernel Instrumental Variable Regression: Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data. If measurements of input X and output Y are confounded, the causal relationship can nonetheless be identified if an instrumental variable Z is available that influences X directly, but is conditionally independent of Y given X and the unmeasured confounder. The classic two-stage least squares algorithm (2SLS) simplifies the estimation problem by modeling all relationships as linear functions. We propose kernel instrumental variable regression (KIV), a nonparametric generalization of 2SLS, modeling relations among X, Y, and Z as nonlinear functions in reproducing kernel Hilbert spaces (RKHSs). We prove the consistency of KIV under mild assumptions, and derive conditions under which convergence occurs at the minimax optimal rate for unconfounded, single-stage RKHS regression. In doing so, we obtain an efficient ratio between training sample sizes used in the algorithm's first and second stages. In experiments, KIV outperforms state of the art alternatives for nonparametric IV regression.) <|cite_end|> <|cite_start|> (Reference: Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction: We address the problem of causal effect estimation in the presence of unobserved confounding, but where proxies for the latent confounder(s) are observed. We propose two kernel-based methods for nonlinear causal effect estimation in this setting: (a) a two-stage regression approach, and (b) a maximum moment restriction approach. We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterised by a Fredholm integral equation. In particular, we provide a unifying view of two-stage and moment restriction approaches for solving this problem in a nonlinear setting. We provide consistency guarantees for each algorithm, and we demonstrate these approaches achieve competitive results on synthetic data and data simulating a real-world task. In particular, our approach outperforms earlier methods that are not suited to leveraging proxy variables.) <|cite_end|> generalize this method to nonlinear settings. They also provide consistency guarantees for their kernel instrumental variable algorithm. The idea of a dual formulation simplifies traditional two-stage methods; several works <|cite_start|> (Reference: Dual Instrumental Variable Regression: We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation. Inspired by problems in stochastic programming, we show that two-stage procedures for non-linear IV regression can be reformulated as a convex-concave saddle-point problem. Our formulation enables us to circumvent the first-stage regression which is a potential bottleneck in real-world applications. We develop a simple kernel-based algorithm with an analytic solution based on this formulation. Empirical results show that we are competitive to existing, more complicated algorithms for non-linear instrumental variable regression.)
<|cite_end|> <|cite_start|> (Reference: Bayesian Deconditional Kernel Mean Embeddings: Conditional kernel mean embeddings form an attractive nonparametric framework for representing conditional means of functions, describing the observation processes for many complex models. However, the recovery of the original underlying function of interest whose conditional mean was observed is a challenging inference task. We formalize deconditional kernel mean embeddings as a solution to this inverse problem, and show that it can be naturally viewed as a nonparametric Bayes' rule. Critically, we introduce the notion of task transformed Gaussian processes and establish deconditional kernel means as their posterior predictive mean. This connection provides Bayesian interpretations and uncertainty estimates for deconditional kernel mean embeddings, explains their regularization hyperparameters, and reveals a marginal likelihood for kernel hyperparameter learning. These revelations further enable practical applications such as likelihood-free inference and learning sparse representations for big data.) <|cite_end|> propose such ideas with similar mathematical structure. \emph{Instrumental Variable in Bandits.} A few papers apply IV regression in machine learning. <|cite_start|> (Reference: Instrument-Armed Bandits: We extend the classic multi-armed bandit (MAB) model to the setting of noncompliance, where the arm pull is a mere instrument and the treatment applied may differ from it, which gives rise to the instrument-armed bandit (IAB) problem. The IAB setting is relevant whenever the experimental units are human since free will, ethics, and the law may prohibit unrestricted or forced application of treatment. In particular, the setting is relevant in bandit models of dynamic clinical trials and other controlled trials on human interventions. Nonetheless, the setting has not been fully investigate in the bandit literature. We show that there are various and divergent notions of regret in this setting, all of which coincide only in the classic MAB setting. We characterize the behavior of these regrets and analyze standard MAB algorithms. We argue for a particular kind of regret that captures the causal effect of treatments but show that standard MAB algorithms cannot achieve sublinear control on this regret. Instead, we develop new algorithms for the IAB problem, prove new regret bounds for them, and compare them to standard MAB algorithms in numerical examples.) <|cite_end|> formalize an instrument-armed bandit (IAB) framework in a multi-armed bandit setting. The arm pull is the choice of the instrumental variable, which influences rather than guarantees the application of the treatment. As an application of IAB, <|cite_start|> (Reference: Incentivizing Bandit Exploration: Recommendations as Instruments: We study a multi-armed bandit learning setting where a social planner incentivizes a set of heterogeneous agents to efficiently explore the set of available arms. At each round, an agent arrives with their unobserved private type that determines both their prior preferences across the actions as well as their action-independent confounding shift in the rewards. The planner provides the agent with an arm recommendation that may alter their belief and incentivize them to explore potentially sub-optimal arms.
Under this setting, we provide a novel recommendation mechanism that views the planner’s recommendations as a form of instrumental variables (IV) that only affect agents’ arm selection but not the observed rewards. We construct such IVs by carefully mapping the history–the interactions between the planner and the previous agents–to a random arm recommendation. Despite the unobserved confounding shift in the rewards, the resulting IV regression provides reliable estimates on the mean rewards of the actions and enables the social learning process to minimize regret over the long term.) <|cite_end|> develop a novel recommendation mechanism that views the recommendation as a form of instrumental variables. Such mechanisms strategically select instruments to incentivize compliance over time and achieve optimal regret up to logarithmic factors. However, both works assume the linearity of structural equations. When structural functions are nonlinear, <|cite_start|> (Reference: Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation: Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder. This is achieved via two-stage regression: in the first stage, we model relations among the treatment and proxies; in the second stage, we use this model to learn the effect of treatment on the outcome, given the context provided by the proxies. PCL guarantees recovery of the true causal effect, subject to identifiability conditions. We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have nonlinear complex relationships, as represented by deep neural network features. We show that DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic benchmarks, including settings involving high dimensional image data. Furthermore, we show that PCL can be applied to off-policy evaluation for the confounded bandit problem, in which DFPV also exhibits competitive performance.) <|cite_end|> show that IV implementation can be broken into two supervised stages, which can be targeted with deep networks. <|cite_start|> (Reference: Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach: Structural equation models (SEMs) are widely used in sciences, ranging from economics to psychology, to uncover causal relationships underlying a complex system under consideration and estimate structural parameters of interest. We study estimation in a class of generalized SEMs where the object of interest is defined as the solution to a linear operator equation. We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using the stochastic gradient descent. We consider both 2-layer and multi-layer NNs with ReLU activation functions and prove global convergence in an overparametrized regime, where the number of neurons is diverging. The results are established using techniques from online learning and local linearization of NNs, and improve in several aspects the current state-of-the-art. For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.)
<|cite_end|> take an important step in this direction and provide convergence analysis for neural networks. \emph{Kernelized Bandit.} The kernelized bandit was originally formulated by <|cite_start|> (Reference: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.) <|cite_end|>. This work generalizes stochastic linear optimization in a bandit setting, where the unknown reward function comes from a finite-dimensional reproducing kernel Hilbert space (RKHS). The smoothness assumptions about functions are encoded through the choice of kernels in a flexible nonparametric fashion. <|cite_start|> (Reference: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.) <|cite_end|> resolve the problem of deriving regret bounds via GP optimization in RKHS. <|cite_start|> (Reference: Finite-Time Analysis of Kernelised Contextual Bandits: We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. 
For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.) <|cite_end|> propose the KernelUCB algorithm and obtain $\tilde{\mathcal{O}} ( \sqrt{ \tilde{d} T} ) $ regret, where $\tilde{d}$ is the effective dimension of the data. Subsequent works consider the case when the kernel is of infinite rank, e.g., the Mat{\'e}rn kernel; they use an improved GP-UCB algorithm to achieve a suboptimal regret upper bound $\tilde{\mathcal{O}} (T^{\frac{d(d+1)}{d(d+1) +2\nu }})$, where $\nu$ captures the smoothness of Mat{\'e}rn kernels and $d$ is the dimension of contexts. Later, <|cite_start|> (Reference: Efficient Kernel UCB for Contextual Bandits: In this paper, we tackle the computational efficiency of kernelized UCB algorithms in contextual bandits. While standard methods require a O(CT^3) complexity where T is the horizon and the constant C is related to optimizing the UCB rule, we propose an efficient contextual algorithm for large-scale problems. Specifically, our method relies on incremental Nystrom approximations of the joint kernel embedding of contexts and actions. This allows us to achieve a complexity of O(CTm^2) where m is the number of Nystrom points. To recover the same regret as the standard kernelized UCB algorithm, m needs to be of order of the effective dimension of the problem, which is at most O(\sqrt(T)) and nearly constant in some cases.) <|cite_end|> improve the computational efficiency of kernelized UCB algorithms. They also show that the concepts of information gain in <|cite_start|> (Reference: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.) <|cite_end|> and effective dimension in <|cite_start|> (Reference: Finite-Time Analysis of Kernelised Contextual Bandits: We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS).
We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.) <|cite_end|> <|cite_start|> (Reference: Efficient Kernel UCB for Contextual Bandits: In this paper, we tackle the computational efficiency of kernelized UCB algorithms in contextual bandits. While standard methods require a O(CT^3) complexity where T is the horizon and the constant C is related to optimizing the UCB rule, we propose an efficient contextual algorithm for large-scale problems. Specifically, our method relies on incremental Nystrom approximations of the joint kernel embedding of contexts and actions. This allows us to achieve a complexity of O(CTm^2) where m is the number of Nystrom points. To recover the same regret as the standard kernelized UCB algorithm, m needs to be of order of the effective dimension of the problem, which is at most O(\sqrt(T)) and nearly constant in some cases.) <|cite_end|> are equivalent up to logarithmic factors. \paragraph{Overview.} In \cref{sec: problem formulation}, we propose a contextual bandit with a nonlinear reward function and latent confounders. We also introduce instrumental variable regression in this section, in order to tackle the challenge posed by unobserved confounders. In \cref{sec: methods}, we propose the dual method to perform IV regression efficiently in the bandit setting. We then analyze the dual method in reproducing kernel Hilbert spaces and derive a concentration inequality. Next, we design a bandit algorithm that combines dual IV regression with an epoch-based learning strategy. We present the regret upper and lower bounds in \cref{sec: regret analysis} and the numerical results in \cref{sec: numerical experiments}. In the appendices, we provide all the proofs and discuss the concentration inequalities and the regret of the new bandit algorithm for infinite-dimensional RKHSs. \paragraph{Notations.} We define the following notations, which will be used throughout the paper. We denote $\innerH{\cdot}{\cdot}$ and $\normH{\cdot}$ as the inner product of the Hilbert space $\mathcal{H}$ and its induced norm, respectively. The $L^2$-norm of a function $f$ associated with the random variable $X$ is defined as $\normX{f}: = \sqrt{ \meanX{f^2(X)}}$. For functions $f$ and $g$, we write $f(x) = \mathcal{O} (g(x))$ if there exists a constant $C>0$ such that $f(x)\leq Cg(x)$, and write $f(x) = \Omega (g(x))$ if $g(x)=\mathcal{O} (f(x))$. The notation $\simeq$ means that two quantities are of the same order up to a constant. We denote $\otimes$ as the tensor product. <|paper_end|>
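As a rough companion to the kernelized-bandit discussion above, the following sketch implements a single selection round of a kernel-ridge UCB rule in the spirit of GP-UCB and KernelUCB; the RBF kernel, the regularizer lam, and the exploration weight beta are illustrative assumptions of ours, and the cited algorithms differ in how these quantities are chosen and analyzed.

# One round of a kernelized UCB rule: fit kernel ridge regression to the
# reward history, then pick the candidate maximizing mean + beta * stddev.
import numpy as np

def rbf(a, b, ls=0.5):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2 * ls**2))

def kernel_ucb_pick(X_hist, r_hist, X_cand, lam=1.0, beta=2.0):
    K_inv = np.linalg.inv(rbf(X_hist, X_hist) + lam * np.eye(len(X_hist)))
    k_star = rbf(X_cand, X_hist)                   # candidate-history kernels
    mean = k_star @ K_inv @ r_hist                 # ridge posterior mean
    var = 1.0 - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)
    return int(np.argmax(mean + beta * np.sqrt(np.maximum(var, 0.0))))

rng = np.random.default_rng(1)
X_hist = rng.uniform(-1, 1, size=(10, 2))          # past contexts/actions
r_hist = np.sin(X_hist.sum(axis=1)) + 0.1 * rng.normal(size=10)
X_cand = rng.uniform(-1, 1, size=(50, 2))          # this round's arms
print("chosen arm index:", kernel_ucb_pick(X_hist, r_hist, X_cand))

A full bandit loop would append the chosen context and observed reward to the history each round; replacing the exact inverse with incremental Nystrom approximations, as in the Efficient Kernel UCB work cited above, reduces the per-round cost.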
[ "<|reference_start|> Efficient Kernel UCB for Contextual Bandits: In this paper, we tackle the computational efficiency of kernelized UCB algorithms in contextual bandits. While standard methods require a O(CT^3) complexity where T is the horizon and the constant C is related to optimizing the UCB rule, we propose an efficient contextual algorithm for large-scale problems. Specifically, our method relies on incremental Nystrom approximations of the joint kernel embedding of contexts and actions. This allows us to achieve a complexity of O(CTm^2) where m is the number of Nystrom points. To recover the same regret as the standard kernelized UCB algorithm, m needs to be of order of the effective dimension of the problem, which is at most O(\\sqrt(T)) and nearly constant in some cases. <|reference_end|>", "<|reference_start|> A Survey on Causal Inference: Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics, for decades. Nowadays, estimating causal effect from observational data has become an appealing research direction owing to the large amount of available data and low budget requirement, compared with randomized controlled trials. Embraced with the rapidly developed machine learning area, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the well known causal inference framework. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework or not. For each category, both the traditional statistical methods and the recent machine learning enhanced methods are discussed and compared. The plausible applications of these methods are also presented, including the applications in advertising, recommendation, medicine and so on. Moreover, the commonly used benchmark datasets as well as the open-source codes are also summarized, which facilitate researchers and practitioners to explore, evaluate and apply the causal inference methods. <|reference_end|>", "<|reference_start|> Causal Bandits: Learning Good Interventions via Causal Inference: We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-arm bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information. <|reference_end|>", "<|reference_start|> Finite-Time Analysis of Kernelised Contextual Bandits: We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. 
For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits. <|reference_end|>" ]
[ 10, 13, 17, 32 ]
{"<|cite_5|>": "ss-1351955", "<|cite_6|>": "ss-1516411", "<|cite_7|>": "ss-1288760", "<|cite_8|>": "arxiv-117821", "<|cite_9|>": "arxiv-50785", "<|multi_cite_10_1|>": "ss-783233", "<|multi_cite_10_2|>": "arxiv-346499", "<|multi_cite_10_3|>": "arxiv-231056", "<|multi_cite_11_1|>": "ss-1288760", "<|multi_cite_11_3|>": "arxiv-50785", "<|multi_cite_11_4|>": "arxiv-398496", "<|cite_12|>": "arxiv-231056", "<|multi_cite_13_1|>": "ss-761104", "<|multi_cite_13_2|>": "arxiv-246908", "<|multi_cite_14_1|>": "ss-761104", "<|multi_cite_14_2|>": "arxiv-246908", "<|cite_15|>": "ss-1259365", "<|cite_1|>": "arxiv-99800", "<|cite_16|>": "ss-955638", "<|multi_cite_17_1|>": "arxiv-207373", "<|multi_cite_17_2|>": "arxiv-340047", "<|multi_cite_2_1|>": "arxiv-231056", "<|multi_cite_2_2|>": "arxiv-207357", "<|cite_18|>": "arxiv-124665", "<|cite_19|>": "ss-955639", "<|cite_20|>": "arxiv-346499", "<|cite_21|>": "ss-955640", "<|cite_22|>": "arxiv-10642", "<|cite_23|>": "arxiv-10642", "<|cite_24|>": "arxiv-50785", "<|cite_26|>": "arxiv-398496", "<|cite_3|>": "arxiv-10642", "<|multi_cite_4_1|>": "arxiv-50785", "<|multi_cite_4_2|>": "arxiv-398496"}
2112.13260
<|paper_start|> Title: Utilizing gradient approximations to optimize data selection protocols for tumor growth model calibration Abstract: Utilizing gradient approximations to optimize data selection protocols for tumor growth model calibration: The use of mathematical models to make predictions about tumor growth and response to treatment has become increasingly more prevalent in the clinical setting. The level of complexity within these models ranges broadly, and the calibration of more complex models correspondingly requires more detailed clinical data. This raises questions about how much data should be collected and when, in order to minimize the total amount of data used and the time until a model can be calibrated accurately. To address these questions, we propose a Bayesian information-theoretic procedure, using a gradient-based score function to determine the optimal data collection times for model calibration. The novel score function introduced in this work eliminates the need for a weight parameter used in a previous study's score function, while still yielding accurate and efficient model calibration using even fewer scans on a sample set of synthetic data, simulating tumors of varying levels of radiosensitivity. We also conduct a robust analysis of the calibration accuracy and certainty, using both error and uncertainty metrics. Unlike the error analysis of the previous study, the inclusion of uncertainty analysis in this work---as a means for deciding when the algorithm can be terminated---provides a more realistic option for clinical decision-making, since it does not rely on data that will be collected later in time. Introduction In recent decades, mathematical modeling has frequently been used to advance our understanding of tumor evolution <|cite_start|> (Reference: The mathematics of cancer: integrating quantitative models: ) <|cite_end|> <|cite_start|> (Reference: The dynamics of drug resistance: a mathematical perspective.: ) <|cite_end|> <|cite_start|> (Reference: Mathematical modeling as a tool for planning anticancer therapy.: ) <|cite_end|> <|cite_start|> (Reference: Dissecting cancer through mathematics: from the cell to the animal model: ) <|cite_end|> <|cite_start|> (Reference: The 2019 Mathematical Oncology Roadmap: Whether the nom de guerre is Mathematical Oncology, Computational or Systems Biology, Theoretical Biology, Evolutionary Oncology, Bioinformatics, or simply Basic Science, there is no denying that mathematics continues to play an increasingly prominent role in cancer research. Mathematical Oncology—defined here simply as the use of mathematics in cancer research—complements and overlaps with a number of other fields that rely on mathematics as a core methodology. As a result, Mathematical Oncology has a broad scope, ranging from theoretical studies to clinical trials designed with mathematical models. This Roadmap differentiates Mathematical Oncology from related fields and demonstrates specific areas of focus within this unique field of research. The dominant theme of this Roadmap is the personalization of medicine through mathematics, modelling, and simulation. This is achieved through the use of patient-specific clinical data to: develop individualized screening strategies to detect cancer earlier; make predictions of response to therapy; design adaptive, patient-specific treatment plans to overcome therapy resistance; and establish domain-specific standards to share model predictions and to make models and simulations reproducible.
The cover art for this Roadmap was chosen as an apt metaphor for the beautiful, strange, and evolving relationship between mathematics and cancer.) <|cite_end|>. Modeling of cancer can be performed from the complex, highly-refined cellular level to a more ``macro" level view, where we assume that the tumor acts as a mass of homogeneous tissue. Estimating the parameter values of such models requires detailed data, which may take many forms <|cite_start|> (Reference: {The Impact of Big Data Research on Practice, Policy, and Cancer Care: The concept of "big data" research-the aggregation and analysis of biologic, clinical, administrative, and other data sources to drive new advances in biomedical knowledge-has been embraced by the cancer research enterprise. Although much of the conversation has concentrated on the amalgamation of basic biologic data (e.g., genomics, metabolomics, tumor tissue), new opportunities to extend potential contributions of big data to clinical practice and policy abound. This article examines these opportunities through discussion of three major data sources: aggregated clinical trial data, administrative data (including insurance claims data), and data from electronic health records. We will discuss the benefits of data use to answer key oncology practice and policy research questions, along with limitations inherent in these complex data sources. Finally, the article will discuss overarching themes across data types and offer next steps for the research, practice, and policy communities. The use of multiple sources of big data has the promise of improving knowledge and providing more accurate data for clinicians and policy decision makers. In the future, optimization of machine learning may allow for current limitations of big data analyses to be attenuated, thereby resulting in improved patient care and outcomes.) <|cite_end|> <|cite_start|> (Reference: Big Data and machine learning in radiation oncology: State of the art and future prospects.: ) <|cite_end|>. The models can then be used to make predictions about the evolution of the tumor and its response to various treatment modalities, including radiotherapy, chemotherapy, immunotherapy, and viral therapy, among others. Recent technological advances have made it possible to collect a wide variety of data describing tumors, from the molecular level to the tissue level. Collecting data at multiple time points can aid in the calibration of mathematical models, which can be tailored to incorporate the available data. However, some data collection can be prohibitively expensive or invasive; this raises questions about how much data is needed to make accurate clinical predictions using mathematical models, and when this data should be collected. In the age of personalized medicine, clinicians are turning to individualized treatment protocols, each tailored to the unique patient. Mathematical modeling can play a significant role here; given data from an individual tumor, we can calibrate a model and determine patient-specific parameter values which may give insight into the efficacy of the proposed treatment regimen for that individual. However, it is important that we bridge the gap between the idealized math modeling framework and the clinical constraints. 
While highly complex models can be insightful as far as determining the underlying mechanisms of the tumor and predicting how different cell populations might interact, at the clinical level, we are very constrained in the level of detail that might be inferred from the available data. The question then is: can an inherently simplistic model calibrated solely from a very small budget of crude data (i.e. estimated tumor volume from an MRI scan) still yield useful information regarding predicted response to treatment? Because data collection in a clinical oncology setting is both expensive and potentially invasive for the patient, clinicians are constrained to a very sparse budget of measurements. Practically speaking, a clinician might collect a tumor volume scan at diagnosis, a second one at the start of treatment, and then neglect to measure again until the treatment period has ended. With such sparse data, it can be difficult to construct a model with any sort of predictive power; the amount of uncertainty in such a model will be prohibitive. Thus, we wish to investigate how one might get the most ``bang for their buck" for a specified data collection budget. If we are restricted to $n$ data points, at what points should we collect them? What time periods during the treatment regimen are most informative, in terms of reducing the uncertainty of the model parameters? In <|cite_start|> (Reference: Bayesian Information-Theoretic Calibration of Radiotherapy Sensitivity Parameters for Informing Effective Scanning Protocols in Cancer: With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients’ parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of n high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.) <|cite_end|>, an algorithmic approach to determining an optimal selection of scans for model calibration was proposed by the authors. 
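The next paragraph details the cited framework; as a hypothetical skeleton of such a greedy scan-selection loop, the sketch below picks one measurement time at a time and re-calibrates after each acquisition. The names score, calibrate, and data_source are placeholders of ours and do not correspond to the cited study's actual implementation.

# Greedy sequential design skeleton (assumes budget <= len(candidate_days)).
def select_scan_times(candidate_days, budget, score, calibrate, data_source):
    chosen, observations, posterior = [], [], None  # posterior starts at the prior
    remaining = sorted(candidate_days)
    for _ in range(budget):
        # Take the remaining day with the highest score, e.g. the expected
        # reduction in parameter uncertainty given the current posterior.
        best_day = max(remaining, key=lambda day: score(day, posterior))
        remaining = [d for d in remaining if d > best_day]  # time only moves forward
        chosen.append(best_day)
        observations.append(data_source(best_day))    # acquire the scan
        posterior = calibrate(chosen, observations)   # re-fit the model parameters
    return chosen, posterior

Discarding candidate days earlier than the selected one mirrors the temporal constraint noted below: a skipped scan cannot be collected retroactively.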
This approach relied on a Bayesian information-theoretic sequential experimental design framework, in which each data point was chosen in turn by maximizing a given score function, whereupon the model parameters were re-calibrated to give an updated model trajectory. The score function utilized in <|cite_start|> (Reference: Bayesian Information-Theoretic Calibration of Radiotherapy Sensitivity Parameters for Informing Effective Scanning Protocols in Cancer: With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients’ parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of n high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.) <|cite_end|> was proposed as a means of adapting the pre-existing sequential design framework to handle time-series data, as opposed to other studies <|cite_start|> (Reference: An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes: ) <|cite_end|> <|cite_start|> (Reference: Bayesian experimental design for the active nitridation of graphite by atomic nitrogen: The problem of optimal data collection to efficiently learn the model parameters of a graphite nitridation experiment is studied in the context of Bayesian analysis using both synthetic and real experimental data. The paper emphasizes that the optimal design can be obtained as a result of an information theoretic sensitivity analysis. Thus, the preferred design is where the statistical dependence between the model parameters and observables is the highest possible. In this paper, the statistical dependence between random variables is quantified by mutual information and estimated using a k-nearest neighbor based approximation. It is shown, that by monitoring the inference process via measures such as entropy or Kullback-Leibler divergence, one can determine when to stop the data collection process. 
The methodology is applied to select the most informative designs on both a simulated data set and on an experimental data set, previously published in the literature. It is also shown that the sequential Bayesian analysis used in the experimental design can also be useful in detecting conflicting information between measurements and model predictions.) <|cite_end|> which dealt solely with non-temporal data (i.e. spatial design conditions). In addition to trying to maximize the reduction in parameter uncertainty through the choice of a highly informative data point, we also sought to penalize the algorithm for skipping too many data points, since the temporal data framework does not allow for those points to be subsequently collected at a later date. This penalization step, at the time, relied upon a penalization parameter $k$, which we varied over the interval $[0,1]$ in an attempt to optimize the efficiency and accuracy of the model calibration. The previous study tested this algorithm on three sets of synthetic data of varying radiation response types, and concluded that the optimal $k$ value varies depending on the strength of patient response to the radiotherapy treatment. For instance, in scenarios where the tumor was highly sensitive to radiation, the model calibration procedure benefited most from the use of a $k$ value near or at 1. Scenarios with data that was less responsive tended to favor $k$ values in the low-to-middle spectrum, i.e. $k=0$ to $k=0.3$. Although this framework was demonstrated to be effective in determining which scans to select for model calibration, the previous study did have several weaknesses. Most notably, the dependence of the choice of the parameter value $k$ upon the shape of the patient data was restrictive; an optimal $k$ value could not be determined until the general shape of the data could be assessed, which required at least several data points. In a highly restrictive scan budget scenario---i.e., in the clinical scenarios we are attempting to mimic---this means that an optimal $k$ value realistically cannot be determined in time to have a positive impact on the algorithm efficiency. Thus, finding a way to eliminate dependence upon the penalization parameter value is a focus of this work; in particular, we propose using information gathered about gradient approximations to adapt the weighting of the penalization term as the algorithm progresses, in place of using a static parameter $k$. Additionally, we analyze this new gradient-based score function using mean-square error, as was done in <|cite_start|> (Reference: Bayesian Information-Theoretic Calibration of Radiotherapy Sensitivity Parameters for Informing Effective Scanning Protocols in Cancer: With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients’ parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule.
As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of n high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.) <|cite_end|>, and we supplement this with uncertainty-based analysis, using credible intervals constructed by propagating parameter posterior distributions through the model to assess the level of certainty in the resulting model trajectory. The uncertainty analysis relies solely on the data that has been collected up to the current day, so it provides a more practical assessment of confidence in the model predictions for use in a clinical setting. We begin in Section \ref{sec:models} by describing the low-fidelity ordinary differential equation model that we'll use throughout the investigation to illustrate the algorithm. Additionally, we give a brief background about the source of the synthetic data used---obtained from a cellular automaton model---and describe how our virtual patient cohort was developed. Section \ref{sec:methodology} outlines the algorithm development, including the necessary background in Bayesian parameter estimation and sequential design, and the formulation of the new score function. Our metrics for model assessment are discussed in Section \ref{sec:assessment}. Section \ref{sec:results} first compares the results from the new score function to those obtained using the score function from the previous study <|cite_start|> (Reference: Bayesian Information-Theoretic Calibration of Radiotherapy Sensitivity Parameters for Informing Effective Scanning Protocols in Cancer: With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients’ parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. 
In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of n high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.) <|cite_end|>. We conclude in this section that the penalization parameter $k$ can now be discarded, and we present the remainder of the model calibration results for three spheroids of varying radiotherapy sensitivity. We conclude Section \ref{sec:results} with an analysis of how the model uncertainty is affected by measurement noise. Section \ref{sec:discussion} summarizes the findings of the investigation and discusses their implications. <|paper_end|>
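To illustrate the kind of uncertainty propagation described above (posterior parameter draws pushed through the calibrated model to form credible bands), here is a small sketch; the logistic growth law, the Gaussian pseudo-posterior draws, and all numbers are stand-in assumptions of ours rather than the study's low-fidelity model or fitted values.

# Push posterior parameter samples through a toy tumor-growth model and
# form pointwise 95% credible bands from the resulting trajectories.
import numpy as np

def logistic_growth(v0, r, K, t):
    return K / (1.0 + (K / v0 - 1.0) * np.exp(-r * t))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 121)               # days
r_draws = rng.normal(0.08, 0.01, size=500)    # pretend MCMC draws: growth rate
K_draws = rng.normal(2.0e3, 1.5e2, size=500)  # pretend MCMC draws: capacity (mm^3)

trajectories = np.stack([logistic_growth(50.0, r, K, t)
                         for r, K in zip(r_draws, K_draws)])
lo, hi = np.percentile(trajectories, [2.5, 97.5], axis=0)
print("95% credible band width at day 30:",
      float((hi - lo)[np.searchsorted(t, 30.0)]))

Bands that shrink as scans accumulate are the kind of certainty signal the paper uses for deciding when the selection algorithm can be terminated.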
[ "<|reference_start|> The mathematics of cancer: integrating quantitative models: <|reference_end|>", "<|reference_start|> The dynamics of drug resistance: a mathematical perspective.: <|reference_end|>", "<|reference_start|> Mathematical modeling as a tool for planning anticancer therapy.: <|reference_end|>", "<|reference_start|> Bayesian Information-Theoretic Calibration of Radiotherapy Sensitivity Parameters for Informing Effective Scanning Protocols in Cancer: With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients’ parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of n high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget. <|reference_end|>" ]
[ 0, 1, 2, 12 ]
{"<|multi_cite_1_1|>": "ss-2155032", "<|multi_cite_1_2|>": "ss-2155033", "<|multi_cite_1_3|>": "ss-2155034", "<|multi_cite_1_4|>": "ss-684627", "<|multi_cite_1_5|>": "ss-1900521", "<|multi_cite_2_1|>": "ss-2155035", "<|multi_cite_2_2|>": "ss-2155036", "<|cite_3|>": "ss-2178872", "<|cite_4|>": "ss-2178872", "<|multi_cite_5_1|>": "ss-2155048", "<|multi_cite_5_2|>": "arxiv-22826", "<|cite_6|>": "ss-2178872", "<|cite_7|>": "ss-2178872"}
2303.11923
<|paper_start|> Title: Performance-aware Approximation of Global Channel Pruning for Multitask CNNs Abstract: Performance-aware Approximation of Global Channel Pruning for Multitask CNNs: Global channel pruning (GCP) aims to remove a subset of channels (filters) across different layers from a deep model without hurting the performance. Previous works focus on either single task model pruning or simply adapting it to the multitask scenario, and still face the following problems when handling multitask pruning: 1) Due to the task mismatch, a well-pruned backbone for a classification task focuses on preserving filters that can extract category-sensitive information, causing filters that may be useful for other tasks to be pruned during the backbone pruning stage; 2) For multitask predictions, different filters within or between layers are more closely related and more strongly interacting than those for single-task prediction, making multitask pruning more difficult. Therefore, aiming at multitask model compression, we propose a Performance-Aware Global Channel Pruning (PAGCP) framework. We first theoretically present the objective for achieving superior GCP, by considering the joint saliency of filters from intra- and inter-layers. Then a sequentially greedy pruning strategy is proposed to optimize the objective, where a performance-aware oracle criterion is developed to evaluate the sensitivity of filters to each task and preserve the globally most task-related filters. Experiments on several multitask datasets show that the proposed PAGCP can reduce the FLOPs and parameters by over 60% with minor performance drop, and achieves 1.2x$\sim$3.3x acceleration on both cloud and mobile platforms. Introduction \label{sec:introduction} Related Work \label{related_works} Our work is closely related to multitask models and channel pruning. We review the related works in the two fields and compare our method with existing methods below. \subsection{Multitask Models} Multitask models here refer to CNN-based models with multiple task-specific heads and optimization objectives, such as typical object detectors <|cite_start|> (Reference: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224x224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features.
In processing test images, our method is 24-102x faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.) <|cite_end|> <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available) <|cite_end|> <|cite_start|> (Reference: SSD: Single Shot MultiBox Detector: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For $300\times 300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a comparable state of the art Faster R-CNN model. Code is available at https://github.com/weiliu89/caffe/tree/ssd .) 
<|cite_end|> <|cite_start|> (Reference: Single-Shot Refinement Neural Network for Object Detection: For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multi-task loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https://github.com/sfzhang15/RefineDet) <|cite_end|> <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|> which include classification and regression heads. Besides, conventional multitask models <|cite_start|> (Reference: Cross-stitch Networks for Multi-task Learning: Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: "cross-stitch" unit. 
These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.) <|cite_end|> <|cite_start|> (Reference: PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing: Depth estimation and scene parsing are two particularly important tasks in visual scene understanding. In this paper we tackle the problem of simultaneous depth estimation and scene parsing in a joint CNN. The task can be typically treated as a deep multi-task learning problem [42]. Different from previous methods directly optimizing multiple tasks given the input training data, this paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level, and then the predictions from these intermediate auxiliary tasks are utilized as multi-modal input via our proposed multi-modal distillation modules for the final tasks. During the joint learning, the intermediate tasks not only act as supervision for learning more robust deep representations but also provide rich multi-modal information for improving the final tasks. Extensive experiments are conducted on two challenging datasets (i.e. NYUD-v2 and Cityscapes) for both the depth estimation and scene parsing tasks, demonstrating the effectiveness of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction: In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This is in contrast with the most widely used MTL CNN structures which empirically or heuristically share features on some specific layers (e.g., share all the features except the last convolutional layer). The proposed layerwise feature fusing scheme is formulated by combining existing CNN components in a novel way, with clear mathematical interpretability as discriminative dimensionality reduction, which is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks according to their channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1x1 Convolution, Batch Normalization, and Weight Decay in one CNN. The use of existing CNN components ensures the end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a "plug-and-play" manner. The detailed ablation analysis shows that the proposed NDDR layer is easy to train and also robust to different hyperparameters. Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method. The code of our paper is available at https://github.com/ethanygao/NDDR-CNN.) 
<|cite_end|> <|cite_start|> (Reference: End-to-End Multi-Task Learning with Attention: We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.) <|cite_end|> <|cite_start|> (Reference: MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning: In this paper, we argue about the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup. In contrast to common belief, we show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales, and vice versa. We propose a novel architecture, namely MTI-Net, that builds upon this finding in three ways. First, it explicitly models task interactions at every scale via a multi-scale multi-modal distillation unit. Second, it propagates distilled task information from lower to higher scales via a feature propagation module. Third, it aggregates the refined task features from all scales via a feature aggregation unit to produce the final per-task predictions. Extensive experiments on two multi-task dense labeling datasets show that, unlike prior work, our multi-task model delivers on the full potential of multi-task learning, that is, smaller memory footprint, reduced number of calculations, and better performance w.r.t. single-task learning. The code is made publicly available: https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.) <|cite_end|>, which include multiple dense prediction tasks such as semantic segmentation, depth estimation, and surface normal estimation, have long been studied and have become the mainstream of MTL research. In the following, we review object detection models and conventional multitask models, which are the two main pruning targets of this work. \subsubsection{Object Detectors} CNN-based object detection can be roughly categorized into two classes: two-stage detectors <|cite_start|> (Reference: Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition: Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224x224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations.
With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102x faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.) <|cite_end|> <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available) <|cite_end|> and single-stage detectors <|cite_start|> (Reference: SSD: Single Shot MultiBox Detector: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. 
This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For $300\times 300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a comparable state of the art Faster R-CNN model. Code is available at https://github.com/weiliu89/caffe/tree/ssd .) <|cite_end|> <|cite_start|> (Reference: Single-Shot Refinement Neural Network for Object Detection: For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multi-task loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https://github.com/sfzhang15/RefineDet) <|cite_end|> <|cite_start|> (Reference: You Only Look Once: Unified, Real-Time Object Detection: We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. 
It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.) <|cite_end|>. Typically, two-stage detectors divide the detection process into an initial proposal generation stage and a subsequent proposal refinement stage, where Region-of-Interest (RoI) pooling plays a crucial role by mapping proposals of different sizes to fixed-size features for the refinement. In contrast, single-stage object detectors treat detection as a dense prediction task and detect multiple objects without relying on RoI pooling. Two-stage detectors often involve more tasks than single-stage ones, since they need to generate coarse locations and objectness scores of proposals in the first stage, followed by object classification and localization in the second stage. These object detection models are hard to deploy on resource-constrained devices, mainly due to their huge computational cost and storage overhead. Thus, it is necessary to compress detectors and accelerate detection for resource-constrained mobile applications. \subsubsection{Conventional Multitask Models} Conventional multitask models <|cite_start|> (Reference: Cross-stitch Networks for Multi-task Learning: Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: "cross-stitch" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.) <|cite_end|> <|cite_start|> (Reference: PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing: Depth estimation and scene parsing are two particularly important tasks in visual scene understanding. In this paper we tackle the problem of simultaneous depth estimation and scene parsing in a joint CNN. The task can be typically treated as a deep multi-task learning problem [42]. Different from previous methods directly optimizing multiple tasks given the input training data, this paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level, and then the predictions from these intermediate auxiliary tasks are utilized as multi-modal input via our proposed multi-modal distillation modules for the final tasks. During the joint learning, the intermediate tasks not only act as supervision for learning more robust deep representations but also provide rich multi-modal information for improving the final tasks. Extensive experiments are conducted on two challenging datasets (i.e. NYUD-v2 and Cityscapes) for both the depth estimation and scene parsing tasks, demonstrating the effectiveness of the proposed approach.)
<|cite_end|> <|cite_start|> (Reference: NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction: In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This is in contrast with the most widely used MTL CNN structures which empirically or heuristically share features on some specific layers (e.g., share all the features except the last convolutional layer). The proposed layerwise feature fusing scheme is formulated by combining existing CNN components in a novel way, with clear mathematical interpretability as discriminative dimensionality reduction, which is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks according to their channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1x1 Convolution, Batch Normalization, and Weight Decay in one CNN. The use of existing CNN components ensures the end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a "plug-and-play" manner. The detailed ablation analysis shows that the proposed NDDR layer is easy to train and also robust to different hyperparameters. Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method. The code of our paper is available at https://github.com/ethanygao/NDDR-CNN.) <|cite_end|> <|cite_start|> (Reference: End-to-End Multi-Task Learning with Attention: We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.) <|cite_end|> <|cite_start|> (Reference: MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning: In this paper, we argue about the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup. In contrast to common belief, we show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales, and vice versa. We propose a novel architecture, namely MTI-Net, that builds upon this finding in three ways. First, it explicitly models task interactions at every scale via a multi-scale multi-modal distillation unit. Second, it propagates distilled task information from lower to higher scales via a feature propagation module. 
Third, it aggregates the refined task features from all scales via a feature aggregation unit to produce the final per-task predictions. Extensive experiments on two multi-task dense labeling datasets show that, unlike prior work, our multi-task model delivers on the full potential of multi-task learning, that is, smaller memory footprint, reduced number of calculations, and better performance w.r.t. single-task learning. The code is made publicly available: https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.) <|cite_end|> perform multiple dense prediction tasks, such as semantic segmentation, depth estimation, and surface normal estimation, simultaneously. These MTL models require more interaction among tasks at different stages to learn complementary features across tasks. CNN-based multitask models can be divided into encoder-focused and decoder-focused models. Commonly, both types adopt a one-encoder-multiple-decoder structure. The former conducts the interaction in the encoding stage by sharing parameters <|cite_start|> (Reference: Cross-stitch Networks for Multi-task Learning: Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: "cross-stitch" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.) <|cite_end|> <|cite_start|> (Reference: NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction: In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This is in contrast with the most widely used MTL CNN structures which empirically or heuristically share features on some specific layers (e.g., share all the features except the last convolutional layer). The proposed layerwise feature fusing scheme is formulated by combining existing CNN components in a novel way, with clear mathematical interpretability as discriminative dimensionality reduction, which is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks according to their channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1x1 Convolution, Batch Normalization, and Weight Decay in one CNN. The use of existing CNN components ensures the end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a "plug-and-play" manner. The detailed ablation analysis shows that the proposed NDDR layer is easy to train and also robust to different hyperparameters.
Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method. The code of our paper is available at https://github.com/ethanygao/NDDR-CNN.) <|cite_end|>, while the latter does this in the decoding stage <|cite_start|> (Reference: PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing: Depth estimation and scene parsing are two particularly important tasks in visual scene understanding. In this paper we tackle the problem of simultaneous depth estimation and scene parsing in a joint CNN. The task can be typically treated as a deep multi-task learning problem [42]. Different from previous methods directly optimizing multiple tasks given the input training data, this paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level, and then the predictions from these intermediate auxiliary tasks are utilized as multi-modal input via our proposed multi-modal distillation modules for the final tasks. During the joint learning, the intermediate tasks not only act as supervision for learning more robust deep representations but also provide rich multi-modal information for improving the final tasks. Extensive experiments are conducted on two challenging datasets (i.e. NYUD-v2 and Cityscapes) for both the depth estimation and scene parsing tasks, demonstrating the effectiveness of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: End-to-End Multi-Task Learning with Attention: We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.) <|cite_end|> <|cite_start|> (Reference: MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning: In this paper, we argue about the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup. In contrast to common belief, we show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales, and vice versa. We propose a novel architecture, namely MTI-Net, that builds upon this finding in three ways. First, it explicitly models task interactions at every scale via a multi-scale multi-modal distillation unit. Second, it propagates distilled task information from lower to higher scales via a feature propagation module. 
Third, it aggregates the refined task features from all scales via a feature aggregation unit to produce the final per-task predictions. Extensive experiments on two multi-task dense labeling datasets show that, unlike prior work, our multi-task model delivers on the full potential of multi-task learning, that is, smaller memory footprint, reduced number of calculations, and better performance w.r.t. single-task learning. The code is made publicly available: https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.) <|cite_end|>, which can thus perceive more discriminative information from the ground-truth labels. Although multitask models save weight parameters by sharing the encoder, they still suffer from heavy computation and memory costs in the interaction stage, where cross-task features are propagated through many attention operators <|cite_start|> (Reference: End-to-End Multi-Task Learning with Attention: We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.) <|cite_end|> <|cite_start|> (Reference: MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning: In this paper, we argue about the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup. In contrast to common belief, we show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales, and vice versa. We propose a novel architecture, namely MTI-Net, that builds upon this finding in three ways. First, it explicitly models task interactions at every scale via a multi-scale multi-modal distillation unit. Second, it propagates distilled task information from lower to higher scales via a feature propagation module. Third, it aggregates the refined task features from all scales via a feature aggregation unit to produce the final per-task predictions. Extensive experiments on two multi-task dense labeling datasets show that, unlike prior work, our multi-task model delivers on the full potential of multi-task learning, that is, smaller memory footprint, reduced number of calculations, and better performance w.r.t. single-task learning. The code is made publicly available: https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.) <|cite_end|>. Thus, they still need compression for resource-constrained mobile applications. In addition, since filters in MTL models present more interactive correlations to serve multiple tasks together, pruning MTL models is more difficult than pruning single-task models.
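To make the one-encoder-multiple-decoder structure concrete, the following is a minimal PyTorch-style sketch; the layer sizes and head names are our own illustrative assumptions rather than the architecture of any cited model. It highlights why multitask pruning is a joint decision: every channel of the shared encoder feeds all task heads at once, so removing it perturbs every task simultaneously.
\begin{verbatim}
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Minimal one-encoder-multiple-decoder (hard parameter sharing) model."""
    def __init__(self, num_classes=21):
        super().__init__()
        # Shared encoder: each filter here serves all tasks at once, so
        # pruning one channel changes the input of every task head.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
        )
        # Task-specific decoders (heads).
        self.seg_head = nn.Conv2d(128, num_classes, 1)  # semantic segmentation
        self.depth_head = nn.Conv2d(128, 1, 1)          # depth estimation

    def forward(self, x):
        feat = self.encoder(x)
        return {"segmentation": self.seg_head(feat),
                "depth": self.depth_head(feat)}

# A forward pass on a dummy batch.
out = MultiTaskNet()(torch.randn(2, 3, 64, 64))
print(out["segmentation"].shape, out["depth"].shape)
\end{verbatim}
In contrast, a channel inside \texttt{seg\_head} or \texttt{depth\_head} affects only its own task, which is why saliency for the shared layers has to account for all tasks jointly.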
\subsection{Channel Pruning} Existing channel pruning approaches can be roughly categorized into two classes: saliency-based and sparsity-based. Saliency-based approaches hypothesize that the weight values indicate their importance to the final prediction. A common heuristic is that weights with smaller magnitudes contribute little to the output, and are thus less informative to the model. The $\ell_1$ norm of either filters <|cite_start|> (Reference: Pruning Filters for Efficient ConvNets: The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.) <|cite_end|> or activation maps <|cite_start|> (Reference: Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures: State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.) <|cite_end|> is consequently computed as the filter saliency for model compression. Liu \textit{et al.} <|cite_start|> (Reference: Learning Efficient Convolutional Networks through Network Slimming: The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost.
In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.) <|cite_end|> hold that the saliency of filters can be represented by the scaling factor of the Batch Normalization layer. In contrast, some works study filter saliency from an oracle view, based on each filter's direct contribution to the final loss. Molchanov \textit{et al.} <|cite_start|> (Reference: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): ) <|cite_end|> approximate the importance of filters by ranking first-order Taylor coefficients. Considering that the global loss computation is time-consuming, a few works use the local reconstruction loss as the saliency criterion. In <|cite_start|> (Reference: ThiNet: Pruning CNN Filters for a Thinner Net: This paper aims at accelerating and compressing deep neural networks to deploy CNN models into small devices like mobile phones or embedded gadgets. We focus on filter level pruning, i.e., the whole filter will be discarded if it is less important. An effective and unified framework, ThiNet (stands for “Thin Net”), is proposed in this paper. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. We also propose “gcos” (Group COnvolution with Shuffling), a more accurate group convolution scheme, to further reduce the pruned model size. Experimental results demonstrate the effectiveness of our method, which has advanced the state-of-the-art. Moreover, we show that the original VGG-16 model can be compressed into a very small model (ThiNet-Tiny) with only 2.66 MB model size, but still preserve AlexNet level accuracy. This small model is evaluated on several benchmarks with different vision tasks (e.g., classification, detection, segmentation), and shows excellent generalization ability.) <|cite_end|>, filters with little impact on the output feature maps are considered less important. Apart from these, there are other criteria to rank the filter importance <|cite_start|> (Reference: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): ) <|cite_end|> <|cite_start|> (Reference: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020: ) <|cite_end|>.
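As a concrete illustration of the saliency criteria discussed above, below is a minimal PyTorch sketch assuming standard \texttt{nn.Conv2d} and \texttt{nn.BatchNorm2d} layers; the function names are ours, and the Taylor variant shown is one common form of the first-order criterion rather than the exact formulation of the cited works.
\begin{verbatim}
import torch
import torch.nn as nn

def l1_filter_saliency(conv: nn.Conv2d) -> torch.Tensor:
    # One score per output channel: l1 norm of each filter's weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def bn_scale_saliency(bn: nn.BatchNorm2d) -> torch.Tensor:
    # Network-slimming-style saliency: magnitude of the BN scaling factor.
    return bn.weight.detach().abs()

def taylor_saliency(conv: nn.Conv2d) -> torch.Tensor:
    # First-order Taylor estimate of the loss change when a filter is
    # removed: |w * dL/dw| accumulated per output channel. Requires that
    # loss.backward() was called so conv.weight.grad is populated.
    return (conv.weight * conv.weight.grad).detach().abs().sum(dim=(1, 2, 3))

# Example: indices of the 16 least salient channels of a layer.
conv = nn.Conv2d(64, 128, 3, padding=1)
scores = l1_filter_saliency(conv)
prune_idx = torch.argsort(scores)[:16]
\end{verbatim}
In practice, such per-channel scores are thresholded per layer or globally to decide which channels to remove, typically followed by fine-tuning.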
Sparsity-based approaches aim at exploring the pruning ratio of each layer in a global or local manner. In the global manner, all layers in the model are preset with the same compression ratio before pruning. Howard \textit{et al.} <|cite_start|> (Reference: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications: We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.) <|cite_end|> apply the same pruning ratio to all layers, while Tan \textit{et al.} <|cite_start|> (Reference: Effects of boundary conditions on Min-Protein Oscillation in \emph {E. coli} using mesoscopic lattice Boltzmann method: Summary The Min-proteins oscillation in E. coli has an essential role in controlling the accuracy placement of cell-division septum at the middle cell zone of the bacteria. This biochemical process has been successfully described by a set of reactiondiffusion equation at the macroscopic level [1]. Recently, a mesoscopic modeling by the lattice Boltzmann method (LBM) has been proposed to simulate the Minproteins oscillation [2]. However, as pointed out by Zhang et al., the standard boundary conditions are not accuracy for a class of dispersion transport modeled by LBM [3]. In this present work, we investigated the boundary effects in LBM on the Min-proteins oscillation. It was found that the mirror-image method is a suitableboundary treatment for this problem. Physical significance of the results is extensively discussed.) <|cite_end|> further scale the depth and resolution together with the channels. Such pruning schemes are coarse, since they ignore the individual sensitivity of each layer, which is decisive for the pruning performance. In contrast, the local manner personalizes the pruning ratio of each layer following two general practices. One is sensitivity analysis based on saliency criteria <|cite_start|> (Reference: Pruning Filters for Efficient ConvNets: The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.) <|cite_end|> <|cite_start|> (Reference: Rethinking the Value of Network Pruning: Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.) <|cite_end|>. The other is the global automatic search for the pruning ratio of each layer, such as <|cite_start|> (Reference: Learning Efficient Convolutional Networks through Network Slimming: The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy.
We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.) <|cite_end|> <|cite_start|> (Reference: MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning: In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning at search time. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. Compared to the state-of-the-art pruning methods, we have demonstrated superior performances on MobileNet V1/V2 and ResNet. Codes are available on https://github.com/liuzechun/MetaPruning.) <|cite_end|> <|cite_start|> (Reference: Anonymous Model Pruning for Compressing Deep Neural Networks: Many deep neural network compression algorithms need to fine-tune on source dataset, which makes them unpractical when the source datasets are unavailable. Although data-free methods can overcome this problem, they often suffer from a huge loss of accuracy. In this paper, we propose a novel approach named Anonymous-Model Pruning (AMP), which seeks to compress the network without the source data and the accuracy can be guaranteed without too much loss. AMP compresses deep neural networks via searching pruning rate automatically and fine-tuning the compressed model under the teacher-student diagram. The key innovations are that the pruning rate is automatically determined, and the finetuning process is under the guidance of uncompressed network instead of labels. Even without the source dataset, compared with existing pruning methods, our proposed method can still achieve comparable accuracy with similar pruning rate. For example, for ResNet50, our AMP method only incur 0.76% loss in top-1 accuracy with 32.72% pruning rate.) <|cite_end|> <|cite_start|> (Reference: Efficient Joint-Dimensional Search with Solution Space Regularization for Real-Time Semantic Segmentation: Semantic segmentation is a popular research topic in computer vision, and many efforts have been made on it with impressive results. In this paper, we intend to search an optimal network structure that can run in real-time for this problem. Towards this goal, we jointly search the depth, channel, dilation rate and feature spatial resolution, which results in a search space consisting of about 2.78*10^324 possible choices. To handle such a large search space, we leverage differential architecture search methods. However, the architecture parameters searched using existing differential methods need to be discretized, which causes the discretization gap between the architecture parameters found by the differential methods and their discretized version as the final solution for the architecture search. 
Hence, we relieve the problem of discretization gap from the innovative perspective of solution space regularization. Specifically, a novel Solution Space Regularization (SSR) loss is first proposed to effectively encourage the supernet to converge to its discrete one. Then, a new Hierarchical and Progressive Solution Space Shrinking method is presented to further achieve high efficiency of searching. In addition, we theoretically show that the optimization of SSR loss is equivalent to the L_0-norm regularization, which accounts for the improved search-evaluation gap. Comprehensive experiments show that the proposed search scheme can efficiently find an optimal network structure that yields an extremely fast speed (175 FPS) of segmentation with a small model size (1 M) while maintaining comparable accuracy.) <|cite_end|> <|cite_start|> (Reference: $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search: Neural Architecture Search~(NAS) has attracted increasingly more attention in recent years because of its capability to design deep neural networks automatically. Among them, differential NAS approaches such as DARTS, have gained popularity for the search efficiency. However, they suffer from two main issues, the weak robustness to the performance collapse and the poor generalization ability of the searched architectures. To solve these two problems, a simple-but-efficient regularization method, termed as Beta-Decay, is proposed to regularize the DARTS-based NAS searching process. Specifically, Beta-Decay regularization can impose constraints to keep the value and variance of activated architecture parameters from too large. Furthermore, we provide in-depth theoretical analysis on how it works and why it works. Experimental results on NAS-Bench-201 show that our proposed method can help to stabilize the searching process and makes the searched network more transferable across different datasets. In addition, our search scheme shows an outstanding property of being less dependent on training time and data. Comprehensive experiments on a variety of search spaces and datasets validate the effectiveness of the proposed method.) <|cite_end|> <|cite_start|> (Reference: Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization: Neural architecture search (NAS) and network pruning are widely studied efficient AI techniques, but not yet perfect. NAS performs exhaustive candidate architecture search, incurring tremendous search cost. Though (structured) pruning can simply shrink model dimension, it remains unclear how to decide the per-layer sparsity automatically and optimally. In this work, we revisit the problem of layer-width optimization and propose Pruning-as-Search (PaS), an end-to-end channel pruning method to search out desired sub-network automatically and efficiently. Specifically, we add a depth-wise binary convolution to learn pruning policies directly through gradient descent. By combining the structural reparameterization and PaS, we successfully searched out a new family of VGG-like and lightweight networks, which enable the flexibility of arbitrary width with respect to each layer instead of each stage. Experimental results show that our proposed architecture outperforms prior arts by around $1.0\%$ top-1 accuracy under similar inference speed on ImageNet-1000 classification task. 
Furthermore, we demonstrate the effectiveness of our width search on complex tasks including instance segmentation and image translation. Code and models are released.) <|cite_end|> <|cite_start|> (Reference: Towards Accurate and Compact Architectures via Neural Architecture Transformer: Designing effective architectures is one of the key factors behind the success of deep neural networks. Existing deep architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-designed/searched architecture may still contain many nonsignificant or redundant modules/operations. Thus, it is necessary to optimize the operations inside an architecture to improve the performance without introducing extra computational cost. To this end, we have proposed a Neural Architecture Transformer (NAT) method which casts the optimization problem into a Markov Decision Process (MDP) and seeks to replace the redundant operations with more efficient operations, such as skip or null connection. Note that NAT only considers a small number of possible transitions and thus comes with a limited search/transition space. As a result, such a small search space may hamper the performance of architecture optimization. To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization. Specifically, we present a two-level transition rule to obtain valid transitions, i.e., allowing operations to have more efficient types (e.g., convolution->separable convolution) or smaller kernel sizes (e.g., 5x5->3x3). Note that different operations may have different valid transitions. We further propose a Binary-Masked Softmax (BMSoftmax) layer to omit the possible invalid transitions. Extensive experiments on several benchmark datasets show that the transformed architecture significantly outperforms both its original counterpart and the architectures optimized by existing methods.) <|cite_end|>. Although automatic searching avoids many hand-crafted choices, it also involves more optimization operations, resulting in time-consuming channel pruning. Thus far, both paradigms have made much progress but still suffer from two dilemmas: they neglect the joint impact of simultaneously pruning multiple filters on the compression performance, and they cannot perceive the performance drop during the pruning process. The joint impact of multiple filters remains under-explored in current works, yet it should be emphasized when compressing multitask models due to the complex interaction among different filters. The performance drop, as another key issue, is directly related to the prunability <|cite_start|> (Reference: ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting: We propose ResRep, a novel method for lossless channel pruning (a.k.a. filter pruning), which slims down a CNN by reducing the width (number of output channels) of convolutional layers. Inspired by the neurobiology research about the independence of remembering and forgetting, we propose to re-parameterize a CNN into the remembering parts and forgetting parts, where the former learn to maintain the performance and the latter learn to prune. Via training with regular SGD on the former but a novel update rule with penalty gradients on the latter, we realize structured sparsity.
Then we equivalently merge the remembering and forgetting parts into the original architecture with narrower layers. In this sense, ResRep can be viewed as a successful application of Structural Re-parameterization. Such a methodology distinguishes ResRep from the traditional learning-based pruning paradigm that applies a penalty on parameters to produce sparsity, which may suppress the parameters essential for the remembering. ResRep slims down a standard ResNet-50 with 76.15% accuracy on ImageNet to a narrower one with only 45% FLOPs and no accuracy drop, which is the first to achieve lossless pruning with such a high compression ratio. The code and models are at https://github.com/DingXiaoH/ResRep.) <|cite_end|>, \textit{a.k.a.} the pruning ratio of a model. Most current works empirically set a small pruning threshold for each layer; this does not fully consider different layers' pruning potentials or different filters' co-influence across the multiple tasks, and is thus insufficient for controlling the performance drop at an optimal level. \begin{figure*}[t] \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \scalebox{0.27}{ \includegraphics{Fig/Fig2-v5.pdf}} \caption{Overview of the proposed Performance-Aware Global Channel Pruning (PAGCP) framework. Given an original well-trained multitask model, we sort all target layers in a new sequence based on each layer's contribution to total FLOPs reduction (computed by subtracting the FLOPs of the pruned model from the FLOPs of the original model), and compress each layer in that sequence. Specifically, the original model produces an initial loss $\boldsymbol{\mathcal{L}}^{(0)}$ for the estimation of the pruning ratio in $l_F^1$. Then at each step $i$, we select the task with the largest performance drop when masking $\gamma$ filters in $l_F^i$ as the most sensitive task for the target filters to be pruned. In the compressor, based on the selected task and the saliency criterion, we maximize the compression ratio of $l_F^i$ under local constraints, with $\boldsymbol{\mathcal{L}}^{(i-1)}$ generated from step $i-1$ as a reference. The compressor outputs a list of the pruned filters and updates $\boldsymbol{\mathcal{L}}^{(i)}$ for step $i+1$. After all layers are compressed, we reorder the pruned layers by evaluating their compression contributions again, and compress the top $P$ layers with the highest pruning ratios. Finally, we retrain the pruned model and repeat the above procedure until the reduction requirement of FLOPs or parameters is satisfied.} \label{fig2} \end{figure*} \subsection{Channel Pruning for Multitask Models} Typical channel pruning methods for multitask models mainly follow two practices: fine-tuning the pre-pruned classification backbone on multitask benchmarks (fine-tuning based pruning), and adapting the classification-based pruning strategy to multitask model pruning (adapting based pruning). Liu \textit{et al.} <|cite_start|> (Reference: Rethinking the Value of Network Pruning: Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs.
For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.) <|cite_end|> validate the filter mismatch problem in the fine-tuning based pruning methods, and argue that the pruned model of fine-tuning based methods can perform well by training from scratch. ThiNet <|cite_start|> (Reference: ThiNet: Pruning CNN Filters for a Thinner Net: This paper aims at accelerating and compressing deep neural networks to deploy CNN models into small devices like mobile phones or embedded gadgets. We focus on filter level pruning, i.e., the whole filter will be discarded if it is less important. An effective and unified framework, ThiNet (stands for “Thin Net”), is proposed in this paper. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. We also propose “gcos” (Group COnvolution with Shuffling), a more accurate group convolution scheme, to further reduce the pruned model size. Experimental results demonstrate the effectiveness of our method, which has advanced the state-of-the-art. Moreover, we show that the original VGG-16 model can be compressed into a very small model (ThiNet-Tiny) with only 2.66 MB model size, but still preserve AlexNet level accuracy. This small model is evaluated on several benchmarks with different vision tasks (e.g., classification, detection, segmentation), and shows excellent generalization ability.) <|cite_end|> transfers the greedy channel pruning with the local reconstruction regularization to the detector compression. MLP <|cite_start|> (Reference: Multi-layer Pruning Framework for Compressing Single Shot MultiBox Detector: We propose a framework for compressing state-of-the-art Single Shot MultiBox Detector (SSD). The framework addresses compression in the following stages: Sparsity Induction, Filter Selection, and Filter Pruning. In the Sparsity Induction stage, the object detector model is sparsified via an improved global threshold. In Filter Selection & Pruning stage, we select and remove filters using sparsity statistics of filter weights in two consecutive convolutional layers. This results in the model with the size smaller than most existing compact architectures. 
We evaluate the performance of our framework with multiple datasets and compare over multiple methods. Experimental results show that our method achieves state-of-the-art compression of 6.7X and 4.9X on PASCAL VOC dataset on models SSD300 and SSD512 respectively. We further show that the method produces maximum compression of 26X with SSD512 on German Traffic Sign Detection Benchmark (GTSDB). Additionally, we also empirically show our method's adaptability for classification based architecture VGG16 on datasets CIFAR and German Traffic Sign Recognition Benchmark (GTSRB) achieving a compression rate of 125X and 200X with the reduction in flops by 90.50% and 96.6% respectively with no loss of accuracy. In addition to this, our method does not require any special libraries or hardware support for the resulting compressed models.) <|cite_end|> adapts the group pruning methods in <|cite_start|> (Reference: Pruning Filters for Efficient ConvNets: The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.) <|cite_end|> to compress the SSD models by $\ell_1$ norm. A localization-aware auxiliary network is designed in <|cite_start|> (Reference: Localization-aware Channel Pruning for Object Detection: Channel pruning is one of the important methods for deep model compression. Most of existing pruning methods mainly focus on classification. Few of them conduct systematic research on object detection. However, object detection is different from classification, which requires not only semantic information but also localization information. In this paper, based on discrimination-aware channel pruning (DCP) which is state-of-the-art pruning method for classification, we propose a localization-aware auxiliary network to find out the channels with key information for classification and regression so that we can conduct channel pruning directly for object detection, which saves lots of time and computing resources. In order to capture the localization information, we first design the auxiliary network with a contextual ROIAlign layer which can obtain precise localization information of the default boxes by pixel alignment and enlarges the receptive fields of the default boxes when pruning shallow layers. 
Then, we construct a loss function for object detection task which tends to keep the channels that contain the key information for classification and regression. Extensive experiments demonstrate the effectiveness of our method. On MS COCO, we prune 70\% parameters of the SSD based on ResNet-50 with modest accuracy drop, which outperforms the-state-of-art method.) <|cite_end|> to find out important channels of detectors, which is adapted from the discrimination-aware channel pruning for classification <|cite_start|> (Reference: Discrimination-aware Channel Pruning for Deep Neural Networks: Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either train from scratch with sparsity constraints on channels, or minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies suffer from some limitations: the former kind is computationally expensive and difficult to converge, whilst the latter kind optimizes the reconstruction error but ignores the discriminative power of channels. To overcome these drawbacks, we investigate a simple-yet-effective method, called discrimination-aware channel pruning, to choose those channels that really contribute to discriminative power. To this end, we introduce additional losses into the network to increase the discriminative power of intermediate layers and then select the most discriminative channels for each layer by considering the additional loss and the reconstruction error. Last, we propose a greedy algorithm to conduct channel selection and parameter optimization in an iterative way. Extensive experiments demonstrate the effectiveness of our method. For example, on ILSVRC-12, our pruned ResNet-50 with 30% reduction of channels even outperforms the original model by 0.39% in top-1 accuracy.) <|cite_end|>. PAM <|cite_start|> (Reference: Pruning-Aware Merging for Efficient Multitask Inference: Many mobile applications demand selective execution of multiple correlated deep learning inference tasks on resource-constrained platforms. Given a set of deep neural networks, each pre-trained for a single task, it is desired that executing arbitrary combinations of tasks yields minimal computation cost. Pruning each network separately yields suboptimal computation cost due to task relatedness. A promising remedy is to merge the networks into a multitask network to eliminate redundancy across tasks before network pruning. However, pruning a multitask network combined by existing network merging schemes cannot minimise the computation cost of every task combination because they do not consider such a future pruning. To this end, we theoretically identify the conditions such that pruning a multitask network minimises the computation of all task combinations. On this basis, we propose Pruning-Aware Merging (PAM), a heuristic network merging scheme to construct a multitask network that approximates these conditions. The merged network is then ready to be further pruned by existing network pruning methods. Evaluations with different pruning schemes, datasets, and network architectures show that PAM achieves up to 4.87x less computation against the baseline without network merging, and up to 2.01x less computation against the baseline with a state-of-the-art network merging scheme.) 
<|cite_end|> merges multiple networks into a multitask network to eliminate redundancy across tasks before network pruning, and its pruning strategy is inspired by an existing classification-based pruning method. These methods share a common drawback: the co-importance of filters from different layers or groups is neglected. This deserves more attention in the multitask setting, since filters are more tightly coupled and interact more strongly to represent multitask features than those in single-task models. <|paper_end|>
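Editorial aside on the row above: the figure caption describes the PAGCP loop only in prose, so a compact sketch may help. The following Python fragment is a minimal illustration of that style of greedy, performance-aware channel pruning, not the authors' released code. The callback eval_worst_task_drop (returning the largest per-task loss increase when a candidate filter set is masked) and the group size gamma are hypothetical stand-ins, and the $\ell_1$ filter saliency follows the criterion mentioned in the related-work discussion rather than the paper's own criterion.

import torch
import torch.nn as nn

def l1_saliency(conv: nn.Conv2d) -> torch.Tensor:
    # Per-filter L1 norm over (in_channels, kH, kW); small norm = low saliency.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_layer(conv, eval_worst_task_drop, max_drop=0.01, gamma=4):
    # Mask filters in ascending saliency, gamma at a time, while the most
    # sensitive task's performance drop stays within the local constraint.
    order = torch.argsort(l1_saliency(conv)).tolist()
    pruned = []
    for i in range(0, len(order), gamma):
        candidate = pruned + order[i:i + gamma]
        if eval_worst_task_drop(candidate) > max_drop:
            break                      # local constraint violated: stop here
        pruned = candidate             # accept this gamma-sized group
    return pruned

def global_pass(convs, flops_gain, eval_worst_task_drop, max_drop=0.01):
    # Visit layers in descending order of potential FLOPs reduction and
    # compress each one under the per-step performance constraint.
    masks = {}
    for name in sorted(convs, key=flops_gain, reverse=True):
        masks[name] = prune_layer(
            convs[name], lambda idx: eval_worst_task_drop(name, idx), max_drop)
    return masks

One would then retrain the masked model and repeat such a pass until the FLOPs or parameter budget is met, matching the iterate-and-retrain loop the caption describes.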
[ "<|reference_start|> NDDR-CNN: Layerwise Feature Fusing in Multi-Task CNNs by Neural Discriminative Dimensionality Reduction: In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This is in contrast with the most widely used MTL CNN structures which empirically or heuristically share features on some specific layers (e.g., share all the features except the last convolutional layer). The proposed layerwise feature fusing scheme is formulated by combining existing CNN components in a novel way, with clear mathematical interpretability as discriminative dimensionality reduction, which is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks according to their channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1x1 Convolution, Batch Normalization, and Weight Decay in one CNN. The use of existing CNN components ensures the end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a \"plug-and-play\" manner. The detailed ablation analysis shows that the proposed NDDR layer is easy to train and also robust to different hyperparameters. Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method. The code of our paper is available at https://github.com/ethanygao/NDDR-CNN. <|reference_end|>", "<|reference_start|> SSD: Single Shot MultiBox Detector: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For $300\\times 300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan X and for $500\\times 500$ input, SSD achieves 75.1% mAP, outperforming a comparable state of the art Faster R-CNN model. Code is available at https://github.com/weiliu89/caffe/tree/ssd . <|reference_end|>", "<|reference_start|> End-to-End Multi-Task Learning with Attention: We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. 
Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan. <|reference_end|>", "<|reference_start|> 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): <|reference_end|>" ]
[ 7, 12, 18, 32 ]
{"<|multi_cite_1_1|>": "arxiv-62406", "<|multi_cite_1_2|>": "ss-686353", "<|multi_cite_1_3|>": "arxiv-88684", "<|multi_cite_1_4|>": "arxiv-140547", "<|multi_cite_1_5|>": "arxiv-79041", "<|multi_cite_2_1|>": "arxiv-95838", "<|multi_cite_2_2|>": "arxiv-158125", "<|multi_cite_2_3|>": "arxiv-146327", "<|multi_cite_2_4|>": "arxiv-153138", "<|multi_cite_2_5|>": "arxiv-244038", "<|multi_cite_3_1|>": "arxiv-62406", "<|multi_cite_3_2|>": "ss-686353", "<|multi_cite_4_1|>": "arxiv-88684", "<|multi_cite_4_2|>": "arxiv-140547", "<|multi_cite_4_3|>": "arxiv-79041", "<|multi_cite_5_1|>": "arxiv-95838", "<|multi_cite_5_2|>": "arxiv-158125", "<|multi_cite_5_3|>": "arxiv-146327", "<|multi_cite_5_4|>": "arxiv-153138", "<|multi_cite_5_5|>": "arxiv-244038", "<|multi_cite_6_1|>": "arxiv-95838", "<|multi_cite_6_2|>": "arxiv-146327", "<|multi_cite_7_1|>": "arxiv-158125", "<|multi_cite_7_2|>": "arxiv-153138", "<|multi_cite_7_3|>": "arxiv-244038", "<|multi_cite_8_1|>": "arxiv-153138", "<|multi_cite_8_2|>": "arxiv-244038", "<|cite_9|>": "arxiv-104875", "<|cite_10|>": "arxiv-101911", "<|cite_11|>": "arxiv-132523", "<|cite_12|>": "ss-786696", "<|multi_cite_13_2|>": "ss-1077850", "<|multi_cite_14_1|>": "ss-786696", "<|multi_cite_14_2|>": "ss-724521", "<|cite_15|>": "arxiv-121831", "<|cite_16|>": "ss-827947", "<|multi_cite_17_1|>": "arxiv-104875", "<|multi_cite_17_3|>": "arxiv-175999", "<|multi_cite_18_1|>": "arxiv-132523", "<|multi_cite_18_2|>": "arxiv-196622", "<|multi_cite_18_3|>": "ss-928737", "<|multi_cite_18_4|>": "arxiv-439427", "<|multi_cite_18_5|>": "arxiv-403064", "<|multi_cite_18_6|>": "arxiv-424367", "<|multi_cite_18_7|>": "arxiv-322502", "<|cite_19|>": "arxiv-276813", "<|cite_20|>": "arxiv-175999", "<|cite_21|>": "ss-1077850", "<|cite_22|>": "arxiv-181208", "<|cite_23|>": "arxiv-104875", "<|cite_24|>": "arxiv-232725", "<|cite_25|>": "arxiv-177876", "<|cite_26|>": "arxiv-205516"}
2108.01756
<|paper_start|> Title: Localisable Monads Abstract: Localisable Monads: Monads govern computational side-effects in programming semantics. They can be combined in a ''bottom-up'' way to handle several instances of such effects. Indexed monads and graded monads do this in a modular way. Here, instead, we equip monads with fine-grained structure in a ''top-down'' way, using techniques from tensor topology. This provides an intrinsic theory of local computational effects without needing to know how constituent effects interact beforehand. Specifically, any monoidal category decomposes as a sheaf of local categories over a base space. We identify a notion of localisable monads which characterises when a monad decomposes as a sheaf of monads. Equivalently, localisable monads are formal monads in an appropriate presheaf 2-category, whose algebras we characterise. Three extended examples demonstrate how localisable monads can interpret the base space as locations in a computer memory, as sites in a network of interacting agents acting concurrently, and as time in stochastic processes. Introduction The computation of some desired value may influence parts of the environment in which the computation occurs that are separate from the value itself. Rather than being accidental byproducts, several modern programming platforms harness such \emph{computational side-effects} to structure computations in a modular way <|cite_start|> (Reference: Edinburgh Research Explorer Computational effects and operations: an overview: We overview a programme to provide a unified semantics for computational effects based upon the notion of a countable enriched Lawvere theory. We define the notion of countable enriched Lawvere theory, show how the various leading examples of computational effects, except for continuations, give rise to them, and we compare the definition with that of a strong monad. We outline how one may use the notion to model three natural ways in which to combine computational effects: by their sum, by their commutative combination, and by distributivity. We also outline a unified account of operational semantics. We present results we have already shown, some partial results, and our plans for further development of the programme.) <|cite_end|> <|cite_start|> (Reference: Handlers of Algebraic Effects: ) <|cite_end|>. The most well-known use is via \emph{monads} <|cite_start|> (Reference: Computational lambda-calculus and monads: The lambda -calculus is considered a useful mathematical tool in the study of programming languages. However, if one uses beta eta -conversion to prove equivalence of programs, then a gross simplification is introduced. The author gives a calculus based on a categorical semantics for computations, which provides a correct basis for proving equivalence of programs, independent from any specific computational model.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Notions of Computation Determine Monads: ) <|cite_end|>, which let one analyse a computational effect apart from the rest of the computation. A computation may use more than one effect. 
The corresponding monads can then be combined using \emph{distributive laws} into a single monad in a ``bottom-up'' fashion <|cite_start|> (Reference: Combining effects: Sum and tensor: ) <|cite_end|> <|cite_start|> (Reference: Depolarization and distributive laws: Given a vector space with two multiplications, one commutative the other anticommutative, possibly connected by a distributive law, the depolarization principle allows to look at this triplet through a single nonassociative multiplication. This is the case of Poisson algebras. We are interested here in the cases of transposed Poisson algebras and we show in this case that depolarization cannot be done with a single multiplication. We also examine the depolarization for Hom-Lie algebras.) <|cite_end|>. This combination may involve other formalisms such as Lawvere theories <|cite_start|> (Reference: Indexed Lawvere theories for local state: Monads for global state and local state have been used to provide semantics of programming languages for many years. There is a computationally natural presentation of an ordinary Lawvere theory that corresponds to the monad on Set for global state, inevitably called the Lawvere theory for global state. Here, we introduce a notion of indexed Lawvere theory and use it to give a Lawvere-style account of local state, extending the theorem for global state to local state. En route, we develop the notion of comodel of a Lawvere theory and exploit a universal characterisation of the category of worlds for local state. Ultimately, we give both syntactic and semantic characterisations of the operation block that allows one to move between worlds and use them to characterise the monad for local state.) <|cite_end|> <|cite_start|> (Reference: Semantics for Local Computational Effects: ) <|cite_end|>, but we focus on monads here. An especially interesting case is when many instances of effects of the same kind are in play <|cite_start|> (Reference: Instances of computational effects: an algebraic perspective: We investigate the connections between computational effects, algebraic theories, and monads on functor categories. We develop a syntactic framework with variable binding that allows us to describe equations between programs while taking into account the idea that there may be different instances of a particular computational effect. We use our framework to give a general account of several notions of computation that had previously been analyzed in terms of monads on presheaf categories: the analysis of local store by Plotkin and Power; the analysis of restriction by Pitts; and the analysis of the pi calculus by Stark.) <|cite_end|>. The bottom-up nature comes out in the fact that the base category on which the monad lives is highly structured; usually it is a cartesian category of presheaves. A related use of monads is to have several layers of granularity to an effect. Indexed monads and \emph{graded monads} then model, for example, different levels of access to a computational effect <|cite_start|> (Reference: Towards a Formal Theory of Graded Monads: ) <|cite_end|> <|cite_start|> (Reference: Generic Trace Semantics and Graded Monads: Models of concurrent systems employ a wide variety of semantics inducing various notions of process equivalence, ranging from linear-time semantics such as trace equivalence to branching-time semantics such as strong bisimilarity.
Many of these generalize to system types beyond standard transition systems, featuring, for example, weighted, probabilistic, or game-based transitions; this motivates the search for suitable coalgebraic abstractions of process equivalence that cover these orthogonal dimensions of generality, i.e. are generic both in the system type and in the notion of system equivalence. In recent joint work with Kurz, we have proposed a parametrization of system equivalence over an embedding of the coalgebraic type functor into a monad. In the present paper, we refine this abstraction to use graded monads, which come with a notion of depth that corresponds, e.g., to trace length or bisimulation depth. We introduce a notion of graded algebras and show how they play the role of formulas in trace logics.) <|cite_end|>. Again this is usually conceived of in a ``bottom-up'' fashion, where one specifies the behaviour at each level and then adds interplay between the levels. In this article we take the opposite, ``top-down'', approach. We start with a single monad on a category with some structure, and then ask when and how that monad is the combination of constituent monads. This work is a first step towards an \emph{intrinsic} theory of computational effects, one that doesn't need to specify in detail how constituent effects have to interact in advance. In particular, we do not postulate that the base category consists of presheaves, which is a consequence rather than an assumption. To do so, we follow the programme of \emph{tensor topology}, by observing that any monoidal category comes equipped with a notion of base space over which the category decomposes <|cite_start|> (Reference: Tensor topology: ) <|cite_end|> <|cite_start|> (Reference: Sheaf representation of monoidal categories: ) <|cite_end|> <|cite_start|> (Reference: Space in Monoidal Categories: The category of Hilbert modules may be interpreted as a naive quantum field theory over a base space. Open subsets of the base space are recovered as idempotent subunits, which form a meet-semilattice in any firm braided monoidal category. There is an operation of restriction to an idempotent subunit: it is a graded monad on the category, and has the universal property of algebraic localisation. Spacetime structure on the base space induces a closure operator on the idempotent subunits. Restriction is then interpreted as spacetime propagation. This lets us study relativistic quantum information theory using methods entirely internal to monoidal categories. As a proof of concept, we show that quantum teleportation is only successfully supported on the intersection of Alice and Bob's causal future.) <|cite_end|> <|cite_start|> (Reference: Tensor-restriction categories: Restriction categories were established to handle maps that are partially defined with respect to composition. Tensor topology realises that monoidal categories have an intrinsic notion of space, and deals with objects and maps that are partially defined with respect to this spatial structure. We introduce a construction that turns a firm monoidal category into a restriction category and axiomatise the monoidal restriction categories that arise this way, called tensor-restriction categories.) <|cite_end|>. This ``spatial'' aspect can be cleanly separated: any monoidal category embeds into a category of global sections of a sheaf of so-called local monoidal categories (see Theorems~\ref{thm:embedding} and~\ref{thm:sheafrepresentation} below). 
This is recalled in Section~\ref{sec:tensortopology}. Our main question is when and how a monad on a monoidal category respects this decomposition in the sense that it corresponds to a sheaf of monads on the local categories. The answer is a \emph{localisable monad}, discussed in Section~\ref{sec:localisablemonads}. To connect back to the ``bottom-up'' approach, we then characterise such monads as \emph{formal monads} <|cite_start|> (Reference: The formal theory of monads II: ) <|cite_end|> in a (pre)sheaf category in Section~\ref{sec:formalmonads}. This opens a way to analyse the (Kleisli) algebras for localisable monads, which we do in Section~\ref{sec:algebras}. The breadth of this approach is demonstrated in Section~\ref{sec:examples}, where we work out three extended examples. They show a range of how localisable monads may interpret the base space: as locations in a computer memory governed by a \emph{local state} monad; as sites in a network of interacting agents governed by a monad inspired by the \emph{pi calculus}; and as moments in time governed by a monad of \emph{stochastic processes}. Section~\ref{sec:conclusion} concludes, and Appendix~\ref{sec:proofs} gives proofs that were deferred from the main text. <|paper_end|>
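A brief editorial gloss on the row above: the ``bottom-up'' combination of monads that this introduction contrasts itself with is Beck's distributive law. In its standard textbook form (our notation, not the paper's), for monads $(S, \eta^S, \mu^S)$ and $(T, \eta^T, \mu^T)$ it is a natural transformation
\[ \lambda : ST \Rightarrow TS, \qquad \lambda \circ \eta^S T = T\eta^S, \qquad \lambda \circ S\eta^T = \eta^T S, \]
\[ \lambda \circ \mu^S T = T\mu^S \circ \lambda S \circ S\lambda, \qquad \lambda \circ S\mu^T = \mu^T S \circ T\lambda \circ \lambda T. \]
Given such a $\lambda$, the composite $TS$ carries a monad structure with unit $\eta^T S \circ \eta^S$ and multiplication $(\mu^T * \mu^S) \circ T\lambda S$, where $*$ denotes horizontal composition. The paper's top-down approach instead asks when a single given monad decomposes into constituent monads, without postulating such interaction data in advance.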
[ "<|reference_start|> Edinburgh Research Explorer Computational effects and operations: an overview: We overview a programme to provide a unified semantics for computational effects based upon the notion of a countable enriched Lawvere theory. We define the notion of countable enriched Lawvere theory, show how the various leading examples of computational effects, except for continuations, give rise to them, and we compare the definition with that of a strong monad. We outline how one may use the notion to model three natural ways in which to combine computational effects: by their sum, by their commutative combination, and by distributivity. We also outline a unified account of operational semantics. We present results we have already shown, some partial results, and our plans for further development of the programme. <|reference_end|>", "<|reference_start|> Tensor topology: <|reference_end|>", "<|reference_start|> Space in Monoidal Categories: The category of Hilbert modules may be interpreted as a naive quantum field theory over a base space. Open subsets of the base space are recovered as idempotent subunits, which form a meet-semilattice in any firm braided monoidal category. There is an operation of restriction to an idempotent subunit: it is a graded monad on the category, and has the universal property of algebraic localisation. Spacetime structure on the base space induces a closure operator on the idempotent subunits. Restriction is then interpreted as spacetime propagation. This lets us study relativistic quantum information theory using methods entirely internal to monoidal categories. As a proof of concept, we show that quantum teleportation is only successfully supported on the intersection of Alice and Bob's causal future. <|reference_end|>", "<|reference_start|> Tensor-restriction categories: Restriction categories were established to handle maps that are partially defined with respect to composition. Tensor topology realises that monoidal categories have an intrinsic notion of space, and deals with objects and maps that are partially defined with respect to this spatial structure. We introduce a construction that turns a firm monoidal category into a restriction category and axiomatise the monoidal restriction categories that arise this way, called tensor-restriction categories. <|reference_end|>" ]
[ 0, 11, 13, 14 ]
{"<|multi_cite_1_1|>": "ss-1585168", "<|multi_cite_1_2|>": "ss-2220019", "<|multi_cite_2_1|>": "ss-683459", "<|multi_cite_2_2|>": "ss-1269271", "<|multi_cite_3_1|>": "ss-723127", "<|multi_cite_3_2|>": "ss-1089159", "<|multi_cite_4_1|>": "ss-2338901", "<|multi_cite_4_2|>": "ss-1009833", "<|cite_5|>": "ss-707946", "<|multi_cite_6_1|>": "ss-1562957", "<|multi_cite_6_2|>": "ss-1080475", "<|multi_cite_7_1|>": "ss-707947", "<|multi_cite_7_2|>": "ss-707948", "<|multi_cite_7_3|>": "arxiv-122636", "<|multi_cite_7_4|>": "ss-707949", "<|cite_8|>": "ss-1560275"}
2112.09239
<|paper_start|> Title: EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech Abstract: EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech: Transformers are groundbreaking architectures that have changed the course of deep learning, and many high-performance models are being developed based on transformer architectures. Transformers are implemented only with attention in an encoder-decoder structure following seq2seq, without using RNNs, yet achieve better performance than RNNs. Herein, we investigate a technique for decoding electroencephalography (EEG) during imagined speech and overt speech, built on the self-attention module from the transformer architecture. We performed classification of nine subjects using a convolutional neural network based on EEGNet that captures temporal-spectral-spatial features from EEG of imagined speech and overt speech. Furthermore, we applied the self-attention module to decoding EEG to improve performance and reduce the number of parameters. Our results demonstrate the possibility of decoding brain activities of imagined speech and overt speech using attention modules. Also, only single-channel EEG or ear-EEG can be used to decode imagined speech for practical BCIs. Introduction Brain-computer interfaces (BCIs) are one of the most important considerations for communication systems in real life. Many researchers have studied BCIs that recognize human cognitive states or intentions by extracting crucial features from brain signals such as electroencephalography (EEG). <|cite_start|> (Reference: Hybrid High-order Functional Connectivity Networks Using Resting-state Functional MRI for Mild Cognitive Impairment Diagnosis: ) <|cite_end|> <|cite_start|> (Reference: Brain-controlled robotic arm system based on multi-directional {CNN}-{BiLSTM} network using {EEG} signals: Brain-machine interfaces (BMIs) can be used to decode brain activity into commands to control external devices. This paper presents the decoding of intuitive upper extremity imagery for multi-directional arm reaching tasks in three-dimensional (3D) environments. We designed and implemented an experimental environment in which electroencephalogram (EEG) signals can be acquired for movement execution and imagery. Fifteen subjects participated in our experiments. We proposed a multi-directional convolution neural network-bidirectional long short-term memory network (MDCBN)-based deep learning framework. The decoding performances for six directions in 3D space were measured by the correlation coefficient (CC) and the normalized root mean square error (NRMSE) between predicted and baseline velocity profiles. The grand-averaged CCs of multi-direction were 0.47 and 0.45 for the execution and imagery sessions, respectively, across all subjects. The NRMSE values were below 0.2 for both sessions. Furthermore, in this study, the proposed MDCBN was evaluated by two online experiments for real-time robotic arm control, and the grand-averaged success rates were approximately 0.60 (±0.14) and 0.43 (±0.09), respectively. Hence, we demonstrate the feasibility of intuitive robotic arm control based on EEG signals for real-world environments.) <|cite_end|> <|cite_start|> (Reference: Strength and similarity guided group-level brain functional network construction for MCI diagnosis: ) <|cite_end|>.
To enhance the performance of decoding EEG signals, preprocessing technology is also important for obtaining high-quality signals with higher decoding accuracy and a higher signal-to-noise ratio <|cite_start|> (Reference: Application of an adaptive fuzzy logic controller to optimize the performances of the P&O algorithm: In this work, an Adaptive Fuzzy Logic Controller is studied to optimize the transfer of the power provided by a photovoltaic generator. The adaptation process is carried out online in two tasks: the adaptation of the rules consequences and the self-organization of the internal structure of the Fuzzy Controller. In comparison with two types of traditional controls, the performance of the controller studied is validated. The simulation results show that the controller studied makes it possible to reduce the response time by 3 % compared to the conventional controller, and minimizes the steady-state error by eliminating the phenomenon of oscillation around the PPM and that the proposed controller exhibits good behavior with a wide range of power.) <|cite_end|> <|cite_start|> (Reference: A Real-Time Movement Artifact Removal Method for Ambulatory Brain-Computer Interfaces: Recently, practical brain-computer interfaces (BCIs) have been widely investigated for detecting human intentions in real world. However, performance differences still exist between the laboratory and the real world environments. One of the main reasons for such differences comes from the user’s unstable physical states (e.g., human movements are not strictly controlled), which produce unexpected signal artifacts. Hence, to minimize the performance degradation of electroencephalography (EEG)-based BCIs, we present a novel artifact removal method named constrained independent component analysis with online learning (cIOL). The cIOL can find and reject the noise-like components related to human body movements (i.e., movement artifacts) in the EEG signals. To obtain movement information, isolated electrodes are used to block electrical signals from the brain using high-resistance materials. We estimate artifacts with movement information using constrained independent component analysis from EEG signals and then extract artifact-free signals using online learning in each sample. In addition, the cIOL is evaluated by signal processing under 16 different experimental conditions (two types of EEG devices $\times $ two BCI paradigms $\times $ four different walking speeds). The experimental results show that the cIOL has the highest accuracy in both scalp- and ear-EEG, and has the highest signal-to-noise ratio in scalp-EEG among the state-of-the-art methods, except for the case of steady-state visual evoked potential at 2.0 m/s with superposition problem.) <|cite_end|> <|cite_start|> (Reference: A lower limb exoskeleton control system based on steady state visual evoked potentials: Objective. We have developed an asynchronous brain–machine interface (BMI)-based lower limb exoskeleton control system based on steady-state visual evoked potentials (SSVEPs). Approach. By decoding electroencephalography signals in real-time, users are able to walk forward, turn right, turn left, sit, and stand while wearing the exoskeleton. SSVEP stimulation is implemented with a visual stimulation unit, consisting of five light emitting diodes fixed to the exoskeleton. A canonical correlation analysis (CCA) method for the extraction of frequency information associated with the SSVEP was used in combination with k-nearest neighbors.
Main results. Overall, 11 healthy subjects participated in the experiment to evaluate performance. To achieve the best classification, CCA was first calibrated in an offline experiment. In the subsequent online experiment, our results exhibit accuracies of 91.3 ± 5.73%, a response time of 3.28 ± 1.82 s, an information transfer rate of 32.9 ± 9.13 bits/min, and a completion time of 1100 ± 154.92 s for the experimental parcour studied. Significance. The ability to achieve such high quality BMI control indicates that an SSVEP-based lower limb exoskeleton for gait assistance is becoming feasible.) <|cite_end|> <|cite_start|> (Reference: Reconstructing ERP Signals Using Generative Adversarial Networks for Mobile Brain-Machine Interface: Practical brain-machine interfaces have been widely studied to accurately detect human intentions using brain signals in the real world. However, the electroencephalography (EEG) signals are distorted owing to the artifacts such as walking and head movement, so brain signals may be large in amplitude rather than desired EEG signals. Due to these artifacts, detecting accurately human intention in the mobile environment is challenging. In this paper, we proposed the reconstruction framework based on generative adversarial networks using the event-related potentials (ERP) during walking. We used a pre-trained convolutional encoder to represent latent variables and reconstructed ERP through the generative model which shape similar to the opposite of encoder. Finally, the ERP was classified using the discriminative model to demonstrate the validity of our proposed framework. As a result, the reconstructed signals had important components such as N200 and P300 similar to ERP during standing. The accuracy of reconstructed EEG was similar to raw noisy EEG signals during walking. The signal-to-noise ratio of reconstructed EEG was significantly increased as 1.3. The loss of the generative model was 0.6301, which is comparatively low, which means training generative model had high performance. The reconstructed ERP consequentially showed an improvement in classification performance during walking through the effects of noise reduction. The proposed framework could help recognize human intention based on the brain-machine interface even in the mobile environment.) <|cite_end|>. Moreover, the decoding technologies including feature extraction and classification have improved significantly in recent years <|cite_start|> (Reference: {A Convolutional Neural Network for Steady State Visual Evoked Potential Classification Under Ambulatory Environment: The robust analysis of neural signals is a challenging problem. Here, we contribute a convolutional neural network (CNN) for the robust classification of a steady-state visual evoked potentials (SSVEPs) paradigm. We measure electroencephalogram (EEG)-based SSVEPs for a brain-controlled exoskeleton under ambulatory conditions in which numerous artifacts may deteriorate decoding. The proposed CNN is shown to achieve reliable performance under these challenging conditions. To validate the proposed method, we have acquired an SSVEP dataset under two conditions: 1) a static environment, in a standing position while fixated into a lower-limb exoskeleton and 2) an ambulatory environment, walking along a test course wearing the exoskeleton (here, artifacts are most challenging). 
The proposed CNN is compared to a standard neural network and other state-of-the-art methods for SSVEP decoding (i.e., a canonical correlation analysis (CCA)-based classifier, a multivariate synchronization index (MSI), a CCA combined with k-nearest neighbors (CCA-KNN) classifier) in an offline analysis. We found highly encouraging SSVEP decoding results for the CNN architecture, surpassing those of other methods with classification rates of 99.28% and 94.03% in the static and ambulatory conditions, respectively. A subsequent analysis inspects the representation found by the CNN at each layer and can thus contribute to a better understanding of the CNN’s robust, accurate decoding abilities.) <|cite_end|> <|cite_start|> (Reference: Dual-electrode motion artifact cancellation for mobile electroencephalography: Objective. Our purpose was to evaluate the ability of a dual electrode approach to remove motion artifact from electroencephalography (EEG) measurements. Approach. We used a phantom human head model and robotic motion platform to induce motion while collecting scalp EEG. We assembled a dual electrode array capturing (a) artificial neural signals plus noise from scalp EEG electrodes, and (b) electrically isolated motion artifact noise. We recorded artificial neural signals broadcast from antennae in the phantom head during continuous vertical sinusoidal movements (stationary, 1.00, 1.25, 1.50, 1.75, 2.00 Hz movement frequencies). We evaluated signal quality using signal-to-noise ratio (SNR), cross-correlation, and root mean square error (RMSE) between the ground truth broadcast signals and the recovered EEG signals. Main results. Signal quality was restored following noise cancellation when compared to single electrode EEG measurements collected with no phantom head motion. Significance. We achieved substantial motion artifact attenuation using secondary electrodes for noise cancellation. These methods can be applied to studying electrocortical signals during human locomotion to improve real-world neuroimaging using EEG.) <|cite_end|> <|cite_start|> (Reference: Network Properties in Transitions of Consciousness during Propofol-induced Sedation: ) <|cite_end|> <|cite_start|> (Reference: Subject-independent brain--computer interfaces based on deep convolutional neural networks: For a brain–computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20–30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing the left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulated the discriminative feature representation as a combination of the spectral–spatial input embedding the diversity of the EEG signals, as well as a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral–spatial inputs, we first consider the discriminative frequency bands in an information-theoretic observation model that measures the power of the features in two classes. 
From discriminative frequency bands, spectral–spatial inputs that include the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the process of feature representations, spectral–spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].) <|cite_end|> <|cite_start|> (Reference: Decoding Visual Responses based on Deep Neural Networks with Ear-EEG Signals: Recently, practical brain-computer interface is actively carried out, especially, in an ambulatory environment. However, the electroencephalography signals are distorted by movement artifacts and electromyography signals in ambulatory condition, which make hard to recognize human intention. In addition, as hardware issues are also challenging, ear-EEG has been developed for practical brain-computer interface and is widely used. However, ear-EEG still contains contaminated signals. In this paper, we proposed robust two-stream deep neural networks in walking conditions and analyzed the visual response EEG signals in the scalp and ear in terms of statistical analysis and brain-computer interface performance. We validated the signals with the visual response paradigm, steady-state visual evoked potential. The brain-computer interface performance deteriorated as 3~14% when walking fast at 1.6 m/s. When applying the proposed method, the accuracies increase 15% in cap-EEG and 7% in ear-EEG. The proposed method shows robust to the ambulatory condition in session dependent and session-to-session experiments.) <|cite_end|>. Recognizing brain activities during speech or imagined speech has recently attracted considerable attention and is developing rapidly <|cite_start|> (Reference: Natural speech reveals the semantic maps that tile human cerebral cortex: ) <|cite_end|> <|cite_start|> (Reference: Frequency-specific directed interactions in the human brain network for language: Significance The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. Although the functional role of individual regions in the brain network for language has been well studied, as of yet little is known about the mechanisms that facilitate the information exchange between these brain regions. In this paper we show that communication between language-relevant areas in the brain is supported by rhythmic neuronal synchronization and that different rhythms reflect the direction of information flow. These findings likely reflect a generic mechanism that allows for dynamic routing of information in a network of task-relevant brain regions during cognitive processing. The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we identified inferior frontal cortex and anterior temporal regions to receive widespread input and middle temporal regions to send widespread output.
This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.) <|cite_end|>. In particular, imagined speech is regarded as an advanced technology for brain signal-based communication systems <|cite_start|> (Reference: Brain-computer interfaces for communication and control: The brain's electrical signals enable people without muscle control to physically interact with the world.) <|cite_end|> <|cite_start|> (Reference: Neural Decoding of Imagined Speech and Visual Imagery as Intuitive Paradigms for BCI Communication: Brain-computer interface (BCI) is oriented toward intuitive systems that users can easily operate. Imagined speech and visual imagery are emerging paradigms that can directly convey a user’s intention. We investigated the underlying characteristics that affect the decoding performance of these two paradigms. Twenty-two subjects performed imagined speech and visual imagery of twelve words/phrases frequently used for patients’ communication. Spectral features were analyzed with thirteen-class classification (including rest class) using EEG filtered in six frequency ranges. In addition, cortical regions relevant to the two paradigms were analyzed by classification using single-channel and pre-defined cortical groups. Furthermore, we analyzed the word properties that affect the decoding performance based on the number of syllables, concrete and abstract concepts, and the correlation between the two paradigms. Finally, we investigated multiclass scalability in both paradigms. The high-frequency band displayed a significantly superior performance to that in the case of any other spectral features in the thirteen-class classification (imagined speech: 39.73 ± 5.64%; visual imagery: 40.14 ± 4.17%). Furthermore, the performance of Broca’s and Wernicke’s areas and auditory cortex was found to have improved among the cortical regions in both paradigms. As the number of classes increased, the decoding performance decreased moderately. Moreover, every subject exceeded the confidence level performance, implying the strength of the two paradigms in BCI inefficiency. These two intuitive paradigms were found to be highly effective for multiclass communication systems, having considerable similarities between each other. The results could provide crucial information for improving the decoding performance for practical BCI applications.) <|cite_end|> <|cite_start|> (Reference: A High Performance Spelling System based on EEG-EOG Signals With Visual Feedback: In this paper, we propose a highly accurate and fast spelling system that employs multi-modal electroencephalography-electrooculography (EEG-EOG) signals and visual feedback technology. Over the last 20 years, various types of speller systems have been developed in brain-computer interface and EOG/eye-tracking research; however, these conventional systems have a tradeoff between the spelling accuracy (or decoding) and typing speed.
Healthy users and physically challenged participants, in particular, may become exhausted quickly; thus, there is a need for a speller system with fast typing speed while retaining a high level of spelling accuracy. In this paper, we propose the first hybrid speller system that combines EEG and EOG signals with visual feedback technology so that the user and the speller system can act cooperatively for optimal decision-making. The proposed spelling system consists of a classic row-column event-related potential (ERP) speller, an EOG command detector, and visual feedback modules. First, the online ERP speller calculates classification probabilities for all candidate characters from the EEG epochs. Second, characters are sorted by their probability, and the characters with the highest probabilities are highlighted as visual feedback within the row-column spelling layout. Finally, the user can actively select the character as the target by generating an EOG command. The proposed system shows 97.6% spelling accuracy and an information transfer rate of 39.6 (±13.2) [bits/min] across 20 participants. In our extended experiment, we redesigned the visual feedback and minimized the number of channels (four channels) in order to enhance the speller performance and increase usability. Most importantly, a new weighted strategy resulted in 100% accuracy and a 57.8 (±23.6) [bits/min] information transfer rate across six participants. This paper demonstrates that the proposed system can provide a reliable communication channel for practical speller applications and may be used to supplement existing systems.) <|cite_end|>. Imagined speech refers to the internal pronunciation of speech purely by imagination, without any audible output or articulation <|cite_start|> (Reference: Biosignal-Based Spoken Communication: A Survey: Speech is a complex process involving a wide range of biosignals, including but not limited to acoustics. These biosignals—stemming from the articulators, the articulator muscle activities, the neural pathways, and the brain itself—can be used to circumvent limitations of conventional speech processing in particular, and to gain insights into the process of speech production in general. Research on biosignal-based speech processing is a wide and very active field at the intersection of various disciplines, ranging from engineering, computer science, electronics and machine learning to medicine, neuroscience, physiology, and psychology. Consequently, a variety of methods and approaches have been used to investigate the common goal of creating biosignal-based speech processing devices for communication applications in everyday situations and for speech rehabilitation, as well as gaining a deeper understanding of spoken communication. This paper gives an overview of the various modalities, research approaches, and objectives for biosignal-based spoken communication.) <|cite_end|>. Recent studies have revealed some characteristics and potential of imagined speech decoding <|cite_start|> (Reference: Neural Decoding of Imagined Speech and Visual Imagery as Intuitive Paradigms for BCI Communication: Brain-computer interface (BCI) is oriented toward intuitive systems that users can easily operate. Imagined speech and visual imagery are emerging paradigms that can directly convey a user’s intention. We investigated the underlying characteristics that affect the decoding performance of these two paradigms.
Twenty-two subjects performed imagined speech and visual imagery of twelve words/phrases frequently used for patients’ communication. Spectral features were analyzed with thirteen-class classification (including rest class) using EEG filtered in six frequency ranges. In addition, cortical regions relevant to the two paradigms were analyzed by classification using single-channel and pre-defined cortical groups. Furthermore, we analyzed the word properties that affect the decoding performance based on the number of syllables, concrete and abstract concepts, and the correlation between the two paradigms. Finally, we investigated multiclass scalability in both paradigms. The high-frequency band displayed a significantly superior performance to that in the case of any other spectral features in the thirteen-class classification (imagined speech: 39.73 ± 5.64%; visual imagery: 40.14 ± 4.17%). Furthermore, the performance of Broca’s and Wernicke’s areas and auditory cortex was found to have improved among the cortical regions in both paradigms. As the number of classes increased, the decoding performance decreased moderately. Moreover, every subject exceeded the confidence level performance, implying the strength of the two paradigms in BCI inefficiency. These two intuitive paradigms were found to be highly effective for multiclass communication systems, having considerable similarities between each other. The results could provide crucial information for improving the decoding performance for practical BCI applications.) <|cite_end|> <|cite_start|> (Reference: Inferring imagined speech using EEG signals: a new approach using Riemannian manifold features: Objective. In this paper, we investigate the suitability of imagined speech for brain–computer interface (BCI) applications. Approach. A novel method based on covariance matrix descriptors, which lie in Riemannian manifold, and the relevance vector machines classifier is proposed. The method is applied on electroencephalographic (EEG) signals and tested in multiple subjects. Main results. The method is shown to outperform other approaches in the field with respect to accuracy and robustness. The algorithm is validated on various categories of speech, such as imagined pronunciation of vowels, short words and long words. The classification accuracy of our methodology is in all cases significantly above chance level, reaching a maximum of 70% for cases where we classify three words and 95% for cases of two words. Significance. The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The dataset of speech imagery collected from total 15 subjects is also published.) <|cite_end|>, but fundamental neural properties and their practical use remain to be investigated. Therefore, research on the decoding of imagined speech requires the development of brain signal decoding techniques for more accurate and practical BCI <|cite_start|> (Reference: {{EEG: This study was conducted to investigate the effectiveness of EEG biofeedback in improving the attentional processes of female students with academic decline at the University of Mohaghegh Ardabili. The sample consisted of 33 participants who were selected through simple random sampling and randomly assigned to an experimental group (15 participants) and a control group (18 participants).
Within an experimental pre-test/post-test control-group design, a neurofeedback device (NFT) and the Continuous Performance Test (CPT) were used for data collection. The experimental group received neurofeedback training for 20 sessions. The results of an analysis of covariance showed that, compared with the control group, students who had been trained in the neurofeedback sessions showed a significant increase in correct responses on the Continuous Performance Test and significant decreases in the omission-error and commission-error components. The results of this study indicate the efficacy of neurofeedback as an effective method for reducing attention problems in students with academic decline.) <|cite_end|> <|cite_start|> (Reference: Speech synthesis from neural decoding of spoken sentences: ) <|cite_end|>. Several deep learning techniques have been published for decoding EEG brain signals, with architectures designed around the characteristics of brain signals <|cite_start|> (Reference: Supplementary for: Deep learning with convolutional neural networks for EEG decoding and visualization: Translational Neurotechnology Lab, Epilepsy Center, Medical Center – University of Freiburg, Engelberger Str. 21, 79106 Freiburg, Germany BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Georges-Köhler-Allee 79, 79110 Freiburg, Germany Machine Learning Lab, Computer Science Dept., University of Freiburg, Georges-Köhler-Allee 79, 79110 Freiburg, Germany Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Hansastr. 9a, 79104 Freiburg, Germany Machine Learning for Automated Algorithm Design Lab, Computer Science Dept., University of Freiburg, Georges-Köhler-Allee 52, 79110 Freiburg im Breisgau, Germany Brain State Decoding Lab, Computer Science Dept., University of Freiburg, Albertstr. 23, 79104 Freiburg, Germany Autonomous Intelligent Systems Lab, Computer Science Dept., University of Freiburg, Georges-Köhler-Allee 79, 79110 Freiburg, Germany) <|cite_end|>. Deep learning has often been used to decode human intention from motor imagery or event-related potentials, and has shown superior performance to conventional machine learning methods such as linear discriminant analysis and support vector machines <|cite_start|> (Reference: {A Novel Bayesian Framework for Discriminative Feature Extraction in Brain-Computer Interfaces: As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers.
We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.) <|cite_end|> <|cite_start|> (Reference: Subject-independent brain--computer interfaces based on deep convolutional neural networks: For a brain–computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20–30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing the left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulated the discriminative feature representation as a combination of the spectral–spatial input embedding the diversity of the EEG signals, as well as a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral–spatial inputs, we first consider the discriminative frequency bands in an information-theoretic observation model that measures the power of the features in two classes. From discriminative frequency bands, spectral–spatial inputs that include the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the process of feature representations, spectral–spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].) <|cite_end|> <|cite_start|> (Reference: Analysis and classification of EEG signals: Electroencephalography (EEG) is one of the most clinically and scientifically exploited signals recorded from humans. Hence, its measurement plays a prominent role in brain studies. In particular, the examination of EEG signals has been recognized as the most preponderant approach to the problem of extracting knowledge of the brain dynamics. EEG recordings are particularly important in the diagnosis of epilepsy and in brain computer interface (BCI). In BCI systems, EEG signals help to restore sensory and motor functions in patients who have severe motor disabilities. Analysing EEG signals is very important both for supporting the diagnosis of brain diseases and for contributing to a better understanding of cognitive process. Although EEG signals provide a great deal of information about the brain, research in classification and evaluation of these signals is limited. Even today the EEG is often examined manually by experts. Therefore, there is an ever-increasing need for developing automatic classification techniques to evaluate and diagnose neurological disorders. Classification techniques can help to differentiate EEG segments and to decide whether a person is healthy. 
A big challenge is for BCI systems to correctly and efficiently identify different EEG signals of different motor imagery (MI) tasks using appropriate classification algorithms to assist motor disabled patients in communication. In this dissertation, we aim to develop methods for the analysis and classification of epileptic EEG signals and also for the identification of different categories of MI tasks based EEG signals in BCI’s development. In order to classify epileptic EEG signals, we propose two methods, simple sampling technique based least square support vector machine (SRS-LS-SVM) and clustering technique based least square support vector machine (CT-LS-SVM). The experimental results show that both algorithms perform well in the EEG signal classification and the CT-LS-SVM method takes much less execution time compared to the SRS-LS-SVM technique. The research findings also indicate that the proposed approaches are very efficient for classifying two categories of EEG signals. This research can help to provide clinical information about patients who have epilepsy, neurological disorders, mental or physiological problems. In BCI systems, if the MI tasks are reliably distinguished through identifying typical patterns in EEG data, motor disabled people could communicate with a device by composing sequences of these mental states. In this dissertation, for the identification of MI tasks in BCI applications, we developed three methods: (1) Cross-correlation based logistic regression (CC-LR). (2) Modified CC-LR with diverse feature sets. (3) Cross-correlation based least square support vector machine (CC-LS-SVM). The experimental results have demonstrated the effectiveness of the methods for the identification of MI tasks. These techniques can assist clinical diagnoses and rehabilitation tasks. Finally we investigated two issues for the MI classification: (1) Which algorithm performed better. (2) Which EEG data is more suitable for getting information about MI tasks. Is it the motor area data or the all-channels data? To answer these two questions, we considered the three algorithms: the CC-LSSVM, the CC-LR and the cross-correlation based kernel logistic regression (CCKLR). Based on the experimental results, we concluded that the CC-LS-SVM algorithm is the best algorithm for the MI tasks EEG signal classification, and the allchannels EEG data can provide better information than the motor area EEG data for the MI tasks classification. Furthermore, the CC-LS-SVM approach can correctly identify the discriminative MI tasks, demonstrating the algorithms superiority in the classification performance over other existing methods.) <|cite_end|>. Recently, there have been several attempts to find optimal EEG features with deep neural networks, building on the three main types of EEG features: temporal, spectral, and spatial (an illustrative convolutional sketch of this idea follows below) <|cite_start|> (Reference: Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials: Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs).
However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in the visual stimuli and their relative spatial arrangement. When this knowledge is unavailable, such as when SSVEP signals are acquired asynchronously, such approaches tend to fail. In this paper, we show how a compact convolutional neural network (Compact-CNN), which only requires raw EEG signals for automatic feature extraction, can be used to decode signals from a 12-class SSVEP dataset without the need for any domain-specific knowledge or calibration data. We report across subject mean accuracy of approximately 80% (chance being 8.3%) and show this is substantially better than current state-of-the-art hand-crafted approaches using canonical correlation analysis (CCA) and Combined-CCA. Furthermore, we analyze our Compact-CNN to examine the underlying feature representation, discovering that the deep learner extracts additional phase and amplitude related features associated with the structure of the dataset. We discuss how our Compact-CNN shows promise for BCI applications that allow users to freely gaze/attend to any stimulus at any time (e.g., asynchronous BCI) as well as provides a method for analyzing SSVEP signals in a way that might augment our understanding about the basic processing in the visual cortex.) <|cite_end|> <|cite_start|> (Reference: Soft Computing-Based EEG Classification by Optimal Feature Selection and Neural Networks: Brain computer interface translates electroencephalogram (EEG) signals into control commands so that paralyzed people can control assistive devices. This human thought translation is a very challenging process as EEG signals contain noise. For noise removal, a bandpass filter or a filter bank is used. However, these techniques also remove useful information from the signal. Furthermore, after feature extraction, there are such features which do not play any significant role in effective classification. Thus, soft computing-based EEG classification followed by extraction and then selection of optimal features can produce better results. In this paper, subband common spatial patterns using sequential backward floating selection is being proposed in order to classify motor-imagery-based EEG signals. The signal is decomposed into subband using a filter bank having overlapped frequency cutoffs. Linear discriminant analysis followed by common spatial pattern is applied to the output of each filter for features extraction. Then, sequential backward floating selection is applied for selection of optimal features to train radial basis function neural networks. Two different datasets have been used for evaluation of results, i.e., Open BCI dataset and EEG signals acquired by Emotiv Epoc. The proposed system shows an overall accuracy of 93.05% and 85.00% for both datasets, respectively. The results show that the proposed optimal feature selection and neural network-based classification approach with overlapped frequency bands is an effective method for EEG classification as compared to previous techniques.) <|cite_end|>. 
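The preceding passage surveys deep networks that learn temporal, spectral, and spatial EEG features. As a concrete illustration only, the following is a minimal sketch of a shallow convolutional EEG classifier in the spirit of such models; all layer shapes, hyperparameters, and names are assumptions for illustration and are not taken from any cited architecture.

```python
# Minimal sketch of a shallow CNN for EEG classification, illustrating the
# temporal-then-spatial filtering idea discussed above. Layer sizes and
# hyperparameters are illustrative assumptions, not from any cited paper.
import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    def __init__(self, n_channels: int = 64, n_samples: int = 500, n_classes: int = 4):
        super().__init__()
        # Temporal convolution: learns frequency-selective filters along time.
        self.temporal = nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12))
        # Spatial convolution: learns weighted combinations across electrodes.
        self.spatial = nn.Conv2d(16, 16, kernel_size=(n_channels, 1))
        self.bn = nn.BatchNorm2d(16)
        self.pool = nn.AvgPool2d(kernel_size=(1, 50), stride=(1, 10))
        n_features = 16 * ((n_samples - 50) // 10 + 1)
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, channels, time samples)
        x = self.temporal(x)
        x = self.spatial(x)
        x = torch.square(self.bn(x))          # squaring ~ band-power estimate
        x = torch.log(self.pool(x) + 1e-6)    # log-power, a common EEG feature
        return self.classifier(x.flatten(start_dim=1))

# Example: a batch of 8 trials, 64 channels, 500 samples (e.g., 2 s at 250 Hz).
model = ShallowEEGNet()
logits = model(torch.randn(8, 1, 64, 500))   # -> shape (8, 4)
```

The temporal filter plays the role of a learned band-pass (spectral feature), the spatial filter of a learned electrode montage, so the three feature types mentioned above each map to one stage of the network.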
In addition, EEG-based speaker identification studies have also actively applied machine learning or deep learning techniques <|cite_start|> (Reference: Subjects identification using EEG-recorded imagined speech: ) <|cite_end|> <|cite_start|> (Reference: Spatial and Spectral Fingerprint in the Brain: Speaker Identification from Single Trial MEG Signals: Brain activity signals are unique subject-specific biological features that can not be forged or stolen. Recognizing this inherent trait, brain waves are recently being acknowledged as a far more secure, sensitive, and confidential biometric approach for user identification. Yet, current electroencephalography (EEG) based biometric systems are still in infancy considering their requirement of a large number of sensors and lower recognition performance compared to present biometric modalities. In this study, we investigated the spatial and spectral fingerprints in the brain with magnetoencephalography (MEG) for speaker identification during rest (pre-stimuli) and speech production. Experimental results suggested that the frontal and the temporal regions of the brain and higher frequency (gamma and high gamma) neural oscillations are more dominating for speaker identification. Moreover, we also found that two optimally located MEG sensors are sufficient to obtain a high speaker classification accuracy during speech tasks whereas at least eight optimally located sensors are needed to accurately identify these subjects during rest-state (pre-stimuli). These results indicated the unique neural traits of speech production across speakers.) <|cite_end|>. Deep learning may be effective in capturing prominent features from brain signals to verify individual characteristics. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figure1.pdf} \caption{Overall framework of this study. We split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable classification token to the sequence.} \label{fig1} \end{figure*} Transformer <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.)
<|cite_end|> is a model introduced in Google's 2017 paper ``Attention Is All You Need''; it follows the encoder-decoder structure of seq2seq models but is implemented entirely with attention. Even though it retains the encoder-decoder design, it does not use RNNs and still outperforms RNN-based models. It forms the basis of well-known models such as GPT-3 and DALL-E, and tools such as the Hugging Face Transformers library have made it easy for machine learning engineers to solve a wide range of NLP tasks, spurring numerous innovations in NLP and other fields <|cite_start|> (Reference: Exploring Self-attention for Image Recognition: Recent work has shown that self-attention can serve as a basic building block for image recognition models. We explore variations of self-attention and assess their effectiveness for image recognition. We consider two forms of self-attention. One is pairwise self-attention, which generalizes standard dot-product attention and is fundamentally a set operator. The other is patchwise self-attention, which is strictly more powerful than convolution. Our pairwise self-attention networks match or outperform their convolutional counterparts, and the patchwise models substantially outperform the convolutional baselines. We also conduct experiments that probe the robustness of learned representations and conclude that self-attention networks may have significant benefits in terms of robustness and generalization.) <|cite_end|> <|cite_start|> (Reference: Self-Attention Generative Adversarial Networks: In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.) <|cite_end|> <|cite_start|> (Reference: Image Transformer: Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks.
While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.) <|cite_end|> <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|>. The Transformer's attention mechanism was created to overcome the limitations of RNNs, whose computation is slow because it is difficult to parallelize. Transformers do not need to process data sequentially as RNNs do, which permits far greater parallelization (see the illustrative attention sketch following this excerpt). Recently, there have been several attempts to commercialize BCI technology <|cite_start|> (Reference: Single-channel in-ear-EEG detects the focus of auditory attention to concurrent tone streams and mixed speech: Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n  =  7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participants’ attentional focus could be detected from single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance.
In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.) <|cite_end|> <|cite_start|> (Reference: Unobtrusive ambulatory EEG using a smartphone and flexible printed electrodes around the ear: ) <|cite_end|>. For example, portable, non-hair EEG devices have frequently been investigated to improve the applicability of BCI in real life, and endogenous paradigms such as motor imagery and imagined speech are used rather than exogenous paradigms such as event-related potentials and steady-state visual evoked potentials, which require external devices to present stimuli <|cite_start|> (Reference: Mobile BCI dataset of scalp- and ear-EEGs with ERP and SSVEP paradigms while standing, walking, and running: ) <|cite_end|>. In particular, ear-EEG, which consists of electrodes placed inside or around the ear, has many advantages over conventional scalp-EEG in terms of stability and portability. In addition, since the Broca and Wernicke areas, which are mainly engaged during overt and imagined speech, lie close to the left-ear channels, it may be possible to recognize the user's intention using only a small number of channels <|cite_start|> (Reference: Natural speech reveals the semantic maps that tile human cerebral cortex: ) <|cite_end|> <|cite_start|> (Reference: Neural Decoding of Imagined Speech and Visual Imagery as Intuitive Paradigms for BCI Communication: Brain-computer interface (BCI) is oriented toward intuitive systems that users can easily operate. Imagined speech and visual imagery are emerging paradigms that can directly convey a user’s intention. We investigated the underlying characteristics that affect the decoding performance of these two paradigms. Twenty-two subjects performed imagined speech and visual imagery of twelve words/phrases frequently used for patients’ communication. Spectral features were analyzed with thirteen-class classification (including rest class) using EEG filtered in six frequency ranges. In addition, cortical regions relevant to the two paradigms were analyzed by classification using single-channel and pre-defined cortical groups. Furthermore, we analyzed the word properties that affect the decoding performance based on the number of syllables, concrete and abstract concepts, and the correlation between the two paradigms. Finally, we investigated multiclass scalability in both paradigms. The high-frequency band displayed a significantly superior performance to that in the case of any other spectral features in the thirteen-class classification (imagined speech: 39.73 ± 5.64%; visual imagery: 40.14 ± 4.17%). Furthermore, the performance of Broca’s and Wernicke’s areas and auditory cortex was found to have improved among the cortical regions in both paradigms. As the number of classes increased, the decoding performance decreased moderately. Moreover, every subject exceeded the confidence level performance, implying the strength of the two paradigms in BCI inefficiency. These two intuitive paradigms were found to be highly effective for multiclass communication systems, having considerable similarities between each other. The results could provide crucial information for improving the decoding performance for practical BCI applications.) <|cite_end|>. <|paper_end|>
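The paper above attributes the Transformer's speed advantage to attention replacing recurrence. As a hedged, generic illustration (not code from any cited work), the sketch below shows scaled dot-product attention: every query attends to every key through one matrix product, with no sequential dependence across positions.

```python
# Minimal sketch of scaled dot-product attention (in the style of
# "Attention Is All You Need"). Unlike an RNN, all sequence positions are
# processed in parallel via matrix products, with no recurrence.
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Q, K, V: (seq_len, d_k) matrices; returns (seq_len, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # pairwise query-key similarity
    weights = softmax(scores, axis=-1)     # one distribution per query
    return weights @ V                     # weighted mixture of values

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 16))              # 10 tokens, 16-dim embeddings
out = attention(x, x, x)                   # self-attention: Q = K = V = x
print(out.shape)                           # (10, 16)
```

Because the two matrix products have no step-to-step dependency, the whole sequence can be computed at once, which is exactly the parallelization argument made in the text.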
[ "<|reference_start|> Network Properties in Transitions of Consciousness during Propofol-induced Sedation: <|reference_end|>", "<|reference_start|> Natural speech reveals the semantic maps that tile human cerebral cortex: <|reference_end|>", "<|reference_start|> Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials: Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs). However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in the visual stimuli and their relative spatial arrangement. When this knowledge is unavailable, such as when SSVEP signals are acquired asynchronously, such approaches tend to fail. In this paper, we show how a compact convolutional neural network (Compact-CNN), which only requires raw EEG signals for automatic feature extraction, can be used to decode signals from a 12-class SSVEP dataset without the need for any domain-specific knowledge or calibration data. We report across subject mean accuracy of approximately 80% (chance being 8.3%) and show this is substantially better than current state-of-the-art hand-crafted approaches using canonical correlation analysis (CCA) and Combined-CCA. Furthermore, we analyze our Compact-CNN to examine the underlying feature representation, discovering that the deep learner extracts additional phase and amplitude related features associated with the structure of the dataset. We discuss how our Compact-CNN shows promise for BCI applications that allow users to freely gaze/attend to any stimulus at any time (e.g., asynchronous BCI) as well as provides a method for analyzing SSVEP signals in a way that might augment our understanding about the basic processing in the visual cortex. <|reference_end|>", "<|reference_start|> Soft Computing-Based EEG Classification by Optimal Feature Selection and Neural Networks: Brain computer interface translates electroencephalogram (EEG) signals into control commands so that paralyzed people can control assistive devices. This human thought translation is a very challenging process as EEG signals contain noise. For noise removal, a bandpass filter or a filter bank is used. However, these techniques also remove useful information from the signal. Furthermore, after feature extraction, there are such features which do not play any significant role in effective classification. Thus, soft computing-based EEG classification followed by extraction and then selection of optimal features can produce better results. In this paper, subband common spatial patterns using sequential backward floating selection is being proposed in order to classify motor-imagery-based EEG signals. The signal is decomposed into subband using a filter bank having overlapped frequency cutoffs. Linear discriminant analysis followed by common spatial pattern is applied to the output of each filter for features extraction. Then, sequential backward floating selection is applied for selection of optimal features to train radial basis function neural networks. Two different datasets have been used for evaluation of results, i.e., Open BCI dataset and EEG signals acquired by Emotiv Epoc. 
The proposed system shows an overall accuracy of 93.05% and 85.00% for both datasets, respectively. The results show that the proposed optimal feature selection and neural network-based classification approach with overlapped frequency bands is an effective method for EEG classification as compared to previous techniques. <|reference_end|>" ]
[ 9, 12, 26, 27 ]
{"<|multi_cite_1_1|>": "ss-1386586", "<|multi_cite_1_2|>": "ss-697922", "<|multi_cite_1_3|>": "ss-1470835", "<|multi_cite_2_1|>": "ss-1295289", "<|multi_cite_2_2|>": "ss-1295290", "<|multi_cite_2_3|>": "ss-1950716", "<|multi_cite_2_4|>": "arxiv-266068", "<|multi_cite_3_1|>": "ss-1106868", "<|multi_cite_3_2|>": "ss-1295292", "<|multi_cite_3_3|>": "ss-1295293", "<|multi_cite_3_4|>": "ss-904899", "<|multi_cite_3_5|>": "arxiv-246291", "<|multi_cite_4_1|>": "ss-882520", "<|multi_cite_4_2|>": "ss-1403931", "<|multi_cite_5_1|>": "ss-974453", "<|multi_cite_5_2|>": "ss-1403932", "<|multi_cite_5_3|>": "ss-1219573", "<|cite_6|>": "ss-1116508", "<|multi_cite_7_1|>": "ss-1403932", "<|multi_cite_7_2|>": "ss-1386588", "<|multi_cite_8_1|>": "ss-1409628", "<|multi_cite_8_2|>": "ss-1192146", "<|multi_cite_9_1|>": "ss-1123238", "<|multi_cite_10_1|>": "ss-1106863", "<|multi_cite_10_2|>": "ss-904899", "<|multi_cite_10_3|>": "ss-927793", "<|multi_cite_11_1|>": "arxiv-151359", "<|multi_cite_11_2|>": "ss-1403937", "<|multi_cite_12_1|>": "ss-1403938", "<|multi_cite_12_2|>": "ss-1403934", "<|cite_13|>": "arxiv-126595", "<|multi_cite_14_1|>": "arxiv-262095", "<|multi_cite_14_2|>": "arxiv-159359", "<|multi_cite_14_3|>": "arxiv-148539", "<|multi_cite_14_4|>": "arxiv-298443", "<|multi_cite_15_1|>": "ss-2541160", "<|multi_cite_15_2|>": "ss-873234", "<|cite_16|>": "ss-2173798", "<|multi_cite_17_1|>": "ss-882520", "<|multi_cite_17_2|>": "ss-1403932"}
2311.08299
<|paper_start|> Title: VERVE: Template-based ReflectiVE Rewriting for MotiVational IntErviewing Abstract: VERVE: Template-based ReflectiVE Rewriting for MotiVational IntErviewing: Reflective listening is a fundamental skill that counselors must acquire to achieve proficiency in motivational interviewing (MI). It involves responding in a manner that acknowledges and explores the meaning of what the client has expressed in the conversation. In this work, we introduce the task of counseling response rewriting, which transforms non-reflective statements into reflective responses. We introduce VERVE, a template-based rewriting system with paraphrase-augmented training and adaptive template updating. VERVE first creates a template by identifying and filtering out tokens that are not relevant to reflections and constructs a reflective response using the template. Paraphrase-augmented training allows the model to learn less-strict fillings of masked spans, and adaptive template updating helps discover effective templates for rewriting without significantly removing the original content. Using both automatic and human evaluations, we compare our method against text rewriting baselines and show that our framework is effective in turning non-reflective statements into more reflective responses while achieving a good content preservation-reflection style trade-off. Introduction During the Covid-19 pandemic, the number of people living with anxiety and depression rose more than fourfold, thus widening the ongoing gap between the growing prevalence of mental health disorders and the unmet need for treatment <|cite_start|> (Reference: Expression of Concern About: Trends in Mental Health Symptoms, Service Use, and Unmet Need for Services among US Adults through the First Nine Months of the COVID-19 Pandemic: Department of Counseling, Developmental, and Educational Psychology, Boston College, 140 Commonwealth Ave., Chestnut Hill, MA 02467, USA Department of Economics and School of Social Work, Boston College, Chestnut Hill, MA, USA Upon initial publication, it was noticed there was a coding error in the manuscript that significantly alters the outcomes of this paper. The coding has been revised by the authors and the updated findings are currently under review. A revised paper is forthcoming.) <|cite_end|>. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth,height=\textheight,keepaspectratio]{figures/fig_exchange.png} \caption{In this example of counselor response rewriting, a counseling trainee is asked to provide a reflective response given the client prompt and produces a poor response by giving a piece of advice rather than reflecting the client's concerns. Our system generates an improved response that preserves content and increases the use of reflective language. } \label{fig:turn} \end{figure} One driving cause behind this discrepancy is the shortage of mental health professionals, which is exacerbated by the fact that becoming a counselor requires extensive training <|cite_start|> (Reference: Developing the Mental Health Workforce: Review and Application of Training Approaches from Multiple Disciplines: ) <|cite_end|>.
In particular, counselor training is difficult to speed up due to several factors, such as the need for expert supervision and the laborious, time-intensive process of providing evaluative feedback. There have been several efforts to use NLP to assist counselor training, including automatic coding of counselor behavior <|cite_start|> (Reference: "am i a good therapist?" automated evaluation of psychotherapy skills using speech and language technologies: With the growing prevalence of psychological interventions, it is vital to have measures which rate the effectiveness of psychological care, in order to assist in training, supervision, and quality assurance of services. Traditionally, quality assessment is addressed by human raters who evaluate recorded sessions along specific dimensions, often codified through constructs relevant to the approach and domain. This is however a cost-prohibitive and time-consuming method which leads to poor feasibility and limited use in real-world settings. To facilitate this process, we have developed an automated competency rating tool able to process the raw recorded audio of a session, analyzing who spoke when, what they said, and how the health professional used language to provide therapy. Focusing on a use case of a specific type of psychotherapy called Motivational Interviewing, our system gives comprehensive feedback to the therapist, including information about the dynamics of the session (e.g., therapist’s vs. client’s talking time), low-level psychological language descriptors (e.g., type of questions asked), as well as other high-level behavioral constructs (e.g., the extent to which the therapist understands the clients’ perspective). We describe our platform and its performance, using a dataset of more than 5,000 recordings drawn from its deployment in a real-world clinical setting used to assist training of new therapists. We are confident that a widespread use of automated psychotherapy rating tools in the near future will augment experts’ capabilities by providing an avenue for more effective training and skill improvement and will eventually lead to more positive clinical outcomes.) <|cite_end|>, providing timing and language suggestions during client interactions <|cite_start|> (Reference: A computational approach to measure the linguistic characteristics of psychotherapy timing, responsiveness, and consistency: ) <|cite_end|> <|cite_start|> (Reference: Enhancing the quality of cognitive behavioral therapy in community mental health through artificial intelligence generated fidelity feedback (Project AFFECT): a study protocol: ) <|cite_end|>, and evaluating the quality of specific counseling skills <|cite_start|> (Reference: Counseling-style reflection generation using generative pretrained transformers with augmented context: We introduce a counseling dialogue system that seeks to assist counselors while they are learning and refining their counseling skills. The system generates counselors’ reflections – i.e., responses that reflect back on what the client has said given the dialogue history. Our method builds upon the new generative pretrained transformer architecture and enhances it with context augmentation techniques inspired by traditional strategies used during counselor training. Through a set of comparative experiments, we show that the system that incorporates these strategies performs better in the reflection generation task than a system that is just fine-tuned with counseling conversations.
To confirm our findings, we present a human evaluation study that shows that our system generates naturally-looking reflections that are also stylistically and grammatically correct.) <|cite_end|> <|cite_start|> (Reference: Constructing Image-Text Pair Dataset from Books: Digital archiving is becoming widespread owing to its effectiveness in protecting valuable books and providing knowledge to many people electronically. In this paper, we propose a novel approach to leverage digital archives for machine learning. If we can fully utilize such digitized data, machine learning has the potential to uncover unknown insights and ultimately acquire knowledge autonomously, just like humans read books. As a first step, we design a dataset construction pipeline comprising an optical character reader (OCR), an object detector, and a layout analyzer for the autonomous extraction of image-text pairs. In our experiments, we apply our pipeline on old photo books to construct an image-text pair dataset, showing its effectiveness in image-text retrieval and insight extraction.) <|cite_end|>. However, the progress in developing tools that can fulfill a ``mentoring role'' and offer alternative language suggestions for counselors in training has been limited. To fill this gap, we introduce the task of counselor response rewriting, which involves rephrasing trainees' responses that display only basic counseling skills into alternative responses that reflect a more advanced level of counseling proficiency. We focus on reflective listening as our main counseling skill, and on Motivational Interviewing as the counseling strategy. We show an example of our system output in Figure~\ref{fig:turn}. In this case, providing a numerical score or a reference reflection (i.e., a high-quality reflection) does not help the counselor understand what parts of their answer could be improved. Our system addresses this shortcoming by separating the behavior-relevant (e.g., reflection-like language) and the behavior-non-relevant parts, and using the latter as a template for creating an improved rewrite of the original. We introduce {\sc VERVE} (Reflecti\underline{VE} \underline{R}ewriting for Moti\underline{V}ational Int\underline{E}rviewing), a framework based on template editing methods from text style transfer that do not require parallel data, since expert annotation of rewritten responses is expensive and time-consuming. We propose two simple techniques to adapt template-based text rewriting to the counseling domain: paraphrase-augmented training and adaptive template updating. The first helps the text generator to learn a more flexible mapping between a masked template and a full response so that the structure of the final rewrite is not constrained by the template. The second handles the content-edit trade-off (e.g., preserving part of the user response rather than completely rewriting) by iteratively updating the masked template based on the effect of the rewrite. We evaluate our framework against several baselines from previous text style transfer work using automatic evaluation and demonstrate that our system outperforms baselines in achieved reflection scores while still preserving content from the original response. Related Work Our work builds upon previous work in text style transfer, text rewriting, and NLP for counseling.
Broadly, counselor response rewriting is related to text rewriting in NLP, which includes text style transfer, content debiasing, and controlled generation <|cite_start|> (Reference: Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer: We consider the task of text attribute transfer: transforming a sentence to alter a specific attribute (e.g., sentiment) while preserving its attribute-independent content (e.g., changing "screen is just the right size" to "screen is too small"). Our training data includes only sentences labeled with their attribute (e.g., positive or negative), but not pairs of sentences that differ only in their attributes, so we must learn to disentangle attributes from attribute-independent content in an unsupervised way. Previous work using adversarial methods has struggled to produce high-quality outputs. In this paper, we propose simpler methods motivated by the observation that text attributes are often marked by distinctive phrases (e.g., "too small"). Our strongest method extracts content words by deleting phrases associated with the sentence's original attribute value, retrieves new phrases associated with the target attribute, and uses a neural model to fluently combine these into a final output. On human evaluation, our best method generates grammatical and appropriate responses on 22% more inputs than the best previous system, averaged over three attribute transfer datasets: altering sentiment of reviews on Yelp, altering sentiment of reviews on Amazon, and altering image captions to be more romantic or humorous.) <|cite_end|> <|cite_start|> (Reference: Politeness Transfer: A Tag and Generate Approach: This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code is located at https://github.com/tag-and-generate.) <|cite_end|>. In this work, we focus on rewriting through template-based editing (or prototype-based in other text style transfer literature <|cite_start|> (Reference: Deep Learning for Text Style Transfer: A Survey: Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and recently has re-gained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey) <|cite_end|>). These systems offer several advantages over alternative frameworks such as latent style transfer or LLM-based methods <|cite_start|> (Reference: Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation: Disentangling the content and style in the latent space is prevalent in unpaired text style transfer. However, two major issues exist in most of the current neural models. 1) It is difficult to completely strip the style information from the semantics for a sentence. 2) The recurrent neural network (RNN) based encoder and decoder, mediated by the latent representation, cannot well deal with the issue of the long-term dependency, resulting in poor preservation of non-stylistic semantic content. In this paper, we propose the Style Transformer, which makes no assumption about the latent representation of source sentence and equips the power of attention mechanism in Transformer to achieve better style transfer and better content preservation.) <|cite_end|> <|cite_start|> (Reference: Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction: A proven therapeutic technique to overcome negative thoughts is to replace them with a more hopeful "reframed thought." Although therapy can help people practice and learn this Cognitive Reframing of Negative Thoughts, clinician shortages and mental health stigma commonly limit people's access to therapy. In this paper, we conduct a human-centered study of how language models may assist people in reframing negative thoughts. Based on psychology literature, we define a framework of seven linguistic attributes that can be used to reframe a thought. We develop automated metrics to measure these attributes and validate them with expert judgements from mental health practitioners. We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model that effectively generates reframed thoughts and controls their linguistic attributes. To investigate what constitutes a "high-quality" reframe, we conduct an IRB-approved randomized field study on a large mental health website with over 2,000 participants. Amongst other findings, we show that people prefer highly empathic or specific reframes, as opposed to reframes that are overly positive. Our findings provide key implications for the use of LMs to assist people in overcoming negative thoughts.) <|cite_end|>. First, template-based editing systems offer high interpretability as they rely on predefined templates or patterns. Users can have precise control over the editing process by selecting specific templates or designing new ones. This allows for easier understanding and manipulation of the output, which is particularly important in applications where transparency is valued. Second, content preservation is another advantage of prototype-based editing, since the template generation process can be controlled to vary the amount of original content preserved in the rewrite.
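To make the delete-and-fill pattern shared by these template-based editing systems concrete, here is a minimal, generic sketch; the attribute lexicon, mask token, and example strings are illustrative placeholders rather than VERVE's actual components.

```python
# Generic sketch of the delete-and-fill pattern behind template-based editing:
# mask attribute-bearing tokens to form a template, then let a fill-in model
# rewrite the masked spans in the target style. Lexicon and strings below are
# illustrative placeholders, not components of any cited system.
from typing import Callable, List

def make_template(tokens: List[str],
                  is_attribute_token: Callable[[str], bool],
                  mask: str = "<mask>") -> List[str]:
    """Replace attribute-bearing tokens with mask symbols, keeping content."""
    return [mask if is_attribute_token(t) else t for t in tokens]

# Toy attribute marker: in practice this would be a trained scorer that flags
# tokens associated with the source style (e.g., advice-giving language).
ADVICE_MARKERS = {"should", "must", "try", "need"}

def is_advice(tok: str) -> bool:
    return tok.lower() in ADVICE_MARKERS

response = "You should try harder to quit smoking".split()
template = make_template(response, is_advice)
print(" ".join(template))
# -> "You <mask> <mask> harder to quit smoking"

# A fill-in generator (e.g., a fine-tuned seq2seq model) would then produce a
# target-style rewrite from the template, for instance:
#   "It sounds like quitting smoking has been harder than you expected."
```

The split into a lexical masking step and a generation step is what gives these systems their interpretability and content-preservation knobs: how aggressively tokens are masked directly controls how much of the original response survives.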
An important difference from previous studies is that we address text rewriting in a dialog context, whereas prior studies are mostly concerned with transforming isolated text, such as product reviews <|cite_start|> (Reference: Evaluating Style Transfer for Text: Research in the area of style transfer for text is currently bottlenecked by a lack of standard evaluation practices. This paper aims to alleviate this issue by experimentally identifying best practices with a Yelp sentiment dataset. We specify three aspects of interest (style transfer intensity, content preservation, and naturalness) and show how to obtain more reliable measures of them from human evaluation than in previous work. We propose a set of metrics for automated evaluation and demonstrate that they are more strongly correlated and in agreement with human judgment: direction-corrected Earth Mover's Distance, Word Mover's Distance on style-masked texts, and adversarial classification for the respective aspects. We also show that the three examined models exhibit tradeoffs between aspects of interest, demonstrating the importance of evaluating style transfer models at specific points of their tradeoff plots. We release software with our evaluation metrics to facilitate research.) <|cite_end|>. Since counseling reflections often include empathy <|cite_start|> (Reference: More than reflections: empathy in motivational interviewing includes language style synchrony between therapist and client.: ) <|cite_end|>, empathetic text generation and rewriting are also relevant. While most of the empathetic generation literature focuses on modeling emotion for generating responses from scratch, <|cite_start|> (Reference: Towards Facilitating Empathic Conversations in Online Mental Health Support: A Reinforcement Learning Approach: Online peer-to-peer support platforms enable conversations between millions of people who seek and provide mental health support. If successful, web-based mental health conversations could improve access to treatment and reduce the global disease burden. Psychologists have repeatedly demonstrated that empathy, the ability to understand and feel the emotions and experiences of others, is a key component leading to positive outcomes in supportive conversations. However, recent studies have shown that highly empathic conversations are rare in online mental health platforms. In this paper, we work towards improving empathy in online mental health support conversations. We introduce a new task of empathic rewriting which aims to transform low-empathy conversational posts to higher empathy. Learning such transformations is challenging and requires a deep understanding of empathy while maintaining conversation quality through text fluency and specificity to the conversational context. Here we propose PARTNER, a deep reinforcement learning agent that learns to make sentence-level edits to posts in order to increase the expressed level of empathy while maintaining conversation quality. Our RL agent leverages a policy network, based on a transformer language model adapted from GPT-2, which performs the dual task of generating candidate empathic sentences and adding those sentences at appropriate positions. During training, we reward transformations that increase empathy in posts while maintaining text fluency, context specificity and diversity.
Through a combination of automatic and human evaluation, we demonstrate that PARTNER successfully generates more empathic, specific, and diverse responses and outperforms NLP methods from related tasks like style transfer and empathic dialogue generation. Our work has direct implications for facilitating empathic conversations on web-based platforms.) <|cite_end|> directly models multiple aspects of empathy and applies reinforcement learning (RL)-based training for rewriting online mental health comments. Similarly, we leverage a classifier model for discriminating attribute labels of text but use simple supervised learning instead of policy gradient RL training (a toy sketch of such a classifier appears at the end of this excerpt). Our work is also related to recent NLP work in the counseling domain that aims to assist counselors during their practice and ongoing training. Reflection is an important construct in counseling strategies such as MI, and previous work has studied how the frequency or quality of reflections can be used to evaluate counseling <|cite_start|> (Reference: Predicting Counselor Behaviors in Motivational Interviewing Encounters: As the number of people receiving psycho-therapeutic treatment increases, the automatic evaluation of counseling practice arises as an important challenge in the clinical domain. In this paper, we address the automatic evaluation of counseling performance by analyzing counselors’ language during their interaction with clients. In particular, we present a model towards the automation of Motivational Interviewing (MI) coding, which is the current gold standard to evaluate MI counseling. First, we build a dataset of hand labeled MI encounters; second, we use text-based methods to extract and analyze linguistic patterns associated with counselor behaviors; and third, we develop an automatic system to predict these behaviors. We introduce a new set of features based on semantic information and syntactic patterns, and show that they lead to accuracy figures of up to 90%, which represent a significant improvement with respect to features used in the past.) <|cite_end|> <|cite_start|> (Reference: "am i a good therapist?" automated evaluation of psychotherapy skills using speech and language technologies: With the growing prevalence of psychological interventions, it is vital to have measures which rate the effectiveness of psychological care, in order to assist in training, supervision, and quality assurance of services. Traditionally, quality assessment is addressed by human raters who evaluate recorded sessions along specific dimensions, often codified through constructs relevant to the approach and domain. This is however a cost-prohibitive and time-consuming method which leads to poor feasibility and limited use in real-world settings. To facilitate this process, we have developed an automated competency rating tool able to process the raw recorded audio of a session, analyzing who spoke when, what they said, and how the health professional used language to provide therapy. Focusing on a use case of a specific type of psychotherapy called Motivational Interviewing, our system gives comprehensive feedback to the therapist, including information about the dynamics of the session (e.g., therapist’s vs. client’s talking time), low-level psychological language descriptors (e.g., type of questions asked), as well as other high-level behavioral constructs (e.g., the extent to which the therapist understands the clients’ perspective).
We describe our platform and its performance, using a dataset of more than 5,000 recordings drawn from its deployment in a real-world clinical setting used to assist training of new therapists. We are confident that a widespread use of automated psychotherapy rating tools in the near future will augment experts’ capabilities by providing an avenue for more e ff ective training and skill improvement and will eventually lead to more positive clinical outcomes.) <|cite_end|> <|cite_start|> (Reference: Local dynamic mode of Cognitive Behavioral Therapy: In order to increase mental health equity among the most vulnerable and marginalized communities, it is important to increase access to high-quality therapists. One facet of addressing these needs, is to provide timely feedback to clinicians as they interact with their clients, in a way that is also contextualized to specific clients and interactions they have had. Dynamical systems provide a framework through which to analyze interactions. The present work applies these methods to the domain of automated psychotherapist evaluation for Cognitive Behavioral Therapy (CBT). Our methods extract local dynamic modes from short windows of conversation and learns to correlate the observed dynamics to CBT competence. The results demonstrate the value of this paradigm and outlines the way in which these methods can be used to study and improve therapeutic strategies.) <|cite_end|>. There also have been studies on generating reflections <|cite_start|> (Reference: Counseling-style reflection generation using generative pretrained transformers with augmented context: We introduce a counseling dialogue system that seeks to assist counselors while they are learning and refining their counseling skills. The system generates counselors’reflections – i.e., responses that reflect back on what the client has said given the dialogue history. Our method builds upon the new generative pretrained transformer architecture and enhances it with context augmentation techniques inspired by traditional strategies used during counselor training. Through a set of comparative experiments, we show that the system that incorporates these strategies performs better in the reflection generation task than a system that is just fine-tuned with counseling conversations. To confirm our findings, we present a human evaluation study that shows that our system generates naturally-looking reflections that are also stylistically and grammatically correct.) <|cite_end|> <|cite_start|> (Reference: Knowledge Enhanced Reflection Generation for Counseling Dialogues: In this paper, we study the effect of commonsense and domain knowledge while generating responses in counseling conversations using retrieval and generative methods for knowledge integration. We propose a pipeline that collects domain knowledge through web mining, and show that retrieval from both domain-specific and commonsense knowledge bases improves the quality of generated responses. We also present a model that incorporates knowledge generated by COMET using soft positional encoding and masked self-attention.We show that both retrieved and COMET-generated knowledge improve the system’s performance as measured by automatic metrics and also by human evaluation. Lastly, we present a comparative study on the types of knowledge encoded by our system showing that causal and intentional relationships benefit the generation task more than other types of commonsense relations.) <|cite_end|>. 
However, to the best of our knowledge, our work is the first to consider rewriting non-reflections into reflections. <|paper_end|>
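The related-work contrast drawn above — an attribute classifier that discriminates reflection labels, combined with plain supervised learning rather than PARTNER-style policy-gradient RL — can be made concrete with a minimal, hypothetical sketch. The checkpoints, the toy training pair, and the classifier-based reranking at inference time are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: supervised seq2seq rewriting of non-reflections into
# reflections, with a separate attribute classifier used to rank candidates.
# Checkpoints and the toy example are placeholders, not the paper's models/data.
import torch
from transformers import (AutoModelForSeq2SeqLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

tok = AutoTokenizer.from_pretrained("t5-small")  # placeholder rewriter backbone
rewriter = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One (dialog context + non-reflection) -> reflection pair; real training
# would iterate over a parallel corpus of such pairs.
src = ("rewrite as reflection: client: I keep fighting with my partner. "
       "counselor: Why don't you just leave them?")
tgt = "It sounds like the constant conflict with your partner is wearing you down."

enc = tok(src, return_tensors="pt", truncation=True)
labels = tok(tgt, return_tensors="pt", truncation=True).input_ids
loss = rewriter(**enc, labels=labels).loss  # plain cross-entropy; no policy gradient
loss.backward()  # one supervised step (optimizer omitted for brevity)

# Attribute classifier (reflection vs. non-reflection); in practice it would be
# fine-tuned on utterances labeled for the reflection attribute.
clf_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

with torch.no_grad():
    cand_ids = rewriter.generate(**enc, num_beams=4, num_return_sequences=4,
                                 max_new_tokens=48)
    cands = tok.batch_decode(cand_ids, skip_special_tokens=True)
    scores = clf(**clf_tok(cands, return_tensors="pt", padding=True,
                           truncation=True)).logits[:, 1]
print(cands[int(scores.argmax())])  # keep the most reflection-like candidate
```

The reranking step is only one plausible way to use the classifier at inference time; the passage above states only that a classifier supplies attribute labels while training remains purely supervised.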
[ "<|reference_start|> Politeness Transfer: A Tag and Generate Approach: This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code is located at https://github.com/tag-and-generate. <|reference_end|>", "<|reference_start|> Deep Learning for Text Style Transfer: A Survey: Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing, and recently has re-gained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey <|reference_end|>", "<|reference_start|> More than reflections: empathy in motivational interviewing includes language style synchrony between therapist and client.: <|reference_end|>", "<|reference_start|> Counseling-style reflection generation using generative pretrained transformers with augmented context: We introduce a counseling dialogue system that seeks to assist counselors while they are learning and refining their counseling skills. The system generates counselors’reflections – i.e., responses that reflect back on what the client has said given the dialogue history. Our method builds upon the new generative pretrained transformer architecture and enhances it with context augmentation techniques inspired by traditional strategies used during counselor training. Through a set of comparative experiments, we show that the system that incorporates these strategies performs better in the reflection generation task than a system that is just fine-tuned with counseling conversations. To confirm our findings, we present a human evaluation study that shows that our system generates naturally-looking reflections that are also stylistically and grammatically correct. <|reference_end|>" ]
[ 8, 9, 13, 18 ]
{"<|cite_1|>": "ss-2512405", "<|cite_2|>": "ss-2512406", "<|cite_3|>": "ss-2512407", "<|multi_cite_4_1|>": "ss-1176679", "<|multi_cite_4_2|>": "ss-2077463", "<|multi_cite_5_1|>": "ss-821896", "<|multi_cite_5_2|>": "ss-2512408", "<|multi_cite_7_1|>": "arxiv-155337", "<|multi_cite_7_2|>": "arxiv-262361", "<|cite_8|>": "arxiv-300619", "<|multi_cite_9_1|>": "arxiv-204128", "<|multi_cite_9_2|>": "arxiv-501969", "<|cite_10|>": "arxiv-198180", "<|cite_11|>": "ss-834748", "<|cite_14|>": "arxiv-315947", "<|multi_cite_12_1|>": "ss-1970096", "<|multi_cite_12_2|>": "ss-2512407", "<|multi_cite_12_3|>": "arxiv-420736", "<|multi_cite_13_1|>": "ss-821896", "<|multi_cite_13_2|>": "ss-1586885"}
2404.04514
<|paper_start|> Title: Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models Abstract: Joint Visual and Text Prompting for Improved Object-Centric Perception with Multimodal Large Language Models: Multimodal Large Language Models (MLLMs) such as GPT-4V and Gemini Pro face challenges in achieving human-level perception in Visual Question Answering (VQA), particularly in object-oriented perception tasks which demand fine-grained understanding of object identities, locations or attributes, as indicated by empirical findings. This is mainly due to their limited capability to effectively integrate complex visual cues with textual information and potential object hallucinations. In this paper, we present a novel approach, Joint Visual and Text Prompting (VTPrompt), that employs fine-grained visual information to enhance the capability of MLLMs in VQA, especially for object-oriented perception. VTPrompt merges visual and text prompts to extract key concepts from textual questions and employs a detection model to highlight relevant objects as visual prompts in images. The processed images alongside text prompts are subsequently fed into MLLMs to produce more accurate answers. Our experiments with GPT-4V and Gemini Pro, on three benchmarks, i.e., MME, MMB and POPE, demonstrate significant improvements. Particularly, our method led to a score improvement of up to 183.5 for GPT-4V on MME and enhanced MMB performance by 8.17\% for GPT-4V and 15.69\% for Gemini Pro. Introduction A long-term yet challenging goal in AI is to achieve human-level perception with multimodal vision and textual information <|cite_start|> (Reference: Research on visual question answering based on dynamic memory network model of multiple attention mechanisms: ) <|cite_end|> <|cite_start|> (Reference: Towards Transparent AI Systems: Interpreting Visual Question Answering Models: Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in the recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques -- guided backpropagation and occlusion -- to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We found that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.) <|cite_end|> <|cite_start|> (Reference: Visual Question Answering: A Survey on Techniques and Common Trends in Recent Literature: Visual Question Answering (VQA) is an emerging area of interest for researches, being a recent problem in natural language processing and image prediction. In this area, an algorithm needs to answer questions about certain images. As of the writing of this survey, 25 recent studies were analyzed. Besides, 6 datasets were analyzed and provided their link to download.
In this work, several recent pieces of research in this area were investigated and a deeper analysis and comparison among them were provided, including results, the state-of-the-art, common errors, and possible points of improvement for future researchers.) <|cite_end|> <|cite_start|> (Reference: VQA: Visual Question Answering: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).) <|cite_end|>. In this endeavor, the Visual Question Answering (VQA) task stands out as a pivotal benchmark, which evaluates the ability of AI systems to analyze and interpret both visual and textual information to generate responses <|cite_start|> (Reference: Visual Question Answering: A Survey of Methods and Datasets: Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities. Given an image and a question in natural language, it requires reasoning over visual elements of the image and general knowledge to infer the correct answer. In the first part of this survey, we examine the state of the art by comparing modern approaches to the problem. We classify methods by their mechanism to connect the visual and textual modalities. In particular, we examine the common approach of combining convolutional and recurrent neural networks to map images and questions to a common feature space. We also discuss memory-augmented and modular architectures that interface with structured knowledge bases. In the second part of this survey, we review the datasets available for training and evaluating VQA systems. The various datatsets contain questions at different levels of complexity, which require different capabilities and types of reasoning. We examine in depth the question/answer pairs from the Visual Genome project, and evaluate the relevance of the structured annotations of images with scene graphs for VQA. Finally, we discuss promising future directions for the field, in particular the connection to structured knowledge bases and the use of natural language processing models.) <|cite_end|>. Recently, Multimodal Large Language Models (MLLMs), such as GPT-4V <|cite_start|> (Reference: GPT-4V(ision) system card: GPT-4 with vision (GPT-4V) enables users to instruct GPT-4 to analyze image inputs provided by the user, and is the latest capability we are making broadly available.
Incorporating additional modalities (such as image inputs) into large language models (LLMs) is viewed by some as a key frontier in artificial intelligence research and development [1, 2, 3]. Multimodal LLMs offer the possibility of expanding the impact of language-only systems with novel interfaces and capabilities, enabling them to solve new tasks and provide novel experiences for their users. In this system card, [4, 5] 1 we analyze the safety properties of GPT-4V. Our work on safety for GPT-4V builds on the work done for GPT-4 [7] and here we dive deeper into the evaluations, preparation, and mitigation work done specifically for image inputs. Similar to GPT-4, training of GPT-4V was completed in 2022 and we began providing early access to the system in March 2023. As GPT-4 is the technology behind the visual capabilities of GPT-4V, its training process was the same. The pre-trained model was first trained to predict the next word in a document, using a large dataset of text and image data from the Internet as well as licensed sources of data. It was then fine-tuned with additional data, using an algorithm called reinforcement learning from human feedback (RLHF),[8, 9] to produce outputs that are preferred by human trainers. Large multimodal models introduce different limitations and expand the risk surface compared to text-based language models. GPT-4V possesses the limitations and capabilities of each modality (text and vision), while at the same time presenting novel capabilities emerging from the intersection of said modalities and from the intelligence and reasoning afforded by large scale models. This system card outlines how OpenAI prepared the vision capabilities of GPT-4 for deployment. It describes the early access period of the model for small scale users and safety learnings OpenAI gained from this period, multimodal evaluations built to study the model's fitness for deployment, key findings of expert red teamers, and the mitigations OpenAI implemented prior to broad release.) <|cite_end|>, Gemini Pro, LLaVA <|cite_start|> (Reference: Visual Instruction Tuning: Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.Our early experiments show that LLaVA demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.) <|cite_end|> and MiniGPT-v2 <|cite_start|> (Reference: MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning: Large language models have shown their remarkable capabilities as a general interface for various language-related applications.
Motivated by this, we target to build a unified interface for completing many vision-language tasks including image description, visual question answering, and visual grounding, among others. The challenge is to use a single model for performing diverse vision-language tasks effectively with simple multi-modal instructions. Towards this objective, we introduce MiniGPT-v2, a model that can be treated as a unified interface for better handling various vision-language tasks. We propose using unique identifiers for different tasks when training the model. These identifiers enable our model to better distinguish each task instruction effortlessly and also improve the model learning efficiency for each task. After the three-stage training, the experimental results show that MiniGPT-v2 achieves strong performance on many visual question-answering and visual grounding benchmarks compared to other vision-language generalist models. Our model and codes are available at https://minigpt-v2.github.io/) <|cite_end|>, have demonstrated promising capability in VQA tasks. However, our empirical evaluation of Gemini Pro on MMB's multimodal perception tasks, shown in Figure \ref{fig:mmb}, indicates its inferior performance on object-oriented tasks, such as object localization, spatial relationships, and attribute comparison. Consistent findings for GPT-4V, shown in the Appendix, highlight a specific weakness of MLLMs in handling object-oriented tasks. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{mmb1.png} \caption{Performance of Gemini Pro on MMB <|cite_start|> (Reference: MMBench: Is Your Multi-modal Model an All-around Player?: Large vision-language models (VLMs) have recently achieved remarkable progress, exhibiting impressive multimodal perception and reasoning abilities. However, effectively evaluating these large VLMs remains a major challenge, hindering future development in this domain. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but lack fine-grained ability assessment and robust evaluation metrics. Meanwhile, subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, which is not scalable and may display significant bias. In response to these challenges, we propose MMBench, a bilingual benchmark for assessing the multi-modal capabilities of VLMs. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of the following key features: 1. MMBench is meticulously curated with well-designed quality control schemes, surpassing existing similar benchmarks in terms of the number and variety of evaluation questions and abilities; 2. MMBench introduces a rigorous CircularEval strategy and incorporates large language models to convert free-form predictions into pre-defined choices, which helps to yield accurate evaluation results for models with limited instruction-following capabilities. 3. MMBench incorporates multiple-choice questions in both English and Chinese versions, enabling an apples-to-apples comparison of VLMs' performance under a bilingual context. To summarize, MMBench is a systematically designed objective benchmark for a robust and holistic evaluation of vision-language models. We hope MMBench will assist the research community in better evaluating their models and facilitate future progress in this area. The evalutation code of MMBench has been integrated into VLMEvalKit: https://github.com/open-compass/VLMEvalKit.) <|cite_end|>.
The inferior performance on the three object-oriented tasks (left-most) can be boosted with our VTPrompt. We also present the results based on GPT-4V in the Appendix Figure \ref{fig:mmb4v}.} \label{fig:mmb} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=1\linewidth]{pipeline4.pdf} \caption{ \textbf{(a)} Regular VQA with GPT-4V generating wrong answers. \textbf{(b-c)} Pipeline of our VTPrompt. The \protect\tikz[baseline=-0.5ex]\protect\draw[fill=customPink] (0,0) rectangle (0.5,0.2); represents the \textbf{Key Concepts Extraction}, corresponding to Section \ref{sec:2.21}, and the \protect\tikz[baseline=-0.5ex]\protect\draw[fill=customYellow] (0,0) rectangle (0.5,0.2); illustrates the \textbf{VPrompt Generation}, as detailed in Section \ref{sec:2.3}. The generated image with visual markers from \textbf{(b)} is processed in \textbf{(c)}, which focuses on \textbf{TPrompt for Answer Generation} as in Section \ref{sec:2.4}, where the visually marked image and the text prompt are combined and fed into GPT-4V to produce the answers, as indicated by \protect\tikz[baseline=-0.5ex]\protect\draw[fill=customGreen] (0,0) rectangle (0.5,0.2);.} \label{fig:Pipeline} \vspace{-5mm} \end{figure*} Resolving object-oriented tasks with MLLMs remains challenging. On one hand, existing MLLMs usually experience difficulties with effective and accurate visual grounding and interpretation <|cite_start|> (Reference: Perception Matters: Detecting Perception Failures of VQA Models Using Metamorphic Testing: Visual question answering (VQA) takes an image and a natural-language question as input and returns a natural-language answer. To date, VQA models are primarily assessed by their accuracy on high-level reasoning questions. Nevertheless, Given that perception tasks (e.g., recognizing objects) are the building blocks in the compositional process required by high-level reasoning, there is a demanding need to gain insights into how much of a problem low-level perception is. Inspired by the principles of software metamorphic testing, we introduce MetaVQA, a model-agnostic framework for benchmarking perception capability of VQA models. Given an image i, MetaVQA is able to synthesize a low-level perception question q. It then jointly transforms (i, q) to one or a set of sub-questions and sub-images. MetaVQA checks whether the answer to (i, q) satisfies metamorphic relationships (MRs), denoting perception consistency, with the composed answers of transformed questions and images. Violating MRs denotes a failure of answering perception questions. MetaVQA successfully detects over 4.9 million perception failures made by popular VQA models with metamorphic testing. The state-of-the-art VQA models (e.g., the champion of VQA 2020 Challenge) suffer from perception consistency problems. In contrast, the Oscar VQA models, by using anchor points to align questions and images, show generally better consistency in perception tasks. We hope MetaVQA will revitalize interest in enhancing the low-level perceptual abilities of VQA models, a cornerstone of high-level reasoning.) <|cite_end|>, e.g., they may not be able to accurately find or locate the critical objects which are essential to correctly answer the question, as shown in Figure~\ref{fig:case1} in the Appendix. On the other hand, there is a tendency towards object hallucination <|cite_start|> (Reference: HallusionBench: You See What You Think? Or You Think What You See?
An Image-Context Reasoning Benchmark Challenging for GPT-4V (ision), LLaVA-1.5, and Other Multi-modality Models: Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvement in image reasoning tasks. This was shown by the recently released GPT-4V(ison), LLaVA-1.5, etc. However, the strong language prior in these SOTA LVLMs can be a double-edged sword: they may ignore the image context and solely rely on the (even contradictory) language prior for reasoning. In contrast, the vision modules in VLMs are weaker than LLMs and may result in misleading visual representations, which are then translated to confident mistakes by LLMs. To study these two types of VLM mistakes, i.e., language hallucination and visual illusion , we curated “H ALLUSION B ENCH 1 ,” an image-context reasoning benchmark that is still challenging to even GPT-4V and LLaVA-1.5. We provide a detailed analysis of examples in H ALLUSION B ENCH , which sheds novel insights on the illusion or hallucination of VLMs and how to improve them in the future. The benchmark and codebase will be released at https://github.com/tianyi-lab/HallusionBench.) <|cite_end|> <|cite_start|> (Reference: Unified Hallucination Detection for Multimodal Large Language Models: Despite significant strides in multimodal tasks, Multimodal Large Language Models (MLLMs) are plagued by the critical issue of hallucination. The reliable detection of such hallucinations in MLLMs has, therefore, become a vital aspect of model evaluation and the safeguarding of practical application deployment. Prior research in this domain has been constrained by a narrow focus on singular tasks, an inadequate range of hallucination categories addressed, and a lack of detailed granularity. In response to these challenges, our work expands the investigative horizons of hallucination detection. We present a novel meta-evaluation benchmark, MHaluBench, meticulously crafted to facilitate the evaluation of advancements in hallucination detection methods. Additionally, we unveil a novel unified multimodal hallucination detection framework, UNIHD, which leverages a suite of auxiliary tools to validate the occurrence of hallucinations robustly. We demonstrate the effectiveness of UNIHD through meticulous evaluation and comprehensive analysis. We also provide strategic insights on the application of specific tools for addressing various categories of hallucinations.) <|cite_end|> <|cite_start|> (Reference: Exploring Boundary of GPT-4V on Marine Analysis: A Preliminary Case Study: Large language models (LLMs) have demonstrated a powerful ability to answer various queries as a general-purpose assistant. The continuous multi-modal large language models (MLLM) empower LLMs with the ability to perceive visual signals. The launch of GPT-4 (Generative Pre-trained Transformers) has generated significant interest in the research communities. GPT-4V(ison) has demonstrated significant power in both academia and industry fields, as a focal point in a new artificial intelligence generation. Though significant success was achieved by GPT-4V, exploring MLLMs in domain-specific analysis (e.g., marine analysis) that required domain-specific knowledge and expertise has gained less attention. In this study, we carry out the preliminary and comprehensive case study of utilizing GPT-4V for marine analysis. 
This report conducts a systematic evaluation of existing GPT-4V, assessing the performance of GPT-4V on marine research and also setting a new standard for future developments in MLLMs. The experimental results of GPT-4V show that the responses generated by GPT-4V are still far away from satisfying the domain-specific requirements of the marine professions. All images and prompts used in this study will be available at https://github.com/hkust-vgd/Marine_GPT-4V_Eval) <|cite_end|> <|cite_start|> (Reference: Woodpecker: Hallucination Correction for Multimodal Large Language Models: Hallucination is a big shadow hanging over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content. In order to mitigate hallucinations, existing studies mainly resort to an instruction-tuning manner that requires retraining the models with specific data. In this paper, we pave a different way, introducing a training-free method named Woodpecker. Like a woodpecker heals trees, it picks out and corrects hallucinations from the generated text. Concretely, Woodpecker consists of five stages: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction. Implemented in a post-remedy manner, Woodpecker can easily serve different MLLMs, while being interpretable by accessing intermediate outputs of the five stages. We evaluate Woodpecker both quantitatively and qualitatively and show the huge potential of this new paradigm. On the POPE benchmark, our method obtains a 30.66%/24.33% improvement in accuracy over the baseline MiniGPT-4/mPLUG-Owl. The source code is released at https://github.com/BradyFU/Woodpecker.) <|cite_end|>, where MLLMs might perceive objects that are not present, compounding the difficulty in achieving precise object-oriented perception. For instance, as shown in Figure~\ref{fig:Pipeline}(a), the MLLM failed to count the number of persons in the image due to incorrect object recognition. In this paper, we introduce VTPrompt, a novel approach that significantly enhances MLLMs' object-oriented perception by integrating both visual and textual prompts. As illustrated in Figure~\ref{fig:Pipeline}, VTPrompt first extracts key concepts from the textual question, which are employed to guide a detection model, e.g., SPHINX <|cite_start|> (Reference: SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models: We present SPHINX, a versatile multi-modal large language model (MLLM) with a joint mixing of model weights, tuning tasks, and visual embeddings. First, for stronger vision-language alignment, we unfreeze the large language model (LLM) during pre-training, and introduce a weight mix strategy between LLMs trained by real-world and synthetic data. By directly integrating the weights from two domains, the mixed LLM can efficiently incorporate diverse semantics with favorable robustness. Then, to enable multi-purpose capabilities, we mix a variety of tasks for joint visual instruction tuning, and design task-specific instructions to avoid inter-task conflict. In addition to the basic visual question answering, we include more challenging tasks such as region-level understanding, caption grounding, document layout detection, and human pose estimation, contributing to mutual enhancement over different scenarios.
Additionally, we propose to extract comprehensive visual embeddings from various network architectures, pre-training paradigms, and information granularity, providing language models with more robust image representations. Based on our proposed joint mixing, SPHINX exhibits superior multi-modal understanding capabilities on a wide range of applications. On top of this, we further propose an efficient strategy aiming to better capture fine-grained appearances of high-resolution images. With a mixing of different scales and high-resolution sub-images, SPHINX attains exceptional visual parsing and reasoning performance on existing evaluation benchmarks. We hope our work may cast a light on the exploration of joint mixing in future MLLM research. Code is released at https://github.com/Alpha-VLLM/LLaMA2-Accessory.) <|cite_end|> or SAM <|cite_start|> (Reference: Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.) <|cite_end|>, for precise object marking. This not only ensures accurate localization but also enriches the model's interpretive capabilities through text prompts that encapsulate the question, refined with visual cues. The refined image with visual markers is fed into the MLLMs, which are further guided by the optimized text prompts to derive a meaningful understanding of the bounding boxes and the overall image, together with fine-grained object perception, to obtain the final answer. We evaluate VTPrompt on three popular benchmarks, i.e., MME <|cite_start|> (Reference: MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models: Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
The data application manner and online leaderboards are released at https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation.) <|cite_end|>, MMB <|cite_start|> (Reference: MMBench: Is Your Multi-modal Model an All-around Player?: Large vision-language models (VLMs) have recently achieved remarkable progress, exhibiting impressive multimodal perception and reasoning abilities. However, effectively evaluating these large VLMs remains a major challenge, hindering future development in this domain. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but lack fine-grained ability assessment and robust evaluation metrics. Meanwhile, subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, which is not scalable and may display significant bias. In response to these challenges, we propose MMBench, a bilingual benchmark for assessing the multi-modal capabilities of VLMs. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of the following key features: 1. MMBench is meticulously curated with well-designed quality control schemes, surpassing existing similar benchmarks in terms of the number and variety of evaluation questions and abilities; 2. MMBench introduces a rigorous CircularEval strategy and incorporates large language models to convert free-form predictions into pre-defined choices, which helps to yield accurate evaluation results for models with limited instruction-following capabilities. 3. MMBench incorporates multiple-choice questions in both English and Chinese versions, enabling an apples-to-apples comparison of VLMs' performance under a bilingual context. To summarize, MMBench is a systematically designed objective benchmark for a robust and holistic evaluation of vision-language models. We hope MMBench will assist the research community in better evaluating their models and facilitate future progress in this area. The evalutation code of MMBench has been integrated into VLMEvalKit: https://github.com/open-compass/VLMEvalKit.) <|cite_end|>, POPE <|cite_start|> (Reference: Evaluating Object Hallucination in Large Vision-Language Models: Inspired by the superior language abilities of large language models (LLM), large vision-language models (LVLM) have been recently explored by integrating powerful LLMs for improving the performance on complex multimodal tasks. Despite the promising progress on LVLMs, we find that LVLMs suffer from the hallucination problem, i.e. they tend to generate objects that are inconsistent with the target images in the descriptions. To investigate it, this work presents the first systematic study on object hallucination of LVLMs. We conduct the evaluation experiments on several representative LVLMs, and show that they mostly suffer from severe object hallucination issue. We further discuss that the visual instructions may influence the hallucination, and find that: objects that frequently occur in the visual instructions or co-occur with the image objects, are obviously prone to be hallucinated by LVLMs. Besides, we find that existing evaluation methods might be affected by the input instructions and generation styles of LVLMs. Thus, we further design an improved evaluation method for object hallucination by proposing a polling-based query method called POPE. Experiment results demonstrate that our POPE can evaluate the object hallucination in a more stable and flexible way.
Our codes and data are publicly available at https://github.com/RUCAIBox/POPE.) <|cite_end|>, with top-performing MLLMs, i.e., GPT-4V and Gemini Pro, and observe consistent performance enhancements. Notably, GPT-4V gains up to 183.5 points on the MME benchmark, which is known for its complexity. Additionally, performance on MMB improves by 8.17\% for GPT-4V and 15.69\% for Gemini Pro, establishing a new state of the art on this benchmark. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{mmbench_benchmark.pdf} \includegraphics[width=1\linewidth]{mme_benchmark.pdf} \caption{Performance of GPT-4V and Gemini Pro on Object-Oriented Perception Tasks in MMB and MME Benchmarks.} \label{fig:tableMMBench_MME} \end{figure} Related Work \subsection{VQA with MLLMs} Recent advancements in Large Language Models (LLMs) like ChatGPT, PaLM <|cite_start|> (Reference: PaLM: Scaling Language Modeling with Pathways: Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.) <|cite_end|>, OPT <|cite_start|> (Reference: OPT: Open Pre-trained Transformer Language Models: Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.)
<|cite_end|>, and BLOOM <|cite_start|> (Reference: BLOOM: A 176B-Parameter Open-Access Multilingual Language Model: Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.) <|cite_end|> have led to the development of Multimodal Large Language Models (MLLMs), including MiniGPT-4 <|cite_start|> (Reference: MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models: The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using one projection layer. Our work, for the first time, uncovers that properly aligning the visual features with an advanced large language model can possess numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, teaching users how to cook based on food photos, and so on. In our experiment, we found that the model trained on short image caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in the second stage to finetune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.) <|cite_end|>, InstructBLIP <|cite_start|> (Reference: InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning: Large-scale pre-training and instruction tuning have been successful at creating general-purpose language models with broad competence. However, building general-purpose vision-language models is challenging due to the rich input distributions and task diversity resulting from the additional visual input. Although vision-language pretraining has been widely studied, vision-language instruction tuning remains under-explored. 
In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pretrained BLIP-2 models. We gather 26 publicly available datasets, covering a wide variety of tasks and capabilities, and transform them into instruction tuning format. Additionally, we introduce an instruction-aware Query Transformer, which extracts informative features tailored to the given instruction. Trained on 13 held-in datasets, InstructBLIP attains state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and larger Flamingo models. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA questions with image contexts). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. All InstructBLIP models are open-sourced at https://github.com/salesforce/LAVIS/tree/main/projects/instructblip.) <|cite_end|>, LLaVA <|cite_start|> (Reference: Visual Instruction Tuning: Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.Our early experiments show that LLaVA demonstrates impressive multimodel chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields a 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make GPT-4 generated visual instruction tuning data, our model and code base publicly available.) <|cite_end|>, Shikra <|cite_start|> (Reference: Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic: In human conversations, individuals can indicate relevant regions within a scene while addressing others. In turn, the other person can then respond by referring to specific regions if necessary. This natural referential ability in dialogue remains absent in current Multimodal Large Language Models (MLLMs). To fill this gap, this paper proposes an MLLM called Shikra, which can handle spatial coordinate inputs and outputs in natural language. Its architecture consists of a vision encoder, an alignment layer, and a LLM. It is designed to be straightforward and simple, without the need for extra vocabularies, position encoder, pre-/post-detection modules, or external plug-in models. All inputs and outputs are in natural language form. Referential dialogue is a superset of various vision-language (VL) tasks. Shikra can naturally handle location-related tasks like REC and PointQA, as well as conventional VL tasks such as Image Captioning and VQA. Experimental results showcase Shikra's promising performance. Furthermore, it enables numerous exciting applications, like providing mentioned objects' coordinates in chains of thoughts and comparing user-pointed regions similarities. Our code, model and dataset are accessed at https://github.com/shikras/shikra.) 
<|cite_end|>, and PaLM-E <|cite_start|> (Reference: PaLM 2 Technical Report: We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities. When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.) <|cite_end|>. They combine language and vision through instruction tuning, enhancing performance in vision tasks. Research on improving MLLMs for Visual Question Answering (VQA) has focused on gradient-based <|cite_start|> (Reference: Prefix-Tuning: Optimizing Continuous Prompts for Generation: Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.) <|cite_end|> <|cite_start|> (Reference: SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer: There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks. Building on the Prompt Tuning approach of Lester et al. (2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPoT: Soft Prompt Transfer. SPoT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. 
We show that SPoT significantly boosts the performance of Prompt Tuning across many tasks. More remarkably, across all model sizes, SPoT matches or outperforms standard Model Tuning (which fine-tunes all model parameters) on the SuperGLUE benchmark, while using up to 27,000x fewer task-specific parameters. To understand where SPoT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer. Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task.) <|cite_end|> <|cite_start|> (Reference: PPT: Pre-trained Prompt Tuning for Few-shot Learning: Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks. Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides an efficient and effective solution for adapting large-scale PLMs to downstream tasks. However, prompt tuning is yet to be fully explored. In our pilot experiments, we find that prompt tuning performs comparably with conventional full-model fine-tuning when downstream data are sufficient, whereas it performs much worse under few-shot learning settings, which may hinder the application of prompt tuning in practice. We attribute this low performance to the manner of initializing soft prompts. Therefore, in this work, we propose to pre-train prompts by adding soft prompts into the pre-training stage to obtain a better initialization. We name this Pre-trained Prompt Tuning framework "PPT". To ensure the generalization of PPT, we formulate similar classification tasks into a unified task form and pre-train soft prompts for this unified task. Extensive experiments show that tuning pre-trained prompts for downstream tasks can reach or even outperform full-model fine-tuning under both full-data and few-shot settings. Our approach is effective and efficient for using large-scale PLMs in practice.) <|cite_end|> <|cite_start|> (Reference: GPT Understands, Too: Prompting a pretrained language model with natural language patterns has been proved effective for natural language understanding (NLU). However, our preliminary study reveals that manual discrete prompts often lead to unstable performance -- e.g., changing a single word in the prompt might result in substantial performance drop. We propose a novel method P-Tuning that employs trainable continuous prompt embeddings in concatenation with discrete prompts. Empirically, P-Tuning not only stabilizes training by minimizing the gap between various discrete prompts, but also improves performance by a sizeable margin on a wide range of NLU tasks including LAMA and SuperGLUE. P-Tuning is generally effective for both frozen and tuned language models, under both the fully-supervised and few-shot settings.) <|cite_end|> <|cite_start|> (Reference: ClipCap: CLIP Prefix for Image Captioning: Image captioning is a fundamental task in vision-language understanding, where the model predicts a textual informative caption to a given input image. In this paper, we present a simple approach to address this task. We use CLIP encoding as a prefix to the caption, by employing a simple mapping network, and then fine-tunes a language model to generate the image captions. 
The recently proposed CLIP model contains rich semantic features which were trained with textual context, making it best for vision-language perception. Our key idea is that together with a pre-trained language model (GPT2), we obtain a wide understanding of both visual and textual data. Hence, our approach only requires rather quick training to produce a competent captioning model. Without additional annotations or pre-training, it efficiently generates meaningful captions for large-scale and diverse datasets. Surprisingly, our method works well even when only the mapping network is trained, while both CLIP and the language model remain frozen, allowing a lighter architecture with less trainable parameters. Through quantitative evaluation, we demonstrate our model achieves comparable results to state-of-the-art methods on the challenging Conceptual Captions and nocaps datasets, while it is simpler, faster, and lighter. Our code is available in https://github.com/rmokady/CLIP_prefix_caption.) <|cite_end|> <|cite_start|> (Reference: Controllable Natural Language Generation with Contrastive Prefixes: To guide the generation of large pretrained language models (LM), previous work has focused on directly fine-tuning the language model or utilizing an attribute discriminator. In this work, we propose a novel lightweight framework for controllable GPT2 generation, which utilizes a set of small attribute-specific vectors, called prefixes, to steer natural language generation. Different from prefix-tuning, where each prefix is trained independently, we take the relationship among prefixes into consideration and train multiple prefixes simultaneously. We propose a novel supervised method and also an unsupervised method to train the prefixes for single-aspect control while the combination of these two methods can achieve multi-aspect control. Experimental results on both single-aspect and multi-aspect control show that our methods can guide generation towards the desired attributes while keeping high linguistic quality.) <|cite_end|> <|cite_start|> (Reference: Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models: Recently the prompt-tuning paradigm has attracted significant attention. By only tuning continuous prompts with a frozen pre-trained language model (PLM), prompt-tuning takes a step towards deploying a shared frozen PLM to serve numerous downstream tasks. Although prompt-tuning shows good performance on certain natural language understanding (NLU) tasks, its effectiveness on natural language generation (NLG) tasks is still under-explored. In this paper, we argue that one of the factors hindering the development of prompt-tuning on NLG tasks is the unfamiliar inputs (i.e., inputs are linguistically different from the pretraining corpus). For example, our preliminary exploration reveals a large performance gap between prompt-tuning and fine-tuning when unfamiliar inputs occur frequently in NLG tasks. This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs. Our proposed input-tuning is conceptually simple and empirically powerful. Experimental results on seven NLG tasks demonstrate that input-tuning is significantly and consistently better than prompt-tuning. Furthermore, on three of these tasks, input-tuning can achieve a comparable or even better performance than fine-tuning.) 
<|cite_end|> and prompt optimization methods <|cite_start|> (Reference: RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning: Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only few downstream data are available. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning soft prompt (e.g., embeddings) which falls short of interpretability, reusability across LMs, and applicability when gradients are not accessible. Discrete prompt, on the other hand, is difficult to optimize, and is often created by "enumeration (e.g., paraphrasing)-then-selection" heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the desired discrete prompt after training with reward. To overcome the complexity and stochasticity of reward signals by the large LM environment, we incorporate effective reward stabilization that substantially enhances the training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating LM prompting may not follow human language patterns.) <|cite_end|> <|cite_start|> (Reference: Multimodal Chain-of-Thought Reasoning in Language Models: Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have primarily focused on the language modality. We propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. Experimental results on ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach. With Multimodal-CoT, our model under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates that Multimodal-CoT offers the advantages of mitigating hallucination and enhancing convergence speed. Code is publicly available at https://github.com/amazon-science/mm-cot.) <|cite_end|>. The Multimodal Chain of Thought (MM-CoT) method <|cite_start|> (Reference: Multimodal Chain-of-Thought Reasoning in Language Models: Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have primarily focused on the language modality. We propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. 
In this way, answer inference can leverage better generated rationales that are based on multimodal information. Experimental results on ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach. With Multimodal-CoT, our model under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates that Multimodal-CoT offers the advantages of mitigating hallucination and enhancing convergence speed. Code is publicly available at https://github.com/amazon-science/mm-cot.) <|cite_end|> stands out by integrating visual and textual information within LLMs, achieving superior performance on reasoning tasks at the cost of increased training overhead. Our research seeks a cost-effective prompting strategy to enhance MLLMs' object-level perception in VQA tasks. \subsection{Visual Perception With MLLMs} Research on MLLMs' visual perception follows two main directions: Data Enhancement and Visual Integration Refinement. Data Enhancement, with works like SVIT <|cite_start|> (Reference: SVIT: Scaling up Visual Instruction Tuning: Thanks to the emerging of foundation models, the large language and vision models are integrated to acquire the multimodal ability of visual captioning, question answering, etc. Although existing multimodal models present impressive performance of visual understanding and reasoning, their limits are still largely under-explored due to the scarcity of high-quality instruction tuning data. To push the limits of multimodal capability, we Scale up Visual Instruction Tuning (SVIT) by constructing a dataset of 4.2 million visual instruction tuning data including 1.6M conversation question-answer (QA) pairs, 1.6M complex reasoning QA pairs, 1.0M referring QA pairs and 106K detailed image descriptions. Besides the volume, the proposed dataset is also featured by the high quality and rich diversity, which is generated by prompting GPT-4 with the abundant manual annotations of images. We also propose a new data recipe to select subset with better diversity and balance, which evokes model's superior capabilities. Extensive experiments verify that SVIT-v1.5, trained on the proposed dataset, outperforms state-of-the-art Multimodal Large Language Models on popular benchmarks. The data and code are publicly available at https://github.com/BAAI-DCAI/Visual-Instruction-Tuning.) <|cite_end|> and ShareGPT4V <|cite_start|> (Reference: ShareGPT4V: Improving Large Multi-Modal Models with Better Captions: In the realm of large multi-modal models (LMMs), efficient modality alignment is crucial yet often constrained by the scarcity of high-quality image-text data. To address this bottleneck, we introduce the ShareGPT4V dataset, a pioneering large-scale resource featuring 1.2 million highly descriptive captions, which surpasses existing datasets in diversity and information content, covering world knowledge, object properties, spatial relationships, and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated 100K high-quality captions collected from advanced GPT4-Vision and has been expanded to 1.2M with a superb caption model trained on this subset. ShareGPT4V first demonstrates its effectiveness for the Supervised Fine-Tuning (SFT) phase, by substituting an equivalent quantity of detailed captions in existing SFT datasets with a subset of our high-quality captions, significantly enhancing the LMMs like LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and 2.7/1.3/1.5.
We further incorporate ShareGPT4V data into both the pre-training and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple architecture that has remarkable performance across a majority of the multi-modal benchmarks. This project is available at https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the LMMs community.) <|cite_end|>, aims to improve visual comprehension by enriching datasets. Meanwhile, mPLUG-Owl2 <|cite_start|> (Reference: mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration: Multi-modal Large Language Models (MLLMs) have demonstrated impressive instruction abilities across various open-ended tasks. However, previous methods primarily focus on enhancing multi-modal capabilities. In this work, we introduce a versatile multi-modal large language model, mPLUG-Owl2, which effectively leverages modality collaboration to improve performance in both text and multi-modal tasks. mPLUG-Owl2 utilizes a modularized network design, with the language decoder acting as a universal interface for managing different modalities. Specifically, mPLUG-Owl2 incorporates shared functional modules to facilitate modality collaboration and introduces a modality-adaptive module that preserves modality-specific features. Extensive experiments reveal that mPLUG-Owl2 is capable of generalizing both text tasks and multi-modal tasks and achieving state-of-the-art performances with a single generic model. Notably, mPLUG-Owl2 is the first MLLM model that demonstrates the modality collaboration phenomenon in both pure-text and multi-modal scenarios, setting a pioneering path in the development of future multi-modal foundation models.) <|cite_end|> enhances modality collaboration for diverse data perception. In Visual Integration Refinement, LION <|cite_start|> (Reference: LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge: Multimodal Large Language Models (MLLMs) have endowed LLMs with the ability to perceive and understand multimodal signals. However, most of the existing MLLMs mainly adopt vision encoders pretrained on coarsely aligned image-text pairs, leading to insufficient extraction and reasoning of visual knowledge. To address this issue, we devise a dual-Level vIsual knOwledge eNhanced Multimodal Large Language Model (LION), which empowers the MLLM by injecting visual knowledge in two levels. 1) Progressive incorporation of fine-grained spatial-aware visual knowledge. We design a vision aggregator cooperated with region-level vision-language (VL) tasks to incorporate fine-grained spatial-aware visual knowledge into the MLLM. To alleviate the conflict between imagelevel and region-level VL tasks during incorporation, we devise a dedicated stage-wise instruction-tuning strategy with mixture-of-adapters. This progressive incorporation scheme contributes to the mutual promotion between these two kinds of VL tasks. 2) Soft prompting of high-level semantic visual evidence. We facilitate the MLLM with high-level semantic visual evidence by leveraging diverse image tags. To mitigate the potential influence caused by imper-fect predicted tags, we propose a soft prompting method by embedding a learnable token into the tailored text instruction. Comprehensive experiments on several multimodal benchmarks demonstrate the superiority of our model (e.g., improvement of 5% accuracy on VSR and 3% CIDEr on TextCaps over InstructBLIP, 5% accuracy on RefCOCOg over Kosmos-2).) 
<|cite_end|> introduces spatial awareness, and SPHINX <|cite_start|> (Reference: SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models: We present SPHINX, a versatile multi-modal large language model (MLLM) with a joint mixing of model weights, tuning tasks, and visual embeddings. First, for stronger vision-language alignment, we unfreeze the large language model (LLM) during pre-training, and introduce a weight mix strategy between LLMs trained by real-world and synthetic data. By directly integrating the weights from two domains, the mixed LLM can efficiently incorporate diverse semantics with favorable robustness. Then, to enable multi-purpose capabilities, we mix a variety of tasks for joint visual instruction tuning, and design task-specific instructions to avoid inter-task conflict. In addition to the basic visual question answering, we include more challenging tasks such as region-level understanding, caption grounding, document layout detection, and human pose estimation, contributing to mutual enhancement over different scenarios. Additionally, we propose to extract comprehensive visual embeddings from various network architectures, pre-training paradigms, and information granularity, providing language models with more robust image representations. Based on our proposed joint mixing, SPHINX exhibits superior multi-modal understanding capabilities on a wide range of applications. On top of this, we further propose an efficient strategy aiming to better capture fine-grained appearances of high-resolution images. With a mixing of different scales and high-resolution sub-images, SPHINX attains exceptional visual parsing and reasoning performance on existing evaluation benchmarks. We hope our work may cast a light on the exploration of joint mixing in future MLLM research. Code is released at https://github.com/Alpha-VLLM/LLaMA2-Accessory.) <|cite_end|> employs varied visual embeddings for richer visual knowledge integration. InternLM-XComposer2 <|cite_start|> (Reference: InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model: We introduce InternLM-XComposer2, a cutting-edge vision-language model excelling in free-form text-image composition and comprehension. This model goes beyond conventional vision-language understanding, adeptly crafting interleaved text-image content from diverse inputs like outlines, detailed textual specifications, and reference images, enabling highly customizable content creation. InternLM-XComposer2 proposes a Partial LoRA (PLoRA) approach that applies additional LoRA parameters exclusively to image tokens to preserve the integrity of pre-trained language knowledge, striking a balance between precise vision understanding and text composition with literary talent. Experimental results demonstrate the superiority of InternLM-XComposer2 based on InternLM2-7B in producing high-quality long-text multi-modal content and its exceptional vision-language understanding performance across various benchmarks, where it not only significantly outperforms existing multimodal models but also matches or even surpasses GPT-4V and Gemini Pro in certain assessments. This highlights its remarkable proficiency in the realm of multimodal understanding. The InternLM-XComposer2 model series with 7B parameters are publicly available at https://github.com/InternLM/InternLM-XComposer.) <|cite_end|> advances this by merging visual knowledge with text-image composition. 
However, these advances still struggle with detailed object-level perception, failing to fully capture the nuanced recognition and contextual understanding at which human perception excels. <|paper_end|>
[ "<|reference_start|> Exploring Boundary of GPT-4V on Marine Analysis: A Preliminary Case Study: Large language models (LLMs) have demonstrated a powerful ability to answer various queries as a general-purpose assistant. The continuous multi-modal large language models (MLLM) empower LLMs with the ability to perceive visual signals. The launch of GPT-4 (Generative Pre-trained Transformers) has generated significant interest in the research communities. GPT-4V(ison) has demonstrated significant power in both academia and industry fields, as a focal point in a new artificial intelligence generation. Though significant success was achieved by GPT-4V, exploring MLLMs in domain-specific analysis (e.g., marine analysis) that required domain-specific knowledge and expertise has gained less attention. In this study, we carry out the preliminary and comprehensive case study of utilizing GPT-4V for marine analysis. This report conducts a systematic evaluation of existing GPT-4V, assessing the performance of GPT-4V on marine research and also setting a new standard for future developments in MLLMs. The experimental results of GPT-4V show that the responses generated by GPT-4V are still far away from satisfying the domain-specific requirements of the marine professions. All images and prompts used in this study will be available at https://github.com/hkust-vgd/Marine_GPT-4V_Eval <|reference_end|>", "<|reference_start|> SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer: There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks. Building on the Prompt Tuning approach of Lester et al. (2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPoT: Soft Prompt Transfer. SPoT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. We show that SPoT significantly boosts the performance of Prompt Tuning across many tasks. More remarkably, across all model sizes, SPoT matches or outperforms standard Model Tuning (which fine-tunes all model parameters) on the SuperGLUE benchmark, while using up to 27,000x fewer task-specific parameters. To understand where SPoT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer. Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task. <|reference_end|>", "<|reference_start|> ClipCap: CLIP Prefix for Image Captioning: Image captioning is a fundamental task in vision-language understanding, where the model predicts a textual informative caption to a given input image. In this paper, we present a simple approach to address this task. We use CLIP encoding as a prefix to the caption, by employing a simple mapping network, and then fine-tunes a language model to generate the image captions. The recently proposed CLIP model contains rich semantic features which were trained with textual context, making it best for vision-language perception. Our key idea is that together with a pre-trained language model (GPT2), we obtain a wide understanding of both visual and textual data. 
Hence, our approach only requires rather quick training to produce a competent captioning model. Without additional annotations or pre-training, it efficiently generates meaningful captions for large-scale and diverse datasets. Surprisingly, our method works well even when only the mapping network is trained, while both CLIP and the language model remain frozen, allowing a lighter architecture with less trainable parameters. Through quantitative evaluation, we demonstrate our model achieves comparable results to state-of-the-art methods on the challenging Conceptual Captions and nocaps datasets, while it is simpler, faster, and lighter. Our code is available in https://github.com/rmokady/CLIP_prefix_caption. <|reference_end|>", "<|reference_start|> mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration: Multi-modal Large Language Models (MLLMs) have demonstrated impressive instruction abilities across various open-ended tasks. However, previous methods primarily focus on enhancing multi-modal capabilities. In this work, we introduce a versatile multi-modal large language model, mPLUG-Owl2, which effectively leverages modality collaboration to improve performance in both text and multi-modal tasks. mPLUG-Owl2 utilizes a modularized network design, with the language decoder acting as a universal interface for managing different modalities. Specifically, mPLUG-Owl2 incorporates shared functional modules to facilitate modality collaboration and introduces a modality-adaptive module that preserves modality-specific features. Extensive experiments reveal that mPLUG-Owl2 is capable of generalizing both text tasks and multi-modal tasks and achieving state-of-the-art performances with a single generic model. Notably, mPLUG-Owl2 is the first MLLM model that demonstrates the modality collaboration phenomenon in both pure-text and multi-modal scenarios, setting a pioneering path in the development of future multi-modal foundation models. <|reference_end|>" ]
[ 12, 28, 31, 39 ]
{"<|multi_cite_1_1|>": "ss-1862411", "<|multi_cite_1_2|>": "arxiv-104926", "<|multi_cite_1_3|>": "arxiv-506150", "<|multi_cite_1_4|>": "arxiv-77148", "<|cite_2|>": "arxiv-102462", "<|cite_3|>": "ss-1342345", "<|cite_5|>": "arxiv-497716", "<|cite_6|>": "arxiv-548967", "<|cite_8|>": "arxiv-522776", "<|cite_9|>": "ss-1225262", "<|multi_cite_10_1|>": "ss-1175467", "<|multi_cite_10_2|>": "arxiv-582288", "<|multi_cite_10_3|>": "arxiv-573112", "<|multi_cite_10_4|>": "arxiv-552273", "<|cite_11|>": "arxiv-558098", "<|cite_12|>": "arxiv-494904", "<|cite_13|>": "arxiv-518010", "<|cite_14|>": "arxiv-522776", "<|cite_15|>": "arxiv-505762", "<|cite_17|>": "arxiv-411079", "<|cite_18|>": "arxiv-416926", "<|cite_19|>": "arxiv-460885", "<|cite_20|>": "arxiv-498672", "<|cite_21|>": "arxiv-503928", "<|cite_22|>": "arxiv-497716", "<|cite_23|>": "arxiv-518837", "<|cite_24|>": "arxiv-505787", "<|multi_cite_25_1|>": "arxiv-313097", "<|multi_cite_25_2|>": "arxiv-374345", "<|multi_cite_25_3|>": "arxiv-365831", "<|multi_cite_25_4|>": "arxiv-328337", "<|multi_cite_25_5|>": "arxiv-381892", "<|multi_cite_25_6|>": "arxiv-401900", "<|multi_cite_25_7|>": "arxiv-403709", "<|multi_cite_26_1|>": "arxiv-422160", "<|multi_cite_26_3|>": "arxiv-478607", "<|cite_27|>": "arxiv-478607", "<|cite_28|>": "arxiv-521805", "<|cite_29|>": "arxiv-560685", "<|cite_30|>": "arxiv-556577", "<|cite_31|>": "ss-758432", "<|cite_32|>": "arxiv-558098", "<|cite_33|>": "arxiv-579738"}